ULTRA-WIDEBAND RADAR TECHNOLOGY
ULTRA-WIDEBAND RADAR TECHNOLOGY Edited by
James D. Taylor, P.E.
CRC Press Boca Raton London New York Washington, D.C.
Library of Congress Cataloging-in-Publication Data

Ultra-wideband radar technology / edited by James D. Taylor.
    p. cm.
Includes bibliographical references and index.
ISBN 0-8493-4267-8 (alk. paper)
1. Radar. 2. Ultra-wideband devices. I. Taylor, James D., 1941–
TK6580 .U44 2000
621.3848--dc21        00-030423
This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use. Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher. All rights reserved. Authorization to photocopy items for internal or personal use, or the personal or internal use of specific clients, may be granted by CRC Press LLC, provided that $.50 per page photocopied is paid directly to Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923 USA. The fee code for users of the Transactional Reporting Service is ISBN 0-8493-4267-8/00/$0.00+$.50. The fee is subject to change without notice. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying. Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.
© 2001 by CRC Press LLC No claim to original U.S. Government works International Standard Book Number 0-8493-4267-8 Library of Congress Card Number 00-030423 Printed in the United States of America 1 2 3 4 5 6 7 8 9 0 Printed on acid-free paper
Preface Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it. Samuel Johnson (1709–1784) My first book, Introduction to Ultra-Wideband Radar Systems, gave engineers and managers a practical technical theory book about a new concept for remote sensing. Ultra-Wideband Radar Technology presents theory and ideas for future systems development and shows the potential capabilities. Ultra-wideband (UWB) radar systems use signals with a bandwidth greater than 25 percent of the center frequency. In this case, bandwidth means the difference between the highest and lowest frequencies of interest and contains about 95 percent of the signal power. Example waveforms include impulse (video pulse), coded impulse trains, stepped frequency, pulse compression, random noise, and other signal formats that have high effective bandwidths. UWB radar has advanced since I entered the field in 1987. Several years ago, I received an advertisement for a UWB radar intrusion alarm and bought some as birthday presents for my special ladies. A petroleum distributors’ trade magazine described how future gas stations will use UWB communication links and transponders to identify customers and vehicles. While robots fill the tank, the individual account will be properly charged and the transaction completed without the driver leaving the car. Some companies are proposing to build locating systems for factories that will attach a UWB transponder to each container. Personal locating systems are other potential applications. Other companies are designing UWB wireless links to connect homes, offices, and schools. When I was young, Uncle Scrooge McDuck, a Walt Disney character, had a special watch that computed his income tax by radar. Given the recent commercial development of micropower UWB technology, I expect to see a UWB wristwatch radar advertised soon. It sounds like a great idea for soldiers, policemen, hunters, or anybody who might want to detect hidden persons or objects behind cover. After adding some more technology, follow-on models could calculate your taxes by radar. Gyro Gearloose, Scrooge McDuck’s inventor friend, was right on target. When I organized a session at the 1999 International Ultra-Wideband Conference in Washington, D.C., my speakers reported on UWB radar progress in ground-penetrating radar, airborne SAR systems, automotive radar, and medical imaging. In contrast to the SPIE and IEEE meetings that I usually attend, commercial development issues occupied almost half of the program. The main topic of discussion was about changing the Federal Communications Commission regulations covering low-power UWB sensing and communication systems. It appears that legal issues will be the principal obstacle to future UWB systems development. Given this background, I think that UWB radar technology will develop along parallel tracks. Commercial low-power short-range systems, using integrated circuit technology for sensing and communications, will be one path. High-power systems for remote fine resolution imaging and sensing will be the other. Microwatt power impulse radar systems can provide practical solutions to many short-range sensing and communication problems. There is now a strong interest in using UWB signals for short-range wireless interconnection and networking activities. Commercial applications will drive the development of low-power devices once the many regulatory issues are settled. 
Because UWB signals can provide all-weather sensing and communications over short ranges, it may appear in smart vehicles and highway systems. I recommend visiting the Ultra-wideband Working Group web site at www.uwb.org for the latest developments and news.

High-power systems for defense and environmental remote sensing will be the other systems development direction. The American Department of Defense Advanced Research Projects Agency (ARPA) sponsored UWB radar programs during the 1990s. Program objectives included high-resolution sensing and mapping, foliage penetration for imaging hidden objects, and buried mine detection. The annual SPIE-sponsored AeroSense conference has been one of the principal forums for reporting UWB radar technology activities and progress. In my opinion, long-range development goals for high-power systems will include remote environmental and biomass sensing, small target detection using long-duration pulse compression signals and bistatic techniques, and using multiple short-range UWB systems to provide high-resolution surveillance for industrial and urban areas. Better uses of polarimetry and signal processing can enhance UWB radar capabilities. Propagation and media characterization studies will help develop better signal processing and imaging techniques.

Discussions with my professional associates indicate that understanding UWB radar requires a new philosophical approach. Because many UWB circuits work with short-duration signals, the steady-state condition is never reached. This requires analyzing the system in the time domain and looking at transient conditions, as opposed to the steady-state frequency response that characterizes many electronic systems. Fine radar resolution means that targets are much larger than the signal resolution, so they can no longer be considered as point source reflectors. Many of the rules and descriptions used for continuous sinusoidal wave signals cannot be directly applied to UWB radar problems. Concepts such as radar cross section will have new meanings as range resolution becomes smaller than the target.

Chapter 1, “Main Features of Ultra-Wideband (UWB) Radars and Differences from Common Narrowband Radars,” was written by Dr. Igor Immoreev of the Moscow State Aviation Institute in Russia. He explains how UWB signals will produce effects not encountered in conventional low-resolution radar. This leads to the concept of signal spectral efficiency. Life is further complicated because fine range resolution turns the target into a series of point returns from scattering centers and creates major signal processing and target detection problems. A direct result of the time-variable UWB antenna and target characteristics is a time-dependent radar range equation.

Chapter 2, “Improved Signal Detection in UWB Radars,” by Dr. Igor Immoreev, expands the concepts of Chapter 1. He presents an approach to detecting over-resolved targets by correlating multiple returns over an estimated spatial window about the physical size of the target. This solves the problem of reduced UWB radar returns from numerous scattering centers; however, it will present a new way of considering radar reflection characteristics and target radar cross section specifications.

Chapter 3, “High-Resolution Ultra-Wideband Radars,” by Dr. Nasser J. Mohamed, of the University of Kuwait, presents a concept for identifying UWB radar targets. This method involves correlating the series of target returns against a library of known target signals. There is a close relationship with the ideas of Chapter 2.

Chapter 4, “Ultra-wideband Radar Receivers,” by James D.
Taylor, examines some major theoretical issues in receiver design. This chapter starts with concepts of digitizing and recording impulse signals in a single pass, which is a major problem area in building impulse radars for material recognition. Pulse compression is another UWB radar technique that has potential applications where fine range resolution is needed at long ranges. Practical guidance for estimating the bandwidth of UWB signals is given by an explanation of the spectrum of pulse-modulated sinewaves. Performance prediction for UWB systems remains a problem area, and the solution may have to be specific to radar systems and waveforms. While the question cannot be answered with a single neat equation, I have provided an approach to performance estimation as a starting point. Chapter 4 is complementary to Chapters 1, 2, and 3.
Chapter 5, “Compression of Wideband Returns from Overspread Targets,” by Dr. Benjamin C. Flores and Roberto Vasquez, Jr., of the University of Texas at El Paso, provides a look at how to use pulse-compressed signals in radio astronomy. While the ambiguity function was mentioned in Chapter 4, this chapter shows what happens to long-duration pulse-compressed signals when there are time or frequency shifts caused by target motion.

Chapter 6, “The Micropower Impulse Radar,” by James D. Taylor and Thomas E. McEwan, examines low-power system technology for short-range applications. Recent advances in integrated circuit technologies will provide a wide variety of short-range sensors and communication systems. Using micropower radar techniques can put radar sensors in places never thought of before.

Chapter 7, “Ultra-wideband Technology for Intelligent Transportation Systems,” by Dr. Robert D. James and Jeffrey B. Mendola, of the Virginia Tech Transportation Center, and James D. Taylor, shows how future smart highway systems can use UWB signals. Short-range sensing and communications are two requirements for watching traffic conditions and then communicating instructions to vehicles. Additionally, we can expect to see some form of radar installed in automobiles and trucks for station maintenance and collision avoidance with other vehicles in traffic. Automotive radar and communications may be a primary UWB development area in the near future; however, it will require a large effort to build smart highways and vehicles. Questions of infrastructure design, systems standards, highway control schemes, communication protocols and links, and other issues must be settled before any widespread smart highway system can be built. This chapter raises the potential for vehicle tracing and location, which could create serious constitutional privacy issues in a democratic country.

Chapter 8, “Design, Performance, and Applications of a Coherent UWB Random Noise Radar,” by Dr. Ram Narayanan, Yi Xu, Paul D. Hoffmeyer, and John O. Curtis, shows how bandwidth alone determines range resolution. Dr. Narayanan and his University of Nebraska associates built and demonstrated a continuous random noise signal radar. By preserving the random noise signal in a delay line, this experimental 1 GHz bandwidth radar achieved spatial resolution of 15 cm. Such a concept would be potentially valuable for building a stealthy, low probability of intercept radar or for operating at low power levels to avoid interference with other systems.

Chapter 9, “New Power Semiconductor Devices for Generation of Nano- and Subnanosecond Pulses,” by Dr. Alexei Kardo-Syssoev, of the Ioffe Physical-Technical Institute in St. Petersburg, Russia, describes the fundamentals of high-power impulse generation. Producing high-power impulse signals involves suddenly moving large amounts of current, which implies special switches that close or open in picoseconds. This chapter explains the theory of drift step recovery diodes and other high-speed switching devices. Dr. Kardo-Syssoev is the head of the Pulse Systems Group of the Ioffe Physical-Technical Institute. His engineers have provided advanced semiconductor switching devices to SRI International and other American organizations.

Chapter 10, “Fourier Series-Based Waveform Generation and Signal Processing in UWB Radar,” by Dr. Gurnam S. Gill, of the Naval Postgraduate School in Monterey, California, presents another approach to generating ultra-wideband waveforms.
While high-speed switching techniques are a straightforward approach to impulse generation, repeatability remains an issue. There is always a suspicion that each impulse may be slightly different from the others, which will affect signal processing. Generating UWB signals by adding many different waveforms together offers a more flexible approach to building high-power UWB radar systems, especially ones that need a highly accurate and coherent waveform.

Chapter 11, “High-Resolution Step-Frequency Radar,” by Dr. Gurnam S. Gill, shows how to build a UWB radar using long-duration narrowband radar signals. Processing many narrowband returns can give the same result as an instantaneous UWB signal. This is an approach to avoiding the regulatory issues that limit high-power UWB system development. Interference with narrowband systems may force the designer to notch out certain restricted frequency bands before the system can be used legally. Dr. Gill develops the theory of using step-frequency waveforms, which transmit many long-duration, narrowband signals and then process them to achieve the effect of a UWB signal.
Chapter 12, “CARABAS Airborne SAR,” by Dr. Lars Ulander, Dr. Hans Hellsten, and James D. Taylor, describes a step-frequency UWB radar developed and tested by the Swedish Defence Ministry. The Coherent All Radio Band System (CARABAS) demonstrates how to build a high-resolution SAR using step-frequency radar. CARABAS demonstrated both the high-resolution imaging and the foliage penetration expected from VHF signals.

Chapter 13, “Ultra-Wideband Radar Capability Demonstrations,” by James D. Taylor, describes the state of the art in UWB radar for precision imaging, finding targets hidden by foliage, and detecting buried mines. ARPA-sponsored demonstrations showed the potential of high-power UWB radar as a practical sensing system for military applications. ERIM International, the Lawrence Livermore National Laboratory (LLNL), SRI International, MIT Lincoln Laboratory, and the Army Research Laboratory programs show the capabilities and problems of UWB radar.

Chapter 14, “Bistatic Radar Polarimetry,” by Dr. Anne-Laure Germond, of the Conservatoire National des Arts et Metiers in Paris, France, and her colleagues Dr. Eric Pottier and Dr. Joseph Saillard, presents a new approach to understanding and analyzing bistatic radar signals. Bistatic radar will be an important future technology for detecting small radar cross section targets. Using side-scattered energy for target detection has several potential advantages, including the ability to locate transmitters in protected refuges and move the receiver freely over areas in which it would be dangerous to radiate. Polarimetric radars using orthogonally polarized signals to increase target detection will be a major future radar trend. Analyzing the measured polarization shifts of reflected radar signals may provide a future method for passive target identification. The future of remote sensing may be polarimetric UWB radar.

My special thanks to my collaborators, who gave their time and effort to make this book possible. We hope this will stimulate new ideas to advance UWB radar technology.
Acknowledgments

The editor and authors of this book wish to acknowledge our families, employers, friends, supporters, opponents, and critics. Special thanks to our colleagues who inspired, assisted, and gave their frank considered opinions and suggestions. There are too many to name without unfairly omitting someone, so we must thank you for your contributions this way. We also extend our thanks to the government, industry, and university representatives who made this book possible by supporting and encouraging ultra-wideband radar technology related programs. My heartfelt thanks to all the writers for working with me and taking my lengthy critiques to heart during the revisions of their chapters. We wanted to make this book unique, useful, and readable. We thank our families and friends who supported us and provided us the time we needed to prepare this book.

James D. Taylor
January 22, 2000
Gainesville, Florida, U.S.A.
In Memoriam Rachel Z. Taylor, 1937–1994
Contributors

John Curtis, Environmental Laboratory, U.S. Army Waterways Experiment Station, Vicksburg, Mississippi, U.S.A.
Benjamin C. Flores, Ph.D., Department of Electrical and Computer Engineering, University of Texas at El Paso, El Paso, Texas, U.S.A.
Anne-Laure Germond, Ph.D., Chaire de Physique des Composants, Conservatoire National des Arts et Metiers, Paris, France
Gurnam S. Gill, Ph.D., U.S. Naval Postgraduate School, Monterey, California, U.S.A.
Hans Hellsten, Ph.D., Swedish Defence Establishment (FOA), Department of Surveillance Radar, Linkoping, Sweden
Paul Hoffmeyer, Department of Electrical Engineering, University of Nebraska, Lincoln, Nebraska, U.S.A.
Igor I. Immoreev, Doctor of Technical Sciences and Professor, Moscow State Aviation Institute, Moscow, Russia
Robert B. James, Ph.D., Virginia Tech Transportation Center, Blacksburg, Virginia, U.S.A.
Alexei F. Kardo-Sysoev, Doctor of Physico-Mathematical Sciences, Ioffe Physical-Technical Institute, St. Petersburg, Russia
Thomas E. McEwan, MSEE, McEwan Technologies LLC, Pleasanton, California, U.S.A.
Jeffrey B. Mendola, MS, Virginia Tech Transportation Center, Blacksburg, Virginia, U.S.A.
Nasser J. Mohamed, Ph.D., Electrical Engineering Department, University of Kuwait, Safat, Kuwait
Ram Narayanan, Ph.D., Department of Electrical Engineering, University of Nebraska, Lincoln, Nebraska, U.S.A.
Eric Pottier, Ph.D., UPRES-A CNRS 6075 Structures Rayonnantes, Laboratoire Antennes et Télécommunications, Université de Rennes 1, Rennes, France
Joseph Saillard, Ph.D., Ecole Polytechnique de l'Université de Nantes, Nantes, France
James D. Taylor, MSEE, P.E., J.D. Taylor Associates, Gainesville, Florida, U.S.A.
Lars Ulander, Ph.D., Swedish Defence Establishment (FOA), Department of Surveillance Radar, Linkoping, Sweden
Roberto Vasquez, Jr., Raytheon Electronic Systems, Bedford, Massachusetts, U.S.A.
Yi Xu, Ph.D., Department of Electrical Engineering, University of Nebraska, Lincoln, Nebraska, U.S.A.
About the Editor

James D. Taylor was born in Tifton, Georgia, in 1941, and grew up in North Carolina and Maryland. After earning his BSEE degree from the Virginia Military Institute in 1963, he entered active duty in the U.S. Army as an artillery officer. In 1968, he transferred to the U.S. Air Force as a research and development electronics engineer and worked for the Central Inertial Guidance Test Facility at Holloman Air Force Base, New Mexico, until 1975. He earned his MSEE in guidance and control theory from the Air Force Institute of Technology at Wright-Patterson AFB, Ohio, in 1977. From 1977 to 1981, he was a staff engineer at the Air Force Wright Aeronautical Laboratories Avionics Laboratory. From 1981 to 1991, he served as a staff engineer in the Deputy for Development Planning at the Electronic Systems Division at Hanscom Air Force Base, Massachusetts. Upon retiring from the Air Force in 1991, he worked as a consultant to TACAN Aerospace Corp. in San Diego, California, and edited Introduction to Ultra-Wideband Radar Systems for CRC Press. He has actively participated in radar workshops at PIERS symposia since 1998 and presented short courses in ultra-wideband radar in America, Italy, and Russia. His professional achievements include Professional Engineer registration from Massachusetts in 1984. He is a senior member of the Institute of Electrical and Electronics Engineers and the American Institute of Aeronautics and Astronautics. He retired from the U.S. Air Force as a Lieutenant Colonel, and is now a gentleman engineer, consultant, technical writer, editor, and novelist.
Contents

Chapter 1  Main Features of UWB Radars and Differences from Common Narrowband Radars
           Igor I. Immoreev
Chapter 2  Improved Signal Detection in UWB Radars
           Igor I. Immoreev
Chapter 3  High-Resolution Ultra-Wideband Radars
           Nasser J. Mohamed
Chapter 4  Ultra-Wideband Radar Receivers
           James D. Taylor
Chapter 5  Compression of Wideband Returns from Overspread Targets
           Benjamin C. Flores and Roberto Vasquez, Jr.
Chapter 6  The Micropower Impulse Radar
           James D. Taylor and Thomas E. McEwan
Chapter 7  Ultra-Wideband Technology for Intelligent Transportation Systems
           Robert B. James and Jeffrey B. Mendola
Chapter 8  Design, Performance, and Applications of a Coherent UWB Random Noise Radar
           Ram M. Narayanan, Yi Xu, Paul D. Hoffmeyer, John O. Curtis
Chapter 9  New Power Semiconductor Devices for Generation of Nano- and Subnanosecond Pulses
           Alexei F. Kardo-Sysoev
Chapter 10 Fourier Series-Based Waveform Generation and Signal Processing in UWB Radar
           Gurnam S. Gill
Chapter 11 High-Resolution Step-Frequency Radar
           Gurnam S. Gill
Chapter 12 The CARABAS II VHF Synthetic Aperture Radar
           Lars Ulander, Hans Hellsten, James D. Taylor
Chapter 13 Ultra-Wideband Radar Capability Demonstrations
           James D. Taylor
Chapter 14 Bistatic Radar Polarimetry Theory
           Anne-Laure Germond, Eric Pottier, Joseph Saillard
1

Main Features of UWB Radars and Differences from Common Narrowband Radars

Igor I. Immoreev
CONTENTS

1.1  Introduction
1.2  Information Possibilities of UWB Radars
1.3  How UWB Radar Differs from Conventional Radar
1.4  Moving Target Selection in the UWB Radar and Passive Jamming Protection
1.5  Short Video Pulse Features in UWB Radar
References
1.1 INTRODUCTION

The majority of traditional radio systems use a narrow band of signal frequencies modulating a sinusoidal carrier signal. The reason is simple: a sine wave is the oscillation of an LC circuit, which is the most elementary and most widespread oscillatory system. The resonant properties of this system allow an easy frequency selection of necessary signals. Therefore, frequency selection is the basic way of information channel division in radio engineering, and the majority of radio systems have a band of frequencies that is much narrower than their carrier frequency. The theory and practice of modern radio engineering are based on this feature.

Narrowband signals limit the information capability of radio systems, because the amount of information transmitted in a unit of time is proportional to this band. Increasing the system's information capacity requires expanding its band of frequencies. The only alternative is to increase the information transmitting time. This information problem is especially important for radiolocation systems, where the surveillance time of the target is limited. Past radars have used a band of frequencies that does not exceed 10 percent of the carrier frequency. Therefore, they have practically exhausted the information opportunities in terms of range resolution and target characteristics. A new radar development is the transition to signals with wide and ultra-wide bandwidths (UWB).

For designing UWB radars, as with any other equipment, we must understand the required theory that will allow us to correctly design and specify their characteristics. The theory is also necessary for defining the requirements of radars and for developing the equipment needed to create, radiate, receive, and process UWB signals. In spite of recent developments and experimental work, there is no satisfactory and systematized theory of UWB radars available. The reason is that the process of radar detection and surveillance with UWB signals differs considerably from similar processes when using traditional narrowband signals. The study of these differences helps us to
understand when the traditional theory of radar detection can and cannot be used for designing UWB radars. When traditional theory cannot be used, we must develop new methods. In this chapter, we will examine the new information opportunities that result from applying UWB signals in radars, and the basic differences between UWB radars and narrowband radar systems.
1.2 INFORMATION POSSIBILITIES OF UWB RADARS

The informational content of UWB radars increases because of the smaller pulse volume of the signal. For example, when the length of a sounding pulse changes from 1 µs to 1 ns, the depth of the pulse volume decreases from 300 m to 30 cm (a short worked example follows the list below). We can say that the radar instrument probing the surveillance space becomes finer and more sensitive. The UWB radar's reduced signal length can

1. Improve detected target range measurement accuracy. This results in the improvement of the radar resolution for all coordinates, since the resolution of targets by one coordinate does not require their resolution by other coordinates.
2. Identify target classes and types, because the received signal carries the information not only about the target as a whole but also about its separate elements.
3. Reduce the radar effects of passive interference from rain, mist, aerosols, metallized strips, etc. This is because the scattering cross section of the interference source within a small pulse volume is reduced relative to the target scattering cross section.
4. Improve stability when observing targets at low elevation angles at the expense of eliminating the interference gaps in the antenna pattern. This is because the main signal, and any ground return signal, arrive at the antenna at different times, which thus enables their selection.
5. Increase the probability of target detection and improve stability when observing a target at the expense of elimination of the lobe structure of the secondary-radiation pattern of irradiated targets, since oscillations reflected from the individual parts of the target do not interfere and cancel, which provides a more uniform radar cross section.
6. Provide a narrow antenna pattern by changing the radiated signal characteristics.
7. Improve the radar's immunity to external narrowband electromagnetic radiation effects and noise.
8. Decrease the radar "dead zone."
9. Increase the radar's secretiveness by using a signal that will be hard to detect.
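The pulse-volume figures quoted above are easy to reproduce. The short Python sketch below (an illustration using the example pulse lengths from the paragraph, not part of the original analysis) converts a pulse duration into its spatial extent cτ and the corresponding two-way range cell cτ/2.

```python
# Illustrative sketch: spatial extent and range resolution of a sounding pulse.
C = 3.0e8  # speed of light, m/s

def pulse_extent(tau_s):
    """Return (pulse depth c*tau, two-way range cell c*tau/2) in meters."""
    return C * tau_s, C * tau_s / 2.0

for tau in (1e-6, 1e-9):  # the 1 microsecond and 1 nanosecond pulses from the text
    depth, cell = pulse_extent(tau)
    print(f"tau = {tau*1e9:7.1f} ns -> pulse depth {depth:7.2f} m, range cell {cell:7.2f} m")
```

For the 1 µs pulse this gives a 300 m pulse depth; for the 1 ns pulse, 0.3 m, matching the 300 m to 30 cm reduction cited above.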
1.3 HOW UWB RADAR DIFFERS FROM CONVENTIONAL RADAR

1.3.1 SIGNAL WAVEFORM CHANGES DURING DETECTION AND RANGING PROCESSES
Narrowband signals (i.e., sinusoidal and quasi-sinusoidal signals) have the unique property of keeping their sinusoidal shape during forms of signal conversions such as addition, subtraction, differentiation and integration. The waveforms of sinusoidal and quasi-sinusoidal signals keep a shape identical to that of the original function and may differ only in their amplitude and time shift, or phase. Hereinafter, shape is understood as the law of change of a signal in time. On the contrary, the ultra-wideband signal has a nonsinusoidal waveform that can change shape while processing the above specified and other transformations. Let us assume that a UWB signal S1 shown in Figure 1.1 is generated and transmitted to the antenna in a form of a current pulse. The first change of the UWB signal shape S2 occurs during
pulse radiation, since the intensity of the radiated electromagnetic field varies proportionally with the derivative of the antenna current. The second change of the shape occurs when the pulse duration in space, cτ (where c is the velocity of light and τ is the pulse duration in the time domain), is less than the linear size of the radiator l. When current changes move along the radiator, electromagnetic pulses are emitted from radiator discontinuities. As a result, a single pulse transforms into a sequence of k pulses divided by time intervals τ1, τ2, . . . , τk–1, shown as S3 in Figure 1.1. The apparent radiator length changes according to variations of the angle θ between the normal to the antenna array and the direction of the wave front. Therefore, the interpulse intervals vary with this angle as follows:

$$\tau_1 \cos\theta,\; \tau_2 \cos\theta,\; \ldots,\; \tau_{k-1} \cos\theta$$
The third change of the shape occurs when the signal is radiated by a multi-element antenna array composed of N radiators with a distance d between them. The pulse radiated by one antenna element at the angle θ is delayed by the time (d/c) sin θ compared to the pulse radiated by the adjacent antenna element. The combined pulse will have various shapes and durations at different angles θ in the far field, shown as S4 in Figure 1.1. Far-field pulse shapes at different angles θ are shown in Figure 1.2. Note that the combinations of multiple square pulses radiated by the four-element antenna array and shifted in time over different angles have waveforms very different from the radiated rectangular video pulse. The fourth waveform change is S5 in Figure 1.1, and it occurs when the target scatters the signal. In this case, the target consists of M local scattering elements, or bright points, located along the line L. If the UWB signal length is cτ << L, then each discrete target element reflects the
FIGURE 1.1 Radar signal waveform changes during transmission, target reflection, and reception. UWB waveforms may change radically during the target detection process.
signal and forms a pulse sequence of M pulses. The actual number of pulses, time delay τm, and intensity depend on the target shape and the target element pulse response hm. This pulse sequence is called the target image. The whole image represents the time distribution of scattered energy that was formed during the time interval t0 = 2L/c. So, for the high-resolution UWB target case, the radar cross section (RCS) becomes time-dependent, and now we must introduce the concept of an instantaneous target RCS. The image will change with viewing angle variations. In this case, the target secondary pattern is nonstationary and variable. Because target scattered signals will form no secondary pattern "nulls," this promotes steady target viewing. Some target elements may have a frequency bandwidth that is outside the UWB signal spectrum, so these will act as frequency filters and change the shape even more. Note that, in this case, the radar return signal not only indicates the presence of the target but carries back much information regarding the target. If the target were smaller than the radar pulse length cτ, then no such information would be available.
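To make the notion of a target image concrete, the following minimal Python sketch builds a return as a train of M delayed, weighted copies of a short sounding pulse. The scatterer positions, amplitudes, and polarities are assumed example values, not data from the chapter.

```python
import numpy as np

C = 3.0e8                  # speed of light, m/s
tau = 1e-9                 # 1 ns rectangular sounding pulse
fs = 100e9                 # simulation sample rate, Hz
t = np.arange(0.0, 80e-9, 1.0 / fs)

# Assumed target: M = 3 bright points spread over L = 6 m along the line of sight.
ranges_m = np.array([0.0, 2.5, 6.0])   # scattering-center positions relative to the first one
amps = np.array([1.0, 0.6, -0.8])      # relative strengths; a sign change models polarity reversal

rect = lambda x: ((x >= 0) & (x < tau)).astype(float)   # unit rectangular video pulse

# Each bright point returns the pulse with a two-way delay 2R/c; their sum is the target image.
image = sum(a * rect(t - 2.0 * r / C) for a, r in zip(amps, ranges_m))

print("image duration ~ 2L/c =", 2 * ranges_m.max() / C * 1e9, "ns")
print("number of nonzero samples:", int((image != 0).sum()))
```

The resulting waveform is a pulse sequence of the kind sketched in Figure 1.3, and it changes as soon as the assumed ranges (i.e., the viewing angle) change, which is the time-dependent RCS behavior described above.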
FIGURE 1.2 If a four-element array antenna shown in Figure 1.1 transmits UWB rectangular pulses, their far-field shapes will vary with the off-axis angle θ (panels for sin θ = 0, cτ/3d, cτ/2d, and cτ/d).
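The far-field behavior sketched in Figure 1.2 can be reproduced numerically. The Python fragment below superimposes the rectangular video pulses of an N-element array with an inter-element delay of (d/c) sin θ; the values of N, d, and τ are assumptions chosen only to exercise the mechanism.

```python
import numpy as np

C = 3.0e8
N, d, tau = 4, 0.3, 1e-9        # assumed: 4 radiators, 0.3 m spacing, 1 ns rectangular pulse
fs = 1e12                        # 1 ps time step
t = np.arange(-2e-9, 8e-9, 1.0 / fs)

def far_field_pulse(theta_deg):
    """Sum of N unit rectangular pulses, the n-th delayed by n*(d/c)*sin(theta)."""
    delay = d / C * np.sin(np.radians(theta_deg))
    return sum(((t - n * delay >= 0) & (t - n * delay < tau)).astype(float) for n in range(N))

for theta in (0.0, 15.0, 30.0, 60.0):
    p = far_field_pulse(theta)
    duration_ns = (p > 0).sum() / fs * 1e9
    print(f"theta = {theta:4.1f} deg: peak level {p.max():.0f} overlapping pulses, "
          f"total duration {duration_ns:.2f} ns")
```

On the array normal all four pulses coincide (peak level 4, duration τ); off axis the combined pulse stretches and its peak level falls in unit steps, which is the stepped peak-power pattern discussed in the following subsection.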
The fifth change occurs during signal propagation through the atmosphere because of different signal attenuation in various frequency bands. The sixth change of the shape occurs during signal reception. The reason for this is the same as for radiation, i.e., the time shift between current pulses induced by the electromagnetic field in the antenna elements located at various distances to the target. Figure 1.3 shows an example of an actual UWB signal reflected from a target with multiple scattering centers. Following this discussion, we conclude that the UWB signal shape changes many times during radar viewing. It is difficult to describe such signals analytically. Conventional optimal processing using matched filters or correlators is unsuitable for these signals because of the changing waveform. Therefore, one of the most important problems with UWB radar is the development of signal processing methods that maximize signal-to-noise ratio when we perform the detection of the UWB signals.
1.3.2 HOW THE UWB SIGNAL WAVEFORM AND THE ANTENNA CHARACTERISTICS MUTUALLY AFFECT EACH OTHER
In Figure 1.2, we can see how the waveform of the UWB signal changes depending on the off-axis angle. For the example of a rectangular pulse, the off-axis signals have the same energy but are changed in waveform to have a longer duration and less peak power. It is quickly apparent that our traditional concept of a single-frequency antenna directional pattern (DP) of the field no longer applies to UWB signals. The antenna directional pattern for UWB signals is measured for either peak or average power. This DP is formed only during radiation, which means that a UWB pattern is an instantaneous antenna pattern. Let us consider the DP for the peak power P(θ, φ) for an N-element radiating antenna array. As an example, we will take one main cross section of this DP, P(θ). Let us assume that the field video
FIGURE 1.3 An example of a high-resolution UWB signal return from a target with three distinct scatterers (τ = 1 ns, cτ << L; the whole return occupies T = 40 to 80 ns).
pulse radiated by a single radiator of the array has a rectangular waveform, the duration τ, and a peak power P1. It is clear that the video pulses coming from all radiators along the normal will arrive simultaneously at a receiving point. The peak power value at this point is Pmax = NP1. As the angle θ increases, the time delay between video pulses increases. At an angle θ1, where sin θ1 = cτ/[(N – 1)d], the peak power drops by a step of P1 = (1/N)Pmax and becomes equal to (1 – 1/N)Pmax. When sin θ = sin θ2 = cτ/[(N – 2)d], the peak power falls once more by (1/N)Pmax and becomes equal to (1 – 2/N)Pmax. Finally, at some angle θN–1, the peak power reaches the minimal value (1/N)Pmax = P1, which is the background DP level. Thus, for the rectangular form of the radiated pulse, the DP is a step function:

$$P(\theta) = \sum_{m = -(N-1)}^{N-1} (N - |m|)\,P_1\,\delta(\theta - \theta_m)$$

where, for a single pulse,

$$\delta(\theta - \theta_m) = \begin{cases} 1, & \theta = \theta_m \\ 0, & \theta \ne \theta_m \end{cases}$$

The formulas relating the angle θ to the quantities τ and d at various values of peak power are given for N = 4 in Figure 1.2. Figure 1.4 shows how the DP P(θ) for the same array varies with different values of θ. DPs are represented for three values of the angle θ1, that is, for three variants of the ratio of τ to d. From the expressions presented, it is clear that the values of the angles θ1, θ2, . . . , and, consequently, the DP width and the gain factor of an antenna, can be changed by varying the radiator spacing d, the video pulse duration τ, and the pulse waveform in the general case. Figure 1.4 also shows the DP for a case where cτ << d, so that θ1 is nearly zero degrees. In this case, the main lobe is so compressed that it approaches a line, and the side radiation remains a background at the (1/N)² level.

This DP P(θ) is actually the multiplier (array factor) of the array. To get the antenna's complete DP, we must also consider the single radiator's directional pattern. Figure 1.5 shows the DP of an array made of a great number of radiators (N > 100) radiating different pulse durations. The diagrams are normalized to the maximum level. Notice that the beamwidth depends on the duration of the pulse, and the side radiation of the array presents a uniform background and does not have the characteristic large side lobes of narrowband antennas.

Earlier, we considered the case of an antenna radiating a single pulse. If the antenna array radiates a sequence of pulses at an angle θ0, as shown in Figure 1.6, there is a second DP maximum, similar to an interference beam in the DP of a narrowband antenna array. It occurs due to the addition of pulses radiated in successive repetition periods. However, for a narrowband array, the interference beam arises when the waveforms of adjacent radiators are shifted by one period of the high frequency. In a UWB array, the similar second maximum arises when the pulses from adjacent radiators are shifted by the repetition period, which considerably exceeds the period of the high frequency. This allows us to choose the pulse repetition frequency and a UWB array spacing that is rather large, and thereby to reduce the total number of array channels.

The DPs presented in the figures correspond to a rectangular waveform of the field video pulse in space, and the DP shapes for peak and average power will be different for other waveforms of the field video pulse. Thus, the antenna DP for the UWB signal depends not only on angular coordinates but also on the time-dependent waveform, which is designated S. Therefore, the expressions for the UWB signal DP will take the form P(θ, φ, S, t) and W(θ, φ, S, t). The signal waveform S relates to its spectrum F by the Fourier transform. Therefore, the expressions for the DP can be written in the form P(θ, φ, F, ω) and W(θ, φ, F, ω). However, the basic features of the UWB signal DP indicated above for the rectangular video pulse of a field are retained for other waveforms or signal spectra.
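A quick numeric check of these step angles, using the same assumed array as in the earlier sketch (N = 4, d = 0.3 m, τ = 1 ns, so cτ = d; these values are not from the text):

```python
import numpy as np

C, N, d, tau = 3.0e8, 4, 0.3, 1e-9   # assumed example values

for m in range(1, N):
    s = C * tau / ((N - m) * d)       # sin(theta_m) = c*tau / ((N - m) * d)
    if s <= 1.0:
        angle = np.degrees(np.arcsin(s))
        print(f"m = {m}: theta_{m} = {angle:5.1f} deg, peak power steps down to {N - m}*P1")
    else:
        print(f"m = {m}: no such angle (c*tau exceeds (N - m)*d)")
```

For these values the peak power falls to 3P1 at about 19.5°, to 2P1 at 30°, and reaches the background level P1 only at 90°, illustrating how the pattern width is set jointly by τ and d.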
FIGURE 1.4 The directivity pattern P(θ) for a four-element antenna array with rectangular UWB pulse waveform (relative power versus antenna angle off axis; the side radiation forms a uniform background level).
Since the DP of an antenna radiating or receiving the UWB signal becomes dependent on the signal waveform and duration, it is obvious that the directivity factor G(θ, φ, S, t), the gain factor K(θ, φ, S, t) of an antenna, and its effective cross section A(θ, φ, S, t) also become dependent on the signal parameters. As a result, the directivity factor for the UWB signal is the ratio of the density of the UWB signal power radiated by an antenna in a specified angular direction in the UWB signal bandwidth to the density of the same power signal from a uniformly isotropic radiator in the same bandwidth. With such a definition, the directivity factor depends not only on the geometry of an antenna but also on matching the signal spectrum to the frequency response of the antenna. As a result, the calculation of the antenna directivity factor for the UWB signal presents great difficulties and, for the moment, it can be performed only for its simplest forms. The main conclusions are as follows:

1. The antenna DP for the UWB signal is a space-time (space-frequency) function, the characteristics of which depend both on the geometry of an antenna and on the signal parameters.
2. The width and shape of the DP of the array of radiators for the UWB signal are determined by the waveform and duration of the radiated video pulse, on the one hand, and the size of the aperture and the radiator spacing, on the other hand, and also by the shape of the DP of the array radiator.
3. Interference effects inherent in narrowband signals are not present when the UWB signal is radiated. This circumstance leads to the lack of lobes in the DP structure. In this case, the increase in the distance between array elements allows for making the DP extremely narrow without the appearance of additional diffraction maxima.
4. The directivity factor and the effective cross section of antennas using UWB signals are functions of time and the shape of the signal.

FIGURE 1.5 Peak and average power directivity patterns for a multi-element array with more than 100 radiators and different UWB video pulse lengths. The sharp beam forming for shorter-duration signals results from the waveform distortion off axis, as shown in Figure 1.2.
1.3.3 THE TARGET SCATTERING CROSS SECTION FOR UWB SIGNALS
One of the most complicated matters in UWB location is the problem of signal reflection from targets and the target scattering cross section obtained when using these signals. The formal calculation of the scattering cross section, which does not depend on the signal waveform, is given by

$$\sigma = 4\pi R^2 \frac{E_s^2}{E_0^2}$$
FIGURE 1.6 A linear array antenna can form a secondary maximum due to the addition of sequentially radiated pulses. This consideration will be important for designing array antennas for coded pulse sequence signals, which could form significant second maximum sidelobes. This is another example of UWB antenna patterns being dependent on signal format.
where R is the distance to a target for which a wave incident on it can be considered as a plane one; ES is the intensity of the electric field, which is determined by the target reflection, at the radar receiving antenna; and E0 is the intensity of the electric field incident on a target. In essence, this formula compares the power density of a reflected wave arriving at the radar with the power density of a target-incident wave. Scattering theory generally assumes that the individual elements of a target scatter the energy of an incident wave independently of one another, so this target is considered as a total set of elements, each of them being an independent brilliant point. Generally speaking, such a representation of a target is not sufficiently justified, since the target elements can be mutually shadowed, and also multiple reflections of a wave are possible between the elements of a target. Let us consider the process of signal reflection from an individual brilliant point. The parameters of a reflected pulse will depend on the waveform of the pulse response characteristic of a local element and can be determined as the convolution of this characteristic, h(t), with the function f(t) describing a target-incident signal. The integral transformation of the frequency response of a signal-spectrum target can be used for this purpose. The pulse response characteristic of a local element, h(t), can be derived in the general form by solving the Maxwell equations for a signal defined as the delta-function δ(t) or its approximation and the space area that does not contain irrelevant current sources. However, the solution of these equations in the general form can be made only for a limited number of simplest elements and cannot find wide application. The geometrical optics techniques are relatively simple, but they do not provide an answer in a number of cases and, in particular, for plane surfaces. The physical optics techniques allow for solving this problem, but they do not provide the answer at the shadow boundary. The physical
and geometrical theories of diffraction enable us to correct results obtained within the approximation of the physical and geometrical optics, but they do not allow for estimating the contribution of a surface traveling wave to the signal scattering over an arbitrary-shape body. The matter is that a significant contribution to the pulse response characteristic of various targets is made by the so-called creeping waves, which are the surface waves propagating in the shadow region and enveloping a scatterer. A number of new techniques have recently been developed to account for them, the singular expansion technique being the most efficient of them. However, it should be stated that the theoretical and computational means are not currently available to offer a reasonably accurate estimate of the scattering cross section of a complex target irradiated by the UWB signal.

Let us consider the difference between reflected signals when a target is irradiated by narrowband and UWB signals. The spatial physical length of a narrowband signal is cτNB ≥ L, where L is the size of a target, while cτUWB ≤ L for a UWB signal. The "long" narrowband signal reflected from all N brilliant points will present a sum of N arbitrary-phase harmonic oscillations, or their vector sum. In this case, the target scattering cross section is equal to

$$\sigma_{NB} = \sum_{k=1}^{N} \sigma_k \cos\!\left(\frac{2\pi}{\lambda} R_k\right)$$

where σk is the scattering cross section of brilliant point number k, and Rk is the distance from the radar to this point. Since a sum of one-frequency harmonic oscillations is also a harmonic oscillation, the reflected signal will present an invariable-amplitude, arbitrary-phase sinusoidal wave. The summation of harmonic signals reflected from the different points of a target may lead, in some angular directions, to the complete compensation of the field reflected in the radar direction, which is equivalent to the null formation in the secondary target DP.

We have a different picture when the UWB signal, which has cτUWB << L, is reflected from a target. In this case, the reflected signal will represent a sequence of N video pulses randomly arranged in the interval T = L/c, forming the so-called image of a target, as shown in Figure 1.3. Video pulses making up the whole image may have different amplitudes. This depends on the scattering cross section of the corresponding brilliant point of the target. The polarity of these pulses may change. This depends on the magnetic permeability of the material that reflects the signal. When reflecting from a conductor, the electric component of the field changes its polarity. However, when reflecting from materials with high magnetic permeability, the wave polarity does not change. Finally, video pulses reflected from the target may change their initial (e.g., rectangular) form. This will happen if the brilliant points of the target have resonance properties as well as a frequency range that is less than the spectrum of the UWB signal. Besides, the form of the reflected signal will be complicated by re-reflections of video pulses from the brilliant points. As a result, the target scattering cross section becomes time-dependent, so that σUWB = σUWB(t). If the UWB signal processing algorithm allows for adding the reflections from the individual brilliant points (see Chapter 2 of this book), then the target scattering cross section is not time dependent:
$$\sigma_{UWB} = \sum_{k=1}^{n} \sigma_k$$

If we assume that each brilliant point of a target reflects equal energy, then σUWB ≥ σNB in practically all cases, since

$$\left| \sum_{k=1}^{n} a_k \cos\varphi_k \right| \le \sum_{k=1}^{n} a_k$$
Thus, the UWB signal provides a gain in the scattering cross section magnitude. This circumstance, as well as the absence of nulls in the secondary DP of a target, favors a more stable target observation.

• Two factors determine the UWB signal conversion when it is scattered by a target. The first one is related to the geometry and orientation of a target and leads to the conversion of one radiated video pulse into a video pulse burst. The second factor is related to the difference of the waveform (or spectrum) of a signal irradiating the target element from the pulse response (or frequency) characteristic of this element. This circumstance results in the change of the waveform of a single video pulse of the burst.

• For the UWB signal, the target scattering cross section is a time function. In the case of the matched processing of the reflected UWB signal, the target scattering cross section will be larger than that for a narrowband signal. The secondary DP of a target will not have nulls arising under the action of a narrowband signal owing to the interference of waves reflected from different target elements. This circumstance provides for a more reliable and stable reception of reflected signals.
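The inequality above is easy to verify numerically. In the sketch below, the number of bright points, their cross sections, and the random phases are all assumed quantities used only to illustrate the comparison between the narrowband vector sum and the UWB sum.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                   # assumed number of bright points
sigma_k = rng.uniform(0.1, 1.0, n)       # per-scatterer cross sections, arbitrary units

sigma_uwb = sigma_k.sum()                # sum of separately received bright-point returns

# Narrowband case: the bright-point returns combine with essentially random phases
# as the viewing geometry changes, so deep nulls can occur.
trials = 10_000
phases = rng.uniform(0.0, 2.0 * np.pi, size=(trials, n))
sigma_nb = np.abs((sigma_k * np.cos(phases)).sum(axis=1))

print(f"sigma_UWB              = {sigma_uwb:.2f}")
print(f"sigma_NB median / min  = {np.median(sigma_nb):.2f} / {sigma_nb.min():.4f}")
print("sigma_UWB >= sigma_NB in every trial:", bool((sigma_uwb >= sigma_nb).all()))
```

The minimum of the narrowband values approaches zero (the nulls of the secondary pattern), while the UWB sum stays fixed at the larger value, as the inequality states.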
1.3.4 THE UWB RADAR RANGE EQUATION
An important aspect of the theory of radar observation when using UWB signals is the change in the meaning of parameters in the range equation. In the case under consideration, the directivity factor of a transmitting antenna, G, the effective cross section of a receiving antenna, A, and the effective scattering cross section of a target, σUWB, become dependent on time and signal parameters; i.e., they are nonstationary. Now, this equation involves not only constants but time functions. This feature leads to another form of the range equation, wherein the range is a nonstationary quantity and varies depending on the signal waveform and time:

$$R(S,t) \le \sqrt[4]{\frac{E\,G(\theta,\varphi,S,t)\,\sigma_{UWB}(t)\,A(\theta,\varphi,S,t)}{(4\pi)^2\,\rho\,q\,N_0}}$$

where
E = the energy of a radiated signal
ρ = the losses in all the systems of a radar
q = the threshold signal-to-noise ratio
N0 = the spectral density of noise power
It should be noted that a UWB radar features specific energy losses that are not found in narrowband radars. For instance, under the conditions of short-pulse transmission, losses arise owing to the antenna rejection of the lower frequencies of the signal spectrum and the mismatch of this signal to the frequency response of an antenna. Calculation of the magnitude of these losses and ways to allay them are considered at the end of this chapter. Losses may also be caused by the absence of information on the spatial parameters of targets; they can reach 10 to 12 dB in the case of the mismatched processing of a signal reflected from a target when the dimensions of this target and the number of its brilliant points are not known. The distinctive features of UWB signal processing are considered in Chapter 2 of this book, which also contains a description of the suggested method to process such signals and avoid the above-mentioned losses.
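To get a rough feel for the magnitudes involved in the range equation above, the following sketch evaluates the fourth root for one set of assumed, time-averaged parameter values; every number is a placeholder chosen for illustration, not data from the chapter.

```python
import math

def uwb_range(E, G, sigma, A, rho, q, N0):
    """Maximum range from the fourth root of E*G*sigma*A / ((4*pi)^2 * rho * q * N0), in meters."""
    return (E * G * sigma * A / ((4.0 * math.pi) ** 2 * rho * q * N0)) ** 0.25

R = uwb_range(E=1e-3,     # radiated energy per pulse, J (assumed)
              G=100.0,    # time-averaged transmit directivity (assumed)
              sigma=1.0,  # target scattering cross section, m^2 (assumed)
              A=0.1,      # effective receive aperture, m^2 (assumed)
              rho=4.0,    # total system losses, about 6 dB (assumed)
              q=13.0,     # threshold signal-to-noise ratio, about 11 dB (assumed)
              N0=4e-21)   # noise power spectral density, W/Hz (roughly kT at 290 K)
print(f"single-pulse detection range ~ {R:.0f} m")
```

In a real UWB design, G, A, and σUWB would have to be evaluated as the time- and waveform-dependent quantities defined above before such a number could be trusted.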
1.3.5 ELECTROMAGNETIC COMPATIBILITY
When UWB radars are used, an important problem is presented by their electromagnetic compatibility with other radio electronic systems and facilities because, in this case, frequency separation from other systems is practically impossible.
When a UWB radar operates jointly with a conventional narrowband radar, only a slight portion of the UWB radar signal energy enters the frequency bandwidth of the narrowband radar receiver. Indeed, the time constant of the input circuit of a narrowband receiving device, τ1 = 1/∆f, which determines the rise time of the input signal amplitude up to the prescribed value, will be much longer than the pulse length of a UWB radar, τ. The bandwidths of a given UWB radar and of the narrowband radar may differ by three orders of magnitude (pulse lengths of, e.g., 1 ns and 1 µs). This means that jamming occurring in the narrowband radar receiver due to a UWB pulse of duration τ has no time to reach a noticeable magnitude in the receiver. Besides, when both the narrowband radar and the UWB radar radiate equal powers, the UWB radar has a power per unit of bandwidth (W/MHz) that is approximately three orders of magnitude lower. This means that only about one-thousandth of the UWB signal power arrives at the narrowband radar receiver. As a result, the total attenuation of the UWB signal in the narrowband radar receiver is about 60 dB, as compared with the influence of the signal of a similar narrowband radar on this receiver. An additional effect may be provided if narrowband radars use asynchronous jamming protection equipment and the range selection of received signals. Figure 1.7 shows the dependence of the attenuation factor of the UWB radar interference, k, on the carrier frequency, f, of the narrowband radar where, in the figure, ∆f is the narrowband system bandwidth and F is the UWB radar signal bandwidth.

When a narrowband radar interferes with a UWB radar, one efficient jamming protection is frequency rejection, cutting the narrowband radar signals out of the UWB radar signal spectrum. This is usually done during signal processing, as shown in several of the experimental radar sets in Chapter 12. When two or more UWB radars operate jointly, it is advisable to use time division of the signals of the stations. Because of the short UWB radar signal length and the relative pulse duration reaching values of 10^6 to 10^7, the interference of a neighboring radar occupies a very small range
FIGURE 1.7 A narrowband radar will have a long time constant defined by the center frequency and bandwidth. Short UWB radar signals will be attenuated, because they are much shorter than the narrowband system time constants. (The plot shows the attenuation factor k versus the narrowband radar carrier frequency f in MHz, for various values of ∆f/f and of the product Fτ.)
section. When radars are mutually synchronized, this section can be blanked without adverse effects on target detection. Interference gating is possible in the radar computer after the estimation of the coordinates of the interfering station has been performed. Considering that such interference, owing to its short length, occupies an insignificant portion of the range, its presence, suitably marked, may be allowed in the output data of a station. Computation shows that the area of the mutual influence of UWB radars does not exceed 70 to 80 km at 1 MW peak radiated power. On the other hand, interference from narrowband systems is a major problem in UWB radar design.
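One way to see the order of magnitude of this rejection is to compute how much of a 1 ns pulse's energy actually falls inside a narrow receiver passband. The sketch below assumes a rectangular video pulse, a 1 MHz receiver bandwidth, and a 1.5 GHz carrier; the exact figure depends strongly on where the carrier sits relative to the nulls of the UWB spectrum, so this is only an illustration of the effect, not a reproduction of Figure 1.7.

```python
import numpy as np

tau = 1e-9       # UWB pulse length, s (signal bandwidth F ~ 1/tau ~ 1 GHz)
B_nb = 1e6       # assumed narrowband receiver bandwidth, Hz
f0 = 1.5e9       # assumed narrowband radar carrier frequency, Hz

# Energy spectral density of a unit-amplitude rectangular pulse: |tau * sinc(f*tau)|^2,
# where numpy's sinc(x) = sin(pi*x)/(pi*x).
f = np.linspace(f0 - B_nb / 2, f0 + B_nb / 2, 2001)
esd = (tau * np.sinc(f * tau)) ** 2

in_band = esd.sum() * (f[1] - f[0])   # energy accepted by the narrowband receiver
total = tau                           # total pulse energy (integral of the unit pulse squared)

ratio = in_band / total
print(f"in-band energy fraction: {ratio:.2e}  ({10 * np.log10(ratio):.0f} dB)")
```

The result is on the order of 10^-5 to 10^-4 (roughly 40 to 50 dB down) from the spectral spreading alone; the slow rise time of the narrowband receiver input circuit adds further suppression, which is how a total attenuation of about 60 dB, as quoted above, can arise.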
1.4 MOVING TARGET SELECTION IN THE UWB RADAR AND PASSIVE JAMMING PROTECTION

The detection of airborne targets by ultra-wideband (UWB) radar involves interference problems from both natural and man-made sources. The selection of a moving target detection system must be designed around the particular UWB radar system technical features. On the one hand, considerable reduction of the pulse volume substantially decreases the scattering cross section of the interference, facilitating the observation of the target against its background. On the other hand, the small pulse volume enhances the influence of those interference elements that can change their position by entering or leaving the pulse volume during the pulse-repetition period. These sources increase the uncompensated residues at the output of the interleaved periodic compensation (IPC) system, thus reducing its effectiveness. The present section is devoted to the investigation of these peculiar features and their influence on the interference immunity of the UWB radar provided with the IPC.

A small pulse volume permits moving targets to be separated without using the Doppler effect. If, over the repetition period Tr, a target travels a distance exceeding a range element (30 cm at τ = 1 ns), then, when interleaved periodic subtraction is applied, the signal of this target will be separated, and the signals of stationary or low-mobility targets will be suppressed. Such an IPC system must satisfy the following condition to operate:

$$\frac{c\tau}{2} \le v_R T_r$$
where vR is the radial velocity of a target. This system of selection lacks "blind" velocities and does not impose special requirements on the coherence of radiated signals. The target velocity is always unambiguously measured. The target radial velocity vR can be determined in the selection system by the variation of the range to a target. The minimum determined velocity of a target would be equal to

$$v_{R\,min} = \frac{c\tau}{2T_r}$$
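Evaluating this expression for assumed parameter values (the pulse length and repetition periods below are illustrative, not taken from the text) shows how demanding the condition is:

```python
C = 3.0e8  # m/s

def v_r_min(tau_s, t_r_s):
    """Minimum selectable radial velocity v_R,min = c*tau / (2*T_r), in m/s."""
    return C * tau_s / (2.0 * t_r_s)

for T_r in (1e-3, 1e-4):   # assumed repetition periods of 1 ms and 0.1 ms
    print(f"tau = 1 ns, T_r = {T_r*1e3:.1f} ms -> v_R,min = {v_r_min(1e-9, T_r):.0f} m/s")
```

With a 1 ns pulse and a 1 ms repetition period, the target must move radially at least 150 m/s to cross a full range cell between pulses; slower targets require correspondingly longer repetition periods.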
One of the main characteristics of the passive interference that determines the effectiveness of the moving target selection (MTS) system is the correlation function of the interfering reflections. Let us consider its peculiar features with respect to the UWB signal. Figure 1.8 gives the normalized correlation functions of the passive jamming, Rn, at different values of τ. As shown in the figure, for narrowband signals the correlation function depends very little on the pulse duration. However, at τ < Tav (where Tav is the period of oscillation at the average spectrum frequency, and σd is the root-mean-square deviation of the Doppler frequency of the interference), decorrelation of the passive interference is observed as the pulse duration
© 2001 CRC Press LLC
Rn 1.0 τ > 0.5 Tav τ > 0.25 Tav τ > 0.125 Tav
0.5
τ > 0.062 Tav
0
0.1
0.2
0.3
0.4
0.5
T rσd 2π
FIGURE 1.8 Moving target Doppler signals will cause passive interference or jamming. This plot shows the normalized correlation functions of passive jamming for different values of signal lengths.
is decreased. Physically, it can be explained by the fact that the decrease of the duration of pulse τ brings about the increase of the maximum and, consequently, medium frequency of the spectrum. As a result, the spectrum of the moving passive interference is extended, which reduces the effectiveness of the MTS. On the other hand, with the reduction of the pulse duration, the pulse volume is decreased and, respectively, the power of the passive interference is diminished. Therefore, the interference immunity of the UWB radar should be considered taking account of two opposite factors for a decrease in the pulse duration. 1. The reduction of the pulse volume (i.e., decrease in the power of the passive interference) 2. An increase in the interperiodic decorrelation of the passive interference (i.e., the decrease in the coefficient of suppression of the MTS) Figure 1.9 shows the dependence the signal-to-interference ratio Q at the output of the systems of the single and twice interleaved periodic compensation (IPC-1 and IPC-2) on duration of UWB signal τt at different values of period repetition Tr. The value Q is normalized rather so that Q0 is the signal-to-interference ratio at τt = Tav and at relative width of a spectrum of passive interference σdTr /2π = 0.1. As it follows from the figure, there are two extremes. The first of them (maximum) falls on the values of duration of the pulse τt = 0.5Tav. At τt > 0.5Tav prevailing is the first factor, and at τt < 0.5Tav is the second. With the further decrease in τT, the MTS stops influencing the interference immunity because of complete decorrelation of the interference, and only the first factor remains—reduction of the pulse volume. The level of interference is once again decreased, and Q increases. Thus, the position of the second extreme (minimum) corresponds to the complete decorrelation of the interference and absence of the velocity selection. For the most effective rejecter (IPC-2), these regularities are displayed more distinctly due to the stronger sensitivity to the correlation properties of the interference.
© 2001 CRC Press LLC
Q/Q0 dB
IPC-1
20
Trσ d
10
2π
0.03 0.06 0.12 0.25 0.5
1
= 0.02
0.04
τ
0.06 0.08 0.1
Tav
Q/Q 0
IPC-2
dB
Trσ d
30
2π = 0.02
20 0.04
10
0.06
0.08 0.1 0.03
0.06 0.12 0.25 0.5
1
2
4
τ
Tav
FIGURE 1.9 Moving target selection depends on the pulse length and the effects of signal-to-interference ratios at the system output; shown here for single (IPC-1) and double (IPC-2) interleaved periodic compensation for τ = Tav and σdTr/2π = 0.1.
Let us note that, with the decrease in the relative width of the passive interferences spectrum, σdTr /2π, the position of the second extreme is shifted to the left toward the lower-duration τ. It means that, with the decrease in the width of the Doppler spectrum of the interference and increase in the pulse repetition frequency, the complete decorrelation of the interference will occur with the shorter signal duration. Within the limits for the non-fluctuating passive interference the decrease in the pulse duration does not influence the MTS effectiveness. Thus, using MTS with respect to the UWB signal is advisable with rather narrowband interferences (local objects) in the radar with the relatively high repetition frequency (i.e., with the small range).
1.5 SHORT VIDEO PULSE FEATURES IN UWB RADAR One of the peculiar features of the UWB radar operating with short video pulses with duration τ is additional losses of energy. The point is that any antenna does not radiate in the range of frequencies lower than some fmin. On the other hand, the frequency spectrum of any video pulse
© 2001 CRC Press LLC
has the maximum at the zero frequency. The basic energy of the pulse is concentrated in the band of frequencies ∆f and restricted by some fmax usually lying in the region of the first zero of its spectrum. As a result, the frequency characteristic of the antenna and spectrum of the signal appear to be unmatched. Part of the energy that did not fall into the antenna frequency band will be lost. This is seen well in Figure 1.10, which gives the frequency characteristic of the Hertzian dipole P (length l) and spectrum of the rectangular pulse S. With respect to the signal, the antenna is essentially a high-frequency filter. The notion of the spectral efficiency η∆f was introduced to account for these losses and is a part of the total efficiency of the transmitting device. This efficiency determines the relative share of the energy of the sounding pulse falling into the operating frequency band of the antenna. W ∆f η ∆f = -------Ws
where WS = full pulse energy Wf = the energy of that part of the pulse spectrum falling into the antenna frequency band For the single-polarity pulses, the spectral losses can be rather significant. It is possible to reduce these losses by selecting an optimal duration, τopt, for each pulse form in the given band of frequencies that will have the maximum value η∆fopt spectral efficiency. The curve “a” in Figure 1.11 shows the dependence of η∆fopt on ∆f for three simple single-polarity pulses: rectangular, bellshaped, and triangular. For all considered pulses at ∆f < 3, the maximum efficiency η∆f max < 50%, which essentially worsens the efficiency of a radar. The spectral efficiency η∆f can be improved by changing the radiating pulse spectrum. With this aim in mind, spectrum S2(f) of the correcting pulse u2(t) is subtracted from spectrum S1(f) of the basic single-polarity pulse u1(t). The form and intensity of this spectrum were selected so that,
P
S 1.0
200
0.8
160
0.6
120
0.4
80
0.2
40
0 0.2
0.4
0.6
0.8
1.0
_l = _lf λ c
FIGURE 1.10 Antennas are a major cause of energy loss in transmitting UWB signals. This shows the frequency response of a dipole antenna P and the spectrum of a video pulse S. The antenna will distort the signal by passing only the higher-frequency components.
© 2001 CRC Press LLC
Spectral efficiency
η ∆f opt
}
0.8
(b)
0.6
}
(a)
0.4 0.2 0
1
3
5
∆f
7
Spectral bandwidth(∆f ) FIGURE 1.11 Single-polarity video pulses have low-frequency components that do not radiate well through antennas, as shown for cases (a). Correcting the pulse shape to eliminate the low frequencies will increase the spectral efficiency and provide better performance, as shown in (b).
in the summation spectrum SΣ(f) = S1(f) – S2(t), the low-frequency components for f < fmin were considerably smaller than in the basic spectrum S1(f), and, for f > fmin , the changes were insignificant. The corrected bipolar sounding pulse will be uΣ ( t ) = u1 ( t ) – u2 ( t )
Now the spectral efficiency will depend on the parameters of basic and correcting pulse. The possible maximum values of the spectral efficiency η∆fopt have been determined for the simplest corrected pulses, consisting of the difference between two single-polarity pulses, each of which has a simple waveform of rectangular, bell-shaped, or triangular. The dependencies of η∆fopt on ∆f, computed for different forms of the basic and correcting pulses, are shown in curve (b) of Figure 1.11. The introduction of a corrected pulse appreciably improved the radar efficiency. Figure 1.12 gives the curves indicating the dependence of the ratio of the maximum spectral efficiency of the pulses with the correction and without correction η∆fopt(with corr)/η∆fopt(without corr) on the signal spectrum ∆f. These curves make it possible to estimate the effectiveness of correction of the pulse form so as to increase the spectral efficiency. With the growth of ∆f, the correction of the pulse form becomes less effective, decreasing from 2 at ∆f = 3 down to 1.2 at ∆f = 10. In the general case, the correcting pulse u2(t) can be shifted relative to the basic pulse by the time t0. Then it can be written that uΣ ( t ) = u1 ( t ) – u2 ( t – t0 )
and we shall get for the summary spectrum 2
2
S Σ ( f ) = [ S 1 ( f ) – 2S 1 ( f )S 2 ( f ) cos 2πft 0 + S 2 ( f ) ]
1 --2
Figure 1.13 shows the dependence of the maximum spectral efficiency η∆fopt on the relative delay γ = t0/τ and the correcting pulses of different forms on the basic rectangular pulse. Curves (a)
© 2001 CRC Press LLC
η∆f opt(with corr) η∆f opt(without corr)
3
2
1
0
1
3
5
7
∆f
9
FIGURE 1.12 The dependence of the spectral efficiency of pulses with and without corrections to the signal spectrum.
Spectral efficiency
η∆fopt
0,8
∆f = 3
0,6
}b }a
0,4 0,2 0 0
2
4
6
8
10 12
Correcting pulse relative delay
14
16
γ = t0 I τ
γ = t0 I τ
FIGURE 1.13 Shifting the correcting pulse by some time increment can improve the system’s performance. This plot shows the effects of the relative time delay on spectral efficiency.
reflect the dependence on η∆fopt on the delay γ at the constant parameters of correction. Curves (b) in Figure 1.13 reflect the same dependence for the case where optimal correction parameters were sought for each value of γ. From a practical point of view, the greatest interest will be the corrected pulses, whose basic and correcting pulses do not overlap by time, i.e., when t0 > (τ1/2 + τ2/2). For the case γ > 1.5, the
© 2001 CRC Press LLC
pulses follow each other, and η∆fopt becomes less than at γ = 0 but remains higher than in the uncorrected pulse case. Thus, when selecting video pulse UWB radar waveforms, it is necessary to take into account the spectral efficiency because, for the singe-polarity pulses, it can be considerably less than 1. This is especially true of the pulses having the ratio of the high spectrum frequency to the lower one equal to ∆f < 3. In this case, the efficiency does not exceed 50 percent. By increasing ∆f, the spectral efficiency increases so that, at ∆f ≈ 10, it can reach 85 to 90%. Therefore, it is advisable to use the correction of sounding pulses at ∆f < 3, which provides higher values of spectral efficiency. The correction of the pulse waveform makes it possible to increase the spectral efficiency at ∆f ≤ 3 by two times, and about 1.2 times at ∆f ≈ 10.
REFERENCES 1. Harmuth, H., Nonsinusoidal Waves for Radar and Radio Communications. Academic Press, New York, 1981. Translation into Russian. Radio i Svyaz, Moscow, 1985. 2. Harmuth, H., “Radar Equation for Nonsinusoidal Waves.” IEEE Transactions on Electromagnetic Compatibility, No. 2, v. 31, 1989, pp. 138–147. 3. L. Astanin and A. Kostylev, Fundamentals of Ultra-Wideband Radar Measurements, Radio i Svyaz, Moscow, 1989. (Published as Ultrawideband radar measurements: analysis and processing, IEE, UK, London, 1997.) 4. Stryukov B., Lukyannikov A., Marinetz A., Feodorov N., “Short impulse radar systems.” Zarubezhnaya radioelectronika No. 8, 1989, pp. 42–59. 5. Immoreev, I., “Use of Ultra-Wideband Location in Air Defence.” Questions of Special Radio Electronics. Radiolocation Engineering Series. Issue 22, 1991, pp. 76–83. 6. Immoreev, I. and Zivlin, V., “Moving Target Indication in Radars with the Ultra-Wideband Sounding Signal.” Questions of Radio Electronics, Radiolocation Engineering Series, Issue 3, 1992. 7. Shubert, K. and Ruck, G., “Canonical Representation of Radar Range Equation in the Time Domain.”SPIE Proceedings: UWB Radar Conference, Vol. 1631, 1992. 8. Immoreev, I. and Vovshin, B., “Radar observation using the Ultra Wide Band Signals (UWBS),” International Conference on Radar, Paris, 3–6 May, 1994. 9. Immoreev, I. and Vovshin, B., “Features of Ultrawideband Radar Projecting.” IEEE International Radar Conference, Washington, May, 1995. 10. Immoreev, I., Grinev, A., Vovshin B., and Voronin, E., “Processing of the Signals in UWB Videopulse Underground Radars,” International Conference, Progress in Electromagnetions Research Symposium. Washington, DC, 22–28 July, 1995. 11. James D. Taylor (ed.), Introduction to Ultra-Wideband Radar Systems. CRC Press, Boca Raton, FL, 1995. 12. Osipov M., “UWB radar,” Radiotechnica, No. 3, 1995, pp. 3–6. 13. Bunkin B. and Kashin V. “The distinctive features, problems and perspectives of subnanosecond video pulses of radar systems.” Radiotechnica, No. 4–5, 1995, pp. 128–133. 14. Immoreev, I., “Ultrawideband (UWB) Radar Observation: Signal Generation, Radiation and Processing.” European Conference on Synthetic Aperture Radar, Konigswinter, Germany, 26–28 March, 1996. 15. Immoreev, I., “Ultrawideband Location: Main Features and Differences from Common Radiolocation,” Electromagnetic Waves and Electronic Systems. Vol. 2, No. 1, 1997, pp. 81–88. 16. Immoreev I. and Teliatnikov L. “Efficiency of sounding pulse energy application in ultrawideband radar.” Radiotechnica, No. 9, 1997, pp. 37–48. 17. Immoreev, I. and Fedotov, D. “Optimum processing of radar signals with unknown parameters,” Radiotechnica, No. 10, 1998, pp. 84–88. 18. Immoreev, I., “Ultra-wideband radars: New opportunities, unusual problems, system features,” Bulletin of the Moscow State Technical University, No. 4, 1998, pp. 25–56.
© 2001 CRC Press LLC
2
Feature Detection in UWB Radar Signals Igor I. Immoreev
CONTENTS 2.1 2.2
Introduction Brief Overview of Conventional Methods for Optimal Detection of Radar Signals 2.3 Quasi-optimal Detectors for UWB Signals 2.4 Optimal detectors for UWB Signals References
2.1 INTRODUCTION Any radar signal scattered by a target is a source of target information. However, the returned scattered signal will combine with radar receiver front-end internal noise and interference signals. Each signal processing system must provide the optimal way to extract desired target information from the mixed signal, noise, and interference input. Optimum is a term relative to the radar system’s mission or function. What is optimal for one use will not be so for another. Information quality depends on the process that determines the algorithm for analyzing the mixture of signal, noise, and interference and sets the rules for decisions after the analysis is complete. This decision may be based on the detection of the echo signal, the value of the measured signal parameters such as the Doppler shift, power spectral content, or other criteria. The algorithm process efficiency is defined by the statistical criterion, which helps to determine if this algorithm is the best possible one for the application. An algorithm is called the optimum if the information is extracted in the best way for a particular purpose, and the resulting distortions of information resulting from processing operations are minimal. Radar functional requirements will determine the sophistication of the signal processing algorithms. Simple binary detection provides minimal information and shows only that some target is present. Distinguishing and resolving several targets requires additional information requirements and therefore a larger signal bandwidth. If the signal parameters are time variable, the quantity of the information received must be large enough for the recovery of echo signals. Sophisticated applications such as target imaging and recognition will require even more information. For passing information, the channel frequency bandwidth and the signal bandwidth are the determining factors. An advanced “smart” radar that can resolve multiple targets in a small space, or image and identify targets, will need a wider bandwidth signal than other systems. Target information begins with target detection. Therefore, primary attention will be given to this problem in this chapter.
© 2001 CRC Press LLC
Barton, Skolnik, Shirman, Sosulin, Gutkin, and others have described the problems of detection for targets concealed by noise in Refs. 1 through 5. The problem is that past work was done mainly for narrowband signals, i.e., sinusoidal and quasi-sinusoidal signals. Mathematical treatment of sinusoidal signals is simplified, because they do not change their waveforms during the processing operations of summation, subtraction, differentiation, and integration, which occur during radiation, reflection from the target, and reception at the radar receiver. In this case, we mean that the signal amplitude, frequency, and initial phase of narrowband echo signals can change. Target reflection can modulate any of the signal parameters, but the shape of narrowband signals is unchanged during target location and remains a sinusoidal harmonic oscillation. For narrowband signal and target detection, the known signal waveform is a priori information. This feature allows using matched filters and correlators to process narrowband radar signals. In the case of high-information-content ultra-wideband (UWB) radar signals, not only the signal parameters but also the signal shape will change during processing operations mentioned above. As shown in Refs. 6 and 7, the UWB signal changes shape during target locating many times. As a result, the shape of a signal at the processor input differs essentially from the shape of a radiated signal. The changed signal waveform contains target information, as shown by Van Blaricum and Sheby in Ref. 6. As a result, conventional optimal processing methods, such as matched filtering and signal correlation, are impossible to implement, because there is no a priori signal waveform information. Building successful UWB radars will require new processing algorithms. The objective of UWB radar optimal processing algorithms should be to give the maximum signal-to-noise ratio at the processor output for signals with unknown shape. To solve this problem, first we consider the conventional narrowband signal processing algorithms in Section 2.2 then examine quasi-optimal and optimal detection methods for UWB signals in Sections 2.3 and 2.4.
2.2 BRIEF OVERVIEW OF CONVENTIONAL METHODS FOR OPTIMAL DETECTION OF RADAR SIGNALS The majority of radar target detection prediction problems can be solved using the methods of statistical decision theory. Those methods analyze the receiver output voltage during a certain period of time and reach a decision about whether a target return signal is present or absent in the voltage. Because the signal must be described statistically, the quality of detection is expressed as the probability of detection and false alarm for given target conditions. Two conditions should be met to make a reliable target detection decision. First, we must have some preliminary (a priori) information about the constituents of receiver output voltage. A well known noise probability density W0(u) and signal + noise probability density W1(u) can be used as a priori information. Later, we will show that the shape of a desired signal can be also used as a priori information. Second, the output voltage processing and target presence detection must be performed according the particular algorithm. This process must increase the volume of the received (a posteriori) information on the constituents of output voltage to the maximum. Furthermore, we consider this algorithm. We can have two groups of events for binary detection. The first group comprises two events, which reflect the actual situation in the radar surveillance area. They are event A1, when the target is present, and event A0, when the target is absent. Each of these two events has a probability of occurrence described by integrated distribution functions P(A1) and P(A0). These events form a full group and are incompatible, because only one of them may happen at a time; therefore, P(A1) + P(A0) = 1. The second group is another two events, which reflect the actual situation at the signal processor output after the received voltage has been processed and the decision has been made. These two events are A′ 1 , meaning that the target is present, and event A′ 0 , meaning the target is absent. The © 2001 CRC Press LLC
probabilities of occurrence of these two events are P( A′ 1 ) and P( A′ 0 ). These events are incompatible, so they too form a full group P ( A′1 ) + P( A′ 0 ) = 1. One event of the first group and one event of the second group will be noted in every surveillance area cell when detecting a target. As a result, only one of four possible variations of simultaneous occurrence of two independent events appears in every volume cell. Two of these variants apply to the true case so that the events A1 and A′ 1 correspond to reliable target detection, and the events A0 and A′ 0 correspond to the case when targets are not detected because none is there. Another two variants are wrong decisions cases where the events A1 and A′ 0 correspond to the miss of a target and the events A0 and A′1 correspond to a false alarm where no target is present, but one is indicated. These wrong case variants result from the statistical (noise) characteristics of the receiver output voltage. As is known, the probability of a simultaneous occurrence of two compatible and dependent events P(An + A′k ) is determined by the multiplication of probabilities. It is equal to the product of the probability of one even P(An) and the conditional probability of the occurrence of the second event calculated under the assumption that the first event has already occurred P( A′k /An): (2.1)
P ( A n + A′ k ) = P ( A n ) ⋅ P ( A′ k /A n )
As shown in Figure 2.1, the conditional probability of false alarm, given the condition that the signal is absent, is the probability that noise voltage u(t) will exceed the threshold value u0. ∞
P ( A′ 1 A 0 ) = P [ u ( t ) ≥ u 0 ] =
∫ W0 ( u )du
(2.2)
u0
Then the probability of false alarm is ∞
P ( A 0 + A′ 1 ) = P ( A 0 ) ⋅ P ( A′ 1 A 0 ) = P ( A 0 ) ∫ W 0 ( u ) du
(2.3)
u0
+U
0
W0(u)
t
U0 = U
thresh
-U
False alarm probability
FIGURE 2.1 False alarm probability for random noise, or the chance that random noise will exceed some threshold value.
© 2001 CRC Press LLC
Figure 2.2 shows the conditional probability that the signal will be missed when it is present, or the probability that the signal + noise voltage u(t) will not exceed the threshold value u0: P ( A′ 0 A 1 ) = P [ u ( t ) ≤ u 0 ] =
u0
∫0 W1 ( u )du
(2.4)
The probability that the desired signal will be missed is determined by the following expression: u0
P ( A 1 + A ) = P ( A 1 ) ⋅ P ( A A 1 ) = P ( A 1 ) ∫ W 1 ( u ) du ′ 0
′ 0
(2.5)
0
The events (A0 + A′1 ) and (A1 + A′ 0 ) are incompatible. In accordance with the rule of composition of probabilities, the probability that one of two wrong decisions will be made is P [ ( A 0 + A′ 0 ) or ( A 1 + A′ 0 ) ] = P ( A 0 + A′ 1 ) + P ( A 1 + A′ 0 ) u0
∞
= P ( A 0 ) ∫ W 0 ( u )du + P ( A 1 ) ∫ W 1 ( u )du
(2.6)
0
u0
If we change the limits of integration, this expression will take the following form: u0
∞
P [ ( A0 + A′ 1 ) or ( A 1 + A′ 0 ) ] = 1 – P ( A 0 ) ∫ W 0 ( u )du + P ( A 1 ) ∫ W 1 ( u ) du 0
(2.7)
u0
signal
{
+U
probability of signal missing
Wo (u) 0 Uent
U = 0 U thresh
W1 (u)
-U
FIGURE 2.2 Conditional probability that the signal will not be detected when present, or the probability that the signal plus noise voltage u(t) will not exceed the threshold u0.
© 2001 CRC Press LLC
The probability of making true decision will be P [ ( A 0 + A′ 0 ) or ( A 1 + A′ 1 ) ] = 1 – [ P ( A 0 + A′ 1 ) or ( A 1 + A′ 0 ) ] u0
∞
= P ( A 0 ) ∫ W 0 ( u )du + P ( A 1 ) ∫ W 1 ( u )du 0
(2.8)
u0
To find the optimum threshold level u0, it is necessary to determine threshold value for which the probability of making true decision is maximum. For this purpose, we calculate the following derivative: dP [ ( A0 + A′ 0 ) + ( A 1 + A′ 1 ) ] ------------------------------------------------------------------du 0
(2.9)
and then set it to zero. As a result, we get P(A0)W0(u0) = P(A1)W1(u0) or P ( A0 ) W1 ( u0 ) ----------------- = ------------W0 ( u0 ) P ( A1 )
(2.10)
Figure 2.3 shows the noise probability density W0(u) and the signal + noise probability density W1(u). It is evident from the picture that, the larger the signal amplitude, the higher the threshold level must be. For P(A0) = P(A1) = 0.5, the optimum threshold level is defined by the point of crossing of two probability density W0(u) and W1(u). The necessary condition for making the decision on target presence is W1 ( u ) P ( A0 ) -------------- ≥ -------------W0 ( u ) P ( A1 )
(2.11)
We can make a decision on target absence by reversing the inequality. This inequality is true for the value of the noise voltage and the signal plus noise voltage in one moment of time, and it comprises a one-dimensional probability density W0(u) and W1(u). The inequality can be extended to the case where the decision is made from N voltage values, which
D W (u)
W (u)
0
1
F 0 FIGURE 2.3
U
thresh
u U
ent
The noise distribution function W0(u) and the signal plus noise distribution function W1(u).
© 2001 CRC Press LLC
we can get from the ensemble of realization at one time moment or from one realization at different time moments: W 1 ( u 1 , u 2 , u 3 ,…, u N ) P ( A 0 ) --------------------------------------------------≥ -------------W 0 ( u 1 , u 2 , u 3 ,…, u N ) P ( A 1 )
(2.12)
In this case, the probability density W0 and W1 become multidimensional. This most simple statistical criterion is called the ideal observer criterion. In practical cases, the disadvantage of this criterion is that we do not know the a priori probability P(A1) that a desired target is present, and the probability P(A0) that a desired target is not in the radar surveillance area. There is one further problem in that the ideal observer criterion does not consider the consequences of wrong decisions. To overcome the ideal observer criterion, we introduce weight coefficients B and C in the equation, which describes the estimation of probability of wrong decisions. These coefficients characterize the losses caused by false alarm and target miss: P [ ( A 0 + A′ 1 )or ( A 1 + A′ 0 ) ] = B ⋅ P ( A 0 + A′ 1 ) + C ⋅ P ( A 1 + A′ 0 )
(2.13)
In this case, the following inequality must be satisfied to make a decision on the target presence: W 1 ( u 1 , u 2 , u 3 ,…, u N ) B ⋅ P ( A 0 ) --------------------------------------------------≥ ----------------------W 0 ( u 1 , u 2 , u 3 ,…, u N ) C ⋅ P ( A 1 )
(2.14)
This statistical criterion is called the minimum risk criterion. Its practical implementation is rather difficult, not only because the priori probabilities P(A1) and P(A0) are unknown, but also because the a priori estimations of weight coefficients B and C are unknown as well. This criterion, along with the ideal observer criterion, is referred to as the Bayes criterion. One more well known criterion is the maximum likelihood criterion. The probability density for N random voltage values at the receiver output W(u1, u2, u3,…, uN), which we mentioned above, is named the likelihood function. The maximum likelihood method helps to determine the maximum value of this function. To perform this operation, we must take the derivative of the likelihood function with respect to the desired signal and set it to zero. The solution of this equation helps to find the maximum likelihood estimation of the signal. For example, if random values of voltage at the receiver output: u1, u2, u3,…, uN are distributed normally, the estimation is equal to their average value. This method gives less dispersed estimations than other methods. Such estimations are called efficient, so the criterion of the optimum operations, which use the maximum likelihood method, is the estimation efficiency. If the maximum likelihood criterion is used, then the decision on target presence is made when the likelihood function W1 exceeds the likelihood function W0: W 1 ( u 1 , u 2 , u 3 ,…, u N ) --------------------------------------------------≥1 W 0 ( u 1 , u 2 , u 3 ,…, u N )
(2.15)
As was mentioned above, we need some a priori probabilities for making decisions on target presence, but in many practical cases these will be unknown. Another widely used criterion, which does not depend on these probabilities, is called the Neumann–Pearson criterion. This provides the maximum probability of detection D = P(A1 + A′ 1 ), at the constant false alarm rate F = P(A0 + © 2001 CRC Press LLC
A′ 1 ). According to this criterion, the threshold value u0 located in the right part of the likelihood expression is chosen for a given conditional probability of false alarm: ∞
P [ u ( t ) ≥ u0 ] =
∫ W0 ( u )du
(2.16)
uo
So, in many cases, the solution of the problem of target detection is reduced to the calculation of the following ratio: W 1 ( u 1 , u 2 , u 3 ,…, u N ) Λ = --------------------------------------------------W 0 ( u 1 , u 2 , u 3 ,…, u N )
(2.17)
This ratio is called the likelihood ratio. We make a decision on target presence when this ratio exceeds some constant level u0, given according to the selected criterion. The calculation of the likelihood ratio helps to design the optimum receiver. Conventional methods for optimal radar signal detection use the noise probability density at the receiver output as a priori information. This noise is usually approximated by so called white noise, which has an equally distributed spectral power density N0 (W/Hz) within the receiver bandwidth ∆f and the normal time probability density of the voltage u: 2
1 u W 0 ( u ) = -------------- exp – ---------2 2σ 2πσ
(2.18)
This probability density has zero average value, and its dispersion is σ2 = N0∆f. The samples of noise voltage are statistically independent if they are spaced at the ∆t = 1/2∆f. Then, the likelihood function for N noise samples is the product of N factors so that N
N
i=0
i=1
N 1 1 2 W 0 ( u 1 , u 2 , u 3 ,…, u N ) = ∏ W 0 ( u i ) = -------------- exp – ---------2 ∑ u i 2πσ 2σ
(2.19)
The probability density for signal plus noise depends on a signal structure. We usually use a hypothetical signal to understand the general laws of optimal processing in the conventional radar theory. For this case, all the signal parameters are fully known except the time of arrival. Therefore, the signal plus noise probability density differs from the noise probability density only by a nonzero average value that is equal to the signal amplitude. 2 1 ( u – s )- W 1 ( u ) = -------------- exp – ----------------2 2πσ 2σ
(2.20)
The likelihood function for the signal plus noise is N
1 N W 1 ( u 1 , u 2 , u 3 ,…, u N ) = ∏ W 1 ( u i ) = -------------- exp 2πσ i=0
© 2001 CRC Press LLC
N
1 2 – ---------2 ∑ ( u i – S i ) 2σ i=1
(2.21)
The likelihood ratio for a fully known signal will be N
N
i=1
i=1
2 W 1 ( u 1 , u 2 , u 3 ,…, u N ) 1 2 Λ = --------------------------------------------------= exp – ---------2 ∑ s 1 • exp -----2 ∑ u i s i W 0 ( u 1 , u 2 , u 3 ,…, u N ) 2σ σ
(2.22)
Considering that σ2 = N0 × ∆f and ∆t = 1/2∆f, we can write 1/σ2 = 2 × ∆t/N0. Then, we have N
N
1 2 2 Λ = exp – ------ ∑ s 1 ∆t • exp ------ ∑ u 1 s i ∆t N0 N0 i=1
(2.23)
i=1
In conventional theory, the next step we use is to the transfer to the limits at ∆t→0. But it should be noted that here ∆f→∞ and therefore σ2→∞; that is, the noise power grows infinitely high. Nevertheless, such a model is used in practice. Then, we go from the summation to the integration in the time interval from 0 to T, where N random values of the receiver output voltage u1, u2, u3,…, uN are located: T
T
0
0
1 2 2 Λ = exp – ------ ∫ s ( t )dt • exp ------ ∫ u ( t ) ⋅ s ( t ) dt N N0 0
(2.24)
To remove the exponential member in this expression and simplify the design of the optimal receiver, a logarithm of Λ is calculated instead of the value of Λ, thus, T
T
2 1 2 lnΛ = ------ ∫ u ( t ) • s ( t ) dt – ------ ∫ s ( t ) dt N0 N0
(2.25)
0
0
In this equation, the second member is the ratio of signal energy to spectral power density, which does not depend on receiver output voltage u(t). For the known signal and given noise power density, this member has constant value, which can be considered when we determine the threshold level u0 or can be included in it. To get optimum detection algorithm, we must calculate the following integral: T
2----u ( t ) • s ( t ) dt N0 ∫
(2.26)
0
and compare the received value to a threshold. This expression is a correlation integral, and it determines the association between the signal s(t) and the receiver output voltage u(t). We might say that this integral indicates how closely the received voltage resembles the desired signal. It is clear that the knowledge of the desired signal parameters is very important a priori information for this detection method. The circuit design that performs conventional correlation processing with the use of this integral is shown in Figure 2.4. It is clear from the circuit that two signals are used for processing; they © 2001 CRC Press LLC
U(T)
uecho(t)+un(t)
∫ uref(t)
FIGURE 2.4
uthresh = u0
Conventional correlation processing block diagram.
are a reference signal (a delayed radiated signal is usually used) and an echo signal that is received together with the noise and is fed into the correlator. Here, we introduce some notation so that uref(t) is reference signal voltage, uecho(t) is echo signal voltage, un(t) is noise voltage, and u(T) is the voltage at the correlator output at the end of accumulation period T. Considering this notation, the voltage at the correlator output is T
2 u ( T ) = ------ ∫ u ( t )u ref ( t ) dt N0
(2.27)
0
A target is considered as a detected one if u(T) > u0. The receiver output voltage u(t) is the correlator input. In various cases, this voltage will be as follows: u(t) = un(t) if the echo signal is absent, or u(t) = uecho(t) + un(t) if the echo signal is present. Now consider each of these cases. If an echo signal is absent, then the correlation integral has the following form: T
2 u ( T ) = ------ ∫ u n ( t )u ref ( t ) dt No
(2.28)
0
In reality, this integral indicates the variations of noise probability density while the noise is passing through the correlator circuits. This fact is of great importance for the target detection procedure, because the noise probability density at the correlator output determines the threshold level u0 if the echo signal is absent. For example, if for some reason the noise dispersion at the correlator output changes value, then to retain a constant value of false alarm rate, the threshold level must be changed according to the Neumann–Pearson criterion (see Figure 2.1). Let us consider the factors that influence the noise dispersion in this case. The noise voltage at the correlator input is normally distributed, so the normal law is preserved while the noise is passing through the correlator circuits. The average value of this noise distribution is zero, but the dispersion of the noise voltage at the correlator output differs from those at the correlator input. There are some reasons for this difference. First, the spectral noise density N0 can vary during the operation process, which will cause noise dispersion variations. Second, the multiplication of the noise voltage with the reference signal will cause an increase in the noise dispersion proportionally to the signal energy E. To retain the constant rate of false alarm with the increase in noise dispersion, we must raise the threshold level u0. But to retain the given detection probability, it is necessary to increase the signal-to-noise ratio, that is, to increase the energy scattered from a target by increasing the transmitter power or antenna gain. © 2001 CRC Press LLC
In practical cases, some measures for stabilization of the threshold level and false alarm rate are taken. For this purpose, the noise voltage and the reference signal voltage are normalized, i.e., divided by 1/ N 0 and 1/ E correspondingly. In this case, the threshold level u0 will be constant, but to retain the given detection probability, the signal-to-noise ratio must be increased. In a general case, the noise dispersion at the correlator output depends on the integration interval T as well. But this dependence is lacking if the receiver bandwidth is matched to the duration of the integrated signal. If the echo signal is present in the correlator input voltage, then the correlation integral takes the form
2 u ( T ) = -----N0
T
T
∫ uref ( t )uecho ( t ) dt + ∫ uref ( t )un ( t )dt
(2.29)
0
0
The first integral of this expression determines the nonzero average value of the probability density W1(u). The second integral determines the dispersion of this probability density; it will be equal to the dispersion of probability density W0(u) for the case when the echo signal is absent. Figure 2.5 shows the mutual arrangement of probability densities W0(u) and W1(u) at the conventional correlator output related to values of the reference and echo signals. The influence of a reference signal on dispersions of these probability densities and threshold levels is evident from the plot. By using the data of Figure 2.5, the detection characteristics can be plotted for this case. It is evident from the plot that the false alarm probability is ∞
F =
1
2
exp – ---------2 du ∫ ------------- 2σ 2πσ u
(2.30)
u0
W(u)
W0(u), uecho=0, uref=1.
0.45 0.4 0.35 0.3 0.25 0.2
W0(u), uecho=0, uref=2. W0(u), uecho=0, uref=8. W1(u), uecho=2, uref=2.
0.15 0.1 0.05 0
W1(u), uecho=8, uref=8.
0 0.8 1.6 2.4 3.2 4.0 4.8 5.6 6.4 7.2 8.0 8.8 9.6 u FIGURE 2.5 Dependence of the functions W0(u) and W1(u) at a conventional correlator output on the reference and return (scattered) signals. The value u is the value of the sum of the correlated reference and target signals plus the correlated reference and noise signals.
© 2001 CRC Press LLC
and the detection probability is ∞
D =
(u – s)
1
2
- du exp – ----------------2 ∫ ------------- 2σ 2πσ
(2.31)
ν0
Using the probability integral, x
2 1 u- du Φ ( x ) = ---------- ∫ exp – --- 2 2π
(2.32)
–∞
we can write the expressions for F and D as follows: 2s- D = 1 – Φ u 0 – ---- N 0
F = 1 – Φ ( u 0 ),
(2.33)
On the basis of these formulas, the detection characteristics (that is, the dependence of detection probability D on signal-to-noise ratio q = s/N0 for the constant value of the false alarm probability F) are plotted in Figure 2.6. These characteristics were plotted for the fully known signal, and so they are the best among the detection curves that might be plotted for other types of signal. In all further discussion, we shall consider these characteristics as the standard. In addition to the scheme shown in Figure 2.4, there are some other ways to perform the procedure described by the correlation integral. One widely used solution is a matched filter that has an impulse response matched with the detected signal shape. The impulse response enters into the expression under the integral sign instead of the reference signal. The signal-to-noise ratio at the matched filter output is similar to this ratio at the correlator output. But we will not discuss matched filters in this chapter, because they are designed for processing signals with a priori known shape and cannot be used for processing signals with unknown shapes, such as UWB signals.
D 0.8 -10
10
0.6
-8
10 -6
10
0.4
-4
F = 10 0.2
-12
10 1
2
4
6
8
q
FIGURE 2.6 Detection probability D of a conventional correlator for a fully known signal for different probability of false alarm rates F vs. the signal-to-noise ratio q.
© 2001 CRC Press LLC
2.3 QUASI-OPTIMAL DETECTORS FOR UWB SIGNALS As was shown previously, a real UWB signal scattered from a target has an intricate shape, as shown in Figure 2.7, and its parameters, such as a duration and a number, location and amplitude of signal maximums are unknown. The lack of a priori information on signal parameters makes it impossible to describe such a signal analytically and to introduce some a priori information about the signal into a signal processor. There are some other difficulties that can be added to those mentioned above. • Target multiple returns. The decrease in radiated pulse duration (nearly three orders of magnitude compared to conventional narrowband radar) increases the number of range resolution cells many times. A target will provide a series of returns from the combined scattering centers at each range resolution increment. The requirements for signal processor capacity and memory volume are increased correspondingly. • Target motion effects. Because of target movement, the scattered signals received in adjacent pulse repetition periods can arrive from different resolution cells. If a target has a radial velocity VR = 800 km/h and pulse repetition period Tr = 1 ms, target movement during this period is VR Tr = 22.2 cm. At the same time, the length of a resolution cell is only 15 cm when the pulse duration is 1 ns. This results in some difficulties such as accumulation and inter-period compensation when we use algorithms that process signals from various repetition periods. Target multiple returns and motion effects are the conditions under which UWB target return signals must be detected. In principle, it is possible to realize the procedure for optimal detection of an unknown target that has a large number of point scatters. The returns from point scatterers can be resolved into distinct “bright points.” Van der Spek first proposed such a processing algorithm in Ref. 8. Let us suppose that a target has the length L and occupies N resolution cells x1, x2, …, xN in space. The signals scattered by bright points are present in K cells, and the other cells are “empty.” Processing all combinations from N elements on K bright points can provide optimal detection of the unknown signal. This algorithm realizes the detection of a fully known signal, as one of these combinations must coincide with a signal scattered from a target. The schematic diagram for such an optimal detector is described in Ref. 8 and shown in Figure 2.8, which shows that a practical realization of this scheme requires many processing channels. For example, if the number of resolution cells is N = 40 within the observation interval, the signal bandwidth is 1 GHz, and the number of expected bright points is K = 8, then the number of processing channels required
t FIGURE 2.7 Power levels from an UWB signal scattered from a target when the range resolution is smaller than the target size. The target return becomes a series of low-energy returns from scattering centers. This concept differs from that of the usual target radar cross section models in which the resolution is considerably larger than the target.
© 2001 CRC Press LLC
Const
( )
x1 x2
K
2
exp
exp
xk Threshold
xN
{
exp
Const FIGURE 2.8
N
Signal energy
CK channels
k
An optimal detector for a multiple-point scatterer target.8
is 2.9 × 1010. The structure of such detector is very complex and cannot be realized using presentday electronic components. Van der Spek proposed two simpler algorithms that can realize quasi-optimal processing of unknown signals.8 The first algorithm uses the changes of the energy at the detector output when a signal scattered by a target is received. It is shown in Ref. 8 that if N = K (where a scattered signal is present in all resolution cells within the observation interval), the optimal detector shown in Figure 2.8 is modified into a quadratic detector with a linear integrator as shown in Figure 2.9. In this case, the integration is performed over all N resolution cells, so there is no need to have a priori information on presence and location of K bright points. This detection scheme is called the energy detector. If we use this detector when K ≠ N, additional losses result from the summation of noise in “empty” resolution cells within an observation interval. By increasing the number of bright points K within this interval, the detection curve of energy detector approaches the detection curve for optimal detector for fully known signal. To reduce the losses in energy detector when K << N, another quasi-optimal algorithm was presented in Ref. 8. This algorithm makes ranking of signals scattered by bright points within the observation integral. The observation interval is selected to roughly approximate the estimated target size. In this interval, maximum signal amplitudes are selected using the “sliding window.” Only K maximum amplitudes from N resolution cells are quadratically processed and linearly
Uent(t)
Linear channel
Square-law device
Integrator
Threshold
U(T)
FIGURE 2.9 If a target is present within K cells within the observation interval, the detector of Figure 2.8 is modified into a quadratic detector with a linear integrator.
© 2001 CRC Press LLC
summed. Figure 2.10 is the block diagram of an example single-channel rank detector. The procedure for selecting signal maximums for a target comprising three bright points is also shown here. If K is unknown, then a multichannel rank detector is used, and in each channel a various number of signal maximums scattered by target bright point are summed. Output signals from all detection channels are combined. The efficiency of such detector is much higher, but it is achieved by complicating the detector scheme. Besides quasi-optimal detectors for signals with unknown parameters mentioned above, we would like to discuss the “by-point” detector described by Bakut and Bolshakov et al. in Ref. 9. The operation algorithm of this detector does not take into account a signal structure. Target detection is performed by successive comparison of the signal voltage with threshold level in each resolution cell and combining the results in a logical OR scheme. The detector does not accumulate signals scattered by different bright points, which causes large energy losses, which are partly compensated by logical accumulation of decisions (probabilities). The advantage of this detector is ease of realization. N
2
( )
X1 1
X2
k maximum signals k
XN
Threshold
a. Functional block diagram Uent(t)
Delay line
Comparator
Delay line
Σ
Comparator
Delay line
Comparator b. Schematic diagram of the signal ranking process
t
T - observation interval
U output of first stage
t U output of second stage t U output of third stage t c. Time sequencing of target signal maximum selection process
FIGURE 2.10 A single-channel rank detector.
© 2001 CRC Press LLC
To estimate the efficiency of the three previously discussed quasi-optimal detectors, we can compare them with a detector for a fully known signal, which we consider as the standard. The mathematical modeling of the processing algorithms for these four detectors featured an observation interval of N = 100 resolution cells. If the signal duration is 1 ns, it corresponds to a target resolution length of 15 m. The number of bright points K was varied from 1 to 32. Normally distributed white noises with zero average values and equal dispersions were used for modeling all detectors. While modeling, the energy detector summed the samples squared within the whole observation gate. The multichannel rank detector was modeled using the six-channel scheme, the squared summation of 1, 2, 4, 8, 16, and 32 samples was performed in a corresponding channel. Channel output signals were weighted and combined in a “selection of maximums” scheme. Weight coefficients were selected so as to give equal false alarm rates in every channel. The “by-point” detector actually was identical to the first channel of multichannel rank detector, which operated with only one bright point. The Table 2.1 shows the energy losses of threshold signal (dB) for three detection algorithms relative to the standard algorithm for detection probability D = 0.5 and false alarm rate F = 10–3.
TABLE 2.1 Threshold Energy Signal Losses (dB) for Three Detection Algorithms for D = 0.5 and F = 10–3 Number of “bright points” Algorithm
1
4
8
16
32
Energy algorithm
7.5
8.1
5.2
3.0
2.1
Multichannel rank algorithm
2.7
4.8
3.0
1.6
1.5
“By-point” algorithm
2.5
5.6
4.2
3.8
4.7
Comparing results, we see in Table 2.1 that the energy detector is less effective than other detectors if the number of bright points is small. If a target configuration becomes more complex so that the number of bright points increases, then the relative losses for this detector is reduced, and for K = 32 it is even more efficient than a “by-point” detector but still is less efficient than a multichannel rank detector. This result can be easily explained. If the number of target bright points is small, the energy detector accumulates many noise samples, while the “by-point” detector selects only one bright point. If the number of bright points is increased, then the number of noise samples accumulated by the energy detector is reduced, and its efficiency grows. At the same time, the “bypoint” detector does not use the full signal energy, as it does not accumulate. In the limit, if the number of bright points approaches N = 100, the energy detector becomes identical to the standard detector for a fully known signal. It should be noted that the “energy detector” and the “by-point” detector can be operated effectively in the opposite situations, when the number of bright points is very small or very large. It would help to develop a two-channel detector that uses these two detectors operating simultaneously. The output signals are combined by maximum. Such a scheme may be also represented as a simplified multichannel rank detector comprising only two channels. One of these channels can detect targets with very simple configuration, and the other channel can detect targets with very complex configuration. Before combining by maximum, the output signals are normalized to equalize the false alarm rates, as it is performed in a multichannel rank detector. Figure 2.11 shows the detection curves for the quasi-optimal detectors discussed above. © 2001 CRC Press LLC
Single channel rank Energy
D Multichannel rank
1.0
–3
F = 10
Full known signal
By points
0.5
Signal/noise -10
-8
-6
-4
-2
2
0
4
by one brilliant point, dB
FIGURE 2.11 Comparison of quasi-optimal detector characteristics for the case F = 10–3.
2.4 OPTIMAL DETECTORS FOR UWB SIGNALS 2.4.1
INTRODUCTION
Scattered UWB signals will have unknown parameters determined by the target size and shape. However, there is one parameter that is determined only by the radar and does not depend on a target shape. The pulse repetition period Tr is the only a priori signal information that can be used for target detection. It was already known, a long time ago, that it was possible to use radiated signal periodicity for target detection when the scattered signals are concealed by noise.10
2.4.2
DETECTION SCHEME
To use signal periodicity for UWB signal target detection, we introduce the modification shown in Figure 2.12 to the scheme of conventional correlation processing already shown in Figure 2.4.
uecho(t)+un,2(t)
∫
U(T)
uecho(t-Tr)+un,1(t)
Uthresh
Delay Tr FIGURE 2.12 Modified conventional correlation processing for detecting multiple scatterer targets. This is called the inter-period correlation processor (IPCP).
© 2001 CRC Press LLC
In this modified scheme, a signal received in the previous observation period and delayed at pulse repetition period Tr is the reference signal. This scheme has three dissimilarities from the conventional correlator. 1. The received signal is compared not with a radiated signal but with a signal scattered by a target. 2. Noise signals feed both correlator inputs but are not correlated, because they are received in different repetition periods. 3. The integration period T is determined not by the radiated signal duration but by the observation interval or the scattered signal duration. For example, if a physical target length is L, the integration time for the system of Figure 2.12 is equal to T = 2L/c – τ, where c is velocity of light and τ is the radiated signal duration. Therefore, the signal shape is the parameter that determines the efficiency of the modified correlation receiver. This type of signal processing is named inter-period correlation processing (IPCP). After the correlator combines both the current and delayed signal samples taken from two repetition periods, the following processing is performed at the same time interval. This helps to analyze IPCP operations using single-dimensional distribution functions. As a statistical operation criterion, we use the Neumann–Pearson criterion. To determine the generals laws of IPCP operation, we first analyze the processing algorithm for a signal scattered by a stationary target where the signals received in two sequential periods are identical. This case is considered as the standard one for IPCP, just as the reception of fully known signal is considered as the standard case for a conventional correlation processor. So we will compare the detection of a signal scattered by a stationary target in IPCP and detection of a fully known signal in a conventional correlation processor; both signals are mixed with the same white noise. This helps to get objective estimation of the efficiency of this method. Practically speaking, IPCP efficiency will depends factors such as target movement during pulse repetition period Tr , the relation of the processing gate width and the target size, the validity of implementing normal distribution laws to describe statistical noise parameters of in ultrawideband signals, etc. The estimation of the influence of these factors on detector operation will help to determine the IPCP efficiency.
2.4.3
IPCP OPERATION PROCESSING
AND
DISTINCTION FROM CONVENTIONAL CORRELATION
For the IPCP, the voltage at the correlator output is T
2 u ( T ) = ------ ∫ u ( t )u ( t – T r ) dt N0
(2.34)
0
When a scattered signal is absent, this voltage is determined by the following expression: T
T
0
0
2 2 u ( T ) = ------ ∫ u n ( t )u n ( t – T n ) dt = ------ ∫ u n,1 ( t )u n, 2 ( t ) dt N0 N0
(2.35)
In this case, the integrated expression is the product of two normally distributed and uncorrelated noises. Unlike the conventional correlator, the distribution function of this product has a constant dispersion, as it does not depend on the reference signal. This product of two noises is integrated © 2001 CRC Press LLC
during the observation interval T. The duration of this interval can be varied as it is determined by the physical target length. The dispersion of the product of two noise voltages will be increase proportionally to the integration time interval. It should be mentioned that this time interval is constant in the traditional correlator, as it is determined by the duration of a radiated signal. Noise distribution function W0 at the IPCP output obtained by the integration will determine the threshold level, which provides the given false alarm rate. So, the threshold level in IPCP scheme is determined by the length of an expected target (i.e., observation interval), while in a conventional correlator it is determined by the reference signal (if the normalization is not used). If a received signal is present at the correlator input then the correlator output voltage is determined by the following expression: 2 u ( T ) = -----N0
T
T
T
T
∫ uecho ( t )uecho ( t ) dt + ∫ uecho ( t )un,1 ( t ) dt + ∫ uecho ( t )un,2 ( t ) dt + ∫ un,1 ( t )un,2 ( t ) dt 0
0
0
(2.36)
0
As in the case of conventional correlator, the first integral of this expression determines the energy of received signal, and so a non-zero average value of the output distribution function W1. The fourth integral is identical to the integral of the expression for u(T) if the signal is absent. It determines the shape of the function W1, which must coincide with the shape of the function W0. But when the signal is present, the second and the third integrals emerge and have additional influence on the shape function W1. The multiplication of noises with received signal and the integration over the observation interval T lead to an increase in the dispersion this noises. The result is what happens when the multiplication of the noise with a reference signal leads to the increase of noise dispersion in a conventional correlator. This results in the increase of the dispersion of output distribution function W1. It becomes more than the dispersion of the distribution function W0. This is the general picture of IPCP operation.
2.4.4
DISTRIBUTION FUNCTIONS W0
AND
W1
IN THE
IPCP SCHEME OUTPUT
To calculate the detection characteristics, the distribution functions W0 and W1 at the IPCP scheme output should be determined. The distribution function W0 is a result of the processing of the input normal noises in the multiplier and the integrator. At the multiplier output, we have the product of noise voltage samples. The distribution function for the product of normally distributed random values y = u1 × u2 is determined in Ref. 11. If u1 and u2 are correlated random values with a correlation coefficient R and dispersions σ1² and σ2², the distribution function is

W_m(y) = \frac{1}{\pi \sigma_1 \sigma_2 \sqrt{1 - R^2}} \, e^{\frac{R y}{\sigma_1 \sigma_2 (1 - R^2)}} \, K_0\!\left( \frac{|y|}{\sigma_1 \sigma_2 (1 - R^2)} \right)    (2.37)

where K0(x) is the zero-order modified Bessel function of the second kind (the Bessel function of imaginary argument). In the case discussed, we have R = 0 and σ1² = σ2² = σ². As a result, the expression for Wm(y) can be simplified as follows:

W_m(y) = \frac{1}{\pi \sigma^2} K_0\!\left( \frac{|y|}{\sigma^2} \right)    (2.38)
Figure 2.13a shows the distribution function Wm(y) as well as the distribution function Wn(u) of a normally distributed random value (the abscissa is normalized to u/σ for Wn and to y/σ² for Wm).
FIGURE 2.13 Inter-period correlation processing: (a) comparison of the distribution function Wm(y) for the product of two normally distributed random values and the distribution function Wn(u) for one normally distributed random value; (b, c) "tails" of these distribution functions.
The comparison of these functions shows that the product of normal noises is less dispersed than the normal noise at the correlator input. This can be explained by the fact that overshoots of one noise are compensated by the low level of the other noise during multiplication. The coincidence of overshoots of two independent noises has a very low probability. Nevertheless, this probability is non-zero, which explains why the "tails" of the distribution function Wm(y) extend far beyond the "tails" of the distribution function Wn(u), as shown in Figures 2.13b and 2.13c. This fact is of great importance for the IPCP, as the threshold level determined by the function Wm(y) becomes higher than the threshold level determined by the function Wn(u) for a given false alarm rate, especially if that rate is small. To detect a target, n samples of the input voltage taken during the observation interval are used. If the samples are statistically independent, then the distribution function at the multiplier output is

W_m(y_1, y_2, y_3, \ldots, y_n) = \left[ \frac{1}{\pi \sigma^2} K_0\!\left( \frac{|y|}{\sigma^2} \right) \right]^n    (2.39)
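The contrast between the product-of-noises distribution and an ordinary normal distribution can be checked numerically. Below is a minimal Monte Carlo sketch (not from the chapter; the sample size and thresholds are arbitrary choices) that draws products of two independent unit-variance noises and compares their concentration near zero and their tail probabilities with those of a single normal noise of the same variance, illustrating why the IPCP threshold must be set higher for small false alarm rates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
u1 = rng.standard_normal(n)
u2 = rng.standard_normal(n)
prod = u1 * u2                   # product of two independent normal noises, Eq. (2.38) case
ref = rng.standard_normal(n)     # single normal noise with the same (unit) variance

# The product is more concentrated near zero ...
print(np.mean(np.abs(prod) < 0.5), np.mean(np.abs(ref) < 0.5))   # roughly 0.6 vs 0.38
# ... but its "tails" are much heavier, which is what drives the threshold up
print(np.mean(np.abs(prod) > 3.0), np.mean(np.abs(ref) > 3.0))   # roughly 0.015 vs 0.003
```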
The product of noises from the multiplier output is fed to an integrator. As was shown above, the distribution function at the integrator output depends on the integration time T. According to the central limit theorem, the noise distribution function at the correlator output will increasingly approach the normal distribution function as the integration time increases, and its dispersion will grow in value. If the UWB signal duration is τ = 1 ns, real targets can occupy from 10 to 100 range resolution cells, and the time T can be equal to 20τ to 200τ, correspondingly. Such a time interval may be insufficient for the distribution function of the product of two normal noises Wm(y) to reach the normal law. This function will therefore occupy an intermediate position between the distribution function of the noise product and the normal distribution function, approaching the normal distribution as the integration time increases. This distribution function is shown in Figure 2.14 for integration times varying from 10τ to 80τ at 10τ intervals. It is rather difficult to get an analytical expression that describes the distribution function at the integrator output when the input distribution function differs from the normal one. B.R. Levin wrote, "The problem of transformation of distribution functions in a linear dynamic (inertial) system is very difficult to solve when its input is not a Gaussian random process. We do not have the correct decision for this problem that can be used in practice."12

The same difficulties emerge when we determine the distribution function W1. It has two features that differ from the distribution function W0. One of them is just the same as in the conventional correlator: the displacement of the average value of the function by a value proportional to the signal energy. The second is peculiar to the IPCP: the additional increase in the dispersion of the distribution function W1 caused by the multiplication of signal and noise in every correlator channel. The approximate shape and mutual positions of the functions W1 and W0 at the IPCP output are shown in Figure 2.15.
2.4.5 DETECTION CHARACTERISTICS
As it is rather difficult to obtain analytical expressions for the distribution functions W1 and W0 at the IPCP output, we use mathematical modeling to plot the detection characteristics. Figures 2.16a and 2.16b show the detection characteristics of the IPCP for a signal scattered by a stationary target for two values of false alarm rate, 10⁻² and 10⁻⁴. These figures also show the detection characteristics of a conventional correlator for a fully known signal and the energy detector characteristics for the same false alarm rates. To make the comparison valid, the duration of the received signal is taken to be equal to the duration of the radiated signal for a one-point target.
FIGURE 2.14 Dependence of the distribution function Wm(y) on the integration period (T = 10τ to 80τ).

FIGURE 2.15 Inter-period correlator processor (IPCP) distribution functions W0(u) and W1(u).
FIGURE 2.16 IPCP detection curves for a signal scattered by a stationary target, for false alarm rates F = 10⁻² (a) and F = 10⁻⁴ (b): (1) IPCP for a stationary target plus criterion processing, (2) traditional correlator for a fully known signal, (3) IPCP for a stationary target, and (4) energy detector.
The analysis of the results shows that the IPCP detection characteristics approach the conventional correlator detection characteristics for high false alarm rates (10⁻²). The difference between the positions of these characteristics increases as the false alarm rate is reduced (10⁻⁴). This can be explained by the long duration of the tails of the distribution function Wm(y): in the IPCP, the given false alarm rate can be maintained only by setting the threshold level higher than in the conventional correlator. At the same time, the detection characteristics of the IPCP are much better than those of the energy detector. Figure 2.17 shows the dependence of the detection characteristics on the integration time T, which is determined by the target length. A false alarm rate of 10⁻⁴ and integration times T equal to 2τ, 10τ, and 20τ were taken for the modeling. This figure also shows the detection characteristics of a conventional correlator. It can be seen that, with increasing target length, the IPCP detection characteristic for a stationary target approaches more and more closely the conventional correlator characteristic for detection of the fully known signal. The reason is that the distribution function Wm(y) approaches the normal distribution as noise samples are integrated.
FIGURE 2.17 Dependence of IPCP detection parameters on the integration period: (1) traditional correlator for a fully known signal for T = 2τ, (2) IPCP for a stationary target for T = 20τ, (3) IPCP for a stationary target for T = 10τ, (4) IPCP for a stationary target for T = 2τ, and (5) energy detector for T = 2τ.
2.4.6 CRITERION PROCESSING OF UWB SIGNALS AFTER AN IPCP
The IPCP detection characteristics can be improved by additional criterion processing. The criterion processing scheme memorizes the resolution cells in which output signals from the threshold scheme are present. This operation is performed over several pulse repetition periods Tr. After that, the cells in which signals emerge repeatedly are determined. Only the signals from resolution cells that satisfy the selected criterion ("two of two," "two of three," "three of four," etc.) are passed through the scheme. This results in a great reduction in false alarms at the processor output. However, the detection probability also decreases at the same time. It should be mentioned that, in practice, the detection probability is always higher than the false alarm rate, so the detection probability decreases more slowly than the false alarm rate. Criterion processing can be used effectively when a low false alarm rate is required (i.e., 10⁻⁴ and less). Let us discuss the general case of the "n of k" criterion. If the scattered signal is absent and the number of coincidences of threshold voltage samples i in k repetition periods is i ≥ n, then the false alarm rate at the criterion processing scheme output is equal to

F = \sum_{i=n}^{k} C_k^i F_0^i (1 - F_0)^{k-i}    (2.40)
where C_k^i is the number of combinations of k elements taken i at a time, and F0 is the false alarm rate in one repetition period. If a signal scattered from a target is present for the same conditions, the detection probability is

D = \sum_{i=n}^{k} C_k^i D_0^i (1 - D_0)^{k-i}    (2.41)
where D0 is the detection probability in one repetition period. Let us apply the simplest criterion processing, the "two of two" scheme, to the threshold signals at the IPCP output. In this case, C_k^i = 1, D = D0², and F = F0². Figure 2.18 is the schematic diagram of a criterion processor. The signal samples from the same range cells received in two repetition periods are fed to the logical AND scheme. Only coincident samples are passed through the scheme, as shown in Figure 2.19. As this takes place, the detection parameters change as shown in Figure 2.20. Figures 2.16a and 2.16b (dotted line) show the IPCP detection characteristics after "two of two" criterion processing. The characteristics are calculated for the false alarm rates indicated in Figure 2.16. They are located to the left of the standard characteristics of the conventional correlator calculated for the fully known signal. This is a result of combining signals from two repetition periods in the AND scheme, which operates as a multiplier in this case; this procedure is identical to accumulation.
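The "n of k" rates of Equations (2.40) and (2.41) are simple binomial sums and are easy to evaluate numerically. The following is a minimal sketch (not from the chapter) that computes them; the sample values reproduce the "two of two" case of Figure 2.20.

```python
from math import comb

def criterion_rate(p0: float, n: int, k: int) -> float:
    """Probability that an event with per-period probability p0 occurs in at
    least n of k repetition periods: Equations (2.40)/(2.41) with p0 = F0 or D0."""
    return sum(comb(k, i) * p0**i * (1.0 - p0)**(k - i) for i in range(n, k + 1))

F0, D0 = 1e-3, 0.5                       # per-period rates of Figure 2.20
print(criterion_rate(F0, 2, 2))          # 1e-06  (= F0**2 for "two of two")
print(criterion_rate(D0, 2, 2))          # 0.25   (= D0**2 for "two of two")
```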
FIGURE 2.18 Criterion processing block diagram (IPCP output, threshold, and AND gate).
FIGURE 2.19 An example of IPCP criterion processing with a "two of two" criterion. Signal samples from the same range cells received in two repetition periods are fed to the AND logical gate shown in Figure 2.18. Only coincident samples are passed through the AND gate and appear as an output. In this case, n is the voltage sample after the threshold.

FIGURE 2.20 The changes of detection parameters after IPCP criterion processing: D = 0.5, F = 10⁻³ in each single period become D = 0.25, F = 10⁻⁶ after "two of two" processing.
2.4.7 THE MOVING TARGET CASE
A moving target will cause problems if it passes from one resolution cell to another during the pulse repetition period. We can solve this problem by using a multichannel scheme similar to the Doppler filtration system, which provides optimal detection of moving targets. A similar
multichannel scheme can be used to select the optimal integration time T when detecting targets with various physical lengths L. The losses resulting from the multichannel configuration of the scheme can be calculated using the conventional methods that are valid for similar multichannel digital Doppler systems. An example of a multichannel system intended for optimal detection of a moving target with unknown physical length is illustrated in Figure 2.21.
FIGURE 2.21 Multichannel systems for detecting moving targets of unknown physical length.
REFERENCES

1. Barton, David K., Modern Radar System Analysis, Artech House, Boston, 1998.
2. Skolnik, Merrill I., ed., The Radar Handbook, McGraw-Hill, New York, 1989.
3. Shirman, Jakov D., ed., Radioelectronic System: Fundamentals of Design and Theory, MAKVIS, Moscow, 1998.
4. Sosulin, Yu. G., Theoretical Foundations of Radars and Radionavigation, Radio and Communications, Moscow, 1992.
5. Gutkin, L. S., The Theory of Optimal Methods for Reception of Radar Signals in Fluctuating Noises, Soviet Radio, Moscow, 1972.
6. Taylor, James D., ed., Introduction to Ultra-Wideband Radar Systems, CRC Press, Boca Raton, FL, 1995.
7. Immoreev, I. Ya., "Main Features Ultra-Wideband (UWB) Radars and Differences from Common Narrowband Radars," Chapter 1, this book.
8. Van der Spek, G. A., "Detection of a distributed target," IEEE Trans. on Aerospace and Electronic Systems, AES-7, No. 5, Sept. 1971, pp. 922–931.
9. Bakut, P. A., Bolshakov, I. A., et al., Problems of Radar Statistical Theory, vol. 1, Soviet Radio, Moscow, 1963.
10. Lee, Y. W., Cheatham, T. P., and Wiesner, J. B., "Application of correlation analysis to the detection of periodic signals in noise," Proceedings of the IRE, No. 38, p. 1165, 1950.
11. Levin, B. R., The Theory of Random Processes and Its Application to Radio Engineering, Soviet Radio, Moscow, 1960.
12. Levin, B. R., Theoretical Foundations of Statistical Radio Engineering, Radio and Communications, Moscow, 1989.
13. Immoreev, I., "Ultra-wideband radars: New opportunities, unusual problems, system features," Bulletin of the Moscow State Technical University, no. 4, 1998, pp. 5–56.
14. Immoreev, I. and Fedotov, D., "Optimum processing of radar signals with unknown parameters," Radiotechnica, no. 10, 1998, pp. 84–88.
15. Fedotov, D., Immoreev, I., and Ziganschin, E., "Digital processing of ultra-wideband radar signals," Second International Conference, Digital Processing of Signals and Its Application, Moscow, June 1999.
3 High-Resolution Ultra-Wideband Radars

Nasser J. Mohamed
CONTENTS

3.1 Introduction
3.2 Target Signature
3.3 Target Recognition
3.4 Correlation Coefficient Algorithm
3.5 Target Signature Variation with Orientation
3.6 Target Course Recognition
3.7 Pulse Compression
3.8 Conclusions
References
3.1 INTRODUCTION

Carrier-free radars are a class of ultra-wideband radars with a relative bandwidth close to 1 or a fractional bandwidth close to 25%.1,2 Such radars may have a range resolution that is much shorter than the length of typical military targets such as aircraft and missiles. When a target is illuminated by a short pulse, the returned signal from the target contains information about target shape, dimensions, and orientation with respect to the radar. The returned signal is referred to as the target signature, and the information contained in the target signature can be used for target recognition.3,5 Carrier-free radars could perform target recognition by utilizing target dimension for classification and target shape for identification, or may use both for target recognition.6 In addition, the target aspect angle can be used either for target course recognition, if the target shape is known, or for target recognition at a known aspect angle or orientation.7,8

A number of techniques and methods have been used for target recognition with conventional radars, such as impulse response techniques, which are based on linear system theory;9 pole-zero location, which is based on the singularity expansion method (SEM);10 range profile techniques, which are obtained by performing the fast Fourier transform (FFT);11 the bispectrum method;12 and inverse synthetic aperture radar (ISAR).13 Such techniques and methods invariably use algorithms such as the correlation coefficient,7,8 the nearest neighbor decision rule,3 and neural networks, as well as other techniques. One approach to improving the efficiency of target recognition methods has been to find efficient algorithms, including neural networks. Such algorithms, along with their improvements, can also be used by carrier-free radars for target recognition; for example, the correlation coefficient algorithm has been used for matching observed targets to targets in a database.7,8 However, the carrier-free radar utilizes, in addition, target physical features such as dimensions and shape to reduce the number of computations required for target recognition, where each computation can be carried out by an efficient algorithm.6
Target recognition by carrier-free radars has several inherent attributes. First, the target signature is obtained directly in the time domain, since it is the sum of returns from different parts of the target. Second, the parameters of the target signature can be associated with target features and characteristics. For example, the time variation of the target signature is related to target shape, and the time duration of the target signature is linearly proportional to target length. Third, depending on the targets of interest, one can construct different target recognition schemes. For example, if the targets of interest all have different dimensions, then the duration of the target signature can be used for target recognition. However, when the targets have different shapes and approximately equal lengths, then target shape can be used for target identification. In general, target recognition can be carried out by using target dimension for target classification and target shape for target identification.

In reality, a radar signal rarely consists of one pulse, but of many pulses structured by various coding techniques such as Barker codes or complementary codes.14 The pulse compression principle is the process of transmitting a radar signal with low peak power but high energy, with a time resolution of a single pulse rather than the resolution of the coded signal.14,16 The advantage of using pulse compression is the ability to detect targets at long operational ranges with the high range resolution needed for target recognition. Conventional and carrier-free radars may incorporate the pulse compression principle. Pulse compression with carrier-free radars is accomplished by generating a long sequence of N coded pulses with a total duration of Tc = NT, where T is the duration of each coded pulse. This maximizes energy transmission while reducing radiated power levels. The receiver compresses the reflected long pulse sequence by signal processing techniques, such as a correlation process, into a shorter-duration, higher-power pulse of duration T << Tc. This provides a finer range resolution equal to cT/2.14,15
3.2 TARGET SIGNATURE

A radar target arranged in a two-dimensional Cartesian coordinate system, referred to as the airplane axis coordinate system, is shown in Figure 3.1a. The target is an airplane with a length L = 15 m and a wing span W = 12 m, represented by seven point-like scatterers in the airplane axis coordinate system. The coordinate system has been chosen to position the nose of the target at the origin and the length of the target along the positive x axis.
FIGURE 3.1 (a) Airplane represented by seven point scatterers A(0, 0), B(5, –6), C(5, 0), D(5, 6), E(10, –1.5), F(10, 1.5), and G(15, 0) in the airplane axis coordinate system xy, with length L = 15 m and wing span W = 12 m. (b) Radar at a large distance R from the origin of the airplane axis coordinate system.
The nose of the airplane is represented by the point scatterer A; the wing is represented by the point scatterers B, C, and D; the fuselage is represented by the point scatterers E and F; and the tail is represented by the point scatterer G. The radar is located at a distance R from the origin of the airplane axis coordinate system, as shown in Figure 3.1b. A point scatterer P on the surface of the airplane with coordinates (xj, yj) has the distance rj from the radar. This distance can be expressed in Cartesian coordinates as

r_j = \left[ (R + x_j)^2 + y_j^2 \right]^{1/2}    (3.1)
Since the distance between the radar and the origin of the airplane axis coordinate system is very large, the distance rj of Equation (3.1) can be approximated to6

r_j = R + x_j, \qquad (x_j^2 + y_j^2) \ll R^2    (3.2)
The requirement that the distance between the radar and any point on the surface of the target be much larger than the largest linear dimension of the target is usually satisfied in a typical radar application. Point scatterers on the surface of the airplane along the y axis in the airplane axis coordinate system have equal distances from the radar. As a result of the distance approximation in Equation (3.2), the influence of the y coordinate on the distance measurements can be ignored. A high-resolution carrier-free radar transmits a signal consisting of N pulses. Each pulse has the time variation f(t) with duration T, defined in terms of the normalized beta function

f(t) = \begin{cases} 4 \dfrac{t}{T} \left( 1 - \dfrac{t}{T} \right), & 0 \le t \le T \\ 0, & \text{elsewhere} \end{cases}    (3.3)
as shown in Figure 3.2a. The transmitted signal can be expressed in terms of its N pulses as

s(t) = \sum_{i=0}^{N-1} f(t - i T_D)    (3.4)
where TD is the time separation between the pulses. The transmitted signal of Equation (3.4), with beta time variation according to Equation (3.3), is shown in Figure 3.2b for the first three pulses (N = 3). Each point scatterer of the target returns a signal with the same time variation as the transmitted signal s(t), but multiplied by an attenuation factor aj due to the propagation medium and delayed by the time delay t´j due to the distance rj. The received signal can be written in terms of the transmitted signal, since it is the sum of returns from the M point scatterers of the target:

r(t) = \sum_{j=1}^{M} a_j \, s(t - t'_j)    (3.5)
For simplicity, the attenuation factor aj is normalized to unity by amplification. The target signature is defined as the time variation of the returned signal from a target with M point scatterers due to a single pulse of the transmitted signal.6

FIGURE 3.2 (a) Transmitted pulse f(t) with beta time variation and duration T. (b) Transmitted signal s(t), consisting of three pulses separated by the time TD. (c) Target signature w(t´) due to the pulse f(t) of (a). (d) Returned signal r(t´) consisting of three target signatures due to s(t) of (b).

Accordingly, the target signature of the airplane of Figure 3.1a has the form
w(t) = \sum_{j=1}^{M} f(t - t'_j)    (3.6)
The round-trip time delay t´j can be obtained from Equation (3.2) in terms of the distance R and the x coordinate of the jth point scatterer with coordinates (xj, yj):

t'_j = \frac{2 r_j}{c} = \frac{2R}{c} + \frac{2x_j}{c} = t_0 + t_j, \qquad j = 1, 2, \ldots, M    (3.7)

where c is the speed of light, and the time delays are defined by

t_0 = \frac{2R}{c}, \qquad t_j = \frac{2x_j}{c}, \qquad j = 1, 2, \ldots, M    (3.8)
The time delay t0 is due to the distance R from the radar location to the nose tip of the airplane, while tj is due to the distance xj from the origin of the coordinate system to the point scatterer (xj, yj) in the airplane axis coordinate system. The target signature of Equation (3.6), with the help of Equations (3.7) and (3.8), becomes

w(t) = \sum_{j=1}^{M} f(t - t_0 - t_j)    (3.9)
Since the time delay t0 is common to all point scatterers of the target, it can be used as a reference to measure the time delays tj of the M point scatterers. The target signature of Equation (3.9) can be rewritten in terms of the transformed time variable t´ as6

w(t') = \sum_{j=1}^{M} f(t' - t_j), \qquad t' = t - t_0    (3.10)
and is shown in Figure 3.2c for the airplane of Figure 3.1a and a transmitted pulse of duration 10 ns. It is evident from Figure 3.2c and Equation (3.10) that the target signature depends on the target shape and geometry, represented by the M point scatterers along with their distribution in the airplane axis coordinate system, represented by the time delays tj. In addition, it also depends on the time variation of the transmitted pulse f(t) as well as on the radar range resolution determined by the transmitted pulse duration T.

The target signature of Figure 3.2c contains four scattering centers, each of which has the time variation of the transmitted pulse f(t). A scattering center is defined as the sum of returns from a number of point scatterers that have the same x coordinate and any y coordinate. This is not surprising, since the time delay of Equation (3.7) depends on the x coordinate of the point scatterers only. The first scattering center is the return from the nose tip of the airplane and consists of a single pulse. However, the second scattering center consists of three superimposed returns from the wing, producing the pulse with peak magnitude 3. The last two scattering centers are due to the returns from the fuselage and the return from the tail. Indeed, the target signature represents a one-dimensional image, or range image, since it provides information about the target variation along the radar line of sight. This feature can be utilized to distinguish between targets of different shape, size, and dimensions, a process referred to as target recognition.

The received signal of Equation (3.5) can be expressed in terms of the target signature of Equation (3.6) by combining Equations (3.4), (3.3), and (3.6) as

r(t') = \sum_{i=0}^{N-1} w(t' - i T_D)    (3.11)
and is shown in Figure 3.2d for N = 3. The received signal, in this case, consists of three target signatures, since the transmitted signal of Equation (3.4) consists of three pulses, as shown in Figure 3.2b. In general, the received signal consists of N target signatures separated by TD, corresponding to the N pulses of the transmitted signal. It is possible to combine the N target signatures to improve the signal-to-noise ratio or to obtain information about target velocity resolution if the target is moving.

The effect of the transmitted pulse duration T and the radar range resolution cT/2 on the target signature can be determined by considering target signatures, according to Equation (3.10), for the airplane of Figure 3.1a and a transmitted pulse with beta time variation with six time durations corresponding to six range resolutions, as shown in Figure 3.3. The returned pulses from individual point scatterers of the target do not overlap when the time duration T is 10 ns or 33 ns, as shown by the function w(t´) in Figure 3.3 for T = 10 ns and T = 33 ns. The target signature, in this case, consists of a sequence of pulses with beta time variation and duration T, but with an amplitude that is determined by the target geometry. One can readily recognize the different sections of the airplane from the target signature. As the pulse duration increases beyond T = 33 ns, the returned pulses from the target point scatterers begin to overlap, and the target signature will subsequently depend on the duration T.

In general, when the transmitted pulse duration T is short (T < 2L/c = 100 ns), corresponding to a range resolution smaller than the target length (cT/2 < L), the target signature provides information about the target, since the transmitted and received signal time variations are significantly different, as evident from the functions w(t´) of Figure 3.3 for T = 40 ns and 50 ns. However, when the pulse duration T increases to 2L/c = 100 ns, the target signature becomes very similar to the transmitted pulse, and practically no information about the target can be conveyed, as evident from Figure 3.3 for T = 100 ns. In this case, the radar range resolution cT/2 equals the target length L. For large values of T, corresponding to a radar range resolution larger than the target length, the target is modeled as a point scatterer, since the target signature has the same time variation as the transmitted signal, as evident from Figure 3.3 for T = 200 ns.

The duration of the target signature Td depends on the transmitted pulse duration T and the largest target dimension along the line of sight between radar and target. The target length in the airplane axis coordinate system is

L = largest x – smallest x    (3.12)
The relationship between the pulse duration T, the target signature duration Td, and the target length L is

T_d = \frac{2L}{c} + T    (3.13)
It is evident from Equation (3.13) that target signature duration can be used to measure airplane length, since it is linearly proportional to target length along the radar line of sight. Targets of different dimensions and any shape can be characterized by their length along the radar line of sight. This feature is very significant, since it can be used for target classification and target identification. An inherent advantage of representing targets by their dimensions is the electronic countermeasure capability (ECM), where small targets with high reflectivity can be distinguished from large targets with low reflectivity based on their length obtained from target signature duration.
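The construction of a target signature from point scatterers, Equations (3.3) through (3.13), is simple to reproduce numerically. The following is a minimal sketch (not from the chapter) using the scatterer coordinates of Figure 3.1a; the time grid and sample counts are arbitrary choices.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def beta_pulse(t, T):
    """Normalized beta pulse of Equation (3.3) with duration T."""
    return np.where((t >= 0) & (t <= T), 4.0 * (t / T) * (1.0 - t / T), 0.0)

def target_signature(t_prime, scatterer_x, T):
    """Target signature w(t') of Equation (3.10): a beta pulse delayed by
    t_j = 2*x_j/c for every point scatterer, summed over the target."""
    w = np.zeros_like(t_prime)
    for x in scatterer_x:
        w += beta_pulse(t_prime - 2.0 * x / C, T)
    return w

# x coordinates (m) of the seven scatterers of Figure 3.1a:
# nose A, wing B/C/D, fuselage E/F, tail G
x_coords = [0.0, 5.0, 5.0, 5.0, 10.0, 10.0, 15.0]

t = np.linspace(0.0, 120e-9, 2401)           # observation window, 0 to 120 ns
w = target_signature(t, x_coords, 10e-9)     # T = 10 ns pulse, as in Figure 3.2c

print(w.max())                               # 3.0: three wing scatterers overlap
print(2 * 15.0 / C * 1e9 + 10)               # Td = 2L/c + T = 110 ns, Equation (3.13)
```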
FIGURE 3.3 Effect of the time duration T on the target signature w(t´), shown for the airplane of Figure 3.1a and six values of the pulse duration T.

3.3 TARGET RECOGNITION

Target recognition based on the target signature is the process of selecting, from a database, the target signature that is most similar to an observed target. The database usually contains a large
number of target signatures for a desired set of targets. For example, a target may have several target signatures representing the target under different loading conditions and at different orientations with respect to the radar. As a result, the targets that can be recognized are limited to the targets that are in the database. An observed target may have an exact match in the database, it may have an approximate match, or it may not have a matched target signature in the database. Each case will influence the target recognition process differently.

The selection of a particular target signature from a database is usually carried out by an algorithm. There are a number of algorithms that have been used with conventional radars and that can also be used with carrier-free radars, such as those based on the correlation coefficient, on the
nearest neighbor concepts, or on neural networks. We will adopt the correlation coefficient algorithm. The two main elements that affect the selection of a particular target recognition method and technique are the size of the database and the computational capability of the matching algorithm. The former influences the storage requirements and the search time, since large databases may require large memory and long search times. Transformation techniques, such as the wavelet transform,17 have been used to reduce the storage requirements. However, such transformations may add to the overall computational requirements. The latter requires computational effort, since the recognition of each target can be realized by a certain number of correlations, and the number of correlations performed will increase with the number of target signatures in the database, even though the time of performing each correlation is assumed to be optimized. Research efforts have been directed toward finding efficient algorithms for target recognition as well as toward utilization of target characteristics such as shape, geometry, and dimensions to reduce the number of computations required for target recognition.

The target signature is a function of the transmitted signal, the radar range resolution, and target features such as shape, dimension, and orientation. For a transmitted pulse with beta time variation and a range resolution of 1.5 m, the target signature will depend on the target features or characteristics at a particular orientation. For example, when the target is in the airplane axis coordinate system, the target signature can provide information about target shape and target dimension at this particular orientation. The time variation of the target signature and the time variation of the scattering centers provide information about target shape, while the duration of the target signature provides information about target length along the radar line of sight. The utilization of target length for target classification and target shape for target identification leads to computationally efficient target recognition methods. In fact, if all targets of interest have different lengths, as measured by the radar, then target recognition can be based on target length, which is determined from the target signature duration. In this case, the length of an observed target, which is a number, can be associated with a particular target in the database, and the target can be identified based on its length.

A possible target recognition processor based on target features such as target length and target shape, for a target at a known aspect angle, is shown in Figure 3.4. The target recognition process is carried out in two stages: target classification based on target length and target identification based on target shape.
FIGURE 3.4 Principles of target recognition based on target features such as dimensions for target classification and shape for target identification.
Target classification is the process of grouping targets according to their dimensions into a number of classes and associating an observed target with a particular target class based on the observed target signature duration. The observed target signature wo(t) is passed through the target classification circuit to determine a particular class among the four classes L1, L2, L3, and L4, as shown in Figure 3.4. Each class corresponds to a set of targets with specific dimensions; for example, the class L1 corresponds to targets with lengths between L = 14 m and L = 18 m. In addition, there are four target identification circuits (TI-1, TI-2, TI-3, and TI-4) shown in Figure 3.4, corresponding to the four target classes L1, L2, L3, and L4. Each target identification circuit has six stored target signatures with different shapes and lengths and therefore can identify six targets. For example, the target identification circuit TI-1 has six target signatures whose lengths range from L = 14 m to L = 18 m, which corresponds to L1. The database contains 24 target signatures grouped into four classes, and each class has six target signatures.

Once the target class has been determined, a target identification process is initiated to identify the observed target signature against a subset of target signatures from the database, using target shape by comparing their time variations, rather than using the total database. The observed target signature wo(t) in Figure 3.4 is the input to the target classification circuit, which determines its class among the four target classes L1, L2, L3, and L4. For example, if L1 is determined as the target class, then only the target identification circuit TI-1 will be turned on to identify the observed target signature wo(t) with respect to the set of six stored target signatures, while the other three target identification circuits (TI-2, TI-3, and TI-4) will be turned off. In this case, the target identification procedure is carried out using 6 target signatures from the database rather than all 24 target signatures of the full database. The computational effort required is the calculations needed for six comparisons, where each comparison can be carried out by an efficient algorithm. The improvements that can be achieved in target recognition by replacing the 24 target signatures of the database with a subset of 6 target signatures are primarily due to utilizing target features such as target length. However, when the target length is not used as a feature in the recognition process, one has to compare the observed target signature wo(t) with all 24 target signatures in the database; in this case, 24 comparisons are required. One can readily appreciate the utilization of target dimension for improving target recognition by reducing the number of computations required. In general, target recognition based on target features, using a database containing Q targets arranged in classes of P targets each, requires P comparisons rather than Q comparisons, where each comparison may be performed by a number of calculations with a particular algorithm. An additional advantage is that the computational improvement gained by utilizing target features is independent of the calculation efficiency of the algorithm used for the actual computations.
When the targets in the database all have different dimensions and any shape, the target recognition process will be reduced to target classification only without the need for target identification. In this case, less computational effort is required, since the target identification processes TI-1 to TI-4 are not needed.
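A classification stage of this kind can be sketched directly from Equation (3.13): the target length is recovered from the measured signature duration and mapped to a length class. The code below is a minimal illustration (not from the chapter); only class L1 (14 to 18 m) is given in the text, so the remaining class boundaries are hypothetical placeholders.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def estimate_length(w, t, pulse_T, threshold=1e-3):
    """Invert Equation (3.13): L = (Td - T) * c / 2, with Td taken as the
    extent of the signature support above a small threshold."""
    support = t[w > threshold]
    if support.size == 0:
        return 0.0
    Td = support[-1] - support[0]
    return (Td - pulse_T) * C / 2.0

# Length classes (m); only L1 comes from the text, the rest are assumed
CLASSES = {"L1": (14.0, 18.0), "L2": (18.0, 25.0), "L3": (25.0, 35.0), "L4": (35.0, 50.0)}

def classify(w, t, pulse_T):
    """Return the class label whose length interval contains the estimate."""
    L = estimate_length(w, t, pulse_T)
    for name, (lo, hi) in CLASSES.items():
        if lo <= L < hi:
            return name, L
    return None, L
```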
3.4 CORRELATION COEFFICIENT ALGORITHM

Consider that the observed target signature wo(t) in Figure 3.4 has been classified into class L1, which contains targets with lengths between L = 15 m and L = 18 m. The database for target identification of this particular class, TI-1, has six target signatures w1(t), w2(t), ..., w6(t), as shown in Figure 3.5. The first three target signatures, w1(t), w2(t), and w3(t), have equal lengths of L = 15 m and different shapes and widths, while the last three, w4(t), w5(t), and w6(t), have equal lengths of L = 18 m and different shapes. Target identification in any one of TI-1 to TI-4 is accomplished by minimizing the integral square difference Gi between an observed target signature wo(t) and a sample target signature wi(t) from the database containing P target signatures.
FIGURE 3.5 Database containing six sample target signatures, where w1(t), w2(t), and w3(t) have equal lengths of 15 m, and w4(t), w5(t), and w6(t) have equal lengths of 18 m (T = 10 ns in all cases).
G_i = \int_0^{T_d} \left[ w_o(t) - w_i(t) \right]^2 dt, \qquad i = 1, 2, 3, \ldots, P    (3.14)
The sample target signature from the database that yields the minimum value of Gi, according to Equation (3.14), will be taken as the identified target. When the observed target signature wo(t) is identical with a sample target signature from the database, the integral square difference Gi of Equation (3.14) assumes its minimum value, and Gi becomes zero. This is the criterion that will be used for target identification. The integral of Equation (3.14) can be expanded to
G_i = \int_0^{T_d} w_o^2(t)\, dt + \int_0^{T_d} w_i^2(t)\, dt - 2 \int_0^{T_d} w_o(t)\, w_i(t)\, dt = E_o + E_i - 2K_i, \qquad i = 1, 2, 3, \ldots, P    (3.15)
where the first and second integrals are the energies Eo and Ei of the target signatures wo(t) and wi(t), respectively, and the third integral is the cross correlation coefficient defined by

K_i = \int_0^{T_d} w_o(t)\, w_i(t)\, dt, \qquad i = 1, 2, 3, \ldots, P    (3.16)
Since the integral square difference Gi of Equation (3.15) can be minimized by using the largest cross correlation coefficient Ki of Equation (3.16), we can instead use, for target identification, the normalized cross correlation coefficient

\rho_i = \frac{K_i}{\sqrt{E_o E_i}} = \frac{1}{\sqrt{E_o E_i}} \int_0^{T_d} w_o(t)\, w_i(t)\, dt, \qquad i = 1, 2, 3, \ldots, P    (3.17)
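On sampled signatures, Equations (3.16) and (3.17) reduce to sums, so the identification step takes only a few lines of code. The following is a minimal sketch (not from the chapter), using a simple rectangular-rule approximation of the integrals.

```python
import numpy as np

def normalized_correlation(w_obs, w_db, dt):
    """Normalized cross correlation coefficient rho_i of Equation (3.17)
    between two sampled target signatures (rectangular-rule integration)."""
    e_obs = np.sum(w_obs ** 2) * dt          # E_o
    e_db = np.sum(w_db ** 2) * dt            # E_i
    k = np.sum(w_obs * w_db) * dt            # K_i, Equation (3.16)
    return k / np.sqrt(e_obs * e_db)

def identify(w_obs, database, dt):
    """Pick the database signature maximizing rho_i; rho = 1 for an exact match."""
    rhos = [normalized_correlation(w_obs, w_i, dt) for w_i in database]
    best = int(np.argmax(rhos))
    return best, rhos
```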
When the observed target signature equals a sample target signature from the database, wo(t) = wi(t), the normalized cross correlation coefficient ρi of Equation (3.17) reaches its maximum value of unity; otherwise, it is less than one. We will consider two cases of interest: when the observed target signature has an exact match in the database, and when it does not have an exact match but can be approximated by a target signature in the database.

The coefficient ρi of Equation (3.17) is shown in Figure 3.6 for six observed targets, w1(t), w2(t), ..., w6(t), that have exact matches in the database. For example, when the observed target signature is wo(t) = w1(t), the coefficient ρi is evaluated for all six target signatures of the database of Figure 3.5, and the results are shown in Figure 3.6 for wo(t) = w1(t). In this case, the coefficient ρi assumes its maximum value of unity at i = 1, indicating that the observed target signature is identified as w1(t). Similarly, the coefficient ρi assumes its maximum value of unity at i = 2, 3, 4, 5, and 6 in Figure 3.6 for the observed target signatures wo(t) = w2(t), w3(t), ..., w6(t), respectively, indicating in each case that the observed target signature is identified as w2(t), w3(t), ..., w6(t), respectively.

The second case of interest is when the observed target signature can be approximated by a sample target signature in the database. For example, the four target signatures w7(t) through w10(t) shown in Figure 3.7 have the same shape and dimensions as target signatures in the database of Figure 3.5, but their structures have been changed, either due to an extra load such as bombs and missiles or due to landing gear. The target signatures w7(t) and w8(t) are similar to w1(t) and w2(t), while the target signatures w9(t) and w10(t) are similar to w5(t) and w6(t). It is conceivable that the observed target signature may not have a match in the database, although the database may contain several signatures for each target. In such a case, target identification is based on selecting from the database the target signature that is most similar to the observed target rather than selecting a matched one. The coefficient ρi of Equation (3.17) is shown in Figure 3.8 for the observed target signatures wo(t) = w7(t), w8(t), w9(t), and w10(t). In each case, the coefficient ρi assumes a maximum value with magnitude less than unity, indicating that the observed target signature can be approximated by a sample target signature from the database.
FIGURE 3.6 Normalized correlation coefficient ρi between an observed target signature wo(t) and the six target signatures wi(t) of the database of Figure 3.5. The observed targets are wo(t) = w1(t), w2(t), ..., w6(t).
For example, the coefficient ρi has a maximum value of 0.89 at i = 1, indicating that the observed target signature is approximated by w1(t). Indeed, this decision is correct, since the target signatures w7(t) and w1(t) are for the same target. Similarly, for the observed targets wo(t) = w8(t), w9(t), and w10(t), the coefficient ρi has maximum values less than unity at i = 2, 5, and 6, indicating that w8(t) is approximated by w2(t), w9(t) is approximated by w5(t), and w10(t) is approximated by w6(t). The identifications of the observed targets w8(t), w9(t), and w10(t) are, indeed, correct.
FIGURE 3.7 Four test target signatures that are not included in the database of Figure 3.5. The target signatures w7(t) and w8(t) have equal lengths of L = 15 m, and w9(t) and w10(t) have equal lengths of L = 18 m.
3.5 TARGET SIGNATURE VARIATION WITH ORIENTATION

The airplane target of Figure 3.1a, in the radar ground plane, is at a distance R from the radar and at an azimuth angle β, as shown in Figure 3.9. The distance R is measured from the radar to the nose tip of the airplane, and the azimuth angle is measured in a counterclockwise direction from the positive x axis. The airplane is aligned along the positive x axis with its nose tip positioned at the origin of the airplane axis coordinate system xy. The orientation of the airplane with respect to the radar is a function of the radar ground plane coordinate system x´y´, the azimuth angle β, and the distance R. Approaching targets are characterized by the azimuth angles 0° ≤ β ≤ 90°, and departing targets are characterized by the azimuth angles 90° ≤ β ≤ 180°. The course of the airplane is determined by the minimum distance Rmin shown in Figure 3.9, which can be expressed in terms of the ground distance R and the azimuth angle β as

R_{min} = R \sin\beta    (3.18)
Since the distance R can be measured by the radar, the course of the airplane is determined when the azimuth angle β is known. Hence, target course recognition can be realized by determining the azimuth angle β from the backscattered signal.
FIGURE 3.8 Normalized correlation coefficient ρi similar to that of Figure 3.6, but for the observed targets wo(t) = w7(t), w8(t), w9(t), and w10(t).

FIGURE 3.9 Airplane of Figure 3.1a in the radar ground plane coordinate system x´y´ at an azimuth angle β. Approaching targets have an azimuth angle 0° ≤ β ≤ 90°, and departing targets 90° ≤ β ≤ 180°.
In addition, the target course and the orientation of the airplane in the radar ground plane can be determined by utilizing the target signature and the transformation of the ground plane coordinate system x´y´ into the airplane axis coordinate system xy, as shown in Figure 3.10. The relations between the coordinates xy and x´y´, according to Figure 3.10, are

x' = x \cos\beta + y \sin\beta    (3.19)

y' = -x \sin\beta + y \cos\beta    (3.20)
A point P on the surface of the airplane, as shown in Figure 3.10, has the coordinates (x, y) in the airplane axis coordinate system and the coordinates (x´, y´) in the radar ground plane coordinate system. The coordinate x´ gives the distance from the nose tip of the airplane in the direction of the line from the aircraft to the radar, and the coordinate y´ gives the distance perpendicular to this line. As the airplane moves along its course, the airplane axis coordinate system xy does not rotate. However, the ground plane coordinate system x´y´ rotates in a counterclockwise direction as the airplane arrives and then departs from the radar, as shown in Figures 3.9 and 3.10. A pulse transmitted at time t = 0 will be returned to the radar by the point scatterer j on the surface of the airplane, at a distance rj from the radar as shown in Figure 3.10, at the time

t_j(\beta) = \frac{2 r_j}{c}    (3.21)
FIGURE 3.10 Relationship between the target axis coordinate system xy and the ground plane coordinate system x´y´ for a point P on the surface of the airplane.

The distance rj can be approximated, as shown in Figure 3.10, to
r_j = \left[ (R + x'_j)^2 + y'^2_j \right]^{1/2} = R + x'_j, \qquad (x'^2_j + y'^2_j) \ll R^2    (3.22)
The approximation in Equation (3.22) is similar to the approximation in Equation (3.2). The time delay of Equation (3.21) can be rewritten in terms of the target axis coordinate system xy and the azimuth angle β by first using Equation (3.22) in Equation (3.21) and then using Equation (3.19):

t_j(\beta) = \frac{2(R + x'_j)}{c} = \frac{2R}{c} + \frac{2x'_j}{c} = t_0 + \frac{2}{c}\left( x_j \cos\beta + y_j \sin\beta \right)    (3.23)
where the time delay t0 is defined in Equation (3.8). The time delay tj(β) of Equation (3.23) is the sum of two time delays: the round-trip time delay t0 from the radar to the origin of the coordinate system, and the round-trip time delay from the origin of the coordinate system to a point P with coordinates (x´, y´) in the radar ground plane coordinate system. While the influence of the y coordinate was ignored in Equation (3.2) and subsequently in Equation (3.8), it is directly used in the time delay tj(β) of Equation (3.23). When the azimuth angle β = 0°, the time delay of Equation (3.23) will depend on the x coordinate:

t_j(0°) = t_0 + \frac{2x_j}{c}    (3.24)
which is similar to Equation (3.7), since the target is viewed head-on by the radar in the airplane axis coordinate system. When the azimuth angle increases to β = 90°, the time delay of Equation (3.23) reduces to

t_j(90°) = t_0 + \frac{2y_j}{c}    (3.25)
The time delay of Equation (3.25) depends on the y coordinate alone, and the airplane is at broadside in the ground plane coordinate system. At this orientation, the width of the aircraft is measured by the radar as the length of the aircraft. The target signature of the airplane of Figure 3.9 in the radar ground plane can be obtained in a similar way to that of Equation (3.6) and has the form

w(t, \beta) = \sum_{j=1}^{M} f\left[ t - t_j(\beta) \right]    (3.26)
The expression of Equation (3.26) can be obtained in terms of the azimuth angle β and the airplane axis coordinate system xy by using Equation (3.23) in Equation (3.26):
w(t', \beta) = \sum_{j=1}^{M} f\left[ t - t_0 - \frac{2}{c}\left( x_j \cos\beta + y_j \sin\beta \right) \right] = \sum_{j=1}^{M} f\left[ t' - \frac{2}{c}\left( x_j \cos\beta + y_j \sin\beta \right) \right], \qquad t' = t - t_0    (3.27)
The target signature, according to Equation (3.27), for a transmitted pulse with beta time variation and duration T = 10 ns is shown in Figure 3.11 for the azimuth angles β = 0°, 30°, 60°, ..., 180°. The target signature in the radar ground plane for β = 0° is similar to the target signature in the airplane axis coordinate system of Figure 3.2c, since Equation (3.27) reduces to Equation (3.10):
w ( t′,0° ) =
2x j
- ∑ f t′ – -----c
j=1 M
=
∑ f ( t′ – tj ) j=1
(3.28)
As the azimuth angle increases from β = 0°, the time variation of the target signature changes significantly, as shown in Figure 3.11 for β = 30° and β = 60°. When the azimuth angle becomes β = 90°, the target signature of Equation (3.27) becomes

w(t', 90°) = \sum_{j=1}^{M} f\!\left( t' - \frac{2y_j}{c} \right)    (3.29)
The target signature of Equation (3.29) is shown in Figure 3.11 by w(t´, 90). Returns from all point scatterers along the x axis, which represents the length of the aircraft, are received at the same time, as shown in Figure 3.11 by the large pulse with magnitude 3 at time t´ = 0, since the target signature w(t´, 90) of Equation (3.29) depends on the y coordinate only. The two pulses at t´ = ±50 ns are due to the wing tips, and the other two pulses at t´ = ±10 ns are due to the point scatterers representing the fuselage. As the azimuth angle increases beyond 90°, the target signature exhibits the symmetry relation

w(t', \beta) = w(-t', 180° - \beta)    (3.30)
as shown in Figure 3.11 for β = 120°, 150°, and 180°. For example, when β = 180°, the target signature according to Equation (3.30) is

w(t', 180°) = w(-t', 0°)    (3.31)
Since the airplane is moving away from the radar, the first returned pulse, according to Figure 3.9, is from the tail, and the last is from the nose tip, which results in the reversed time symmetry of Equation (3.31). The symmetry relation of Equation (3.30) is shown in Figure 3.11 by comparing the target signatures w(t´, 0) with w(t´, 180), w(t´, 30) with w(t´, 150), and w(t´, 60) with w(t´, 120).
FIGURE 3.11 Target signature variation with azimuth angle β = 0°, 30°, 60°, . . . , 180° for the airplane of Figure 3.1a.
The target signature of Equation (3.27) is shown in Figures 3.12a and 3.12b as a three-dimensional surface, as a function of the time delay t´ and the azimuth angle β; the time delay represents the distance along the radar line of sight, while the azimuth angle represents the target orientation with respect to the radar. The target signature shown in Figure 3.12a is for the azimuth angles 0° ≤ β ≤ 90°, which, according to Figure 3.9, correspond to arriving targets. Along the azimuth axis at β = 0°, one always gets the target signature in the airplane axis coordinate system; this result can be understood from the target signature of Equation (3.28), shown by w(t´, 0) in Figure 3.11. The target signature in Figure 3.12a has been obtained for the azimuth angles β = 0°, 5°, 10°, ..., 90°. The two-dimensional target signature of Figure 3.12b is the same as that of Figure 3.12a, but for the azimuth angles 0° ≤ β ≤ 180°; arriving targets are characterized by the azimuth angles 0° ≤ β ≤ 90°, while departing targets are characterized by the azimuth angles 90° ≤ β ≤ 180°.
FIGURE 3.12 Target signature of the airplane of Figure 3.1a as a function of time delay t´ and azimuth angle for (a) 0° ≤ β ≤ 90° and (b) 0° ≤ β ≤ 180°.
One can readily recognize the symmetry relation of Equation (3.30) with respect to β = 90° due to arriving and departing targets.
3.6 TARGET COURSE RECOGNITION

The target signature changes significantly both with target length along the radar line of sight and with target azimuth orientation. In the former case, the target signature shows individual target features, such as the different sections of the aircraft, for each azimuth angle; hence, the target signature can be used for target recognition at any azimuth orientation. In the latter case, the variation of the target signature with azimuth angle can be used for azimuth angle recognition, or target course recognition. The database required for target course recognition for a particular target with a known shape can be obtained from the target signatures of Figure 3.12a, which are shown in Figure 3.13 as single plots w(t, 0°), w(t, 10°), ..., w(t, 90°). The time variation of each target signature shows the target structure at each azimuth angle β. The airplane course, shown in Figure 3.9, can be determined when the azimuth angle β is known; this is referred to as one-dimensional target course recognition.
FIGURE 3.13 Database containing 10 target signatures w(t, β) of the airplane of Figure 3.1a at the azimuth angles β = 0°, 10°, ..., 90°, for target course recognition.
A database similar to that of Figure 3.13, containing a much larger collection of target signatures for many more target courses or azimuth angles, can also be generated in a similar way if desired. Target course recognition, based on the changes in the target signature due to target orientation, is the process of selecting from the database of Figure 3.13 the target signature that is most similar to the target signature observed by the radar. The normalized correlation coefficient of Equation (3.17) will be used to match an observed target signature at an unknown azimuth angle with one sample target signature from the database of Figure 3.13 to determine the azimuth angle, or the target course.
We will consider target course recognition for three cases of interest. In the first case, the target course can be determined exactly; in the second case, the target course can be determined approximately; in the third case, the target course becomes ambiguous. The normalized coefficient of Equation (3.17) is shown in Figure 3.14 for six observed target signatures at six different orientations: w(t, 20°), w(t, 33°), w(t, 54°), w(t, 75°), w(t, 86°), and w(t, 88°).
FIGURE 3.14 Normalized correlation coefficient ρi between an observed target signature wo(t, β) and the ten target signatures of the database of Figure 3.13. The observed target signatures are at the azimuth angles β = 20°, 33°, 54°, 75°, 86°, and 88°.
The coefficient ρi is obtained, according to Equation (3.17), by comparing the observed target signature with the ten target signatures in the database of Figure 3.13. When the observed target signature w(t, 20°) matches a sample target signature in the database of Figure 3.13, the normalized correlation coefficient ρi assumes its maximum value of unity at the azimuth angle βi = 20°, indicating that the unknown target course is at βi = 20°, as shown in Figure 3.14 for w(t, 20°). In this case, the target course is determined exactly, since the observed target signature has a match in the database.

However, it is not always possible to get an exact match as in Figure 3.14, since the database may not contain target signatures for all possible azimuth angles. A practical case occurs when the observed target is located at the azimuth angle βi = 33°, with the target signature w(t, 33°), which does not match a sample target signature in the database. Since the observed azimuth angle βi = 33° is very close to 30°, it can be approximated by the sample target signature w(t, 30°), as shown in Figure 3.14 for w(t, 33°). In this case, the coefficient ρi assumes a maximum value of 0.91, rather than 1, at the azimuth angle βi = 30°, indicating an approximate target course at βi = 30° rather than the exact target course at βi = 33°. Another interesting example is when the azimuth angle increases to βi = 54°, where the correlation coefficient in Figure 3.14 for w(t, 54°) has a maximum value of 0.86, which is less than unity, indicating an approximate target course at the azimuth angle βi = 50°. However, the coefficient ρi at βi = 50° is very close to the coefficient ρi at βi = 60°, as is evident from Figure 3.14, which makes the course recognition rather unreliable.

When the azimuth angle β of the observed target signature w(t, 75°) takes on a value at the midpoint between two adjacent azimuth angles of the database, the target course becomes ambiguous, as shown in Figure 3.14 for w(t, 75°). In this case, the coefficient ρi takes on approximately equal values at the two azimuth angles 70° and 80°, and the target course cannot be determined without ambiguity. This result is not surprising, since the azimuth angle βi = 75° can be approximated by either βi = 70° or βi = 80°. One can improve the accuracy of the approximation by increasing the size of the database to include target signatures in the interval between two successive azimuth angles. Similarly, when the azimuth angle increases to βi = 86°, the correlation coefficient for the observed target signature w(t, 86°) in Figure 3.14 becomes maximum at βi = 90°. However, the target course cannot be determined even approximately, since the values of the correlation coefficient at the two azimuth angles βi = 80° and 90° are approximately equal. When the observed azimuth angle increases to βi = 88°, the coefficient for the observed target w(t, 88°) in Figure 3.14 has a large maximum at βi = 90°, indicating an approximate course at βi = 90°.

One concludes that the target course can be determined approximately when the deviation between the observed and the database target signatures is small, less than 3°. This conclusion is based on target recognition in the noise-free case, which represents a high signal-to-noise ratio. It is expected that for low signal-to-noise ratios the quality of the approximation will decrease, depending on the actual value of the signal-to-noise ratio.
3.7 PULSE COMPRESSION
Pulse compression is a technique that obtains high range resolution with long signals. It is based on generating long waveforms by coding the transmitted signal and processing the received signal with a correlation processor. A number of codes can be used for pulse compression; they are selected based on their length and their autocorrelation properties. For example, a Barker code of length N has a ratio of main correlation lobe peak magnitude to peak sidelobe magnitude of N. Target recognition using pulse compression with carrier-free signals can be realized by transmitting a waveform coded with a Barker code. The transmitted signal using one character of the Barker code can be expressed by
s(t) = \sum_{i=0}^{N-1} c_i\, f(t - iT) \qquad (3.32)

where c_i is an element of the Barker code \{c_0, c_1, \ldots, c_i, \ldots, c_{N-1}\}, and f(t) is the time variation of the individual code element with duration T. The transmitted signal of Equation (3.32) is shown in Figure 3.15a for the Barker code {+++–+} of length N = 5 and a duration of T_c = NT = 5T. The autocorrelation function of the transmitted signal of Equation (3.32) can be obtained as

K(t) = \int_{-\infty}^{\infty} s(u)\, s(u - t)\, du
     = \int_{-\infty}^{\infty} \sum_{p=0}^{N-1} \sum_{q=0}^{N-1} c_p c_q\, f(u - pT)\, f(u - t - qT)\, du
     = \sum_{p=0}^{N-1} \sum_{q=0}^{N-1} c_p c_q\, K_f[t - (p - q)T] \qquad (3.33)

where K_f(t) represents the autocorrelation of the code element f(t), defined as

K_f(t) = \int_{-\infty}^{+\infty} f(u)\, f(u - t)\, du \qquad (3.34)
FIGURE 3.15 (a) Barker code {+++–+} of length N = 5 and duration 5T. (b) Autocorrelation function of the Barker code in (a) with pulse compression ratio of N = 5.
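A minimal numerical check of the autocorrelation behavior in Figure 3.15b, assuming one sample per code element so that the code-level correlation stands in for Equation (3.33); the script and its variable names are illustrative only.

```python
import numpy as np

# Barker code {+++-+} of length N = 5
barker5 = np.array([+1, +1, +1, -1, +1])

# Discrete autocorrelation K[k] = sum_i c[i] * c[i - k]
# (one sample per code element, i.e., rectangular chips sampled at t = iT)
K = np.correlate(barker5, barker5, mode="full")
lags = np.arange(-(len(barker5) - 1), len(barker5))

for lag, value in zip(lags, K):
    print(f"lag {int(lag):+d}T : {int(value):+d}")

# Main-lobe peak is N = 5 at zero lag; all sidelobes have magnitude <= 1,
# giving the pulse compression ratio of N = 5 quoted for Figure 3.15b.
print("peak:", int(K.max()), "max sidelobe:", int(np.max(np.abs(K[lags != 0]))))
```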
The time variation of the autocorrelation of Equation (3.33) is shown in Figure 3.15b for the Barker code of Figure 3.15a. The peak magnitude of the main lobe is N = 5, and the maximum magnitude of the sidelobe is 1, producing a pulse compression ratio of N = 5. Longer Barker codes can also be used to obtain higher pulse compression ratios. The returned signal from a target with M point scatterers for a transmitted signal using Barker codes can be obtained by using Equation (3.32) in Equation (3.5) as

r(t) = \sum_{j=1}^{M} a_j\, s(t - t'_j) = \sum_{j=1}^{M} \sum_{i=0}^{N-1} a_j c_i\, f(t - t'_j - iT) \qquad (3.35)
In fact, the returned signal of Equation (3.35) is the target signature, since it is due to one character of the Barker code shown in Figure 3.15a. The target signature based on the pulse compression principle can be written in terms of the time variable t′ by first using Equation (3.7) for t′_j and then using Equation (3.10) for t′ as

w(t') = \sum_{j=1}^{M} \sum_{i=1}^{N} c_i\, f(t' - t_j - iT) \qquad (3.36)
The target signature of Equation (3.36) depends on the code used c_i, the time variation of the code element f(t), and the nominal time duration T. For the Barker code of Figure 3.15a, the time variation f(t), and the airplane of Figure 3.1a, the dependence of the target signature of Equation (3.36) on the nominal time duration T is shown in Figure 3.16 for six values of T = 2 ns, 5 ns, 6 ns, 8 ns, 10 ns, and 12 ns. The target signatures of Figure 3.16 for T = 2 ns, 5 ns, and 6 ns show that the scattering centers can be distinguished individually. However, each scattering center consists of one character of the Barker code of length 5T, with a magnitude that depends on airplane features such as shape and geometry. One may recognize that each scattering center in the target signature for a transmitted signal consisting of a single pulse, as shown in Figure 3.3 for T = 10 ns, is replaced with one character of the Barker code, as shown in Figure 3.16 for T = 2 ns, 5 ns, and 6 ns. As the time duration increases to 8 ns and more, the returned coded signals from different parts of the target overlap, resulting in different time variations of the target signatures for different time durations, as is evident from Figure 3.16 for T = 8 ns, 10 ns, and 12 ns.

Target recognition using pulse compression requires a database similar to that of Figure 3.5 used for the single-pulse case. The database using the Barker code of length N = 5 is generated by Equation (3.36) for the targets of the database of Figure 3.5 and is shown in Figure 3.17 for T = 4 ns. Target recognition based on the correlation coefficient algorithm of Equation (3.17) is shown in Figure 3.18 for the observed targets w_0(t) = w_1(t), w_2(t), …, w_6(t). In each case, the correlation coefficient is largest for w_0(t) = w_i(t), where i = 1, 2, …, 6. This shows that target recognition based on the pulse compression principle can be realized for observed targets that have a match in the database.
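As an illustration of how a database entry like those in Figure 3.17 could be synthesized from Equation (3.36), the sketch below builds a signature from a hypothetical set of scattering-center delays and a Barker-coded pulse train with rectangular chips. The scatterer delays, sample rate, and helper names are invented for illustration and are not taken from the airplane model of Figure 3.1a.

```python
import numpy as np

def barker_coded_signature(scatterer_delays_ns, T_ns=4.0, fs_GHz=10.0,
                           code=(+1, +1, +1, -1, +1)):
    """Sketch of Equation (3.36): superpose one Barker-coded pulse train
    (rectangular chips of duration T) per scattering center."""
    dt = 1.0 / fs_GHz                         # sample spacing in ns
    span = max(scatterer_delays_ns) + (len(code) + 1) * T_ns
    t = np.arange(0.0, span, dt)
    w = np.zeros_like(t)
    for tj in scatterer_delays_ns:            # one coded train per scatterer
        for i, ci in enumerate(code):
            start = tj + i * T_ns
            w += ci * ((t >= start) & (t < start + T_ns))
    return t, w

# Hypothetical scattering-center delays (ns) along the radar line of sight
t_scatterers = [0.0, 30.0, 55.0, 80.0]
t, w = barker_coded_signature(t_scatterers, T_ns=4.0)
print(f"{len(t)} samples, peak |w(t')| = {np.max(np.abs(w)):.1f}")
```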
FIGURE 3.16 Effect of nominal time duration T on the target signature based on the pulse compression principle using the Barker code {+++–+} and the airplane of Figure 3.1a for six values of T (2, 5, 6, 8, 10, and 12 ns).

3.8 CONCLUSIONS
Target recognition with ultra-wideband radars using carrier-free signals can be realized in the time domain by selecting from a database the target signature that is most similar to an observed target signature. The selection process is carried out by the correlation coefficient algorithm. The target signature is a function of target features such as dimensions and shape, as well as the transmitted signal time variation and nominal time duration, which determines the radar range resolution. The time variation of the target signature is related to target shape, and the time duration is directly proportional to target length along the radar line of sight. When the radar range resolution is much shorter than the target length, the scattering centers in the target signature are separated and can be distinguished individually. As the nominal time duration increases, the scattering centers overlap, resulting in a different time variation for the target signature. However, when the range resolution becomes larger than the target length, target recognition is not possible, since the time variation of the target signature degenerates to that of the transmitted pulse and no information about the target can be obtained.

Target recognition efficiency, measured in terms of the required number of calculations, can be improved in two ways: by using efficient algorithms and by utilizing target features. A number of algorithms that have been used with conventional radars can also be used with carrier-free radars. In addition, target features such as dimensions can be used for target classification, and target shape can be used for target identification. The combination of efficient algorithms and target features can improve target recognition efficiency by reducing the required computational effort.
FIGURE 3.17 Database similar to that of Figure 3.5 but using the pulse compression principle with the Barker code {+++–+} (panels w1(t)–w3(t): T = 4 ns, L = 15 m; panels w4(t)–w6(t): T = 4 ns, L = 18 m).
When the observed target signature has a matching target signature in the database, the correlation coefficient assumes its maximum value of unity, and the target can be recognized correctly. However, when the observed target does not have a match but approximates a target signature in the database, the correlation coefficient assumes a maximum value with a peak magnitude less than unity. In this case, the target is not recognized exactly but is approximated to a target in the database. The course of a known target at an unknown azimuth angle with respect to the radar can also be recognized. The database, in this case, contains signatures of the known target at a number of different azimuth angles in the radar ground plane. When an observed target at an unknown azimuth angle has a match in the database, the correlation coefficient assumes the maximum value of unity at a particular azimuth angle, indicating the correct target course. However, when the observed target at an unknown azimuth angle does not have a match in the database, the target course can be determined approximately if the azimuth angle deviation is small and cannot be determined if the deviation is large.
FIGURE 3.18 Normalized correlation coefficient similar to that of Figure 3.6 using the pulse compression principle for six observed targets using the Barker code {+++–+}.
Target recognition at long radar ranges can be accomplished by utilizing the pulse compression principle based on transmitting long coded waveforms using Barker codes. For Barker codes of length five digits with a nominal time duration of less than 6 ns, the scattering centers in the target signature have the time variation of one character of a Barker code and can be identified separately. Increasing the nominal time duration will change the target signature time variation due to overlap of returned signals. The database for target recognition based on the pulse compression principle is generated for a Barker code of length five digits and a nominal time duration of 4 ns. Target recognition of observed targets that have a match in the database is possible, since the correlation coefficient assumes its maximum value of unity for similar targets in the database.
REFERENCES
1. J.D. Taylor, "Ultrawideband Radar," 1991 IEEE MTT-S International Microwave Symposium Digest Part 1, Boston, MA, USA, June 5–7, 1991, pp. 367–370.
2. D.E. Iverson, "Coherent Processing of Ultra Wideband Radar Signals," IEE Proc. Radar, Sonar Navig., vol. 141, no. 3, June 1994, pp. 171–179.
3. Y.T. Lin and A.A. Ksienski, "Identification of Complex Geometrical Shapes by Means of Low-frequency Radar Returns," The Radio and Electronic Engineer, vol. 46, no. 10, October 1976, pp. 472–486.
4. C.R. Smith and P.M. Goggans, "Radar Target Identification," IEEE Antennas and Propagation Magazine, vol. 35, no. 2, April 1993, pp. 27–38.
5. B. Bhanu and T.L. Jones, "Image Understanding Research for Automatic Target Recognition," IEEE AES Systems Magazine, October 1993, pp. 15–22.
6. N.J. Mohamed, "Target Signature Using Nonsinusoidal Radar Signals," IEEE Trans. Electromagn. Compat., vol. 35, no. 11, November 1993, pp. 457–465.
7. N.J. Mohamed, "Target Course Recognition Using Nonsinusoidal Look-down Radars," IEEE Trans. Electromagn. Compat., vol. 36, no. 2, May 1994, pp. 117–127.
8. N.J. Mohamed, "High Resolution Nonsinusoidal Radars with Three-Dimensional Target Structure," IEEE Trans. Electromagn. Compat., vol. 36, no. 3, August 1994, pp. 229–241.
9. E.M. Kennaugh and D.L. Moffatt, "Transient and Impulse Response Approximations," Proc. IEEE, vol. 53, August 1965, pp. 893–901.
10. M.L. Van Blaricum and R. Mittra, "A Technique for Extracting the Poles and Residues of a System Directly from its Transient Response," IEEE Trans. Antennas and Propagation, vol. AP-23, no. 6, November 1975, pp. 777–781.
11. H.J. Li and S.H. Yang, "Using Range Profiles as Feature Vectors to Identify Aerospace Objects," IEEE Trans. Antennas and Propagation, vol. 41, no. 3, March 1993, pp. 261–268.
12. I. Jouny, F.D. Garber, and R.L. Moses, "Radar Target Identification Using the Bispectrum: A Comparative Study," IEEE Trans. Aerospace and Electronic Systems, vol. 31, no. 1, January 1995, pp. 69–77.
13. D.L. Mensa, High Resolution Radar Imaging, Dedham, MA: Artech House, 1981.
14. H.F. Harmuth, Nonsinusoidal Waves for Radar and Radio Communication, New York: Academic Press, 1981.
15. N.J. Mohamed, "Carrier-free Signal Design for Look Down Radars," IEEE Trans. Electromagn. Compat., vol. 37, no. 1, February 1995, pp. 51–61.
16. E.C. Farnett and G.H. Stevens, "Pulse Compression Radar," in Radar Handbook, Skolnik, M.I. (ed.), New York: McGraw-Hill, 1990.
17. E.J. Rothwell, K.M. Chen, D.P. Nyquist, J.E. Ross, and R. Bebermeyer, "A Radar Target Discrimination Scheme Using the Discrete Wavelet Transform for Reduced Data Storage," IEEE Trans. on Antennas and Propag., vol. 42, no. 7, July 1994, pp. 1033–1037.
4
Ultra-Wideband Radar Receivers
James D. Taylor

CONTENTS
Introduction
Section 1 A Digital UWB and Impulse Receiver Case Study
4.1 Background
4.2 Technical Objective
4.3 Frequency Domain Channelized Receivers
4.4 Time Domain Channelized Receiver
4.5 The Battelle Ultra-Wideband Receiver
4.6 Receiver Output De-aliasing
Section 2 Pulse Compression Signals and Radar Signal-to-Noise Improvement
4.7 Introduction
4.8 Signal Correlation and Signal-to-Noise Ratio Improvement
4.9 Correlator Output Time Sidelobes and Pulse Compression
4.10 Phase-Coded Waveforms
4.11 Pulse Compression Waveform Generation and Processing
4.12 Conclusions
Section 3 Bandwidth and Power Spectral Density of Pulse Compression Waveforms
4.13 Introduction
4.14 Baseband Signal Analysis
4.15 Bandpass Signal Power Spectral Density
4.16 Conclusion
Section 4 Performance Prediction for Pulse Compression UWB Radars
4.17 Introduction
4.18 General Radar Performance Equation
4.19 Range Performance Prediction for Pulse Compression
4.20 Performance Prediction Analytical Considerations
4.21 Target Effects on UWB Performance Prediction
4.22 Conclusions on Performance Prediction
INTRODUCTION Any radar design must make trade-offs between the function and signal waveform, and both will drive the transmitter and receiver design and the system cost. Because of the special technical problems in detecting ultra-wideband (UWB) signals, receivers will be a major problem in radar system design. The two principal areas of interest for UWB receivers are short-duration impulse and long-duration pulse-compression radar signals.
IMPULSE RADAR RECEIVERS Receiving short-duration UWB signals and preserving the waveform is a major problem for impulse radar designers, especially when the reflected signal must be analyzed to measure target characteristics. The analog-to-digital converter (ADC) speed limits the signal bandwidth for single-pass signal digitization. Insufficient sample points can degrade system performance and introduce system errors. One solution to limited ADC speed is to divide, or channel, the wideband signal in either frequency or time and then perform signal digitization with many ADCs. Section 1 describes how this was done in a channelized, time-domain digital receiver developed by the Battelle Laboratories Columbus Operation in Columbus, Ohio, USA. If the waveform does not need to be preserved, then impulse receiver design becomes simpler. Chapter 8 describes the integrated circuit MicroPower Impulse Radar (MIR) developed by the Lawrence Livermore National Laboratory. The MIR receiver shows how to get increased range performance by using signal integration. Many impulse radar systems described in Chapter 12 use signal integration and synthetic aperture radar techniques to image and detect small targets.
PULSE-COMPRESSION RADAR RECEIVERS
Long-range target detection requires more energy than impulse radar systems can easily provide. One practical approach is to use a wide-bandwidth waveform and pulse compression coding methods. Pulse compression can provide the fine range resolution of impulse signals with the high signal energy of conventional narrowband radars. Because range resolution depends solely on the bandwidth, encoding can give wideband signal resolution to the long-duration, low-power, high-energy signals needed for long-range target detection. This technique is sometimes called spread spectrum radar. Section 2 describes pulse compression waveforms and detection methods. Section 3 shows how to estimate the spectrum of pulse-compression signals. When the radar resolution is smaller than the target, we must rethink the whole problem of radar target characteristics; Section 4 discusses performance prediction for this case, and Immoreev suggests a solution in Chapter 2.
Section 1 A Digital UWB and Impulse Receiver Case Study 4.1 BACKGROUND This section describes the work done by George T. Ruck of the Electronic Systems Group, Battelle Columbus Operations, Columbus, Ohio. It presents a design approach for building ultra-wideband receivers for one-shot signal digitization using the latest analog-to-digital converter (ADC) and computer technology.1 Single-pass reception and recording of short-duration impulse signals was a problem in earlier ultra-wideband impulse radar research. In the early periods of UWB radar research, experimenters used digitizing oscilloscopes and other equipment for receivers. Generally, these were not fast enough to record the returned impulse waveform in one pass. Because the returned signal was reconstructed from many returned signals, the actual waveform was questionable. UWB radar techniques such as the singularity expansion method (SEM) and higher-order spectral processing for target analysis must accurately preserve the transmitted and received radar waveforms.2 Ruck’s work has wide applications in high-speed signal digitizing. © 2001 CRC Press LLC
4.2 TECHNICAL OBJECTIVE The problem was to design a general-purpose digital ultra-wideband receiver for impulse radar systems using 1 ns to 500 ps pulse widths. This implies 1–2 GHz bandwidths for adequate processing and digitizing. A practical UWB radar design needs to digitize many returns over the pulse repetition interval for processing and target detection. The proposed solution was a channelized receiver that divided the signal into several parts before digitizing with multiple ADC converters. Channelized receivers can be built in either the frequency or time domain. Ruck evaluated both approaches before adopting the time-domain approach.
4.3 FREQUENCY-DOMAIN CHANNELIZED RECEIVERS Figure 4.1 shows a typical channelized receiver architecture for a 0 to 1 GHz receiver. The duplexer, or channel dropping filter, is the key receiver component for breaking the signal down into component bandwidths of 0–200 MHz, 200–400 MHz, etc. For this design, the subchannel bandwidth is determined by the ADC clock rates. If the ADC clock rate is 200 MHz, then five channels are needed along with a 1:5 duplexer. The problem is that duplexers are expensive and can limit the receiver's performance by introducing signal distortions. Each receiver subchannel contains information associated with a particular 200 MHz bandwidth. If the signal is composed of multiple narrow-bandwidth signals, such as the signals from a Fourier-transform-generated or step frequency-chirped signal, this is a natural way to break up the signal. Adding narrower-bandwidth channels can aid in Fourier processing the digitized signal from each analog channel.
FIGURE 4.1 Frequency-domain channelized receiver for 0–1 GHz signals. The duplexer, or channel dropping filter, is a key component. Impulse signals can cause the channel filters to ring and introduce errors into the recorded signal. There is an additional disadvantage in that the ADC outputs are not simply related to the time domain waveform. (From Ref. 1 with permission of SPIE and Battelle International.)
The receiver shown in Figure 4.1 has a problem reconstructing waveforms, because the ADC samples in each channel are not simply related to the time-domain waveform. Incident wideband impulse waveforms can make the channel-dropping filters ring, which introduces distortions into the reconstructed waveform. If the dropping filters do not have good impulse responses and sharp cutoffs, receiver performance will be limited.
4.4 TIME-DOMAIN CHANNELIZED RECEIVERS The time-domain receiver shown in Figure 4.2 is a better approach to recovering complicated time-domain waveforms. This design divides the input signal power and uses a series of time delays to space the time samples between channels every nanosecond. Time sample spacing gives the system an effective 1 ns sampling rate, which is five times better than the 5 ns sampling rate of the 200 MHz clocked ADCs. A true 1 GHz signal would require a 2 GHz sample rate. For the time-domain architecture of Figure 4.2, this can be provided by more time delays and ADCs.
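A schematic sketch of this interleaving idea (not the Battelle hardware itself): N slow ADCs, each seeing the input offset by a fraction of the ADC sample period, jointly produce one record at N times the per-ADC rate. The rates, test waveform, and function names below are illustrative assumptions.

```python
import numpy as np

def interleaved_capture(signal_func, n_channels=5, adc_rate_hz=200e6, n_samples=64):
    """Emulate a time-domain channelized receiver: each ADC runs at adc_rate_hz, but
    channel k effectively samples the input shifted by k * (1/adc_rate_hz)/n_channels,
    so the merged record has an effective rate of n_channels * adc_rate_hz.
    (In hardware the shift comes from delay lines; the sampling grid is the same.)"""
    t_adc = np.arange(n_samples) / adc_rate_hz        # slow per-ADC sample times
    step = 1.0 / (adc_rate_hz * n_channels)           # effective sample spacing
    channels = [signal_func(t_adc + k * step) for k in range(n_channels)]
    merged = np.stack(channels, axis=1).reshape(-1)   # interleave into time order
    t_merged = np.arange(n_samples * n_channels) * step
    return t_merged, merged

# Example: a 300 MHz tone, above the 100 MHz Nyquist limit of a single 200 MHz ADC,
# is still represented in the merged 1 GS/s record.
t, x = interleaved_capture(lambda t: np.cos(2 * np.pi * 300e6 * t))
print(f"effective sample spacing: {t[1] - t[0]:.2e} s, samples: {len(x)}")
```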
4.5 THE BATTELLE ULTRA-WIDEBAND RECEIVER Ruck developed two receivers using variations of the Figure 4.2 time-domain architecture. These receivers can provide much higher data rates than the typical transient digitizers used in UWB radar applications. The program design objective was to develop an advanced digital channelized receiver with an instantaneous 1 GHz full coherent bandwidth. Battelle's primary receiver was the sub-Nyquist design shown in Figure 4.3, which included I and Q channels, with six ADC converters in each channel. The five delay lines in each channel permit the use of sub-Nyquist sampling rate ADCs clocked at 166.66 MHz. The delay lines insert a 0 to 5 ns time lag in each ADC input. Instead of using a set of eleven 500 ps offset delay lines to provide an effective 2 GHz sampling rate, the receiver forms I and Q channels. This approach is more compatible with the use of complex FFT processing to form frequency domain subchannels.
FIGURE 4.2 Time-domain channelized receiver for 0 to 1 GHz signals. The power divider eliminates the impulse response ringing of the frequency domain duplexer. (From Ref. 1 with permission of SPIE and Battelle International.)
FIGURE 4.3 Battelle sub-Nyquist sampling receiver using 200 MHz A/D converters. (From Ref. 1 with permission of SPIE and Battelle International.)
Practical considerations of cost and high-speed memory dictated the choice of 12 slower ADCs instead of 5 faster ones, as shown in Figure 4.4. At the time, 5 ns memory in a 16k×4 ECL configuration was available and being used with slower ADCs. The receiver shown in Figure 4.4 can provide up to 14.7 km of one-way range data with a data point every 6 in. (15 cm) for radar applications. The 500 MHz ADCs can use a 250 MHz clock, which requires a 4 ns or faster memory. At the time of the design during 1991, 3.5 ns ECL chips with a 1024×4 configuration were used with high-speed ADCs. This limits the radar range to 614 m one-way with the same 6 in. data point spacing. The component cost and 16:1 advantages in memory size gained by using the slower ADCs made this the preferred configuration.
4.6 RECEIVER OUTPUT DE-ALIASING When the discrete time sampling intervals are too long for the signal's high-frequency components, aliasing results. The aliasing phenomenon introduces errors in the computed FFT amplitudes of low-frequency components due to using discrete time sampling. Battelle developed and tested a de-aliasing approach to recover multiple signals in the frequency domain, with the appropriate amplitude and phase being assigned to each frequency bin over the entire 1 GHz bandwidth. Figure 4.5 shows a de-aliasing approach using four 500 MHz ADCs. The 8-bit ADCs take 500 million samples per second with a 1.2 GHz input bandwidth. In-phase (I) and quadrature (Q) components are sent to power splitters before digitizing and time delays. This results in two linearly independent sets of data and effectively doubles the sampling rate. Because two linearly independent samples are created at each sampling point, the I and Q samples at 500 MHz result in 500 MHz of frequency data instead of the 250 MHz expected by sampling at that rate. De-aliasing works by sending the analog I and Q signals with a frequency content between 0 Hz and 1 GHz into two power splitters. Half the I component energy goes to A/D0 for conversion, while the other half goes through delay line τ1 to A/D2. The Q component gets a similar treatment, with direct digitizing by A/D1 and delayed digitizing by A/D3. Digitized data is processed through a time domain window and input to the FFT process shown as FFT0 and FFT1. Inserting the time delay on the I and Q components of FFT1 generates two more sets of linearly independent data points. The I and Q processing created two linearly independent data points at each sampled point, and the delay doubles that number to four. Four linearly independent data points at each of the 500 MHz points result in an effective 1 GHz usable bandwidth. Recovering the full 1 GHz of usable data requires the following de-aliasing technique. Suppose that calculating a DFT requires 64-point FFTs to get 128 frequency components between 0 Hz and 1 GHz. Examine the first FFT data set, which consists of 64 complex points output by FFT0 and 64 complex points output by FFT1. The FFT0 and FFT1 output points are respectively C0j and C1j, where j = 0, 1, . . . , 63. Then,

C_{pj} = \sum_{m=0}^{1} A_{mj}\, e^{i(2\pi f_j + m\,2\pi f_s)\tau_p}, \qquad p = 0, 1;\; j = 0, 1, \ldots, 63;\; i = \sqrt{-1} \qquad (4.1)

where p indicates the FFTp coefficients, and the Amj are the desired complex frequency amplitudes at frequencies fj plus fs, where fs is the 500 MHz sample rate. Therefore, Equation (4.1) implies that, because of aliasing, the complex output of FFT0's first bin, C00, is a linear sum of the energy at frequency f0 plus the energy at f0 + fs. The energy at frequency f0 + fs is aliased into this bin. By the same logic, the first coefficient of FFT1, C10, is a linear sum of energy at frequency f0 and energy at f0 + fs. From Equation (4.1),

C_{00} = A_{00} + A_{10}, \qquad C_{10} = A_{00}\, e^{i 2\pi f_0 \tau_1} + A_{10}\, e^{i(2\pi f_0 + 2\pi f_s)\tau_1} \qquad (4.2)
FIGURE 4.4 Alternative sub-Nyquist sampling receiver using 500 MHz ADCs. This design requires both higher-speed ADCs and memory. (From Ref. 1 with permission of SPIE and Battelle International.)
FIGURE 4.5 De-aliasing technique hardware. (From Ref. 1 with permission of SPIE and Battelle International.)
This can be generalized and written as a set of linear equations for each j, where j = 0, 1, . . . , 63, so that

\begin{bmatrix} C_{0j} \\ C_{1j} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ e^{i 2\pi f_j \tau_1} & e^{i(2\pi f_j + 2\pi f_s)\tau_1} \end{bmatrix} \begin{bmatrix} A_{0j} \\ A_{1j} \end{bmatrix} \qquad (4.3)

Because the frequencies and delays are known and constant, we can express Equation (4.3) as

\begin{bmatrix} C_{0j} \\ C_{1j} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ G_{0j} & G_{1j} \end{bmatrix} \begin{bmatrix} A_{0j} \\ A_{1j} \end{bmatrix} \qquad (4.4)

where

G_{0j} = e^{i 2\pi f_j \tau_1}, \qquad G_{1j} = e^{i(2\pi f_j + 2\pi f_s)\tau_1} \qquad (4.5)

Writing the matrix determinant as \Delta_j = G_{1j} - G_{0j}, the solution for the unknown complex amplitudes becomes

A_{0j} = \frac{G_{1j}}{\Delta_j} C_{0j} - \frac{C_{1j}}{\Delta_j}, \qquad A_{1j} = \frac{C_{1j}}{\Delta_j} - \frac{G_{0j}}{\Delta_j} C_{0j} \qquad (4.6)
This can be rewritten to require only two complex multiplications and one complex subtraction for each coefficient A0j and A1j where j = 0, 1, . . . , 63. The results are the desired (non-aliased) complex amplitudes for each of the 128 frequency bins between 0 Hz and 1 GHz. Notice the two important features of this scheme. First, the de-aliasing step shown in Equation (4.6) needs to be performed only on bins with non-de-aliased amplitudes (absolute value of Cpj) greater than some threshold. This can save some processing time. Second, this technique can identify multiple signals of interest with no increase in complexity; it can identify 20 signals as easily as finding one signal. For a specific case when a signal is present at both C0j and C1j, this will accurately sort out and identify both signals. George T. Ruck, the original researcher, wrote, “Some de-aliasing schemes exhibit problems in this case.”1
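A compact numerical sketch of the de-aliasing solve in Equations (4.3) through (4.6), using synthetic complex FFT bins rather than real receiver data; the sample rate, delay value, and array names are illustrative assumptions.

```python
import numpy as np

fs = 500e6            # per-ADC sample rate (Hz)
tau1 = 0.3e-9         # delay-line value tau_1 (illustrative)
n_bins = 64
fj = np.arange(n_bins) * fs / n_bins          # baseband bin frequencies f_j

# Synthetic "true" complex amplitudes: A0 in the 0..fs band, A1 aliased from fs..2fs
rng = np.random.default_rng(0)
A0 = rng.normal(size=n_bins) + 1j * rng.normal(size=n_bins)
A1 = rng.normal(size=n_bins) + 1j * rng.normal(size=n_bins)

# Forward model of Eqs. (4.3)-(4.5): what FFT0 and FFT1 would report
G0 = np.exp(1j * 2 * np.pi * fj * tau1)
G1 = np.exp(1j * 2 * np.pi * (fj + fs) * tau1)
C0 = A0 + A1                       # undelayed channel (tau_0 = 0)
C1 = G0 * A0 + G1 * A1             # delayed channel

# De-aliasing solve of Eq. (4.6)
delta = G1 - G0
A0_hat = (G1 * C0 - C1) / delta
A1_hat = (C1 - G0 * C0) / delta

print("max error A0:", np.max(np.abs(A0_hat - A0)))
print("max error A1:", np.max(np.abs(A1_hat - A1)))
```

With a nonzero delay τ1 the 2 × 2 system is well conditioned for every bin, so the aliased pair (A0j, A1j) is recovered essentially exactly, as the error printout shows.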
REFERENCES 1. George T. Ruck, “Ultra-wideband radar receiver,” SPIE Proceedings, Vol. 1631, Ultra-wideband Radar, 1992, pp. 174–180. 2. James D. Taylor, Introduction to Ultra-Wideband Radar Systems, CRC Press, Boca Raton, FL, 1995.
Section 2 Pulse Compression Signals and Radar Signal-to-Noise Improvement
4.7 INTRODUCTION
Pulse compression techniques can improve the signal-to-noise ratio (SNR) in radar receivers. Conventional radar with an unmodulated pulse signal has a bandwidth of 1/τ, where τ is the signal
duration, but we can improve the SNR and signal detection by using spread spectrum techniques to increase the bandwidth beyond 1/τ. Radar detection requires that the return signal exceed the receiver noise level, which includes the receiver shot noise due to bandwidth, radar clutter returns, and interference from other radar sets, jammers, communications transmitters, etc. Pulse compression methods achieve a better SNR by converting long-duration, low-power return signals into short-duration, high-power signals to help boost them above the noise level. Compressed waveform processing is also called correlation, matched filtering, convolution, or North filtering. This chapter uses the term correlation for the detection of compressed waveforms.1

Radar detection range depends on the transmitted energy, and resolution depends on the signal bandwidth. To achieve both long range and fine resolution, the problem is to increase the signal energy without increasing the output power and to get a larger bandwidth than the 1/τ resulting from a simple pulse-modulated sine wave. Small spatial resolution is important when the target position must be known exactly, but the required larger bandwidths and the resulting high receiver noise levels limit detection range. One solution is to use a long-duration pulse with a deliberately large bandwidth, which means signal waveforms that are not simple sine waves.

There are some additional advantages to using pulse-compression signals. Increasing the signal bandwidth by coding provides a uniquely identifiable return signal, which lets the receiver reject other signals. Signal encoding permits several different radars to use the same spectrum without mutual interference. The correlation detection process turns long-duration, low-power signal energy into a shorter, higher-power pulse, and it suppresses random noise and unwanted signals. The resulting correlator output has a higher signal-to-noise ratio than it would have with a long-duration narrowband received signal. The integrating process in correlation reduces noise levels because of the low coincidence between the reference signals, random noise, and interference. However, some minimum received signal power must be present to provide a correlated output spike that exceeds the noise floor and receiver detection threshold. When the transmitted signal is reflected by random clutter, the overlapping signals will appear as noise, which will not be correlated or produce a detectable output.

There is an additional advantage, because pulse-compression signals can reduce signal detectability by electronic warfare equipment. Using pulse-compression radar can buy time by complicating the enemy's problems in detecting the signal and devising countermeasures. A radar that is difficult to detect by existing electronic warfare equipment is sometimes called a whispering radar. Deliberately increasing the signal bandwidth by coding is called spread spectrum in communications systems. Using spread spectrum techniques can enhance performance, provide privacy through signal concealment, and permit spectrum sharing by several units. Pulse compression radars are sometimes called spread spectrum radars.2 When pulse compression, or spectrum spreading, produces a signal that has a fractional bandwidth BWf = 2(fH – fL)/(fH + fL) > 0.25, then we can consider it an ultra-wideband radar.3 Note that Russian literature such as Astanin4 defines ultra-wideband as BWf > 1.
This chapter will discuss signal correlation and signal-to-noise ratio improvement, linear FM chirp signals and matched filtering, phase-coded waveforms used to provide greater performance improvement through time sidelobe suppression, and signal generation and detection methods.
4.8 SIGNAL CORRELATION AND SIGNAL-TO-NOISE RATIO IMPROVEMENT One strategy for signal-to-noise ratio improvement is to multiply the received signal with a reference signal and then integrate the product over the known length of the signal, as shown in Figure 4.6. When the received and reference signals match, the integrated energy will give a high power output over an interval that is shorter than the signal duration. The ratio of signal length to correlator output signal length is called the compression ratio. The ratio of average received power to correlator © 2001 CRC Press LLC
FIGURE 4.6 Generalized correlator block diagram. The received signal is multiplied by a reference signal and the product integrated over the signal time interval. The peak output power will be higher than the input power level depending on the ratio of the highest and lowest frequencies in the signal.
output power depends on the methods of pulse compression or coding used. The pulse compression design problem is to find a waveform whose autocorrelation function provides a single peak output at one point, and a low output, or sidelobes, at all other times.
4.8.1
PULSE COMPRESSION THEORY
In 1957, Paul E. Green presented the analytical basis for using signal correlations to improve radar signal detection in "The Output Signal to Noise Ratio of Correlation Detectors," and described a correlation detector that multiplies two waveforms and performs a smoothing, or integrating, function. The detector consists of a multiplier and integrating filter as shown in Figure 4.6.6 The two input signals u1(t) and u2(t) will have a nonzero correlation. For practical purposes, they are the signal waveform x(t) perturbed by additive noise n1(t) and n2(t), and possibly distorted in other ways. Green's analysis assumed that
• All signal and noise components are independent, stationary, and ergodic random functions of time with Gaussian first- and second-order amplitude functions.
• Signals are Fourier transformable and have power density spectra X(ω), N1(ω), and N2(ω), respectively, all confined to 2πW, a closed interval in ω.
• All time functions have single-sided frequency spectra.
• All network system functions use double-sided frequency spectra.
• Only one correlator input includes a filter h(t). Multiple filters and effects can be lumped into the signal through h(t).
• The instantaneous product of u1(t) and u2(t) is the ideal four-quadrant multiplier output.
• The integrating filter is a realizable two-terminal device with a complex system function I(ω), which is Fourier transformable into the filter impulse response I(t). In the case where I(t) is a rectangular pulse of duration T, the filter is an ideal integrator with integration time T. The filter is tuned to ∆ so that the maximum frequency response is at ∆. Further, ∆ is at least equal to 2πW, which is the bandwidth of significant values of the signal and noises. Other appropriate forms will have an effective integration time T equal to the reciprocal of the effective noise bandwidth of the filter, as shown in Figure 4.7.
4.8.2
PULSE COMPRESSION SIGNAL-TO-NOISE RATIO ANALYSIS
Green's analysis can be summarized as follows. Given the correlator shown in Figure 4.6, determine the output signal-to-noise ratio

SNR = (square of the dc output of the integrating filter) / (fluctuation power at the same point)

This gives the general result for the bandpass detector case, where the effect of the integrating filter is

W_f = \left|I_{\max}(\omega)\right|^{-2} \int_0^{\infty} \left|I(\omega)\right|^2 d\omega \qquad (4.7)

which is the effective noise bandwidth of the filter in radians per second. The resulting SNR expression is
\left(\frac{S}{N}\right)_0 = \frac{\left[\int_0^\infty X(\omega)\,\mathrm{Re}[H(\omega)]\, d\omega\right]^2 + \left[\int_0^\infty X(\omega)\,\mathrm{Im}[H(\omega)]\, d\omega\right]^2}{\int_0^\infty \left\{X^2(\omega)[H(\omega)]^2 + X(\omega)N_1(\omega) + X(\omega)[H(\omega)]^2\,[N_2(\omega) + \Delta] + N_1(\omega)N_2(\omega)\right\} d\omega}\;\frac{1}{W_f} \qquad (4.8)

FIGURE 4.7 Impulse response of ideal integrators used in the correlator analysis: (a) low pass filter output, (b) bandpass filter output.
The reciprocal of Wf /2π is the effective integration time, since it produces the same (S/N)0 as an ideal integrating filter having a rectangular impulse response of duration 2π/Wf. For the lowpass filter case, the effective integration time is π/Wf. Applying Equation (4.8) to a restricted case and simplifying so that H(ω) = unity and so that the time functions x(t), n1(t), and n2(t) have the same spectral shapes produces

\left(\frac{S}{N}\right)_0 = \frac{W_x}{F\,W_f}\left[\varepsilon + \frac{1}{\rho_1} + \frac{1}{\rho_2} + \frac{1}{\rho_1 \rho_2}\right]^{-1} \qquad (4.9)

where
ε = a quantity equal to 2 for a lowpass filter or 1 for a bandpass detector
Wx = the effective noise bandwidth of the signal and the two noises n1(t) and n2(t)
ρ1 and ρ2 = the signal-to-noise power ratios X(ω)/N1(ω) and X(ω)/N2(ω), respectively
F(D) = a spectrum form factor, where D is the density function and

F(D) = \frac{\int_0^\infty D^2(\omega)\, d\omega}{D_{\max}(\omega)\int_0^\infty D(\omega)\, d\omega} \qquad (4.10)

Table 14.1 shows F(D) form factor values. Simplifying further for the restricted case of two white noises N01 and N02 watts/radian/sec and H(ω) = unity, then

\left(\frac{S}{N}\right)_0 = \frac{W_x}{W_f}\left[\varepsilon F + \frac{N_{01}}{X_{\max}} + \frac{N_{02}}{X_{\max}} + \frac{N_{01}N_{02}}{X_{\max}^2}\,\frac{W'}{W_x}\right]^{-1} \qquad (4.11)
where Xmax = the maximum value of the signal power density spectrum X(ω), and W′ is the bandwidth of the two white (rectangular spectrum) noises; we assume that W′ is large enough to include all of X(ω). Note that Equation (4.11) is similar to Equation (4.9) except in defining the SNR in terms of power densities, where the signal power density is at the signal spectrum peak. The form factor enters a different way, and both these equations show that SNR depends on the ratio of the signal bandwidth to the filter bandwidth, or the ratio of the filter integration time to the period of the signal.

TABLE 14.1 Form Factor Values for Common Filters5

Type of Spectrum                                     Density Function D(ω)                                    Form Factor F(D)
Rectangular                                          1 for ω in bandwidth Ω; 0 otherwise                      1
Triangular                                           1 − 2|ω − ωc|/Ω for |ω − ωc| < Ω/2; 0 otherwise          2/3
Gaussian                                             exp[−(ω − ωc)²]                                          1/√2
Exponential                                          exp[−|ω − ωc|]                                           1/2
First-order Butterworth (single-tuned RLC circuit)   [1 + (ω − ωc)²]⁻¹                                        1/2
nth-order Butterworth                                [1 + (ω − ωc)²ⁿ]⁻¹                                       1 − 1/2n
Correlator Output
The correlator output for a signal x(t) correlated with a reference signal x(t – τ) will resemble the correlation function of the input waveform, expressed as

\phi(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, x(t - \tau)\, dt \qquad (4.12)

where T = integration time. For practical computation in a sampled digital electronic correlator, the result of evaluating the average of a number of sample pairs of x(t) will be

\phi(\tau) = \frac{1}{N} \sum_{n=1}^{N} a_n b_n(\tau) \qquad (4.13)
where N = the number of samples, and an and bn are samples of x(t) separated by the interval τ.6 Figure 4.8 shows the autocorrelation waveforms of a single sine wave cycle, a square pulse, a four-cycle square wave, and a linearly chirped signal. The output reaches a maximum value at time shift τ = 0. The output has time sidelobes when the signal has many cycles, as shown in Figure 4.8c and d. In the case of Figure 4.8c, setting the detection threshold too low would result in multiple detections of the same target. Later sections will cover sidelobe size and minimization. There is no output value improvement unless the bandwidth is large, as shown in Figure 4.8d, where the chirp signal has a highest-to-lowest frequency ratio of 3, which gives an output signal three times larger than the input signal. The time sidelobes are small because the signal produces a high correlation only at one instant. This is the result expected from Equations (4.12) and (4.13). Figure 4.9 shows that the correlation process suppresses noise. When the input signal is mixed with noise, the cross-correlation between the noise and the reference is much lower than the correlated output of the signal itself, so the correlated signal value remains above the noise level. If the noise level is large enough, however, the correlated noise output can overcome the correlated signal output. This is what we would expect from Equation (4.11), which shows the output SNR as a function of the bandwidth and individual signal-to-noise ratios. The correlator output will also be smaller if the received signal is distorted.
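A small numerical sketch of the sampled correlator of Equation (4.13), applied to a noisy chirp in the spirit of Figure 4.9; the waveform parameters, delay, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference waveform: a simple baseband chirp (parameters are illustrative)
n = 512
t = np.arange(n)
ref = np.cos(2 * np.pi * (0.01 + 0.0001 * t) * t)

# Received signal: delayed replica of the reference buried in noise
delay = 137
rx = np.zeros(2 * n)
rx[delay:delay + n] = ref
rx += 1.0 * rng.standard_normal(rx.size)

# Equation (4.13): average of the sample products a_n * b_n(tau) for each lag tau
def correlate(a, b_long, lag):
    return np.mean(a * b_long[lag:lag + a.size])

lags = np.arange(rx.size - n)
phi = np.array([correlate(ref, rx, k) for k in lags])

print("estimated delay:", int(lags[np.argmax(phi)]), "(true:", delay, ")")
```

Even with noise power comparable to the signal, the averaged product peaks at the correct lag, illustrating the noise suppression described above.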
4.8.3
PULSE COMPRESSION SUMMARY
Pulse compression increases the effective SNR by using a wide-bandwidth coded signal. The SNR increase is limited by the ratio of the highest to the lowest frequency in the signal. The detector output will be the autocorrelation function of the signal waveform, which consists of a short, high-power peak and time sidelobes. The signal waveform affects the time sidelobes, as will be shown later.
4.9 CORRELATOR OUTPUT TIME SIDELOBES AND PULSE COMPRESSION
High range resolution radar applications may include target detection, object classification, imaging, terrain mapping, precision ranging, and distributed clutter suppression. There are two approaches to fine range resolution: a simple short-duration, high-power pulse, or a low-power, long-duration, wideband coded pulse. The detected coded pulse output will be a peak signal that occurs when the received and reference signals coincide in time, and accompanying time sidelobes that result from signal mismatch when they are not coincident, or |τ| > 0. When a compressed pulse waveform signal of duration T passes through a matched filter, the output will be 2T in duration and have a peak value proportional to the compression ratio, as shown in Figure 4.9a. The responses outside the compressed main response are called range or time sidelobes. The range sidelobes from any given range bin may appear as signals in an adjacent range bin and must be controlled to avoid false alarms and multiple target indications. Sidelobe measures include the peak sidelobe level (PSL), which is associated with the probability of a false alarm in a particular range bin due to the presence of a target in a neighboring range bin.

Peak sidelobe level (PSL) = 10 log (maximum sidelobe power / peak response) \qquad (4.14)
FIGURE 4.8 Correlation output examples: (a) single sine wave cycle and correlator output, (b) square pulse and correlator output, (c) four-cycle square wave and correlator output, (d) chirped signal with BW > 1/τ and correlator output. The correlator output will peak when the received and reference signals coincide. When the signals do not coincide, the output is a lower value, called a time sidelobe. The objective in waveform design is to suppress the time sidelobe value as much as possible.
FIGURE 4.9 Effects of noise on correlator output: (a) linear chirp signal and correlator output, (b) noise signal and correlator output, (c) signal plus noise and correlator output. Correlation reduces the output noise level.
The integrated sidelobe level (ISL) is a measure of the energy distributed in the sidelobes. It is important in dense target scenarios and when distributed clutter is present, and it quantifies the total sidelobe level.

Integrated sidelobe level (ISL) = 10 log (total sidelobe power / peak response) \qquad (4.15)

Loss in processing gain (LPG) is the loss in SNR when the receiver has a mismatched, as opposed to matched, filter.7

Loss in processing gain (LPG) = 10 log (CR / peak response) \qquad (4.16)

LPG quantifies the loss in SNR performance due to using a mismatched filter in the receiver.
Sidelobes are a natural product of correlation, but they can interfere with proper target detection or produce false targets, as shown in Figure 4.8c. Sidelobe suppression is a major consideration when selecting pulse compression waveforms.
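A brief sketch of computing PSL and ISL from a sampled correlator output, here using the 13-element Barker code as the test sequence; treating the single zero-lag sample as the main response is an assumption of this sketch.

```python
import numpy as np

# 13-element Barker code {+++++--++-+-+}
barker13 = np.array([+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1])
acf = np.correlate(barker13, barker13, mode="full").astype(float)

peak = acf.max()                       # main response: the zero-lag sample
side = np.delete(acf, np.argmax(acf))  # everything else is sidelobe

psl = 10 * np.log10(np.max(side**2) / peak**2)   # Eq. (4.14)
isl = 10 * np.log10(np.sum(side**2) / peak**2)   # Eq. (4.15)
print(f"Barker-13: PSL = {psl:.1f} dB, ISL = {isl:.1f} dB")
# Prints approximately PSL = -22.3 dB and ISL = -11.5 dB
```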
4.9.1
LINEAR FM CHIRP SIGNALS AND TIME SIDELOBES
The linear frequency modulated, or linear FM chirp, is the simplest form of pulse compression signal. The linear FM chirp signal is widely used in radar applications and is a good place to start a discussion of practical pulse compression. Linear FM chirp detection commonly uses a technique called matched filtering, which produces an output described by the correlation function of the signal.8,9,10,11 Charles E. Cook presented an intuitive explanation of linear FM pulse compression in Ref. 8. Cook noted that the evolutionary trend of military radar is to extend detection range for a given size target. The obvious solution is to transmit more energy; however, this means a longer pulse length and less resolution. Increasing detection range without degrading resolution therefore means increasing the transmitter tube performance in terms of maximum and average output power, but more transmitted energy results in increased weight, prime power consumption, and system cost. The other approach is to enhance the SNR in the receiver by using pulse compression techniques to increase the detected signal power level. The pulse compression device became known as the matched filter and produces an output that is the autocorrelation of the transmitted signal. Looked at another way, the matched filter correlates the signal waveform against a reference stored as the filter transfer function. Matched filters are sometimes called North filters or conjugate filters. Early work by R.H. Dicke and S. Darlington proposed essentially identical approaches. Dicke reasoned that a linearly swept carrier frequency (as shown in Figure 4.10a, b, c, d, e, and f), when used with a matched filter with the time delay vs. frequency characteristics shown in Figure 4.10g, would delay each frequency component and provide an output as shown in Figure 4.10h. The matched filter provides an output that is the correlation function of the input signal, so the correlator output does not follow the input signal form but gives a peak output value indicating the reception of a particular waveform. The matched filter output is proportional to the input signal cross-correlated with a replica of the transmitted signal delayed by time t1. For this case, the cross correlation of signals y(t) and s(t) is defined as

R(t) = \int_{-\infty}^{\infty} y(\lambda)\, s(\lambda - t)\, d\lambda \qquad (4.17)

The output of a filter with an impulse response h(t) when the input is y_{in}(t) = s(t) + n(t) is

y_0(t) = \int_{-\infty}^{\infty} y_{in}(\lambda)\, h(t - \lambda)\, d\lambda \qquad (4.18)

For the matched filter case, h(\lambda) = s(t_1 - \lambda), so the previous equation becomes

y_0(t) = \int_{-\infty}^{\infty} y_{in}(\lambda)\, s(t_1 - t + \lambda)\, d\lambda = R(t - t_1) \qquad (4.19)

so that the matched filter output is the cross correlation between the received signal corrupted by noise and a replica of the transmitted signal. The transmitted signal replica is built into the matched filter as its frequency response function. If the input signal is the same as the reference signal s(t)
FIGURE 4.10 Linear FM pulse compression: (a) transmitter waveform (amplitude vs. time), (b) frequency of the transmitted waveform vs. time, (c) representation of the time waveform, (d) output of the pulse compression filter, (e) amplitude spectrum, (f) phase spectrum, (g) matched filter time delay characteristics, and (h) matched filter output and sidelobe structure.
used to design the filter, then the output will be the autocorrelation function. Figure 4.8 shows some common autocorrelation functions.1 On a practical level, the compressed signal amplitude and phase spectra will look like Figure 4.11a. One approach to matched filter design is the bridged-T all-pass network, shown in Figure 4.11b.8
FIGURE 4.11 Practical pulse compression spectra and a circuit for implementing them: (a) post-compression amplitude and phase spectra for compression ratios of 13:1, 52:1, and 130:1; (b) general form of the bridged-T all-pass network used to implement matched filters for these amplitude and phase spectra. (Reprinted with permission of IEEE from Ref. 8.)
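A numerical sketch of linear FM pulse compression by matched filtering, with an illustrative time-bandwidth product; it reproduces the qualitative behavior of Figure 4.10h (a compressed main lobe of width roughly 1/B), not any specific radar design.

```python
import numpy as np

fs = 100e6            # sample rate in Hz (illustrative)
T = 10e-6             # uncompressed pulse length, 10 us
B = 5e6               # swept bandwidth, so the time-bandwidth product is TB = 50
t = np.arange(0, T, 1 / fs)

# Complex baseband linear FM chirp: instantaneous frequency sweeps 0 -> B over T
tx = np.exp(1j * np.pi * (B / T) * t**2)

# Matched filter output = cross correlation of the (noise-free) return with the replica
y = np.abs(np.correlate(tx, tx, mode="full"))
y_db = 20 * np.log10(y / y.max() + 1e-12)

# The -4 dB width of the compressed pulse is approximately 1/B
above = np.where(y_db > -4.0)[0]
width = (above[-1] - above[0]) / fs
print(f"uncompressed pulse: {T*1e6:.1f} us, compressed -4 dB width: {width*1e6:.2f} us")
print(f"approximate compression ratio: {T / width:.0f} (TB = {T * B:.0f})")
```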
4.9.2
FM CHIRP SIGNAL TIME SIDELOBES
The linear FM chirp is a widely used pulse compression signal because it is easy to generate, is relatively insensitive to Doppler shifts, and can be generated in many different ways.
Linear FM chirp signals have disadvantages. First, excessive range-Doppler cross coupling results when the Doppler-shifted signal is correlated and shows a different time of arrival. Overcoming the range-Doppler error requires knowing, or determining, the range or Doppler by other means. Figure 4.12 shows the effects of range-Doppler coupling. Linear pulse compression also gives high time sidelobes, as shown in Figure 4.10h. The highest time sidelobe occurs 13.2 dB below the peak output, which requires setting a high threshold level; smaller targets may get lost in the time sidelobes. Nonlinear chirp waveforms are a way to suppress time sidelobes. For example, the waveform shown in Figure 4.13a has a frequency vs. time function of

f(t) = W\left[\frac{t}{T} + \sum_{n=1}^{7} K_n \sin\frac{2\pi n t}{T}\right] \qquad (4.20)

where K1 = –0.1145, K2 = +0.0396, K3 = –0.0202, K4 = +0.0118, K5 = –0.0082, K6 = +0.0055, and K7 = –0.0040.
This nonlinear FM chirp signal can produce a –40 dB Taylor time sidelobe pattern. Nonlinear FM chirp signals have several advantages. They do not need time or frequency weighting for sidelobe suppression, because the FM modulation of the waveform provides the desired amplitude spectrum. They provide matched filter reception and low sidelobes that are
FIGURE 4.12 Range-Doppler coupling of linear FM. The signal frequency shift results in a miscorrelation and error in the estimated time of arrival. Also, less of the signal is correlated, which reduces the output signal strength. The miscorrelation will also produce higher time sidelobes.
FIGURE 4.13 Nonlinear FM chirp signals: (a) Taylor series nonlinear chirp with a –40 dB time sidelobe pattern, (b) nonsymmetrical nonlinear chirp frequency vs. time, and (c) symmetrical nonlinear chirp frequency vs. time.
compatible in design, and they eliminate the signal-to-noise losses associated with weighting. Figure 4.13b and c shows some typical nonlinear waveforms. However, nonlinear FM chirp signals also have disadvantages. They are more complex, and there has been limited development of nonlinear-FM generator devices. They require a separate FM modulation design for each amplitude spectrum to achieve the required sidelobe level. Note that nonlinear FM signals come in two forms, nonsymmetrical and symmetrical, as shown in Figure 4.13. The nonsymmetrical form retains some of the Doppler cross coupling of the linear FM waveform. In the symmetrical form, the frequency increases (or decreases) during the first half of the pulse (t ≤ T/2) and then decreases (or increases) during the second half (t ≥ T/2).7,11
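A small sketch of the frequency function in Equation (4.20), evaluated with the listed Taylor coefficients; W and T are arbitrary illustrative values, and the sketch only shows how the nonlinear sweep deviates from a linear ramp.

```python
import numpy as np

# Taylor coefficients for the -40 dB nonlinear FM design of Equation (4.20)
K = [-0.1145, +0.0396, -0.0202, +0.0118, -0.0082, +0.0055, -0.0040]

def nonlinear_fm_frequency(t, T, W):
    """Frequency vs. time: f(t) = W [ t/T + sum_n K_n sin(2 pi n t / T) ]."""
    f = t / T
    for n, kn in enumerate(K, start=1):
        f = f + kn * np.sin(2 * np.pi * n * t / T)
    return W * f

T = 10e-6          # pulse length (illustrative)
W = 5e6            # total frequency excursion (illustrative)
t = np.linspace(0, T, 1001)
f_nl = nonlinear_fm_frequency(t, T, W)
f_lin = W * t / T
print(f"max deviation from the linear sweep: {np.max(np.abs(f_nl - f_lin)) / 1e3:.1f} kHz")
```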
4.9.3
FREQUENCY STEPPED PULSE COMPRESSION
Frequency stepping is another pulse compression method that applies discrete frequency steps to the transmitted signal, as shown in Figure 4.14, which shows three different frequency stepping signals. All have the same pulse compression features at zero Doppler, but they have different characteristics when some Doppler shift is present. For the frequency-stepped pulse of Figure 4.14a, τT = transmit subpulse length; τc = compressed pulse length; N = number of subpulses; fk = frequency of the kth subpulse for k = 1, . . . , N; ∆fs = fk – fk–1 = subpulse frequency step for k = 1, . . . , N; and ∆f = N∆fs = total frequency excursion.
FIGURE 4.14 Frequency stepped waveforms: (a) discrete linear FM, (b) scrambled frequency stepping, and (c) interpulse frequency stepping.
Assume that the subpulse frequency step is ∆fs = 1/τT, because ∆fs > 1/τT and ∆fs < 1/τT give undesirable compressed waveform characteristics.13 Then the waveform approximates a linear FM waveform of duration T with a total bandwidth B = N∆fs = N/τT. The compressed pulse length becomes

\tau_c = \frac{1}{B} = \frac{\tau_T}{N} \qquad (4.21)

and the pulse compression ratio will be

CR = \frac{T}{\tau_c} = \frac{N\tau_T}{\tau_T/N} = N^2 \qquad (4.22)
Frequency stepped waveform variations include a scrambled frequency coded pulse, as shown in Figure 4.14b. This uses the same frequency components as the linearly stepped pulses of Figure 4.14a, except that they occur randomly in time. However, the bandwidth is the same, which results in the same compression ratio CR = N². The same analysis applies to the interpulse frequency stepping shown in Figure 4.14c.
All the frequency stepped pulse waveforms respond the same way to zero Doppler conditions from zero radial velocity targets; however, when there is a Doppler shift, then their characteristics are different. The linearly stepped frequency pulse has characteristics similar to the linear FM chirp signal. The Doppler characteristics of the other two depend on the order and spacing employed. Edward C. Farnett and George H. Stevens discuss these waveforms and range-Doppler characteristics in Skolnik’s Radar Handbook, 1990.13 Also see Nathanson.12
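As a rough numerical check of Equations (4.21) and (4.22), the sketch below (Python with numpy; all parameter values are illustrative assumptions) builds the discrete linear FM waveform of Figure 4.14a with ∆fs = 1/τT and measures the width of its compressed (autocorrelation) pulse.

import numpy as np

N, tau_T = 16, 1e-6                      # number of subpulses and subpulse length (illustrative)
dfs = 1.0 / tau_T                        # subpulse frequency step, chosen as 1/tau_T
fs = 4 * N * dfs                         # sample rate comfortably above the total excursion
t = np.arange(0, N * tau_T, 1 / fs)

k = np.floor(t / tau_T)                  # index of the subpulse containing each sample
waveform = np.exp(2j * np.pi * k * dfs * (t - k * tau_T))   # CW burst at f_k = k * dfs

acf = np.abs(np.correlate(waveform, waveform, mode="full"))
acf /= acf.max()
above = np.where(acf >= 0.5)[0]          # -3 dB extent of the compressed pulse
tau_c = (above[-1] - above[0]) / fs
print("uncompressed T   =", N * tau_T)
print("compressed tau_c ~", tau_c)                           # roughly tau_T / N = 1 / B
print("compression ratio ~", N * tau_T / tau_c, "(N^2 =", N ** 2, ")")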
4.10 PHASE-CODED WAVEFORMS
Phase modulating a radar carrier signal with a square wave coded baseband signal is called phase-coded modulation and is another approach to getting low sidelobe values from a pulse compression signal. A phase-coded waveform has long, similarly coded intervals at the beginning and progressively shorter intervals as the signal duration approaches its end. Figure 4.15 shows binary phase coding, the transmitted waveform, the correlator, and the detector output waveforms. Only a few code sequences out of the many possible will produce uniformly low sidelobes. Barker and complementary codes can produce minimal sidelobes.
4.10.1 BARKER CODES
Barker codes are binary phase codes whose autocorrelation sidelobe values are less than or equal to 1/N in size, where N is the code length and the maximum output is normalized to 1. Table 4.2A lists the known Barker codes, including the longest known 13-element code. The Barker code's advantages are the minimum possible sidelobe energy and uniformly distributed sidelobe energy. Therefore, Barker codes are sometimes called perfect codes. There are a number of nearly perfect longer codes with almost uniform and minimal energy sidelobes. McMullen gives a list of near-perfect codes in Ref. 14.

TABLE 4.2A Known Barker Codes1

Code Length   Code Elements        PSL (dB)   ISL (dB)
1             +
2             +–, ++               –6.0       –3.0
3             ++–, +––             –9.5       –6.5
4             ++–+, +++–           –12.0      –6.0
5             +++–+                –14.0      –8.0
7             +++––+–              –16.9      –9.8
11            +++–––+––+–          –20.8      –10.8
13            +++++––++–+–+        –22.3      –11.5
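A quick way to see the Barker property is to correlate a code against itself. The sketch below (Python with numpy) uses the 13-element code from Table 4.2A; the –22.3 dB PSL in the table is simply 20 log10(1/13).

import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])   # +++++--++-+-+

acf = np.correlate(barker13, barker13, mode="full")   # matched-filter output
peak = acf.max()                                      # 13
sidelobes = np.delete(acf, np.argmax(acf))
print(acf)                                            # every off-peak value is 0 or 1
print("PSL =", 20 * np.log10(np.abs(sidelobes).max() / peak), "dB")   # about -22.3 dB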
TABLE 4.2B A Combined Barker Code Example

Outer five-bit code:   +       +       +       –       +
Inner four-bit code:   ++–+    ++–+    ++–+    ––+–    ++–+
Combined Barker Codes
The Barker code's 13-bit maximum size limits the SNR improvement in radar applications. However, the known Barker codes can be combined to generate sequences longer than 13 bits with low sidelobes.
FIGURE 4.15 Binary phase-coded pulse compression signal and autocorrelation function: (a) thirteen-bit binary coded sequence and binary phase code waveform (T = 13τ), (b) autocorrelation computation using a tapped delay line with weighting and a filter matched to a pulse of width τ, and (c) correlator output for a 13-bit coded sequence.
For example, a 20:1 pulse compression ratio system can be made using either five groups of four-element Barker codes or four groups of five-element Barker codes. The five-group-of-four-element combination uses a five-bit Barker code in which each bit is replaced by a four-bit Barker code. Table 4.2B shows an example 20-bit combined Barker code. The combined code correlator is a combination of filters matched to the individual codes. Individual codes (and filters) are called subcodes (or subsystems or components) of the full code (system). A directly implemented correlator would consist of a tapped delay line whose impulse response is the time inverse of the full code, rather than a combination of subcode-matched filters.
Figure 4.16 shows an example of a combined filter implementing a 5 × 4 combined Barker code. The first filter stage, on the upper right, is the matched filter for the inner (four-bit) code. The second stage is the five-bit code matched filter, with the active taps spaced four taps apart. This filter is equivalent to the 20-tap delay line matched filter, which has an identical impulse response. However, the number of active arithmetic elements (+, –) in the combined filter is 9, the sum of the subcode lengths. The equivalent 20-bit code filter would have 20 elements, the product of the subcode lengths. These general results apply to codes that are the combination of any number of subcodes. The decreased number of arithmetic elements in the combined Barker matched filter has some advantages. First, there are many possible codes of various lengths appropriate for different modes of a radar system, such as surveillance, tracking, and identification. Second, the combined Barker code requires fewer arithmetic processing elements than a single tapped delay line matched filter. Third, the procedure for finding the tap weights of an n-length ISL-optimized filter requires solving n linear equations in n unknowns, and the solution becomes more difficult as n grows; sidelobe reduction techniques may be applied to a long code by combining filters optimized for each subcode.15 Fourth, the designer needs to know only the sidelobe characteristics of each component of the code, so determining the combined sidelobe characteristics is easy. For example, the PSL of a system is approximately the PSL of its weakest component. The ISL is approximately the root sum of the squares of the subsystem ISLs. The LPG is approximately the sum of the individual LPGs. However, the ISL and LPG show some sensitivity to order.16
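The equivalence between the cascaded subcode filters of Figure 4.16 and a single 20-tap matched filter is easy to verify numerically. The sketch below (Python with numpy) builds the 20-bit code of Table 4.2B, runs it through a four-bit matched filter followed by a five-bit filter with taps spaced four positions apart, and compares the result with the directly implemented 20-tap matched filter. The cascade uses only 4 + 5 = 9 active taps.

import numpy as np

b4 = np.array([1, 1, -1, 1])            # inner four-bit Barker code
b5 = np.array([1, 1, 1, -1, 1])         # outer five-bit Barker code
code20 = np.kron(b5, b4)                # the 20-bit combined code of Table 4.2B

stage1 = np.convolve(code20, b4[::-1])  # stage 1: filter matched to the inner code
outer = np.zeros(4 * (len(b5) - 1) + 1)
outer[::4] = b5[::-1]                   # stage 2: outer-code taps spaced four apart
cascade = np.convolve(stage1, outer)

direct = np.convolve(code20, code20[::-1])   # single 20-tap matched filter

print(np.allclose(cascade, direct))     # True: identical impulse response and output
print(int(direct.max()))                # correlation peak of 20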
4.10.2 PSEUDORANDOM CODES
Pseudorandom sequences (PN codes) are another approach to signal coding for pulse compression, with lower but not minimal sidelobes. PN codes are easy to generate, have good sidelobe properties, and are easily changed algorithmically. Maximal-length PN codes (maximal-length binary shift register sequences, maximal-length sequences, or simply m-sequences) are the most useful. These have many potential radar and spread spectrum communications system applications.2 A binary shift register with feedback connections that can generate a PN code is shown in Figure 4.17. The shift register is initialized in a nonzero state, and the system is clocked to circulate the bits, which are picked off at appropriate outputs. This process generates a sequence of length 2^n – 1, where n is the number of shift register stages. To get a maximal-length, or nonrepeating, sequence, the feedback paths must correspond to the nonzero coefficients of an irreducible, primitive polynomial
FIGURE 4.16 A correlator block diagram for a 5 × 4 combined Barker code.
modulo 2 of degree n. For example, a polynomial of degree five is (1) + (0)x + (0)x² + (1)x³ + (0)x⁴ + (1)x⁵, as shown in Figure 4.17. Note that the constant 1 term corresponds to the adder feedback to the first bit of the register. The 1-value coefficients of the x³ and x⁵ terms correspond to feedback between the adder and the third and fifth stages of the register. Generally, the shift register may be initialized in any nonzero state to generate an m-sequence. Figure 4.17 shows an initial state of 0,1,0,0,0. The properties of maximal-length pseudorandom codes are given in Table 4.3. A designer can construct m-sequence PN code generators from degree 1 to degree 34 (output length 2³⁴ – 1) using the list of irreducible polynomials in Peterson and Weldon's Error Correcting Codes.17 There are some practical PN code considerations. First, notice that for a large N = 2^n – 1, the peak sidelobe is approximately (1/N)^{1/2} in voltage when the signal is normalized to 1. The values depend on the particular sequence. For example, with N = 127 (n = 7), the PSL varies between –18 and –19 dB, as opposed to the –21 dB predicted by the rule of thumb. The rule-of-thumb approximation improves as N increases. Second, for a continuous, periodic flow of PN codes through a matched filter, the output is a periodic peak response of N (in voltage) and a flat range sidelobe response of –1. Third, PN codes are appropriate for pulsed radar applications in which only a few closely spaced targets are expected in the field of view. Fourth, these PN waveforms are unsuitable for high-target-density and extended clutter situations because of their relatively high ISL level. Fifth, PN codes must use either a fully tapped delay line or a bank of shift registers for each code bit to compress the signal at all ranges. Therefore, PN m-sequences have been more popular in communications and CW radar applications than in pulsed radar applications.
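The shift register of Figure 4.17 is simple to emulate in software. The sketch below (plain Python) assumes a Fibonacci-style register in which the third and fifth stages feed a modulo-2 adder whose output returns to the first stage; the exact wiring in the figure may differ, but any primitive degree-five polynomial gives a sequence of period 2⁵ – 1 = 31.

def m_sequence(taps=(3, 5), nstages=5, state=(0, 1, 0, 0, 0)):
    """Generate one period of an m-sequence from a binary shift register with feedback."""
    state = list(state)                      # initial state 0,1,0,0,0 as in Figure 4.17
    out = []
    for _ in range(2 ** nstages - 1):
        out.append(state[-1])                # output taken from the last stage
        feedback = 0
        for tap in taps:                     # modulo-2 sum of the tapped stages
            feedback ^= state[tap - 1]
        state = [feedback] + state[:-1]      # shift and feed back into the first stage
    return out

seq = m_sequence()
print(len(seq), sum(seq))                    # 31 chips; 16 ones and 15 zeros (balance property)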
4.10.3 POLYPHASE CODES
The Barker and combined Barker codes are biphase codes that carry information as the 180° phase shift of the carrier signal (φ). A more complicated waveform is possible using M multiple phase shift conditions such that

\phi_k = \frac{2\pi}{M}k \qquad (4.23)

for k = 0, . . . , M – 1, to code a long constant-amplitude pulse. Proper design will produce a desired matched filter output with suitable range sidelobes and a sufficient peak value for detection above noise. There are several approaches, including Frank, Welty, and Golay codes.

FIGURE 4.17 Maximal-length binary shift register with initial state 0,1,0,0,0, used for generating a degree-five pseudorandom sequence code for pulse compression. (Adapted from Ref. 7, Figure 15-9.)
TABLE 4.3 Maximum Length Pseudorandom Codes and Their Properties12

Degree (stages)   Polynomial   Lowest peak          Initial** conditions,   Lowest RMS           Initial conditions,
and length        (octal)      sidelobe amplitude   decimal                 sidelobe amplitude   decimal
1 (1)             003*         0                    1                       0.0
2 (3)             007*         –1                   1,2                     0.707                1,2
3 (7)             013*         –1                   6                       0.707                6
4 (15)            023*         –3                   1,2,6,9,10,12           1.39                 2,8
5 (31)            045*         –4*                  5,6,26,29; (9 conditions); 2,16,20,26        1.89; 1.74; 1.96     6,25; 31; 6
6 (63)            103*         –6                   1,3,7,10; 26,32,45,54; (9 conditions); (9 conditions)   2.62; 2.81; 2.38   35; 7; 1
7 (127)           203*         –9                   1,54                    4.03                 109
                  211*         –9                   9                       3.90                 38
                  235          –9                   49                      4.09                 12
                  247          –9                   104                     4.23                 24,104
                  253          –10                  54                      4.17                 36
                  277          –10                  14,20,73                4.15                 50
                  313          –9                   99                      4.04                 113
                  357          –9                   15,50,78,90             4.18                 122
8 (255)           435          –13                  67                      5.97                 135
                  453          –14                  (20 conditions)         5.98                 234
                  455          –14                  124,190,236             6.1                  246
                  515          –14                  54                      6.08                 218
                  537          –13                  90                      5.91                 90
                  543          –14                  (10 conditions)         6.02                 197
                  607          –14                  (6 conditions)          6.02                 15
                  717          –14                  124,249                 5.92                 156

*Only a single mod-two adder required. **Mirror images not shown.
Source: Nathanson, Radar Design Principles, p. 465.
Frank Codes
Frank polyphase codes can provide a discrete approximation of a linear FM chirp waveform.13,19 For each integer M, there generally is a Frank code of length N = M² that uses the phase shifts 2π/M, 2(2π/M), . . . , (M – 1)(2π/M), 2π. The Frank code of length N will have a peak-signal-to-sidelobe ratio approaching π√N for large values of N (as opposed to √N for pseudorandom codes). The Frank code is an alternative to pseudorandom codes when the radar application involves expected extended clutter or a high-density target environment. Because the Frank code discretely approximates linear FM signals, the autocorrelation function degrades with Doppler shifting. However, the degradation is not as fast as in binary phase shift codes, so the Frank code has potential applications where binary code Doppler sensitivity is a problem. Frank codes show the range-Doppler coupling inherent in linear FM waveforms. Because Frank codes are discrete, the smooth peak response degradation of linear FM waveforms may appear as a loss of detection at certain intermediate velocities, that is, blind speeds. Designers need to consider this property when designing Frank codes. Codes similar to Frank codes, but with better Doppler tolerance, are described by Kretschmer and Lewis in Ref. 19.

Welty and Golay Codes
Welty and Golay codes are another approach to total sidelobe cancellation. The two codes of a complementary pair have the unique property of producing autocorrelation sidelobes that are equal in magnitude but opposite in sign. Adding the two correlator outputs, as shown in Figure 4.18, reduces the sidelobes to zero and doubles the output peak. Welty codes are a general set of polyphase codes having this property. Golay codes are a subset of these sidelobe-canceling codes that are binary phase codes. These advantages are purchased at the expense of increased signal processing complexity and the need for two sets of matched filters and a summing junction.
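The Frank code construction described above is compact enough to sketch directly. The example below (Python with numpy; M = 4 is an illustrative choice) builds the length-16 Frank code from the phase relation 2πnk/M and compares its measured peak-to-sidelobe ratio with the π√N rule of thumb quoted above.

import numpy as np

M = 4                                        # Frank code length N = M^2 = 16
n, k = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
code = np.exp(1j * 2 * np.pi * n * k / M).ravel()    # chip (n, k) has phase 2*pi*n*k/M

acf = np.abs(np.correlate(code, code, mode="full"))
peak = acf.max()                             # N = 16
psl = np.delete(acf, np.argmax(acf)).max()
print("measured peak/sidelobe :", peak / psl)
print("pi * sqrt(N) rule      :", np.pi * np.sqrt(code.size))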
FIGURE 4.18 An example of a Golay (sidelobe-canceling) code pair of length 8. The codes + + + – – – + – and + + + – + + – + are correlated separately; the individual correlator outputs have sidelobes of equal magnitude and opposite sign, and the summing junction output is zero everywhere except for a single peak of 16.
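The sidelobe cancellation of Figure 4.18 can be reproduced directly from the two codes shown there. A minimal sketch (Python with numpy):

import numpy as np

a = np.array([1, 1, 1, -1, -1, -1, 1, -1])   # code A of the length-8 pair in Figure 4.18
b = np.array([1, 1, 1, -1, 1, 1, -1, 1])     # code B of the pair

ra = np.correlate(a, a, mode="full")
rb = np.correlate(b, b, mode="full")
print(ra)          # sidelobes such as ... -3, 0, 1, 8, 1, 0, -3 ...
print(rb)          # the same magnitudes with opposite signs
print(ra + rb)     # zero everywhere except a single peak of 16

The summed output matches the 0, 0, . . . , 16, . . . , 0 trace shown at the summing junction of the figure.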
4.11 PULSE COMPRESSION WAVEFORM GENERATION AND PROCESSING
The advantages of pulse compression have associated costs in system complexity and losses. This section describes methods for generating phase-coded waveforms and waveform processing methods.
4.11.1 GENERATING PHASE-CODED WAVEFORMS
Radar transmitters may generate phase-coded waveforms using either active or passive methods, as shown in Figure 4.19.

Passive Analog Filter
It is theoretically possible to design an analog filter that will give any desired waveform hr(t) for an impulse input. Practically, this is limited to certain waveforms, the most important being the linear FM waveform, which is generated by sending an impulse into a dispersive delay line and then band limiting and gating the output. Figure 4.19a shows a passive analog filter.

Memory Readout
Figure 4.19b shows how a waveform can be digitized, or computed, at equal time intervals. The discrete values are clocked out of a shift register and converted to an analog signal that is then upconverted to the transmitter frequency and transmitted.

Active Generation
Figure 4.19c shows a programmed control voltage driving a voltage controlled oscillator (VCO) to generate linear or nonlinear FM pulsed waveforms up to several hundred megahertz.

Active Phase Coder
Figure 4.19d shows how a code-controlled signal directs a sine wave pulse signal between a 0 and 180° phase shifter to generate a biphase code. Multiple phase codes and other complicated waveforms would use additional phase shifters.

Burst Generator
Figure 4.19e shows a coherent comb generator producing a series of discrete frequencies. A code controller selectively transmits single frequencies or combinations.15
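As a software analogy to the active phase coder of Figure 4.19d, the sketch below (Python with numpy; the carrier, chip length, and sample rate are illustrative assumptions) generates a 13-bit Barker biphase waveform by multiplying a carrier by ±1, which is equivalent to switching between the 0° and 180° phase-shifter outputs.

import numpy as np

fc, chip, fs = 10e6, 1e-6, 80e6                          # carrier, chip length, sample rate (illustrative)
code = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])   # 13-bit Barker code

samples_per_chip = int(round(chip * fs))                 # 80 samples per chip
chips = np.repeat(code, samples_per_chip)                # +/-1 held constant over each chip
t = np.arange(chips.size) / fs
waveform = chips * np.cos(2 * np.pi * fc * t)            # 0/180-degree biphase-coded pulse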
4.11.2 PASSIVE IF WAVEFORM PROCESSING
An analog filter can theoretically have any transfer function Hr(f), but there are practical limits on what can be done. Lumped constant LRC filters are primarily used for bandpass and spectrum weighting applications and are combined with dispersive acoustic delay lines, as shown in Figure 4.20.

Dispersive Delay Lines
Dispersive delay lines can take several forms, as shown in Figure 4.21. Generally, these convert the electrical signal into sound waves. The signal is filtered by acoustic diffraction and transit time differences and then converted from acoustic waves back into electrical signals. The problem is that large losses occur when converting electrical signals to sound and back again. A dispersive acoustic filter may require 30 to 60 dB of gain ahead of it. Table 4.4 shows typical delay line characteristics.
FIGURE 4.19 Different pulse compression waveform generators: (a) passive analog filter generation, (b) memory readout generation, (c) active generation, (d) active phase coder, and (e) burst generator. (Adapted from Ref. 16, Figures 26 and 27.)
FIGURE 4.20 Passive analog IF waveform processing methods: (a) lumped constant pulse detection, (b) dispersive delay (linear FM), and (c) surface wave tapped delay line. (Adapted from Ref. 16, Figure 28.)
TABLE 4.4 Typical Dispersive Delay Line Characteristics20

Type                             f0 (MHz)   B (× f0)   τB         τ (µs)   Insertion Loss (dB)
Metallic strip                   <30        ~0.1       <200/1     1000     30–60
Diffraction grating              15–150     <0.5       12 f0
Diffraction grating on a strip   <15        <0.5                  <550     ~40
Surface acoustic wave            600        <0.5
Metallic Strip Dispersive Delay Lines
Figure 4.21a shows a typical metallic strip delay line schematic. Generally, these units are thin steel or aluminum ribbons, coiled and placed into temperature-controlled ovens. The transfer function is a nearly linear delay vs. frequency, as shown. These units are used only with linear FM signals. Typical characteristics include f0 < 30 MHz and a bandwidth B = 0.1 f0.
FIGURE 4.21 Delay lines for compressed waveform processing: (a) metallic strip dispersive delay line (typical transfer function and characteristics shown), (b) wedge diffraction grating delay line, (c) perpendicular diffraction grating delay line, (d) diffraction grating on a strip, and (e) surface acoustic wave delay line. (Adapted from Ref. 16, Figures 29–32.)
Diffraction Grating Delay Line
A diffraction grating will pass a narrow band of frequencies and block others. There are two variations of the diffraction grating delay line (DGDL), using transmission and reflection gratings, as shown in Figure 4.21b and c; both are built using a nondispersive quartz medium. The wedge DGDL passes the acoustic signal depending on the grid spacing. In the case shown, higher-frequency signals must travel a greater distance to reach the shorter-wavelength diffraction
grating, while lower-frequency signals quickly reach the long-wavelength grating. The output signal is as shown. Reversing the orientation of the diffraction grating (i.e., so that the long wavelengths are farther away and the short wavelengths closer to the input) would reverse the output waveform. The perpendicular diffraction grating delay line, shown in Figure 4.21c, uses reflection instead of transmission to achieve the same effect. Typical DGDL characteristics are a center frequency f0 between 15 and 150 MHz and a bandwidth of 0.5 f0.

Diffraction Grating on a Strip
A diffraction grating may be etched onto a metal sheet. Different frequency sound waves are reflected from different portions of the etched pattern. Figure 4.21d shows the diffraction grating on a strip schematically. These devices have a typical center frequency below 15 MHz, a bandwidth of less than 7.5 MHz, and about a 40 dB insertion loss.

Surface Acoustic Wave Delay
These units use the sonic wave properties of a quartz or lithium niobate bar. Taps can be placed where desired, so they are more flexible than the bulk wave devices described earlier. These devices have potentially large bandwidths and great design flexibility. Westinghouse developed an inexpensive 13-bit Barker code device. Figure 4.21e shows a linear FM implementation.

Fiber Optic Delay Line
The dispersive delay lines described earlier have some fundamental limitations and are lower-frequency devices. Fiber optic filters modulate a laser signal with the RF signal, then process the optical signal and convert it back to an analog electrical signal. Fiber optics can handle higher frequency ranges or provide shorter time delays than other analog methods. Light travels about 11.8 inches in 1 nanosecond in a vacuum; in fiber optics, it travels about 6 to 10 inches per nanosecond. This feature means that it is possible to build delay lines with closely spaced taps. The finite impulse response filter, shown in Figure 4.22, is one approach to fiber optic delay line processing.
FIGURE 4.22 A fiber optic finite impulse response filter.
Other Optical Technologies
Optical processing may provide future analog and digital signal processing technologies.
4.11.3 DIGITAL SIGNAL PROCESSING FOR PULSE COMPRESSION
Continuing advances in computer technology make digital processing a feasible solution for some types of radar processing. The advantages of digital processing are stability, reproducibility, and flexibility. There are practically no limits on the waveforms and filter functions that digital processing can simulate, given sufficient processing time. Analog-to-digital conversion is a major problem in real-time radar signal processing, because a received analog signal must be converted to digital format before processing. The A/D requirements are driven by the signal bandwidth, which sets the conversion and sampling rates, the number of bits of resolution required, and the processing speed. If the signal bandwidth is too high for a single A/D converter, then multiple A/D converters or periodic sampling can be applied to digitize the signal. Section 4.1 of this chapter covered some approaches to high-speed analog-to-digital conversion.

Real Time vs. Delayed Processing
The amount of information extracted from a radar signal depends on the signal bandwidth and the amount of processing. Simple target detection requires minimal processing, because it determines only whether the returned signal level exceeded some detection threshold. Performing more sophisticated processing, such as taking Fourier transforms of return signals, extracting signal information by the singularity expansion method or bispectral processing,3 or high-resolution synthetic aperture radar processing, requires more time, storage, and usually off-line processing. Techniques such as parallel processing can increase the computational speed.
4.11.4 PRACTICAL DIGITAL PROCESSING EXAMPLES
Time and Frequency Domain Correlation Processing
Figure 4.23a shows a typical I and Q channel time-domain processor that performs the convolution (mathematical correlation) defined by

F(x) = \int_0^x f(t)\,g(x - t)\,dt \qquad (4.24)
in the time domain. In this case, the signal is digitized and then correlated against a set of digital reference values corresponding to the reference signal waveform. The summation at any instant is the cross correlation, or convolution, of the received and reference signals. Correlation can also be done digitally in the frequency domain, as shown in Figure 4.23b. Signals up to 10 MHz bandwidth and unlimited time duration can be handled by either configuration. In this case, the received signal Fourier transform is cross correlated with the coefficients of the reference signal Fourier transform.

Digital Processing for a Burst Radar
The burst radar transmits a sequence of signals, e.g., discrete frequency pulses, which are then processed as a unit. Figure 4.24 shows an L-band radar configured to transmit a 256-MHz bandwidth burst waveform, which puts this radar in the ultra-wideband class of systems. The signals are generated digitally and processed digitally as shown in Figure 4.25.20
FIGURE 4.23 Digital signal processing in the time and frequency domains: (a) time-domain digital processing, and (b) frequency-domain digital processing (β = 10 MHz, τ = 1 ms). (Adapted from Ref. 18, Figures 33 and 34.)
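The frequency-domain branch of Figure 4.23b can be imitated with an FFT: multiply the received spectrum by the conjugate of the reference spectrum and inverse transform. The sketch below (Python with numpy; the code length, delay, and noise level are illustrative assumptions) checks the result against direct time-domain correlation.

import numpy as np

def correlate_fd(received, reference):
    """Frequency-domain correlation (zero-padded to avoid circular wrap-around)."""
    n = received.size + reference.size - 1
    R = np.fft.fft(received, n)
    H = np.conj(np.fft.fft(reference, n))
    return np.fft.ifft(R * H).real

rng = np.random.default_rng(0)
ref = np.sign(rng.standard_normal(64))                  # a +/-1 reference code
rx = np.concatenate([np.zeros(100), ref, np.zeros(50)])
rx = rx + 0.1 * rng.standard_normal(rx.size)            # echo delayed 100 samples, plus noise

fd = correlate_fd(rx, ref)
td = np.correlate(rx, ref, mode="full")
print(np.argmax(np.abs(fd)))                            # 100: lag of the correlation peak
print(np.allclose(td, np.roll(fd, ref.size - 1)))       # True: same values, rotated ordering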
Digital systems can spread linear FM processing over the available time. After digitizing, the received signal data goes into memory for non-real-time processing, either faster or slower than real time. The problem is to complete all processing before new data arrives. Increasing the processing rate requires improved computer hardware.
FIGURE 4.24 Digital burst waveform implementation. (Adapted from Ref. 18, Figure 35.)
Trade-Offs between Analog and Digital Processing
A digital system can have an arbitrarily long TW product if the processing time is long enough. Increasing the word length will decrease processing errors. However, increased precision comes at higher hardware costs for memory, computational units, or word lengths. The designer must determine where the point of diminishing returns comes for a particular system.

Computational Components
These fall into the categories of memory and logic. Designing a digital radar processor will take the active cooperation of signal processing engineers and digital equipment designers to build a specialized processor. Using available computers for processing is another alternative.15
FIGURE 4.25 Burst waveform processing for high range resolution (∆f = 4 MHz per step, 64 steps, total bandwidth = 256 MHz). (Adapted from Ref. 18, Figure 36.)
4.12 CONCLUSIONS
One approach to enhancing radar performance is to expand the signal bandwidth by modulating the radar carrier frequency. Techniques such as linear FM chirp are used in radar sets. Other techniques, such as phase-coded waveforms, are already used in spread spectrum communication systems. Future designs will probably expand the bandwidth as requirements for range resolution, clutter suppression, imaging, and identification grow. Eventually, some special-purpose radars will reach the present proportional bandwidth ultra-wideband definition. A compromise between operational requirements, system complexity, and cost will drive each radar design. The designer must decide how much bandwidth and relative bandwidth is needed to accomplish a particular mission. Hardware, software, and available device characteristics will set the limits.
REFERENCES
1. Skolnik, Merrill I., Introduction to Radar Systems, 2nd ed., McGraw-Hill, New York, NY, 1980.
2. Dixon, Robert C., Spread Spectrum Systems with Commercial Applications, John Wiley & Sons, New York, NY, 1994.
3. Taylor, James D., ed., Introduction to Ultra-Wideband Radar Systems, CRC Press, Boca Raton, FL, 1995.
4. Astanin, L. Yu., A.A. Kostylev, Yu. S. Zinoviev, and A. Ya. Pasmurov, Radar Target Characteristics: Measurements and Applications, CRC Press, Boca Raton, FL, 1994.
5. Green, P.E., "The Output Signal to Noise Ratio of Correlation Detectors," IRE Trans. Information Theory, Mar. 1957, pp. 10–18, and p. 82, June 1957.
6. Singleton, H.E., "A Digital Electronic Correlator," Proceedings of the IRE, Dec. 1950, pp. 1422–1428.
7. Cohen, Marvin, "Pulse Compression in Radar Systems," Ch. 15, Principles of Modern Radar, J.L. Eaves and E.K. Reedy, eds., Van Nostrand Reinhold, New York, 1987, pp. 480–483.
8. Cook, Charles E., "Pulse Compression - Key to More Efficient Radar Transmission," Proceedings of the IRE, Mar. 1960, pp. 310–316.
9. Woodward, Philip M., "Information Theory and the Design of Radar Receivers," Proceedings of the IRE, 39(12), pp. 1521–1524, 1951.
10. Lee, Y.W., T.P. Cheatham, and J.B. Wiesner, "Application of Correlation Analysis to Detection of Periodic Signals in Noise," Proceedings of the IRE, 1950, pp. 1165–1171.
11. Farnett, Edward C. and George H. Stevens, "Pulse Compression Radar," Chap. 10, Radar Handbook, 2nd ed., M.I. Skolnik, ed., McGraw-Hill, New York, 1990.
12. Nathanson, F.E., Radar Design Principles, McGraw-Hill, New York, 1969.
13. Skolnik, Merrill I., ed., The Radar Handbook, McGraw-Hill, New York, 1969.
14. McMullen, "Radar Short Course Notes," Technology Service Corp., Washington, DC, 1978.
15. Sinsky, A.I., "Waveform Selection and Processing," Chapter 7, Radar Technology, Eli Brookner, ed., Artech House, Norwood, MA, 1977, pp. 123–142.
16. Cohen, M.N., "Binary Phase-coded Pulse Compression," Internal Report No. 1293-R-0021, Norden Systems, Norwalk, CT, 1979.
17. Peterson, W.W. and E.J. Weldon, Error Correcting Codes, 2nd ed., MIT Press, Cambridge, MA, 1972, Appendix C.
18. Frank, R.L., "Polyphase codes with good nonperiodic correlation properties," IEEE Trans. Info. Theory, Vol. 9, Jan. 1963, pp. 43–45.
19. Kretschmer and Lewis, "Doppler Properties of Polyphase Coded Waveforms," NRL Report 8635, Naval Research Laboratory, Washington, DC, 1982.
20. Purdy, R.J., "Signal Processing Linear Modulated Signals," Ch. 10, Radar Technology, Eli Brookner, ed., Artech House, Waltham, MA, 1986, pp. 155–162.
Section 3 Bandwidth and Power Spectral Density of Pulse Compression Waveforms

4.13 INTRODUCTION
All radar engineers are familiar with the rule of thumb that the bandwidth of a pulse radar signal is 1/τ, where τ is the pulse duration. This section explains the theory behind this relation and shows how it applies to pulse compression and spread spectrum signals. The results will let you quickly sketch the power spectral density (PSD) of a pulse compression signal. I have presented an abbreviated version of the derivation given by Alex W. Lam and Sawasd Tantaratana in Theory and Applications of Spread Spectrum Systems.1
4.14 BASEBAND SIGNAL ANALYSIS
Start by assuming that the radar signal is a random phase-shift coded, pulse modulated signal, as shown in Figure 4.26a. The baseband binary signal consists of unit-magnitude rectangular pulses, defined as
FIGURE 4.26 Baseband and bandpass signal autocorrelation and power spectral density determination: (a) baseband random signal waveform X(t), (b) autocorrelation function of X(t), (c) power spectral density of X(t), (d) autocorrelation function of Y(t), a square wave modulated sine wave (fc = 4/t´), and (e) power spectral density of Y(t). The square wave represents a best-case signal modulation. The transmitted signal has a bandwidth β = 2/t´, which defines the first frequency sidelobe bandwidth containing 95 percent of the signal power.1
p_T(t) = \begin{cases} 1, & 0 \le t \le T \\ 0, & \text{otherwise} \end{cases} \qquad (4.25)

The Fourier transform of Equation (4.25) will be

\mathcal{F}[p_T(t)] = T\,\mathrm{sinc}(fT)\,e^{-j\pi fT} \qquad (4.26)
where sinc(t) = sin(πt)/(πt). The area under both the sinc(t) function and the sinc²(t) function is 1:

\int_{-\infty}^{\infty}\mathrm{sinc}(t)\,dt = \int_{-\infty}^{\infty}\mathrm{sinc}^2(t)\,dt = 1 \qquad (4.27)
Take the case of a baseband signal with only binary values, as shown in Figure 4.26a. Here, the baseband signal relation is

X(t) = \sum_{k=-\infty}^{\infty} A_k\,p_{t'}(t - \gamma - kt') \qquad (4.28)
where t´ = the duration of one bit (a constant); {. . . , A–2, A–1, A0, A1, A2, . . .} are independent and identically distributed (i.i.d.) random variables taking the values ±A with equal probability; and γ is a random variable uniformly distributed from 0 to t´, which makes the random signal X(t) wide-sense stationary. Because this signal will be detected by correlation with a known reference signal, we need to know the autocorrelation function of X(t),

R_x(\tau) = E[X(t)X(t + \tau)] \qquad (4.29)

where E(·) denotes the expectation, so that

R_x(\tau) = \begin{cases} A^2\left(1 - \dfrac{|\tau|}{t'}\right), & |\tau| \le t' \\ 0, & \text{otherwise} \end{cases} = A^2\,\Lambda_{t'}(\tau) \qquad (4.30)

where Λt´(τ) is a triangular function of unit height and area t´, as shown in Figure 4.26b,

\Lambda_{t'}(\tau) = \begin{cases} 1 - \dfrac{|\tau|}{t'}, & |\tau| \le t' \\ 0, & \text{otherwise} \end{cases} \qquad (4.31)
Note that the Fourier transform of Λt´(τ) is t´ sinc²(ft´). Because X(t) is a real (not complex) random signal, Rx(τ) is symmetric with respect to τ. When the signal X(t) is autocorrelated with X(t + τ), they reach their maximum similarity, or output value, when they coincide at τ = 0. They have some similarity when they overlap but do not coincide, so that 0 < |τ| < t´; in this region, some portion of each increment of X(t) has the same value as X(t + τ). If γ = 0 and |τ| > t´, then X(t) and X(t + τ) have no similarity, because X(t) is independent of the value of X(t + τ); they correspond to different bit intervals. The Fourier transform of the autocorrelation gives the PSD, which is shown in Figure 4.26c:

\varphi_x(f) = A^2 t'\,\mathrm{sinc}^2(ft') \qquad (4.32)

Because of the sinc² nature of the function, the first PSD nulls are at f = ±1/t´. Other nulls occur at integer multiples n/t´. The PSD maximum is A²t´ at f = 0. Note that the frequency axis is plotted as ft´, regardless of the value of t´, and that the area under the PSD is A², the average power of X(t). The PSD shows that the average power spreads out over a large bandwidth if t´ is small, as happens for a high-bit-rate or short-duration signal. The bandwidth will be small if the value of t´ is large, for a low-bit-rate signal. For a baseband (modulation) signal, the bandwidth is defined as the first null bandwidth. Therefore, the bandwidth of the binary square wave X(t) is 1/t´, as shown in Figure 4.26c.
4.15 BANDPASS SIGNAL POWER SPECTRAL DENSITY
The baseband signal is the modulation that carries information. Now consider what happens when the baseband signal modulates a carrier signal, as occurs in some forms of binary coded radar waveforms for pulse compression. Using the baseband signal to modulate a carrier frequency, then

Y(t) = X(t)\cos(2\pi f_c t + \theta) \qquad (4.33)

where fc = a constant carrier frequency, and θ = a random phase uniformly distributed over (0, 2π) and independent of X(t), which makes Y(t) wide-sense stationary. Then, the autocorrelation function and PSD of Y(t), expressed in terms of those of X(t), are

R_y(\tau) = \frac{1}{2}R_x(\tau)\cos(2\pi f_c\tau) \qquad (4.34)

\varphi_y(f) = \frac{1}{4}\{\varphi_x(f - f_c) + \varphi_x(f + f_c)\} \qquad (4.35)

For the case of a random binary signal X(t), then

R_y(\tau) = \frac{A^2}{2}\Lambda_{t'}(\tau)\cos(2\pi f_c\tau) \qquad (4.36)

\varphi_y(f) = \frac{A^2 t'}{4}\{\mathrm{sinc}^2[(f - f_c)t'] + \mathrm{sinc}^2[(f + f_c)t']\} \qquad (4.37)

and

R_x(0) = \int_{-\infty}^{\infty}\varphi_x(f)\,df = A^2 \qquad (4.38)
The autocorrelation and PSD are shown in Figure 4.26d and e. The spectra are centered on ±fc. For bandpass signals, use the null-to-null width as the bandwidth. Therefore, the null-to-null bandwidth of Y(t) is 2/t´. The average power of Y(t) is Ry(0) = A²/2, which is half the average power of X(t). The plots use fc = 4/t´. Plotting the power distribution and cumulative power in Figure 4.27 shows that the 2/t´ bandwidth contains about 95 percent of the power.
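Equation (4.37) is easy to evaluate numerically. The sketch below (Python with numpy; A, t´, and fc are illustrative and chosen to match the fc = 4/t´ case plotted in Figure 4.26) confirms that the PSD peaks at the carrier and has its first nulls at fc ± 1/t´, so the null-to-null bandwidth is 2/t´.

import numpy as np

A, tprime = 1.0, 1e-6                  # amplitude and chip duration t' (illustrative)
fc = 4 / tprime                        # carrier frequency, as in Figure 4.26d and e
f = np.linspace(0, 2 * fc, 100001)     # frequency grid (80 Hz spacing)

# Eq. (4.37): PSD of the binary phase modulated carrier (np.sinc is sin(pi x)/(pi x))
psd = (A ** 2 * tprime / 4) * (np.sinc((f - fc) * tprime) ** 2 + np.sinc((f + fc) * tprime) ** 2)

for label, freq in [("carrier fc", fc), ("fc - 1/t'", fc - 1 / tprime), ("fc + 1/t'", fc + 1 / tprime)]:
    print(label, psd[np.argmin(np.abs(f - freq))])
# The peak is about A^2 t'/4 at fc; the values at fc +/- 1/t' are essentially zero (the first nulls).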
4.16 CONCLUSION
For UWB coded pulse compression signals, the signal bandwidth can be estimated as 2/t´, where t´ is the duration of each bit or chip in the baseband signal. The signal power spectral density follows the sinc² pattern and is centered around the carrier frequency fc. The signal bandwidth is for 95 percent of power, not the conventional –3 dB bandwidth.
FIGURE 4.27 Power allocation in a sinc² function: (a) the sinc² function and (b) cumulative power vs. frequency. Most of the power is contained in the main lobe defined by f = 1/t´. This is not the conventional –3 dB bandwidth used in many bandwidth definitions.
REFERENCE
1. Alex W. Lam and Sawasd Tantaratana, Theory and Applications of Spread Spectrum Systems, IEEE/EAB Study Guide, IEEE, Piscataway, NJ, 1994.
Section 4 Performance Prediction for Pulse Compression UWB Radars

4.17 INTRODUCTION

This section is about how small range resolution affects radar performance prediction. We know that pulse compression radar signals improve the signal-to-noise ratio for a given large size target. Now we must consider how radar resolution smaller than the target size will affect the returned signal. Fine range resolution turns an extended target into a set of reflecting points at different ranges, so there is no longer a single large reflector acting as a point target. Radar range performance prediction will require reexamining the basic physics to find a relation that will accurately predict results for a given waveform and class of targets. The absence of any standard methodology, or measurement data for over-resolved targets, will be a major problem in UWB radar design and analysis. Chapters 1, 2, and 3 also discussed this problem.
4.18 GENERAL RADAR PERFORMANCE EQUATION
Radar performance prediction accounts for the energy transmitted, reflected from a target, and appearing in a receiver. If the received signal energy exceeds some threshold value for a given probability of detection and false alarm, then detection occurs.1,2,3 However, when large bandwidth and pulse compression result in range resolution smaller than the target, then our conventional notions of radar cross section need reexamination.4 We will start by addressing the conventional radar range equation for an over-resolved target case. Figure 4.28 shows the geometry and factors in the radar equation. For the high-resolution UWB pulse compression radar case, the radar will receive reflected energy, amplify, and possibly heterodyne it to an intermediate frequency (IF), correlate the returned signal with a reference, and determine if the correlator output exceeds some detection threshold that indicates a target.
FIGURE 4.28 The performance prediction problem for a pulse compression radar and an over-resolved target. The transmitted waveform illuminates scattering centers at different ranges, and the correlator output shows separate target returns from each range increment ∆R. The figure also presents the conventional radar range equation

R_{max} = \left[\frac{P_t\,\tau\,G_tG_r\,\sigma\,F_t^2F_r^2}{(4\pi)^3 kT_sD_0C_bL}\right]^{1/4}

where Pt = transmitted signal power (at the antenna terminals), τ = transmitted pulse length, σ = target radar cross section, Ft = pattern propagation factor for the transmitting antenna to target path, Fr = pattern propagation factor for the target to receiving antenna path, k = Boltzmann's constant (1.38 × 10⁻²³ W/(Hz·K)), Ts = receiver noise temperature (K), D0 = detectability factor, Cb = bandwidth correction factor, and L = loss factor.
Note that all receiver models discussed in this chapter assume infinite dynamic range. Any actual receiver analysis must consider the receiver's linear range and limiter effects, which can seriously degrade performance relative to standard detection curves.1 Because a moving target will produce a different return with each pulse, we need to start by considering single-pulse detection with a pulse compressed signal. We must assume the single-pulse detection case, because a high-resolution UWB system may not have consecutive returns that can be integrated, as would occur with a low-resolution narrowband radar system.
4.18.1 MINIMUM DETECTABLE SIGNAL FOR GIVEN PERFORMANCE
Starting with conventional detection, take the case of a single-pulse automatic detector receiver with a lowpass or bandpass design, as shown in Figure 4.29. White noise added to the narrowband input signal represents the receiver chain thermal noise effects. Half of the composite signal plus noise is multiplied by a cosine wave and then lowpass filtered to get the in-phase component, or I channel. The same process applies to the quadrature component, or Q channel. The lowpass filter matches the pulse envelope spectral shape as closely as possible to provide a fully matched,
FIGURE 4.29 A single-pulse automatic detector with lowpass, or baseband, design for discussion of single-pulse detection theory.1
optimum filter. In this case, a filter whose 6 dB bandwidth is approximately equal to the reciprocal of the pulse duration will give nearly optimum performance. For practical receivers, consider that
1. From a practical standpoint, the bandpass and lowpass filters produce the same result mathematically.
2. Analog receivers tend toward bandpass designs, because analog signal processing components are easier to build at convenient intermediate frequencies.
3. Digital receivers are usually built as lowpass processors at baseband, since this uses the lowest possible frequencies and requires lower digitizing and processing rates.
Figure 4.29 shows a baseband implementation, because it is easier to describe analytically. The first performance analysis step is to determine the minimum signal-to-noise ratio (SNR) for a required probability of detection and false alarm for single-pulse detection, and then describe the statistics of the receiver output in Figure 4.29. In terms of receiver pulse amplitude, start by representing the I and Q components as Gaussian random variables given by

I = a\cos\theta + x \qquad (4.39)

Q = a\sin\theta + y \qquad (4.40)

where a = the peak amplitude of the pulse at the filter output; θ = the phase difference between the STALO (stable local oscillator) and the received signal; and x and y = independent Gaussian distributed random variables with zero mean and variance equal to the total system noise power.
Then,

\overline{x^2} = \overline{y^2} = P_n = kT_sB_n \qquad (4.41)

where Pn = the noise power; k = Boltzmann's constant, 1.38 × 10⁻²³ W/(Hz·K); Bn = the receiver noise bandwidth; and Ts = the receiving system noise temperature. The linear envelope R of the signal plus noise is

R = (I^2 + Q^2)^{1/2} = [(a\cos\theta + x)^2 + (a\sin\theta + y)^2]^{1/2} = [(a + x')^2 + y'^2]^{1/2} \qquad (4.42)

where x´ and y´ are new, independent, equal-variance Gaussian random variables obtained by the coordinate rotation

x' = x\cos\theta + y\sin\theta \qquad (4.43)

y' = y\cos\theta - x\sin\theta \qquad (4.44)
The probability density function (PDF) of R was derived by Rice (1945) and is now referred to by his name, so the Ricean PDF is

f_{sn}(R) = \frac{R}{P_n}\exp\left[-\frac{(R^2 + a^2)}{2P_n}\right]I_0\!\left(\frac{aR}{P_n}\right) \qquad (4.45)

where I0 = the zero-order Bessel function of imaginary argument. When no signal is present, i.e., a = 0, and because I0(0) = 1, the distribution reduces to the Rayleigh distribution

f_n(R) = \frac{R}{P_n}\exp\left(-\frac{R^2}{2P_n}\right) \qquad (4.46)
The probability of false alarm is the probability that the threshold voltage is exceeded when no signal is present, that is,

P_{fa} = \int_T^{\infty}f_n(R)\,dR = e^{-T^2/(2P_n)} \qquad (4.47)
where T = the threshold level. Similarly, the probability of detection is given as

P_d = \int_T^{\infty}f_{sn}(R)\,dR \qquad (4.48)
We can show that, for a given threshold level, the probability of detection depends only on the SNR defined by

\frac{S}{N} = \frac{\tfrac{1}{2}a^2}{P_n} = \frac{P_s}{kT_sB_n} \qquad (4.49)
where Ps = the signal power at the detector input and Ts = the system noise temperature. Figure 4.30 shows the detectability factor plotted against probability of detection for given levels of false alarm. Note that the results were derived for a linear detector, so Figure 4.30 applies equally to any detector law, provided only single-pulse detection is considered. (N.B. For noncoherent integration of several pulses, the detector law can have a significant effect.) To show an application, assume that the detector output is sampled at intervals ∆t for which the output statistics are independent. Then, the false-alarm rate is

R_{fa}^{s} = \frac{P_{fa}}{\Delta t} \qquad (4.50)
Usually, independent or nearly independent samples result from sampling the output at a rate equal to the system bandwidth or lower (i.e., ∆t ≥ 1/Bn).
DETECTABILITY FACTOR (SIGNAL-TO-NOISE RATIO), DECIBELS
15
P fo =10 -16 10 -14
10 -12 10 -10 10 -8
10
10 -6 10 -5 10 -4 10 -3
5
10 -2
0
-5 10 -1
-10
-15 .001
.01
.1
.5
.9
.99
.999
PROBABILITY OF DETECTION
FIGURE 4.30 Signal-to-noise ratio (detectability factor) for single-pulse detection. (Reprinted with permission from Lamont Blake, Radar Range Performance Analysis, Artech House, Inc., Norwood, MA, USA. www.artechhouse.com.) © 2001 CRC Press LLC
R_{fa}^{s} = P_{fa}B_n \qquad (4.51)

For simple pulse radars, Bn = 1/τ, and Equation (4.51) becomes the false-alarm probability divided by the pulse length. As written, Equation (4.51) is more general and can be applied to both pulse-compression and simple pulse systems. For a system that continuously compares the detector output to a threshold, DiFranco and Rubin5 show that the average false-alarm rate is given by

R_{fa}^{s} = \sqrt{-4\pi\ln P_{fa}}\;B_{rms}P_{fa} \qquad (4.52)

where Brms = the RMS receiver bandwidth defined as

B_{rms}^2 = \frac{1}{B_n}\int_{-\infty}^{\infty}f^2G(f)\,df \qquad (4.53)

where G(f) = the power spectral response of the receiver, assumed to be normalized so that the maximum gain is unity, and Bn = the noise bandwidth.1
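Equations (4.45) through (4.49) can be exercised numerically to relate SNR, probability of detection, and probability of false alarm for the single-pulse case. The sketch below (Python with numpy and scipy; the trial SNR and Pfa values are illustrative) sets the threshold from Equation (4.47) and integrates the Ricean PDF of Equation (4.45) above it.

import numpy as np
from scipy.special import i0e        # exponentially scaled modified Bessel function I0

def detection_probability(snr_db, p_fa, pn=1.0):
    """Single-pulse Pd for a linear envelope detector, per Eqs. (4.45)-(4.49)."""
    T = np.sqrt(-2.0 * pn * np.log(p_fa))            # threshold from Eq. (4.47)
    a = np.sqrt(2.0 * pn * 10.0 ** (snr_db / 10.0))  # pulse amplitude from Eq. (4.49)
    R = np.linspace(T, a + 12.0 * np.sqrt(pn), 200000)
    # Ricean PDF of Eq. (4.45), rewritten with i0e to avoid overflow of exp and I0
    pdf = (R / pn) * np.exp(-((R - a) ** 2) / (2.0 * pn)) * i0e(a * R / pn)
    return float(np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(R)))   # trapezoidal integration

print(detection_probability(snr_db=13.2, p_fa=1e-6))   # close to 0.9, a classic design point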
4.18.2 EFFECTS OF PULSE COMPRESSION ON DETECTABLE SIGNAL LEVEL
The previous section showed how to set the minimum signal-to-noise ratio for a given performance level, i.e., a desired probability of detection with a given false alarm rate. This section covers the detection (or visibility) factor, which helps to define the minimum detectable signal level for single-pulse detection with and without pulse compression and losses. The detectability factor for pulse radar is the ratio of single-pulse signal energy to noise power per unit bandwidth that provides stated probabilities of detection and false alarm. It is measured in the intermediate-frequency amplifier, using an intermediate-frequency filter matched to the single pulse, followed by optimum video integration.6 Vmin is the minimum detectable power for a given Pd and Pfa, which occurs when the filter characteristics are matched to the waveform. The lowest value of Vmin will be V0. Then, Vmin ≥ V0, and Cb is a factor, Cb ≥ 1, such that Vmin = V0Cb. When Cb = 1 the filter is matched, and Cb > 1 otherwise, so that

(S/N)_{min} = \frac{V_0C_b}{B_n\tau} \qquad (4.54)

Note that this is also called the visibility factor, a term dating from the times when radar detection depended on visual observation of the CRT display.1 V0 is also called D0. Therefore, letting D0 = V0 to conform to later usage,

(S/N)_{min} = \frac{D_0C_b}{B_n\tau} \qquad (4.55)

so that the visibility factor D0 is determined to be

D_0 = \frac{(S/N)_{min}B_n\tau}{C_b} \qquad (4.56)
4.18.3 DETECTABILITY FACTOR AND TYPES OF INTEGRATION
Receiver noise has a pulse-to-pulse random amplitude and phase fluctuation. It can be shown that the average power level of the noise is N (the number of integrated pulses) times larger after coherent (phasor) addition. Because the signal power increases by N² for coherent integration, while the noise power increases by N, the net result of coherent integration is to increase the SNR by a factor of N, as shown in Figure 4.31a. The net result of coherent noise integration, as shown in Figure 4.31b, is a random vector whose amplitude may or may not exceed the amplitude of the individual pulses. Coherent integration does not reduce the noise variation, because both the mean noise power and the standard deviation of the noise power increase by the same factor N. Thus, the threshold setting is the same with or without coherent integration; however, the output signal power increases as N². The output statistic after coherent integration of N pulses is identical to that which would be achievable by a single pulse N times as long, the condition in pulse compression. Thus, to
FIGURE 4.31 Coherent integration of signals and noise: (a) phasor diagram showing the effect of coherent integration on the signal, and (b) phasor diagram showing the effect of coherent integration on random noise. (Reprinted with permission from Lamont Blake, Radar Range Performance Analysis, Artech House, Inc., Norwood, MA, USA. www.artechhouse.com.)
compute the detectability factor for the coherent addition of N pulses, simply reduce the single-pulse detectability factor by the processing gain N. So, for a pulse compression system, the result is

D_0^c(N) = \frac{D_1}{N} \qquad (4.57)

where D_0^c(N) = the detectability factor for the coherent addition of N pulses, and D1 = the detectability factor for single-pulse detection.
4.19 RANGE PERFORMANCE PREDICTION FOR PULSE COMPRESSION
Coherent integration of pulses and pulse compression produce the same result in reducing the single-pulse detectability factor. Substituting, the maximum range becomes

R_{max} = \left[\frac{NP_t\tau\,G_tG_r\,\sigma\lambda^2F_t^2F_r^2}{(4\pi)^3kT_sD_1C_bL}\right]^{1/4} \qquad (4.58)

where
Ft and Fr = transmitter and receiver antenna propagation factors
Gt and Gr = transmitter and receiver antenna gains
σ = target radar cross section
λ = signal center frequency wavelength
D1 = single-pulse detectability factor
N = number of pulses integrated for coherent integration, or the pulse compression ratio for pulse-compressed signals
Cb = filter factor, > 1
L = system losses not accounted for elsewhere
Note that the quantity NPtτ is the total energy transmitted during the coherent processing time of the receiver, so that by letting

E_t^c = NP_t\tau \qquad (4.59)

our maximum range equation becomes

R_{max} = \left[\frac{E_t^c\,G_tG_r\,\sigma\lambda^2F_t^2F_r^2}{(4\pi)^3kT_sD_1C_bL}\right]^{1/4} \qquad (4.60)

This equation was derived for coherent pulse integration, but it applies to almost any type of radar system as long as E_t^c corresponds to the total energy transmitted during the coherent processing time of the receiver. Therefore, for a single-pulse system, the coherent integration time is equal to the pulse duration, and

E_t^c = P_t\tau \qquad (4.61)
For a pulse-compression system, the coherent processing is performed within one pulse, and the above equation applies if τ is interpreted as the uncompressed pulse length and if any noncoherent integration is accounted for by using an appropriate value of D0 in place of D1.1 Pulse compression techniques permit transmitting a long coded pulse of length τ and compressing it to a shorter pulse length τc in the receiver. Pulse compression ratios τ/τc, or the equivalent time-bandwidth product Bτ, can be as high as 100,000, although smaller values are used.8 The effect is that of a radar transmitter transmitting a pulse length of τc at a power level equal to Pt(τ/τc). To determine which pulse length to use in pulse radar performance prediction, consider that the product Ptτ represents the transmitted pulse energy. So, if Pt is the actual transmitted power, then τ must be the actual transmitted, or uncompressed, pulse length.
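The sketch below (plain Python) evaluates Equation (4.60) for one set of parameters; every numerical value here is an illustrative assumption for demonstration, not data from the text.

import math

Pt, tau, N = 1.0e3, 10e-6, 128        # peak power (W), uncompressed pulse (s), pulses integrated
Et_c = N * Pt * tau                   # total coherent energy, Eq. (4.59)
Gt = Gr = 10 ** (30 / 10)             # 30 dB antenna gains
lam = 0.03                            # wavelength (m), about 10 GHz
sigma = 1.0                           # target RCS (m^2)
Ft = Fr = 1.0                         # free-space pattern propagation factors
k = 1.38e-23                          # Boltzmann's constant (W/(Hz*K))
Ts = 500.0                            # system noise temperature (K)
D1 = 10 ** (13.2 / 10)                # single-pulse detectability factor (Pd = 0.9, Pfa = 1e-6)
Cb, L = 1.2, 10 ** (6 / 10)           # filter factor and 6 dB of miscellaneous losses

numerator = Et_c * Gt * Gr * sigma * lam ** 2 * Ft ** 2 * Fr ** 2
denominator = (4 * math.pi) ** 3 * k * Ts * D1 * Cb * L
r_max = (numerator / denominator) ** 0.25
print("R_max = %.1f km" % (r_max / 1000.0))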
4.20 PERFORMANCE PREDICTION ANALYTICAL CONSIDERATIONS
The validity of any performance prediction process depends on how closely the assumptions model physical reality. Given that philosophical approach, a performance analysis must consider ideal and matched filter losses.
4.20.1 LOSSES IN PULSE COMPRESSION RECEIVERS
Receiver pulse compression is done by an ideally matched filter. The phase or frequency coding shortens, or compresses, the received echo pulse by a factor equal to the time-bandwidth product, that is, the product of the time duration τ and the frequency bandwidth B. Compression increases the effective power level by the same factor. Unfortunately, practical pulse compressors are not perfect matched filters, and there will be some loss, probably in the range of 0.5 to 2 dB. Time and range sidelobe suppression requires some departure from the true matched filter and produces losses that can be included in either Cb or L. An accurate analysis must consider the following:
1. Uncorrected Doppler shift of the received signal. As discussed earlier, a Doppler-shifted coded pulse compression signal will produce a smaller output than an unshifted signal.
2. Digitization loss, resulting from the limited number of bits handled in the processor, or range-cusping loss.
3. Pulse compression is indirectly useful against noise jamming by permitting greater transmitted pulse energy without exceeding peak power limitations while keeping the range resolution.
4. Pulse compression is useful against pulse interference from other sources. Unless the radar is deliberately jammed, the interfering pulses will probably not be properly coded for optimum processing in the pulse compression filter. The pulse compression filter will reduce the amplitude of uncoded pulses and smear them out in the time or range dimension. Remember that correlators will produce an output from the desired signal or from any signal that more or less resembles the reference signal. Effective jamming or deception could result from carefully selected coded sequences transmitted back to the radar set.
4.20.2 PULSE COMPRESSION AND SIGNAL-TO-CLUTTER RATIO
Pulse compression produces an improved signal-to-clutter (S/C) ratio, because the clutter power is proportional to the compressed pulse length. This is because τc is used in the radar range resolution cell equation for low grazing angles ψ, where the pulse length determines the clutter area A, so that

A = \frac{\pi}{2}R_c^2\,\theta_1\phi_1\,\frac{c\tau_r}{2}\sec\psi \qquad (4.62)
where ψ = the grazing angle at the intersection, φ1 = the azimuth beamwidth, and θ1 = the elevation beamwidth. For the low grazing angle case,

A = R_c\,\phi_1\,\frac{c\tau_c}{2}\sec\psi \qquad (4.63)
For the case of volume clutter, as might be encountered in jamming,1

V = \frac{\pi}{2}R_c^2\,\theta_1\phi_1\,\frac{c\tau_c}{2} \qquad (4.64)

4.21 TARGET EFFECTS ON UWB PERFORMANCE PREDICTION
Conventional radar performance prediction assumes that the target is much smaller than the range resolution and acts as a single equivalent reflecting surface with a value called the radar cross section (RCS). For monostatic radar cases, the RCS depends on the azimuth angle of the radar signal's arrival, as shown in Figure 4.32. The rate of fluctuation of RCS can be classified by systems such as Swerling Class 1, 2, or 3.1 Bistatic RCS prediction will be much more complicated and must consider both the arrival angle and the reflected energy at specific backscatter angles. Nicholas J. Willis gives an excellent discussion of bistatic radar cross section estimation in Reference 3. The detected signal changes when the range resolution is smaller than the target. Imagine a high-resolution radar system using a linear FM chirp signal. The transmitted chirp frequency varies linearly with time during the pulse, as shown in Figure 4.33. This case results in three returns from large scattering surfaces at different ranges. When processed, each return will appear as shown in the lower plot. It is apparent that the strength of each pulse depends on the effective target RCS at each range increment. An over-resolved target will produce a series of returns from the effective radar cross section at each particular range and aspect angle. Assuming that the azimuth resolution is much larger than the target, we can expect three general cases of target return for different spatial resolutions, as shown in Figure 4.34a, b, and c.4 If the azimuth resolution is smaller than the target, then there would be further resolution of the target into scattering centers grouped in range and azimuth, as shown in Figure 4.35. Examining the physical results of high-resolution radar shows that single-pulse high-resolution detection will depend on the largest individual return exceeding the detection threshold. This means that the target high-resolution RCS may be less than for low-resolution radars. Range performance prediction using Equation (4.60) will require knowing the RCS of the most reflective target segment. Selecting threshold-of-detection settings may become a complicated process, and setting the threshold too low can result in high probabilities of false alarm. Astanin suggests that the radar cross section of each resolution cell will be σ ≈ λ₀², where λ0 is the average wavelength of the radar signal.8 To take high-resolution effects into account in performance prediction, define σ = aλ₀², where a is an exact relation for a given class of targets. Inserting this relation into Equation (4.60) produces
FIGURE 4.32 Typical azimuthal radar cross section variation. This case shows the S-band RCS of an aircraft measured at different angles. Note the large variations over small changes in viewing angle. (Reprinted from J. D. Taylor, Introduction to Ultra-Wideband Radar Systems, CRC Press, 1995.)
R_{max} = \left[\frac{E_t^c\,G_tG_r\,a\lambda_0^4\,F_t^2F_r^2}{(4\pi)^3kT_sD_1C_bL}\right]^{1/4} \qquad (4.65)

The high-resolution radar cross section σ = aλ₀² may vary with aspect angle. For any given azimuth and elevation angle combination, there will be multiple returns that depend on the physical geometry and characteristics of each resolution cell. There could be many resolution cells with returns close to that of the strongest resolution cell. In any case, the target characteristics must be measured against the proposed radar signal waveform to make accurate performance predictions.
4.21.1 TARGET IMAGING

If the receiver can handle the array of high-resolution azimuth and range returns, then the radar returns could form a target image for detection or identification, which may help the false alarm problem.
t FIGURE 4.33 Stretch waveform processing showing the return from a target with three scattering centers. (Reprinted from Donald R. Wehner, High Resolution Radar, 2nd ed., Artech House, Inc., Norwood, MA, USA. www.artechhouse.com.)
Figure 4.36 shows an example of target response from a 3.2 GHz pulse-compressed high-resolution return.4 After processing and applying some a priori knowledge, the radar processor could form an image like Figure 4.37 or 4.38. With sufficient information and processing, some identification of the target aircraft type may be possible. In most practical cases, the radar azimuth resolution probably will not be good at long ranges, so the target response will vary with range due to the changing azimuth or cross-range resolution. High-resolution radar performance might be specified in terms of maximum detection range for a given class of target. Another approach would be to specify a reliable identification range for a given class of target.
4.22 CONCLUSIONS ON PERFORMANCE PREDICTION

Predicting high-resolution ultra-wideband radar performance requires understanding the interaction of waveforms and targets. Conventional radar cross section will no longer apply to performance prediction, because the target will become a set of small reflectors at different ranges. The one thing we might safely assume is that any of these small reflectors will probably have less RCS than for conventional low-resolution radars. On a practical level, a requirement for an ultra-wideband radar would be to detect, identify, and track a given class of target at some maximum range. In this case, serious high-resolution radar design may require an extensive target high-resolution RCS measurement program. Ultimately, the operations analyst and radar designer must work together to find some optimal resolution, pulse compression ratio, and bandwidth combination for a given purpose such as tactical air surveillance.
FIGURE 4.34 Effects of radar resolution on the return signal and target detection. The correlator output versus time is shown for (a) resolution > L, (b) resolution about L/10, and (c) resolution << L.
FIGURE 4.35 If the target is over-resolved in range and azimuth, then the detected signal will be a set of returns (power versus time) for each azimuth cell (A, B, C, D) at each range resolution.
FIGURE 4.36 Range and angle video of a flying WV-2 Super Constellation aircraft taken with a high-resolution monopulse radar using a short pulse waveform. This shows the breakup of an over-resolved target into individual scatterers. (From Howard, D.D., “High resolution monopulse tracking radar,” IEEE Trans. Aerospace and Electronic Systems, Vol. AES-11, No. 5, Sept. 1975, pp. 749–755.)
FIGURE 4.37 A 3-D image of a small craft at sea made by the Naval Ocean Systems Center using a 3.2 GHz stepped frequency waveform. (Reprinted from Donald R. Wehner, High Resolution Radar, 2nd ed., Artech House, Inc., Norwood, MA, USA. www.artechhouse.com.)
FIGURE 4.38 High resolution radar images of a MiG-19 and F-104 aircraft based on measurements and processing with a priori knowledge. (Reprinted with permission from L. Y. Astanin, Radar Target Characteristics: Measurements and Applications. Copyright CRC Press.)
REFERENCES
1. Blake, Lamont V., Radar Range-Performance Analysis, Artech House, Norwood, MA, 1986.
2. Edde, Byron, Radar Principles, Technology and Applications, PTR Prentice Hall, Englewood Cliffs, NJ, 1993.
3. Skolnik, Merrill I., The Radar Handbook, 2nd ed., McGraw-Hill, New York, NY, 1990.
4. Wehner, Donald R., High Resolution Radar, 2nd ed., Artech House, Norwood, MA, 1994.
5. DiFranco, J.V. and Rubin, W.L., Radar Detection, Prentice Hall, Englewood Cliffs, NJ, 1968.
6. Astanin, L. Yu. and Kostylev, A.A., Chapter 6, “Design of Radar Meters Using UWB Signals,” in Astanin, L. Yu., Kostylev, A.A., Zinoviev, Yu. S., and Pasmurov, A. Ya., Radar Target Characteristics: Measurements and Applications, CRC Press, Boca Raton, FL, 1994, pp. 201–235.
5 Compression of Wideband Returns from Overspread Targets

Benjamin C. Flores and Roberto Vasquez, Jr.
CONTENTS
5.1 Abstract
5.2 Radar Imaging Principles
5.3 Random Signals for High Resolution
5.4 Binary Phase Codes
5.5 Ideal Image of a Gaussian Scattering Function
5.6 Conclusions
5.7 Acknowledgment
References
5.1 ABSTRACT

Radar imaging is an advanced remote sensing technique that maps the reflectivity of distant objects by transmitting modulated signals at radio frequencies and processing the detected echoes. By proper waveform selection, it is currently possible to image the surface of planets or asteroids from Earth with a relatively high degree of resolution, despite the astronomical distances to these objects. Waveforms that are used for radar astronomy are characterized by a large spectral bandwidth and long time duration. In particular, random waveforms, such as binary phase codes and frequency hop codes, are used in radar astronomy because of their high resolution capabilities and the low levels of signal clutter they tend to exhibit upon compression. The ambiguity function, which is the correlation of a signal with its return for a moving target, shows the resolution and sidelobe levels achievable with the selected signal. This function is used to predict the amount of signal clutter an image will have. In radar astronomy, a thumbtack ambiguity function is desired in which there is a narrow peak response at zero delay and Doppler indicating a single, stationary scatterer, and low, semi-uniform sidelobes elsewhere. In this chapter, analytical expressions for the mean, mean square, and variance of the ambiguity function for random binary phase codes are given to characterize the resolution and average sidelobe behavior for different length codes. The expression for the mean, which lacks sidelobe structure in delay, is used to demonstrate an ideal case of imaging using binary phase codes. Imaging is accomplished by convolving the mean of the ambiguity function with a model of an overspread planetary target, yielding a two-dimensional reflectivity distribution of the model. Similarly, an expression for the mean of the ambiguity function for frequency hop codes is convolved with the target model to obtain an image of the target. The parameters that are required for each waveform to obtain equivalent images are compared to show the advantages and disadvantages of each waveform implementation. To characterize the average amount of signal clutter that is to be expected in delay and Doppler for binary phase codes, expressions for the mean square and variance of the ambiguity
function are given. Plots for these expressions are offered to show that the average signal clutter level decreases for longer codes.
5.2 RADAR IMAGING PRINCIPLES

5.2.1 RADAR CONCEPTS
The most basic radar experiment entails the transmission of a signal in the direction of some target, which reflects the signal back toward the radar, and the subsequent detection of the echo signal. In modern systems, the echo signal may be used to describe a target’s location (i.e., range), analyze target motion, and map the scattering features of the target. For a detailed description of the radar signal processing required for these purposes, see, e.g., Refs. 1, 2, 9, 27, 29, 31, 32, 37, and 40. Target position and motion estimation are traditional priorities for military systems, aircraft sensors, weather monitoring, and traffic monitoring. Target imaging has found many uses in diverse scientific fields such as radio astronomy, automatic object recognition, and remote sensing, among others. Planetary radar astronomy is radar imaging applied to spatial objects within our solar system. A major concern that arises for such an application stems from the large distances to the targets and a system’s capability to detect weak echoes from them.* Consequently, even with the best antennas and low-noise transceiver systems,† the task of successfully detecting an echo is overwhelming.

To conceptually understand the radar imaging process, assume the simple case of a short, monotone (single-frequency) pulse of duration T seconds. The signal reflected from a target is received by the radar after a delay time τ0, which corresponds to the range to the target. However, since the pulse has a duration of T seconds, the reflected signal contains information not only on the parts of the target located at a distance (c/2)τ0 but also about its parts located at distances up to (c/2)(τ0 + T). Consequently, we try to make the duration of the signal as small as possible in an effort to simulate an impulse in time, thus increasing the accuracy in range measurements. However, this is done at the expense of signal power. For target detection, the energy E of the transmitted signal must exceed some threshold level E0 of noise. Realizing that the energy of the transmitted signal is equal to the product PT of the transmitter power P and the signal duration T, it is evident that a short pulse has very limited energy. Therefore, even with the most powerful transmitter with power P0, from the condition P0T ≥ E0, it follows that the duration of the transmitted signal cannot be shorter than E0/P0. In applications where the object being imaged is far from the radar, a short pulse does not have the energy required to properly detect the echo. Thus, the short monotone pulse is not commonly used for imaging.

The previous arguments lead to the concept of resolution. Resolution is a measure of the minimum distance between two scatterers at which they can still be detected individually. For two-dimensional imaging, this is separated into range and Doppler resolution. Range resolution refers to the minimum separation that can be detected along the radar’s line of sight, as shown in Figure 5.1. For any waveform, this is proportional to the reciprocal of the signal bandwidth.
FIGURE 5.1 Scatterer detection in range.
* The power of a signal that has traveled a distance d toward an object is proportional to 1/d². The power of an echo returning to the radar is proportional to 1/d⁴. Thus, for large distances, signal power at any point is very low relative to the transmitted power. † NASA uses the 300 m Arecibo radiotelescope and the Goldstone radio astronomy facility.
In the case of a single, monotone pulse, the signal bandwidth is equal to the reciprocal of the pulse duration

β = 1/T     (5.1)

Thus, pulse duration is related to range resolution by the expression

T = 1/β = (2/c)·Δrs     (5.2)

so that the range resolution in meters is

Δrs = (c/2)·(1/β) = (c/2)·T     (5.3)
where c is the speed of light in free space. Note that increasing the signal bandwidth β enables higher range resolution (corresponding to a smaller numerical value of ∆rs). Thus, there is a tradeoff of signal energy for a larger signal bandwidth for the single, monotone pulse. This trade-off can be overcome by using phase or frequency modulated waveforms. These signals have a phase or frequency that varies in some fashion within a certain bandwidth over the duration of the signal. By maintaining this bandwidth through modulation, the signal duration can be increased to allow the transmission of more energy for better signal detection in the presence of noise. In other words, modulating the signal makes the pulse duration and bandwidth independent of each other.
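As a quick numerical illustration of Equations (5.1) through (5.3) (an assumed example, not from the original text), the following lines compute the pulse duration and range resolution for a few bandwidths:

```python
c = 3.0e8  # speed of light, m/s

for beta in (2.3e3, 1.0e6, 500e6):  # Hz; 2.3 kHz matches the planetary example later in the chapter
    T = 1.0 / beta              # Equation (5.1): monotone pulse duration
    delta_r = c / (2.0 * beta)  # Equation (5.3): range resolution in meters
    print(f"beta = {beta:.3g} Hz  ->  T = {T:.3g} s,  delta_r = {delta_r:.3g} m")
```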
5.2.2 SIGNAL PROCESSING
A commonly implemented receiver for imaging applications is the correlator receiver or matched filter. This type of receiver is equivalent to a filter whose impulse response matches the transmitted waveform.31,37 Echoes from a target are received by the matched filter and correlated with a copy of the transmitted signal. For a stationary point target, the output of the filter is the autocorrelation of the signal. The general mathematical equation for the autocorrelation of a signal s(t) is defined in Equation (5.4).

R(τ) = ∫ s(t)·s*(t – τ) dt     (5.4)
where t is time and τ is time delay. The autocorrelation is made up of a main lobe at zero delay τ = 0 and sidelobes* elsewhere. In general, the autocorrelation of any sequence has its maximum at τ = 0 and is also symmetric about this point.16 This maximum, known as the main lobe of the autocorrelation, is proportional to the signal’s energy and indicates the presence and location of a point scatterer. The width of the main lobe reveals the range resolution for the signal s(t). An example of an autocorrelation of a Barker code† is shown in Figure 5.2. Notice that the sidelobes may lead to ambiguities in the position of other weaker scatterers. The sidelobes of the autocorrelation are in a sense self-noise, since they are noise components inherent to the signal being used. Any signal will generate some amount of self-noise. For this reason, much effort has been placed into minimizing sidelobes. The higher these sidelobes are, the more clutter an image will have. Using waveforms with low as well as uniform sidelobe structures eliminates spurious false peaks. This condition corresponds to high resolution capability with insignificant clutter. * A sidelobe is one of the lobes surrounding the main response of the target. † Barker codes are binary phase codes whose autocorrelation sidelobes all peak uniformly at a level equal to the inverse of the length of the code (for a normalized autocorrelation main response).
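The matched-filter operation of Equation (5.4) can be tried out on the length-13 Barker code mentioned in the footnote. The sketch below (a minimal NumPy illustration, not the authors' code) computes the normalized autocorrelation and confirms that every sidelobe peaks at 1/13, as in Figure 5.2:

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Discrete form of Equation (5.4): R(tau) = sum_t s(t) * s*(t - tau)
R = np.correlate(barker13, barker13, mode="full")
R_norm = R / R.max()                 # normalize the main lobe to unity

print("main lobe:", R_norm.max())                                   # 1.0 at zero delay
print("max sidelobe:", np.abs(R_norm[np.abs(R_norm) < 1.0]).max())  # 1/13 ~ 0.077
```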
FIGURE 5.2 Normalized autocorrelation for length 13 Barker code.

5.2.3 RANGE-DOPPLER RADAR IMAGING
Suppose a signal s(t) is transmitted with some form of phase modulation. Let r(t – τ0) represent the reflected signal received at time τ0. The power of this signal is proportional to the reflectivity (radar cross section) of a scattering feature (center) located at a distance d = cτ0/2 from the radar. If the target has many scattering centers, we would first receive the signal reflected from the scattering center of the target that is closest to the radar. Subsequently, we would get the signal reflected from the more distant parts of the target. The result of this is a superposition of the returns from all the scatterers of the target. The correlation of these echoes with a replica of the transmitted signal yields a profile of the target in which each scattering center is resolved. This profile is a one dimensional image of the target’s reflectivity versus range. Rotational motion between the radar and target enables two-dimensional imaging of a target by further separating the scatterers in a Doppler dimension that is perpendicular to the line of sight. The position of a scatterer in this dimension is related to its speed relative to the radar. Cross-range resolution denotes the ability to distinguish scatterers perpendicular to the radar line of sight, as illustrated in Figure 5.3. It is directly proportional to Doppler resolution, which is generally given by the inverse of the integration time (total duration) T of a signal being used.38 Thus, cross range resolution at a wavelength λ is given by

Δrc = λ/(2ωT)     (5.5)
where ωT is the aspect change of the target. As stated previously, the echo signal r(t – τ0) characterizes the total reflective properties of all the points of a target that are located at a distance cτ0/2 from the radar. On a spherical planet, such points form a circle equidistant to the Earth on the planet’s surface, as shown in Figure 5.4. To obtain an image of the planet’s surface, one must take into consideration the planet’s rotation.
FIGURE 5.3 Scatterer detection in cross range.
Different points on the circle mentioned above have different speeds. Thus, their responses can be separated by their Doppler changes in frequency.* This imaging process takes into consideration not only the time delay of the radar signal (that corresponds to range), but also the change in frequency (that corresponds to cross-range). Therefore, the mapping approach is called range-Doppler radar imaging. For a detailed description of radar imaging, see Refs. 6, 25, and 30. The output of the matched filter is expressed mathematically as

R(τ) = ∫ s(t)·r*(t – τ) dt     (5.6)

where r*(t – τ) is the complex conjugate of the return signal. For a moving target, the return signal is actually equal to the transmitted signal Doppler shifted, or

r(t – τ) = s(t – τ)·e^(j2πfd·t)

Using this in Equation (5.6), the expression for the ambiguity function is obtained as follows:

χ(τ, fd) = ∫_{–∞}^{∞} s(t)s*(t – τ)·e^(–j2πfd·t) dt     (5.7)
Note that this expression depends on two variables: delay τ and Doppler frequency fd. Thus, the ambiguity surface shows the output of the matched filter for a stationary (fd = 0) or moving target, range and Doppler resolution, as well as sidelobe structure in the delay-Doppler plane. The ambiguity function for a binary phase code of length 13 pulses is shown in Figure 5.5.

FIGURE 5.4 Diagram showing contours of constant delay and Doppler for planetary imaging.

* The Doppler effect refers to a shift in the center frequency of the echo signal (with respect to the reference) due to the target’s motion relative to the observer.
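A discretized version of Equation (5.7) can be evaluated directly on a delay-Doppler grid. The sketch below is an illustrative NumPy implementation (the 1 µs subpulse width and the grid spans are assumptions) that computes the normalized ambiguity surface of the length-13 Barker code shown in Figure 5.5:

```python
import numpy as np

T = 1e-6                                   # assumed subpulse width, s
code = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
oversample = 8
s = np.repeat(code, oversample)            # baseband samples of s(t)
dt = T / oversample
t = np.arange(s.size) * dt

delays = np.arange(-s.size + 1, s.size)    # integer sample delays
dopplers = np.linspace(-2 / (13 * T), 2 / (13 * T), 101)

chi = np.zeros((dopplers.size, delays.size))
for i, fd in enumerate(dopplers):
    sd = s * np.exp(-1j * 2 * np.pi * fd * t)        # apply e^{-j2*pi*fd*t} to s(t)
    for j, m in enumerate(delays):
        # chi(tau, fd) = sum_t s(t) e^{-j2*pi*fd*t} s*(t - tau) * dt
        if m >= 0:
            chi[i, j] = np.abs(np.sum(sd[m:] * np.conj(s[:s.size - m]))) * dt
        else:
            chi[i, j] = np.abs(np.sum(sd[:s.size + m] * np.conj(s[-m:]))) * dt

chi /= chi.max()
print("peak at zero delay and zero Doppler:", chi.max())
```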
5.3 RANDOM SIGNALS FOR HIGH RESOLUTION

Wideband waveforms are used for imaging applications, because they enable high range resolution. For planetary imaging, additional requirements are high amounts of clutter suppression and little tolerance for Doppler shifts. For this type of application, the target’s speed and rotation rate ω are known. What is desired is the location of scattering centers along narrow strips making up the planet’s surface, as shown in Figure 5.6. Random waveforms are used in such instances due to the desirable autocorrelation properties they exhibit. For instance, the autocorrelation of white noise,
FIGURE 5.5 Normalized ambiguity surface for length 13 Barker code.
FIGURE 5.6 Narrow strip illustrating radar image compilation for planets. The strip is within a defined Doppler resolution. As the planet rotates, this strip will cover different sections of the planet, which will eventually lead to a full image of the planet.
which is a signal with random amplitude, frequency, and phase, is the Dirac delta function.16 A delta function has a zero width and lacks sidelobe structure. In this sense, white noise is an ideal signal for imaging, offering the best resolution possible with no signal clutter. Unfortunately, white noise has an infinite bandwidth, which makes the implementation of a matched filter a formidable task. However, it is possible to transmit a signal that has a random parameter, such as frequency or phase, with some restrictions. Frequency-hopped waveforms, which are randomly modulated in frequency and have a finite bandwidth, are currently under development but require relatively expensive systems. Random phase modulation—in particular, binary phase coding—is a more appealing option.
5.4 BINARY PHASE CODES

To remedy the power and duration constraints addressed in Section 5.2.1, long binary sequences are processed with the correlator receiver and an averaging procedure to enable the precise imaging of targets with close scatterers. These sequences are characterized by a large time-bandwidth product* that enables high resolution and accuracy in either one-dimensional or two-dimensional imaging. Hence, high-quality images showing great detail can be constructed. A binary phase code (BPC) is a sequence of N pulses with a random phase of 0 or π associated with each pulse. A random BPC can be expressed by the following equation:

s(t) = Σ (i = 1 to N) e^(j(ω0·t + φi)) {u[t – (i – 1)T] – u[t – iT]}     (5.8)
where ω0 is the carrier frequency for transmission, and T is the duration of each pulse. The unit step function u(t) denotes the switching from pulse to pulse, and φ is a random variable that takes on values of 0 and π with equal probability as indicated by the probability density function

p(φ) = (1/2)[δ(φ) + δ(φ – π)]     (5.9)
At baseband, s(t) looks like an amplitude-modulated waveform with random amplitudes of +1 and –1, as shown in Figure 5.7. The autocorrelation of such a sequence exhibits some desirable characteristics. For instance, the width of the autocorrelation’s main lobe is equal to the duration
FIGURE 5.7 Random binary phase code.
* Time-bandwidth product refers to the product of the duration of the signal and the signal bandwidth. Waveforms with a large time-bandwidth product enable high resolution imaging of targets at far distances.
of a single pulse. Thus, the range resolution is equal to cT/2. A direct result of the dependence of range resolution on pulse width is that range resolution is limited by the switching time from pulse to pulse that current technology allows. (The current switching time for the Arecibo systems is ~100 µs.) This is a consequence of the relative simplicity of BPCs. As described in Section 5.4.2, this constraint is eliminated for frequency modulated signals.
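A random binary phase code as defined by Equations (5.8) and (5.9) is straightforward to generate at baseband. The sketch below is an assumed illustration that uses the N = 10, T = 0.44 ms values adopted later for the Mars example and reports the corresponding range resolution cT/2:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10                 # number of subpulses per code
T = 0.44e-3            # subpulse width, s (value used later for the Mars example)
c = 3.0e8

# Equation (5.9): phi_i is 0 or pi with equal probability, so exp(j*phi_i) = +/-1
phases = rng.choice([0.0, np.pi], size=N)
amplitudes = np.exp(1j * phases).real        # baseband code of +1/-1 values, as in Figure 5.7

print("code:", amplitudes.astype(int))
print("range resolution cT/2 =", c * T / 2.0, "m")   # 66 km per subpulse
```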
5.4.1 AMBIGUITY FUNCTION OF BINARY PHASE CODES
The ambiguity function characterizes the compressed output of the matched filter receiver (not considering external noise sources). Analytical expressions for the ambiguity function for random BPCs were derived in Ref. 36 to characterize the average ambiguity properties of this type of signal. The results of this study are discussed here. For simplicity, the signal s(t) of Equation (5.8) is assumed to be at baseband to derive expressions for the expected value, mean square, and variance of the ambiguity function as denoted by Equations (5.10), (5.11), and (5.12).

E{χ(τ, fd)} = ∫_{–∞}^{∞} E[s(t)s*(t – τ)]·e^(j2πfd·t) dt     (5.10)

E{|χ(τ, fd)|²} = E{ | ∫_{–∞}^{∞} s(t)s*(t – τ)·e^(j2πfd·t) dt |² }     (5.11)

VAR{χ(τ, fd)} = E{|χ(τ, fd)|²} – |E{χ(τ, fd)}|²     (5.12)
The situation depicted in Figure 5.8 was used to derive the equations, where a received code is correlated with an infinite string of pulses containing one copy of the transmitted code initiated at t = 0. This is a more realistic situation because, in actual implementations for planetary imaging, several binary phase codes are transmitted in a long string so that the returns are averaged to cancel as much noise as possible. These expressions were tested by comparing them with simulated averages. The same situation depicted in Figure 5.8 was used to calculate the ambiguity functions of 1,000 binary phase codes generated at random. These ambiguity surfaces were averaged to obtain the mean, mean square, and variance. Comparing the results with the derived expressions point by point, an average variance on the order of 10^–7 was attained. Thus, the derived expressions accurately describe the actual ambiguity functions. The square of the mean of the ambiguity function normalized with respect to the main lobe is

|E{χ(τ, fd)}|²norm = [1/(NT)²] (T – τ)² sinc²[πfd(T – τ)] [sin(πfd·NT)/sin(πfd·T)]²     (5.13)
FIGURE 5.8 Correlation process in which an echo is correlated with an infinite string of pulses. The random amplitudes ±1 are represented by xi.
for 0 ≤ τ ≤ T. This equation includes only the main lobe of the ambiguity function and sidelobes along and immediately near the Doppler axis. The maximum of this expression occurs at τ = fd = 0, that is, at the origin of the ambiguity surface. This maximum is representative of the signal energy, which follows from the characteristics of autocorrelations discussed in Section 5.2.2. The expression is graphed in Figure 5.9 for random binary phase codes of length N = 10 pulses and pulsewidth T = 0.44 ms. Setting fd = 0 in Equation (5.13) yields an expression for the square of the mean of the autocorrelation

|E{χ(τ, 0)}|²norm = R²(τ)norm = [(T – τ)/T]²     (5.14)

which is merely a quadratic. When τ = T, the result is zero. Thus, the autocorrelation is a maximum at τ = 0 as expected, decays quadratically to zero at τ = T, and is zero for τ ≥ T. This is shown in Figure 5.10. It should be clear that the range resolution is again given by

Δrs = cT/2  [meters]     (5.15)
To extract Doppler resolution, we set τ = 0 in Equation (5.13). This gives us the behavior of the ambiguity surface along the Doppler axis as follows:

|E{χ(0, fd)}|²norm = sinc²(πfd·NT)     (5.16)

This sinc reaches its first null when πfd·NT = π, or when fd = 1/NT. Thus, the Doppler resolution is

Δfd = 1/(NT)  [Hertz]     (5.17)

FIGURE 5.9 Square of the mean of the ambiguity function for binary phase codes (N = 10, T = 0.44 ms).
FIGURE 5.10 Square of the mean of the autocorrelation for binary phase codes (N = 10, T = 0.44 ms).
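The Monte Carlo comparison described at the start of this subsection can be reproduced in outline. The sketch below is a simplified illustration: each finite code is correlated with itself rather than with the infinite reference string of Figure 5.8, and only the zero-Doppler cut is averaged. It nevertheless shows the unit main lobe and delay sidelobes near the 1/N level predicted by the analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 10, 1000                     # code length and number of random codes

acc = np.zeros(2 * N - 1)
for _ in range(trials):
    code = rng.choice([-1.0, 1.0], size=N)
    R = np.correlate(code, code, mode="full") / N   # normalized zero-Doppler cut
    acc += R**2                                     # accumulate |chi(tau, 0)|^2
mean_sq = acc / trials

print("main lobe:", mean_sq[N - 1])          # ~1.0 at zero delay
print("one-chip-lag sidelobe:", mean_sq[N])  # ~(N-1)/N^2, close to the 1/N level for the infinite reference
```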
We may conclude that range resolution can be increased by decreasing the pulsewidth T, while Doppler resolution can be increased by transmitting more pulses. Therefore, small T and large N correspond to high resolution capability. These two relationships for resolution were already known, and they merely verify the equations derived in Ref. 36. The mean square of the ambiguity function shows the expected sidelobe structure in delay and Doppler. The resulting normalized expression is

E{|χ(τ, fd)|²}norm = |E{χ(τ, fd)}|²norm + [1/(NT²)] τ² sinc²(πfd·τ)     (5.18)

for the interval 0 ≤ τ ≤ T, and

E{|χ(τ, fd)|²}norm = [1/(NT²)] {[(n + 1)T – τ]² sinc²[πfd((n + 1)T – τ)] + (nT – τ)² sinc²[πfd(nT – τ)]}     (5.19)
in the interval nT ≤ τ ≤ (n + 1)T for n ≥ 1. This is shown in Figure 5.11 for the same length and pulsewidth given previously. The highest sidelobes peak at multiples of the pulsewidth T at a level 1/N along the delay axis. Thus, the signal-to-signal noise compression factor is N (since these expressions are squared). Also, in between the sidelobe peaks along the delay axis, the sidelobe level decays to 1/(2N) quadratically, and rises again to 1/N. This indicates that as sequences are correlated, there will be more correlation as pulses overlap completely, and less correlation as they pass each other. The behavior of the ambiguity function in Doppler for constant delays that are multiples of the pulsewidth is determined by setting the delay τ = nT in Equation (5.19) as follows:

E{|χ(nT, fd)|²}norm = (1/N) sinc²(πfd·T)     (5.20)

Note that the sidelobes of the ambiguity surface will tend to follow a sinc² pattern in Doppler, with the maximum sidelobe variations occurring along the delay axis, and most of the variations in the region –1/T ≤ fd ≤ 1/T.
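The sidelobe levels quoted above are easy to verify numerically from Equations (5.18) through (5.20). The sketch below (an assumed evaluation at zero Doppler for N = 10 and T = 0.44 ms) confirms the 1/N peaks at multiples of T and the 1/(2N) floor midway between them:

```python
import numpy as np

N, T = 10, 0.44e-3

def mean_square_zero_doppler(tau):
    """Zero-Doppler cut of the normalized mean square, Equations (5.18)-(5.19)."""
    if tau <= T:                                   # Equation (5.18) with fd = 0
        return ((T - tau) / T) ** 2 + tau**2 / (N * T**2)
    n = int(tau // T)                              # Equation (5.19) with fd = 0
    return (((n + 1) * T - tau) ** 2 + (n * T - tau) ** 2) / (N * T**2)

print("at tau = T:   ", mean_square_zero_doppler(T))        # 1/N = 0.1
print("at tau = 2T:  ", mean_square_zero_doppler(2 * T))    # 1/N = 0.1
print("at tau = 2.5T:", mean_square_zero_doppler(2.5 * T))  # 1/(2N) = 0.05
```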
FIGURE 5.11 Mean square of the ambiguity surface for binary phase codes (N = 10, T = 0.44 ms).
The variance of the ambiguity function indicates regions of the ambiguity surface where sidelobes will tend to appear and change from code to code. The resulting expression for the variance is composed of only the quadratic terms and the mean square

VARnorm{χ(τ, fd)} = [1/(NT²)] τ² sinc²(πfd·τ)     (5.21)

for 0 ≤ τ ≤ T and

VARnorm{χ(τ, fd)} = [1/(NT²)] {[(n + 1)T – τ]² sinc²[πfd((n + 1)T – τ)] + (nT – τ)² sinc²[πfd(nT – τ)]}     (5.22)
for nT ≤ τ ≤ (n + 1)T. Note that all that remains in the region 0 ≤ τ ≤ T is a quadratic in delay, which starts at a level of zero along the Doppler axis. For zero Doppler, the variance rises to the level 1/N at a delay τ = T. The sidelobe structure explained above for the mean square also applies here. The variance is plotted in Figure 5.12.
5.4.2 FREQUENCY-CODED SIGNALS

A frequency hop coded signal is a sequence of pulses in which the frequency changes in a random fashion from pulse to pulse, spanning some bandwidth. In general, these pulses may or may not be contiguous. However, due to the power constraints described in Section 5.2.1, only contiguous pulses are considered for planetary imaging. Thus, it is the case described here. A random frequency-hopped signal consists of N pulses, each of pulsewidth T. The pulses are frequency modulated by a discrete set of random frequencies that span a bandwidth, β. A uniform probability density function describes the selection of frequencies to ensure that the spectrum will have white characteristics over the bandwidth and a fast sidelobe roll-off.
FIGURE 5.12 Variance of the ambiguity surface for binary phase codes (N = 10, T = 0.44 ms).
This also ensures that the ambiguity function will have a sharp peak at the origin and a low level of self-noise. A frequency-hopped code can be represented in the time domain by

s(t) = Σ (i = 1 to N) exp[jθi(t)] {u[t – (i – 1)T] – u[t – iT]}     (5.23)

where the phase θi(t) is

θi(t) = 2πfi[t – (i – 1)T] + 2πT Σ (n = 1 to i – 1) fn     (5.24)

Note that the summation term is needed in the expression for θ to ensure phase coherency. The frequency term fi is a random variable of the ith pulse and is characterized by the uniform probability density function

p(f) = (1/M) Σ (i = 1 to M) δ(f – fi)     (5.25)
where M is the number of frequency states or discrete frequencies available. This is shown in Figure 5.13. An expression for the mean of the ambiguity function derived in Ref. 4 is plotted in Figure 5.14 for frequency-coded signals of length N = 4 pulses with four frequency states (M = 4), and a pulsewidth of T = 1 ms. The figure shows the ambiguity function with a time span of 8 ms and a Doppler span of 2 kHz to reveal the width and distribution of sidelobes near the main lobe. The final expression, which is not included here, verified that resolution in delay and Doppler are given by

Δrs = c/(2β) = c/(2MΔf)     (5.26)

Δfd = 1/(NT)     (5.27)
For a frequency-modulated waveform, the bandwidth is equal to the range of frequencies used for frequency modulation and is independent of the pulsewidth T. Thus, Doppler resolution is equal to the reciprocal of the coherent integration time NT, which is the same result obtained via binary phase codes.
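A random frequency-hopped waveform following Equations (5.23) through (5.25) can be synthesized directly, including the cumulative phase term that keeps the pulses coherent. The sketch below is an illustrative example; the sampling rate is arbitrary, and the frequency increment Δf = 575 Hz is chosen so that MΔf matches the 2.3 kHz bandwidth used in the Mars example:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 4, 4            # pulses per code and number of frequency states
T = 1e-3               # pulsewidth, s
df = 575.0             # frequency increment, Hz (M * df = 2.3 kHz bandwidth)
fs = 50e3              # assumed sampling rate, Hz

freqs = rng.choice(np.arange(M) * df, size=N)     # Equation (5.25): uniform pick of hop frequencies
samples_per_pulse = int(T * fs)

s = []
phase_accum = 0.0
for fi in freqs:
    t_local = np.arange(samples_per_pulse) / fs
    # Equation (5.24): theta_i(t) = 2*pi*fi*(t - (i-1)T) + 2*pi*T*sum_{n<i} f_n
    theta = 2 * np.pi * fi * t_local + phase_accum
    s.append(np.exp(1j * theta))
    phase_accum += 2 * np.pi * fi * T             # running sum keeps the pulses phase coherent
s = np.concatenate(s)

print("hop frequencies (Hz):", freqs)
print("total samples:", s.size, " duration:", s.size / fs, "s")
```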
FIGURE 5.13 Probability density function of frequency for frequency-hopped codes. Note that this is a uniform distribution, and the signal bandwidth is equal to the product of the number of frequency states M and the frequency increment or interval ∆f.
FIGURE 5.14 Square of the mean of the ambiguity function for frequency hop codes (N = M = 4, T = 1 ms, β = 2.3 kHz).
In the expression for the mean for frequency hop codes, small sidelobes appear along the delay axis, as opposed to the case for binary phase codes where no sidelobes exist for the mean. Thus, for any frequency code, some sidelobes will appear along the delay axis of the ambiguity function. However, these are small, and there is a fast roll-off that follows a 1/τ pattern associated with them. Thus, they will not contribute too much distortion.*
5.5 IDEAL IMAGE OF A GAUSSIAN SCATTERING FUNCTION

Our ultimate goal is to obtain an accurate image of a target. The spreading factor of the target dictates whether it will appear overspread to the radar, which requires the use of clever modulation techniques. This is briefly introduced here. For more information, see Ref. 11. Due to the size of a particular planet, as well as its rate of rotation, the planet may appear overspread. A target is overspread if its time-bandwidth product satisfies the condition

τp·βD > 1     (5.28)
where τp is the delay extent† of the planet and βD is the Doppler bandwidth. The target’s Doppler bandwidth is defined by the maximum velocity components of the target that the radar will observe. For a rotating planet, these velocities are

νmax = ±ωR     (5.29)

where ω is the planet’s rotation rate in radians/second, and R is the radius of the planet. The resulting Doppler bandwidth can thus be calculated as

βD = 4νmax/λ     (5.30)
Assuming measurements are to be taken for the planet Mars at X-band (8.4 GHz), the spreading factor can be calculated as follows: Mars’ diameter is approximately 6,960 km, which yields a delay extent of 22 ms. Mars’ rotation period is approximately 24.62 hr, which yields a Doppler bandwidth of 30 kHz. Multiplying the delay extent and Doppler bandwidth, the spreading factor is 660, which reveals that the target is highly overspread. To gain an insight into the resolution quality of images obtained using binary phase coding or frequency hop coding, a simplified model of the planet is used. The Gaussian scattering function has some merit in modeling overspread targets11 and is thus used to model a soft planet. This scattering function is shown in Figure 5.15. We arbitrarily select a 50 × 120 pixel resolution as a criterion to clearly image Mars. The delay resolution necessary is thus 0.44 ms/pixel, which translates to a signal bandwidth of 2.3 kHz. The Doppler resolution is 250 Hz/pixel. The parameters required for binary phase codes and frequency hop codes to attain these resolutions are given in Table 5.1. For binary phase codes, the required bandwidth of 2.3 kHz dictates the pulsewidth T = 0.44 ms. Using this pulsewidth requires the transmission of 10 pulses per code to attain the desired Doppler resolution. For frequency hop codes, we select a pulsewidth of T = 1 ms for ease of hardware constraints. The number of pulses that must be transmitted is therefore N = 4, which is much less than that required for binary phase coding. In addition, M = 4 discrete frequencies spaced equidistantly within the required 2.3 kHz bandwidth are used so that each pulse may have a different frequency.

* Recall that the imaging done here is only to compare the resolutions in delay and Doppler using each waveform. Actually, to measure distortion effects due to sidelobe structure, the mean square of the ambiguity function, which shows the average sidelobe structure of the ambiguity surface further from the origin, should be used.
† The delay extent is the length of the target along the radar’s line of sight. Since the target here is a planet, the delay extent is equal to the diameter of the planet.
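The arithmetic in the paragraph above is simple enough to script. The sketch below is a back-of-the-envelope check using the same approximate inputs (Mars diameter 6,960 km, rotation period 24.62 hr, 8.4 GHz carrier); it reproduces the delay extent, Doppler bandwidth, and spreading factor to within the rounding used in the text:

```python
import math

c = 3.0e8
diameter = 6.96e6          # m
period = 24.62 * 3600.0    # s
f0 = 8.4e9                 # Hz
lam = c / f0

tau_p = diameter / c                       # delay extent, ~23 ms (text rounds to 22 ms)
omega = 2 * math.pi / period               # rotation rate, rad/s
v_max = omega * diameter / 2.0             # Equation (5.29)
beta_D = 4 * v_max / lam                   # Equation (5.30), ~28 kHz (text rounds to 30 kHz)

print(f"delay extent      : {tau_p*1e3:.1f} ms")
print(f"Doppler bandwidth : {beta_D/1e3:.1f} kHz")
print(f"spreading factor  : {tau_p*beta_D:.0f}   (> 1, so Mars is overspread)")
```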
FIGURE 5.15 Gaussian scattering function modeling the overspread characteristics of Mars.
TABLE 5.1
Mars Parameters for Binary Phase Coding and Frequency Hopping

System Parameter            Binary Phase Code    Frequency-Hopped Signal
N (no. pulses)              10                   4
M (no. frequency states)    —                    4
β (signal bandwidth)        2.3 kHz              2.3 kHz
T (pulsewidth)              0.44 ms              1 ms
Figures 5.9 and 5.14 showed the resulting analytically derived ambiguity functions for each case. For each waveform type, the mean square of the ambiguity function is convolved two-dimensionally with the Gaussian scattering function. This convolution process is represented as the energy distribution over the delay-Doppler plane χ.

E[P(τ, fd)] = ∫∫ δ(τ′, fd′) E[|χ(τ – τ′, fd – fd′)|²] dτ′ dfd′     (5.31)
where δ(τ′, fd′) represents the scattering function of the planetary object. Notice that the mean square of the ambiguity function can be expressed as

E{|χ(τ, fd)|²} = VAR{χ(τ, fd)} + |E{χ(τ, fd)}|²     (5.32)

Substituting Equation (5.32) into Equation (5.31), the following expression is obtained:

E[P(τ, fd)] = ∫∫ σ(τ′, fd′) VAR{χ(τ – τ′, fd – fd′)} dτ′ dfd′ + ∫∫ σ(τ′, fd′) |E[χ(τ – τ′, fd – fd′)]|² dτ′ dfd′     (5.33)
The first term of Equation (5.33) is associated with the self-noise of the delay-Doppler image, whereas the second term is associated with the energy of the target. Thus, the noise-free image is obtained by convolving the square of the mean of the ambiguity function with the scattering function. The resultant image obtained using binary phase coding, shown in Figure 5.16, shows spreading in Doppler due to the expected Doppler sidelobes of the ambiguity function. For the frequency-hopped case, the image of Figure 5.17 shows spreading in delay due to the delay sidelobes of the mean. For either waveform type, the sidelobes of the ambiguity functions generate some distortion, which is negligible for these two cases in which only the noise-free images are considered. The effect seen for both cases is similar to convolving the Gaussian scattering function with an impulse function. A true image will have more distortion, which may be minimized if the sidelobe levels of the ambiguity function remain low and uniformly distributed throughout the delay-Doppler plane.
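The noise-free imaging step just described is a plain two-dimensional convolution. The sketch below is a toy illustration of Equation (5.31): it convolves a small Gaussian scattering function with a separable stand-in for the squared mean of the ambiguity function (not the authors' full surfaces), just to show the mechanics:

```python
import numpy as np
from scipy.signal import convolve2d

# Toy Gaussian scattering function on a small delay-Doppler grid (stand-in for Figure 5.15)
tau = np.linspace(-3, 3, 60)
fd = np.linspace(-3, 3, 40)
TT, FF = np.meshgrid(tau, fd)
scattering = np.exp(-(TT**2 + FF**2))

# Stand-in for the squared mean of the ambiguity function: a narrow separable main lobe,
# quadratic in delay and sinc^2 in Doppler, loosely patterned on Equation (5.13)
tau_k = np.linspace(-1, 1, 11)
fd_k = np.linspace(-1, 1, 11)
TK, FK = np.meshgrid(tau_k, fd_k)
kernel = np.maximum(1 - np.abs(TK), 0)**2 * np.sinc(FK)**2

# Equation (5.31): ideal (noise-free) image = 2-D convolution of the two surfaces
image = convolve2d(scattering, kernel, mode="same")
image /= image.max()
print("image shape:", image.shape, " peak:", image.max())
```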
5.6 CONCLUSIONS

The use of wideband waveforms allows high-resolution imaging of astronomically distant targets. Enhanced resolution is derived from the large bandwidths the waveforms have, which is accomplished while maintaining high levels of signal power through increased signal duration. In particular, random wideband waveforms enable high resolution, as well as high levels of signal clutter suppression for ease of detection of the scatterers of a target.
FIGURE 5.16 Image of Gaussian scattering function using binary phase codes.
FIGURE 5.17 Image of Gaussian scattering function using frequency hop codes.
Such methods have enabled Earth-based planetary mapping of neighboring planets, such as Mars, Mercury, and Venus. Binary phase coding is one scheme that has been successfully implemented for some time now. Frequency hop coding, which is currently under development, is a relatively more complex scheme, but it has some features that are not currently available with binary phase coding. For binary phase codes, range resolution is proportional to the width of a single pulse of a code. Frequency hopping eliminates this dependence on the pulsewidth through frequency modulation, thereby allowing enhanced resolution through a larger modulation bandwidth. However, this is done at the expense of a more complex and expensive system requiring high precision oscillators for accurate frequency hopping. The results of analytical analyses performed in Refs. 36 and 4 are presented to show the expected ambiguity properties of binary phase codes and frequency hop codes. For binary phase codes, the average level of self-noise is shown to decrease for longer codes. In addition, the imaging capabilities of each waveform type are demonstrated using the analytically derived results and a Gaussian scattering function as a model of an overspread target. It is evident that both binary phase codes and frequency hop codes can generate high-quality images. To demonstrate a more realistic image including at least signal self-noise, the effects of sidelobes, which were not included in the above noise-free images, should be added. This can be readily done for binary phase codes, as the sidelobe behavior has been characterized and documented here. Frequency hopping requires further analytical development, which would consist of obtaining expressions for the mean square and variance.
5.7 ACKNOWLEDGMENT

The Continuous Engineering, Science, and Technology Advancement (CUESTA) program provided partial support for this work. This program is a joint venture between the University of Texas at
El Paso and the Jet Propulsion Laboratory, aimed at increasing the nation’s pool of Hispanic engineers and scientists.
REFERENCES
1. R.S. Berkowitz, Modern Radar, John Wiley & Sons, NY, 1965.
2. A.B. Carlson, Communication Systems, An Introduction to Signals and Noise in Electrical Communication, McGraw-Hill, NY, 1986.
3. N. Chang and S.W. Golomb, “On n-phase Barker sequences,” IEEE Transactions on Information Theory, 1994, Vol. 40, No. 4, pp. 1251–1253.
4. R.W. Chiu, “The stochastic properties of a coherent hop frequency modulated waveform,” thesis, The University of Texas at El Paso, December 1995.
5. M.N. Cohen, P.E. Cohen, and M. Baden, “Biphase codes with minimum peak sidelobes,” IEEE National Radar Conference Proceedings, 1990, pp. 62–66.
6. C.E. Cook and M. Bernfeld, Radar Signals, Academic Press, NY, 1967.
7. J.P. Costas, “A study of a class of detection waveforms having nearly ideal range-Doppler ambiguity properties,” Proceedings of the IEEE, 1984, Vol. 72, No. 8, pp. 996–1009.
8. Y. Davidor, Genetic Algorithms and Robotics, A Heuristic Strategy for Optimization, World Scientific, Singapore, 1991.
9. J.L. Eaves and E.K. Reedy (eds.), Principles of Modern Radar, Van Nostrand Reinhold, NY, 1987.
10. S. Eliahou, M. Kervaire, and B. Saffari, A new restriction on the length of Golay complementary sequences, Bellcore Tech. Mem. TM-ARH 012-829, October 24, 1988.
11. J.V. Evans and T. Hagfors, Radar Astronomy, McGraw-Hill, NY, 1968.
12. B.C. Flores, R. Vasquez, and R. Chiu, “Coherent radar imaging of overspread targets using high resolution phase coded and frequency coded waveforms,” SPIE Proceedings, Orlando, Florida, April 1996.
13. B.C. Flores and R. Vasquez, “Analysis of Fourier transform receiver implementation for FM waveform compression,” Army Research Lab (ARL) Report, Contract DAAD07-90-C-0031, Task Order 95-0, September 1995.
14. B.C. Flores and R.F. Jurgens, “A random hop frequency modulation approach for planetary radar imaging,” Progress Report, Jet Propulsion Laboratory, Pasadena, CA, September 1994.
15. B.C. Flores, A. Ugarte, and V. Kreinovich, “Choice of an entropy-like function for range-Doppler processing,” Proceedings of the SPIE/International Society for Optical Engineering, Vol. 1960, Automatic Object Recognition III, 1993, pp. 47–56.
16. W.A. Gardner, Introduction to Random Processes with Applications to Signals and Systems, McGraw-Hill, New York, NY, 1990.
17. D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, NY, 1989.
18. S.W. Golomb (ed.), Digital Communications with Space Applications, Prentice-Hall, Englewood Cliffs, NJ, 1964.
19. S.W. Golomb and R.A. Scholtz, “Generalized Barker sequences,” IEEE Trans. Inform. Theory, 1965, Vol. IT-11, pp. 533–537.
20. J.K. Harmon, M.A. Slade, R.A. Velez, A. Crespo, M.J. Dryer, and J.M. Johnson, “Radar mapping of Mercury’s polar anomalies,” Nature, Vol. 369, pp. 213–215, 19 May 1994.
21. J.R. Klauder, “The design of radar systems having both range resolution and high velocity resolution,” Bell Systems Technology Journal, 1960, Vol. 39, pp. 809–819.
22. V. Kreinovich, C. Quintana, and O. Fuentes, “Genetic algorithms: what fitness scaling is optimal?” Cybernetics and Systems: an International Journal, 1993, Vol. 24, No. 1, pp. 9–26.
23. N. Levanon, Radar Principles, John Wiley and Sons, Inc., New York, 1988.
24. N. Levanon, “CW alternatives to the coherent pulse train-signals and processors,” IEEE Transactions on Aerospace and Electronic Systems, 1993, Vol. 29, No. 1.
25. D.L. Mensa, High Resolution Radar Cross-Section Imaging, Artech House, Norwood, MA, 1991.
26. J.L. Mora, B.C. Flores, and V. Kreinovich, “Suboptimum binary phase code search using a genetic algorithm,” in Satish D. Udpa and Hsui C. Han (eds.), Advanced Microwave and Millimeter-Wave Detectors, Proceedings of the SPIE/International Society for Optical Engineering, Vol. 2275, San Diego, CA, 1994, pp. 168–176.
27. F.E. Nathanson, Radar Design Principles, McGraw-Hill, NY, 1969.
28. H.T. Nguyen and V. Kreinovich, “On re-scaling in fuzzy control and genetic algorithms,” Proceedings of the 1996 IEEE International Conference on Fuzzy Systems, New Orleans, September 8–11, 1996.
29. A.W. Rihaczek, Principles of High-Resolution Radar, McGraw-Hill, NY, 1969.
30. A.W. Rihaczek, “Radar waveform selection,” IEEE Transactions on Aerospace and Electronic Signals, 1971, Vol. 7, No. 6, pp. 1078–1086.
31. M.I. Skolnik, Introduction to Radar Systems, McGraw-Hill, NY, 1980.
32. M.I. Skolnik (ed.), Radar Handbook, McGraw-Hill, NY, 1990.
33. C.V. Stewart, B. Moghaddam, K.J. Hintz, and L.M. Novak, “Fractional Brownian motion models for synthetic aperture radar imagery scene segmentation,” Proceedings of the IEEE, 1993, Vol. 81, No. 10, pp. 1511–1521.
34. R.J. Turyn, “On Barker codes of even length,” Proceedings of the IEEE, 1963, Vol. 51, No. 9 (September), p. 1256.
35. R.J. Turyn and J. Storer, “On binary sequences,” Proceedings of the American Mathematical Society, 1961, Vol. 12, pp. 394–399.
36. R. Vasquez, “Search for suboptimum binary phase codes through ambiguity function characterization using a genetic algorithm,” thesis, The University of Texas at El Paso, May 1997.
37. D.R. Wehner, High Resolution Radar, Artech House, Norwood, MA, 1987.
38. D.R. Wehner, High-Resolution Radar, Artech House, Norwood, MA, 1995.
39. K.M. Wong, Z.Q. Luo, and Q. Lin, “Design of optimal signals for the simultaneous estimation of time delay and Doppler shift,” IEEE Transactions on Signal Processing, 1993, Vol. 41, No. 6, pp. 2141–2154.
40. P.M. Woodward, Probability and Information Theory with Applications to Radar, McGraw-Hill, NY, 1953.
6 The Micropower Impulse Radar

James D. Taylor and Thomas E. McEwan
CONTENTS
6.1 Introduction
6.2 Impulse Radar Background
6.3 The Micropower Impulse Radar (MIR) System
6.4 MIR Applications
6.5 Conclusions
References
6.1 INTRODUCTION

This chapter is about the integrated circuit chip Micropower Impulse Radar (MIR), which can be used to provide a general-purpose radar sensor. The Lawrence Livermore National Laboratory (LLNL) developed the MIR and licenses it for commercial applications.
6.2 IMPULSE RADAR BACKGROUND

Impulse radar transmits and receives short-duration, ultra-wideband (UWB) impulse signals, which are typically 0.5 to 2 ns long. The short pulse length gives a range resolution from 0.25 to 1 ft. Impulse radar detection range depends on radiated energy, receiver design, target size, and signal processing. For example, ground probing radar units generally have short ranges, while some airborne impulse radars have ranges of several miles. Impulse signals are useful because they operate at frequencies that can penetrate solid media such as the earth, concrete, rock, building walls, foliage, etc. Ground penetrating radars are usually pulled over the ground while the receiver collects, plots, and displays multiple signal returns to form images of buried objects, tunnels, or discontinuities in the material. Any change in the index of refraction will produce a reflected signal, and the larger the change, the greater the reflection. Impulse radar technology started in the 1970s, and its progress is shown by the patents of DeLorenzo, Robbins, Toulis, Morey, Ross, Lerner, Fullerton, and others.1–14 A ground penetrating radar community uses impulse radars for geophysical surveying, construction, road and bridge deck inspection, agriculture, forensics, and related applications.15 These users rely on imaging of multiple impulse radar returns to reveal buried or concealed objects. Unexploded ordnance clearing operations in Kuwait after the Gulf War of 1991 used ground probing radar to detect buried bombs and shells.16 ARPA and other agencies have sponsored research efforts to develop impulse radar capabilities for mine detection and high-resolution battlefield surveillance.17–20
6.3 THE MICROPOWER IMPULSE RADAR (MIR) SYSTEM

6.3.1 INTRODUCTION
The Micropower Impulse Radar (MIR) is an integrated circuit radar system that can provide a general-purpose sensor for penetrating materials that cannot be penetrated by light or sound. Random dithering of each radar pulse helps keep the microwatt power level signal from interfering with other MIR units or nearby electronic devices. The MIR makes radar sensing possible for many short-range applications.
6.3.2 TECHNICAL OVERVIEW
The MIR is built on a small surface mount circuit board and is about 1-1/2 in², as shown in Figure 6.1. This unit contains a complete ultra-wideband impulse radar.21 Table 6.1 gives MIR’s technical characteristics.
6.3.3 MIR MOTION DETECTOR
McEwan’s ultra-wideband radar motion sensor patent describes one MIR chip application. Figure 6.2a shows a simplified block diagram of the motion detection sensor. The MIR transmits short, ultra-wideband electromagnetic pulses of length τ = 1–2 ns. This gives the unit a range resolution ∆R = cτ/2, or 0.5 to 1 ft. The receiver is designed to only receive signals from a preset range R. Any change in reflectivity at range R produces an output from the proximity detector. Setting the detection range gate establishes a detection shell around the sensor, as shown in Figure 6.2b. If nothing changes at range R, then the integrated return signal remains constant. This system works, because stationary clutter signals are integrated as part of the constant return signal level. However, when anything penetrates the shell, then it will change the reflectivity at range R, which causes a change in the return signal level. Large signal level changes will be detected in the proximity detector and produce an alarm signal.
FIGURE 6.1 Micropower Impulse Radar circuit board (approximately 1.5 in × 1.5 in, with power, range delay, and output connections).
TABLE 6.1
MICROPOWER IMPULSE RADAR (MIR) SPECIFICATIONS21

Item                      Specification
Antenna pattern (H-plane) 360° with dipole antenna; 160° with a cavity-backed monopole; narrower with horn/reflector/lens
Center frequency          1.95 GHz or 6.5 GHz ± 10%
Emission bandwidth        500 MHz @ 1.95 GHz center
Average emission power    ~1 µW (measured)
Duty cycle                <1%
PRF (average)             2 MHz ± 20%
PRF coding                Gaussian noise; low-coherence swept FM; pseudonoise
Receiver noise floor      <1 µV RMS
Receiver gate width       250 ps for 1.95 GHz system
Range delay               RC analog, pot/DAC controllable
Range delay jitter        <1 ps
Range delay stability     RC component limited over temperature (drift in range delay expands/shrinks shell)
Detection range           Adjustable from 2 in to >20 ft
Motion passband           0.3–10 Hz, Doppler-like signature
Analog output             ~0.1–2 V peak on motion sensing; hand motion at 6 ft gives ~300 mV peak
Receiver gain             70 dB
Power                     5 V @ 8 mA, normal power mode; 2.5 V @ 20 µA, long battery life version
Size                      1.5 in² SMT PCB with 1.5 ft long wire dipole elements
Semiconductors            74AC04 CMOS (1 ea.); bipolar or CMOS op amps (2 ea., quads); bipolar RF transistor @ >4 GHz fT (2 ea.)
As shown in Figure 6.2, the motion detector includes both transmitter and receiver elements.

• Transmitter. The transmitter contains a random interval generator, pulse repetition interval (PRI) generator, and an impulse generator, as shown in Figure 6.3. The random noise generator uses three transistors to generate a random noise signal that randomly varies the pulse repetition interval. Varying the PRI helps prevent setting up continuous periodic signals that would interfere with other electronic systems. The pulse repetition interval (PRI) generator is a set of cascaded inverter gates that produce a constant pulse repetition interval (PRI) signal. A typical MIR PRI is 1 µs, or 1 million pulses per second. The impulse generator uses the constant PRI and noise outputs to generate the transmitter and the receiver strobe signals. Applying the varying pulse repetition signal to the step recovery diodes (SRD) produces a 100 ps or faster voltage level transition that goes directly to the antenna. Passing the signal into the antenna differentiates the pulse and radiates a Gaussian shaped electromagnetic signal from the antenna.23

• Receiver. The receiver, as shown in Figure 6.2a, consists of an adjustable delay, a strobe impulse generator, an ultra-wideband (UWB) receiver and detector, and a proximity detector.
FIGURE 6.2 MIR motion detector applications. (a) Simplified block diagram of the motion detector sensor. Two antennas are shown; however, only one is actually used. (b) The motion detection unit can set up a shell at a preset range R, with thickness ∆R. The unit integrates successive returns from the shell and gives an alarm if there is any sudden change in reflectivity.
The adjustable delay sets the MIR range by varying the time between transmitting the signal and turning the receiver on. This is a simple variable resistor setting. The strobe impulse generator receives a delayed PRI signal from the adjustable delay and then generates a short impulse signal of the same duration as the transmitted signal. This delayed impulse signal turns the receiver on when a return is due from objects at range R. Figure 6.4 shows the receiver block diagram, which has two integrating single-ended samplers and an operational amplifier. Figure 6.5 shows the receiver schematic diagram. When any signal arrives from the +UWB and –UWB inputs, it will be shorted to ground through C3 and D4 unless it corresponds to the preset range R. Applying the pulse generator input to C5 temporarily lowers the potential at the junction of D1 and D2 so that they conduct and charge C1 and C2. When the strobe input is applied, then the
FIGURE 6.3 Impulse radar transmitter schematic. The unit has three functional units: the random noise generator, the pulse repetition interval generator, and the impulse generator. This schematic shows a range selector switch that can change the maximum range from 6 to 12 ft. (Source: after McEwan, UWB Radar Motion Sensor, U.S. Patent 5,361,070, Nov. 1, 1994, Fig. 5.)
FIGURE 6.4 Block diagram of the MIR UWB receiver showing the major functional blocks and units. (Source: after Figure 6.2a, McEwan, U.S. Patent 5,345,471, Sept. 6, 1994.)
received signals will go to the integrating single-ended integrators R1, C3 and R2, C4. These integrators hold each successive received signal. A sudden increase or change in signal level will be differentiated through C5 and indicate the presence of an intruder. The advantage of the McEwan UWB receiver circuit of Figure 6.5 is that it can be built on an integrated circuit chip. To appreciate the advantages of this receiver circuit fully, compare it with the prior art signal sampling UWB receiver shown in Figure 6.6. The prior art circuit requires baluns and a diode bridge and is driven by a pair of strobe transmission lines, T1 and T2. When all four diodes are driven into conduction by pulses of T1 and T2, the shunting effect of the impedance of T1 and T2 combines with the UWB source impedance Z and the diode resistance Rd, resulting in a voltage division in the transfer of the input voltage to the charge holding capacitor CHold. The resulting efficiency is about 25 percent. Since post-detection noise dominates the total noise, low detection efficiency means a poor signal-to-noise ratio on weak signals.23

• MIR VHF security alarm system. Figure 6.7 shows the transmitter, receiver, and alarm output circuit integrated into a single security alarm system. This unit approximates the MIR unit using a small printed circuit board and wire dipole antenna.22
FIGURE 6.5 Micropower Impulse Radar (MIR) UWB receiver schematic diagram. (Source: after Figure 2a, McEwan, U.S. Patent 5,345,471, Sept. 6, 1994.)
FIGURE 6.6 Prior art signal sampling UWB receiver. This receiver is driven by a balanced pair of strobe transmission lines T1 and T2. When all four diodes are driven into conduction by pulses on T1 and T2, the shunting effect of the impedance of T1, T2 combined with the UWB source impedance and the diode resistance result in a voltage division in the transfer of the input voltage to charge holding capacitor CHold. The net efficiency may be about 25%. Since post-detection noise dominates the total noise, low detection efficiency means a poor signal-to-noise ratio on weak signals. (Source: after Figure 5, McEwan, U.S. Patent 5,345,471, Sept. 6, 1994.)
6.4 MIR APPLICATIONS
We used the motion detector to explain the MIR operation and to show the general principles of circuit operation. The preset range feature gives the MIR great sensitivity to changes in reflectivity but limits its applications to detection at one range only. However, MIR technology can be the basis for many applications. Some applications that follow may require different MIR receiver designs but will still use the basic technology.21
FIGURE 6.7 VHF security alarm circuit. The circuit operates with a 2 ns transmit pulse applied to an 18 in. dipole antenna. The waveform is a step that is differentiated with ringing by the antenna. The circuits are similar to the ones of Figs. 6.2 and 6.4. The PRI generator has a 100 kHz output. Specific components are as follows: Q1 = 2N5109, Q2 = 2N2369. CMOS ICs 1 and 2 are 74HC04 inverters. IC3 is TL27L4, and IC4 is TLC27L2. Schottky diodes are 1N5711. (Source: after McEwan, UWB Radar Motion Sensor, U.S. Patent 5,361,070, Nov. 1, 1994, Fig. 6.)
6.4.1 MEDICAL REMOTE SENSING AND IMAGING WITHIN THE BODY
Detecting movement within the body is a potential application. For example, consider a throat microphone for voice recognition and other applications that would work by sensing movement, not sound waves. A smart hospital bed that can monitor a patient’s heart and breathing rate remotely is another possibility. Another possible use is for a portable bullet-and-shrapnel detector for military and civilian paramedics, aid stations, and emergency rooms.
6.4.2 SECURITY
Because MIR signals can penetrate solids, they provide the ability to build intrusion detection units concealed behind opaque panels or camouflaged as nondescript items. Security is enhanced because a concealed sensor unit is less likely to be discovered and disarmed. Considering the low power, wide spectrum, and randomly occurring nature of the MIR signal, it would be difficult to build a radar detector to show the location of MIR units.
6.4.3 AUTOMATION
Any commercial and industrial process that requires movement detection could use MIR sensors.
6.4.4 TRANSPORTATION
MIR sensors could be used to build smart vehicles or smart highways. When used as roadside detectors, they could provide data for traffic management. When used as proximity sensors on vehicles, they could permit traffic to flow safely at closer intervals. Connecting a MIR-based proximity sensor to an automobile cruise control system would be a basis for vehicles that can travel in dense packs to increase highway capacity.
6.4.5 ENTERTAINMENT
The sensors could be used for motion detection and proximity detection systems for interactive games.
6.4.6 MATERIALS EVALUATION
The components of the MIR could be used to build ground- and wall-penetrating radars. A MIR technology-based system would have potential applications in examining structures, roads, and natural formations for defects, cracks, etc. Analysis of the return signal can provide information on the reflecting materials, which suggests potential capabilities to identify or sort materials.16,19
6.4.7 TOOLS
Systems for cutting, drilling, excavation, mining, etc. could use MIR sensors. Finding wall studs and reinforcing bars is a current impulse radar application where the small size would make the MIR competitive with other systems.
6.4.8 MINE DETECTION AND IDENTIFICATION
Combining the MIR with appropriate return signal processing could add the capability for imaging, or direct identification, of buried objects from their return signal characteristics. Adding signal processing to distinguish between explosive devices and buried scrap metal pieces would be valuable in limiting work to necessary excavations. The impulse signal could detect nonmetallic mines when combined with proper signal processing.24
6.4.9 MILITARY SENSING OTHER THAN MINE DETECTION
The MIR could be used to build perimeter security systems and foliage-penetrating motion detectors. Impulse radars are already in use guarding high-security sites against intruders. Hand-held motion detectors would provide the police officer or soldier with an extra edge in night, jungle, or urban operations.
6.4.10 RADAR CAMERA
The MIR transmitter and receiver can be the basis for radar imaging using synthetic aperture and inverse synthetic aperture techniques. When combined with a high-gain antenna, the MIR could provide target images.
6.4.11 COMMUNICATIONS
Impulse interval modulation communication links are another potential application for the MIR components. Replacing the random noise generator with a pulse interval modulator could provide
a way to transmit analog or digital signals. The technical trick is to synchronize and demodulate the received signal. MIR technology provides a convenient way to build an impulse radio link. An impulse radio system using these principles was described by Larry Fullerton in his patent descriptions for a time domain radio transmission system25 and a spread spectrum radio transmission system.26 If the UWB radar receiver can synchronize with the pulse of another system, then the variation of the impulse interval can be demodulated and used to send information.
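As a rough illustration of the impulse interval modulation idea (a simplified sketch, not Fullerton's actual coding scheme), the transmitter can shift each pulse slightly early or late around a nominal repetition interval, and a receiver synchronized to that interval recovers the data from the sign of the shift. The parameter values below are assumptions for illustration.

# Minimal pulse-interval (pulse-position) modulation sketch with assumed parameters.
NOMINAL_PRI = 1.0e-6      # nominal interval between impulses, seconds (about a 1 MHz PRF)
DITHER = 100e-9           # timing shift: a late pulse encodes a 1, an early pulse encodes a 0

def modulate(bits):
    """Return impulse emission times for the given bit sequence."""
    times, t = [], 0.0
    for b in bits:
        t += NOMINAL_PRI + (DITHER if b else -DITHER)
        times.append(t)
    return times

def demodulate(times):
    """Recover bits from the intervals, assuming the receiver is locked to NOMINAL_PRI."""
    bits, previous = [], 0.0
    for t in times:
        bits.append(1 if (t - previous) > NOMINAL_PRI else 0)
        previous = t
    return bits

data = [1, 0, 1, 1, 0]
assert demodulate(modulate(data)) == data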
6.5 CONCLUSIONS
The MIR is a simple and practical application of impulse UWB radar to range measurement. The technology is the property of the Lawrence Livermore National Laboratory, which licenses its use. Any potential user should plan to work with the Federal Communications Commission, or national regulatory agencies, to solve any potential interference and licensing issues as soon as possible.
REFERENCES
1. DeLorenzo, Joseph D., "Radar target-identifying apparatus," U.S. Patent No. 3,523,292, Aug. 4, 1970.
2. Robbins, Kenneth W., "Short base-band pulse receiver," U.S. Patent No. 3,662,316, May 9, 1972.
3. Toulis, William J., "Detection and classification system utilizing unipolar pulses," U.S. Patent No. 3,686,669, Aug. 22, 1972.
4. Morey, Rexford M., "Geophysical surveying system employing electromagnetic impulses," U.S. Patent No. 3,806,795, Apr. 23, 1974.
5. Ross, Gerald F., "Base-band pulse object sensor system," U.S. Patent No. 3,772,697, Nov. 13, 1973.
6. Lerner, Robert M., "Ground radar system," U.S. Patent No. 3,831,173, Aug. 20, 1974.
7. Ross, Gerald F., "Apparatus and method for measuring the level of a contained liquid," U.S. Patent No. 3,832,900, Sept. 3, 1974.
8. Young, Jonathan D. and Ross Caldecott, "Underground pipe detector," U.S. Patent No. 3,976,282, Jun. 29, 1976.
9. Ross, Gerald F. and Kenneth Robbins, "Range readout apparatus," U.S. Patent No. 3,979,749, Sept. 7, 1976.
10. Ross, Gerald F., "Apparatus and method for sensing a liquid with a single wire transmission line," U.S. Patent No. 3,995,212, Nov. 30, 1976.
11. Young, Jonathan D. and Ross Caldecott, "Underground pipe detector," U.S. Patent No. 4,062,010, Dec. 6, 1977.
12. Cronson, Harry M., Gerald F. Ross, Basrur R. Rao, Werner Lerchenmueller, and Prentis B. Drew, "Collision avoidance system using short pulse signal reflectometry," U.S. Patent No. 4,254,418, May 3, 1981.
13. Jehle, Robert E. and David F. Hudson, "Impulse transmitter and quantum detection radar system," U.S. Patent No. 5,095,312, Mar. 10, 1992.
14. Kim, Anderson H., Maurice Wiener, Stephen Levy, and Robert J. Zeto, "Light activated high power integrated pulser," U.S. Patent No. 5,146,075, Sept. 8, 1992.
15. Third International Conference on Ground Penetrating Radar, Abstracts of the Technical Meeting, May 14-18, 1990, U.S. Geological Survey Open-File Report 90-414, May 1990.
16. Kolchum, E.H., "GPS and other new technologies help clear ordnance from Kuwaiti desert," Aviation Week and Space Technology, April 27, 1992, pp. 54-55.
17. LaHaie, Ivan J., ed., "Ultra-wideband Radar," Proceedings of the SPIE, Vol. 1632, 22-23 Jan. 1992, SPIE, Bellingham, WA, 1992.
18. Vickers, Roger S., ed., "Ultrahigh Resolution Radar," Proceedings of the SPIE, Vol. 1875, 20 Jan. 1993, SPIE, Bellingham, WA, 1993.
19. Dubey, Abinash C., Ivan Cindrich, James M. Ralston, and Kelly Rigano, eds., "Detection Technologies for Mines and Minelike Targets," Proceedings of the SPIE, Vol. 2495, 17-21 Apr. 1994, SPIE, Bellingham, WA, 1992.
20. Cindrich, Ivan and Nancy K. Del Grande, "Aerial Surveillance Sensing Including Obscured and Underground Object Detection," Proceedings of the SPIE, Vol. 2217, 4-6 April 1994, SPIE, Bellingham, WA, 1992.
21. "Micropower Impulse Radar (MIR) Technology Overview," Lawrence Livermore National Laboratory, March 27, 1995.
22. McEwan, Thomas E., "Ultra-wideband radar motion sensor," U.S. Patent No. 5,361,070, Nov. 1, 1994.
23. McEwan, Thomas E., "Ultra-wideband radar receiver," U.S. Patent No. 5,345,471, Sept. 6, 1994.
24. Sheby, David and Vasilis Marmarelis, "High-Order Signal Processing for Ultra-Wideband Radar Signals," Chapter 11 in Introduction to Ultra-Wideband Radar Systems, J.D. Taylor, ed., CRC Press, Boca Raton, FL, 1995.
25. Fullerton, Larry W., "Time domain radio transmission system," U.S. Patent No. 4,813,057, Mar. 15, 1989.
26. Fullerton, Larry W., "Spread spectrum radio transmission system," U.S. Patent No. 4,641,317, Feb. 3, 1987.
7 Ultra-Wideband Technology for Intelligent Transportation Systems
Robert B. James, Jeffrey B. Mendola, and James D. Taylor
CONTENTS
7.1 Introduction
7.2 UWB Overview
7.3 Intelligent Transportation System (ITS) Overview
7.4 UWB Technology Applications
7.5 Conclusions
References
7.1 INTRODUCTION
Ultra-wideband (UWB) radio and radar is a new technology with a wide range of applications, including range measurement, materials penetration, and low probability of interception and interference communication systems. Ultra-wideband signals are unusual because they have a bandwidth greater than 25 percent of the center frequency, compared to less than 1 percent for conventional radar and radio signals. Recent developments make it possible to build a transmitter and receiver on a single chip at low cost. This versatility and low cost may make UWB a key technology for future highway and vehicle control systems. This chapter is about the concepts of intelligent transportation systems (ITSs) and how UWB technology can solve some basic requirements better than competing technologies. Some possible UWB technology applications include automatic vehicle identification (AVI), advanced traveler information systems (ATISs), advanced traffic management systems (ATMSs), transportation planning, collision avoidance, and automated highway systems (AHSs). This chapter starts with a brief overview of UWB technology, gives an overview of ITS, and finally discusses ways to apply UWB technology to specific ITS problems.
7.2 UWB OVERVIEW
Ultra-wideband technology is a new field of communications and radar that uses ultra-short pulses of energy and complex pulse trains for sensing and communication. The distinguishing characteristic of ultra-wideband signals is their large ratio of instantaneous bandwidth to center frequency. For comparison, normal narrowband signals have bandwidth-to-center-frequency ratios of around 0.01 or less, and wideband signals (e.g., spread spectrum signals) have bandwidth-to-center-frequency ratios of about 0.01–0.25. UWB signals, by contrast, have
bandwidth-to-center-frequency ratios of 0.25 or larger by the American definition (Russian texts classify UWB as having a bandwidth of 100 percent of the center frequency). The large bandwidth means that UWB signals can carry more information (such as range resolution, target interaction, and data) than can narrowband signals. Ultra-wideband impulse signals can be produced by generating an electromagnetic energy pulse that may be about a nanosecond in duration. This makes the pulse duration on the same order as one RF cycle at 1 GHz. The applications described here assume ultra-wideband impulse signals, sometimes called nonsinusoidal signals. There are other UWB waveforms that may have ITS applications in the far future; however, these will not be discussed here. In addition to the short duration pulses, these devices can be made with very high pulse repetition frequencies on the order of 1 MHz (1 µs pulse repetition interval). Varying the interval between impulses can modulate the signal to carry information.1,2,3 By using correlation of pulse interval coded signals, these devices can both communicate and measure distances very exactly. The ITS applications suggested here will use very low radiated power levels, usually less than a milliwatt. These capabilities are possible with current solid-state electronics, and low-cost chips can be made to perform a wide range of applications. As currently foreseen, the UWB devices for an ITS will be relatively simple and use off-the-shelf technology. Therefore, UWB devices can offer accurate location and sophisticated communication at an affordable price.
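The fractional-bandwidth definitions above can be summarized in a few lines; the sketch below (illustrative function names and example frequencies, not from the chapter) classifies a signal by its bandwidth-to-center-frequency ratio.

def fractional_bandwidth(f_low_hz, f_high_hz):
    """Bandwidth divided by center frequency, the ratio used to classify UWB signals."""
    return (f_high_hz - f_low_hz) / (0.5 * (f_high_hz + f_low_hz))

def classify(f_low_hz, f_high_hz):
    fb = fractional_bandwidth(f_low_hz, f_high_hz)
    if fb >= 0.25:
        return "UWB (at least 25 percent of center frequency, American definition)"
    if fb > 0.01:
        return "wideband (roughly 1 to 25 percent of center frequency)"
    return "narrowband (about 1 percent of center frequency or less)"

# A signal occupying 3.5 to 4.5 GHz has a 0.25 fractional bandwidth and meets the UWB definition.
print(classify(3.5e9, 4.5e9))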
7.3 INTELLIGENT TRANSPORTATION SYSTEM (ITS) OVERVIEW
7.3.1 WHAT IS AN INTELLIGENT TRANSPORTATION SYSTEM?
The phrase intelligent transportation systems (ITS) describes a number of different concepts that will apply advanced technology such as remote sensors, computers, communication, and automatic control to highway transportation problems. Examples of these systems include electronic signs that warn drivers of upcoming congestion, electronic toll booths that can automatically collect tolls as the vehicle passes, systems that detect accidents and then coordinate the response to them, and highways that allow fully automated driving by controlling the vehicles. Some functions of an ITS include the collection, distribution, and use of various information technologies to improve the transportation system's safety, efficiency, and mobility. The long-range ITS program objective is to permit more vehicles to safely use existing highway systems. For example, an automated highway system (AHS) will automate highway driving, and an electronic toll tag management (ETTM) system will use radio links to automate toll collection.
7.3.2 ITS COMPONENTS AND APPLICATION AREAS
Most ITSs will have the same basic components. All ITSs will need sensors and communication transceivers to collect and distribute information. Since most ITS applications must remotely sense objects or communicate with mobile receivers, these links usually involve some sort of wireless technology. Remote sensing of vehicles and communications from the highway control system to vehicles, or between vehicles, will definitely require wireless communications. However, for other parts of the system, such as communications between stationary sensors, wire line technology such as fiber optics may be more practical. Once the sensor elements have collected traffic data, some central control system can convert the data into information and directions to control the highway and vehicles traveling on it. UWB technology is seen as useful in implementing the wireless sensors and transceivers for many applications, and so this chapter will deal with these components in an ITS.
7.3.3 ITS REQUIREMENTS
Sensor and transceiver technology for an ITS must meet some specific requirements. For example, the AHS needs accurate vehicle location data, whereas some other systems only need to know
when a vehicle passed a certain point. Systems that will send information and control signals to vehicles, or that use multiple distributed systems, will require some effective multiple access communications technology. Crowded environments, such as urban highways, will require that sensors and communications have high immunity to multipath and other interference. In addition, for applications that, like AHS, have a large number of mobile users, a low-cost technology would be beneficial to reduce the cost of each mobile receiver. Any technology that meets all these requirements will be very attractive, since it could be used in many different systems.
7.3.4 AVAILABLE TECHNOLOGIES
In addition to UWB systems, there are some other potential wireless technologies that the ITS could use. Passive object sensing technologies include infrared (IR), magnetic, sonic, millimeter wave, and electro-optical and video devices. Active sensors and transponders can use IR, ultrasonic, and microwave technologies for detection, location, and low-data-rate communications. Higher-data-rate communications would most likely require a conventional radio link. Although all these technologies could be used, it will be shown that UWB technology has some particularly attractive features for ITS applications.
7.4 UWB TECHNOLOGY APPLICATIONS
7.4.1 PAST AND PRESENT UWB TECHNOLOGY WORK AND SOURCES
There are a number of efforts across the country to develop UWB into marketable devices. It is likely that UWB products will become commonplace because of their low cost and wide range of applications. This section reviews who is working in the area and what has been done.
Dr. Henning Harmuth did much of the important early theoretical work in nonsinusoidal impulse technology.4 He was with Catholic University in Washington, DC, for many years until he retired a few years ago. He still works on his own with Russian, Ukrainian, and other foreign researchers to develop a large current antenna device to make UWB devices capable of transmitting extremely high powers.
Dr. Merrill I. Skolnik, well known as the author of Introduction to Radar Systems and editor of The Radar Handbook, was a long-time Director of Radar for the Naval Research Laboratory in Washington, DC. He became interested in UWB radar for military applications in 1980 and has written technical papers on the subject.5 He has pursued UWB as a personal area of interest.
In 1994, Lawrence Livermore Laboratories announced the successful development of the Micropower Impulse Radar (MIR). They are currently licensing the technology for various commercial applications. They have released specifications for a wide range of chips with many potential applications including medical; speech; security; energy conservation; residential, commercial, and industrial automation; transportation; entertainment; materials evaluation; tools; communications; underground object detection, including buried mines and ordnance; military radar; and a radar camera.1 The MIR devices are estimated to cost about $10 in sufficient production quantities, so the low cost and advantages of radar over competing technologies will make it competitive with ultrasonic and optical sensors. Amerigon has recently acquired a Lawrence Livermore license to investigate MIR technology for short-range vehicle radar applications.
Aetherwire & Location, Inc., based in California, continues to do some of the most advanced work on cooperative UWB location and communication. They have used ARPA funds and venture capital to develop a chip that can co-locate other chips with centimeter accuracy and communicate with the devices. These devices will use cooperative ranging and be capable of much longer ranges than the Lawrence Livermore devices that use a one-way travel path. Also, they have implemented a sophisticated receiver front end to carry out a large amount of signal correlation that can uniquely
identify each chip. Indications from the company are that the devices can be made for $20 in quantity. As the system is released, more information will be made available to the public.
Pulson Communications is a small company that has a number of patents on UWB technology as it relates to communications. The company has made breadboard-type demonstrations of UWB communications over long ranges.3 One demonstration involved sending music over a link separated by several kilometers. Also, they have worked with Professor R. A. Scholtz, from the University of Southern California, to show the multiple access communication capabilities of UWB technology.2
Lieutenant Colonel James D. Taylor, USAF (retired), edited and published Introduction to Ultra-Wideband Radar Systems in 1995.6 This was the first book specifically written to present a complete overview of ultra-wideband theory, phenomenology, technology, and systems design.
The annual SPIE Aerosense technical conferences have been the principal UWB technical forum since 1992. These yearly conferences present the latest work in UWB radar and related signal processing. The IEEE Radar and Electromagnetic Compatibility annual conferences also present UWB work. There are also yearly conferences on ground probing radar sponsored by NIST and the electromagnetic pulse (EMP) conferences sponsored by the U.S. Air Force Phillips Laboratory in Albuquerque, NM, which cover related technology and practical applications.
At the 1995 ITS America annual meeting, authors from the Virginia Tech Center for Transportation Research (CTR) published a paper titled "An Infrastructure Controlled Cooperative Approach to Automated Highways."7 The National Automated Highway System Consortium selected that concept as one of the initial concepts it will explore. At this stage, there are many concepts being studied, and only a few will be chosen by the consortium for prototype development.
7.4.2 IMPORTANT UWB TECHNOLOGY FEATURES
Range Finding
The initial intended application for UWB technology was ranging and detection of small buried or concealed objects. UWB ground-probing radars have been used in geophysical surveying, construction, and agriculture for many years to measure water, look for rock formations, and measure ice thickness. The wideband nature of the signal allows it to penetrate many solids and fresh water with various degrees of penetration. Electromagnetic signals will be reflected by any sudden change in the index of refraction, which is why the ground-probing radar is able to provide subsurface profiles and detect underground water sources, problems in rock formations, etc. The short pulses provide a location accuracy of better than several centimeters over several meters of range. The possibilities for UWB radar expanded significantly when it was discovered that high-power UWB impulses could be generated with solid state devices. Developments in light activated semiconductor switching (LASS) have been driving high-power impulse development. Many military applications started with a requirement to obtain better accuracy over longer ranges. However, radar detection at long ranges still requires energy, and the gigawatt power levels needed for long-range, high-resolution detection of small targets still present some practical problems. For short-range ranging and sensing, the UWB MIR radar chip offers the advantages of precise ranging with relatively low radiated power levels. Chapter 6, "The Micropower Impulse Radar," has a detailed explanation of UWB chip radar systems. To fully appreciate the advantages of UWB chip sensors, consider that getting centimeter ranging accuracy with conventional narrowband pulse radar techniques requires at least millimeter-wave transmitters and very short pulse lengths, on the order of tenths of nanoseconds. Millimeter waves happen to be easily absorbed by the atmosphere and have a limited range. Ultrasonic, IR, and UV sensors have similar attenuation problems due to moisture, aerosols, and other factors. The UWB chip sensor operates roughly in the 1 to 10 GHz frequency range, which keeps it away from atmospheric absorption problems. It also has a low duty cycle, about 10^-6, so that the average
radiated power is in the microwatt range. The only limitation is generating a strong enough pulse to obtain the ranges needed by a particular application. The existing 1 ns pulse duration technology can get 15 cm accuracy at ranges in the tens of feet, with ranges of hundreds of feet possible with received signal range gating and integration.1 By using cooperative ranging, differences in arrival times of 50 ps or less are measurable, yielding accuracy of a fraction of a centimeter. At this time, breadboard technology demonstrations show even better performance, resulting in ranges of a kilometer or more using milliwatts of power. Present research is attempting to increase the capabilities of UWB radar chip technology.
Multiple Access Communications
A number of vendors are developing UWB multiple access communications devices such as those of Scholtz and Pulson Communications. Mr. R. Scholtz, of Bedford, MA, developed a time-hopping scheme that uses pulse position modulation to encode and send information.2 This analysis shows the feasibility of operating a large number of simultaneous channels with this technology. Figure 7.1 shows a picture of a UWB signal that uses this time-hopping technique. Other modulation techniques, such as phase coded pulse trains, can be used. Pulson Communications has demonstrated data transmission rates of hundreds of kilobits per second (kbps) over multiple-kilometer ranges.3 One of their tests showed transmission of 125 kbps at a range in excess of 7 km with a raw bit error rate (BER) of 10^-3. The effective isotropic radiated power was below 100 mW. The pulse train correlation capabilities of UWB technology can provide ways to identify returns from individual transponders. If each UWB transponder can respond with a code that varies the pulse train sequence, then a receiver can be set to receive only that code and to identify and determine the range to a particular transponder. Such a system could monitor the individual packages in a warehouse. Each package would carry a transponder set with an individual code. Simulations have shown that if a chip were placed on each package in the warehouse and there were a million packages, then the UWB locating system could precisely locate any one package within a matter of seconds. The same principle could be applied to a crowded traffic environment where this form of signal separation is vital to safe operations. The combination of range finding and multiaccess communications in one package makes UWB ranging and identification technology well suited for many different ITS applications.
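The range-finding figures quoted above follow from two simple relations: range resolution is about c*tau/2 for a pulse of width tau, and the average radiated power is the peak power scaled by the duty cycle. A minimal sanity check (assumed round numbers, not a design calculation):

C = 3.0e8                       # approximate speed of light, m/s

def range_resolution_m(pulse_width_s):
    """Two-way range resolution, c*tau/2, for a simple impulse."""
    return C * pulse_width_s / 2.0

def average_power_w(peak_power_w, duty_cycle):
    """Average radiated power of a pulsed transmitter."""
    return peak_power_w * duty_cycle

print(range_resolution_m(1e-9))      # 0.15 m (15 cm) for a 1 ns impulse
print(average_power_w(1.0, 1e-6))    # 1 W peak at a 10^-6 duty cycle -> 1 microwatt average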
FIGURE 7.1 UWB time-hopping impulse position modulation. Varying the interval between pulses can eliminate interference with narrowband systems. If each transmission is composed of time-coded impulses, then the system will have capabilities for identification or information exchange. (The figure compares normal pulse positions with positions shifted by modulation and pseudorandom coding.)
UWB technology-based ranging, identification, and multiple access communications could be easily built using a common multipurpose sensor/transceiver for multiple applications.
Interference Rejection
The capability to operate in a crowded RF environment without interfering with, or experiencing interference from, other narrowband communications systems or other similar devices is a key feature for UWB sensing and communications ITS devices. The ITS devices' UWB communications will use a train of subnanosecond pulses, which may be separated by less than a microsecond in some cases. To prevent these trains of pulses from interfering with narrowband communications, the timing of the pulse-coded signals will be varied with a pseudorandom code, which spreads the pulse train energy over a broad spectrum to avoid setting up any continuous-frequency signal that is detectable by narrowband sources. Figure 7.1 shows this principle of pseudorandom coding. Many unique pseudorandom codes can be generated, based on the size of the pulse train and the number of unique spacings that can be obtained between pulses. The coding may be random to prevent interference in simple range sensors, or varied according to some known modulation scheme for communications. This type of coding permits a large number of devices to operate or communicate a large amount of information without mutual interference. As techniques are developed to better control the spacing of the pulses, the amount of information that can be communicated will increase. Also, the high pulse resolution provides a natural ability to reject multipath transmissions from cars, buildings, bridges, etc. Path differences of greater than 4 cm are easily rejected with UWB sensor technology.
Spectrum Issues
The advantage of UWB sensors and communications for ITS is that each UWB pulse is spread over a very wide instantaneous bandwidth. If the UWB device signal is either pseudorandom pulse-code modulated or transmitted with a random interval between impulses, then the signal energy remains spread over a broad band and does not have a chance to create any constantly interfering signal. The argument for using UWB devices is that their power levels are so low (microwatts) and the duty cycle relatively small, at about 0.1%, that the average radiated power is spread over about 1 GHz of bandwidth. For example, the power spectral density of a 1 GHz bandwidth UWB signal would be 30 dB down from a 1 MHz bandwidth signal of the same power. While there is potential for interference within a few feet of the UWB device, the problem may be solved by proper choice of impulse shape and filtering to remove frequency regions that actively interfere with some other narrowband users. The low cost and many potential applications of UWB devices are legitimate concerns to other spectrum users. Some researchers think that the greater UWB device interference problem may turn out to be narrowband interference with the ITS UWB devices. Consider that the proposed ITS UWB operating region overlaps many licensed spectral regions. This means that there is a potential problem of interference from narrowband signals, which will raise the ambient noise levels and degrade the sensors' signal-to-noise ratio. The potential problem is that interference may either degrade ITS UWB device performance or require higher-power devices to overcome the effects of interference to meet ITS requirements.
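The 30 dB figure quoted above is simply the ratio of the two bandwidths expressed in decibels; a short check with illustrative values:

import math

def psd_reduction_db(narrow_bw_hz, wide_bw_hz):
    """Drop in power spectral density when the same total power is spread over a wider bandwidth."""
    return 10.0 * math.log10(wide_bw_hz / narrow_bw_hz)

# The same power spread over 1 GHz instead of 1 MHz gives a 30 dB lower spectral density.
print(psd_reduction_db(1e6, 1e9))    # 30.0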
The interference issue will require some careful planning and coordination during the early stages of design. Official approvals to use UWB systems may prove to be a major stumbling block in building an ITS with UWB technology. Any proposal to use UWB radar sensors or communications should be coordinated with the proper FCC and NTIA approval authorities early in the design process. Contacts at the FCC have indicated that a ruling on these devices is possible in the coming years and that the FCC is likely to modify Part 15 to include UWB devices with some power limitations.
7.4.3 POTENTIAL ITS APPLICATIONS OF UWB TECHNOLOGY
UWB technology can provide both communications and radar sensing capabilities for ITS applications. Figure 7.2 shows some potential ways that UWB technology can support ITS. The following sections will discuss these applications in detail.
7.4.4 VEHICLE SENSING AND IDENTIFICATION
Applications
Moving vehicle sensing and identification on particular roadways is a major ITS system requirement. Short-range communication links and sensors can perform the detection and identification task more effectively than other methods. Examples of applications include automated vehicle identification (AVI), electronic toll tag management (ETTM), advanced traffic management systems (ATMS), transportation planning, and traffic surveillance.
Existing AVI systems use wireless technologies to extract a vehicle ID from a vehicle passing through a beam. Many of these systems are narrowband systems operating around 902 MHz or 2.4 GHz. These devices are used in both highway and rail systems to identify vehicles passing a specific site. These AVI systems must use complex processing to prevent misidentification of interference sources. Transportation agencies using these systems need to make these devices readable and writable to allow for future applications. UWB devices would easily be able to perform the functions of these AVI devices. Their low susceptibility to interference, low power requirements, low cost, location capability, and correlation capability make them ideal for these and future applications. Such a system could acquire and track vehicles throughout a whole region by using multiple omnidirectional antennas and appropriate interconnections between sensing stations. The advantage of UWB sensor and communications devices is that they can easily reject interference sources. Short pulses and fine range resolution eliminate the problems of multipath signal reception. If the device uses correlation detection, then the correlator can reject interference from other UWB devices and reduce narrowband interference by 30 dB or greater.
Electronic toll tag management (ETTM) is a potential near-term AVI system application. The ETTM identifies vehicles for automatic toll collection. People with AVI technology on their vehicles would be able to set up credit or debit accounts for roadway tolls.
FIGURE 7.2 ITS applications of UWB technology: providing ranging and communications between vehicles and the roadside and between vehicles. (Applications shown: ATIS, AHS, roadside-to-vehicle and vehicle-to-vehicle communications, collision avoidance, AVI/ETTM, ATMS, and transportation planning.)
Whenever the vehicle passes a toll station, it would be identified, and the toll would be charged to the user's account. The objective is a system that can automatically and reliably identify a vehicle that passes through a toll sensing area at highway speed. Any requirement for the vehicle to slow down defeats the objective of highway safety and increased usage. If the system had a two-way link and receiver in the vehicle, then the roadside toll collection system could transmit back to the vehicle the exact amount charged against the user's account. This feature would give users more confidence in the system. An early application of ETTM technology to commercial vehicles could save significant time at weigh stations and border crossings.
ATMS is another application that could use an AVI-like technology. An ATMS uses various sources of information to perform dynamic traffic assignment. The goal of dynamic traffic assignment is to keep a highway system operating efficiently by using information about traffic incidents and congestion to divert traffic away from these bottlenecks and control traffic approaching them. An ATMS would also help coordinate the emergency response to an incident. The ATMS can use UWB devices to measure link travel time from one point to another. The vehicle ID can be extracted with a probe vehicle equipped with the device; later segments of the roadway can then look for that ID to determine the time that the probe vehicle took getting from one point to another. Gathering accurate statistical data for highway control will require using a number of probe vehicles. Given enough roadside devices, the exact vehicle track can be monitored throughout its trip. Evaluation of probe vehicle statistics can indicate the location of incidents or potential traffic flow problems. A more elaborate ATMS could watch the probe vehicle lane changes and decelerations to give more information to the traffic management system.
Vehicle identification can be useful in transportation planning by collecting information needed to construct an origin-destination (OD) matrix. This matrix statistically describes the routes taken by vehicles in a certain area. Knowing this information helps in transportation planning by showing where a new road should be placed to reduce the congestion in an area. UWB devices can be used in an OD matrix collection project to effectively track a percentage of probe vehicles distributed through the system. Accurate traffic modeling within a region will be the basis for spending hundreds of millions of dollars on construction. Only simple vehicle counts are presently used to estimate OD matrix values. Determining exact OD paths from the AVI system data collected from UWB sensors would be a far more reliable source of planning information than what is presently available. In the past, OD matrices were prepared by doing costly and time-consuming traveler interviews. Randomly equipping vehicles in a region with a UWB device, or using devices already deployed for AVI or some other application, is a potentially cost-effective way to collect accurate OD information.
There are several types of traffic surveillance devices currently used to count traffic flows and monitor traffic activities, and most of these devices have inductive loops embedded in the roadway. However, the high cost of maintaining these systems has led to the development of new embedded and above-the-road technologies. UWB sensors offer a possible replacement for inductive-loop devices.
Because UWB signals can penetrate solids and have communications capabilities, they can be embedded in the pavement and still monitor the presence of a vehicle above. A single chip can be embedded an inch or so in a pavement and still obtain the reflection of a metallic object a few feet above the roadway. The device could also periodically communicate counts to a roadside data collection site.
AVI Roadside Receiver
The practical implications of using UWB devices solely for vehicle sensing are great. When vehicle identification is required, the devices will have to be part of a transponder system and communicate information to and from the vehicle. The low duty cycle of the UWB signal allows for a relatively simple protocol for the identification process. There will be one roadside UWB beacon located where identification needs to be performed. This roadside device will periodically
send out a message encoded as a short sequence of pulses. Vehicle-based UWB transponders will recognize this signal and will realize that the vehicle is approaching the beacon. When in range, each vehicle's transponder device will transmit back a coded sequence of pulses showing its unique identification number. Because there will be multiple vehicles on the roadway, it will be possible that some vehicles' signals will collide; however, the signal's low duty cycle can minimize this effect. For example, consider the following situation. A roadside beacon that interrogates approaching vehicles has a pulse repetition interval (pri) of 2 µs and uses a 2 ns impulse width. This will give an unambiguous range of 300 m (about 984 ft). Assume that signal levels will be too low for reception beyond the unambiguous range. Therefore, vehicles traveling at 100 km/hr will be within the beacon's range for 10.8 s. Both the interrogation packet and vehicle identification packet signals have the same basic structure. Both the interrogator and vehicle will send UWB pulses about 2 ns wide with a 2 µs pri. Given these conditions, both signals have a duty cycle of 0.001. If we take the case where each vehicle responds to the beacon signal by transmitting its ID code, then a 30-bit ID code packet would provide more than enough unique identification codes for every vehicle in the U.S. To ensure accurate identification, each vehicle ID transponder system could encode each bit of the ID signal using multiple pulses to give an error correcting capability. Assuming that 10 pulses are used to encode each bit and that one pulse is sent per pri, the length of the ID packet will be 30 × 10 × 2 µs (0.6 ms). It is assumed that the interrogation packet also uses a 30 bit/10 pulses per bit/1 pulse per pri format, even though the packet does not need to be this long. Thus, the interrogation packet will also be 0.6 ms long. Each interrogator will transmit its total signal, and the vehicle's transponder will not respond until it receives every pulse of every bit of the interrogation packet. This feature prevents the vehicles from accidentally responding to any other UWB signal, such as signals from nearby vehicles. Also, by using different codes for the interrogation packets, different systems in the same area will be able to solicit responses from different vehicles. Given this arrangement, the beacon will interrogate the vehicles approximately every 1.2 ms. Because the vehicles are within the unambiguous range for a 2 µs pri, every vehicle's ID code will start to arrive at the beacon within 1 pri after the end of the interrogation signal, neglecting the processing delays. The vehicle IDs will definitely overlap in this system, but the individual pulses of each ID will not overlap (at least for vehicles within the same lane), because they have a duty cycle of 0.001 and fall into different range gates. Thus, each pulse can be correctly received. Each receiver will require processing to determine which pulse belongs to which ID, an operation similar to a TDMA system deinterleaver. In operation, the receiver will first perform pulse detection to determine where all the pulses are. Then the receiver will go back to the raw receiver signal to determine the modulation for each detected pulse, using the pulse detection results.
Since each pulse of a particular vehicle's ID falls into a range gate that will be relatively constant over the length of the ID (0.6 ms), the receiver will be able to sort out the pulses into their correct IDs. This will require extra, but not necessarily complex, processing. If the beacon needs extra processing time, then it can always delay the transmission of the next interrogation packet. Time should not be a problem, because the vehicles will normally receive thousands of interrogation packets while in the interrogator's unambiguous range. Transferring small amounts of data back to individual vehicles, or transmitting the vehicle IDs to a central system, are other reasons to delay transmitting the next interrogation pulse. By using this response method, each transmitted pulse would be only 2 ns in duration and have a physical length of about 0.6 m, or about 2.0 ft. The chance of overlapping vehicle responses becomes much less given this method of vehicle identification. Figure 7.3 shows the general signal structure of the interrogation and response sequence when the interrogation and ID packets are both 4 pris long. Figure 7.3 shows that the low duty cycle of the UWB signal causes the responses of the different vehicles to be interleaved.
FIGURE 7.3 General structure of the interrogation and response sequence. (The figure marks one pri, the interrogation packet, and one particular vehicle's ID packet; each packet element is one pulse or a sequence of pulses.)
This is because the vehicles will be at different ranges from the beacon and will therefore fall in different range gates. There are a number of potential identification modulation systems, including pulse position modulation. The numbers used here give a feel for the durations of the signals and suggest the feasibility of using UWB impulse interrogators to identify and determine vehicle ranges. Further design is needed to determine the values that will best suit the system. Assuming that the beacon signal uses a similar coding scheme for its interrogation packet, there will be a large number of interrogation and response sequences while the vehicle is in range. For example, when the interrogation and ID packets are 300 pris long, the beacon will receive approximately 9000 responses from the vehicle while it is in range. If the range of the beacon signal is set less than the unambiguous range, the roadside receiver would be able to determine each vehicle's signal, because it will fall into a particular range gate, as was shown in Figure 7.3. This is also useful if the system wants to track the vehicle. The range gate will be approximately constant during the reception of the ID packet because, at 100 km/hr, a vehicle's position will only change 2.8 cm per millisecond. The only problems that could occur would be overlapping between the packets of more than one vehicle within the same range gate. Since two vehicles cannot occupy the same space, this would only happen between vehicles in different lanes. (See Figure 7.4.) The probability of this occurring would be very small, due to the signal's low duty cycle. Also, since the identification process is repeated many times while the car is in range, it is very likely that there will be numerous responses that are not involved in overlaps.
FIGURE 7.4 Situation where two vehicle transponders fall in the same range gate relative to the beacon.
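The timing figures in the example above can be checked with a few lines of arithmetic. The sketch below simply restates the chapter's assumed numbers (2 µs pri, 2 ns pulses, 30-bit IDs at 10 pulses per bit, vehicles at 100 km/hr):

C = 3.0e8                                       # m/s
PRI = 2e-6                                      # pulse repetition interval, s
PULSE_WIDTH = 2e-9                              # s
BITS_PER_ID = 30
PULSES_PER_BIT = 10
SPEED = 100 / 3.6                               # 100 km/hr in m/s

unambiguous_range = C * PRI / 2                 # 300 m
dwell_time = unambiguous_range / SPEED          # about 10.8 s within range
duty_cycle = PULSE_WIDTH / PRI                  # 0.001
packet_length = BITS_PER_ID * PULSES_PER_BIT * PRI   # 0.6 ms per packet
cycle_time = 2 * packet_length                  # interrogation plus response, about 1.2 ms
responses_per_pass = int(dwell_time / cycle_time)    # about 9000 chances to read each ID

print(unambiguous_range, dwell_time, duty_cycle, packet_length, responses_per_pass)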
If the only function of this system were vehicle identification, then the system could be simplified greatly. The range could be reduced, since the receiver would not need thousands of responses to identify the vehicle. Also, directional antennas could be used to cover individual lanes. However, this AVI system has many other potential uses that would require this complexity. During the time after the vehicle has been identified but is still in range, two-way communications could be performed. For example, the driver could be provided with various types of traveler information, such as the congested traffic ahead or the facilities available at the next exit. In a highly evolved ATMS that has many beacons, the system would not only want to identify vehicles, but it would also like to track them as they move through the highway system. In an ETTM system, the ability to identify all vehicles with one roadside antenna is desirable. This is because individual antennas for each lane could open up the possibility of the vehicle not being identified by unintentionally changing lanes or by intentionally driving between two lanes. The development of link protocols is beyond the scope of this discussion. However, by careful choice of protocols, an adequate system could be designed that has the appropriate requirements of probability of detection, exact identification, and false alarm needed to build a safe and reliable highway control system.
The roadside receiver for this system would look like a radar receiver with much more processing and computational capability. The incoming signal will first pass through a pulse detection circuit. Here, each vehicle's signal will be separated according to the range gate it occupies. Then the coding on each vehicle's response is checked to get the vehicle ID. This ID is checked against the IDs in a temporary buffer containing the recently received IDs. When a vehicle's ID has been received for the first time, it will be entered into the buffer and will stay there until the vehicle passes out of range. This process allows the receiver to keep track of all received ID numbers and permits more processing to improve system reliability. Using this procedure, the receiver processor will decide that it has identified a vehicle only after it has received multiple responses from that vehicle. In this way, the receiver can throw out spurious ID numbers that have resulted from detection process errors. ID numbers of identified vehicles that have passed out of range will be removed from the database after those numbers have not been received for some time period. The receiver can improve the identification process by using the fact that a particular vehicle's range gate should change only slightly between successive responses. Figure 7.5 is a block diagram of this receiver, which shows its major functions and how it can determine vehicle speed and other location information.
FIGURE 7.5 Block diagram of the roadside receiver in the UWB automatic vehicle identification (AVI) system. (Processing blocks: input signal, pulse detection, extract ID, compare with buffered IDs, add new IDs, remove old or spurious IDs, update current IDs, confirm reliably received IDs, and perform application-specific response: billing, transmit response, tracking.)
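The buffering and confirmation logic just described (and summarized in Figure 7.5) might look like the sketch below. The class name, thresholds, and timing values are hypothetical; the sketch only illustrates confirming an ID after several consistent receptions and expiring IDs that stop arriving, and it omits the range-gate consistency check mentioned above.

import time

class AVIReceiverBuffer:
    """Toy model of the roadside receiver ID buffer: confirm after repeated hits, expire stale IDs."""

    def __init__(self, confirm_after=5, expire_after_s=15.0):
        self.confirm_after = confirm_after      # responses needed before an ID is trusted
        self.expire_after_s = expire_after_s    # drop IDs not heard from for this long
        self._ids = {}                          # vehicle_id -> (hit_count, last_seen)

    def report(self, vehicle_id, now=None):
        """Record one decoded response; return True once the ID is considered confirmed."""
        now = time.monotonic() if now is None else now
        count, _ = self._ids.get(vehicle_id, (0, now))
        self._ids[vehicle_id] = (count + 1, now)
        self._purge(now)
        return count + 1 >= self.confirm_after

    def _purge(self, now):
        """Remove spurious or out-of-range IDs that have not been received recently."""
        stale = [vid for vid, (_, seen) in self._ids.items()
                 if now - seen > self.expire_after_s]
        for vid in stale:
            del self._ids[vid]

buf = AVIReceiverBuffer()
for _ in range(5):
    confirmed = buf.report(vehicle_id=123456, now=0.0)
print(confirmed)    # True after the fifth consistent response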
Once the vehicle has been identified and tracked, application-specific processing can be carried out and, if necessary, a two-way link can even be established. For example, in an ETTM system, the IDs would be used to bill tolls to the correct users' accounts. If the system is only monitoring traffic, then the same UWB device could also transmit the vehicle IDs to various locations that need traffic data.
Automated Highway Systems (AHS)
AHS was the application area that brought ultra-wideband technology to the attention of Virginia Tech researchers. They were looking for a technology that could provide the following:
1. The centimeter ranging accuracy required to safely control the vehicles
2. Vehicle position update rates that would allow a system to respond orders of magnitude faster than a human driver
3. The ability to communicate between vehicles and the roadside
They first examined millimeter wave (>40 GHz) devices as a way to obtain the desired accuracy; however, atmospheric absorption and hardware costs immediately limited the potential usefulness. They then noticed UWB technology being used in impulse radar but also needed a cooperative communications capability. They found that there was an ARPA project developing UWB technology for both location and communications, and this seemed to be an answer. UWB devices could provide centimeter accuracy and communicate information using solid state electronics. It was later found that numerous other projects were developing aspects of UWB and that it was a mature technology. It was quickly apparent that UWB technology had a wide range of potential applications in AHS concepts requiring sensing, communications, and sensor fusion. Almost all AHS concepts have some need for ranging and vehicle-to-vehicle or vehicle-to-roadside communications. The following sections discuss specific applications of UWB technology to those problems.
Vehicle-to-Vehicle Communications
A number of AHS concepts have suggested using vehicle-to-vehicle communications to stabilize the control of vehicles in a system that uses platooning. Platooning is an approach to vehicle control that separates the vehicles on a roadway into independent groups. Within these groups, or platoons, the vehicles are spaced very close together, say a few meters, and their movements are coordinated so that the platoon acts as a unit. Using a communications link, the head vehicle can inform all nearby vehicles when it plans to start a maneuver, e.g., slow down, speed up, stop, or change lanes. Each following vehicle controller can begin responding before a sensor could detect a change in spacing. The head vehicle could provide both its intended velocity and acceleration, as well as passively providing its location. This would allow greater stability by effectively closing the system loop. Finding a link technology that would not become crowded, could provide reliable point-to-point communications, and would not need line of sight is a difficult problem. UWB technology could provide the correlation capability to uniquely identify signals, reject interference, and see through nonmetallic objects. If the designer carefully places the UWB antennas (for example, on top of the vehicles on a short mast), then the line-of-sight problem can be handled easily. In addition, the location capability can be used along with other sensors to determine relative headway, velocity, acceleration, jerk, etc.
Roadside-to-Vehicle Communications
In many AHS concepts, there will be a need to send some form of advisory signal from the roadside to the vehicles.
Roadside advisory communication can be used to set a desired vehicle speed, desired headway, desired platoon size, etc. In an advanced system, it can be used in the regulatory
control loop to control each vehicle's actuators. UWB communications links have the capability to handle the hundreds of kilobytes per second needed for communicating from the roadside to a specific vehicle. In cases where vehicles use wayside markers to control their path, or if the roadside determines all vehicle locations, then UWB sensor and communications technology can benefit any AHS approach. In most proposed approaches requiring large amounts of information transfer, the roadside-to-vehicle links will be set up and maintained in a similar manner to call handling in a cellular phone network. When a car enters an automated highway, a call-initiation process will start, and the vehicle will be assigned to a dedicated channel. When the vehicle leaves the region covered by one transceiver, a handoff protocol will give the vehicle a new channel assignment to continue control. Many proven cellular telephone techniques, especially those that use CDMA, can be applied to maintaining roadside-to-vehicle links.
The Cooperative Infrastructure Managed System
Many AHS concepts can be characterized as being vehicle based and require that each individual vehicle use sophisticated on-board sensors and processors to make its own driving decisions. Each vehicle essentially acts independently and only needs occasional advisory signals from the roadside or from other vehicles. The Virginia Tech faculty proposed the cooperative infrastructure managed system (CIMS) as an AHS concept using UWB sensing and communications technology to get around the problems of many essentially autonomous vehicles. The CIMS takes a different approach of distributing the system intelligence along the roadside infrastructure instead of on each vehicle. This greatly reduces vehicle system cost and complexity without adding much to the infrastructure. The CIMS concept is to let the infrastructure make the global control decisions, hence the "infrastructure managed" part of the name. The vehicles will still make the high-data-rate regulatory control decisions using information they receive from their own on-board sensors. The vehicle will also translate the general spacing commands it receives from the roadside into specific commands that it can use to control its individual actuators. So the vehicle and the roadside will act cooperatively, with each performing the functions it can do best. The CIMS concept of distributed intelligence also allows another level of cooperation. For the roadside to control every vehicle within a certain region, it must have information about the entire region. Using this information, the controller will be able to make decisions that will allow vehicles to act cooperatively in various situations. For example, if a car has a flat tire, all the vehicles around it can adjust their spacing as a unit rather than each vehicle individually deciding which direction it will move. In the CIMS concept, roadside processors will use vehicle location and movement information to determine an optimal way for all vehicles to use the roadway. The system will control individual sectors as well as the entire automated highway. To carry out this tight control, vehicle movement and location must be measured about once every 0.01 s. When the system determines the desired vehicle response, the roadside stations will transmit the desired acceleration vector to each vehicle. Then each vehicle's processor sends the necessary steering, braking, and throttle actuator commands to carry out CIMS directions.
The rapid command rate means that each vehicle will be able to respond to events faster than if it were operated manually. The vehicles will be able to react cooperatively to certain situations, because the roadside processors direct each vehicle's response based on information about every other nearby vehicle. UWB devices appear to be one of the cheapest and most effective ways to provide both vehicle location systems and roadside communications links to vehicles. The roadside infrastructure will have many transceiver units spaced about 300 m apart to provide for some redundancy and for vehicle location purposes. Each unit will have a UWB transceiver and accompanying processor and be connected together so that the processors can operate in a
distributed manner. These processors will make the control decisions for a section of the roadway. Each vehicle will also have its own transceiver, and the communication link system will function similarly to a cellular phone network. Because each vehicle’s UWB locating and communications device will always be in range of several receivers, each vehicle’s response signal will be received at multiple receivers. The roadside processing and control system can use multiple responses to fix each vehicle. There are several approaches to this, including the hyperbolic location system. Hyperbolic systems use the time delay between different stations to define hyperbolas on which the vehicle must lie. Once ambiguities are resolved, the crossing point of these hyperbolas is the vehicle’s position.
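A minimal sketch of the hyperbolic (time-difference-of-arrival) fix described above, using three roadside stations and a numerical least-squares solver. The station layout, noise-free timing, and solver choice are illustrative assumptions, not part of the CIMS design; real systems must also resolve the hyperbola ambiguities mentioned in the text.

import numpy as np
from scipy.optimize import least_squares

C = 3.0e8                                    # m/s
stations = np.array([[0.0, 0.0],             # roadside transceivers spaced roughly 300 m apart
                     [300.0, 0.0],
                     [150.0, 30.0]])
true_pos = np.array([250.0, 12.0])           # vehicle position to recover (illustrative)

# One-way arrival times at each station; differences relative to station 0 are the TDOAs.
arrival = np.linalg.norm(stations - true_pos, axis=1) / C
tdoa = arrival[1:] - arrival[0]

def residuals(p):
    """Difference between predicted and measured TDOAs for a candidate position p."""
    d = np.linalg.norm(stations - p, axis=1)
    return (d[1:] - d[0]) / C - tdoa

# Seed the solver with a coarse prior position (e.g., from the last known fix).
fix = least_squares(residuals, x0=np.array([200.0, 5.0])).x
print(fix)   # close to (250, 12)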
7.4.5
OTHER ITS APPLICATIONS
Advanced Traveler Information Systems
Another advanced traveler information system (ATIS) goal is to provide users with traveler information, and UWB communications appear to be one of the best candidates. Coupling UWB communications devices with an LCD display, or a simulated voice, could inform travelers of coming traffic jams, dangerous road conditions, exit ramp facilities, or the location of available parking spaces in a central business district. In-vehicle signing is a specific ATIS example application that will display approaching road signs inside the vehicle for the driver. Possible display methods include projecting the information contained in an approaching sign on a heads-up display (HUD) on the windshield, a screen, or an LCD panel. One way to do this is to put UWB transmitters on signs to continually broadcast the message to approaching vehicle receivers. The low cost and wide bandwidth available make these devices attractive candidates for site-specific forms of traveler information. The wide range of uses makes overall deployment both feasible and attractive.
Collision Avoidance and Precrash Restraint Deployment
Collision avoidance is an obvious ITS application of the MIR chip developed by Lawrence Livermore National Laboratory. The current chips have specifications well suited to finding the range to an object near a car. Presently, the limited available MIR detection range could be used both for driver warning while backing and for blind-spot detection. Future chips can have ranges capable of warning of forward objects. The predicted low cost makes UWB MIR vehicle warning systems an affordable aftermarket add-on, or they can be a factory-installed feature for new cars. Detecting an inevitable collision is another potential use of the MIR UWB range sensor. When a sensor determines that the rate of closure of two vehicles is beyond the capability of those vehicles to avoid impact, then the sensors can trigger vehicle air bags to deploy earlier to increase safety (a simple version of this closing-rate test is sketched below). Another feature would be sending an automatic signal to a local emergency services dispatch center to notify it of the collision and its intensity before the collision actually occurs.
Concealed Detectors
Vandalism and unintentional damage to sensors and communication links are a major potential problem in building any ITS. Many devices will be installed where they will be subject to vandalism or other environmental effects. The ability of UWB signals to penetrate concrete and drywall makes their application in high foot-traffic areas more reasonable. Raytheon is looking at UWB as a possible sensor technology for determining space availability in the Med Tech Corridor operational test in Johnson City, TN. The exact extent of penetration for different substances and the effect of changing media on the signal and performance require further research before a device can be developed.
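The closing-rate test mentioned under collision avoidance reduces to comparing the remaining range against the minimum stopping distance implied by the measured closure rate. The sketch below is purely illustrative; the sample interval, braking limit, and vehicle data are assumed values rather than MIR specifications.

# Illustrative precrash decision from successive UWB range samples
# (assumed values, not MIR specifications).

G = 9.81
MAX_DECEL = 0.8 * G          # assumed best-case braking, m/s^2
SAMPLE_INTERVAL = 0.1        # assumed time between range samples, s

def closing_speed(prev_range_m, curr_range_m, dt=SAMPLE_INTERVAL):
    return (prev_range_m - curr_range_m) / dt          # positive when closing

def collision_unavoidable(range_m, v_close, max_decel=MAX_DECEL):
    """True when the remaining gap is shorter than the minimum stopping distance,
    i.e., braking can no longer prevent impact."""
    if v_close <= 0:
        return False
    stopping_distance = v_close ** 2 / (2 * max_decel)
    return range_m < stopping_distance

ranges = [8.0, 6.5, 5.0]                                # metres, three successive samples
v = closing_speed(ranges[-2], ranges[-1])
if collision_unavoidable(ranges[-1], v):
    print(f"closing at {v:.1f} m/s with {ranges[-1]:.1f} m left: deploy precrash restraints")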
Ground Probing Ground probing and buried object detection are among the oldest UWB radar technology uses. Traditionally, UWB impulse radar was used for detecting metallic objects such as mines and ordnance. It can be just as effective in measuring depths of pavement layers and water pockets. Monitoring the condition of pavements, bridge decks, and road beds to estimate the frequency and cost of repairs and resurfacing is a major potential application. Dr. Al-Qadi of the Virginia Tech Civil Engineering Department is currently researching this area. UWB radar technology could potentially save millions of dollars in highway maintenance costs.
7.5 CONCLUSIONS Ultra-wideband communication and radar location is an exciting new technology with a wide range of potential ITS applications. The short ranges, high range resolution, and ability to penetrate nonconducting materials and operate in all weather make UWB devices excellent candidates for ITS uses. ITS applications developers should explore UWB technology for their transportation needs. FCC rulings and large-scale production of UWB devices will make UWB technology more popular in the future. It is reasonable to foresee a day when every car and every mile of highway will be equipped with at least one UWB device.
REFERENCES
1. Micropower Impulse Radar (MIR) Technology Overview, Lawrence Livermore National Laboratory, Livermore, CA.
2. Scholtz, R., "Multiple Access with Time-Hopping Impulse Modulation," MILCOM '93, Bedford, MA, 1993.
3. Impulse Radio Basics, Pulson Communications Co., Atlanta, GA.
4. Harmuth, Henning F., Nonsinusoidal Waves for Radar and Radio Communication, Advances in Electronics and Electron Physics, Supplement 14, Academic Press, New York, 1981.
5. Skolnik, Merrill I., "An Introduction to Impulse Radar," NRL Memorandum Report 6755, Naval Research Laboratory, Washington, D.C., November 20, 1990.
6. Taylor, James D., ed., Introduction to Ultra-Wideband Radar Systems, CRC Press, Boca Raton, FL, 1995.
7. James, Robert D., and Walimbe, Atul S., "An Infrastructure-Controlled Cooperative Approach to Automated Highways," Intelligent Transportation: Serving the User Through Deployment, ITS America, Washington, D.C., 1995, pp. 171–179.
8
Design, Performance, and Applications of a Coherent UWB Random Noise Radar Ram M. Narayanan, Yi Xu, Paul D. Hoffmeyer, John O. Curtis
CONTENTS
8.1 Abstract
8.2 Introduction
8.3 Radar System Description
8.4 Theory of Random Noise Polarimetry
8.5 Results of Simulation Study
8.6 Proof-of-Concept Experimental Results
8.7 Results of Field Tests
8.8 Conclusions
8.9 Acknowledgments
References
8.1 ABSTRACT A novel coherent ultra-wideband radar system operating in the 1–2 GHz frequency range has been developed recently at the University of Nebraska. The radar system transmits white Gaussian noise. Detection and localization of buried objects are accomplished by correlating the reflected waveform with a time-delayed replica of the transmitted waveform. Broadband dual-polarized log-periodic antennas are used for transmission and reception. A unique signal processing scheme is used to inject coherence in the system by frequency translation of the ultra-wideband signal by a coherent 160 MHz phase-locked source prior to performing heterodyne correlation. System coherence allows the extraction of a target’s polarimetric amplitude and phase characteristics. This chapter describes the unique design features of the radar system, develops the theoretical foundations of noise polarimetry, provides experimental evidence of the polarimetric and resolution capabilities of the system, and demonstrates results obtained in subsurface probing applications.
8.2 INTRODUCTION
Ground-penetrating or subsurface radar systems are increasingly being used for a variety of military and civilian applications.1 Although such systems are essentially similar to other free-space radar systems, they present certain unique problems that demand specialized system design and signal processing capabilities. Some of the primary issues that need special attention are efficient coupling of the electromagnetic energy into the ground, elimination of the large reflection from the air-to-ground interface, achieving adequate signal penetration into sometimes lossy media, and achieving
adequate signal bandwidth consistent with desired depth resolution. From a phenomenological point of view, factors such as propagation loss, clutter characteristics, and target characteristics are quite different from free-space systems. Ground-penetrating radar systems operate over a wide range of probing depths, from close-range high-resolution applications such as locating buried mines and hidden voids in pavements at depths of up to 50 cm, to long-range low-resolution applications such as probing geologic strata at depths of over 100 m. The University of Nebraska has developed a coherent polarimetric random noise radar system used mainly for detecting shallowly buried mine-like objects. This novel ground-penetrating radar (GPR) system was designed, built, and tested over the last two years. Although the transmit waveform is phase incoherent, simulation studies and performance tests on the system confirm its ability to respond to phase differences in the received signal. This GPR system uses a wide bandwidth random noise signal operating within the 1–2 GHz frequency range. High spatial resolution in the depth (range) dimension is achieved due to the wide bandwidth of the transmit signal. The radar system is operated and controlled by a personal computer (PC), and the data acquired are stored in the hard drive in real time. From the raw data, the system produces four images corresponding to the copolarized receive amplitude, cross-polarized received amplitude, depolarization ratio, and polarimetric phase difference between the orthogonally polarized received signals. The polarimetric random noise radar system was used to gather data from a variety of buried targets from a specially designed sandbox 3.5 m long, 1.5 m wide, and 1.0 m deep. Targets that were buried included metallic as well as nonmetallic objects of different size and shapes that mimicked land mines as well as other objects. These objects were buried at different depths and with different relative orientations. This chapter is organized as follows: 1. Section 8.3 provides a detailed description of the coherent polarimetric random noise radar system. 2. In Section 8.4, we develop the theoretical foundations of random noise polarimetry. 3. Section 8.5 describes the results of a simulation study that confirms the ability of the system to respond to phase differences in the reflected signal. 4. In Section 8.6, we show results of proof-of-concept experimental tests in air that demonstrate system performance. 5. Section 8.7 discusses images of buried objects acquired from a specially designed sandbox. 6. Section 8.8 summarizes and concludes.
8.3 RADAR SYSTEM DESCRIPTION A block diagram of the system is shown in Figure 8.1. The noise signal is generated by OSC1, which provides a wideband noise signal with a Gaussian amplitude distribution and a constant power spectral density in the 1–2 GHz frequency range. The average power output of the noise generator is 0 dBm. This output is split into two in-phase components in power divider PD1, which has a 1 dB insertion loss over the 3 dB power split. Thus, the power divider outputs are at –4 dBm nominal level. One of these outputs is amplified in a power amplifier AMP1, which has a gain of 34 dB and a power output of greater than +40 dBm at its 1 dB gain compression point. Thus, the average power output of AMP1 is +30 dBm (1 W), but the amplifier is capable of faithfully amplifying noise spikes that can be as high as 10 dB above the mean noise power. The output of the amplifier is connected to a dual-polarized broadband (1–2 GHz) log-periodic transmit antenna ANT1. The log-periodic antenna, in addition to being broadband, has desirable features such as a constant gain of 7.5 dB with frequency, superior cross-polar isolation of greater than 20 dB, and main-to-back lobe ratio of better than 30 dB over the operating frequency band. Although our initial design calls for transmission of linearly polarized signals, the dual-polarized © CRC Press LLC
FIGURE 8.1
Block diagram of polarimetric random noise radar system.
antenna can also be configured to transmit circularly polarized signals through the use of switches and hybrids. The other output arm of the power divider PD1 is connected to a combination of a fixed and variable delay lines, DL1 and DL2 respectively. These delay lines are used to provide the necessary time delay for the sampled transmit signal so that it can be correlated with the received signal scattered from objects or interfaces at the appropriate depth corresponding to the delay. The fixed delay line DL1 is used to ensure that the correlation operation is performed only at depths below the air-soil interface, thereby serving to eliminate ground clutter. Since the total probing depths are of the order of 1 m maximum, the delay lines are relatively short with maximum losses of not more than 1 dB. These delay lines are physically realized by low-loss phase shifters, which can be rapidly programmed to step through the entire range of available delays so that various probing depths can be obtained. Assuming a 6 dB maximum loss in DL1 + DL2, the noise power available at the input of the lower sideband up-converter MXR1 is –10 dBm. To perform coherent processing of the noise signals, a unique frequency translation scheme is proposed. The primary component of this technique is a 160 MHz phase-locked oscillator OSC2, which has a power output of +13 dBm. This is connected via a power divider PD2 to the IF input terminal of MXR1. Assuming a 0.5 dB insertion loss in PD2 (over the 3 dB power split), an adequate level of +9.5 dBm is available for the frequency translation. The output of MXR1 is the lower sideband of the mixing process, which lies within the 0.84–1.84 GHz frequency range. The nominal average power level is –15 dBm. This coherent noise signal is split by power divider PD3 © CRC Press LLC
into two channels: the copolarized and the cross-polarized channels; the second output of the power divider PD2 is again split into two 160 MHz signals in power divider PD4.
We will now discuss the signal processing of the copolarized channel. The cross-polarized channel operation is essentially identical, so it will not be repeated. One of the outputs of PD3 at a level of –19 dBm is amplified in a 19 dB gain amplifier AMP4, thereby providing a nominal 0 dBm average power at the output. Since this signal is noise-like, the amplifier AMP4 is chosen so as to provide a linear output of +10 dBm minimum. This signal is used as the local oscillator (LO) input to a biasable mixer MXR2, whose RF input is obtained from the copolarized channel of the receive antenna ANT2 and a 20 dB gain low noise amplifier AMP2. The receive antenna is identical to the transmit antenna. Amplifier AMP2 is used to improve the noise figure at the receiver input. Mixer MXR2 is biased in the square-law region using a dc voltage, since the LO drive, being of varying amplitude, can sometimes attain low power levels, resulting in an inefficient mixing process. The dc bias ensures that the mixing process is efficient for LO drive levels as low as –10 dBm. In general, the RF input signal to the mixer MXR2 consists of transmitted noise at 1–2 GHz scattered and reflected from various objects/interfaces. However, since the LO signal has a unique delay associated with it, only the signal scattered from the appropriate depth (i.e., range) bin will mix with the LO to yield an IF signal at a frequency of exactly 160 MHz. Signals scattered or reflected from other depth bins will not provide a constant frequency of 160 MHz. The output of the mixer MXR2 is connected to a narrowband bandpass filter FL1 of center frequency 160 MHz and bandwidth 5 MHz, ensuring that only 160 MHz signals get through. The output of filter FL1 at 160 MHz is split into two outputs in power divider PD5. One of these outputs is amplified and detected in a 70 dB dynamic range 160 MHz logarithmic amplifier AMP6, whose logarithmic transfer function was measured as 25.2 mV/dB. The wide dynamic range ensures that a wide range of scattered power levels can be processed. The other output of the power divider PD5 is connected to one of the inputs of an I/Q detector IQD1, whose reference input is one of the outputs from PD4. Both of the signals are exactly at 160 MHz; thus, the I/Q detector provides the in-phase (I) and quadrature (Q) components of the phase difference between the two signals. Since frequency translation preserves phase differences, the I and Q outputs can be related to the polarimetric copolarized scattering characteristics of the buried object or interface. In a similar fashion, the cross-polarized channel is simultaneously processed using amplifier AMP5 (equivalent to AMP4), biasable mixer MXR3 (equivalent to MXR2), 160 MHz bandpass filter FL2 (equivalent to FL1), power divider PD6 (equivalent to PD5), logarithmic amplifier AMP7 (equivalent to AMP6), and I/Q detector IQD2 (equivalent to IQD1). The system therefore produces the following outputs at various depths as set by the delay lines:
1. Copolarized amplitude
2. Copolarized phase angle
3. Cross-polarized amplitude
4. Cross-polarized phase angle
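The nominal levels quoted in this section form a simple decibel budget. The short calculation below traces the transmit path and the reference (LO) path of Figure 8.1 using the values stated in the text; the 5 dB conversion loss assigned to MXR1 is inferred from the –10 dBm input and –15 dBm output levels quoted above and is therefore an assumption.

# Nominal level budget along two paths of Figure 8.1, using values quoted in the text.

def cascade(start_dbm, stages):
    """Accumulate gains (+) and losses (-) in dB and report the level at each node."""
    level = start_dbm
    for name, gain_db in stages:
        level += gain_db
        print(f"{name:<30s} {level:+6.1f} dBm")
    return level

print("Transmit path:")
cascade(0.0, [("PD1 (3 dB split + 1 dB IL)", -4.0),
              ("AMP1 (+34 dB)",              +34.0)])   # +30 dBm (1 W) at ANT1

print("Reference (LO) path to MXR2:")
cascade(0.0, [("PD1 (3 dB split + 1 dB IL)",  -4.0),
              ("DL1 + DL2 (up to 6 dB)",       -6.0),   # -10 dBm at MXR1 input
              ("MXR1 lower sideband (assumed 5 dB conversion loss)", -5.0),  # -15 dBm
              ("PD3 (3 dB split + 1 dB IL)",   -4.0),   # -19 dBm
              ("AMP4 (+19 dB)",               +19.0)])  # ~0 dBm LO drive to MXR2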
8.4 THEORY OF RANDOM NOISE POLARIMETRY
Since the transmitted signal has a random amplitude distribution and a uniform power spectral density, we model the transmit voltage wave v_t(t) as

v_t(t) = a(t)\cos\{[\omega_o + \delta\omega(t)]\,t\}    (8.1)

where a(t) takes into account the amplitude distribution and \delta\omega(t) takes into account the frequency spectrum of v_t(t), and \omega_o is the center frequency of transmission. We assume that a(t) follows a Gaussian distribution while \delta\omega(t) follows a uniform distribution, and that both a(t) and \delta\omega(t) are ergodic processes. Furthermore, we assume that a(t) and \delta\omega(t) are uncorrelated and statistically independent. The average power transmitted, P_t, is given by

P_t = \frac{\overline{v_t^2(t)}}{R_o}    (8.2)

where R_o is the characteristic system impedance, and a bar over a variable denotes its time-average value. Since a(t) and \delta\omega(t) are independent, we can write

\overline{v_t^2(t)} = \overline{a^2(t)\cos^2\{[\omega_o + \delta\omega(t)]\,t\}} = \overline{a^2(t)}\cdot\overline{\cos^2\{[\omega_o + \delta\omega(t)]\,t\}} = \tfrac{1}{2}\,\overline{a^2(t)}    (8.3)

since the average value of \cos^2(\cdot) is 1/2. Thus,

P_t = \frac{\overline{a^2(t)}}{2R_o}    (8.4)
Consider an object of complex reflectivity R\exp\{j\phi_o\} buried at a depth d. To simplify the analysis, we assume that both the magnitude R and the phase angle \phi_o of the object reflectivity are invariant with frequency. If the dielectric constant of the soil is \varepsilon_r = \varepsilon'_r - j\varepsilon''_r, the phase velocity of the electromagnetic wave is

v_p = \frac{c}{\sqrt{\varepsilon'_r}}    (8.5)

if we assume that the soil medium is low loss, i.e., \varepsilon''_r \ll \varepsilon'_r. Thus, the two-way delay, \tau, for a signal that is transmitted, reflected, and arriving at the receive antenna is

\tau = \frac{2d}{v_p} = \frac{2d\sqrt{\varepsilon'_r}}{c}    (8.6)
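Equation 8.6 is what allows the programmable delay lines of Section 8.3 to be stepped through probing depth: each internal delay \tau selects the depth bin d = c\tau/(2\sqrt{\varepsilon'_r}). The short fragment below illustrates the mapping for an assumed dry-soil permittivity.

import math

C = 3.0e8  # free-space propagation speed, m/s

def depth_for_delay(tau_s, eps_r_real):
    """Depth bin selected by an internal delay tau (Equation 8.6 inverted)."""
    return C * tau_s / (2.0 * math.sqrt(eps_r_real))

# Illustrative values: delays stepped in nanosecond increments over dry soil (eps'_r ~ 3).
for tau_ns in (1, 2, 5, 10):
    d = depth_for_delay(tau_ns * 1e-9, 3.0)
    print(f"tau = {tau_ns:2d} ns  ->  depth = {100 * d:5.1f} cm")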
For lossy media, the phase velocity v_p is slower than in the lossless case, thereby increasing the two-way signal delay \tau. Let the propagation constant in soil, \gamma, be given by

\gamma = \alpha + j\beta    (8.7)

where \alpha is the attenuation constant and \beta is the phase constant. In general, \alpha and \beta both increase with frequency. Thus, the two-way propagation factor is given by

A(d) = \exp\{-2\gamma d\} = \exp\{-2\alpha d\}\exp\{-2j\beta d\}    (8.8)
The time-varying expression for the received voltage v_r(t) can now be obtained as the time-delayed version of v_t(t), modified to include the effects of scattering and two-way propagation. Thus,

v_r(t) = a(t-\tau)\,R\,\exp\{-2\alpha d\}\cos\{(\omega_o + \delta\omega)(t-\tau) + \phi_o - 2\beta d\}    (8.9)

The time-delayed sample of the transmit signal is

v_t(t-\tau) = a(t-\tau)\cos\{(\omega_o + \delta\omega)(t-\tau)\}    (8.10)

When this signal is passed through a double sideband up-converter whose IF frequency is \omega', the lower sideband output v'_t(t-\tau) is

v'_t(t-\tau) = a(t-\tau)\cos\{(\omega_o + \delta\omega - \omega')(t-\tau)\}    (8.11)

The difference frequency from the mixing process of v_r(t) and v'_t(t-\tau) yields a voltage v_d(t) given by

v_d(t) = K_1 R\,a^2(t-\tau)\exp\{-2\alpha d\}\cos\{\omega'(t-\tau) + \phi_o - 2\beta d\}    (8.12)

where K_1 is some constant. Note that this signal is always centered around \omega'. The average amplitude of this signal, \overline{V_d}, is given by

\overline{V_d} = K_1 R\,\overline{a^2(t-\tau)}\exp\{-2\alpha_o d\} = 2K_1 R R_o P_t \exp\{-2\alpha_o d\}    (8.13)

where \alpha_o is the value of \alpha at \omega = \omega_o. The average power in this signal, P_r, is given by

P_r = \frac{\overline{V_d}^2}{2R_o} = 2K_1^2 R_o P_t^2 \exp\{-4\alpha_o d\}\,R^2 = K_2 R^2    (8.14)
where K_2 is a constant. Thus, measurement of the power P_r yields the square of the reflection coefficient magnitude. To measure the phase \phi_o, consider the output of the I/Q detector fed by v_d(t) and v_1(t), where v_1(t) is given by

v_1(t) = \cos\omega' t    (8.15)

Since both of these signals are at the same frequency \omega', the I/Q detector can unambiguously measure the phase difference, \theta, given by

\theta(t) = -\omega'\tau + \phi_o - 2\beta d    (8.16)

The average value of \theta as measured by the I/Q detector is

\overline{\theta} = -\omega'\tau + \phi_o - 2\overline{\beta} d    (8.17)

Note that \overline{\beta} is simply the value of \beta at \omega = \omega_o, which is

\overline{\beta} = \frac{\omega_o\sqrt{\varepsilon'_r}}{c}    (8.18)

We therefore obtain

\overline{\theta} = \phi_o - \omega'\tau - \frac{2\omega_o\sqrt{\varepsilon'_r}\,d}{c}    (8.19)
Thus, a measurement of the average value of \theta yields the phase angle \phi_o.
Until this point, we have not considered the effects of polarization. If the antenna can simultaneously measure both the copolarized and the cross-polarized scattered power, and if the hardware for the copolarized and cross-polarized channels is identical, then we can measure P_{rc}, P_{rx}, \theta_c, and \theta_x, where the subscripts "c" and "x" refer to the copolarized and the cross-polarized channels, respectively. Thus,

P_{rc} = K_2 R_c^2    (8.20)

P_{rx} = K_2 R_x^2    (8.21)

Thus, the ratio of P_{rx} to P_{rc} yields the power depolarization ratio, D, which is seen to be independent of the system transfer function, i.e.,

\frac{P_{rx}}{P_{rc}} = \frac{R_x^2}{R_c^2} = D    (8.22)

Furthermore, we have

\theta_c = \phi_{oc} - \omega'\tau - 2\beta d    (8.23)

and

\theta_x = \phi_{ox} - \omega'\tau - 2\beta d    (8.24)

Thus, the difference between \theta_x and \theta_c yields the phase angle between the cross-polarized and copolarized channels, again seen to be independent of the system:

\theta_x - \theta_c = \phi_{ox} - \phi_{oc}    (8.25)
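Equations 8.22 and 8.25 show that the depolarization ratio and the polarimetric phase difference are formed from a ratio and a difference of the two channel outputs, so the system constant K_2 and the common propagation terms cancel. A minimal sketch of that reduction, using made-up channel readings, follows.

import math

def depolarization_ratio(p_rx, p_rc):
    """Equation 8.22: D = P_rx / P_rc, independent of the system constant K_2."""
    return p_rx / p_rc

def polarimetric_phase(theta_x_rad, theta_c_rad):
    """Equation 8.25: the common propagation term cancels in the difference
    (result wrapped to (-pi, pi])."""
    return math.atan2(math.sin(theta_x_rad - theta_c_rad),
                      math.cos(theta_x_rad - theta_c_rad))

# Made-up channel measurements for one depth bin.
p_rc, p_rx = 2.0e-6, 5.0e-7                       # received powers, arbitrary units
theta_c, theta_x = math.radians(40.0), math.radians(115.0)

d_ratio = depolarization_ratio(p_rx, p_rc)
print(f"D = {d_ratio:.2f} ({10 * math.log10(d_ratio):.1f} dB)")
print(f"phi_ox - phi_oc = {math.degrees(polarimetric_phase(theta_x, theta_c)):.1f} deg")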
The resolution properties of the system can be easily observed by considering a received signal from another range (or depth) bin whose delay is different from \tau. Let the delay from the buried object be \tau', while the delay within the system remains \tau. In this case, v_r(t) is modified and expressed as v'_r(t) as follows:

v'_r(t) = a(t-\tau')\,R\,\exp\{-2\alpha d\}\cos\{(\omega_o + \delta\omega')(t-\tau') + \phi_o - 2\beta d\}    (8.26)

When this signal is mixed with v'_t(t-\tau), we get v'_d(t), given by

v'_d(t) = R\,a(t-\tau)\,a(t-\tau')\exp\{-2\alpha d\}\cos\{\omega'(t-\tau) + \omega_o(\tau-\tau') + \delta\omega'(t-\tau') - \delta\omega(t-\tau) + \phi_o - 2\beta d\}    (8.27)

Since the noise voltage a(t) has a temporal correlation function of the form (\sin x)/x, we have the result

\overline{a(t_1)\,a(t_2)} = 0    (8.28)

for |t_1 - t_2| \gg 1/B, where B is the system bandwidth. Thus, the average power in the signal v'_d(t) can be shown to be equal to zero. We see, therefore, that unless the internal time delay is exactly matched to the expected time delay, the output of the detector is zero. As we step the internal time delay \tau from zero to the maximum expected value, the depth profile of scattering can be built up by the system, so that not only can targets be identified, they can also be localized. In practice, the system will suffer from drawbacks such as nonlinearities in amplitude and phase that can degrade the detection efficiency and resolution. In an ideal case, the resolution is determined by the system bandwidth B. The resolution \Delta d is given by

\Delta d = \frac{v_p}{2B} = \frac{c}{2\sqrt{\varepsilon'_r}\,B}    (8.29)

For c = 3 \times 10^8 m/s and B = 1 GHz, we get

\Delta d = \frac{15}{\sqrt{\varepsilon'_r}}\ \text{cm}    (8.30)
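Equations 8.29 and 8.30 reduce to a one-line calculation; the fragment below reproduces the dry-soil and wet-soil resolution figures given immediately below.

import math

def depth_resolution_cm(bandwidth_hz, eps_r_real, c=3.0e8):
    """Equation 8.29: delta_d = c / (2 * sqrt(eps'_r) * B), in centimetres."""
    return 100.0 * c / (2.0 * math.sqrt(eps_r_real) * bandwidth_hz)

B = 1.0e9  # 1-2 GHz system bandwidth
for label, eps in (("air", 1.0), ("dry soil", 3.0), ("wet soil", 25.0)):
    print(f"{label:9s} eps'_r = {eps:4.1f}:  resolution = {depth_resolution_cm(B, eps):.1f} cm")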
For dry soil, \varepsilon'_r \approx 3, and for wet soil, \varepsilon'_r \approx 25. Thus, the system resolution varies from 8.6 cm in dry soil to 3 cm in wet soil, with an intermediate value of about 5 cm. The maximum depth of detection is limited by the noise figure of the front-end receiver amplifier and the loss characteristics of the soil. The noise power level at the receiver, N, is given by

N = kTB_D F    (8.31)

where k is the Boltzmann constant (1.38 \times 10^{-23} J/K), T is the ambient temperature (assumed to be 300 K), B_D is the detection bandwidth (5 MHz), and F is the front-end amplifier noise figure (2 dB = 1.58). Using these values, the noise power at the receiver is computed as –104.9 dBm. Soil losses vary widely in value, ranging from about 1–2 dB/m for dry sand to over 100 dB/m for wet clay, thereby yielding widely varying detection depths, depending on soil type and moisture content.
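The quoted noise floor follows directly from Equation 8.31; the short calculation below repeats it with the stated detection bandwidth and noise figure (it agrees with the –104.9 dBm figure to within rounding).

import math

def noise_floor_dbm(temp_k=300.0, det_bw_hz=5.0e6, nf_db=2.0, k=1.38e-23):
    """Equation 8.31: N = k * T * B_D * F, expressed in dBm."""
    n_watts = k * temp_k * det_bw_hz * 10 ** (nf_db / 10.0)
    return 10.0 * math.log10(n_watts / 1e-3)

print(f"receiver noise floor = {noise_floor_dbm():.1f} dBm")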
It must also be noted that the transmission coefficient at the air-to-soil interface is modified due to scattering caused by the soil surface roughness, although this effect is not expected to be significant owing to the lower frequencies used for subsurface probing.
From the raw data collected by the radar system, we generate images based on the Stokes matrix formulation for facilitating the detection and recognition of targets using the polarimetric information on the buried target. The Stokes vector is a convenient method for representing the polarization state of an electromagnetic wave and is denoted as [S], given by

[S] = \begin{bmatrix} S_0 \\ S_1 \\ S_2 \\ S_3 \end{bmatrix}    (8.32)

whose individual elements are defined as follows:

S_0 = E_H^2 + E_V^2    (8.33)

S_1 = E_H^2 - E_V^2    (8.34)

S_2 = 2 E_H E_V \cos\theta_d    (8.35)

S_3 = 2 E_H E_V \sin\theta_d    (8.36)

In the above equations, \theta_d is the polarimetric phase angle, i.e., the difference between the phase angle of the horizontally received signal and the vertically received signal. Also, E_H and E_V are the electric field amplitudes of the horizontally and vertically polarized received signals, whose squared values represent the copolarized reflected power and cross-polarized reflected power, respectively (assuming the transmit polarization is horizontal). We recognize S_0 as the total reflected power (sum of the copolarized and cross-polarized reflected power). S_1 is recognized as the difference between the copolarized and cross-polarized reflected power. S_2 is proportional to the cosine of the polarimetric phase angle, while S_3 is proportional to the sine of the polarimetric phase angle \theta_d. Both S_2 and S_3 are weighted by the absolute electric field amplitudes of the reflected copolarized and cross-polarized signals, as can be seen from their definitions. It is also to be noted that

S_0^2 = S_1^2 + S_2^2 + S_3^2    (8.37)
The use of S2 and S3 is very helpful in detecting targets, since these parameters move in opposite directions and thereby provide additional information about the reflected signal. When S2 is high, S3 is low, and vice versa. Thus, no matter what the polarimetric phase angle is, the target image is bound to show up in either S2 or S3, or sometimes in both.
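Because the Stokes parameters are formed directly from the two received field amplitudes and the polarimetric phase angle, the image-generation step is essentially the mapping sketched below; the numerical values are arbitrary examples. The final line checks the identity of Equation 8.37.

import math

def stokes_from_channels(e_h, e_v, theta_d_rad):
    """Equations 8.33-8.36: Stokes parameters from the two received field
    amplitudes and the polarimetric phase difference theta_d."""
    s0 = e_h ** 2 + e_v ** 2
    s1 = e_h ** 2 - e_v ** 2
    s2 = 2.0 * e_h * e_v * math.cos(theta_d_rad)
    s3 = 2.0 * e_h * e_v * math.sin(theta_d_rad)
    return s0, s1, s2, s3

e_h, e_v, theta_d = 1.0, 0.4, math.radians(70.0)        # arbitrary example values
s0, s1, s2, s3 = stokes_from_channels(e_h, e_v, theta_d)
print(f"S0={s0:.3f}  S1={s1:.3f}  S2={s2:.3f}  S3={s3:.3f}")
print(f"check (Eq. 8.37): S0^2 = {s0**2:.3f} vs S1^2+S2^2+S3^2 = {s1**2 + s2**2 + s3**2:.3f}")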
8.5 RESULTS OF SIMULATION STUDY
Various computer simulations were performed to evaluate the performance of the radar system design.2 These simulations were performed for various combinations of soil type, soil moisture,
depth of target burial, and polarimetric response of the buried target. Ground reflections as well as uncorrelated system noise were added to the received signal to simulate realistic field conditions. Results of simulations using random noise as the probing signal are shown in Figures 8.2 and 8.3. In Figure 8.2, the reflectivity of the buried object is assumed to be 1 exp { j0° } , while in Figure 8.3, the reflectivity of the buried object is assumed to be 1 exp { j90° } . The objects are assumed
FIGURE 8.2 Simulation results using random noise waveform for target reflectivity of 1 exp{j0°}. (a) Transmitted signal amplitude vs. time. (b) Transmitted signal shifted by ±ω´ to simulate the double sideband up-conversion. (c) Received signal amplitude vs. time after two-way propagation and reflection. (d) Multiplied output signals in (b) and (c) above vs. time. (e) Spectrum of filtered output in (d) showing the peak at ω´. (f) Multiplied output in (d) filtered at ω´, showing the input signal at the I/Q detector vs. time (solid line). © CRC Press LLC
FIGURE 8.3 Simulation results using random noise waveform for target reflectivity of 1 exp{j90°}. (a) Transmitted signal amplitude vs. time. (b) Transmitted signal shifted by ±ω´ to simulate the double sideband up-conversion. (c) Received signal amplitude vs. time after two-way propagation and reflection. (d) Multiplied output signals in (b) and (c) above vs. time. (e) Spectrum of filtered output in (d) showing the peak at ω´. (f) Multiplied output in (d) filtered at ω´, showing the input signal at the I/Q detector vs. time (solid line).
to be located at a depth of 5 cm in “clayey” soil (48% clay, 40% silt) with 10% volumetric moisture whose dielectric constant was computed as ε r = ( 4.56 – j1.32 ) . The following plots are shown in the figures:
1. Transmitted signal amplitude vs. time
2. Transmitted signal shifted by ±ω′ to simulate the double sideband up-conversion
3. Received signal amplitude vs. time after two-way propagation and reflection
4. Multiplied output of signals in (b) and (c) above vs. time
5. Spectrum of filtered output in (d) showing the peak at ω′
6. Multiplied output in (d) filtered at ω′, showing the input signal at the I/Q detector vs. time (solid line)
As can be seen, the polarimetric phase of the reflection from the buried object is clearly evident in Figures 8.2f and 8.3f. These signals are 90° out of phase, consistent with the 90° phase difference in their assumed reflectivity. Results of simulations using a spread-spectrum waveform as the probing signal are shown in Figure 8.4. The transmitted signal was assumed to be of constant amplitude of 1, while its frequency was changed in a random fashion between 1 and 2 GHz after each burst. The target reflectivity was assumed to be 1 exp{j0°}, while the soil and depth parameters were kept the same. Comparison of Figures 8.2f and 8.4f indicates that the phase of the reflected signal is indeed preserved and is independent of the transmitted waveform type. We also confirmed that the ratio of the power received to that transmitted, i.e., average power in (f) divided by average power in (a), was the same for both types of waveforms considered.
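For readers who wish to experiment, the essence of this simulation can be compressed into a few lines: generate band-limited noise, form a target return with a known reflection phase, heterodyne-correlate it against the frequency-translated reference, and examine the component at ω′. The sketch below is a simplified baseband version (single target, no ground reflection or soil loss) and is not the authors' simulation code; the sample rate and record length are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
fs, n_samp = 20e9, 200_000                     # arbitrary sample rate and record length
t = np.arange(n_samp) / fs
bandwidth, f_if = 1.0e9, 160e6                 # noise bandwidth and 160 MHz IF

# Band-limited complex Gaussian noise standing in for the transmit-noise envelope.
spec = np.fft.fft(rng.normal(size=n_samp) + 1j * rng.normal(size=n_samp))
freqs = np.fft.fftfreq(n_samp, 1 / fs)
spec[np.abs(freqs) > bandwidth / 2] = 0.0
noise = np.fft.ifft(spec)

def detected_output(delay_error_samples, reflectivity=0.5, phase_deg=60.0):
    """Heterodyne correlation of the target return against the frequency-translated
    reference; returns the complex amplitude (I + jQ) of the component at the IF."""
    echo = reflectivity * np.exp(1j * np.radians(phase_deg)) * noise       # target return
    ref = np.roll(noise, delay_error_samples) * np.exp(-2j * np.pi * f_if * t)
    mixed = echo * np.conj(ref)                                            # mixer product
    return np.mean(mixed * np.exp(-1j * 2 * np.pi * f_if * t))

for err_samples in (0, 200):                   # matched delay, and a 10 ns delay error
    z = detected_output(err_samples)
    print(f"delay error {err_samples / fs * 1e9:4.0f} ns: "
          f"|output| = {abs(z):.4f}, phase = {np.degrees(np.angle(z)):6.1f} deg")

With a matched delay the output carries the assumed 60° reflection phase; with a 10 ns delay error the output collapses toward zero, which is the resolution mechanism derived in Section 8.4.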
8.6 PROOF-OF-CONCEPT EXPERIMENTAL RESULTS Preliminary test results on the radar system in air confirm its ability to extract the polarimetric response of targets with good range or depth resolution.3 These results are in conformity with our simulation studies performed earlier and described above. Figure 8.5 shows the ability of the system to track reflected signals from targets at various ranges. In this experiment, the transmitter output was directly connected to the copolarized receiver input using a coaxial cable, effectively bypassing both antennas. The intention was to confirm if the system delay time, as set by the variable delay line DL2, was able to track the pseudo-reflected signal as its range was varied. The top curve shows a plot of the copolarized amplitude as a function of the system delay time for a coaxial cable length of 1 m, which shows that the peak occurs at a delay time of 6 ns. The bottom curve shows that the peak occurs at a delay time of 10 ns for a cable length of 1.7 m. Using these values, the dielectric constant of the coaxial cable is computed as 2.9, which agrees with the manufacturer’s specification of 2.8. Thus, the radar system is capable of tracking reflections from targets by observing the delay time of the peak signal amplitude. Figure 8.5 also reveals that the cross-correlation function approximates the typical sin x/x response for the voltage with a sidelobe level of 13 dB. Thus, the maximum sidelobe level for power, which is proportional to the square of the voltage, is 26 dB. We note from Figure 8.5 that an amplitude difference of approximately 600 mV exists between the peak of the correlation function and the sidelobes for the detected voltage. This corresponds to a sidelobe level of 23.8 dB for the reflected power, since the transfer function of the logarithmic amplifier is 25.2 mV/dB. Thus, the response from adjacent depth bins will obscure the main lobe target response only if the reflectivity of the target located in the adjacent bin is about 24 dB higher than that of the main lobe target. The ability of the system to track the phase of the reflected signal amplitude was also confirmed. The radar system was pointed at a metal plate at a range of 1.2 m, and the phase of the copolarized signal, θ c , was measured from the outputs of the I/Q detector IQD1. The metal plate was then moved back in 15 cm increments, and the corresponding unwrapped phase angle measured. A plot of the phase angle difference (from the 1.2 m reference) as a function of the range increment is seen to be linear in Figure 8.6, thus showing that the system does respond to controlled phase changes brought about by changes in the transit time between the transmit and receive signals. © CRC Press LLC
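The two numerical cross-checks in the preceding paragraph can be reproduced directly, as the short fragment below shows; the only inputs are the cable lengths, peak delay times, logarithmic-amplifier slope, and peak-to-sidelobe voltage quoted above.

# Reproducing the two numerical checks quoted in Section 8.6.

# 1. Cable dielectric constant from the shift of the correlation peak.
c = 0.3                                    # m/ns
delta_len_m = 1.7 - 1.0                    # change in cable length
delta_delay_ns = 10.0 - 6.0                # change in delay time of the peak
eps_r = (c * delta_delay_ns / delta_len_m) ** 2
print(f"cable dielectric constant = {eps_r:.1f}")       # ~2.9, vs. 2.8 specified

# 2. Sidelobe level from the logarithmic-amplifier transfer function.
log_amp_slope_mv_per_db = 25.2
peak_to_sidelobe_mv = 600.0
print(f"sidelobe level = {peak_to_sidelobe_mv / log_amp_slope_mv_per_db:.1f} dB")   # ~23.8 dB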
FIGURE 8.4 Simulation results using spread-spectrum waveform for target reflectivity of 1 exp{j0°}. (a) Transmitted signal amplitude vs. time. (b) Transmitted signal shifted by ±ω´ to simulate the double sideband up-conversion. (c) Received signal amplitude vs. time after two-way propagation and reflection. (d) Multiplied output signals in (b) and (c) above vs. time. (e) Spectrum of filtered output in (d) showing the peak at ω´. (f) Multiplied output in (d) filtered at ω´, showing the input signal at the I/Q detector vs. time (solid line).
To observe the polarimetric response of the radar system, a polarizing grid was fabricated. This was a square 60 × 60 cm wooden frame inside which thick copper wires were fixed along one direction at 5 cm spacing. The grid was placed in front of the antennas with the longitudinal axes of the copper wires parallel to the transmit electric field vector. This was denoted as θ = 90° . The depolarization ratio, D, was measured for different values of the angle between the polarizing © CRC Press LLC
FIGURE 8.5 Copolarized signal amplitude as a function of system delay under direct transmitter-receiver connection at various cable lengths. © CRC Press LLC
FIGURE 8.6
Change in detected phase of copolarized signal as a function of incremental target range.
grid axis and the electric field vector by rotating the grid, and this is plotted in Figure 8.7. As expected, the D value is minimum when the axis of the grid coincides with the electric field vector and reaches a value of 1 when the angle θ is 45°. Below the value of 45°, both copolarized and cross-polarized signal amplitudes are very low, and this results in the D value leveling off to about 1. Finally, the resolution capabilities of the system were confirmed to meet design specifications. Since the system bandwidth is 1 GHz, the theoretical resolution in air is 15 cm. To test this, two similar objects were placed side by side within the antenna beams. The copolarized signal amplitude was recorded for three different distances between the objects (one object was kept fixed, while the other was moved back in 7.5 cm increments). These results are shown in Figure 8.8. The dotted line is the raw data, while the solid line shows the smoothed data. When the target separation is zero, there is one single peak observed near the 9 ns delay time. At 7.5 cm separation, only one target is observed at 9 ns, but the peak appears somewhat broader. At 15 cm separation, we clearly see the presence of two discernible well resolved peaks. Thus, our proof-of-concept experimental results do confirm that this novel radar system has the ability to characterize the high-resolution polarimetric scattering response of targets in air.
8.7 RESULTS OF FIELD TESTS The radar system was used to gather data from an assortment of different buried objects in a specially designed sandbox.4 The dimensions of the sandbox are 3.5 m long, 1.5 m wide, and 1 m deep. Metallic as well as nonmetallic objects were buried at different depths and orientations. The radar antennas were scanned over the surface as data were collected continuously. The operational configuration is with the antennas pointing down, hence leakage through backlobes is not expected to be a problem unless highly reflective objects appear above the system as it is scanned.
FIGURE 8.7 Depolarization ratio as a function of the angle between the electric field vector and the longitudinal polarizing grid axis.
8.7.1
RAW IMAGES
The following raw images were obtained using the polarimetric random noise radar. Each figure (Figures 8.9–8.12) contains four images from one radar scan over various buried objects. The top image is the copolarized received power. The second image is the cross-polarized received power. The third image is the depolarization ratio, and the fourth image is the absolute phase difference between the copolarized and the cross-polarized received channels. The relative amplitude scale in decibels applies to the copolarized and cross-polarized power and the depolarization ratio. Figure 8.9 shows the image pertaining to two metal plates, each 23 cm in diameter and 2 cm in thickness, buried 23 cm below the surface with a 30 cm lateral separation between the two. The copolarized and depolarization ratio images clearly show and resolve these two objects. Figure 8.10 is the image obtained from the same metal plates (as in Figure 8.9), but each plate is buried at different depths. The first was buried 30 cm below the surface, and the second was buried at an 8 cm depth. Spacing between the plates was changed to 15 cm. Again, the copolarized and depolarization ratio images not only clearly show these objects but are also able to resolve them. Figure 8.11 shows the image pertaining to a metal plate (same size and shape as above) and an identical wooden plate. Both plates were buried at a depth of 23 cm below the surface, with a lateral separation of 23 cm. In this image, it is easy to detect the metal plate, but the wooden plate is not clearly observable on account of its low dielectric contrast with respect to the soil medium. Further data processing using the phase information may make this object more visible, and one such technique based on Stokes matrix processing is discussed in the next subsection. Figure 8.12 shows the image of a 6 cm diameter metal pipe that was buried 29 cm below the surface. The transmit polarization was parallel to the pipe’s axis, while the scan direction was perpendicular to the pipe’s axis. Under these conditions, the pipe acts as a point target with low © CRC Press LLC
interaction time with the radar during its scan. Again, the copolarized and depolarization ratio images clearly show the pipe, and the familiar hyperbolic curve shape is observed. From the images shown, it is easy to conclude that the metallic objects are fairly easy to locate with the polarimetric random noise radar. Nonmetallic objects such as the wooden plate are much harder to discern from the raw data, and additional processing may be required to enhance detection. The initial large reflection from the surface also obscures objects just below the surface. Further signal processing may also be used to overcome this drawback. The next subsection describes results from Stokes matrix processing.
FIGURE 8.8 Copolarized signal amplitude as a function of system delay time for various separations between identical targets. © CRC Press LLC
8.7.2
PROCESSED IMAGES
Stokes matrix images were generated and combined with simple image processing operations to improve target detectability and clutter rejection. The smoothing filter is used for reduction of radar clutter and noise. It was found from the original raw data that high-frequency tonal variations were prevalent in regions without targets, and these grainy variations were attributed to the fact that the soil volume was inhomogeneous and contained voids and rocks. The smoothing operation, when performed, results in lowpass filtering and eliminates the high-frequency noise components. The thresholding operation is applied on the global scale to the entire smoothed image. It enhances image intensities above the mean intensity of the entire image, thereby enhancing target detectability, while simultaneously eliminating clutter, identified as low-intensity areas, by setting these to zero digital number. As will be shown, these postprocessing operations are successful in reducing clutter and enhancing target detectability. We emphasize here that smoothing and thresholding operations were performed on all four Stokes matrix images.The relative amplitude scale in dB applies to all four images. The three postprocessed images (Figures 8.13–8.15) show S0 (top left), S1 (top right ), S2 (bottom left), and S3 (bottom right). The preprocessed image corresponding to two objects (one a round metal plate 23 cm in diameter and 2 cm thick, and the other a wooden plate of the same shape and dimensions) is shown in Figure 8.11. The objects are buried in dry sand at 23 cm depth each, with
FIGURE 8.9 Raw image of two metal plates buried at the same depth. (a) Copolarized received power. (b) Cross-polarized received power. (c) Depolarization ratio. (d) Polarimetric phase difference. © CRC Press LLC
FIGURE 8.10 Raw image of two metal plates buried at different depths. (a) Copolarized received power. (b) Cross-polarized received power. (c) Depolarization ratio. (d) Polarimetric phase difference.
a lateral separation of 23 cm. The Stokes matrix processed images are shown in Figure 8.13. Both objects, especially the wooden plate (right object) are detectable in the S1 image. We also show images of polarization sensitive objects to demonstrate the capability of the system to utilize polarimetric features of the target in Figure 8.14 and Figure 8.15. In these figures, images were obtained for combinations of target orientation parallel to (Figure 8.14) and perpendicular to (Figure 8.15) the scan direction. Note that Figure 8.15 is the postprocessed image whose raw version is shown in Figure 8.12. The transmit polarization was parallel to the longitudinal axis of the object, which was a metal pipe 6 cm in diameter and 85 cm long. When the transmit polarization was perpendicular to the object axis, detection was not possible; hence, these images are not shown. From the processed images, we observe that a long, slender object can be detected, no matter what its orientation is with respect to the scan direction, as long as the transmit polarization is parallel to the object orientation. This indicates that a dual-polarized transmitter, i.e., one that simultaneously or switchably transmits vertical and horizontal polarized signals, can easily detect such an object.
8.8 CONCLUSIONS
In this chapter, we have demonstrated the potential of random noise polarimetry for high-resolution subsurface probing applications. This unique concept synergistically combines the advantages of
FIGURE 8.11 Raw image of a metal plate and a wooden plate buried at the same depth. (a) Copolarized received power. (b) Cross-polarized received power. (c) Depolarization ratio. (d) Polarimetric phase difference.
a random noise ultra-wideband waveform with the power of coherent processing to provide a powerful technique for obtaining high-resolution images. Other applications being investigated that exploit the coherency in the system include interferometric (using spaced antennas) and synthetic aperture radar (SAR) techniques to sharpen the azimuth resolution. In addition, random noise polarimetry can be used in foliage penetration (FOPEN) radar systems by operating at lower frequencies, typically in the 250–500 MHz frequency range.
8.9 ACKNOWLEDGMENTS This work was supported by the U.S. Army Waterways Experiment Station through contract DACA39-93-K-0031. We appreciate the assistance of Dr. Lim Nguyen, who provided valuable comments.
FIGURE 8.12 Raw image of a metal pipe with axis parallel to transmit polarization and perpendicular to scan direction.(a) Copolarized received power. (b) Cross-polarized received power. (c) Depolarization ratio. (d) Polarimetric phase difference.
FIGURE 8.13 Postprocessed images of a metal plate and a wooden plate buried at the same depth. (a) S0. (b) S1. (c) S2. (d) S3.
FIGURE 8.14 Postprocessed images of a metal pipe with axis parallel to transmit polarization and parallel to scan direction. (a) S0. (b) S1. (c) S2. (d) S3.
FIGURE 8.15 Postprocessed images of a metal pipe with axis parallel to transmit polarization and perpendicular to scan direction. (a) S0. (b) S1. (c) S2. (d) S3.
REFERENCES
1. D.J. Daniels, D.J. Gunton, and H.F. Scott, "Introduction to subsurface radar," IEE Proceedings Part F, Vol. 135, pp. 278–320, August 1988.
2. R.M. Narayanan, Y. Xu, and D.W. Rhoades, "Simulation of a polarimetric random noise/spread spectrum radar for subsurface probing applications," Proc. IGARSS '94 Symp., pp. 2494–2498, Pasadena, CA, August 1994.
3. R.M. Narayanan, Y. Xu, P.D. Hoffmeyer, and J.O. Curtis, "Design and performance of a polarimetric random noise radar for detection of shallow buried targets," Proc. SPIE Conf. on Detection Technologies for Mines and Minelike Targets, Vol. 2496, pp. 20–30, Orlando, FL, April 1995.
4. R.M. Narayanan, Y. Xu, P.D. Hoffmeyer, and J.O. Curtis, "Random noise polarimetry applications to subsurface probing," Proc. SPIE Conf. on Detection and Remediation Technologies for Mines and Minelike Targets, Vol. 2765, pp. 360–370, Orlando, FL, April 1996.
9
New Power Semiconductor Devices for Generation of Nanoand Subnanosecond Pulses Alexei F. Kardo-Sysoev
9.1 Introduction
9.2 Properties and Limitations of Primary Submicrosecond Switches
9.3 Properties and Limitations of Nanosecond Opening Switches, Based on Step Recovery of High-Voltage P-N Junction
9.4 Properties and Limitations of Picosecond Closing Switches Based on Reversible Breakdown (Delayed Ionization) in P-N Junctions
9.5 Properties and Limitations of Pulse-Forming Semiconductor Networks
9.6 Conclusion
References
9.1 INTRODUCTION
Producing short-duration, high-power pulses is a major problem in impulse radar design. This chapter describes a new approach to solid-state pulser design. The availability of switching devices that can quickly open or close in less than 10^-8 seconds limits pulser technology and impulse radar performance. The most promising devices are semiconductor devices but, until recently, semiconductor device power levels have been far lower than those of gas-discharge gaps or magnetic compressing cells. High-power, super-fast semiconductor devices use the super-fast recovery of high-voltage diodes and delayed "over-voltaged" breakdown effects discovered in the early 1980s.1,2
9.1.1
NEW SWITCHING DEVICES
Nanosecond Range
The problem of high-power switching in the nanosecond range can be solved by using the superfast recovery effect of a high-voltage power diode when it is switched from forward to reverse bias. The well-known ordinary recovery process is shown in Figure 9.1. A long forward current pulse produces a relatively uniform, high-density electron-hole plasma distribution in the diode base layer (Figure 9.1a). During the recovery process, the reverse current pulls the electron-hole plasma out of the base. The plasma concentration at the p+n junction decreases to zero. A space charge region (SCR) forms near the p+n junction and begins to move rightward (Figure 9.1b). At this moment, the diode resistivity increases, and the current decreases. This process is not fast,
FIGURE 9.1 Fast recovery in high-voltage power diodes is the key to generating sequences of high-power, short-duration pulses for UWB radar. (Left column: the usual recovery process; right column: the superfast recovery process. Panels (a)-(c) show the electric field, the electron-hole plasma distribution, and the majority electron distribution in the diode base as the space charge region (SCR) forms and its boundary moves at velocity V, up to the saturated velocity Vs.)
because the plasma storage near the SCR boundary prevents the fast motion of this boundary (Figure 9.1c); this is a slow, submicrosecond process. It is possible to provide a far faster recovery, as shown in the right-hand column of Figure 9.1. The general idea is that a short (i.e., hundreds of nanoseconds) forward current pulse forms a steep but not uniform plasma distribution in the diode base layer, shown in Figure 9.1a. A thin, high-density (~10^18 cm^-3) plasma layer is formed just near the p+n junction by the diffusion process; the other part of the distribution is formed by a fast bipolar drift wave and has a two orders of magnitude lower plasma concentration. Then, the external voltage polarity is changed (Figure 9.1b), a space charge region (SCR) is formed near the p+n junction, and the SCR boundary begins to move rightward slowly (V < Vs) as in an ordinary process. Simultaneously, at the n+n boundary, the bipolar drift wave is formed and begins to move leftward; the front of this wave is very sharp, because the front motion velocity is inversely proportional to the plasma concentration. Calculations show that this front arrives at the boundary of the plasma layer exactly simultaneously with the complete depletion of this layer. After this, the SCR boundary begins to move rightward very fast, because there is no plasma near the boundary, and its velocity depends on majority carrier motion only (Figure 9.1c). The correlation between reverse current density and base material resistivity should be chosen so that the electric field in the base layer is high enough for carrier velocity saturation (V = Vs). If this is the case, the SCR boundary moves rightward at the saturated velocity Vs, the reverse voltage increases, and the reverse current decreases very quickly. The typical value of the voltage rise rate is 10^12 V/s (1 kV per 1 ns) in a diode with a 2 kV breakdown voltage. Figure 9.2 shows typical oscillographs of this process. The super-fast switching diode is called a drift step-recovery diode (DSRD). It is an opening switch that can form the cut-off current front in 0.2 to 3.0 ns, depending on the diode operating voltage (0.2 to 3 kV, respectively). The operating reverse-current density depends on the operating voltage, and for 2 kV it is equal to ~10^2 A/cm^2. The operating area of such a device can be as big as 30 to 40 cm^2, so the current pulse may be as high as 3 to 4 kA.
Picosecond Range
High-power switching in the picosecond range cannot be based on the principles associated with the motion of carriers in semiconductors, as in the micro- and nanosecond ranges. Even when moving at saturated velocity, the carriers travel only 10 µm in 100 ps, which is about an order of magnitude less than the SCR width in a power diode blocking, for instance, 2 kV. Therefore, new physical phenomena are necessary for picosecond switching. We found one of these phenomena during investigations of silicon diode breakdown under super-fast overvoltaging. Figure 9.3 shows the results of these experiments. We applied to a reverse-biased diode (the diode structure is shown in Figure 9.3b) an overvoltage pulse with a rise rate of more than 10^12 V/s, formed by a drift step recovery diode (DSRD) generator. One can see that, during several nanoseconds, avalanche breakdown does not occur, despite an applied voltage that is two times higher than the breakdown voltage under static conditions. Then, the voltage applied to the diode dramatically drops during tens of picoseconds, and the current increases.
The physical nature of this phenomenon was explained by detailed experiments. The breakdown delay comes from the fact, illustrated in Figure 9.4, that thermal generation cannot produce a sufficient number of carriers for impact ionization in the super-high field region in a few nanoseconds. At the same time, a high displacement current passing through the device produces, in the neutral part of the base, a field high enough for lattice impact ionization by majority carriers. The holes produced here drift leftward and, within a few nanoseconds, reach the super-high field region near the p+n junction. These holes initiate an extremely intensive breakdown in this region; the ionization time here is less than 10^-11 s and, in about this time, the super-high field region is filled up by electron-hole plasma, the field in it drops, increasing in the nearby region, where the breakdown then begins, and so on. The front of the so-formed impact ionization wave moves rightward, toward the flux of holes that initiates the breakdown. Current through the diode increases and, after the wavefront reaches the n+ region, the base
FIGURE 9.2 Transient process of superfast recovery in high-voltage diodes (reverse current density and diode voltage vs. time).
is filled by electron-hole plasma. The duration of this process is determined by the ionization rate in the super-high field region and can be two orders of magnitude less than the carriers' time of flight at the saturated velocity. The key to this phenomenon is the spatial separation, for a short time, of the super-high field region from the hole-generation region. Diodes based on this phenomenon are called silicon avalanche shapers (SASs). It is of interest to note that the same phenomenon also occurs in GaAs diodes and leads to high-power stimulated emission under certain conditions. The shortest switching time obtained so far in silicon diodes is 50 ps at 0.1 MW of switched power, with jitter of less than 30 ps.
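A quick transit-time estimate shows why carrier drift alone cannot reach the picosecond range, which is the point made at the beginning of this subsection. The saturated velocity of about 10^7 cm/s is a standard figure for silicon; the 150 µm space charge region width used for the 2 kV case is an assumed, illustrative value.

# Why drift alone cannot give picosecond switching: transit-time estimate.
# Saturated electron velocity in silicon is ~1e7 cm/s; the ~150 um SCR width
# for a ~2 kV diode is an assumed illustrative value.

V_SAT_M_PER_S = 1.0e5          # 1e7 cm/s expressed in m/s

def transit_time_ps(distance_um):
    return distance_um * 1e-6 / V_SAT_M_PER_S * 1e12

print(f"10 um at saturated velocity : {transit_time_ps(10):6.0f} ps")    # the 100 ps quoted above
print(f"150 um SCR (assumed, ~2 kV) : {transit_time_ps(150):6.0f} ps")   # ~1.5 ns, far too slow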
9.1.2
CIRCUIT ENGINEERING
Step Recovery Devices Step recovery devices are used as fast opening switches in high-power nanosecond pulsers. However, as distinguished from traditional devices such as transistors, step recovery devices have a time of conducting state less than 1 µs—usually near 100 ns. Therefore, in real systems, they are always used jointly with traditional switches: transistors or thyristors. Traditional switches initially generate © 2001 CRC Press LLC
FIGURE 9.3 Superfast reversible breakdown in high-voltage diodes for delayed ionization. (a) Current (1) and voltage (2) curves during switching, with the static breakdown threshold indicated. (b) Silicon diode structure under investigation (p+-n-n+, 2 mm diameter, 220 µm base, Nd = 10^14 cm^-3).
relatively long pulses of hundreds of nanoseconds, and the step recovery devices then shape them by shortening the fronts and/or decreasing the lengths. To force a drift step recovery device into the conducting state, a forward current, or pumping current, is first passed through it. This current creates the necessary charge of excess (nonequilibrium) carriers. In diodes, the pumping current passes through the very same pair of electrodes as the current of the main (power) circuit does. Thus, in systems using two-electrode switches, the pumping circuit and the power circuit turn out to be connected. In some cases, this connection may be used for throwing energy out of the pumping circuit into the load, improving pulse generation efficiency. In other cases, when it is desirable to weaken the connection, it is possible to separate the pumping circuit from the load by use of highpass filters such as chokes. Both of these possibilities have their advantages and disadvantages and will be examined later. As previously noted, step recovery devices are current breakers. The main variants of the pulse shaping systems are as follows:
FIGURE 9.4 Drift step recovery diode (DSRD) electric field conditions for high-speed switching. (a) Electric field distribution: (1) under static bias; (2), (3) under overvoltage. Ec is the electric field maximum under static bias, Eb the breakdown electric field under static conditions, WSCR the width of the space charge region under static bias, ∆ the width of the super-high field region, δ0 the width of the neutral region of the diode base layer, and Vs the carrier saturated velocity. (b) Impact ionization wave propagation (wavefront velocity vf >> vs).
• Circuits with intermediate accumulation of energy, in particular in an inductance (which may be made from an inductive coil or a piece of transmission line). The output pulse is shaped during the sharp break-off of the current flowing through the inductance by the step recovery device, as shown in the example of Figure 9.5.
• Circuits in which the device shorts the transmission line from the generator to the load for the time needed to establish a wave with the required amplitude in the line. Then, the device quickly opens the line and shapes the pulse front, which is the case shown in Figure 9.6. The simplest variant is the one in which the DSRD shunts the load directly.
A significant feature of all shaping circuits is that, during the time τ–, when the step recovery device is in the conducting state, it is necessary to increase the current in the energy storage inductor to the required level Im, after which this current should be quickly broken off. The current increase is done by the primary switch. Powerful transistors (bipolar or field-effect) or thyristors may be used as primary switches. In powerful pulse-forming circuits with large peak currents Im, the primary switch should guarantee a relatively large current rise rate I′ = dI/dt ≥ Im/τ–.
FIGURE 9.5 A symmetric parallel LC circuit for a drift step recovery diode (DSRD).
FIGURE 9.6 A drift step recovery diode (DSRD) used as a shaper in a transmission line.
Better switching properties of the step recovery devices require a shorter time τ–. Usually, τ– is 50 to 200 ns, which yields I′ ≥ 1 A/ns for Im > 100 A. The problem of the high dI/dt capability of power closing switches will be examined in detail later. Although the operation of step recovery devices (fast opening switches) differs strongly from that of the widely used closing switches, it is possible to design a power switch combining a slow closing thyristor and a fast opening switch. Such a thyristor-diode closing switch (TDCS), shown in Figure 9.7, may be considered a fast-closing two-terminal switch [(1) and (2) in Figure 9.7]. In this case, it is possible to use the great experience accumulated in the development of powerful thyristor-based pulsers, after substituting a TDCS for the thyristor. This provides a dozens-fold improvement in turn-on rise time. The most efficient DSRD-based circuits are considered below.
Quasi-symmetrical Parallel LC Circuit3
Let us consider the two similar LC circuits of Figure 9.5, which have drift step recovery diodes (DSRDs) inserted in the common arm. Initially, the energy storage capacitors C1 and C2 are charged with the polarity shown, and switches S1 and S2 are open. When switch S1 is closed, capacitor C1 discharges via inductor L1 and the diode (DSRD). The discharge current is a forward current for the DSRD, the resistance of the DSRD is low, and the current I1 in the C1, S1, L1, DSRD circuit oscillates. The half-cycle period of the LC circuit must not be more than several hundred nanoseconds. The minority carrier lifetime in the DSRD is large, tens of microseconds and more. The total number of electron-hole pairs injected, or pumped, into the DSRD during the first forward half-period of the current oscillation is equal to the charge passed through the diode. When the current changes direction in the second half-period, the diode remains in a high-conducting state due to the stored electron-hole pairs. At the moment when current I1 crosses zero at t = t0, the second switch S2 closes, and the C2 discharge current I2 is added to the current of the L1, C1 circuit, doubling the total DSRD current. At the moment of the current maximum, the charge extracted from the DSRD during the τ– period becomes equal to the charge injected during the τ+ period, and the DSRD total current breaks sharply within 1 to 2 ns.
FIGURE 9.7 Combined closing switch consisting of a thyristor, drift step recovery diode (DSRD), and a capacitor.
At that moment, all the energy initially stored in the C1 and C2 capacitors is accumulated in the L1 and L2 inductors, and their currents reach maximum values. During the DSRD switch-off process, its current is diverted into the load resistor Rl. The front of the load current pulse is determined by the turn-off time of the DSRD, and the decay is ~L/Rl (where L is the total inductance of L1 and L2 connected in parallel). The peak load voltage Ulm ~ Rl(I1 + I2) may be many times higher (>10×) than the initial capacitor voltage U0. Such voltage multiplication is one of the important advantages of DSRD-based circuits. High voltage exists in the circuit for only a very short time (nanoseconds); therefore, high-voltage corona and arc discharge problems are not severe. The case of ideal primary switches S1 and S2 described above is very simple and very effective. It should be mentioned that the total delay of the shaped pulse equals 3/4 of the LC circuit period and may reach hundreds of nanoseconds. The stability of the delay is determined by the stability of the LC circuit which, as is well known, may be better than 10⁻⁴. Therefore, the jitter of the delay may be less than 100 ps. When C1 and C2 are charged from one power supply, supply-voltage instability does not influence the moment of the DSRD current break; i.e., the ratio of the injecting current to the extracting current does not depend on the power supply voltage.
The performance of the primary switches S1 and S2 and the losses in the L and C elements may strongly influence the efficiency of pulse generation. The influence of losses may be characterized by the Q-factor of the C, S, L, DSRD circuit, or by the ratio of the current maximum in the first half-cycle to that in the second when only switch S1 is turned on. It was difficult to get a Q-factor larger than 3 for currents greater than 100 A. However, it is very easy to get Q-factors larger than 100 for small nonferric linear inductors of less than 1 µH, even at currents greater than 1000 A. The Q-factor of low-inductance capacitors with stored energy >10⁻³ J is not as large as for inductors; it is about 10 to 30 for the best ceramic capacitors. The switch losses play the major role in the Q-factor. There are two types of losses: transient losses during the turn-on process, and losses due to resistance in the turned-on state. In practice, they decrease the Q-factor of the LC circuits into the two to five range. The symmetry of the LC circuit in Figure 9.5 is broken by a low Q-factor: with equal L and C in both circuits, the current I1 in the second half-period of the L1C1 circuit is not equal to the current I2 during the first half-period of the L2C2 circuit. When the DSRD breaks the current, part of the L2 current I2 goes into the first inductor L1, additionally increasing the total losses. Adjusting the parameters of the first LC circuit, by increasing L1 and decreasing C1, can decrease the current interchange between L1 and L2; nevertheless, if the Q-factor is less than 3, the overall efficiency of the pulse-forming circuit will still be no better than 70 to 80%. Every known kind of closing switch may be used for the primary switches S1 and S2. In this work, we consider only field-effect and bipolar transistors, thyristors, and dynistors used as primary switches.
Series LC Circuits
Interchange of inductor currents plays a minor role in the series LC circuits shown in Figure 9.8.
Initially, switches S1 and S2 are open, the storage capacitor C1 is charged up to the initial voltage Uo, and the second capacitor C2 is discharged. After the first switch S1 is closed, the first capacitor C1 discharges and the second, C2, charges. If the capacitances of C1 and C2 are equal, after the half-period of the LC circuit cycle the second capacitor C2 will be charged up to the initial voltage Uo. The discharge current flows through the DSRD in the forward direction and "pumps" it, as in the previous case. At the moment of maximum voltage on C2, the second switch S2 closes, and C2 discharges via S2, L2, and the DSRD so that the current flows in the blocking direction for the DSRD; as in the previous case, however, the DSRD remains in a conducting state. If L1 = L2 and C1 = C2, the impedance of the L2C2 circuit is two times less than the impedance of the C1, L1, L2, C2 circuit at the "pumping" stage, so the peak current is two times more than the pumping current.
FIGURE 9.8 A series LC circuit for a drift step recovery diode (DSRD).
Therefore, at the moment of the current maximum, the extracted charge is equal to the pumped charge. The DSRD breaks the current and diverts it into the load Rl. If the Q-factor is small but the voltage drop on S2 in the open state is low, then L1 and L2 remain separated, and the decay of the load pulse is determined by L2 only. The new features of the series LC circuit are as follows:
1. The first switch has to pass all of the energy.
2. The second switch has to hold the total load current.
3. After the closing of the first switch, a sharp voltage pulse appears on the second switch S2, as shown in Figure 9.8.
The last feature can lead to stray dU/dt turn-on of S2, if S2 is a thyristor. But if S2 is a magnetic switch, the voltage may be used to turn it on.4
Features (1) and (2) are disadvantages in the case of high-current, high-energy pulse generation, compared with the parallel LC circuit. A piece of transmission line (coaxial cable) may be connected instead of inductor L2. In this case, a rectangular pulse will be generated, and the pulse length equals twice the wave propagation time along the line. The same substitution of lines for inductors in the parallel circuit of Figure 9.5 permits it to shape rectangular pulses, but in that case current exchange between the two parts of the circuit could distort the ideal shape.
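The relations above are easy to sanity-check numerically. The following Python sketch estimates the main quantities of the parallel circuit of Figure 9.5 and of a transmission-line shaper; all component values (U0, C1, L1, Rl, cable length) are illustrative assumptions, not values taken from the text.

```python
import math

# Illustrative component values (assumptions, not from the text)
U0 = 1.0e3     # initial capacitor voltage, V
C1 = 10e-9     # storage capacitor C1 = C2, F
L1 = 1.0e-6    # inductor L1 = L2, H
Rl = 50.0      # load resistance, ohm

Th = math.pi * math.sqrt(L1 * C1)   # half-period of one LC branch (the pumping time), s
Im = U0 * math.sqrt(C1 / L1)        # peak current of one branch, A
print(f"half-period Th       = {Th * 1e9:.0f} ns")
print(f"peak branch current  = {Im:.0f} A")

# When the DSRD breaks the current, both inductor currents are diverted into the load.
Ulm = Rl * (2 * Im)                 # peak load voltage ~ Rl*(I1 + I2)
decay = (L1 / 2) / Rl               # decay time ~ L/Rl with L = L1 || L2
delay = 1.5 * Th                    # 3/4 of the full LC period
print(f"peak load voltage    ~ {Ulm / 1e3:.0f} kV (x{Ulm / U0:.0f} multiplication)")
print(f"pulse decay          ~ {decay * 1e9:.0f} ns, total delay ~ {delay * 1e9:.0f} ns")

# Replacing L2 with a coaxial line of length l gives a rectangular pulse of width 2*l/v.
line_len, v = 1.0, 2.0e8            # m, m/s (assumed cable, ~0.67 c)
print(f"rectangular pulse    = {2 * line_len / v * 1e9:.0f} ns for a {line_len:.0f} m line")
```

With these assumed values, the sketch reproduces the behavior described above: roughly tenfold voltage multiplication, a decay of a few nanoseconds, and a total delay of a few hundred nanoseconds.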
9.1.3 SUBNANO- AND PICOSECOND DEVICES
Devices with delayed ionization, such as silicon avalanche shapers (SASs), are closing switches. As was shown in Section 9.1.1, switching on a SAS requires a constant voltage bias and a fast-rising voltage. A fast-rising voltage may be generated by one of the DSRD-based circuits described above. Figure 9.9 shows several basic SAS circuits; only part of the DSRD-based quasi-symmetrical circuit is shown. The dc bias is provided by the Cb, Lb, and Rb components. A SAS may be connected in a gap in a transmission line, as shown in Figure 9.9a. In this case, the SAS only sharpens, or erodes, the pulse front propagating through the line. If a peaking capacitor Cs is connected, the output pulse amplitude may be increased by up to two times. During the charging of Cs, a considerable part of the energy may be reflected from Cs and lost, which leads to poor circuit efficiency.
FIGURE 9.9 Silicon avalanche shaper (SAS) basic circuits for pulse generation.
The space charge region of a diode can store charge, and a DSRD may be used as the peaking capacitor, as shown in Figure 9.9b, with no loss of energy due to reflection. An additional capacitor Cs may be connected in parallel with the DSRD. All the main parts, including the DSRD, Cb, and the SAS, may be assembled as one unit with minimal parasitic parameters. However, the output voltage cannot exceed the maximum voltage capability of the DSRD. A modification of the scheme is shown in Figure 9.9c. In this case, the storage (peaking) capacitor Cs is charged via inductor Lc. Here, the Cs charging voltage may be twice as high as that on the DSRD but, due to the nonlinearity of the DSRD capacitance, a considerable part of the energy stored in the DSRD capacitance is lost, because it cannot be transferred to Cs.
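The voltage doubling mentioned for the circuit of Figure 9.9c follows from ordinary resonant charging of a capacitor through an inductor. The sketch below evaluates the lossless case with an ideal constant source; the component values are assumptions, and in the real circuit the DSRD is not an ideal source and its nonlinear capacitance reduces the transferred energy, as noted above.

```python
import math

# Assumed values for a lossless L-C charging estimate (illustrative only)
V0 = 2.0e3     # voltage step available to charge Cs, V
Lc = 50e-9     # charging inductor, H
Cs = 20e-12    # peaking capacitor, F

t_peak = math.pi * math.sqrt(Lc * Cs)   # time at which the Cs voltage reaches its maximum
# For an ideal step source and an initially discharged capacitor: Vcs(t) = V0*(1 - cos(t/sqrt(Lc*Cs)))
print(f"charging time to peak ~ {t_peak * 1e9:.2f} ns")
print(f"peak voltage on Cs    ~ {2 * V0 / 1e3:.1f} kV (twice the source step in the lossless case)")
```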
9.2 PROPERTIES AND LIMITATIONS OF PRIMARY SUBMICROSECOND SWITCHES
As was shown in Section 9.1.2, the new step recovery diodes are used with primary closing switches, which must produce current pulses in the submicrosecond range. The set of switch parameters strongly determines the potential performance of the pulse-forming circuits. After one cycle of pumping and current break, a DSRD is ready for the next cycle, and the pulse repetition period limited by the DSRD is only one hundred or several hundred nanoseconds. This corresponds to a maximum possible repetition rate in the megahertz range. In actual circuits, the maximum repetition rate is limited by the primary switches, e.g., to dozens of kilohertz for power thyristors. Power field-effect transistors (FETs) could provide repetition rates as high as the megahertz range. The other important parameters of the primary switches are the turn-on time, the resistance in the turned-on state, and the dI/dt capability. The border between the transient turn-on process and the steady (static) turn-on state is rather fuzzy. The turn-on process of switching a constant voltage into Rl, typical for every kind of switch, is shown in Figure 9.10. There is an initial fast voltage drop from the off-state voltage; then the rate of voltage drop (dU/dt) sharply decreases by an order of magnitude. The length of this slow "tail" may be tens of times longer than the first fast part. The "tail" gradually approaches the steady-state voltage drop Uon, and the "tail" voltage drop may be tens of times higher than the steady-state drop. As a rule, after turn-on, power high-voltage devices do not reach a steady state during the relatively short time of 100 to 300 ns while the DSRD is in a high-conducting state (τ+ + τ–). Therefore, the well known device parameter "static on-resistance" is of little value for evaluating the energy losses. In all of the circuits discussed in Section 9.1.2, the primary switches have to provide the needed current rise rate (I′ = dI/dt). For LC circuits with maximum current Im and oscillating frequency f, where I = Im sin(2πft), the intrinsic value is I′i = 2πfIm = Im/Th, where Th is the half-period time. It may be shown that a high Q-factor is possible only when the dI/dt capability of the primary switch, I′s, is much larger than the LC intrinsic value. When I′s < I′i, current oscillation in the LC circuit is impossible (the Q-factor is less than 1), and most of the energy stored in the capacitor dissipates in the switch. In the next sections, we consider the properties of semiconductor power switches (transistors, thyristors, and dynistors) for use as primary switches. We will not discuss the systems that drive (gate) the switches.
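As a quick illustration of the requirement I′s » I′i, the sketch below computes the intrinsic current rise rate of an LC discharge for assumed values and the number of switches that would be needed just to match it.

```python
import math

# Assumed circuit requirements (illustrative, matching the order of magnitude used later in the text)
Im = 100.0     # required peak current, A
Th = 50e-9     # half-period of the LC circuit, s

Ii = Im / Th   # intrinsic dI/dt of the LC circuit (Im/Th), A/s
print(f"intrinsic dI/dt Ii' ~ {Ii:.1e} A/s ({Ii / 1e9:.0f} A/ns)")

# A good Q-factor needs the switch capability Is' to be much larger than Ii'.
Is = 0.5e9     # assumed dI/dt capability of a single switch, A/s
n_min = math.ceil(Ii / Is)
print(f"switches in parallel just to reach Ii': {n_min} (a good Q-factor needs several times more)")
```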
9.2.1 TRANSISTORS
Bipolar Transistors
The physics of bipolar transistor operation is well known. In the turn-off state, the applied voltage is blocked by the collector p-n junction, as shown in Figure 9.11. The width of the space charge region WSCR is
FIGURE 9.10 The turn-on process during semiconductor switch closing.
WSCR = √(2εU/(qNd))    (9.1)
During the turn-on process, electrons are injected into the SCR via the p-base layer, and the collector and load current increases. Due to the voltage drop on the load, the voltage drop across the p+nn+ layers of the transistor decreases as the transistor current increases. There are two main factors limiting the current density rise rate (dj/dt). One is due to the diffusion of electrons through the p-layer. The collector current density (jc) rise, in the case of ideal emitter efficiency, may be approximated by
jc = jb (τn/τd)[1 − exp(−t/τn)]    (9.2)
where τn is the electron lifetime in the p+-layer, τd = Wp²/(2Dn) is the diffusion time of electrons through the p+-layer, Dn is the diffusivity, and jb is the gating base current density. From Equation (9.2), the maximum value of dj/dt is
(dj/dt)m = jb/τd    (9.3)
FIGURE 9.11 The distribution of fields and carriers in a bipolar transistor.
High-voltage transistors, as a rule, have a p-layer width near 10 µm and a τd near 20 ns. Even for a large effective collector area S ≈ 1 cm² and a base current of ≈10 A, we have I′m = S(djc/dt)m ≈ 0.5 × 10⁹ A/s. This value should be compared, for example, with the I′i = 2 × 10⁹ A/s needed in an LC circuit to generate a 100 A pulse with Th ≈ 50 ns. It is evident that more than 10 transistors must be used in this example to get a good Q-factor. It is possible to increase the I′m capability by decreasing Wp and τd. For Wp ≈ 1 µm, τd is less than 1 ns. For such short diffusion times, which are less than the time of flight of electrons across the space charge region, the I′m capability is limited by space charge redistribution in the one-sided injection case. It may be shown that this limitation is
j′m = 3εvs²/W³SCR
where ε is the permittivity, which gives j′m ≤ 10¹¹ A/cm²·s.
In accordance with Equation (9.1), WSCR ≈ 10⁻² cm, Nd ≈ 10¹⁴ cm⁻³, and τ = WSCR/vs ≈ 1 ns for a transistor blocking more than 1 kV. Such a transistor should have a more than 100 times improved I′m capability but, to our knowledge, such devices are not produced. It should be noted that such thin-base transistors would be very close and similar to high-voltage vertical field-effect transistors, in which carrier transfer through the thin low-field region does not limit the current rise. For a given collector current density during the turn-on process, the transistor voltage drop consists of two parts: the space charge region voltage drop of Equation (9.1) and the neutral region voltage drop Un of Figure 9.11:
Un = jc·Wnut/(q·µn·Nd)    (9.4)
where Wnut is the neutral region width. The space charge voltage drop may be determined as
USCR = U − Rl·S·jc − Un    (9.5)
where Rl·S·jc is the load resistor voltage drop and S is the collector area. Equation (9.4) is valid when jc < js = qvsNd. If jc ≈ js, the space charge of electrons compensates for the space charge of donors in the n-layer, the SCR widens up to the n+-layer, and the neutral region disappears. If jc > js, the electric field gradient changes sign, the field maximum shifts to the n+-layer, and the field intensity at the collector p+n junction decreases. This field decrease slows down the electron drift velocity at the p+n junction and decreases the rate of current rise (dI/dt), so the current density should be kept less than js = qvsNd. Nevertheless, while jc < js, it is possible, in accordance with Equation (9.5), for USCR to go to zero or even change sign. That indicates the appearance of a diffusion region at the p-n junction instead of the high-voltage SCR, as shown in the Figure 9.11 "on state." The diffusion region width L increases as
L ≈ √(2Dt)    (9.6)
and the width of the neutral region, where the voltage drop is determined by Equation (9.4), decreases. The voltage drop across the diffusion region is less than 1 V, a very small amount. The rate of width increase in Equation (9.6) is that of the slow diffusion process: as follows from this equation, it takes about 2 µs to fill an n-layer of about 100 µm width from the diffusion region. It is this process that produces the slow "tail" mentioned above and shown in Figure 9.10. Hence, the main advantage of bipolar transistors over field-effect transistors, a low static on-state voltage drop, plays no part in their use as primary high-voltage switches. In low-voltage, high-current transistors, this advantage may play a significant role; e.g., for 100 V and Wn ≈ 10⁻³ cm, filling times of about 20 ns are possible. For a 1 kV transistor with S = 1 cm², Wn ≈ 10⁻² cm, and Nd ≈ 10¹⁴ cm⁻³, a neutral region voltage drop as high as ≈50 V at a 100 A current is possible. The associated dynamic resistance is approximately 0.5 Ω, which may strongly limit the Q-factor for LC circuits with impedance ρLC = √(L/C) less than 10 Ω. As was mentioned in Section 9.1.2, the efficiency of devices used as primary switches may be characterized by the ratio of the maximum currents during the first (I1) and second (I2) half-periods Th of the oscillation in the LC circuit containing the switch. It should be noted that the problem of determining the Q-factor from the decay of oscillations is poorly defined when the Q-factor is low, say less than 5.
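The two bipolar-transistor limits just derived can be evaluated directly. The sketch below uses the geometry and doping quoted in the worked example (Wp ≈ 10 µm, S ≈ 1 cm², Nd ≈ 10¹⁴ cm⁻³, Wn ≈ 10⁻² cm); the diffusivity and mobility are assumed typical silicon values.

```python
q = 1.602e-19      # elementary charge, C
Dn = 30.0          # electron diffusivity in the p+ base, cm^2/s (assumed typical value)
mu_n = 1350.0      # electron mobility in the n-layer, cm^2/(V*s) (assumed typical value)

# Base-diffusion limit, Eq. (9.3): (dj/dt)_m = j_b / tau_d with tau_d = Wp^2 / (2*Dn)
Wp = 10e-4         # p-base width, cm
S = 1.0            # effective collector area, cm^2
Ib = 10.0          # gating base current, A
tau_d = Wp**2 / (2 * Dn)
dI_dt = (Ib / S) / tau_d * S
print(f"tau_d ~ {tau_d * 1e9:.0f} ns, diffusion-limited dI/dt ~ {dI_dt:.1e} A/s")

# Neutral-region drop, Eq. (9.4): Un = jc * Wn / (q * mu_n * Nd)
Nd = 1e14          # donor density, cm^-3
Wn = 1e-2          # n-layer width, cm
jc = 100.0 / S     # collector current density at 100 A, A/cm^2
Un = jc * Wn / (q * mu_n * Nd)
print(f"neutral-region drop at 100 A: Un ~ {Un:.0f} V, dynamic resistance ~ {Un / 100.0:.2f} ohm")
```

With these numbers the sketch reproduces the approximate values quoted above: a diffusion-limited dI/dt of about 0.5 × 10⁹ A/s and a neutral-region drop of roughly 50 V (about 0.5 Ω).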
The devices tested in an LC circuit must be shunted by a diode to bypass the reverse current. The diode must have a low forward resistance to keep losses small and avoid degrading the Q-factor. Such diodes were specially developed by us and are used with all kinds of primary semiconductor switches. Our tests of bipolar transistors in LC circuits have verified the considerations made above. For example, a test of a 1 kV rated NPN KT854 transistor (peak current Im = 18 A, Uoff = 400 V, driving base current Ib = 3 A, Th = 160 ns) showed I1/I2 = 1.5 (a poor Q-factor of ≈2 to 3). Increasing the driving current improved the Q-factor, but not significantly, due to the decrease of the transistor gain at large currents. Low-voltage (<100 V) KT970 transistors have a better Q-factor at their peak current, about 5 to 7 at Im ≈ 30 A, which may allow these transistors to be used as primary switches.
Field-Effect Transistors
High-voltage vertical field-effect transistors (FETs) have much in common with bipolar transistors in terms of the space charge region blocking the applied voltage and the injection of carriers into the SCR, which leads to increased current. The main difference is the source of the carriers and the way they are injected into the SCR. The time needed for electrons to get from the source into the SCR, even in high-voltage FETs, is far less than in bipolar transistors and is as small as several nanoseconds. These times are close to the time of flight across the SCR, and the considerations made above on j′m for the small τd case may be applied to FETs. When the voltage drop on the SCR decreases and USCR goes to zero in accordance with Equation (9.5), no diffusion region appears, contrary to the situation in bipolar transistors. Nevertheless, electron transfer across the low-field region slows down, and the rate of current rise decreases. The minimal voltage drop is determined by the drop across the neutral region, as given by Equation (9.4), which is not enriched by electron-hole plasma. An FET's resistance in the static turn-on state is higher than that of a bipolar transistor; as was shown in the previous section, however, this advantage of the bipolar transistor plays no role when high-voltage devices are used in LC circuits with Th ~ 10⁻⁷ s. Due to the faster transfer of electrons from the source to the space charge region (SCR), the FET has a better dI/dt capability than bipolar transistors with the usual thick p-layer. We should remember that bipolar transistors with thin p-layers would have the same I′ capability as an FET and could provide the same, or maybe slightly better, Q-factor due to their lower static "on" resistance. The test results of FETs in the same LC circuit, where Th = 160 ns, are compared in Table 9.1.
TABLE 9.1 Comparison of FETs in an LC Circuit where Th = 160 ns
Transistor Type – Voltage Rating    ρ (Ω)    I1/I2    Q        I1 (A)
IRFBB30 – 1 kV                      50       1.15     5 to 7   8
IRFBB30 – 1 kV                      24       1.50     2 to 3   16
IRF840 – 500 V                      24       1.30     3 to 4   16
IRF840 – 500 V                      18       1.50     2 to 3   22
These results show that a decrease of the LC impedance ρ worsens the Q-factor as the peak current increases. The higher the voltage rating, the higher the static "on" resistance and the n-layer voltage drop, as follows from Equations (9.1) and (9.4), assuming the same effective area S.
9.2.2 FOUR-LAYER DEVICES
The most powerful semiconductor devices are four-layer n+p+np+ structures, which are thyristors if they have a third, gating electrode, or dynistors if they have only two electrodes. Although they have only one p+-layer more than a bipolar transistor, the physics of their operation is changed dramatically. There is no good theoretical basis for this change in power high-voltage thyristors; there are only several hypotheses explaining particular features of thyristor behavior. The distinctive property of the thyristor is double injection: electrons are injected from the n+ emitter, and holes are injected from the oppositely placed p+ emitter. Such two-sided injection can fill the whole volume of the n-layer with high-density electron-hole plasma in a much shorter time than in a transistor. The space charge redistribution limitation considered in Section 9.2.1 and Equation (9.8) gives a value four times higher for the symmetrical thyristor case than for the asymmetrical transistor. In thyristors, the direction of the drift component of the injected carrier velocity in the p+ base coincides with that of the diffusion component. Due to this coincidence, the electron time of flight across a rather thick (e.g., 10 µm) p+ base may be very short at high current density. In this case, the heavy dj/dt limitation of Equation (9.3), which is connected with the p+ base, plays no role. Current density rise rates up to dj/dt ≈ 10¹⁰ A/cm²·s have been achieved in thyristors. This value is many orders of magnitude more than is achieved in bipolar or field-effect transistors. In spite of this potential of thyristors, the problem of fast switching of high currents remains. To get a high dI/dt, it is necessary to increase the device area through which the current flows. The main obstacle is dynamic filamentation of the current in the turn-on process. This phenomenon is typical for every device with positive, or regenerative, feedback. It was shown in Reference 6 that, in a thyristor-like device, such instability of the uniform current distribution appears in the turn-on process when the condition d²j/dt² > 0 is fulfilled. This condition means that if, in some small local area, a fluctuation with increased current density appears, then the rate of current rise there increases, so the current density increases again, and so on. The current density at this local point "runs away" from the average current density in the remaining area, so the uniform current distribution is unstable. To suppress this instability, it is necessary to design an n+p+np+ structure in which the condition d²j/dt² < 0 is fulfilled; that is, the current rise is sublinear with time, at least at current levels exceeding some critical value jc, as shown in Figure 9.12. It is impossible to get from the initial "off" state with j ≡ 0 to the stage where d²j/dt² < 0 without crossing the region where d²j/dt² > 0. Nevertheless, if this transition is placed in the low-current region, it is possible to "jump over" it by a fast increase of the current density across the whole area up to the needed value, as shown in Figure 9.12.
Dynistors
This jump from zero current to a stable, uniform current distribution is possible only in dynistors. It may be forced by a short (e.g., 2 to 20 ns), high-voltage pulse applied to the cathode and anode of the dynistor, as shown in Figure 9.13. A separating diode prevents the driving pulse from getting to the load.
We have tested different types of tailor-made dynistors having an n-layer width Wn ≈ 130 µm, a 40 Ω·cm resistivity, and a p+-layer (base) width Wp ≈ 10 µm. Such a dynistor can hold a 1 to 1.2 kV potential and has a 15 to 20 ns turn-on time. Applying a driving pulse raises the dynistor voltage above the initial voltage U0 of the charged capacitor. If the length of the driving pulse is less than the dynistor turn-on time, the driving pulse "overvoltages" the dynistor above the breakdown of the collector p+n junction and initiates a dynistor current jd exceeding the critical value jc, as shown in Figure 9.12. The high electric field at the p+n junction causes impact ionization, which gives a fast rise of the current. Impact ionization can provide a very high dj/dt, many orders of magnitude more than injection. To switch the whole dynistor area uniformly, the driving current has to be approximately 10% of the maximum load current. With a blocking voltage U0 = 1 kV, 500 A were switched into the load in less than 20 to 30 ns. The blocking voltage may
be easily increased by connecting several dynistors in series, or stacking them. Two stacked dynistors switched 760 A at U0 = 2 kV in the same time. Although dynistors appear to have considerable promise, they have not been used extensively because they need complicated driving systems. Now, the use of step recovery devices makes the generation of short driving pulses effective and simple. Nevertheless, much research and design work must be done before dynistors can be widely used. Using very short voltage fronts of less than 2 ns allows dynistors to switch on in the delayed ionization mode with turn-on times of 100 ps. This mode will be considered later in the section devoted to picosecond devices.
Thyristors
Presently, thyristors are used mostly in power circuits because of their current limitations. They need very simple driving circuits, but their main disadvantage with respect to transistors is a low repetition rate.
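The dynistor figures quoted above translate into simple drive requirements. The sketch below works them out for a small series stack; the per-device numbers come from the text, while the stack size is an assumption.

```python
# Per-device figures from the text; stack size is an assumption
U_per_device = 1.0e3   # blocking voltage of one dynistor, V
t_on = 25e-9           # turn-on time, s (20 to 30 ns quoted)
I_load = 500.0         # current switched at 1 kV blocking, A
n_stack = 3            # assumed number of series devices

U_block = n_stack * U_per_device
I_drive = 0.10 * I_load            # driving current ~10% of the maximum load current
print(f"stack blocking voltage   ~ {U_block / 1e3:.0f} kV")
print(f"required driving current ~ {I_drive:.0f} A, effective dI/dt ~ {I_load / t_on / 1e9:.0f} A/ns")
```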
FIGURE 9.12 Thyristor and dynistor current density vs. time.
FIGURE 9.13 A dynistor test bench diagram.
A very cheap and simple 1 kV rated KU221 thyristor, when switched in an LC circuit (Th = 160 ns, ρ = 23 Ω, I1/I2 ≈ 1.4, I1 = 18 A), showed nearly the same figures as the bipolar and field-effect transistors. But this thyristor, under the same conditions as in the IRF840 case, showed the same I1/I2 = 1.4 at up to more than I1 = 100 A, whereas the transistors mentioned above showed I1/I2 > 2 at currents exceeding 50 A. Our thyristors, custom-made for high dI/dt, were tested in a very low-impedance LC circuit with U0 = 1.1 kV, Th = 300 ns, C = 0.4 nF, and an inductance determined only by the wiring. The test results were I1 = 3.3 kA, I1/I2 = 2.8, and dI/dt = 19 A/ns. The dI/dt advantages of thyristors were clearly seen when they were used in a thyristor-diode closing switch (TDCS), briefly considered above and shown in Figure 9.7. The operating mode of a thyristor in a TDCS system differs substantially from the operating mode in a linear modulator system, where the thyristor is switched on in series with the load resistance. In a normal modulator system, as the current through the thyristor increases, the voltage on it decreases, since the voltage drop on the load increases; at the end of the transition process, the current through the thyristor is Im = Uo/Re. In a thyristor-diode switch, the voltage drop on the DSRD at the high-conductivity stage is small. When there is a large enough additional capacitor Cp, the increase in the current is limited by the charge Qt that was accumulated in the DSRD during the pumping time. The thyristor current increases under a constant voltage up to the value Im, which is determined by the condition
Qt = ∫ I(t) dt, the integral being taken from t = 0 up to the moment at which the current reaches Im.
Then, due to the superfast restoration of the voltage on the DSRD, the diode current breaks and switches into the load, while the voltage on the thyristor drops sharply. Such a system for increasing the current under a constant voltage, with an external limitation of the charge, has a number of peculiarities. The concept of a "front for switching on the thyristor" loses its meaning, since the moment at which the current is limited is controlled by an external circuit. The rate of increase of the current, I′ = dI/dt, at the given voltage becomes the most important parameter. Let us recall that the current broken off by the DSRD can easily be made as large as desired; it is only necessary to increase its area, although there is still some limitation due to skinning of the current. At the same time, it is well known that increasing the area of the p+–n–p–n+ structure of a thyristor does not lead to a proportional increase in the rate of current rise, due to the effect of current localization. A substantial increase in the area, and consequently in I′ as well, turns out to be possible when the thyristor is switched on by an overvoltage pulse along the anode. As shown in Reference 6, the presence of feedback between the current and the voltage in thyristors, due to the load reaction, contributes to the localization of the current in the thyristor during the transition switching-on process. In a circuit where the charge stored in the DSRD is used to limit the thyristor current, such a coupling is absent, and the homogeneity of the current distribution during switching on can be improved. The transition processes for switching on thyristors in the charge-limitation mode have not been studied elsewhere. The maximum allowable values of I′ given in the reference literature concern systems in which the current is limited by an external circuit; these I′ values cannot even indicate the order of magnitude of the values in the case that interests us. Experiments showed that modulator thyristors such as the KU108 and KU221, used in a system with the charge limited, have a dI/dt ~ 1 to 3 A/ns.
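In the charge-limited mode, the current at which the DSRD breaks follows from the pumped charge and the thyristor's rise rate. If the current is approximated by a linear ramp, I(t) ≈ I′t, the condition Qt = ∫I dt gives Im = √(2·Qt·I′). The sketch below evaluates this for assumed numbers (the dI/dt value is within the 1 to 3 A/ns range quoted for the KU108/KU221; the pumped charge is an assumption).

```python
import math

dI_dt = 2.0e9     # thyristor current rise rate in the charge-limited mode, A/s (assumed, 1-3 A/ns range)
Q_t = 2.5e-5      # charge pumped into the DSRD during the pumping stage, C (assumed)

# Linear ramp: Q_t = I_m**2 / (2 * dI_dt), so the break-off current and the time to reach it are
I_m = math.sqrt(2.0 * Q_t * dI_dt)
t_m = I_m / dI_dt
print(f"current at break-off I_m ~ {I_m:.0f} A")
print(f"time to reach I_m        ~ {t_m * 1e9:.0f} ns")
```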
9.3 PROPERTIES AND LIMITATIONS OF NANOSECOND OPENING SWITCHES BASED ON STEP RECOVERY OF A HIGH-VOLTAGE P-N JUNCTION
9.3.1 GENERAL CONSIDERATIONS
We have already covered the basics of drift step recovery diodes. To estimate the devices' potential uses, it is necessary to consider the physics of their operation. No work on these devices has been published in the West; however, there are several Russian groups working on drift step recovery devices, most of them involved in DSRD pulser design. Only two groups, at the Ioffe Physical-Technical Institute and the Pulse System Group, are involved in research and development investigating the physics of DSRD devices. The following consideration of the physics is based on the work of these groups. Research is focused on establishing the main relations between the devices' technological parameters (n-layer width, doping level, etc.) and the operational parameters such as optimal current, switching time, voltage drop, and others. To explain the effect of superfast restoration in high-voltage diodes, the classic and well known picture of the processes must be expanded considerably. We will now examine the basic properties at the qualitative level. To begin with, we will make a simplified estimation of the upper limit for the main parameter, dU/dt; to support the conclusions, a more detailed theory will be presented as a supplement. When a forward current pulse I+ of duration τ+ is switched on sharply in a p+–n–n+ diode, a diffusion layer of plasma is formed near the p+–n junction during the current flow time, as shown in Figures 9.1 and 9.14, with a characteristic size Wd1 ≈ √(Dpτ+) and a maximum concentration at the p+–n junction of nm ≈ j+τ+¹ᐟ²/(qDp¹ᐟ²). A very important condition in the picture being described is that the duration of the current pulse τ+ be small compared with the lifetime of the minority carriers τp, i.e., τ+ « τp. Beyond the border of the diffusion layer, where x > Wd1, the concentration of minority carriers is small, and their transfer is determined by drift in the electric field. When a sufficiently large charge has passed during time τ+, a second narrow, enriched diffusion layer of size Wd2 can be formed at the second boundary, by the n–n+ junction. The size of the second diffusion area and the concentration accumulated in it turn out to be less than in the first diffusion area, because the holes move over to the second area with a delay equal to the time of their flight through the n-base.
FIGURE 9.14 Distribution of the carrier concentration during injection in the diode. © 2001 CRC Press LLC
It can be shown that a sharply nonhomogeneous distribution close to that described above is also obtained in structures whose minority carriers have a small lifetime τp; in this case, in all of the arguments above, it is necessary to replace τ+ with τp. Thus, when τ+ is decreased, the size of the enriched layer by the p+–n junction can always be made smaller than the final size of the space charge region at complete restoration of the voltage. In our case, Wd1 ≈ 15 µm when τ+ = 2 × 10⁻⁷ s, and WSCR ≈ 70 µm when U ≈ 1 kV. It is precisely this feature that provides the superfast restoration. Due to the small size of Wd1, the voltage drop on the space charge region (SCR), after its formation, is small during the entire time that its boundary remains inside Wd1. As the SCR boundary passes through the enriched plasma layer, the layer's thickness decreases. After the boundary leaves for the area of low concentration, the rate of expansion of the SCR sharply increases and, as a consequence, the rise of the voltage on the SCR also accelerates. As further analysis shows, by the time the enriched layer disappears, all of the minority carriers also turn out to have been pulled out of the drift region (x > Wd1), and the conductivity current in the SCR is broken. From what has been stated above, it is possible to immediately obtain elementary estimates for the rate of growth of the voltage on the diode during restoration, while a reverse current with density j– is passing. The intensity of the field in a quasi-neutral region is inversely proportional to the carrier concentration n:
E ≈ j–/(2qµn)    (9.7)
where µ is the mobility; for simplicity's sake, the electron mobility is assumed equal to the hole mobility. As was noted, near the p+–n junction there arises an area with a strong backward gradient of the plasma concentration, which assists the movement of holes to the p+-layer and prevents the leakage of electrons from the p+–n junction, as shown in Figure 9.15. With time, the plasma concentration in this area drops while, in agreement with Equation (9.7), the field intensity increases. At the point of the concentration maximum, where dn/dx = 0, the electrons move away from the p+-layer toward the n-layer with a speed that depends on the intensity of the electric field created by the passing current:
vm = µE = j–/(2qnm)    (9.8)
After a period during which an electrical charge equal to the entire charge of the electrons in the dotted area of Figure 9.15 has passed through the external circuit, the concentration of electrons by the p+–n junction reaches zero. The neutrality condition near the junction is disrupted, and an area of positive space charge, formed by the ionized impurities with concentration Nd and by the holes, arises. The field intensity in the SCR and the voltage drop on it (USCR) begin to increase. The boundary (with a thickness on the order of the Debye radius) between the SCR and the neutral plasma moves with the speed of the electrons' departure from the boundary. The rate of movement of the boundary does not coincide with the speed of the point of the concentration maximum, and this problem will be examined later in more detail; however, to a first approximation, due to the retention of the concentration gradient, they can be considered close. Taking Equation (9.8) into consideration, the rate of growth of the voltage on the SCR is
FIGURE 9.15 Restoration of the space charge region by the p+n junction.
U′SCR = dUSCR/dt = (qNd + j–/vs)·a·vm/ε = (qNd + j–/vs)·a·j–/(2εqnm)    (9.9)
where a is the coordinate of the SCR boundary and nm is the concentration of the plasma at this boundary. In deriving this expression for U′SCR, it is assumed that the field intensity in a large part of the SCR exceeds the value Es = 10⁴ V/cm, which corresponds to saturation of the drift velocity of the charge carriers. Equation (9.9) shows that the maximum value of U′SCR, U′m, is determined only by the constants of the material and equals
U′m = Em·vs    (9.10)
where Em is the maximum value of the field in an SCR limited by avalanche breakdown. For silicon, Em ≈ 2 × 10⁵ V/cm, vs ≈ 10⁷ cm/s, and U′m ≈ 2 × 10¹² V/s. This means that it is possible, in principle, to use diodes to shape a voltage change with an amplitude of close to
2 kV during a time τf of approximately a nanosecond, which is extremely attractive for nanosecond pulse technology.
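The estimates of this section reduce to a few one-line calculations. The sketch below evaluates the maximum voltage rise rate of Equation (9.10) and the plasma-layer size for the 200 ns pumping example; the silicon constants are those quoted in the text, and the pumping current density is an assumption.

```python
import math

E_m = 2.0e5    # avalanche-limited maximum field in the SCR, V/cm
v_s = 1.0e7    # saturated drift velocity, cm/s
D_p = 12.0     # hole diffusivity, cm^2/s (assumed typical value)
q = 1.602e-19  # C

U_rate = E_m * v_s                       # Eq. (9.10)
print(f"U'_m = E_m*v_s ~ {U_rate:.1e} V/s, i.e. ~{2e3 / U_rate * 1e9:.0f} ns to build 2 kV")

tau_plus = 2.0e-7                        # pumping time from the text example, s
W_d1 = math.sqrt(D_p * tau_plus)         # size of the injected plasma layer, cm
print(f"W_d1 ~ {W_d1 * 1e4:.0f} um for tau+ = 200 ns")

j_plus = 200.0                           # pumping current density, A/cm^2 (assumed)
n_m = j_plus * math.sqrt(tau_plus) / (q * math.sqrt(D_p))
print(f"peak plasma concentration n_m ~ {n_m:.1e} cm^-3")
```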
9.3.2 DIODE-LIKE SWITCHES WITH ONE P-N JUNCTION
Earlier, we considered the physics of electron-hole plasma injection and extraction in p+nn+ structures, which is the physical basis for building the new devices called drift step recovery diodes (DSRDs). Actual device structures and their use in circuits can differ from the cases considered earlier. In this section, we consider how these differences influence device performance, what changes or additions must be made to use DSRDs, and what improvements are possible. Just as with the well known step recovery diodes (SRDs), when a short forward current pulse is switched to a reverse current, the DSRD, over a certain amount of time τd (the delay before the beginning of fast restoration), retains a high conductivity and is capable of passing a significant current with only a small applied voltage. When a DSRD is connected in parallel with the load, it shunts the current during the entire time τd, as shown in Figure 9.16a. When the conductivity of the DSRD drops sharply, the entire current is switched to the load, so the DSRD acts as an opening switch. As distinguished from the SRD, which permits operation with a direct current in the forward direction (we call it the pump current), the DSRD operates, as a rule, in the pulse mode. The parameters of the pumping, its amplitude and duration, are very tightly connected with the efficiency of operation of the DSRD as a switch. The discussion given above on the physics of nonstationary processes in diodes makes it possible to select the optimal ratios of the DSRD's design values and operating mode.
FIGURE 9.16 The drift diode: (a) schematic diagram and (b) turn-on voltage and current vs. time.
Current Waveform Influence on Diode-Like Switches with One P-N Junction
The analysis of the injection and restoration processes was done for a special, step-like form of the voltages and/or currents, which simplifies the explanation of the physical picture of the processes considerably. For practical purposes, however, the case of a smooth change (for example, sinusoidal or linear) of the currents supplied by the circuit external to the diode is more interesting. It is clear that, qualitatively, the picture of the processes is retained. At the same time, the operation of the diode in pulse circuits, up to the moment of fast restoration, is characterized by parameters that are integral with respect to time. Such parameters are
• the complete charge of nonequilibrium carriers accumulated in the diode, P = ∫J dt
• the fraction r+ of the complete charge that remains in the diffusion layer, determined by the ratio of the electron and hole mobilities
• the dimensions of the diffusion layer, determined by the duration of the time interval, L = √(Dτ+)
It is obvious that these parameters have little or no sensitivity to the specific form of the current pulses. The fast restoration process, under conditions of removal of the equilibrium carriers, is determined only by the value of the current at which this process begins. The extraction process is characterized by the distribution of the plasma injected at the first, "pumping," stage. In structures whose lifetime was decreased to τp = 0.5 µs by means of gamma irradiation, when the duration of the forward current is increased to values τ+ > τp, the plasma extraction process is determined only by the lifetime τp. There is an analogy between nonstationary processes with a short τ+ and stationary processes with a small lifetime τp, which is confirmed by Figure 9.17. This feature is of practical significance and makes it possible to use constant-current pumping of the DSRD.
Efficiency
In practical work, a voltage pulse with the minimum possible front duration τf, with the minimum voltage "precursor" before the front, must be obtained in the load, since the "precursor" substantially lowers the efficiency of the device.
FIGURE 9.17 Restoration of the voltage on the p+nn+ structure with a small lifetime τp = 0.5 µs for different forward current durations.
As was shown, at the end of the extraction of the electron-hole plasma by the backward current, a distribution is formed that corresponds to the equilibrium (unmodulated) state of the diode; that is, in the entire base n = Nd and p = 0, and the density of the current through the device is j = qNdµnE0. The condition of Equation (9.10), which describes the maximum rate of growth of the reverse voltage as the main charge carriers fly out of the base, requires a restoration current density js ≈ qNdvs, with E0 = Es. When the density is increased, a sharp increase of the "precursor" begins; when it is decreased, the front lengthens. For a given amplitude of the shaped voltage pulse, the optimal current density can be established by changing the area of the device or the load resistance. In this case, the amplitude of the voltage on the neutral region just before the front is Unu ≥ wE0. To obtain the minimum "precursor," it is necessary to manufacture the DSRD with the minimum possible base thickness. In reality, the thickness should correspond to the width of the SCR at the voltage of a stationary avalanche breakdown of the p–n junction. As was noted above, when the duration of the pumping is increased, the influence of the diffusion layer near the p+–n junction begins to have an effect, and it is precisely here that a wide space charge region (SCR) arises earlier than the moment when all the holes leave the base. An additional voltage drop USCR arises near the p+–n junction; consequently, in this case the "precursor" is determined by the sum USCR + Unu. Let us examine the effect of USCR on the operation of the device in more detail. The characteristic thickness of the diffusion layer, L = √(D(τ+ + τ–)), is the parameter that determines the value of USCR. The condition that USCR be small immediately imposes a limitation on the total time of the pumping and delay, τ+ + τ–. Additionally, the USCR value depends significantly on the shape of the impurity distribution in region L. When there is partial compensation of the n-layer donors by an acceptor impurity, which occurs in the process of creating the p–n junction, the space charge of the doping impurity (summed with the hole charge) can be decreased. Double diffusion, for example of boron and aluminum, makes it possible to create the required compensation area on the tail of the rapidly diffusing impurity, which is the optimum. However, when j– ≈ js and L ≈ 3 × 10⁻³ cm, even in the case of complete compensation, USCR becomes comparable to Un, and this imposes a natural limitation on L (L « 10⁻² cm) and on the total time (τ+ + τ– ≤ 5 × 10⁻⁷ s) in kilovolt-range diodes. A substantial decrease of USCR is achieved in "quasi-symmetrical" p+–p–n+ diodes with charge accumulation in the emitter p-layer. However, for devices manufactured by diffusion technology, there is an impurity concentration gradient Na – Nd in the p- and n-layers. For this reason, the value of the parameter js = qvs(Na – Nd) changes along the n-layer, and it becomes impossible to maintain the optimal condition j– = js for the fast restoration stage. As a result, such structures, while providing a larger permissible value of τ+ + τ–, have a somewhat smaller maximum rate of growth of the voltage as compared with the p+–n–n+ structures. The principle of fast voltage restoration discussed above can also be applied to low-voltage structures of 100 V or less.
In this case, the duration of the forward current pulses and the delay time should be much less than in high-voltage structures, and the device effectiveness, defined as the ratio of the complete time τ+ + τ– to the duration of the restoration, K = (τ+ + τ–)/τf, worsens (K determines the compression ratio in time-compression circuits). Actually, K ≈ vsL²/(Dw); that is, τ+ + τ– ≈ L²/D and, to get the required sequence of the extraction processes (at the delay stage the extraction of the nonequilibrium carriers, and at the restoration stage the extraction of the equilibrium carriers), the condition L/w < 5 is necessary. Consequently, the effectiveness decreases as L squared. It is possible to decrease the effect of the pumping duration by using a DSRD with a small minority carrier lifetime. The analogy between stationary processes with small carrier lifetimes and nonstationary fast processes has already been given above. This analogy is expressed in the equivalence of the switching processes in diodes with very small minority carrier lifetimes (τp ≈ 0.5 µs) in the case of long forward current pulses
(τ+ » τp), and in diodes with a large lifetime (τp > 10 µs) with a short forward current pulse, τ+ = 0.5 µs, as shown in Figure 9.17. However, the added complication of manufacturing such devices is not justified by the certain system advantages of a small-lifetime DSRD (simplification of the decoupling of the main current circuit and the auxiliary pumping circuit) in the majority of practically interesting applications.
Synchronization
Devices in which charge losses are absent, that is, τp » τ+ + τ–, possess a number of advantages. First of all, the stability of the switching moment for them is determined only by the stability of the circuits that shape the forward and backward current pulses. The device's own parameters do not affect the switching stability; experimental data show a timing instability of less than 50 ps, determined only by the stability of the synchronization circuits and the power sources. For several DSRDs connected in series to increase the switched voltage, the common pumping and extraction current automatically guarantees the synchronism of the instant of voltage restoration on all of the devices. However, when shaping pulses whose front τf has a nanosecond duration, the synchronization should have an accuracy in the range of a nanosecond or fractions of a nanosecond. For a complete current-flow cycle of τ+ + τ– ≈ 1 µs, the relative permissible charge loss should be τf/(τ+ + τ–) ≤ 10⁻³, which is very small. Even in the case of purely recombination losses and a lifetime of 100 µs, the τp value should be controlled to within no more than 10%. When τp is decreased, the required accuracy grows correspondingly. Thus, when DSRDs are assembled in "stacks," the synchronism requirement makes it necessary to decrease the duration of the pumping and the τ+ value as compared with those permissible for a single device. As was shown, during the fast growth stage of the voltage, the rate of voltage rise is determined simply by the doping level of the n-layer and the density of the flowing current. Therefore, the difference in the voltage on the diodes is determined by the difference in their operating areas and doping levels. Both of these parameters also determine the capacitance-voltage characteristics of the device. As practical experience has shown, preliminary selection of diodes for a spread of the capacitance-voltage characteristics within 20 percent makes it possible to connect dozens or more diodes in series without a noticeable widening or elongation of the front. It follows from the operating principle of the DSRD that internal positive current feedback (current regeneration) is absent, and the entire switching cycle is equal to the transit time of the carriers through the n-layer of the structure. This determines the main advantage of fast restoration switching using DSRDs. In addition, due to the lack of regeneration, there is no localization effect, or current filamentation, connected with the development of instability of the uniform current distribution, which is typical for switching devices such as thyristors, avalanche transistors, etc.
This makes it practical, by simply increasing the device's working area, to increase the switched current up to thresholds determined only by the circuit parameters or by skinning.
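The timing budget of a stacked DSRD switch can be checked with the compression ratio and the charge-loss estimate discussed above; the sketch below does this for assumed timing values, treating the recombination loss simply as (τ+ + τ–)/τp.

```python
tau_plus = 400e-9   # pumping time, s (assumed)
tau_minus = 100e-9  # extraction time, s (assumed)
tau_f = 1e-9        # restoration (front) time, s (assumed)
tau_p = 100e-6      # minority carrier lifetime, s (assumed)

K = (tau_plus + tau_minus) / tau_f          # time compression ratio
loss = (tau_plus + tau_minus) / tau_p       # rough relative recombination charge loss per cycle
print(f"compression ratio K  ~ {K:.0f}")
print(f"relative charge loss ~ {loss:.1e} (must be small and well matched across a stack)")
```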
9.3.3 TRANSISTOR-LIKE SWITCHES WITH TWO P-N JUNCTIONS
Fast step recovery processes in a p-n junction may be incorporated in more complicated structures with two (transistors, DSRT) or even three (thyristors) p-n junctions. Let us consider a three-layer, transistor-like n++–p+–n–n+ structure, as shown in Figure 9.18. In this structure, the p+–n–n+ part is similar to that of the DSRD considered earlier. The device is connected as shown in Figure 9.18, with the base grounded. Initially, there is no voltage from an external source applied to the collector. A high-current gating pulse with density jg is applied in the forward direction to the base-emitter n++–p+ junction during the time τ+.
FIGURE 9.18 The drift step recovery transistor (DSRT).
After the diffusion time τd ≈ Wp²/(2Dn), electrons from the emitter reach the collector p+–n junction, and this junction is forced into a saturated state. After this moment, the process of hole injection from the p+-layer into the n-layer begins. The transfer of holes through the n-layer is diffusive in nature due to the absence of an applied voltage, and a carrier-enriched layer of width L appears in the n-layer. The well known Fletcher condition for a high injection level links the carrier densities on both sides of the p+–n junction so that
pp np ≈ nn pn,   np ≈ pn ≈ nn » Nd    (9.11)
If the transistor amplification gain is large (β » 1), the carrier distribution in the p+ base is linear. Assuming that the carrier distribution in the n-layer is nearly linear as well, we can write
npWp/2 + j+Wp²/(2qDn) + (nn − Nd)L/2 = j+τ+/q    (9.12)
The first and second terms in Equation (9.12) give the total number of electrons in the p+ base, and the third gives the number in the n-layer. By use of Equations (9.11) and (9.12), this can be transformed into
(pnL/2)·[2pnwp/(NdL) + 1] = (j+/q)(τ+ − τd)    (9.13)
Equation (9.13) shows that most carriers are accumulated in the diffusion region of the n-layer, of thickness L, when the condition τ+ » τd is fulfilled. When the gate pulse is turned off, the collector voltage U0 in Figure 9.18 is applied, and the collector current due to the accumulated charge starts. The region of the n-layer enriched by free carriers (0 < x < L in Figure 9.15) is in a highly conducting state, and its voltage drop is small. The remaining part of the n-layer is in an equilibrium state, and its voltage drop is Ueq = jc(Wn − L)/(qµnNd), where jc is the collector current density, which is limited by the external circuit, jc = U0/(SRi), S being the collector area and Ri the external resistor. The plasma dispersal process in the transistor is similar to the process in the DSRD: a reverse concentration gradient and space charge region (SCR) restoration. The main difference is that no drift nonlinear wave appears, due to the absence of holes outside the enriched diffusion region. Electrons coming from the p+ base into the n-layer help to suppress the SCR restoration and to improve the switching properties. To get this improvement, it is necessary to remove all the electrons from the
p+ base before the enriched diffusion region disappears completely. If not, the rate of the collector current break is slow, being determined by the slow process of electron movement into the n-layer across the p+ base. The second main condition for fast current break and voltage restoration in transistors, in addition to τ+ » τd, is the same as for the DSRD: the enriched diffusion region of the n-layer, L, must be shorter than the SCR width WSCR (L « WSCR). The limit of the voltage rise rate is determined as for the DSRD. In practice, the collector current concentrates at the border between the base and emitter electrodes in a DSRT. The maximum current density in the DSRT is limited by the same condition as for the DSRD, jc < qvsNd, which means that the total collector current must be smaller than is possible for a DSRD of the same area. The lower average current leads to a somewhat lower (i.e., 1.5 to 3 times) voltage rise rate. It should be noted that the described mode of operation is possible when the collector voltage source Uin supplies a constant voltage; then, the collector current density limited by the resistor, jcm, must be smaller than the gating current density (jcm « jg).
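As with the DSRD, the DSRT relations reduce to simple estimates; the sketch below evaluates the base transit time, the current-density limit q·vs·Nd, and the equilibrium-region voltage drop for an assumed geometry and doping.

```python
q = 1.602e-19
v_s = 1.0e7     # cm/s
D_n = 25.0      # electron diffusivity in the p+ base, cm^2/s (assumed)
mu_n = 1350.0   # electron mobility, cm^2/(V*s) (assumed)

# Assumed DSRT geometry and doping (illustrative)
W_p = 10e-4     # p+ base width, cm
W_n = 70e-4     # n-layer width, cm
L = 15e-4       # width of the carrier-enriched layer, cm
N_d = 1e14      # n-layer doping, cm^-3

tau_d = W_p**2 / (2 * D_n)                   # base diffusion (transit) time
j_lim = q * v_s * N_d                        # maximum collector current density, A/cm^2
j_c = 0.5 * j_lim                            # assumed operating density
U_eq = j_c * (W_n - L) / (q * mu_n * N_d)    # drop on the unmodulated part of the n-layer
print(f"tau_d ~ {tau_d * 1e9:.0f} ns")
print(f"j_c limit ~ {j_lim:.0f} A/cm^2, U_eq ~ {U_eq:.0f} V at j_c = {j_c:.0f} A/cm^2")
```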
9.4 PROPERTIES AND LIMITATIONS OF PICOSECOND CLOSING SWITCHES BASED ON REVERSIBLE BREAKDOWN (DELAYED IONIZATION) IN P-N JUNCTIONS
9.4.1 GENERAL CONSIDERATIONS
Impact ionization is an extremely powerful mechanism for generating electron-hole pairs and can provide enormous growth rates of the carrier concentration and current. This is possible, for example, by applying a very short, high-power voltage pulse to an n+–n–n+ structure. When the n-layer has a thickness of approximately 10⁻² cm, the amplitude that guarantees the threshold ionization coefficient α ≈ 10⁵ cm⁻¹ is equal to approximately 10⁴ V. The characteristic time for the multiplication of the carriers at such a coefficient is approximately 10⁻¹² s. The duration of the pulse should not strongly exceed this value; otherwise, due to the increase of the carrier concentration, the conductivity current begins to increase and consequently requires a larger generator power. The shaping of such a pulse presents a still-unresolved problem and makes this direct approach unpromising. Obviously, for practical use, it is necessary that the independent, fast development of the ionization process be guaranteed with a relatively slow and/or weak triggering source. Such a process of fast reversible breakdown in high-voltage p-n junctions was discovered in Russia by the same group at the Physical-Technical Institute (PTI) that also discovered fast restoration in high-voltage p-n junctions.1 The physical picture of the process was suggested by the same group and was considered briefly in part (a) of Figure 9.4. This effect has been exploited for subnanosecond switching in silicon avalanche shapers (SASs). Since then, SASs have been used for pulse generation by many groups in Russia and America. However, there are only a few papers devoted to the physical processes of SAS switching. In the next sections, we consider the physics of the process in more detail. The consideration is based, for the most part, on the results of investigations conducted by the PTI-PSG group. As was shown in Figure 9.4, the switching properties of the SAS are based on fast ionization shock wave generation and propagation. Fast ionization shock waves have been known in gases for more than half a century. The possibility of a wave breakdown in semiconductors and the concept of an ionization wave were developed for the trapped plasma avalanche transit time (TRAPATT) mode of operation of silicon avalanche transit-time diodes, which are intended for the generation of microwave signals. Let us briefly examine this mechanism. A diode with a p+–n–n+ structure is subjected to a constant voltage of blocking polarity. The ratio of the n-layer thickness w to the concentration of the doping impurities in the n-layer, Nd, is chosen so that the n-layer is completely covered by the space charge region in the case
when the maximum value of the field intensity near the p–n junction is lower than the threshold for impact ionization E_a. The leakage current with density j_0, determined by the thermal generation processes, flows through the diode. Let the current density through the diode be increased in a jump, to a value j, during a time δt that is so small (δt « t_s = w/v_s) that the carriers fail to shift noticeably during this period. Since the conductivity current is small (j_0 « j), the field intensity in the n-region begins to increase according to the law E = jt/ε, and it will rise above the impact ionization threshold in a layer δ. At first, while the j_0 value is still small, the increase in conductivity due to ionization will not prevent the growth of the field. Then, after a time interval Δt ≈ τ_i ln(j/j_0), where τ_i = 1/(αv_s), the carrier concentration increases so much that the density of the conductivity current exceeds the value j, and the field intensity in the δ-layer begins to decrease to a value that is less than E_a, the ionization threshold. It is obvious that the field maximum will shift, as shown in Figure 9.19. In the area of the new maximum, the field intensity again decreases due to ionization, the maximum shifts, etc. This means that an ionization wave arises that runs with a speed v_b > v_s. On the basis of what has been stated, we get the following evaluation for the speed of the wave.
FIGURE 9.19 Shock ionization wave. The instability of the flat front of an ionization wave. The dotted lines indicate the perturbed state.
v_f ≈ δ/Δt = δαv_s/ln(j/j_0)

On the other hand, taking into consideration that, at the wavefront, the current is the displacement current, it is easy to obtain from Figure 9.19 the velocity of the shift of the point at which the field intensity has some fixed value, for example E_a:

dE_a/dt = (∂E/∂x)(∂x/∂t) + ∂E/∂t ≡ 0,  where  ∂x/∂t = v_b,  ∂E/∂x = –qN_d/ε,  ∂E/∂t = j/ε

From here it follows that

v_b = j/(qN_d) = v_s(j/j_s)

Thus, in the case of a current jump, after a short period Δt of voltage increase the ionization wave movement begins, and the voltage on the diode drops during the time τ_f = w/v_b to almost zero. It should be noted that the ionization wave of a TRAPATT system is analogous to "the wave of the potential gradient" in a gas, and the expressions for the speed of these waves are identical. The difference lies in the physical principle that creates the potential gradient: in the case of a TRAPATT system, this is the space charge of the immobile ionized donors and, in the second case, the geometrical distortion of the field or the fields of a slightly mobile "ion island." The examined mechanism for the generation of plasma has found wide use in avalanche flight diodes, which are used for generating microwave oscillations. However, for the purpose of fast switching of high power levels, such a mechanism has little effectiveness. The reason is that, in the case of fast switching of power (that is, during the switching of a device into the conducting state during a short time τ_f), it is desirable that the trigger pulse from the external additional source have either a small power (P_t « P_l) or a rise time much longer than τ_f. In other words, the switch should, as a minimum, guarantee a gain either with respect to power or with respect to fast response. From what has been stated previously, it follows that, in TRAPATT diodes, the gain with respect to quick response is

k = Δt/τ_f ≈ δ/w ≈ 1

The gain with respect to power is equal to the ratio of the constant bias voltage to the amplitude of the overvoltage pulse, which is also approximately 1. The lack of gain with respect to fast response and power does not prevent the use of the TRAPATT diode in microwave generators, for which the fast voltage drop on the diode formed during the wave run "leaves" for the external circuit and almost completely "returns," with a change in phase, to the diode to start up the wave on the next cycle.
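The following sketch puts representative numbers into the TRAPATT-wave estimates above. All parameter values are assumed for illustration only; none of them are quoted in the text.

```python
# Illustrative evaluation of the TRAPATT-type ionization wave:
# v_b = j/(q*N_d) = v_s*(j/j_s), dt ~ tau_i*ln(j/j0), tau_f = w/v_b, k = dt/tau_f.
import math

q, v_s = 1.602e-19, 1.0e7          # C, cm/s
N_d = 1.0e14                        # assumed donor concentration, cm^-3
w = 2.5e-2                          # assumed n-layer thickness, cm
j0, j = 1e-9, 1e3                   # assumed leakage and pumped current densities, A/cm^2
alpha = 1e4                         # assumed ionization coefficient at the front, cm^-1

j_s = q * N_d * v_s                 # saturation current density, A/cm^2
v_b = v_s * j / j_s                 # ionization wave velocity, cm/s
tau_i = 1.0 / (alpha * v_s)         # characteristic multiplication time, s
dt = tau_i * math.log(j / j0)       # delay before the field collapses locally
tau_f = w / v_b                     # time for the wave to cross the n-layer
print(f"v_b/v_s = {v_b / v_s:.0f}, dt = {dt*1e12:.0f} ps, tau_f = {tau_f*1e12:.0f} ps")
print(f"gain k = dt/tau_f = {dt / tau_f:.2f} (of order unity)")
```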
But the lack of gain makes direct use of the process described above unpromising for purposes of fast commutation. The main difference between TRAPATT waves and waves in an SAS is in the initial conditions. In the TRAPATT case, there is a constant flow of primary carriers during the time of the field rise. In the SAS case, the field rises when there are no primary carriers to start ionization in the high-field region, although the field intensity is high enough for very quick breakdown. The ionization is delayed until carriers arrive from the low-field region. This allows us to call the ionization in the SAS "delayed ionization" and the high-field region "overvoltaged." The different modes of ionization shock waves mentioned above have some common features. We will begin by considering the features of the process that are common and then consider the processes that are specific to SAS operation.

Ionization Wavefront Stability
We should first discuss the problem of the stability of a flat ionization wavefront. As is obvious from the mechanism of the wave movement, its speed is determined by the ionization rate at the front: the higher the ionization rate, the greater the speed. Let us examine the movement of a flat front, the line of the maximum of the field intensity, in the one-dimensional pattern shown in Figure 9.19. Initially, the field is uniformly spread across the area of the diode. Let a wavefront disturbance arise in the form of a protuberance. It is obvious that such a distortion of the front line causes a thickening of the field lines near the protuberance; that is, it causes an increase in the field intensity. This effect of reinforcement of the field at the point of an electrode is well known in electrical engineering. A local increase in the field intensity at the apex of the protuberance leads to an increase in the ionization rate and in the velocity of motion of the apex in comparison with the remaining portion of the front; that is, the "height" and the curvature of the protuberance increase. This leads to a further strengthening of the field at the apex, to a further increase in the velocity of motion of the apex, etc. Thus, a wavefront that is flat at the beginning should fall apart into "threads" moving with great speeds, which then also determine the increase in the current. The examined mechanism for the formation of the instability is essentially equivalent to the mechanism for the development of instability in a gas breakdown.

Plasma Concentration
Let us make some evaluations for ionization waves that are not connected with a specific method of wave excitation. When a unit volume of a semiconductor is filled with electron-hole plasma due to the impact ionization process, the energy going into ionization is drawn from the initial energy W = εE^2/2 of the electrical field in this volume. As is well known, the change in the density of the field energy when the field intensity changes by dE equals

dW = εE dE
(9.14)
Taking into consideration the dissipation of energy due to all possible collisions, an average energy W_u = qE/α is expended for one act of ionization (where α is the ionization coefficient). Therefore, for the change in the number of pairs created by ionization with an expenditure of energy dW, we get the following based on Equation (9.14):

dn = dW/W_u = εα dE/q
(9.15)
When integrating Equation (9.15) within the limits from the initial value of the field intensity at the wavefront, E_m, to the final value at the "tail" of the wave, which is accepted to be equal to zero, we find that, for α = α_∞ exp(–b/E),

n_m = εα(E_m)E_m^2/(qb)
(9.16)
where n_m is the plasma concentration at the wave tail after filling the volume with plasma. Let us emphasize that Equation (9.16) has a universal character that does not depend on the velocity of motion of the wavefront. The drop of the field intensity behind a passing ionization front of width Δ is connected with the fact that the electron-hole pairs created by ionization are separated by the field, and a space charge arises. Taking into consideration that the fraction of separated carriers in the total number at the front is approximately v_s/v_f, we get the following from the Poisson equation:

E_m ≈ qn_mΔv_s/(εv_f)
(9.17)
When substituting Equation (9.16) here, we find that

Δ = [1/α(E_m)](b/E_m)(v_f/v_s)
(9.18)
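A numerical illustration of Equations (9.16) through (9.18) is given below. The field E_m and the velocity ratio v_f/v_s are assumed values; the silicon coefficients α_∞ and b are those quoted later in this section.

```python
# Plasma concentration behind the ionization front, Eq. (9.16), and the
# front width, Eq. (9.18), evaluated for assumed, representative parameters.
import math

q = 1.602e-19
eps = 11.7 * 8.854e-14          # permittivity of silicon, F/cm
alpha_inf, b = 0.65e6, 1.2e6    # cm^-1, V/cm (silicon values quoted later)
E_m = 4.0e5                     # field at the wavefront, V/cm (assumed)
vf_over_vs = 5.0                # assumed front-to-drift velocity ratio

alpha_m = alpha_inf * math.exp(-b / E_m)            # ionization coefficient at E_m
n_m = eps * alpha_m * E_m**2 / (q * b)              # Eq. (9.16), cm^-3
delta = (1.0 / alpha_m) * (b / E_m) * vf_over_vs    # Eq. (9.18), cm
print(f"alpha(E_m) = {alpha_m:.2e} cm^-1")
print(f"n_m = {n_m:.2e} cm^-3, delta = {delta:.2e} cm")
```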
which is the evaluation for the dimensions of the ionization wavefront. Breakdown Delay when There Are No Primary Carriers To generate a fast ionization wave in the case of delayed ionization, we must first determine under what conditions it is possible to create an overvoltaged region in the semiconductor and whether it is at all possible for a region wa to exist in which the ionization integral exceeds 1.
∫_{w_a} α_i dx

where α_i = α_n exp[–∫_{w_a} (α_n – α_p) dx]
is the effective ionization coefficient, and α_n and α_p are the ionization coefficients for the electrons and the holes. Let us note that ∫α_i dx = 1 is the condition for stationary breakdown. Obviously, the realization of the overvoltaged condition is absolutely necessary for any ionization process leading to the filling of the semiconductor volume with plasma during a time that is less than the flight time of the carriers through it. Let a leakage current having density j_0 flow through a p+–n junction. Then, on average, one carrier crosses the space charge region (SCR) boundary during the time T = q/(j_0S). The flight time of a carrier through the SCR is τ_s = w_SCR/v_s. From here, for T > τ_s, we find that the average time during which the volume of the SCR will be completely free of charge carriers is
T_fr = T – τ_s = [q/(j_0S)][1 – j_0V/(qv_s)]
(9.19)
where V = Sw_SCR is the volume of the SCR. It is easy to determine that, in structures with a leakage current density through the p+–n junction of j_0 < 10^-9 A/cm^2, the value T_fr lies in the nanosecond range when S < 0.1 cm^2 and w_SCR = 0.01 cm. Thus, when supplying a voltage pulse with a front duration τ_f shorter than T_fr, an overvoltaged region that is free of carriers can be created. The current of a single avalanche (Figure 9.20) can be evaluated as
(9.20)
where t is the time counted from the moment the carrier enters the SCR, and w is the thickness of the n-layer. The coefficient of radiative recombination is small in silicon, and it should be expected that, at least during the initial stage of the development of the avalanche, while its volume is small, its radiation is weak. Therefore, it is possible to disregard photogeneration in front of the head portion of the avalanche.
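The following short check evaluates Equation (9.19) with the numbers quoted above (j_0 = 10^-9 A/cm^2, S = 0.1 cm^2, w_SCR = 0.01 cm) and confirms that T_fr indeed falls in the nanosecond range.

```python
# Check of Equation (9.19) with the numbers given in the text.
q, v_s = 1.602e-19, 1.0e7
j0, S, w_scr = 1.0e-9, 0.1, 0.01          # A/cm^2, cm^2, cm

T = q / (j0 * S)          # mean time between carriers entering the SCR, s
tau_s = w_scr / v_s       # carrier flight time through the SCR, s
T_fr = T - tau_s          # mean time the SCR is completely free of carriers
print(f"T = {T*1e9:.2f} ns, tau_s = {tau_s*1e9:.2f} ns, T_fr = {T_fr*1e9:.2f} ns")
```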
FIGURE 9.20 Evaluation of the current of a single avalanche.
As is well known, the frequency of tunnel ionization for an electron can be presented in the form w_n ≈ w_τ exp(–E_τ/E), where for silicon w_τ ≈ 10^13 s^-1 and E_τ ≈ 4 × 10^7 V/cm. For an average value E ≈ 4 × 10^5 V/cm, corresponding to a two-fold overvoltage on the diode, we get exp(–E_τ/E) ≈ 10^-44, which implies that it is possible to disregard tunnel ionization. The reinforcement of the field in the head portion of the avalanche can increase the role of the tunnel ionization, but the exponent still remains large. However, the extremely strong dependence of the ionization rate on the field intensity does not make it possible to make a more accurate evaluation of the effect of tunnel ionization. We also need to discuss the problem of the avalanche front velocity when there is no photo- or tunnel ionization and no counter-flow of ionizing carriers. An electron that appears at the wavefront is accelerated in the strong electrical field, like a free one, up to the moment of collision with an optical phonon. After the collision, it completely loses its momentum. As a result of the collisions, an average speed equal to the saturated speed v_s is established. Impact ionization is led by the small number of carriers that avoid collisions and pass through the entire path λ_i = 1/α (like free ones) up to the accumulation of the ionization energy E_i. In agreement with this, the average speed of these fast electrons along the entire path is

v_cp = (1/2)v_m = (1/2)√(2E_i/m)
(9.21)
and can be higher than the saturated speed. Using silicon as an example, and taking the law of conservation of momentum into consideration, E_i = 1.8 eV, which, in agreement with Equation (9.21), gives v_cp = 4.7 × 10^7 cm/s; that is, a four-fold surpassing of v_s. However, the fraction of such fast electrons in their total number equals n_0/n ≈ exp(–b/E), where b ≈ E_i/(qλ) ≈ 1.2 × 10^6 V/cm and λ is the length of the electron's free run before collision with an optical phonon. Let us recall that α_n = α_∞ exp(–b/E), with α_∞ = 0.65 × 10^6 cm^-1. Thus, when E « b, the fraction of fast electrons is small (5 × 10^-2 when E ≈ 4 × 10^5 V/cm), and they only somewhat widen the front in comparison with Equation (9.18). The main mass of electrons (>95%) moves with the saturated speed. Taking what has been stated into consideration, it is possible to assign v_a = v_s. Then, the evaluation according to Equation (9.20) for real values w ≈ 3 × 10^-2 cm and U_0 = 10^3 V yields I_0 < 10^-2 A when t < 10^-9 s. Such a small current, with a typical external circuit resistance of 50 Ω, cannot remove the voltage from the diode and cannot prevent the formation of an overvoltaged region. Furthermore, the probability that several carriers will occur simultaneously decreases exponentially with an increase in their number. The maximum area of the "tail" from one such electron is less than w^2 = 10^-4 cm^2; therefore, even when the area of the entire structure is 10^-2 cm^2, 99% of the entire area of the "overvoltaged" region will be free of carriers. It is thus possible to expect a delay in ionization for a time that is close to the flight time in a structure with a small leakage current. The possibility of such a delay was verified experimentally on a PIN diode created by the diffusion of aluminum and boron into n^–-silicon with a specific resistance of ρ = 270 Ω·cm. The thickness of the i-layer was w_i = 120 µm, and the depth of the diffusion of aluminum equaled 100 µm with a surface concentration of 10^17 cm^-3. The n^+ type layer was formed by the diffusion of phosphorus through the polished surface, which ruled out the possible injection of carriers from a contact. The diode had a stationary breakdown voltage of 2800 V. Measurements of the capacitance-voltage characteristics showed that the complete overlapping of the i-layer by the space charge region occurred at a voltage of 200 V, equal to that calculated for the given w_i and ρ values. A short (3 ns) high-voltage pulse was applied to the diode, in addition to a constant bias U_0 = 1000 V. The measured current trace showed that, at a complete voltage on the diode of approximately 5000 V, only a displacement current went through the diode. The curve of this current almost repeated the curve of the rate of change of the voltage; that is, the conductivity current was less than 1 A, or negligibly small. Such a "currentless" state exists for 3 ns. When
there is a further increase in the voltage by 10%, an irreversible breakdown occurs. Let us evaluate the degree of the overvoltage. Assuming an exponential distribution of the concentration of aluminum in the p-layer, N_Al = N_s exp(–x/a), it is easy to determine, by integrating the Poisson equation twice, that the voltage drop on the section of the SCR that is positioned in the p-layer is

U_p = a(E_m – qw_pN_d/ε)
(9.22)
The characteristic length of the doping profile is determined from the condition a = w_p/ln(N_s/N_d) = 1.16 × 10^-3 cm, where N_d is the concentration of the impurities in the i-layer. It is easy to show that U_p = 250 V and that the voltage drop on the i-layer is U_i = 2550 V at the threshold of the stationary breakdown, which corresponds well with the measured stationary breakdown voltage. The maximum field intensity in this case equals 2.3 × 10^5 V/cm at the p–i junction, and it decreases to 1.95 × 10^5 V/cm at the i–n junction. The corresponding times for electron ionization, τ_n = (α_nv_s)^-1, equal approximately 40 and 100 ps. When there is a total voltage (pulse plus constant bias) of 5 kV, the following will hold: U_p = 400 V, E_m = 4 × 10^5 V/cm and τ_n = 3 ps at the p–i junction, and E_m = 3.6 × 10^5 V/cm and τ_n = 4 ps at the i–n junction; that is, in the overvoltaged state, the ionization time decreases by more than an order of magnitude as compared with the corresponding stationary breakdown. If it is assumed that the distribution of the carriers that create the initial leakage current with density j_0 is homogeneous with respect to volume, then, with such a small ionization time and a flight time τ_s ≈ 1.5 ns (taking the expansion of the SCR into consideration), the leakage current should have increased by exp(τ_s/τ_n) ≈ exp(350) times! So, for any small density of the leakage current j_0, after the time τ_s the complete current should have surpassed the 1 A level estimated above as the maximum possible conductivity current. It should be noted that, with such an increase, the avalanche created by a single electron should also make the transition to the streamer stage long before approaching the anode. The limitation of the current in this case is evidently connected with the "damping" effect of the thick p-layer with a relatively low acceptor concentration. When there is a large current density in the streamer, the voltage drop on the p-layer increases, and the field in the streamer is weakened. Since the thicknesses of the p- and i-layers are almost identical, such a redistribution can lead to a two-fold decrease of the average field intensity, down to the stationary breakdown level. Accepting a current density in a streamer of 10^6 A/cm^2 and a cross section of 10^-4 cm^2, we find that the complete current of a single streamer equals approximately 10^-2 A, and this is too small for the redistribution of the voltage between the diode and the resistance of the external circuit (100 Ω). On 50 ns pulses, it was not possible to create a significant overvoltage because of the irreversible breakdown of the diodes. Thus, in silicon semiconductor diodes, for a short time (approximately the flight time), it is possible to create overvoltaged regions in which the characteristic ionization time is more than an order of magnitude lower than the ionization time corresponding to the threshold of the stationary breakdown.
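The ionization times quoted above can be checked, at the order-of-magnitude level, with the silicon coefficients α_∞ = 0.65 × 10^6 cm^-1 and b = 1.2 × 10^6 V/cm given earlier in this section; the small sketch below does this for the four fields mentioned in the text.

```python
# Rough check of tau_n = 1/(alpha_n(E)*v_s) with alpha_n(E) = alpha_inf*exp(-b/E).
import math

alpha_inf, b, v_s = 0.65e6, 1.2e6, 1.0e7
for E in (1.95e5, 2.3e5, 3.6e5, 4.0e5):                 # V/cm, fields quoted in the text
    tau_n = 1.0 / (alpha_inf * math.exp(-b / E) * v_s)  # ionization time, s
    print(f"E = {E:.2e} V/cm  ->  tau_n = {tau_n*1e12:5.1f} ps")
```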
9.4.2 EFFECT OF SUPER-FAST SWITCHING IN DIODE SILICON STRUCTURES
Main Features
As was shown above, in PIN structures it is possible, at least for a time equal to the flight time, to create a region of a strong electrical field in which each carrier can give rise to many (dozens or more) secondary carriers during the flight time. In this case, the total voltage on the diode surpasses the stationary breakdown voltage by approximately two times, and the conductivity current is extremely small. When there is an insignificant increase in the duration of the pulse, the
conductivity current sharply increases, which is accompanied by the complete loss by the diode of its blocking capability, and an irreversible breakdown occurs. However, similar overvoltage experiments on p+–n–n+ structures have shown the possibility of a reversible breakdown: after a certain time after the breakdown, the blocking capability of the structure is completely restored. In Figure 9.3, it was shown that the voltage on the structure, after reaching the maximum value U_m = 3 kV after a delay time τ_d = 2 ns, drops extremely quickly (in τ_f < 0.2 ns) to the residual value U_min = 200 V and then slowly increases to the stationary value U* = 2.1 kV in approximately 10 ns. Cooling the structure from room temperature to the temperature of liquid nitrogen somewhat decreased the delay and the switching voltage. Some of the properties and features of super-fast switches include the following:

1. Extremely high stability of the switching moment relative to the moment of the beginning of the voltage increase. (Jitter within the limits of 30 ps is the value determined by the instability of the oscilloscope triggering.)
2. A large ratio of the rate of the voltage drop dU_b/dt during switch-on to the rate dU_a/dt of the voltage applied to the diode, called the gain,
   k = (dU_b/dt)/(dU_a/dt) ≈ 10
3. A current density after switching that significantly exceeds the density of the threshold current that can be guaranteed by equilibrium carriers in the n-base, so that the entire n-layer is strongly enriched by carriers in the switching process.
4. A switching time τ_f < 0.2 ns, which is an order of magnitude less than the drift time of the carriers through the modulated volume at the saturated speed.
5. A switching voltage, U_m = 3 kV, that is twice as large as the stationary breakdown voltage U_n = 1.5 kV.

We have already noted that the wave ionization processes during a breakdown are intimately connected with the mechanism that guarantees the existence of primary carriers that initiate ionization. A number of factors bear witness to the fact that the processes of thermal generation of carriers, both in the SCR and in the neutral regions, which cause the leakage current of p–n junctions below breakdown, do not play a part in the case that interests us. The following factors illustrate this:

1. A very large gain of k ≈ 10.
2. A weak dependence of the switching process on temperature, while the leakage currents caused by thermal generation decrease by many orders of magnitude (approximately 10^30) when the temperature changes from room temperature to that of liquid nitrogen (77 K). The delay in ionization not only does not increase, it decreases somewhat.
3. The extremely high stability of the delay, which is retained even at the temperature of liquid nitrogen.

As may easily be shown, taking into account the well known Sah–Noyce–Shockley theory, at room temperature the density of the leakage current caused by thermal generation equals 10^-9 A/cm^2 for the studied structures. It should be noted that, in the experiments, it was not possible to get rid of the surface leakage currents because of the small area of the structure, and it was only possible to evaluate the upper boundary of the leakage current. At the temperature of liquid nitrogen, the minimum recorded currents were less than 10^-11 A at a bias of 1 kV. Even at these currents, the average time between the moments of passage of carriers through the plane of the p–n junction should equal δt ≈ q/I ≥ 10 ns.
In agreement with this, if thermal generation supplied the initiating carriers, the instability of the switching delay should be close to the value of δt, which contradicts the observed 30 ps jitter.
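A one-line check of this argument, using the experimental upper bound on the leakage current quoted above:

```python
# Mean interval between single carriers crossing the junction at I < 1e-11 A.
q = 1.602e-19
I = 1.0e-11                                               # A, measured upper bound at 77 K
print(f"mean interval between carriers: {q / I * 1e9:.0f} ns")   # about 16 ns
```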
Since the switching process does not depend on frequency over a very wide range of pulse repetition frequencies (from tens of kilohertz down to single pulses), the effects connected with the accumulation of residual carriers in the strongly doped layers from cycle to cycle (the storage effect in the TRAPATT model) can also be ruled out. Consequently, it is necessary to assume a purely field mechanism, or mechanisms, for the generation of the primary carriers that initiate impact ionization. Obviously, for a complete switching time τ_f = 0.2 ns, the average distance between the primary carriers in the overvoltaged region should be no more than l ≈ τ_fv_s ≈ 2 × 10^-3 cm. A concentration n_0 ≈ 10^8 cm^-3 and a leakage current density j_0 ≈ qn_0v_s = 10^-4 A/cm^2 correspond to this l value. Let us recall that the non-local effects mentioned in Section 9.4.1 will have an effect when the field intensity is approximately 10^6 V/cm. The following experiment yields an evaluation of the upper boundary of the leakage current at which the formation of an overvoltaged region is still possible. In the tested diodes, from the side of the n+-layer, we etched a window in the contact with a diameter of 1 mm and a depth of 30 µm into the silicon. This area was illuminated by a focused beam from a lamp. The constant bias voltage equaled 500 V. The currents for various values of the intensity of the illumination were measured. Then, a fast-rising voltage was applied to the diode for each illumination level measured before. It was shown that a photocurrent of 10^-8 A noticeably decreased the delay in the current growth and the switching voltage, and it increased the switching time to 0.5 ns. When the current was 10^-5 A, the switching effect disappeared completely. We have also changed the p–n junction leakage current by heating. The results are the same as when the diode is illuminated, so the origin of the initial, or leakage, current plays no role; it is the value of the leakage current that is important. In agreement with what was stated above, the observed switching process is conveniently divided into the following stages:

1. The creation of an overvoltaged region in the absence of initiating carriers
2. The generation of initiating carriers
3. The excitation of a fast ionization wave
4. The extraction of the plasma accumulated after the wave passes
Effect of the Circuit Parameters
Figure 9.21 shows the results of experiments on the effect of U′ and the constant bias voltage U_0 on the process of switching p+–n–n+ structures with N_d = 10^14 cm^-3 and w = 250 µm. The U′ value was decreased by using lowpass LC filters with the corresponding cutoff frequency. It is evident that the change in U′ has a very strong effect on the character of the switching. When U′ < 0.5 × 10^12 V/s, the effect of the switching is practically absent. When there are larger U′ values, a "precursor," determined by the displacement current, grows in front of the section of the rapid increase in the current. Experiments show that, when there is a small constant bias and U′ = 10^12 V/s, there is no switching, but when the bias is increased above 400 V, the switch-on appears. It should be noted that, when there is a bias voltage close to the stationary breakdown voltage, the effectiveness of the switching decreases. This is connected with an increase in the leakage current and is similar to the behavior of a structure when heated or illuminated. As is evident from Figure 9.22, the dependence of the maximum switching voltage on the rate U′ can be bell shaped in character. When there are small rates, the dependence increases with an increase in U′, but it drops when there are large rates. Moreover, when there are small U′ values, the increase in the bias voltage U_0 leads to a decrease in U_m and, when there are large U′ values, it leads to an increase in U_m. The dependence of the residual voltage after switching on U′ has a minimum, as shown in Figure 9.23. When there are small rates (U′ < 10^12 V/s), an increase in U′ leads to a
sharp decrease in U_min and, when there are large rates, U′ > 10^12 V/s, it leads to an increase again. Assuming that the distribution of the concentration over the volume after the wave run is homogeneous, we can determine the average concentration from the charge P_b extracted according to the current traces. For the case of optimum switching, U′ = 1.2 × 10^12 V/s, P_b is at its maximum and n_m ≈ P_b/(qSw) ≈ 10^15 cm^-3. Under the very same supposition, we can find n_m from the residual voltage U_min and the current density (Figure 9.23); this also yields n_m ≈ 10^15 cm^-3. Before switching, the diode current is the displacement current; therefore, we can get the maximum of the field intensity near the p–n junction as
FIGURE 9.21 Diode turn-on characteristics when different voltage rates are applied.
FIGURE 9.22 Turn-on voltage vs. the applied voltage rise rate.
E_m = E_0 + (1/εS) ∫_0^t I dt
(9.23)
According to Equation (9.23), it is easy to find the value E_m = 3.3 × 10^5 V/cm for the moment of time just before switching, for which the evaluation of the concentration according to Equation (9.16) yields n_m = 5 × 10^15 cm^-3. It is clear that, due to the decrease in the voltage on the structure as the current increases, the field at the front should weaken in comparison with the initial one, and consequently the concentration n_m should decrease. Considering this, the sensitivity of the n_m value to a change in E_m that follows from Equation (9.16), and the inaccuracy of the determination of S and U_min, the coincidence of the three values for n_m determined by independent methods should be considered completely satisfactory. This coincidence also indicates the sufficient homogeneity of the modulation process. Since Equation (9.16) yields an evaluation from above, it follows from a comparison of this evaluation with the evaluation of the concentration from the extracted charge that modulation enveloped no less than 40% of the device's area. Experiments in which the area of similar structures was gradually decreased showed that the residual voltage increases with a decrease in the area according to a law that is close to linear. Therefore, it is possible to confirm that the wave process in structures with a large neutral region (NR) thickness is sufficiently homogeneous. This homogeneity, which is caused by the large concentration of the carriers initiating the wave, also explains the extreme stability of the process in time. So, the given series of experimental data confirms the fact that, in "thick" structures (i.e., those with a large NR and a large initial SCR), when there are moderate U′ values, the process develops according to the scheme examined above; that is, injection of carriers from the NR into the SCR and excitation of a fast ionization wave.

Effect of the Structure's Parameters
There is an interesting and important problem of the effect of the technological parameters of a structure, such as the degree of doping, the thickness, and the depth of the junctions, on super-fast switching over a wide range of the circuit parameters U_0, U′, and R. However, such studies are extremely difficult. An extremely wide range of the characteristics that are specific
FIGURE 9.23 Residual voltage vs. applied voltage rise rate.
for the discussed phenomenon, such as the maximum switching voltage, the residual voltage after switching, and the duration of the switching process, can be observed from structure to structure. The laws of the dependencies (U_m on U_0, etc.) can even change qualitatively. Within a batch manufactured in one technological cycle, the spread is usually very small. An uncontrollable spread in the parameters arises between structures that are manufactured in different technological cycles and out of different initial materials, although such parameters as the depth of diffusion, the thickness of the structure, the surface and volume resistance, the carrier lifetime, and the stationary breakdown voltage are controlled well enough. This situation is evidently explained by two causes.

1. As was noted previously, the wave ionization processes are closely connected with the processes of formation of the primary initiating carriers since, in the overvoltaged region, weak tunnel ionization through the system of deep levels, in the absence of impact ionization, can be a determining factor. The very same thing applies to impact ionization through a level in the NR, where zone-to-zone ionization is impossible because of the weak field. The control of field ionization is extremely difficult in our case, just as is the control of thermal ionization through deep levels in the space charge region (SCR) (in a silicon high-voltage device), which determines the leakage current of a p–n junction. In spite of all attempts to purify the material of the levels lying near the middle of the forbidden zone, which make the main contribution to the leakage current, residual uncontrollable levels (or a system of levels) remain, and there will always be an uncontrolled leakage current that is greater than the calculated diffusion currents from the neutral regions. The concentration of these levels can be so small that it is impossible to determine it in other independent experiments, such as measurements of photo- and thermo-capacitance, etc.
2. Generally speaking, ionization waves can be unstable with respect to spatial perturbations with a wavelength that is greater than the thickness of the front of the ionization wave, and the front becomes more unstable as its speed increases. In a semiconductor, the development of this instability, which is manifested as the current "filamentation" effect, leads to the breakdown (or burn-through) of the structure. Actually, in regimes of very large overvoltages and small fronts, the switching of thin structures was unreliable. If the pulse duration was long, e.g., 10 ns, then the structures very quickly burned out, often after the first switching. As practical experience has shown, the presence of the NR over the entire range of working voltages is necessary for reliable operation; that is, the overlapping of the base by the space charge region must not be permitted. Evidently, the NR, acting as a ballast series resistance for the wave, suppresses the instability.

Below are given the experimental data on the effect of the thickness of the high-ohmic n-layer in a p+–n–n+ structure on the switching process. For the reasons stated above, all the structures were manufactured in one technological cycle out of silicon wafers having different thicknesses and an identical donor concentration N_d = 1.25 × 10^14 cm^-3.
The depth of the p+–n junction, which was obtained by means of the diffusion of boron with a surface concentration N_s ≈ 5 × 10^19 cm^-3 and of aluminum with N_s ≈ 10^17 cm^-3, equaled 50 µm. Figure 9.24a, b, and c give the dependencies of the switching voltage, the maximum field intensity at the p–n junction at the switching moment [calculated from the experimental data according to Equation (9.23)], and the turn-on time on the rate of increase of the voltage, for different thicknesses of the n-layer. As was stated previously, when supplying a constant voltage, the switching of structures with a small thickness of the n-layer, and hence a thin NR, became very unreliable, because the devices quickly burned out. Therefore, all of the data were obtained with nearly zero constant bias. For large U′ values of more than 2 × 10^12 V/s, the structures switch on well even without a constant bias. In this case, the large flow of primary carriers does not prevent the creation
FIGURE 9.24 The switching voltage (a), the maximum field intensity (b), and the duration of the switching (c) vs. the rate of increase of the voltage on the structure.
of an overvoltaged region, and fast switching is possible, as was noted above. However, the gain with respect to fast response should decrease sharply, which is also observed in the experiments. The calculation of the field intensity at the p–n junction at the moment of switching, which takes into account the true course of the voltage curve, is shown in Figure 9.24b: with an increase in U′, the E_m value at first increases linearly but, when U′ > 5 × 10^12 V/s, it saturates toward a value of 4.4 × 10^5 V/cm. It is evident from Figure 9.24b that, for all structures with different n-layer thicknesses, the dependencies coincide with good accuracy. It should be assumed that, when there are large U′ rates and a small size of the SCR, the switching process takes place along a different route from that of the preceding case of small U′ values and a large SCR. It may be shown that, in fields with an intensity of approximately 4 × 10^5 V/cm, the excitation of a fast ionization wave is possible even when the carrier concentration is very large:
n_0 ≈ εαE_m^2/(2qb) ≈ 10^16 cm^-3

When U′ > 5 × 10^12 V/s in the experiments, the current density before switching is determined, with a small error of less than 20%, by the law

j = (C/S)(dU/dt),  where C = εS/w

so that the current is the displacement current, and its density does not exceed 500 A/cm^2. Thus, the upper evaluation for the density of the conductivity current is j_mp < 100 A/cm^2, and for the concentration n_0 ≤ j_mp/(qv_s) = 10^14 cm^-3; this value is a great deal less than the threshold value of 10^16 cm^-3 obtained above, which confirms the possibility of exciting a fast wave. A necessary condition for the wave process is a gradient of the field and/or of the concentration. There is a field gradient in the SCR; therefore, the ionization process there can be a wave process. In the NR, the distributions of the field and the concentration are homogeneous from the start. The breakdown of homogeneity can arise due to the generation of nonequilibrium carriers by impact ionization and the increase of their concentration to a value comparable with N_d. But even in this case, the nonhomogeneity region, positioned at the boundary of the NR with the n+–n junction, will have dimensions Δ ≈ v_sτ. Therefore, when there is a small τ and a small constant bias, and consequently a_0 « w, then, after the exit of the wave from the SCR into the NR, the entire voltage will be quickly "thrown" onto the NR, and homogeneous delayed ionization occurs there. It can be shown that such a process also leads to switching; that is, it leads to an increase in the current with a decrease in the voltage, but with a small gain with respect to fast response. The saturation of the dependence of E_m on U′ (at E_m ≈ 4.7 × 10^5 V/cm) evidently means that intensive tunnel ionization through an intermediate level in the forbidden zone is beginning. It follows from Figure 9.24 that, with an increase in U′, the switching time τ_f decreases, and it increases roughly in proportion to the thickness of the n-layer. When there are large U′ values (more than 5 × 10^12 V/s), a tendency is observed toward the saturation of τ_f, which occurs in the very same U′ region where the saturation of the dependence of U_m and E_m on U′ takes place; this signifies that there is a direct connection between the switching speed and the overvoltage.
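The two estimates above, the displacement current density before switching and the threshold concentration for wave excitation, can be reproduced approximately as follows; the n-layer thickness used here is an assumed value for the thin structures.

```python
# Displacement current density j = (eps/w)*U' and threshold concentration
# n_0 = eps*alpha*E_m^2/(2*q*b), with assumed/representative parameters.
import math

q = 1.602e-19
eps = 11.7 * 8.854e-14            # F/cm
alpha_inf, b = 0.65e6, 1.2e6      # cm^-1, V/cm
w = 0.015                          # cm, assumed thin-structure n-layer width
U_rate = 5.0e12                    # V/s
E_m = 4.4e5                        # V/cm, saturated switching field from Fig. 9.24b

j_disp = eps / w * U_rate                              # displacement current density
alpha_m = alpha_inf * math.exp(-b / E_m)
n0_threshold = eps * alpha_m * E_m**2 / (2 * q * b)    # threshold concentration
print(f"j_disp ~ {j_disp:.0f} A/cm^2 (text: below ~500 A/cm^2)")
print(f"n0 threshold ~ {n0_threshold:.1e} cm^-3 (text: ~1e16 cm^-3)")
```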
9.4.3 DIODE-LIKE SWITCHES
The preceding section gave the results of the study of the delayed impact ionization switching effects, which can be used for creating super-fast switching devices. Such devices do not have stationary S-type volt-ampere characteristics and are actually used as peakers (sharpeners) of initial pulses. One of the most important parameters of a peaker is the ratio of the operation delay τ_d to the duration of the front, which almost corresponds to the ratio of the rate of the voltage drop during switching to the rate of increase of the voltage before switching. This ratio is the gain with respect to fast response that is obtained using a peaker. In developing a peaking element, it is necessary to guarantee the maximum value of this ratio. As was shown above, a sufficiently large gain of κ ≈ 10, with good stability and high operational reliability of the diode, can be obtained in a two-step switching process. Such switching is accomplished in diodes with a specific resistance of the initial silicon of ρ = 30 to 50 Ω·cm (N_d = 10^14 cm^-3). The switching effect in these diodes disappears when the rate of increase of the voltage is less than 0.5 × 10^12 V/s. The maximum switching voltage U_m = 3 kV at U′ = 10^12 V/s corresponds to a delay of τ_d = 3 ns. For the indicated U′ and U_m values, the switching time equals τ_f = 0.2 ns; that is, κ ≈ 15. The thickness of the n-layer and the area of the device play an important part. Devices that are intended for operation with small U′ rates should operate
with an initial bias that is close to the stationary breakdown threshold. To do this, they have a space charge region (SCR) of large thickness, with an initial value a_0 = 100 µm when U = U_0 and a final value a_m = 200 µm when U = U_m. To suppress the current "filamentation" effect, and thus guarantee the device's reliable operation, a sufficient reserve of neutral region (NR) thickness is necessary, since the NR prevents the development of instabilities. Therefore, the complete thickness of the n-layer should equal 250 to 300 µm. Decreasing the device area decreases the current passing through it at the delay stage, and this leads to an increase in the density of the current and in the residual voltage after switching. The optimum area when U′ ≈ 10^12 V/s and the device operates in circuits with an impedance of 50 Ω equals S = 0.10 to 0.15 cm^2. Such a value guarantees the smallness of the precursor U_C in the shaped pulse (due to the flow of the capacitance current), less than 15%, and a residual voltage U_min ≈ 200 to 500 V. Shaping picosecond-range pulses requires a large overvoltage and U′ = (3 to 5) × 10^12 V/s. In this case, operation without a constant bias is possible. Since the maximum thickness of the SCR in this mode decreases, it is possible to decrease the total thickness of the n-layer to 200 µm and, having retained the size of the NR, to avoid the "filamentation" of the current. To decrease the stray capacitance current, it is necessary to decrease the area of the diode down to 0.015 cm^2. This makes it possible to retain roughly the same U_C and U_min values as in the diodes intended for operation with small U′ (10^12 V/s). The experiment with such an optimized diode (w_n = 200 µm, S = 0.023 cm^2) showed a switching voltage of 3.3 kV, and the residual voltage was close to zero. The duration of the front of the pulse shaped at a load of 50 Ω was less than 50 ps and was below the resolution limit of the measurement system. The amplitude of the fast voltage change on the load is 2 kV, and the power is 80 kW, which is four orders of magnitude greater than the power commutated by the known devices in the given temporal range. The instability of the switching, which is limited by the jitter of the sampling oscilloscope, did not exceed 0.02 ns, which was the instability of the oscilloscope itself. The threshold operating frequency of a diode peaker depends on the time of the return to the initial state and on heating. It should be noted that the process of plasma extraction after switching on in an SAS is just the same as for the DSRD. As was shown in Section 9.3, the plasma remaining after switching can be extracted by the current in a small enough amount of time (10^-8 to 10^-9 s), and the limit for the operating frequency may be as high as several megahertz. However, after the break of the current, nonequilibrium carriers can remain in the NR, and their extraction is determined by diffusion through the NR to the SCR and by recombination in it. The diffusion flow of these carriers from the NR to the SCR is equivalent to a leakage current and can prevent the creation of an overvoltaged region when there are small U′ values. Experiments conducted with a two-pulse methodology showed that the restoration time when U′ ≈ 10^12 V/s is approximately 2 µs. With an increasing rate of increase of the voltage, the effect of the residual charge in the NR decreases.
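The arithmetic behind the gain and peak-power figures quoted above is straightforward:

```python
# Quick check of the peaker figures: gain kappa and peak power into a 50-ohm load.
tau_d, tau_f = 3.0e-9, 0.2e-9           # delay and front for the slower mode, s
print(f"gain kappa = {tau_d / tau_f:.0f}")                  # ~15

U_load, R_load = 2.0e3, 50.0            # fast 2 kV voltage step into a 50-ohm load
print(f"peak power = {U_load**2 / R_load / 1e3:.0f} kW")    # 80 kW
```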
The heat losses can conditionally be divided into three parts: losses on the precursor of the current (at the delay stage), losses at the front, and losses at the plasma extraction stage. At the delay stage, the current in a significant portion of the SCR of the structure is a displacement current and is not accompanied by heat dissipation. The energy lost in the NR can be evaluated from above in the following manner:

P_d ≈ j_sE_sτ_d(w – a_0) ≈ 10^-6 J

The losses at the front should be less than the usual commutation losses in switches in the approximation of a linear increase in the current since, in the switching process, in front of the wavefront the current is a displacement current. Consequently,

P_f ≤ (I_mU_m/6)τ_f ≈ 3 × 10^-6 J
At the restoration stage, it is possible to distinguish two phases. In the first (τ_1), the concentration of the electrons near the boundary with the p+-layer is large, there is no SCR, and the voltage drop is equal to the residual voltage. In the second phase (τ_2), the restoration of the SCR and the increase in the voltage on the diode begin. The losses here depend on the specific shape of the pulse sent to the diode. When there is a short pulse of τ_p < τ_1, the losses are determined by the residual voltage,

P_1 = I_mU_minτ_p/2 ≈ 3 × 10^-6 J

When there is a large pulse duration, the losses in the second phase can exceed the losses in the first by an order of magnitude. Thus, when there is reliable heat exchange (10 W/cm^2) and a permissible temperature for heating the structure of 100°C, the threshold average operating frequency limited by heat liberation can exceed 100 kHz.
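An order-of-magnitude check of the repetition-frequency limit follows; the device area and the total per-pulse loss are assumed values based on the estimates in this section.

```python
# Allowed average power = (heat removal per unit area)*(area), divided by
# the energy dissipated per pulse; all values assumed/representative.
heat_removal = 10.0        # W/cm^2, reliable heat exchange quoted in the text
area = 0.1                 # cm^2, assumed diode area for the slower mode
E_pulse = 7.0e-6           # J, assumed sum of the per-pulse losses estimated above

f_max = heat_removal * area / E_pulse
print(f"f_max ~ {f_max/1e3:.0f} kHz")    # on the order of 100 kHz
```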
9.4.4 TRANSISTOR-LIKE SWITCHES
As previously noted, after filling the n-base of a p+–n–n+ structure with plasma as a result of the run of the impact ionization wave, the process of the dispersion of this plasma begins. After the conclusion of this process, in approximately 10 ns, the blocking capability of the p–n junction is restored. However, in some applications, it is necessary to maintain the conducting state of a switch for a prolonged time interval of 10^-8 s or more. It is well known that, in n++–p+–n–n+ transistor structures, the complete charge that passes through the external circuit, due to the regenerative feedback, can significantly exceed the charge preliminarily "pumped" into the n-layer. This is the principle that makes it possible to slow down the dispersal of the plasma, and it can also be used in devices based on impact ionization waves by exciting the wave in the collector p+–n–n+ part of the n++–p+–n–n+ transistor structure. Let us examine the process of switching such a transistor structure, as shown in Figure 9.25. The transistor is connected into a circuit in the same manner as an SAS. As the applied voltage increases, the SCR near the p+–n junction expands, and in this case the maximum field intensity at the p+–n junction increases and reaches the critical value E_mc, at which an impact ionization wave is formed. The SCR expansion is associated with the removal of the majority carriers from the p+–n junction with a drift speed v_pp = µ_pE_p in the electrical field of intensity E_p that arises in the neutral part of the p+-layer. This same field causes the injection of electrons from the n++-layer and their drift toward the p+–n junction with a speed v_np = µ_nE_p. It is obvious that the electrons advance from the n++-layer to a distance L_n = bL_p, where L_p = εE_mc/(qN_a), b = µ_n/µ_p, and N_a is the concentration of the acceptor doping impurity in the p+-layer. Thus, when the condition

w_p ≥ (1 + b)L_p
(9.24)
is fulfilled, where w_p is the thickness of the p+-layer, the electrons from the n++-layer will not be able to reach the strong-field area before the intensity in it reaches the E_mc value. It is obvious that, in this case, a wave in the transistor structure can be excited in exactly the same way as in a diode structure. In an asymmetrical p+–n junction, N_a » N_d and E_mc = √(2qN_dU_m/ε), where U_m is the switching voltage of the p+–n structure. After the run of the wave and the filling of the entire n-layer with electron-hole plasma, a current of density j, determined by the external circuit, flows. At the initial stage of the plasma dispersion, the nonequilibrium carrier current near the p+–n junction is a diffusion current. The characteristic size of this region is L_d ≈ qD_pp_m/j ≤ 10^-4 cm for typical values p_m < 10^16 cm^-3 and j > 10^2 A/cm^2. After a time τ_d = qp_mL_d/j = 10^-9 s, the concentration of the electrons and holes at the p+–n junction
drops to a value at which the transport acquires a drift character, and the electrons begin to depart from the p+–n junction into the depth of the n-layer with a speed

v_nn = j/[q(p_m + n_m)]

In the diode structure, an SCR of width a = v_nnt arises at the p+–n junction, and the voltage on it is restored to

U_SCR ≈ qρ_0a^2/(2ε)
(9.25)
where ρ_0 = N_d + j/(qv_sp), and v_sp is the saturated drift speed of the holes. In the transistor structure, the electrons that left the n++-layer earlier, having flown through the part of the p+-layer that remained free of them up to the moment of switching during the time

τ_np = (w_p – L_n)qN_a/j
(9.26)
will reach the p+–n junction and, by compensating for the space charge of density ρ_0 in the SCR, curtail the increase in the voltage on it. From Equations (9.24) through (9.26), we find that, by the moment of the end of the flight of the electrons across the p+-base,
FIGURE 9.25 A transistor with delayed ionization.
U_SCR ≈ U_m(N_d/N_a)
(9.27)
These evaluations show that the increase in the SCR voltage after the run of the ionization wave can be slowed down significantly in the transistor structure. The equality sign in Equation (9.24) corresponds to the optimal case. After all of the holes have recombined in the n-layer and/or left the n-layer for the p+-layer and recombined there, the current through the structure and the arrival of the electrons from the n++-layer stop, and the voltage restoration at the p+–n junction begins. The statements given above were verified on n++–p+–n–n+ structures with N_d = 10^14 cm^-3, w_p = 15 µm, and w_n = 250 µm, for which Equation (9.24) is satisfied. The experiment has shown that, in the transistor structure, the voltage restoration stage is absent, in contrast to the diode; moreover, after switching, the voltage smoothly drops during the entire time that the current flows, which is 10 ns and is determined by the pulse generator used.
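The sketch below puts numbers into the design relations of this section. The acceptor concentration N_a of the p+-layer and the mobility ratio b are assumed values; U_m is taken as the ~3 kV switching voltage typical of the structures above, and N_d and w_p are those quoted above.

```python
# Design relations for the transistor-like switch, Eqs. (9.24) and (9.27),
# evaluated with partly assumed parameters.
import math

q = 1.602e-19
eps = 11.7 * 8.854e-14           # F/cm
N_d, N_a = 1.0e14, 1.0e17        # cm^-3 (N_a assumed)
U_m = 3.0e3                      # V, switching voltage
w_p = 15e-4                      # cm (15 um p+ layer)
b = 3.0                          # assumed mu_n/mu_p

E_mc = math.sqrt(2 * q * N_d * U_m / eps)     # critical field at the p+-n junction
L_p = eps * E_mc / (q * N_a)                  # hole-depleted width in the p+ layer
print(f"E_mc = {E_mc:.2e} V/cm, (1+b)*L_p = {(1 + b)*L_p*1e4:.2f} um vs w_p = {w_p*1e4:.0f} um")
print(f"U_SCR after the wave, Eq.(9.27): ~{U_m * N_d / N_a:.0f} V")
```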
9.4.5 THYRISTOR-LIKE SWITCHES
In a number of cases, it is necessary that, after fast switching, the switch remain in the conducting state for an unlimited length of time, with the current through it being broken at the required moment by compulsory (external) means. To realize such a switch from the transistor peaker discussed above, it is necessary to eliminate the possibility of extracting the accumulated holes; that is, to guarantee the complete regeneration of the hole current. The simplest way to do this is to add an additional p++-emitter to the transistor structure from the side of the n-layer, which creates a thyristor structure. However, the switching characteristics of such a structure turn out to be unsatisfactory for the following basic reasons. When the rate is U′ ≈ 10^12 V/s, the complete time for raising the voltage to 3 kV equals approximately 3 ns, while the density of the displacement current provides a field intensity in the NR of the n-base at which the drift speed of the carriers is close to the saturated speed (10^7 cm/s). Therefore, the holes injected from the p++-layer into the n-layer pass through an n-layer having a thickness of 200 µm and end up in the SCR before the overvoltage can be created in it; that is, they do not make it possible to delay ionization. Additionally, large collector leakage currents arise due to the high gain coefficient of the p–n–p transistor, which, as shown previously, also prevents the delaying of ionization. So, delayed ionization is possible only in a thick thyristor. It is possible to avoid these shortcomings in structures of the n++–p+–n–n+–p++ type shown in Figure 9.26, in which an n+-layer that provides an additional delay of the injection of holes into the n-base is introduced between the p++-emitter and the n-base. The parameters of the p+–n section are the very same as in the transistor structure described above. The parameters of the n+-layer should guarantee a delay τ_p of the flight of the holes that is equal to the delay of the electrons in the p+-layer, so that

τ_p = (µ_n/µ_p)qw_n+N_n+/j

where N_n+ is the concentration of the donors in the n+-layer. From here, taking Equation (9.26) into consideration, we get

N_n+ = N_a(w_p – L_n)µ_p/(µ_nw_n+)

That is, the parameters of the p+-layer are close to the parameters of the n+-layer.
The experimentally obtained curves for the change in the voltage and current in such a structure almost repeat those for the transistor structure. However, it is possible to bring the n++–p+–n–n+–p++ structure out of the switched-on state only by means of decreasing the current through it to a value lower than the holding current. The turn-on times of both the transistor and the thyristor are the same as those of the SAS, or < 0.3 ns.
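For illustration, the matching condition for the n+-layer can be evaluated with assumed, representative parameters (the chapter gives no concrete values for this layer):

```python
# Illustrative use of N_n+ = N_a*(w_p - L_n)*mu_p/(mu_n*w_n+); all values assumed.
N_a = 1.0e17          # cm^-3, assumed p+ acceptor concentration
w_p = 15e-4           # cm
L_n = 0.6e-4          # cm, assumed electron penetration depth at the switching moment
w_nplus = 15e-4       # cm, assumed n+ layer thickness
mu_ratio = 1.0 / 3.0  # approximate mu_p/mu_n for silicon

N_nplus = N_a * (w_p - L_n) * mu_ratio / w_nplus
print(f"required donor concentration in the n+ layer: ~{N_nplus:.1e} cm^-3")
```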
9.4.6 POSSIBILITIES OF THE NUMERICAL MODELING OF DELAYED IMPACT IONIZATION
Super-fast switching is an unexpected phenomenon in that it occurs after a relatively slow increase in the voltage, over a time that is much greater than the time of the voltage drop during switching. In this case, the current density increases homogeneously enough along the entire area of the structure. We must assume a two-stage switching mechanism to explain this super-fast switching characteristic. At the initial stage of the voltage rise, nonequilibrium carriers (holes) arise due to impact ionization by the majority carriers in the neutral region (NR). These nonequilibrium carriers are carried out into the expanding space charge region (SCR), fall into the overvoltaged region, where the field intensity is many times greater than the threshold field intensity, and excite a fast ionization wave. The calculations done for such a two-stage mechanism turned out to be in good qualitative and satisfactory quantitative agreement with the experiment for large SCR dimensions (more than 10^-2 cm) and average U′ values of about 10^12 V/s. In the region of large U′ values, close to 10^13 V/s, and small SCR dimensions of less than 10^-2 cm, the switching process can follow another, one-stage route, in which the formation of an overvoltaged region occurs against the background of a relatively large flow of initiating carriers. Naturally, the gain with respect to fast response in a one-stage process turns out to be smaller. The minimum switching time can be less than 0.05 ns.
FIGURE 9.26 A thyristor with delayed ionization.
We should also examine the problem of accurately calculating the switching process with the possible models. The description of the ionization wave itself is fundamentally nonlinear. The phenomenon is difficult to describe analytically, even within the framework of the hydrodynamic approach, which requires solving a system consisting of the Poisson equation (which considers the mobile carrier charges) and the continuity equations, taking into account the dependence of the mobility on the field intensity. A significant feature of the problem is the fundamental necessity of imposing a condition on the field distribution that is integral with respect to the entire thickness (the Kirchhoff equation), because of the influence of the external circuit. Therefore, in an analytical solution, it is impossible to use the methods that are well developed for nonlinear wave processes, such as the self-similar (automodel) solutions widely used in calculations of the TRAPATT mode of avalanche flight diodes. Some attempts were made by the PSG team, and by others, to study the process of switching a p–n junction by means of computer modeling in a one-dimensional approximation. In all cases, the calculations led to the following generally obvious results. When only the sufficiently accurately established coefficients of zone-to-zone impact ionization are used, obtaining a noticeable gain with respect to fast response requires a small leakage current, corresponding to an initial concentration of less than 10^6 cm^-3. The average distance between the carriers is then more than 10^-2 cm, comparable to the characteristic dimensions of the structure. It is obvious that the hydrodynamic approximation accepted in the modeling is then no longer usable, and a consideration of discreteness is necessary. In addition, to obtain the small switching times of <0.2 ns observed in the experiment, the calculated voltage on the structure, and consequently the field intensity in it as well, had to be significantly (1.5 to 2 times) greater than the experimental values. From what has been stated previously, it follows that both impact and tunnel ionization through deep levels can play a significant role in the switching process. However, up to the present time, these processes have hardly been studied and cannot be put into the model in the form of concrete formulas with numerical coefficients. In addition, in the case of especially fast modes, a growth of spatial instability (nonhomogeneity) is possible, and in this case the one-dimensional approach is not usable at all. In cases where there are sufficiently reliable data about the initial stage, up to the formation of the wave, the simple analytical approach developed in Section 9.4.2 gives good agreement with computer modeling. In this chapter, the problem of the dispersal of the plasma after switching has been largely omitted from the discussion because, in diode structures, this process is similar to the processes previously described in Section 9.3.
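To make the structure of such a hydrodynamic calculation concrete, the following is a deliberately minimal one-dimensional sketch of this class of model: a Poisson equation with an imposed terminal voltage (the Kirchhoff-type condition), continuity equations with drift at the saturated velocity, and a field-dependent ionization coefficient. It is not the PSG code: diffusion, recombination, the external-circuit impedance, and non-local effects are all ignored, and every parameter is an assumed, representative value.

```python
# Minimal, illustrative 1D drift + Poisson + impact-ionization sketch.
import numpy as np

q, eps, v_s = 1.602e-19, 11.7 * 8.854e-14, 1.0e7      # C, F/cm, cm/s
alpha_inf, b = 0.65e6, 1.2e6                           # cm^-1, V/cm
N_d, w, U_rate = 1.0e14, 0.025, 1.0e12                 # cm^-3, cm, V/s (all assumed)

M = 200
dx = w / M
dt = 0.5 * dx / v_s                                    # CFL-limited explicit time step
n = np.full(M, 1.0e8)                                  # seed electron density, cm^-3
p = np.full(M, 1.0e8)                                  # seed hole density, cm^-3

t = 0.0
for step in range(800):
    t = step * dt
    U = U_rate * t                                     # ramped terminal voltage, V
    # Poisson: integrate the space charge, then shift E so that its integral equals U
    rho = q * (N_d + p - n)
    E = np.cumsum(rho) * dx / eps
    E += (U - E.sum() * dx) / w
    E = np.maximum(E, 0.0)            # crude stand-in for the undepleted (neutral) part
    # impact ionization source (electron-hole pairs generated together)
    alpha = alpha_inf * np.exp(-b / np.maximum(E, 1.0))
    G = alpha * v_s * (n + p) * dt
    # upwind drift at the saturated velocity: electrons toward x=0, holes toward x=w
    n = n + v_s * dt / dx * (np.append(n[1:], 0.0) - n) + G
    p = p + v_s * dt / dx * (np.append(0.0, p[:-1]) - p) + G
    if 0.5 * (n + p).mean() > 1.0e16:                  # plasma has filled the structure
        break

print(f"stopped at t = {t*1e9:.2f} ns, U = {U:.0f} V")
print(f"mean plasma density = {0.5 * (n + p).mean():.2e} cm^-3")
```

The run is stopped once the mean plasma density exceeds 10^16 cm^-3, i.e., once the structure has effectively switched; the point of the sketch is only to show the long "currentless" delay followed by a rapid, field-driven multiplication, not to reproduce the measured waveforms.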
9.5 PROPERTIES AND LIMITATIONS OF PULSE-FORMING SEMICONDUCTOR NETWORKS 9.5.1
PEAK POWER
AND
FRONTS
General Considerations All the methods of short-pulse power generation that use the new switches are based on time compression of initially long (submicrosecond) pulses. Generally speaking, the output pulse power is determined by the product of the power of single switching devices and the number of the devices connected in series and/or parallel in the power compression network. In this case, three main questions arise. 1. What is the maximum possible power of the switching device? 2. How can we sum the powers of many devices? 3. What is the most effective means to sum the powers? © 2001 CRC Press LLC
It is clear that the answers to the questions depend on the types of devices and are different for drift step recovery devices or silicon avalanche shapers. Nevertheless, we can make some remarks that are applicable for both types. Almost all the new devices (excluding DSR transistors and thyristors) are two-terminal devices, and this is the feature that determines their use in power compression networks. It is evident that, for the most simple cases, when only one switching device is used (as shown in Figs. 9.5 and 9.9), such circuits are more complicated than well known circuits using a three-electrode switch triggered through the third electrode. In the last case, a pulse-forming network has a much smaller number of components. The case of two-electrode devices becomes simpler when a large number of switching devices must be used to increase power. This is because the devices require no triggering circuits. For example, many DSRD wafers may be soldered one on the top of another or stacked. Such stacks may have ten or more times higher voltage than a single diode, but, for an end user, the stack appears as one device or a single unit having a greater thickness. The possibility of combining both types of DSRDs (drift step recovery devices) and SASs (silicon avalanche shapers) has been discussed above. Now, we will consider the limitations of stacking a large number of devices. Drift Step Recovery Devices It was shown that maximum rate of voltage increase limitations for DSRD is of a fundamental nature and is near U Ý 2 × 1012 V/s. From this limitation, it follows that the shortest pulse front τf that a DSRD can generate is U Ea Wn τ f ≥ ------d = -----------U′ 2U′ where
(9.28)
Ud = EaWn/2 = maximum diode blocking voltage Ea = threshold ionization field Wn = n-layer width
For the best efficiency, εE W n = --------aqN d
(9.29)
where all of the n-layer is overlapped by a space charge region in the off-state. It may be noted that, for a quasi-symmetrical diode, U´ may be two times more, or up to 4 × 1012 V/s. Equation (9.28) shows that at 2 kV diode can generate a 1 ns front with a 2 kV amplitude pulse. For best efficiency, it follows from Equation (9.28) that 2
2
εE a εE a V s τ f = ----------- = ------------2qN d 2j –
(9.30)
The condition for fulfilling Equation (9.28) is that j– = qNdVs, which was taken into account. We should like to emphasize that the switching-off process may be slowed down in comparison with Equation (9.28) by a simple increase of the diode area. In this case, the current density will be less than the saturated condition j– « qNdVs. Two general important rules follow from Equations (9.28) and (9.29). First, the lower the front of generated pulses with the same amplitude Uo, the higher the number of wafers N that must be used in the stack. Second, the lower the front, the higher the current density that must be used. © 2001 CRC Press LLC
As was mentioned before, good uniformity (synchronism) of voltage recovery on each diode can be reached in the case of good reproducibility of the diode parameters Nd and S, and it may be shown that δU ≈ δN d δU ≈ δS where
(9.31)
δU = relative dispersion of the diode voltage at the given moment δNd, δS = relative dispersion of doping concentrations and areas
One may conclude from Equation (9.31) that good control of diode parameters, including the requirement for a long lifetime of minority carriers, permits an infinite increase in the number of diodes in the stack. However, the increase in the diode stack voltage increases the thickness or height of the stack. From Equations (9.28) and (9.30), it follows that the minimal thickness of the stack Ws of N diodes is 2U W s = ---------0 Ea
(9.32)
and is independent of front duration. In the case of short fronts, the stack consists of the larger number of the thinner diodes. Actually, the thickness of heavy doped layers, contacts, and soldering interface must be taken into consideration. Due to this addition, actual thickness may be times more than the minimum as shown in Equation (9.32). The increased thickness, which increases the stack inductance, may worsen the front. For example, the thickness of the 100 kV stack estimated by Equation (9.32) is more than 1 cm. Taking into an account the above-mentioned additions, the thickness can reach 4 to 6 cm. As was mentioned in Section 9.1 and shown in Figure 9.9, the switch should be connected in a circuit as a part of a waveguide. For such a connection, the next limit for a stack diode thickness Ws may be evaluated as L W τ f » --- ≅ ------s ρ cw
(9.33)
where cw = electromagnetic wave velocity in the waveguide L = inductance of the stack ρ = waveguide impedance In spite of the short duration of high-voltage pulses, oil or other dielectric filling should be used in high-voltage pulsers. In the case of oil filling (c Ý 2 · 1010 cm/s), we get from Equations (9.32) and (9.33) that the upper voltage limit for a stack with 1 ns turn-off time is Ws Ea Es cw τf cw Ea Wn - ≈ --------------- = -----------------U 0 « ----------2 2 2V s U 0 « 1 MV © 2001 CRC Press LLC
(9.34)
Let us consider the upper limit of the switching-off break current. As can be shown for either the injection or extraction mechanisms of switching, the maximum area of a semiconductor wafer of the round shape S limited due to skinning is 2
2
c W - = c 2 τ 2f S ≤ ----------2 Vs
(9.35)
where c is the velocity of electromagnetic wave in the semiconductor. The maximum current is proportional to the front duration square, 2
2
I m ≤ j s S = c εE a τ f
(9.36)
The stability of the uniform current density distribution across the device area was discussed above, where the stability conditions for an SAS were obtained. These also showed that a uniform stable current distribution in DSRD is inherent. The maximum power commutated by single stack of wafers Pm may be deduced from Equations (9.34) and (9.36) to be 2
2 2
P m ≤ c w c εE a τ f
(9.37)
For τf Ý 1 ns, Equations (9.36) and (9.37) yield Im < 104 A,
Pm < 1010 W
It should be noted that the actual limitation on the switching power of one stack of wafers strongly depends on the device technology, the additional requirements on heat dissipation, and so on. Silicon Avalanche Shapers The silicon avalanche shaper (SAS) is a two-electrode device, so our previous discussion about the advantages and disadvantages of two-electrodes switches is valid here. Many SAS wafers must be assembled in a stack to get increased output power. The same DSRD limitations described in Equations (9.33) and (9.34) are valid for the stack thickness, but the parameters Ea and τf will be different for the SAS. At the present level, we do not have a good understanding of delayed ionization switching. So, for fast recovery processes, it is possible to make only rough estimations for τf . The maximum field intensity and voltage for an SAS are higher than for a DSRD. The upper limit for Ea is Ea ð b, where b is the threshold field in Equation (9.16). As was shown, the actual value of Ea is saturated in silicon at about 4 to 5 × 105 V/cm level, which is two to three times less than b, but still two to three times more than static breakdown threshold. So, there is a difference for the SAS and DSRD devices, although it is modest. It is evident that the SAS turn-on time, τf, is not less than the product of the mean time of one act of the impact ionization τi, and of the multiplier determined by ratio of concentration of carriers at the initial moment ni and the final concentration nm so that n n 1 τ f ≥ τ i ln -----m = --------------- ln -----m a ( Ea ) V s n i ni © 2001 CRC Press LLC
(9.38)
From Equation (9.38), we can determine that the turn-on time τf ranges from 10 to100 ps. It should be remembered that the turn-on time is determined by the rate of voltage rise dU/dt applied to the device, as was shown in Part 9.4.3. From Equation (9.34), we get U 0m « 200 kV for τ f ≈ 100 ps U 0m « 20 kV for τ f ≈ 10 ps As mentioned earlier, the delayed ionization can be subject to instabilities, determined by the fact that increasing the field intensity both decreases the time of ionization and increases the rate of carrier generation. When many devices are assembled in a stack, that same fact helps to synchronize the processes of switching the devices. Let us suppose that, due to some initial deviation of a diode parameter (for example, a smaller diode thickness), this diode switches on faster than others. Faster voltage drop on the device increases the voltage drop on the other devices. The voltage increase accelerates the turn-on processes and helps the slower devices catch up to the fast one. So, the requirements for the uniformity of the stack SASs assembled parameters are weaker even than in the case of DSRDs. It may be shown as well that the skin effect limits the area of semiconductor wafer to the value 2
c - = c 2 τ 2i S < --------2 2 vs a
(9.39)
Equation (9.39) looks similar to Equation (9.35). The maximum current density in turn-on state jm is determined by the maximum concentration in accordance with Equation (9.16). εE E j m = qV s n m = --------a -----a τf b
(9.40)
From Equation (9.40), we get E 2 I m ≤ c εE a τ f -----a b
(9.41)
Again, the limit of Equation (9.41) is very similar to the limit in the case of DSRD in Equation (9.36). So, from Equation (9.41), we get 3
I m ≤ 2 × 10 A for τ f ≈ 100 ps, S ≈ 1 cm I m ≤ 200 A for τ f ≈ 10 ps, S ≈ 0.01 cm
2
Finally, we evaluate for the peak power (Pm Ý U0mIm), and get © 2001 CRC Press LLC
2
8
P m ≤ 4 × 10 W for τ f ≈ 100 ps 6
P m ≤ 4 × 10 W for τ f ≈ 10 ps The evaluated power of a single diode stack is not the limit for a pulser output. Now, the next step to increasing power to exceed the limits shown is to sum the powers of many such stacks into a single load. The limit to the stack voltage implies that the diodes in the thicker stack, which are placed on the opposed ends, work causally independently. Therefore, the delay of the electromagnetic wave propagation along the stack is more than the duration of the switching processes. It may be shown that the same causal independence of the processes at the opposed edges of the wafer exists when the device area is larger than the limitations of Equations (9.35) and (9.39). Therefore, the summing of powers of the stacks is possible only in the case of external synchronization of the pulses generated by each stack. The possibility of synchronizing a large number of independent sources and the stability of delays of power compressing cells will be considered later. After that, the means of power summation through the use of transmission lines will be also be considered. In conclusion, we should make a brief mention about the pulse’s full width half maximum (FWHM) width. For a given device switching time, the generated pulse width is mostly determined by the circuits of the pulse forming network (PFN). Nevertheless, the requirement for the best efficiency imposes strong limitations on the shortest and longest possible pulses. For the case of short pulses, the limit is very simple and clear: turn-on or turn-off time is equal to the pulse front, is equal to the pulse decay, and is equal to the FWHM. When the FWHM is shorter than turn-on or turn-off time, the efficiency drops drastically. For the case of long pulses, the limits are determined by either the device’s parameters or the actual network used. The example of a network for generation of step pulse with practically infinite width was shown in Section 9.1.2 in the thyristor-diode closing switch. Some examples of the pulses of different shapes will be considered later.
9.5.2
ELECTRICAL EFFICIENCY
General Consideration The total electrical efficiency of pulsers, when determined as a ratio of output and input energy, strongly depends on the pulse form and the type of circuits. For example, it is evident that, in the case of a very long, step-like pulse, the commutation losses play no role. The main requirement for many uses is to put as much energy as possible into the shortest time period. It is evident here, and mentioned earlier, that good efficiency is possible only when the switching time of the switch is shorter than the FWHM. The mentioned conditions determine the bell-shaped pulse. The most efficient way to generate such pulses using two-terminal switches is with the pulse compression networks and examples shown in Section 9.1.2. The pulse compression network could include many stages, each consisting of simple compression cells. The total efficiency of the network ξ is the product of partial efficiencies ξi of each stage of the power compression cell when several stages are used; then ξ = ξ 1 × ξ 2 …ξ i . Each cell may be characterized by the power compression ratio ηi, which is the ratio of the short pulse output power of the cell and the initial long pulse generated inside the cell by a relatively slow switch, as shown in Figure 9.2. The semiconductor compression cells, which use DSRD and SAS, have limitations to the maximum compression ratio, as considered in Sections 9.4.3 and 9.5.3. The actual number of compression cells is determined by such pulser design requirements as efficiency, power, size, © 2001 CRC Press LLC
weight, cost, and so on. At present, no regular procedure exists to determine an optimal number of cells and the compression ratio distribution between the cells. Our experience shows that, for the purpose mentioned above, you may use two DSRD stages and one or two SAS stages. Let us consider the network consisting of two DSRD stages and one SAS, shown in Figure 9.27. The energy losses for one cycle of compression may be separated into two types. First, there Cp S3 U0
pumping source L3 C1
L1
2
R1
L4
C5
3
L2 D3 C4
S2
S1
R1
D2
D1 Db
L5
C2
C3
(a) A multiple stage power compressing network
ID3
U, I UC
50-100ns
3-8ns 1-2ns
0.1-0.5ns
US1 ID1
IL3
US tf
T/4
t
T/2 UD1 UD2
(b) The power compression transients FIGURE 9.27 A power compression network showing (a) the circuit and (b) performance.
© 2001 CRC Press LLC
is the energy dissipation into the heat in each part of the circuit during the time of the conductivity current flow. Second, even in the case of no conductivity current, only part of energy stored initially in the cell can be compressed and thrown out into the next stage or the load. Generally speaking, the remaining part of the energy could be recaptured and used in the next cycle of pulse forming, but practically, in nano- and subnanosecond time bands, present-day technology does not permit such recuperation. So, the remaining energy either dissipates as heat after several cycles of oscillations or produces a parasitic pulse some time after the useful one. The first stage of compression (S1, C1, L1, L2, C2, S2, D1) is similar to that considered in Figure 9.2 of Section 9.1.2. The second DSRD (D2) is pumped by additional pulse-forming circuits Cp S3 through the inductor L4. The output SAS (D3) is biased by dc voltage from the source U0 through the inductor L5. When the first DSRD (D1) turns off, the energy stored in the inductors L1, L2 is transferred to charge capacitance of the diode D1, the additional capacitor C3, and is used to increase the current of the second-stage storage inductor L3. It should be noted that, due to the initial pumping, the second DSRD (D2) still is in a conducting state in spite of the reverse current. It will be shown later that, in the case of high Q factor of the first stage and under conditions L1 = L2, C1 = C2, most of the energy stored in L1, L2 will be transferred to the storage inductor L3 after discharge of C3, if the condition L3 = L1/2 is fulfilled. The capacitor C3, along with the added capacitance of D1, determines only the delay of energy transfer and the maximum voltage at D1. When the current in the inductor L3 reaches the maximum value, the diode D2 turns off. The charge stored in D2 should be adjusted by a pumping source to guarantee the mentioned condition. The energy stored in L3 is transferred to the diode D2 capacitance, the additional peaking capacitor C4, and to the capacitance of the SAS, D3. When the diode D2 voltage reaches the threshold voltage for SAS D3 to switch on, the capacitor C4 and the capacitance of DSRD are discharged into the load Rl.. The typical half-period τ times of the Figure 9.27 network are: 1. Circuit L1C1L2C2
T = τ = π L 1 C 1 ≈ 0.1 to 0.5 µs
2. Circuit L1C3L2C3
τ t = τ ≈ π L 1 C 3 ≈ 0.5 to 20 ns
3. Circuit L3C4
τ f = τ ≈ π L 3 C 4 ≈ 0.5 to 2 ns
4. Circuit C4Rl
τ ≈ C 4 R 1 ≈ 0.2 to 1 ns
The energy passing through each stage cannot be less than the output energy in the load. It may be shown from Equation (9.37) that the output energy Q of the cell that compresses the pulse down to 1 ns FWHM by use of a single DSRD stack is limited by the value Q Ý 10 J. For the pulse having 0.1 ns FWHM, we get from Equations (9.34) and (9.41) that Q Ý 4 × 10–2 J. These levels of energies should be taken into an account for further detailed consideration. Submicrosecond Circuits In Figure 9.27, the total energy is first stored in capacitors C1 and C2, which are charged up to the voltage UC. When the switch S1 closes, the oscillations in the S1L1C1D1 circuit start. During the first half-period, the current flows through the diode D1 in the forward direction. Most energy losses are connected with semiconductor devices, i.e., the closing S1 and opening D1 switches. We can easily get a quality factor QL of the value more than 50 for nonferrite inductors L1, L2. The Q-factor for small ceramic capacitors may be as high as 30 to 50. During 3/4 of the LC cycle, which is the total operational time of the cell, total energy losses in L and C are less than 3% and will not be taken into account further. The losses connected with the closing switches may be separated into the commutation losses QC during turn-on time, and the residual losses QS determined by residual voltage in the on-state. Figure 9.27b shows the transient switching process. During turn-on time τfs, © 2001 CRC Press LLC
the switch voltage drop decreases from the initial stationary value UC down to the sustaining value US. The last may be characterized by an on-state resistance RC. To estimate commutation losses, we consider a simplified case where the voltage decrease is linear. Computer modeling has shown that the difference between a linear type of the voltage decrease and a nonlinear one is less than two times, or minor. In the LC circuit the current shape is sinusoidal. For this case, we get t fs
QC =
U0
- ( τ – t )I m sin ϖt dt ∫ ----τ fs fs
(9.42)
0
where ϖ = frequency of the L1C1 circuit Im = maximum pumping current When turn-on time τfs is far less than the half-period ( τ fs « 1 ⁄ ω ), Equation (9.42) may be simplified to 2
2
U 0 I m ωτ fs Q C ≅ --------------------6
(9.43)
For the residual losses, we can write π
Qs =
RS Im
- sin ϖt dϖt ∫ ---------ϖ 2
0
πR S I m = ------------2ϖ
Taking into account that I m = U C ⁄ ρ (where ρ = from Equations (9.43) and (9.44) that Q 2 Q C ≈ ------0 ( ωτ fs ) , 3
(9.44)
L 1 ⁄ C 1 is the L1C1 impedance), we obtain
Q0 RS π Q S = --------------ρ
(9.45)
2
C1 UC - = the energy stored in C1. where Q 0 = -----------2 From Equation (9.45), it follows that the commutation losses have a square law dependence on the turn-on time, and the residual loses are proportional to the on-state resistance. During the second half-period, the current passes through the additional bypass diode Db in Figure 9.27. It may be shown that losses at Db are a small part of the losses in the DSRD. In Section 9.3.3, it was shown that, for DSRD, the condition DT ⁄ W ≤ 0.2 , where T = π ⁄ ω is the half-period, should be fulfilled. From this condition, we can get 2
2 ( τf Vs ) W T ≤ ---------- ≈ --------------25D 25D
(9.46)
The condition of Equation (9.46) shows that, the faster the DSRD, the smaller the permitted half-period. As follows from Equation (9.45), for the given turn-on time of the primary switch τfs and turn-off time of DSRD τf , the decrease of T, or increase of ϖ, improves the switching properties of DSRD but worsens the efficiency of the primary closing switch. The two-stage network consid© 2001 CRC Press LLC
ered here permits using a slow DSRD for the first stage with good efficiency to get short halfperiod for the second stage with fast DSRD. Hence, the second L2C2S2 circuit is switched on after the half-period; the current flows in the circuit only during a quarter of a period. The commutation losses remain the same as in the first L1C1 circuit, but the residual loss is two times less than in Equation (9.44) and, for total losses in closing switch Qct, we can get 2
( ωτ f ) 0.75R s π Q ct = Q 0t -------------- + ------------------ρ 3
(9.47)
where Q0t = 2Q0, which is the total energy initially stored in both LC circuits. After 3/4 of the period of LC circuits, the total energy minus losses is stored in inductors L1 and L2. If the losses in the switches S1, S2 are small, so that Qct « Qot, the currents in L1 and L2 are equal during the second half-period. The diode D1 current is the sum of L1 and L2 currents, and the diode turns off when the total current reach the maximum value. As was mentioned before, when D1 is turning off, the L1 and L2 currents are charging the capacitor C3 and the capacitance of DSRD. Simultaneously, these currents increase the current in the inductor L3. Nanosecond Cells The process of the energy, or current, transfer from the inductors L1 and L2 to the inductor L3 is presented in simplified version in Figure 9.28. Two inductors L1 and L2 from Figure 9.27 are I1
I3 2
L1
L3
S2
I2 C2
FIGURE 9.28 Simplified energy transfer between inductors for nanosecond cells.
represented as one storage inductor L1 with the current I1, which is equal to the sum of the currents in L1 and L2. The diode D1 is modeled by the opening switch S2 and the capacitor C3, which linearly approximates the capacitance of D1 and possible additional external capacitor C3 as shown Figure 9.27. It may be shown that the voltage restoration on DSRD may be represented as the charging of the space charge region’s nonlinear capacitance and of a geometrical capacitance of the diode. Computer modeling shows that the linear approximation considered here conserves the main features of the processes under investigation. It may be shown as well that the representation does not contradict the representation of switching as a widening of the SCR used in Equation (9.28). Both representations are different views of the same physical phenomenon. Initial conditions for circuit on Figure 9.28 are t = 0, I1 = Im, I3 = 0, U2 = 0 where U2 is the capacitor C3 voltage at node 2. It may be shown that the currents and voltage in the circuit are described by the next set of equations. © 2001 CRC Press LLC
Im i M ρ n sin ω 2 t - = -----------U 2 = -------------------------sin ϖ 2 t C 3ϖ2 1 1 + L ---- L 3 I m ( 1 – cos ω 2 t ) I 3 = ----------------------------------L 1 + -----3 L1 ( 1 – cos ω 2 t ) I 1 = I m 1 – -----------------------------L 1 + -----3 L1
where ρ n =
(9.48)
1 + ( L1 ⁄ L3 ) 2 L 1 ⁄ C 3 , ω 2 = ---------------------------L1 C3
Thus, we have that, if L1 = L2 at the time τ t = π ⁄ ϖ 2 , I1 = 0, U2 = 0, I3 = Im, then the total energy stored in the inductor L1 is transferred to the inductor L3. In the case of L3 « L1, the current transferred into L3 may be two times more than the initial current I1. So, not only voltage but also current multiplication is possible in DSRD pulse compression networks. The price for the multiplication is the bad efficiency, which, in accordance with Equation (9.48), drops to zero level for the current doubling so that I1 = –Im when L3 = 0, and ¼ = ω2t. In the case of the poor Q-factor due to losses in S and D1, the current’s symmetry with the first cell of Figure 9.27 with C1L1C2L2 is broken. The currents at the moment of their maximum value, in the inductors L1(I1m), L2(I2m) are not equal, so I1m < I2m. A part of the current I2m is transferred into the inductor L1, when the DSRD breaks the current. It may be shown that, when the current of the inductor L3 reaches the maximum value, the currents in L1 and L2 are I 10 = ( I 1m – I 2m ) ⁄ 2 = I 1m – 2ρ ( I 1m + I 2m ) ⁄ L 1 ω I 20 = ( I 2m – I 1m ) ⁄ 2 = I 2m – 2ρ ( I 1m + I 2m ) ⁄ L 2 ω These expressions confirm the conclusion made in the Section 9.1.2 that, to decrease the residual currents, it is necessary to adjust the ratio of the inductors so that L1 > L2 when I1m < I2m. The lost energy Qb connected to these currents is 2
( I 1m – I 2m ) - Q0 Q b = -------------------------2 2 ( I 1m + I 2m )
(9.49)
where Q0 is the energy initially stored in L1 and L2. In accordance with Equation (9.48), the maximum voltage at the opening switch U2 decreases when the period of energy transfer τt increases. In the preceding section, we did not consider the DSRD losses during the first submicrosecond cycle of power transferring from capacitors C1 and C2 to the inductors L1 and L2 of Figure 9.27. Now, we see that, during the next nanosecond cycle of power transferring to the inductor L3, the voltage on DSRD D1 in off state reaches the high value so that U2 » UC. Therefore, the residual voltage on the DSRD’s stack during a conducting state may be high enough, and the energy losses © 2001 CRC Press LLC
in the stack may play a significant role. The number of diodes in stack N is determined by maximum value of U2 from Equation (9.48). The requirement of the shortest turn-off time used in the Section 9.5.1 now may be omitted, and a current density far lower than saturated js may be used, which helps to decrease the losses. The turn-off time τoff may be estimated from Equation (9.8) to be Wn Nd q εE = --------a τ off ≈ ---------------j j
(9.50)
Using Equations (9.30) and (9.50), one can find the voltage drop on the diode stack Us in the on state to be 4U 2 W n U s ≤ ----------------------------------3E a τ off Tµ n µ p
(9.51)
where T = π ⁄ ω is the half-period for the submicrosecond part of the compression network in Figure 9.27. It should be noted that voltage drop in the diode is insensitive to the current direction. Equation (9.51) shows that voltage drop does not depend on the current. Actually, the current dependence is concealed in Equation (9.50). The diode current I and the needed area of the diode S are chosen to satisfy the Equation (9.50) so that S = I/j. It should be noted that W and T are connected by the conditions of Equation (9.46) and cannot be chosen separately. Substituting Equation (9.46) into (9.51), we have 20U 2 D U s ≤ -------------------------------3E a τ off µ n µ p
(9.52)
Equation (9.52) shows that the turn-off time τoff should be as long as possible. The maximum value for τoff is τ off ≈ π ⁄ ϖ 2 ≈ τ 1 . That means that the external capacitor C2 is absent, and only the intrinsic capacitance of the opening switch DSRD is used. It is well to bear in mind that, although the last condition improves the efficiency in some cases, a great deal of expensive semiconductor material may be needed. The estimated energy loss Qs, derived from Equation (9.52), is Us Im π Q s ≤ -------------------------L 3 1 + ---- ϖ2 L 1
(9.53)
Equations (9.48) and (9.52) also show that the increase of the half-period τt decreases the maximum voltage U2 and decreases losses. Nevertheless, in the case of a long τt, two effects that worsen the performance of the network start to play a role, as is shown in Figure 9.27.
© 2001 CRC Press LLC
1. During the half-period τt, the inductor currents recharge the capacitors C1 and C2. The charge returned to C1 and C2 may be derived from Equation (9.48). The energy returned to the capacitors C1 and C2 (Qr) is 2
2
2
2
Im τt L3 4Q 0 τ t ω - = -------------------Q r = -----------------------------------------------( L 1 + 2L 3 ) ( C 1 + C 2 ) L 1 2 + --- L 3
(9.54)
This energy cannot be recuperated and will be lost. 2. The longer τt, the larger the charge that must be pumped into the second DSRD D2 of Figure 9.27 and the pumping D2 current. The impedance of the pumping source with large current capacity must be small. In this case, a noticeable part of the DSRD current may be diverted into the pumping source, causing additional losses. When the second DSRD D2 breaks its current, then the current of L3 charges the capacitance of DSRD D2, peaking capacitor C4 and the capacitance of SAS D3. If the pulser under consideration is designed to generate nanosecond pulses, then the load resistor shown in dashed lines is connected in parallel with D2 to node 3. The other parts, shown in Figure 9.27 on the right-hand side of D2, are excluded. In the last case, part of the current of L3 is diverted into the load resistor. When the voltage at DSRD reaches maximum value, the capacitances of DSRD and of C4, if they exist, are discharged into the load. Assuming the simplification made for Figure 9.28, then this pulse-forming process may be illustrated by Figure 9.29. The most I3 3 L3
S4
I/
I4 S3
IR Rι
C4
FIGURE 9.29 Simplified operation of a nanosecond pulser.
interesting case for pulse forming is the case of unipolar pulse with no oscillations and having maximum amplitude on the load. It may be shown that, for this case, then the next condition that must be satisfied is 1 L R l = --- -----32 C4
(9.55)
and the pulse shape is bell-like with FWHM Ý 5 RlC4. –t I m t exp --------------- 2R 1 C 4 U 3 = -------------------------------------2C 4
(9.56)
where Im is the broken current, and C4 is the total capacitance of DSRD and the additional capacitor. © 2001 CRC Press LLC
The pulse amplitude is U m = 0.7I m R 1
(9.57)
The energy accumulated in the diode capacitance C4 is not lost and mostly is transferred into the load later. As was shown, the conductivity current exists only in the neutral region of DSRD during the phase of fast recovery. The losses Qnr caused by the current are similar to the losses evaluated for SAS in Section 9.4. E s W n 2U m I m E s - ≈ ----- Q l Q nr ≤ ---------------------------Vs Ea Ea
(9.58)
where Ql is the energy transferred to the load. The condition of the shortest turn-off time j = js = qNdVs was used in Equation (9.58). These losses are small so that Qnr/Ql ð 5%. If the current is less than the saturated value js, then the losses are smaller than in Equation (9.58) and proportional to j/js. For estimation of the voltage drop on D2 during the on state, Equation (9.51) can be used with minor modifications. 4U m W n U S2 ≤ -------------------------------3E a τ f τ t µ n µ p
(9.59)
where Um is the output pulse amplitude, and τf is the front of the pulse. When the D2 capacitance and capacitor C4 are used to store the energy, the voltage at D2 (node 3) before the closing of SAS (D3) is determined by U3 ≈ I3 ρ3
(9.60)
where ρ 3 = L 3 ⁄ C 4 , and C4 includes the capacitance of D2 and D3. The losses in D2 are determined by Equations (9.58) and (9.59) as in the previous case but this time using the new value of U3 from Equation (9.60). It may be shown that the energy stored in the nonlinear capacitance of the space charge region (QSCR) is S q N d W SCR U SCR q*U SCR C4 Ud Q SCR = ----------------------------------- = ----------------- = -----------3 3 2
(9.61)
where C4 = 4εS/3Ws = 4εSVs/3τf is the linear equivalent of the capacitance of SCR, and WS is the total thickness of all diodes in the stack; that is, when Wn = WSCR, then the SCR overlaps the n-layer. When this result is compared with that well known for linear capacitor, it is apparent that the only difference is the multiplier 3/2. Subnanosecond Cells When the SAS D3 of Figure 9.27 closes, then the capacitor C4 discharges into the load. As was shown, there is an optimal value for the voltage rise rate on one SAS of dU/dt Ý 1012 to 3 × 1012 V/s, depending on the type of SAS. The delay of the breakdown during turn-on is 1 to 2 ns. When © 2001 CRC Press LLC
many SASs are connected in a stack, the dU/dt value is increased proportionally to the number of stacked SASs, but the delay keeps the smaller value of one diode. Equation (9.58) determines the losses in DSRD during the phase of fast restoration and is valid for cases using DSRDs as storage capacitors. The maximum DSRD voltage given in Equation (9.60) should match the maximum switching or turn-on voltage Uon of the SAS. Experience shows that, for improved switching properties of low resistance in the turn-on state, short turn-on time, and reduced jitter at the moment of the turnon when there is non-zero value of dU/dt, the DSRD voltage should be slightly more than Uon. At 2 the time t = π ⁄ 3ζω , where ω = 1 ⁄ L 3 C 4 , the value of dU/dt is equal to 50%, and the voltage is equal to more then 85%, of the maximum possible values at the times t = π ⁄ 4ω , and consequently t = π ⁄ 2ω . If the SAS voltage is matched to ensure switch-on at the moment t = π ⁄ 3ω , then only 75% of the energy stored in inductor L3 is transferred to the capacitor C4. The energy remaining in the inductor L3 will not be lost but will be transferred into the load as well, but at the time of transferring τ L . Then, L τ L ≈ -----3 Rl
(9.62)
may be many times longer than the time that capacitor C4 (τl) takes to discharge into the load resistor Rl. τl = C4 R1
(9.63)
If the SAS turn-on time τs is many times less than discharge time τs « τl, then the loss of the load voltage ∆U = U3 – Ul in the first approximation is τ ∆U ≈ U m ----s τl where Ul is the load peak voltage. For the energy losses QCS during turn-on time, assuming a linear approximation of the SAS voltage drop, we have τ S Q C4 Q CS ≤ ------------3τ l
(9.64)
where QC4 is the energy stored in capacitor C4. For the losses QSon, which are due to the residual voltage drop determined by Equation (9.40), we have E Q Son ≈ Q C4 -----SEα
(9.65)
When the total capacitance C4 is represented by only the nonlinear DSRD D2 capacitance, the discharge of the diode into the load is accompanied by losses in neutral region of the diode. During the period of charging of the diode from the inductor L4, the current density is chosen to be saturated so that js Ý qNdVS. During the discharge period, the current may be many times more than during the charging process. The discharge process for the diode capacitance is different from those of the usual linear capacitor. © 2001 CRC Press LLC
Let us consider the case, shown in Figure 9.30, of the DSRD discharge into the low value resistor R1 after the fast switch SAS turns on. Let the turn-on time of SAS τon and the time of the discharge τ1 Ý R1C, where C ≈ Sε ⁄ W can be many times less than the time of the electrons flight through the n-layer τs and τon « τs = Wn/Vs, τ1 « τs. It was mentioned that, in DSRDs, the space charge region under maximum voltage overlaps the n-layer, as shown in Fig 30. When the diode voltage drops to zero value, then the space charge distribution is not changed, that is, dE/dt = qN/ε because of the short discharge duration. As my be seen from Figure 9.30, the SCR is divided into a positive and a negative field region. The charge QR passed through to the load during the time of the voltage drop is QR =
∫ I dt
εE a S = Sε∆E = ----------2
(9.66)
For the stored energy Q0 in the diode before the discharge, and afterward Qp, we have 2
2
SεE a W -, Q 0 = ---------------6
SεE a W Q p = ---------------24
SAS
(9.67)
I1
+ DSRD
R1
E ,n Em
n
Em 2 Vs
Vs
0
x Vs
js = qvs N d
Vs
p+
n
FIGURE 9.30 Nanosecond pulse forming and the DSRD internal transients. © 2001 CRC Press LLC
- Em 2 n+
Equation (9.67) shows that only 3/4 of the stored energy can be transferred to the load during the short time period τp « τs. Then, the process of neutralization begins as electrons come from the n+-layer into the negative field region and compensate the space charge. The neutral region of the compensated charge propagates into the n-layer with the saturated electron velocity. In the neutral region, the current is the sum of the displacement current and the conductivity current j = ε dEn/dt + qNdVs. In the collapsing SCR, only the displacement current j = ε dEscr/dt exists. From the conditions of continuity of current and zero voltage drop, as shown in Figure 9.30, then we have j j r = ---s , 2
qN E ( x = 0 ) = ---------d ( W – V S t ) ε2
(9.68)
Equation (9.68) shows that the residual current jr flows during the flight time ÝW/Vs, and the energy of the field that remained after the fast discharge dissipates into heat during this time. The additional linear capacitor C4 of Figure 9.27 provides better efficiency, but the price is the slower voltage rise on the DSRD and SAS. When the energy stored in C4 is equal to the energy stored in DSRD and the loss of energy in DRSR decreases to 12.5%, then the voltage rise is two times less. This decrease of the rate can worsen the SAS performance.
9.5.3
TIME STABILITY
AND
SYNCHRONIZATION
As was shown before, the most efficient method of pulse generation is power compression using several compressing stages. In this case, the total delay τd of the output short power pulse from the first pulse triggering the primary switch, such as a thyristor or transistor, may be as long as several hundred nanoseconds; that is, 100 to 400 ns. As a rule, the delay τd depends on the output power so that more power means more delay. This delay is due to the fact that more powerful primary switches, such as thyristors or transistors, are slower. In many applications (for example when many pulsers work at the same load to increase power), the delay stability is a very important factor. For the mentioned case of the power summation of pulses having subnanosecond fronts, the absolute instability must be less than the front δτ < 100 ps , –3 and relative stability δ must be as low as δ ≈ δτ ⁄ τ d ≤ 10 . The delay instability consists of two components: a slow instability, or drift, when the change of delay period is minutes, and a fast instability, or jitter, which is the change of delay from one pulse to the next. Slow drift in pulsers may be as long as nanoseconds for the delay of hundreds of nanoseconds. The last large value is due to component warm-up. After warming for 20 to 30 minutes, the drift decreases to the nanosecond and subnanosecond range in tens of minutes, which depends on the temperature stability of the environment, mechanical vibrations, etc. For summing up many pulser outputs into the single load, or controlling the antenna radiation pattern, the slow drift can be compensated by phase locked loop (PLL) feedback. For example, at the pulse repetition frequency (PRF) of 1 kHz, it is possible to use one pulse per second (1 pulse from 1000) to adjust the change in position of the output pulses by changing the triggering pulse delay as shown in Figure 9.31. The fast and random changes from one pulse to the next cannot be compensated, and we will consider the pulse jitter in more detail. The main sources of delay instability for the most used pulse compression circuits may be represented as follows: 1. Instability of the delay between two circuits triggering the primary switches S1 and S2 2. Instability of the L1C1 and L2C2 circuits as a result of the instability of half-period oscillation of the LC circuits 3. Instability of the turn-on delays of the primary switches S1 and S2 with respect to their triggering © 2001 CRC Press LLC
4. Instability of delays DSRDs of the D1 and D2 turn-off moments 5. Instability of the high-voltage sources charging the primary storage capacitors C1, the C2 peaking capacitor, and the blocking capacitor C5 Let us consider the causes of instability more closely. 1. There are well known approaches to decrease the driving circuits instability, or jitter. For example, there is the direct counting of high-frequency stabilized oscillations using miniature LC delay lines and so on. The applications of such methods in sampling and digital oscilloscopes allows us to detect jitter as small as tens of picoseconds for delays as long as 100 to 200 ns. 2. It is a well know fact that the relatively long (i.e., hours) time stability of LC circuits of high-frequency generators operating at tens of megahertz may be as high as 10–3 to 10–4. The frequency drift for short time periods of less than 10–2 seconds, corresponding to our pulse repetition rate, is far less. That means that jitter of half-period δT LC , determined –4 by LC stability, is less than δT LC ≤ 10 T LC ≈ 100 ps where TLC is the LC’s half-period. 3. The primary switches S1 and S2 may be power thyristors or either FET or bipolar transistors. The delay of these devices depends on the ratio of the gate current to the maximum collector current. The higher the ratio, the less delay. As was shown in Section 9.3, for the case of switching on LC circuits, two different cases are possible. a. The current rise dI/dt is limited by the maximum value of the switch dIm/dt factor. b. The current rise rate is limited by the LC circuit impedance; that is, the condition is fulfilled that dI U -------m- » ------0 dt L
(9.69)
where U0 is the C1,C2 charging voltage, and L is the inductance of L1 or L2 inductors.
Pulser 1 coupler
Delay variable
Delay comparator coupler
Load
Pulser 2
triggering pulser
Delay
FIGURE 9.31 Block diagram showing how to use one pulse per second to adjust the pulse repetition frequency. © 2001 CRC Press LLC
As was earlier for the second case described in Equation (9.69), the energy transfer from the capacitor to the inductor is more efficient. Therefore, the energy loss is less, as shown in Equation (9.43), and so the influence of the turn-on process on the current rise is low. In this case, the device turn-on delay is determined by the collector voltage drop, but not the current rise, up to some level—usually 0.9 times the maximum. As a first approximation, the delay time for bipolar devices is determined by the diffusion time across the gated base layer, τdif and the time of the collector space charge region discharge τSCR. 2
W τ dif ≈ ------g- , 2D
1 τ SCR ≈ --- 2εU 0 qN d jg
(9.70)
where jg is the triggering current density at the collector, U0 is the collector voltage, Nd is the collector layer doping, Wg is the base layer width, and D is the carrier diffusivity. The value of τSCR may be made as low as several nanoseconds, even for high voltages and power devices, by the use of fast power triggering. The minimal discharge time is limited by the flight of carriers through the space charge region. For high-voltage devices, the time is near 1 ns per 1 kV of blocking voltage. The estimated τdif for 1 kV rating power bipolar transistors or thyristors is equal to tens of nanoseconds. This value corresponds to the unity current gain: after τ = τdif, the collector current discharging the SCR capacitance reaches the triggering current value, provided that the LC circuit’s current during this delay time is small due to the dI/dt limitation that dILC/dt ð U0/L and ILC ð (U0/L) τdif. It is possible to decrease the time during which the collector current discharging the SCR capacitance reaches the needed level by an increase of the triggering or gate current. But this way is not very efficient, because the delay is decreased sublinearly with the current increase. The most efficient way is to improve performance is to decrease the base layer width and the diffusion time τdif. The extreme position of this approach nearly overlaps with the field-effect transistor case where all triggering, or input current, is spent to charge the gate capacitance which is more than the collector’s. So, it is possible to achieve a delay of the power switch τd as short as tens, or less than 10, nanoseconds. It is well known that semiconductor carrier mobility and diffusivity depend strongly on temperature. When the temperature changes are in the range of 0° to 40°C, the mobility change is as high as tens of percent. But the temperature causes a relatively slow change, so it is drift and the jitter that is considered here. Let us consider the influence of random carrier motion on the jitter. The times of momentum relaxation in semiconductors are very small, on the order of 10–13 s, and the number of carriers in the volume of the power device is large, N > 1012. So, the relative fluctuation of the number δN is 1 –6 δN ∼ -------- < 10 N The relatively fast current fluctuation has approximately the same value. Then, it is evident that the relative fluctuations in the time intervals during which the charge transferred by the current reaches some definite level will have approximately the same value δτ d ≈ δQ ≈ δI ≈ δN . So, the jitter connected with the carrier fluctuation is © 2001 CRC Press LLC
δT d ≈ δτ d × τ d and is far less than 10–12 s. It should be noted that the same estimates are valid for processes of accumulation and dispersal of plasma in a DSRD. 4. The time of the current break by DSRD, Tb, is determined by the following conditions: the charge, the number of carriers stored in the diode during the pumping phase Q+, and by the forward current I+, which is equal to the charge Q– pulled out by the reverse current I–. T --2
Q+ =
∫ I+ dt 0
Tb
= Q_ =
∫ I_ dt
(9.71)
0
Equation (9.71) is valid if the carrier losses are small. As was mentioned before, DSRDs must have lifetimes τp that are as large as possible. Modern technology can make the times larger than 100 s with recombination losses less than «10–3 for half-periods of about 100 ns. The second possible source of losses is carrier leakage through p+n and nn+ asymmetrical junctions, and near the contact p+n and nn+ junctions in quasi-symmetrical diodes. The leakage may be described by injection coefficients of the p-n junctions. The shift of the break point Tb due to charge losses is proportional to the charge losses so that ∆T 1–γ T ---------b- ≈ ------- + ----------T ⁄ 2 2τ p γ
(9.72)
In the case of γ Ý 0.999, τp > 100 s and T/2 Ý 100 ns, so that Equation (9.72) yields ∆Tb < 0.1 ns. It is evident that fast fluctuations, or jitter between two pulses δTβ, are determined by fluctuations of τp and γ and must be far less than the ∆Tb evaluated above. 5. Instability of the voltage source δU could affect the jitter in a variety of manners. a. When both capacitors C1 and C2 are charged from the single source, the change of the source voltage changes both Q+ and Q–, preserving their ratio and, therefore, the position of the breaking point. Therefore, Tb is not sensitive to δU. b. The delay of SAS switching on τSAS is not more than several nanoseconds (usually less than 3 ns) from the start of the fast voltage rise. It is well known that devices that use impact ionizations, such as avalanche photodiodes, etc., are subject to strong current instabilities. The main source of current fluctuations in the devices is the strong relative fluctuation of the small number of primary carriers initializing impact ionization. It was shown in Section 9.5.2 that, in SAS, the number of initial carriers starting impact ionization in the neutral region is large, being the majority carriers-electrons. Their concentration is ~1014 cm–3, and their number N > 1011 leads to the relative –5 fluctuations of the number δN ⁄ N ≈ ( 1 ⁄ N ) ≤ 10 . The number of holes generated by ionization in the neutral region P0 is about 106, and their relative fluctuation is –3 δP ⁄ P 0 ≈ 10 , which is far larger than that of the electrons. The evaluation of the delay time due to that factor is δP δτ SAS ≈ τ SAS ---------0 < 3 ps P0 which is a rather small value. © 2001 CRC Press LLC
As was shown, the power supply instability does not bring the DSRD’s current breakpoint instability, but the supply instability could cause strong instability of the closing switches delay for the SAS and the primary ones S1 and S2. In primary switches, it follows from Equation (9.70) that the instability of the turnon delay δτd is proportional to the voltage instability δU0, so that δU δτ d ≈ τ d ---------0 2U 0
(9.73)
The same equation, taking into account the new value of the delay, is valid for the SAS case. It follows from Equation (9.73) that the instability of the power supply, provided –4 that the jitter is less than 10 ps for a delay τd Ý 10 ns, must be better than δU 0 ⁄ U 0 ≤ 10 . The total fluctuation of the output pulse delay δτ out may be represented as δτ out ≈
∑ δτdi
(9.74)
i
where δτ di are partial fluctuations. An estimation for Equation (9.74) shows that it is possible to get a total fluctuation, or jitter, as small as tens of picoseconds. It should be remembered that slow drift, which is orders of magnitude larger (i.e. nanoseconds), may be compensated by using a feedback loop. The same Equation (9.74) determines the average dispersion δτm of jitter for the case of m pulsers working into the same load. δτ m ≈ δτ d m
(9.75)
For example, in the case of 10 identical pulsers working at the single load, the total dispersion is δτm Ý 30 ps for the 10 ps jitter of a single unit. Therefore, synchronization of 10 pulsers with 100 ps fronts is feasible. The evaluations made above were checked at the example of two identical pulser assemblies on a large printed circuit board with 7 kV output into a 50 ¾ load. Two outputs were fed into the load resistance through a star-like matching circuit, shown in Figure 9.32, which had 6 dB attenU1
R1 R3
U2
R2 R1 = R2 = R3 = 16.7Ω
Load
U 1 + U2 2
FIGURE 9.32 A pulser using delay lines.
uation. Each pulse has about a 200 ps front and ~1 ns decay at PRF Ý 500 Hz. After adjusting the delay between outputs for a zero level, the load pulse front and voltage were equal to the output of each pulser. Then, during half an hour due to the slow drift of the delay, the load pulse amplitudes © 2001 CRC Press LLC
decreased slightly, by about 20 percent, and two “peaks” appeared on the load pulses. The long time for the pulse divergence to appear means that the synchronization control by means of the phase locked loops mentioned above is feasible.
9.5.4
AVERAGE POWER
AND
PULSE REPETITION FREQUENCY
General Considerations Section 9.5.1 considered the problems of peak power and electrical efficiency. It is evident that the output average power Pa is the product of the pulse energy QR and PRF, fp. Pa ≈ QR fp
(9.76)
The limitation on the average power may be determined by Equation (9.76) when there are limitations on either the pulse energy or the PRF. It is also possible that the average power is limited by the heat sink and the overheating of some critical parts, usually semiconductor devices. In this case, Equation (9.76) should be used to determine pulse energy when the average power and PRF are given. From the cases mentioned, it follows that, for the general case, the average power is determined by a very complicated interplay of different factors and may be clearly stated only for an actual pulser design. Therefore, we will first consider the thermal and heat sink limitations on average power. Then, after considering nonthermal PRF limitations, it will be possible to estimate the average power for an actual pulser as minimal from two thermal and nonthermal conditions. Thermal Limitations It is evident that thermal limitations on devices in the chain of pulse compression cells may differ strongly. The overheating of primary switches, such as thyristors and transistors, is a well known problem that has been studied elsewhere. We will consider here only DSRD and SAS heating. DSRD Heating Semiconductor thermal diffusivity, DT, is generally small, for example silicon has a Dt Ý 0.7 cm2/c, and so the heat from dissipated energy cannot be quickly removed from the thick semiconductor bulk to an external heat sink during the short heating time. The diffusion time of heat Tw through the typical n-layer of thickness Wn = 10–2 cm is w
W τ w ≈ ------n- ≈ 100 µs DT which is much longer than the half-period time during which the energy dissipates in the layer. The temperature increase in the n-layer Tp during the pumping time ∆τp may be estimated from Equations (9.46), (9.50), (9.52), and (9.53) to be Qτ US j+ T+ 20U 2 DTε ∆T p = --------≈ ---------------- ≤ ---------------------------------------------VC τ 2W n C τ 3τ off t off µ n µ p W n C r
(9.77)
where Cτ is the specific thermal capacity, V = SWn is the n-layer volume, and T is the half period as shown in Equation (9.51) of the first- or second-stage compression cell described in Equation (9.53), depending on the case under consideration. © 2001 CRC Press LLC
In the case of τoff Ý 5 ns, T = 200 ns, U2 = 1 kV, Wn Ý 10–2 cm, C Ý 2 J/cm3K, then Equation (9.77) yields ∆Tp < 10–2 °C, which is a very small value. The other heating sources, including reverse current and transient switch-off processes described in Equation (9.58), only slightly increase the estimated value. That indicates very small deviation of the peak temperature from the average after warm-up time. After the system has warmed up, the average temperature is determined by the balance between the incoming heat Qτ fp and the outgoing into the heat sink Pout . Assuming the ideal heat sink with zero thermal resistivity, we get the thermal flow density ∆T P out ≈ λ ------W
(9.78)
where λ is the thermal resistivity of the semiconductor, and T is the temperature drop on the device bulk. It should be noted that the total thermal flow is proportional to the area of the devices, as is the total DSRD current. Therefore, the heat balance does not depend on the area of the device and the total device current. For a device with only one p-n junction and even one-sided cooling, from Equation (9.78), using the same approach as for Equation (9.77), we get WQ τ f p ∆T = -------------λ
or
∆Tλ f p = ----------WQ t
Q US j+ T 5U 2 DTε - ≈ ----------------------------------t ≤ ------------ε S τ off τ off µ n µ p
(9.79)
(9.80)
where S is the device area. For the case considered above, Equation (9.80) yields Qτ/S ð 10–3 J/cm2, which is a very small value. The increase of the DSRD temperature decreases carrier mobility and worsens the turn-off time. Experiments showed that heating up to greater than 150°C increases the turn-off time by about 30 percent. Assuming the maximum temperature of 150°C, we get from Equation (9.78) the maximum possible cooling capacity Pout Ý 10 kW/cm2. The maximum possible PRF from Equation (9.79) is fp > 108 Hz. The considerations made above show that a thin, low-voltage DSRD used at the first compression stage potentially has no heat limitation. In actual designs, the influence of external thermal contact resistance from the thin soldering interface, tungsten or molybdenum thermal expansion compensator, etc., may be large and may increase the thermal resistance ten times. However, the LC period time will still be the main limitation. The DSRD used in the second pulse compression stage, as shown in Figure 9.27, has far shorter turn-off times, τoff < 1 ns, and higher thermal losses may be expected from Equation (9.80), but the pumping time and reverse current period are shorter in nearly the same proportion as τoff. So, it may be expected that the density of thermal losses remains nearly the same as in the first-stage cases, as do the PRF limitations. The contribution of the DSRD turn-on state losses at the second stage on the total pulser efficiency is small with respect to the other losses, and it was not considered in detail in Section 9.4. But the losses may contribute significantly to the incoming thermal heat of DSRD and will be considered later. The balance between incoming and outgoing heat energy changes drastically in the case of high-voltage stacks of many DSRDs. Incoming heat flow from Equation (9.80) increases proportionally with the stack voltage. Outgoing heat flow in Equation (9.78) decreases proportionally with the thickness of the stack and voltage. © 2001 CRC Press LLC
For the DSRD stack case, Equation (9.78) may be rewritten as ∆TE P out ≤ λ -------------a U st where Ust is the stack voltage. For the case of Ust Ý 100 kV and Ea Ý 100 kV/cm, we have Pout ð 150 W/cm2. Again, we should remember that these estimates are approximate and do not include the heat resistance of interfaces between p-n junctions and the heat sink, which may be large. It was shown that the maximum working area of DRSD is limited by Equation (9.35), and so it limits the heat flow Pmax going into the heat sink so that P max ≤ P out S
(9.81)
From Equation (9.35), in the case of τf Ý1 ns, it follows that S ð 102 cm2, and Pmax ð 15 kW. The output average power at the load, P1 is P max P l = ----------1–η
(9.82)
where η is the second-stage DSRD efficiency. As was mentioned earlier, the efficiency is determined by the losses during the on-state and by the transient, or commutation losses of Equation (9.58). When the pumped charge Qp is small, εSqWn d µ n Q p < ------------------------µp and it may be shown that voltage drop at the diode during the on state is determined by j+ tj + µ p - W – ----------------U S = -------------qµ n N d εqµ d N d
(9.83)
Equation (9.83) shows that the diode voltage is less than the voltage drop on the undoped (not enriched) n-layer having resistance Rs = Wn /SqµnNd. The energy loss during pumping will be only less than Qp so that 2
R S I + πτ + Q p = -----------------4
(9.84)
where τ+ is the pumping current pulse length, and I+ is the current amplitude. Taking into account the equality of pumped and extracted charge, we have from Equation (9.84) τ Q p = Q _ ----_ , t+
2
πR S I – τ _ Q _ = -----------------4
(9.85)
where Q– is the energy loss at RS during the reverse current period, and I– is the break current. It follows from Equation (9.85) that losses during pumping are less than during reverse current, if the pumping period is longer, which is usually the case. Actually, the ratio τ–/τ+ may be as small as < 0.1, and the pumping losses in the second-stage DSRD may be neglected. It may be shown © 2001 CRC Press LLC
that the same Equation (9.85) is valid in the strong modulation case, because the carrier distribution depends only on the injected charge, not the actual curve of the current vs. time. The current must be near to the saturated value j– Ý js = qVsNd to get the shortest turn-off time. In this case, from Equation (9.85), we have πE S W n I – τ – πU m I – τ – E S Q – = ------------------------- = ------------------------4 2E a
(9.86)
It may be shown that Equation (9.86) is valid for the case of a diode stack as well. In the case of a bell-shaped pulse, which has the conditions of Equation (9.55), and taking into account Equations (9.55), (9.57), and (9.61), we have for the energy transferred into the load Q E πτ – -Q Q – = -----S -----------E a 16τ off l
Q l ≈ 8U m I – τ off ,
(9.87)
Equation (9.87) shows that the pulse compression should not be large for good efficiency. If τ – ⁄ τ + < 5, then the total DSRD stack losses are less than 10 percent when ξ > 90%, see Equation (9.58) as well, and the average output power can reach Pl > 100 kW according to Equation (9.82). It should be emphasized that the second-stage output high-voltage DSRD is the bottleneck in the process of pulse compression with respect to the average power. In accordance with Equation (9.87) and an estimated S Ý 10 cm2, we get Ql Ý 1 J, and the maximum PRF limited by heating is fp Ý 150 kHz. SAS Heating As mentioned earlier, DSRDs are insensitive to increased temperature, because their performance degradation is connected with mobility decreases only. Devices based on delayed ionization are more susceptible to the heating due to the fast, exponential increase of leakage current. It has been shown that large current leakage prevents the field intensity increase above the breakdown threshold and generation of a fast ionization wave with a high-density plasma tail. Experiments showed that fast switching under a moderated dU/dt Ý1 to 2 × 1012 V/s applied exists only at temperatures below 100°C. The one-pulse heating in SAS may be derived from Equation (9.77), where the incoming heat Qτ has a different value, partially considered in Section 9.4.4, so that Q τ = Q d + Q on + Q S • During the delay, E S τ d WS Q d ≤ j S -----------------2
(9.88)
where τd is the ionization delay τd Ý Um/U´. • During the turn-on process, all the energy stored in the electric field before the fast wave generation is spent on electron-hole pair generation and, in the end, will be converted into heat. This part of energy may be determined as with Equation (9.61). Additionally, after the wavefront in the wave tail, some energy is lost due convective current. This loss is determined by an expression like Equation (9.88). For the total losses, we have 2
2SεU j S E S WSτ f Q on ≤ ----------------m + ---------------------3W 2 © 2001 CRC Press LLC
(9.89)
where Um is the turn-on voltage. • During the turn-on state, taking into account Equation (9.40), we have Sτ p ε p E a E S W n E a ----Q S ≈ j m U on τ p S < ------------------------------- b τf
(9.90)
where τp is the pulse length, and Uon is the sustained on-state voltage in the SAS. Estimations based on Equations (9.88) through (9.90) and (9.77) for the case where τd Ý 2 ns, τf Ý 0.1 ns, τp Ý 0.2 ns, and Um = 3.5 kV, are Qd ------ ≤ 10 –5 J/cm 2 , S
Q on -------- ≤ 10 –3 J/cm 2 , S
QS ------ ≤ 10 –5 J/cm 2 , S
–1
∆T ≤ 10 C
The main losses are turn-on losses determined by the loss of the energy stored in the electric field. The one-pulse overheating is an order of magnitude larger than in the case of DSRD, but it still is small. From Equation (9.81) for the case of ∆T Ý 100°C, Ust Ý 100 kV and Ea Ý 3 × 105 kV/cm where the SAS Ea is more than in a DSRD, we get Pout ð 300 W/cm2, slightly more than for DSRD due to higher Ea. It should be stressed that the maximum area is small; for example, S < 1 cm2 for τf Ý 100 ps, in the SAS case, and the total allowable thermal input is less than 300 W. Using the above estimates for losses, we get from Equation (9.80) the maximum PRF fp < 300 kHz. For the maximum switched-on current of 2 kA, as shown in Equation (9.41), and taking into account Equation (9.50), we have the energy that may be transferred into the load Ql for the pulse width Ý 0.2 ns, so that I m U m τ f I m U m τ dec –2 Q l ≈ --------------- + --------------------- = 10 J 3 3 where τdec is the decay time, η Ý 90%, and the average load power limited by heating is Pl ð 3 kW. Nonthermal Limitations DSRDs In the preceding section, it was shown that thermal heating is not a potential limitation on the average power and repetition frequency of <1 kV pulsers. The average power is limited by the PRF, which in turn is limited by the length of the LC cycle for the circuit shown in Figure 9.27. Actually, the PRF is severely limited by the primary switches S1 and S2, which have to operate at megahertz PRFs. The discharge time of the storage capacitors C1 and C2 is on the order of 100 to 300 ns, but the capacitors must be charged before the next cycle from the power source. To get the capacitor charge time as short as the discharge period, it is necessary to switch off the primary switches S1 and S2 so that the turn-on and turn-off times are equally small. Power FET transistors have short turn-on and turn-off times near 10 to 15 ns and may be used for that purpose. But, at high PRFs in a constant (not a burst) mode of operation, the current switched by each of the transistors should be many times—an order of magnitude at least—less than possible at low PRFs. So, the total number of FETs used in a pulser must be very high, and the pulser design becomes more complicated. © 2001 CRC Press LLC
Drift step recovery transistors (DSRT) are the basis for a more efficient approach to high-PRF pulser design and will be discussed later. Silicon Avalanche Shapers One of the main conditions for effective SAS operation is a low concentration of initial carriers and low conductance current leakage. After the turn-on process, the volume of an SAS n-layer is filled by a high-density electron hole plasma of from 1014 to 1016 cm–3. It is evident that the next cycle of operation, or the next generated pulse, is possible only when the plasma is completely dispersed, or removed, and the remaining carrier concentration drops down to a very low level of about 106 cm–3. The ratio of charge carrier concentration from turn-on to turn-off states can be very large, e.g., greater than 109 to 1010 times. There are two ways of removing plasma from an SAS with a diode p+nn+ structure. 1. It can be accomplished by extracting carriers from the n-layer into the p+ and n+-layers. These extracted carriers must in turn be extracted from the layers into the contacts or disappear in them due to recombination. If the carriers remain in the p+, n+-layers during the next cycles, then the high SCR displacement current and conducting current in the p+,n+-layers returns the carriers into the SCR. 2. There can be recombination of carriers in the n-layer. After switching, the load current flows through the SAS in a reverse direction to extract carriers in the p+ and n+-layers. This process resembles the extraction process in the DSRD, but there are many differences that will be discussed. The current density in a SAS is an order of magnitude larger than in a DSRD. Therefore, during the extraction process, when part of the plasma is removed and the space charge region appears, the voltage drop across the SAS rises sharply and leads to high energy losses. It is a well known fact that, if the space charge region does not overlap the n-layer, which is the usual condition in an SAS before a fast triggering voltage is applied, generally, only part of the plasma may be extracted. Nearly complete extraction of plasma is possible only in DSRD due to the matched processes of plasma injection and extraction by the current. In an SAS, both processes are of a different nature and cannot be matched, so the complete extraction of plasma by current is impossible. Actually, for good efficiency, the current pulse width must be so short that only a small fraction of the generated plasma is removed from the SAS volume. It follows that the main way of plasma dispersal is recombination, so, for a high PRF, the lifetime of carriers must be short. The time τr needed for the plasma concentration to drop from nm down to the n0 level is n τ r = τ p ln -----m n0
(9.91)
where τp is the lifetime of the minority carriers. A value of τp < 1 s is needed to get τr Ý 20 s and fp Ý 50 kHz for the case of nm Ý 1016 cm–3 and n0 Ý 106 cm–3. It is common knowledge that the component of leakage current in an SCR due to thermal carrier generation jl is inversely proportional to the lifetime, so that qn i W SCR j l ≈ ------------------τp
(9.92)
where ni is the intrinsic concentration, about 1010 cm–3, in silicon at 20°C temperature, and WSCR is the SCR width. © 2001 CRC Press LLC
In the case of τp Ý 1 s and WSCR Ý 10–2 cm, Equation (9.92) yields jl Ý 10–5 A/cm2, and for the carrier concentration in the space charge region n0, we have j 7 –3 n 0 ≈ --------- ≈ 10 cm qV S which is a very high value, much more than the n0 Ý 106 cm–3 supposed before. Nevertheless it is possible to switch on an SAS with good efficiency for the case of a large n0 if the applied triggering dU/dt voltage rate is increased up to 2 × 1012 V/ for each p-n junction in the SAS. In this case, the estimate from Equation (9.91) should be corrected by using a new value for n0, but the correction is minor. It may be noted that ni in Equation (9.92) increases exponentially with temperature, and that dependence is SAS’s main heat limitation. Experiments have shown that increasing the dU/dt voltage rate up to 4 × 1012 V/s forces efficient SAS switching on, even at a 200 kHz PRF in the burst mode for an average PRF of 10 kHz. The SAS used in the experiment had a τp > 2 s. It should be remarked that we discovered superfast delayed switch-on in silicon high-voltage diodes with very short carrier lifetimes of less than 10 ns. Such short lifetimes have also been seen in diodes subjection to irradiation with light ions having energies greater than 5 MeV. A PRF of several megahertz is expected from such structures. Their leakage current is far smaller than expected from Equation (9.92), and this fact may be explained by strong trapping of carriers at deep levels in which they were generated by irradiation. We consider such structures to be very effective as SAS, but our investigations are still at a very early stage.
9.5.5
CIRCUIT DESIGN
FOR
GENERATING DIFFERENT PULSE SHAPES
General Considerations The new devices are DSRD opening and SAS closing switches. Their main features are • They have only one stable off-state in which they can remain for an infinitely long time, and their on-state is relatively short. • They are two-electrode devices, and the triggering pulse is applied to the same electrodes through which the load current passes. Because of these features, the devices are most efficient when used in pulse-compression circuits for the bell-shaped pulses considered earlier in Section 9.1.2, Figures 9.5 and 9.27. Nevertheless, these devices may be used for generating and shaping a large variety of waveforms. As a rule, in these cases, the devices’s performance is degraded and/or some additional limitations appear. Some examples of step-like pulses have already been given in Section 9.1.2, so we will consider them here. The combined thyristor-diode closing switch (TDCS), shown in Figure 9.7, emulates a power closing switch with a turn-on time as fast as a DSRD’s turn-off time. The method has disadvantages in that all the energy stored in the additional capacitor Cp is lost. The lost energy fraction is equal to the ratio of energy stored in Cp and the total energy stored in the pulse forming network (PFN), as in the case of C1 + C2 + C3 in Figure 9.7. A DSRD may be used to shape the front of pulse current generated by other means, such as shown in Figure 9.6. Due to a pumping current length limitation, the separating inductor LS cannot be very large. The inductor shunts the line after the DSRD opens and distorts the flat part of the pulse by bending it down. © 2001 CRC Press LLC
It may be shown that the decay time τd of the pulse due to the pumping circuit is 2
U+ τ+ τ d ≈ -----------U p τ in
(9.93)
where U+ is the pumping source voltage, τ+ is the pumping pulse length, Up is the shaped pulse amplitude, and τin is the pulse front before shaping. In the case of U+ = UP, τ+ Ý 200 ns and τin = 40 ns so that Equation (9.93) yields τd Ý 1 s. In the case of the SAS shown in Figure 9.9, the rate of voltage rise must be higher than 1012 V/s and the form to be shaped less than 3 ns. Thus, the pulse to be shaped must be generated by one of the DSRD circuits described above. After switching on, plasma generated in the diode is extracted by the current, the SCR appears, and the voltage drop on the SCR distorts the flat part of the pulse. The time of plasma extraction in diodes is less than 10 ns; that is, the pulse decay is limited by the same value. In Sections 9.4.5 and 9.4.6, it was shown that, in transistor- and thyristor-like structures, the plasma extraction is suppressed due to regeneration after switching. An SAS based on thyristors may be in the on-state indefinitely, and they are the best choice for such applications. Exponential pulses with short fronts and a long decay may be generated by a symmetrical LC circuit such as shown in Figure 9.5. In this case, the decay time τd Ý L/R1 should be larger than the DSRD turn-off time. It should be remembered that, in the long decay time case, a considerable part of the energy will return to capacitors C1 and C2 in accordance with Equation (9.54). The pulse may be shaped further by use of an SAS, as shown in Figure 9.9, where using SAS on thyristors is preferred. Rectangular pulses with short leading and trailing edges may be effectively generated by the same circuit shown in Figure 9.5. The inductors L1 and L2 must be substituted by transmission lines T1 and T2, or PFNs with impedances ρ1 and ρ2. The operation of the transmission line circuit version of the circuit in Figure 9.5 is very close to the same circuit with inductors. The energy from capacitors C1 and C2 is transferred to the magnetic field in the lines. When the DSRD breaks the current, the lines discharge their energy into the load. If the line impedance is matched to the load Rl = ρ1/2 = ρ2/2, then the circuit generates the rectangular pulse with a flat summit and a length of 2τl where τl is the delay time of the line. The load current Il is U0 C1 I l = ---------------- = I1 = I2 τl ρ1
(9.94)
where U0 is the voltage on capacitor C1. It should be noted that, when the pulse current is flowing, the capacitors C1 and C2 are recharged, and part of the energy stored in the lines is lost in accordance with Equation (9.54). Therefore, only short pulses of 2τl « T, where T is the half-period of the LC circuit, can be generated with good efficiency. For long pulses, greater and 100 ns, the circuit with TDCS shown in Figure 9.7 is more efficient. High-Frequency Circuits The DSRT (transistors) may be very effectively used for generating very high PRF pulses, and the circuit is shown in Figure 9.33. When a triggering pulse is applied to the base-cathode electrodes, in accordance with the process described in Section 9.4.4, the DSRT Q1 closes. The current in the inductors L1 and L2 rises, as shown in Figure 9.34. The diode D1 prevents the inductors L2 from being shunted by the diode D2 (DSRD). The current Ip of the circuit RpLp pumps D2. © 2001 CRC Press LLC
Some time after the end of the triggering pulse, the DRST quickly opens and breaks the conductivity current. The inductor L2 current I2 passes through the diode D1 into the DRSD D2 in the reverse direction. Due to the low resistance of the diodes D1 and D2, which are in the highly conducting state, the I2 current decay τdec is very slow, so τdec Ý L2/Rf , where Rf is the low resistance of D1 and D2. The inductor L1 current I1 charges the capacitance of the DSRT collector Ccb. Then, the current I1 changes direction and passes through D1 into the DSRD D2, increasing the diode current ID. At the moment of maximum I1, the total DSRD current ID is two times more than I2. At the moment when ID reaches the maximum value, DSRD breaks the current, and the energy stored in inductors L1 and L2 is transferred into the load. The front of the pulse is equal to the
Q1
U dc C cb
Utr
Lp L1
D1
L2
Rp D2
R1
FIGURE 9.33 A DSRT transistor used for generating very high-PRF pulses.
Utr
∆τ
Ucb Udc I1
t
τp
t
π LCcb
t
I2
t
ID2
t
UR1 τ d = L 1,2/ 2R1
t
FIGURE 9.34 The transient conditions for the DSRT transistor pulser circuit in Figure 9.33. © 2001 CRC Press LLC
diode’s turn-off time, so the pulse decay is τd = L1/2R1, if L1 = L2, as in the case of the symmetrical circuit shown in Figure 9.27. The circuit is ready for the next cycle after the end of the load current. The maximum repetition rate is limited by the time needed to store energy in the inductors, which is less than 100 ns plus the time of the half-period of oscillation of the circuit consisting of the inductor L1 and the DSRT collector capacitance, which is less than 10 ns. Experiments have demonstrated PRFs of more than 5 MHz. The Wave-Shaping Line There is a method to efficiently generate short pulses, and it uses SASs that are switched on like sequential elements of a transmission line of LC cells, which are matched with the input l1 and output l2 lines of Figure 9.35. The constant bias voltage between the diodes is distributed using resistive networks. The voltage that charges capacitors Cn increases along the line. When the pulse falls on the first SAS and switches it on, the next cell of the line shapes a large-amplitude wave. During the transition of the next cell, the duration of the wave’s front increases to a value that is close to the cell half period, or cell’s time constant. This is due to the upper frequency cut-off effect of discreet lines. Then, the next SAS is switched on, and its amplitude is increased again while the duration of the front is maintained, and so on. When a line made of four SAS devices switches three cells with a time constant of about 0.5 ns and a line impedance of 50 ¾ between each SAS pair, a pulse is shaped at the output that has an amplitude of 5 kV and a front duration of 0.3 ns, as shown in Figure 9.35b. The peak power is 500 kW. The advantage of such a method consists of the fact that the energy for obtaining a subnanosecond pulse with a large amplitude is taken from the constant bias source, while the triggering generator shapes a wave with the amplitude necessary for switching on only the first SAS. +
-
11
L1 D1
L2
C1
D2
C2
Ln Cn
12
R1
Dn
(a) Circuit with LC cell transmission lines
V 1 kV
1 ns t (b) Voltage output vs. time
FIGURE 9.35 Silicon avalanche switch (SAS) and transmission line circuit for generating short pulses. © 2001 CRC Press LLC
9.5.6
MATCHING
LOADS
WITH
General Considerations As shown in Section 5.1, it is possible to match the pulser with the load by adjusting the voltage of the DSRD stack by the number of p-n junctions, and the current by the area of the p-n junction. For example, the most commonly used 50 ¾ load needs a 50 kV stack at 1 kA. Such a stack consists of many (about 50) p-n junctions, but it is thick, so its cooling is severely limited, as is the average power. The same peak power due to the high stability of DSRD circuits could be achieved when all DSRD circuits work in parallel. In this case, the cooling condition is about 50 times better, but the pulser output impedance is very low at 0.02 ¾ for 1 kV and 50 kA. It should be noted that, generally, the DSRD pulser also has a simpler design with a lower output impedance. In the cases when the load impedance is fixed and cannot be adjusted to the pulser output, the impedance transformer, or matching circuit, must be used as shown in Figure 9.36. Two methods are possible for matching the pulser and load for sub-nanosecond and nanosecond pulsers only. The first is by using a line transformer based on transmission lines with variable wave impedance along its length. The second is by using a line transformer based on sections of transmission lines with constant wave impedance. A combination of these methods is possible.
R1 DN
DSR
FIGURE 9.36 impedance.
+
S2N
+ -
D1 DS R
+ -
2
RD
DS
S1N S3
+
L1
S4
L2
S1
+ -
S2
+
A DSRD pulser circuit using parallel devices for improved cooling, but with a lower output
© 2001 CRC Press LLC
Line Transformer Based on Transmission Lines with Variable Wave Impedances along Its Length Let us consider a line transformer of length L, as shown in Figure 9.37, which has a variable impedance ρ(x). The reflection ratio dK(x) from a small part of the line dx with an impedance change dρ is 1 dρ ( x ) 1 d ln ρ ( x ) dK ( x ) = -------------- -------------- dx = --- -------------------- = N ( x )dx 2ρ ( x ) dx 2 dx
(9.95)
1 d ln ρ ( x ) where N ( x ) = --- -------------------- is called the local reflection function. 2 dx It is evident that the next condition is valid: 1--- ρ ( l ) ln ----------- = 2 ρ(0)
L
∫ N ( x ) dx
(9.96)
0
The increment of the reflected wave dU– on distance dx is dU – = U + N ( x )dx
(9.97)
where U+ is the incident wave. In general, the incident wave is attenuated due to reflection, and it is impossible to evaluate a reflected wave from Equation (9.97). Evaluation is possible only in the case of weak reflection and attenuation. Assuming weak attenuation, then from Equation (9.97), we have
r (L)
r (X)
U-
U+
dp
r (0) dx
0
L
X
Ip
X
FIGURE 9.37 A line transformer with a variable impedance ρ(x). © 2001 CRC Press LLC
lp
U– =
∫ U+ N ( x ) dx
(9.98)
0
where lp is the incident pulse length in the line. When N(x) is constant, the line impedance varies exponentially with distance so that, if N(x) > 0, it increases, and for N(x) < 0, it decreases. Consider the simplest case of a triangular wave with amplitude Um and width lp. For the case where N(x) is constant and lp < L, from Equation (9.98) we have that, for t > lp/C, the reflected wave amplitude Ur is constant. U m Nl p U r = -------------2
(9.99)
From Equations (9.96) through (9.99), we can evaluate the loss of energy to reflection Q–. 2
( L ) l p ln ρ ---------- ρ ( 0 ) U L Q _ ≈ ----------- ≈ Q + ----------------------------ρ(0) 4L 2
(9.100)
where Q+ is the incident wave energy, L is the line length, ρ(0) and ρ(L) are the wave impedances at the line ends. Equation (9.100) is valid for the weak reflection case where Q– « Q+. So, this tells us that the reflection losses are small when the line length is much more than the pulse length and the transformation ratio, U(L) Ku = ------------ = U(0)
ρ(l ) ----------ρ(0)
is not high. We have checked the validity of Equations (9.98) through (9.100) by computer simulation and by experiment for the case where ρ(L) = 5 ¾, ρ(0) = 50 ¾, FWHM τp = lp/C = 0.25 ns, and τl = L/C = 1.75 ns. The line transformer that ends with a 5 ¾ output was opened so the output load was 50 ¾ (see Figure 9.38). The results of time domain reflectometry are shown in Figure 9.38. A nearly constant creeping wave is seen, as predicted in Equation (9.99). The creeping wave is about 8 percent of the initial incident wave, which matches Equation (9.99). The output pulse of the mismatched 50 ¾ load for the no losses case must be about 0.575 of the input, and about 0.316 for the matched case. From Equation (9.100), we have the reflection energy loss Q– Ý 0.18 Q+ and there is a corresponding 90 percent amplitude decrease for the output pulse. In accordance with this value, the output amplitude is about 0.517 of the input. The difference is due to the incident wave attenuation clearly seen in Figure 9.38, which has not been taken into account in Equation (9.100). The creeping reflected wave is reflected as well and gives rise to the tail incident wave seen after the output pulse in Figure 9.38. From Equation (9.100), we determine that an efficient line transformer with a high transformation ratio must be long. Our computer modeling showed that the efficiency depends only slightly on the type of function N(x). The difference for all tested functions N(x), such as N(x) = x(L – x), N(x) = th(x), N(x) = constant, N(x) = sin(¼x/L), was within several percentage points. © 2001 CRC Press LLC
delay line
5 Ohm
50 Ohm
output
Input
R1
transformer
50 Ohm
0.25 input
0.20
Volts
0.15 b
0.10 output 0.05
secondary creeping wave
0.00 -0.05 0.0
1.0
2.0
3.0
4.0
5.0
time ns
0.25 incident wave
0.20
Uin Volts
0.15 0.10
reflection from open output
0.05 a
0.00 -0.05 0.0
creeping wave 1.0
2.0
3.0
4.0
time ns FIGURE 9.38 Experimental delay line output transformer and measured performance. © 2001 CRC Press LLC
5.0
Transformer Based on Transmission Line Sections with Constant Impedances The transformer on a section of transmission lines is shown in Figure 9.39. In this case, two line inputs are connected in parallel and their outputs in series. The input impedance, ρin is ρ0/2, and the output ρout is 2ρ0, where ρ0 is the line impedance. For n lines ρin = ρ0/n, ρout = nρ0, and the transformation ratio Ku = n. Note that the transformer can work in reverse. It may be seen from Figure 9.39 that, in addition to two main lines 1,1´ with 2,2´, and 3,3´ with 4,4´, there is also a parasitic line of 1,1´ with 3,3´ present in the circuit. When an incident wave reaches l´ and 2´ of line 1, the parasitic wave that is traveling to the shorted end 1,3 is excited. The ratio of powers of the parasitic to the main waves is determined by the ratio P W1 ρ -------- = -----2 P W2 ρ1
(9.101)
The next two techniques can increase the wave impedance ρ2 and decrease the size of the parasitic wave. First, by making the transmission line as thin as possible; however, short power pulses require rather thick cables for good transmission. Second, by filling the space surrounding the conductors with a high wave impedance, ρf, media so that ρf =
µ0 µf ---------ε0 εf
(9.102)
where µ0 is the permeability, ε0 is the dielectric permittivity of a vacuum, and µf and εf are the relative values for the media. The simplest way is to dress the cable line with ferrite rings. It is well known that the relative value of µf , for ferrite may be in the hundreds, while Gf will be low, i.e., between 3 and 4. But, due to the magnetic domain and the inertia for nano- and sub-nanosecond pulses, the effective µf is much lower. Our experiments with NN400 ferrite shown that for times less than 70 ps, the ratio of µf /Gf < 1 due to the high value of εf and the poor µf., then µf increases, and 200 ps later, ρf achieves three times the vacuum value. It seems that, for sub- and nanosecond pulses, this technique is possible but not very effective. A third method way is to coil the feeder cable, which is very effective with thin, flexible feeders. However, this method is not very good in practice for a thick and rigid high-voltage feeder. The last two techniques, or even all three, may be combined. For example, the feeder coil may be wound around a ferrite rod. 4 3
4' line 2
3'
V
Vin
2 1
2'
line 1
FIGURE 9.39 The parasitic circuits of the transformer of Figure 9.38. © 2001 CRC Press LLC
1'
One can easily find that the number of possibly unwanted parasitic lines m increases with the number of main lines n super-linearly as n(n – 1) m = -------------------2
(9.103)
For each of theses m parasite lines, the parasitic wave power relation of Equation (9.101) is valid. It should be noted that, for some of the unwanted lines of higher order, the exciting voltage Vn may exceed the input voltage. For example, the voltage for a parasite line consisting of the uppermost and the lowest lines will be n – 1 times greater than the input voltage, so the parasitic wave power may be high. As a result, transformers with a high transformation ratio n > 3 become very complicated and not very effective. Both types of transformers work in the reverse direction and may be used to match a lowimpedance output to the load. The transformer shown in Figure 9.36 can simultaneously sum up the power of several pulsers and match the impedances. The width of the transformer’s line may be increased to accommodate a larger number of pulsers. The limit is a radial line with the load placed in the center of the disk and the pulsers placed around the circumference. Such radial line transformers may also be assembled in a stack, as shown in Figure 9.40, to sum up the output voltage. The inner wire may be replaced by an electron beam to design an electron accelerator.
9.6 CONCLUSION The physics and engineering of fast semiconductor switching of high electric powers has developed rapidly in recent times. Even as this chapter was being prepared, new and interesting results were being reported. Therefore, we found it necessary to conclude by briefly discussing the degree of completion of the pattern for the physical processes described for the devices in this report. Specifically, we should mention the means and possibilities for improving the parameters of the devices, pulsers and the latest achievements in this area. It can be confidently confirmed that the physical mechanism for the operation of drift step recovery devices with fast restoration is understood somewhat completely. Given this understanding, and following the most general principles, it is not complicated to calculate the design parameters for an ideal semiconductor structure. We can specify the design and provide the best values of working voltage, duration of the high-conductivity stage, break-off current, and current break-off time. It is simple enough to solve the analysis problem as well and calculate the operational parameters of a specific structure, at least for diodes and transistors based on the design parameters. The main unsolved technological problem is how to make a structure which is close to ideal. Direct wafer bonding to manufacture DSRD and SAS devices is a very promising approach that was successfully tested in Ioffe Physico-Technical Institute. Much additional work and investigation must be done before there can be wide practical applications of this technology to pulser design. The physics of these processes in delayed ionization devices is more complicated, and it has obviously not been studied sufficiently. The existence of different types of instabilities in ionization phenomena, combined with the lack of reliable data on the ionization processes with the participation of deep levels, or states in the forbidden gap, also impede synthesis and analysis with an accuracy that is acceptable for practical purposes. Such device parameters as the switching time and residual voltage are almost not being subjected to calculation at the present time, while SAS and pulse shaper design goes on empirically. © 2001 CRC Press LLC
FIGURE 9.40 A radial line nanosecond summation circuit.
Let us recall that the maximum possible fast response for devices is determined by the maximum speed of voltage restoration, which, for a p-n junction, is about 2 × 1012 V/s. When the front duration equals 2 ns, this corresponds to a working voltages of 2 kV and n-level doping to a level of 1014 cm–3. The dU/dt value may be increased proportionally to the number of p-n junctions in a stack. The voltage of a stack may be as high as hundreds of kV. The density of the break-off current is greater than 160 A/cm2, and the current which is limited by skinning, equals approximately 10 kA when the diameter of the structure is 10 cm. Since DSR devices are easily synchronized, it is possible to increase the switched voltage and the current by means of the series and parallel switching of a large number of devices. In this case, there are no foreseen fundamental limitations to increasing the power. For delayed ionization devices, the threshold for the minimum switching time is approximately 10–12 s, which is the time between two sequential acts of ionization when the saturated value for the ionization coefficient is clear. Instabilities can considerably worsen the commutation parameters, and they also do not make it possible to determine the prospects for increasing the switched power in the case of the series and parallel device switching. Nevertheless, the sub-nanosecond pulse generation technology is developing quickly. © 2001 CRC Press LLC
The recent achievements in pulse shaping technology obtained by the authors8–15 are, for the following cases, shown in the table below. Pulse Shape
Front Duration
Load (¾)
Voltage Amplitude
Power
Pulse Duration
Triangular
0.1 ns
50
20 kV
8 MW
1 ns
Bell
0.7 ns
100
90 kV
80 MW
2 ns
These parameters are unique, but in the near future they will also be significantly improved.
REFERENCES 1. I.V. Grekhov, and A.F. Kardo-Sysoev, “Subnanosecond current drops in the delayed breakdown of silicon p-n junction,” Sov. Tech. Phys. Let., Vol. 5 (1979), No. 8, p. 395–396. 2. I.V. Grekhov, V.M. Efanov, A.F. Kardo-Sysoev, and S.V. Shenderey, “Formation of a high nanosecond voltage drop across semiconductor diode,” Sov. Tech. Phys. Let. Vol. 9, No. 4, 1983. 3. I.V. Grekhov, “New principles of high power switching with semiconductor devices,” Solid-State Electronics, Vol. 32, No. 11, 1989, pp. 923–230. 4. Yu. A. Kotov et al., “A novel nanosecond semiconductor opening switch for megavolt repetitive pulsed power technology,” Proceedings 9th IEEE Pulse Power Conference, 1993. 5. E.A. Panutin, and I.G. Chasnikov, “Current density distribution in fast thyristors,” Soviet Radiotechika and Elecronica, Vol. 23 D, No. 4, 19078, pp. 883–886. 6. V.I. Brylevski, A.F. Kardo-Sysoev, and I.G. Chasnikov, “Uniform current distribution unstability in a fast semiconductor switch,” Soviet Electronic Technology, Vol. 4, No. 2, 1985, pp. 48–52. 7. A.F. Kardo-Sysoev et al, “Avalanche injection in high speed thyristors,” IEEE Transaction of Electron Devices, ED-23, No. 11, 1976, pp. 1208–1211. 8. V.M. Efanov, A.F. Kardo-Sysoev, and I.G. Chashnikov, “Fast power switches from picosecond to nanosecond time scale and their application to pulsed power,” Tenth IEEE International Pulsed Power Conference, Albuquerque, New Mexico, 1995, pp. 342–347. 9. A. Litton, A.Erickson, and P. Bond (Fast Transitions, Inc. Lomita, CA); A. Kardo-Sysoev (Megapulse, St. Petersburg, Russia); and Barney O’Meara (Moose Hill Enterprises, Sperryville, VA); “Low impedance nanosecond and sub-nanosecond rise time pulse generators for electro-optical switch applications,” Proc. Tenth IEEE International Pulsed Power Conference, Albuquerque, New Mexico, 1995, pp. 783–738. 10. V.M. Efanov, A.F. Kardo-Sysoev, I.G. Chasnikov and P.M. Yarin, “New superfast power closing switched dynistors on delayed ionization,” Conference Record of the 1996 Twenty- Second International Power Modulator Symposium, Boca Raton, Florida, 1996, pp. 22–25. 11. V.I. Brylevsky, V.M. Efanov, A.F. Kardo-Sysoev, I.A. Smirnova and I.G. Chasnikov, “Power fast modulators thyristors,” Conference Record of the 1996, Twenty-Second International Power Modulator Symposium, Boca Raton, Florida, 1996, pp. 39–42. 12. V.I. Brylevsky, V.M. Efanov, A.F. Kardo-Sysoev, I.A. Smirnova and I.G. Chasnikov, “Power nanosecond semiconductor opening plasma switches,” Conference Record of the 1996, Twenty-Second International Power Modulator Symposium, Boca Raton, Florida, 1996, pp. 51–54. 13. A.F. Kardo-Sysoev, S.V. Zazulin, V.M. Efanov and Y.S. Lelikov, “High repetition frequency power nanosecond pulse generation,” 11th IEEE International Pulsed Power Conference, Baltimore, Maryland, 1997, p. 107. 14. A.F. Kardo-Sysoev, V.I. Brylevsky, V.M. Efanov, M. Larionov, I.G. Chasnikov and P.M. Yarin, “Powerful semiconductor 80 kV nanosecond pulser,” 11th IEEE International Pulsed Power Conference, Baltimore, Maryland, 1997, p. 273. 15. A.F. Kardo-Sysoev, V.M. Efanov, I.G. Chasnikov, “Fast ionization dynistor (FID)-new semiconductor superpower closing switch,” 11th IEEE International Pulsed Power Conference, Baltimore, Maryland, 1997, p. 274. © 2001 CRC Press LLC
10 Fourier Series-Based Waveform Generation and Signal Processing in UWB Radar Gurnam S. Gill CONTENTS 10.1 Introduction 10.2 Waveform Generation 10.3 UWB Radar Configuration and Signal Processing References
10.1
INTRODUCTION
In this chapter, we describe a Fourier series-based method for the generation of ultra-wideband (UWB) waveforms and a radar concept that employs this waveform. UWB waveforms are distinguished from conventional narrowband waveforms by their large fractional or relative bandwidth (which the ratio of absolute bandwidth to the center frequency.)1–2 However, UWB signals at baseband have special features that are not present in conventional radar waveforms of the same bandwidth at higher frequencies. (Note that conventional narrowband radars can achieve bandwidths comparable to that of UWB radars but at a higher carrier frequency.) These features are the presence of low-frequency components and narrow pulsewidths at the same time. The low-frequency components penetrate both ground and foliage and may also excite target resonances that may lead to detection of targets hidden in foliage or underground as well as stealthy targets.2 Very narrow pulsewidths give fine range resolution and low clutter. Since clutter limits target detection in most situations, reducing clutter enhances the detection of targets with UWB waveforms. Conventional narrowband signals can achieve large absolute bandwidths at higher frequencies in Ku and millimeter wave (MMW) bands, but they do not have the same propagation and target scattering characteristics as do the UWB signals of comparable bandwidths. Most of the UWB sources reported in the literature are of the impulse type, which is implemented by Marx bank or similar techniques. The concept behind impulse generation is the storage of energy over a longer period of time and then its release in very short period of time. The release time is of the order of a nanosecond or less, which results in the generation of pulses of small duration. At the present time, the typical method of storing energy is capacitive (such as Marx bank) and the release of energy is accomplished by switches such as spark gap, diode, laser actuated semiconductor, etc. The problem with these techniques is that a significant amount of energy of the waveform lies around the zero frequency, which cannot be radiated by an antenna. The pulse shape and PRI cannot be precisely controlled. There is no fine control of spectrum to avoid interference with friendly receivers. The Fourier method of waveform generation overcomes these
© 2001 CRC Press LLC
disadvantages of conventional impulse generation. With this method, any periodic waveform (such as a train of baseband pulses of approximately rectangular shape) can be synthesized by expanding the desired waveform by Fourier series then generating and transmitting the resulting sinusoidal components. Thus, the transmitted signal is generated in the frequency domain by summing relatively low-power harmonics of the desired signal instead of generating the signal by a single high-power source in the time domain.3–4 Radar detection performance in the presence of noise depends on the energy in the pulse (or the average power), which increases with the increasing pulsewidth. However, target detection in the presence of clutter requires a narrow pulsewidth. These two contradictory requirements are often satisfied by the use of pulse compression, which allows long transmit pulses for larger amount of pulse energy and narrow compressed pulses for low clutter and good range resolution. UWB pulses are of very short duration and may not contain enough energy, thus the use of pulse compression is important to compensate for low energy in the pulses. Therefore, in this chapter, we extend the Fourier synthesis to generation of complex amplitude coded waveforms, which will allow the generation of Barker like codes in time domain. This capability will allow pulse compression and coherent integration of UWB signals and thus reduce the need for very large power sources, which are required for conventional implementation of impulse-type radars. In contrast to pulsed carrier conventional waveforms, the UWB waveforms are baseband. With UWB waveforms, we cannot define a carrier frequency and consequently cannot define Doppler frequency shift, either. Thus, discrete Fourier transform (DFT) implemented Doppler processing used for detection of narrowband radar signals cannot be employed as such in the processing of UWB radar signals. However, we can perform the functions of Doppler processing (i.e., integration of pulses, cancellation of clutter, and target velocity measurement) directly in the time domain by using the change from pulse to pulse in target round trip time. In this chapter, we describe the generation of Fourier series-based uncoded and coded waveforms and a concept of a coherent radar system, which employs such waveforms. Also, we describe time domain signal processing concepts to be used in the proposed implementation. Fourier series waveforms can potentially have applications other than radar.
10.2
WAVEFORM GENERATION
This section describes waveform generation for both uncoded and coded waveforms for UWB radar.
10.2.1
GENERATION
OF
UNCODED WAVEFORMS
The Fourier series is normally used to decompose periodic signals into sinusoids. However, we can also use Fourier series in reverse to synthesize a periodic signal. This will be done by finding the Fourier series expansion of the radar waveform to be produced and then generating and transmitting each sinusoidal component of the expansion. A separate oscillator produces each harmonic component of Fourier series expansion and the sum of all the oscillators will reproduce the desired periodic signal. The Fourier series expansion will contain an infinite number of terms, but the number of oscillators which can be realistically employed is, of course, finite. Furthermore, an antenna cannot transmit the direct current (dc) component of Fourier expansion; thus, dc component should not be included in the waveform generation. With these constraints, we can synthesize a desired radar waveform from the following approximate expansion, N
x(t) ≈
N
∑ [ an ·cos ( nωo t ) + bn ·sin ( nωo ) ] ≈ ∑ cn ·cos ( nωo – φn ) n=1
© 2001 CRC Press LLC
n=1
(10.1)
where ω 0 is the fundamental angular frequency related with pulse repetition interval T as ω 0 = 2π/T . This scheme can be implemented as shown in Figure 10.1. Each harmonic is generated by a separate oscillator that is amplitude and phase controlled. The oscillators are phase locked to a stable master oscillator. The constituent harmonics will add up to form the desired signal. To generate a periodic train of baseband rectangular pulses of width τ, we implement the following truncated Fourier series: nω 0 τ nω 0 τ nω 0 τ nω 0 τ 2A - cos ----------- cos ----------- cosnω 0 τ + sin 2 ----------- ⋅ sin nω 0 τ x ( t ) ≈ ------- ∑ sin ---------- 2 2 2 2 nπ
(10.2)
n
We can rewrite this expression more conveniently as N
x(t) ≈
∑ cn cos ( nω0 t – φ n )
(10.3)
n=1
where 2A nω 0 τc n = ------- sin ----------2 nπ
(10.4)
nω 0 τ φ n = ----------2
(10.5)
f0
Phase & Gain Control
Switch
2f0
Phase & Gain Control
Switch
Phase & Gain Control
Switch
Multiplexer
Power Divider
Master Oscillator
Figure 10.2 shows a pulse train generated with the above weights and the following parameters: f 0 = 250 MHz, τ = 0.5 ns, and N (number of oscillators) = 9. Note that, unlike other UWB waveforms that look like one or more cycles of a sinusoid, this one is a truly a baseband or carrierfree waveform.
Phased Locked Oscillators
Nf0
Waveform Coefficient Storage
FIGURE 10.1 Fourier UWB transmitter block diagram. © 2001 CRC Press LLC
0.6
x(t)
0.4
0.2
0
-0.2 0
2
4
6
8
10
12
Time (ns)
FIGURE 10.2 Fourier-based rectangular pulse train.
The truncated expansion will generate only an approximation of the ideal waveform. However, if the antenna is properly designed, these “approximate” waveforms can be transmitted without dispersion, whereas ideal rectangular waveforms will be distorted by the antenna. Thus, these less than perfect rectangular waveforms may be superior to the ideal rectangular waveforms. The greater the number of harmonics, the more closely the generated pulse matches in shape with the ideal pulse. The pulse shape by itself is not of much significance in radar waveform design as long as the matched filter can be constructed for the transmitted waveform. The power in the pulses goes up as the number of oscillators are increased. Thus, one can generate high-powered pulses by using many oscillators of smaller power. In the discussion so far, we have performed the Fourier series expansion for the asymmetric case where t = 0 at the beginning of pulse. The expansion contains both sine and cosine terms or amplitude and phase. Waveform generation based on this expansion would require 2N oscillators or active phase control. However, if the expansion is performed for the time symmetric case where t = 0 occurs at the center of pulse, the expansion will contain only cosine terms. Thus, the terms to generate the waveform will require only N terms instead of 2N terms as given by N
sin ( nω 0 τ/2 ) 2Aτ - cos ( nω 0 τ ) x ( t ) = ---------- ∑ ----------------------------T nω 0 τ/2
(10.6)
n=1
However, the first pulse in this case will be half of the normal pulse. This may not be important when large number of pulses are transmitted.
10.2.2
GENERATION
OF
CODED WAVEFORMS
Coded (pulse compression) waveforms are used in conventional narrowband radars to increase the average power (for higher detection performance) and still retain the advantages of a short pulse. The average power is particularly low in UWB waveforms due to very narrow pulsewidths. Thus, it is all the more important to employ coded waveforms to increase the average power in UWB radars. In conventional radars, pulse compression waveforms are generated by phase coding the carrier (i.e., 0° phase shift for “+” and 180° phase shift for “–”). Instead, one can generate binary coded waveforms for UWB radar by polarity coding of the pulse (i.e., positive amplitude for “+” and negative amplitude for “–”). © 2001 CRC Press LLC
There are two ways to generate a code. One is generation of continuous repetition of code, and the other is code word followed by listening period, which is more typical of pulsed radar waveforms. We will describe both methods and their trade-offs. We use a specific method for Fourier series expansion in this chapter for signals whose derivatives can be expressed in terms of delta function sequence. This method is as follows. We wish to approximate the periodic function x ( t ) from the truncated Fourier series expansion as N
x(t) ≅
2πnt 2πnt a n · cos ------------ + b n · sin ------------ T T
∑
(10.7)
n=1
Differentiating both sides with respect to t, N
x′ ( t ) =
∑
– 2πn 2πnt 2πnt 2πn ------------- a n ·sin ------------ + ------------ b n · cos ---------- T T T T
(10.8)
n=1
Let N
x′ ( t ) =
2πnt 2πnt a n · cos ------------ + β n · sin ------------ T T
∑
(10.9)
n=1
where 2πn α n = ---------- ·b n , T
– 2πn β n = ------------- ·a n T
(10.10)
If x′ ( t ) consists of delta function sequences, it can be easily expressed in the form of Equation (10.7). Coefficients an, bn are then computed as –T a n = ---------- ·β n , 2πn
T b n = ---------- ·α n 2πn
(10.11)
Substitution of an, bn in Equation (10.7) will complete the Fourier expansion of a coded waveform x( t) .
10.2.3
GENERATION
OF
CONTINUOUS CODED WAVEFORMS
A general periodic code sequence shown in Figure 10.3a can be mathematically represented as p–1
x( t) =
∞
∑ ∑ k = 0 n = –∞
A k · ∏ t – nT – kT ------ p
where 1, 0 < t < T -- p ( t ) = ∏ 0, --T- < t < T P © 2001 CRC Press LLC
(10.12)
a. Continuous wave coded waveform
T
T
T
T
T b. Pulsed coded waveform
FIGURE 10.3 Continuous wave and pulse code sequences.
Ak is + A or –A, depending on a specific code. The code has p subpulses over time T. Differentiating both sides of Equation (10.12) yields ∞
p–1
x′ ( t ) =
∑ Ak ⋅ ∑ n = –∞
k=0
δ t – nT – kT ------ – p
∞
∑ n = –∞
( k + 1 )T- δ t – nT – ------------------ p
(10.13)
Rewriting the above equation as ∞
x′ ( t ) = A 0 ⋅
∑
∞
δ ( t – nT ) + A p – 1 ⋅
n = –∞
∑ n = –∞
pT δ t – nT – ------ + 2A s ⋅ p
∞
∑∑ all s n = – ∞
sT δ t – nT – ------ (10.14) p
where s are code switch points (where code subpulses change sign). In Equation (10.14), As will be +A if the code changes from negative to positive and will be –A if the change is from positive to negative. The delta function sequence δ T ( t ) can be represented as ∞
∑
δT ( t ) =
n = –∞
1 2 δ ( t – nT ) = --- + --- ⋅ T T
∞
∑ cos ( nω0 t )
(10.15)
n=1
From Equations (10.14) and (10.15), 2A x′ ( t ) = ---------0 ⋅ T
∞
2A p – 1 -⋅ cos ( nω 0 t ) + + ------------T
∑ n=∞
4A + --------s ⋅ ∑ T
∞
∑ cos
all s n = 1
∞
∑ cos n=1
sT nω 0 t – ------ p
pT nω 0 t – ------ p
(10.16)
Note that the dc term has been dropped for the purpose of practical implementation. Rewriting the above equation as © 2001 CRC Press LLC
∞
2 x′ ( t ) = --- ⋅ T
all s
n=1
2 + --- ⋅ T
2pnπ
2snπ
- + A p – 1 ⋅ cos ------------- ∑ cos ( nω0 t ) ⋅ A0 + 2As ⋅ ∑ cos ----------p p
∞
2pnπ
2snπ
- + A p – 1 ⋅ sin ------------- ∑ sin ( nω0 t ) ⋅ 2As ⋅ ∑ sin ----------p p all s
n=1
(10.17)
The coefficients of cos ( nω 0 t ) and sin ( nω 0 t ) have been defined as αn and βn, respectively. Thus, from the above equation, 2 2snπ 2pnπ α n = --- · A 0 + A p – 1 ·cos ------------- + 2A s · ∑ cos -----------T p p all s
(10.18)
2pnπ –2 2snπ β n = ------ · – A p – 1 ·sin ------------- – 2A s · ∑ sin -----------p T p
(10.19)
all s
From the above equations, an and bn are obtained as 2pnπ 1 2snπ a n = ------ · – 2A s · ∑ sin ------------ – A p – 1 ⋅ sin ------------- p nπ p
(10.20)
all s
1 b n = ------ ⋅ A 0 + A s ⋅ nπ
2snπ
2pnπ
- + A p – 1 ⋅ cos ------------- ∑ cos ----------p p
(10.21)
all s
From Equations (10.7), (10.20), and (10.21), the general expression of the continuous coded waveform is N
1 x ( t ) = ------ ⋅ nπ
∑ cos ( nω0 t )·
n=1 N
1 + ------ ⋅ nπ
∑ sin ( nω0 t ) ⋅
n=1
– 2A s ⋅
2snπ
2pnπ
- – A p – 1 ⋅ sin ------------∑ sin ----------p p
all s
2pnπ 2snπ 1 + 2 A s ⋅ ∑ cos ------------ + A p – 1 ⋅ cos ------------- p p all s
(10.22)
Thus, Equations (10.20) and (10.21) can be used to determine the Fourier coefficients for arbitrary polarity coded waveforms. Note that the summation is over the switch points. As is +A for the p’s when the code changes from negative to positive, and vice versa. The magnitude at the first and last points stays the same. Using this method coefficients for the Barker code of length 11 (+ + + – – – + – – + –) are computed as A 12nπ 14nπ 18nπ 20nπ 22nπ 6nπ a n = ------ ⋅ 2 sin ---------- – 2sin ------------- + 2sin ------------- – 2sin ------------- + 2sin ------------- – sin ------------- nπ 11 11 11 11 11 11
(10.23)
A 6nπ 12nπ 14nπ 18nπ 20nπ 22nπ b n = ------ · 1 – 2 cos ---------- + 2cos ------------- – 2cos ------------- + 2cos ------------- – 2cos ------------- + cos ------------- (10.24) nπ 11 11 11 11 11 22 © 2001 CRC Press LLC
As presented, this waveform will repeat itself indefinitely, which is not of much utility for pulsed radars. Thus, to have a pulsed coded waveform with controlled pulse repetition frequency (PRF), we must employ switches. Oscillators are turned on for the length of coded pulse and then turned off to achieve the desired PRF (or listening period). Using these coefficients, the pulsed coded waveform for Barker code of length 11 is as shown in Figure 10.4.
10.2.4
GENERATION
OF
PULSED CODED WAVEFORMS
A high-PRF coded waveform can be generated without using any switches. This may be of some advantage, as fast switches are harder to develop. To generate such a waveform, one needs the Fourier series expansion of waveform shown in Figure 10.3b, whose coefficients are given by 1 a n = ------ · – 2A s ⋅ nπ
2pnπ
2snπ
- – A p – 1 ⋅ sin ------------- ∑ sin ----------T T
(10.25)
all s
1 b n = ------ · A 0 + 2A s ⋅ nπ
2pnπ
2snπ
- + A p – 1 ⋅ cos ------------- ∑ cos ----------T T
(10.26)
all s
N
1 2pnπ 2snπ x ( t ) = ------ · ∑ – 2A s ∑ sin ------------ + – A p – 1 ⋅ sin ------------- cos ⋅ ( nω 0 t ) nπ T T all s
n=1
1 + ------ ⋅ nπ
N
∑
A0 + As ⋅
2snπ
2pnπ
- + A p – 1 ⋅ cos ------------∑ cos ----------T T
all s
n=1
⋅ sin ( nω 0 t )
(10.27)
In principle, one can use either of the two methods for coded waveform generation. The key difference is that the first method requires fewer oscillations and demands the use of switches, 1.5
1
x(t)
0.5
0
-0.5
-1
-1.5 0
10
20
30
40
50
Number of pulses per PRI
FIGURE 10.4 Baseband Barker code of length 11 (code: + + + – – – + – – + –). © 2001 CRC Press LLC
60
whereas the second method does not require switches but calls for a higher number of oscillators. In general, the first method will be more efficient in most situations, as it requires fewer oscillators. The second method is suitable for very high-PRF waveforms.
10.3 UWB RADAR CONFIGURATION AND SIGNAL PROCESSING 10.3.1
GENERAL CONSIDERATIONS
In conventional narrowband radars, the receiver and processor are designed as matched filters, which is usually done in three steps, i.e., the IF filter is matched to a subpulse of long transmit pulse, pulse compression is matched to a long transmit pulse, and Doppler processing is matched to a group of N pulses within a coherent processing interval (CPI). By integrating the signal energy in N pulses, the matched filter maximizes the signal-to-noise ratio, which turns out to be equal to the ratio of the integrated signal energy to the noise spectral density. Thus, the detection performance in UWB radar, like that in conventional radar, can be improved by increasing the transmitted energy over the time on target and by employing matched filtering to integrate the signal energy. Because of very narrow pulsewidth (which is dictated by spectrum requirements), the energy in the UWB pulse would be small. It can only be increased by raising the peak power, which is limited by existing technology. Thus, to design a UWB radar with existing power sources, it is necessary to maximize the signal energy by using longer coded pulses (and employing pulse compression) and by coherently integrating the pulses within coherent processing interval (CPI) using techniques similar to Doppler processing in the conventional radars.
10.3.2
RADAR CONFIGURATION
A UWB radar system based on Fourier synthesized waveforms is shown in Figure 10.5. The radar is designed to receive each transmitted frequency line separately, which decreases the noise substantially, as compared to other UWB waveforms where the receiver listens over the entire bandwidth. Before signal processing, the data from all the analog-to-digital converters (A/Ds) is summed together on a range bin basis, which is equivalent to forming pulses in the time domain. Pulse compression is performed by correlating the returned signal with the reference signal, which increases the signal-to-noise ratio and narrows the effective pulsewidth. The reference signal in this case is a polarity coded transmitted sequence instead of the phase or frequency modulated waveforms of narrowband radars. However, other than the waveform, the processing and the principles of pulse compression in UWB are the same as in narrowband radars. In the next step, processing will be performed to integrate pulses, isolate clutter from the target signal, and measure the target velocity.
10.3.3 VELOCITY PROCESSING
In conventional narrowband radars, coherent integration, clutter discrimination, and velocity measurement are performed by taking a discrete Fourier transform (DFT) of the radar return from N pulses within a CPI. The DFT converts the time domain data from all the pulses to the frequency domain. Radar returns from a moving target and from the ground appear at different places in the frequency domain because of their different radial velocities. Thus, the target separates from the clutter and can be detected in a relatively clutter-free situation. The DFT forms a parallel bank of narrowband filters covering the radar return spectrum. Target velocity is determined from the filter number in which the target appears. Also, the DFT is equivalent to coherent integration; that is, the signal-to-noise ratio increases by a factor equal to the number of pulses within a CPI. Thus, the operation of taking the DFT improves the signal-to-noise and signal-to-clutter ratios and determines the target velocity.
FIGURE 10.5 UWB radar system block diagram.
DFT processing is often called Doppler processing in the literature, as it is based on the Doppler shift fd from the target, given approximately by fd = 2V/λ, where V is the target velocity and λ is the wavelength of the narrowband signal. However, for baseband signals, the Doppler effect cannot be represented by a frequency shift, as there is no carrier in the baseband signal. Thus, traditional Doppler processing is not applicable to the processing of baseband signals. However, the pulse-to-pulse change in range (or round-trip time) of a moving target, and the lack of such change for clutter from stationary ground, can be used to perform functions equivalent to Doppler processing. Radar returns from a stationary object due to two successive pulses will be one PRI (T seconds) apart. If a target is moving with radial velocity V with respect to the radar, the change in range over one pulse repetition interval is VT, which corresponds to a change in the round-trip time of 2VT/c. Thus, the returns from the moving target due to two successive pulses are T − 2VT/c seconds apart. Figure 10.6 shows the returns from a moving target due to the pulses within a CPI. If the radar returns due to successive pulses are added together (with constant delay T), the resulting sum will build up only for stationary objects. For a moving target, the sum will spread out in the range domain without building up the target magnitude, as shown in Figure 10.6. However, if the returns from successive
FIGURE 10.6 Velocity preprocessor for integrating successive UWB radar returns for a moving target with a radial velocity V and a pulse repetition interval T.
pulses are added together with a delay of T − 2VT/c, the resulting sum will have a large value for the target moving with velocity V, and it will be quite small for targets with other velocities. An analog device with an adjustable delay can provide the required delay for implementation. Because the target velocity is not known beforehand, it is not possible to predict the right delay to be used in the summation of signals. This problem can be solved by performing several parallel summations of the radar return with different delays matched to velocities that cover the expected range of target velocities. This results in several parallel channels, and the outputs of all these channels are examined for target detection. The output will build up for the channel whose delay matches the target velocity, whereas the other channels will be mismatched to this particular velocity. The outputs of mismatched channels will be low and spread out, because each channel is tuned to a specific velocity. The advantages of this processing are as follows:
1. The signal-to-noise ratio will increase.
2. The signal-to-clutter ratio will increase. Summation will increase the signal, as it is almost the same from pulse to pulse, whereas clutter samples coming from different places will decorrelate.
3. The target velocity can be determined from the delay that gives the highest output as

V = \frac{\text{delay} \times c}{2T}
(10.28)
Velocity processing can also be performed digitally. The received signal for N pulses is digitized and stored; it consists of M range bins for each pulse. For ease of understanding of the signal processing, this data can be imagined to be organized in a range bin-pulse matrix of N rows (corresponding to N pulses) and M columns (corresponding to M range bins), as shown in Figure 10.7. Although our interest is in the detection of moving targets, we shall momentarily consider stationary targets. For target detection, we first integrate the signal from all the pulses within the CPI to improve its signal-to-noise ratio. For a stationary target, this is done by adding the signal from the same
FIGURE 10.7 Range bin-pulse matrix for digital velocity processing. Samples to be added are shown in check range bin RN for a target with one assumed velocity.
range bin number corresponding to all the pulses in the CPI, as the stationary target stays in the same range bin during the transmission and reception of all the pulses within the CPI. This implies the addition of columns in Figure 10.7. In contrast, a moving target occupies varying range bins for each pulse in a CPI. Thus, to integrate pulses from the moving target, we would add the outputs of those range bins that the target occupies for the pulses within the CPI. If, during the first pulse, a target moving with radial velocity V is in the mth range bin, then, during the nth pulse, it will appear in the bin number given by m – I[(n – 1)2VT/cτ], where I stands for the integer part of the quantity within the brackets. The range bins for different pulses given by the above expression are added together for signal integration. Figure 10.7 shows the range bins for a case in which the target moves by one bin during the time between pulses. Since the target velocity is not known in advance, the expected range of target velocities is considered. For each velocity, a different set of range bins from the pulse-range bin matrix is added together. For practical implementation, only a finite number of velocity values within the expected range of target velocities is considered. This is equivalent to forming parallel filters, each tuned to a different velocity. However, velocity processing is performed in the time domain, whereas Doppler processing in narrowband radars is performed in the frequency domain.
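The following sketch (illustrative Python with assumed parameters, not the authors' implementation) mimics this procedure: a target walking through range bins is injected into a range bin-pulse matrix, and parallel velocity channels re-align the bins by the shift I[n·2VT/cτ] before summing. The channel whose trial velocity matches the target should produce the largest peak.

import numpy as np

rng = np.random.default_rng(2)
c = 3.0e8
N, M = 32, 200                  # pulses per CPI, range bins per pulse
T = 2.0e-3                      # pulse repetition interval, s
tau = 50.0e-9                   # sample spacing; range bin width c*tau/2 = 7.5 m
v_true, m0 = 900.0, 120         # assumed closing velocity (m/s) and starting range bin

# Range bin-pulse matrix: the target walks through range bins from pulse to pulse
data = rng.normal(0.0, 0.5, size=(N, M))
for n in range(N):
    walk = int(n * 2.0 * v_true * T / (c * tau))   # bins moved after n PRIs
    data[n, m0 - walk] += 2.0

# Parallel velocity channels: re-align the bins for each trial velocity, then sum
trial_velocities = np.arange(0.0, 1501.0, 300.0)
channel_peaks = []
for v in trial_velocities:
    acc = np.zeros(M)
    for n in range(N):
        shift = int(n * 2.0 * v * T / (c * tau))
        acc += np.roll(data[n], shift)             # simple circular re-alignment
    channel_peaks.append(acc.max())

print("channel with the largest peak corresponds to",
      trial_velocities[int(np.argmax(channel_peaks))], "m/s")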
REFERENCES

1. J. D. Taylor, Introduction to Ultra-Wideband Radar Systems, CRC Press, Boca Raton, FL, 1995.
2. B. Noel, Ultra-Wideband Radar: Proceedings of the First Los Alamos Symposium, CRC Press, Boca Raton, FL, 1991.
3. M. I. Skolnik, "An Introduction to Impulse Radar," NRL Report 6755, Nov. 1990.
4. G. S. Gill and H. F. Chang, "Waveform Synthesis for Ultra-Wideband Radar," Proceedings of the 1994 IEEE National Radar Conference, March 1994.
5. H. F. Harmuth, Nonsinusoidal Waves for Radar and Radio Communication, Academic Press, New York, 1981.
11 High-Resolution Step-Frequency Radar
Gurnam S. Gill

CONTENTS
11.1 Introduction
11.2 Step-Frequency Waveform
11.3 Step-Frequency Radar
11.4 Modeling of Target Return for Step-Frequency Radar
11.5 Detection of a Moving Target in Clutter
11.6 Application of Step-Frequency Waveform in Imaging
11.7 Conclusion
References
Appendix
11.1 INTRODUCTION
High range resolution has many advantages in radar. Apart from providing the ability to resolve closely spaced targets in range, it improves the range accuracy, reduces the amount of clutter within the resolution cell, reduces multipath, provides high-resolution range profiles, and aids in target classification. High range resolution capability may play a key role in the detection of important classes of targets, that is, low radar cross section (RCS) targets embedded in clutter. A much smaller range cell of a high-resolution system reduces the amount of clutter competing with the target. This, along with clutter cancellation, will increase the signal-to-clutter ratio enough that targets become visible above the residual clutter and noise. High range resolution techniques can be grouped in three main categories: impulse, conventional pulse compression, and frequency-step (a.k.a. frequency-stepped) waveform. The range resolution, ∆R, for all three categories is given by

∆R = c/2B
(11.1)
where B is the bandwidth and c is the velocity of light. Bandwidth is achieved in a different manner in each category. Impulse waveforms achieve high resolution with extremely narrow pulses of high power.1 Their advantages and disadvantages, as well as technological developments in this area, are discussed elsewhere in this volume, and we need not repeat them here. In conventional pulse compression waveforms, large bandwidth is attained by modulating the transmit pulses instead of by decreasing their time duration. Returned pulses are processed by correlating them with a replica of the transmitted signal. The resulting pulses are of much shorter duration than the transmit pulses. The main advantage of conventional pulse compression is that high resolution can be obtained without decreasing the pulse width and thus the energy in the
pulse. This prevents loss of target detectability. Both impulse and pulse compression waveforms present some processing and technology problems. (See Chapter 10 and later discussions of UWB radar systems.) The step-frequency waveform is the third category for achieving high range resolution.2–4 Radars employing a step-frequency waveform increase the frequency of successive pulses linearly in discrete steps. A step-frequency waveform can be viewed as an interpulse-modulated pulse compression waveform in which the modulation is applied across the pulses instead of within individual pulses. It provides a high range resolution capability by producing a detailed target range profile and, when coupled with SAR/ISAR, a detailed two-dimensional image of the target. In the past, step-frequency radar has been used primarily for diagnostic RCS measurements in anechoic chambers and on open ranges, where two-dimensional (2D) imaging is performed by using the target's rotational motion. Step-frequency radar's high-resolution range profiles and 2D images are used for target recognition and classification. Currently, the fine range resolution capability of step-frequency radar is being exploited to solve the difficult problem of detecting high-speed, low-RCS targets in the presence of large clutter. This class of problems includes detection of cruise missiles, sea-skimming anti-ship missiles, and stealth aircraft. A step-frequency radar has a narrow instantaneous bandwidth (corresponding to an individual pulse) and attains a large effective bandwidth (corresponding to the frequency spread of the pulses within a burst) sequentially, over many pulses, in the processor. As a result, the hardware requirements become less stringent. Lower-speed A/Ds (commensurate with the low bandwidth of individual pulses) and slower processors can be used for the reduced data. The receiver bandwidth will be smaller, resulting in a lower noise bandwidth and a higher signal-to-noise ratio. Another important but less obvious advantage of step-frequency radar is the rejection of multiple-time-around clutter, which can be quite large for high-PRF waveforms. Because of the different frequencies of successive pulses, multiple-time-around clutter from ambiguous ranges will arrive at frequencies other than the one from the target area. Several of these range-ambiguous clutter returns will be rejected by the receiver IF amplifier, as they will lie outside its passband. Apart from enhancing the signal-to-clutter ratio, this reduces the dynamic range of the received return, which in turn reduces the number of bits per sample required of the A/D. In short, step-frequency radar provides the range resolution of wideband systems with the advantages of narrowband systems. It is a contemporary technology that can be implemented with off-the-shelf components. Existing radars can be modified for transmission, reception, and processing of step-frequency waveforms. This makes step-frequency radar a cost-effective and attractive technique for achieving high range resolution, particularly in an era of dwindling defense expenditures, when less money will be available for the development of new radars. The bandwidth can be tailored to the resolution requirements and the availability of spectrum. With the precise control of the spectrum afforded by this waveform, EMI/EMC problems are more manageable. As compared with UWB waveforms, frequency-step waveforms require lower A/D sampling rates, lower peak power sources, and slower computers to process smaller sets of data.
Because of the smaller instantaneous bandwidth, the noise bandwidth of step-frequency waveforms will be smaller. In comparison with conventional pulse compression waveforms, step-frequency would, for a given range resolution, require slower A/D converters. Step-frequency waveforms also have some limitations. Range resolution (or pulse compression) cannot be achieved with a single pulse; it requires transmission, reception, and processing of a group of pulses. However, this need not be a disadvantage. To perform Doppler processing, a return from a group of pulses is required; thus, the same group of pulses can be used for both pulse compression and Doppler processing in the step-frequency radar. Detection of moving targets with step-frequency radar is not as straightforward as in conventional radars. The range-Doppler coupling experienced in conventional pulse compression waveforms becomes much more dominant in step-frequency waveforms. As a result, target signals are dispersed in range, which leads to loss of range resolution, range accuracy, and signal-to-interference ratio. Special signal processing is required to undo these negative effects and restore the full potential of a step-frequency waveform. The waveform design for the step-frequency radar is not as obvious and clear-cut as for traditional MTI and pulse Doppler radar systems. This chapter seeks to fill this gap.
11.2 STEP-FREQUENCY WAVEFORM
The waveform for a step-frequency radar consists of a group of N coherent pulses whose frequencies are increased from pulse to pulse by a fixed frequency increment ∆f, as shown in Figure 11.1. The frequency of the nth pulse can be written as

fn = f0 + n∆f
(11.2)
where f0 is the starting carrier frequency and ∆f is the frequency step size, that is, the change in frequency from pulse to pulse. Each pulse is τ seconds wide, and the time interval T between the pulses is adjusted for ambiguous or unambiguous range. Note that the frequency stays constant within each pulse. A group of N pulses, also called a burst, is transmitted and received before any processing is initiated to realize the high-resolution potential of this waveform. The burst time, i.e., the time corresponding to the transmission of N pulses, will be called the coherent processing interval (CPI), as in conventional radars. Since the frequency is constant within the individual pulse, its bandwidth is approximately equal to the inverse of the pulse width. Pulses of typical time duration have narrow bandwidths, thus making the instantaneous bandwidth of the radar narrow. Thus, narrowband equipment (except for the antenna and transmitter) can be used to implement the radar. However, as shall be explained more fully later, a large effective bandwidth can be realized by appropriately processing the N pulses in a CPI. The effective bandwidth is determined by the total frequency excursion, i.e., N∆f, over the duration of N pulses. The range resolution of step-frequency radar is given by

\Delta R = \frac{c}{2B_{eff}} = \frac{c}{2N\Delta f}
(11.3)
The fact that the resolution of step-frequency radar does not depend on the instantaneous bandwidth, and that it can be increased arbitrarily by increasing N∆f, is a significant advantage. There is a constraint on the selection of ∆f (i.e., ∆f ≤ 1/τ); however, N can be increased to realize very high range resolution. It should be noted that, irrespective of the waveform and the compression method used, fine range resolution does require large bandwidth. For the step-frequency radar, the large bandwidth is obtained sequentially over many pulses by interpulse frequency modulation whereas, for conventional radars, it is achieved in a single pulse by intrapulse phase or frequency modulation.
FIGURE 11.1 A step-frequency waveform achieves wide bandwidth (N∆f) sequentially (over a burst of many pulses) but has a narrow instantaneous bandwidth of 1/τ. It provides the high range resolution of wideband radar systems with some of the advantages of narrowband radar systems. Step-frequency radar achieves a range resolution of c/2N∆f (equivalent to a bandwidth of N∆f) as compared with a range resolution of cτ/2 for constant-frequency waveforms.
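As a quick numerical check of these relations (an illustrative Python fragment with assumed values, chosen to match the orders of magnitude used in the simulation examples later in this chapter):

# Quick check of the step-frequency relations for assumed parameters
c = 3.0e8          # m/s
N = 300            # pulses per burst
delta_f = 0.5e6    # frequency step, Hz
tau = 2.0e-6       # pulse width, s

B_inst = 1.0 / tau             # instantaneous bandwidth of a single pulse (0.5 MHz)
B_eff = N * delta_f            # effective bandwidth of the burst (150 MHz)
delta_R = c / (2 * B_eff)      # range resolution (1 m)
R_u = c / (2 * delta_f)        # unambiguous range window (300 m)
print(B_inst, B_eff, delta_R, R_u)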
11.3 STEP-FREQUENCY RADAR
The step-frequency radar consists of components commonly found in a typical coherent radar. The major addition is a coherent step-frequency synthesizer, which includes additional up-conversion and down-conversion circuitry, as shown in Figure 11.2. The step-frequency source allows the pulse-to-pulse variation in the frequency of the signal. The transmitter and receiver front end must be wideband to accommodate the frequency changes in the transmit and receive signals. On the transmit side, the coherent oscillator (coho) and synthesizer frequencies are first added together in a mixer. The sum of the two frequencies is then up-converted to RF by mixing with a stable local oscillator (stalo). The resulting signal, consisting of the sum of the stalo, coho, and synthesizer frequencies, is amplified and transmitted. Thus, the frequency of the nth transmitted pulse within the burst of N pulses is given by

fn = fstalo + fcoho + n∆f
(11.4)
The pulse index n starts from zero, and the last value is N − 1. On the receive side, the returned signal is down-converted to the IF frequency by mixing it with the stalo output, and it is then amplified and band-limited in the IF amplifier. In the next step, the IF output is further down-converted by mixing it with the output of the frequency synthesizer.
FIGURE 11.2 The block diagram of the step-frequency radar is similar to that of a conventional radar except for the addition of the frequency step synthesizer and the corresponding up-conversion and down-conversion circuitry. Signal processing for a step-frequency radar generally will be more complex but is performed on a smaller number of samples.
In this mechanization of the step-frequency radar, the frequency synthesizer is synchronized to keep the transmitter and receiver on the same frequency within each pulse repetition interval. The IF output is down-converted to baseband in the synchronous detector. To retain both the amplitude and phase information, the output of the synchronous detector is in the form of in-phase (I) and quadrature (Q) components. The synchronous detector generates the two components by mixing the signal with two 90° phase-shifted outputs from the coho. The two-channel detector eliminates blind phases, improves the signal-to-noise ratio by 3 dB, and discriminates positive from negative Doppler frequencies. When the radar transmits a pulse, the synchronous detector output is sampled, digitized, and stored. Samples from the I and Q channels form a complex sample consisting of real and imaginary components. The typical sampling rate is one complex sample per pulse width. In the rest of this chapter, the word complex will be dropped, and the word sample will imply a complex sample. Each sample is termed a range bin, as it represents the signal from a range window of length cτ/2, where τ is the pulse width. The synchronous detector output for all range bins of interest, due to all N pulses in a burst, is collected prior to performing any processing. The first step in the computer processing of the step-frequency signals is range binning, that is, organizing the data in a range-pulse matrix as shown in Figure 11.3. Each column represents the synchronous detector output corresponding to a particular range bin due to the N frequency-stepped pulses. Taking a discrete Fourier transform (DFT) of each column resolves that range bin of width cτ/2 into N equal subdivisions for the typical case when τ∆f is chosen to be unity. The theory of this process will be covered in the next section. In the literature, this subdivided range bin is termed the high range resolution profile. This process is equivalent to conventional pulse compression with a compression ratio of N; however, there is also a difference between the two. With conventional pulse compression, the range resolution is obtained in a single pulse whereas, with the step-frequency waveform, the resolution is obtained over N pulses.
FIGURE 11.3 High resolution profile of a range bin. Each column represents a return from a specific range bin due to N frequency-stepped pulses. DFT of a column produces the detailed picture of the range bin in the slant range by dividing it into N finer subdivisions. The width of a range bin is Cτ/2, and the width of finer subdivisions will be Cτ/2N. This process is equivalent to conventional pulse compression with a pulse compression ratio (PCR) of N. However, if the product τ∆f is other than 1, the PCR will be Nτ∆f.
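In code, the operation of Figure 11.3 is a single transform along the pulse axis of the stored complex samples. The fragment below (illustrative Python; the matrix here is filled with placeholder noise rather than real receiver data) shows the data layout and the column-wise DFT:

import numpy as np

N, M = 64, 128                   # frequency-stepped pulses, coarse range bins
rng = np.random.default_rng(3)
S = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))   # placeholder I/Q samples

# Column-wise DFT: each coarse range bin of width c*tau/2 is resolved into N subdivisions
hrr_profiles = np.abs(np.fft.fft(S, axis=0))     # shape (N, M): N fine cells per coarse bin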
11.4 MODELING OF TARGET RETURN FOR STEP-FREQUENCY RADAR
This section develops the underlying theory and the relevant equations needed later for the design of step-frequency radar. First, we develop the mathematical model for the signal return from a point target when a frequency-stepped waveform is transmitted. Examination of the expression for the target return will reveal the unique characteristics and the problems associated with the step-frequency radar. If the reference signal for the nth pulse is

A_1 \cos 2\pi (f_0 + n\Delta f)t

then the target signal after a round-trip delay of 2R/c can be represented as

A_2 \cos 2\pi (f_0 + n\Delta f)\left(t - \frac{2R}{c}\right)

where f0 is the radio frequency (RF) and consists of the sum of the stalo and coho frequencies. The output of the phase detector can be modeled as the product of the received signal with the reference signal, followed by a lowpass filter. This is equivalent to the difference-frequency term of the above-mentioned product. For real sampling, the phase detector output for the nth pulse is A\cos\phi_n, and for quadrature sampling it is Ae^{-j\phi_n}, where

\phi_n = 2\pi (f_0 + n\Delta f)\frac{2R}{c}
(11.5)
To simplify the discussion, stationary and moving targets will be treated separately.
11.4.1 STATIONARY TARGET CASE
We rewrite Equation (11.5) for the stationary target as
\phi_n = \frac{4\pi f_0 R}{c} + 2\pi\,\frac{\Delta f}{T}\cdot\frac{2R}{c}\,nT
(11.6)
The first term represents a constant phase shift, which is not of any practical significance. The second term is the multiplication of the rate of change of frequency ∆f/T with the round-trip time 2R/c. This term represents a shift in frequency during the round-trip time 2R/c. Thus, the range (or the round-trip time) is converted into a frequency shift (which is analogous to the conversion of range to frequency in linear frequency-modulated CW radar). Therefore, it is possible to resolve and measure the range by resolving the frequency, which can be done by taking the DFT of the received signal from the N frequency-stepped pulses. Since the range is measured by taking the DFT, the range measurement will have the same limitations as frequency measurement by DFT. Thus, the range resolution ∆R and the unambiguous range Ru depend on the frequency resolution and the maximum unambiguous frequency measurable by the DFT, respectively. To determine ∆R and Ru, let the frequency shift fs due to the range be written as

f_s = \frac{\Delta f}{T}\cdot\frac{2R}{c}
(11.7)
Rewriting R in terms of fs,

R = \frac{cT}{2\Delta f}\, f_s        (11.8)

Taking the differential of both sides,

\Delta R = \frac{cT}{2\Delta f}\, \Delta f_s        (11.9)
The above equation expresses the range resolution ∆R in terms of the frequency resolution ∆fs. The frequency resolution obtained from the DFT is the inverse of the signal duration, that is,

\Delta f_s = \frac{1}{NT}
(11.10)
From Equations (11.9) and (11.10), the range resolution ∆R is obtained as

\Delta R = \frac{c}{2N\Delta f}
(11.11)
Similarly, we obtain the unambiguous range as

R_u = \frac{c}{2\Delta f}
(11.12)
Equations (11.11) and (11.12) can be combined as

\Delta R = \frac{c}{2N\Delta f} = \frac{c\tau/2}{N\tau\Delta f} = \frac{R_u}{N}
(11.13)
Returns from the N frequency-stepped pulses are processed by taking their DFT, and the DFT coefficients represent the resolution of the range Ru into N subdivisions, each of width c/2N∆f. Equation (11.13) implies that a range bin of width cτ/2 is resolved into Nτ∆f parts with a range resolution of c/2N∆f. This is equivalent to conventional pulse compression, where a pulse of width τ is compressed into a pulse of width 1/N∆f, giving a pulse compression ratio of Nτ∆f. The size of the range bin cτ/2 as compared with Ru depends on the product τ∆f, as given by

\frac{c\tau/2}{R_u} = \tau\Delta f
(11.14)
The product τ∆f plays an important role in waveform design, and three cases based on it are shown in Figure 11.4 and discussed below. Figure 11.4a shows the case where τ∆f is equal to 1, and the original range bin is equal to the unambiguous range window Ru. The range bin of width cτ/2 is resolved into N parts with an effective range resolution of cτ/2N. This case may be employed for mapping of stationary or rotating targets with no translational motion. However, this case would be unsuitable for detection of moving targets. Clutter will fill the entire unambiguous range window Ru without leaving any clear space available to which nonstationary targets can move.
FIGURE 11.4 Three cases of range resolution based on the value of τ∆f, where N and τ are held constant for three cases.
Figure 11.4b depicts the second case, when τ∆f < 1, that is, the original range bin comprises only a fraction of the unambiguous range window. The range resolution (cτ/2)/(Nτ∆f) is poorer than in the previous case; however, there is clear space available that can be used for the detection of moving targets, as explained later in this chapter. Figure 11.4c depicts the third case, τ∆f > 1, where the range bin is larger than the unambiguous range window. Although the range resolution is better, the range profile is aliased and distorted. Thus, for all practical cases of interest, the range bin cτ/2 should not exceed Ru, which translates to constraining the product τ∆f not to exceed unity.
11.4.2 MOVING-TARGET CASE
This section considers the case of a moving target. The concepts and equations developed in the previous section remain applicable here, since the moving-target expression includes the same terms. To determine the expression for the radar return from a target moving at constant velocity υ, let the range R in Equation (11.5) for the nth pulse be represented as

Rn = R0 + υnT
(11.15)
From Equations (11.5) and (11.15), one obtains the phase of the signal from a moving target as

\phi_n = 2\pi (f_0 + n\Delta f)\,\frac{2}{c}\,(R_0 + \upsilon nT)
(11.16)
The above equation can be rewritten to identify the frequency components in each term as

\phi_n = \frac{4\pi f_0 R_0}{c} + \underbrace{2\pi\frac{\Delta f}{T}\frac{2R_0}{c}nT}_{f_s} + \underbrace{2\pi\frac{2\upsilon f_0}{c}nT}_{f_d} + \underbrace{2\pi\frac{2\upsilon n\Delta f}{c}nT}_{\text{spread}}
(11.17)
Examination of the above equation reveals the problems as well as the characteristics of step-frequency waveforms. The first two terms are the same as for stationary targets and have been discussed in the previous section. Note that it is the second component that gives the finer range resolution capability to frequency-step radars. This range resolution of c/2N∆f is equivalent to the resolution obtained by conventional pulse compression with a compression ratio of Nτ∆f. The maximum value of the compression ratio is limited to N for practical cases (as τ∆f is constrained to be equal to or less than unity). For detection of moving targets, range resolution can be traded off for the creation of clear space within the range window Ru. The third term in Equation (11.17) represents the Doppler frequency shift due to the target motion, and it adds to the frequency shift of the second term. The range resolution process unwittingly treats the Doppler frequency as a frequency shift due to range and thus shifts the target range from its true range. This range shift can be easily calculated as

R_s = \frac{\upsilon T f_0}{\Delta f}
(11.18)
which, in terms of processed range bins (of width c/2N∆f), is given by

L = \frac{2\upsilon T f_0 N}{c} = \frac{2\upsilon NT}{\lambda}
(11.19)
where λ is the wavelength corresponding to frequency f0. The fourth term in Equation (11.17) is due to the interaction of the changing frequency of the step waveform with the target motion. The Doppler shift changes with each pulse (even for constant-velocity targets) because of the change in pulse frequency. The Doppler shift due to the constant frequency component f0 has already been taken into account in the third term. The fourth term gives the Doppler shift 2υn∆f/c due to the frequency step of the nth pulse. Thus, the return from a moving target due to N frequencies will contain N frequency components in the data domain instead of, ideally, one component. This spread in frequency of 2υn∆f/c, when processed by the DFT, will lead to a spread in range of υNT, which in terms of range bins is given by
P = \frac{\upsilon NT}{\Delta R} = \frac{2\upsilon N^2 T\Delta f}{c}
(11.20)
The reason for the range spread is that taking the DFT is not optimal processing for a target return spread over many frequencies. Range spread has several negative consequences, such as loss of signal magnitude, range accuracy, and range resolution. The concepts developed so far in this section can be summarized as follows. The basic data for a step-frequency radar consist of N complex samples obtained by quadrature sampling the output of the synchronous detector. These samples are from a particular range bin or target in response to the transmission of N frequency-stepped pulses. If the target is stationary, the DFT of the N data samples resolves the range bin into Nτ∆f fine-range bins of width c/2N∆f. The DFT coefficients represent the target reflectivity of different parts of a range bin, or of an extended target within a range bin. Plots of the magnitude of the DFT coefficients are often called (high-resolution) range profiles, as the DFT process resolves the range bin into smaller parts. This process is equivalent to conventional pulse compression with a compression ratio of Nτ∆f. The product τ∆f should not exceed unity, to avoid distortion of the range profile by wraparound. If the target is moving, the range profile will shift in range and spread beyond its normal domain. A point scatterer that ideally should occupy one fine bin will be spread across υNT/∆R
bins. It should be noted that the range shift L is the lagging edge of the spread. Thus, the point scatterer will occupy bins L to L + υNT/∆R instead of being in one bin. This is not a satisfactory result for target detection or mapping. The solution to this problem is to apply a correction factor to counter the spread, which will be discussed in the next section. To illustrate the concepts developed in this section, a computer simulation was developed using MATLAB. It demonstrates the range resolution capability of step-frequency radar by generating a detailed profile of a range bin containing three scatterers. We shall also explore the effect of target motion and of parameter changes on the range profile. The parameters of the waveform used are as follows:

N = 300
∆f = 0.5 MHz
τ = 2 µs
f0 = 3 GHz
PRF = 1 kHz
Ranges of the 3 targets = 12,010, 12,030, and 12,060 m
∆R = 1 m
cτ/2 = Ru = 300 m
In this example, the product τ∆f is chosen to be unity; therefore, the range bin (cτ/2) is equal to the unambiguous range window Ru, which is 300 m. A DFT of 300 signal samples (from a range bin containing the target scatterers) divides the range bin into 300 finer range cells, giving a pulse compression ratio of 300 and a range resolution of 1 m. Targets are located in a range bin starting at 12 km. The relative scatterer ranges within the bin are 10, 30, and 60 m. Figure 11.5 shows the magnitude of the DFT of the signal samples that represent a range bin containing three stationary scatterers. Processing the signal by taking a DFT has resolved these three scatterers with a resolution of 1 m. The scatterers clearly stand out, and the relative location of each scatterer is correctly indicated within the range bin. Each DFT point represents a range interval equal to ∆R (which is 1 m for all the examples in this chapter). The effect of scatterer velocity is illustrated in Figure 11.6a, where scatterer S1 is moving with a radial velocity of 40 m/s toward the radar, and scatterers S2 and S3 are stationary. The response of S1 is dispersed over range, its magnitude is attenuated, and its range is shifted. With an increase in velocity, these effects become more pronounced, as shown in Figure 11.6b, where scatterer S1 is moving with a velocity of 400 m/s. Figure 11.6c shows the range profile with all three scatterers moving with an equal radial velocity of 50 m/s. The range shift and the range dispersion in terms of the number of processed bins are given by Equations (11.19) and (11.20). The effects of target motion can be mitigated by controlling the appropriate factors in these equations. For example, an increase in PRF will decrease the dispersion of the signals. The effect of increasing the PRF on the range profile of Figure 11.6c is shown in Figure 11.7. The parameters in Figure 11.7 are the same as in Figure 11.6c, except that the PRF is changed from 1 to 10 kHz. The range shift and dispersion are much smaller, and the range profile looks much closer to the stationary case of Figure 11.5. Other parameters can be similarly controlled to decrease the effect of target motion. However, there are other constraints on the parameters that have to be met, and these are described in Appendix 11.A. A better method of controlling the adverse effect of target motion, by phase correction, will be discussed in the next section.
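The essence of this simulation is easy to reproduce. The sketch below (a Python illustration written for this text; the chapter's own MATLAB code is not reproduced) generates the phase history of Equation (11.16) for each scatterer with the parameters listed above and forms the range profile with a DFT. With all velocities set to zero it reproduces the essential content of Figure 11.5 (three peaks at 10, 30, and 60 m); giving a scatterer a nonzero velocity shows the shift and spread predicted by Equations (11.19) and (11.20).

import numpy as np

c = 3.0e8
N, delta_f, tau = 300, 0.5e6, 2.0e-6
f0, T = 3.0e9, 1.0e-3                       # 3 GHz carrier, 1 kHz PRF
amps = [1.0, 0.6, 0.3]                      # three scatterers of unequal RCS
ranges = [12010.0, 12030.0, 12060.0]        # 10, 30 and 60 m into the 12 km range bin
vels = [0.0, 0.0, 0.0]                      # set e.g. vels[0] = -40.0 (closing) to see shift and spread

n = np.arange(N)
x = np.zeros(N, dtype=complex)
for A, R0, v in zip(amps, ranges, vels):
    Rn = R0 + v * n * T                     # range at the nth pulse, Eq. (11.15)
    # sign conventions chosen so that range maps to increasing DFT cell index
    x += A * np.exp(2j * np.pi * (f0 + n * delta_f) * 2.0 * Rn / c)

profile = np.abs(np.fft.fft(x))             # high-resolution range profile, 300 cells of 1 m
print("strongest fine-range cells:", np.sort(np.argsort(profile)[-3:]))   # expect 10, 30, 60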
11.5 DETECTION OF A MOVING TARGET IN CLUTTER
Detection of low-RCS targets in heavy clutter is a challenging radar problem that requires large improvements in signal-to-noise and signal-to-clutter ratios. The signal-to-noise ratio is improved by the integration of a larger number of pulses. The signal-to-clutter ratio is improved by the
FIGURE 11.5 High-resolution range profile of a range bin containing three stationary targets or scatterers. In this example, signals are simulated for a 300 m wide range bin containing three stationary scatterers at ranges of 10, 30, and 60 m. DFT maps the signals collected from the range bin due to 300 frequency-stepped pulses into a detailed range profile, which shows three scatterers at correct ranges. The three scatterers have unequal RCSs. Product τ∆f is unity for this case.
reduction of clutter, which is achieved in two steps: first, by limiting the amount of clutter entering the radar receiver and, second, by cancelling the clutter in the signal processor. The amount of clutter entering the radar receiver can be reduced by decreasing the effective pulse width. A short effective pulse width is normally achieved by employing standard techniques of pulse compression, which compress the long coded transmit pulses required for adequate average power. However, pulse compression increases the instantaneous bandwidth, which requires wider bandwidth components and higher A/D sampling rates. The hardware and bandwidth requirements increase significantly when conventional pulse compression is used for the very high range resolution needed to keep the clutter in the radar receiver very low. These disadvantages can be avoided with the step-frequency waveform while still achieving high range resolution. As noted in the previous section, the radar return from a moving target due to N frequency-stepped pulses is spread out in the Doppler domain. When this signal is processed by taking the DFT, the target spreads out in the range domain too, resulting in a loss of signal-to-noise ratio, range resolution, and range accuracy. The solution to this problem is to counter the fourth term in Equation (11.17), which is the cause of the spread. This can be done by multiplying the collected signal from the N pulses with the following factor before taking the DFT:

C_1(n) = e^{-j4\pi\upsilon n^2 T\Delta f/c}        (11.21)
FIGURE 11.6 Effects of scatterer velocity on the range profile. (a) Scatterer S1 is moving at 40 m/s, and S2 and S3 are stationary. In the profile, S1 is shifted in range due to the Doppler shift. Furthermore, S1 is attenuated in magnitude and spread out in range. (b) Same as case (a), except that S1 is moving at a velocity of 400 m/s, which magnifies the effect of the velocity on the attenuation and spread of S1. The increased velocity has further deteriorated the S/C ratio and range resolution. (c) All three scatterers are moving at 50 m/s, so they are not as clearly localized as in Figure 11.5. There is a clear loss of crispness due to the range spread.
This correction factor counters the spread of the signal by rotating the samples in the opposite direction. For example, all the components of a point target will consolidate in one location with the application of the correction. It may also be preferable to eliminate the constant range shift at the same time the despreading is performed. This can be done by including the third term of Equation (11.17) in the correction factor. With this change, the correction factor becomes

C(n) = e^{-j4\pi(\upsilon f_0 nT + \upsilon n^2 T\Delta f)/c}        (11.22)
The above correction factor essentially takes out the target motion and its negative effects while still retaining the advantages of step-frequency waveforms. Correction followed by a DFT generates
FIGURE 11.7 Effect of PRF on motion-induced range spread and signal loss. As in Figure 11.6c, the scatterers S1, S2, and S3 are moving at 50 m/s, except that the PRF is changed from 1 to 10 kHz. As a result, the signal magnitude has gone up, and the range spread is reduced.
a range profile as if the target were stationary at its original location. Dispersion and range shift are eliminated, and the magnitude of the target response is restored to the proper value. The correction factor requires a good estimate of the target velocity. Since the target velocity is generally unknown, compensation may be applied at a set of velocities uniformly spaced between the minimum and maximum expected target velocities. The compensation technique discussed so far will counteract the negative effects. However, it will put the target peak back into the clutter and will also spread the clutter beyond its range domain of cτ/2. Thus, the clutter must be canceled prior to applying the velocity compensation, so that the target can be detected while still maintaining accurate target range and fine range resolution. The following steps are performed to detect moving targets in clutter; they are also shown in Figure 11.8a.
1. Transform the data domain into the range domain by taking the DFT of the weighted N samples of a range bin (of width cτ/2) from the N frequency-stepped pulses.
2. Apply clutter cancellation by zeroing out the points corresponding to the clutter extent in the range domain obtained in the first step. Also apply weighting to the rest of the data to reduce the sidelobes that will be encountered later in the reverse transformation.
3. Convert the modified range domain data back into the data domain by taking the IDFT.
4. Apply velocity compensation to the data of step 3 by multiplying with the correction factor C(n) of Equation (11.22). Estimate the target velocity, or use several velocities between the minimum and maximum expected target velocities.
5. Convert the compensated data back into the range domain via the FFT. If there is more than one range-domain data set, choose the one with the highest and sharpest output in the clutter-free area. The compensation velocity corresponding to the chosen set is the
FIGURE 11.8 Signal processing for detection of moving targets in clutter, which includes clutter cancellation and velocity compensation. (a) In this scheme, velocity compensation is in the time domain, and compensation is performed by multiplication. (b) This scheme uses a velocity compensation factor in the frequency domain, and compensation is performed by convolution. Both schemes give equivalent results, but (a) requires fewer operations.
correct target velocity, and that set shows the true target range position and its detailed range profile. The range resolution is restored to the theoretical c/2N∆f. Velocity compensation can also be applied directly in the range domain. This would require only one Fourier transform operation; however, one would need a transformed compensation term, and the multiplication in the compensation process would be replaced by a convolution. Figure 11.8b gives this alternative clutter-cancellation scheme. The detection of moving targets in clutter presented in this section will now be verified using simulation. The radar parameters used in the simulation are the same as before except for the pulse width, which is chosen to be 0.5 µs. This change in pulse width makes τ∆f equal to 0.25, the pulse compression ratio Nτ∆f equal to 75, and the range bin size cτ/2 equal to 75 m. The range resolution ∆R and unambiguous range Ru are the same as before. The simulated signal samples contain clutter and noise. The signal-to-clutter ratio (SCR) is set to –20 dB with respect to the signal from the first target. Note that the DFT will map the 300 samples from the frequency-stepped pulses into 300 points across Ru. However, only the first 75 points of the transformed data represent the clutter or stationary-target signals coming from the range bin, and the rest of the DFT coefficients represent
empty space, made available to accommodate Doppler-shifted signals from moving targets, as shown in Figure 11.9. This simulation includes three targets at 10, 30, and 60 m from the beginning of a range cell. Note that the size of the range cell is 75 m. If these were stationary targets, they would appear at their true locations after DFT processing, as shown in Figure 11.10. These targets cannot be detected, as they are buried in the clutter. However, moving targets at the same locations will move out of the clutter area after DFT processing, as shown in Figure 11.11. The moving targets suffer a loss in signal magnitude as well as in resolution due to the spreading effect. Also, the target ranges are not correct, due to the unknown range shift caused by the target motion. These problems are solved by cancellation of the clutter followed by velocity compensation. Figure 11.12 shows the range profile of Figure 11.11 after clutter cancellation and velocity compensation. In this figure, the clutter is cancelled, and the targets have recovered their resolution and are displayed at the correct range.
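The five processing steps can be prototyped compactly. The sketch below is an illustrative Python fragment under assumed, idealized conditions (the clutter is placed exactly on cell centers, the window weighting of Figure 11.8a is omitted, and the trial velocity is set to the true one); it is not the chapter's MATLAB simulation, but it follows the same sequence: transform to the range domain, zero the clutter cells, transform back, apply the compensation factor of Equation (11.22), and transform once more.

import numpy as np

c = 3.0e8
N, delta_f, tau = 300, 0.5e6, 0.5e-6        # tau*delta_f = 0.25, so 75 clutter cells out of 300
f0, T = 3.0e9, 1.0e-3
n = np.arange(N)

def echo(R0, v, amp=1.0):
    # Data-domain samples of one point scatterer, phase model of Eq. (11.16);
    # sign conventions chosen so that Eq. (11.22) applies directly with the FFT below.
    Rn = R0 + v * n * T
    return amp * np.exp(2j * np.pi * (f0 + n * delta_f) * 2.0 * Rn / c)

# Three moving targets 10, 30 and 60 m into the bin, buried in strong stationary clutter
v_true = -20.0
x = echo(12010.0, v_true) + 0.7 * echo(12030.0, v_true) + 0.4 * echo(12060.0, v_true)
for m in range(75):                          # idealized clutter placed exactly on cell centers
    x += 2.0 * echo(12000.0 + m, 0.0)

# Steps 1-3: to the range domain, zero the clutter cells, back to the data domain
profile = np.fft.fft(x)
profile[: int(N * tau * delta_f)] = 0.0      # clutter cancellation over the first 75 cells
x_cc = np.fft.ifft(profile)

# Step 4: velocity compensation factor of Eq. (11.22) for a trial velocity
v_trial = v_true                             # in practice, several trial velocities are examined
C = np.exp(-4j * np.pi * (v_trial * f0 * n * T + v_trial * n**2 * T * delta_f) / c)

# Step 5: back to the range domain; the targets reappear sharply at their true cells
final = np.abs(np.fft.fft(x_cc * C))
print("strongest range cells:", np.sort(np.argsort(final)[-3:]))    # expect 10, 30, 60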
11.6 APPLICATION OF STEP-FREQUENCY WAVEFORM IN IMAGING
Two-dimensional imaging techniques can be used in target identification, target classification, and diagnostic RCS measurements. To form a radar image of an object, resolution in two orthogonal dimensions, that is, slant range (along the radar line of sight) and cross range (perpendicular to the radar line of sight), is required.
FIGURE 11.9 Allocation of clutter space and clutter-free space for detection of moving targets by proper choice of τ∆f. In this example, τ∆f is 0.25, which makes the clutter space consist of the first 75 (Nτ∆f) points. Generation of the clutter-free area has reduced the pulse compression factor from N (300) to Nτ∆f (75).
FIGURE 11.10 Targets in clutter. This figure shows the true location of three scatterers moving at 50 m/s. These targets cannot be detected directly, as they are embedded in clutter.
FIGURE 11.11 Range profile for three moving scatterers (or point targets) in the presence of clutter. Clutter occupies the first 75 m and scatterers are physically located within the clutter area. DFT processing moves the targets out of the clutter area (due to the Doppler shift) but, in the process, they have suffered attenuation and dispersion.
FIGURE 11.12 Range profile of the bin containing three scatterers after velocity compensation. Clutter is canceled, and targets show correct range with restored magnitude and resolution.
The quality of the image depends on the resolution in both dimensions, as shown in Figure 11.13. Traditionally, pulse compression waveforms of large bandwidth are used for high slant-range resolution, and they place stringent requirements on the A/D. In their place, a step-frequency waveform of lower instantaneous bandwidth can be used for the same high range resolution, easing the A/D requirements. However, instead of the single pulse of a conventional waveform, it requires a burst of frequency-stepped pulses to obtain a range profile in the slant range. Resolution in the cross range is obtained by resolving the Doppler frequency shift caused by the relative motion between the radar and the target in the azimuth (or cross-range) dimension. Different scatterers on the target in the cross-range dimension have different line-of-sight velocities toward the radar and give different Doppler frequency shifts. Thus, resolving the frequency shift will resolve the scatterers in the cross range. In synthetic aperture radar (SAR) imaging, it is the radar platform that provides the relative motion whereas, in inverse synthetic aperture radar (ISAR), it is the target that provides the relative motion. In either case, however, the relative motion should involve aspect angle changes between the radar and the target. Cross-range resolution is inversely proportional to the aspect angle change. Obviously, rotating the target will provide aspect angle change. Generally, translational motion between radar and target involves some aspect angle change too, unless there is the unlikely situation in which their velocity vectors point directly toward one another. In this section, discussion will be from the ISAR point of view, although the general principles are the same for both SAR and ISAR.
FIGURE 11.13 Two-dimensional imaging. A real-beam range bin of width Cτ/2 provides resolution in range only; a processed range bin of width C/2Nτ∆f provides resolution in both range and cross range (azimuth).
To form an image using a step-frequency waveform, a sequence of M bursts is transmitted, wherein each burst consists of a group of N stepped-frequency pulses, as shown in Figure 11.14. In ISAR, the target turns in azimuth as the bursts are transmitted and received. Therefore, different bursts will be received from different azimuth angles. Each burst is processed to generate a detailed slant-range profile. A sequence of such profiles from various azimuth angles is then processed to generate a two-dimensional image.
11.6.1 SLANT-RANGE RESOLUTION
Waveform parameter design for step-frequency radar is significantly more complex than for constant-frequency waveforms. This section discusses the factors that go into the selection of step-frequency waveform parameters for a desired slant-range resolution. The slant-range resolution ∆Rs and the unambiguous range window Ws, and their mutual dependence, are given by the following equations:

\Delta R_s = \frac{c}{2N\Delta f} = \frac{c}{2B_{eff}}
(11.23)
W_s = R_u = \frac{c}{2\Delta f} = N\Delta R_s
(11.24)
To avoid aliasing of the range profile, the maximum target length L should not exceed the unambiguous range window, that is,

L \le W_s = \frac{c}{2\Delta f}
(11.25)
Also, the pulse width should encompass the entire target, that is,

L \le \frac{c\tau}{2}
(11.26)
FIGURE 11.14 Step-frequency waveform for two-dimensional imaging. Each burst provides a detailed range profile from a specific angle. A collection of M range profiles due to the M bursts is processed for azimuth resolution.
Designing the waveform for a step-frequency radar requires determining the values of the frequency step ∆f, the pulse width τ, and the number of pulses N within a burst. Each parameter has to satisfy several constraints. These constraints on the waveform parameters are interdependent and cannot be neatly isolated. The guidelines and the factors involved in the selection are as follows. The frequency step ∆f has to satisfy three constraints. The first constraint is derived from Equation (11.25) as

\Delta f \le \frac{c}{2L}
(11.27)
The second constraint stems from the fact that the desired range resolution depends on the effective bandwidth N∆f, as given by Equation (11.23). The third constraint that ∆f has to satisfy is that the product τ∆f should not exceed unity. In most cases, it is preferable to keep it well below unity. If the target is stationary and there is no clutter or interference, one may choose the unambiguous window equal to the maximum target length, with both equal to cτ/2. This is the case where τ∆f is equal to unity, and it gives the maximum possible resolution without aliasing. However, if the target is moving, it may be preferable not to use the equality in the constraints and to have L < Ws and τ∆f less than 1 to avoid aliasing. If the purpose is to increase the clutter-free space, as in the detection of moving targets in clutter, then the product τ∆f should be significantly less than unity. Requirements on the selection of τ are intertwined with ∆f. The constraints and factors involved in the selection of τ are enumerated below:
1. The pulse width should satisfy τ ≥ 2L/c.
2. A larger τ improves the signal-to-noise ratio and thus leads to a longer detection range or better picture quality.
3. The pulse width has to be adjusted to satisfy the constraint on the product τ∆f, as explained earlier.
The factors that play a part in the selection of the number of pulses N in a burst are as follows:
1. The time on target tTOT, or coherent processing interval, is related to N as

t_{TOT} = \frac{N}{f_{PRF}}
(11.28)
2. The range resolution c/2N∆f is partly dependent on N.
3. Larger N will increase the integration gain and the signal-to-noise ratio, which in turn will increase the detection range or the picture quality.
4. If velocity compensation is not performed, then increasing N will increase the range shift and dispersion for moving targets, as is apparent from Equations (11.19) and (11.20).
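The interplay of these constraints can be captured in a short calculation (an illustrative Python sketch with assumed requirements; the target length, required resolution, and PRF below are examples, not recommended values): choose ∆f from the target length, τ from the requirement that the pulse cover the target, and N from the desired resolution, then confirm the τ∆f constraint.

import math

# Illustrative parameter selection under the constraints of this section (assumed requirements)
c = 3.0e8
L_target = 30.0        # assumed maximum target length, m
dR_required = 0.25     # assumed required slant-range resolution, m
f_prf = 10.0e3         # assumed PRF, Hz

delta_f = c / (4.0 * L_target)          # half of the Eq. (11.27) limit, leaving clear space for movers
tau = 2.0 * L_target / c                # the pulse must encompass the whole target
N = math.ceil(c / (2.0 * dR_required * delta_f))   # Eq. (11.23) solved for N
t_tot = N / f_prf                       # time on target, Eq. (11.28)

assert tau * delta_f <= 1.0             # third constraint on the frequency step
print(f"delta_f = {delta_f/1e6:.2f} MHz, tau = {tau*1e9:.0f} ns, N = {N}, CPI = {t_tot*1e3:.1f} ms")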
11.6.2 CROSS-RANGE RESOLUTION
FIGURE 11.15 Cross-range resolution from target rotation. As the target rotates, different scatterers in the same range bin have different radial velocities and thus different Doppler shifts toward the radar. These scatterers can be differentiated in the cross range by resolving their Doppler frequencies.

In this section, we briefly give the principle and the relevant equations of cross-range resolution. To explain the principle, we consider a rotating target in the beamwidth of a step-frequency radar, as shown in Figure 11.15. A scatterer at a radial distance Rc from the center of rotation will have a radial velocity ωRc toward the radar. The Doppler shift fd from this scatterer is given by

f_d = \frac{2\omega R_c}{\lambda}
(11.29)
or

R_c = \frac{\lambda}{2\omega}\, f_d        (11.30)
The separation ∆Rc of two scatterers in the cross range is related to their corresponding separation in Doppler frequency as

\Delta R_c = \frac{\lambda}{2\omega}\, \Delta f_d
(11.31)
which is obtained by taking the differential of both sides of Equation (11.30). Resolution in the cross range is achieved by resolving the Doppler frequency, which is done by taking the DFT. The resolution achieved with the DFT is the inverse of the signal duration, that is, 1/MNT. Thus, we can write the cross-range resolution as

\Delta R_c = \frac{\lambda}{2\omega}\cdot\frac{1}{MNT} = \frac{\lambda}{2\theta}
(11.32)
where θ is the viewing angle by which the target turns during the signal integration time MNT. Better resolution in the cross range requires a larger viewing angle θ. However, the opposing constraint on the viewing angle is that, if it is too large, scatterers may migrate across range bins which, unless corrected, will cause blurring. Cell migration can occur in both slant range and cross range, and the worst case is at the target extremities. For a target with maximum length L rotating about its center, the maximum number of cells that a scatterer located at the edge of the target will migrate is given by

K = \frac{L\omega MNT}{2\Delta R} = \frac{L\Delta\theta}{2\Delta R} = \frac{L\lambda}{4(\Delta R)^2}
(11.33)
where ∆R is either the slant-range or the cross-range resolution. Thus, cell migration becomes a more severe problem for longer wavelengths, larger target sizes, and finer range resolutions. Blurring may be avoided by picking parameters such that K is less than 1. However, this is likely to put an unacceptable constraint on the selection of range resolution and wavelength. In the more likely scenario, K will be greater than 1, and cell alignment will be required before the DFT is taken to form an ISAR image. Polar reformatting can be used to solve this problem. In other applications, where step-frequency is not used, the sampling time can be delayed or advanced to align the range cells. Two more effects need to be considered in connection with cross-range resolution, as described below. In ISAR imaging, the motion of any scatterer on a rotating target is circular, and thus its velocity component along the radar's line of sight is not constant. The resulting Doppler shift is not constant over the integration time. When the DFT is taken to resolve the cross range, the changing Doppler shift from one scatterer will smear into adjoining cross-range bins, resulting in blurring of the image. This phenomenon is similar to quadratic phase error in SAR, and focusing is required to correct it. As discussed earlier in this chapter, the target Doppler affects the slant-range resolution process in two ways. First, it shifts the target range from its true range and, second, it spreads the target return over many bins. In ISAR imaging, the Doppler shift due to the target rotation will cause a range offset of ωRcTf0/∆f which, in terms of bin numbers, is given by 2ωRcTf0N/c. The range offset will be constant across range profiles and thus does not play any part in slant-range cell alignment. However, to avoid range smearing, it may be desirable to constrain the Doppler shift due to target rotation to be less than half of a frequency bin in the slant range, that is,

\frac{2\omega R_c}{\lambda} \le \frac{PRF}{2N}
(11.34)
This would put a constraint on the maximum target rotation of

\omega_{max} = \frac{\lambda}{4L} \frac{PRF}{N}    (11.35)
If this constraint is observed, it will eliminate the range smearing as well as the range offset.
11.6.3 TWO-DIMENSIONAL IMAGING
The actual process of forming an ISAR image involves taking a 2D DFT of the received data from M bursts. The target rotates while the pulses are transmitted and the corresponding target returns are received. At different bursts, the radar looks at the target in the slant range from different azimuth angles. It is convenient to explain the imaging process with the received data organized in a pulse-burst matrix, as shown in Figure 11.16. The vertical axis consists of pulses within a burst at various frequencies. The horizontal axis represents bursts (at various azimuth angles). Each column consists of the signal return due to all the pulses within that burst. After organizing the data, the next step in forming an ISAR image is to take the DFT of the matrix column-wise. This converts the data in each column into slant-range profiles, which is the resolution of the coarse range bin into finer range bins. At this point we have achieved resolution in the slant-range dimension from each of the M azimuth locations. Each row of the transformed matrix represents M samples of the same fine slant-range bin from M azimuth angles. In the next step, we take the DFT of the transformed matrix row-wise. This divides the cross range into M fine cells corresponding to each slant-range cell. The magnitude of the complex values of the doubly transformed matrix will be taken to compute pixel
FIGURE 11.16 Step-frequency ISAR processing. The FFT of each burst (that is, the column-wise FFT) transforms the return signal due to N frequency-stepped pulses into N processed slant-range bins, each of width c/2N∆f. Now there are M processed range profiles, each from a different azimuth angle. The row-wise FFT then transforms the azimuth data into cross-range data. Cross range is resolved into M bins, each of width λ/2ωMNT or λ/2θ.
intensity. The resulting matrix represents the ISAR target image where columns represent the slant range and rows represent the cross range.
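To make the two-step transform concrete, the following Python sketch forms an ISAR image from a simulated pulse-burst matrix containing a few point scatterers. All parameter values and scatterer positions are illustrative assumptions chosen only so that the scene is resolved; they are not system values from this chapter, and translational motion is assumed to be already compensated so that only rotation remains.

```python
import numpy as np

# Illustrative (assumed) parameters for an X-band step-frequency ISAR
c, f0, df = 3e8, 9e9, 2e6     # speed of light, start frequency, frequency step (Hz)
N, M, T   = 64, 64, 100e-6    # pulses per burst, bursts, pulse repetition interval (s)
omega     = 0.04              # target rotation rate (rad/s)
lam       = c / f0

# Point scatterers in target coordinates (slant-range offset r, cross-range x), metres
scatterers = [(0.0, 0.0), (3.0, 2.0), (-2.0, -4.0)]

n = np.arange(N)                          # pulse (frequency-step) index within a burst
m = np.arange(M)[:, None]                 # burst index
freq = f0 + n * df                        # stepped carrier frequencies
t = (m * N + n) * T                       # transmit time of each pulse
data = np.zeros((M, N), dtype=complex)    # rows: bursts, columns: frequency steps
for r, x in scatterers:
    theta = omega * t                     # rotation angle at each pulse
    rng = r * np.cos(theta) - x * np.sin(theta)      # residual slant range of scatterer
    data += np.exp(-1j * 4 * np.pi * freq * rng / c)

# Step 1: DFT over the N stepped frequencies -> fine slant-range bins (width c/2N*df)
range_profiles = np.fft.fftshift(np.fft.fft(data, axis=1), axes=1)
# Step 2: DFT over the M bursts (Doppler) -> cross-range bins (width lam/2*omega*M*N*T)
image = np.abs(np.fft.fftshift(np.fft.fft(range_profiles, axis=0), axes=0))

print("slant-range resolution :", c / (2 * N * df), "m")
print("cross-range resolution :", lam / (2 * omega * M * N * T), "m")
print("peak pixel value       :", image.max())
```

The first FFT plays the role of the column-wise DFT in Figure 11.16 (frequency to fine slant range), and the second resolves the cross range from the burst-to-burst Doppler history.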
11.7 CONCLUSION
High range resolution offers many advantages, but it requires large-bandwidth waveforms. The implementation of large-bandwidth systems with impulse or conventional pulse compression techniques requires high-speed A/Ds and processors. However, step-frequency radar, by achieving large bandwidth sequentially, relaxes the hardware requirements. As a result, very fine range resolution can be achieved with off-the-shelf components, and even existing systems may be retrofitted by adding step-frequency synthesizers. In this chapter, we have considered the application of step-frequency radar for the detection of moving targets and also for 2D imaging. Low-RCS, high-speed targets normally will be suppressed along with multiple-time-around clutter, but step-frequency radar can reject such clutter and facilitate their detection with off-the-shelf hardware. As demonstrated in Equation (11.17), the step-frequency
waveform spreads the return from a moving target and shifts the target range, resulting in a loss of target magnitude and loss of range resolution. With appropriate compensation, as described in this chapter, it is possible to overcome these problems.
REFERENCES
1. J. D. Taylor, Introduction to Ultra-wideband Radar Systems, CRC Press, Boca Raton, FL, 1995.
2. D. R. Wehner, High Resolution Radar, Artech House Inc., Boston, 1985.
3. J. A. Scheer and J. L. Kurtz, Coherent Radar Performance Estimation, Artech House Inc., Boston, 1993.
4. G. S. Gill, "Detection of Targets Embedded in Clutter Using Frequency Step Waveform," Proceedings of the 1994 International Symposium on Noise and Clutter Rejection in Radars and Imaging Sensors, Kawasaki, Japan, p. 115, November 1994.
Appendix 11A
Waveform Design Considerations for Detection of Moving Targets

Waveform design involves the selection of parameters such as pulse width, frequency step size, pulse repetition frequency, number of frequency-stepped pulses (N) in the coherent interval, and RF frequency to satisfy performance requirements such as range resolution, minimum and maximum target velocities, maximum target extent, unambiguous range, probability of detection, etc. This appendix gives the equations that relate the parameters of the step-frequency waveform to these performance-related parameters. These equations, along with conventional radar equations, can be used to satisfy the performance requirements. The order and manner in which these equations are used for waveform design may vary with each application.

1. The sequential (or effective) bandwidth N∆f determines the range resolution as given by

\Delta R = \frac{c}{2N\Delta f}    (A.1)
2. The amount of clutter-free space in the range domain will depend on the product τ∆f, as the fraction of the range domain containing clutter is specified by the following equation:

\frac{c\tau/2}{R_u} = \tau\Delta f    (A.2)
Choosing a low value for the product τ∆f will increase the clutter-free space.
3. There are two constraints related to the maximum target extent L_t. To avoid aliasing or wraparound,

L_t < R_u    (A.3)
Also, the pulse width should be large enough that it encompasses the entire target length for good target detectability, that is, cτ/2 > L_t, or

L_t < \frac{c\tau}{2}    (A.4)
R_u will always be greater than cτ/2 for the waveforms used for the detection of moving targets; therefore, the latter constraint in Equation (A.4), related to the pulse width, is
a tighter constraint and will always satisfy the former constraint. It should be noted that the pulse width affects the signal power as well as the amount of clutter intercepted. These factors should also be considered in the selection of the pulse width τ.
4. The next constraint is derived from the requirement that a moving target must migrate from the clutter region to the clutter-free region. The worst case is for the slowest moving target or scatterer at the beginning of the clutter region. The worst-case requirement for a target to be out of clutter is that the range shift L should be greater than the clutter extent Nτ∆f, that is,

\frac{2v_{min}}{\lambda} \frac{N}{f_{PRF}} > N\tau\Delta f    (A.5)
which leads to the constraint on PRF as

f_{PRF} < \frac{2v_{min}}{\lambda\tau\Delta f}    (A.6)

or on the product τ∆f as

\tau\Delta f < \frac{2v_{min}}{\lambda f_{PRF}}    (A.7)
5. A Doppler-induced range shift is necessary for a target to move out of the clutter region. However, it should not be so large that it exceeds R_u, in which case the target aliases or wraps around in the range domain. The worst case for this situation is the maximum-velocity target at the end of the clutter region. To avoid aliasing, the sum of the range shift and spread for this target must be less than R_u minus cτ/2 in the range domain, that is,

L + P < N(1 - \tau\Delta f)    (A.8)
Substituting L and P from Equations (11.17) and (11.18),

\frac{2v_{max}N}{\lambda f_{PRF}} + \frac{2v_{max}N^2\Delta f}{c f_{PRF}} < N(1 - \tau\Delta f)    (A.9)

which leads to a constraint on PRF as

f_{PRF} > \frac{2v_{max}}{1 - \tau\Delta f}\left(\frac{1}{\lambda} + \frac{N\Delta f}{c}\right)    (A.10)

or on the product τ∆f as

\tau\Delta f < 1 - \frac{2v_{max}}{f_{PRF}}\left(\frac{1}{\lambda} + \frac{N\Delta f}{c}\right)    (A.11)
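The relationships above lend themselves to a simple design check. The short Python fragment below evaluates the quantities of Equations (A.1) through (A.11) for one hypothetical set of waveform parameters; every numerical value here is an assumed example, not a recommended design.

```python
# Hypothetical step-frequency waveform parameters (assumed example values)
c     = 3e8       # speed of light, m/s
N     = 64        # frequency-stepped pulses in the coherent interval
df    = 1e6       # frequency step size, Hz
tau   = 0.2e-6    # pulse width, s
prf   = 30e3      # pulse repetition frequency, Hz
lam   = 0.03      # RF wavelength, m (10 GHz carrier)
v_min = 100.0     # slowest target velocity of interest, m/s
v_max = 300.0     # fastest target velocity of interest, m/s
L_t   = 25.0      # maximum target extent, m

R_u = c / (2 * df)            # unambiguous fine-range window
dR  = c / (2 * N * df)        # Eq. (A.1): range resolution
clutter_fraction = tau * df   # Eq. (A.2): fraction of the window containing clutter

prf_max = 2 * v_min / (lam * tau * df)                          # Eq. (A.6) upper bound
prf_min = 2 * v_max / (1 - tau * df) * (1 / lam + N * df / c)   # Eq. (A.10) lower bound

print(f"range resolution dR = {dR:.2f} m, window R_u = {R_u:.0f} m, "
      f"clutter fraction = {clutter_fraction:.2f}")
print(f"A.3  L_t < R_u       : {L_t < R_u}")
print(f"A.4  L_t < c*tau/2   : {L_t < c * tau / 2}")
print(f"A.6/A.10 PRF window  : {prf_min/1e3:.1f} kHz < {prf/1e3:.1f} kHz < {prf_max/1e3:.1f} kHz "
      f"-> {prf_min < prf < prf_max}")
```

With these assumed numbers the PRF window of roughly 25 to 33 kHz is satisfied; changing any parameter immediately shows which constraint becomes binding.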
12 The CARABAS II VHF Synthetic Aperture Radar
Hans Hellsten, Lars Ulander, James D. Taylor
CONTENTS
12.1 Background
12.2 Radar Description
12.3 Test Results
12.4 Conclusions
References
12.1 BACKGROUND
An increased interest in low-frequency imaging radar systems for detection of targets hidden by biomass, ice monitoring, and detection of stealth-designed man-made objects inspired the Swedish National Defence Research Establishment (FOA) to build an experimental VHF synthetic aperture radar (SAR) system. The Coherent All Radio Band System (CARABAS) has been tested in two variations, called CARABAS I and II. This case study summarizes the CARABAS II system and test results as an example of UWB SAR capabilities.
12.2 RADAR DESCRIPTION
The CARABAS VHF SAR uses wavelengths between 3.3 and 15 m and a large signal-processing system to reach a detection capability comparable to the figures for sophisticated microwave synthetic aperture radars. This objective raised a technical problem because, at low frequency, the wavelengths are no longer negligible compared to the dimensions of a typical resolution cell. The backscattering coefficient is influenced by a small number of scatterers, which means that the uniqueness of the coherent imaging increases, with a reduced level of speckle noise as a result.1 Biomass measurement was the second consideration in selecting a VHF signal. Combining the higher radar transparency of biomass at VHF frequencies with lower speckle at the same resolution made VHF SAR a promising system for estimating total biomass.
12.2.1 SPATIAL RESOLUTION
Conventional narrowband and narrow-beam resolution formulas do not apply to the ultra-wideband and wide-beam SAR system cases. For the CARABAS case, VHF UWB SAR resolution is of wavelength order. For a number of independent quantities in an image, the resolution is best expressed as an area measurement A rather than as separate figures for range and azimuth, so that
\Delta A \ge \frac{\lambda_c}{2\Delta\phi} \frac{c}{2B}    (12.1)
where λc is the wavelength corresponding to the center of the transmitted bandwidth B, ∆φ is the aperture angle spanning the synthetic aperture, and c is the speed of light. D. Giglio estimates the theoretical UWB SAR resolution to be

\Delta A = \frac{\lambda_{min}^2}{2\pi}    (12.2)
which is achieved if the SAR integration angle is 180° and the radar frequencies start from zero.2 For the shortest CARABAS wavelength, λmin = 3.3 m, the resolution limit is ∆A = 1.7 m². However, the practical CARABAS resolution is ∆A = 3 × 3 m².1
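As a quick numerical check of these figures, the short Python calculation below evaluates Equations (12.1) and (12.2) with the CARABAS band limits quoted above and an assumed 120° aperture angle; it returns values consistent with the roughly 3 m² best-case area quoted later for CARABAS I and the 1.7 m² theoretical limit.

```python
import math

c = 3e8                               # speed of light, m/s
f_min, f_max = 20e6, 90e6             # CARABAS band limits, Hz
B = f_max - f_min                     # transmitted bandwidth
lam_c = c / ((f_min + f_max) / 2)     # wavelength at the band centre
lam_min = c / f_max                   # shortest wavelength (3.3 m)

dphi = math.radians(120)              # assumed aperture angle
dA_1 = (lam_c / (2 * dphi)) * (c / (2 * B))   # Eq. (12.1)
dA_2 = lam_min ** 2 / (2 * math.pi)           # Eq. (12.2), Giglio's limit

print(f"Eq. 12.1 resolution area : {dA_1:.1f} m^2")   # about 3 m^2
print(f"Eq. 12.2 theoretical limit: {dA_2:.1f} m^2")  # about 1.7 m^2
```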
12.2.2 PHYSICAL CONFIGURATION
The VHF radar requirements produced a major problem of where and how to install two parallel 5-m long wideband dipoles to minimize aircraft body interference with the radiation pattern. Figure 12.1a shows the CARABAS I antenna installation in trailing canvas sleeves. Figure 12.1b shows the CARABAS II rigid antennas extended in front of the aircraft.
12.2.3 CARABAS I SYSTEM
This system operated at HH-polarization between 20 and 90 MHz with a stepped-frequency waveform. Figure 12.1a shows the ram-air inflated canvas sack antennas, which were alternated in ping-pong fashion to unambiguously separate backscattered signals from the two sides of the aircraft. Table 12.1 summarizes the main CARABAS I system parameters. The inflated antennas caused major mechanical problems, including unpredictable stability and collapse during flight. FOA engineers discovered two major characteristics: lobe splitting at the
FIGURE 12.1 The two CARABAS antenna configurations. Mounting two 5-m antennas on the Sabreliner aircraft to prevent airplane body interference was a major technical problem. (Source: reprinted by permission of the FOA).
TABLE 12.1 CARABAS I Radar Parameters
Aircraft                        Sabreliner
Nominal altitude                1500–6500 m
Nominal ground speed            100 m/s
Maximum slant range             7.5 km
Antenna                         2 dipoles
Polarization                    Horizontal
Frequency                       20–90 MHz
Number of frequencies           ≤ 57
Frequency stepping factor       1.25 MHz
Pulse length, Tp                0.5 µs
Receiver bandwidth              2.5 MHz
Transmitter peak power          1 kW
Systems PRF, PRFs               10 kHz
Effective PRF, PRFe             10/2/n kHz
Intermediate frequency          117.5 MHz
Baseband center frequency       2.5 MHz
Digital sampling rate           10 MHz
Number of bits                  12
Data rate                       80 Mbit/s
Tape recorder capacity          107 Mbit/s
Cassette capacity               60 minutes
highest frequencies and a very weak radar response at a few narrow frequency channels. A typical intensity plot of the full received bandwidth through one of the CARABAS I antennas, transformed to aspect angle in azimuth by coherent Doppler processing, is shown in Figure 12.2. For the highest frequency interval, a concentration of intensity can be found symmetrically off broadside. Vertical black bands at various frequency intervals depict zero-padding in the RFI suppression algorithm, while others are caused by the low antenna sensitivity.1
12.2.4 CARABAS I PERFORMANCE
There were many operational limitations that limited performance. Differential GPS position reference was available during early tests. When transmitting the full bandwidth, this system was Doppler ambiguous above 65 MHz because of the limitations of the recording system for digitized raw radar data. Data collection problems, combined with the need to repeat signals for both right and left antennas, resulted in a PRF that was too low for the full frequency-step scheme with adjacent increments of only 1.25 MHz. The best CARABAS I resolution area was calculated to be about 3 m² using Equation (12.1) and assuming transmission of the entire bandwidth and an aperture angle of 120°. Actual resolution was estimated from the responses of 4.9 m size trihedrals at the Portage, Maine, test site. SAR images processed using 20 to 80 MHz and a 100° aperture angle gave a theoretical resolution of
FIGURE 12.2 Typical intensity plot of the full received bandwidth through a CARABAS I inflated antenna. The data are transformed to aspect angle in azimuth by coherent Doppler processing. For the highest frequency interval, a concentration of intensity can be found symmetrically off broadside. Vertical black bands at various frequency intervals depict zero-padding in the RFI suppression algorithm. (Source: reprinted by permission CARABAS Program Office, National Defence Research Establishment [FOA] and SPIE.)
5 m2. Standard processing using differential GPS motion compensation gave a resolution area of 10 m2. Nonstandard processing using an inverse filter gave the expected theoretical resolution of 5 m2 at the expense of a severely degraded signal-to-noise ratio. Variable antenna characteristics across the bandwidth caused most of this degradation. Figure 12.3 shows an example response from a 4.9 m trihedral processed with a slightly different bandwidth and aspect angle parameters.
12.2.5 CARABAS II
FOA and Ericsson Microwave Systems AB (EMW) improved the CARABAS design and built a new sensor system in 1995 and 1996. Table 12.2 summarizes the CARABAS II system parameters, and Figure 12.4 shows the system block diagram. The principal improvements included the rigid antenna system shown in Figure 12.1b, increased transmitter mean power, and two receivers to simultaneously register the signal from each antenna. The additional receiver eliminated the Doppler ambiguity problem caused by having to alternately sample the receive antennas. Figure 12.5 shows the surface resolution versus stand-off range for nonambiguous Doppler-range performance.
FIGURE 12.3 CARABAS I SAR processed data showing a three-dimensional plot of the response from a 4.9 m trihedral in an open grass field near Portage, Maine. (Source: reprinted by permission CARABAS Program Office, National Defence Research Establishment [FOA] and SPIE.)
TABLE 12.2 CARABAS II Radar Parameters
Aircraft                          Sabreliner
Nominal altitude                  1500–10000 m
Nominal ground speed              100 m/s
Flying conditions                 IMC
Maximum slant range               Programmable
Full integration angle*           120°
Full aperture length*             60 km
Full integration time*            500 s
Antenna                           2 phased wideband dipoles
Radiation pattern                 One side (backlobe < –15 dB)
Polarization                      Horizontal
Frequency                         20–90 MHz
Number of frequencies             1–256
Frequency stepping factor         Programmable
Pulse length, Tp                  ≤ 50% duty cycle
Receiver bandwidth                2 × 2 MHz
Transmitter peak power            500 W
Systems PRF, PRFs                 1–10 kHz
Effective PRF, PRFe               100 Hz–10 kHz
Intermediate frequency            215.25 MHz
Baseband center frequency         3.75 MHz
Transmit notch                    30 dB
Receive notch depth               90 dB
Transmit and receive notch BW     10 kHz
Receiver dynamic range            88 dB (spurious free)
Digital sampling rate             2 × 5 MHz
Number of bits                    2 × 14
Data rate                         160 Mbit/s
Tape recorder                     240 Mbit/s
Cassette capacity                 28 min
*Based on an assumed altitude of 10 km and a ground speed of 120 m/s.
Flexibility was a driving consideration in the CARABAS II radar electronics. The operator has a graphic interface and a dedicated micro-coded sequencing machine to control time-critical system parts. Radar modes of operation are easily controlled by changeable microcode.1

Signal interference had been a major problem in the earlier system. Frequency sharing with other users was done by dynamically analyzing and identifying occupied channels. Based on occupied channels, the transmit signal can be calculated with notches at the corresponding frequencies and the pattern loaded into a digital signal generator. Using large duty cycles of up to 50% allows narrowband notches to adapt to the current signal environment. Also, a long transmit pulse length gives a higher average power level with a low peak power value.1
FIGURE 12.4 CARABAS II system block diagram. The diagram shows the 500 W peak power amplifier and low-pass filter, T/R switches feeding the left and right antennas, an attenuator and RF-sense path, two receivers (RX 1 and RX 2) with A/D conversion at a 215.25 MHz IF, the local oscillator and 10 MHz reference, the waveform generator, the radar control unit with data formatting and optical serial outputs, the HDDR Ampex DCRSi recorder (107/240 Mbit/s), the CD-GPS system, and the host computer and workstations. (Source: reprinted by permission CARABAS Program Office, National Defence Research Establishment [FOA] and SPIE.)
FIGURE 12.5 CARABAS II performance showing surface resolution vs. standoff range for non-ambiguous Doppler performance. Ground-projected resolution (m²) is plotted against ground-projected stand-off range out to 40 km for altitude and average power combinations of 5 km/70 W, 7.5 km/105 W, and 10 km/140 W. (Source: reprinted by permission CARABAS Program Office, National Defence Research Establishment [FOA] and SPIE.)
Antenna beam steering was implemented by adding a delay line into one antenna feeder and feeding both antennas simultaneously. The beam could be shifted to either side of the aircraft. Digitized receiver data were tagged with system information and data from the phase differential GPS system (Ashtech Z-12) and stored on an AMPEX DCRsi 240 Digital Cartridge Recording System's high-density tape.
Frequency stepping gives CARABAS II its wideband radar performance through the sequential transmission of a number of narrowband subsignals, where each signal is received completely before the next subsignal is transmitted. The advanced electronics give a great degree of flexibility for optimizing the signal for different frequencies and ranges. All subsignals have the same 2 MHz bandwidth with different center frequencies distributed at a fixed spacing of ∆F ≤ 2 MHz over the interval Fmin = 20 MHz to Fmax = 90 MHz. This small subsignal bandwidth provides the high dynamic range while keeping the sampling rate moderate. A new center frequency is chosen after each signal is received so that a particular frequency is revisited at a rate proportional to the Doppler bandwidth at that frequency. This achieves two things: each frequency is sampled along track at a rate corresponding to the Doppler bandwidth at that frequency, and the average transmitted power increases linearly with frequency. This optimizes the signal-to-noise frequency dependence for the inverse filtering required for the best wideband SAR resolution. The step dwell time Tstep is the same for each transmission, so that

T_{step} = \frac{c}{2V} \frac{\Delta F}{F_{max}^2 - F_{min}^2}    (12.3)
where V is the aircraft ground speed. The transmit signal duration is ηTstep, where η is the transmit duty cycle. By choosing to skip reception at close ranges, a duty cycle such that 10% ≤ η ≤ 50% is possible. The signal can be a linear FM chirp, which means that, during one frequency step, certain frequencies will be omitted for close ranges. On the other hand, at close ranges, the Doppler bandwidth is reduced due to the steep incidence angle with respect to the ground. For the case of ∆F = 1.8 MHz, ηTstep = 100 µs for η = 35% and V = 120 m/s, so the resulting overlap of 2 MHz signals will prevent Doppler ambiguities given an aircraft altitude of 10 km. The pulse duration of 100 µs can form notches for suppressing the transmit signal across any radio or television band within the adopted frequency range. This gives a flank width of about 10 kHz, which corresponds to the transmitted signal length. The noise contribution of the radar for this example is –26 dBm/kHz for bands suppressed by the transmit notches. The average transmitted power will be 175 W for η = 35%. The maximum cross-track range consistent with unambiguous Doppler will be
x_{max} = \frac{c^2(1 - \eta)}{4V} \frac{\Delta F}{F_{max}^2 - F_{min}^2} \cos\!\left(\frac{c^2}{4\Delta A (F_{max}^2 - F_{min}^2)}\right)    (12.4)
This gives a maximum cross-track range of xmax = 20 km for a resolution area of 3.3 m², for the given duty cycle η = 35% and aircraft ground speed. Thus, at an altitude of 10 km, the ground may be imaged across a 13 km swath with a depression angle between 30 and 60 degrees and a surveillance rate of 1.5 km²/s. As mentioned earlier, the transmitted spectrum can be notched to prevent interference, with resulting simulated SAR images as shown in Figure 12.6. The problem is that spectral notching degrades the SAR image, and many methods have been tested to accurately estimate the tones in noise and subtract the RFI from the received signal.4,5 MIT Lincoln Laboratory developed and tested an iterative MLE (maximum likelihood estimator) on CARABAS II SAR data. The algorithm superresolves sinusoidal RFI tones in the received record and then subtracts them.6 An iterative algorithm called Darwinistic relaxation was developed and tested on simulated data to fill the spectral parts jammed by radio interference with values other than zero. Figure 12.6 shows the results of image restoration after applying the Darwinistic relaxation method. This is a nonlinear interpolation method based on the histogram characteristics that have been found typical for CARABAS SAR imagery.
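The dwell-time and coverage numbers quoted above can be checked directly. The Python fragment below evaluates Equation (12.3) and the reconstruction of Equation (12.4) given earlier with the parameters from the text (∆F = 1.8 MHz, η = 35%, V = 120 m/s, ∆A = 3.3 m²). It returns ηTstep of about 100 µs and an xmax of roughly 18 km, close to the 20 km quoted, so treat it as a plausibility check of the reconstructed formula rather than a definitive derivation.

```python
import math

c = 3e8                      # speed of light, m/s
F_min, F_max = 20e6, 90e6    # CARABAS II band, Hz
dF  = 1.8e6                  # frequency step spacing, Hz
V   = 120.0                  # aircraft ground speed, m/s
eta = 0.35                   # transmit duty cycle
dA  = 3.3                    # resolution area, m^2

# Eq. (12.3): step dwell time
T_step = (c / (2 * V)) * dF / (F_max ** 2 - F_min ** 2)
# Eq. (12.4), as reconstructed above: maximum cross-track range for unambiguous Doppler
x_max = (c ** 2 * (1 - eta) / (4 * V)) * dF / (F_max ** 2 - F_min ** 2) \
        * math.cos(c ** 2 / (4 * dA * (F_max ** 2 - F_min ** 2)))

print(f"T_step = {T_step * 1e6:.0f} us  (eta * T_step = {eta * T_step * 1e6:.0f} us)")
print(f"x_max  = {x_max / 1e3:.1f} km")
```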
FIGURE 12.6 The effects of removing parts of the frequency spectrum on SAR images. In this case, simulated SAR data in the shape of capital letters are evaluated where part of the frequency spectrum is replaced with zeroes to degrade image quality. The iterative Darwinistic relaxation algorithm is applied to reconstruct the zero-padded spectral parts and recover an improved quality image. (Source: reprinted by permission CARABAS Program Office, National Defence Research Establishment [FOA] and SPIE.)
The algorithm starts, for every azimuth position in an image generated with all jammed gaps filled with zeros, by searching for the strongest target in range. The individual frequency response is then calculated for these detected targets, and the zeros in the gaps are replaced with the corresponding spectral distributions obtained from these targets. The algorithm continues by testing new spectral contributions derived on each iteration from a search in the latest generated image for an increasing number of strong targets. When the weaker parts of the detected strong targets have an amplitude comparable to the background level, or noise floor, in the image, the process is complete. The term Darwinistic relaxation as applied here is related to algorithms reported in the literature for other applications.7
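The published description above can be illustrated with a toy one-dimensional version of the idea. The sketch below is only loosely modeled on that description and is not the FOA implementation: it zeroes two blocks of frequency bins, then iteratively detects the strongest point responses in the image, synthesizes their spectral contributions, and uses them to fill the excised bins, letting the detected target set grow on each pass. The scene, gap locations, noise level, and the fixed five iterations are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Nf = 256
true_targets = {20: 1.0, 90: 0.6, 150: 0.3}     # range bin -> amplitude (assumed scene)

# Ideal measured spectrum: sum of point-target responses plus a little noise
k = np.arange(Nf)
spectrum = sum(a * np.exp(-2j * np.pi * k * b / Nf) for b, a in true_targets.items())
spectrum = spectrum + 0.02 * (rng.standard_normal(Nf) + 1j * rng.standard_normal(Nf))

# Simulated RFI excision: two blocks of frequency bins are zeroed out
gaps = np.zeros(Nf, dtype=bool)
gaps[40:60] = True
gaps[170:200] = True
notched = spectrum.copy()
notched[gaps] = 0.0

# Iterative gap filling in the spirit of Darwinistic relaxation
filled = notched.copy()
for n_targets in range(1, 6):                    # let the target set grow each iteration
    image = np.fft.ifft(filled)
    strongest = np.argsort(np.abs(image))[::-1][:n_targets]
    model = np.zeros(Nf, dtype=complex)
    for b in strongest:                          # spectral response of each detected target
        model += image[b] * np.exp(-2j * np.pi * k * b / Nf)
    filled = notched.copy()
    filled[gaps] = model[gaps]                   # replace only the excised bins

truth = np.fft.ifft(spectrum)
print("image error, gaps left at zero:", round(float(np.linalg.norm(np.fft.ifft(notched) - truth)), 3))
print("image error, after relaxation :", round(float(np.linalg.norm(np.fft.ifft(filled) - truth)), 3))
```

For a scene dominated by a few strong point responses, the filled spectrum reproduces the missing bins well; for distributed scenes the behaviour is less forgiving, which is consistent with the method's reliance on the histogram character of CARABAS imagery.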
12.2.6 UWB STRIP MAP SAR PROCESSING
CARABAS II processing was done on the ground and offline because of the heavy computational burden. Factors contributing to the processing problem include the large integration angle in low-frequency SAR, pulse compression, and radio frequency interference mitigation. To achieve real-time processing of low-frequency, high-resolution SAR images, one must

1. Minimize the time delay between data collection and presentation of the SAR image to minimize computation and data memory requirements.
2. Introduce suitable processing stages for pulse compression, antenna pattern compensation, RFI suppression, and motion error autofocusing.
3. Minimize the number of floating point operations and the required floating point capacity.
4. Use multiple-processor computer architectures.

To describe how these special requirements are taken into account, consider that SAR processing methods can be divided into two broad classes: spectral-domain (ω-k and similar methods) and time-domain (back-projection methods). Spectral-domain FFT-based SAR processing of a square SAR image with one side N pixels long requires floating point operations on the order of N² log N. Time-domain back-projection methods require N³ operations. CARABAS II signals have typical values of ∆A = 3.3 m² and V = 120 m/s; data corresponding to an image with one side N = 10⁴ is collected in 100 s. The computing capability required for real-time spectral-domain processing would be on the order of megaflops, whereas time-domain processing would require tens of gigaflops.8,9

Back-projection signal processing is illustrated in Figure 12.7. The algorithm is derived by solving the corresponding inverse problem, assuming that a semicircular illumination pattern intersects the ground surface from the two aligned but separated wideband dipoles on the aircraft. The received signal g(x, R) is the average along a semicircle of radius R of the surface reflectivity L(x, y). The analytical solution to the inverse problem relates the radar data g(x, R) to the surface reflectivity function L(x, y) according to
L^{(F,F)}\!\left(k_x,\; k_y = \sqrt{(2k)^2 - k_x^2}\right) = \frac{1}{2}\,\pi\,\frac{2}{c}\; k_y\, 2k\, g^{(F,H_1)}(k_x, 2k)    (12.5)
where F denotes that the function has been Fourier transformed with respect to the corresponding variable, H₁ denotes a Hankel transform of order one with respect to the corresponding variable, k is the transmitted wave number, k_x is the wave number along the flight path, and c is the speed of light. After multiplication with range and magnitude-square detection, L(x, y) is proportional to the normalized radar cross section.3

The processing chain must include many special operations in addition to the SAR processing kernel. All of the steps described above are computationally intensive. CARABAS II real-time processing including all of these features would require gigaflop capacity for spectral-domain processing. Performing real-time conventional back-projection might require 100 gigaflops. There is an important advantage to back-projection processing: the data are not processed in batches, which reduces the memory capacity required. Back-projection also lends itself to a better method of RFI correction and motion error autofocusing that acts interactively with the actual SAR image formation. A new method using the Mercury multiprocessor system requires N^{5/2} operations per
FIGURE 12.7 Back-projection signal processing. The received signal g(x, R) is the average of the surface reflectivity L(x, y) along a semicircle of radius R centered on the radar position. (Source: reprinted by permission CARABAS Program Office, National Defence Research Establishment [FOA] and Veridian ERIM International, Inc.)
N × N image. Therefore, the SAR processing kernel needs hundreds of megaflop capacity to achieve real-time capability. These requirements are compatible with available multiprocessor performance.1
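For readers who want to see what the time-domain branch looks like in practice, the function below is a minimal delay-and-sum back-projection kernel in Python. It is the straightforward O(N³) formulation discussed above, not FOA's factorized N^{5/2} method, and the function name, argument layout, and the sign convention assumed for the phase correction are choices made for this sketch only.

```python
import numpy as np

def backproject(data, ant_pos, r0, dr, x_grid, y_grid, fc):
    """Basic delay-and-sum back-projection of range-compressed pulses.

    data     : (n_pulses, n_range) complex range-compressed echoes
    ant_pos  : (n_pulses, 2) antenna (x, y) position for each pulse, metres
    r0, dr   : range of the first sample and range-sample spacing, metres
    x_grid, y_grid : 1-D arrays of image pixel coordinates, metres
    fc       : centre frequency used for the phase correction, Hz
    """
    c = 3e8
    X, Y = np.meshgrid(x_grid, y_grid)
    img = np.zeros(X.shape, dtype=complex)
    for p in range(data.shape[0]):                        # loop over pulses ...
        R = np.hypot(X - ant_pos[p, 0], Y - ant_pos[p, 1])    # ... and all pixels
        idx = (R - r0) / dr                               # fractional range-bin index
        i0 = np.clip(idx.astype(int), 0, data.shape[1] - 2)
        frac = np.clip(idx - i0, 0.0, 1.0)
        samp = (1 - frac) * data[p, i0] + frac * data[p, i0 + 1]  # linear interpolation
        # Assumed convention: echoes carry exp(-j4*pi*fc*R/c), so re-phase before summing
        img += samp * np.exp(4j * np.pi * fc * R / c)
    return np.abs(img)
```

The pulse loop runs N times over an N × N pixel grid, which is exactly where the N³ operation count quoted above comes from; fast back-projection methods reorganize this loop hierarchically to reach the lower counts cited.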
12.3 TEST RESULTS
CARABAS II’s first test images were made in October 1996 over the island Visingsö in the large freshwater lake Vättern in southern Sweden. The flight path is from left to right at the top of the image in Figure 12.8a. When images such as the one in Figure 12.8a were processed with only a 16 MHz bandwidth, the strong scatterers generated so-called “paired echoes” on both sides in range around the actual target. The paired echo effect results from a periodic amplitude and/or phase ripple in the reconstructed wideband spectrum, as shown in Figure 12.9. For Figure 12.8a, the radar signal was divided into 1.875 MHz sub-bands, and each sub-band was generated as a linear FM chirp. The different sub-bands were resampled and realigned to generate the full spectrum bandwidth. The actual compression to a short pulse is made by a long FFT sequence, and all periodic ripple in the frequency spectrum will show up as sidelobes in the time-domain signal.3
FIGURE 12.8 An early CARABAS II image (a) made with only a 16 MHz bandwidth. Strong targets such as the reflector (b) generated false or “paired echoes” on each side in the range around the actual target. (Source: reprinted by permission CARABAS Program Office, National Defence Research Establishment [FOA] and Veridian ERIM International, Inc.)
FIGURE 12.9 Reconstructing the wideband spectrum (right) causes any amplitude or phase ripple in the sub-band signal to form as a periodic ripple and produce “paired echoes” around strong targets. (Source: reprinted by permission CARABAS Program Office, National Defence Research Establishment [FOA] and Veridian ERIM International, Inc.) © 2001 CRC Press LLC
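The mechanism sketched in Figure 12.9 is easy to reproduce numerically. The fragment below applies an assumed 2 dB peak-to-peak periodic amplitude ripple across an otherwise flat point-target spectrum and measures the level of the resulting paired echoes. The ripple period and the way the ripple is defined are illustrative assumptions, so the number it prints is meant only to show the mechanism, not to match the measured values quoted for the real system.

```python
import numpy as np

Nf = 4096                  # frequency samples across the reconstructed wideband spectrum
P  = 64                    # samples per ripple cycle (e.g., one cycle per sub-band)
ripple_pp_db = 2.0         # assumed peak-to-peak amplitude ripple, dB

k = np.arange(Nf)
ripple = 10 ** ((ripple_pp_db / 2) * np.cos(2 * np.pi * k / P) / 20)   # +/- 1 dB ripple
impulse_response = np.abs(np.fft.ifft(np.ones(Nf) * ripple))           # point target seen through the ripple

main = impulse_response[0]                 # main response of the point target
paired = impulse_response[Nf // P]         # first paired echo, Nf/P bins away
print(f"paired echo level: {20 * np.log10(paired / main):.1f} dB relative to the main response")
```

Doubling the ripple period moves the echoes closer to the main response, and increasing the ripple raises their level, which is the behaviour summarized in Figure 12.10.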
FOA researchers observed that the well defined transmitted signal will be distorted by the power amplifier, the antennas, the transmit/receive switch, and the receiver IF filters. The distortion can be removed during signal processing if the overall system transfer function is known. Stanford Research Institute engineers made the same observation about their airborne VHF impulse SAR system and corrected their system’s range dispersion by applying the system’s impulse response to received signals.10 The amplitude and phase ripple produce sidelobes that interfere with good imaging. Figure 12.10 shows the peak amplitude and phase ripple needed to get a certain peak sidelobe ratio.3

To illustrate the effects of calibration, notice the image quality in Figure 12.8, which was processed without any comprehensive system calibration. For this image, the peak sidelobe ratio was measured and found to be –14 dB (for a linear structure), which corresponded to a 2 dB amplitude ripple in the broadband spectrum. Without calibration processing, the achieved spatial resolution, measured over a number of point targets, was 6 × 9 m² in azimuth and range, compared with the theoretical limit of 4 × 9 m². System sensitivity was measured at σonc = 35 dB.

After a careful system calibration, the same raw data was processed to give the image shown in Figure 12.11, which compares well with the topographic map. Numbered arrows indicate reference trihedrals, and “3” indicates a farmhouse near the reference trihedrals shown in Figure 12.8. Applying a system transfer function reduced the peak sidelobe ratio to about –30 dB, which corresponded to a residue ripple of 0.5 dB. Figure 12.12 shows a topographic map and an agricultural landscape with many small houses and farm buildings. A grey area in the right-hand (southernmost) image is a forested area.3
12.4 CONCLUSIONS
The CARABAS II flight test results show the advantages of using VHF frequencies and ultra-wide bandwidths for SAR imaging of foliated terrain. After compensation for system losses, the imagery provided exceptional detail. Both FOA and SRI have produced excellent large-scale imagery using UWB waveforms, illustrating that the frequency band is the prime consideration, not the particular waveform or processing used to obtain the bandwidth. Specialized VHF UWB SAR processing can recover higher-quality images than higher-frequency SAR systems can. This chapter provides a review of system status in 1997; considerable improvements have been implemented since then.
FIGURE 12.10 Calculated peak paired echo sidelobes vs. peak amplitude and phase ripple. (Source: reprinted by permission CARABAS Program Office, National Defence Research Establishment [FOA] and Veridian ERIM International, Inc.) © 2001 CRC Press LLC
a. Topographic map of the radar image area
b. Radar image showing details on the ground
FIGURE 12.11 Visingsö Island in southern Sweden. The HH CARABAS image shows the effects of system calibration to reduce the sidelobe level to –30 dB, which corresponds to a residue ripple of 0.5 dB. The altitude was 2136 m, frequency bands 29–45 MHz, aperture 77° (center), and incidence angle between 54° and 73°. Compare the image with the topographic map of the island, where the black grid shows 1 km divisions. Wind generators and reference trihedrals all stand out. The image of Figure 12.8 corresponds to a section in the upper right corner, including the locations of the trihedrals and wind generators. (Source: reprinted by permission CARABAS Program Office, National Defence Research Establishment [FOA] and Veridian ERIM International, Inc.)
a. Map of the radar area
b. Radar image of the same area
FIGURE 12.12 CARABAS image of Visingsö Island in southern Sweden from an altitude of 2136 m. The black topographic map grid shows 1 square km areas. The SAR image has been processed using the frequencies 29–45 MHz. The incidence angle varies from 72° to 79°, and the aperture angle is 48° at the center. (Source: reprinted by permission CARABAS Program Office, National Defence Research Establishment [FOA] and Veridian ERIM International, Inc.)
REFERENCES
1. Hans Hellsten, Lars M.H. Ulander, Anders Gustavsson, and Bjorn Larsson, “Development of VHF CARABAS II SAR,” SPIE Proceedings Vol. 2747, Radar Sensor Technology (1996), pp. 48–56.
2. D.A. Giglio, “Overview of foliage/ground penetration and interferometric SAR experiments,” Proc. SPIE Conference on Algorithms for Synthetic Aperture Radar Imagery, SPIE Vol. 2230, pp. 209–217, SPIE, Bellingham, WA, 1994.
3. Björn Larsson, Per-Olov Frölind, Anders Gustavsson, Hans Hellsten, Tommy Jonsson, Gunnar Stenström, and L.M.H. Ulander, “Some results from the new CARABAS II VHF SAR system,” Proceedings of the Third International Airborne Remote Sensing Conference and Exhibition, Copenhagen, Denmark, 7–10 July 1997.
4. B.H. Ferrel, “Interference suppression in UHF synthetic-aperture radar,” Proc. SPIE Conference on Algorithms for Synthetic Aperture Radar Imagery II, SPIE Vol. 2487, pp. 96–106, SPIE, Bellingham, WA, 1995.
5. M. Braunstein, J. Ralston, and D. Sparrow, “Signal processing approaches to radio frequency interference (RFI) suppression,” Proc. SPIE Conference on Algorithms for Synthetic Aperture Radar Imagery, SPIE Vol. 2230, pp. 190–208, SPIE, Bellingham, WA, 1994.
6. S. Ayasli, “Summary of results from the analysis of June 1993 Yuma ground demonstration experiment,” Proceedings Unexploded Ordnance (UXO) Detection and Range Remediation Conference, pp. 397–410, Walcoff and Associates, Fairfax, VA, 1994.
7. N.E. Hurt, Phase Retrieval and Zero Crossings, Kluwer Academic Publishers, Dordrecht, 1989.
8. J.C. Curlander and R.N. McDonough, Synthetic Aperture Radar: Systems and Signal Processing, Wiley, New York, NY, 1991.
9. W.G. Carrara, R.S. Goodman, and R.M. Majewski, Spotlight Synthetic Aperture Radar: Signal Processing Algorithms.
10. Roger Vickers, Victor H. Gonzalez, and Robert Ficklin, “Results from a VHF impulse synthetic aperture radar,” SPIE Proceedings, Vol. 1631, Ultra-Wideband Radar, 1992, pp. 219–225.
11. A. Gustavsson, B. Flood, P.-O. Frölind, H. Hellsten, T. Jonsson, B. Larsson, G. Stenström, and L.M.H. Ulander, “First airborne tests with the new VHF CARABAS II,” Proc. IGARSS ’97, Singapore, 3–8 August 1997, pp. 1214–1216, IEEE, Piscataway, NJ, 1997.
13 Ultra-Wideband Radar Capability Demonstrations
James D. Taylor
CONTENTS
13.1 Introduction
13.2 Foliage-Penetrating Radar and Target Detection
13.3 Mine-Detecting Radar
13.4 Airborne UWB SAR Systems
13.5 Conclusions
13.6 Acknowledgments
References
13.1 INTRODUCTION
This chapter is about high-resolution remote sensing with high-power impulse radar systems. Although the capability of short-duration (1 ns) impulse signals to image hidden objects was previously known, there had been few attempts to build high-power remote sensing systems. During the 1990s, the American Department of Defense Advanced Research Projects Agency (ARPA) sponsored radar programs for foliage penetration, mine detection, and high-resolution mapping. These demonstrations helped develop high-resolution radar imaging technology and signal processing methods for detecting special targets. ERIM International, the MIT Lincoln Laboratory (MIT LL), the Army Research Laboratory (ARL), and SRI International programs have all shown the future possibilities of UWB radar systems. I have presented the significant results to show what can be achieved and to point out some of the problem areas. Please refer to the original sources for complete descriptions of the systems, test conditions, and signal processing algorithms.
13.2 FOLIAGE-PENETRATING RADAR AND TARGET DETECTION
13.2.1 BACKGROUND
Foliage can conceal vehicles, equipment, and structures from airborne observers. Conventional airborne synthetic aperture radar (SAR) systems operating at centimeter wavelengths do not penetrate foliage. The ARPA objective was to build and demonstrate high-resolution VHF/UHF radar systems for foliage penetration. Ultra-wideband impulse waveforms are a solution to foliage penetration (FOLPEN) and target detection. A 1 ns duration impulse provides a continuous spectrum in the 100 to 1000 MHz range, which is ideal for penetrating the tree canopy, and has a theoretical range resolution of about 15 cm (6 in.). During the early 1990s, ARPA sponsored an ERIM International program to evaluate the
effectiveness of foliage-penetrating radar for detecting hidden vehicles. The ARL also evaluated a simulated airborne SAR system for concealed target detection. This section describes those tests and results.
13.2.2 ERIM FOLPEN RADAR DEMONSTRATIONS
ERIM International engineers started by measuring the effects of foliage on UWB signals and concluded that the waveform changes caused by the foliage would permit UWB signal penetration, reflection, and imaging.1 Based on the results of the foliage transmission studies, they built the ERIM Rail SAR shown in Figure 13.1, with the system characteristics given in Table 13.1.2 The use of FM CW signals permitted exact measurements of frequency-related effects.

Signal Processing and Imaging
Because the ERIM Rail SAR worked under near-field conditions, the traditional spotlight SAR processor was inadequate. Instead, the system used a near-field SAR processor that performed a Fourier transform in the azimuth direction. This is often called a plane-wave decomposition, and it provides data that appear to have been collected in the far field. Traditional spotlight processing (resampling and two-dimensional Fourier transforms) was then used with the near-field transformed data to form an image.3
FIGURE 13.1 The ERIM Rail SAR had a pair of antennas that moved along the elevated horizontal truss shown here. Movement simulated aircraft flight. This was typical of other experimental impulse radar systems. (Source: reprinted with permission of Veridian ERIM International, Inc., and SPIE.)
TABLE 13.1 ERIM Wideband Radar System Specifications2
Frequency and waveform            400–1300 MHz FM CW
Polarization                      Full polarimetric
Aperture size                     10 m
Range resolution                  0.17 m
Azimuth resolution                0.5 m (scene center)
Maximum/minimum antenna height    14 m/3 m
Normal scene size                 20 m × 20 m
To simulate the geometry of a spotlight airborne sensor, ERIM engineers positioned the Rail SAR in three locations, as shown in Figure 13.2. The target was a truck parked in a forest where the average tree trunk diameter was 20 cm, as shown in Figure 13.3. Measurements made in both HH and VV polarizations produced the broadside images in Figures 13.4 and 13.5, where the three calibration trihedrals show up as bright spots. Dihedral-like reflectors formed by the bed-to-truck wall and ground-to-truck reflections show up as bright lines. One major problem with FOLPEN SAR imagery is that a semiconducting tree trunk and the ground form a top hat reflector, which gives a point-like radar return. ERIM engineers also noticed this effect when imaging forested areas with the P-3 UWB SAR, which is described later.4 While trees are clutter when looking for hidden vehicles, the top hat reflector effect could be useful for remote environmental biomass studies. To suppress tree trunk reflections, ERIM's engineers developed an algorithm based on a technique that reduced sidelobes, interference, and noise in Fourier transform data by solving for an adaptive window weighting function.5 Figures 13.4 and 13.5 show the effects of first enhancing the HH ground-trunk point returns and then subtracting them.

Conclusions
The ERIM FOLPEN SAR showed that selecting the right polarization is important, as shown by the relative levels in the HH and VV channels in Figures 13.4 and 13.5. Ground-trunk interactions reflect as point scatterers, which can conceal the target by adding many returns to the SAR picture.
FIGURE 13.2 The ERIM Rail SAR test geometry used to image the truck shown in Figure 13.3, which was not visible from the SAR antenna. The diagram shows the 20 m × 20 m scene, the 30° depression angle, the calibration trihedrals, and the three Rail SAR locations spanning 24°. (Source: reprinted with permission of Veridian ERIM International, Inc., and SPIE.)
FIGURE 13.3 Photo of the target truck looking toward the ERIM Rail SAR. The average tree trunk diameter was 20 cm. (Source: reprinted with permission of Veridian ERIM International, Inc., and SPIE.)
FIGURE 13.4 ERIM Rail SAR 30 × 30 m images of the foliage-obscured truck shown in Figure 13.3. (Source: reprinted with permission of Veridian ERIM International, Inc., and SPIE.)
FIGURE 13.5 Results of applying the tree-trunk suppression algorithm to the truck imagery of Figure 13.4. The image is partly obscured by tree trunks, which were suppressed by a special processing algorithm. (Source: reprinted from Ref. 3 with permission of Veridian ERIM International, Inc., and SPIE.)
Although VHF/UHF frequencies penetrate foliage, ground-trunk return clutter can still obscure and conceal targets. Reliably detecting objects concealed by foliage requires methods to suppress the extra returns from tree trunks that can obscure the target. Later Army Research Laboratory tests also demonstrated the removal of tree trunks and clutter based on reflection characteristics. SRI International has also demonstrated tree and clutter suppression for detecting concealed targets. The researchers concluded that target aspect angle also affects the radar returns and the ability to image and detect vehicles concealed under foliage.
13.2.3 ERIM P-3 SAR TESTS
An airborne system showed the remote FOLPEN imaging capabilities of impulse radar. ERIM installed a UWB impulse SAR on a U.S. Navy P-3 aircraft. The SAR had the characteristics shown in Table 13.2. Radar signal processing was a major issue in all the programs described in this chapter. ERIM and MIT Lincoln Laboratory ground-based image formation processors based on range migration data provided the P-3 UWB SAR images. SAR motion compensation used differential GPS data.4 During 1995, the ERIM P-3 SAR made data collection flights over areas including Pellston and Grayling, Michigan; Presque Isle, Maine; Fort Bragg, North Carolina; Yuma, Arizona; and various areas in California. Polarimetric isolation and noise/RFI equivalent backscatter coefficients were computed from the data in Table 13.2. The interpolated peak intensity was based on a 16 ft trihedral with a boresight
TABLE 13.2 P-3 UHF-UWB SAR CHARACTERISTICS4
Wavelength                                1.5–0.3 m
Frequency                                 215–900 MHz
Polarization                              VV, VH, HV, HH
Peak power                                1.0 kW
Number of channels                        4
Azimuth beamwidth (degrees)               113–25
Polarization isolation                    20 dB
Processed bandwidth                       515/180/120/60 MHz
Sampling                                  4096 6-bit I&Q per pulse
Range resolution                          0.33/1.0/1.5/3.0 m
Swath width (depends on az resolution)    929/1965/2457/3276 m
theoretical radar cross section 6000 m2 at a center frequency of 470 MHz. Reflections from the trihedral in the VV and HV polarization were used to compute the two-way polarimetric isolation. The ratio of VV to HV intensity yielded an estimated polarimetric isolation of 21.7 dB, so the HV trihedral response can be considered due to channel coupling only. ERIM’s engineers found that the HV channel image backscatter coefficient for water areas was –22 dB. This backscatter level was probably due to residual RFI. However, forested area RFI was considered negligible, because the backscatter coefficient was around –10 dB.
13.2.4 FOREST IMAGE PROPERTIES
Figure 13.6 shows a close-up of HH and VV images of the forested area in Pellston, Michigan (see Table 13.3). As observed in ERIM's ground-based FOLPEN SAR work, both HH and VV images are strongly dominated by many point reflections from individual tree trunks. Measurements showed the ground trunk reflection to be much stronger (~10 dB) for HH polarization, as shown in Figure 13.7.

TABLE 13.3 Imagery Intensity Values from the Pellston December 19, 1994 Flight5
          Interpolated Peak Intensity            Average Intensity    Radar Attenuator
          8' trihedral      16' trihedral        Water Area           Setting
HH Pol    6,400,000         29,000,000           22                   56 dB
VV Pol    10,000,000        33,000,000           14                   57 dB
HV Pol    69,000            220,000              6.2                  57 dB
Because top hat reflections dominated the returns, sloping ground spoiled the effect and reduced the tree trunk reflection. Figure 13.8 shows that the HH images are brighter, with many point-like ground-trunk reflections on the level areas. Where the ground slopes to the water, the top hat reflector effect is changed and results in a weaker return. The VV image has a weaker speckle due to the incoherent scatter from the tree canopy. Figure 13.9 shows P-3 UWB SAR and X-band SAR images of the same target area. Notice the greater detail available in the UWB SAR image due to both the higher resolution and foliage penetration.7
FIGURE 13.6 Forest images from Pellston showing enlarged 100 × 100 pixel areas. Ground trunk (top hat) reflection dominates the HH image (left), whereas the tree canopy dominates the VV image (center). The average phase difference for the HH–VV image (right) is 94°. (Source: reprinted with permission of Veridian ERIM International, Inc., and SPIE.)
FIGURE 13.7 Ground-trunk dielectric top hat reflection is typically 10 dB lower in the VV polarization than in HH. The plot shows radar cross section (dBsm) versus incidence angle from 0° to 80° for HH and VV, with ground permittivity εg = 14 – i2 and trunk permittivity εt = 9 – i2.5. The VV response nulls occur at the Brewster angles of the plane and cylinder. (Source: reprinted with permission of Veridian ERIM International, Inc., and SPIE.)
Conclusions
UWB SAR can penetrate foliage and provide fine resolution imagery of objects in the open or obscured by foliage. Ground-trunk reflections dominate the images but depend on the ground slope. For the P-3 SAR system, the polarimetric isolation of the imagery is greater than 20 dB. The observed noise/RFI equivalent backscatter of the imagery was less than –20 dB.6
13.2.5 U.S. ARMY RESEARCH LABORATORY (ARL) FOLPEN DEMONSTRATIONS
Separating targets from clutter is a major FOLPEN radar problem. While the radar can detect concealed targets, the presence of many confusing objects may still hide them from the viewer. Other UWB SAR mine detection programs revealed similar problems. Using Aberdeen Proving Ground data, ARL engineers demonstrated methods for separating vehicles from background clutter. They used their own system, called the BoomSAR, which had two radar antennas mounted on a 50 m high platform. Table 13.4 gives the system characteristics. Their image processing methods were based on the different UWB radar response characteristics of vehicles and trees.8

Figure 13.10 shows an example BoomSAR HH image. The enlarged window shows a tactical vehicle. The trees around it give a spiky clutter background. Modeling the effects of conductivity and the incidence angle produced the clutter characteristics shown in Figure 13.11. This led to a study of target scattering at different frequency bands, with the results shown in Figures 13.12 and 13.13. The study concluded that trees and vehicles have different frequency characteristics, as summarized in Figure 13.14. The difference in characteristics above 400 MHz can aid automatic target discrimination processing.8

ARL developed signal processing methods to examine the spectral characteristics of returns and then eliminate those returns that match clutter and confuser objects. Figure 13.15 shows a typical case where five tactical vehicles are hidden among trees. After several passes, the trees are eliminated, leaving only the five vehicles.8 Other methods of UWB SAR image processing and enhancement are covered in different volumes of the SPIE Proceedings on Algorithms for Synthetic Aperture Radar Imagery.
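A very simple way to exploit the spectral difference summarized in Figure 13.14 is to compare the return energy above and below 400 MHz for each candidate detection. The Python sketch below is a hypothetical illustration of that idea only; the feature, the 400 MHz split, the sample rate requirement, and the placeholder threshold are assumptions inspired by the figure and are not ARL's actual discrimination algorithm.

```python
import numpy as np

def band_energy_ratio(profile, fs):
    """Ratio of return energy above 400 MHz to energy below it for one candidate detection.

    profile : 1-D complex time-domain range profile extracted around the detection
    fs      : sample rate of the profile, Hz (must exceed 800 MHz for this split)
    """
    spec = np.fft.rfft(profile)
    freqs = np.fft.rfftfreq(len(profile), d=1.0 / fs)
    hi = np.sum(np.abs(spec[freqs >= 400e6]) ** 2)
    lo = np.sum(np.abs(spec[(freqs > 0) & (freqs < 400e6)]) ** 2)
    return hi / lo

def classify(profile, fs, threshold=1.0):
    # Vehicles retain relatively more energy above 400 MHz than trees in the ARL summary,
    # so a larger ratio is scored as "vehicle"; the threshold here is a placeholder value.
    return "vehicle" if band_energy_ratio(profile, fs) > threshold else "tree/clutter"
```

In practice the threshold would have to be trained on measured tree and vehicle returns, and additional features (body resonance at VHF, dihedral returns at the high end) would be combined with it.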
(a) Topographic map
(b) Photo of the forest area
(c) HH polarized image
(d) VV polarized image
FIGURE 13.8 ERIM P-3 UWB SAR images showing the effects of both polarization and ground slope. The topographic map shows the area imaged in HH and VV polarizations. Notice that there are stronger HH reflections from the tree top hat reflectors on the higher, more level ground than on the slopes leading to the surrounding water. VV reflections are dominated by the tree canopy and have a different texture. (Source: reprinted with permission of Veridian ERIM International, Inc., and SPIE.)
13.3 MINE-DETECTING RADAR
Background
UWB ground-penetrating radar (GPR) technology dates from the 1970s. GPR user communities exist in geophysical surveying, construction, archeology, forensics, and related areas. Surface-Penetrating Radar by D.J. Daniels is an excellent source for anybody wanting to learn more about the subject.9
(a) P-3 UHF/UWB image. HH polarization, resolution 1 m range × 1 m azimuth, frequency 215 MHz to 374 MHz, Dec. 19, 1994
(b) ISFSARE image. HH polarization, resolution 2.5 m range × 0.8 m azimuth, frequency X-band, Sept. 20, 1995
FIGURE 13.9 Comparison of ERIM P-3 UWB SAR and X-band SAR imagery of the same location. Notice the greater detail of the UWB SAR and hidden details not seen by the X-band system. (Source: reprinted with permission of Veridian ERIM International, Inc., and SPIE.)
TABLE 13.4 ARL BoomSAR Technical Specifications8
Antenna                           4 × TEM horn
Frequency coverage                20–110+ MHz
PRF                               up to 1 kHz
Polarization                      HH, VV, HV, and VH (quasi-monostatic)
Waveform                          Impulse
Average power                     1 W (up to 5–10 W)
Receiver processing               Baseband sampling: 8-bit resolution; periodic RFI "Sniff" (or receive-only) mode
Receiver noise figure/loss        ~2 dB/~2 dB
Receiver AGC                      Computer control
Processed range gates             Scalable: 4096 and up
Data archiving                    4096 RGs in Presum mode
Noise equivalent sigma0           ≤ 50 dB
3D capability                     Yes, by altering boom height
Motion compensation (MOCOMP)      Embedded in data stream (1 cm @ 1 km)
Platform collection speed         1 km/hr
Buried land mine detection is a major defense remote sensing problem. Because of the proliferation of mines and unexploded ordnance in all parts of the world, ARPA sponsored several projects in buried mine detection and discrimination during the 1990s. Detecting mines generally involves sweeping a metal detector over the suspected area. This raises two practical safety problems. First, passing through the mined area is hazardous. Second, identifying the object by probing and excavating is a slow and dangerous process, even with known, mapped minefields. Trained mine removal specialists are still being killed by mines and unexploded ordnance from the two World Wars and more recent conflicts. Unexploded ordnance contractors used GPR for clearing areas of Kuwait after the Gulf War in 1991.

The capability to sense buried objects makes GPR a candidate technology for remote mine detection. Because of the need to remotely detect mines, every group simulated an airborne SAR system. The experimental systems were tested on special test areas planted with different sizes of real mines, mine-like objects, reference trihedrals, bomb fragments, and steel or metallic clutter.10–13 The Lawrence Livermore National Laboratory (LLNL) mine-detecting SAR was typical of the test systems. Each system followed the general ideas of the block diagram and signal characteristics shown in Figure 13.16a. Because of the effects of antennas and coaxial cables, the overall LLNL system time domain and spectral characteristics are as shown in Figure 13.16b and c. Compensating for the waveform changes was a feature of all research programs.
FIGURE 13.10 Example ARL BoomSAR image for an HH data set. Compare the spiky clutter of forested regions with the tactical targets seen in the zoom window. (Source: reprinted from Ref. 8 with permission of SPIE and ARL.)
Figure 13.17 shows the geometry and problems of remote mine and buried-object detection. Because the SAR must look to the side and down, the beam will have an angle of incidence θB. The signal will not penetrate the ground if θB is excessive. Because there must be enough signal power to get a detectable return at the receiver, this implies a high-power pulse for remote detection. Tests by MIT LL researchers determined the optimum angle of incidence θB to be about 70°, implying a depression angle of 20°.11

When GPR SAR systems imaged test areas such as the one shown in Figure 13.18a, the resulting unprocessed image looked like Figure 13.18b, where three rows of objects are clearly visible. The problems and special techniques for converting raw SAR data into usable images are covered in other references, including the SPIE and IEEE radar conference proceedings. Processed SAR images look like Figure 13.19a, which shows mines, fiducial markers, and clutter for a section of the Nevada Test Site (NTS) Buried Object and Mine Detection Facility. A preliminary look at the two- and three-dimensional images shows that mines were seen along with other objects. The effects of depth are shown in Figure 13.20a and b, showing images of 40 cm diameter disks buried from 5 to 40 cm below the surface. As a practical matter, mines are buried at different depths, depending on their purpose and size. Smaller antipersonnel mines will be barely covered by soil, corresponding to the 5 cm depth. Larger mines, intended for tanks and heavy vehicles, will be buried deeper. (Historical note: Iraq planted mines extensively during the 1991 Gulf War to block the allied attack from Kuwait and Saudi Arabia. Desert winds blew away the surrounding
FIGURE 13.11 The effects of angle of incidence, polarization, and conductivity on tree trunk reflection characteristics. For this case, the tree trunk was modeled as a cylinder 5 m high and 50 cm in diameter. Snell's law relationships for propagation, reflection, and absorption at the interface were modeled at 500 MHz. (Source: reprinted from Ref. 8 with permission of SPIE and ARL.)
FIGURE 13.12 BoomSAR data showing the effects of tactical target scattering at different frequency bands. Body resonance signature is observed at VHF frequencies, but dihedral scattering starts to dominate the returns at the highest frequencies. (Source: reprinted from Ref. 8 with permission of SPIE and ARL.)
FIGURE 13.13 Image of vehicle no. 4 from the Aberdeen tests includes a mixture of both target and different tree returns. At VHF, both the vehicle and one tree are the dominant feature but, at UHF, the vehicle outline is clear. The back corner of the vehicle predominates in the L-band image. (Source: reprinted from Ref. 8 with permission of SPIE and ARL.)
FIGURE 13.14 Statistical summary of UWB radar return data from vehicles and trees, plotted as return level (dB) versus frequency from 0 to 1200 MHz. There is a significant difference in radar returns above 400 MHz. Automatic target discrimination methods can be built using this difference. (Source: reprinted from Ref. 8 with permission of SPIE and ARL.)
FIGURE 13.15 ARL engineers developed signal processing methods to discriminate between vehicles and confuser objects such as trees. (a) BoomSAR image of tactical vehicles (circled) and clutter from trees. (b) After processing with algorithms to recognize and suppress tree returns, all targets were detected without false alarms. (Source: reprinted from Ref. 8 with permission of SPIE and ARL.)
(a) Experimental GPR block diagram (transmit antenna, 3.5-kV pulse transmitter, receive antenna, shaft encoder, pulse counter, trigger generator, transient digitizer, data acquisition computer, data processing computer).
(b) Combined GPR time domain response of the transmitter, antennas, and coaxial cables (relative amplitude versus time, 0 to 10 ns).
(c) Normalized frequency domain response (dB versus frequency, 0.5 to 2.0 GHz).
FIGURE 13.16 The LLNL Ground-Penetrating Radar system block diagram, waveform, and spectrum. These are typical of other ground-penetrating radar experiments. (Source: reprinted from Ref. 10 with permission of SPIE and LLNL.)
(Diagram labels: antenna, transmitted pulse, surface reflection, surface scattered energy, refracted return from the object, object scattered energy, refracted energy, B = Brewster angle, air (εr = 1), soil (εr, σ), buried object.)
FIGURE 13.17 The geometry and problems of buried mine and object detection. Changes in the dielectric constant of the medium will refract the incident wave. Any changes in dielectric constant will produce reflections. There will also be changes in the waveform due to the frequency characteristics of the soil. Remote mine detection must be done at an angle to avoid passing the radar over dangerous areas. Any excessive angle of incidence will prevent penetration of the ground. (Source: reprinted from Ref. 10 with permission of SPIE and LLNL.)
soil, leaving many exposed, thus defeating their purpose. All ground troops received mine training and indoctrination.) Radiated signal polarization is an important factor in mine detection, so all of the experimental GPRs were polarimetric. MIT LL and the Air Force Wright Laboratory studied the reflections of copper pipes, mines, bomb fragments, and other materials as part of the ARPA program. The MIT Rail SAR was a stepped-frequency system, which produced the same effects as an impulse system. Using discrete frequency steps permitted data collection at different frequencies. Characterizing the RCS of mines and other canonical shapes was a program objective. MIT LL researchers found that mine radar cross section depended strongly on polarization, as shown in Figure 13.21 by the larger peak RCS for VV images. Both frequency and polarization were important, as shown in Figure 13.22. As expected, the position and surroundings of a mine also affected the radar cross section characteristics, as shown in Figure 13.23.11 System designers can draw some conclusions about mine RCS from the data presented here.
Target Discrimination
Detecting buried objects is easy; however, the problem is to discriminate between mines and clutter. Target discrimination has become a major research area. ARL researchers looked at the problems of minefield detection using the BoomSAR described earlier.7 Figure 13.24 shows the extent and nature of the clutter problem. This picture would be a typical unprocessed image taken by a low-altitude flight over a suspected area. Naturally occurring clutter from rocks, small plants, roots, and other nonmetallic objects puts extra returns into the image. The raw image does not provide full information about the area. While mines might be distinguished by their small size, and minefields by geometric regularity, mines could easily become lost in clutter. Mistaking clutter for mines is another problem. Planting objects that have the same radar characteristics as mines can further confuse the image. Researchers from ARL, ERIM, Lockheed Martin Tactical Defense Systems, ERIM International, and other organizations have been working on the mine and clutter separation problem. They have produced much the same results as the ERIM and ARL foliage-penetrating radar demonstrations did with vehicles hidden in trees.13
(a) Typical Nevada Test Site (NTS) minefield plot. Subplots 1 through 5 contain rows of mines (M), surrogate mines (S), fiducial markers (F), and rebar (R).
(b) Reconstructed SAR image of minefield I-south, subplot 4. Note that three rows of objects clearly stand out.
FIGURE 13.18 Passing a UWB GPR SAR along a minefield produces a composite radar image. The problem is to process the image to show the location of buried objects. (Source: reprinted from Ref. 10 with permission of SPIE and LLNL.)
ARL engineers determined that impulse radar returns from mines viewed obliquely, as shown in Figure 13.25a, have distinct returns from the leading and rear edges. Typical impulse radar returns and spectra from M-20 mines are shown in Figure 13.25b. In contrast, Figure 13.25c shows how clutter or confuser objects have distinctly different frequency domain characteristics below 500 MHz.13
FIGURE 13.19 A reconstructed SAR image of the test minefield. The reflectors marked with FIDs indicate the physical locations. Note that the mines do produce a different image from the reflectors and clutter objects. (a) Two-dimensional image. (b) 3D SAR image of the minefield shown in (a). (Source: reprinted from Ref. 10 with permission of SPIE and LLNL.)
(a) Reconstructed SAR image of 40 cm disks buried at depths from 5 to 40 cm.
(b) 3D images of the buried disks. A tilting of the disk buried at 5 cm may have caused the small return compared to the 10 cm depth disk.
FIGURE 13.20 The depth of an object will affect the SAR image. These show images of 40 cm dia. disks buried at depths from 5 to 40 cm. Steel rebars were also buried as confuser objects. (Source: reprinted from Ref. 10 with permission of SPIE and LLNL.)
(Panels, range versus cross range in meters: (a) HH, peak RCS = –18.4 dBsm; (b) VV, peak RCS = –13.6 dBsm.)
FIGURE 13.21 Images of 10 M-20 antitank mines on grass, taken at a depression angle of 30° between 0.2 and 0.5 GHz by MIT LL. (Source: reprinted from Ref. 11 with permission of MIT Lincoln Laboratory, Lexington, MA, and the SPIE.)
(Plot: RCS in dBsm versus frequency in GHz for HH and VV polarizations.)
FIGURE 13.22 Mine radar cross sections vary with frequency. In this case, MIT LL measured the nine antitank mines distributed in a 3 × 3 square on grass and seen from about a 30° depression angle. Measurements were taken over a frequency range of 0.2 to 2 GHz. RCS for four subbands, 0.2 to 0.5 GHz, 0.5 to 1 GHz, 1 to 1.5 GHz, and 1.5 to 2 GHz, is shown here. (Source: reprinted from Ref. 11 with permission of MIT Lincoln Laboratory, Lexington, MA, and the SPIE.)
(Panels, range versus cross range in meters: a. Trihedral markers, HH, peak RCS = –6.5 dBsm. b. Trihedral markers, VV, peak RCS = –3.5 dBsm. c. Mines on sand, HH, peak RCS = –17 dBsm. d. Mines on sand, VV, peak RCS = –12 dBsm. e. Buried mines, HH, peak RCS = –17.6 dBsm. f. Buried mines, VV, peak RCS = –17.4 dBsm.)
FIGURE 13.23 Radar images of trihedrals and antitank mines on the surface and buried in sand. All images were taken at 0.2 to 0.5 GHz with a 30° depression angle. (Source: reprinted from Ref. 11 with permission of MIT Lincoln Laboratory, Lexington, MA, and the SPIE.)
FIGURE 13.24 A VV-polarized frame of data taken by the ARL at the Yuma test site. Notice that the mine field returns are small compared to the many natural clutter objects. Possible geometric regularity is the only characteristic that might distinguish mines from clutter. (Source: reprinted from Ref. 13 with permission of SPIE and ARL.)
Signal processing engineers have developed algorithms for identifying and removing clutter and confuser objects. They have done this by using the spectral differences between mines and clutter. The mathematical details are covered in research papers published by the IEEE, IEE, SPIE, and other organizations. Figure 13.26 shows a practical example of how an SAR minefield image can be cleaned to remove clutter and show only mines. Comparing the before and after pictures shows enhanced mine images and most of the clutter removed.3
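To make the idea concrete, the following fragment is a minimal sketch in Python of a spectral discriminator of this kind. It is not the ARL algorithm; the 500 MHz split frequency and the 0.6 threshold are placeholder values chosen only to illustrate the use of spectral differences below 500 MHz.

import numpy as np

def low_band_energy_fraction(range_profile, sample_rate, f_split=500e6):
    # Fraction of the return energy that lies below f_split (here 500 MHz).
    spectrum = np.abs(np.fft.rfft(range_profile)) ** 2
    freqs = np.fft.rfftfreq(len(range_profile), d=1.0 / sample_rate)
    return spectrum[freqs < f_split].sum() / spectrum.sum()

def looks_like_mine(range_profile, sample_rate, threshold=0.6):
    # Placeholder decision rule: flag detections whose low-band energy fraction
    # exceeds the threshold; a fielded discriminator would use trained features.
    return low_band_energy_fraction(range_profile, sample_rate) > threshold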
13.4 AIRBORNE UWB SAR SYSTEMS
13.4.1 BACKGROUND
Earlier sections discussed the demonstrated results that could be achieved by airborne SAR systems for foliage penetration and remote mine detection. The fine range resolution available from UWB impulse systems also gives them a capability for precision imaging and mapping. SRI International has a long history of airborne impulse radar development for precision imaging and mapping. This section describes two examples of VHF impulse SARs.
VHF Impulse SAR
In 1990, SRI International built a VHF SAR carried on a Beech Queen Air aircraft, as shown in Figure 13.27a. The improved King Air SAR, shown in Figure 13.27b, used an improved antenna mounted
FIGURE 13.25 Using the reflected radiation characteristics of mines and clutter can help to distinguish between them. (a) Cartoon illustration of a mine response to a UWB impulse signal. (b) Typical M-20 mine radar returns, SAR images, and spectra. The 500 MHz dip was predicted by method of moments modeling. (c) Confuser objects have similar radar returns but different spectral characteristics below 500 MHz. (Source: reprinted from Ref. 13 with permission of ARL and SPIE.)
FIGURE 13.26 Using the different spectral characteristics of mines and clutter led to developing an algorithm to recognize and remove clutter and confuser objects. (Source: reprinted from Ref. 13 with permission of ARL and SPIE.)
(a) The nine-element array antenna used on a Beech Queen Air
(b) The impulse antenna of the SRI King Air SAR system
FIGURE 13.27 SRI has been working with impulse SAR systems since 1981. (Source: photos courtesy of SRI International.)
below the fuselage. Table 13.5 gives the system specifications, including the frequency ranges of the three receivers used to cover the entire spectrum. The waveform and spectrum are shown in Figure 13.28. Using an array antenna produced the side-looking, HH polarized elevation beam pattern shown in Figure 13.29.
TABLE 13.5 SRI UWB SAR Operational Parameters14
Frequency: Channel 1, 100–300 MHz; Channel 2, 200–400 MHz; Channel 3, 100–600 MHz
Depression angle: 30° to 60°
Swath width: 500 m
PRF: 165/s
Peak power: 50 kW
Pulse width: 5 ns
Integration beamwidth: 330° to 60° (software selectable)
(a) Typical waveform (5 ns/div). (b) Typical spectrum (5 dBm/div, 100 MHz/div).
FIGURE 13.28 The UWB SAR radiated waveform and spectrum. This closely approximates a monocycle waveform covering the VHF spectrum. (Source: reprinted from Ref. 14 with permission of SPIE and SRI International.)
FIGURE 13.29 The peak pulsed voltage elevation response of the nine-element antenna array shown in Figure 13.27. (Source: reprinted from Ref. 14 with permission of SPIE and SRI International.)
The SAR had a demonstrated 1 m range and azimuth resolution with a 200 MHz bandwidth. The usual scale for reconnaissance SAR images is from 1:50,000 to 1:500,000, while the scale for the SAR images shown in Figure 13.30 is 1:1500. For a practical benchmark, a 1:50,000 map can barely show man-made features and is the smallest scale military battle map used by ground troops. Features such as small boats are visible in Figure 13.30. Examples of the King Air SAR imagery are shown in Figures 13.31 and 13.32. The high spatial resolution allows imaging of small objects, such as the supports in an antenna array, 1 ft trihedral reflectors, and 1 ft diameter spheres, both on the surface and buried in dry ground. The foliage-penetrating abilities are shown in Figure 13.32, where trucks under forest cover were separated from the background.16
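As a quick sanity check on these numbers, the standard relation between impulse bandwidth and range resolution gives a value consistent with the demonstrated figure of about 1 m:
\[ \Delta R = \frac{c}{2B} = \frac{3\times10^{8}\ \text{m/s}}{2\times 200\times10^{6}\ \text{Hz}} = 0.75\ \text{m} \]
Azimuth resolution, by contrast, depends on the synthetic aperture length and the image-formation processing rather than on the signal bandwidth.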
13.4.2 GEOSAR FOLLOW-ON
SRI International's success in VHF SAR imaging led to a project to use high-resolution impulse SAR for precision topographic mapping beneath foliage. An earlier system called the FOLPEN III radar was the basis for the GeoSAR system carried in a Jetstream 31 aircraft with the antenna radome under the fuselage. Table 13.6 summarizes the characteristics.
FIGURE 13.30 UWB VHF impulse SAR image of Half-Moon Bay, California. The radar image picks out many of the prominent map features and shows the location of the small boats on the yacht piers. (Source: reprinted from Ref. 14 with permission of SPIE and SRI International.)
13.4.3 TEST RESULTS
The test objective was to provide topographic maps of a remote tree-covered area by using foliage-penetrating VHF impulse signals to image ground returns. Figure 13.33 shows an impulse SAR image of the Laurel Canyon test area. Because the problem was to build a contour map, data from several passes were combined to achieve a stereographic effect. Figure 13.34 shows the geometry used to combine two impulse SAR maps and produce the three-dimensional map shown in Figure 13.35. Additional mathematical processing to smooth the contours produced the map shown in Figure 13.36.15
FIGURE 13.31 Improvements and experience produced the SRI King Air SAR systems, which can produce high-resolution images. (a) Small objects are visible in this picture of the SRI Wide Aperture Radar Facility Receive Array (continues). (Source: photo courtesy of SRI International.)
FIGURE 13.31 Improvements and experience produced the SRI King Air SAR systems, which can produce high-resolution images. (b) The SRI impulse SAR detected medium-sized objects buried in dry soil. The geolocation accuracy of objects in the Mojave Desert was >1 m. (Source: photo courtesy of SRI International.)
FIGURE 13.32 Airborne SAR detection of foliage-hidden objects. This shows how the SRI King Air SAR detected trucks obscured by North American forests. (Source: courtesy of SRI International.)
TABLE 13.6 SRI FOLPEN III SAR System Description15
Platform: Jet Stream-31
Antennas: Quad-ridged horns (2)
Polarization: HH, VV, HV (any 2)
Samples per record: 4096 each I and Q
Frequency bands: 200–400, 300–500, 200–600 MHz
Transmitter type: Solid-state impulse
Transmitter peak voltage: 15 kV
PRF: 150 Hz
Resolution: 1 m/0.5 m
Altitude range: 1,000–6,000 ft
Digitizer: 500 MS/s each I and Q, 8 bit
On-board display: Scrolling raw data
Motion compensation: Post-processed DGPS

13.5 CONCLUSIONS
VHF UWB signals can penetrate cover, such as foliage and earth, to produce remote images of concealed objects. This was shown with vehicles hidden by foliage and with buried land mines. However, the demonstrations were all done at short ranges, and practical remote sensing from airborne platforms will require higher impulse power levels. The CARABAS, ERIM P-3 SAR, and SRI International VHF Impulse SAR systems showed that UWB signals can be applied to remote imaging and mapping. Clutter and returns from unwanted objects such as trees, buried objects, etc., remain a major problem, as shown in the FOLPEN demonstrations of ERIM and the ARL. Research efforts in signal processing have shown that the radar returns of targets such as mines and vehicles can be distinguished
FIGURE 13.33 A foliage-penetrating impulse SAR image of the Laurel Canyon test area. (Source: reprinted with permission of MIT Lincoln Laboratory, Lexington, Massachusetts.)
based on their spectral characteristics. Signal processing remains a major area of ongoing progress and will determine the effectiveness of any particular system. All of the results shown here resulted from off-line processing. Considerable work goes into preparing images from UWB SAR data, so real-time processing will be a major goal for future system development. Most of the cases examined were driven by defense applications, in which case trees and other objects were relegated to the category of clutter or confuser objects. The fact that different forms of biomass could have distinguishable impulse and other UWB radar returns opens new possibilities for environmental monitoring and surveying. Low-altitude UWB SAR data could augment satellite images for future biomass surveys.
13.6 ACKNOWLEDGMENTS
Special thanks to Dan Sheen, formerly of ERIM International, Roger Vickers and Ken Dreyer of SRI International, Dennis Blejer of the MIT Lincoln Laboratory, and Jeffrey Sichina of the Army
FIGURE 13.34 Two impulse SAR maps can be combined using the stereographic geometry shown here. (Source: reprinted with permission of MIT Lincoln Laboratory, Lexington, Massachusetts.)
Research Laboratory for their assistance in providing reports and additional information. My thanks to all the organizations for providing the original artwork to help in preparing this book.
REFERENCES
1. D.R. Sheen, D.W. Kletzli Jr., N.P. Malinas, T.B. Lewis and J.R. Roman, "Ultrawideband measurements of foliage transmission properties at UHF: measurements systems and results," SPIE Proceedings, Vol. 1631, Ultrawideband Radar, 1992, pp. 206–218.
2. D.R. Sheen, Susan C. Wei, Terry B. Lewis and Stuart R. deGraff, "Ultrawide bandwidth polarimetric SAR imagery of foliage obscured objects," SPIE Proceedings, Vol. 1875, Ultrahigh Resolution Radar, 1993, pp. 106–103.
3. C. Cafforio, C. Prati and F. Rocca, "SAR Data Focusing Using Seismic Migration Techniques," IEEE Trans. on AES, Vol. AES-27, pp. 194–207, March 1991.
4. Dan R. Sheen and Terry B. Lewis, "The P-3 Ultra-Wideband SAR," SPIE Proceedings, Vol. 2747, Radar Sensor Technology, 1994, pp. 20–24.
5. S. De Graaf, "Sidelobe reduction via adaptive FIR filtering in SAR imagery," submitted to IEEE Trans. on Image Processing, Sept. 1992.
FIGURE 13.35 A three-dimensional contour map of the Laurel Canyon prepared from stereo SAR data. (Source: reprinted from Ref. 14 with permission of MIT Lincoln Laboratory, Lexington, Massachusetts.)
6. D.R. Sheen, Susan C. Wei, Terry B. Lewis and Stuart R. deGraff, "Ultrawide bandwidth polarimetric SAR imagery of foliage obscured objects," SPIE Proceedings, Vol. 1875, Ultrahigh Resolution Radar, 1993, pp. 106–103.
7. Norm Vandenberg, Stanley Shackman and Dave Wiseman, "Two-Pass Interferometric UHF SAR Demonstration for FOPEN Applications," ERIM International, 1996.
8. Lam Nguyen, Ravinder Kapoor, David Wong and Jeffrey Sichina, "Ultra-Wideband Radar Target Discrimination Utilizing an Advanced Feature Set," SPIE Proceedings, Vol. 3370, Algorithms for Synthetic Aperture Radar Imagery V, April 1998, pp. 289–306.
9. D.J. Daniels, Surface-Penetrating Radar, IEE, London, UK, 1996.
10. Paul D. Sargis, Dean Lee, E. Stephen Fulkerson, Billy J. McKinley and William D. Aimonetti, "A ground penetrating radar for mine detection," SPIE Proceedings, Vol. 2217, Aerial Surveillance Sensors Including Obscured and Underground Target Detection, 1994, pp. 38–49.
11. Dennis Blejer, Carl Frost, Steven Scarborough, Karl Kappra and Keith Sturgis, "SAR imaging of mine-like targets over ultra-wide bandwidths," SPIE Proceedings, Vol. 2496, Detection Technologies for Mines and Minelike Targets, 1995, pp. 54–69.
12. T.O. Grosch, Check F. Lee, Eileen M. Adams, Chi Tran, Francois Koening, Kowk Tom and Roger Vickers, "Detection of surface and buried mines with a UHF airborne SAR," SPIE Proceedings, Vol. 2496, Detection Technologies for Mines and Minelike Targets, 1995, pp. 110–120.
13. Lam Nguyen, Karl Kappra, David Wong, Ravinder Kapoor and Jeffrey Sichina, "A mine field detection algorithm utilizing data from an ultra-wideband Wide Area Surveillance Radar," SPIE Proceedings, Vol. 3392, Detection and Remediation Technologies for Mines and Minelike Targets III, April 1998, pp. 627–643.
FIGURE 13.36 The smoothed contour map of the Laurel Canyon test area. (Source: reprinted with permission of MIT Lincoln Laboratory, Lexington, Massachusetts.)
14. Roger Vickers, Victor H. Gonzalez and Robert Ficklin, "Results from a VHF impulse synthetic aperture radar," SPIE Proceedings, Vol. 1631, Ultra-wideband Radar, 1992, pp. 219–225.
15. Kenneth Dreyer, Marsha Jo Hannah, Joel Kositsky and Ben Noviello, "GeoSAR Follow-On: Final Report, September 1996," SRI International, Menlo Park, CA.
16. "SRI International Foliage and Ground Penetrating Radar," brochure from SRI International.
14 Bistatic Radar Polarimetry Theory
Anne-Laure Germond, Eric Pottier, Joseph Saillard
CONTENTS
14.1 Introduction
14.2 Polarimetry Background
14.3 Radar Wave Polarization Theory
14.4 Radar Target Polarimetry
14.5 The Polarization Fork
14.6 The Euler Parameters
14.7 Monostatic and Bistatic Polarization
Conclusions
References
14.1 INTRODUCTION
The history of radar (radio detection and ranging) begins in the 1920s with the discovery that metallic objects reflect radio waves. A conventional radar system may measure the amplitude, frequency, and differential phase of the received wave for comparison with the transmitted wave to recover target information. However, the detection and identification of radar targets will be more difficult with this type of radar because of the increasingly hostile target environment. To overcome the effects of target environments, the use of wave polarization data can enhance target detection. This notion gives rise to a new theory of radar polarimetry.1 Since the 1950s, scientists such as G. Sinclair have been interested in the way that targets depolarize the transmitted electromagnetic waves reflected back to the transmitter. The Sinclair matrix models the modification of the transmitted polarization state into the scattered polarization state.2 Backscattering refers to a scattered wave directed toward the source. If the transmitter and receiver are located at the same place, the radar configuration is called monostatic. When the transmitter and the receiver are widely separated, the configuration is called bistatic. The monostatic polarimetric radar is a particular bistatic case where the bistatic angle tends to zero.3 The requirement to detect small radar cross section (RCS) or stealthy targets raises a need for bistatic radar systems. Applying polarimetric techniques to target detection and classification means that we must now examine the physics of bistatic radar polarimetry.
14.2 POLARIMETRY BACKGROUND
14.2.1 HISTORY
At first, only the amplitude and frequency information of the electromagnetic wave were measured in the 1920s. Some current radar systems also allow the measurement of the relative phase, which helps to resolve the physical features of scatterers and targets. N. Wiener discovered some important
properties of polarized waves during the period 1927 to 1929. R. C. Jones and H. Mueller developed these discoveries. Later, G. Sinclair showed that the polarization state of a wave scattered by a radar target is different from that of the transmitted wave. He expressed the change, which is related to the properties of a coherent radar target, by the 2 × 2 scattering matrix, commonly called the Sinclair matrix. Then, E. Kennaugh4 introduced a new approach to radar theory, which was based on the studies of G. A. Deschamps,5 and developed the optimal target polarization concept for the reciprocal monostatic relative phase case. The meteorological radar is a direct application of this study. At the beginning of the 1970s, J. R. Huynen6 made a phenomenological study of the target in the particular case of backscattering. Huynen's target measurements on the relative phase backscattering matrix sparked a new interest in polarimetric radar developments. He published his phenomenological theory of radar targets in which he defines nine independently modeled physical parameters for the monostatic configuration case. These Huynen parameters bring out geometrical properties and physical information about the structure of the target, and they can help in target identification. These parameters are linked together by four monostatic target equations. A second polarimetric tool is the polarization fork or Kennaugh fork, which is defined by the characteristic polarization states. This increases the target information in a monostatic radar configuration. Furthermore, the location of the monostatic characteristic points can be determined by the five Euler parameters, which are extracted from the backscattering matrix. All of these previous monostatic theoretical results7 have been redeveloped in this chapter according to a bistatic configuration. First, the Polish researcher Z. H. Czyz8–10 proposed a theory of bistatic radar polarimetry with a geometrical approach to the problem, whereas the approach developed here is analytical. More recently, M. Davidovitz and W. M. Boerner have proposed a decomposition of any matrix into the sum of symmetric and skew-symmetric matrices.11 The extension of the monostatic theory to the bistatic one is based on this decomposition. W. M. Boerner is well known throughout the radar polarimetry community for his research in the area of direct and inverse vector electromagnetic scattering problems.12,13 His theoretical contributions have encouraged a new international community of engineers and scientists. He promotes the advantages of radar polarimetry around the world and regularly presents very encouraging results gained from polarization vector measurements. The expansion of the polarimetry theory to the bistatic case leads to the definition of seven new bistatic parameters and the derivation of nine polarimetric bistatic target equations.14 Additionally, we propose the concept of a bistatic target diagram. The 14 characteristic polarization states of the new bistatic polarization fork and the 7 bistatic Euler parameters are also presented.15 The results concerning the particular case of the monostatic theory are recalled after the presentation of those of the bistatic theory.
14.2.2 WHY POLARIZATION IS IMPORTANT
Radar polarimetry theory starts in the 1950s, when G. Sinclair demonstrated that a target changes the polarization state of a transmitted wave. More information about the target can be obtained from knowledge of the scattered wave polarization state for a given transmitted wave. During the past 15 years, basic research studies on the fundamentals of coherent and partially coherent radar polarimetry were carried out with applications to target detection in clutter, target and background clutter classification, and target imaging and identification. To understand the importance of bistatic radar polarization, consider that a conventional scalar radar measures only one component of the scattered wave, which is the scattered part defined by the single receiver antenna characteristics. A polarimetric radar has two receiving antennas with orthogonal polarization states. Both components of the scattered wave are measured successively to get the radar vector. First, one
polarization state is transmitted, and two measurements are made, in both the transmitter polarization and the polarization orthogonal to it. Then, the orthogonal polarization state is transmitted, and another two measurements are made. Polarimetric radar has additional information in the received radar vector, which allows monostatic polarimetric measurements that bring out geometric properties and physical information about the structure of the target. Furthermore, the use of polarimetric radar allows detection of some stealthy targets that completely depolarize the polarization state of the transmitted wave. In that case, the copolarized power, when the antennas at transmission and at reception have the same polarization state, equals zero, but the crosspolarized power is then maximum.
14.2.3 MONOSTATIC AND BISTATIC RADAR
Whereas many radar systems utilize the same site for transmission and reception, this is not the only configuration that can be employed. A system with widely separated antennas used for transmission and reception is called a bistatic radar. In the bistatic case, the radiation source and the receiver are at different locations. The target cross section is not only a function of its orientation and frequency; it is also a function of the bistatic angle described by the location of the target relative to the transmitter and the receiver.16,17 Bistatic radar appeared at the beginning of the 1930s.18–20 Because of the difficulty of integrating the transmitter and receiver with the same antenna, the first radars had bistatic configurations. When duplexer technology appeared in 1936, the monostatic system took the place of the bistatic radar for practical reasons.21 For our discussion, monostatic radars have the transmitter and receiver located at the same place. Bistatic radars have large distances between the transmitter and the receiver relative to the transmitter-target and target-receiver ranges.22 The geometry of the monostatic and bistatic configurations is illustrated in Figures 14.1 and 14.2. The bistatic radar system has some advantages, because the wide separation of the receiver and transmitter eliminates any coupling.
FIGURE 14.1 Monostatic radar systems have the transmitter and receiver in essentially the same place.
FIGURE 14.2 Bistatic radar systems have a large angle β between the transmitter and receiver.
14.3 RADAR WAVE POLARIZATION THEORY
14.3.1 MONOCHROMATIC ELECTROMAGNETIC WAVES
The solution of the Maxwell equations shows that the electrical field vector E of a monochromatic plane electromagnetic wave is normal to the direction of propagation k, and the magnetic field vector H is such that the trihedral (E, H, k) is direct. So, for an electromagnetic plane wave, the magnetic vector H is proportional to the electrical field E and gives no additional information. The electrical field is a function only of z and t, and the propagation equation is
\[ \frac{\partial^2 \mathbf{E}}{\partial z^2} - \varepsilon_0 \mu_0 \frac{\partial^2 \mathbf{E}}{\partial t^2} = 0 \qquad (14.1) \]
The solution of the previous equation is
\[ \mathbf{E}(z,t) = \mathbf{E}_1\!\left(t - \frac{z}{c}\right) + \mathbf{E}_2\!\left(t + \frac{z}{c}\right) \qquad (14.2) \]
with the first term associated with propagation in the direction of positive z at the speed c, and the second term with propagation in the direction of negative z at the same speed. In the following, only the term describing propagation toward positive z is taken into account, because the electromagnetic wave under study is assumed to be progressive. The combination of two sinusoidal waves along two directions mutually orthogonal to the direction of propagation, defined for positive z, can be written
\[ \mathbf{E}(z,t) = E_x(z,t)\,\hat{\mathbf{x}} + E_y(z,t)\,\hat{\mathbf{y}} \qquad (14.3) \]
with
\[ E_x(z,t) = E_{0x}\cos(\omega t - k_0 z + \delta_x), \qquad E_y(z,t) = E_{0y}\cos(\omega t - k_0 z + \delta_y) \]
where
f = wave frequency
ω = pulsation (angular frequency) of the wave
k0 = wave number in free space
δx and δy = absolute phases of the two components of the electrical field E
14.3.2 THE ELLIPSE OF POLARIZATION
The two components Ex and Ey of the wave are linked by the following relationship:
\[ \left(\frac{E_x}{E_{0x}}\right)^2 - 2\,\frac{E_x E_y}{E_{0x}E_{0y}}\cos\delta + \left(\frac{E_y}{E_{0y}}\right)^2 = \sin^2\delta \qquad (14.4) \]
with δ = δy – δx. The path described in the course of time by the projection, onto an equiphase plane, of the extremity of the electrical field vector is an ellipse whose equation is given by Equation (14.4). The polarization of a monochromatic progressive plane wave is graphically represented in Figure 14.3.
FIGURE 14.3 The polarization ellipse varies from a segment (linear polarizations) to a circle (circular polarizations) according to the associated polarization state.
The wave polarization state is totally described by five geometrical parameters of the polarization ellipse:
• The ϕ angle represents the ellipse orientation. Its definition domain is [–π/2, π/2].
• The τ angle defines the ellipticity. Its definition domain is [–π/4, π/4].
• The polarization sense is determined by the rotation sense along the propagation axis. The polarization is right elliptical if the ellipse rotates clockwise when the wave is viewed in the propagation sense; otherwise, the polarization is left elliptical.
• The square of the magnitude A of the ellipse is proportional to the power density of the received wave at the observation point.
• The α angle represents the absolute phase of the ellipse. Its definition domain is [–π, π].
14.3.3 JONES VECTOR
The electrical field of a monochromatic plane wave of any polarization is defined in Equation (14.3). Because the components of the electrical wave oscillate at the same frequency when the wave is monochromatic, the temporal information can be omitted, and Equation (14.3) is simplified to
\[ \mathbf{E}(z) = e^{-j\frac{2\pi}{\lambda}z}\begin{bmatrix} E_x e^{j\delta_x} \\ E_y e^{j\delta_y} \end{bmatrix}, \qquad \mathbf{E}(z,t) = \mathrm{Re}\{\mathbf{E}(z)\,e^{j\omega t}\} \qquad (14.5) \]
As the wave is planar, the electrical field E(z) is the same at any point of the wave plane. It is possible to suppress the spatial information. The study of the wave can be restricted to the plane corresponding to z = 0, for example. The electrical field vector becomes
\[ \mathbf{E}(0) = \begin{bmatrix} E_{0x}\,e^{j\delta_x} \\ E_{0y}\,e^{j\delta_y} \end{bmatrix} \qquad (14.6) \]
The vector E(0) is called the Jones vector of the wave. The amplitude and phase of the complex components of the electrical field are totally defined by this vector. In the general case, the Jones vector is expressed in any basis (x, y) so that
\[ \mathbf{E} = \begin{bmatrix} E_x \\ E_y \end{bmatrix} = \begin{bmatrix} |E_x|\,e^{j\delta_x} \\ |E_y|\,e^{j\delta_y} \end{bmatrix} \qquad (14.7) \]
The geometric characteristics of the polarization ellipse can be extracted from the Jones vector. The phase difference is
\[ \delta = \delta_y - \delta_x \qquad (14.8) \]
The orientation ϕ is
\[ \tan 2\varphi = \frac{2|E_x||E_y|\cos\delta}{|E_x|^2 - |E_y|^2} \qquad (14.9) \]
The ellipticity τ is
\[ \sin 2\tau = \frac{2|E_x||E_y|\sin\delta}{|E_x|^2 + |E_y|^2} \qquad (14.10) \]
The sense of polarization is
\[ \tau < 0\text{: right elliptical polarization}, \qquad \tau > 0\text{: left elliptical polarization} \qquad (14.11) \]
The magnitude of the wave is
\[ A = \sqrt{|E_x|^2 + |E_y|^2} \qquad (14.12) \]
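As a small numerical illustration of Equations (14.8) through (14.12), the following Python sketch extracts the ellipse parameters from a Jones vector; the circular-wave example at the end assumes the sign convention of Equation (14.11), under which a left circular wave has τ = +π/4.

import numpy as np

def ellipse_parameters(E):
    # Orientation phi, ellipticity tau, and magnitude A of the polarization
    # ellipse from a Jones vector E = [Ex, Ey] (Equations 14.8 through 14.12).
    Ex, Ey = E
    delta = np.angle(Ey) - np.angle(Ex)                      # (14.8)
    phi = 0.5 * np.arctan2(2*abs(Ex)*abs(Ey)*np.cos(delta),
                           abs(Ex)**2 - abs(Ey)**2)          # (14.9)
    tau = 0.5 * np.arcsin(2*abs(Ex)*abs(Ey)*np.sin(delta) /
                          (abs(Ex)**2 + abs(Ey)**2))         # (14.10); tau < 0 right, tau > 0 left (14.11)
    A = np.hypot(abs(Ex), abs(Ey))                           # (14.12)
    return phi, tau, A

# Example: a circular wave [1, j]/sqrt(2) gives tau = +pi/4.
print(ellipse_parameters(np.array([1.0, 1.0j]) / np.sqrt(2)))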
Any polarization state, represented by its Jones vector, can be expressed in any orthogonal polarization basis so that
\[ \mathbf{E} = E_x\,\hat{\mathbf{x}} + E_y\,\hat{\mathbf{y}} \qquad (14.13) \]
The general expression of a Jones vector, associated with any elliptical polarization state, with ν the absolute phase, is given by
\[ \mathbf{E}_{(x,y)} = A\,e^{-j\nu} \begin{bmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{bmatrix} \begin{bmatrix} \cos\tau \\ j\sin\tau \end{bmatrix} \qquad (14.14) \]
14.3.4 THE STOKES VECTOR
For each complex Jones vector, there exists a real equivalent representation, which is the Stokes vector g(E) given by
\[ \mathbf{g}(\mathbf{E}) = 2[V]\left(\mathbf{E} \otimes \mathbf{E}^{*}\right) \qquad (14.15) \]
where ⊗ is the Kronecker product, T means transpose and * conjugate, and [V] is defined by the following matrix:
\[ [V] = \frac{1}{2}\begin{bmatrix} 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & -1 \\ 0 & 1 & 1 & 0 \\ 0 & -j & j & 0 \end{bmatrix} \qquad (14.16) \]
So the components of g(E) are equal in any basis (A, B) to
\[ \mathbf{g}(\mathbf{E}_{(A,B)}) = \begin{bmatrix} g_0 \\ g_1 \\ g_2 \\ g_3 \end{bmatrix} = \begin{bmatrix} |E_A|^2 + |E_B|^2 \\ |E_A|^2 - |E_B|^2 \\ 2\,\mathrm{Re}(E_A E_B^{*}) \\ -2\,\mathrm{Im}(E_A E_B^{*}) \end{bmatrix} \qquad (14.17) \]
The components g1, g2, g3 of the Stokes vector g(E(A,B)) correspond to the Cartesian coordinates of a point located at the surface of a sphere whose radius is equal to the value of the component g0. That is the main interest of the Stokes vector. Moreover, when a wave is totally polarized, the real components of the Stokes vector are linked together by the following relation:
\[ g_0^2 = g_1^2 + g_2^2 + g_3^2 \qquad (14.18) \]
Physically, g0 expresses the total intensity of the polarized wave, with g1 the part of the horizontally or vertically polarized wave, g2 the part of the ±45° linearly polarized wave, and g3 the part of the right or left circularly polarized wave. Another variable, called the polarization ratio, defines the wave polarization state as
\[ \rho_{E(x,y)} = \frac{E_y}{E_x} \qquad (14.19) \]
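A short Python sketch of Equations (14.17) and (14.18), useful for checking a measured Jones vector numerically; the circular-wave test value is only an illustration.

import numpy as np

def stokes(E):
    # Stokes vector [g0, g1, g2, g3] of a Jones vector E = [EA, EB] (Equation 14.17).
    EA, EB = E
    return np.array([abs(EA)**2 + abs(EB)**2,
                     abs(EA)**2 - abs(EB)**2,
                     2*np.real(EA*np.conj(EB)),
                    -2*np.imag(EA*np.conj(EB))])

g = stokes(np.array([1.0, 1.0j]) / np.sqrt(2))
# For a totally polarized wave, g0**2 == g1**2 + g2**2 + g3**2 (Equation 14.18).
assert np.isclose(g[0]**2, g[1]**2 + g[2]**2 + g[3]**2)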
14.3.5 THE POINCARÉ SPHERE
The Stokes vector allows the representation of any polarization state of a totally polarized wave at the surface of a sphere, called the Poincaré sphere, shown in Figure 14.4. So, each polarization state is uniquely represented by a point on the surface of the Poincaré sphere. The last three normalized components of the Stokes vector g(EP) can be expressed in two different ways:
\[ \begin{bmatrix} q_p \\ u_p \\ v_p \end{bmatrix} = \begin{bmatrix} \cos 2\varphi \cos 2\tau \\ \sin 2\varphi \cos 2\tau \\ \sin 2\tau \end{bmatrix} = \begin{bmatrix} \cos 2\gamma \\ \sin 2\gamma \cos 2\delta \\ \sin 2\gamma \sin 2\delta \end{bmatrix} \qquad (14.20) \]
where ϕ, τ = the spherical angles and γ, δ = the Deschamps parameters.
FIGURE 14.4 The Poincaré sphere is a helpful tool for visualizing the wave polarization state associated with the τ ellipticity angle and the ϕ orientation angle.
14.3.6 COMPARISON OF DIFFERENT POLARIZATION STATES
The hermitian product of two Jones vectors is defined as
\[ \langle \mathbf{A}_{(x,y)}, \mathbf{B}_{(x,y)} \rangle = \mathbf{A}_{(x,y)}^{T}\,\mathbf{B}_{(x,y)}^{*} \qquad (14.21) \]
The different polarization states are studied in the orthonormal basis (eH, eV). The notions of parallelism and orthogonality are only valid for waves that propagate in the same direction. Two electromagnetic waves are considered orthogonal if the hermitian product of the two associated Jones vectors is null, so that
\[ \langle \mathbf{A}_{(x,y)}, \mathbf{B}_{(x,y)} \rangle = 0 \;\Rightarrow\; \varphi_B = \varphi_A + \frac{\pi}{2}, \quad \tau_B = -\tau_A \qquad (14.22) \]
Two electromagnetic waves are considered parallel if the two associated Jones vectors are proportional, and
\[ \mathbf{A}_{(x,y)} = k\,\mathbf{B}_{(x,y)},\; k \in \mathbb{C} \;\Rightarrow\; \varphi_B = \varphi_A, \quad \tau_B = \tau_A \qquad (14.23) \]
The link between the different polarizations is represented in Figure 14.5. Different representations of the polarization of an electromagnetic wave have been shown. The Jones vector and the Stokes vector, which are associated with a polarization state, and also the Poincaré sphere were defined.
14.4 RADAR TARGET POLARIMETRY
We discussed the polarization of an electromagnetic wave in the previous section. This section describes the link between the polarization state of a wave scattered by a target and the transmitted wave. The polarization transformation of the backscattered wave can be modeled by two matrices: the scattering matrix and the Kennaugh matrix. The objective is to extract information relative to the characteristics of the target from these matrices.
FIGURE 14.5 Link between different polarizations: the initial polarization E(x,y) = (A, α, ϕ, τ), the parallel polarization E∥(x,y) = (B, β, ϕ, τ), the orthogonal polarization E⊥(x,y) = (A, α, ϕ + π/2, −τ), and the conjugate polarization E*(x,y) = (A, α, ϕ, −τ).
14.4.1 SCATTERING MATRIX: THE FIRST IMPORTANT CONCEPT
Definition
When a target is illuminated by an electromagnetic wave, the polarization of the scattered wave is generally different from that of the transmitted wave. The nature of the depolarization depends on the geometry of the target. G. Sinclair proved that the target acts as a polarization transformer and defined this change by the 2 × 2 complex scattering matrix, which links the Jones vectors of the transmitted and the received waves together. Two orthogonally polarized signals are transmitted successively to measure the components of the scattering matrix.
\[ \mathbf{E}^{d} = [S]_{(A,B)}\,\mathbf{E}^{i}, \qquad \begin{bmatrix} E_A^{d} \\ E_B^{d} \end{bmatrix} = \begin{bmatrix} S_{AA} & S_{AB} \\ S_{BA} & S_{BB} \end{bmatrix} \begin{bmatrix} E_A^{i} \\ E_B^{i} \end{bmatrix} \qquad (14.24) \]
where Ei = the incident Jones vector and Ed = the diffused (scattered) Jones vector.
The polarization state A is transmitted first. Two receiver measurements are made that include the copolarized component SAA and the crosspolarized component SBA. Then, the orthogonal polarization state B is transmitted, and two new measurements are obtained. These measurements fill the complex scattering matrix. Describing the target behavior means that we must know the scattering matrix. However, this matrix depends on the position of the target, but also on the frequency of the transmission, which makes life interesting.
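This two-transmission measurement can be sketched in a few lines of Python. The target_response callable and the flat-plate example below are hypothetical stand-ins for a real measurement; the point is simply that the two scattered Jones vectors form the two columns of [S], as in Equation (14.24).

import numpy as np

def measure_scattering_matrix(target_response):
    # Transmit the two orthogonal polarization states A and B in turn; each
    # scattered Jones vector (copolarized and crosspolarized components)
    # is one column of the 2x2 scattering matrix.
    e_A = np.array([1.0, 0.0])          # transmit polarization A
    e_B = np.array([0.0, 1.0])          # transmit polarization B
    col_A = target_response(e_A)        # [S_AA, S_BA]
    col_B = target_response(e_B)        # [S_AB, S_BB]
    return np.column_stack([col_A, col_B])

# Example with a known target: a flat plate at normal incidence, Equation (14.29).
plate = lambda e: np.array([[1.0, 0.0], [0.0, 1.0]]) @ e
print(measure_scattering_matrix(plate))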
The two bases at the transmitter and at the receiver are linked by the local basis of the target. Two conventions exist in the literature, the BSA convention and the FSA convention, and they define the link between the local basis of the target and the reception basis differently.23
The BSA Convention
BSA is the abbreviation for backscatter alignment. The sense of propagation of the scattered wave is opposite to the sense of the propagation vector defining the direct reception basis, as shown in Figure 14.6. For the particular case of the backscattering, the bases are identical. For the particular case of the forward scattering, the ki, kd, vi, and vd vectors belong to the same incident plane, and hi and hd are of the same direction but of opposite sense.
The FSA Convention
FSA is the abbreviation for forward scatter alignment. The sense of propagation of the scattered wave and the sense of the propagation vector, which defines the basis at the reception, are identical. The transmission and the reception bases linked by the FSA convention are visualized in Figure 14.7. For the particular case of the backscattering, the vectors ki and kd on one hand, and hi and hd on the other hand, are of the same direction but of opposite sense, whereas the vi and vd vectors are equal. For the particular case of the forward scattering, the vectors ki, kd, vi, and vd belong to the incident plane, and the hi and hd vectors are equal.
FIGURE 14.6 Backscatter alignment (BSA) convention.
FIGURE 14.7 Forward scatter alignment (FSA) convention for defining transmitter, target, and scattered waves. © 2001 CRC Press LLC
Choice of Convention
In the following, all the results are expressed in the BSA convention, which is the usual convention for the radar community. Furthermore, for the particular case of the monostatic configuration, the scattering matrix is symmetric. This property was used when the monostatic radar polarimetry theory was elaborated. However, it is important to notice that physical values such as the voltage or the intensity are invariant, whatever convention is used. The transformation of the scattering matrix from one convention to the other is expressed as follows:
\[ [S]^{BSA} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} [S]^{FSA} \qquad (14.25) \]
Decomposition of Any Bistatic Scattering Matrix
The complex scattering matrix of a target links the Jones vectors of the transmitted and the received waves. The main basic difference between the monostatic and the bistatic matrix is that the bistatic scattering matrix [Sbi] is no longer symmetric in the antenna coordinate system, the BSA convention. Whereas the backscattering matrix is symmetric, [Sbi] can be broken down into a sum of two matrices: a symmetric one and a skew-symmetric one.11 The symmetric matrix models a monostatic configuration of a target, and the skew-symmetric matrix models additional information resulting from the bistatic configuration. However, it is important to notice that the elements of the symmetric part depend on the target but also on the location of the radar system and, consequently, on the bistatic angle.
\[ [S_{bi}]_{(A,B)} = \begin{bmatrix} S_{AA} & S_{AB} \\ S_{BA} & S_{BB} \end{bmatrix} = [S_s]_{(A,B)} + [S_{ss}]_{(A,B)} = \begin{bmatrix} S_{AA} & S_{AB}^{s} \\ S_{AB}^{s} & S_{BB} \end{bmatrix} + \begin{bmatrix} 0 & S_{AB}^{ss} \\ -S_{AB}^{ss} & 0 \end{bmatrix} \qquad (14.26) \]
with
\[ S_{AB}^{s} = \frac{S_{AB} + S_{BA}}{2}, \qquad S_{AB}^{ss} = \frac{S_{AB} - S_{BA}}{2} \]
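Equation (14.26) is an ordinary symmetric/skew-symmetric splitting and is easy to verify numerically; in the Python sketch below, the matrix values are arbitrary illustrative numbers, not measured data.

import numpy as np

def bistatic_decomposition(S):
    # Split a bistatic scattering matrix into its symmetric part [Ss] and its
    # skew-symmetric part [Sss] (Equation 14.26); plain transpose, no conjugation.
    S_s  = 0.5 * (S + S.T)    # equivalent "monostatic" part
    S_ss = 0.5 * (S - S.T)    # additional bistatic information
    return S_s, S_ss

S_bi = np.array([[1.0 + 0.2j, 0.3 - 0.1j],
                 [0.1 + 0.4j, -0.8 + 0.0j]])   # arbitrary illustrative values
S_s, S_ss = bistatic_decomposition(S_bi)
assert np.allclose(S_s + S_ss, S_bi)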
Examples of Monostatic and Bistatic Scattering Matrices of Canonical Targets
With the help of a special program, we can calculate the bistatic signatures of two canonical targets: a rectangular flat plate and a dihedral. The simulations are made for targets of length equal to 30 cm at the frequency of 10 GHz. The signature is studied in the basis (Ox, Oy, Oz), with O the center of the target and (Oz) the direction normal to the plane of the local surface of the target. The direction of the electromagnetic waves is identified by the angles θ and ϕ: θi and ϕi for the incident wave and θd and ϕd for the scattered wave.
The Rectangular Flat Plate
The signature of a rectangular flat plate is calculated such that the angles ϕi and ϕd are equal to 180°, as shown in Figure 14.8. The transmitter and the receiver are located in the (xOz) plane. The definition domains of the angles are given by
FIGURE 14.8 Definition of the angles for the flat plate.
\[ \theta_i \in [90°, 270°], \quad \theta_d \in [-90°, 90°], \quad \varphi_i \in [0°, 180°], \quad \varphi_d \in [180°, 360°] \qquad (14.27) \]
So, the monostatic configuration corresponds to
\[ \theta_d + 180° = \theta_i \qquad (14.28) \]
Figure 14.9 specifies the locations of the backscattering mechanism and of the particular bistatic configurations, which correspond to the case of the bisecting line merged into the axis (Oz). Figures 14.10 through 14.13 show the evolution of the real and imaginary parts of the copolarized element SAA and the crosspolarized element SAB of the scattering matrices that model the plate for different configurations of the radar system. The second copolarized and crosspolarized elements are not presented, since they are identical two by two to those presented. For the copolarized elements, the maximum of diffusion is obtained when the configuration is the particular bistatic case defined by the relation θd + θi = 180°. As for the crosspolarized elements, they are null whatever the direction of the transmission and the reception. So, for the particular bistatic cases presented before, the plate does
FIGURE 14.9 The locations of the backscattering mechanism and of those particular bistatic configurations corresponding to the case of the bisecting line merged into the axis (Oz).
FIGURE 14.10 A plate—real part of the SAA copolarized element.
FIGURE 14.11 A plate—imaginary part of the SAA copolarized element.
FIGURE 14.12 A plate—real part of the SAB crosspolarized element.
FIGURE 14.13 A plate—imaginary part of the SAB crosspolarized element.
not depolarize. It scatters the same polarization state as that of the transmitted electromagnetic wave. The monostatic configuration defined by a normal incidence is a specific configuration with a bistatic angle equal to zero and with the bisecting line equal to the normal axis. The theoretical backscattering matrix is then equal to
\[ [S_{plate}] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad (14.29) \]
The Rectangular Sides Dihedral
The signature of a dihedral of dimensions L = H = 0.30 m, with an aperture angle of 2α = 90°, is calculated at the frequency of 10 GHz. Figure 14.14 shows the angles used in the computation. The transmitter and the receiver describe the same horizontal plane, so as to study the response of the vertical dihedral and also the response of a dihedral oriented at an angle β, as presented in Figure 14.15.
FIGURE 14.14 Definition of the angles for the dihedral.
FIGURE 14.15 Rotation of the dihedral.
In the monostatic case, the scattering matrix of a vertical dihedral is linked to that of an oriented dihedral by the following relation:
\[ [S_{oriented}] = [U_{rotation}]^{T}\,[S_{vertical}]\,[U_{rotation}], \quad \text{with} \quad [U_{rotation}] = \begin{bmatrix} \cos\beta & \sin\beta \\ -\sin\beta & \cos\beta \end{bmatrix} \qquad (14.30) \]
where β represents the orientation angle around the line of sight of the radar. The theoretical scattering matrix of a vertical dihedral in a monostatic configuration is
\[ [S_{vertical}] = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \qquad (14.31) \]
The backscattering matrix of a dihedral oriented around the line of sight is
\[ [S_{oriented}] = \begin{bmatrix} \cos 2\beta & \sin 2\beta \\ \sin 2\beta & -\cos 2\beta \end{bmatrix} \qquad (14.32) \]
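The rotation relation of Equations (14.30) through (14.32) can be checked directly; the short Python sketch below also reproduces the 22.5° and 45° special cases quoted later in Equations (14.33) and (14.34).

import numpy as np

def rotated_dihedral(beta):
    # Monostatic scattering matrix of a dihedral rotated by beta about the
    # line of sight: [S] = U^T [S_vertical] U (Equations 14.30 through 14.32).
    U = np.array([[np.cos(beta),  np.sin(beta)],
                  [-np.sin(beta), np.cos(beta)]])
    S_vertical = np.array([[1.0, 0.0], [0.0, -1.0]])
    return U.T @ S_vertical @ U

# Closed form of Equation (14.32): [[cos 2b, sin 2b], [sin 2b, -cos 2b]].
for beta in (np.deg2rad(22.5), np.deg2rad(45.0)):
    S = rotated_dihedral(beta)
    expected = np.array([[np.cos(2*beta),  np.sin(2*beta)],
                         [np.sin(2*beta), -np.cos(2*beta)]])
    assert np.allclose(S, expected)
print(rotated_dihedral(np.deg2rad(45.0)))   # approaches [[0, 1], [1, 0]], Equation (14.34)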
The simulations visualize the scattering of the target at any point of the horizontal plane situated inside the aperture of the dihedral, such that the ϕi angle belongs to the interval 135° to 225°, and the θi and θd angles equal 90°. Figure 14.17 indicates the locations that correspond to the backscattering defined by ϕi = (ϕd + 180°) and to the particular bistatic configurations for which the bisecting line is directed such that ϕd equals zero. Figures 14.18 through 14.21 show the evolution of the real and imaginary parts of the copolarized and the crosspolarized elements of the bistatic scattering matrices of a vertical dihedral. The real and imaginary parts of the second copolarized element are not presented, since they are equal in absolute value but opposite in sign. As for the crosspolarized elements, they are null. The modulus of the copolarized element is a maximum for monostatic configurations. The vertical dihedral is very directive in the elevation plane. However, for monostatic configurations, the vertical dihedral is not directive in the azimuth plane.
FIGURE 14.16 The backscattering configuration (ϕi = 180°, ϕd = 0°) and a particular bistatic configuration (ϕi = 170°, ϕd = 10°).
FIGURE 14.18 A vertical dihedral—real part of the SAA copolarized element.
FIGURE 14.20 A vertical dihedral—real part of the SAB crosspolarized element.
FIGURE 14.17 Locations of the backscattering and of the particular bistatic configurations in the horizontal plane (ϕi between 135° and 225°, ϕd between −45° and +45°).
FIGURE 14.19 A vertical dihedral—imaginary part of the SAA copolarized element.
FIGURE 14.21 A vertical dihedral—imaginary part of the SAB crosspolarized element.
The vertical dihedral scatters almost all the power in the direction of the transmission. So, the real and imaginary parts of the copolarized elements tend to zero as soon as the configuration becomes bistatic. After the study of the elements of the scattering matrix of a vertical dihedral, we present the horizontal plane response of a 22.5° oriented dihedral, as shown in Figure 14.22.
FIGURE 14.22 The 22.5° oriented dihedral.
Figures 14.23 through 14.26 present the curves of the SAA copolarized element and the SAB crosspolarized element calculated in the horizontal plane. The SAA copolarized element and the SAB crosspolarized element are identical. The SBB copolarized element is very close to –SAA and tends to equal –SAA when the configuration is monostatic at normal incidence. The phase between both copolarized elements is then equal to 180°. The maxima of the moduli of the four elements of the scattering matrix are identical and are obtained for the monostatic configuration of normal incidence. The response of the dihedral becomes very directive, and the scattering of the target decreases very quickly as soon as the value of ϕd is different from 0°.
FIGURE 14.23 A 22.5° oriented dihedral—real part of the SAA copolarized element.
FIGURE 14.24 A 22.5° oriented dihedral—imaginary part of the SAA copolarized element.
FIGURE 14.25 A 22.5° oriented dihedral—real part of the SAB crosspolarized element.
FIGURE 14.26 A 22.5° oriented dihedral—imaginary part of the SAB crosspolarized element.
For a rotation of 22.5°, the theoretical monostatic scattering matrix, when the direction of transmission is the (Ox) axis, becomes
\[ [S_{22.5°\,oriented}] = \frac{\sqrt{2}}{2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \qquad (14.33) \]
The form of the matrix implies the equality of the amplitudes of the four elements of the scattering matrix and a phase opposition between the diagonal elements. These theoretical results are confirmed by the curves obtained with the simulation software for the 22.5° oriented dihedral in the monostatic and quasi-monostatic configurations situated around the incidence direction defined by ϕi = 180°. The next situation studied is the 45° oriented dihedral. Figures 14.27 through 14.30 show the evolutions of the SAA and SAB elements of the bistatic scattering matrices that model the 45° oriented dihedral.
FIGURE 14.27 A 45° oriented dihedral—real part of the SAA copolarized element.
FIGURE 14.28 A 45° oriented dihedral—imaginary part of the SAA copolarized element.
FIGURE 14.29 A 45° oriented dihedral—real part of the SAB crosspolarized element.
FIGURE 14.30 A 45° oriented dihedral—imaginary part of the SAB crosspolarized element.
The two copolarized elements are equal, and the two crosspolarized elements also, so only two are presented. The copolarized elements are very weak compared with the crosspolarized ones, both for the monostatic configuration of normal incidence and for the associated quasi-monostatic configurations. The theoretical monostatic scattering matrix of a 45° oriented dihedral takes the following form:
\[ [S_{45°\,oriented}] = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \qquad (14.34) \]
The copolarized elements are null. For the particular monostatic case, the crosspolarized elements are of same phase and of maximum value for this angle of rotation when ϕi equals 180°. For any bistatic configuration, the crosspolarized elements are null.
14.4.2 THE KENNAUGH MATRIX
Just as the scattering matrix links the two Jones vectors, the Kennaugh matrix relates the transmitted and the received Stokes vectors. The Kennaugh matrix proceeds from the scattering matrix with
\[ [K]_{(A,B)} = [V]\bigl([S]_{(A,B)} \otimes [S]^{*}_{(A,B)}\bigr)[V]^{T} \qquad (14.35) \]
It relates the transmitted and the received Stokes vectors by
\[ \mathbf{g}(\mathbf{E}^{d})_{(A,B)} = [K]_{(A,B)}\,\mathbf{g}(\mathbf{E}^{i})_{(A,B)} \qquad (14.36) \]
T
[ K bi ] ( A,B ) = [ V ] ( ( [ S s ] ( A,B ) + [ S ss ] ( A,B ) ) ⊗ ( [ S s ] ( A,B ) + [ S ss ] ( A,B ) ) ) [ V ] [ K bi ] ( A,B ) = [ K s ] ( A,b ) + [ K ss ] ( A,B ) + [ K c ] ( A,B )
(14.37)
with T
[ K s ] ( A,B ) = [ V ] ( [ S s ] ( A,B ) ⊗ [ S s ]
*
( A,B )
T
[ K ss ] ( A,B ) = [ V ] ( [ S ss ] ( A,B ) ⊗ [ S ss ] T
[ K c ] ( A,B ) = [ V ] ( [ S s ] ( A,B ) ⊗ [ S ss ]
*
*
)[V]
( A,B )
)[V]
( A,B )
) [ V ] + [ V ] ( [ S ss ] ( A,B ) ⊗ [ S s ] ( A,B ) [ V ] )
T
*
(14.38)
• [Ks] is symmetric and corresponds to an equivalent monostatic Kennaugh matrix. • [Kss] is a diagonal matrix. • [Kc] is a skew-symmetric matrix. As the Kennaugh matrix is no longer symmetric, it will be described by 16 parameters, which correspond to the bistatic parameters. We have chosen to keep the definition of the Huynen © 2001 CRC Press LLC
parameters only depending on the equivalent “symmetric” part of the bistatic scattering matrix and to determine seven new bistatic parameters A, I, J, K, L, M, N as follows: A0 ( A,B ) + B0 ( A,B ) + A ( A, B )
C ( A,B ) + I ( A,B )
C ( A,B ) – I ( A,B )
A0 ( A,B ) + B ( A,B ) – A ( A, B )
[ K bi ] ( A,B ) =
H ( A,B ) – N ( A,B )
E ( A,B ) – K ( A,B )
F ( A,B ) – L ( A,B )
G ( A,B ) – M ( A,B )
H ( A,B ) + N ( A,B )
F ( A,B ) + L ( A,B )
E ( A,B ) + K ( A,B )
G ( A,B ) + M ( A,B )
A0 ( A,B ) – B ( A,B ) – A ( A,B )
D ( A,B ) + J ( A,B )
D ( A,B ) – J ( A,B )
– A0 ( A,B ) + B0 ( A,B ) – A ( A,B )
(14.39)
where 1 2 A0 ( A,B ) = --- ( S AA + S BB ) 4
A ( A,B ) = S AB
1 2 s 2 B0 ( A,B ) = --- ( S AA – S BB ) + S AB 4
1 2 s 2 B ( A,B ) = --- ( S AA – S BB ) – S AB 4
ss 2
1 2 2 I ( A,B ) = --- ( S BA – S AB ) 2
1 2 2 C ( A,B ) = --- ( S AA – S BB ) 2
*
*
J ( A,B ) = Im ( S BA S AB )
D ( A,B ) = Im ( S AA S BB ) s
*
K ( A,B ) = Re [ ( S AB ) ( S AA + S BB ) ]
s
*
L ( A,B ) = Im [ ( S AB ) ( S AA + S BB ) ]
s
*
G ( A,B ) = Im [ ( S AB ) ( S AA – S BB ) ]
s
*
N ( A,B ) = Re [ ( S AB ) ( S AA – S BB ) ]
E ( A,B ) = Re [ ( S AB ) ( S AA – S BB ) ] F ( A,B ) = Im [ ( S AB ) ( S AA – S BB ) ] G ( A,B ) = Im [ ( S AB ) ( S AA + S BB ) ] H ( A,B ) = Re [ ( S AB ) ( S AA + S BB ) ]
ss
*
ss
*
ss
*
ss
*
(14.40)
J. R. Huynen identifies the monostatic Kennaugh matrix with the parameters A0, B0, B, C, D, E, F, G, H, called Huynen parameters, which are associated to a geometrical target characteristic. Then, the other parameters are null so that I = 0 ,J = 0,K = 0,L = 0,M = 0,N = 0,A = 0 s
ss
S AB = S AB and S AB = 0 © 2001 CRC Press LLC
(14.41)
14.4.3 THE TARGET EQUATIONS
The absolute phase of the scattering matrix cannot be precisely measured, so the phase relative scattering matrix is studied instead. As the phase relative bistatic scattering matrix is no longer symmetric, the bistatic polarimetric dimension of the target is equal to seven, from the four moduli and the three relative phases. The bistatic Kennaugh matrix is described by 16 parameters. So, all of these 16 parameters are linked together by (16 – 7) = 9 independent equations, which express the interdependence of the bistatic parameters. To derive these target equations, we assume that the target is pure and consequently that the scattered wave is totally polarized. The target vector k24 is introduced as the vectorization of the scattering matrix [Sbi]:
\[ \mathbf{k} = \frac{1}{2}\,\mathrm{Tr}\bigl([S_{bi}][\psi]\bigr) \qquad (14.42) \]
where [ψ] is a set of (2 × 2) complex base matrices, which are a linear combination of the Pauli matrices,25 with

[ψ] = { √2 [1 0 ; 0 1], √2 [1 0 ; 0 −1], √2 [0 1 ; 1 0], √2 [0 −j ; j 0] }
(14.43)
The factor of √2 arises from the requirement to keep Tr([S][S]*T) invariant. The bistatic coherency matrix [Tbi] is generated from the outer product of the target vector k with its conjugate transpose, so that
[Tbi](A,B) =
| 2A0        C − jD      H + jG      L − jK  |
| C + jD     B0 + B      E + jF      M − jN  |
| H − jG     E − jF      B0 − B      J + jI  |
| L + jK     M + jN      J − jI      2A      |

(all parameters being functions of (A,B))
(14.44)
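As an added illustration (not part of the original text), the sketch below builds the Pauli target vector of Equations (14.42)–(14.43) and the coherency matrix of Equation (14.44) for a hypothetical pure target, and reads a few of the bistatic parameters directly off the matrix entries.

import numpy as np

# hypothetical pure-target scattering matrix (illustrative values only)
S = np.array([[1.0 + 0.5j, 0.3 - 0.2j],
              [0.1 + 0.4j, 0.8 - 0.1j]])
S_AA, S_AB, S_BA, S_BB = S[0, 0], S[0, 1], S[1, 0], S[1, 1]

# target vector k (Eq. 14.42 with the sqrt(2)-scaled Pauli basis of Eq. 14.43)
k = np.array([S_AA + S_BB, S_AA - S_BB, S_AB + S_BA, 1j*(S_AB - S_BA)]) / np.sqrt(2)
T = np.outer(k, k.conj())            # bistatic coherency matrix (Eq. 14.44)

assert np.allclose(T, T.conj().T)    # Hermitian
assert np.linalg.matrix_rank(T) == 1 # rank one for a pure target

# a few entries read directly as bistatic parameters
print("2A0    =", T[0, 0].real)
print("2A     =", T[3, 3].real)
print("C - jD =", T[0, 1])
print("H + jG =", T[0, 2])
print("L - jK =", T[0, 3])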
It is interesting to notice that the monostatic 3 × 3 coherency matrix equals the first three rows and columns of the bistatic coherency matrix. This follows from the choice of specifying 9 of the 16 parameters in the same way as the Huynen parameters. For a rank-one coherency matrix, all 2 × 2 minors of the coherency matrix must equal zero, and from these minors we obtain a set of equations. The aim is to define the last five target equations which, together with the four monostatic target equations, generate the complete set of relations. Finally, the nine polarimetric bistatic target equations, valid in any basis, are

2A0 (B0 + B) = C^2 + D^2        2A (B0 − B) = I^2 + J^2
2A0 (B0 − B) = G^2 + H^2        2A E = JM − IN
2A0 E = CH − DG                 2A F = −(JN + IM)
2A0 F = CG + DH                 K (B0 − B) = −(HI + JG)
                                L (B0 − B) = JH − IG
(14.45)
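The nine equations of (14.45) can be checked numerically. The sketch below is an added illustration with an arbitrary hypothetical target: it reads the 16 parameters off the coherency matrix of Equation (14.44) and verifies that all nine relations hold for a pure target.

import numpy as np

S = np.array([[1.0 + 0.5j, 0.3 - 0.2j],
              [0.1 + 0.4j, 0.8 - 0.1j]])        # hypothetical pure target
S_AA, S_AB, S_BA, S_BB = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
k = np.array([S_AA + S_BB, S_AA - S_BB, S_AB + S_BA, 1j*(S_AB - S_BA)]) / np.sqrt(2)
T = np.outer(k, k.conj())

# the 16 bistatic parameters, read off the coherency matrix (Eq. 14.44)
twoA0, B0pB, B0mB, twoA = T[0, 0].real, T[1, 1].real, T[2, 2].real, T[3, 3].real
C, D   = T[0, 1].real, -T[0, 1].imag
H, G   = T[0, 2].real,  T[0, 2].imag
L_, K_ = T[0, 3].real, -T[0, 3].imag
E, F   = T[1, 2].real,  T[1, 2].imag
M, N   = T[1, 3].real, -T[1, 3].imag
J, I_  = T[2, 3].real,  T[2, 3].imag

checks = [
    (twoA0*B0pB, C**2 + D**2),
    (twoA0*B0mB, G**2 + H**2),
    (twoA0*E, C*H - D*G),
    (twoA0*F, C*G + D*H),
    (twoA*B0mB, I_**2 + J**2),
    (twoA*E, J*M - I_*N),
    (twoA*F, -(J*N + I_*M)),
    (K_*B0mB, -(H*I_ + J*G)),
    (L_*B0mB, J*H - I_*G),
]
assert all(np.isclose(lhs, rhs) for lhs, rhs in checks)
print("all nine bistatic target equations are satisfied")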
The backscattering matrix is symmetric, so the monostatic polarimetric "dimension" of a target is equal to five, which corresponds to the three independent moduli and the two relative phases of the backscattering matrix. Since the monostatic polarimetric dimension of the target is equal to five, the nine Huynen parameters are related to each other by (9 − 5) = 4 independent equalities, which are called the monostatic target equations. The four monostatic target equations correspond to five among the bistatic target equations. They remain valid because of the decomposition of the bistatic scattering matrix: the nine parameters that appear in the monostatic target equations are defined by the symmetric part of the bistatic scattering matrix. If the parameters A0, (B0 + B), (B0 − B), and A equal zero, all the other parameters vanish. For that reason, 2A0, (B0 + B), (B0 − B), and 2A are called the bistatic generators of the target structure. The four generators are located at the four vertices of the bistatic target diagram. The other parameters are placed such that the sum of the squares of two of them is equal to the product of the two generators that lie on the same line, as shown in Figure 14.31. The target diagram allows the reconstruction of the 9 target equations and, more generally, of the set of 36 dependent equalities that link the 16 bistatic parameters together. The bistatic target diagram is extended from a triangular surface, the monostatic target diagram, to a tetrahedral volume. Indeed, for the monostatic case, if the parameters A0, B0 + B, and B0 − B equal zero, all the other monostatic parameters vanish. Six 2 × 2 submatrices, whose diagonal elements are two generators, can be extracted from the bistatic coherency matrix:
C ( A,B ) – jD ( A,B )
C ( A,B ) + jD ( A,B )
B0 ( A,B ) + B ( A,B )
2A0 ( A,B )
H ( A,B ) + jG ( A,B )
H ( A, B ) – jG ( A,B )
B0 ( A,B ) – B ( A,B )
2A0 ( A,B )
L ( A,B ) – jK ( A,B )
L ( A, B ) + K ( A,B )
2A ( A,B )
,
,
B0 ( A,B ) + B ( A,B )
E ( A,B ) + jF ( A,B )
E ( A,B ) – jF ( A,B )
B0 ( A,B ) – B ( A,B )
,
B0 ( A,B ) + B ( A,B )
M ( A,B ) – jN ( A,B )
M ( A,B ) + jN ( A,B )
2A ( A,B )
B0 ( A,B ) – B ( A,B )
J ( A,B ) + jI ( A,B )
J ( A,B ) – jI ( A,B )
2A ( A,B )
(14.46)
Six equalities are determined by the following rule: the product of two generators is equal to the sum of the squares of the two parameters that link these two generators on the tetrahedron shown in Figure 14.32. Furthermore, for each face, the product of one generator with one of the two parameters that belong to the same face but lie on the opposite edge is equal to the sum or the difference of the cross products of the two other pairs of parameters of that face. In this way, the 24 equalities that depend on only one generator can be constructed. For example, the relations that involve the A generator are given in Figure 14.33.
FIGURE 14.31 The monostatic target diagram: the generators 2A0, B0 + B, and B0 − B are linked by the parameter pairs (C,D), (G,H), and (E,F), with 2A0 (B0 + B) = C^2 + D^2, 2A0 (B0 − B) = G^2 + H^2, and B0^2 − B^2 = E^2 + F^2.

FIGURE 14.32 The bistatic target diagram: a tetrahedron whose vertices carry the four generators 2A0, B0 + B, B0 − B, and 2A, and whose edges carry the parameter pairs (C,D), (G,H), (E,F), (L,K), (M,N), and (I,J); the product of the two generators on an edge equals the sum of the squares of the pair on that edge, e.g., 4A0 A = L^2 + K^2, 2A (B0 + B) = M^2 + N^2, and 2A (B0 − B) = I^2 + J^2.

FIGURE 14.33 The relations involving the A generator: 2A E = JM − IN, 2A F = −(JN + IM), 2A C = LM + KN, 2A D = KM − LN, 2A G = −(IL + JK), 2A H = JL − KI.
The six last equalities, defined in Figure 14.34, are determined by the sum, or the difference, of two cross products, where each cross product is built from two pairs of parameters that do not belong to the same face. They depend on no generator. The bistatic target diagram is thus a useful aid for reconstructing all the relations obtained by canceling the minors of the bistatic coherency matrix.
14.5 THE POLARIZATION FORK
14.5.1 THE POLARIMETRIC SIGNATURE OF A TARGET
The polarimetric signature of a target visualizes, for a given configuration, the power scattered by the target in the copolarized and crosspolarized channels.
• When the power of the scattered wave is distributed over the two orthogonally polarized receiving channels, the target is said to depolarize.
• The PCO copolarized power is measured with a receiving antenna whose polarization state is identical to the polarization state of the transmitting antenna.
• The PX crosspolarized power is measured with a receiving antenna whose polarization state is orthogonal to the polarization state of the transmitting antenna.
• The evolution of the theoretical copolarized and crosspolarized powers for two canonical targets in a monostatic configuration is illustrated with the sphere and the dihedral.
The theoretical monostatic scattering matrix associated with a sphere is equal to

[S](A,B) = [1 0 ; 0 1]
(14.47)
The diagonal form of the matrix means that the sphere is an isotropic and non-depolarizing target.
FIGURE 14.34 The six relations that depend on no generator.
In the following, the 2δ and 2γ angles specify the position of the point that represents the polarization state of the wave on the surface of the Poincaré sphere. Figures 14.35 and 14.36 show the evolution of the copolarized and crosspolarized power scattered by a sphere, as a function of the transmitted polarization state, expressed with the help of the Deschamps parameters 2δ and 2γ. The copolarized power is maximum for values of the 2δ angle equal to {0°, ±180°}, for any value of the 2γ angle. The copolarized power is minimum for values of the Deschamps parameters such that 2δ = ±90° and 2γ = 90°. However, it is usual in the literature to express the polarization states according to the ellipticity and orientation angles. Then, the maximum of copolarized power is obtained when the ellipticity angle is null, and the minimum for an ellipticity angle equal to ±45°, for any orientation angle. The crosspolarized power is maximum for 2δ = ±90° and 2γ = 90°, i.e., for an ellipticity angle equal to ±45°, for any orientation angle. The crosspolarized power is null for the 2δ angle equal to {0°, ±180°}, for any value of the 2γ angle, which corresponds to a null ellipticity angle for any orientation angle. The theoretical monostatic scattering matrix associated with a vertical dihedral equals

[S](A,B) = [1 0 ; 0 −1]
(14.48)
The diagonal form of the matrix implies that the dihedral is a non-depolarizing target. Figures 14.37 and 14.38 show the evolution of the copolarized and the crosspolarized power. The copolarized power is maximum for values of the 2δ angle equal to ±90°, for any value of the 2γ angle. The copolarized power is minimum for the 2δ angle equal to {0°, ±180°} and the 2γ angle equal to 90°. The crosspolarized power is maximum for 2δ equal to {0°, ±180°} and the 2γ angle equal to 90°. The crosspolarized power is null for the 2δ angle equal to ±90° and the 2γ angle equal to {0°, ±180°}.
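The statements above about the sphere and dihedral signatures can be reproduced with a few lines of NumPy. The sketch below is an added illustration; the Jones-vector parameterization and the orthogonal-state convention are assumptions stated in the comments.

import numpy as np

def jones(gamma, two_delta):
    # Jones vector for the Deschamps parameterization used here:
    # polarization ratio rho = tan(gamma) * exp(j*2*delta)
    return np.array([np.cos(gamma), np.sin(gamma)*np.exp(1j*two_delta)])

def copol(S, gamma, two_delta):
    h = jones(gamma, two_delta)
    return abs(h @ S @ h)**2

def xpol(S, gamma, two_delta):
    h = jones(gamma, two_delta)
    h_orth = np.array([-np.conj(h[1]), np.conj(h[0])])   # orthogonal polarization state
    return abs(h_orth @ S @ h)**2

sphere   = np.array([[1.0, 0.0], [0.0,  1.0]])   # Eq. (14.47)
dihedral = np.array([[1.0, 0.0], [0.0, -1.0]])   # Eq. (14.48)

# sphere: copolarized power maximal (and crosspolarized power null) for 2*delta = 0, any 2*gamma
assert np.isclose(copol(sphere, np.pi/8, 0.0), 1.0)
assert np.isclose(xpol(sphere, np.pi/8, 0.0), 0.0)
# sphere: crosspolarized power maximal for 2*gamma = 90 deg and 2*delta = +/-90 deg
assert np.isclose(xpol(sphere, np.pi/4, np.pi/2), 1.0)
# dihedral: copolarized power maximal for 2*delta = +/-90 deg, any 2*gamma
assert np.isclose(copol(dihedral, np.pi/8, np.pi/2), 1.0)
print("sphere and dihedral signature checks passed")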
14.5.2 PRESENTATION OF THE POLARIZATION FORK
The polarization fork is a representation on the Poincaré sphere of a set of points: the characteristic polarization states that form the extreme polarimetric signature of a target. These points maximize or minimize the copolarized, the crosspolarized, and the optimal powers. For a bistatic configuration,
FIGURE 14.35 Signature of a sphere (copolarized).
FIGURE 14.36 Signature of a sphere (crosspolarized).
FIGURE 14.37 Copolarized signature of a vertical dihedral.
FIGURE 14.38 Crosspolarized signature of a vertical dihedral.
the maximum of the copolarized power is less than or equal to the optimal received power. Figure 14.39 shows the scattering due to a target. The polarization state of the transmitting antenna is called T, and the polarization state of the receiving antenna is called R. The target is modeled by the scattering matrix. The scattered wave has the polarization state S.
14.5.3 THE CHARACTERISTIC BASIS
The copolarized power is measured with a receiving antenna whose polarization state is identical to the polarization state of the transmitting antenna:
h_i = h_d = h = [h_x ; h_y]
(14.49)
The copolarized power is

P_CO = | h^T [S](A,B) h |^2

FIGURE 14.39 Scattering due to a target: the transmitting antenna has polarization state T, the wave scattered by the target has polarization state S, and the receiving antenna has polarization state R.

P_CO = | (h_x  h_y) [S_AA  S_AB ; S_BA  S_BB] [h_x ; h_y] |^2
P_CO = | h_x^2 S_AA + h_x h_y S_BA + h_x h_y S_AB + h_y^2 S_BB |^2
(14.50)
Because of the unique decomposition of the scattering matrix defined in Equation (14.26), the copolarized power can be written as

P_CO = | h_x^2 S_AA + 2 h_x h_y S^s_AB + h_y^2 S_BB |^2
(14.51)
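The fact that only the symmetric cross element appears in (14.51) can be checked directly, since h^T [Sss] h vanishes for any Jones vector h when [Sss] is skew-symmetric. The following is a minimal added sketch with arbitrary, hypothetical values.

import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))   # hypothetical bistatic matrix
Ss  = 0.5*(S + S.T)      # symmetric part
Sss = 0.5*(S - S.T)      # skew-symmetric part

h = rng.standard_normal(2) + 1j*rng.standard_normal(2)
h /= np.linalg.norm(h)

assert np.isclose(h @ Sss @ h, 0)                         # the skew part never contributes
assert np.isclose(abs(h @ S @ h)**2, abs(h @ Ss @ h)**2)  # copol power sees only the symmetric part
print("copolarized power depends only on the symmetric part")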
As the copolarized power only depends on the symmetric part of the bistatic scattering matrix, the eK and eL Jones vectors are the eigenvectors obtained from the unitary basis transformation matrix that diagonalizes the symmetric part of the scattering matrix. Because the K and L polarization states are orthogonal, they are chosen to specify the characteristic basis. The scattering matrix in the (K,L) basis can be expressed as

[S](K,L) = [S_KK  S_KL ; −S_KL  S_LL] e^{jε}    with S_KK > S_LL > 0 and (S_KK, S_LL) ∈ ℝ^2
(14.52)
So, the characteristic basis is the specific one defined by the two Jones vectors eK and eL associated to the polarization states, which maximize the copolarized power. The Jones vectors of the characteristic points K and L construct the basis transformation matrix from the (A,B) basis to the characteristic one (K,L). Then, the coordinates of the other polarization states on the Poincaré sphere are calculated in the (K,L) basis.
14.5.4 THE COPOLARIZED POWER
Maximization and Minimization of the Copolarized Power (Figure 14.40)
The determination of the characteristic polarization states is based on the scattering matrix expressed in the (K,L) basis, so the copolarized power can be expressed in this basis as
P_CO = | h^T [S](K,L) h |^2
P_CO = (1/(1 + ρρ*)^2) ( S_KK^2 + ρ*^2 S_KK S_LL + ρ^2 S_KK S_LL + ρ^2 ρ*^2 S_LL^2 )
(14.53)
The expression is identical for any monostatic or bistatic configuration and is a function of the scattering matrix elements and of the polarization states that maximize or minimize the copolarized power. The polarization ratios that cancel the derivative of the copolarized power are

ρ = { 0, +∞, +j √(S_KK / S_LL), −j √(S_KK / S_LL) }
(14.54)
FIGURE 14.40 Copolarized power measurement geometry: the same polarization state (here K) is used for the transmitting antenna and the receiving antenna.
The four polarization ratios obtained correspond to the maximization of the copolarized power (K and L) and to the minimization of the copolarized power (O1 and O2). The associated Stokes vectors are defined by the equalities
g(e_K) = [1 ; 1 ; 0 ; 0],   g(e_L) = [1 ; −1 ; 0 ; 0],   g(e_O(1,2)) = (1/(S_KK + S_LL)) [S_KK + S_LL ; S_LL − S_KK ; 0 ; ∓2 √(S_KK S_LL)]
(14.55)
The fact that the K and L points are antipodal on the Poincaré sphere confirms that the (K,L) basis is orthogonal.
Cancellation of the Copolarized Power (Figure 14.41)
To cancel the copolarized scattered power, the polarization state of the scattered wave has to be orthogonal to the polarization state of the receiving antenna, which collects the copolarized power. In fact, the polarization state of the scattered wave has to be orthogonal to that of the transmitted wave. The polarization states that cancel the copolarized power are solutions of the following relation:
FIGURE 14.41 Cancellation of the copolarized power: the O1 polarization state is used for both the transmitting antenna and the receiving antenna.
P_CO = | h^T [S](K,L) h |^2 = 0
h^T [S](K,L) h = ( S_KK h_x^2 + S_LL h_y^2 ) e^{jε} = 0
⇒ ρ = h_y / h_x = ±j √(S_KK / S_LL)
(14.56)
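A numerical sketch of the characteristic basis is given below (an added illustration). It diagonalizes the symmetric part with a Takagi factorization built from the SVD (assuming distinct singular values), takes the copolarization-maximizing states eK and eL, and then checks the copolarization nulls O1 and O2 predicted by Equations (14.54) and (14.56).

import numpy as np

def takagi_2x2(A):
    # Takagi factorization A = U diag(s) U^T of a complex symmetric matrix
    # (SVD-based construction; assumes distinct singular values)
    W, s, Vh = np.linalg.svd(A)
    Z = W.T @ Vh.conj().T              # diagonal unitary matrix when A = A^T
    U = W @ np.diag(np.exp(-0.5j*np.angle(np.diag(Z))))
    return U, s

rng = np.random.default_rng(1)
S = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))  # hypothetical bistatic matrix
Ss = 0.5*(S + S.T)                                                # symmetric part
U, s = takagi_2x2(Ss)
S_KK, S_LL = s                                                    # S_KK > S_LL > 0
eK, eL = U[:, 0].conj(), U[:, 1].conj()                           # characteristic (copol-max) states

# the copolarized power reaches S_KK^2 at eK and S_LL^2 at eL
assert np.isclose(abs(eK @ S @ eK)**2, S_KK**2)
assert np.isclose(abs(eL @ S @ eL)**2, S_LL**2)

# copolarization nulls O1, O2: rho = +/- j sqrt(S_KK/S_LL) in the (K,L) basis
for sign in (+1.0, -1.0):
    rho = sign*1j*np.sqrt(S_KK/S_LL)
    hO = (eK + rho*eL)/np.sqrt(1 + abs(rho)**2)
    assert np.isclose(abs(hO @ S @ hO)**2, 0.0, atol=1e-20)
print("characteristic-basis checks passed")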
The polarization ratios correspond to those of the O1 and O2 points.
Scattered Polarization States when the Polarization States K, L, O1, or O2 Are Transmitted

σ g(e_d) = [K](K,L) g(e_i)
(14.57)
Here, σ takes into account the total scattered power, because the two Stokes vectors are normalized. The scattered polarization states are calculated as

σ_K g(e_Kd) = [K](K,L) g(e_K)   with σ_K = A0(K,L) + B0(K,L) + A(K,L) + C(K,L)   ⇒ g(e_Kd) ≠ g(e_K)
σ_L g(e_Ld) = [K](K,L) g(e_L)   with σ_L = A0(K,L) + B0(K,L) + A(K,L) − C(K,L)   ⇒ g(e_Ld) ≠ g(e_L)
(14.58)
However, these inequalities become two equalities when the configuration becomes monostatic.

σ_O1,2 g(e_O1,2d) = [K](K,L) g(e_O1,2)   with σ_O1,2 = A0(K,L) − B0(K,L) + A(K,L) ∓ L(K,L) √( (A0(K,L) − B0(K,L)) / A0(K,L) )

⇒ g(e_O1,2d) = [1 ; q_O1,2d ; u_O1,2d ; v_O1,2d] = [1 ; −q_O1,2i ; −u_O1,2i ; −v_O1,2i]
(14.59)
The points O1d and O1 on the one hand, and the points O2d and O2 on the other hand, are antipodal on the Poincaré sphere. When the polarization states O1 and O2 are transmitted, the target scatters polarization states that are orthogonal to the transmitted ones.
14.5.5 THE CROSSPOLARIZED POWER
To measure the crosspolarized power PX, the transmitting antenna and the receiving antenna have orthogonal polarization states, as shown in Figure 14.42.
h_d = h_i^⊥
P^X = | h_d^T [S](A,B) h_i |^2 = | h_i^⊥T [S](A,B) h_i |^2
(14.60)
Because the polarization ratio is expressed as

ρ = tan γ e^{j2δ}
(14.61)
the crosspolarized power depends on the elements of the scattering matrix and on the Deschamps parameters by the relationship

P^X = (1/4)(S_KK^2 + S_LL^2) sin^2(2γ) − (S_KK S_LL / 2) sin^2(2γ) cos(4δ) + |S_KL|^2 + Re(S_KL)(S_KK − S_LL) sin(2γ) cos(2δ) − Im(S_KL)(S_KK + S_LL) sin(2γ) sin(2δ)
(14.62)
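Equation (14.62) can be verified against a direct computation of |h⊥^T [S](K,L) h|^2. The sketch below is an added check; the matrix entries and angles are arbitrary assumptions, and the orthogonal-state convention is the one noted in the comment.

import numpy as np

# scattering matrix in the (K,L) basis, following Eq. (14.52) with eps = 0 (illustrative values)
S_KK, S_LL = 2.0, 0.7
S_KL = 0.4 - 0.3j
S = np.array([[S_KK, S_KL], [-S_KL, S_LL]])

gamma, two_delta = 0.6, 1.1                          # arbitrary Deschamps parameters (gamma, 2*delta)
h = np.array([np.cos(gamma), np.sin(gamma)*np.exp(1j*two_delta)])
h_orth = np.array([-np.conj(h[1]), np.conj(h[0])])   # orthogonal polarization state
direct = abs(h_orth @ S @ h)**2

two_gamma = 2*gamma
closed = (0.25*(S_KK**2 + S_LL**2)*np.sin(two_gamma)**2
          - 0.5*S_KK*S_LL*np.sin(two_gamma)**2*np.cos(2*two_delta)
          + abs(S_KL)**2
          + S_KL.real*(S_KK - S_LL)*np.sin(two_gamma)*np.cos(two_delta)
          - S_KL.imag*(S_KK + S_LL)*np.sin(two_gamma)*np.sin(two_delta))
assert np.isclose(direct, closed)
print("closed-form crosspolarized power matches the direct computation")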
The solutions that cancel the derivative of the crosspolarized power are given below.
If 2γ = 0 [π]
The coordinates of these points are independent of the value of the 2δ angle and are determined by

[cos 2γ ; sin 2γ cos 2δ ; sin 2γ sin 2δ] = [±1 ; 0 ; 0]
(14.63)
which corresponds to the K and L polarization states.
FIGURE 14.42 Crosspolarization measurement geometry: the C1 polarization state is transmitted, and the orthogonal polarization state is used for the receiving antenna.
If 2γ = π/2
The values of the second parameter 2δ that define the location of the points on the Poincaré sphere are determined by

2 S_KK S_LL sin 2δ cos 2δ − sin 2δ Re(S_KL)(S_KK − S_LL) − cos 2δ Im(S_KL)(S_KK + S_LL) = 0
(14.64)
They correspond to four points C1, C2, D1, and D2, which belong to the vertical plane (OU,OV) of the Poincaré sphere, as shown in Figure 14.43.
If 2γ ≠ 0 and 2γ ≠ π/2 (Figure 14.44)
After some derivation, the normalized coordinates of these points are found to be
g(e_E1,E2) = [1 ; q_E1,E2 ; u_E1,E2 ; v_E1,E2]   with

q_E1,E2 = ± (2/(S_KK^2 − S_LL^2)) √( (S_KK^2 − S_LL^2)^2/4 − Re^2(S_KL)(S_KK + S_LL)^2 − Im^2(S_KL)(S_KK − S_LL)^2 )
u_E1,E2 = − (2/(S_KK^2 − S_LL^2)) Re(S_KL)(S_KK + S_LL)
v_E1,E2 =   (2/(S_KK^2 − S_LL^2)) Im(S_KL)(S_KK − S_LL)
(14.65)
The E1 and E2 points are linked together by

g(e_E1) = [1 ; q_E1 ; u_E1 ; v_E1] = [1 ; −q_E2 ; u_E2 ; v_E2]
(14.66)

FIGURE 14.43 Points on the Poincaré sphere for the values of the second parameter 2δ: the C1, C2, D1, and D2 points in the plane 2γ = π/2, together with the K and L points.
FIGURE 14.44 When the E1 polarization state is transmitted, the scattered wave and the receiving antenna have the same E1 polarization state.
The E1 and E2 points exist only if the following condition is true:

(S_KK^2 − S_LL^2)^2 / 2 ≥ 2 Re^2(S_KL)(S_KK + S_LL)^2 + 2 Im^2(S_KL)(S_KK − S_LL)^2
(14.67)
The scattered polarization states, when the E1 and E2 characteristic polarization states are transmitted, are identical to those transmitted. The associated Jones vectors eE1, eE2 are the eigenvectors of the bistatic scattering matrix.
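Since eE1 and eE2 are eigenvectors of the bistatic scattering matrix, the corresponding check is short. The sketch below (added, with a hypothetical matrix and a common Stokes-vector convention stated in the comment) confirms that the scattered state is identical to the transmitted one for both E points.

import numpy as np

def stokes(h):
    # Stokes vector of a Jones vector [h_x, h_y] (one common convention)
    hx, hy = h
    return np.array([abs(hx)**2 + abs(hy)**2,
                     abs(hx)**2 - abs(hy)**2,
                     2*np.real(np.conj(hx)*hy),
                     2*np.imag(np.conj(hx)*hy)])

rng = np.random.default_rng(4)
S = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))   # hypothetical bistatic matrix
eigvals, eigvecs = np.linalg.eig(S)
for idx in range(2):
    e = eigvecs[:, idx] / np.linalg.norm(eigvecs[:, idx])
    scattered = S @ e                                   # = eigenvalue * e
    g_in  = stokes(e)
    g_out = stokes(scattered/np.linalg.norm(scattered))
    assert np.allclose(g_in, g_out)                     # same polarization state on the Poincare sphere
    print("E point", idx + 1, "Stokes vector:", np.round(g_in, 3))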
14.5.6 THE OPTIMAL POWER (FIGURE 14.45)
The last four characteristic points of the polarization fork are the transmitted polarization states, M and N, and the received polarization states, M″ and N″, which maximize and minimize, respectively, the optimal scattered power. The polarization state of the transmitting antenna is neither parallel nor orthogonal to the polarization state of the receiving antenna. These polarization states are obtained from a singular value decomposition of the bistatic scattering matrix with
[S_diagonalized] = [U_d]^T [S](K,L) [U_i]
(14.68)
FIGURE 14.45 Optimal power conditions: the M polarization state is transmitted, and the M″ polarization state is used for the receiving antenna.
where [U_i] and [U_d] are the two unitary SU(2) matrices associated respectively with the transmitter and the receiver. The eM and eN Jones vectors are the eigenvectors given by the [U_i] matrix. The eM″ and eN″ Jones vectors are the eigenvectors given by the [U_d] matrix. The coordinates of the M and N points on the Poincaré sphere are defined, as a function of the elements of the scattering matrix, by

[q_M,N ; u_M,N ; v_M,N] = ± (1/D) [ (1/2)(S_KK^2 − S_LL^2) ; Re(S_KL)(S_KK − S_LL) ; −Im(S_KL)(S_KK + S_LL) ]

with D = √( (1/4)(S_KK^2 − S_LL^2)^2 + [Re(S_KL)(S_KK − S_LL)]^2 + [−Im(S_KL)(S_KK + S_LL)]^2 )
(14.69)
The scattered polarization states, if M or N is transmitted, are respectively M″ and N″. Furthermore, M and N are orthogonal to each other, as are M″ and N″.
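The role of the singular value decomposition in Equation (14.68) can be illustrated numerically. In the sketch below (added; the matrix is hypothetical), the transmit state is taken as the first right singular vector and the receive state as the conjugate of the first left singular vector — one convention consistent with maximizing h_r^T [S] h_t — and the optimal power equals the largest singular value squared.

import numpy as np

rng = np.random.default_rng(3)
S = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))   # hypothetical bistatic matrix
W, sigma, Vh = np.linalg.svd(S)

h_t = Vh[0].conj()        # transmit state (first right singular vector), e.g. eM
h_r = W[:, 0].conj()      # receive state (conjugate of first left singular vector), e.g. eM''

# received voltage is h_r^T [S] h_t; the optimal (maximum) power is sigma_1^2
assert np.isclose(abs(h_r @ S @ h_t)**2, sigma[0]**2)
# the second pair of singular vectors gives the minimum of the optimal power, sigma_2^2
assert np.isclose(abs(W[:, 1].conj() @ S @ Vh[1].conj())**2, sigma[1]**2)
print("optimal powers:", sigma**2)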
14.5.7 CONCLUSIONS
1. The bistatic polarization fork is defined by 14 characteristic polarization states: K, L, O1, O2, E1, E2, C1, C2, D1, D2, M, N, M″, and N″. All of these polarization states are obtained by maximization or minimization of either the copolarized or the crosspolarized power versus the Deschamps parameters 2δ and 2γ, or by the study of the optimal power. These 14 points are located on 4 different circles, (K L O1 O2), (K L E1 E2), (C1 C2 D1 D2), and (M N M″ N″), as shown in Figure 14.46. If the locations of the transmitter and the receiver are exchanged, a new polarization fork is obtained, which corresponds to the initial one after a rotation by 180° about the (KL) axis.
2. The monostatic polarization fork reduces to eight points, because K, E1, M, and M″ on the one hand, and L, E2, N, and N″ on the other hand, are then at the same location on the Poincaré sphere. The K and L points, which coincide with the points minimizing the crosspolarized power, are called the X1 and X2 points in the monostatic theory. So, eight characteristic polarization states totally define the monostatic polarization fork.
FIGURE 14.46 The bistatic polarization fork is defined by 14 characteristic polarization states: K, L, O1, O2, E1, E2, C1, C2, D1, D2, M, N, M″, and N″.
O1 and O2 represent the co-pol nulls, X1 and X2 the cross-pol nulls, C1 and C2 the cross-pol maxima, and D1 and D2 the cross-pol saddle points. Knowledge of the overall position of these points is a very good help in classifying a simple target. Because the locations of C1, C2, D1, and D2 are independent of the target in a monostatic configuration, only the four remaining points, O1, O2, X1, and X2, which are coplanar, are useful.
14.6 THE EULER PARAMETERS
The coordinates of each of the 14 characteristic points are determined independently, but their locations are linked together. Furthermore, the polarization fork and the scattering matrix are two different representations of the target model. So, seven independent elements (six geometric angles and a magnitude that uniquely define the relative-phase bistatic scattering matrix) specify some angles of the polarization fork. They are called the bistatic Euler parameters. Thus, any bistatic relative-phase scattering matrix [Sbi](A,B), defined in an (A,B) orthogonal basis, can be completely expressed according to seven Euler parameters as
[Sbi](A,B) = [U(ϕ,τ,ν)]* m [1   S_KL/m ; −S_KL/m   tan^2 α0] e^{jε} [U(ϕ,τ,ν)]^{T*}

with |S_KL| = cos(2α_E) (m/2) √( (1 + tan^2 α0)^2 / (1 + cos^2[arg(S_KL)] tan^2(2α0)) )
and arg(S_KL) = arctan( cot(β_E) / cos(2α0) )
(14.70)
The seven bistatic Euler parameters are {m, ϕ, τ, ν, α0, α_E, β_E}.
• m: In this case, m^2 represents the maximum copolarized received power. Then, m is called the magnitude of the target and is also linked to the radius of the Poincaré sphere in the polarization fork.
• ϕ, τ, ν: The [U] matrix that diagonalizes the symmetric part of the bistatic scattering matrix depends on the three angles ϕ, τ, ν, which are the orientation, the ellipticity, and the absolute phase.

[Sbi](K,L) = [U(ϕ,τ,ν)]^T [Sbi](A,B) e^{jε} [U(ϕ,τ,ν)]

with [U(ϕ,τ,ν)] = [cos ϕ   −sin ϕ ; sin ϕ   cos ϕ] [cos τ   j sin τ ; j sin τ   cos τ] [e^{−jν}   0 ; 0   e^{jν}]
(14.71)
The basis transformation defining the characteristic basis is based on the [U] matrix. Furthermore, because the [U] matrix belongs to the SU(2) group, a (3 × 3) matrix [O] is associated with the [U] matrix such that

[U] = [U_2(ϕ)][U_2(τ)][U_2(ν)]  ⇒  [O] = [O_3(2ϕ)][O_3(2τ)][O_3(2ν)]
(14.72)
The basis transformation corresponds to three rotations by 2ϕ, 2τ, and 2ν about the different axes of the basis. It is interesting to notice that, because the [U] matrix diagonalizes the symmetric part of the bistatic scattering matrix, m, ϕ, τ, and ν are defined in the same way as for a monostatic configuration.
• α0: The elements of the scattering matrix are linked to α0 by the following equality:

S_LL / m = tan^2 α0
(14.73)
This angular parameter is linked to the characteristic polarization states that cancel the copolarized power, O1 and O2, on the polarization fork in the way shown in Figure 14.47. Because the copolarized power depends only on the symmetric part of the bistatic scattering matrix, α0 is defined identically for a bistatic configuration and for a monostatic configuration.
• 2α_E and 2β_E: These two angular parameters allow us to determine the cross elements of the bistatic scattering matrix:

|S_KL| = cos(2α_E) (m/2) √( (1 + tan^2 α0)^2 / (1 + cos^2[arg(S_KL)] tan^2(2α0)) )
arg(S_KL) = arctan( cot(β_E) / cos(2α0) )
(14.74)
Furthermore, these two angles can be represented on the polarization fork: 2α_E is constructed from the E1 and E2 points similarly to the polarizability angle, which is associated with O1 and O2. The β_E angle specifies the angle between the (OQ,OV) plane and the (K, L, E1, E2) plane, as shown in Figure 14.48.
FIGURE 14.47 Definition of the α0 angle.
FIGURE 14.48 Definition of the β_E angle.
These seven bistatic Euler parameters totally specify the relative-phase bistatic scattering matrix and must have a geometrical significance in the bistatic polarization fork. The points M, N, M″, and N″ are specified by the angles 2α_M and β_M, which are constructed on the Poincaré sphere similarly to 2α_E and β_E, and whose analytical form is given by

tan^2(2α_M) = [1 + tan^2(2α_E)] {1 + tan^2(2α0) + tan^2[arg(S_KL)]} / {1 + (1 + tan^2(2α0)) tan^2[arg(S_KL)]}
cot(β_M) = [1 + tan^2(2α0)] cot(β_E)
(14.75)
For the monostatic case, four geometrical angles and a magnitude allow the construction of the polarization fork. The five independent elements, which uniquely define the backscattering matrix, link the positions of the characteristic points on the polarization fork. So, any backscattering matrix [S](A,B), defined in any (A,B) orthogonal basis, can be expressed completely according to the five monostatic Euler parameters (m, ϕ, τ, ν, α0) in the form
[S_mono](A,B) = [U(ϕ,τ,ν)]* m [1   0 ; 0   tan^2 α0] e^{jε} [U(ϕ,τ,ν)]^{T*}
(14.76)
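As a final added illustration, the sketch below builds [U(ϕ,τ,ν)] as the product of the three matrices in Equation (14.71) and reconstructs a backscattering matrix from five hypothetical monostatic Euler parameters, reading Equation (14.76) as a congruence with the conjugated unitary matrix (the convention assumed in the reconstruction above); the result is symmetric and is diagonalized again by [U].

import numpy as np

def U_matrix(phi, tau, nu):
    # [U(phi, tau, nu)] as the product of the three matrices of Eq. (14.71)
    R  = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
    Tt = np.array([[np.cos(tau), 1j*np.sin(tau)], [1j*np.sin(tau), np.cos(tau)]])
    Dn = np.diag([np.exp(-1j*nu), np.exp(1j*nu)])
    return R @ Tt @ Dn

# hypothetical monostatic Euler parameters (m, phi, tau, nu, alpha0) and absolute phase eps
m, phi, tau, nu, alpha0, eps = 1.5, 0.3, 0.2, 0.4, 0.35, 0.1
U  = U_matrix(phi, tau, nu)
Sd = m*np.diag([1.0, np.tan(alpha0)**2])*np.exp(1j*eps)

# Eq. (14.76), with [U]* ... [U]^{T*} read as a congruence by the conjugated matrix
S = U.conj() @ Sd @ U.conj().T

assert np.allclose(S, S.T)              # a backscattering matrix is symmetric
assert np.allclose(U.T @ S @ U, Sd)     # [U] brings the matrix back to its diagonal (K,L) form
print("reconstructed backscattering matrix:\n", np.round(S, 4))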
14.7 MONOSTATIC AND BISTATIC POLARIZATION CONCLUSIONS
Our study objective is to extend the radar polarimetry concept to bistatic radar systems. The theoretical approach takes into account the influence of parameters such as the polarization of the wave at reception with respect to the transmitted wave. Properly used, polarization studies can help in target recognition. First, different vector representations of the electromagnetic wave were presented. The bistatic relative scattering matrix is decomposed into two matrices: a symmetric one and a skew-symmetric one. This choice implies the decomposition of the Kennaugh matrix into three matrices. The definition of the nine Huynen parameters, which depend on the symmetric part of the scattering matrix, is kept. Seven new parameters have been introduced to determine the Kennaugh matrix. Because the polarimetric dimension of the target equals 7 for a bistatic configuration, nine independent bistatic target equations link together the 16 parameters. The target diagram is also extended to the bistatic case. Moreover, the 14 characteristic polarization states of the scattering matrix that form the bistatic polarization fork are calculated in the characteristic basis. Six angular parameters allow the location of these different polarization states on the surface of the Poincaré sphere. Together with the maximum copolarized power, which is linked to the radius of the Poincaré sphere, they are called the bistatic Euler parameters. We can extend the basic principles of monostatic radar polarimetry theory to bistatic radar polarimetry. However, our analytical treatment assumes a pure target with no noise and no extra clutter. Realistic applications will not be pure target cases; therefore, the clutter due to the environment and the speckle must be modeled to extract the target. Different approaches to target decomposition in radar polarimetry exist in the literature: those based on the Kennaugh matrix and Stokes vector, those using an eigenvector analysis of the covariance or coherency matrix, and those employing coherent decomposition of the scattering matrix. Several monostatic decomposition theorems can solve the problem and allow the separation between the stationary target and the noise. A second study objective was to extend the existing decomposition theorems from the monostatic configuration to the bistatic configuration.
REFERENCES
1. A.B. Kostinski and W.M. Boerner, "On foundations of radar polarimetry," IEEE Trans. Antennas and Propagation, Vol. AP-34, No. 12, pp. 1395–1404, December 1986.
2. W.M. Boerner, A.K. Jordan, and I.W. Kay, guest editors, special issue on "Inverse methods in electromagnetics," IEEE Trans. Antennas and Propagation, Vol. AP-29, March 1981.
3. S.K. Chaudhuri and W.M. Boerner, "A polarimetric model for the recovery of high frequency scattering centers from bistatic-monostatic scattering data," IEEE Trans. Antennas and Propagation, Vol. AP-35, No. 1, pp. 87–93, January 1987.
4. E.M. Kennaugh, "Polarization dependence of RCS—A geometrical interpretation," IEEE Trans. Antennas and Propagation, Vol. AP-29, No. 2, March 1981.
5. G.A. Deschamps, "Geometrical representation of the polarization of a plane electromagnetic wave," Proc. IRE, pp. 540–544, 1951.
6. J.R. Huynen, "Phenomenological theory of radar targets," PhD dissertation, Drukkerij Bronder-offset N.V., Rotterdam, 1970.
7. E. Pottier, "Contribution de la polarimétrie dans la discrimination de cibles radar. Application à l'imagerie électromagnétique haute résolution," Thèse de Doctorat, Université de Rennes I, 1990.
8. Z.H. Czyz, "An alternative approach to foundations of radar polarimetry," Direct and Inverse Methods in Radar Polarimetry, W.M. Boerner et al., Eds., NATO-ARW-DIMPRP, pp. 247–266, 1988.
9. Z.H. Czyz, "Bistatic radar target classification by polarization properties," ICAP '87, IEE Conf., York, UK, No. 274, Pt. 1, pp. 545–548, 1987.
10. Z.H. Czyz, "Characteristic polarization states for bistatic nonreciprocal coherent scattering case," ICAP '91, IEE Conf., Publ. 333, York, UK, pp. 253–256, 1991.
11. M. Davidovitz and W.M. Boerner, "Extension of Kennaugh optimal polarization concept to the asymmetric scattering matrix case," IEEE Trans. Antennas and Propagation, Vol. AP-34, No. 4, pp. 569–574, 1986.
12. W.M. Boerner and Z.H. Czyz, "A rigorous formulation of the characteristic polarization state concept and its solution for the bistatic coherent case," ETC, Vol. 1, November 1991.
13. W.M. Boerner, Direct and Inverse Methods in Radar Polarimetry, Vol. 1, Kluwer Academic Publishers, Dordrecht, Boston, London, 1992.
14. A-L. Germond, E. Pottier, and J. Saillard, "Nine polarimetric bistatic target equations," Electronics Letters, Vol. 33, No. 17, pp. 1494–1495, 1997.
15. A-L. Germond, E. Pottier, and J. Saillard, "Two bistatic target signatures: The bistatic equations and the bistatic polarization fork," MIKON, pp. 123–127, Cracow, 1998.
16. J.I. Glaser, "Some results in the bistatic radar cross section of complex objects," Proc. IEEE, Vol. 77, No. 5, pp. 639–648, May 1989.
17. V.W. Pidgeon, "Bistatic cross section of the sea," IEEE Trans. Antennas and Propagation, Vol. 14, No. 3, pp. 405–406, May 1966.
18. G.W. Ewell, "Bistatic radar cross section measurements," Radar Reflectivity Measurement: Techniques and Applications, Chap. 5, pp. 139–176, Ed. Currie, 1989.
19. R.E. Kell, "On the derivation of bistatic RCS from monostatic measurements," Proc. IEEE, No. 8, pp. 983–987, August 1965.
20. R.W. Larson et al., "Bistatic clutter measurements," IEEE Trans. Antennas and Propagation, Vol. 26, No. 6, pp. 801–804, November 1978.
21. N.J. Willis, Bistatic Radar, Artech House, 1991.
22. M.R.B. Dunsmore, "The principles and applications of bistatic radars," PIERS '98, Ispra, 1998.
23. A. Guissard, "Mueller and Kennaugh matrices in radar polarimetry," IEEE Trans. Geoscience and Remote Sensing, Vol. 32, No. 3, pp. 590–597, May 1994.
24. S.R. Cloude and E. Pottier, "A review of target decomposition theorems in radar polarimetry," IEEE Trans. Geoscience and Remote Sensing, Vol. 34, pp. 498–518, 1996.
25. S.R. Cloude, "Group theory and polarization algebra," Optik, Vol. 75, No. 1, pp. 26–36, January 1986.