41 • Oceanic Engineering

Articles in this section: Hydrophones; Oceanographic Equipment; Sonar Signal Processing; Sonar Target Recognition; Sonar Tracking; Underwater Acoustic Communication; Underwater Sound Projectors; Underwater Vehicles.
Hydrophones
Kurt M. Rittenmyer and Walter A. Schulze, Alfred University, Alfred, NY
DOI: 10.1002/047134608X.W5401. Article online posting date: December 27, 1999.
The sections in this article are: Theory; Conventional Piezoelectric Hydrophones; New Piezoelectric Materials and Hydrophones; Fiber-Optic Hydrophones; Conclusions; and Appendices: Some Typical Dielectric, Piezoelectric, and Elastic Properties of Hydrophone Materials.
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
HYDROPHONES

Hydrophones are devices used for the detection of sound in liquids (usually water). In theory, a hydrophone may convert an acoustic pressure or velocity into any measurable quantity. In practice, a hydrophone converts an acoustic pressure or acoustic velocity signal into an electrical signal, which can then be measured by normal electronic means such as a voltmeter, lock-in amplifier, or other common electronic instrumentation. Most hydrophones measure pressure rather than velocity, and the discussion here is limited to those devices. Very early hydrophones used other means of detecting acoustic pressure, such as modulating the plate separation, and therefore the capacitance, of an air-filled or fluid-filled capacitor. The development of the vast majority of modern hydrophones is intricately linked to the development of piezoelectric materials. The piezoelectric effect occurs in various classes of dielectrics, including single-crystal, polymer, and ceramic materials. For hydrophone purposes, the direct piezoelectric effect linearly relates the dielectric displacement, D, or electric field, E, developed by an element of a lossless dielectric to the mechanical stress applied to it, as defined by the relationships

D = dT    E = gT
where T is the applied stress, d is the piezoelectric charge coefficient, g is the piezoelectric voltage coefficient, and D is the dielectric displacement, which is equal to the charge on the electroded surfaces divided by the area of the electrodes for materials with low dielectric loss. This latter requirement is a necessity for any commonly used piezoelectric material. Piezoelectric materials are intrinsically nonisotropic, and the magnitude of the piezoelectric effect depends on the direction in which the electrical variables are measured as well as the directions in which the stress is applied. This will be described in more detail later. Most common acoustic signals are detected by employing the piezoelectric effect of certain types of ceramics, most often lead zirconate-titanate ceramics. This material, which is widely used for hydrophones, was developed in the 1950s and replaced both the more delicate piezoelectric single-crystal materials used in Navy SONAR systems during and following World War II and barium titanate ceramics, which are commonly used today for electronic capacitors. Early hydrophones were made from piezoelectric single crystals such as Rochelle salt, potassium dihydrogen phosphate (KDP), ammonium dihydrogen phosphate (ADP), and lithium sulfate. These materials are hygroscopic and tended to be environmentally unstable, although they had high piezoelectric coefficients. Research into new single crystals with extremely high piezoelectric coefficients continues even today. Recently, a number of new types of materials have been developed for specific underwater sound detection applications, giving the hydrophone designer a wider variety of materials to choose from depending on the application. These include piezoelectric polymer, ceramic-polymer composite, and single-crystal materials as well as more conventional piezoelectric ceramics and single crystals. Single-crystal materials may exhibit a macroscopic piezoelectric effect because their structure is noncentrosymmetric along one or more of their crystallographic axes.
Lithium sulfate serves as an example. Most piezoelectric materials commonly used today
for hydrophones are also ferroelectric, which means that the polarization can be reoriented in direction by the application of an electric field of sufficient strength. This property generally gives the ferroelectric materials higher dielectric permittivities than the nonferroelectric piezoelectric materials. Dielectric permittivity, εij, relates the charge or dielectric displacement to the electric field:

Di = εij Ej
The subscripts designate direction (i, j = 1, 2, 3). Di and Ej are first-rank tensors (vectors), and εij is a second-rank tensor. For homogeneous materials with a single unique polar direction, only ε33 and ε11 (= ε22) are independent and nonzero. Sufficiently high dielectric permittivity is often desirable to reduce the noise associated with insufficient capacitance relative to lead and stray capacitance effects, which reduce the voltage signal and add extraneous noise. Ferroelectric materials, however, must be "poled" by applying a high electric field along a single direction in order to align the thermally disordered electric dipoles to the allowed crystallographic direction nearest that of the applied field. This is the case for both ferroelectric single crystals and ceramics. Application of sufficient stress or temperature can partially or completely rerandomize the dipoles, resulting in a loss of polarization and, consequently, of piezoelectricity. Materials used in hydrophones are, therefore, chosen according to the environmental conditions (pressure and temperature) at which the hydrophone is designed to operate. Depending on the complexity of the signal, including its level, the frequencies, and the types of components (i.e., pulse, time-invariant sinusoid, pulsed sinusoid, etc.) that comprise the total signal, the detection system can be either rather simple or extremely complex. If the level of noise in which the signal is being measured is high, the detection system can become quite intricate, involving many hydrophones and their arrangement as well as sophisticated analog and digital electronics (used to detect the hydrophone signals and in the analog and digital processing of the resulting electrical signal).
Recently developed technology can also convert the acoustic signal into an optical signal, generally by modulating a monochromatic laser beam by some means related to the acoustic pressure and then demodulating the optical signal into an electrical signal by optical interferometric techniques. This has the advantage that the transmission of signals is nearly lossless, whereas a piezoelectric hydrophone is a capacitive sensor and suffers signal loss because the cable capacitance is connected in parallel with it. The cable, which has nonzero capacitance, in effect acts as a voltage divider with the hydrophone. Fiber-optic hydrophones avoid this problem and can be directly implemented with optical circuitry, a rapidly advancing technology.
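The capacitive-divider loss described here is easy to quantify. The sketch below computes it in decibels; the hydrophone and cable capacitance values are assumed, illustrative numbers, not data from any particular device:

```python
import math

def cable_loss_db(c_hydrophone_f, cable_pf_per_m, length_m):
    """Signal loss (dB) from cable capacitance loading a piezoelectric hydrophone.

    The hydrophone (capacitance C_h) and cable (capacitance C_c) form a
    capacitive voltage divider: V_out = V_h * C_h / (C_h + C_c).
    """
    c_cable = cable_pf_per_m * 1e-12 * length_m
    ratio = c_hydrophone_f / (c_hydrophone_f + c_cable)
    return 20.0 * math.log10(ratio)

# Assumed values: a 10 nF element driven through 100 pF/m coaxial cable.
loss_100m = cable_loss_db(10e-9, 100.0, 100.0)
```

At 100 pF/m, a 100 m cable presents the same 10 nF as the element itself, so half the voltage (about 6 dB) is lost before the amplifier.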
Theory

Acoustic Waves. An acoustic signal can be represented by specifying the time dependence of either the acoustic pressure, given by (1,2)

p(x, t) = p0 e^{j(ωt − kx)}

or the acoustic particle velocity,

v(x, t) = v0 e^{j(ωt − kx)}
where ω (= 2πf ) is the angular frequency, f is the frequency, k is the wave vector, and x is the coordinate along which the wave is propagating. For hydroacoustic signals, p0 and v0 are the scalar amplitudes of the pressure and velocity waves. The waves are longitudinal waves with particle displacement in the same direction as the
pressure gradient,

ρ0 ∂v(x, t)/∂t = −∂p(x, t)/∂x

where v(x, t) is the particle velocity at a point in the fluid and ρ0 is the density of the fluid. The ratio of the pressure to the particle velocity is termed the acoustic impedance,

Z = p(x, t)/v(x, t)
and, in general, is complex, depending on the geometry of the wave and the medium in which it travels. For plane waves in a liquid with density ρ0,

Z = ρ0 c
where c is the velocity of sound in the liquid. For water, c is approximately 1500 m/s. At low frequency (acoustic wavelength considerably larger than the dimensions of the hydrophone), Eq. 8 is a suitable approximation. At higher frequencies (where the wavelength is on the order of, or less than, the largest dimension of the hydrophone), the response of the hydrophone becomes directional, the acoustic wave can no longer be treated as a plane wave, and the impedance is a function of the geometry and dimensions of the hydrophone as well as the direction of the acoustic wave (1,2).
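As a numeric check of these plane-wave relations (fresh-water values assumed for ρ0 and c):

```python
# Plane-wave relations: Z = rho0 * c, v0 = p0 / Z, wavelength = c / f.
rho0 = 1000.0     # density of water, kg/m^3
c = 1500.0        # sound speed in water, m/s

Z = rho0 * c      # specific acoustic impedance, Pa*s/m (1.5e6 rayl)

p0 = 1.0          # 1 Pa pressure amplitude
v0 = p0 / Z       # particle velocity amplitude, m/s -- a tiny motion

f = 10e3          # 10 kHz signal
wavelength = c / f   # 0.15 m: sets the scale below which a hydrophone
                     # element starts to become directional
```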
Acoustic Transduction Requirements of Piezoelectric Hydrophones. Voltage Sensitivity and Capacitance. In general, hydrophones are specified by their ability to detect both large and small signals over a specified range of frequency. They are often used with significant lengths of cable, and in order not to lose signal, the capacitance of the hydrophone must be significantly larger than that of the cable that connects it to the amplifier electronics. Also important is the environmental stability of the hydrophone with respect to pressure and temperature. These requirements, particularly pressure, must be addressed in the design of a hydrophone. Extremely high pressures require thicker elements and materials that resist "depoling." More sensitive designs often tend to be more fragile, so sensitivity must be traded for pressure stability as well as ruggedness. Because of the lower noise associated with voltage electronics compared to charge electronics, voltage electronics are most often used to measure the output of piezoelectric transducers. Specifications for hydrophones that convert acoustic pressure into voltage include the free-field voltage sensitivity (M0), which is defined as

M0 = V/p0
where V is the voltage induced by the acoustic pressure p0. Because of the large range of pressures measured by a typical hydrophone, voltage sensitivity is often expressed in decibels relative to an acoustic pressure of 1 µPa, such that (in mks units) the free-field voltage sensitivity (FFVS) is often expressed as

FFVS = 20 log10 [M0/(1 V/µPa)] dB re 1 V/µPa
which is sometimes a useful specification when amplifiers are not included in the measured response of the hydrophone. However, since most signals from the hydrophone are amplified, and the amplifier gain can be made as large as desired, FFVS must be considered with regard to the noise of the hydrophone itself and the electronics (or optics) associated with detecting and measuring the output of the hydrophone. The bandwidth as well as center frequency are also, of course, primary concerns and are defined by the intended application.
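The decibel conversion can be sketched as follows; the 100 µV/Pa sensitivity is an assumed example value, chosen only to show the arithmetic (1 Pa = 1e6 µPa, hence the 120 dB offset):

```python
import math

def ffvs_db(m0_v_per_pa):
    """Free-field voltage sensitivity in dB re 1 V/uPa.

    M0 is supplied in V/Pa; since 1 Pa = 1e6 uPa, converting the reference
    from 1 V/Pa to 1 V/uPa subtracts 20*log10(1e6) = 120 dB.
    """
    return 20.0 * math.log10(m0_v_per_pa) - 120.0

# Assumed example: a hydrophone producing 100 uV per Pa.
sens = ffvs_db(100e-6)   # -200 dB re 1 V/uPa
```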
In general, sensitivity can be traded for bandwidth by varying the dimensions of the hydrophone element, just as the gain of an amplifier can be traded for increased bandwidth. By making the sensitive element of the hydrophone smaller, shorter wavelengths (higher frequencies) can be accommodated before the wavelength approaches the dimensions of the acoustic element and mechanical resonances, which strongly alter the response of the hydrophone, appear. However, smaller dimensions result in lower voltage sensitivity and capacitance. Hydrophone specifications, therefore, include voltage sensitivity, capacitance, frequency range, maximum pressure of operation, temperature range, and noise levels as functions of frequency. Piezoelectric Hydrophone Sensitivity Relative to Noise. The measure of performance of a hydrophone is its ability to detect acoustic pressure relative to noise. This is often defined as the minimal detectable pressure or noise-equivalent pressure, pnep, and can be considered the smallest acoustic pressure that can be detected in a noise-free environment given the self-noise of the hydrophone and its associated electronics. Noise in systems is a complicated subject, and many factors are involved, including the frequency range of interest, the capacitance, and the dielectric losses of the element. The current and voltage noise of the electronics used to amplify the piezoelectric signal, random fluctuations in temperature and pressure in the environment, and fluctuations of the polarization and capacitance of the piezoelectric element also contribute to noise. Noise is typically dependent on frequency, temperature, and load (input resistance of the amplifier) as well as the frequency bandwidth over which the hydrophone is designed to operate. Environmental sources of noise depend on the platform on which the hydrophone is mounted and on the many possible noise sources in the ocean, which vary according to location.
The minimal detectable pressure is often calculated assuming that thermal noise is the dominant source of noise, which is not always the case. It is, however, a noise source that is always present and cannot be entirely eliminated. It represents the best-case situation, where electronic and other sources of noise have been lowered beneath this physical limit. The thermal voltage noise in a given frequency bandwidth, Δf, is given by the Nyquist equation, derivable from statistical mechanics as

⟨Vn²⟩ = 4kΘ Re(Z) Δf
where k is the Boltzmann constant, Θ is the absolute temperature, and Z is the electrical impedance of the hydrophone. The hydrophone is often represented by a Thevenin equivalent circuit with a resistance, R, representing the dielectric and conductive losses of the material, connected in series with a lossless capacitance, C. The impedance is then given by

Z = R − j/(ωC),    so that Re(Z) = R and |Z| = [R² + 1/(ωC)²]^1/2
where the resistance is related to the dielectric loss tangent of the material by

R = tan δ/(ωC)
The ratio of signal to noise voltages is then given by

Vs²/Vn² = M0² p0² ωC/(4kΘ Δf tan δ)
For a barely detectable pressure, this ratio is equal to unity. Then, solving for the pressure p0² (which is equal to the square of the minimum detectable pressure),

pnep² = 4kΘ Δf tan δ/(M0² ωC)
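A short numeric sketch of this thermal noise floor, combining the series-RC loss model with the Nyquist relation. All component values (sensitivity, capacitance, loss tangent) are assumed, PZT-hydrophone-like numbers used only for illustration:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def p_nep(m0, cap, tan_delta, f, bandwidth, temp=293.0):
    """Thermal-noise-equivalent pressure (Pa) of a piezoelectric hydrophone.

    Series loss resistance R = tan(delta)/(omega*C); thermal noise voltage
    from the Nyquist relation <Vn^2> = 4*k*Theta*R*df; setting the signal
    voltage M0*p equal to the noise voltage gives p_nep.
    """
    omega = 2.0 * math.pi * f
    r_loss = tan_delta / (omega * cap)
    v_noise = math.sqrt(4.0 * k_B * temp * r_loss * bandwidth)
    return v_noise / m0

# Assumed values: M0 = 100 uV/Pa, C = 10 nF, tan(delta) = 0.02,
# evaluated at 1 kHz in a 1 Hz band.
p_min = p_nep(100e-6, 10e-9, 0.02, 1000.0, 1.0)   # a few tens of uPa
```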
When designing a hydrophone, it is desirable to minimize this value. A figure of merit (FOM) for a hydrophone may be defined from the reciprocal of pnep², neglecting the factor 4kΘΔf, as

FOM = M0² ωC/tan δ
For any material of use with low losses at low frequency (far below any electromechanical resonances), tan δ ≪ 1, and at a given operating frequency the figure of merit becomes proportional to the product

M0² C
In other words, the measure of performance of a piezoelectric hydrophone with respect to thermal noise is just the product of the square of the voltage sensitivity and the hydrophone capacitance. Other noise sources, such as that from the environment or from the voltage and current noise of the amplifier, can also be included. The current noise often becomes significant at lower frequencies and the voltage noise at higher frequencies. The environmental noise for the open ocean is often given in terms of sea-state noise. This is defined in terms of the measured noise pressure in a 1 Hz frequency band and is generally given in decibels relative to 1 µPa/√Hz. Figure 1 shows a plot of sea-state noise as a function of wind and sea conditions. Sea-state zero is most commonly used as a noise pressure reference. Since hydrophones are often specified to have self-noise at levels well below sea-state zero, thermal noise is often considered the practical noise floor for hydrophones. The preceding figures of merit [Eqs. (16) and (17)] are, therefore, commonly used. The frequency range or bandwidth of a hydrophone is also of primary importance. The bandwidth can be limited by either the mechanical resonances of the hydrophone or the electronic circuits used to detect the signal. Mechanical resonances cause the sensitivity of the hydrophone to vary drastically; therefore, hydrophones are generally designed to operate below any fundamental resonance, or sometimes in a region above the fundamental resonance and below higher-frequency resonances. From a piezoelectric materials perspective, figures of merit are also often used. First, however, a brief introduction to the definitions of the piezoelectric coefficients and their application to hydrophone technology is required. Piezoelectricity. The piezoelectric effect is defined by the equations in full tensor notation as (3)

Di = dijk Tjk
and

Ei = gijk Tjk
where the dielectric displacement is defined as the charge developed on the electrode faces perpendicular to the k-direction of the material divided by the area perpendicular to the k-direction, for a stress applied along the i-axis to the plane described by the normal vector k (Fig. 2). The piezoelectric charge coefficient, dijk,
Fig. 1. Measurement of sea-state noise pressure. (Figure reproduced from Ref. 4.)
Fig. 2. Definition of axes for piezoelectric stress, strain, dielectric displacement, and electric field for electroded piezoelectric material.
and the piezoelectric voltage coefficients, gijk, are defined by Eqs. (18) and (19). For most piezoelectric materials commonly used in hydrophone applications, many of the possible piezoelectric coefficients are zero, or are equal or opposite to each other, depending on the crystallographic symmetry of the material (3). The subscripts for stress are often simplified to matrix form by the following transformation of Tjk:

11 → 1,  22 → 2,  33 → 3,  23 or 32 → 4,  13 or 31 → 5,  12 or 21 → 6
so that T1, T2, and T3 refer to tensile or compressive stresses along the x-, y-, and z-axes and T4, T5, and T6 refer to shear stresses applied to faces perpendicular to the x-, y-, and z-axes. For poled piezoelectric ceramics, Eq. (18) has the form

D1 = d15 T5
D2 = d24 T4
D3 = d31 T1 + d32 T2 + d33 T3
where d31 = d32 and d24 = d15 because the 1 and 2 directions are equivalent. The gij matrix has the same form, relating the Ei components to the stresses Tj. Hence, under hydrostatic pressure, which defines the low-frequency case where the acoustic wavelength λ is much larger than the largest dimension of the hydrophone,

T1 = T2 = T3 = −p,    T4 = T5 = T6 = 0
the dielectric displacement perpendicular to the electrode faces of the element is given by

D3 = −(d33 + 2d31)p = −dh p
where the hydrostatic piezoelectric coefficient is defined as

dh = d33 + 2d31
For most poled ceramics, as well as many single crystals, the d31 and d32 coefficients are equal in sign and magnitude. Furthermore, the signs of the d31 and d33 coefficients are opposite, so the magnitude of dh is substantially lower than that of either d33 or d31. A similar piezoelectric voltage coefficient is defined for low-frequency (hydrostatic) applications as

gh = g33 + 2g31
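The partial cancellation between d33 and 2d31 is easy to see numerically. The coefficient values below are assumed, PZT-like numbers used only for illustration; gh is obtained from dh through the free permittivity ε33^T:

```python
# Hydrostatic piezoelectric coefficients from the tensor components:
# dh = d33 + 2*d31, gh = dh/eps33_T, and the material product dh*gh.
eps0 = 8.854e-12          # permittivity of free space, F/m

d33 = 290e-12             # C/N (assumed, PZT-like)
d31 = -125e-12            # C/N -- note the sign opposite to d33
eps33_T = 1300 * eps0     # F/m, free (constant-stress) permittivity (assumed)

dh = d33 + 2.0 * d31      # 40 pC/N: far smaller than d33 alone
gh = dh / eps33_T         # V*m/N
dhgh = dh * gh            # the dh*gh material product, m^2/N
```

The cancellation (290 down to 40 pC/N here) is why much hydrophone-materials research aims at raising dh rather than d33.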
The dielectric displacement and electric field are related through the dielectric permittivity as given by Eq. (3), so that from Eqs. (3), (23), and (25),

gh = dh/ε33^T
where the superscript T indicates that the permittivity is measured under conditions of constant stress, which is the proper boundary condition for low-frequency applications. At higher frequencies, where the wavelength of the acoustic wave is on the order of the dimensions of the transducer, the stresses are not generally equal to the negative of the pressure; the stress or strain as a function of time and position in the hydrophone must instead be found by applying the wave equation to the structure with appropriate boundary conditions. The piezoelectric response can then be calculated by integrating the response of each point in the hydrophone structure over the volume of the hydrophone. For applications such as large-area hydrophones, which will be discussed later in conjunction with the use of polymer and piezoelectric composite materials, this is necessary. The calculations are beyond the scope of this article, but information has been given elsewhere on
such calculations (2). For most applications, however, the hydrostatic coefficients are the most critical measures of performance of a material, as are the voltage sensitivity and capacitance for a hydrophone. The equations given previously are for piezoelectric ceramics with a single unique axis. For many single crystals, the situation is more complicated, and full derivations of the piezoelectric matrices can be found in the textbook by Nye (3). Figures of Merit for Piezoelectric Hydrophone Materials. The performance of a hydrophone material is derived from the hydrophone figures of merit by factoring out the hydrophone geometry (4). For a lossless piezoelectric material, the electric field is related to the voltage across a planar piece of material with surface area A and thickness t by

E3 = V/t
so the material voltage sensitivity is related to the hydrophone voltage sensitivity by

M0 = gh t
The dielectric permittivity is related to the capacitance by

C = ε33^T A/t
Similarly, the charge sensitivity of the material is

Q/p0 = dh A
which, when solved for C and put into the hydrophone figure-of-merit equation [Eq. (16)], yields

FOM = (dh gh ω/tan δ)(tA)
The material FOM is then

FOMmaterial = dh gh ω/tan δ
where the product tA equals v, the volume of the element. Since the product of charge and voltage equals electrical energy, the hydrophone FOM is a measure of the energy converted from the acoustic signal to the electrical signal per unit acoustic pressure squared, whereas the material FOM gives the energy converted per unit volume of material, both at frequency ω. For low-loss materials, this reduces simply to a quantity proportional to the product

dh gh
which defines the signal-to-noise performance of a low-loss hydrophone material. It is maximized when the conversion from acoustic energy to electrical energy is maximized. The mechanical work done per unit volume on the material is

Wm = (1/2)Si Ti = (1/2)sij Ti Tj
where Si is the elastic strain, sij is the elastic compliance matrix, and repeated subscripts are summed over the possible values of i and j (3). Similarly, the electrical energy is given by

We = (1/2)βij Di Dj
where βij (the inverse of the permittivity matrix εij) is the dielectric impermeability. Using Eq. (18) and taking the ratio of Eqs. (34) and (35) defines the electromechanical coupling coefficient, kij,

kij² = dij²/(εii^T sjj^E)
This coefficient represents the energy conversion efficiency and is in itself an important figure of merit for hydrophones as well as acoustic projectors. For the 3–1 transverse mode discussed earlier, i = 3, j = 1, while for the longitudinal mode, i = j = 3 in Eq. (37). For the hydrostatic mode, the coupling coefficient becomes

kh² = dh gh/χ
where χ is the bulk compressibility, given by (3)

χ = Σ sij  (i, j = 1, 2, 3)
For poled piezoelectric ceramics the compressibility is given by

χ = 2(s11 + s12) + 4s13 + s33
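Putting the pieces together gives the hydrostatic coupling coefficient. The compliance and piezoelectric values below are assumed, PZT-like numbers chosen only to illustrate the arithmetic:

```python
import math

# Hydrostatic coupling kh^2 = dh*gh / chi, with the bulk compressibility of a
# poled ceramic chi = 2*(s11 + s12) + 4*s13 + s33 (sum of the upper-left 3x3
# block of the compliance matrix).
eps0 = 8.854e-12
d33, d31 = 290e-12, -125e-12          # C/N (assumed)
eps33_T = 1300 * eps0                 # F/m (assumed)

s11, s12, s13, s33 = 12.3e-12, -4.1e-12, -5.3e-12, 15.5e-12   # m^2/N (assumed)

dh = d33 + 2 * d31
gh = dh / eps33_T
chi = 2 * (s11 + s12) + 4 * s13 + s33   # bulk compressibility, 1/Pa
kh = math.sqrt(dh * gh / chi)           # hydrostatic coupling, dimensionless
```

With these numbers kh comes out near 0.1, far below the longitudinal k33 of such ceramics, again reflecting the d33/d31 cancellation under hydrostatic loading.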
From Eq. (38), it is seen that softer materials should give lower electromechanical coupling, yet in many instances they give higher values of the dh gh material figure of merit while maintaining high coupling. The coupling factor is intrinsically tied to the achievable bandwidth of a material of a particular dimension, as well as to its sensitivity to stress; it is, therefore, an important measure of a material's performance as well. Directivity. So far we have assumed that the acoustic pressure is virtually hydrostatic. At higher frequencies and shorter acoustic wavelengths relative to the shape and dimensions of the hydrophone and its acoustic sensing element, the sensitivity of a hydrophone may depend on its direction relative to an oncoming acoustic plane wave (5). Here the case where the acoustic wavelength, λ, is on the same order as the largest dimension of the hydrophone element is considered. The directivity function H(θ, φ, ω) is defined as the ratio
of the sensitivity in a particular direction in spherical coordinates (θ, φ) to that obtained in the direction where the sensitivity is maximum, which is referred to as the acoustic axis of the hydrophone, or

H(θ, φ, ω) = M(θ, φ, ω)/M0(ω)
and the frequency, ω, is included to indicate that H is generally a strong function of frequency when the acoustic wavelength is on the order of the dimensions of the hydrophone. At low frequencies, H(θ, φ, ω) approaches unity. The directivity is defined as

D = 4π/∫ H²(θ, φ, ω) dΩ
where Ω is the solid angle and H² is integrated over its domain. Again, the tradeoff between sensitivity and bandwidth must be made according to the desired performance criteria (frequency, bandwidth, voltage sensitivity, capacitance, pressure sensitivity, temperature sensitivity, and noise) as well as mechanical ruggedness requirements. Note that the directionality of a planar hydrophone element increases as the frequency increases: the pattern is omnidirectional at low frequencies (where λ ≫ a, a being the longest dimension of the hydrophone) but becomes increasingly directional as the wavelength decreases and becomes of the same order as the physical dimensions of the element. Depending on the geometry and properties of the element, there may be one or more main lobes in the pattern at higher frequencies. (See the section on piezoelectric hydrophones.) Bandwidth. The frequency range of a hydrophone is usually determined by the region of use where the voltage sensitivity of the hydrophone is constant. In some cases the electronic detection system also limits the frequency range. Normally, but not always, the frequency range is below the fundamental resonances of the piezoelectric or fiber-optic sensing element, although in the case of piezoelectric structures such as spheres, cylinders, and composites, the frequency ranges between the various electromechanical resonances (where the sensitivity is constant) are often used. The resonant frequency depends on the element geometry, the dimensions of the element, and its elastic properties and density. The frequency constant is defined as

N = fr d
where fr is the resonant frequency and d is the relevant dimension of the element. (For simple thickness-mode transducers made using a disk of piezoelectric material, d is the thickness. For the radial mode, d is the radius.) For complex geometries, such as spheres or cylinders of finite thickness, the appropriate dimension can be difficult to determine. For thickness (TE) mode transducers, the resonant frequency is related to the thickness, t, of the element, the elastic stiffness, c33^D, and the density, ρ, by

fr = Nt/t,    Nt = (1/2)(c33^D/ρ)^1/2
where Nt is a material constant with units of frequency times length. For broadband high-frequency transducers, the amount of energy per unit bandwidth is limited by the total amount of acoustic energy that can be converted to electrical energy, which is related to the electromechanical coupling factor. For a particular vibrational mode the coupling coefficient can be determined from the resonance frequencies as

k² = (fp² − fs²)/fp²
Fig. 3. Hypothetical line array of n hydrophones. (Figure reproduced from Ref. 5.)
where k is the electromechanical coupling coefficient for the appropriate vibrational mode and fp and fs are the parallel and series resonance frequencies, respectively. Hydrophone Arrays. To detect an object in a fluid some distance away, one must determine both the angles at which the object is located relative to the receiver and its distance from the receiver, thereby determining its location in spherical coordinates (r, θ, φ). It is easy to see that this can be done with an array of highly directional hydrophones (5), with the acoustic axes (the most sensitive directions) of the elements distributed over 2π radians in both angular directions (θ, φ). The signal from the source will then interact most strongly with the hydrophone elements whose directivity coincides with the direction of the source and that are closest to the source. The direction of the source can then be calculated. For low-frequency signals, the hydrophone directivities are omnidirectional, but the location of an acoustically radiating object can still be determined. A plane wave arriving from a source will reach elements at different distances from it with a relative time delay and a corresponding phase difference. Since the frequency and wavelength of the signal are determined at each hydrophone, information about the distance of the source from the receiver and its motion can be determined by calculating the cross-correlation coefficients of the signals from the different hydrophone elements and determining the coherence of those signals across the array. In general, the hydrophone signals are coherent with one another; there is a specific mathematical relationship between them in terms of their relative phase delays. Noise, in general, is not correlated, and the relationships between the signals from the various hydrophone elements are random.
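This coherent-signal/uncorrelated-noise distinction is what gives an array its gain, and it can be demonstrated with a small sketch. Here "uncorrelated noise" is imitated by mutually orthogonal sinusoids of different frequencies (an idealization chosen so the result is exact): summing n coherent channels multiplies the signal power by n², while the noise power grows only as n:

```python
import math

n, num = 8, 1000
t = [i / num for i in range(num)]

# Coherent signal: every element sees the same waveform, so amplitudes add
# and the summed power grows as n^2.
sig = [[math.sin(2 * math.pi * 5 * ti) for ti in t] for _ in range(n)]

# "Noise": mutually uncorrelated channels (modeled here as orthogonal
# sinusoids of different integer frequencies), so powers -- not
# amplitudes -- add, and the summed power grows only as n.
noise = [[math.sin(2 * math.pi * (10 + m) * ti) for ti in t] for m in range(n)]

def summed_power(chans):
    """Mean-square value of the element-wise sum of all channels."""
    s = [sum(vals) for vals in zip(*chans)]
    return sum(v * v for v in s) / len(s)

p_sig = summed_power(sig)      # n^2 * 0.5 = 32.0
p_noise = summed_power(noise)  # n * 0.5 = 4.0
```

The ratio p_sig/p_noise equals n, the classic signal-to-noise gain of an n-element array in uncorrelated noise.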
Consider a simple line array of n hydrophones of length L; d (= L/n) is the distance between hydrophones, as in Fig. 3. Given that the hydrophones have identical sensitivity, M0, the outputs of the hydrophones can be summed as a function of the time delay, Δt, between adjacent elements, corresponding to a phase delay

φ = ωΔt = kd sin θ
where θ is the angle between the incoming wave and the normal to the array, and r is the distance from the source to the array. When the array is far from the source, all elements are essentially equally distant from the source and the differences in distance are negligible. This condition is referred to as
far-field conditions. The output voltage of the array is given by (1)

V(θ, t) = M0 p0 Σ(m = 0 to n−1) e^{j(ωt − mkd sin θ)}
which, using trigonometric identities, transforms to

V(θ, t) = M0 p0 [sin(nkd sin θ/2)/sin(kd sin θ/2)] e^{j[ωt − (n−1)kd sin θ/2]}
at a distance far from the source (far-field conditions). This latter function can be separated as

V(θ, t) = V0(r, ω) H(θ) e^{jωt}
where the maximum voltage amplitude along the acoustic axis is given as a function of distance, r, and frequency, ω, by

V0(r, ω) = n M0 p0(r, ω)
and the angular dependence of the signal is given by the directivity function

H(θ) = sin(nkd sin θ/2)/[n sin(kd sin θ/2)]
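This directivity function is easy to evaluate numerically. The 8-element, half-wavelength-spacing configuration below is an assumed example; the code checks the broadside response and the first null:

```python
import math

def line_array_h(theta, n, d, wavelength):
    """Directivity function of an n-element line array with spacing d:
    H(theta) = sin(n*k*d*sin(theta)/2) / (n*sin(k*d*sin(theta)/2))."""
    k = 2.0 * math.pi / wavelength
    x = 0.5 * k * d * math.sin(theta)
    if abs(math.sin(x)) < 1e-12:          # broadside limit: H -> 1
        return 1.0
    return math.sin(n * x) / (n * math.sin(x))

# Assumed example: 8 elements at half-wavelength spacing.
h_broadside = line_array_h(0.0, 8, 0.5, 1.0)
# First null occurs where n*k*d*sin(theta)/2 = pi, i.e. sin(theta) = 2/n
# for half-wavelength spacing.
h_null = line_array_h(math.asin(2.0 / 8.0), 8, 0.5, 1.0)
```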
As a simple example of how an array functions, consider a source that is omnidirectional, radiating an acoustic wave of pressure p0 at low frequency, ω (wavelength λ), to two identical hydrophones separated from the source by distances r1 and r2, respectively. The distance between the hydrophones,

R = r2 − r1
and is known. Assuming plane-wave conditions, the voltage output from the first hydrophone is
and

V2 = M0 p0 e^{j(ωt − kr2)}
from the second hydrophone. Subtracting Eq. (53) from Eq. (54) yields the expression

V2 − V1 = −2j M0 p0 sin(kR/2) e^{j[ωt − k(r1 + r2)/2]}
where R = r2 − r1 corresponds to a time delay Δt = R/c between the two arrivals.
If the voltages V2 and V1 are measured and the sensitivity M0 and distance R are known, the frequency of the incoming wave, which determines the wave vector k = ω/c, and the time delay Δt can be determined directly, in a manner similar to measuring the frequencies and phase difference of two sine-wave signals on an oscilloscope with a time base. The pressure amplitude at the source can then be calculated as

p0 = |V1|/M0
and by putting this value into Eqs. (53) and (54), values for r1 and r2 can be solved for, giving the distance to the radiating object. Similarly, since n, the number of hydrophone elements, k, the wave vector, and d, the distance between elements, are all known, Eq. (48) can be solved for the angle θ, provided the directivity function of the array, H(θ), is known. Since both r and θ relative to the object are determined, the object is located in two dimensions and its signal strength is also known. Extension of this example to three dimensions is straightforward. When calibrating a hydrophone, generally the free-field voltage sensitivity (FFVS) and the directivity function H(θ) are measured as functions of frequency. Hydrophones are usually designed to make these two parameters as insensitive as possible to signal strength (the voltage/pressure relationship must be linear over wide ranges of acoustic pressure) and to environmental variables (pressure, temperature, and time). The previous example was an extremely simple one based on a number of assumptions, which are often incorrect. The array elements were assumed to measure pressure independently of the other hydrophones. In practice, individual elements diffract the sound waves impinging on them, which alters the field at the surrounding hydrophones in the array and thereby alters their response. Thus the individual hydrophones, which are often in close proximity or even in physical contact, as in extended-area hydrophones for towed-array applications, interact with each other. Minimizing these interactions and accounting for the remaining interactions is fundamental to producing a useful hydrophone array. In terms of material parameters, for extended hydrophones, the cross coupling is often strongly related to the transverse piezoelectric coefficients, g31 and d31, and it is important to reduce the effect of these coefficients through either material or hydrophone design.
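The phase-comparison step described above (measuring the frequency and phase difference of two sinusoidal hydrophone outputs) can be sketched as follows. The tone frequency, sampling rate, and delay are assumed example values; the phase of each channel is recovered by projecting onto quadrature references over an integer number of cycles:

```python
import math

def tone_phase(sig, times, f0):
    """Phase (rad) of a sampled tone sig = sin(2*pi*f0*t - phi), recovered by
    projecting onto sine/cosine references over whole cycles of the tone."""
    w = 2.0 * math.pi * f0
    s = sum(v * math.sin(w * ti) for v, ti in zip(sig, times))
    c = sum(v * math.cos(w * ti) for v, ti in zip(sig, times))
    return math.atan2(-c, s)

# Assumed example: 2 kHz tone sampled at 1 MHz for exactly one period,
# with a 50 us propagation delay between the two hydrophone outputs.
fs, f0, true_delay = 1.0e6, 2000.0, 50e-6
times = [i / fs for i in range(500)]            # 500 samples = one full cycle
a = [math.sin(2 * math.pi * f0 * ti) for ti in times]
b = [math.sin(2 * math.pi * f0 * (ti - true_delay)) for ti in times]

dphi = tone_phase(b, times, f0) - tone_phase(a, times, f0)
delay = dphi / (2.0 * math.pi * f0)             # recovered time delay, s
```

With the delay recovered, the geometry of Eqs. (53) and (54) gives the range, as in the text. (For a single tone the phase, and hence the delay, is only unambiguous within one period.)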
Also, plane-wave or far-field approximations were assumed, and the noise in the environment was not considered at all. There are many noise sources in an ocean environment, such as reverberation, which are frequently nonisotropic and can contribute significantly to the computed cross-correlation coefficients of the various elements of the array. Such contributions generally degrade the performance of an array; thus the performance of an array is a function of the noise environment in which it operates. One particular type of noise that has received a great deal of attention recently is flow noise. Flow noise results from turbulence and nonlaminar flow over the surfaces of the hydrophone, which induce pressure fluctuations on the hydrophones. Flow noise is generally a broad-spectrum noise source that depends on the velocity of flow, the hydrophone dimensions, and the array configuration. The broad spectrum of the noise tends to excite vibrational modes of the hydrophones and arrays, where the sensitivity of the system is high, and can mask incoming acoustic signals. The noise tends to be largest at shorter wavelengths and can be averaged out by using larger-area hydrophones. Both longitudinal (3–3) and transverse (3–1) modes of the hydrophones can be excited. The size and shape of hydrophone elements and arrays must therefore be designed with consideration not only of the acoustic requirements, such as bandwidth, noise, voltage sensitivity, capacitance, and directivity, but also of noise sources such as flow noise. The operating conditions of the structure on which the hydrophones and arrays are mounted, such as a submarine, including its velocity and hydrodynamic characteristics, must also be considered.
14
HYDROPHONES
Fig. 4. Probability density distributions of noise and signal plus noise. (Figure reproduced from Ref. 5.)
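The overlap of the two probability densities in Fig. 4 determines the trade-off between detection and false alarm. A minimal sketch, assuming Gaussian noise and a detection index d = (M_S+N − M_N)^2/σ_n^2; the function and the threshold convention (in noise standard deviations) are illustrative assumptions:

```python
import math

def q(x):
    """Upper-tail probability (area above x) of a standard normal variable."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def detection_probabilities(di, threshold_sigmas):
    """False-alarm and detection probabilities for the two Gaussian
    densities of Fig. 4, whose means are separated by sqrt(di) noise
    standard deviations.

    di               : detection index (M_S+N - M_N)^2 / sigma_n^2
    threshold_sigmas : decision threshold, in noise standard deviations
                       above the mean noise amplitude
    """
    d = math.sqrt(di)                  # separation of the two density means
    p_fa = q(threshold_sigmas)         # noise-only density beyond threshold
    p_d = q(threshold_sigmas - d)      # signal-plus-noise density beyond it
    return p_fa, p_d
```

Raising the threshold lowers both probabilities; raising the detection index separates the densities and improves detection at a fixed false-alarm rate.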
There are numerous methods of correlating and processing the outputs of an array of hydrophones. If the outputs of m hydrophones are correlated in pairs in m(m − 1)/2 correlators and then added, the power signal-to-noise ratio of the array in isotropic noise is given by
where B is the bandwidth of the receiver and t is the time over which the signal is processed. The performance of an array therefore depends strongly on the signal-to-noise ratio of the individual hydrophones, the number of hydrophones in the array, the achievable bandwidth of the array, and the time interval over which the signals are processed. The detection index, DI, is defined as the square of the difference between the mean amplitude of signal plus noise, M_S+N, and the mean amplitude of noise alone, M_N, divided by the variance of the noise, σ_n^2, or DI = (M_S+N − M_N)^2/σ_n^2.
The probability density of the noise is assumed to be Gaussian. The detection threshold (DT) is defined as the level of a signal that is just detectable at a predetermined probability of correct detection versus false alarm (Fig. 4). This is given as
where the noise power is defined as that in a frequency band Δf, generally taken to be 1 Hz. The detection threshold is a measure of the performance of a hydrophone array. The performance of an array can be altered by "shading" the different hydrophones, that is, by varying their sensitivities and directivity patterns. Weighting the sensitivity of the hydrophone elements is frequently used to further increase the signal-to-noise ratio of the hydrophone array. For arrays that use identical elements, this can be done electronically by varying the sensitivity of each element through changes in the gain of the amplifiers associated with the hydrophones (amplitude shading), by digital computation after sampling of the analog signal, or by introducing phase delays in the elements either electronically (by analog or digital means) or by varying the size and position of the array elements (phase shading). The first technique (amplitude shading) is used extensively in sonar systems. The latter technique (phase shading) has been applied in radar systems but is not widely used in sonar. Amplitude shading is particularly important for use in systems where
the noise field is not uniform and isotropic, such as on submarine systems. It is beneficial in such cases to lower the sensitivity of elements exposed to greater noise relative to those in quieter locations. Multiplicative arrays are also possible, in which the outputs of the hydrophones are multiplied with each other rather than added. This technique can reduce the size and number of elements in an array and is useful in environments with signal-to-noise ratios well above unity. In such arrays, the DI and DT must be considered together because the signal processing and the array design are closely intertwined. The design of hydrophones is thus intimately tied to the design criteria for the arrays and systems in which they are used and to the specific application and its physical environment. In the past, there were few options for tailoring the properties of a hydrophone material to the application. Recently, a number of piezoelectric materials have been developed that can be designed for very specific hydrophone requirements. The interaction of material design, hydrophone design, and array design is critical in designing an acoustic detection system.
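The amplitude shading described above can be illustrated with a short sketch. The example below is an assumption-laden illustration rather than a sonar design: it computes the normalized far-field pattern of a 16-element line array with uniform weights and with Hann-window weights, the latter trading a wider main lobe for much lower sidelobes.

```python
import math

def array_factor_db(weights, d_over_lambda, theta_deg):
    """Normalized far-field response (dB) of a line array of point elements
    with the given amplitude weights, spacing d (in wavelengths), at angle
    theta measured from broadside."""
    psi = 2.0 * math.pi * d_over_lambda * math.sin(math.radians(theta_deg))
    re = sum(w * math.cos(i * psi) for i, w in enumerate(weights))
    im = sum(w * math.sin(i * psi) for i, w in enumerate(weights))
    mag = math.hypot(re, im) / sum(weights)   # broadside response = 1
    return 20.0 * math.log10(max(mag, 1e-12))

n = 16
uniform = [1.0] * n                                         # unshaded array
hann = [0.5 - 0.5 * math.cos(2.0 * math.pi * i / (n - 1))   # amplitude-shaded
        for i in range(n)]
```

Scanning theta past the main lobe, the Hann-shaded pattern's highest sidelobe sits far below that of the uniform array, which is precisely the benefit of lowering the weights of the outer (or, on a platform, the noisier) elements.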
Conventional Piezoelectric Hydrophones Single Crystal Materials and Hydrophones. Before about 1955, many hydrophones were based on single crystal materials, which have significant hydrostatic piezoelectric coefficients (dh or gh). Disadvantages of single crystals are their mechanical fragility and the limited size achievable by single crystal growth methods. The cost of single crystals is also prohibitive because of the time and sophisticated equipment required. Several purely piezoelectric (nonferroelectric) crystals have very high voltage sensitivity (gh) but low dielectric permittivity and, therefore, low charge coefficients (dh). A classic material in this family is lithium sulfate monohydrate (Li2SO4·H2O). This material is still used for a few hydrophones that require high voltage sensitivity and where an amplifier can be physically located very near the single crystal element, so that significant capacitive losses in the cable connecting the crystal to an amplifier are prevented. Ferroelectric crystals, such as ammonium dihydrogen phosphate (ADP), potassium dihydrogen phosphate (KDP), and Rochelle salt, were extensively used in early hydrophone development because they have higher dielectric permittivity, which lowers the cable loss. Properties of these crystals are given in Appendix 1 (6,7). Hydrophones and arrays built from these crystals were often simple in nature, using one or more crystals mounted on an acoustically soft polymer or corprene (rubberized cork) in order to isolate them from vibrations of the transducer housing. By simply providing electrical leads to either a transformer or amplifier, the output voltage of the crystals could be measured and, depending on the size of the crystals and the manner in which they are connected (series or parallel), the voltage sensitivity and hydrophone capacitance could be adjusted to fulfill the design criteria.
Recently, a new class of materials, the lead magnesium niobate–lead titanate (PMN–PT) single crystals, was developed; these crystals exhibit extremely high dielectric permittivity (∼50,000 ε0) and piezoelectric charge coefficients (d33, d31, dh) and moderate piezoelectric voltage coefficients, which together yield a significant material FOM (dh gh) (8). Application of these materials has only recently begun. The high dielectric permittivity makes the material appropriate for remote acoustic sensors where amplifiers are not in close proximity to the sensor. The large dh can also be used for acoustic sources, while the voltage sensitivity is sufficient for hydrophone applications, making the material intriguing for pulse-echo and other reciprocal transducer applications. Being single crystals, the materials suffer from mechanical shock limitations and are limited to small sizes, but their sensitivity and dielectric permittivity may make them useful for very compact hydrophone applications. Ceramic Hydrophone Materials and Conventional Hydrophone Designs. The most commonly used piezoelectric material for hydrophones is lead zirconate titanate (PZT) ceramic. This material was developed by B. Jaffe and coworkers at Clevite Corporation (now a division of Morgan-Matroc Corp., Cleveland, OH) in the mid-1950s. The classic textbook Piezoelectric Ceramics by B. Jaffe, W. Cook, and H. Jaffe (9) describes the common ceramic compositions of PZT and their properties. Several compositions are so commonly used
in Navy systems that they are referred to as Navy Type I, Navy Type II, and Navy Type III piezoceramic, corresponding to the trade names PZT-4, PZT-5, and PZT-8. There are many other compositions, but the first two are most commonly used in hydrophones; PZT-8 is almost exclusively used in acoustic source (projector) applications. The properties of these materials are listed in Appendix 2. As can be seen there, these materials have high piezoelectric charge coefficients d33, d31, and d15 when the applied stress (compressive or shear) is along a single direction. However, for low-frequency applications the stress is hydrostatic, and the hydrostatic piezoelectric coefficient, dh, is rather low because of the opposite signs of the d33 and d31 coefficients [Eq. (24)]. The voltage sensitivity is extremely small because of the very high dielectric constants in addition to this difference in sign. Therefore, the object of most hydrophone design is either to transform or eliminate the longitudinal (T3) stress while maintaining or increasing the transverse (T1 and T2) stresses, or to eliminate the transverse stresses while increasing or maintaining the longitudinal stress. Hydrophones based on the latter approach are termed 3–3 or longitudinal-mode transducers, while those based on the former are called 1–3 or transverse-mode transducers. The Russian monograph by Ananeva (10) provides descriptions of the design of transducers using piezoceramics, although it principally discusses barium titanate ceramic hydrophones, more commonly used in early Russian transducers. The most common method for accomplishing the stress transformation is to form the ceramic into either a hollow sphere or a piezoelectric ceramic tube. Usually, the spherical shell is poled radially.
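The cancellation between d33 and d31 under hydrostatic stress can be checked with rough handbook-style numbers; the values below are approximate figures for a soft PZT composition, used only for illustration.

```python
# Approximate charge coefficients for a soft PZT composition (pC/N);
# handbook-style illustrative values, not from the article's appendices.
d33 = 374.0     # longitudinal coefficient
d31 = -171.0    # transverse coefficient (note the opposite sign)

# Hydrostatic coefficient, dh = d33 + 2*d31 [cf. Eq. (24)]
dh = d33 + 2.0 * d31    # about 32 pC/N, an order of magnitude below d33
```

This order-of-magnitude loss under hydrostatic loading is exactly why hydrophone designs work to suppress either the longitudinal or the transverse stress component.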
For a radially poled spherical shell with inner radius a and outer radius b, the tangential stresses are amplified, whereas the radial stresses are nearly negligible and the hydrostatic voltage response (voltage/hydrostatic pressure) is given by (10)
where η = a/b. Thus, for very thin shells, η approaches unity and this equation becomes simply
A very thin shell hydrophone, of course, cannot withstand high pressure or shock; often, the shell must be made thicker, which reduces the sensitivity of the hydrophone. Recalling that the g33 and g31 coefficients have opposite signs, it is apparent that the two terms tend to cancel each other in the far right-hand term [Eq. (61)]. For small η (thick shell), the first term in the brackets is zero and the response simply becomes equal to the hydrostatic response of the piezoceramic,
as expected. Spherical hydrophones have the advantage of being omnidirectional up to higher frequencies compared to planar or cylindrical hydrophones. They are generally used for applications where space is restricted or where it is impossible to align the element properly, such as in small acoustic test facilities. The sensitivity of a hydrophone based on a spherical ceramic element is omnidirectional up to near the fundamental resonance of the spherical element. The unamplified FFVS of a series of spherical hydrophones is shown in Fig. 5. The sensitivity, however, is lower than that of a hollow-cylindrical PZT hydrophone (by a factor of 4.5 for thin-walled elements) for elements of similar volume. The cylindrical hydrophone (directivity is shown in Fig. 6) is more commonly used because of its higher capacitance, higher sensitivity per unit volume, and more convenient shape for arrays. The voltage response (FFVS) of a hollow cylindrical hydrophone, poled radially
Fig. 5. FFVS of a series of spherical hydrophones (NUWC-USRD Models F42 A–D) with OD dimensions of the ceramic elements A: 5.0 cm, B: 3.91 cm, C: 2.54 cm, D: 1.27 cm. (Figure reproduced from Ref. 11.)
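FFVS values such as those plotted in Fig. 5 are conventionally quoted in dB re 1 V/µPa; converting to an absolute sensitivity is a one-line calculation. The helper name below is an illustrative assumption.

```python
def ffvs_db_to_v_per_pa(ffvs_db):
    """Convert an FFVS quoted in dB re 1 V/uPa to volts per pascal."""
    v_per_upa = 10.0 ** (ffvs_db / 20.0)
    return v_per_upa * 1.0e6     # 1 Pa = 1e6 uPa

# e.g. the -164 dB re 1 V/uPa quoted in the text for the H56 hydrophone
# corresponds to about 6.3e-3 V/Pa: a 1 Pa signal yields roughly 6.3 mV
m_h56 = ffvs_db_to_v_per_pa(-164.0)
```

The logarithmic form makes the wide dynamic range of hydrophone sensitivities manageable; the linear form is what an amplifier designer actually needs.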
with stiff-capped ends, is given by (10)
where ξ = (b − a)/2b, a is the inner diameter and b is the outer diameter of the cylinder. For thin-walled cylinders, this expression becomes
Capacitance can be increased by increasing the length of the tube, depending on bandwidth requirements. The response of a cylindrical end-capped hydrophone (NUWC-USRD Model H56) used for calibration purposes is shown in Fig. 7. It has an FFVS of −164 dB re 1 V/µPa, which includes about 30 dB of amplifier gain. The response of the hydrophone is seen to be flat from 10 Hz to near 50 kHz. The FFVS must be considered relative to the noise (the noise-equivalent pressure), which is essentially the reciprocal figure of merit for hydrophones. The noise-equivalent pressure of an H56 hydrophone is shown in Fig. 8 (11). For comparison with other materials technologies, the materials FOM = (p_nep^2 V)^−1 for a hollow cylinder, where V is the volume of the cylinder, is calculated to be 2 × 10^15 m^2/N based on the measured noise-equivalent pressure, p_nep. Directivity patterns for several frequencies are shown in Fig. 6 for the x–z plane. The directivity patterns are omnidirectional in the x–y plane below the fundamental resonance of the hydrophone, while in the x–z plane the cylindrical transducer is directional even below the fundamental resonance. Note the differences in sensitivity, although these must be considered in light of the volume of the ceramic element in the hydrophone. Perhaps the most widely used reversible transducer for Naval underwater applications is the Tonpilz stack, which is a variation on the piezoceramic cylinder (12). This transducer is made from a stack of rings electroded on their flat surfaces and electrically connected in parallel; the rings can be poled either longitudinally or radially (3–3 or 1–3 mode) and are electrically insulated from each other and held together with a stress rod made of very stiff steel.
By varying the ring geometry, the number of rings, the static stress maintained by the stress rod, and the poling direction, the Tonpilz stack transducer can be designed with a wide range of voltage sensitivities, capacitances, and resonance frequencies and can serve as both acoustic source and hydrophone. Flextensional Hydrophones. An alternative family of transducer designs, most often used for acoustic sources but also usable as hydrophones, are not classic 1–3 or 3–3 mode transducers but are referred
Fig. 6. Directivity of NUWC Model H56 cylindrical tube PZT hydrophone for several frequencies (dimensions of ceramic element : OD—0.518 , ID—0.446 , L—0.375 ). (Figure reproduced from Ref. 11.)
to as flextensional hydrophones (13,14,15). Unlike the piezoceramic sphere and cylinder hydrophones, which use their own geometry to achieve a high level of acoustic sensitivity, the flextensional hydrophone uses a mechanical shell, generally made of a metal such as steel or brass, to transform hydrostatic stress into stress along one or more of the sensitive axes of a single piezoelectric plate or a stack of piezoelectric ceramic plates or rings. Recent flextensional designs achieve high sensitivity because hydrostatic stress can be converted so as to activate the contributions from two or even all three independent piezoelectric coefficients (d33, d31, and d15). The various responses then sum with the same sign, making the hydrophone sensitivity greater than is possible with a pure 33-mode or 31-mode design. Class I–V Flextensionals. The classification of the different flextensional transducer designs depends on the geometry of the outer shell and is described elsewhere. Class I flextensional transducers have "football-shaped" shells that are driven into resonance by a piezoelectric stack, while Class II transducers use a spherical or oval shell attached to a longer stack and can generate more power. Class III flextensional transducers use
Fig. 7. FFVS of cylindrical tube PZT hydrophone (NUWC-USRD Model H56). (Figure reproduced from Ref. 11.)
Fig. 8. Noise-equivalent pressure of cylindrical tube PZT hydrophone (NUWC-USRD Model H56). (Figure reproduced from Ref. 11.)
shells with two spherical cavities, which give the transducer a broader resonance. The most common flextensional transducer, Class IV, employs convex or concave oval-shaped shells with the ceramic stack mounted along the longest principal axis of the oval. Class V flextensional transducers have a much different design, with spherical or oval cap shells joined to a vibrating ring or disk. Another classification scheme is shown in Fig. 9; it characterizes the different devices according to which piezoelectric coefficient (d33, d31, d15) is chosen for amplification. In Fig. 9, n is the amplification factor and is often around a value of 3. An example of a Class V "flexi-extensional" or "flextensional" hydrophone is the "moonie" design developed by R. E. Newnham and coworkers (16). This type of design is shown in Fig. 9. A compressive hydrostatic stress causes the stress along the polar axis (z-direction) to be compressive, whereas the flexure of the shell causes the force on the transverse axis (x-direction) to be tensile. Thus, depending on the precise design of the element (the dimensions of the elements, the material used, and the shape of the metallic caps), the sensitivity of the hydrophone can be made quite large. The design can be adjusted for different applications by changing the dimensions of the ceramic and the metallic shell. Under large stresses, the stress in the ceramic can become large and tensile, resulting in fracture of the ceramic plate or in permanent deformation of the shells. Proper design must alleviate this problem and, as usual, will exchange sensitivity for pressure capability and mechanical robustness. Flextensional transducers such as the "moonie" have extremely high figures of merit on a per-volume basis, with gh dh on the order of 50,000 × 10^−15 Pa^−1 for some specific designs. The "moonie" has been used in geological hydrophone applications.
A similar design (17) transforms hydrostatic stress into shear stress in order to take advantage of the considerably higher shear-mode sensitivity of PZT (d15 > d33, d15 ≫ d31). This flexi-distortional device (Fig. 9)
Fig. 9. (a) Classification scheme of flexi-distortional piezoelectric composites. (Figure reproduced from Ref. 16.) (b) Details of a class V flexi-extensional device (the “moonie”). (Figure reproduced from Ref. 16.)
Fig. 10. Classification scheme of ceramic-polymer piezoelectric composites. (Figure reproduced from Ref. 2.)
has theoretically several times the sensitivity per unit volume of Class V flextensional designs such as the "moonie," with values of gh dh on the order of 180,000 × 10^−15 Pa^−1 (17).
New Piezoelectric Materials and Hydrophones The Cold War produced a need to develop hydrophones for submarine applications, such as hull-mounted and towed arrays for lower-frequency applications, which extended the capabilities of the large spherical array in the bow of submarines. The need to operate at lower frequencies is a result of the acoustic absorption due to magnesium sulfate ions in the ocean: this large, broad-band absorption reduces acoustic signal power and lowers the detectability of objects as frequency increases. Larger arrays were required for accurate detection, extending the frequency of operation and increasing the amount of data available to offset the longer time required to acquire and process data at lower frequencies. The weight of such large systems must be limited in order for the submarine to operate with sufficient crew, munitions, and supplies. Another advantage of larger hydrophones is that high-frequency fluctuations in pressure due to flow noise on the surfaces of the hydrophone are averaged out; thus, new hydrophones must provide satisfactory detection of acoustic signals at higher platform velocities. Large, lightweight, conformal planar hydrophones were therefore required for such arrays. In addition to the old requirements of high voltage sensitivity and adequate dielectric permittivity, a new requirement of these materials was low lateral sensitivity (small g31 and d31), in order to reduce the sensitivity of the hydrophone to the low-frequency flexural vibrations that are significant for large-area structures. New materials were required to make such hydrophones feasible. For this, new piezoceramics, piezoelectric polymers, and composites of polymers and piezoceramics were developed.
The latter two materials, though soft, are not necessarily limited in frequency and can also be used for high-frequency applications in fields such as medical acoustic imaging and acoustic nondestructive evaluation of structures and materials. For hydrophone applications they also offer a great deal of flexibility, since they can be easily shaped into different geometries. Lead Titanate Piezoceramic Hydrophones. Ceramics of PbTiO3 modified with a variety of dopants have been shown to have very small lateral coupling and nearly zero lateral piezoelectric coefficients (d31). Therefore, dh is very nearly equal to d33. The d33 coefficient is fairly low (∼70 pC/N) compared to PZT ceramics, however (18,19,20). The low lateral coupling is particularly useful in high-frequency applications, such as ultrasonic detectors and sources, since almost ideal plane waves can be generated with no coupling to transverse or shear modes. However, the material can also be used for other hydrophone applications, since the hydrostatic response is higher than that of conventional PZT hydrophones. These materials have moderate dielectric permittivities and are therefore straightforward to use in a number of applications without the need to mount amplifiers or transformers close to the element, particularly in array applications where a number
of ceramic elements are connected electrically in parallel. Rather simple low-frequency hydrophones have also been designed for planar array applications using fairly large-area ceramic rectangles arranged in a planar array enclosed in polymer (Edo-Western Flatpac) (21). The very low lateral piezoelectric coefficients result in negligible coupling between elements, which greatly simplifies the design of arrays. Another example of an application is a high-frequency transducer capable of both producing a high-frequency (0.2 MHz to 2 MHz) acoustic wave and detecting it (NUWC-USRD standard Model E8 transducer) (11). In such a transducer, the electronics for driving the piezoceramic element as a source are different from those used to detect signals, and suitable switching of the electronics must be provided. For a pulse-echo application, the switching can be done electronically, allowing the pulse to be sent and the echo then rapidly detected. Piezoelectric Polymer Hydrophones. Piezoelectricity can be induced in several ferroelectric polymer materials by electric poling, in a manner similar to piezoceramics. The most successful of these are polyvinylidene fluoride (PVDF) and its copolymer polyvinylidene fluoride–trifluoroethylene (PVDF–TrFE). However, electric fields an order of magnitude higher are required to pole these polymers, and therefore the thickness (t) of the material is limited by the dielectric breakdown strength (which is also an order of magnitude higher than that of PZT). Therefore, its voltage sensitivity,
is limited. These materials have substantial piezoelectric voltage coefficients, although their dielectric permittivities are low (∼13 ε0), resulting in low values of the piezoelectric d coefficients. However, they can be produced relatively inexpensively in large sheets and are therefore useful for large-area hydrophone arrays. Significant effects were first reported for PVDF in Japan by Kawai et al. (22). There has since been much work on PVDF processing, improving its properties and in particular making thicker materials possible (23). A significant breakthrough in processing was achieved at Raytheon Corporation (24), resulting in improved properties. The properties of the best PVDF and PVDF–TrFE polymers reported in the literature are listed in Appendix 3. PVDF–TrFE has a significant advantage for hydrophone applications: the material is biaxially drawn in order to achieve significant piezoelectric effects and is consequently piezoelectrically isotropic in the plane perpendicular to the polar axis. The unidirectional drawing process for PVDF orients crystallites within the films but results in piezoelectric properties that are anisotropic, as shown in Appendix 3. This complicates hydrophone design, since the different flexural modes in large-area hydrophones contribute undesirable response to the overall hydrophone sensitivity at frequencies below the transverse and thickness-mode resonances. Reducing the flexibility of the polymer by applying backing plates, thus stiffening the material against flexural vibrations, is essential in the design of large-area hydrophones, but this increases the frequency of the lateral modes. By using two PVDF sheets with the drawn axis of one sheet oriented perpendicular to that of the other, the flexural vibrations can be significantly reduced when the sheets are connected electrically in parallel.
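A rough order-of-magnitude estimate of the thickness-limited sensitivity of a PVDF sheet follows; the g_h value and thickness below are assumed, illustrative numbers, not datasheet values, and the simple product M0 = g_h·t is only the low-frequency thickness-mode approximation.

```python
import math

# Assumed, order-of-magnitude values for a poled PVDF sheet:
g_h = 0.10      # hydrostatic voltage coefficient, V*m/N (assumed)
t = 500e-6      # sheet thickness, m (limited by the achievable poling field)

m0 = g_h * t                              # low-frequency sensitivity, V/Pa
ffvs_db = 20.0 * math.log10(m0 * 1e-6)    # re-expressed in dB re 1 V/uPa
```

Because the sensitivity scales directly with thickness, the easier poling of the PVDF–TrFE copolymer, which permits thicker sheets, translates straight into higher achievable voltage sensitivity.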
The copolymer PVDF–TrFE is also easier to pole and can consequently be made into thicker sheets, allowing the designer more flexibility in trading higher voltage sensitivity for lower capacitance (25). The large flexibility of the polymer allows for hydrophones of different shapes as well. In particular, cylindrical hydrophones using PVDF have been designed with high voltage sensitivities. Forming the PVDF material with voids increases the voltage sensitivity even further, although the dielectric permittivity is reduced proportionately. An advantage of this material is that its acoustic impedance, Z = ρc = (ρ/βv)^1/2 (ρ being the density),
where c is the acoustic velocity and βv is its volume compressibility, can be made to match that of water. To an incoming acoustic signal, the hydrophone is then transparent and no reflections will occur. This reduces the
detectability of a submerged platform on which the hydrophones are mounted. Such a hydrophone has been developed (26). Piezoelectric-Polymer Composite Hydrophones. The piezoelectric polymers are, in essence, composite materials, since their microstructure consists of small crystalline segments of poled piezoelectric polymer joined together by amorphous polymer. The crystalline regions are weakly piezoelectric compared to the piezoelectric ceramics, since the spontaneous polarization of the material is low. Instead of relying on this weak piezoelectric effect, PZT ceramic, with its strong piezoelectric effect, can be combined with different polymers in a number of geometries. Such materials are referred to as piezoelectric composites, or piezocomposites, and their properties can be greatly varied to optimize the material for specific applications. This adds a new dimension to the design of hydrophones; before the piezocomposite, the designer had to choose among materials such as the four or five compositions of PZT, which varied by a factor of 2 to 3 in dielectric and piezoelectric properties and very little in elastic modulus. Furthermore, many of the geometries are amenable to fairly straightforward and accurate mathematical modeling, and materials for particular applications can be readily designed analytically. Many models have been developed over the past two decades that may be applied to design problems (27,28). Many of these piezocomposite materials have only recently been evaluated and utilized in new hydrophone designs, and very little information on these designs and their performance is yet available in the open literature. Classification Scheme of Ceramic-Polymer Composites. The different possible composite geometries were classified by R. E. Newnham according to how the two phases are connected (29). The possible structures are shown in Fig. 10.
The first number gives the number of directions in which the ceramic phase is connected with itself, while the second number is the number of directions in which the polymer phase is connected with itself. In Fig. 10, for the 1–3 composite, the ceramic phase (white) is connected with itself in only one dimension, whereas the polymer phase is interconnected with itself in all three dimensions (hence the designation 1–3). The 2–2 composite is simply a layered structure, with each phase connected to itself in two dimensions. Additional phases, such as voids (air), can be added to the polymer. For example, for the 1–3 piezocomposite, if these voids are isolated from each other, they are connected in zero dimensions and the composite is classified as 1–3–0. If stiffening fibers are placed in the polymer perpendicular to the PZT rods in a single direction, the material is termed a 1–3–1 composite. If stiffening fibers are placed perpendicular to the PZT rods in two dimensions, the classification is 1–3–2, and so forth. For simplicity, only composites consisting of two phases will be discussed, because these have been more highly developed and have, in fact, been commercially manufactured. 1–3 Piezocomposites. The 1–3 piezocomposites are the most widely used and have high-frequency applications in medical ultrasonics as well as in underwater acoustics and other acoustic applications (29,30,31). They were developed in the late 1970s and 1980s by R. E. Newnham and associates. The design flexibility of the material makes it very versatile for many applications. Because of this versatility, the material is starting to be applied in new areas, such as piezoelectric damping of structures, ultrasonic sources and detectors, and hydrophones. The material is now produced commercially (31) and should find numerous other applications.
Many properties of these materials, such as dielectric permittivity, piezoelectric properties, elastic properties, density, and the corresponding acoustic properties (such as resonant frequencies, mechanical damping, bandwidth, and acoustic impedance), can be widely varied by controlling the ceramic/polymer volume ratio, the dimensions of the ceramic rods, and the types of piezoceramic and polymer used. Both simple and rather elaborate mathematical models for controlling these various properties have been developed. These models aid in the design of an optimum material for a specific application. Some typical properties of several 1–3 PZT-polymer composites are listed in Appendix 3. These properties are typical and can be varied considerably by changing the components and the geometry of the composite in many ways. The noise performance of a hydrophone depends on the voltage sensitivity, capacitance, and electrical and mechanical losses. Using the calculations outlined previously, the noise in a 1–3 piezocomposite can be estimated and the FOM per unit volume, (p_nep^2 V)^−1, evaluated to be roughly 2.5 × 10^12 m^2/N, about two orders of magnitude lower than conventional PZT hydrophones but with lower density and greater design flexibility for larger-area applications.
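The leverage that composites give the designer can be seen by comparing the hydrostatic figure of merit dh·gh for monolithic PZT and for a notional 1–3 composite. All numbers below are illustrative assumptions in the spirit of Appendices 2 and 3, not measured data.

```python
eps0 = 8.854e-12     # permittivity of free space, F/m

# Monolithic soft PZT (approximate handbook-style values, assumed):
d33, d31, eps_r = 374e-12, -171e-12, 1700.0     # C/N, C/N, relative permittivity
dh_pzt = d33 + 2.0 * d31                        # hydrostatic d coefficient, C/N
gh_pzt = dh_pzt / (eps_r * eps0)                # hydrostatic g coefficient, V*m/N
fom_pzt = dh_pzt * gh_pzt                       # figure of merit dh*gh, Pa^-1

# Notional 1-3 PZT-polymer composite (assumed values):
dh_c, eps_c = 180e-12, 400.0
gh_c = dh_c / (eps_c * eps0)
fom_c = dh_c * gh_c
# The suppressed d31 contribution and lower permittivity of the composite
# raise dh*gh by well over an order of magnitude relative to the ceramic.
```

The calculation shows why decoupling the transverse stress path (raising dh) while diluting the permittivity (raising gh) multiplies through to a dramatically larger dh·gh.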
24
HYDROPHONES
The 1–3 material is being studied for use in large-area hydrophones (32) as well as in high-frequency applications such as acoustic imaging for mine hunting and active vibration damping. Because the material is new, its use in many naval applications is still classified, and few details of these applications are available in the open literature. At present, it appears that this material may replace conventional ceramics in a number of hydrophones and raise applications such as underwater acoustic imaging to new levels of performance. 0–3 Piezocomposites. The first true piezocomposites were explored in the beginning of the 1970s by Kitayama (33). The zero indicates that the piezoelectrically active phase (a powder) is not connected to itself in any direction. This composite can be considered a material because it may be subdivided into small portions and still retain consistent properties. The 0–3 piezocomposite was the first to be developed commercially, by H. Banno (34), and is produced commercially in Japan (35) (NTK Ceramics, NGK Sparkplug Corp.). It consists basically of 70 volume percent piezoelectric ceramic particles in a rubber matrix; the ceramic is a doped lead titanate, and the rubber in the commercial material is Neoprene. The 0–3 piezocomposite has significantly lower piezoelectric coefficients than the 1–3 composite, owing to the lower sensitivity of PbTiO3 and to the microstructure of the composite. The 0–3 composite may, however, offer advantages such as greater flexibility, near-zero lateral coupling, mechanical ruggedness, and a lower mechanical quality factor (higher damping). It has been evaluated for large-area hydrophone applications. The extremely low lateral coupling and high damping provide a significant bandwidth with flat sensitivity for such sensors, but at a sensitivity considerably lower than that of the 1–3 piezocomposite (31). Its advantageous mechanical properties would be of use in mechanical shock sensors, active damping applications, and rugged hydrophones.
3–3 Piezocomposites and Reticulated Ceramic. The 3–3 composite was the first to be developed in the United States (36), and several different methods of manufacture have been demonstrated (37,38). The structure consists of two materials completely interconnected in all three dimensions. A newer development is to elongate the random structure by stretching a preform on which the ceramic is deposited and then dissolving the preform, leaving an interconnected tubular structure termed "reticulated ceramic" (37). Stretching the preform aligns much of the structure along a single direction, making the 3–3 composite mechanically anisotropic, similar to a 1–3 composite. The ceramic is poled in the elongated direction, and the piezoelectric properties are remarkably similar to those of the 1–3 composite. One difference is that the reticulated ceramic is somewhat reinforced perpendicular to the polar direction by the ceramic itself. This stiffens the structure laterally, which affects the usable bandwidth compared to a 1–3 piezocomposite because the lateral modes are raised in frequency. For large-area wide-bandwidth applications of the 1–3 and 3–3 composites, the frequency range is limited by the lateral resonance modes in the hydrophone material. For very-low-frequency applications, the reticulated ceramic may have advantages, since its lateral mode is higher in frequency and a hydrophone can be designed for the frequency range below the lateral resonance. For large-area, large-bandwidth applications, however, the frequency range between the lateral modes and the thickness mode defines the usable bandwidth. The 1–3 piezocomposite can be made much softer in the lateral direction, thus lowering the fundamental lateral resonance frequency; because the thickness mode changes little, owing to the stiffness of the ceramic rods, the usable bandwidth is much larger. Higher-order lateral modes generally cause perturbations in the acoustic response but can be controlled by damping the vibrations.
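The usable-band argument can be made concrete with half-wavelength resonance estimates, f ≈ c/(2d), where c is the sound speed along the relevant direction and d the corresponding dimension. The sound speeds and plate dimensions below are illustrative assumptions (a stiff ceramic-like speed through the thickness, a soft polymer-like speed laterally), not values from this article:

```python
def half_wave_resonance(sound_speed, dimension):
    """Fundamental half-wavelength resonance, f = c / (2 d), in Hz."""
    return sound_speed / (2.0 * dimension)

thickness, width = 0.010, 0.050  # m: plate 10 mm thick, 50 mm across

# 1-3 composite: thickness mode governed by the stiff ceramic rods,
# lateral mode governed by the much softer polymer (assumed speeds).
f_thickness = half_wave_resonance(3800.0, thickness)  # ~ ceramic-like speed
f_lateral = half_wave_resonance(1000.0, width)        # ~ soft-polymer speed

# The usable wideband region lies between the fundamental lateral mode
# and the thickness mode.
print(f"lateral mode   ~ {f_lateral / 1e3:.0f} kHz")
print(f"thickness mode ~ {f_thickness / 1e3:.0f} kHz")
```

Softening the lateral direction pushes f_lateral down while the rod-stiffened thickness mode stays high, widening the band between them, which is the mechanism the text describes.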
The fundamental modes determine critical frequencies at which the sensitivity of the hydrophone changes drastically. A possible advantage of the 3–3 composite is that it should be less sensitive to static pressure changes because of the lateral reinforcement provided by the stiff ceramic. 2–2 Piezocomposites. High hydrostatic charge sensitivity was discovered in 2–2 piezocomposites, particularly if the ceramic is poled in the thickness direction and the ceramic plates are connected as shown in Fig. 11 (39,40). Simpler 2–2 piezocomposite designs are also possible but yield lower sensitivity. In this case the effective contributions of d33, d31, and d32 (= d31) can all add with the same sign, depending on the stiffness and Poisson's ratio of the polymer in the composite. If the electrodes of the plates are connected in parallel, the charges on the different plates add.

Fig. 11. Transverse mode 2–2 piezoelectric composite. (Figure reproduced from Ref. 39.)

Alternative connections of the plates in series or series-parallel combinations should result in higher voltage sensitivity but lower charge sensitivity, similar to tradeoffs used in the design of traditional hydrophone arrays. This offers considerable design flexibility. The bandwidth is limited at high frequency by the lateral resonance frequency, which is probably lowest in the y-direction, although this depends on the stiffness of the cover plates as well as on the polymer phase. At present, the frequency response of this type of transducer has not been analyzed or measured. In terms of sensitivity, this design is probably comparable to similar-sized spherical or tubular PZT hydrophones. Comparisons would have to be made on the basis of capacitance, resonant frequencies, usable bandwidths, depth capability, temperature stability, and directivity in order to judge its performance relative to more traditional hydrophone technologies for small point-hydrophone applications. Larger hydrophones are also possible but would involve significant manufacturing complications compared to other materials.
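The series/parallel tradeoff follows from elementary capacitor algebra: under the same pressure, each of n identical plates delivers the same charge q and open-circuit voltage v = q/C, so parallel wiring adds charges at constant voltage while series wiring adds voltages at constant charge. A minimal sketch with generic, illustrative numbers (not taken from this article):

```python
def connect_plates(n, c_plate, q_plate):
    """Capacitance, charge sensitivity, and voltage sensitivity for n identical
    piezoelectric plates wired in parallel or in series."""
    v_plate = q_plate / c_plate
    parallel = {"C": n * c_plate, "Q": n * q_plate, "V": v_plate}
    series = {"C": c_plate / n, "Q": q_plate, "V": n * v_plate}
    return parallel, series

# Four plates, 1 nF each, 10 pC per plate under some reference pressure.
par, ser = connect_plates(4, 1e-9, 10e-12)
# Parallel: charge adds, voltage unchanged. Series: voltage adds, charge
# unchanged, and the capacitance drops by a factor n**2 relative to parallel.
print(par)
print(ser)
```

The n² capacitance ratio between the two wirings is what matters when matching the element to preamplifier input impedance, which is the array-design tradeoff the text alludes to.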
Fiber-Optic Hydrophones

A new and radically different technology for detecting acoustic waves, as well as many other parameters (pressure, temperature, electric fields, magnetic fields, acceleration, rotational and linear displacements, velocities, and chemical compositions), has been developed in the past 20 years (41). This technology has been extensively reviewed elsewhere (41,42,43,44); only the hydrophone applications are considered here. The best optical hydrophones are based on detecting acoustically induced strains in an optical fiber by means of optical interferometry. Optical interferometry is a classic method of measuring differences in phase between two coherent lightwaves and can be used to measure a number of optical parameters to extraordinary accuracy; for instance, early very precise measurements of the speed of light were performed with a Michelson interferometer. The good performance of optical hydrophones therefore relies more on the detection scheme, interferometry, than on the hydrophone design and hydrophone materials. Recent development of
fiber-optic photonic devices, such as extremely stable low-noise lasers, virtually lossless fibers, stable photodetectors, efficient couplers, electrooptic modulators, Bragg cells, and numerous other optical devices, has made this technology competitive with a variety of traditional sensor technologies at very reasonable cost. Interferometric acoustic sensors can be based on Mach–Zehnder, Michelson, Fabry–Perot, or Sagnac configurations; which configuration is superior depends on the application. The vast majority of optical hydrophones have been based on the Mach–Zehnder interferometer configuration because of its relative simplicity and versatility. The basic concept of a Mach–Zehnder fiber-optic hydrophone is illustrated in Fig. 12. The output of a single coherent source, such as a laser, is divided by a beam splitter into two beams of lower intensity, which are coupled into two different optical fibers. One optical fiber is exposed to the acoustic pressure; the other is shielded from it. Typically, but not always, these fibers are of similar length, to provide a balanced optical configuration. The acoustic signal changes both the length of the fiber, through its elasticity, and the refractive index of the fiber material. The relative change in phase of the optical signal is then given by (39,40,41)
Δφ/φ = Sz − (n²/2)[(P11 + P12)Sr + P12Sz]

where the phase φ = nkL, L being the length of fiber,
and where

n = refractive index
k = wave vector = 2π/λ
λ = optical wavelength
Sz, Sr = strains along the length of the fiber and in the radial direction, respectively
Pij = photoelastic constants, Pij = −dni/(n³ dSj), where i denotes the direction in which the refractive index is measured and j the component of strain, as defined in Eq. (40)

Silica optical fibers are commonly used. To guide lightwaves through the fiber efficiently, the outer portion of the cylindrical fiber is typically doped with a few percent of any of several elements, which decreases the refractive index there and makes a very-low-loss waveguide. The fiber itself is relatively insensitive to pressure, particularly hydrostatic pressure, because of the high stiffness of silica and its low photoelastic constants. Two techniques are commonly used to amplify the strain to a level that is easily measurable by interferometric techniques. The first is to coat the fiber with a fairly soft polymer; commonly used coating materials are rubbers such as silicone, thermoset plastics, and ultraviolet (UV)-cured elastomers. The coating acts to transform hydrostatic stress into a uniaxial stress along the length of the fiber. The resulting stress and strain have been analyzed and modeled theoretically: in theory, material coatings can give several orders of magnitude increase in strain, but realistic geometries and polymer materials yield about an order of magnitude increase in sensitivity. This is sufficient for low-frequency applications where long lengths of fiber can be used. Its size limits this type of hydrophone to low-frequency applications such as planar arrays. An example of such a hydrophone is shown in Fig. 13.
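The phase relation defined above is easy to evaluate numerically. In this sketch the refractive index and photoelastic constants are commonly quoted fused-silica values, and the fiber length, wavelength, and strain level are arbitrary illustrative assumptions, none of them taken from this article:

```python
import math

# Commonly quoted fused-silica values (assumed here, not from the article).
N_SILICA, P11, P12 = 1.458, 0.121, 0.270

def fiber_phase_shift(length, wavelength, s_z, s_r, n=N_SILICA, p11=P11, p12=P12):
    """Optical phase change of a strained fiber:
    phi = n k L, with dphi/phi = Sz - (n^2/2)[(P11 + P12) Sr + P12 Sz]."""
    phi = n * (2.0 * math.pi / wavelength) * length
    return phi * (s_z - 0.5 * n ** 2 * ((p11 + p12) * s_r + p12 * s_z))

# 10 m of fiber at 1.3 um, a pure axial strain of 1e-9 (no radial strain).
dphi = fiber_phase_shift(10.0, 1.3e-6, 1e-9, 0.0)
print(f"phase shift ~ {dphi:.2e} rad")
```

Note that for pure axial strain the photoelastic term partially cancels the length change, so the net phase shift is smaller than the geometric nkL·Sz alone; this is why bare fiber is relatively insensitive and coatings are used to boost the strain.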
The hydrophone has a sensitivity of −318 dB re 1 µrad/Pa, and the noise floor was estimated at 1–3 µrad, giving it signal-to-noise performance similar to that of a piezocomposite hydrophone of similar dimensions with a 20% volume fraction of PZT. Unlike the piezocomposite transducer, however, it is limited to frequencies below 2 kHz.
Fig. 12. Mach–Zehnder fiber-optic hydrophone configuration. (Figure reproduced from Ref. 42.)
Fig. 13. Design of a flat fiber-optic hydrophone utilizing a soft polymer coating. (Figure reproduced from Ref. 43.)
The second technique for increasing strain in the fiber is to wrap the optical fiber around a compliant mandrel, which can take a variety of shapes but is again generally spherical or cylindrical (44). The sensitivity of such a hydrophone increases greatly as the compressibility of the mandrel increases. However, the greater compressibility limits the frequency range of the transducer, since the mandrel resonates at low frequency. The pressure capability is also limited, since highly compressible materials tend to stiffen significantly under large hydrostatic pressures, causing substantial degradation in hydrophone sensitivity.
Conclusions

From World War II until around 1975, conventional piezoelectric ceramic technology dominated most hydrophone engineering. In the last 20 years, a number of new technologies have been developed that will
probably complement the conventional hydrophone technology rather than replace it. These new materials and design strategies will make possible many new acoustic technologies for fields as diverse as mineral and oil exploration, medicine, active vibration damping, and materials characterization, as well as for traditional airborne acoustic and hydroacoustic applications.
Appendices: Some Typical Dielectric, Piezoelectric, and Elastic Properties of Hydrophone Materials Appendix 1
Appendix 2
Appendix 3
BIBLIOGRAPHY
1. L. E. Kinsler et al., Fundamentals of Acoustics, 3rd ed., New York: Wiley, 1982.
2. M. Junger and D. Feit, Sound, Structures, and Their Interaction, 2nd ed., Cambridge, MA: MIT Press, 1986.
3. J. F. Nye, Physical Properties of Crystals, Oxford, UK: Clarendon Press, 1985, chap. 7.
4. J. W. Young, Optimization of acoustic receiver noise performance, J. Acous. Soc. Am., 61: 1471–1476, 1977.
5. R. J. Urick, Principles of Underwater Sound, 2nd ed., New York: McGraw-Hill, 1975.
6. F. Jona and G. Shirane, Ferroelectric Crystals, London: Oxford Press, 1962.
7. K. H. Hellwege and A. M. Hellwege (eds.), Landolt-Börnstein: Numerical Data and Functional Relationships in Science and Technology, Berlin: Springer-Verlag, 1981, vol. 16.
8. S. E. Park and T. R. Shrout, Characteristics of relaxor-based piezoelectric single crystals for ultrasonic transducers, IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 44: 1140–1147, 1997.
9. B. Jaffe, W. R. Cook, and H. Jaffe, Piezoelectric Ceramics, New York: Academic Press, 1971.
10. A. A. Ananeva, Ceramic Acoustic Detectors, New York: Consultants Bureau, 1965.
11. Standard Underwater Transducers Catalog, Naval Underwater Warfare Center, Newport, RI.
12. D. F. McCammon and W. Thompson, Jr., The design of Tonpilz piezoelectric transducers using nonlinear goal programming, J. Acous. Soc. Am., 68: 754–757, 1980.
13. K. D. Rolt, History of the flextensional transducer, J. Acous. Soc. Am., 87: 1340–1349, 1990.
14. L. H. Royster, Flextensional underwater transducer, J. Acous. Soc. Am., 45: 671–685, 1969.
15. R. A. Nelson, Jr. and L. H. Royster, Development of a mathematical model of class V flextensional transducers, J. Acous. Soc. Am., 49: 1609–1620, 1970.
16. K. Onitsuka et al., Metal-ceramic composite transducer, the 'Moonie', J. Intell. Mater. Syst. Structures, 6: 447–455, 1995.
17. W. B. Carlson et al., Flexi-distortional piezoelectric composites, Ferroelectrics, 188: 11–20, 1996.
18. Y. Yamashita et al., (Pb,Ca)((Co1/2W1/2),Ti)O3 piezoelectric ceramics and their applications, Jpn. J. Appl. Phys., 20, Suppl. 20-4: 183–187, 1981.
19. W. Wersing, K. Lubitz, and J. Mohaupt, Anisotropic piezoelectric effect in modified PbTiO3 ceramics, IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 36: 424–433, 1989.
20. Product literature for modified PbTiO3 composition EC-97, Edo-Western Corporation, Salt Lake City, UT.
21. Flatpac Hydrophone, Edo-Western Corporation, Salt Lake City, UT.
22. H. Kawai, Piezoelectricity of poly(vinylidene fluoride), Jpn. J. Appl. Phys., 8: 975, 1969.
23. R. G. Kepler, Ferroelectric, pyroelectric, and piezoelectric properties of polyvinylidene fluoride, in H. S. Nalwa (ed.), Ferroelectric Polymers, New York: Marcel Dekker, 1995, pp. 183–232.
24. R. H. Tancrell et al., PVDF piezoelectric polymer: Processing, properties and applications, in M. McCollum, B. F. Hamonic, and O. B. Wilson (eds.), 3rd Int. Workshop Transducers Sonic Ultrason., Lancaster, PA: Technomic, 1994, pp. 103–112.
25. T. R. Howarth and K. M. Rittenmyer, Transduction applications, in T. T. Wang, J. M. Herbert, and A. M. Glass (eds.), The Applications of Ferroelectric Polymers, Glasgow: Blackie, 1988, pp. 735–770.
26. J. M. Powers, M. B. Moffett, and J. McGrath, A PVDF ρc hydrophone, J. Acous. Soc. Am., 80: 375–381, 1986.
27. W. Cao, Q. M. Zhang, and L. E. Cross, Theoretical study on the static performance of piezoelectric ceramic-polymer composites with 1–3 connectivity, J. Appl. Phys., 72 (12): 5814–5821, 1992.
28. Q. M. Zhang et al., Characterization of the performance of 1–3 type piezocomposites for low frequency applications, J. Appl. Phys., 73 (3): 1403–1410, 1993.
29. R. E. Newnham et al., Composite piezoelectric transducers, Mater. Eng., 2: 93–106, 1980.
30. T. R. Gururaja et al., in L. M. Levinson (ed.), Electronic Ceramics, New York: Marcel Dekker, 1987, pp. 92–128.
31. L. J. Bowen et al., Design, fabrication, and properties of Sonopanel™ 1–3 piezocomposite transducers, Ferroelectrics, 187: 109–120, 1996.
32. J. Bennet and G. H. Hayward, Design of 1–3 piezocomposite hydrophones using finite element analysis, IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 44: 565–574, 1997.
33. A. Kitayama, Flexible piezoelectric materials, Bull. Ceram. Soc. Jpn., 14 (3): 209–214, 1979.
34. H. Banno and S. Saito, Piezoelectric and dielectric properties of composites of synthetic rubber and PbTiO3 or PZT, Jpn. J. Appl. Phys., Suppl. 22-1: 67–69, 1983.
35. NTK Ceramics, a subsidiary of NGK Sparkplug Corp., Nagoya, Japan.
36. R. E. Newnham, D. P. Skinner, and L. E. Cross, Connectivity and piezoelectric-pyroelectric composites, Mater. Res. Bull., 13: 525–536, 1978.
37. K. Rittenmyer et al., 3–3 piezoelectric composites, Ferroelectrics, 41: 189–195, 1980.
38. M. J. Creedon and W. A. Schulze, Axially distorted 3–3 piezoelectric composites for hydrophone applications, Ferroelectrics, 153: 333–339, 1994.
39. Q. M. Zhang, H. Wang, and L. E. Cross, A new transverse piezoelectric mode 2–2 piezocomposite for underwater transducer applications, IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 42: 774–784, 1994.
40. Q. M. Zhang et al., Piezoelectric performance of piezoceramic-polymer composites with 2–2 connectivity—a combined theoretical and experimental study, IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 41: 556–564, 1994.
41. T. G. Giallorenzi et al., Optical fiber sensor technology, IEEE J. Quantum Electron., QE-18: 626–665, 1982.
42. J. Bucaro et al., in W. P. Mason and R. N. Thurston (eds.), Physical Acoustics, New York: Academic Press, 1982, vol. 16, pp. 385–455.
43. N. Lagakos et al., Planar flexible fiber-optic acoustic sensors, J. Lightwave Technol., 8 (9): 1298–1303, 1990.
44. A. Dandridge and A. D. Kersey, in Proc. SPIE Conf. Fiber Optics Laser Sensors VI, SPIE 985: 34–52, 1988.
45. R. S. Bobber, Underwater Acoustic Measurements, Los Altos, CA: Peninsula Publishing, 1988.
46. L. E. Kinsler et al., Fundamentals of Acoustics, 3rd ed., New York: Wiley, 1982.
47. J. M. Powers, Long range hydrophones, in T. T. Wang, J. M. Herbert, and A. M. Glass (eds.), The Applications of Ferroelectric Polymers, New York: Chapman and Hall, 1988, pp. 118–161.
KURT M. RITTENMYER
WALTER A. SCHULZE
Alfred University
Wiley Encyclopedia of Electrical and Electronics Engineering

Oceanographic Equipment

Standard Article
Frank M. Caimi (Harbor Branch Oceanographic Institute) and Syed H. Murshid (Florida Institute of Technology, Fort Pierce, FL)
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5403
Article Online Posting Date: December 27, 1999

The sections in this article are: Oceanographic Instrument Design Criteria; Basic Instrument Systems; Current Measurement; Pressure; Acoustic Transducers/Hydrophones; Magnetometers; Navigational Sensors; Positioning, Tracking, and Survey Systems.
OCEANOGRAPHIC EQUIPMENT
The oceans consist of nearly 1.4 billion km³ of salt water and account for nearly 97% of the free water on earth (1). The great volumes of water in the oceans influence the earth's climate by storing, absorbing, transporting, and releasing water, heat, and trace gases. Predictions of future climate conditions depend on understanding the processes that control ocean circulation and water mass formation. The goal of oceanography in general, and physical oceanography in particular, is to develop a quantitative understanding of the physical processes of the ocean. Important processes include circulation, mixing, waves, energy flux transfer, and momentum, as well as the production and distribution of chemical and biological substances within the ocean and across its boundaries. Addressing these problems requires sustained large-scale observations of the world oceans, which can only be achieved by employing and advancing measurement and computation technology. The design and deployment of a global observation system is an important but difficult task, as such a system would require both existing measurement parameters and observations that differ from the routine. To achieve these scientific objectives, and to make more comprehensive observations, oceanographers must use both proven methods and new technologies. These include measurements based on electronic, acoustic, and optical sensing methods; measurements made from volunteer observing ships; images from satellites; and observations from buoys. The data may consist of electrical, optical, acoustic, chemical, and other physical parameters. The timeliness of the measurements, the data volume, and the sampling density are obvious factors affecting the scientific utility of the data-acquisition process. Thus, data communications plays an important role in oceanography, so much so that it can limit the sampling density.
There has been a distinct trend to improve the density of sampling to better understand the effects of the oceans on world climate and other large-scale processes, so it is fair to conclude that tomorrow's physical oceanography will emphasize oceanographic sensor development, telemetry, and communications.
OCEANOGRAPHIC INSTRUMENT DESIGN CRITERIA

The design of oceanographic instruments is a complex subject. Issues taken for granted in the laboratory may be a luxury aboard ship at the ocean surface. Oceanographic instrument design must take into account a number of parameters, including the poor optical properties of the ocean: visibility rarely exceeds 30 m (2). Generally, operators of oceanographic instruments cannot see the device they operate, as the instrument packages are generally lowered from the surface and lie at the end of a cable thousands of feet away from the operator; hence the instruments must be designed to operate unattended. Other problems are caused by the chemical composition of ocean water and by biological fouling. Any material immersed in the ocean for a long time is vulnerable to corrosion and tends to become an attractive area for many different organisms. The choice of sensor and type of measurement depends on environmental and ambient conditions. Small salt particles present in the humid atmosphere tend to corrode electrical contacts and connections at a much faster rate than is usual
on land. Voltage and frequency variations of shipboard power as compared to shore-based power necessitate more stringent electrical specifications. In contrast, submersible sensors and instruments have an entirely different set of requirements. Isolation from continuous power sources requires an energy-conserving design. The very high pressures associated with ocean depths are leading to the use of new materials and new design concepts. Vibration and platform motion associated with ships and buoys can occasionally produce effects that render even well-designed instruments useless. In summary, most parameters measured in the natural environment are not homogeneous in either time or space and are therefore subject to variability with respect to both frames of reference. The instruments of tomorrow's global observation system will incorporate state-of-the-art technology and the latest knowledge and information in physical oceanography, and they must be capable of interfacing with the best modeling and computing resources. In addition to the aforementioned design hurdles, the drive to understand ocean processes has led to increased attention to the scale of measurements. Microstructural effects have been observed and are believed to be important in understanding various ocean processes. The challenge then is to make fine-scale measurements and use them to "ground truth" high-fidelity physical models that are being developed concurrently. Such modeling efforts are now common as a result of the advanced computational technology that is now available.
BASIC INSTRUMENT SYSTEMS

Sensing instruments or instrument systems attempt to convert some parameter of interest into a quantity that can be easily interpreted by the user. An instrument system generally comprises all or some of the following components:

1. A sensor or transducer that converts a measurand (an environmental parameter) into an electrical, mechanical, chemical, or optical signal
2. A translator that converts the signal output of the sensor into a convenient (generally electrical) form
3. A signal processor or analyzer that enhances the information content of the translator's output by electronically processing it
4. A readout or data display system that converts this output into easily understandable terms

At times, items 2 and 3 are lumped together under the name of signal conditioner. Some combination of these four components will render information about the environment in a fashion that can be readily interpreted by the observer or by a computing system. A communication link, such as a wire, radio, or acoustic link, must be provided for transmission of the signal information between the components listed.

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.

Instrument Characterization

Every instrument can be characterized in terms of a number of desirable properties, and every design attempts to incorporate them. Some of these properties can be summarized as follows:

1. Accuracy. The ability of an instrument to reproduce or describe a measurand or an environmental condition within known limits of error.
2. Sensitivity. The ability of an instrument to detect and measure small changes in the measurand. Sensitivity may depend on the measurand and the environmental characteristics.
3. Repeatability. The ability of an instrument to produce consistent output for the same set of parameters.
4. Ruggedness. The ability to withstand shocks and manhandling and still continue to operate within specifications.
5. Durability. The ability of an instrument to last a long time with minimum maintenance and still properly perform its intended functions.
6. Convenience. The ability of an instrument to be fully functional with minimum attention from the operator.
7. Simplicity. The ability of an instrument to be easily used and maintained, without requiring a crew of engineers to operate it.
8. Ease of Operation. The ability of an instrument to be easy to operate and understand, both in terms of the concept and the manner in which the output is represented.
9. Reasonable Cost. The cost of an instrument should be as low as possible, for obvious reasons.

Every instrument design should strive to incorporate as many of the above criteria as possible. There may be cases where the requirements appear to contradict each other; for instance, it may be very difficult to design an instrument that is extremely sensitive and accurate without sacrificing simplicity, ease of operation, and low cost. In such cases the instrument designer has to make trade-offs, deciding which characteristics are most important and must be retained and which are less important and can be sacrificed before finalizing the design.
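The four-component chain described under Basic Instrument Systems can be sketched as a toy processing pipeline. Everything here is hypothetical and purely illustrative (the linear sensor response, the 12-bit translator, and the moving-average filter are assumptions, not from this article):

```python
# A minimal sensor -> translator -> signal processor -> readout chain.

def sensor(measurand_celsius):
    """Transducer: environmental parameter -> raw electrical signal (volts).
    Hypothetical linear response: 10 mV per degree C plus a 0.5 V offset."""
    return 0.01 * measurand_celsius + 0.5

def translator(raw_volts):
    """Convert the raw signal to a convenient form: a 12-bit count over 0-5 V."""
    return round(raw_volts / 5.0 * 4095)

def signal_processor(counts, history):
    """Enhance information content: a simple 4-sample moving-average filter."""
    history.append(counts)
    return sum(history[-4:]) / len(history[-4:])

def readout(filtered_counts):
    """Convert the processed output back into easily understandable terms."""
    volts = filtered_counts / 4095 * 5.0
    return f"{(volts - 0.5) / 0.01:.1f} degC"

history = []
for t in [14.9, 15.0, 15.1, 15.0]:
    display = readout(signal_processor(translator(sensor(t)), history))
print(display)  # averaged reading after four samples
```

Items 2 and 3 here (translator and signal processor) are exactly the pair the text notes are often lumped together as a signal conditioner.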
Oceanographic Instruments

Common oceanographic instruments described in this text include conductivity meters, turbidity meters, salinometers, current meters, thermometers, pressure/depth meters, and acoustic sensors. Most of these sensors can be categorized as either acoustic or nonacoustic devices. Examples of acoustic sensors include hydrophones (underwater microphones), sidescan sonar, passive sonar, and so on, whereas magnetometers, gyroscopes, accelerometers, conductivity meters, and the like represent the nonacoustic type. Generally, there is a trend to develop instrumentation that is robust for long-term deployment. This is now particularly true for remote water monitoring of nearshore or inland areas that may suffer the effects of pollution from agricultural runoff, pesticide spraying, or fresh water dilution. Therefore, sensing techniques are being developed that require little maintenance and infrequent calibration. In some cases, calibration is done unattended under microcomputer control at periodic intervals; in others, it is still done by the user prior to deployment.
Research and development activities are emphasizing the detection of fine-scale or low-observable effects, as well as the measurement of new parameters. In this regard, new sensors using the latest technologies are being developed, and in some cases modified, for in-water use. While the trend toward higher sensitivity and accuracy is ongoing, it has become necessary to develop means for sensing new observables such as trace metals, organic chemicals, and so on. The direct adaptation of analytical laboratory instrumentation to in situ sensor suites has been traditional for many oceanographers, and government funding continues to be available to adapt laboratory analytical techniques to in situ instrumentation. The viability of these efforts has been brought about by the rapid advancement of microcomputer processing capability, the miniaturization of electronic components, and the reduction of required energy or power for electronic systems. In some cases it is the sensing technology itself that becomes viable through breakthrough efforts in other fields. Examples are the use of fiber optics for purposes other than communications and the advancements in DNA molecular research that now allow specific sensors to be made for certain biological agents or species. In the case of fiber optics, novel fiber structures coupled with high-sensitivity detection techniques and negligible signal attenuation make fibers very attractive for communications, as well as for detection and sensing of many different parameters. As a result, fiber optics is generating tremendous interest among researchers, particularly in the US Navy, which has fully realized this potential and is actively encouraging efforts to study and develop fiber optic sensors.
Sensor types have been demonstrated for acoustic pressure waves, seismic activity, electromagnetic radiation, strain and structural failure in composites, linear and rotational acceleration, chemical species, biological agents, and so forth. The motivation for using an all-glass approach (instead of wires) is obvious from the standpoint of electromagnetic interference, thermal operating range and, in some cases, complexity. In navigation, for example, the gyrocompass is a mechanically complex device compared to the fiber optic gyro (FOG). It can safely be predicted that fiber optics will play a major role in the oceanographic instruments of tomorrow. Some of the classical measurements taken by oceanographers are water conductivity, turbidity, salinity, current, depth (pressure), and temperature. Common systems used at sea employ acoustic transducers, hydrophones, seismometers, magnetometers, accelerometers, gyro and magnetic compasses, as well as camera systems, Light Detection and Ranging (LIDAR), and other laser imaging systems.

Conductivity Measurement

The electrical conductivity of seawater depends on the temperature of the water and the amount of dissolved solids (salts) present. In other words, the electrical conductivity varies as a function of salinity for different temperatures. Unfortunately, this variation is nonlinear, giving a different incremental value of conductivity for the same salinity at different temperatures. Therefore, the effects of temperature must be negated if conductivity is to be used as a measure of salinity. The platinum resistance thermometer has a response curve that is highly suited for the compensation of temperature as required by the conductivity-to-salinity relationship. As a result, platinum resistance thermometers are commonly used for compensation.

Traditionally, conductivity measurement instrumentation has been designed according to two different methods. One uses an electrode-type cell that forms part of an ac bridge network. Bare-metal electrodes form the basic cell structure that contacts the sample volume. Figure 1 shows the schematic construction of a cell-type conductivity meter. Changes in cell resistance unbalance the bridge and create a proportional signal across the bridge output. Unfortunately, even small electrode fouling can produce uncertainties on the order of the desired accuracy, especially at depth. Cell geometries have been devised that reduce sensitivity to fouling, yet the small-diameter tube required for adequate sensitivity leaves this cell susceptible to drift from sediment accumulation and chemical accretions. As a result, the method is not particularly suited for long-term, deep-water oceanography, despite its simpler and comparatively inexpensive design. Still, with proper care, this design is useful for profiling and for short-term usage in shallow waters.

Figure 1. Schematic construction of an electrode-type cell forming part of an ac bridge (bridge resistances R1–R4, excitation Vin, bridge output, and the conductivity cell).

A preferred method for shallow-water deployment in biologically active waters uses an inductively coupled cell in which the seawater forms a single-turn current loop between an excitation coil and a secondary pick-up coil. In this design, electrodes are not necessary, and the water need not contact any metal surface, so fouling and corrosion are not an issue. Physically, the primary is wound on a toroid that is located in proximity to the secondary. The toroid axes are aligned to allow the seawater to create a single-loop coupler, as shown in Fig. 2. A change in water conductivity changes the electrical resistance in series with this loop, causing a proportional change in the magnetic flux coupled to the secondary signal toroid. The secondary provides an output ac signal that is proportional to the seawater conductivity. A comparison of the two conductivity measurement approaches indicates that polarization effects at the electrodes for the system of Fig. 1 require sine-wave excitation frequencies of at least a kilohertz. Furthermore, phase shifts in the bridge network can produce errors. This is particularly true for remote measurements (3). The inductive system of Fig. 2 allows a direct conversion from cell conductance to frequency by making the cell an integral part of the frequency-control section of an oscillator. The achievable stability provides some clear advantages for systems intended for high-accuracy measurement and long-term deployment.

Turbidity Meters
T1 Vin
83
T2 Inductive loop of water
Figure 2. Construction of inductively coupled cell.
Vout
Originally, the term turbidity may have referred to the effects of turbulence near the sea floor on the suspension of particulate material. More recently, turbidity has been used as a general term for the measurement of the visible optical properties of water. When water is "turbid," it scatters light and makes it difficult to see at a distance. One of the first methods developed to measure water turbidity was the Secchi Disk. The method uses a white disk that is lowered to the depth at which it seems to disappear. This depth, called the Secchi Depth, is best used as a measure of visibility, which, in turn, is related to "turbidity." The method was first noted by a ship captain who observed a white dish trapped in a net. The observation was recorded and investigated years later by Commander Cialdi, head of the papal navy in 1865. Cialdi enlisted the help of Professor C. A. Secchi a year later, and together they published a complete report. Although the method seems unexacting, it provides results that are mathematically sound in relation to other, more modern measurement techniques.

Another means of estimating turbidity, used by geologists, involves filtering a volume of water and weighing the remaining solids to develop a mass-per-unit-volume measure. Naturally, the particle size distribution is unknown, but it is strongly related to the diffractive and scattering properties affecting the visibility characteristics of the medium. Nevertheless, the method is useful in a given geographic area where the particle size distribution remains relatively constant owing to suspension from the seabed or to runoff. Rather than further discuss the many methods used to estimate a general parameter such as turbidity, it is preferable to describe the types of measurements used to characterize the optical properties of water.
Understanding the relationships between the many optical properties has been an ongoing research topic that is important to (1) interpretation of satellite-derived images of the sea surface, (2) defense-related detection and reconnaissance systems, and (3) the understanding of radiative transfer processes associated with the ocean itself. Generally, the optical parameters are categorized as either inherent or apparent types; that is, according to whether the property changes with the radiance distribution at the surface or elsewhere. Inherent properties, those not depending upon radiance distribution, are attenuation, refractive index, absorption, scattering, and so on, each having an associated coefficient or parameter.
Attenuation is probably the most used optical measure of water clarity. It can be measured in several ways, according to two definitions. Beam attenuation refers to the loss of optical flux as a collimated beam of light passes through a medium. Diffuse attenuation refers to the reduction of irradiance from a diffusely illuminating light source, observed through the medium with a detector designed to measure irradiance over a 2π steradian angular field. Although the two definitions appear similar, the diffuse method includes a greater amount of scattered flux and therefore produces a smaller attenuation coefficient. In either case, light lost through absorption and scattering (both elastic and inelastic) is measured.

The most common meter used to measure attenuation is the beam attenuation meter, or transmissometer. The meter, shown in Fig. 3, consists of a white or monochromatic light source collimated to a high degree (usually less than several milliradians) and a detector with a similarly small angular acceptance. The transmissometer is usually designed with a beam diameter sufficient to include a statistically significant number of the largest particles over the measurement interval and path length used. Path length and beam diameter are typically 25 cm or 1 m, and 25 mm, respectively. Transmissometers have been designed using multiple- or single-wavelength lasers, as well as incandescent, arc-lamp, flashtube, and other white light sources. Beam attenuation coefficients generally range from 0.05/m for the clearest waters to 0.3/m for coastal regimes and greater than 1/m for estuaries.

Figure 3. Optical beam transmissometer (source, collimating lens, optical windows, water path, and detector).

An equation describing the beam attenuation coefficient c is typically given as

I = I₀ exp(−cz)

where I₀ is the emitted flux (or irradiance) in the collimated beam and I is the received flux at distance z through the medium. The units of c are therefore m⁻¹. The beam attenuation coefficient is actually made up of several separate terms:

c = c_w + c_p + c_d

The subscripts w, p, and d refer to the contributions from water, particulate matter, and dissolved substances, respectively. Each term can be further partitioned into contributions from scattering and absorption according to the definitions of the scattering coefficient b and the absorption coefficient a:

c = a_w + b_w + a_p + b_p + a_d

Another useful fact is that, for a given type of particulate material, the attenuation coefficient c_p is directly proportional to the particle concentration expressed as mass per volume. As might be expected, there is no scattering term for the dissolved matter. Because of the difficulty of measuring light scattered over a solid angle of 4π steradians, it has been customary to measure the absorption coefficient and subtract it from the total attenuation c in order to estimate b, a parameter useful in predicting image quality and satellite remote-sensing performance. Scattering meters have been designed but are usually cumbersome to use and calibrate. They are typically designed for operation at a fixed angle (small forward angles or 45°), at free angles (separate angles over the entire angular range), or as integrating meters (over an angular range suited to measuring b directly). In addition, there exists no exacting standard for "clear water," so the user of a beam transmissometer must rely partly on relationships computed from theory, or on constants provided by the manufacturer, for calibration.

The Secchi Depth Z_D is related to the beam attenuation coefficient in an approximate manner independent of scattering:

Z_D ≈ 7/c

A Secchi disk is generally white (reflectance unspecified) and about a foot in diameter.

The relationship defining the diffuse attenuation coefficient is obtained from successive measurements of irradiance at different distances from the source. The diffuse attenuation coefficient K is defined in terms of the irradiances I_Z1 and I_Z2 measured at two depths Z₁ and Z₂ (Z₂ > Z₁) with a submersible radiometer. Often, solar radiation is used as the source, although other methods using lamps have been devised:

I_Z2 = I_Z1 exp[−K(Z₂ − Z₁)]

A challenge in making this measurement is to obtain a physically stationary depth and a temporally stationary irradiance measure as the radiometer is relocated.
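Both attenuation definitions above reduce to logarithms of flux ratios, so they are easy to evaluate numerically. The sketch below uses hypothetical readings (a 25 cm transmissometer path with 90% transmittance, and an irradiance that halves between two depths); the function names and values are illustrative, not from the original text.

```python
import math

def beam_attenuation(I0, I, z):
    """Beam attenuation coefficient c (1/m) from I = I0 * exp(-c z)."""
    return math.log(I0 / I) / z

def diffuse_attenuation(I_z1, I_z2, z1, z2):
    """Diffuse attenuation K (1/m) from irradiances at two depths (z2 > z1)."""
    return math.log(I_z1 / I_z2) / (z2 - z1)

# Hypothetical transmissometer reading: 25 cm path, 90% transmittance
c_beam = beam_attenuation(1.0, 0.90, 0.25)     # ~0.42 /m, a coastal-like value

# Hypothetical radiometer profile: irradiance halves between 5 m and 10 m depth
K = diffuse_attenuation(1.0, 0.5, 5.0, 10.0)   # ~0.14 /m
```

Note that the same fractional loss over a shorter path yields a proportionally larger coefficient, which is why transmissometer path length must be matched to the expected water clarity.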
Measurements are therefore sometimes made with a surface-located solar reference radiometer that is used to normalize the readings taken at depth. In addition, surface waves disturb the readings when the radiometer depth is less than several attenuation lengths. A depth gauge is usually added to the instrument suite to allow a more precise estimate of the measurement depth, and a profile is taken against depth to allow measurement of the radiance slope versus depth.

Another relationship suggested by physical principles is the proportionality between light backscattered from the propagating light field and the concentration of suspended solids. A backscatter meter provides an illumination source and a sensitive detector arranged to view the illuminated volume from the source location. Care must be taken in the design to reduce light backscattered from surfaces such as windows and from foreign objects in the volume. Novel designs using infrared semiconductor sources and synchronous detectors are now available. If suspended particle mass is of interest, the measurement range is substantially better than that of the transmissometer: mass per unit volume ranges from 20 μg/L for very clear water to over 10 mg/L for the extreme turbidity associated with floods and storms.

Recent studies of optical parameters have concentrated on the development of models to describe the relationship of backscattered light at multiple wavelengths to biological and physical processes. These models have been refined and used to interpret satellite data for the purposes of monitoring temperature, ocean circulation, bioactivity (chlorophyll content), water depth, and so on. Instruments have been designed to measure absorption, elastic scattering (at the same wavelength), and inelastic scattering (wavelength-shifted Raman and Brillouin processes) (4).

Salinity Measurement

In 1901, salinity was initially defined as the gram weight of dissolved solids in 1000 g of seawater. Shortly thereafter, new definitions arose defining salinity in terms of chlorinity, the amount of chloride ion in parts per thousand (ppt). Later, this definition was changed to relate chlorinity to salinity measured by means of electrical conductivity. Salinity can range from near zero for fresh water to 36 ppt for seawater. The required measurement accuracy is determined by the application and is usually specified as better than 0.02 ppt for oceanic density studies. The measurement of salinity can be performed in a number of ways:

• Chemical titration
• Density
• Index of refraction
• Velocity of sound
• Conductivity
The first two are uncommon, as they do not lend themselves readily to direct measurement (5). Density and refractive index are quite sensitive to the environmental effects of temperature and pressure, but the latter is useful for high-resolution measurement of microstructural salinity layering: resolutions better than 0.01 ppt have been achieved with refractive-index techniques over spatial scales of 1 cm or less. The required resolution in refractive index is approximately 2 ppm (parts per million) for a salinity resolution of 0.01 ppt. Chemical titration techniques are difficult to use in situ. Acoustic (sound velocity) sensing devices lack the accuracy needed to resolve changes in salinity properly. Similarly, density is of little practical use for the measurement of salinity, as salinity has only a second-order effect on it.

The classical method of measurement is with a CTD (conductivity, temperature, and depth) meter. Electrical conductivity has a first-order dependence on salinity and is therefore much more sensitive than any other measurable quantity. Electrical conductivity is also sensitive to temperature and, to a lesser degree, pressure, but these effects are no worse than for other methods of sensing salinity. Furthermore, electrical conductivity can be measured directly by electrical means; it is therefore considered the most appropriate method. The use of a single inductively coupled conductivity sensor, together with temperature and pressure sensors connected in a single electrical bridge configuration, was demonstrated to produce an accurate salinity readout as early as the late 1960s. Empirical equations relating seawater salinity, temperature, pressure, and electrical conductivity started evolving during the same period with the original development of the Ribe-Howe equation (6). It was found that resolutions of 0.01 S/m in conductivity and 0.01 °C in temperature were required for a salinity resolution of 0.01 ppt. Even today, this accuracy is difficult to maintain for long periods in moored sensor arrays without frequent calibration, but it is easily achievable for short-term measurements.

Ocean-going instrumentation often uses two approaches for computing salinity. The first separately records conductivity, temperature, and pressure, and then computes salinity from these variables. The second combines the outputs of the conductivity, temperature, and pressure sensors electronically, such that the output registers salinity alone. Figure 4 illustrates the basic concept of using conductivity to obtain salinity. The second approach can reduce the accuracy requirement if a telemetry link is used, and it reduces the number of telemetry channels from three to one.

Figure 4. Use of conductivity to obtain salinity (temperature and pressure corrections summed with the conductivity signal).

The relationship between salinity and chlorinity is given by

S = 1.80655 · Cl

where each quantity is measured in parts per thousand (ppt). Since World War II this definition has been abandoned in favor of one in terms of electrical conductivity:

S = −0.08996 + 28.2972R + 12.80832R² − 10.67869R³ + 5.98624R⁴ − 1.32311R⁵

The parameter R is defined as the ratio of the conductivity of the sample at 15 °C and 1 atm to that of water of salinity 35 ppt at 15 °C and 1 atm. In an attempt to observe fine-scale salinity distributions, novel instrumentation has been developed using refractive-index methods.
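The conductivity-ratio polynomial above is straightforward to evaluate; by construction it returns 35 ppt for R = 1 (standard seawater). A minimal sketch, using the coefficients exactly as printed:

```python
def salinity_from_ratio(R):
    """Salinity (ppt) from conductivity ratio R (sample vs. 35-ppt standard
    seawater, both at 15 degrees C and 1 atm), using the polynomial in the text."""
    coeffs = [-0.08996, 28.2972, 12.80832, -10.67869, 5.98624, -1.32311]
    return sum(a * R**i for i, a in enumerate(coeffs))

print(salinity_from_ratio(1.0))   # ~35 ppt: standard seawater by construction
```

Note that the coefficients sum to 35 exactly, which serves as a quick self-check on a transcription of the formula.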
The relationship between the refractive index n of seawater and parameters such as wavelength, pressure, and temperature is determined by the equation of state. Although there has been considerable controversy over the determination of its best form, the approximate relationships are as follows:
∂n/∂λ ≈ −4 × 10⁻⁵ /nm (visible)
∂n/∂P ≈ +1 × 10⁻⁵ /bar
∂n/∂T ≈ −6 × 10⁻⁵ /°C
∂n/∂S ≈ +2 × 10⁻⁴ /ppt

Developmental techniques for measuring refractive index to the required part-per-million level have reached varying levels of performance. The following demonstrated techniques have been used for high-resolution refractive-index determination (7):

Technique                              Resolution    Year
Abbe half-sphere                       10⁻⁶          1982
Differential Michelson                 <10⁻⁶         1984
Critical-wavelength refraction         2 × 10⁻⁵      1987
Pellin-Broca prism refractometer       2 × 10⁻⁵      1983
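The sensitivities above fix the instrument requirements. Taking ∂n/∂S ≈ 2 × 10⁻⁴ per ppt (consistent with the 2 ppm index resolution quoted earlier for 0.01 ppt salinity), a short sketch shows the index resolution needed and how tightly temperature must be known so that thermal drift does not masquerade as salinity; the numbers are illustrative:

```python
# Approximate refractive-index sensitivities quoted above
dn_dS = 2e-4     # index change per ppt of salinity
dn_dT = -6e-5    # index change per degree C
dn_dP = 1e-5     # index change per bar

# Index resolution needed to resolve 0.01 ppt in salinity:
dn_required = dn_dS * 0.01          # 2e-6, i.e. about 2 ppm

# Temperature knowledge needed so thermal drift stays below that index change:
dT_max = dn_required / abs(dn_dT)   # ~0.03 degrees C
```

This is why the table's sub-ppm refractometers pair naturally with high-resolution thermometry.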
CURRENT MEASUREMENT

An important goal of physical oceanography is to understand the physical properties and movement of seawater and its interactions with its surroundings; hence the quantification of water movement, or currents, is important. Traditionally, water current has been measured with a mechanical sensor that rotates because of the drag or lift caused by the moving water, the rotation rate being proportional to the water velocity. Unfortunately, mechanical sensors become unreliable when the water velocity drops to a few centimeters per second, and they can disturb the hydrodynamic conditions of the fluid. Furthermore, they may be unsuitable for fast turbulence studies because of their limited bandwidth of less than 1 Hz.

Classical methods of measuring current are either indirect or direct. The equations of motion provide a means of determining current under the geostrophic approximation, in which only the pressure-gradient force per unit mass and the Coriolis force per unit mass are in balance. In this case it is only necessary to measure density or pressure gradients:

(1/ρ) ∂p/∂x = 2Ωv sin φ

Here ρ is the density of the medium, Ω is the rotational speed of the earth, v is the speed along the y-axis, φ is the latitude, and p is the pressure. The pressure gradient can be converted to a density gradient, which provides enough information to compute the speed.

Another indirect method relies on the earth's magnetic field to determine current. From Maxwell's equations, an electric field is created by charges flowing through the earth's magnetic field B at speed v. If the vertical field component is Hz, the electrode separation is l, and the potential is V, the relationship is given by the force balance

V/l = |v × B| = kvHz ≈ 1.1 × 10⁻⁸ vHz

Since all units are in the cgs system, the voltage produced is small and is affected by contact potentials at the electrodes,
which are often as much as 1 mV. This method is therefore better at establishing current direction than absolute speed. Electrochemical half-cells can be produced unintentionally at each electrode. (When designed to do so, these half-cell reactions may be used to detect hydrocarbons in sediments resulting from bacterial activity, producing potentials of several millivolts or more.)

Direct methods of current measurement include the so-called Lagrangian and Eulerian approaches. The former uses drifting objects such as buoys or dyes. Although seemingly primitive, modern drifting buoys may use the Global Positioning System (GPS) for position updates and satellite communication for data transfer, providing exceptional data. Subsurface buoys may be tracked acoustically, and fluorescent dye plumes may be detected at low concentration at great distances. Eulerian methods use dynamic or static sensors: rotating-vane devices such as the propeller and Savonius rotor, or static devices such as the pressure plate, arrested rotor, and pitot tube. The Savonius rotor is preferred over propeller-type rotors, since it is sensitive to water flow in only one direction. The pitot tube uses the pressure-differential principle and is commonly employed in aircraft airspeed sensors.

Although the aforementioned current-sensing techniques are common, developments in electronics and advances in transducer technology have made it possible to measure fluid velocities by exploiting the interaction between a moving fluid and either an acoustic wave or an electromagnetic field. A number of instruments have been designed and built using nonmoving sensors, including electromagnetic current meters, laser Doppler current meters, and acoustic or ultrasonic current meters. The electromagnetic flow sensor contains a coil to produce a magnetic field. A set of electrodes is used to measure the voltage gradient across the face of the coil.
A voltage gradient is induced in the water when it flows through the field; by the principle of induction, the induced voltage field is the vector product of the velocity and the magnetic field. The magnetic field of the coil depends on the current and the number of turns, and since the coil power varies as the square of the current, the field varies only as the square root of the power. With a typical 100 mW dc-powered coil, the resulting field produces a potential difference of 10 μV to 15 μV for a flow of one knot; a flow of 0.01 knot results in an electrode potential of only about 0.1 μV. Because of chemical uncertainties at the electrode surface, it is nearly impossible to keep two electrodes within a few microvolts of each other, and stray currents and electrochemical effects from fouling of the electrode surface may produce a static offset two to three orders of magnitude larger. However, if the magnetic field is periodically reversed, the polarity of the electrode potential due to water flow changes while the static offset remains constant. The electrode voltage can then be detected in synchrony with the reversal of the field; the magnitude of the resulting ac signal is a function of the flow velocity. Electrode errors and amplifier dc offset and drift become insignificant with this approach, allowing large ac voltage gain to be used without saturation from dc potentials. For stability, the dc bias potentials must remain constant during the field cycle, and after each field reversal the measurement of the electrode voltage must be delayed until the field becomes stable.
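The field-reversal scheme is a form of synchronous (lock-in) detection, and a small simulation makes the offset rejection concrete. All values here are illustrative (the 12 μV/knot sensitivity is chosen to match the 10 μV to 15 μV per knot figure quoted above; the 2 mV offset, sample counts, and noise level are assumptions):

```python
import random

# Sketch of synchronous detection for an electromagnetic current meter:
# reversing the coil field flips the sign of the flow-induced signal while
# the static electrode offset does not, so demodulating by the field sign
# and averaging rejects the offset.
random.seed(0)
k = 12e-6        # electrode volts per knot (order of the 10-15 uV/knot cited)
v = 0.5          # knots -- the flow speed to recover
offset = 2e-3    # 2 mV static electrode potential, dwarfing the flow signal

demodulated = []
for i in range(2000):
    field_sign = 1 if (i // 50) % 2 == 0 else -1   # reverse field every 50 samples
    electrode_v = field_sign * k * v + offset + random.gauss(0.0, 1e-6)
    demodulated.append(field_sign * electrode_v)    # multiply by reference sign

v_est = sum(demodulated) / len(demodulated) / k     # offset averages to zero
```

With equal numbers of positive- and negative-field samples, the 2 mV offset cancels exactly and only the microvolt-level flow signal survives the average.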
All acoustic and ultrasonic current-measurement instruments are based on the principle that the net velocity of an acoustic wave propagating in a moving fluid is the vector sum of the fluid velocity and the velocity of sound in the fluid at rest. Ultrasonic current measurement is usually made with two piezoelectric transducers placed in the moving fluid in a unidirectional flow arrangement. Although a number of different signal-processing techniques are available, three systems are generally used: (1) the "travel time" or "leading edge" system, (2) the "sing-around" system, and (3) the "phase difference" system.

In the travel-time or leading-edge system, voltage steps of a few hundred volts at a fixed repetition rate excite the two piezoelectric transducers simultaneously. Each excitation produces a burst of exponentially damped acoustic oscillations at the surface of each transducer, and these wave trains travel toward the opposite transducer. The travel time of the leading edge of each signal can then be determined and correlated with the current speed, provided the speed of sound is known and remains fixed. To compensate for changes in sound velocity, the average travel times for both transducers are measured and computed simultaneously. Figure 5 illustrates the travel-time arrangement: two transducers A and B point toward each other in a one-dimensional flow field v(y), with the sound path at an angle θ to the x-axis, which coincides with the direction of flow.

Figure 5. Travel-time arrangement to determine current velocity (transducers A and B separated by Lx and Ly along the x- and y-axes).

The transit-time difference Δt, for v² ≪ c², is

Δt = 2Lv cos θ / c²

In other words, the travel-time difference Δt is a function of the mean fluid velocity v, the velocity of sound c, and the projected length L of the path followed by the sound.

The sing-around method is basically a pulse technique in which each new pulse in one direction is triggered by the arrival of the previous one. After a number of repetitions the sound direction along the path is reversed, and the difference in pulse repetition rate then provides the time difference. The instrument consists of two sing-around velocimeters arranged so that their transmission paths in the liquid are equal and adjacent but the pulses travel in opposite directions. One velocimeter measures the sum of the speeds of sound and current flow, while the other is used to determine the difference between the two speeds. Hence, by taking the difference of the sing-around frequencies of the two velocimeters, we obtain an output signal whose frequency is proportional to the current flow. The ideal velocimeter has an output frequency f given by

f = c/L

where c is the velocity of propagation and L is the distance between the two transducers. Since the transducers send pulses in opposite directions, if v is the current flow, the two sing-around frequencies are

f₁ = (c + v)/L   and   f₂ = (c − v)/L

Taking the difference of the two frequencies, and assuming n repetitions at an angle θ (for v² ≪ c²), we get

f_v = f₁ − f₂ = 2v cos θ / (nL)
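A quick numeric sketch of these relations, using the formulas as printed and illustrative values (path length, flow speed, and repetition count are assumptions), shows why the leading-edge electronics must resolve times of tens of nanoseconds:

```python
import math

c = 1500.0     # speed of sound in water, m/s
L = 0.20       # acoustic path length, m (illustrative)
v = 0.10       # current speed, m/s
theta = 0.0    # flow aligned with the sound path

# Travel-time difference between upstream and downstream pulses (v**2 << c**2):
dt = 2 * L * v * math.cos(theta) / c**2    # ~1.8e-8 s

# Sing-around frequency difference for n accumulated repetitions,
# per the printed formula f_v = 2 v cos(theta) / (n L):
n = 100
f_v = 2 * v * math.cos(theta) / (n * L)    # 0.01 Hz
```

The tiny Δt explains the appeal of the sing-around approach: frequency differences can be counted over long intervals instead of timing a single nanosecond-scale event.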
Hence the difference f_v of the two sing-around frequencies is proportional to the current flow, and in this ideal case the velocity of sound does not affect the measurement. Note that the small individual time-interval differences are accumulated over n repetitions to produce a larger, more easily detected difference signal.

Using similar physical principles, Acoustic Doppler Current Profilers (ADCPs), acoustic Doppler single-point current meters, and correlation sonars are now reaching maturity and dominate research and development in current sensing. They are true beneficiaries of advances in technology and computing, employing state-of-the-art signal processing, high-performance acoustic transducers, and large data rates. The operating principle generally relies on the return and processing of Doppler-shifted acoustic signals from a remote volume of water to assess fluid flow. Acoustic Doppler sensors have also driven technological advancement: broadband techniques have improved the sample-frequency, speed-resolution, and range product limit compared with the earlier incoherent ADCPs, permitting custom application to turbulence measurements, where fast sampling and high resolution are necessary to resolve turbulent spectra. Another advantage of ADCPs has been the reduction in mooring cost of bottom-mounted instruments. This is particularly true in shallow waters, where profiling of the entire water column is possible except for an ambiguous region of approximately 15% near the surface.

Finally, remote satellite imagery is used for remote determination of oceanic currents. When current systems are composed of waters whose characteristics differ slightly from the surrounding water, it is possible to locate these currents by exploiting slight differences in relative motion in the same or different directions.
Sensing methods used include temperature, texture, solar glint, backscattered light radiance, and Doppler radar. These techniques are best described under the classification "remote sensing."

Ocean current sensors have employed a variety of measurement techniques and continue to develop. Rotor-and-vane and impeller-type sensors are now giving way to acoustic Doppler measurements. Mechanical sensors continue to be used but are being upgraded with digital and
more advanced electronic readouts and interfaces. There has also been an emphasis on airborne or air-operated Doppler instruments for numerous applications. Radar backscatter at multiple frequencies provides current maps as well as directional wave spectra, and the number of such instruments, and their acceptance, is increasing with the demand for remote sensing. Acoustic travel-time current meters continue to be employed for in situ applications. The implementation of electromagnetic and laser Doppler velocimeter (LDV) current measurements is complicated by cost and size constraints, although three-dimensional measurements and miniaturization for in situ deployment are of interest to some users. Indirect means, including drifters, altimeters, and hydrographic methods, remain popular and as important as ever. Sensors are getting smaller and now measure a wide variety of current-related flows, including boundary layers, heat flux, and vorticity. Development of current meters is projected to remain an important and active area.
PRESSURE

Many physical quantities of interest, such as conductivity, salinity, and depth, are closely related to pressure, so pressure measurement is critical to physical oceanography. A classical method employs two mercury-filled thermometers, one protected and one unprotected, to perform deep-sea measurements of pressure. The unprotected thermometer is similar to that used for measuring atmospheric temperature. The protected thermometer is encased in a second glass housing with a mercury chamber designed to allow better heat transfer from the surrounding water. The two thermometers are lowered together and read simultaneously. The unprotected thermometer is subject to the effects of pressure more than the protected one and therefore reads higher. If the readings are standardized, the difference in temperature allows estimation of the hydrostatic pressure.

With the advent of electrical measurement technology, a variety of new pressure-measurement techniques were devised. These generally use a mechanically deformable sensing element. The most common is the spring bellows, or aneroid element, made from a compressible waterproof chamber that acts against a spring to form a bellows-type structure. As the pressure increases, the bellows is compressed inward, and vice versa; this motion may drive an electrical transducer or a mechanical indicator. Another way of translating a pressure increase into mechanical motion uses a Bourdon tube, a fluid-filled, curved tube that changes curvature and tends to straighten as the pressure increases; readout is by a mechanical indicator. Another transducer, the Vibratron, uses a vibrating wire as its sensing element.
The vibrating wire is attached to the two tines of a fork. Hence the frequency of vibration of the wire depends on the tension exerted by the fork. When pressure is applied to the fork, the wire tension changes, producing a different fundamental oscillation frequency of the
wire. The oscillation is sensed by a magnetic pickup located near the wire.

A more modern method uses a quartz capacitive element to sense the pressure directly. The capacitance change is converted to frequency by placing the element in parallel with the tank-circuit capacitance of a relaxation or other oscillator. The physical configuration consists of an inner cylindrical quartz element coated with a platinum film, fused to an outer section of precision-bore quartz tubing that is also coated on the inside with platinum film. Together, these two films form a capacitor. As pressure acts on the outer tube, the tube diameter decreases, reducing the spacing between the electrodes and increasing the capacitance. Quartz is the material of choice owing to its high stability, availability, ease of construction, and relatively small temperature coefficient of expansion.

Pressure transducers also use materials whose resistance varies with pressure. A common example is carbon: an increase in pressure reduces the bulk resistance of a packed volume of carbon granules, and granule packs are used in some pressure sensors in this way. Some semiconductor devices also operate in this mode; the tunnel diode, for example, changes resistance with pressure. Pressure may also induce mechanical strain in certain structures, allowing strain sensors to be used for its measurement. Electronic bathroom scales often use a strain gauge to observe the deformation of cantilevered beams as weight is applied. Strain may also be sensed with other transducers; for instance, the current through a semiconductor varies exponentially with strain, and electrical strain gauges exhibit a change in resistance under varying strain. Similarly, fiber-optic sensors are extremely sensitive to changes in strain, providing resolutions of several microstrain or less. All of these techniques have been used in one form or another to determine pressure.
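The quartz capacitive element can be sketched with the standard coaxial-capacitor formula, assuming the dominant pressure effect is the shrinking electrode gap. All dimensions, the relative permittivity, and the oscillator resistance below are hypothetical, chosen only to illustrate the direction of the effect:

```python
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m

def coaxial_capacitance(a, b, length, eps_r=4.5):
    """Capacitance of a coaxial cylindrical capacitor of inner radius a,
    outer radius b, and given length; eps_r ~ 4.5 is roughly quartz."""
    return 2 * math.pi * eps0 * eps_r * length / math.log(b / a)

def relaxation_freq(C, R=100e3):
    """Illustrative oscillator frequency scale, f ~ 1/(2*pi*R*C)."""
    return 1.0 / (2 * math.pi * R * C)

# Hypothetical geometry: 5 mm inner radius, 50 um gap, 5 cm long element
C0 = coaxial_capacitance(5.00e-3, 5.05e-3, 0.05)
C1 = coaxial_capacitance(5.00e-3, 5.04e-3, 0.05)   # gap shrinks under pressure
# As the gap closes, capacitance rises and the oscillator frequency falls.
```

The sign of the relationship is the useful point: a monotonic pressure-to-frequency mapping that a counter can digitize directly, which is why this design suits telemetry.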
ACOUSTIC TRANSDUCERS/HYDROPHONES

The ocean is virtually opaque to light in the infrared and ultraviolet spectral regions. Visible light is also attenuated, with attenuation coefficients of about 0.05/m in the blue-green spectral region under the best of conditions; practical transmission distances are therefore always less than several hundred meters. Except for short-range examination, photography, and video, optical techniques are of little use for long-range detection, communication, and sensing. Conversely, the ocean is a good conductor of sound energy. Acoustic waves travel readily in water, whereas all but the lowest-frequency (VLF) electromagnetic waves are rapidly attenuated. Acoustic or pressure waves therefore offer an opportunity to see into the interior of the ocean; for all practical purposes, hydrophone arrays serve as the underwater eyes and ears of the oceanographer. The bandwidth associated with underwater acoustics is broad, extending from the millihertz to the megahertz range. This allows the use of sound as a probe of objects and processes whose scales vary from millimeters to ocean-basin dimensions. The ocean is especially transparent at low frequencies, where it offers comparatively low attenuation. At high frequencies, attenuation increases, but the wavelength is much shorter, as determined by the speed of sound c:

λ ≅ c/f = (1500 m/s)/(15 kHz) = 0.1 m at 15 kHz
Because angular resolution is determined by the diffraction limit, which in turn depends on wavelength, higher frequencies are suited to the development of imaging sonar and narrow beamforming arrays. The attenuation coefficient in fresh water, αF, is generally a function of the square of the frequency f, as well as of the density-sound-speed product ρFcF and the shear and bulk viscosities μF, μ′F. Attenuation in seawater, αs, is a little more than an order of magnitude greater, with contributions from magnesium sulfate and boric acid relaxation terms:

αF ≈ [4.34 · 4π²/(ρF cF³)] (4μ/3 + μ′) f²  ⇒  4.9 × 10⁻² dB/km at 10 kHz

αs ≈ 0.1 f²/(1 + f²) + 40 f²/(4100 + f²) + 2.75 × 10⁻⁴ f²  (dB/kyd, f in kHz)
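These two relations are easy to evaluate. The sketch below computes the wavelength example given above and the seawater relaxation formula; the nominal sound speed of 1500 m/s is the value used in the text.

```python
def wavelength(freq_hz, c=1500.0):
    """Acoustic wavelength (m) in seawater for a nominal sound speed c."""
    return c / freq_hz

def seawater_attenuation(f_khz):
    """Seawater attenuation in dB per kiloyard (f in kHz), using the
    boric acid / magnesium sulfate relaxation formula quoted in the text."""
    f2 = f_khz ** 2
    return 0.1 * f2 / (1 + f2) + 40 * f2 / (4100 + f2) + 2.75e-4 * f2

lam = wavelength(15e3)            # 0.1 m at 15 kHz, as in the text
alpha = seawater_attenuation(10)  # roughly 1 dB/kyd at 10 kHz
```

At 10 kHz the seawater value is indeed a little more than an order of magnitude above the quoted fresh-water figure of 4.9 × 10⁻² dB/km.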
Therefore, only short-range performance is available when the wavelength is suitable for locating objects at centimeter resolution. Examples of various high-frequency systems include current profilers, active and passive sonar, Doppler velocity logs, and communications systems. Continuous measurement of ocean currents is possible from shipboard acoustic sensors known as acoustic Doppler current profilers (ADCPs), which can provide a two-dimensional record to several hundred meters. Active sonar includes multibeam types capable of imaging at ranges from 10 m to 1000 m or more, depending on the frequency of operation. Passive sonar, aside from having proven advantages for detection, is still the object of advanced development for source localization, bioacoustic characterization, and imaging. Doppler logs use spatial correlation principles to assess platform velocity from acoustically obtained seafloor signatures, whereas communications systems use various modulation and receiver design methods to obtain maximum channel utilization for a given range and frequency. In spite of these proven application areas, sound is still an underemployed tool in oceanography. Significant developments are being made in this area by thoughtful application of acoustic principles and techniques for direct probing of the ocean and for information transfer through it. Some applications of underwater acoustics are simple; others require complex and improved signal processing techniques and instrumentation. Coverage of the many signal-processing advancements and system configurations is beyond the scope of this text, but many good references exist. All underwater acoustic observations are made with the help of transducers that convert part of the energy of the acoustic wave to an electrical signal, which an appropriate electronic circuit then processes to provide the output.
The output devices range from an audio recorder or oscilloscope to computer waterfall displays of the power spectrum (sonagrams) and other signal-processing constructs. The transducer used for reception is called the hydrophone and is generally made of ceramic or quartz crystal. It is a broadband device operating well below the resonant frequency of its active elements. Its construction resembles that of piezoelectric microphones, in which a ceramic or quartz crystal is either linked to a diaphragm or directly exposed to the acoustic waves. Stresses in the crystal, resulting from the acoustic wave, generate an output voltage that is proportional to the acoustic pressure. Some designs incorporate a built-in preamplifier next to the crystal to reduce electrical noise and output impedance. The elements of construction are well sealed and can operate over a wide range of frequencies. Sometimes a transducer similar to the hydrophone is used as a generator, or projector, of acoustic signals; the projector often requires thousands of volts of excitation to achieve a large acoustic output.

Fiber optics and piezofilms are two newer candidate technologies for hydrophones that are gaining rapid recognition. Piezofilm consists of an electrically active fluoropolymer with piezoelectric properties. It exhibits both parallel and transverse piezoelectric effects, but because of its physical characteristics the parallel, or thickness, mode is commonly used. The material is generally known as PVDF from its chemical name, polyvinylidene fluoride, and it has found application in noise-canceling hydrophones. It is stable and operable at temperatures over 120°C, and it withstands applied voltage stresses of several thousand volts, or accelerations of several hundred times gravity, without becoming depolarized. It can be laminated to form multilayered bimorphs or multimorphs that multiply the transducer response, but like many hydrophone materials it is not suited to applications requiring power-handling capability. It is appropriate for low-power emissive transducers and for hydrophone or microphone applications. Because of its pliability, it can be attached directly to almost any structure to form transducers of almost any shape. This and similar materials are suited to large-array fabrication and to high-frequency use in imaging sonar applications.
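For a thickness-mode element such as the piezofilm just described, the open-circuit receive sensitivity is often estimated as M = g·t volts per pascal, where g is the piezoelectric voltage constant and t the element thickness. The sketch below expresses this in the customary dB re 1 V/μPa; the g magnitude of 0.2 V·m/N and 1 mm thickness are assumed, representative values only.

```python
import math

def oc_sensitivity_db(g_vm_per_n, thickness_m):
    """Open-circuit receive sensitivity of a thickness-mode element,
    M = g * t (volts per pascal), expressed in dB re 1 V/uPa."""
    m_v_per_pa = g_vm_per_n * thickness_m
    return 20 * math.log10(m_v_per_pa * 1e-6)  # 1 Pa = 1e6 uPa

sens_db = oc_sensitivity_db(0.2, 1e-3)  # of order -190 dB re 1 V/uPa
```

Values in the vicinity of −190 to −210 dB re 1 V/μPa are typical of practical hydrophones, which is why built-in preamplifiers are common.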
Fiber optics also lends itself well to wet-area applications and is used extensively for undersea communications because of its high bandwidth, low weight per unit length, and low loss. Perhaps surprisingly, optical fibers are also a viable means of constructing hydrophones and seismic sensors. Acoustic pressure changes the characteristics of an optical fiber, which can in turn be sensed as changes in the propagating light field: in intensity, phase, polarization, spectral distribution, or allowed spatial mode. Interferometric phase sensors are particularly sensitive, since changes of fractional-wavelength dimensions can be measured. Over the last ten years, fiber optic sensors have been built to demonstrate measurement of many parameters, including strain, temperature, force, electromagnetic field intensity, vibration, shock, pH, refractive index, and some chemical species. Practical fiber optic devices, including hydrophones, are now commercially available. Fiber optic sensors have a number of inherent advantages over traditional techniques with respect to electromagnetic interference and detection, mainly because they are photon based rather than electron based, so transmission occurs in dielectric materials instead of conductive wires. Aside from this immunity to electromagnetic interference, optical fibers also exhibit low cross-talk, light weight, compact size, large bandwidth, resistance to corrosion, the ability to operate safely in wet, hazardous, and explosive environments, multiplexing capability, and remote real-time operation. Their small mass and submillimeter size allow embedding and in situ operation. The sensitivity of a fiber optic sensor to the measurand is expressed in terms of the induced phase shift of the light. The phase shift φ, for light of wavelength λ0 propagating in a single-mode fiber of gauge length L and refractive index n1, can be written as

φ = 2πLn1/λ0

In operation as a hydrophone, the fiber is wound on a compliant mandrel, where acoustic pressure results in a force F that predominantly changes the length L. The induced change ΔL depends on the Young's modulus E of the material:

ΔL = FL/(AE)
where A is the cross-sectional area of the fiber. The Young's modulus of quartz glass is 2 × 10¹¹ Pa. Hence the resultant change of phase Δφ due to an axial force F is

Δφ = 2πn1LF/(λ0AE)
Similarly, if the same fiber sensor is subjected to a uniform radial pressure P, the gauge length L of the sensor will also increase, through the Poisson effect. If ξ is the Poisson's ratio of the material (ξ = 0.2 for quartz glass), the increase in length can be represented (8) as

ΔL = 2ξPL/E
and the change in phase of the sensor becomes

Δφ = 4πn1ξLP/(λ0E)
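This last relation can be evaluated directly. The sketch below uses the Young's modulus and Poisson's ratio for quartz glass given in the text; the refractive index, operating wavelength, and 10 m gauge length are assumed values chosen for illustration.

```python
import math

def fiber_phase_shift(pressure_pa, length_m, n1=1.46, xi=0.2,
                      youngs_e=2e11, lam0=1.3e-6):
    """Phase shift (radians) of light in a fiber of gauge length L under
    uniform radial pressure P: dphi = 4*pi*n1*xi*L*P / (lam0 * E)."""
    return 4 * math.pi * n1 * xi * length_m * pressure_pa / (lam0 * youngs_e)

# 10 m of fiber on the mandrel, 1 Pa of acoustic pressure:
dphi = fiber_phase_shift(1.0, 10.0)  # on the order of 1e-4 rad/Pa
```

Phase changes this small are readily resolved interferometrically, which is the basis of the sensitivity claimed for fiber hydrophones below.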
Two light signals, one from a reference arm and one from the sensor, interfere at the detector to produce an output electrical signal that changes in proportion to variations in the optical path. Fiber optic hydrophones generally use Michelson or Mach–Zehnder interferometers. Coherent light from the source is split into the two paths by a fiber optic coupler; one path feeds an isolated reference arm and the other the sensing arm. The acoustic energy applied to the sensing arm produces a change in optical path length, detected as a sinusoidal intensity change. Signal-to-noise performance with this approach exceeds that available from hydrophones using piezoelectric transduction, and its use is therefore becoming more widespread.

There are many applications in which a single transducer cannot provide the performance required for signal detection and analysis. In such cases more than one transducer can be used. An assembly or group of transducers is called an array: a series of transducers that are driven together in the case of a projector, or whose outputs are combined in some prearranged fashion in the case of a detector, to enhance the received signal-to-noise ratio. Arrays may be two- or three-dimensional as well. The array gain (AG) measures the enhancement of signal-to-noise ratio:

AG = 10 log10 [(S/N)Array / (S/N)Single Hydrophone]  dB

Basic hydrophone construction is shown in Fig. 6. The hydrophone element(s) are usually backed with materials having specific acoustic properties, and computer programs are now available for individual hydrophone design. External coatings are selected for low loss, low bubble content, durability, and a speed of sound similar to that of water. Acoustic impedance matching mandates selection of ρc products close to that of the medium; certain rubber compounds and urethanes meet these requirements and are typically used.

Figure 6. A piezoelectric crystal produces an output voltage with changing pressure.

The array gain may be used to determine the link performance of an acoustic transmission and reception system. When the transmitted signal is a plane wave and the noise is isotropic, the array gain reduces to the directivity index (DI). The source level (SL) is defined in decibels relative to 1 μPa (10⁻⁶ N/m²):

SL = 171.5 + 10 log P + DI

The term P is the emitted power in watts. Once SL is known, the transmission loss TL can be used to determine the signal level at a distance R in meters. Generally, the source is considered to follow an inverse-square spreading law as a function of R in deep water:

TL = 20 log R + αR × 10⁻³

Here the attenuation α is given in dB/km. In shallow water the spreading is cylindrical, following an inverse-R relationship, and the factor of 20 in the equation is halved. The sound level at a hydrophone located at a distance R is given by the difference SL − TL. At a hydrophone array with directivity index DI, the signal level can be compared with the ambient noise level NL to establish the signal-to-noise margin, or detection threshold (DT):

DT = SL − TL + DI − NL = SL − TL + DI + 15 − 20 log f

where the noise is computed for a deep-sea location devoid of sources other than thermal, for which NL ≈ −15 + 20 log f (f in kilohertz). In the case of sonar, where the path is
two-way, TL is doubled and the target strength is added to the right side of the equation. Target strengths have been computed for different target types and may be found elsewhere, but the equations provided here are generally adequate to assess hydrophone performance. The actual electrical signal produced by the hydrophone is obtained by converting SL − TL + DI to an open-circuit voltage using the hydrophone's OCV response, in units of dB re 1 V/μPa.

MAGNETOMETERS

Magnetic flux density sensors provide an electrical output that is proportional to the magnetic field strength. The most common device for measuring flux density is the Hall-effect sensor, in which the excitation current is kept constant and a semiconductor crystal placed in the field produces a voltage along the axis perpendicular to the directions of the field and the current flow. This voltage is proportional to the flux density. Hall-effect sensors are designed as small probes and can contain one, two, or three crystals in a single package to measure field components along one, two, or three mutually orthogonal directions. Another common type of magnetic sensor is the inductive magnetometer, built around an iron-core or other inductor. According to Faraday's law, the voltage induced in a coil placed in an alternating magnetic field is proportional to the measured flux density; steady-state magnetic fields can be measured by spinning the coil and measuring the induced ac voltage. The nuclear magnetic resonance flux meter is also used to measure magnetic fields and field anomalies. It is based on the dependence of the nuclear resonance frequencies of certain substances on the magnetic field strength. The transduction element generally consists of a coil wound around a small container of deuterium or lithium. The coil is excited by a high-frequency current.
The resonance frequency is detected by a sharp increase in the power consumption of the system due to the characteristic energy absorption of the material; the frequency is then related to the desired magnetic signature. The flux-gate magnetometer is also a very sensitive device, used to measure extremely small magnetic signals. It consists of several saturable reactors excited by an ac signal; the ac drive keeps the induction in the reactor cores close to saturation. Under the influence of a steady external field, second-harmonic current components are induced in the reactor circuit, and these second-harmonic signals provide a measure of the flux density. The components are not present in the absence of an external field. Three mutually orthogonal reactors can be used to measure the flux density along three axes.

NAVIGATIONAL SENSORS

The most common navigation instruments are the compass and the gyrocompass. Compass construction may be mechanical, magnetic, or electromagnetic (flux-gate); gyrocompasses include electromechanical, laser, and fiber optic types. A basic mechanical compass consists of a permanent-magnet dipole and a graduated indicator. In the simple
case, the magnetic dipole is a magnetized bar or needle that pivots on a bearing and is installed so that it is free to move about an axis aligned approximately with the gravitational pull. If properly positioned, the needle points toward the magnetic north established by the magnetic field structure of the earth's core. The graduated or marked disk is fixed to the vessel structure, and the relative displacement between the needle and the disk indicates the deviation of the vessel's course from magnetic north. The top of the compass has a small look-through window onto which is painted a straight line known as the lubber's line. A compass is a standard part of almost any undersea vehicle and is used by the operator to guide the vessel until the desired direction is opposite this mark. Directions of travel are generally given in degrees, with north assigned zero degrees. As the circle has 360°, moving clockwise through 90° leads to an easterly course; south is at 180°, and moving clockwise through 270° leads to west. Generally, the marine compass is marked in 5° increments. A magnetic compass may be affected by ferrous material on the vessel on which it is mounted; this error is known as magnetic deviation. Deviation keeps the compass from pointing to magnetic north, while declination keeps it from pointing to geographic north. Usually, a compass is installed in a well-selected location and, if inaccuracies are still detected, small magnets are placed in special slots within the compass to correct them; compasses must be checked frequently. Compass readings are generally made on a straight course at a steady speed. They are not used during turns, because of the inertial effects of the damping fluid on the indicator. Before making a turn the navigator notes the compass reading and sets the directional (inertial) gyro to that reading; the turn is then made, and the directional gyroscope indicates the direction of the turn.
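The deviation and declination corrections above compose in a simple way. A minimal sketch, using the conventional sign rule that easterly errors are positive when correcting from compass to true heading:

```python
def compass_to_true(compass_deg, deviation_deg, declination_deg):
    """Convert a compass heading to a true heading.
    Easterly deviation/declination are positive, westerly negative."""
    return (compass_deg + deviation_deg + declination_deg) % 360.0

# A compass reading of 90 deg with 2 deg easterly deviation and
# 5 deg westerly declination corresponds to a true heading of 87 deg.
true_hdg = compass_to_true(90.0, 2.0, -5.0)
```

The modulo keeps headings in the usual 0° to 360° convention when the corrections carry the value past north.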
Some time after completing the turn and resuming a straight course, the compass readings are checked again to confirm the exact direction being followed. The directional gyro does not replace the magnetic compass, but it is valuable for making precise turns and maintaining a straight course; it is a practical necessity even in ordinary voyage conditions. It is a gyroscope mounted so that it can move in any direction. When the gyroscope wheel is set in a certain position, it remains in that position by conservation of angular momentum, in spite of the inertial forces observed in the vessel's frame of reference due to motion. The property of the gyroscope that allows it to hold a fixed position is known as rigidity. The rigidity of the mechanical wheel in the directional gyro is tremendous because the wheel is massive and travels at high rotational speed, nominally hundreds of miles per hour at the circumference, so the angular momentum and energy are large compared with frictional losses. Like most other gyros, the wheel of the directional gyro does not stay indefinitely in the direction in which it was started. It tends to drift slowly off position because of the rotation of the earth. At the North Pole the gyro would drift nearly 15° per hour, whereas at the equator there would be no drift at all. Anywhere in the United States, the drift is such that the operator should adjust the gyro about every 15 to 20 min. Modern high-precision gyroscopes are sometimes optical rather than mechanical, and a higher degree of accuracy can be obtained with proper design. Laser and fiber optic gyros are
the dominant types. The optical gyro is basically a type of interferometer that detects rotation through the Sagnac effect. Consider a circular coil of fiber wound around an axis of rotation; alternatively, a square or triangular ring laser may be used. The idea is to divide the light beam into two equal-amplitude clockwise- and counterclockwise-propagating beams. If the vessel containing this structure is not rotating, the optical transit time is the same for both beams. If the plane of the ring rotates in a clockwise direction, at any rate, the clockwise beam must cover a slightly longer path than the counterclockwise beam before the two beams meet and interfere. This causes a difference in the propagation times of the two counterpropagating beams; the resulting change in phase can be detected and processed to obtain very high-resolution information about the rotation. The laser gyro has been used for many years and can provide low drift rates and good circular-error performance (CEP) when used as part of an inertial navigation system (INS). The performance of the Sagnac interferometer improves with the area of the ring per loop. For this reason, the fiber gyro is becoming more practical for small, high-performance applications: many turns of fiber can be used to increase the delay for a given rotation rate, thereby improving sensitivity. The optical fiber gyro is insensitive to most unwanted environmental effects on the fiber, as both counterpropagating beams travel the same path. Fiber gyros have proven performance for short-duration operation of ten to fifteen minutes, while laser gyros have established performance for long-mission undersea operations. Fiber optic gyros (FOGs) are still being improved, but several commercial models are low in cost (several thousand dollars) and provide rotational linearity of 1% and a drift rate of 2° per hour.
The primary principle of operation of the fiber optic gyro is the Sagnac effect. Consider again a circular coil of fiber wound around an axis of rotation, with a fiber optic coupler at the input separating the transmitted beam into two equal-amplitude clockwise- and counterclockwise-propagating beams. If the vessel containing this fiber is not rotating, the optical transit time τ is the same for both beams and is given as

τ = 2πR/c
where R is the radius of the loop of fiber and c is the speed of light. If the plane of the fiber rotates in a clockwise direction at a rate of Ω radians per second, the clockwise beam must cover a slightly longer path than the counterclockwise beam before the two optical beams meet and interfere. This causes a difference in the propagation times, Δτ, of the two counterpropagating beams:

Δτ = 4πR²Ω/c²
The change in phase Δφ due to this time delay can be detected and processed to obtain very high-resolution information regarding rotation. If ν is the operating frequency of the light, the change in phase is given (9) by

Δφ = 8π²R²Ων/c²
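The single-loop relation above scales with the number of turns, which is why many-turn fiber coils improve sensitivity. A sketch under assumed parameters (5 cm coil radius, 1.55 μm wavelength) applied to the Earth's rotation rate:

```python
import math

C = 2.998e8            # speed of light, m/s
EARTH_RATE = 7.292e-5  # Earth's rotation rate, rad/s

def sagnac_phase(radius_m, omega_rad_s, lam_m=1.55e-6, n_turns=1):
    """Sagnac phase difference (rad) for a fiber loop rotating at omega,
    using the single-loop relation in the text scaled by the turn count."""
    nu = C / lam_m  # optical frequency
    return n_turns * 8 * math.pi**2 * radius_m**2 * omega_rad_s * nu / C**2

p_one = sagnac_phase(0.05, EARTH_RATE)                 # one turn: tiny
p_coil = sagnac_phase(0.05, EARTH_RATE, n_turns=1000)  # 1000-turn coil
```

A single turn yields only tens of nanoradians for Earth rate; a thousand-turn coil brings the phase into a range an interferometer can track.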
The optical fiber gyro is insensitive to most unwanted environmental effects, as the clockwise- and counterclockwise-propagating beams travel the same path. These gyros are excellent for applications involving short durations (e.g., a few tens of minutes). They are still at a developmental stage and have only begun to penetrate the market; continuing research by industry and academia to produce lower-cost, high-performance units is ongoing. Accelerometers are also an essential part of any inertial navigation system. Both optical and mechanical types are common; the most common type uses piezoceramics. Most are designed on the spring-and-mass principle: a mass in a spring-loaded system reacts to every acceleration because of its inertia, exerting on the piezoelectric crystal a force proportional to the acceleration. This force causes an output voltage that can be correlated with acceleration. In addition to single-axis accelerometers, biaxial and triaxial accelerometers are also available. Currently, fiber optic and beryllium Hopkinson-bar accelerometers are the state of the art (10). Other methods of sensing acceleration employ p–n junctions, MEMS devices, and capacitive transducers.

POSITIONING, TRACKING, AND SURVEY SYSTEMS

Although surface vessels continue to use the magnetic and inertial navigational aids described in the previous section, these are rapidly being supplanted by systems incorporating the Global Positioning System (GPS). GPS receivers operate in the 1.575 GHz (L1) and 1.227 GHz (L2) spectral regions and use concurrent, precisely timed signals from a constellation of up to 12 visible satellites to establish geodetic position by differential timing. A minimum of four satellites must be received to compute latitude, longitude, and altitude (with respect to mean sea level).
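The reason four satellites are needed is that the receiver must solve for three position coordinates plus its own clock bias. A heavily simplified sketch of that solution (Gauss-Newton on synthetic, error-free pseudoranges; the satellite coordinates and starting guess are invented for illustration, and all atmospheric and relativistic effects are ignored):

```python
import numpy as np

def gps_fix(sat_pos, pseudoranges, x0, iters=10):
    """Solve for receiver position (m, ECEF) and clock bias (m) from four
    or more satellite pseudoranges by iterated linearization."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diffs = x[:3] - sat_pos                      # (N, 3)
        ranges = np.linalg.norm(diffs, axis=1)       # geometric ranges
        residuals = pseudoranges - (ranges + x[3])   # measurement misfit
        # Jacobian of predicted pseudorange w.r.t. (x, y, z, bias)
        J = np.hstack([diffs / ranges[:, None], np.ones((len(ranges), 1))])
        dx, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x = x + dx
    return x

# Synthetic constellation (ECEF, m), receiver on the surface, 100 m bias.
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
truth = np.array([6371e3, 0.0, 0.0])
pr = np.linalg.norm(sats - truth, axis=1) + 100.0
fix = gps_fix(sats, pr, x0=[6.4e6, 0.0, 0.0, 0.0])
```

With exact measurements the iteration recovers the position and the clock bias simultaneously, which is exactly what a fourth satellite buys.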
The satellites orbit at an altitude of 10,898 miles in six 55° orbital planes, with four satellites in each plane; the system incorporates a network of up to 24 satellites, including four spares. The stand-alone accuracy available to nonmilitary users is purposely degraded for security and defense, rather than technological, reasons. In reality, GPS has a much better resolution, but errors are intentionally built in to ensure that the system cannot be abused in a manner that threatens national security. This is done by sending a spread-spectrum coded sequence containing two codes, a precision P-code and a coarse-acquisition C/A code, on the L1 frequency. The L2 carrier contains the P-code and is made available to the military and to authorized civilian users. The Department of Defense is able to achieve pinpoint accuracy with the P-code: about 17.8 m horizontal and 27.7 m vertical. This technology is immediately available to surface vessels and to any vehicle that can support a hand-size microwave antenna. The system can also be used with an auxiliary fixed-location receiver, in the so-called differential (DGPS) mode, to provide resolution of centimeters at close range. Shallow-water submersible vehicles have already been designed to use DGPS for underwater navigation via a tightly tethered surface buoy containing the GPS receiver. Soon most vehicles, including automobiles, will use GPS navigation for unheard-of navigational ease and accuracy.

In addition to the aforementioned navigational methods, there are acoustic (sonar) and optical aids for positioning, tracking, and survey applications; these are only briefly discussed here. Because many activities are associated with offshore work, more than ten different systems can be employed with various degrees of utility. For tracking and positioning, the use of directional hydrophones has been abandoned in favor of more advanced sonar systems such as ultrashort-, short-, and long-baseline transponder systems. In operation, several transponders are deployed, at known locations if possible. The distance to each transponder is determined from the acoustic travel time and thus establishes a reference for the location of the inquiring platform. The transponders can be deployed without knowledge of their exact positions; in that case, the ship or other platform must move to find the minimum range (depth), from which the surface position is then known, treating all transponders in the same fashion until all coordinates are located. The use of "smart" transducers having depth-measurement capability allows computation of slant ranges, and therefore position, without the search procedure. Long-range navigation at low frequencies of 8 kHz to 16 kHz provides an accuracy of 1 m to 2 m at 10 km range; a positional accuracy of 0.1 m can be obtained at higher frequencies of 40 kHz to 80 kHz, but at a reduced range of 1 km. The short-baseline system uses three or more hydrophones attached to a vessel at known positions. The principle of operation is similar to that of the long-baseline system, with the exception that all the transducers are aboard the surface vessel.
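The baseline positioning schemes above all reduce to trilateration from acoustically measured travel times. A minimal sketch, assuming a nominal sound speed and an invented transponder layout; subtracting the first range equation from the others makes the problem linear in the unknown position:

```python
import numpy as np

SOUND_SPEED = 1500.0  # m/s, nominal

def lbl_fix(transponders, travel_times, c=SOUND_SPEED):
    """Position fix from one-way travel times to transponders at known
    positions (meters), by linearized trilateration."""
    p = np.asarray(transponders, dtype=float)
    r = c * np.asarray(travel_times, dtype=float)
    # |x - p_i|^2 = r_i^2 minus the first equation gives, for each i > 0:
    #   2 (p_i - p_0) . x = |p_i|^2 - |p_0|^2 - r_i^2 + r_0^2
    A = 2.0 * (p[1:] - p[0])
    b = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
         - r[1:] ** 2 + r[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Four transponders on an uneven seabed and a near-surface vehicle.
beacons = [[0.0, 0.0, -1000.0], [1000.0, 0.0, -1050.0],
           [0.0, 1000.0, -980.0], [1000.0, 1000.0, -1020.0]]
vehicle = np.array([400.0, 300.0, -50.0])
times = np.linalg.norm(np.array(beacons) - vehicle, axis=1) / SOUND_SPEED
fix = lbl_fix(beacons, times)
```

Note that the beacon depths must not all be equal, or the depth coordinate becomes unobservable; this is why practical systems either use "smart" depth-reporting transponders or constrain depth separately.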
Only one seabed transponder or pinger is required for this type of system. The ultrashort-baseline system is again similar to the short-baseline system, with the added advantage that only one hydrophone/transducer is required; all timing information is determined within the one transducer head. Accuracy for these systems is about 0.5% to 0.75% of the slant range. Scanning sonar, either mechanical or electronic (array scanning), is used for forward imaging in obstacle avoidance, surveillance, vehicle navigation, and survey work. Narrow-beam mechanical scan, phase comparison, side scan, synthetic aperture, and multibeam are just a few of the sonar types. Usually, narrow-beam sonar has a thin beam of 1° to 2° in the horizontal direction and scans over a multisecond period; CTFM (continuous-transmission frequency modulation) sonar is a subset of this category. Phase-comparison sonar uses phase information to determine bearing on two or more wide beams at a time. The data rate is improved over mechanical-scan systems, but bearing resolution is proportional to SNR. Side-scan sonar uses a line array to produce a narrow (1°) horizontal beam and a wide (30° to 70°) vertical beam; the system operates by observing the interruption of the acoustic reverberation caused by an object in the beam, and its images are difficult for the untrained observer to interpret. Multibeam sonar either steers a multiplicity of single beams or duplicates angular sectors to arrive at a complete image in the time it takes one pulse to travel to the target and back; thus the information rate is high. Many different configurations are possible and
beyond the scope of this discussion. Synthetic aperture techniques rely upon coherently summing many returns from a sonar system as it passes a target area. The resolution of the system is increased by the synthetic size of the aperture, formed from many data records assembled into a much longer time record; angular resolution from diffraction theory is related inversely to the aperture width.

BIBLIOGRAPHY

1. National Research Council, Oceanography in the Next Decade, Washington, DC: Natl. Academy Press, 1992, p. 53.
2. J. Williams, Oceanographic Instrumentation, Annapolis, MD: Naval Inst. Press, 1973, p. 4.
3. T. M. Dauphinee, In situ conductivity measurements using low frequency square wave A.C., Div. Appl. Phys. Natl. Council, Ottawa, Canada, pp. 555–562.
4. F. M. Caimi (ed.), Selected Papers on Underwater Optics, Society of Photo-Optical Instrumentation Engineers, Milestone Series, Vol. MS 118, B. Thompson, Series Ed.
5. N. L. Brown, In situ salinometer for use in the deep oceans, Marine Sci. Instrum., ISA, vol. 4, 1968, pp. 563–577.
6. R. L. Ribe and J. G. Howe, An empirical equation relating sea water salinity, temperature, pressure, and electrical conductivity, MTS J., 9 (9): 3–13, 1975.
7. F. M. Caimi, Refractive index measurement of seawater: several methods, Proc. IEEE, 1989, pp. 1594–1597.
8. J. Wilson and J. Hawkes, Optoelectronics: An Introduction, 2nd ed., New York: Prentice-Hall, 1989.
9. J. P. Powers, An Introduction to Fiber Optic Systems, Homewood, IL: Aksen, 1993.
10. S. Murshid and B. Grossman, Fiber optic Fabry–Perot interferometric sensor for shock measurement, 44th ISA Symp., Reno, NV, 1998.
FRANK M. CAIMI SYED H. MURSHID Harbor Branch Oceanographic Institute Florida Institute of Technology
Wiley Encyclopedia of Electrical and Electronics Engineering
Sonar Signal Processing
Standard Article
David M. Drumheller, Charles F. Gaumond, and Brian T. O'Connor, Naval Research Laboratory, Washington, DC
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5406
Article Online Posting Date: December 27, 1999
Abstract. The sections in this article are: A Brief History of Sonar Signal Processing; Sound in the Ocean; Functions of Sonar Signal Processing; Scattering and Signal Modeling; Conclusion.
SONAR SIGNAL PROCESSING

Sonar is an example of remote sensing. Although sonar systems are used for fish finding, acoustic imaging through turbid water for remote underwater operations, and geophysical exploration, they are most commonly identified with detecting ships and submarines. In principle, sonar and radar are similar because both use wave energy to detect distant targets. Yet, in practical implementation, they are vastly different. Most notable is the difference in media: sonar relies on acoustic waves, whereas radar relies on electromagnetic waves. Furthermore, the sonar medium is much more variable: channel effects are more severe, propagation speeds are 200,000 times slower (1500 m/s rather than 3 × 10^8 m/s), frequencies are much lower (10 kHz to 100 kHz rather than 0.1 GHz to 100 GHz), and signal bandwidths as a percentage of the carrier frequency are, in general, much larger than those in radar. There is also more noise and reverberation. Although the speeds of ships and submarines are considerably lower than those of aircraft and missiles, the much greater difference in propagation speed yields greater Mach numbers (v/c) for sonar (typically 10^−3) than for radar (typically 10^−6). As discussed later, the higher Mach numbers achieved in sonar imply that echoes from moving targets have to be processed differently. The differences in the parameter values imply that radar and sonar systems collect data about targets at different rates and with different resolutions. For example, several seconds or minutes can pass between sonar transmissions. In radar, hundreds or thousands of pulses are transmitted, received, and integrated within one second.

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
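A few lines of arithmetic confirm the figures quoted above; the target speeds and the 10 km range below are illustrative assumptions:

```python
c_sound = 1.5e3       # m/s, sound in seawater
c_light = 3e8         # m/s, electromagnetic waves
print(c_light / c_sound)        # propagation-speed ratio -> 200000.0

v_submarine = 1.5     # m/s (~3 kn), assumed sonar target speed
v_missile = 300.0     # m/s, assumed radar target speed
print(v_submarine / c_sound)    # sonar Mach number, about 1e-3
print(v_missile / c_light)      # radar Mach number, about 1e-6

r = 10e3              # m, assumed one-way range to target
print(2 * r / c_sound)          # sonar round-trip echo delay, seconds
```

The 13-second round trip for a 10 km target is why an active sonar may wait many seconds between pings, while a radar integrates thousands of pulses per second.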
A BRIEF HISTORY OF SONAR SIGNAL PROCESSING

Sonar and sonar signal processing possess a history rich in development and implementation. Unlike radar, which has a number of civilian uses, sonar is primarily used for military purposes. Thus, most research and development of sonar technology has been sponsored by the world's navies. Hundreds of years ago, it was recognized that sound travels in water. Leonardo da Vinci observed that sound from distant ships could be heard by placing one end of a tube in the water and the other to the ear. This system offered no gain and no directivity. Sound had to be sufficiently strong to overcome the noise induced by the motion of the boat and nearby breaking waves. Prior to World War I, little was done beyond da Vinci's work. Any kind of signal processing would require the development of electronic technology, something that did not occur at any significant level until the twentieth century.

During World War I, most sonars were "passive" acoustic systems. One system of this era resembled a stethoscope and was composed of two air-filled rubber bulbs mounted on the end of a tube connected to earpieces. An operator listened for sounds that indicated a ship or submarine. Because it was a binaural system, the operator could estimate the bearing to the detected vessels. Later versions of this system had a similar in-water configuration, but with several bulbs attached to each earpiece. Such an arrangement offered directivity, so it had to be manually steered to detect a vessel and estimate its bearing. This is perhaps the earliest example of beam forming, a topic covered later. Later in World War I, electric underwater transducers called hydrophones were developed using electromechanical materials that deform with the application of an electric or magnetic field (piezoelectrics and magnetostrictives).
The use of these materials, which allowed the efficient coupling of electric power with underwater acoustic power, was crucial to the development of sonar because it made possible more general arrangements of sensors (arrays). Consequently, towed, horizontal line arrays were developed that offered more gain and directivity than previous passive systems. A single horizontal line array cannot be used to distinguish signals arriving from the two sides of the array but approaching from the same angle. Therefore, a pair of line arrays was towed, because it was then possible to resolve the "left-right ambiguity" of the target bearing. This system was the forerunner of the modern military towed-array sonar system.

After World War I, reliable, high-power electronic amplification allowed the development of "active" sonars. In this type of sonar, an acoustic pulse is transmitted, and the echoes it generates are detected aurally, electronically, or visually (on a cathode ray tube). Active sonar systems were employed by ships and submarines during World War II. Such systems did not employ much signal processing because the equipment required to implement complex algorithms did not exist or was too large to install on vessels. Only simple vacuum tube electronic equipment was available. It was bulky and consumed much electrical power. Reliable, high-speed, silicon-based electronics was decades away.

Today's sonar systems employ large towed or hull-mounted arrays composed of many hydrophones. The signals from these arrays are processed by small, high-speed computers. Thus, it is possible to implement many computationally intensive, multiple-input signal processing algorithms to detect, classify, and track ships and underwater targets. The operating frequency of a modern sonar system depends on its application, which determines the required operating range and resolution. The higher the frequency, the more attenuation a signal experiences per unit distance of propagation. As shown later, for a fixed array size, the ability to resolve and locate a target increases as the frequency and signal bandwidth increase. Modern military sonar systems generally fall into one of three categories: weapons (torpedoes), tactical systems, and surveillance systems. These three categories roughly correspond to three operating frequency ranges: high frequency (above 10 kHz), midfrequency (1 kHz to 10 kHz), and low frequency (below 1 kHz). High frequencies attenuate greatly per unit distance of propagation but, as explained later, offer the highest angular resolution of a target for a fixed array size. Active and passive torpedoes operate in this frequency range, because they use two-dimensional arrays that must fit within the torpedo housing and still achieve sufficient angular resolution over distances that are not too great. Active mine-hunting sonars also operate at high frequency, because high-frequency arrays yield high-resolution images of the terrain and mines that are used for identification or classification.
Passive tactical sonar systems, which typically operate in the midfrequency range, are used by surface ships or submarines to avoid being successfully targeted by an attacker. They must be small and not impede maneuvering. Active tactical sonar systems are also used for searching moderately wide areas defined by the stand-off distance of particular offensive weapons, such as torpedoes or cruise missiles. Active and passive surveillance sonar systems are often large and possibly covert (therefore passive sonar) and are used to detect and track targets over a wide area. These sonars use low frequencies that propagate over great distances underwater.
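To put numbers on the frequency-attenuation trade-off behind these three bands, the sketch below evaluates one widely used empirical fit for seawater absorption (Thorp's formula, which is not part of this article; frequency in kHz, result in dB/km):

```python
def thorp_alpha(f_khz):
    """Seawater absorption in dB/km (Thorp's empirical fit, f in kHz)."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2)          # boric acid relaxation
            + 44 * f2 / (4100 + f2)       # magnesium sulfate relaxation
            + 2.75e-4 * f2 + 0.003)       # viscous loss + constant

for f in (0.5, 1.0, 10.0, 100.0):         # low, mid, and high sonar bands
    print(f"{f:6.1f} kHz: {thorp_alpha(f):8.3f} dB/km")
```

The fit gives on the order of 0.1 dB/km near 1 kHz but tens of dB/km near 100 kHz, which is why surveillance sonars favor low frequencies while torpedo and mine-hunting sonars accept the loss in exchange for angular resolution.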
SOUND IN THE OCEAN

The oceanic environment is broadly categorized as either deep water or shallow water (1). In deep water, the water channel is sufficiently deep that propagating sound is well approximated as rays. Deep water supports sound propagation with a depth-dependent sound speed c(d) (d denotes depth), which differs in regions of the ocean and times of the day and year. The channel response is approximated as a finite sum of weighted time-delayed impulse responses, each of which corresponds to the arrival of a nondispersive ray. There are several computer programs for estimating this channel response (2). In shallow water, the boundaries of the water channel (the surface, the water-sediment interface, and the sediment-basement interface) are separated by a few wavelengths, and propagating sound is best approximated as a sum of modes (traveling standing waves). In general, the sound speed is depth-dependent, and the modes are dispersive, or frequency-dependent. There are also computer programs to simulate this behavior (2).

The propagation effects just described imply that sound traveling through the ocean exhibits time-spreading (multipath distortion) at long ranges. Sound also spreads in angle because of horizontal inhomogeneities, and spreads in frequency because of time variations in acoustic parameters, such as the depth-dependent sound speed and surface motion. When all three forms of spreading occur, it is termed FAT (frequency, angle, time) spreading. Any sound deliberately transmitted in the ocean, upon reception, is contaminated by noise and by echoes from the ocean boundaries and inhomogeneities called reverberation.

First, consider the simplest passive sonar system configuration: a nondirectional radiating point-target (source) with a nondirectional point-hydrophone (receiver) in a time-invariant, homogeneous (space-invariant), infinite medium. Here, a transmitted signal s(t) travels directly from the source to the receiver. At the receiver, the pressure field is given by

p_s(t) = s(t − R_sr/c) / R_sr   (1)

where R_sr is the range of the receiver with respect to the source. The signal p_s(t) is corrupted by additive noise n(t), which is white, Gaussian, and isotropic in the most restricted case. This noise originates from sources in the ocean (radiated noise from ships and breaking waves on the ocean surface) and from noise introduced by the system electronics. Because noise is the primary source of interference in passive sonar systems, they are often called "noise-limited."

Next, consider the simplest active sonar system configuration: a nondirectional point-projector (source), a nondirectional point-hydrophone (receiver), and a point-target in a time-invariant, homogeneous (space-invariant), infinite medium. When the source signal is scattered from the point-target, the pressure field at the receiver is given by

p_t(t) = (a / R_st) s(t − R_st/c − R_tr/c) / R_tr   (2)
where a is proportional to the fraction of sound scattered by the point-target, R_st is the range of the point-target with respect to the source, and R_tr is the range of the receiver with respect to the point-target. Thus, the effect of propagation is a time delay and a decay in amplitude. The source signal is also scattered from the surface, the bottom, and volume inhomogeneities (fish) to produce reverberation. At the receiver, the reverberation pressure field is given by

p_rev(t) = Σ_i [b(i) / (R_sb(i) R_br(i))] s(t − R_sb(i)/c − R_br(i)/c)   (3)

where b(i) is proportional to the fraction of sound scattered by the ith scatterer, R_sb(i) is the range of the ith scatterer with respect to the source, and R_br(i) is the range of the receiver with respect to the ith scatterer. In a realistic ocean environment, b(i) may be proportional to a surface area for surface reverberation (surface roughness), or it may be proportional to a volume (volume reverberation). Because reverberation is the primary source of interference in active sonar systems, they are often called "reverberation-limited."

The purpose of sonar signal processing is to enhance the detectability of a particular type of signal in the presence of noise, reverberation, or any source of deliberate interference. Generally speaking, a sonar operator's ability to detect and track a target improves if a signal processing system increases the signal-to-reverberation ratio (SRR), the signal-to-noise ratio (SNR), or the signal-to-interference ratio (SIR), defined as the ratios of the expected received signal power to the expected power of the reverberation, of the noise, or of the reverberation and noise plus any deliberate interference, respectively. Accordingly, SRR, SNR, and SIR are measures of system performance. It has become customary to express the SNR and SRR in terms of the sonar equations, which are written as a sum of logarithms of the power or energy:

EL = SL − TL + TS
SNR = EL − NL   (4)
SRR = EL − RL

where EL is the echo level, TL is the transmission loss from the projector to the hydrophone, NL is the ambient noise level, and RL is the reverberation level. These and other terms commonly used in variations of the sonar equations that account for other factors affecting signal excess are given in Table 1. The accepted units for the sonar equations are μPa for pressure and meters for length. A real ocean environment is time-varying and inhomogeneous, and the noise field is anisotropic. Therefore, expressing the performance of a sonar system with a sonar equation is only approximate, because there are convolutional, rather than multiplicative, relationships between the source array, receiver array, target, and medium in time and space.

Table 1. Sonar Equation Terms

Term | Name                | Description
AN   | Ambient noise       | Power of ambient noise at hydrophone
DI   | Directivity index   | Measure of projector or hydrophone directivity
DT   | Detection threshold | Signal power required for detection
EL   | Echo level          | Echo power
SE   | Signal excess       | Excess of signal over detection threshold
SL   | Source level        | Power level of projector
TL   | Transmission loss   | Power drop due to spreading and absorption
TS   | Target strength     | Measure of target reflectivity

The transmission loss, noise level, and reverberation level depend on how acoustic energy spreads (propagates) away from a projector. Two types of spreading are commonly considered: spherical spreading (deep water, short range, all frequencies) and cylindrical spreading (shallow water, medium and low frequencies, long range). Consider transmission loss. If spherical spreading occurs, then TL = 20 log r + α_L r, where r is the range from the projector to the hydrophone, and α_L is called the absorption loss coefficient. If cylindrical spreading occurs, then the change in range at long range is approximately given by TL = 10 log r + α_L r. The reverberation and ambient noise levels are also affected by the propagation. Consider the case of volume reverberation at medium and high frequencies, where scattering occurs at every point in the ocean. If spherical spreading occurs, then RL changes in range by −20 log r, where r is now the range from a colocated projector and hydrophone to a point in space. If cylindrical spreading occurs, then the change in range at long range is approximately given by −30 log r. Unlike volume reverberation, surface reverberation is independent of the type of spreading and changes in range by −30 log r. With a colocated projector and hydrophone, the time (range delay) is related to range by

t = 2r/c   (5)

Thus, formulas for reverberation yield the time-dependence of the expected power of the reverberation component of a received signal. Although the sonar equation is a simple tool, generally for "back-of-the-envelope" calculations, it is useful for quantifying the improvement gained through signal processing. A more detailed description of sonar equation terms is given in Ref. 3.

Conceptually, improvement of SNR or SRR is achieved in two separate ways because signals can be described as functions of both time (or frequency) and space (position). Filtering a received signal in the time domain or frequency domain exploits the coherence of the signal and eliminates noise or reverberation that does not occupy the intervals of time or frequency occupied by the signal. Filtering in the spatial domain allows sound to be directed toward, or received from, a particular direction and is accomplished by combining the signals from projectors or hydrophones distributed in the water. Filtering is a principal function of sonar signal processing described in detail in the following section.

FUNCTIONS OF SONAR SIGNAL PROCESSING

Sonar signal processing systems vary in their complexity and capability, depending on their application and the number of signals they process. Yet, almost all systems must do beam forming, matched filtering, detection, and background estimation. These functions are interrelated. In reception, they are performed sequentially as shown in Fig. 1. In transmission, only beam forming is done.

Figure 1. System architecture of a passive or active sonar receiver: hydrophone array elements feed a beam former, followed by a matched filter, background estimation, and detection.

Beam Forming

Many sonar systems, particularly those for military use, do not employ a single projector or hydrophone. Many sensors are used and arranged in a regular pattern. Such an arrangement, called an "array," allows projecting acoustic energy to, or receiving energy from, a given direction. Thus, the sonar operator, or autonomous weapon, can interrogate a particular volume of the ocean and avoid a large echo from an interfering target (a sea mount, the surface, a fish school) or reduce the interference from an acoustic noise source (distant shipping or a noise-source countermeasure). Beam forming is the combining of projector or hydrophone signals to direct or receive acoustic energy to or from a given direction in the ocean. The degree of precision with which this is accomplished depends on the spatial distribution and number of projectors or hydrophones and on the operating frequency.

Consider a monochromatic pressure plane wave of the form

p(t, x, y) = e^{j(ωt − k_x x − k_y y)}   (6)

where k = √(k_x² + k_y²) = ω/c is called the wave number, ω is the radian frequency, and c is the propagation speed. Also consider a horizontal linear array of uniformly spaced hydrophones as shown in Fig. 2. If we use the signal at the first hydrophone as a reference signal and realize that monochromatic signals are presented by each hydrophone, then the signal from each hydrophone is given by

r_i(t) = e^{jω[t − (i−1)(d/c) cos θ]},   for i = 1, ..., n   (7)

where θ is the plane-wave arrival angle. Suppose that the hydrophone signals are added together in the form of the weighted sum

y(t, θ) = e^{jωt} Σ_{i=1}^{n} w_i e^{−j(i−1)dk cos θ}   (8)

Figure 2. A horizontal line array with ten uniformly spaced hydrophones (element spacing d, plane-wave arrival angle θ).

where each weight w_i is a complex number. The sum is also a monochromatic signal, and if we constrain the weights to have magnitudes no greater than 1, then the amplitude of y(t, θ) is maximized if we choose the weights as

w_i = e^{j(i−1)dk cos θ},   for i = 1, ..., n   (9)

With this choice of weights, plane waves arriving from other directions do not produce an output signal with as large an amplitude as the signal arriving from angle (azimuth) θ. Thus, the choice of weights "steers" the array in the direction of the incoming plane wave. Figure 3 displays the magnitude of the response of the array previously described to plane waves arriving at all angles between 0° and 180° of azimuth.

Figure 3. A beam pattern for a horizontal linear array with ten hydrophones uniformly spaced by one-half wavelength. The beam pattern is steered 60° from boresight.

The plot in the figure is called a "beam pattern," with several features common to all beam patterns. First, there is a "main lobe," which points in the direction the beam is steered. The width of the main lobe reflects how tightly the acoustic energy is directed or received. The remainder of the beam pattern is composed of "sidelobes" and "nulls." It is desirable to have a beam pattern with a main lobe that is as narrow as possible and sidelobes that are as small as possible. The width of the main lobe and the maximum level of the sidelobes are changed by adjusting the magnitudes of the weights (called "shading") or by increasing the length of the array. For an array of fixed length and fixed number of projectors or hydrophones, shading reduces the sidelobe level but at the expense of a wider main lobe. Lengthening the array with more elements reduces both the main lobe width and the sidelobe level.

The linear array of uniformly spaced sensors is the simplest beam former to analyze. However, beam forming is done for any array configuration. In general, for n projectors or hydrophones arranged in a three-dimensional pattern, the beam-former output is given by

y(t, θ, φ) = e^{jωt} Σ_{i=1}^{n} w_i e^{−jωτ_i(θ,φ)}   (10)

where τ_i(θ, φ) is the time delay between the first and the ith sensor for a plane wave arriving at an azimuth of θ and an elevation of φ. Generally speaking, the beam pattern is a function of the array size in any one dimension and also of the operating frequency. Therefore, what really counts is the size of the array in wavelengths: the greater the number of wavelengths across an array, the narrower the beam width. Radar systems typically operate at frequencies in the GHz region, where the wavelengths are measured in centimeters or fractions of centimeters. The wavelengths for sonar systems are generally much larger. Hence, radar systems are generally capable of higher angular resolution for a fixed array size.

There are several common array configurations used in military sonar systems, some of which are shown in Fig. 4. Tactical sonar systems, which typically operate at frequencies from 1 kHz to 10 kHz, often employ towed line arrays hundreds of meters long. They also use spherical arrays mounted inside an acoustically transparent, water-filled housing installed on the hull of a ship or submarine. Figure 5 shows a spherical array mounted on the bow of a cruiser. Surveillance sonars, which typically operate at frequencies below 1 kHz, use large line or planar arrays mounted on the sea bottom or suspended in the water. These low-frequency arrays can also be hundreds or thousands of meters long. Torpedo sonars operate at frequencies above 10 kHz and employ planar arrays mounted on the torpedo's flat nose or on the side of the torpedo body.

Figure 4. Common sonar array configurations on ships, submarines, and deployed systems: towed line arrays, hull-mounted spherical and planar arrays, moored vertical arrays, and sonobuoy vertical arrays.

Figure 5. A spherical, midfrequency sonar array on the bow of a cruiser in drydock.

Although beam forming can be done with analog circuitry, digital processing is more convenient and, hence, is the principal form of implementation today. Analog circuitry is bulky, comparatively inflexible, and allows for only a small number of fixed beam patterns. In contrast, digital processing allows for almost any value of beam-forming weight, which can be derived adaptively in situ. For reception, beam forming is done on a computer using samples of the hydrophone outputs. On transmission, the signals for the projectors, each with its own unique time delay and amplitude, are generated by a computer, sampled, delivered to a digital-to-analog converter, and amplified to drive a projector.

As stated earlier, beam forming allows an operator to reduce the receiving sensitivity of a sonar to sources of noise or reverberation. In principle, this is accomplished by placing nulls in the beam pattern coincident with the angular positions of these sources. In the case of a linear array with uniformly spaced hydrophones, the beam pattern in Eq. (8) is a polynomial in e^{−jkd cos θ}. Therefore, placement of the nulls is equivalent to determining the roots of a polynomial. If a null is required at some θ = θ_0, then the polynomial in Eq. (8) must have a zero at e^{−jkd cos θ_0}. Because the polynomial is of degree n − 1, it can have as many as n − 1 unique zeros, and so as many as n − 1 nulls may be steered against interference sources. Placement of the zeros is accomplished by selecting appropriate values for the weights w_1, ..., w_n.

The previous formulation assumed that the direction of the interference sources is known, which allows direct calculation of the weights. In practice, calculation of the weights is done indirectly. One method for determining the weights begins with finding an estimate of the hydrophone data correlation matrix given by R = E{r r^H}, where r^T = {r_1(t), ..., r_n(t)} is a vector of monochromatic signals. The weights are determined by solving the minimization problem:

min_w w^H R w   subject to   w^H η(θ_d) = 1   (11)
where η^T(θ_d) = {1, e^{−jkd cos θ_d}, ..., e^{−j(n−1)kd cos θ_d}} and θ_d is the desired direction of maximum signal response, typically a "look direction" where a target exists. The solution is given by

w = R^{−1} η(θ_d) / [η^H(θ_d) R^{−1} η(θ_d)]   (12)
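A minimal numerical sketch of Eqs. (11) and (12) follows, using NumPy and a toy covariance matrix (unit white noise plus a single loud interferer at 90°; the array size, spacing, and scenario values are assumptions for illustration, not from the article):

```python
import numpy as np

n = 10                 # hydrophones in the uniform line array
kd = np.pi             # k*d for half-wavelength element spacing

def steer(theta_deg):
    """Steering vector eta(theta) as in Eq. (11)."""
    return np.exp(-1j * kd * np.arange(n) * np.cos(np.radians(theta_deg)))

theta_d = 60.0         # look direction, degrees
eta_d = steer(theta_d)

# Conventional (delay-and-sum) weights, scaled to satisfy w^H eta = 1.
w_conv = eta_d / n

# Adaptive MVDR weights of Eq. (12), with an assumed covariance model.
a = steer(90.0)                                   # interferer direction
R = np.eye(n) + 100.0 * np.outer(a, a.conj())     # noise + loud interferer
Ri_eta = np.linalg.solve(R, eta_d)                # R^-1 eta(theta_d)
w_mvdr = Ri_eta / (eta_d.conj() @ Ri_eta)

print(abs(w_conv.conj() @ eta_d))   # unity gain in the look direction
print(abs(w_mvdr.conj() @ eta_d))   # likewise for the adaptive weights
print(abs(w_conv.conj() @ a))       # conventional response at 90 degrees
print(abs(w_mvdr.conj() @ a))       # MVDR response at 90 degrees: a deep null
```

Both weight vectors satisfy the unity-gain constraint toward the 60° look direction, but the data-derived MVDR weights place a null on the interferer, reducing its response by roughly three orders of magnitude relative to conventional steering.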
This method works well if the echo or radiated signal in the hydrophone data from the target is dominated by noise and reverberation. This is usually true if noise-generating countermeasures are dropped by an evading target. The beam-forming problem for reducing noise and reverberation becomes more complicated if the sonar platform (ship, submarine, torpedo) or the sources of interference are moving. In this case, the angular positions of the sources move with respect to the sonar platform, and beam forming becomes a time-varying problem. This dictates modifying any algorithm for beam steering and null placement to use only timely data to derive an estimate of the correlative matrix R. One method, called the recursive least square (RLS) estimation algorithm, does this by exponentially weighting the contribution of each measured time series used to estimate R, weighing heavily the most recently measured time series (4). To this point, beam forming has been presented in terms of receiving (directing) acoustic energy from (to) a remote point in space. Certain assumptions were made in deriving the results presented thus far. In particular, it was assumed that the array and point of interest are far enough apart to assume that an acoustic field is approximated as a plane wave. A more general view of receiving acoustic energy, called matched-field processing, recognizes that the acoustic field received is a complex function of the hydrophone and projector locations and the way sound propagates in the ocean. Suppose that a single source (projector) is placed in the ocean and the output signals are available from hydrophones
placed nearby in some general configuration. If the oceanic environment and the positions of the projector and hydrophones were exactly known, then the output signals from the hydrophones could be exactly predicted. Of course, in practice, only the hydrophone positions and output signals are measured, whereas the projector location and environment are usually not well known. It is possible, however, to assume values for the projector location and environmental parameters, calculate the resulting hydrophone output signals based on those assumptions, and compare them with the measured outputs. If the difference is small, then the assumed projector location and environmental parameters are close to the real values. This is the fundamental principle of matched-field processing (5).

To illustrate matched-field processing, consider a shallow-water oceanic environment, usually defined as any area where the depth is 300 m or less. In such an environment, it is known that the pressure field as a function of depth d due to a monochromatic omnidirectional source (projector) with amplitude A at range r_s and depth d_s is expressed by
p(d) = Σ_{n=1}^{N} a_n ψ_n(d),   a_n = [A / √(k_n r_s)] ψ_n(d_s) e^{−j k_n r_s}   (13)
where k_n is the horizontal wave number, and ψ_1(d), ..., ψ_N(d) are orthogonal functions called modes. The exact forms of the modes depend on the velocity of sound as a function of depth, c(d). If c(d) and d_s are known, then the hydrophone outputs can be predicted exactly, or at least to the limit of the accuracy of the mode propagation model used. In practice, only the outputs from hydrophones are available. Thus, if pressure measurements are available from a vertical array of M hydrophones, a measurement vector is formed with pressure measurements from different depths, written as p^T = {p(d_1), ..., p(d_M)}. A hypothesized pressure field vector is given by p̂^T = {p̂(d_1), ..., p̂(d_M)}, where
p̂(d) = Σ_{n=1}^{N} â_n ψ_n(d),   â_n = [B / √(k_n r̂_s)] ψ_n(d̂_s) e^{−j k_n r̂_s}   (14)
where r̂_s is the hypothesized source range, d̂_s is the hypothesized source depth, and B is chosen so that p̂^H p̂ = 1. Assuming that the modes are known, the matched-field processor output is given by the inner product of the measured field and the normalized hypothesized field:

P(r̂_s, d̂_s) = |p̂^H p|²   (15)

If M is sufficiently large that it may be assumed that

Σ_k ψ_i(d_k) ψ_j*(d_k) ≈ 0   for i ≠ j   (16)

it follows that

P(r̂_s, d̂_s) = |Σ_n â_n* a_n|²   (17)
Maximizing this sum with respect to r̂_s and d̂_s yields the best estimate of the source range and depth. Because it is assumed that the modes are known, the procedure described here is one of determining the correct weighted sum of modes that matches the measured pressure field. Hence, it is referred to as matched-mode processing. Matched-field processing is computationally intensive because it requires an exhaustive search over a multivariable acoustic parametric space. Significant computational benefits result from matched-mode processing because of the assumed structure of the pressure field (modes). However, the modal representation of an acoustic field is not appropriate in deep-water or range-dependent, shallow-water environments. Matched-field processing has been extended to include the estimation of more sonar system parameters, such as noise level and ocean acoustic properties, to achieve greater robustness.

Detection and Matched Filtering

Detection is the process of deciding whether a particular portion of the beam-former output contains a target echo. In its simplest form, it is merely deciding whether there is enough energy to declare that a target is present. This is typically accomplished by comparing the value of the beam-former output at a particular time with a threshold γ whose value is some multiple of the estimated background level. The decision is made using the recorded echo from a single transmission (single-ping detection) or several echoes (multiple-ping or sequential detection). The same detection algorithms used in radar systems are also employed in sonar systems. There is considerable processing of the raw hydrophone data before detection. First, beam forming is done to steer the sensitivity of the hydrophone array in several directions, allowing the operator to observe the entire environment. The beam-former outputs are then bandpass filtered to contain only the frequency band of interest and to eliminate out-of-band noise and reverberation.
This is followed by windowing, which divides the beam-former output into several overlapping pieces. Finally, each portion of the windowed output is Fourier transformed and displayed. At this point, detection is done. The output of a passive sonar signal processing system is displayed to an operator in several different ways. Typically, the squared magnitudes of the Fourier transforms of the windowed data are displayed as either a color contour (planar) plot or a waterfall plot. For a fixed beam, successive transforms are displayed, providing a two-dimensional display with frequency as one axis and time as the other. Alternatively, a fixed time is chosen (a single data window), and a two-dimensional display of frequency versus beam angle is shown. Passive systems identify the presence of target sources emitting signals of fixed frequency. Such targets appear as fixed lines, or "tonals," in the frequency-versus-time display previously described. An operator looks for such lines in the display, which, over time, drift in frequency because the target moves (motion-induced Doppler). In the frequency-versus-beam display, the target appears as a peak, which shifts from beam to beam because of its motion. Both displays also show the signatures of short, transient signals from the target. These signals appear as short lines or frequency sweeps. In either case, the tonals and transients are observed by an operator, who can thus track the target.
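The windowing-and-transform chain just described can be sketched in a few lines; the sample rate, tonal frequency, and window parameters below are illustrative assumptions:

```python
import numpy as np

fs = 1000.0                       # sample rate, Hz (assumed)
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(0)
# Simulated beam-former output: a weak 212 Hz tonal buried in noise.
x = 0.2 * np.sin(2 * np.pi * 212.0 * t) + rng.standard_normal(t.size)

# Divide into 50%-overlapping windows, Fourier transform each piece,
# and keep the squared magnitudes -- the rows of a waterfall display.
nwin, hop = 512, 256
win = np.hanning(nwin)
rows = []
for start in range(0, x.size - nwin + 1, hop):
    seg = win * x[start:start + nwin]
    rows.append(np.abs(np.fft.rfft(seg)) ** 2)
gram = np.array(rows)             # time-by-frequency "waterfall"

freqs = np.fft.rfftfreq(nwin, 1 / fs)
avg = gram.mean(axis=0)           # average over the successive windows
print(f"strongest line near {freqs[avg.argmax()]:.1f} Hz")
```

Averaged over the successive windows, the weak tonal stands out as a single persistent line near 212 Hz, even though it is invisible in the raw time series.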
SONAR SIGNAL PROCESSING
In its simplest form, detection in active sonar systems is essentially deciding between two mutually exclusive events: (1) only noise and reverberation are in the active echo (hypothesis H0) or (2) a target echo, noise, and reverberation are in the active echo (hypothesis H1). Detection in active sonar systems lends itself to automation, as in torpedoes, but can still involve an operator, as with many tactical and surveillance systems. After beam forming and filtering, an active echo r(t) is commonly processed by a matched-filter receiver:

m(α1, ..., αn) = |∫ r(t) g*(t|α1, ..., αn) dt|²   (18)

where g(t|α1, ..., αn) is the unity-energy filter function, which models the expected form of the target echo subject to the parameters α1, ..., αn, such as speed and range. In the case of a stationary point-target, the target echo is nothing more than a time-delayed version of the transmitted signal f(t). Thus,

g(t|τ) = f(t − τ)   (19)

More generally, if a point-target is moving, then the transmitted pulse compresses or expands on reflection. Thus,

g(t|τ, s) = f[s(t − τ)]   (20)

where s > 0 is the Doppler variable given by

s = (c ± v)/(c ∓ v) ≈ 1 ± 2v/c   (21)

where v is the range rate, or velocity of the target along the line of sight. More often, the Doppler effect is modeled as a simple spectral shift of the signal. In this case, if fc is the signal carrier frequency, then

g(t|τ, φ) = f(t − τ) exp(j2πφt)   (22)

where

φ = (s − 1)fc ≈ ±(2v/c)fc   (23)

is called the "carrier frequency Doppler shift." The matched-filter function in Eq. (20) is called the wideband, point-target reflection model, and the function in Eq. (22) is called the narrowband, point-target reflection model. As discussed at the end of this article, the wideband model is used when the signal bandwidth is a significant fraction of the signal carrier frequency. Without loss of generality, the narrowband model is used throughout the remaining discussion on detection.

The point-target models described above do not model the echoes from real-world targets. However, they are used in practice for several reasons. First, they are simple. Second, no general model for a target echo may be available, especially if the type of target is unknown. Finally, if the target is composed of many highlights, the matched filter produces a large response to each of the target highlights.

If we consider the case of searching for a moving target in a fixed direction, then we must perform matched filtering over a range of time delays and Dopplers. This yields a two-dimensional surface called a "range Doppler map," which contains peaks that are responses to one or more targets. The remainder of the surface is the response of the matched filter to noise and reverberation. Detection is accomplished by comparing the matched-filter output with a threshold, which is some fixed value higher than the average matched-filter response to the noise and reverberation. If the value of the surface exceeds the threshold, then a target is declared, and the bin is tagged as a target response. Otherwise, the bin is tagged as containing no target energy. The result is a simplified range Doppler map that contains the target responses and a few noise and reverberation responses that happened to exceed the detection threshold (false alarms).

The value of the detection threshold depends on the statistical nature of the target and clutter. Consider examining a range Doppler map at a point (τ0, φ0) where a target response exists. Let the value of the matched filter at this point [m(τ0, φ0)] be described by the random variable z. If the probability density functions of the two detection hypotheses, fZ(z|H0) and fZ(z|H1), are known, then the probability of detection is given by

Pd = ∫_γ^∞ fZ(z|H1) dz   (24)

where z is the matched-filter output. The probability of a false alarm is given by

Pfa = ∫_γ^∞ fZ(z|H0) dz   (25)

The density functions depend on the statistical nature of the noise, reverberation, and target. The simplest model is a nonfluctuating point-target in white Gaussian noise. In this case, if the return contains a target echo, the probability density function of the matched-filter output is given by

fZ(z|H1) = [1/(2σ²)] exp[−(z + A²)/(2σ²)] I0(A√z/σ²)   for z ≥ 0   (26)

where

σ² = E{m(τ, φ)}noise and reverb   (27)

and A is the amplitude of the return signal. This is known as the Rician density function, which is used to model the matched-filter response to a stationary point-target. If the return does not contain an echo, but only noise and reverberation, then the probability density function of the matched-filter output is given by

fZ(z|H0) = (1/σ²) exp(−z/σ²)   for z ≥ 0   (28)

Equation (26) must be integrated numerically, but the values have been tabulated and are available in almost any text on detection theory. The false alarm probability is determined in closed form, given by

Pfa = exp(−γ/σ²)   (29)

If the point-target fluctuates, and its amplitude is modeled as a complex Gaussian random variable, then the probability density function of the matched-filter output is given by

fZ(z|H1) = [1/(σT² + σ²)] exp[−z/(σT² + σ²)]   for z ≥ 0   (30)

where

σT² = E{m(τ, φ)}target   (31)

In this case, the probability of detection is given by

Pd = Pfa^(1/(1+SNR))   (32)

where the false alarm probability is given by Eq. (29), and the signal-to-noise ratio is given by

SNR = E{m(τ, φ)}target / E{m(τ, φ)}noise and reverb   (33)

Figure 6. The test bin, guard bins, and estimation bins used for estimating the background level for constant false alarm rate detection. (Axes: matched-filter output versus range delay; the test bin is flanked on each side by guard bins and, beyond them, estimation bins.)
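The threshold and detection-probability relations in Eqs. (29) and (32) are simple enough to evaluate directly. A small sketch follows; the numeric values are arbitrary examples, not from the article.

```python
import math

def threshold_for_pfa(sigma2, pfa):
    # Invert Eq. (29), Pfa = exp(-gamma / sigma^2), for the threshold gamma.
    return -sigma2 * math.log(pfa)

def pd_fluctuating(pfa, snr):
    # Eq. (32): Pd = Pfa^(1/(1+SNR)) for a complex-Gaussian (fluctuating)
    # point-target; snr is the power ratio of Eq. (33), not decibels.
    return pfa ** (1.0 / (1.0 + snr))

gamma = threshold_for_pfa(sigma2=1.0, pfa=1e-4)   # about 9.21
pd = pd_fluctuating(pfa=1e-4, snr=99.0)           # 20 dB SNR gives Pd near 0.91
```

This makes the Neyman-Pearson tradeoff discussed next concrete: fixing Pfa determines γ, and the achievable Pd then follows from the SNR.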
The previous equations reveal the dependence of the detection process on the detection threshold γ. There are a number of ways to choose a detection threshold, but the most common approach is to choose the false alarm rate first and then determine (and simply live with) the resulting probability of detection. This approach, known as the Neyman–Pearson detection method, is popular because setting the false alarm rate at an acceptable level avoids task-loading an operator with tracking too many false targets.

The probabilistic models described above are commonly used in detection analysis for sonar systems. They are used for a "first cut" analysis if no other information about the target or environment is available. However, sonar systems are routinely deployed in environments where the statistical fluctuations of the noise and reverberation cannot be modeled by a complex Gaussian process. The most common attribute of an environment that deviates from the simple models described above is that the tails of the probability density functions fZ(z|H0) and fZ(z|H1) contain more area than would be present if Gaussian statistics were valid. In such cases, using a threshold derived for a fixed false alarm rate under Gaussian noise and reverberation yields a true false alarm rate higher than predicted. In instances where non-Gaussian noise and reverberation prevail, extensive measurements must be performed to gather enough data to estimate the probability density function of the background and, if possible, the probability density function of the matched-filter response to the target. It is also possible to estimate the power of the noise and reverberation and to use the estimate to derive a detection threshold. This is known as background estimation.

Background Estimation

Background estimation is the process of estimating the power and frequency distribution of the noise or reverberation in the beam-former output during reception.
It is performed by examining a portion of the beam-former output time signal that is assumed to contain no target echo. It typically uses the discrete values of the beam-former output as inputs to a statistical estimation algorithm. The estimated background level
is then used to determine the detection threshold for a given false alarm probability.

Consider Fig. 6, which shows a target response in a matched-filter output. The output is divided into bins, which reflect the digitization of the analog data received from the beam former. It is assumed that the test bin contains the matched-filter target response and that the values in the estimation bins are used to estimate the expected value of the background. The guard bins are not used directly but provide a "buffer space" between the test bin and the estimation bins, so that no target energy "spills" into the estimation bins and biases the estimate. The simplest way to estimate the background level is to average the matched-filter values in the estimation bins. The estimated background level is given by

σ̂² = (1/M) Σi zi   (34)

where zi is a sample of the matched-filter output in the ith bin and the summation is taken over the M estimation bins. Assuming that the noise and reverberation are Gaussian, the probability of false alarm is given by Eq. (29). Therefore, substituting σ̂² for σ² in this equation and solving for γ yields the detection threshold used in the test bin:

γ = −σ̂² ln Pfa   (35)

The arrangement of estimation bins, guard bins, and test bin is then shifted to the right a fixed number of bins, usually commensurate with the resolution of the matched filter, and the estimation and detection process is repeated. The detection process described is called bin-average or cell-average constant false alarm rate (CFAR) processing because the probability of a false alarm has a fixed value. It works well as long as all of the estimation bins contain only noise and reverberation. If other target returns occupy the estimation cells, then the background estimate is biased high, and the detection threshold is too high. Thus, if the test bin contains a target response, it might not exceed the threshold, and the target is not detected. More robust estimation algorithms have been developed to circumvent this and other nonuniformities in the background. For example, a trimmed-mean estimate is performed, where the highest value acquired from the estimation cells is discarded before averaging. Alternatively, the mode of the values in the estimation cells is used as the background estimate. This is known as order-statistic CFAR processing.
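The cell-average CFAR procedure of Eqs. (34) and (35) can be sketched as a sliding-window sweep. This is an illustrative sketch, not the article's implementation; the window sizes and Pfa are arbitrary choices.

```python
import math

def cfar_detect(mf_output, pfa, num_guard=2, num_est=8):
    """Cell-average CFAR: for each test bin, average `num_est` estimation
    bins on each side (skipping `num_guard` guard bins), form the
    threshold of Eq. (35), and compare the test bin against it."""
    detections = []
    n = len(mf_output)
    for i in range(n):
        est = []
        # Leading estimation window (left of the leading guard bins).
        for j in range(i - num_guard - num_est, i - num_guard):
            if 0 <= j < n:
                est.append(mf_output[j])
        # Trailing estimation window (right of the trailing guard bins).
        for j in range(i + num_guard + 1, i + num_guard + 1 + num_est):
            if 0 <= j < n:
                est.append(mf_output[j])
        if not est:
            continue
        sigma2_hat = sum(est) / len(est)        # Eq. (34)
        gamma = -sigma2_hat * math.log(pfa)     # Eq. (35)
        if mf_output[i] > gamma:
            detections.append(i)
    return detections
```

A trimmed-mean variant, as described above, would simply discard the largest value in `est` before averaging.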
SCATTERING AND SIGNAL MODELING

Some knowledge of the scattering properties of the environment and target is essential for evaluating the performance of a sonar system. Because the matched filter is the principal processing algorithm in the detection stage of a sonar signal processing system, it is essential to understand how the matched filter responds to a return containing echoes from the target and the environment.

Signal Scattering and the Ambiguity Function

Consider the case of narrowband scattering, where it is sufficient to model a Doppler shift by a spectral shift. Under the assumption of wide-sense stationary scattering, it can be shown that the expected value of the matched-filter output for a scatterer is given by

E{m(τ, φ)} = ∫−∞^∞ ∫−∞^∞ S(τ̂, φ̂) |χ(τ̂ − τ, φ̂ − φ)|² dτ̂ dφ̂   (36)

where S(τ, φ) is the scattering function of the scatterer,

χ(τ, φ) = ∫−∞^∞ x(t) x*(t − τ) e^(−j2πφt) dt   (37)

is the narrowband uncertainty function, and |χ(τ, φ)|² is called the ambiguity function (9). The scattering function is estimated from measured data or derived if the geometry of the scatterers is simple. The integral in Eq. (36) is a linear convolution between the signal ambiguity function and the target scattering function.

The scattering functions of several simple scatterers are known. A simple point-target at range delay τ0 with a range rate inducing a Doppler frequency shift of φ0 has a scattering function that is a two-dimensional delta (Dirac) function:

S(τ, φ) = δ(τ − τ0, φ − φ0)   (38)

The scattering function of a line-target with the same range and Doppler and length L is given by

S(τ, φ) = G_{2L/c}(τ − τ0) δ(φ − φ0)   (39)

where

G_W(t) = 1 if 0 < t < W, and 0 otherwise   (40)

and is called the "rectangular-pulse function."

The scattering function of simple volume reverberation, as seen by high-frequency sonar systems, straddles the φ = 0 line, as shown in Fig. 7. The overall amplitude of the scattering function dies off according to the way energy spreads in the environment. For example, if acoustic energy propagates by spherical spreading, then the amplitude decays in range delay as 1/τ². The profile of the scattering function along the φ axis for a fixed range is usually modeled by a simple unimodal function (such as a Gaussian pulse), but for simple analysis it is modeled as a rectangular-pulse function.

Figure 7. A scattering function for volume reverberation. (Axes: Doppler shift φ versus range delay.)

Scattering function analysis lends itself to quick and simple analysis of system performance if simple models for the target (point or line) and the environment are used. It is used to estimate the relative expected values of the responses of the matched filter to target and reverberation, which are expressed as signal-to-noise ratios. Equation (36) also reveals that sonar system performance depends on the shape of the ambiguity function, which is controlled by modulating the sonar signal. Thus, the ambiguity function is another "parameter" that is adjusted by the system designer. A great deal of technical literature has been written about the signal design problem, which couches the problem in terms of the volume distribution of the ambiguity function. A few examples demonstrate this important point.

Consider a simple continuous-wave (CW) signal, which is nothing more than a gated tone given by

x(t) = (1/√T) G_T(t)   (41)

The narrowband ambiguity function for this signal is given by

|χ(τ, φ)|² = G_{2T}(τ + T) (1 − |τ|/T)² {sin[π(T − |τ|)φ] / [π(T − |τ|)φ]}²   (42)

Figure 8. A narrowband ambiguity function of a continuous-wave (CW) signal. (Axes: frequency shift versus delay.)

This ambiguity function is shown in Fig. 8. It is a simple "lump" whose width in range delay is T and whose width in Doppler
is approximately 1/T. These values determine the resolution of the signal. Point-targets separated in range and Doppler by more than these values appear as separate responses in a range Doppler map.

Now consider the case of a linear frequency-modulated (LFM) signal given by

x(t) = (1/√T) G_T(t) exp(jπBt²/T)   (43)

The narrowband ambiguity function for this signal is given by

|χ(τ, φ)|² = G_{2T}(τ + T) (1 − |τ|/T)² {sin[π(T − |τ|)(φ − Bτ/T)] / [π(T − |τ|)(φ − Bτ/T)]}²   (44)

This ambiguity function is shown in Fig. 9. The resolution of this signal is approximately 1/B in range and approximately 1/T in Doppler. Although these values are quite high and demonstrate the "pulse compression" property of the LFM, the signal cannot discriminate between point-targets separated in range and Doppler cells aligned with the time-frequency slope of the signal. Thus, the signal is used to overresolve (image) stationary targets of large range extent. It also offers some processing gain (SNR improvement due to matched filtering) over a CW against point-targets in volume reverberation.

Figure 9. A narrowband ambiguity function of a linear frequency-modulated (LFM) signal with BT = 30. (Axes: frequency shift versus delay.)

A number of other signals have been derived to control the volume distribution of the ambiguity function to make a sonar system more effective in detecting or imaging certain classes of targets. Of particular note are the time-frequency, hop-coded signals. Such signals are based on Costas arrays, one of which is displayed in Fig. 10 (8). If such a pattern is shifted vertically and horizontally, it intersects the original pattern at no more than one other "pulse." If a series of CW pulses is concatenated in time, each with a different frequency allocated in the same relative fashion as the pulses in the Costas array, then the narrowband ambiguity function looks much like a "thumbtack." An example of such an ambiguity function is shown in Fig. 11. Hop-code signals are used to image high-Doppler targets composed of several point highlights.

Figure 10. A Costas array for designing hop-code signals. (Axes: frequency versus time.)

Figure 11. The narrowband ambiguity function of a hop-code signal based on the Costas array in Fig. 10. (Axes: frequency shift versus delay.)

Wideband Versus Narrowband Processing

Thus far, it has been assumed that a Doppler shift could be modeled by a spectral shift, implying that the narrowband, point-target reflection model in Eq. (22) is valid. Use of such a model in matched filtering is called narrowband processing. When the relative motion between the sonar projector/hydrophone and a target is sufficiently large, the effects of time dilation must be considered. If this is true, then the wideband, point-target reflection model in Eq. (20) is valid. Use of such a model in matched filtering is called wideband processing.

Suppose that a signal of time length T and bandwidth W is transmitted from a stationary projector/hydrophone and is reflected by a target with an approaching line-of-sight velocity v. The received signal has length sT, where s is given by Eq. (21). Thus, the difference in signal duration is (s − 1)T. The signal range resolution is approximately 1/W. Therefore,
if the change in length is equal to or larger than this narrowband signal resolution, then the matched-filter output is large in two or more adjacent bins. In other words, the energy is split between the bins. This implies at least a 3 dB drop in the matched-filter response from that attained if narrowband processing is sufficient. Thus, the criterion for wideband processing is given by

(s − 1)T > 1/W   (45)

Using the formula for the carrier frequency Doppler shift in Eq. (23), the criterion can also be written as

Tφ > fc/W   (46)

Wideband processing implies that the scattering function and the signal ambiguity function must be defined differently. Accordingly, the expected value of the wideband matched-filter output is given by

E{m(τ, s)} = ∫_{τ̂=−∞}^{∞} ∫_{ŝ=0}^{∞} S(τ̂, ŝ) |χ[s/ŝ, ŝ(τ − τ̂)]|² dτ̂ dŝ   (47)

where S(τ, s) is the wideband scattering function,

χ(τ, s) = ∫−∞^∞ x(t) x*[s(t − τ)] dt   (48)

is the wideband uncertainty function, and |χ(τ, s)|² is called the wideband ambiguity function. The integral in Eq. (47) is not a linear convolution as defined in the narrowband case. The distinction is not always important for calculating back-of-the-envelope performance predictions. For example, the narrowband assumption is used when calculating processing gains for signals used for detecting slowly moving (low-Doppler) targets.
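The narrowband uncertainty function of Eq. (37) and the wideband criterion of Eq. (45) can both be checked numerically. The following is a sketch under stated assumptions: delays are restricted to whole samples, the sound speed defaults to a nominal 1500 m/s, and the test signal is the unit-energy CW pulse of Eq. (41).

```python
import cmath

def ambiguity(x, lag, phi, dt):
    """Discrete approximation to Eq. (37),
    chi(tau, phi) = integral x(t) x*(t - tau) e^(-j 2 pi phi t) dt,
    with the delay restricted to whole samples (tau = lag * dt)."""
    acc = 0j
    for i in range(len(x)):
        k = i - lag
        if 0 <= k < len(x):
            acc += x[i] * x[k].conjugate() * cmath.exp(-2j * cmath.pi * phi * i * dt)
    return acc * dt

def needs_wideband(v, T, W, c=1500.0):
    # Eq. (45): (s - 1) T > 1/W, with s - 1 approximated by 2 v / c from Eq. (21).
    return (2.0 * v / c) * T > 1.0 / W

# Unit-energy CW pulse, T = 1 s sampled at 1 kHz: x(t) = 1/sqrt(T) on (0, T).
dt, T = 1e-3, 1.0
x = [complex(1.0)] * 1000                        # 1/sqrt(T) = 1 for T = 1
peak = abs(ambiguity(x, 0, 0.0, dt)) ** 2        # |chi(0, 0)|^2 = 1 (unit energy)
null = abs(ambiguity(x, 0, 1.0 / T, dt)) ** 2    # first Doppler null at phi = 1/T
```

Consistent with the CW discussion above, the response has unit peak at the origin and its first Doppler null at φ = 1/T, illustrating the 1/T Doppler width.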
CONCLUSION

Readers seeking a more detailed general overview of sonar system design and deployment or an understanding of the environmental parameters that affect sonar system performance should consult references such as Urick (3). Readers seeking a knowledge of the basic theoretical material for sonar signal processing should consult references such as Burdic (6). Furthermore, the large volume of radar literature on filtering, detection, and beam forming also serves as foundational material for sonar signal processing.

Sonar signal processing algorithmic development is faced with inherent difficulties. First, the oceanic environment is hostile and highly variable: sound does not always travel in straight lines, important environmental parameters are often unknown in situ, and the knowledge of surface and bottom scattering mechanisms is incomplete and highly site-dependent. This makes it difficult to develop reliable detection and classification systems for general use. Second, practical systems are plagued by high sensor cost, difficulty in array deployment and recovery, power limitations, and communication constraints. Consequently, good target localization and reliable in situ environmental parametric estimation are difficult to achieve because there are often an insufficient number of projectors and hydrophones arranged in an inadequate array configuration.

Despite the difficulties cited, new developments in materials and electronics will allow the development of low-cost sensors, compact deployment systems, and high-speed signal multiplexing and processing electronics. This, in turn, will create new demands for sonar signal processing algorithmic development and present opportunities for improving sonar system performance.

BIBLIOGRAPHY

1. I. Tolstoy and C. S. Clay, Ocean Acoustics, Washington, DC: American Institute of Physics, 1987.
2. P. C. Etter, Underwater Acoustic Modeling, New York: Elsevier Applied Science, 1991.
3. R. J. Urick, Principles of Underwater Sound, New York: McGraw-Hill, 1983.
4. M. L. Honig and D. G. Messerschmitt, Adaptive Filters: Structures, Algorithms, and Applications, Boston, MA: Kluwer, 1984.
5. A. Tolstoy, Matched Field Processing for Underwater Acoustics, Singapore: World Scientific, 1993.
6. W. Burdic, Underwater Acoustic System Analysis, New York: Prentice-Hall, 1984.
7. B. D. Van Veen and K. M. Buckley, Beamforming: A versatile approach to spatial filtering, IEEE ASSP Mag., 5 (2): 4–24, 1988.
8. S. W. Golomb and H. Taylor, Construction and properties of Costas arrays, Proc. IEEE, 72: 1143–1163, 1984.
9. L. J. Ziomek, Underwater Acoustics: A Linear Systems Theory Approach, New York: Academic Press, 1985.

DAVID M. DRUMHELLER
Naval Research Laboratory
CHARLES F. GAUMOND
Naval Research Laboratory
BRIAN T. O'CONNOR
Naval Research Laboratory
Wiley Encyclopedia of Electrical and Electronics Engineering

Sonar Target Recognition
Standard Article
David H. Kil and Frances B. Shin, Lockheed Martin, Goodyear, AZ
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5407
Article Online Posting Date: December 27, 1999

The sections in this article are: Integrated Sonar ATR Processing; Real-World Experiments; Emerging Technologies in Sonar Target Recognition; Acknowledgment.
SONAR TARGET RECOGNITION
Sonar target recognition deals with identifying the source and nature of sounds by employing various signal-processing strategies. Target recognition includes detection (knowing something is out there), classification (knowing whether or not it is a target of interest), and identification (knowing the type of target). Sonar targets, such as submarines, surface ships, autonomous underwater vehicles, mines, and intruders, may be quiet or emit various sounds that can be exploited for passive sonar target recognition. There are passive and active modes of sonar target recognition. In passive sonar operation, typical sound emissions exploited for target recognition are as follows (1):

1. Transients. Unintentional (dropping a tool, hull popping from a depth change, periscope cavity resonances, etc.) and intentional (low-probability-of-intercept signals for navigation and communication) signals with short time duration and wideband characteristics

2. Machinery Noise. Noise caused by the ship's machinery (propulsion and auxiliary)

3. Propeller Noise. Cavitation at or near the propeller and propeller-induced resonances over the external hull

4. Hydrodynamic Noise. Radiated flow noise, resonance excitation, and cavitation noise caused by the irregular flow of water past the moving vessel

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.

While transients occur infrequently, the latter three types exist continuously. They collectively give rise to line-component (i.e., sinusoidal) and continuous spectra, which are known as passive narrowband (PNB) and passive broadband (PBB), respectively. Passive sonar processors perform signal processing on raw data generated by a number of passive sonar arrays mounted throughout the vessel, present both audio and video channels to sonar operators, and generate contact reports by comparing extracted signature parameters or features (harmonic lines characteristic of propeller types, transient characteristics, cavitation noise properties, and so on) with templates stored in the passive sonar database. Sonar operators listen to audio channels and watch displays before validating or correcting the processor-generated contact reports.

The second mode of sonar operation is active. Active sonar can be used to ensonify quiet targets. Echo patterns can give considerable insight into target structures, which can be useful for active target detection and classification. For instance, low-frequency sonars penetrate the body of the vessel, eliciting echoes caused by both specular reflection and the sound waves interacting with discontinuities in the body (2). High-frequency sonars are commonly used to image an unknown target after being cued by other long-range sensors. Mid-frequency sonars are used in tactical situations for target recognition by taking advantage of both specular echo patterns and moving target indication (MTI) based on Doppler after reverberation suppression (3). The operational concept of active sonar is very similar to that of radar.
Active sonar processors perform beam forming, replica correlation, normalization, detection, localization, ping-to-ping tracking, and display formatting. Sonar operators differentiate underwater targets from background clutter using echo returns.

Since the end of the Cold War, there has been a proliferation of regional conflicts in which the US Navy must project power in littoral waters in order to maintain peace. This paradigm shift has forced the US Navy to focus on shallow-water sonar processing. The shallow-water environment is characterized in general by (1) a high level of ambient noise, (2) complex propagation or multipath, and (3) substantial clutter from merchant ships, marine biologics, and complex bottom topography. Furthermore, new, quieter threats, such as diesel-electric submarines, are a major challenge to passive sonar target detection and recognition, especially when coupled with the shallow-water environment. As a result, most advanced sonar processors rely on a combination of active processing and full-spectrum passive processing that takes advantage of every available signal bandwidth for improved sonar target-recognition performance. The use of an active sonar to compensate for poor passive detection performance of quieter threats in shallow water, however, can pose problems because of too many echo returns unless automatic detection and recognition algorithms reduce the number of returns to a manageable level for sonar operators.
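The replica correlation step named above is, in essence, a cross-correlation of the received echo with a replica of the transmitted pulse. The following is a minimal illustrative sketch, not the article's implementation; it assumes real-valued samples and an energy-normalized replica.

```python
def replica_correlate(echo, replica):
    """Correlate a received echo with a unit-energy replica of the
    transmitted pulse; the index of the peak output estimates the
    echo's time delay in samples."""
    e = sum(r * r for r in replica) ** 0.5
    rep = [r / e for r in replica]          # normalize replica energy to 1
    out = []
    for lag in range(len(echo) - len(rep) + 1):
        out.append(abs(sum(echo[lag + i] * rep[i] for i in range(len(rep)))))
    return out
```

For an echo containing a clean copy of the replica, the output peaks at the lag where the copy begins, which is what the subsequent detection and localization stages exploit.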
The main objective of sonar automatic target recognition (ATR) is information management for sonar operators. Unfortunately, sonar ATR is confronted with many challenges in these situations. Active target echoes must compete with reverberation, clutter (any threshold-crossing detection cluster from nontarget events), and background ambient noise, while passive signals must be detected in the presence of interfering sources encompassing biologics, background noise, and shipping traffic. Furthermore, environmental variation in shallow water can alter signal structures drastically, thus degrading target-recognition performance. These challenges must be overcome through a synergistic combination of beam forming, signal processing, image processing, detection, situationally adaptive classification, tracking, and multisensor fusion.

Sonar ATR is an interdisciplinary field that requires diverse knowledge in acoustics, propagation, digital signal processing, stochastic processes, image understanding, hardware and software tradeoffs, and human psychology. The foremost task here is to convert a large amount of raw data from multiple sensors into useful knowledge for situational awareness and human decision making. The challenge is to design a robust system that provides a high probability of correct recognition (PCR) at low false-alarm rates (PFA) in complex and nonstationary environments. To design an effective sonar target-recognition system, we must explore a number of algorithms in the areas of signal projection or filtering, interference suppression, feature extraction, feature optimization, and pattern classification (4). The five crucial components of sonar target recognition are the following:

1. Signal sorting in various spaces, such as time, frequency, geometric space, and transformation space

2. Signal processing that takes advantage of the underlying physical mechanism by which target signatures are generated

3. Compact representation of signal attributes (features)

4. Design of a classifier that takes advantage of the underlying good-feature distribution

5. Performance quantification in terms of operationally meaningful criteria

In short, the key to achieving excellent target-recognition performance is an integrated and systematic approach that spans the entire spectrum of sonar processing in a mutually reinforcing manner. In this context, we introduce an integrated sonar ATR paradigm that addresses the five components effectively, as shown in Fig. 1. Data projection deals with representing signals as compactly as possible while preserving crucial signal attributes. Subspace projection transforms the raw data onto appropriate projection spaces in which signal attributes can be better captured and be less sensitive to extraneous variables, such as interference and environmental noise. Since we do not have a priori knowledge about good features, we initially extract as many pertinent features as possible. Feature ranking involves finding features that add value to target recognition and deleting the ones that do not. Classifiers estimate class-conditional probability density functions (pdfs) to map input features onto an output decision space. It is essential that this mapping algorithm be devoid of model-mismatch errors to achieve upper bounds in classification performance. The performance upper bounds in classification are conceptually similar to the Cramer-Rao lower bounds (CRLBs) in parameter estimation (5). Model-mismatch errors can occur if the classifier structure does not model the underlying good-feature pdf adequately. The CRLB concept allows us to assess whether poor performance is attributable to sensor limitation (sensors not providing enough useful information) or algorithm limitation (algorithms not capturing all the useful information in data).

Figure 1. The integrated ATR paradigm combines signal filtering, feature optimization, and classification to achieve maximum sonar target-recognition performance. (Block diagram: raw data, data projection, feature extraction, feature ranking, and classifier stages; classifier topologies include neural networks, hidden Markov models, conventional, hierarchical, and hybrid classifiers; classifier performance is assessed with confusion matrices, receiver operating characteristic curves, rank-order curves, and decision pending time; Cramer-Rao bounds distinguish sensor limits, prompting sensor improvement, from algorithm limits; a final real-time implementation stage covers memory, throughput, retraining, performance, and special chips.)

This article is organized as follows. We first study how various aspects of signal transformation, signal classification, and data compression can be combined in order to extract the maximum amount of useful information present in sensor data. Next, we apply sonar target-recognition theories to challenging real-world problems: active sonar classification and passive full-spectrum processing for transient signal classification. Finally, we explore new, advanced concepts in sonar target recognition. Throughout this article, our focus is on the general framework of sonar target recognition so that readers can appreciate the big picture of how sonar targets are recognized.
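The mapping from features to decisions through class-conditional pdfs, described above, can be illustrated with the simplest possible case: one scalar feature and two Gaussian class-conditional densities. This is a hedged sketch, not the article's classifier; the means and variances are arbitrary illustrative values.

```python
import math

def gaussian_llr(x, mean0, var0, mean1, var1):
    """Log-likelihood ratio for one scalar feature under two Gaussian
    class-conditional densities; positive values favor class 1 (target),
    negative values favor class 0 (nontarget)."""
    def logpdf(v, m, s2):
        return -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
    return logpdf(x, mean1, var1) - logpdf(x, mean0, var0)

# A feature value near the target-class mean is classified as target:
decision = gaussian_llr(4.8, mean0=0.0, var0=1.0, mean1=5.0, var1=1.0) > 0.0
```

Real classifiers must estimate these densities from data; model mismatch in the sense discussed above occurs when the assumed density family (here, Gaussian) does not fit the true feature distribution.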
INTEGRATED SONAR ATR PROCESSING

In this section, we introduce integrated sonar ATR processing and explain the role of each processing block within the system's context. Figure 2 depicts a general sonar-processing flowchart. Joint time-space processing sorts multiple signals as a function of time of arrival (TOA), direction of arrival (DOA), and spectral band. That is, any separation in TOA, DOA, or frequency is sufficient for signal deinterleaving. Beam forming handles DOA sorting, while wideband pulses are used for TOA sorting in active sonar. Each separated signal is then projected onto appropriate transformation spaces. The main purposes of signal projection are data compression and energy compaction. For example, a continuous wave (CW) time-domain signal can be projected onto the frequency domain by the Fourier transform. This signal-projection operation yields two related benefits: compression of the entire time-domain data into one
SONAR TARGET RECOGNITION
frequency bin, and signal-to-noise ratio (SNR) improvement by a factor of 10 log10 NFFT, where NFFT is the size of the fast Fourier transform (FFT). Not only does signal projection improve the probability of discriminating multiple sinusoids by virtue of data compression, but it also enhances algorithm robustness in parameter estimation thanks to the SNR gain. The key concept here is that multiple projection spaces should be investigated as a function of signal characteristics to obtain orthogonal, mutually reinforcing information for improved detection and classification. In general, most traditional detectors, such as a replica correlator or an m-out-of-n detector (m detections in n opportunities, where m < n, constitutes detection), rely on a single parameter—integrated energy after constant-false-alarm-rate (CFAR) processing—for detection (6). This approach is acceptable as long as the number of false returns that exceed the detection threshold remains reasonable. Unfortunately, the number of false alarms can be rather significant in today's operating environments. Instead of relying on the amplitude feature alone, we extract and fuse multiple signal attributes using a classifier. ATR can be performed in sequential steps, borrowing from the divide-and-conquer paradigm. In Fig. 2, we first perform target-versus-nontarget discrimination, followed by target identification. The latter processing can itself be broken into hierarchical steps depending on the complexity of target types (7). Furthermore, both static and dynamic features, coupled with integration of frame-based classification scores, can be used to improve the confidence level of target identification.
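As a quick numerical check of the SNR-gain claim above, the sketch below projects a noisy CW tone onto the frequency domain and measures the realized gain; the signal sizes and the median-based noise-floor estimate are our own illustrative choices, not part of the original text.

```python
import numpy as np

def fft_snr_gain(n_fft=1024, snr_in_db=-10.0, seed=0):
    """Measure the SNR improvement obtained by projecting a noisy CW
    tone onto the frequency domain (energy compaction into one bin)."""
    rng = np.random.default_rng(seed)
    n = np.arange(n_fft)
    f_bin = 100                                   # tone centered on an FFT bin
    amp = np.sqrt(2.0) * 10 ** (snr_in_db / 20)   # tone power amp^2/2 vs. unit-variance noise
    x = amp * np.cos(2 * np.pi * f_bin * n / n_fft) + rng.standard_normal(n_fft)

    spec = np.abs(np.fft.rfft(x)) ** 2
    # Median over bins is a robust estimate of the noise floor
    snr_out_db = 10 * np.log10(spec[f_bin] / np.median(spec))
    return snr_out_db - snr_in_db                 # realized processing gain (dB)
```

For NFFT = 1024 the realized gain typically lands within a few dB of the predicted 10 log10 NFFT ≈ 30 dB; the exact value moves with the noise draw and the noise-floor estimator.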
Now, we discuss signal projection, feature optimization, and target recognition thoroughly.

[Figure 2 flowchart: raw data → joint time-space processing (clustering and data compression; signal sorting based on time, frequency, and direction of arrival) → signal projection (interference suppression, signal filtering, transformation, and echo processing) → first-level feature extraction (dynamic and static feature extraction; signal attributes over time; per-event signal characteristics) → detection and classification (target vs. nontarget; identification: what type of target?) → information fusion and decision architecture (tracking; situational awareness).]
Figure 2. For high-performance sonar target recognition, many processing elements—beam forming, signal projection, tracking, and pattern recognition—must work in cooperation within the overall systems framework. In this article, we focus on the boldfaced blocks.

Signal Projection and Feature Extraction

The main objective of signal projection is low-dimensional signal characterization, which naturally leads to subspace filtering. Figure 3 illustrates the basic concept of signal projection. Let y = f(x), where x and y represent raw and projected data, respectively. Here f(·) is a projection operator that transforms x into y in order to represent x compactly. The behavior of x is governed by the probability law derived from its components: target and clutter. That is, the probability law consists of two conditional pdfs, P(x|target) and P(x|clutter). In general, the overlap between the two class-conditional pdfs is quite high, rendering target recognition difficult in x. Signal projection alleviates this problem by projecting x onto y, in which both target and clutter components are captured with a much smaller set of parameters (dimension reduction or energy compaction) (5). More important, capturing target and clutter components in a reduced dimension improves the probability of separating target and clutter in y—subspace filtering. Therefore, the criteria for selecting projection algorithms are the amount of energy compaction and the extent to which various signals can be separated.

[Figure 3 schematic: interference suppression. In the raw space (axes x1, x2, x3), x = xtarget + xclutter + nx, with P(xtarget) and P(xclutter) inseparable in x. After projection y = f(x) (axes y1, y2), y = ytarget + yclutter + ny, with P(ytarget) and P(yclutter) separable in y. Estimating ŷclutter = yclutter gives x̂clutter = f⁻¹(ŷclutter) and x̂target = x − x̂clutter. Ideally, signal projection y = f(x) must achieve both dimension reduction (Ry < Rx) and separation of multiple classes—target and clutter in this case—to facilitate automatic target recognition (ATR).]
Figure 3. Conceptual framework of signal projection—dimension reduction and subspace filtering. In general, dimension reduction occurs when the number of basis functions in y for representing a signal is less than that in x. nx and ny refer to noise in x and y, respectively.

We present two examples to illustrate the effectiveness of signal-specific data projection. In adaptive interference suppression, the interference component can be modeled more efficiently in the projected vector space spanned by y. After interference modeling, its structure in x can be estimated through the reverse transform and coherently subtracted from the original time-series data, as shown in Fig. 3. One such approach is principal component inversion (PCI), where the interference structure is modeled as a linear combination of orthogonal basis vectors derived from a Toeplitz data matrix (8). This approach has been applied successfully to reverberation suppression for CW, hyperbolic frequency-modulated (HFM), and linear frequency-modulated (LFM) waveforms. Figure 4 shows the results of PCI on reverberation suppression for a CW waveform. Note that PCI was able to recover a low-Doppler target hidden in reverberation.

The second example deals with time-frequency representation of sonar transients. Although the short-time Fourier transform (STFT) is the most widely used time-frequency distribution function, Ghitza's ensemble interval histogram (EIH) deserves special mention here because of the importance of aural processing in sonar target recognition. EIH is based on an auditory neural model (9) that consists of two parts: the preauditory part comprising a bank of cochlear
[Figure 4 panels: original and recovered (solid) signal spectra, magnitude (dB) vs. frequency (bin), with the target marked; range (bin) vs. Doppler frequency (bin) maps of reverberation + signal and of the recovered weak signal.]
Figure 4. PCI estimates the interference structure using principal components and coherently subtracts it from the raw waveform to extract the weak signal.
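A minimal sketch of the PCI idea follows; the Hankel-structured (Toeplitz up to ordering) data matrix, embedding size, rank-2 interference model, and synthetic test signal are illustrative assumptions, not the reference implementation of (8).

```python
import numpy as np

def pci_suppress(x, embed_dim=32, n_interf=2):
    """Principal component inversion (sketch): model the strong
    interference as the dominant principal components of a time-embedded
    data matrix, reconstruct it, and coherently subtract it."""
    N = len(x)
    H = np.lib.stride_tricks.sliding_window_view(x, embed_dim).T   # (d, K)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_int = (U[:, :n_interf] * s[:n_interf]) @ Vt[:n_interf, :]    # rank-n model
    # Average anti-diagonals to map the matrix model back to a time series
    interf = np.zeros(N)
    counts = np.zeros(N)
    K = H.shape[1]
    for i in range(embed_dim):
        interf[i:i + K] += H_int[i]
        counts[i:i + K] += 1
    return x - interf / counts        # residual: weak signal + noise

# Strong "reverberation" tone masking a weak target tone (illustrative)
rng = np.random.default_rng(1)
n = np.arange(2048)
x = (10.0 * np.sin(2 * np.pi * 100 * n / 2048)
     + 0.5 * np.sin(2 * np.pi * 225 * n / 2048)
     + 0.1 * rng.standard_normal(2048))
resid = pci_suppress(x)
```

After subtraction, the spectrum of `resid` is dominated by the weak tone (bin 225) rather than the strong interferer (bin 100), mirroring the recovery of a low-Doppler target in Fig. 4.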
[Figure 5 schematic: x(n) → bank of bandpass filters → level-crossing histograms → Σ → EIH(t, f); one bandpass filter output with period T = 1/f feeds multiple level-crossing detectors. Comparison panels (magnitude vs. frequency (bin) and range (bin)) show STFT and EIH outputs at low and high SNR.]
Figure 5. EIH is an auditory neural model that provides robust transient signal characterization, particularly at low SNR. This transient contains a dual-tone structure, which is preserved better with EIH than with STFT.
bandpass filters whose cutoff frequencies are logarithmically spaced for multispectral analysis, and the postauditory part, which performs spectral content estimation via multiple level-crossing detectors as shown in Fig. 5. Note that EIH captures the time-frequency characteristics of the transient, with its dual-tone structure, more accurately than STFT, particularly at low SNR.

After signal projection, features are extracted from each projection space. Feature extraction is a process by which signal attributes are computed from various projection spaces and fused in a compact vector format. Good features should possess the following desirable traits:

1. Large interclass mean separation and small intraclass variance
2. Insensitive to extraneous variables (little dependence on SNR)
3. Computationally inexpensive to measure
4. Uncorrelated with other features
5. Mathematically definable
6. Explainable in physical terms

Features can be broadly categorized into static and dynamic types. For very short events, we can extract static features that characterize the entire event period. For events with longer durations, it is often advantageous to compute key features at a fixed time interval so that their transition characteristics over time can be further exploited for signal discrimination. Intuitively, a hybrid classifier that can accommodate both static and dynamic features usually outperforms classifiers that rely exclusively on either static or dynamic features alone.

Feature Optimization

Feature optimization is an integral part of sonar target recognition and involves feature normalization and ranking based on an appropriate criterion. Normalization is necessary to prevent numerical ill-conditioning. Feature ranking can be broadly categorized into two types (4):
1. Derive M features y = [y1 ··· yM]^t from the original N features (M < N) by applying an M × N linear transformation matrix A or a nonlinear mapping function g(·) to the original feature vector x such that

y = Ax  or  y = g(x)    (1)

2. Rank individual features according to their contribution to the overall recognition performance. This can be further divided into computationally efficient single-dimensional feature ranking, computationally expensive multidimensional feature ranking, and, as a compromise, feature ranking in a compressed feature dimension. The multidimensional ranking approach is equivalent to a combinatorial problem of finding the best M-feature subset out of the N original features. We denote this method as a feature-subset selection approach.

Automatic Target Recognition—Mapping Features to Classifiers

The fundamental issue in classifier design is quantifying the extent to which a classifier captures all the useful information present in input features (training data) while remaining flexible to potential mismatch between training and test data. In order to achieve the performance of the optimal Bayes classifier, we need to approximate the class-conditional pdfs from the available training data and design a classifier architecture based on the estimated class-conditional pdfs. This approximation can take a parametric, nonparametric, or boundary-decision form. Figure 6 describes the relationship between feature extraction and classification succinctly.

[Figure 6 schematic: data compression from the N-dimensional vector space spanned by the raw data to an M-dimensional vector space (M < N) to the one-dimensional decision space created by a classifier. Classifiers are grouped by pdf assumption: parametric (functional form p(y|c) = Σk αk φk), nonparametric (kernel estimator (Parzen window), histogram, k nearest neighbor), and boundary decision (mapping error criterion as a function of an activation function).]
Figure 6. Classifiers map the vector space spanned by selected features onto a decision dimension.

In general, parametric classifiers make strong assumptions regarding the underlying class-conditional pdfs, while nonparametric classifiers estimate class-conditional pdfs from the available training sonar data. On the other hand, boundary-decision classifiers construct linear or nonlinear boundaries that separate multiple classes (targets) according to some error-minimization criterion. The key concept here is that some classifiers do better than others for certain feature sets. Therefore, synergy between a classifier and a good-feature subset must be maximized whenever possible. For example, if class-conditional pdfs exhibit unimodal, Gaussian characteristics, a simple parametric classifier may suffice. In contrast, if class-conditional pdfs are multimodal and non-Gaussian, nonparametric classifiers with adaptive vector quantization would be preferred to parametric classifiers. In essence, a system designer must perform judicious trade-offs in the areas of target-recognition performance and computational requirements during training and actual sonar system operation as a function of the amount of available training data, anticipated feature-space perturbation by environmental variation, and the need for in situ adaptation.

REAL-WORLD EXPERIMENTS

In this section, we apply these theories to two challenging, real-world problems. These examples illustrate how various signal-processing concepts in echo processing, filtering, and pattern recognition can be integrated to detect the presence of sonar targets.

Active Sonar Target Recognition

One of the most difficult challenges in active sonar processing is differentiating target returns from false returns. In impulsive-echo-range (IER) processing, an additional challenge is dealing with stochastic impulsive source variability. In order to resolve range ambiguities, impulsive sources are transmitted at a variable repetition rate in a multistatic environment. The goal of active sonar target recognition is to remove as much clutter as possible while maintaining an acceptable target-recognition performance for eventual confirmation by sonar operators. In this section, we present an active target-echo recognition algorithm using an integrated pattern-recognition paradigm that spans a wide spectrum of signal and image processing—target physics, exploration of projection spaces, feature optimization, and mapping the decision architecture to the underlying good-feature distribution (4,10).

Projection-Space Investigation. In general, selection of a projection space is domain specific and largely motivated by inputs from experienced sonar operators and phenomenology. For example, operators often listen for distinct "metallic" sounds for aural discrimination. This observation implies that various speech-processing algorithms can be applicable to sonar target recognition. Moreover, energy detector and time-frequency distribution (TFD) outputs seem to provide a good operator aid for visual discrimination. The complex time-varying echo structures dictate the use of frame-based processing to capture time-dependent signal attributes. Transformation algorithms should be able to perform both noise (ambient noise and reverberation) suppression and separation of target and clutter components.
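Looking back at the feature-ranking discussion, the computationally efficient single-dimensional ranking can be made concrete with a small sketch that scores each feature by a trait-1 style criterion (interclass mean separation over intraclass variance). The Fisher-like ratio and the synthetic features below are illustrative stand-ins for whatever criterion a fielded system uses.

```python
import numpy as np

def separability(feat_t, feat_c):
    """Trait-1 style score: interclass mean separation over pooled
    intraclass variance (higher is better)."""
    return (feat_t.mean() - feat_c.mean()) ** 2 / (feat_t.var() + feat_c.var())

def rank_features(X_t, X_c):
    """Single-dimensional ranking: score each feature independently,
    return indices best-first along with the scores."""
    scores = np.array([separability(X_t[:, i], X_c[:, i])
                       for i in range(X_t.shape[1])])
    return np.argsort(scores)[::-1], scores

rng = np.random.default_rng(0)
# Three synthetic features: well separated, weakly separated, useless
X_t = np.column_stack([rng.normal(2.0, 1.0, 500),
                       rng.normal(0.5, 1.0, 500),
                       rng.normal(0.0, 1.0, 500)])
X_c = np.column_stack([rng.normal(-2.0, 1.0, 500),
                       rng.normal(0.0, 1.0, 500),
                       rng.normal(0.0, 1.0, 500)])
order, scores = rank_features(X_t, X_c)
```

Because each feature is scored in isolation, this ranking is cheap but blind to feature correlations; that is exactly the gap the multidimensional (feature-subset selection) approach is meant to close.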
[Figure 7 flowchart: beam-formed raw time-domain data → constant-false-alarm-rate processing → thresholding → detection event clustering → projection onto multiple spaces: A-scan, FFT spectrum, time-frequency distribution (ensemble interval histogram, reduced interference distribution, short-time Fourier transform) with image compression, higher-order spectral analysis, linear prediction analysis with an ergodic hidden Markov model, compressed time-embedded phase map, and principal component analysis → feature extraction and optimization.]
Figure 7. The overall processing flow chart.
Figure 7 depicts the overall processing strategy, consisting of detection-cluster or snippet segmentation, feature extraction, feature optimization, fusion, and classification. First, we perform snippet segmentation based on CFAR detection-threshold crossing. Each segmented snippet is projected onto various projection spaces. We extract features from seven projection spaces: the smoothed energy or A-scan output; the FFT spectrum; TFDs using the STFT, the reduced interference distribution (RID) (11), and EIH; the higher-order spectrum (HOS) (12); principal component analysis (PCA); a compressed phase map (13); and a speech-related processing domain using linear prediction, cepstral, and delta-cepstral coefficients. Instead of extracting high-dimensional features from raw TFD and HOS projection spaces, we utilize an image-coding algorithm to achieve further data compression (14). After feature extraction, we perform thorough feature analyses for feature optimization and ranking to select the optimal feature subset based on an appropriate class-separability criterion. Finally, we evaluate the target-recognition performance using the selected feature subset and construct the best classifier topology. In essence, given the optimal feature subset, selection of the best classifier structure is equivalent to finding the best mapping function between input parameters (features) and desired outputs (class label—target or clutter). Now we describe projection spaces with good features in detail.

1. Temporal Space. Derived mainly from the energy detector and linear predictor outputs, temporal features provide clues on target extent and highlight structures (bow and stern planes, railings, and periscopes) as a function of aspect. For seamounts with a few distinct scatterers, the envelope structure is complex and asymmetrical, as measured by shape skewness and kurtosis, while a cylindrical target at broadside yields a symmetrical, Gaussian envelope shape. Good features from this projection space are pulse width, rise and fall times, and amplitude and shape statistics.

2. Time-Frequency Distribution with Image Compression. Features from the TFD attempt to capture spectral and temporal variations associated with the highlight structure and secondary arrivals from helical and flexural waves (15). We explore the following three TFDs to assess the impact of time-frequency resolution on active classification: STFT, RID, and EIH.

3. Compressed Phase Map. A phase map is a convenient way of representing time-embedded samples in a multidimensional state space and is quite effective in capturing the dynamics of low-dimensional, deterministic signals. A typical example can be found in nonlinear dynamical system modeling (13). For this application, we capture transitional signal characteristics from sample-to-sample differences of the energy detector output. For returns from smooth-surface objects, sample-to-sample deviations of the differencer output are small and their trajectory follows a well-defined path with small fractal dimension. Fractal dimension provides information on how much of the state space is filled by the trajectory. On the other hand, returns from complex-scattering objects, such as seamounts and wrecks, exhibit large trajectory fluctuations, leading to a diffused phase map with large fractal dimension. The same concept of subspace filtering is used to capture desirable signal transitional characteristics efficiently. That is, we use the singular value decomposition (SVD) to project noisy points in the state space R^d onto a new space R^Nr, where d and Nr represent the original embedding dimension (the total number of consecutive time samples used in constructing the state space) and the reduced dimension representing the signal subspace, respectively. The computational procedures are explained below.

a. Generate a differencer output as follows:

p(n) = [x(n) − x(n − 1)] / x(n)    (2)
where x(n) is the normalized energy detector output.

b. Construct a phase map matrix Φ of size d × K using time-delay embedding of the differencer output:

Pn = [pn pn−1 ··· pn−d+1]^t,  Φ = [P1 P2 ··· Pn ··· PK]    (3)

Here K is N − d, where N and d denote the total length of the differencer output pn and the embedding dimension, respectively.

c. Perform the SVD on the covariance matrix R = ΦΦ^t. Estimate the matrix rank using the minimum description length (MDL) criterion (16) to obtain orthonormal projection operators:

d(k) = −(p − k) Nav log10 { [ Π(i=k+1..p) λi^(1/(p−k)) ] / [ (1/(p − k)) Σ(i=k+1..p) λi ] } + 0.5 k (2p − k) log10 Nav    (4)

where Nav is the averaged sample size, p is the dimension of R, λi is the ith eigenvalue arranged in descending order of magnitude, and k = 0, 1, . . ., p − 1. The rank of R is equal to the value of k that minimizes d(k),
Nr = arg min_k d(k)    (5)
d. Use the estimated signal-subspace projection operator to project the full-rank matrix Φ to the compressed phase space:

R = U Λ U^t    (6)
Φr = U^t(1:Nr) Φ    (7)

where U(1:Nr) and Φr denote the left singular matrix with rank Nr and the compressed phase map, respectively.

4. Speech-Processing Features. The primary motivation for extracting speech-processing-related features is that the eye (visual) and the ear (aural) process the same information in somewhat different fashions. For example, the eye is capable of processing a large amount of information in a short time but tends to be deficient in details. On the other hand, the ear has a much higher dynamic range and resolution and thus can better distinguish details, but is slower than the eye. The main objective of applying frame-based speech processing to IER clutter reduction is to capture detailed acoustic transitional characteristics that cannot be captured adequately from the visual projection spaces. Echoes from objects with various structural properties—rib, air-filled cavity, solid filling (seamounts), chemical filling (mines)—can possess distinct sound characteristics, which can be compactly represented with linear prediction, cepstral, and delta-cepstral coefficients. Linear predictive coding estimates spectral phase and amplitude variation over time, while cepstral coefficients attempt to separate the spectral envelope from the underlying harmonic structure. We use standard ergodic hidden Markov models (HMMs) to characterize both target and clutter echoes (17,18). We extract features from concatenated log-likelihood ratio scores as well as transition and observation statistics associated with each state (2).

Real-Data Analysis Results. In this section, we present our clutter-reduction performance results based on real-data analysis and compare our performance with that of the baseline processing, which consists of CFAR detection and rule-based clutter rejection. For this analysis, we use segmented detection clusters from the shallow-water real-active-data set and ground-truth information obtained during data reconstruction.
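Before turning to the results, the compressed phase-map construction of item 3 (steps a–d, Eqs. (2)–(7)) can be sketched as follows; the embedding dimension, the eigenvalue floor, and the synthetic test signal are illustrative assumptions.

```python
import numpy as np

def differencer(x):
    """Step a, Eq. (2): p(n) = (x(n) - x(n-1)) / x(n)."""
    return (x[1:] - x[:-1]) / x[1:]

def phase_map(p, d):
    """Step b, Eq. (3): d x K matrix of time-delay-embedded columns
    P_n = [p_n, p_{n-1}, ..., p_{n-d+1}]^t, with K = len(p) - d."""
    K = len(p) - d
    cols = [p[n - d + 1:n + 1][::-1] for n in range(d - 1, d - 1 + K)]
    return np.array(cols).T

def mdl_rank(lams, n_av):
    """Step c, Eqs. (4)-(5): MDL rank estimate from the eigenvalues of
    R = Phi Phi^t, given in descending order."""
    p = len(lams)
    d_k = []
    for k in range(p):
        tail = lams[k:]
        geo = np.exp(np.mean(np.log(tail)))   # geometric mean of smallest p-k
        ari = np.mean(tail)                   # arithmetic mean
        d_k.append(-(p - k) * n_av * np.log10(geo / ari)
                   + 0.5 * k * (2 * p - k) * np.log10(n_av))
    return int(np.argmin(d_k))

def compress_phase_map(x, d=8):
    """Step d, Eqs. (6)-(7): project Phi onto the estimated signal subspace."""
    Phi = phase_map(differencer(x), d)
    lams, U = np.linalg.eigh(Phi @ Phi.T)     # ascending eigenvalues
    lams, U = lams[::-1], U[:, ::-1]          # descending, as in Eq. (4)
    Nr = max(mdl_rank(np.maximum(lams, 1e-12), Phi.shape[1]), 1)
    return U[:, :Nr].T @ Phi                  # compressed phase map Phi_r

# Smooth, low-dimensional "energy detector output" (illustrative)
x = 5.0 + np.sin(2 * np.pi * 0.02 * np.arange(400))
Phi_r = compress_phase_map(x, d=8)
```

A smooth envelope yields a compressed phase map with only a few rows, while a diffuse, complex-scattering return would keep more of the embedding dimensions, which is what the fractal-dimension feature is meant to capture.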
After extracting features from the seven projection spaces, we perform a comprehensive feature analysis for feature pruning and optimization prior to classification performance analysis. We evaluate target-recognition performance using the top 10 to 15 features. Borrowing from the divide-and-conquer paradigm, we perform hierarchical sequential pruning classification in two steps: primitive and fine classification (7). During the first stage of primitive classification, the pulse width is used to reject obvious false contacts. We use a conservative prescreening threshold to ensure that there is little risk of false dismissal of genuine target echoes. Not only is this approach computationally attractive due to the reduced number of detection clusters to process during the computationally intensive second stage, but it provides an additional benefit of not
having to waste degrees of freedom on modeling obvious false contacts later in fine classification. For the second-stage fine classification, we derive clutter-reduction performance from an average of 64 independent runs to minimize performance bias caused by uneven class population. We evaluate the performances of the multivariate Gaussian classifier (MVG), k-nearest-neighbor classifier (KNN), nearest-neighbor classifier (NNC), probabilistic neural network (PNN), and fast backpropagation neural network (FBPN) (4) to determine the most appropriate classifier architecture. Since the underlying multidimensional feature pdfs exhibit unimodal characteristics with reasonable class separation, as
shown in Fig. 8, MVG and PNN perform quite well while KNN and NNC perform poorly. (KNN and NNC are nonparametric classifiers that estimate class-conditional pdfs from a small fraction of training data. This procedure can backfire if class-conditional pdfs are unimodal.) Boundary-decision classifiers, such as FBPN, perform well initially as decision boundaries are relatively simple for a small decision dimension. Nevertheless, as the decision dimension increases, the class boundaries become more complex and FBPN’s performance suffers. In summary, MVG and PNN provide the best performance because their mapping structures match the underlying good-feature pdfs.
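The classifier-to-pdf matching argument can be illustrated with a toy experiment: on unimodal Gaussian features, a parametric MVG classifier should match or beat a nearest-neighbor rule. This is a self-contained sketch with synthetic data, not the article's MVG/NNC implementations.

```python
import numpy as np

class MVG:
    """Multivariate Gaussian (parametric) classifier: fit one mean and
    covariance per class, decide by maximum Gaussian log-likelihood."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.stats = {c: (X[y == c].mean(0), np.cov(X[y == c], rowvar=False))
                      for c in self.classes}
        return self
    def predict(self, X):
        ll = []
        for c in self.classes:
            mu, S = self.stats[c]
            d = X - mu
            Si = np.linalg.inv(S)
            ll.append(-0.5 * np.einsum('ij,jk,ik->i', d, Si, d)
                      - 0.5 * np.log(np.linalg.det(S)))
        return self.classes[np.argmax(ll, axis=0)]

def nnc_predict(X_tr, y_tr, X):
    """Nearest-neighbor classifier (nonparametric): label of closest sample."""
    d2 = ((X[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    return y_tr[np.argmin(d2, axis=1)]

# Unimodal Gaussian "good-feature" pdfs with some class overlap
rng = np.random.default_rng(7)
n = 400
X = np.vstack([rng.normal([1.5, 0.0], 1.0, (n, 2)),
               rng.normal([0.0, 1.5], 1.0, (n, 2))])
y = np.repeat([1, 0], n)
idx = rng.permutation(2 * n)
tr, te = idx[:500], idx[500:]
mvg_acc = float((MVG().fit(X[tr], y[tr]).predict(X[te]) == y[te]).mean())
nnc_acc = float((nnc_predict(X[tr], y[tr], X[te]) == y[te]).mean())
```

With unimodal, near-Gaussian classes, the parametric model uses its degrees of freedom efficiently, whereas the nearest-neighbor rule pays a variance penalty for its flexibility; with multimodal, non-Gaussian classes the ranking would typically reverse.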
[Figure 8 panels: good-feature pdfs (x axis = normalized feature value, y axis = probability; solid = target, dotted = clutter) for pulse width, shape mean, shape standard deviation, amplitude skewness, amplitude standard deviation, rise time, TFD shape kurtosis, and fractal dimension; the pdfs exhibit unimodal, reasonably normal shapes, so MVG and PNN offer the best performance. Performance rank-order curves plot recognition performance (0.65–0.95) against the number of features (0–30) for MVG, KNN, NNC, PNN, and FBPN. ROC curves plot P(target|target) vs. P(target|clutter) for the five classifiers.]
Figure 8. Performance rank-order curves are useful in determining an appropriate decision dimension in classification. Since good-feature pdfs (solid, target; dotted, clutter) seem unimodal and slightly non-Gaussian with some class overlap, PNN and MVG perform the best.
Figure 9 shows receiver operating characteristic (ROC) curves for the baseline and risk-reduction processing with the two computational resource constraints in an operationally meaningful format. For this analysis, we use both one-dimensional and multidimensional feature-ranking algorithms to assess the clutter-reduction performance. The motivation for using the computationally expensive multidimensional feature-ranking algorithm is that it enables us to derive the performance upper bounds for a given data set and feature set. The baseline processing consists of a constant-false-alarm-rate normalizer, a short-time averager, and a threshold detector. The baseline rule-based screener uses pulse width and fall time for clutter rejection. We used the baseline performance as a benchmark with which our risk-reduction performance was compared. Operating points are derived from the echo returns after detection as a function of SNR. Our real-data analysis results indicate that we can achieve maximum classification performance with approximately 10 to 15 features. Note that using the first risk-reduction algorithm with one-dimensional feature ranking based on the multimodal overlap measure (MOM), defined as

MOMi = ∫ min[P(yi|target), P(yi|clutter)] dyi    (8)

where yi is the ith feature (the lower the MOM, the better the corresponding feature in differentiating target from clutter), we were able to achieve over 90% false-alarm reduction from the baseline/no-screener approach. The bottom ROC curves show the clutter-reduction performance comparison between the computationally inexpensive features (derived from the A-scan, FFT, and STFT outputs) and features extracted from the seven projection spaces in the traditional PD-versus-PFA format. With the top 15 features, we were able to achieve an additional 4.5% improvement in overall correct classification performance (88.6% to 93.1%) for snippets that exceed the lowest SNR threshold. This improved performance translates to a 5% increase in P(target|target) (PD jumped from 0.85 to 0.90) and a 50% reduction (7.8% to 3.9%) in P(target|clutter).

[Figure 9 panels: risk reduction vs. baseline performance with 1970s hardware constraint—PD (0–1) vs. false alerts per ping per 24 buoys (0–300) for no screener, baseline screener, and risk reduction with one-dimensional and multidimensional feature ranking; risk-reduction performance with modern hardware constraint—PD vs. PFA (10^-3 to 10^0) for one-dimensional and multidimensional ranking.]
Figure 9. Classification ROC curves demonstrate clutter-reduction performance improvement with our sequential hierarchical classification approach at four different SNRs. The bottom figure shows the improved clutter-reduction performance with the modern hardware constraint at the lowest SNR only. RR stands for risk reduction. PD = P(target|target). PFA = P(target|clutter). Arrows show performance improvement.

Passive Sonar Target Recognition
In order to maximize recognition performance of passive target emissions, it is important that we understand and exploit the underlying signal microstructure. PBB acoustic signatures often exhibit a microstructure that has time-varying, low-dimensional characteristics if projected onto an appropriate transformation space. With this in mind, we investigate how our knowledge of signature characteristics can be reflected on the PBB algorithm design to enhance targetrecognition performance in shallow water. For this analysis, we use SWell-EX1 and PBB data sets provided by the Naval Research and Development (NRaD) and the Office of Naval Research (ONR), respectively (19). Our processing strategy is based on exploitation of any microstructure inherently present in the target signature by projecting raw data onto various projection spaces, identification of key parameters or ‘‘features’’ crucial in determining the presence of a signal, designing a classifier topology that best matches the underlying feature distribution, and thorough detection performance analysis and comparison with that of a traditional energy detector to quantify performance gains as a function of input SNR. Technical Approach. Figure 10 depicts the PBB processing flowchart consisting of subspace projection, feature extraction, and classify-before-detect processing. We initially project raw data onto a time-frequency map using the STFT to capture time-varying striation patterns visible in the PBB target signature. The next step is to emphasize important target signature attributes with image compression and Viterbi line extraction. Image compression takes advantage of transform coding and principal component filtering to emphasize desirable signal components while suppressing noise. The Viterbi line extractor works as an adaptive, variable-length line integrator that enhances the time-varying striation pattern present in the PBB signature. 
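The transform-coding step can be sketched with a 2-D DCT that keeps only the largest-magnitude coefficients; the image, kept fraction, and thresholding rule below are illustrative assumptions, not the article's image-compression algorithm.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct2_compress(img, keep=0.05):
    """Transform-coding sketch: keep only the largest-magnitude 2-D DCT
    coefficients (fraction `keep`), zero the rest, and reconstruct.
    Smooth striation-like patterns compact into few coefficients;
    wideband noise does not, so it is largely discarded."""
    C = dctn(img, norm='ortho')
    thresh = np.quantile(np.abs(C), 1.0 - keep)
    C_sparse = np.where(np.abs(C) >= thresh, C, 0.0)
    return idctn(C_sparse, norm='ortho'), C_sparse

# Striation-like pattern buried in noise (sizes are illustrative)
rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 128)
pattern = np.outer(np.cos(2 * np.pi * 3 * t), np.cos(2 * np.pi * 2 * t))
noisy = pattern + 0.3 * rng.standard_normal((128, 128))
rec, C_sparse = dct2_compress(noisy, keep=0.05)
```

Keeping 5% of the coefficients retains most of the pattern energy while discarding most of the noise, which is the "emphasize desirable signal components while suppressing noise" behavior described above.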
Figure 11 demonstrates the effectiveness of the Viterbi line extractor in recovering weak time-varying frequency lines.
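A minimal dynamic-programming line extractor in the spirit of the Viterbi line extractor described above can be sketched as follows; the transition penalty, step limit, and synthetic spectrogram are illustrative choices.

```python
import numpy as np

def viterbi_line(spec, max_step=2, penalty=0.5):
    """Find the frequency track that maximizes accumulated spectrogram
    energy minus a penalty on bin-to-bin frequency jumps (an adaptive,
    variable-length line integrator). spec is (time, freq)."""
    T, F = spec.shape
    score = spec[0].copy()
    back = np.zeros((T, F), dtype=int)
    for t in range(1, T):
        best = np.full(F, -np.inf)
        arg = np.zeros(F, dtype=int)
        for step in range(-max_step, max_step + 1):
            cand = np.roll(score, step) - penalty * abs(step)
            if step > 0:          # mask wrap-around from np.roll
                cand[:step] = -np.inf
            elif step < 0:
                cand[step:] = -np.inf
            better = cand > best
            best[better] = cand[better]
            arg[better] = np.arange(F)[better] - step   # predecessor bin
        score = best + spec[t]
        back[t] = arg
    path = np.zeros(T, dtype=int)       # backtrack the best path
    path[-1] = int(np.argmax(score))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

# Weak wandering tone in noise, as in Fig. 11 (illustrative)
rng = np.random.default_rng(3)
T, F = 60, 40
true_line = (20 + 8 * np.sin(np.linspace(0, 3, T))).astype(int)
spec = rng.random((T, F))
spec[np.arange(T), true_line] += 3.0
path = viterbi_line(spec)
```

Because the integration length adapts to the path rather than to a fixed window, slowly wandering lines accumulate energy coherently while incoherent noise does not.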
[Figure 10 flowchart: beam-former output (time-domain data) → preprocessing → short-time Fourier transform → Viterbi line extraction, energy integration, and image compression → feature extraction → frame-based classification → classification score integration → passive target contact report.]
Figure 10. The PBB classify-before-detect flow chart.
The objective of the classify-before-detect processing is to utilize a decision space spanned by multiple, mutually reinforcing discriminatory features, which is more favorable than the traditional amplitude decision space based on integrated energy, particularly at low SNR. Finally, we compare the performance of our classify-before-detect algorithm with that of the conventional energy detector in terms of ROC curves and processing gain as a function of input SNR.

Real-Data Analysis Results. In this section, we present real-data analysis results. Figure 12 shows STFT spectrograms of the typical PBB target signature before and after various transformations: singular value decomposition (SVD), two-dimensional (2-D) discrete cosine transform (DCT), and compressed 2-D DCT. The signal that we are interested in detecting occupies the middle half of the spectrograms. We initially extract a total of 64 features from the three projection spaces and perform thorough feature optimization and classification performance analysis using the Integrated Pattern-Recognition Toolbox. We achieve maximum recognition performance using 8 to 10 features. We evaluate the extracted feature set with five classifiers that represent the three broad classifier categories: parametric, nonparametric, and boundary decision. Since the good-feature pdfs are both non-Gaussian and multimodal, nonparametric classifiers based on vector quantization or k nearest neighbors outperform the others. We quantify the performance of the classify-before-detect algorithm in terms of ROC curves and processing gain as a function of input SNR and compare it with that of the traditional energy detector. For performance evaluation of our algorithm, we use randomly partitioned, independent training and test data sets for algorithm tuning and cross validation. Figure 12 displays the ROC curve comparison of our
(Figure 11 panels: clean signal spectrogram, corrupted signal spectrogram, and Viterbi spectrogram; frequency (bin) versus time (range bin).) Figure 11. The Viterbi line extractor can effectively recover weak wandering frequency lines.
SONAR TARGET RECOGNITION
(Figure 12 panels: (a) raw STFT, SVD, 2-D DCT, and compressed 2-D DCT spectrograms, frequency (bin) versus time (bin), showing the advantages of signal projection in revealing the signature microstructure; (b) ROC curve comparison at SNR = −25 dB and PBB performance gain, STA versus classify-before-detect, output SNR versus input SNR; (c) pdfs of N and S+N at SNRs of −15 and −25 dB for the STA and cumulative LLR outputs, a visual illustration of the advantages of the CBD algorithm on transient detection.)
classify-before-detect algorithm with the energy detector. We also summarize and compare the processing gain of the two detectors. Overall, we achieve an average of 10 dB of additional detection performance improvement with the classify-before-detect approach over the traditional energy detector. The integration sizes for the short-term averager (STA) and the classify-before-detect processing are 10 and 5 frames, respectively. We deliberately compare the performance of our algorithm with the 5-frame integration to that of the STA with 10 frames to provide a slightly pessimistic performance comparison. That is, using an integration size of 10 for the classify-before-detect processing would have resulted in a higher processing gain. The input SNR is measured with respect to the full band
Figure 12. PBB acoustic signature and SWellEx-1 ambient noise spectrograms and the CBD algorithm performance summary. N and S + N denote noise and signal + noise, respectively.
while the output SNR is derived from the STA and cumulative log-likelihood ratio (LLR) pdf plots using the deflection index criterion. Note that the output SNR in decibels is 10 log[Δμ²/(2σsσn)], where Δμ is the mean difference between the signal-plus-noise and noise-only pdfs, and σs and σn denote the standard deviations of the signal-plus-noise and noise-only pdfs, respectively. Since the STA processing involves STFT, envelope detection, and two-dimensional integration (signal subband and time), the output SNR is not a simple function of the temporal integration size. The advantage of the classify-before-detect algorithm can be better appreciated by a qualitative look at the pdf plots of the STA and classify-before-detect cumulative LLR outputs. Figure 12 shows the signal-plus-noise and noise-only pdfs of
the two processing outputs at input SNRs of −15 and −25 dB. At −25 dB, the two pdfs at the STA output completely overlap, rendering detection in the amplitude space very difficult if not impossible. On the contrary, pdf plots derived from the cumulative LLR output show a good separation, indicating that a judicious selection of features combined with an appropriate classifier topology is crucial in achieving an additional detection performance improvement.
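The deflection-based output SNR just described can be illustrated numerically. The sketch below assumes the definition 10 log10[Δμ²/(2σsσn)] used above; the sample arrays are hypothetical and serve only to make the computation concrete:

```python
import numpy as np

def output_snr_db(sig_plus_noise, noise):
    """Deflection-index output SNR: 10*log10(dmu^2 / (2*sigma_s*sigma_n)),
    where dmu is the mean separation between the S+N and N output pdfs."""
    dmu = np.mean(sig_plus_noise) - np.mean(noise)
    sigma_s = np.std(sig_plus_noise)
    sigma_n = np.std(noise)
    return 10.0 * np.log10(dmu**2 / (2.0 * sigma_s * sigma_n))

# Hypothetical detector outputs under the two hypotheses
noise_out = np.array([0.0, 2.0, 0.0, 2.0])      # noise-only samples
signal_out = np.array([2.0, 4.0, 2.0, 4.0])     # signal-plus-noise samples
print(round(output_snr_db(signal_out, noise_out), 2))  # 3.01
```

Larger mean separation or smaller output spreads raise this figure, which is why the well-separated cumulative LLR pdfs yield a higher output SNR than the overlapping STA pdfs.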
EMERGING TECHNOLOGIES IN SONAR TARGET RECOGNITION

The two key areas for future research are accurate quantification of classification performance upper bounds and situationally adaptive target recognition. In this section, we first explore the underlying concepts of data compression, class separability, and sufficient statistics in the context of estimating performance upper bounds in classification. Next, we provide insights into developing a reconfigurable feature-classifier architecture to accommodate environmental variability.

Classification Cramer-Rao Bounds

Let us make a suite of measurements y that can be described by the probability function p(y), where θ parametrizes p(y) and pθ(y) = p(y|θ). If z = f(y), where the dimension of z is smaller than that of y and pθ(y|z) = p(y|z), then we say that z captures all the useful information in y. Furthermore, z is more memory efficient than y since f(·) compresses y into a sufficient statistic (7,20). Sufficient statistics are closely related to class separability. In general, the optimality score J is measured by

J(θ, h, zΩ) = (1/Ny) ∫y=h(zΩ) CS[pθ1(y|zΩ), . . ., pθNc(y|zΩ)] dy   (9)

where Ny is the dimension of y, zΩ is the overlapped region (between two classes) in z that gets projected onto y via a mapping operator h(·) (h(·) is in essence f⁻¹(·) and a function of a classifier structure), and CS(·) is a class separability function that measures the degree of feature-space overlap between classes. In essence, a classifier performs the f(·) operation. Therefore, θ is equivalent to a class label, while y and z denote an input feature vector and a classification LLR score, respectively. In short, the degree of sufficient statistics can be measured by class separability in the multidimensional feature space Ω.

This concept can be reinforced with an interesting two-class, two-feature problem, as shown in Fig. 13. In this case, we use the following two classifiers:

1. Linear Fisher's Classifier (LFC). This is a simple boundary-decision classifier that computes a weight vector ω that maximizes the Rayleigh quotient ωᵗSbω/ωᵗSwω, where ω is the first eigenvector of the following generalized eigenvalue problem:

Sb x = λ Sw x   (10)

where λ1 > λi for i > 1. Sb and Sw refer to the interclass and within-class scatter covariance matrices, respectively. For a two-class problem, ω can be directly computed by

ω = Sw⁻¹(μ1 − μ2)   (11)

where μi is the ith class mean vector. The LLR score can be approximated as ωᵗy, where y is an input test feature vector. Frequently, the two classes may share the same mean vectors but can be differentiated by the difference in the covariance matrices. In this case, we can use the generalized likelihood ratio test (GLRT) concept to derive the weight vector as the eigenvector of R1⁻¹R2 associated with the largest eigenvalue, where Ri is the ith class covariance matrix. In short, depending on the estimate of Δμ,

ω = Sw⁻¹(μ1 − μ2) if Δμ > γ; otherwise, ω = eigenvector of R1⁻¹R2 associated with the largest eigenvalue   (12)

A successive implementation of LFC coupled with token pruning (i.e., feature vectors or tokens that fall into separable regions are pruned so that the next-stage LFC works with the remaining feature tokens, a successive approximation of class-conditional pdfs) at each stage forms the backbone of a discriminant neural network (DNN) architecture (4).

2. Multivariate Gaussian Classifier (MVG). This is a parametric classifier that assumes that the multidimensional feature pdf can be characterized by its mean vector μ and covariance matrix R. Mathematically, it computes the Mahalanobis distance associated with each class and selects the class with the shortest distance:

d(i) = (y − μi)ᵗ Ri⁻¹ (y − μi)   (13)

iy = arg min(1≤i≤Nc) d(i)   (14)

LLRij = d(i) − d(j)   (15)

where i and Nc refer to the class index and the number of classes, respectively. iy is the selected class label for an input test feature vector y. For this problem, the two class-conditional pdfs, pθ1(y) and pθ2(y), are both normal with the same covariance matrix but with different mean vectors. Naturally, MVG, or LFC with ω that maximizes the Rayleigh quotient, is the Bayes classifier. In order to measure the extent to which MVG captures useful information present in the two input features, the following class separability function is used:

CS = |pθ1(y|zΩ) − pθ2(y|zΩ)|   (16)
where zΩ is the region in z with high class overlap. As expected for a class separability measure, CS ≈ 0 when pθ1(y|zΩ) ≈ pθ2(y|zΩ). The areas in z with relatively little class overlap are excluded since prediction errors in those regions are minimal. That is, we zero in on the area with the most prediction errors to investigate the extent to which prediction performance can be further improved. Theoretically, MVG is the Bayes classifier for this problem of known class-conditional pdfs.

Figure 13. For a two-class problem with multivariate Gaussian pdfs, MVG is the Bayes classifier. MVG and LFC with a suboptimal weight vector of [−0.45, 0.89] yield J of 0.036 and 0.196, respectively. A J score of zero means that the two class-conditional pdfs in y derived from the overlapped region in z (i.e., Ω) completely overlap, capturing all the useful information in the original feature space y. (Panels: original features y1 versus y2, projected pdfs in z, and overlapped-region pdfs for y1 and y2, shown for MVG and for LFC with the suboptimal weight.)

For comparison, LFC with a suboptimal weight vector ω of [−0.45, 0.89] in z = ωᵗy was implemented. As expected, MVG performs far superior to LFC, as evidenced by a smaller amount of class overlap in z. More important, the optimality score J for MVG is much lower than that for LFC. Based on numerous experiments with a number of known and unknown class-conditional pdfs, a J of less than 0.0375 implies that a classifier is in essence the Bayes classifier (21). That
is, the correct classification performance of around 70% in this case cannot be further improved by changing the classifier architecture. Instead, we should concentrate on gathering additional input data to improve the information content.
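The two classifiers discussed in this section can be sketched in a few lines: the LFC weight of Eq. (11) and the MVG minimum-Mahalanobis-distance rule of Eqs. (13) and (14). The class statistics below are hypothetical, chosen only to make the computation concrete:

```python
import numpy as np

def lfc_weight(S_w, mu1, mu2):
    # Eq. (11): omega = S_w^{-1} (mu1 - mu2)
    return np.linalg.solve(S_w, mu1 - mu2)

def mvg_classify(y, means, covs):
    # Eqs. (13)-(14): Mahalanobis distance to each class; pick the smallest
    d = [float((y - mu) @ np.linalg.inv(R) @ (y - mu)) for mu, R in zip(means, covs)]
    return int(np.argmin(d))

# Hypothetical two-class statistics: shared covariance, different means
mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
S_w = np.eye(2)

w = lfc_weight(S_w, mu1, mu2)   # Fisher weight vector
label = mvg_classify(np.array([1.9, 0.0]), [mu1, mu2], [S_w, S_w])
print(w, label)  # [-2.  0.] 1
```

With equal covariances, both rules agree, which mirrors the statement above that MVG (or LFC with the Rayleigh-quotient-maximizing ω) is the Bayes classifier for this problem.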
Situationally Adaptive Target Recognition

Environmental robustness requires that target-recognition algorithms be insensitive to extraneous confusion factors. In real-time implementation, we employ the following strategies to mitigate the negative impacts of environmental variation on target-recognition performance:

1. Implement more features than absolutely necessary for automated feature subset selection as a function of environment.

2. Train classifiers adaptively by joint supervised and unsupervised learning (22). In essence, the original class-conditional pdfs are used as a starting point, and as the system receives new data, it adaptively adjusts or estimates "slightly" different new class-conditional pdfs using a combination of self-organizing feature mapping and expectation-maximization algorithms.

3. If possible, collect and process new data with known ground truth.

4. Develop software toolboxes to facilitate rapid in situ algorithm optimization. Typical toolboxes deal with ground truthing and target-cluster segmentation, pattern recognition, and environmental prediction.

Nevertheless, it is imperative that we resort to a totally integrated approach that sequentially removes as much clutter as possible while accommodating environmental uncertainties. This approach would entail robust adaptive joint time-space filtering, matched-field processing, acoustic tomography, detection, reconfigurable feature extraction and classification (2), localization, and multiping- and multisensor-based fusion. Recent advances in acoustic communication permit in situ acoustic channel calibration that can be used to model the extent of target-signature distortion caused by rapid channel fluctuations (23). Furthermore, several university research teams are investigating how dolphins and bats use acoustic sonars to make fine discriminations between objects with small differences in material composition, shape, and interior in an adaptive fashion despite environmental variations (24,25). These research activities can shed light on the processing architecture of the future sonar target-recognition system. As computing power doubles every 18 months according to Moore's law, we are bound to witness an integrated sonar target-recognition system that can adapt to changing environmental conditions to provide robust performance in four to seven years.

ACKNOWLEDGMENT

The authors would like to thank Dr. Weita Chang, Dr. Rick Wayland, Dr. Dick Heitmeyer, Tom Hayward, and Dr. Marshall Orr for their support. The research efforts mentioned here were supported by the Naval Air Warfare Center under Contract No. N62269-94-C-1179, the Office of Naval Research under Project No. RJ14C42, and the Naval Research Laboratory under Contract No. N00014-93-C-2246.

BIBLIOGRAPHY

1. R. J. Urick, Principles of Underwater Sound for Engineers, New York: McGraw-Hill, 1967.
2. D. Kil, F. Shin, and R. Fricke, LFA target echo characterization with hidden Markov models and classifiers, J. Underwater Acoust., 41 (7): July 1995 (a special theme issue on multisensor fusion).
3. W. Chang and B. Bosworth, Performance comparison of neural network and conventional classifiers and significance of feature set for single ping active classification, Naval Undersea Warfare Center (NUWC) TR Report No. 10743, January 1995.
4. D. H. Kil and F. B. Shin, Pattern Recognition and Prediction with Applications to Signal Characterization, Woodbury, NY: AIP Press, 1996.
5. L. L. Scharf, Statistical Signal Processing, Reading, MA: Addison-Wesley, 1991.
6. M. I. Skolnik, Introduction to Radar Systems, New York: McGraw-Hill, 1980.
7. D. Kil and F. Shin, A unified approach to hierarchical classification, Proc. ICASSP, VI, Atlanta, GA, May 1996, pp. 1549–1552.
8. D. W. Tufts, D. H. Kil, and R. R. Slater, Reverberation suppression and modeling, in D. D. Ellis, J. R. Preston, and H. G. Urban (eds.), Ocean Reverberation, Boston: Kluwer, 1993.
9. O. Ghitza, Auditory models and human performance in tasks related to speech coding and speech recognition, IEEE Trans. Speech Audio Process., 2 (II): 115–132, 1994.
10. D. Kil, F. Shin, and R. Wayland, Active impulsive echo discrimination in shallow water by mapping target physics-derived features to classifiers, IEEE J. Oceanic Eng., 22: 66–80, 1997.
11. J. Jeong and W. J. Williams, Kernel design for reduced interference distributions, IEEE Trans. Signal Process., 40: 402–412, 1992.
12. C. L. Nikias and M. R. Raghuveer, Bispectrum estimation: A digital signal processing framework, Proc. IEEE, 75: 869–891, 1987.
13. C. Myers et al., Modeling chaotic systems with hidden Markov models, Proc. ICASSP, IV, San Francisco, CA, April 1992, pp. 565–568.
14. D. Kil and F. Shin, Reduced dimension image compression and its applications, Proc. Int. Conf. Image, III, Washington, D.C., October 1995, pp. 500–503.
15. C. N. Corrado, Jr., Mid-frequency acoustic backscattering from finite cylindrical shells and the influence of helical membrane waves, Ph.D. dissertation, MIT, Cambridge, MA, 1993.
16. M. Wax, Detection and estimation of superimposed signals, Ph.D. dissertation, Stanford University, Stanford, CA, 1985.
17. A. S. Weigend and N. A. Gershenfeld (eds.), Time Series Prediction, Reading, MA: Addison-Wesley, 1994.
18. L. R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proc. IEEE, 77: 257–285, 1989.
19. F. B. Shin and D. H. Kil, Full-spectrum processing using a classify-before-detect paradigm, J. Acoust. Soc. Am., 99: 2188–2197, 1996.
20. E. Real, Feature extraction and sufficient statistics in detection and classification, Proc. ICASSP, VI, Atlanta, GA, May 1996, pp. 3049–3052.
21. D. Kil and F. Shin, Cramer-Rao bounds on stock price prediction, J. Forecasting, 1997 (a special issue on neural networks).
22. B. Shahshahani and D. Landgrebe, Classification of multi-spectral data by joint supervised-unsupervised learning, TR-EE-94-1 (Purdue University Technical Report), Purdue University, Lafayette, IN, January 1994.
23. M. Johnson, M. Grund, and D. Brady, Reducing the computational requirements of adaptive equalization in underwater acoustic communications, Proc. Oceans, III, San Diego, CA, October 1995, pp. 1405–1410.
24. A. Simmons, Biosonar acoustic imaging for target localization and classification by bats, SPIE Conf. 3079, Orlando, FL, April 1997, pp. 7–13.
25. N. P. Chotiros et al., Observation of buried object detection by a dolphin, SPIE Conf. 3079, Orlando, FL, April 1997, pp. 14–18.
DAVID H. KIL FRANCES B. SHIN Lockheed Martin
Wiley Encyclopedia of Electrical and Electronics Engineering
Sonar Tracking
Standard Article
Vivek Samant and Dale Klamer, ORINCON Corporation, San Diego, CA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5408
Article Online Posting Date: December 27, 1999
The sections in this article are: Correlation, Association, and Fusion; Classification and Identification; Sensor Management in Fusion Systems; Mathematical Formulation and Representations for the Sensor Management Function.
SONAR TRACKING
Over the past three decades, a large number of investigators have contributed to the theoretical and practical aspects of sonar tracking. Our intent in this article is to present key developments that give the reader a sufficiently complete overview of many topics in tracking, with particular emphasis on sonar tracking. Comprehensive treatment of these topics can be found in Blackman (1), Waltz and Llinas (2), Antony (3), Bar-Shalom (4,5), and Bar-Shalom and Fortmann (6). The invention of the Kalman filter is perhaps the single most influential technological advance that has made possible the current mature state of sonar tracking. Necessarily, our exposition includes a discussion of the Kalman filter. Although during the early stages of development the Kalman filter provided a computationally revolutionary mechanism for estimating the state of a tracked object, practical real-life applications in sonar tracking were limited to single-target tracking because of the limitations imposed by the computing capabilities of the processing hardware. In sonar tracking, the source of information consisted of only passive acoustic sensors. As computing resources became more readily available, multiple-target tracking capabilities were developed. Thus emerged the concept of developing an overall integrated surveillance scene containing multiple targets. Capabilities were developed for processing information from a variety of sensor systems, in addition to acoustic sensors, to develop a sonar scene. Multisensor, multitarget tracking systems have been routinely used in a variety of applications during the past two decades. In many applications, and especially in high-clutter sonar environments, it became evident that a single hypothesis regarding the scene, used to represent the interpretation of all the inputs from all the sensor systems, was not adequate. In Ref. 7, a new approach was proposed to represent the information using multiple simultaneous interpretations in the form of multiple-scene hypotheses. Thus began a new era in sonar tracking, with a number of approaches developed to deal with ambiguity, efficiency, and accuracy. Our discussion includes a fairly complete review of many issues related to the multihypothesis tracking (MHT) subject. Much of the early development of algorithms and techniques in sonar tracking focused on the topic of tracking the state of individual objects. In the parlance of the more encompassing domain of data fusion, individual target tracking is considered to be occurring at Level 1, also known as object refinement, of information processing. In the more recent past, the focus of these developments has shifted to the higher level of information content. The concepts of situation refinement, threat refinement, and process refinement were the natural evolutionary steps in the development of tracking. The related theoretical topics include use of both knowledge-based techniques and fuzzy-neural representations, and new developments in sensor management and fusion strategies. Much of the discussion that follows presents these areas in more detail.

CORRELATION, ASSOCIATION, AND FUSION
The central problem in multisensor, multitarget sonar tracking is the data association problem of partitioning contacts into tracks and false reports. This problem is formulated as multiscan processing; it is valid for either centralized fusion or decentralized tracking. The mathematical formulation of the data association problem is separated from the algorithms that solve it. Before discussing problem formulation, a brief review of data association follows. General approaches to single-scan processing include nearest neighbor, global nearest neighbor (solved by the two-dimensional assignment problem), probabilistic data association (PDA), and joint PDA (JPDA). The former two approaches are real-time, but decisions once made are irrevocable, leading to poor track estimation, to fragmentation, and even to loss of tracks. The latter two approaches have been successful for tracking in heavy clutter, but have had difficulties with closely spaced targets. Another class of methods is called deferred logic, or multiscan, processing. The most popular method is called multiple-hypothesis tracking (MHT). These methods are well suited to tracking a potentially large number of targets in a cluttered environment. The fundamental problem for multiscan processing is to maximize the probability of data partition into tracks and false reports (8–10). The data association problems for multisensor and multitarget tracking are generally posed as maximizing the posterior probability of the set of tracks (given the data) according to

Maximize {P(Π = π|Z^N) | π ∈ Π*}   (1)

where Z^N represents N data sets or scans, π is a partition of indices of the data (and thus induces a partition of the data into tracks), Π* is the finite collection of all such partitions, Π is a discrete random element defined on Π*, P(Π = π|Z^N) is the posterior probability of a partition π being true given the data Z^N, and P is the probability measure of a partition π of the cumulative data Z^N into tracks and false reports. For the assignment formulation, under independence assumptions, this problem is equivalent to finding a solution of

Minimize −ln [P(π|Z^N)/P(π0|Z^N)] ≡ Σ(i1=0 to M1) · · · Σ(iN=0 to MN) c(i1 . . . iN) z(i1 . . . iN)   (2)

where c(i1 . . . iN) is the negative log of the likelihood ratio L(i1 . . . iN),

z(i1 . . . iN) = 1 if (z_i1, . . ., z_iN) are assigned to the same track, and 0 otherwise

is a zero-one variable, and π0 is a reference partition consisting of N false reports. The constraints for this problem impose the requirement that each report z_ik from scan k must be assigned to exactly one track of data (z_i1, . . ., z_iN). This problem is precisely what all approaches to association and fusion try to solve. The difficulty is that the problem
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
(Figure 1 shows sensors feeding Level 1 object refinement, Level 2 situation refinement, Level 3 threat refinement, and Level 4 process refinement, with human-computer interaction; attributes include individual and organization entities, structure, location, activities, classification/identification, intent, and capabilities.) Figure 1. High-level JDL four-level data fusion functional model.
is nonpolynomial (NP)-hard, so that any algorithm that solves it is NP-hard, and all known algorithms that solve the problem optimally require a time that grows exponentially with the size of the problem. A fundamental problem with sequential processing is that data association decisions are irrevocable. MHT corrects this problem by allowing changes in the data association over the last N scans. Now consider N data sets Z(k), k = 1, . . ., N, with Mk reports {z_ik^k, ik = 1, . . ., Mk}, respectively; let Z(k) = {z_ik^k} denote the kth data set and Z^N = {Z(1), . . ., Z(N)} the cumulative data set. The data sets Z(k) may represent different objects, and each data set can be generated from different sensors. For track initiation, measurements are partitioned into tracks and false alarms. In track maintenance, which uses a moving window over time, one data set will be tracks and the remaining data sets will be scans of measurements. In sensor-level tracking, the objects to be fused are tracks from multiple sensors. In centralized fusion, the objects may be a combination of measurements that represent targets or false reports and tracks that have already been filtered; the problem is to determine which measurements emanate from a common platform.

Fusion Strategies

The Joint Directors of Laboratories (JDL) model divides data fusion processing into four levels. All four levels of processing use and share the same data and information, as shown in Fig. 1. Processing in Level 1 deals with object refinement, which is positional, kinematic, and attribute fusion of single tracks within the ocean. In Level 2 situation refinement processing, a description or interpretation of the current relationships among objects and events in the context of the environment is developed. Threat assessment, in Level 3 processing, develops a threat-oriented perspective of the data to estimate enemy capabilities, identify threat opportunities, estimate enemy intent, and determine levels of danger. Finally, Level 4 process refinement processing monitors and evaluates the ongoing fusion process to refine the process itself, for example, by tasking sensors to gather additional information or resolve ambiguities.
The JDL model defines the process of expanding from traditional statistical/mathematical techniques of fusion to include artificial intelligence for data assimilation, correlation, and abstraction, resulting in a "hybrid" system that uses cognitive processing technologies to add intelligence to the process of data fusion and determination of target identification. The advanced fusion technology analyzes the situation, as a human operator would, with awareness of the situation beyond the data being reported by the current sensors. With this awareness, the system can make inferences based on knowledge of the environment, the current state of the situation, threat tendencies, and the assets it has available to help resolve target identification.

Kalman Filtering

At the heart of data fusion algorithms is a tracking algorithm, typically a Kalman filter. Under certain conditions (11,12), the Kalman filter provides an optimal estimator that minimizes the mean square error. In addition, the Kalman filter can be implemented in an efficient recursive manner. In the case where a nonlinear relationship exists between the measurement vector and the state vector (for example, a range/bearing measurement where the tracking coordinates are x-y), an extended Kalman filter (EKF) or an iterated EKF (IEKF) provides a suboptimal approximation. A summary of the Kalman filter is presented in Table 1.

Table 1. Summary of the Nonlinear Iterated Extended Kalman Filter

Models:
x(k+1) = Φ(k+1) x(k) + ω(k+1)
z(k+1) = h(x(k+1)) + ε(k+1)
where ω(k) is N(0, Q(k)) and ε(k) is N(0, R(k))

Prediction:
x(k+1|k) = Φ(k+1) x(k|k)   (1)
P(k+1|k) = Φ(k+1) P(k|k) Φ(k+1)ᵀ + Q(k+1)   (2)

Iterative updates, for i = 0, 1, 2, 3, . . . (the extended Kalman filter is obtained by setting i = 0):
ẑ(k+1, i) = h(k+1)(x(k+1, i))   (3)
H(k+1, i) = ∂h(k+1)(x)/∂x evaluated at x = x(k+1, i)   (4)
r(k+1, i) = z(k+1) − ẑ(k+1, i) − H(k+1, i)[x(k+1|k) − x(k+1, i)]   (5)
C(k+1, i) = H(k+1, i) P(k+1|k) H(k+1, i)ᵀ + R(k+1)   (6)
K(k+1, i) = P(k+1|k) H(k+1, i)ᵀ C(k+1, i)⁻¹   (7)
x(k+1, i+1) = x(k+1|k) + K(k+1, i) r(k+1, i)   (8)
P(k+1, i+1) = [I − K(k+1, i) H(k+1, i)] P(k+1|k)   (9)

Initial condition: x(k+1, 0) = x(k+1|k)

The models describe the motion of the target, including the uncertainty of the model represented by the system noise ω(k), and the relationship between the state and the measurement. When a new measurement is processed, the first step of the Kalman filter is to predict the latest state estimate and its
covariance (or uncertainty) at the time of the measurement [Eqs. (1) and (2) of Table 1]. The next step is to estimate the expected measurement by using the predicted state, as specified by Eq. (3). Next, the residual between the estimated and actual measurement is computed [Eqs. (4) and (5)], along with its estimated covariance [Eq. (6)]. Finally, the Kalman filter gain [Eq. (7)] is computed and used to update the state estimate and its covariance [Eqs. (8) and (9)].

Bearing-Only Tracking. One of the fundamental problems of sonar tracking is performing localization from a set of measurements obtained from a passive sensor, i.e., given a set of passive bearings or line-of-bearing measurements, develop an estimate of the target position and velocity. Much effort has been focused on the issues of observability (the inherent information contained in the measurement set to provide a localization) and coordinate systems (13,14). The fundamental result on observability states that the relative motion between the observing platform and the target must be nonlinear. In simplest terms, if the target is on a constant-course/constant-speed leg, the observer must maneuver at least once before a localization solution can be computed. Details on localization can be found in Refs. 15 and 16. In order to isolate the problem of observability, researchers have investigated the impact of coordinate systems used in the Kalman filter. One popular coordinate system is the inverse polar coordinate system (β, β̇, r, ṙ/r), where β and β̇ are the bearing and bearing rate, respectively, and r and ṙ/r are the range and normalized range rate, respectively (17).

Gaussian Sum. Based on the fact that most density functions can be approximated arbitrarily closely by a sum of Gaussian density functions, the Gaussian sum approach (18,19) provides an attractive alternative to an inverse polar coordinate system for bearing-only tracking.
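As a minimal sketch of the Table 1 recursion, the following implements one predict/update cycle for the linear, non-iterated case (h(x) = Hx, i = 0); the one-dimensional model and numbers are hypothetical:

```python
import numpy as np

def kf_step(x, P, z, Phi, Q, H, R):
    """One predict/update cycle of the linear Kalman filter
    (Table 1 with h(x) = Hx and a single iteration, i = 0)."""
    # Prediction, Eqs. (1)-(2)
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Q
    # Residual and its covariance, Eqs. (5)-(6)
    r = z - H @ x_pred
    C = H @ P_pred @ H.T + R
    # Gain and update, Eqs. (7)-(9)
    K = P_pred @ H.T @ np.linalg.inv(C)
    x_new = x_pred + K @ r
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical 1-D example: static state, one noisy position measurement
x, P = np.array([0.0]), np.array([[1.0]])
Phi, Q = np.eye(1), np.zeros((1, 1))
H, R = np.eye(1), np.array([[1.0]])
x, P = kf_step(x, P, np.array([2.0]), Phi, Q, H, R)
print(x, P)  # [1.] [[0.5]]
```

With equal prior and measurement variances, the gain is 0.5, so the update splits the difference between prediction and measurement and halves the covariance, exactly the behavior the recursion above formalizes.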
Given a density function f with a finite number of discontinuities, f can be approximated arbitrarily closely by a finite sum of Gaussian density functions. Let

  f_K(x) = \sum_{k=1}^{K} \alpha_k N(x - \mu_k; \Sigma_k)

where N is the Gaussian density function with mean \mu_k and covariance \Sigma_k, with

  \sum_{k=1}^{K} \alpha_k = 1

and \alpha_k \geq 0 for all k. Then, by selecting \alpha_k, \mu_k, \Sigma_k, and K, f_K can approximate f to an arbitrary degree of closeness. Given a line of bearing, a sum of Gaussians can be used to approximate a bearing wedge, as depicted in Fig. 2. In addition, environmental information, such as direct path and convergence zone propagation, can be modeled directly as a Gaussian sum. Thus, the sum of Gaussians can be used to model the nonlinear bearing measurement. An advantage of the Gaussian sum approach is that a linear Kalman filter can be used by running K filters, one for each term in the Gaussian sum.

Figure 2. Gaussian sum approximation to a line of bearing measurement.

Optimal Assignment Strategies

We assume that part of the overall sonar system is a preprocessor that associates measurements. For example, an automatic line tracker on a gram provides the association of a specific narrowband signal source. Thus, contacts and measurements can be of two types: first, the sensor system provides an association of some of the measurements into contacts, in which case the reported contacts in a scan are either new (not previously reported) or old; second, the sensor system does not perform association, in which case all the contacts reported in a scan are new.

The measurement-to-track assignment problem is depicted in Table 2. The assignment matrix is structured such that the first M2 rows consist of possible new tracks and the last M1 rows consist of tracks from the current hypothesis. Note that the upper M2 x M2 block is simply a diagonal matrix (a measurement can be assigned to only one new track). The objective of the assignment function is to find a "best" set of solutions. The optimal solutions to assignment problems are given in Refs. 1 and 20. The "solution vector" assigns each measurement to some track in the hypothesis; each data point either updates an existing track within the hypothesis or is assigned to a new track. In this case, the optimal assignment is c1 to T4, c2 to T7, c3 to T6, c4 to T8, and c5 to T9. Note that for c2, the optimal track is T7, not T6 (which carries the largest individual likelihood), because the c2/T7 and c3/T6 assignment pair has a higher likelihood (0.49) than the c2/T6 and c3/T7 assignment pair (0.48).

Table 2. Assignment Problem Example

                              New contacts          Old contacts
                          c1     c2     c3        c4     c5
  New system track  T1    0.5    0.0    0.0       —      —
  New system track  T2    0.0    0.15   0.0       —      —
  New system track  T3    0.0    0.0    0.03      —      —
  System track      T4    0.4    0.0    0.0       —      —
  System track      T5    0.3    0.2    0.0       —      —
  System track      T6    0.05   0.8    0.7       —      —
  System track      T7    0.0    0.7    0.6       —      —
  System track      T8    —      —      —         0.3    —
  System track      T9    —      —      —         —      0.8
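To illustrate the Gaussian sum idea concretely, the sketch below evaluates a scalar mixture f_K(x) whose weights sum to one; the particular means and standard deviations are arbitrary illustrative values (e.g., components placed at increasing ranges along a bearing line), not figures from the article:

```python
import numpy as np

def gaussian_sum(x, weights, means, sigmas):
    """Evaluate f_K(x) = sum_k alpha_k N(x; mu_k, sigma_k^2) for scalar x.
    The mixture weights alpha_k must be nonnegative and sum to one."""
    w = np.asarray(weights, dtype=float)
    mu = np.asarray(means, dtype=float)
    s = np.asarray(sigmas, dtype=float)
    assert np.all(w >= 0.0) and np.isclose(w.sum(), 1.0)
    # Weighted sum of scalar Gaussian densities
    dens = np.exp(-0.5 * ((x - mu) / s) ** 2) / (np.sqrt(2.0 * np.pi) * s)
    return float(np.dot(w, dens))
```

In a tracking filter, each of the K components would drive its own linear Kalman filter, and the mixture weights would be updated from the component likelihoods.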
SONAR TRACKING

Multiple-Hypothesis Tracking

The multiple-hypothesis tracker (MHT) data fusion algorithm with clustering is, in essence, a two-layer algorithm: the first (lower) layer consists of a multiple-hypothesis algorithm that carries alternative hypotheses of how the data are partitioned into tracks, and the second layer consists of cluster management that breaks the problem into noninteracting, disjoint clusters. The lower-layer multiple-hypothesis algorithm matches data to tracks, updates tracks, generates hypotheses, and manages algorithm resources (both tracks and hypotheses). The second layer, cluster management, monitors each set of hypotheses to ensure that tracks within a cluster do not interact with tracks that are in other clusters. A typical MHT implementation is depicted in Fig. 3. The six primary processing functions are:

• Gating (track-data scoring)
• Clustering
• Assignment solution (association, track updating, hypothesis generation)
• N-scan pruning
• Renormalization
• Splitting
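As a structural sketch of this processing chain (all function names here are hypothetical placeholders, not an API from the article), the six functions can be driven in order for each scan:

```python
def mht_scan_cycle(clusters, scan, fns):
    """One MHT processing cycle over a scan of measurements.
    `fns` maps each of the six primary function names to a callable;
    every callable takes and returns the list of clusters."""
    order = ["gate", "cluster", "assign", "n_scan_prune",
             "renormalize", "split"]
    for name in order:
        clusters = fns[name](clusters, scan)
    return clusters
```

Each stage is described in the subsections that follow; a real implementation would carry track states, covariances, and hypothesis scores inside the cluster objects.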
(Figure 3, flow chart boxes: Load scan; Prediction; Scan measurement loop: Associate tracks, Update clusters, Next scan measurement, Generate hypotheses, Score/prune tracks; N-scan prune; Renormalize; Split clusters; Output.)
Figure 3. Core algorithm flow chart.
The lower-level algorithms of track-data scoring, association, track updating, hypothesis generation, and algorithm resource management have an extended Kalman filter at the heart of the MHT. Track-data scoring is based on the value of the density function of the normalized residual. This score is computed for each existing track and each measurement in a scan, except when the normalized residual itself is larger than a fixed threshold (usually set at four to six standard deviations). In the association step, a two-dimensional assignment algorithm (such as a modified Munkres algorithm) is used to select the optimal assignment of measurements in a scan to tracks in a hypothesis. Part of this assignment is the determination of a new track for each of the measurements. In order to generate additional hypotheses, the assignment algorithm is run again on the original problem with modified costs of the assignment matrix of the optimal assignment. Once the optimal and suboptimal solutions are obtained, the hypothesis scores are computed and compared. High-scoring hypotheses are kept for further analysis, whereas low-scoring hypotheses are pruned. A hypothesis score is obtained recursively by multiplying the old hypothesis score by each of the assignment scores of the track-data associations determined by the two-dimensional assignment solutions. Finally, N-scan pruning is used to accomplish two goals. First, N-scan pruning helps keep the overall number of hypotheses under control. More importantly, N-scan pruning forces a hard decision on all measurements in the (N - 1)th oldest scan. Thus, N-scan pruning is a sliding window that allows the MHT algorithm to carry multiple hypotheses on the most current data and make hard decisions on older data (which are based on data up to the current time).

The following assumptions and conditions are made:

1. The measurement data of the scan are valid at the same time t_k.
2. Each measurement comes from a distinct target.

The first assumption is made to simplify the cluster gating implementation; thus, tracks are predicted to the time of the current scan only once for the entire scan of measurements. This assumption can be relaxed at the cost of additional execution time. The second assumption is fundamental to hypothesis generation. Because each measurement comes from a distinct target, the number of data association combinations is limited, as two measurements from the same scan cannot be put into the same track. Thus, the fundamental number of hypotheses is limited.

Gating. All tracks are predicted to the time of the current scan. The Kalman filter prediction equations are used to extrapolate each track state and error covariance estimate to the time of the current scan using Eqs. (1) and (2) of Table 1. Next, the extrapolated state estimate is used to calculate the predicted measurement vector using the state-to-measurement transformation, Eq. (3) of Table 1. The normalized residual is computed as

  \bar{r} = r^T C^{-1} r

where r is the residual vector from Eq. (5) of Table 1 and C^{-1} is the inverse of the residual vector covariance matrix from Eq. (6) of Table 1. The normalized residual \bar{r} is a \chi^2 statistic with m degrees of freedom, where m is the dimension of the measurement vector z_k. A probability of geometric association P_g(\bar{r}) is computed for a track-to-measurement candidate if the normalized residual passes the gating criterion

  \chi_m^2 \leq n^2

where the value of n, the gate size, is a parameter that can be interpreted as an n-sigma track-to-measurement containment. For each normalized residual that passes the gating criterion, the probability of geometric association P_g(\bar{r}) is computed as the likelihood density function of an N(0, C) normal random variable. This probability is evaluated as

  P_g(\bar{r}) = \frac{\exp(-\bar{r}/2)}{(2\pi)^{m/2} |C|^{1/2}}

where C = H P H^T + R (the residual vector covariance of the Kalman filter equations; see Table 1), m is the measurement dimension, and \bar{r} is the computed normalized residual. Scoring of the new-track probability of association is based on the likelihood density function.

Clustering. The basic purpose of clustering is to divide the large data fusion problem into a number of smaller ones that can be solved independently. Each cluster maintains a noninteracting set of tracks and data. Clustering is an adaptive process, driven by the interactions and ambiguity of the incoming data. Clusters are initiated in two distinct ways. A new cluster is initiated each time a data point is received that does not fall within the correlation gates of any track contained in an existing cluster. The new cluster then contains one hypothesis consisting of a single track (the new track) with a probability of one. In addition, a new cluster is initiated when a given track is contained in all hypotheses of a previous cluster.

In order that clusters remain distinct, they must be combined when a new data point is received that fits with tracks from more than one cluster. Thus, when a data point falls within the correlation gates of two or more clusters, the clusters are merged. New hypotheses are formed from all combinations of the hypotheses in the clusters being merged. The set of tracks and data points in the new "super cluster" is the union of those in the prior clusters. The number of hypotheses in the new super cluster is the product of the numbers of hypotheses in the prior clusters, and the associated probabilities are the products of the prior probabilities.

An explicit example of cluster merging is now presented. Let cluster C_1 contain two hypotheses H_1^{(1)} and H_2^{(1)} with hypothesis scores p_1^{(1)} and p_2^{(1)}, respectively. Let cluster C_2 contain three hypotheses H_1^{(2)}, H_2^{(2)}, and H_3^{(2)} with scores p_i^{(2)}, i = 1, 2, 3. Then the new merged cluster contains a total of six hypotheses, namely H_1^{(1)} ⊕ H_1^{(2)}, H_1^{(1)} ⊕ H_2^{(2)}, H_1^{(1)} ⊕ H_3^{(2)}, H_2^{(1)} ⊕ H_1^{(2)}, etc., where the hypothesis H_i^{(1)} ⊕ H_j^{(2)} is formed simply by taking the union of the track sets contained in H_i^{(1)} and H_j^{(2)}. The probability of the new hypothesis p_{ij} is computed as the product of the probabilities of the corresponding hypotheses, p_{ij} = p_i^{(1)} p_j^{(2)}. Pruning, if necessary, is based on the p_{ij}, and a normalization of the new cluster hypothesis scores is performed.
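The merging rule of the example (track-set unions, score products, then renormalization) can be sketched with a toy cluster representation, where a cluster is simply a list of (track set, score) pairs; this encoding is for illustration only:

```python
from itertools import product

def merge_clusters(cluster_a, cluster_b):
    """Merge two clusters: each new hypothesis is the union of one track
    set from each parent, and its score is the product of the parent
    scores, renormalized over the merged cluster."""
    merged = [(ta | tb, pa * pb)
              for (ta, pa), (tb, pb) in product(cluster_a, cluster_b)]
    total = sum(p for _, p in merged)
    return [(tracks, p / total) for tracks, p in merged]
```

A two-hypothesis cluster merged with a three-hypothesis cluster yields six hypotheses, as in the example above.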
Assignment Solution. The primary objective of the assignment solution function is to find the "best" set of solutions for each hypothesis in each cluster. The solutions in each cluster are ranked on score, with the lower-scoring hypotheses pruned; the top-ranked solutions are then used to generate a set of new hypotheses for the cluster. Any new tracks are initialized and existing tracks are updated. The score of an assignment is computed as the product of the individual association probabilities. The solution score is computed as the product of the assignment score and the score of the solution's generating hypothesis. The optimal solution is used to obtain the set of next-best solutions; this step is accomplished by disallowing, one data point at a time, associations that are in the optimal solution.

The most important aspect of cluster management is the allocation of the number of hypotheses that each cluster is allowed to carry. The solutions in each cluster are ranked on score, with the N_n highest-scoring solutions retained; the lower-scoring solutions are pruned if the number of solutions is greater than N_n. An adaptive pruning mechanism is also used: solutions with scores less than an adaptive threshold score are pruned, where the adaptive threshold is computed as a ratio of the top-scoring solution.

New tracks are initialized for each measurement that is not assigned to an existing track in the generating hypothesis. The track states and covariances are initialized according to the type of measurement, e.g., range/bearing, latitude/longitude, or bearing-only. Existing tracks that have a measurement assigned are updated; Eqs. (3)-(10) of Table 1 are used to perform the track update.

N-Scan Pruning. The two primary functions of this step are ancestry update and N-scan pruning. Each hypothesis that was generated must have its ancestry updated: each generated hypothesis H' points to its parent hypothesis H, and this ancestry is kept for the last N scans of measurement data. Pruning is accomplished by computing the sum of probabilities of current hypotheses that have a common ancestor on the previous Nth scan. The ancestor set with the largest probability is kept and all other hypotheses are pruned.

Renormalization. Hypothesis scores within a cluster are renormalized such that the sum of the probabilities of all hypotheses within the cluster is one. This simply involves adding the scores of all hypotheses within a cluster and dividing each hypothesis score by the resulting sum. Specifically, let cluster C_i, i = 1, ..., N_C, contain hypotheses H_j^{(i)}, j = 1, ..., N_C^{(i)}. Let p_j^{(i)} be the probability of hypothesis H_j^{(i)}; then the renormalized hypothesis score for hypothesis H_j^{(i)} is

  \frac{p_j^{(i)}}{\sum_{j=1}^{N_C^{(i)}} p_j^{(i)}}
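The renormalization step, together with the track scoring that the article describes next, reduces to a few lines; the dictionary-based representation here is purely illustrative:

```python
def renormalize(scores):
    """Scale hypothesis scores within a cluster so they sum to one."""
    total = sum(scores.values())
    return {h: p / total for h, p in scores.items()}

def track_score(track, hypotheses, scores):
    """Score of a track: the sum of the renormalized scores of the
    hypotheses (track sets) that contain it."""
    p = renormalize(scores)
    return sum(p[h] for h, tracks in hypotheses.items() if track in tracks)
```

A track contained in every hypothesis of its cluster therefore scores exactly one.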
Track Score and Prune. After the hypothesis scores of a cluster are renormalized, a score is computed for each track in the cluster by summing the probabilities of the hypotheses in which the track is contained. Thus, the track score ranges from zero to one; it is equal to one if the track appears in all hypotheses. If a track appears in cluster C_i, then the score p(T) of track T is computed as

  p(T) = \sum_{T \in H_j^{(i)} \in C_i} p_j^{(i)}

where H_j^{(i)} varies over all the hypotheses in cluster C_i and p_j^{(i)} is the renormalized hypothesis score.

The final step of the MHT algorithm is cluster splitting, which is the process of subdividing an existing cluster into smaller, independent clusters. Clusters are split for two distinct reasons. A cluster is split when a track is contained in all hypotheses of a previous cluster; this track is removed from all hypotheses of the previous cluster and inserted into a single hypothesis in the new cluster. In addition, clusters containing one hypothesis with more than one track are split.

N-Dimensional Assignment

An alternative to MHT, which processes a single scan at a time, is the N-dimensional (ND) assignment approach, which simultaneously solves the assignment problem over N scans of data. For notational convenience in representing tracks, we add a zero index to each of the index sets and a dummy report z_k^0 to each of the data sets Z(k), and define a "track of data" as (z_{i_1}^1, ..., z_{i_N}^N), where i_k and z_{i_k}^k can now assume the values 0 and z_k^0, respectively. A partition of the data refers to a collection of tracks of data wherein each report occurs exactly once in one of the tracks of data and such that all data are used; the occurrence of a dummy report is unrestricted. The dummy report z_k^0 serves several purposes in the representation of missing data, false reports, initiation of tracks, and termination of tracks (9,21,24). Next, under appropriate independence assumptions, the track scores are computed as

  \frac{P(\pi = \gamma | Z^N)}{P(\pi = 0 | Z^N)} = L_\gamma = \prod_{i_1 \ldots i_N \in \gamma} L_{i_1 \ldots i_N}

where L_{i_1 \ldots i_N} is the likelihood ratio containing probabilities for detection, maneuvers, and termination, as well as probability density functions for measurement errors, track initiation, and termination. Then, with c_{i_1 \ldots i_N} = -\ln L_{i_1 \ldots i_N},

  -\ln \frac{P(\gamma | Z^N)}{P(0 | Z^N)} = \sum_{i_1 \ldots i_N \in \gamma} c_{i_1 \ldots i_N}

Expressions for the likelihood ratios L_{i_1 \ldots i_N} can be found in Refs. 7-10 and 21. In track initiation, the N data sets all represent reports from N sensors, possibly all the same. For track maintenance, we use a sliding window of N data sets and one data set containing established tracks; the formulation is the same as in the preceding except that the dimension of the assignment problem is now N + 1. With the zero-one variable z_{i_1 \ldots i_N} = 1 if (i_1, ..., i_N) \in \pi and 0 otherwise, the problem can be formulated as the following N-dimensional assignment problem:

  Minimize    \sum_{i_1=0}^{M_1} \cdots \sum_{i_N=0}^{M_N} c_{i_1 \ldots i_N} z_{i_1 \ldots i_N}

  Subject to  \sum_{i_2=0}^{M_2} \cdots \sum_{i_N=0}^{M_N} z_{i_1 \ldots i_N} = 1,  i_1 = 1, \ldots, M_1

              \sum_{i_1=0}^{M_1} \cdots \sum_{i_{k-1}=0}^{M_{k-1}} \sum_{i_{k+1}=0}^{M_{k+1}} \cdots \sum_{i_N=0}^{M_N} z_{i_1 \ldots i_N} = 1,
                for i_k = 1, \ldots, M_k and k = 2, \ldots, N - 1

              \sum_{i_1=0}^{M_1} \cdots \sum_{i_{N-1}=0}^{M_{N-1}} z_{i_1 \ldots i_N} = 1,  i_N = 1, \ldots, M_N

              z_{i_1 \ldots i_N} \in \{0, 1\} for all i_1, \ldots, i_N        (2)

Efficient algorithms for solving Eq. (2) are specified in Refs. 22-24.

Probabilistic Data Association

A popular method for tracking in highly cluttered environments is joint probabilistic data association (1,4). At time k, let the measurements z_k^i, i = 1, ..., m_k, fall within the association gate of a track and let

  P_g(r_i) = \frac{\exp(-r_i/2)}{(2\pi)^{m/2} |C_i|^{1/2}}

where \beta = P_{NT} + P_{FA} is the sum of the new-track and false-alarm probabilities, P_D is the probability of detection, C_i = H P H^T + R_i (the residual vector covariance of the Kalman filter equations; see Table 1), m is the measurement dimension, and r_i is the computed normalized residual. We assume that the new-track and false-alarm rates follow a Poisson distribution. For convenience, let z^0 represent a missed measurement and P_g(r_0) = \beta^{m_k}(1 - P_D) the likelihood that none of the measurements inside the gate was generated by the track. Let

  \beta_i(k) = \frac{e^{-r_i/2}}{b + \sum_{j=1}^{m_k} e^{-r_j/2}},    \beta_0(k) = \frac{b}{b + \sum_{j=1}^{m_k} e^{-r_j/2}}

where

  b = \beta (2\pi)^{m/2} (1 - P_D) |C|^{1/2}

and C = C_i is assumed to be constant for all measurements within the gate. Then the updated mean is

  \hat{x}_{k|k} = \sum_{i=0}^{m_k} \beta_i(k) \hat{x}_{k|k}(z_k^i)

and the covariance is

  P_{k|k} = \beta_0(k) P_{k|k-1} + [1 - \beta_0(k)][I - K_k H_k] P_{k|k-1} + \tilde{P}_k

where

  \tilde{P}_k = K_k \left[ \sum_{i=1}^{m_k} \beta_i(k) r_i(k) r_i^T(k) - \bar{r}(k) \bar{r}^T(k) \right] K_k^T

is the "correction" term to the standard Kalman filter, and

  \bar{r}(k) = \sum_{i=1}^{m_k} \beta_i(k) r_i(k)

is the weighted residual.

When additional information is available, such as amplitude information from a passive narrowband source, improved performance can be achieved. A probabilistic data association-based maximum likelihood estimator using amplitude information has been developed (24a). Although only a small improvement in the Cramér-Rao lower bound is achieved, Monte Carlo simulations showed gains in accuracy and a reduction in false tracks, especially at low signal-to-noise ratios.

CLASSIFICATION AND IDENTIFICATION

Beyond the localization of tracks, the classification of the individual contacts is an important aspect of the overall sonar tracking problem.
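The PDA association weights \beta_i(k) and \beta_0(k) developed in the preceding subsection can be computed directly; this sketch assumes, as in the text, a residual covariance C common to all measurements in the gate (names are illustrative):

```python
import numpy as np

def pda_weights(r_norm, beta, P_D, C):
    """Association weights beta_i(k) and miss weight beta_0(k) for
    probabilistic data association, from normalized residuals r_i.
    beta = P_NT + P_FA; C is the common residual covariance."""
    r_norm = np.asarray(r_norm, dtype=float)
    m = C.shape[0]
    # b = beta * (2*pi)^(m/2) * (1 - P_D) * |C|^(1/2)
    b = beta * (2.0 * np.pi) ** (m / 2) * (1.0 - P_D) * np.sqrt(np.linalg.det(C))
    e = np.exp(-r_norm / 2.0)
    denom = b + e.sum()
    return e / denom, b / denom   # beta_i(k), beta_0(k)
```

By construction the weights and the miss probability sum to one, and the weighted residual is then the beta-weighted sum of the individual residual vectors.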
Bayesian Inference Networks

Integrating or fusing attribute information over time is an important processing mechanism required to derive target identity. A taxonomic hierarchy is a natural mechanism to maintain belief over time for every identity level. For simplicity, a taxonomic hierarchy, one form of Pearl tree or Bayesian evidential reasoning algorithm, is presented here. A complete discussion of taxonomic hierarchies and general Bayesian networks is presented in Ref. 25.

Pearl Tree Structure. A Pearl tree is an N-node, as opposed to binary, tree structure. Each tree node represents a specific hypothesis. Each hypothesis can be divided into subhypotheses, or be a subhypothesis itself. Every node is initially assigned an a priori measure of belief reflecting the prior probability that the hypothesis is true. These measures of belief range from 0.0, reflecting no confidence, to 1.0, reflecting complete confidence. The measure of belief of the tree's root node is always 1.0. In general, the probability of a specific node equals the sum of the probabilities of its subnodes. Figure 4 illustrates a simple Pearl tree for target identification. In this example, the number inside each node represents the node probability. Clearly, the evidence suggests that the target is most likely a Hostile Submarine and, of all possible Hostile Submarine platform types, most likely a nuclear submarine. Even in this small example, every identity level is enumerated.

In the example illustrated in the preceding, many different paths exist through the tree that represent independent and mutually exclusive sets of hypotheses. These sets of hypotheses are called "cuts." Some examples of a cut are the sets {Subsurface, Surface} and {Hostile Submarine, Neutral Submarine, Friendly Submarine, Surface}. A cut is considered valid if every element in the cut is always independent of the others and the probabilities of the elements in the cut sum to unity. An example of an invalid cut is the set {Nuclear, Surface}; this cut is invalid because not all nodes are represented.

Pearl Tree Evidence Propagation. Sensor-specific attribute data and geometric heuristic information are used as evidence to determine target identification. The likelihood ratio \lambda_i measures the degree to which the evidence supports or refutes the hypothesis h_i represented by node i. That is, for a piece of evidence e, the likelihood ratio is given by

  \lambda_i = \frac{\Pr(e | h_i)}{\Pr(e | \text{not } h_i)}

Positive support for h_i is given if \lambda_i > 1.0; negative support is given for h_i if \lambda_i < 1.0. Generally, the likelihood ratios for an entire cut are arbitrarily assigned rather than explicitly computed. Let BEL(h_i) = Pr(h_i) be the measure of belief in the hypothesis represented by node i. Then, for every node i in a cut, an updated belief is obtained by

  \Pr{}'(h_i) = \alpha \lambda_i \Pr(h_i)        (3)

where \alpha is a normalization factor given by

  \alpha = \left[ \sum_i \lambda_i \Pr(h_i) \right]^{-1}        (4)

Every subnode j below node i in the tree is updated by

  \Pr{}'(h_j) = \alpha \lambda_i \Pr(h_j)        (5)

Each supernode k above the nodes in a cut is updated by summing the updated beliefs of those nodes in the cut that are subnodes of supernode k. That is,

  \Pr{}'(h_k) = \sum_{i \text{ a subnode of } k} \Pr{}'(h_i)        (6)
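The cut update of Eqs. (3) and (4) is straightforward to sketch; the dictionary representation of a cut used here is illustrative only:

```python
def update_cut(priors, lambdas):
    """Update the beliefs of the nodes in a cut [Eqs. (3), (4)]:
    Pr'(h_i) = alpha * lambda_i * Pr(h_i), where alpha normalizes the
    cut so that the updated beliefs sum to one. Both arguments are
    dicts keyed by hypothesis name."""
    alpha = 1.0 / sum(lambdas[h] * priors[h] for h in priors)
    return {h: alpha * lambdas[h] * priors[h] for h in priors}
```

Injecting a likelihood ratio greater than one on a single node of the cut (with unit ratios elsewhere) raises that node's belief while keeping the cut normalized.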
As an example, suppose the nuclear hostile subsurface node of the tree shown in Fig. 4 is injected with the likelihood value \lambda = 4. The updates to the nodes are shown outside the nodes in the figure.

(Figure 4 is a tree diagram: the root Target 1.0 branches into Subsurface 0.9 and Surface 0.1; Subsurface branches into Hostile 0.7, Neutral 0.1, and Friendly 0.1, each with Nuclear and Diesel leaves, e.g., Nuclear 0.65 and Diesel 0.05 under Hostile; Surface branches into Hostile 0.07, Neutral 0.02, and Friendly 0.01. Updated beliefs after the \lambda = 4 injection, such as Subsurface 0.986, Hostile 0.956, and Nuclear 0.949, appear beside the nodes.)
Figure 4. Simple Pearl tree for target identification.

Fuzzy Rule-Based Fusion Strategies

In cluttered and uncertain sonar environments, the information provided by the sensor systems is not precisely specified. In many situations, one or more components of the sensor information are supplied with nonquantitative qualifiers. Fuzzy representations can be used efficiently in these situations to extract the information for data fusion purposes (26-28). Here, a fuzzy representation is simply the mapping from an input measurement space to an output measurement using linguistic variables. It gives us the ability to model imprecision by incorporating qualitative components into a quantitative analysis. The use of fuzzy logic in data association or correlation (29-34) is a more recent development in sonar tracking. Some of the relevant techniques for association are summarized in what follows.

The Use of Fuzzy Measures. Fuzzy measures provide a mechanism for assigning belief or plausibility to a set of crisp events. We can structure the data correlation problem to fit within the framework of fuzzy measure theory. Furthermore, this treatment of data as fuzzy sets can be incorporated in sonar tracking problems through the multiple-hypothesis fusion architecture to be described later. Here, fuzzy membership functions and traditional statistical methods are used to represent each crisp event. The primary mechanism is a fuzzy implementation of the extended Kalman filter (EKF) discussed earlier. This approach provides a powerful method for data representation through the use of the nonquantitative and unpredictable character of sensor measurements.

In an environment in which clutter exists, a weighting scheme for the measurements that uses fuzzy logic has been developed by Priebe and Jones (35) to reduce the effects of the clutter measurements without losing the information contained in the true measurements from the target. The fuzzy filter defined in Ref. 35 uses only the distance information for the fuzzy membership. However, the technique is derived on the basis of general rules, and not rules specifically related to this distance measure. To incorporate more rules, we can simply combine them with the existing rules via fuzzy logic. We define the Mahalanobis distance for each observation as

  \mu_{k,i} = r_{k,i}^T (R_k + H P H^T)^{-1} r_{k,i}

where r_{k,i} is the residual from sensor i. This distance serves as the universe of discourse for the fuzzy predicate.
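Anticipating the "similar" and "valid" membership functions defined in the following paragraphs, the Priebe and Jones style weighting built on this distance can be sketched as follows; here S stands for R_k + HPH^T, and all names are illustrative:

```python
import numpy as np

def fuzzy_weighted_residual(residuals, S, gamma, beta0=1.0):
    """Weight candidate residuals by the fuzzy intersection (minimum) of
    the 'similar' membership exp(-mu/2) and the 'valid' indicator
    mu <= gamma, where mu = r^T S^{-1} r is the Mahalanobis distance.
    Returns the defuzzified residual fed to the Kalman filter update;
    beta0 covers the case where no intersection is valid."""
    S_inv = np.linalg.inv(np.atleast_2d(S))
    weights = []
    for r in residuals:
        mu = float(r @ S_inv @ r)
        f_similar = np.exp(-mu / 2.0)
        f_valid = 1.0 if mu <= gamma else 0.0
        weights.append(min(f_similar, f_valid))   # fuzzy intersection
    w = np.asarray(weights)
    num = sum(wi * r for wi, r in zip(w, residuals))
    return num / (beta0 + w.sum())
```

A residual far outside the validity gate receives zero weight, so clutter is suppressed without discarding measurements near the predicted track.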
For example, the fuzzy predicate "similar" is defined as

  f_{\text{similar}}(\mu_{k,i}) = e^{-\mu_{k,i}/2}

A "valid" membership function is created to reduce the computational requirements; this membership function is defined as

  f_{\text{valid}}(\mu_{k,i}) = \begin{cases} 1 & \mu_{k,i} \leq \gamma_k \\ 0 & \text{otherwise} \end{cases}

Invoking the fuzzy intersection of the two membership functions in the preceding, the resulting membership function is the minimum of the two:

  f_{\text{similar} \cap \text{valid}}(\mu_{k,i}) = \min[f_{\text{similar}}(\mu_{k,i}), f_{\text{valid}}(\mu_{k,i})]

The term \beta_0 = 1 is defined as the output if no intersections are valid. The defuzzified output residual that is subsequently fed into the Kalman filter update equations becomes

  \bar{r}_k = \frac{\sum_{i=1}^{m_k} f_{\text{valid} \cap \text{similar}}(\mu_{k,i}) r_{k,i}}{\beta_0 + \sum_{i=1}^{m_k} f_{\text{valid} \cap \text{similar}}(\mu_{k,i})} = \frac{\sum_{i=1}^{m_k} \beta_{k,i} r_{k,i}}{\beta_0 + \sum_{i=1}^{m_k} \beta_{k,i}}
where \beta_{k,i} is a weighting function based on the fuzzy intersection of the "similar" and "valid" membership functions.

Processing Fuzzy Measurements. A fuzzy extended Kalman filter (EKF) is an extension of the standard EKF in which a set of fuzzy rules and models is used. We discuss two fuzzy EKF algorithms. The first algorithm incorporates only fuzzy measurements: during the processing of the state estimates, the algorithm defuzzifies the measurement information and computes a crisp state estimate. The second algorithm permits all variables to be fuzzy numbers; the resulting state estimate vector comprises fuzzy numbers.

Using Fuzzy Measurements in the Extended Kalman Filter. A general way to admit a fuzzy set in place of the measurement vector, over a general class of estimation procedures, is introduced in what follows. This technique, first proposed by Watkins (36), provides reasonable answers for situations in which the actual measurement is rendered ambiguous. The basic premise of this work is to incorporate a fuzzy membership function and the concept of a fuzzy estimator into the Kalman filter.

Given a new measurement z, an estimator maps the measurement data to an estimate. Also, we assume that a suitable fuzzy membership function m_adj(z) has been defined a priori for the measurement type. The fuzzy set m_adj is said to be informative if the relation

  0 < \int m_{\text{adj}}(z) \, dz < \infty        (7)

holds, where the integral is taken over R^n for the n-vector z. For an informative membership function, we define the estimator \hat{x}, using the normalized membership function as a weighting function, as

  E(\hat{x}) = \frac{\int \hat{x}(z) \, m_{\text{adj}}(z) \, dz}{\int m_{\text{adj}}(z) \, dz}        (8)
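Equation (8) can be approximated numerically on a grid; the sketch below assumes a scalar measurement and an arbitrary caller-supplied membership function, purely for illustration:

```python
import numpy as np

def fuzzy_estimate(estimator, m_adj, z_grid):
    """Approximate Eq. (8): average the crisp estimator against the
    membership function m_adj, numerically integrated over z_grid.
    Assumes m_adj is informative (its integral is positive and finite)."""
    m = np.array([m_adj(z) for z in z_grid])
    x = np.array([estimator(z) for z in z_grid])
    dz = z_grid[1] - z_grid[0]
    norm = m.sum() * dz
    assert norm > 0.0, "membership function must be informative"
    return (x * m).sum() * dz / norm
```

For a symmetric membership function and the identity estimator, the result is the first moment of the membership function, consistent with Result 3 below.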
Equation (8), which averages the estimates against the given fuzzy set, becomes our estimate. Although the EKF estimator is commonly used, the estimator \hat{x} can be any desired estimator. Because it is normalized with respect to the membership function, Eq. (8) is a moment-generating function. The following two results provide a basis for the fuzzy estimator in Watkins (36).

Result 1. Given an informative fuzzy set m_adj and an estimator \hat{x} that has a finite first moment with respect to m_adj, Eq. (8) estimates the same quantity as does \hat{x}. Moreover, this estimate is optimal in the sense of average squared error with respect to m_adj.

Result 2. The estimate of Eq. (8) reproduces the original estimator \hat{x} evaluated at z when the input data are crisp, and when the point z is the "limit" of a sequence of membership functions that converge to atomic measure at z.

With Eq. (8) and Results 1 and 2, we can now proceed to apply fuzzy measurements to an EKF. The linearity of the Kalman filter with respect to the measurement trivializes the implementation of the EKF to handle fuzzy data, as shown in the following result.

Result 3. Let m_adj be an informative fuzzy set and \hat{x}(z) be an update algorithm integrable with respect to m_adj. Then, if \hat{x}(z) is a matrix-linear function of the vector input z, the estimator defined by Eq. (8) is just the given function applied to the first moment vector mom_1(m_adj) of m_adj.

In order for the EKF routine described in the preceding paragraphs to be implemented, a set of both antecedent membership functions and consequence membership functions must exist for the sensor measurement. Examples of these membership functions used in sonar and ground tracking are given in Lobbia (37). The resulting implementation for the fuzzy EKF is achieved in the following three steps.

Step 1. Apply the fuzzy inference. By using the knowledge about the premise of the fuzzy measurement and the consequence membership function, we create the new membership function of the fuzzy conclusion.
Step 2. Compute the mean value of the fuzzy conclusion membership function. This is the first moment discussed in Result 3.

Step 3. The moment computed in Step 2 is the crisp value to be applied to the EKF. From this point, apply the standard EKF algorithm.

A Fuzzy Extended Kalman Filter. In a recent paper, Hong and Wang (38) presented a technique that allows fuzziness to propagate throughout the extended Kalman filter. They argue that, because the measurements are fuzzy, the state will be fuzzy, as will the measurement noise covariance. This fuzziness then propagates throughout the computed estimates of the EKF. To evaluate the equations involved in the Kalman filter, it is necessary to avoid the problem that, after multiple fuzzy arithmetic operations, the fuzziness of the data continues to grow into an unacceptable range. The following implementation is suggested to avoid this problem.

Step 1. Defuzzify the measurement noise covariance R* and the error covariance P*.

Step 2. Compute the Kalman gain K by

  K_{1k}^* = P^* H^T (R + H P^* H^T)^{-1}
  K_{2k}^* = P H^T (R^* + H P H^T)^{-1}

and then take the intersection of K_1^* and K_2^*.

Step 3. Defuzzify the Kalman gain K*, the measurement z*, and the state estimate x*.

Step 4. Update the state estimate by computing

  x_1^* = x_{k|k-1}^* + K(z - h(x_{k|k-1}^*))
  x_2^* = x_{k|k-1} + K^*(z - h(x_{k|k-1}))
  x_3^* = x_{k|k-1} + K(z^* - h(x_{k|k-1}))

and then take the intersection of x_1^*, x_2^*, and x_3^*.

Step 5. Update the error covariance by computing

  P_1^* = P_{k|k-1}^* - K H P_{k|k-1}^*
  P_2^* = P_{k|k-1}^* - K^* H P_{k|k-1}

and then take the intersection of P_1^* and P_2^*.

Step 6. Defuzzify the updated error covariance, the process noise covariance Q*, and the updated state estimate x_{k|k}.

Step 7. Compute the error covariance prediction by computing

  P_1^* = \Phi P_{k|k}^* \Phi^T + Q
  P_2^* = \Phi P_{k|k} \Phi^T + Q^*

and then take the intersection of P_1^* and P_2^*.

Step 8. Compute the state estimate prediction as x_{k+1|k}^* = \phi(x_{k|k}^*), and return to Step 1.

Neural Network Algorithms

A commonly occurring situation in sonar tracking is that the dynamics of the target change or become unknown. Therefore,
the model representations used in the tracking system must be adaptively adjusted. In recent work, Lobbia and Stubberud (39) have developed an adaptive state estimator that is an EKF augmented by an artificial neural network (ANN). This method was developed for use with control systems where the dynamics of the system were not completely known. The known dynamics were used by the EKF as its dynamical model, while the ANN learned the unmodeled dynamics of the system. Thus, the neural network-based EKF's overall dynamical system approached that of the true plant. This technique can also be applied to learn the maneuver motion model from the sensor measurements. A detailed development of this technique can be found in Refs. 39–41. A summary of this development is presented here.

The general discrete-time model that is applied to the EKF tracking algorithm is given in Table 1:

$$x_{k+1} = \phi(x_k) + \nu_k \tag{9}$$
$$z_k = h(x_k) + \eta_k \tag{10}$$

The motion model of the target $\phi_k(x_k)$ is usually not a fully known quantity, especially during a maneuver. Also, it is not known when the target starts to implement a maneuver. For these reasons there is an error function between the true trajectory of the target $\phi_k(\cdot)$ and the mathematical model $\hat{\phi}_k(\cdot)$ developed to approximate that trajectory, given by

$$\epsilon_k = \phi_k(x_k) - \hat{\phi}_k(x_k) \tag{11}$$
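To make Eq. (11) concrete, the sketch below compares a hypothetical coordinated-turn target against an assumed constant-velocity model $\hat{\phi}$ and evaluates the resulting model error $\epsilon_k$. The motion models, turn rate, and sample time are illustrative assumptions, not taken from Refs. 39–41.

```python
import math

T = 1.0  # assumed sample time, seconds; state is [x, y, vx, vy]

def phi_true(s, omega=0.1):
    """True (unknown) coordinated-turn motion with turn rate omega."""
    x, y, vx, vy = s
    sw, cw = math.sin(omega * T), math.cos(omega * T)
    return [x + (sw * vx - (1 - cw) * vy) / omega,
            y + ((1 - cw) * vx + sw * vy) / omega,
            cw * vx - sw * vy,
            sw * vx + cw * vy]

def phi_model(s):
    """Assumed constant-velocity model (the tracker's phi-hat)."""
    x, y, vx, vy = s
    return [x + T * vx, y + T * vy, vx, vy]

s = [0.0, 0.0, 10.0, 0.0]
# Eq. (11): the error the ANN is asked to learn
eps = [a - b for a, b in zip(phi_true(s), phi_model(s))]
```

During a turn the velocity components of `eps` are large, which is exactly the unmodeled dynamics the ANN approximates.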
Obviously, the smaller the error, the better will be the tracks from the EKF. Using a simple multilayer feedforward ANN $g_k(x_k, w_k)$, where $x_k$ is the track estimate and $w_k$ is the set of weights of the ANN, as a function approximator, we can estimate $\epsilon_k$. Unfortunately, the weights are a set of unknown quantities that must be identified. To train the weights of the ANN, we use a variation of the EKF training paradigm of Singhal and Wu (42). We will not reconstruct their results here but, simply stated, we construct an EKF to estimate the states of the dynamical system

$$w_{k+1} = w_k \tag{12}$$

with the residual

$$\epsilon_k - g_k(x_k, \hat{w}_k) \tag{13}$$
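A minimal sketch of this weight-training idea: the ANN weights are the state of the random-walk system of Eq. (12), and an EKF driven by the residual of Eq. (13) identifies them. The one-neuron "network," the noise levels, and the target error function are all illustrative assumptions, not the networks used in Refs. 39–42.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x, w):
    # Toy one-neuron network: g(x, w) = w2 * tanh(w0*x + w1) (assumed form)
    return w[2] * np.tanh(w[0] * x + w[1])

def g_jac(x, w):
    # Jacobian of g with respect to the weights w (1 x 3 row)
    t = np.tanh(w[0] * x + w[1])
    dt = 1.0 - t * t
    return np.array([[w[2] * dt * x, w[2] * dt, t]])

def modeling_error(x):
    # Stand-in for eps_k, the unknown error the ANN must learn
    return 0.5 * np.tanh(1.5 * x)

w = rng.normal(0.0, 0.5, 3)     # weight state, w_{k+1} = w_k   (Eq. 12)
P = 10.0 * np.eye(3)            # weight error covariance
R = np.array([[0.01]])          # assumed residual noise variance

grid = np.linspace(-2, 2, 21)
err0 = max(abs(modeling_error(x) - g(x, w)) for x in grid)

for _ in range(500):
    x = rng.uniform(-2, 2)
    H = g_jac(x, w)             # linearized "measurement" matrix
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    # EKF state update driven by the residual of Eq. (13)
    w = w + K.ravel() * (modeling_error(x) - g(x, w))
    P = P - K @ H @ P

err = max(abs(modeling_error(x) - g(x, w)) for x in grid)
```

After training, the EKF states (the weights) make $g$ a close approximation of the modeling error over the training range.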
The resulting states of the EKF become the weights of the ANN. By integrating our ANN into the a priori mathematical model, we let our total model become the sum of the approximate model $\hat{\phi}_k(x_k)$ and our ANN approximation $g_k(x_k, w_k)$:

$$x_{k+1} = \hat{\phi}_k(x_k) + g_k(x_k, w_k) \tag{14}$$
However, note that Eq. (14) is dependent on the weight estimates $w_k$. Therefore, we must include the ANN and its training in the EKF algorithm, thus redefining the estimated-state prediction as

$$\begin{bmatrix} x_{k+1|k} \\ w_{k+1|k} \end{bmatrix} = \begin{bmatrix} \hat{\phi}_k(x_{k|k}) + g_k(x_{k|k}, w_{k|k}) \\ w_{k|k} \end{bmatrix} \tag{15}$$

Similarly, we incorporate the ANN into the covariance prediction. We rewrite the error covariance prediction $P_{k+1|k}$ as

$$P_{k+1|k} = \left( \tilde{\Phi} + \frac{\partial g(x_{k|k}, w_{k|k})}{\partial \bar{x}_{k|k}} \right) P_{k|k} \left( \tilde{\Phi} + \frac{\partial g(x_{k|k}, w_{k|k})}{\partial \bar{x}_{k|k}} \right)^{\mathsf{T}} + Q_k \tag{16}$$

where $\bar{x}$ is the augmented state vector of Eq. (15) and

$$\tilde{\Phi} = \begin{bmatrix} \Phi & 0 \\ 0 & I_w \end{bmatrix}$$

where the Jacobian $\Phi$ of our a priori model is defined by

$$\Phi_{ij} = \left. \frac{\partial \phi_k(x)_i}{\partial x_j} \right|_{x = x_{k|k-1}}$$
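The augmented prediction of Eqs. (15) and (16) can be assembled as follows. Everything numerical here (the linear motion model, the stand-in ANN, the dimensions) is an illustrative assumption; only the block structure of the augmented Jacobian follows the equations above.

```python
import numpy as np

n, m = 2, 3                         # state and weight dimensions (assumed)

F = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # Jacobian Phi of the a priori model

def phi_hat(x):
    return F @ x                    # a priori motion model (linear for brevity)

def g(x, w):
    # Stand-in for the trained ANN correction g_k(x, w)
    return np.array([w[0] * x[0], w[1] * x[1] + w[2]])

def g_jac_x(x, w):                  # d g / d x
    return np.array([[w[0], 0.0],
                     [0.0, w[1]]])

def g_jac_w(x, w):                  # d g / d w
    return np.array([[x[0], 0.0, 0.0],
                     [0.0, x[1], 1.0]])

x = np.array([1.0, 0.5])
w = np.array([0.1, -0.2, 0.05])
P = np.eye(n + m)                   # covariance of the augmented state [x; w]
Qk = 0.01 * np.eye(n + m)

# Eq. (15): augmented state prediction
x_pred = phi_hat(x) + g(x, w)
w_pred = w.copy()

# Eq. (16): covariance prediction with the augmented Jacobian
Phi_tilde = np.block([[F, np.zeros((n, m))],
                      [np.zeros((m, n)), np.eye(m)]])
G = np.block([[g_jac_x(x, w), g_jac_w(x, w)],
              [np.zeros((m, n + m))]])
F_aug = Phi_tilde + G
P_pred = F_aug @ P @ F_aug.T + Qk
```

The zero rows in `G` reflect that the weight dynamics of Eq. (12) contribute nothing beyond the identity block of $\tilde{\Phi}$.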
The terms of the other EKF equations are augmented to handle the dimensionality increase resulting from the addition of the ANN weights into the state estimation vector. The primary change is the augmentation of the Jacobian H with zeros, so as not to affect directly the estimated output with the ANN weights. As Eq. (16) shows, the new EKF is of significantly larger dimension than the standard EKF because of the weight training. This increased complexity can reduce run-time efficiency. However, with efficient programming techniques, we can reduce the computational complexity of the routines. The algorithm also has the advantage of being simply a larger EKF. Thus, we can incorporate the fuzzy capabilities into the algorithm with relatively minimal work.

One other problem can exist with this technique. The size of the ANN can affect convergence. If the ANN is too small, it may not have the capability to learn the modeling error. If the ANN is too large, the training can become too slow for useful implementation. A feasible implementation of the neural network-based extended Kalman filter is given in Ref. 39.

SENSOR MANAGEMENT IN FUSION SYSTEMS

Efficient management of sensor resources in a dynamic environment requires optimized coordination of the actions of the controllable sensor system assets available to the platform. In sonar applications, both passive and active assets need to be managed. Passive sensor management involves optimal use of the information content of reports for data fusion and control of the operational and processing environment in which they operate. For active sensors, the management function requires control of the actions of the sensor to focus its attention on the desired surveillance space under the correct operating conditions. Among the information collection functions of the surveillance process are those that use the sensors to search a desired area for targets, detect and acquire targets, and track acquired targets.
In the search mode, the sensor systems are given a vague description of the target states; the controlled sensor systems have not detected the target yet, and the sensor controls are generated by the sensor management process using the "null" information to optimize the actions of the sensor system. The "null" information is given to the sensor management process in the form of a report that states that
the execution of the control actions dictated by the process resulted in a failure to detect a signal. The detection mode is used to transition the control actions from the search mode to the track mode. In the track mode, the sensor systems generate positive reports in the form of measurements that are functionally related to the state of the system.

The objectives of the control function are different in the search and track modes. In the search mode, the control process strives to optimize sensor configurations to obtain a first detection. In the track mode, it continually tries to optimize the sensor configuration to avoid a first missed detection. Therefore, ideally the system desires to minimize the time of first detection or to maximize the time for first failure to detect a tracked target. Because of a lack of suitable computational structures for these times, other suitably formulated and tractable measures of performance are used by the control process in obtaining sensor control strategies. In the search mode, the detection probability is one such measure. In the track mode, estimation accuracies are used as a measure of performance.

MATHEMATICAL FORMULATION AND REPRESENTATIONS FOR THE SENSOR MANAGEMENT FUNCTION

The general approach for deriving sensor control strategies consists of the following steps:

1. Optimal processing of information gathered by sensors, to compute statistics that can be used by control algorithms
2. Optimal processing of the preceding statistics to compute search strategies that are used to reconfigure the sensor system operation
3. Reconfiguration of the sensor system using the preceding sensor strategies

In subsequent paragraphs, these steps are described in more detail.

Evolution of Surveillance State and Its Probability Density

The fundamental quantity that underlies this investigation is the a posteriori transition density for the state of the target system given all the information up to the current time.
A brief discussion of the effects of information on the transition probability density function and rules for its evolution are given in what follows. Let $x(t)$ denote the state of a target. The dynamics of such a target can be adequately described by a suitable stochastic differential equation

$$dx(t) = \phi(x(t), t)\,dt + g(x(t), t)\,d\beta(t) \tag{17}$$

where $\beta(t)$ is an independent-increment process defining the noise. Let $I_{t,t_0}$ denote all the information available at time $t$. This information is collected by many sensors in the system. Let $y_\alpha(t)$ denote the state of the $\alpha$th sensor. This sensor generates two types of reports:

1. A positive report is given when the sensor detects the target. Over the time interval $[0, t]$ a sensor may detect the target several times; $N_\alpha(t)$ denotes the number of detections reported by the $\alpha$th sensor over the interval $[0, t]$. Some types of sensors also provide additional information about the target when a detection is made. Let $z_\alpha(t)$ denote the measurement generated by the $\alpha$th sensor when it detects a target; $N(t)$ and $z(t)$ will be the composite information in the form of vector processes.

2. A negative report at time $t$ is one in which no detections are recorded by the $\alpha$th sensor over the interval $[0, t]$, i.e., $N_\alpha(t) = 0$. Therefore, there is no $z_\alpha(t)$ associated in this case.

For the purpose of analysis, the information $I_{t,t_0}$ used in computing the a posteriori transition probability density function $p(x(t), t \mid x(t_0), t_0, I_{t,t_0})$ is equivalently described by the sub-$\sigma$-field $G_{t_0}^t$ generated by $N(t)$ and $z(t)$ over the interval $[t_0, t]$. For notational convenience, when $t_0 = 0$ we denote $G_{t_0}^t$ simply by $G_t$. The two important questions of concern are:

1. Given all the information $I_{t,t_0}$, what is the optimal sensor assignment policy $Y_t^* = \{Y^*(\tau),\ t_0 \le \tau \le t\}$, and

2. Given $I_{t,t_0}$ and a sensor assignment $Y_t$, what is the best estimate $\hat{x}(t)$ of the target state $x(t)$?

To consider further the issues stated in the foregoing text requires suitable measures of performance. For the sensor assignment problem, a suitable selection criterion is the resultant detection probability $P_D(Y_t)$. For the state estimation problem, the widely used performance measure is the error covariance associated with the estimate $\hat{x}(t)$. Under certain conditions, maximization of $P_D(Y_t)$ is equivalent to minimization of the error covariance (43).

The first problem to address is a stochastic control problem. The sensor assignment can be effected by construction of optimal control policies $U_t^* = \{u^*(\tau),\ t_0 \le \tau \le t\}$, where the state $y_\alpha(t)$ of the $\alpha$th sensor is controlled via the dynamics

$$dy_\alpha(t) = a_\alpha(y_\alpha, u_\alpha, t)\,dt + dn_\alpha(t) \tag{18}$$

where $dn_\alpha(t)$ describes the noise process. To be able to solve this problem, we must first determine the transition density $p(x(t), y_\alpha(t), t \mid x(t_0), t_0, I_{t,t_0}, U_t)$. This computation, in turn, requires knowledge of the density $p(x(t), t \mid x(t_0), t_0, I_{t,t_0}, Y_t)$.

The solution to the second problem is well known. The minimum variance estimate $\hat{x}(t)$ is given by

$$\hat{x}(t) = E\{x(t) \mid I_{t,t_0}, Y_t\} \tag{19}$$

Therefore, computation of $\hat{x}(t)$ also requires the knowledge of the density $p(x(t), t \mid x(t_0), t_0, I_{t,t_0}, Y_t)$. The result that follows gives the rules of evolution for the a posteriori transition probability density function as already stated here. The derivation permits use of the reports from multiple sensors. The approach also incorporates effects of "positive" information in the same framework. Furthermore, the approach presented here will be able to accommodate joint search/detection/estimation schemes. Finally, multitarget/multisensor systems can also be considered under this formulation. These results are stated without proofs.

Two types of measurements are considered. The continuous measurement is given by

$$dz(t) = h(x(t), t)\,dt + d\omega(t) \tag{20}$$
where $\omega(t)$ is a Wiener process. The second class of measurements used here is jump processes. The number of detections $N(t)$ will be defined as a jump process with Poisson statistics. The following theorem gives the equations for the temporal evolution of the transition probability density function for the state $x(t)$ without the use of information $I_{t,t_0}$ (43).

Theorem: Let $x(t)$ be a Markov process generated by

$$dx(t) = f(x, t)\,dt + d\beta_w + d\beta_N \tag{21}$$

Let $p = p(x(t), t \mid x_0, t_0)$ denote the transition probability density function for the process $x(t)$. Then $p$ satisfies the partial differential equation

$$\frac{\partial p}{\partial t} = L^+(p) \tag{22}$$

where

$$L^+(\cdot) = -\sum_{i=1}^{n} \frac{\partial (f_i \cdot)}{\partial x_i} + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} Q_{ij} \frac{\partial^2 (\cdot)}{\partial x_i \partial x_j} + \lambda \sum_{i=1}^{n} \left[\, p_{a_i} * \cdot - \cdot \,\right] \tag{23}$$

In the preceding equations, $Q_{ij}$ is the covariance matrix associated with the Wiener process $\beta_w(t)$ and $\lambda$ is the rate parameter associated with the generalized Poisson jump process $\beta_N(t)$,

$$\beta_N(t) = \sum_{i=1}^{N(t)} a_i U(t - t_i) \tag{24}$$

with $U(t)$ the unit step function. Also,

$$p_{a_i} * p = \int p_{a_i}(u_i - v_i)\, p(u_1, u_2, \ldots, v_i, \ldots, u_n, t \mid x(t_0), t_0)\, dv_i \tag{25}$$

where $p_{a_i}(a)$ denotes the density for the random variable $a_i$. Equation (23) can be solved analytically for only very specialized cases. Therefore, numerical evaluation of $p(x, t \mid x_0, t_0)$ will be necessary in using these equations in practice.

The next quantity of interest is the conditional density that describes how the information $I_{t,t_0}$ affects the evolution of the transition density. The preceding unnumbered theorem is to be used to determine the equations for the temporal evolution of $p(x, t \mid G_{t_0}^t)$ for the following two cases:

1. The measurements are given by $dN(t)$ alone with no accompanying continuous measurements.

2. The measurements are given by $dN(t)$ and $dz(t)$ at time $t$.

The first case corresponds to a pure search policy in which the sensors register detections only, without being able to obtain further information about the state $x(t)$. The second case is a more general surveillance policy in which the sensors not only perform the search but also are capable of providing tracking information.

Pure Search Strategies

First consider the "pure" search case. There are $M$ sensors surveying the area. During the time interval $[t, t + \Delta t]$, the $\alpha$th searcher counts the number of detections $dN_\alpha(t)$. The probability that the $\alpha$th searcher detects the source during $[t, t + \Delta t]$ is given by $\lambda_\alpha^*(x(t), y_\alpha(t))\,\Delta t$, where $x(t)$ is the state of the source and $y_\alpha(t)$ is the state of the $\alpha$th sensor at time $t$. Let

$$\lambda^*(x(t), y(t)) = \begin{bmatrix} \lambda_1^*(x, y_1) \\ \vdots \\ \lambda_M^*(x, y_M) \end{bmatrix} \tag{26}$$

where $y(t)$ is the vector representing the state of all sensors. Let $dN(t)$ denote the composite report from all searchers, defined as

$$dN(t) \triangleq \begin{bmatrix} dN_1(t) \\ \vdots \\ dN_M(t) \end{bmatrix} \tag{27}$$

Assuming that the searchers are efficiently deployed, the probability that two sensors will detect the source simultaneously in an interval $\Delta t$ is infinitesimal and will be ignored. Thus, the possible outcomes for $dN(t)$ are

1. $dN(t) = 0$, in which no detections are reported

2. $dN(t) = e_\alpha$, in which the $\alpha$th sensor reports a detection, and all others report no detections
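The pure-search counting process can be simulated directly: over each small interval $\Delta t$, sensor $\alpha$ registers a detection with probability $\lambda_\alpha^* \Delta t$. The rate function below is a made-up example (closer searchers get higher rates); only the single-detection outcome structure comes from the text.

```python
import random

random.seed(1)
M = 3                      # number of searchers
dt = 1e-3                  # small interval, so P(detection) ~ lambda* dt

def rates(x, y):
    # Hypothetical detection rates lambda*_alpha(x, y_alpha): larger when
    # searcher alpha is closer to the source state x
    return [1.0 / (1.0 + abs(x - ya)) for ya in y]

x = 0.0                    # source state (scalar for simplicity)
y = [0.5, 2.0, 5.0]        # searcher states
lam = rates(x, y)

counts = [0, 0, 0]
for _ in range(200000):
    dN = [1 if random.random() < lam[a] * dt else 0 for a in range(M)]
    # For small dt, dN is almost surely 0 or a coordinate vector e_alpha
    for a in range(M):
        counts[a] += dN[a]
```

Over many intervals, the detection counts are ordered by the rates, which is why the rate vector of Eq. (26) carries the information about the source location.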
Theorem (Evolution of density under "pure" search by multiple sensors): Let $x(t)$ be the vector Markov process defined in Eq. (21) describing the behavior of the signal source. Let the measurement process consist of the unit jump processes defined by Eq. (27), with statistics defined by the rate parameter $\lambda^*(x(t), y(t))$ in Eq. (26). Under the assumption that only one sensor detects the source at any given time, the density $p = p(x, t \mid x_0, t_0, G_{t_0}^t)$ satisfies (Snyder's equation)

$$\frac{\partial p}{\partial t} = L^+(p) + \sum_{\alpha=1}^{M} (\lambda_\alpha^* - E\{\lambda_\alpha^*\})(E\{\lambda_\alpha^*\})^{-1} \left[ \frac{dN_\alpha(t)}{dt} - E\{\lambda_\alpha^*\} \right] p \tag{28}$$

where $\lambda_\alpha^* = \lambda_\alpha^*(x(t), y_\alpha(t))$ and the expectation $E\{\lambda_\alpha^*(x(t), y_\alpha(t))\}$ is with respect to the density $p(x(t) \mid G_{t_0}^t)$. The operator $L^+(\cdot)$ was defined in Eq. (23).

Search Under Negative Information

An important case of interest is one in which no sensor detects the source over the time interval $[t_0, t_0 + T]$. In this case, $dN(t) \equiv 0$ for $t \in [t_0, t_0 + T]$ and the conditional density evolves according to

$$\frac{\partial p_0}{\partial t} = L^+(p_0) - \sum_{\alpha=1}^{M} \left[ \lambda_\alpha^*(x(t), y_\alpha(t)) - E\{\lambda_\alpha^*(x(t), y_\alpha(t))\} \right] p_0 \tag{29}$$
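A discrete-space, discrete-time caricature of Eq. (29) makes the effect of negative information visible. The motion operator $L^+$ is omitted and the rate function is an assumption: cells searched without a detection lose probability mass, which shifts to the rest of the surveillance region.

```python
# Cell-wise density update under repeated negative reports
M_cells = 10
p = [1.0 / M_cells] * M_cells           # uniform prior over cells

def detection_rate(cell, searched):
    # Assumed rate lambda*: large in the searched cell, zero elsewhere
    return 2.0 if cell == searched else 0.0

dt, searched = 0.05, 4
for _ in range(20):                     # no detections over the interval
    lam = [detection_rate(c, searched) for c in range(M_cells)]
    Elam = sum(l * pc for l, pc in zip(lam, p))
    # dp0/dt = -[lam - E{lam}] p0, with target motion (L+) omitted
    p = [pc - dt * (l - Elam) * pc for pc, l in zip(p, lam)]
    s = sum(p)
    p = [pc / s for pc in p]            # renormalize
```

After twenty fruitless looks at cell 4, its posterior mass is an order of magnitude below that of the unsearched cells, which is the depletion behavior the sensor-control discussion relies on.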
This equation is the partial integrodifferential equation that describes the evolution of the transition density of the state of the source when $M$ sensors are searching and have failed to detect the target. The solution to the preceding partial differential equation, $p(x(t) \mid G_{t_0}^t, Y_t)$, gives us the fundamental quantity of interest to the fusion and sensor management problems. From an overall systems point of view, the fusion center uses the a posteriori density $p(x(t) \mid G_{t_0}^t, Y_t)$ in many different ways:

1. The optimal sensor control strategies $U_t^* \equiv \{u^*(\tau),\ t_0 \le \tau \le t\}$ are computed to maximize the probability of detection, which depends both on $x_t \equiv \{x(\tau),\ t_0 \le \tau \le t\}$ and $Y_t$.

2. The time evolution of the optimal estimates $\hat{x}(t)$ of the target state $x(t)$ and the associated error covariance matrix $P(t)$ are obtained by integrating with respect to the partial differential equation (29).

3. When a positive report is generated at $t_k$, Bayes' rule is used to incorporate this information into the fusion process.

4. When classification information is given to the fusion center via cued transformations, it is correlated and used to improve the estimates generated above.

5. The differential equations in Eq. (18) here are similar to Kalman-Bucy filter equations and can be conveniently used in multitarget situations.

Numerical techniques to solve the nonlinear partial differential equation in Eq. (29) are not readily available and need to be investigated. Promising approaches are discussed here that provide procedures that are computationally economical, albeit approximate to the second order. First, the differential equations for the mean and covariance are given under the assumption that the density vanishes rapidly as we approach infinity. Solutions to these equations give the all-important conditional mean estimate $\hat{x}(t) = E\{x(t) \mid G_{t_0}^t\}$ and its error covariance matrix. We assume that these two statistics define with sufficient accuracy the a posteriori density as a nearly Gaussian density. For further considerations, the Gaussian form of the density is used to derive sensor control strategies.

Joint Search and Track: The Surveillance Policy

The a posteriori density functions in Eq. (28) can be feasibly computed for the case in which no positive reports were made by the sensors (i.e., $dN(t) \equiv 0$). Although the equations are valid for situations in which detections are reported by sensors (i.e., $dN(t) \ne 0$), their implementation is not computationally feasible. In the paragraphs that follow, we outline a simplified scheme for enfolding information provided by the sensors reporting a detection.

Two approaches are possible to obtain computationally feasible approximations for the first and second moments of $p(x(t) \mid G_{t_0}^t)$. The first approach analytically integrates the partial differential equation. The second approach uses Bayes' rule to compute the conditional density at discrete time points at which detections are available. The resulting equations provide a technique to compute $\hat{x}(t_k)$ and $P(t_k)$ in a recursive manner. The conditional density $p(x(t) \mid G_{t_0}^t)$ for the state of the target under surveillance then evolves according to Eq. (28).
The conditional mean $\hat{x}(t)$ is defined by

$$\hat{x}(t) = \int_{\mathbb{R}^n} x(t)\, p(x(t) \mid G_{t_0}^t)\, dx \tag{30}$$

By integrating Eq. (30) with respect to Eq. (28), the behavior of $\hat{x}(t)$ is given by

$$\frac{dx^*}{dt} = \phi(x^*, t) + P^*(t)\, D^{\mathsf{T}}(x^*, t) \sum_{\alpha=1}^{M} e_\alpha (\lambda^{*\mathsf{T}} e_\alpha)^{-1} \left[ \frac{dN(t)}{dt} - \lambda_\alpha^*(x^*, y_\alpha) \right]^{\mathsf{T}} e_\alpha \tag{31}$$

where $e_\alpha$ is the $\alpha$th coordinate direction in $\mathbb{R}^n$, $P^*(t)$ is the first-order approximation to the covariance matrix, and $D(x^*, t)$ is the Jacobian matrix for $\lambda(x, y)$ with respect to $x$ evaluated at $x^*(t)$. The evolution of $P^*(t)$ is given by

$$\frac{dP^*(t)}{dt} = B(x^*, t) + \sum_{\alpha=1}^{M} \frac{dN(t)}{dt}\, P^*(t)\, H_\alpha(x^*, t)\, P^*(t)\, e_\alpha^{\mathsf{T}} - \sum_{\alpha=1}^{M} P^*(t)\, E_\alpha(x^*, t)\, P^*(t) \tag{32}$$

where $B(x^*, t) = A(x^*, t)P^*(t) + P^*(t)A^{\mathsf{T}}(x^*, t) + Q(t)$, $A(x^*, t)$ is the Jacobian matrix for $\phi(x, t)$, $E_\alpha(x, t)$ is the Hessian for $\lambda_\alpha(x, t)$, and $H_\alpha(x, t)$ is the Hessian for $\ln[\lambda_\alpha(x, t)]$.

The important case, in which there are no detections in a given interval, is much easier to solve. Equations for this case are obtained by setting $dN(t)/dt \equiv 0$ over the interval $[t_k, t_{k+1}]$, where $t_k$ denotes the sequence of arrival times for the detections. At $t_k$, where a detection is reported, the density is updated using Bayes' rule. Although this approach seems computationally complex, the differential equations to be solved for $x_0^*(t)$ and $P_0^*(t)$ (i.e., the no-detection case) are quite similar to the continuous-time Kalman filtering equations, and solutions are computationally feasible. The algorithm is described in what follows.

Let $t_k$ be the sequence of times at which detections are reported by any one of the sensors. For notational simplicity, denote $G_k \triangleq G_{t_0}^{t_k}$ and

$$p(x_k \mid G_k) \triangleq p(x(t_k) \mid G_{t_0}^{t_k})$$

Assume that at time $t_{k-1}$ the fusion center has computed $p(x_{k-1} \mid G_{k-1})$. Also, during the interval $[t_{k-1}, t_k]$ some of the sensors detect the source. Let $N_k$ be the index set for sensors that did not report detections. The following procedure provides a general technique for computing $p(x_k \mid G_k)$ using $p(x_{k-1} \mid G_{k-1})$ and the information provided by the sensors during $[t_{k-1}, t_k]$. Denote by $\bar{G}_k$ the information provided by negative reports from sensors in the index set $N_k$.

Step 0. Initialize $p(x_0 \mid G_0)$ using a priori information about the target. For $k = 1, 2, \ldots$,

Step 1. Using Eq. (31), compute $p_0(x_k \mid \bar{G}_k)$ using $p(x_{k-1} \mid G_{k-1})$ as the initial density.

Step 2. Compute $p(x_k \mid G_k)$ using Bayes' rule and $p_0(x_k \mid \bar{G}_k)$.
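A one-dimensional Gaussian sketch of this recursion, with all models and numbers assumed for illustration: between reports the moments propagate under the no-detection equations, here reduced to a pure prediction, and at each detection time Bayes' rule collapses to the familiar Kalman update.

```python
# Step 1 (propagate) and Step 2 (Bayes at a detection time) for a
# scalar state with assumed Gaussian densities throughout.
q, r = 0.1, 0.5                 # assumed process / measurement variances
xhat, P = 0.0, 4.0              # p(x0 | G0): a priori Gaussian

detections = [(1.0, 0.8), (2.0, 1.7), (3.0, 2.9)]   # assumed (t_k, z_k)
history = []
for t_k, z_k in detections:
    # Step 1: evolve p0(x_k | Gbar_k) from p(x_{k-1} | G_{k-1});
    # with no detections, only the motion model acts on the moments
    P = P + q
    # Step 2: Bayes' rule at the detection time (Kalman form)
    K = P / (P + r)
    xhat = xhat + K * (z_k - xhat)
    P = (1 - K) * P
    history.append((xhat, P))
```

Each cycle mirrors the text: uncertainty grows between reports and shrinks when a detection (with a measurement) is folded in.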
In the preceding algorithm, Step 1 uses either the solution to the partial differential equation (31) or the recursive formulation of the representation for $p(x_k \mid \bar{G}_k)$ given in Eq. (29). An alternative approach for computing $p(x_k \mid \bar{G}_k)$ is to determine $\hat{x}_k(\bar{G}_k)$ and $P_k(\bar{G}_k)$ using differential equations (31) and (32) and an approximate Gaussian form

$$p(x_k \mid \bar{G}_k) = N(x_k \mid \hat{x}_k(\bar{G}_k), P_k(\bar{G}_k))$$

For the computations in Step 2, two distinct cases must be considered. To simplify the analysis, assume that only one sensor (the $\alpha$th) reports a positive detection ($D_k = 1$) during $[t_{k-1}, t_k]$. Then, one of two things can result:

1. The $\alpha$th sensor does not provide any further information about the source, and $G_k = (\bar{G}_k, D_k = 1)$.

2. The $\alpha$th sensor provides a measurement $z_k$ at time $t_k$. Assume that this measurement has the form $z_k = h_k(x_k, y_k) + v_k$. In this case,

$$G_k = (\bar{G}_k, D_k = 1, z_k) \tag{33}$$

In both cases, Bayes' rule is applied to compute $p(x_k \mid G_k)$ from $p(x_k \mid \bar{G}_k)$. Having computed the effects of a sensor report on the location density of a signal source, the next step is to use this information for efficiently assigning several sensors.
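The two cases can be illustrated with a discrete-cell Bayes update; the detection model and the measurement likelihood below are invented for the example, not taken from the text.

```python
import math

cells = [0, 1, 2, 3, 4]
prior = [0.2] * 5                      # p(x_k | Gbar_k) over cells

def pd(cell, sensor_cell=2):
    # Assumed detection probability when the target is in `cell`
    return 0.9 if cell == sensor_cell else 0.1

def meas_lik(z, cell, sigma=1.0):
    # Assumed Gaussian likelihood p(z_k | target in cell)
    return math.exp(-0.5 * ((z - cell) / sigma) ** 2)

# Case 1: detection only, G_k = (Gbar_k, D_k = 1)
post1 = [p * pd(c) for p, c in zip(prior, cells)]
s1 = sum(post1)
post1 = [p / s1 for p in post1]

# Case 2: detection plus measurement z_k, as in Eq. (33)
z = 2.4
post2 = [p * pd(c) * meas_lik(z, c) for p, c in zip(prior, cells)]
s2 = sum(post2)
post2 = [p / s2 for p in post2]
```

In Case 1 the posterior concentrates on the detecting sensor's cell; in Case 2 the measurement sharpens the posterior further around the measured location.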
Sensor Control

Consider the surveillance problem in which the a priori target state at time $t_0$ is described through the density function $p(x \mid G_0)$, where $x$ denotes the target state and $G_0$ denotes all available information up to $t_0$. The problem is to determine the part of the state space, denoted by $S$, to be closely watched by a single sensor at subsequent time instants $t_k$. We define

$$t_k = t_{k-1} + \Delta t_k, \qquad k = 1, 2, \ldots, N \tag{34}$$

where $\Delta t_k$ is the time interval of search at the $k$th stage of the search. In general, the surveillance space $S$ is $n$-dimensional. For the sake of practicality, however, we focus attention on the two-dimensional space described by the latitude and longitude of the target. The analysis presented here is general enough to include first and higher derivatives of the position as well as other parameters of interest.

The optimal sensor control problem is that of computing the sensor search plan $\{u(x, \tau),\ \tau \in (t_0, t]\}$ that will maximize the detection probability (or some other suitable criterion) resulting from a search plan. The surveillance space $S$ is divided into $M$ discrete cells; the $i$th cell is denoted $\Lambda_i$. The simplest sensor control problem is one in which only one sensor is available and during any interval $I_k = [t_k, t_{k+1}]$, $k = 0, 1, \ldots, N - 1$, the sensor can search only one of the $M$ cells that collectively constitute the surveillance space $S$. Therefore, at $t_k$, a decision is to be made as to which one of the $M$ cells the sensor ought to search.

The measure of performance for evaluating different cell assignments for search will be the target state uncertainty. A computable measure of this uncertainty is the average entropy of the location density after a report from the sensor. It should be noted here that when a detection is reported in a cell, the probability mass for the location density peaks in the surveillance region around that cell. If the sensor scans the cell and reports no detection, then the probability mass around that cell depletes and the mass is spread throughout the remainder of the surveillance region. This contraction and spreading of the probability mass is reflected in the average entropy of the probability distribution after a report. As this distribution starts peaking in a particular area, the entropy of the distribution decreases. As the probability distribution spreads, the average entropy is increased.

This effect of detection performance on the entropy is intuitively appealing. Because the target is dynamic, if a detection is not made it gives the target time to move around in the surveillance region and, thus, increases uncertainty. On the other hand, a detection in a particular cell localizes the target and reduces uncertainty. The optimization process involves three steps:

1. Computation of the posterior density after a report
2. Computation of the average entropy of the posterior density
3. Assignment of the sensor to a cell in the surveillance region

The methods discussed in the text provide the means to accomplish Step 1. Step 3 is trivially accomplished once the average entropy of the posterior density function is computed. A recursive control policy for this case and computational results are given in Ref. 43.
BIBLIOGRAPHY

1. S. Blackman, Multiple-Target Tracking with Radar Applications, Norwood, MA: Artech House, 1986.
2. E. Waltz and J. Llinas, Multisensor Data Fusion, Norwood, MA: Artech House, 1990.
3. R. T. Antony, Principles of Data Fusion, Norwood, MA: Artech House, 1995.
4. Y. Bar-Shalom, Multitarget-Multisensor Tracking: Advanced Applications, vol. I, Norwood, MA: Artech House, 1990.
5. Y. Bar-Shalom, Multitarget-Multisensor Tracking: Applications and Advances, vol. II, Norwood, MA: Artech House, 1992.
6. Y. Bar-Shalom and X. R. Li, Multitarget-Multisensor Tracking: Principles and Techniques, Storrs, CT: YBS Publishing, 1995.
7. D. B. Reid, An algorithm for tracking multiple targets, IEEE Trans. Autom. Control, AC-24: 843–854, 1979.
8. T. Kurien, Issues in the design of practical multitarget tracking algorithms, in Y. Bar-Shalom (ed.), Multitarget-Multisensor Tracking: Advanced Applications, Norwood, MA: Artech House, 1990, pp. 43–83.
9. A. B. Poore, Multidimensional assignment formulation of data association problems arising from multitarget and multisensor tracking, Comput. Optim. Appl., 3: 27–57, 1994.
10. J. J. Stein and S. S. Blackman, Generalized correlation of multitarget data, IEEE Trans. Aerosp. Electron. Syst., AES-11: 1207–1217, 1975.
11. H. W. Sorenson, Parameter Estimation, New York: Marcel Dekker, 1980.
12. P. E. Caines, Linear Stochastic Systems, New York: Wiley, 1988.
13. S. C. Nardone and V. J. Aidala, Observability criteria for bearings-only target motion analysis, IEEE Trans. Aerosp. Electron. Syst., AES-17: 262–266, 1981.
14. S. E. Hammel and V. J. Aidala, Observability requirements for three-dimensional tracking via angle measurements, IEEE Trans. Aerosp. Electron. Syst., AES-21: 200–207, 1985.
15. A. G. Lindgren and K. F. Gong, Position and velocity estimation via bearing observations, IEEE Trans. Aerosp. Electron. Syst., AES-14: 564–577, 1978.
16. V. J. Aidala, Kalman filter behavior in bearings-only tracking applications, IEEE Trans. Aerosp. Electron. Syst., AES-15: 29–39, 1979.
17. V. J. Aidala and S. E. Hammel, Utilization of modified polar coordinates for bearings-only tracking, IEEE Trans. Autom. Control, AC-28: 283–294, 1983.
18. H. W. Sorenson and D. L. Alspach, Recursive Bayesian estimation using Gaussian sums, Automatica, 7: 465–479, 1971.
19. D. Alspach, A Gaussian sum approach to the multitarget identification-tracking problem, Automatica, 11: 285–296, 1975.
20. D. P. Bertsekas and D. A. Castanon, A forward/reverse auction algorithm for asymmetric assignment problems, Comput. Optim. Appl., 1: 277–297, 1992.
21. A. B. Poore, Multidimensional assignments and multitarget tracking, in I. J. Cox, P. Hansen, and B. Julesz (eds.), Partitioning Data Sets, vol. 19, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Providence, RI: American Mathematical Society, 1995, pp. 169–198.
22. A. B. Poore and A. J. Robertson III, A new Lagrangian relaxation based algorithm for a class of multidimensional assignment problems, Comput. Optim. Appl., 8 (2): 129–150, 1997.
23. A. B. Poore and N. Rijavec, A Lagrangian relaxation algorithm for multidimensional assignment problems arising from multitarget tracking, SIAM J. Optim., 3: 544–563, 1993.
24. K. R. Pattipati et al., A new relaxation algorithm and passive sensor data association, IEEE Trans. Autom. Control, 37: 198–213, 1992.
24a. T. Kirubarajan and Y. Bar-Shalom, Low observable target motion analysis using amplitude information, IEEE Trans. Aerosp. Electron. Syst., AES-32: 1367–1384, 1996.
25. J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Palo Alto, CA: Morgan Kaufmann, 1988.
26. B. Kosko, Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence, Englewood Cliffs, NJ: Prentice-Hall, 1992.
27. G. J. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications, Upper Saddle River, NJ: Prentice-Hall, 1995.
28. D. Driankov, H. Hellendoorn, and M. Reinfrank, An Introduction to Fuzzy Control, Berlin: Springer-Verlag, 1993.
29. J. J. Kruger and I. S. Shaw, A fuzzy learning system emulating a human tracking operator, Proc. 1st Int. Symp. Uncertainty Modeling Analysis, College Park, MD, 1990, pp. 25–28.
30. P. J. Pacini and B. Kosko, Adaptive fuzzy systems for target tracking, J. Intell. Syst. Eng., 1 (1): 3–21, 1992.
31. C. G. Moore, C. J. Harris, and E. Rogers, Utilizing fuzzy models in the design of estimators and predictors: an agile target tracking example, Proc. 2nd IEEE Int. Conf. Fuzzy Syst., 2, March 1991, pp. 679–684.
32. C.-W. Tao, W. E. Thompson, and J. S. Taur, A fuzzy logic approach to multidimensional target tracking, Proc. 2nd IEEE Int. Conf. Fuzzy Syst., San Francisco, CA, 2: 1991, pp. 1350–1355.
33. Y. H. Lho and J. H. Painter, A fuzzy-tuned adaptive Kalman filter, Proc. 3rd Int. Conf. Ind. Fuzzy Control Intell. Syst., Houston, TX, 1993, pp. 144–148.
34. C.-W. Tao et al., An estimator based on fuzzy if-then rules for the multisensor multidimensional multitarget tracking problem, Proc. 3rd Conf. Fuzzy Syst., 3: 1994, pp. 1543–1548.
35. R. Priebe and R. Jones, Fuzzy logic approach to multitarget tracking in clutter, SPIE Acquisition, Tracking, and Pointing V, 1482: 1991, pp. 265–274.
36. F. A. Watkins, Fuzzy Engineering, Ph.D. thesis, Univ. of California, Irvine, Dept. Electr. Comput. Eng., June 1994.
37. R. N. Lobbia, Sensor fusion implementation with neural networks and fuzzy logic, ORINCON Tech. Rep. OCR 95-4155-U-0371, October 1995.
38. L. Hong and G.-J. Wang, personal communication, 1992.
39. R. N. Lobbia and S. C. Stubberud, Autonomous neural control of space platforms, ORINCON Tech. Rep. OCR 94-4050-U-0037, February 1994.
40. S. C. Stubberud, R. N. Lobbia, and M. Owen, An adaptive extended Kalman filter using artificial neural networks, Proc. 34th IEEE Conf. Decision Control, New Orleans, LA, 1995.
41. S. C. Stubberud, R. N. Lobbia, and M. Owen, Adaptive state estimation using artificial neural networks, Proc. ANNIE '95, St. Louis, MO, 1995.
42. S. Singhal and L. Wu, Training multilayer perceptrons with the extended Kalman algorithm, in D. S. Touretzky (ed.), Advances in Neural Information Processing Systems I, Palo Alto, CA: Morgan Kaufmann, 1989, pp. 133–140.
43. V. Samant, Estimation and control approaches to sensor control in C3S systems, Proc. 5th MIT/ONR Workshop C3 Syst., Monterey, CA, 1982.
VIVEK SAMANT DALE KLAMER ORINCON Corporation
SONICS IN GEOPHYSICAL PROSPECTING. See GEOPHYSICAL PROSPECTING USING SONICS AND ULTRASONICS.
SONOLUMINESCENCE AND SONOCHEMISTRY, PHYSICAL MECHANISMS AND CHEMICAL EFFECTS. See ULTRASONIC PHYSICAL MECHANISMS AND CHEMICAL EFFECTS.
SOUND-LEVEL INSTRUMENTS. See LEVEL METERS.
SOUND LOUDNESS. See PSYCHOACOUSTICS.
SOUND MASKING. See PSYCHOACOUSTICS.
SOUND PRESSURE. See ACOUSTIC VARIABLES MEASUREMENT.
SOUND, PRODUCTION. See MUSICAL INSTRUMENTS.
SOUND WAVES, UNDERWATER. See UNDERWATER ULTRASOUND.
Wiley Encyclopedia of Electrical and Electronics Engineering

Underwater Acoustic Communication
Standard Article
Milica Stojanovic, Northeastern University, Boston, MA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5411
Article Online Posting Date: December 27, 1999

The sections in this article are: Channel Characteristics; System Design; Signal Processing Methods for Multipath Compensation; Active Research Topics.
UNDERWATER ACOUSTIC COMMUNICATION
The need for underwater wireless communications exists in such applications as remote control in the off-shore oil industry; pollution monitoring in environmental systems; collection and telemetry of scientific data recorded at ocean-bottom stations and by underwater vehicles; speech transmission for divers; mapping of the ocean floor for detection of objects and discovery of new resources; and military situations involving submarines and autonomous vehicles. Wireless underwater communications can be established by transmission of acoustic waves. The underwater acoustic communication channels, however, have limited bandwidth and often cause signal dispersion in time and frequency (1–6). Despite these limitations, underwater acoustic communications are a rapidly growing field of research and engineering.

Acoustic waves are not the only means for underwater wireless communication, but they are the best known so far. Radio waves that will propagate any distance through conductive sea water are the extra low-frequency ones (30 Hz to 300 Hz), which require large antennas and high transmitter powers. Optical waves do not suffer as much from attenuation, but they are severely affected by scattering and absorption. Transmission of optical signals requires high precision in pointing the narrow laser beams, which are still being perfected for practical use. Hence, in applications where tethering is not acceptable, acoustic waves remain the single best solution for communicating underwater.

The idea of sending and receiving information underwater dates all the way back to the time of Leonardo da Vinci, who is credited with discovering the possibility of detecting a distant ship by listening on a long tube submerged under the sea. In the modern sense of the word, underwater communications began to develop during World War II for military purposes. One of the first underwater communication systems was an underwater telephone, which was developed in 1945 in the United States for communicating with submarines (3). This device used single-sideband (SSB) suppressed-carrier amplitude modulation in the frequency range of 8 kHz to 11 kHz; it was capable of sending acoustic signals over distances of several kilometers. Low-rate acoustic communications based on binary frequency shift keying have been in use since the early 1970s for controlling acoustic releases. Acoustic tomography signals have also been used for many years in transmissions over horizontal distances of several thousand kilometers (26).
However, not until the development of very large-scale integration (VLSI) technology did a new generation of underwater acoustic communication systems begin to emerge. With the availability of compact digital signal processors (DSPs), with their moderate power requirements, it became possible for the first time to implement complex signal processing and data compression algorithms at the submerged ends of an underwater communication link. During the past few years, significant advancements have been made in the development of underwater acoustic communication systems with respect to their operational range and data throughput (6). Acoustically controlled robots have been designed to replace divers in performing maintenance of submerged platforms (7); high-quality video transmission from the bottom of the deepest ocean trenches (6500 m) to a surface ship was established (8); and acoustic telemetry over horizontal distances in excess of 200 km was demonstrated (9). As efficient communication systems develop, the scope of their applications continues to grow, as do the requirements on system performance. Many of the developing applications, both commercial and military, call for real-time communication with submarines and autonomous, or uncrewed, underwater vehicles (AUVs, UUVs). Freeing the underwater vehicles from cables will enable them to move unencumbered and extend their range of operation. While point-to-point acoustic communication systems remain the ones with the most important applications, the emerging communication scenario of the future is that of an underwater
data network consisting of both stationary and mobile nodes. This network is envisaged to provide exchange of data, such as control, telemetry and eventually video signals, between many network nodes. The network nodes, located on underwater moorings, robots and vehicles, will be equipped with various sensors, sonars and video cameras. A remote user will be able to access the network via a radio link to a central node based on a surface station. With the aim of achieving these goals, current research is focusing on the development of efficient communications and signal processing algorithms, design of efficient modulation and coding schemes, and techniques for mobile underwater communications. In addition, multiple access communication methods are being considered for underwater acoustic networks, as well as the design of network protocols suited for long propagation delays and strict power requirements encountered in the underwater environment. Finally, data compression algorithms suitable for low-contrast underwater images and related image processing methods (10) are expected to enable image transmission through band-limited underwater acoustic channels.
System Requirements
The achievable data throughput and the reliability of an underwater acoustic communication system, as measured by the bit-error rate, vary from system to system. However, they are always subject to bandwidth limitations of the ocean channel. In contrast to the majority of other communication media, the use of underwater acoustic resources has not been regulated by standards. In the existing systems, there are usually four kinds of signals that are transmitted: control, telemetry, speech, and video signals.
Control signals include navigation, status information, and various on/off commands for underwater robots, vehicles, and submerged instrumentation such as pipeline valves or deep-ocean moorings. Data rates of up to about one kilobit per second (kbps) are sufficient for these operations, but very low bit-error rates may be required (5).
Telemetry data are collected by submerged acoustic instruments such as hydrophones, seismometers, sonars, current meters, and chemical sensors; also included are low-rate image data. Data rates on the order of one to several tens of kilobits per second are required for these applications. The reliability requirements are not as stringent as those for the command signals, and a probability of bit error of 10⁻³ to 10⁻⁴ is acceptable for many of the applications.
Speech signals are transmitted between divers and a surface station or between divers. While the existing commercially available diver communication systems mostly use analog communications based on single-sideband modulation of the 3 kHz audio signal, research is advancing in the area of synthetic speech transmission for divers because digital transmission is expected to provide better reliability. Transmission of digitized speech using linear predictive coding (LPC) methods requires rates on the order of several kilobits per second to achieve close-to-toll quality. The bit-error rate tolerance of about 10⁻² makes it a feasible technology for poor-quality band-limited underwater channels (11,12).
Video transmission over underwater acoustic channels requires extremely high compression ratios if an acceptable frame transmission rate is to be achieved. Fortunately, underwater images exhibit low contrast and detail, and preserve satisfactory quality even when compressed to 2 bits per pixel. Compression methods, such as the JPEG (Joint Photographic Experts Group) standard discrete cosine transform, have been used to transmit 256 × 256 pixel still images with 2 bits per pixel at transmission rates of about one frame per 10 s (8). Further reduction of the required transmission rate seems to be possible by using dedicated compression algorithms, e.g., the discrete wavelet transform (10). There have been recent reports of compression ratios in excess of 100:1. On the other hand, underwater acoustic transmission of television-quality monochrome video would require compression ratios in excess of 1000:1. Hence, the required bit rates for video transmission are greater than ten kilobits per second, and possibly up to several hundred kilobits per second. Performance requirements are moderate, as images will have satisfactory quality at bit-error rates on the order of 10⁻³ to 10⁻⁴. Compression of speech and images places more stringent requirements on the bit-error rate than uncoded transmission does.
CHANNEL CHARACTERISTICS
Sound propagation under water is determined primarily by transmission loss, noise, reverberation, and temporal and spatial variability of the channel. Transmission loss and noise are the principal factors determining the available bandwidth, range, and signal-to-noise ratio (SNR). Time-varying multipath influences signal design and processing in a communication system, thus determining the information throughput and the system performance.
Range and Bandwidth
Transmission loss is caused by energy spreading and sound absorption. While the energy-spreading loss depends only on the propagation distance and depth, the absorption loss increases not only with range but also with frequency, thus setting the limit on the available bandwidth. In addition, source transducer bandwidth presents a key limitation on the system throughput. In addition to the nominal transmission loss, link condition is influenced largely by the spatial variability of the underwater acoustic channel. Spatial variability is a consequence of the waveguide nature of the channel, which results in various phenomena, including the formation of shadow zones. Transmission loss at a particular location can be predicted by many of the propagation modeling techniques with various degrees of accuracy (1). Spatial dependence of transmission loss imposes problems for communication with moving sources or receivers.
Noise in the ocean consists of man-made noise and ambient noise. In the deep ocean, ambient noise dominates, while near shores and in the presence of shipping activity, man-made noise significantly increases the overall noise level. Unlike man-made noise, most of the ambient noise sources except biological sources (whales, shrimp, etc.) can be described as having a continuous spectrum and Gaussian statistics (1).
As a first approximation, the ambient noise power spectral density is commonly assumed to decay at 20 dB/decade, in both shallow and deep water, over frequencies which are important to communication systems design.
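The interplay of spreading loss, Thorp's absorption coefficient (used later for Fig. 1), and the 20 dB/decade noise decay can be sketched numerically. The following Python snippet is an illustration only: the function names and the search grid are assumptions, and the SNR is relative (constants common to all frequencies are dropped). It finds the SNR-maximizing carrier frequency for a given range.

```python
import math

def thorp_absorption_db_per_km(f_khz):
    """Thorp's empirical absorption coefficient (dB/km), f in kHz."""
    f2 = f_khz ** 2
    return 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003

def relative_snr_db(f_khz, range_km):
    """Frequency-dependent portion of narrowband SNR (dB), up to a constant:
    spherical spreading plus Thorp absorption for the loss, and ambient noise
    whose power spectral density decays at 20 dB/decade."""
    range_m = range_km * 1000.0
    loss_db = 20 * math.log10(range_m) + thorp_absorption_db_per_km(f_khz) * range_km
    noise_db = -20 * math.log10(f_khz)  # relative noise level, 20 dB/decade decay
    return -loss_db - noise_db

def best_frequency_khz(range_km, grid=None):
    """Carrier frequency (kHz) maximizing relative SNR on a search grid."""
    grid = grid or [0.1 * k for k in range(1, 500)]  # 0.1 kHz to 49.9 kHz
    return max(grid, key=lambda f: relative_snr_db(f, range_km))
```

Running this reproduces the qualitative behavior of Fig. 1: the optimum carrier frequency drops as the range grows, because absorption (which grows with frequency) is multiplied by range.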
Frequency-dependent transmission loss and noise determine the relationship between the available range, bandwidth, and SNR at the receiver input. This dependence is illustrated in Fig. 1, which shows the frequency-dependent portion of SNR for several transmission ranges. (SNR is evaluated assuming spherical spreading, absorption according to Thorp's absorption coefficient from Ref. 1, and a 20 dB/decade decay of the noise power spectral density.) Evidently, this dependence influences the choice of a carrier frequency for the desired transmission range. In addition, it determines the relationship between the available range and frequency band, i.e., the data throughput. Underwater acoustic communication links can be classified according to range as very long, long, medium, short, and very short links. For a long-range system operating over 10 km to 100 km, the bandwidth is limited to a few kilohertz (for a very long distance on the order of 1000 km, the available bandwidth falls to 100 Hz). A medium-range system operating over 1 km to 10 km has a bandwidth on the order of several tens of kilohertz, while only at very short ranges, below about 100 m, may more than 100 kHz of bandwidth be available.
Multipath
Within the limited bandwidth, the signal is subject to multipath propagation through a channel whose characteristics vary with time and are highly dependent on the location of the transmitter and receiver. Multipath structure depends on the link configuration, which is primarily designated as vertical or horizontal. While vertical channels exhibit little time dispersion, horizontal channels may have extremely long multipath spreads. Most notable in the long- and medium-range channels, multipath propagation causes severe degradation of the acoustic communication signals. Exploiting the underwater multipath to achieve high data throughput is widely considered the most challenging task facing an underwater acoustic communication system.
In a digital communication system that uses a single carrier, multipath propagation causes intersymbol interference (ISI), and an important figure of merit is the multipath spread in terms of symbol intervals. While typical multipath spreads in commonly used radio channels are on the order of several symbol intervals, in horizontal underwater acoustic channels they increase to several tens, or even a hundred, symbol intervals for moderate to high data rates. For example, a commonly encountered multipath spread of 10 ms in a medium-range shallow-water channel causes the ISI to extend over 100 symbols if the system is operating at a rate of 10 kilosymbols per second (ksps).

[Figure 1: Frequency-dependent portion of SNR, plotted in dB versus frequency (0 kHz to 20 kHz) for transmission ranges of 5, 10, 50, and 100 km.]

The mechanisms of multipath formation in the ocean are different in deep and shallow water, and also depend on the frequency and range of transmission. Understanding of these mechanisms is based on the theory and models of sound propagation. Depending on the system location, there are several typical ways of multipath propagation. However, it is mostly the water depth that determines the type of propagation. The definition of shallow and deep water is not a strict one, but usually implies the region of the continental shelves (with depth less than about 200 m) and the region past the continental shelves (with depths up to several thousand meters), respectively. Two fundamental mechanisms of multipath formation are reflection at boundaries (bottom, surface, and any objects in the water), and ray bending (rays of sound always bend toward regions of lower propagation speed). If the water is shallow, propagation will occur in surface–bottom bounces in addition to a possible direct path. If the water is deep, as in the regions past the continental shelves, the sound channel may form by the bending of the rays toward the location where the sound speed reaches its minimum; this is called the axis of the deep sound channel. Because there is no loss due to reflections, sound can travel in this way over several thousands of kilometers. This channel is used for sound fixing and ranging, or SOFAR, operations. Alternatively, the rays bending upward may reach the surface, focusing in one point where they are reflected; the process is repeated periodically. The region between two focusing points is called a convergence zone, and its typical length is 60 km to 100 km.
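The ISI arithmetic above (a 10 ms spread at 10 ksps spans 100 symbol intervals) can be illustrated with a toy discrete-time multipath channel. This sketch is illustrative only: the tap delays and gains are made-up values, not measured data. It convolves a 4-PSK symbol stream with a sparse tapped-delay-line impulse response whose length equals the ISI span.

```python
import cmath

def qpsk_symbol(bits2):
    """Map a 2-bit value (0..3) to a unit-magnitude 4-PSK symbol."""
    return cmath.exp(1j * (cmath.pi / 4 + bits2 * cmath.pi / 2))

def multipath_channel(symbols, taps):
    """Tapped delay line: taps[k] is the complex gain at a delay of k symbols."""
    out = [0j] * (len(symbols) + len(taps) - 1)
    for n, s in enumerate(symbols):
        for k, g in enumerate(taps):
            out[n + k] += g * s
    return out

# 10 ms multipath spread at 10 ksps -> ISI spanning 100 symbol intervals.
symbol_rate_sps = 10_000
multipath_spread_s = 0.010
isi_span_symbols = round(multipath_spread_s * symbol_rate_sps)  # 100

# Sparse illustrative taps: a direct path plus two weaker delayed arrivals.
taps = [0j] * (isi_span_symbols + 1)
taps[0], taps[40], taps[100] = 1.0, 0.5j, 0.25

tx = [qpsk_symbol(b) for b in (0, 1, 2, 3, 0, 1)]
rx = multipath_channel(tx, taps)
```

Each received sample is a superposition of up to three transmitted symbols separated by tens of symbol intervals, which is precisely the long-delay ISI a receiver must undo.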
The geometry of multipath propagation and its spatial dependence are important for communication systems which use array processing to suppress multipath (13,14). The design of such systems is often accompanied by the use of a propagation model for predicting the multipath configuration. Ray theory and the theory of normal modes provide the basis for such propagation modeling.
Time-Variation
Associated with each of the deterministic propagation paths (macromultipaths), which can be modeled accurately, are random signal fluctuations (micromultipaths), which account for the time variability of the channel response. Some of the random fluctuations can be modeled statistically (1,2). These fluctuations include surface scattering due to waves, which is the most important contributor to the overall time variability of the shallow-water channel. In deep water, in addition to surface scattering, internal waves contribute to the time variation of the signal propagating along each of the deterministic paths.
Surface scattering is caused by the roughness of the ocean surface. If the ocean were calm, a signal incident on the surface would be reflected almost perfectly, with the only distortion being a phase shift of π. However, wind-driven waves displace the reflection point, resulting in signal dispersion. Equivalently, surface motion, and in particular longer waves and swell, causes amplitude variation, or fading, in the received signal. Vertical displacement of the surface can be well modeled as a zero-mean Gaussian random
variable whose power spectrum is completely characterized by the wind speed (1). Motion of the reflection point results in frequency spreading of the surface-reflected signal, significantly larger than that caused by many other phenomena. The Doppler spread of a signal component of frequency f caused by a single surface reflection occurring at an incidence angle θ is given by 0.0175(f/c)w^{3/2} cos θ, where c is the speed of sound, nominally taken to be 1500 m/s, and w is the wind speed in meters per second (1). A moderate wind speed is on the order of 10 m/s. The highest Doppler spreads are most likely to be found in short- and medium-range links, which use relatively high frequencies. For longer ranges, at which lower frequencies are used, the Doppler spread will be lower; however, the multipath spread will increase, as there will be more significant propagation paths. The exact values of multipath and Doppler spreads depend on the geometry of multipath on a particular link. Nevertheless, it can be said that the channel spread factor, i.e., the product of the Doppler spread and the multipath spread, can in general be expected to decrease with range.
As an example, Figs. 2 to 4 each show an ensemble of channel impulse responses, observed as functions of delay over an interval of time. These responses are estimated from experimental measurements (15). Relevant system parameters are indicated in the figures. The figures describe channel responses obtained at three fundamentally different locations with different mechanisms of multipath formation. The responses shown in Figs. 2 to 4 are obtained by adaptive channel estimation techniques; in particular, a recursive least-squares algorithm is applied to 4-PSK signals transmitted over the channels at the rates indicated in the figures. Figure 2 shows the impulse responses recorded in deep water of the Pacific Ocean, off the coast of California.
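The surface-scattering Doppler spread formula above can be evaluated as a quick numerical sketch (the function name is an assumption made for illustration). For a 15 kHz component, a 10 m/s wind, and normal incidence, it works out to a few hertz:

```python
import math

def surface_doppler_spread_hz(f_hz, wind_mps, incidence_rad, c_mps=1500.0):
    """Doppler spread of a single surface-reflected arrival:
    0.0175 * (f / c) * w^(3/2) * cos(theta)."""
    return 0.0175 * (f_hz / c_mps) * wind_mps ** 1.5 * math.cos(incidence_rad)

spread = surface_doppler_spread_hz(15_000, 10.0, 0.0)  # about 5.5 Hz
```

The linear dependence on f confirms the observation in the text: higher-frequency (short- and medium-range) links see the largest Doppler spreads.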
In this channel, propagation occurs over three convergence zones, which span 110 nautical miles. At each fixed-time instant, the channel impulse response magnitude is shown as a function of delay. Each channel response reveals that two or more signals arrive at the receiver at any given time and the amplitudes and phases of distinct arrivals may vary independently in time.
[Figure 2 plots channel impulse response magnitude versus delay (−40 ms to 40 ms) and time (0 s to 15 s). Range: 110 nautical miles; rate: 333 sps; channel #6, omnidirectional; tx depth 100 m, rx depth 640 m.]
Figure 2. Ensemble of long-range channel responses in deep water (approximately 2000 m) off the coast of California, during the month of January. Carrier frequency is 1 kHz. Rate at which quaternary data symbols used for channel estimation were transmitted is given in symbols per second (sps).
[Figure 3 plots channel impulse response magnitude versus delay (−50 ms to 50 ms) and time (0 s to 15 s). Range: 48 nautical miles; rate: 333 sps; channel #8, omnidirectional; tx depth 25 m, rx depth 23 m.]
Figure 3. Ensemble of long-range channel responses in shallow water (approximately 50 m) off the coast of New England, during the month of May. Carrier frequency is 1 kHz.
The multipath delay spread in this channel is on the order of 20 ms, and the multiple arrivals have comparable energy, thus causing strong ISI. Along the time axis, variation of the channel response is observed for each given delay. In Fig. 2, significant variation occurs over the 15-second interval shown. This channel does not have a well-defined principal, or strongest, arrival, as evidenced by the fact that the maximum amplitude does not always occur at the same delay. Figure 3 shows the impulse responses obtained in shallow water of the Atlantic Ocean continental shelf, off the coast of New England, over a long distance of 48 nautical miles. This example shows a channel with a well-defined principal arrival, followed by multipath of lower energy. The extent of the multipath is up to 50 ms. It is worth noting that even though the extended multipath may appear to have negligible energy, its contribution to the overall ISI cannot be neglected. This channel shows a slower time variation than the one observed in Fig. 2. In contrast, Fig. 4 provides an example of a rapidly time-varying channel. These responses were recorded in the shallow water of Buzzards Bay near the coast of New England, over
[Figure 4 plots channel impulse response magnitude versus delay (−5 ms to 10 ms) and time (0 s to 10 s). Range: 2 nautical miles; rate: 500 sps; channel #1, omnidirectional; tx depth 8 m, rx depth 3.5 m.]
Figure 4. Ensemble of medium-range channel responses in shallow water (approximately 20 m) near the coast of New England, during the month of February. Carrier frequency is 15 kHz.
a distance of 2 nautical miles. Of the three examples shown, this channel demonstrates the fastest time variation, which is typical of a medium-range shallow-water environment.
The factor that determines the performance of a digital communication system on a frequency-spread channel is the Doppler spread normalized by the symbol rate. In underwater acoustic channels, the normalized Doppler spread can have values as high as 10⁻². While the Doppler spread describes the time variation of the channel response, the multipath spread describes its time dispersion. The implications of the resulting time-varying multipath dispersion for the communication system design are twofold. On one hand, signaling at a high rate causes many adjacent symbols to interfere at the receiver and requires sophisticated processing to compensate for the ISI. On the other hand, as the pulse duration becomes shorter, channel variation over a single symbol interval becomes slower. This allows an adaptive receiver to efficiently track the channel on a symbol-to-symbol basis, provided, of course, there is a method for dealing with the resulting time dispersion. Hence, time-varying multipath causes a trade-off in the choice of signaling rate for a given channel. Experimental results obtained on a rapidly varying shallow-water channel (16) demonstrate these observations.
While there exists a vast body of knowledge on both deterministic and statistical modeling of sound propagation underwater, the implications of this knowledge for communication channel modeling have only recently received more attention (17–20). A time-varying multipath communication channel is commonly modeled as a tapped delay line, with tap spacing equal to the reciprocal of twice the channel bandwidth, and the tap gains modeled as stochastic processes with certain distributions and power spectral densities.
Although it is known that many radio channels fit well within the model of Rayleigh fading, where the tap gains are derived from complex Gaussian processes, to date there is no widely accepted single model for any of the underwater acoustic communication channels. Modeling of the shallow-water medium-range channel has received the most attention, as this channel is known to be among the most rapidly varying ones. Most researchers consider this channel to be fully saturated, meaning that it exhibits Rayleigh fading (2,4,17). The deep-water channel has also been modeled as a Rayleigh fading channel; however, the available measurements are scarce, often making channel modeling a controversial issue (18). The channel measurements available today focus mostly on stationary communication scenarios. In a mobile underwater acoustic channel, vehicle speed will be the primary factor determining the time-coherence properties of the channel, and consequently the system design. Knowledge of a statistical channel model has proved to be useful in the design and analysis of land-mobile radio systems, and such models for underwater mobile acoustic channels await future development.
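The tapped-delay-line model with Rayleigh-fading tap gains discussed above can be sketched as follows. This is a generic illustration, not the article's model: the tap count, the exponentially decaying power profile, and the function names are assumptions. Each tap gain is drawn as a zero-mean complex Gaussian variable, so its magnitude is Rayleigh distributed.

```python
import math
import random

def rayleigh_taps(power_profile, rng=None):
    """Draw one realization of complex Gaussian tap gains; |gain| is then
    Rayleigh distributed. power_profile[k] is the mean power of tap k."""
    rng = rng or random.Random(0)
    taps = []
    for p in power_profile:
        sigma = math.sqrt(p / 2.0)  # per-component standard deviation
        taps.append(complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)))
    return taps

# Hypothetical exponentially decaying power profile over 8 taps.
profile = [math.exp(-0.5 * k) for k in range(8)]
realization = rayleigh_taps(profile)
```

Averaged over many realizations, the power of each tap converges to its profile value, which is the defining property of this statistical channel description; a full simulator would additionally filter the tap processes to impose the desired Doppler power spectrum.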
SYSTEM DESIGN
To overcome the difficulties of time-varying multipath dispersion, the design of commercially available underwater acoustic communication systems has so far relied mostly on the use of noncoherent modulation techniques and signaling methods which provide relatively low data throughput. Phase-coherent
modulation techniques, which use equalization and array processing to compensate for the channel impairments, have only recently been shown to provide a feasible means for more efficient use of the underwater acoustic channel bandwidth (6). These advancements are expected to result in a new generation of underwater communication systems, with at least an order of magnitude increase in raw data throughput.
Approaches to system design vary according to the technique used for overcoming the effects of intersymbol interference and signal-phase variations. Specifically, these techniques may be classified according to the signal design, i.e., the choice of modulation/detection method, and the transmitter/receiver structure, i.e., the choice of array processing method and equalization method, if any. This section describes the design of several systems which have been implemented. While most of the existing systems operate on vertical or very short-range channels, the systems under development often focus on the severely spread horizontal shallow-water channels. Signal processing methods used in these systems are addressed in the following section.
Systems Based on Noncoherent Modulation
Noncoherent detection of frequency shift keying (FSK) signals has been used for channels exhibiting rapid phase variation, i.e., the shallow-water long- and medium-range channels. To overcome the ISI, the existing noncoherent systems employ signal design with guard times, which are inserted between successive pulses to ensure that all the reverberation vanishes before each subsequent pulse is received. The insertion of idle periods of time obviously results in a reduction of the available data throughput. In addition, because fading is correlated among frequencies separated by less than the coherence bandwidth (the inverse of the multipath spread), it is desirable that only those frequency channels which are separated by more than the coherence bandwidth be used at the same time. This requirement further reduces the system efficiency unless some form of coding is employed so that adjacent simultaneously transmitted frequencies belong to different codewords.
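The throughput penalties just described can be quantified with a small sketch. The function names and example numbers here are illustrative assumptions, not taken from the article: the raw per-channel rate is scaled by the pulse duty factor imposed by guard times, and the number of simultaneously usable FSK channels is limited by the coherence-bandwidth spacing.

```python
def effective_rate_bps(bits_per_pulse, pulse_s, guard_s):
    """Bits per second per channel when each pulse must be followed
    by a guard time long enough for reverberation to die out."""
    return bits_per_pulse / (pulse_s + guard_s)

def usable_channels(total_bandwidth_hz, multipath_spread_s):
    """Number of simultaneous FSK channels spaced by the coherence
    bandwidth, approximated as the inverse of the multipath spread."""
    coherence_bw_hz = 1.0 / multipath_spread_s
    return round(total_bandwidth_hz / coherence_bw_hz)
```

For example, 2 bits carried by a 30 ms pulse followed by a 10 ms guard yields only 50 bps per channel, and a 10 kHz band over a channel with 10 ms multipath spread supports roughly 100 simultaneously faded-independent tones.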
In addition, due to the fact that fading is correlated among frequencies separated by less than the coherence bandwidth (the inverse of the multipath spread), it is desired that only those frequency channels which are separated by more than the coherence bandwidth be used at the same time. This requirement further reduces the system efficiency unless some form of coding is employed so that the adjacent simultaneously transmitted frequencies belong to different codewords. A representative system (21) for telemetry at a maximum of 5 kbps uses a multiple FSK modulation technique in the 20 to 30 kHz band. This band is divided into 16 subbands, in each of which a 4-FSK signal is transmitted. Hence, from a total of 64 channels, 16 are used simultaneously for parallel transmission of 32 information bits (2 information bits per one 4-channel subband). This system has been used successfully for telemetry over a 4 km shallow-water horizontal path, and a 3 km deep ocean vertical path. It was also used on a less than 1 km long shallow-water path, where probabilities of bit error on the order of 10⫺2 to 10⫺3 were achieved without coding. The system performance may be improved by using error correction coding; however, its data throughput will be reduced. An acoustic modem based on multiple FSK is commercially available with a maximum data rate of 1200 bps. Despite the fact that bandwidth efficiency of this system does not exceed 0.5 bps/Hz, noncoherent FSK is a good solution for applications where moderate data rates and robust performance are required. As such, these methods are being further developed, and a system has recently been implemented (22) which uses orthogonal frequency division multiplexing
(OFDM) realized with DFT-based (discrete Fourier transform) filter banks. This system was used on a medium-range channel; however, due to the high frequency separation among the channels (only every fourth channel is used) and the relatively long guard times (a 10 ms guard following a 30 ms pulse) needed to compensate for the multipath fading distortion, the effective data rate is only 250 bps.
Systems Based on Differentially Coherent and Coherent Modulation
With the goal of increasing the bandwidth efficiency of an underwater acoustic communication system, research focus over the past years has shifted toward phase-coherent modulation techniques, such as phase-shift keying (PSK) and quadrature amplitude modulation (QAM). Phase-coherent communication methods, previously not considered feasible, were demonstrated to be a viable way of achieving high-speed data transmission over many of the underwater channels, including the severely time-spread horizontal shallow-water channels (9,16,23,24). Depending on the method for carrier synchronization, phase-coherent systems fall into two categories: differentially coherent and purely phase-coherent. The advantage of using differentially encoded PSK (DPSK) with differentially coherent detection is the simple carrier recovery it allows; however, it incurs a performance loss compared to coherent detection. Most of the existing systems employ DPSK methods to overcome the problem of carrier-phase extraction and tracking. Real-time systems have been implemented mostly for application in vertical and very short-range channels, where little multipath is observed and the phase stability is good. In the very short-range channel, where bandwidth in excess of 100 kHz is available and signal stability is good, a representative system (7) operates over 60 m at a carrier frequency of 1 MHz and a data rate of 500 kbps.
This system is used for communication with an undersea robot which performs maintenance of a submerged platform; 16-QAM is used, and the performance is aided by an adaptive equalizer. A linear equalizer, operating under a least mean squares (LMS) algorithm, suffices to reduce the bit-error rate from 10⁻⁴ to 10⁻⁷ on this channel. Transmission over a very long range has been used for ocean acoustic tomography (25). With a carrier frequency of 200 Hz and a 20 Hz bandwidth, information was transmitted in this system at less than 1 bps over 4000 km in deep water. A deep-ocean vertical-path channel is used by an image transmission system (8). This is a 4-DPSK system with a carrier frequency of 20 kHz, capable of achieving 16 kbps bottom-to-surface transmission over 6500 m. The field tests of this system indicate achievable bit-error rates on the order of 10⁻⁴ with a linear equalizer operating under an LMS algorithm. Another example of a successfully implemented system for vertical-path transmission is an underwater image and data transmission system (26). This system uses binary DPSK modulation at a rate of 19.2 kbps. A carrier frequency of 53 kHz was used for transmission over 2000 m. Recent advances in digital underwater speech transmission are represented by a prototype system described in Ref. 11. This system uses a code-excited linear prediction (CELP) method to transmit the speech signal at 6 kbps. The modulation method used is 4-DPSK. A decision-feedback equalizer, operating under an LMS algorithm, is being used in the pool tests. Field tests have not been reported yet. A similar approach is considered in Ref. 12.
For applications in a shallow-water medium-range channel, a binary DPSK system (27) uses a direct-sequence spread-spectrum method to resolve a strong surface reflection observed in a 1 km long, 10 m deep channel. The interfering reflection is only rejected, not used for multipath recombining. A data throughput of 600 bps within a bandwidth of 10 kHz is achieved. Such high spreading ratios are justified in interference-suppression applications. The current state of the art in phase-coherent underwater communications is represented by the system implementation of Ref. 28. This 4-PSK system is based on purely phase-coherent modulation and detection principles (23). The signals are transmitted at 5 kbps using a carrier frequency of 15 kHz. The system's real-time operation in a six-node network configuration was demonstrated in an under-ice shallow-water environment. To overcome the ISI caused by shallow-water multipath propagation, the system uses a decision-feedback equalizer operating under a recursive least squares (RLS) algorithm.
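As a rough sketch of the adaptive equalization idea recurring in these systems, the following toy LMS linear equalizer (illustrative only: the tap count, step size, and 2-tap BPSK test channel are assumptions, and real systems cited above use complex-valued decision-feedback structures, some with RLS updates that converge faster) adapts its taps from known training symbols to undo mild ISI.

```python
import random

def lms_equalize(received, training, num_taps=5, mu=0.05):
    """Train a linear transversal equalizer with the LMS rule
    w <- w + mu * error * input_vector; returns the tap weights."""
    w = [0.0] * num_taps
    for n in range(num_taps - 1, len(training)):
        x = received[n - num_taps + 1:n + 1][::-1]  # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, x))    # equalizer output
        e = training[n] - y                         # error against known symbol
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w

# Toy experiment: BPSK symbols through a mild two-tap ISI channel.
rng = random.Random(7)
symbols = [rng.choice((-1.0, 1.0)) for _ in range(2000)]
channel = (1.0, 0.4)  # direct path plus one weaker one-symbol-delayed arrival
received = [channel[0] * symbols[n] + (channel[1] * symbols[n - 1] if n > 0 else 0.0)
            for n in range(len(symbols))]
weights = lms_equalize(received, symbols)
```

After training, the weights approximate the inverse of the channel, and the residual symbol-estimation error is far below the raw ISI power; a decision-feedback equalizer would instead cancel the trailing ISI using past symbol decisions.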
SIGNAL PROCESSING METHODS FOR MULTIPATH COMPENSATION

To achieve higher data rates, bandwidth-efficient systems based on phase-coherent signaling methods must allow for considerable ISI in the received signal. These systems employ either some form of array processing or equalization methods (or a combination thereof) to compensate for the channel distortions. Three main approaches have been taken towards this end. The first two approaches use differentially coherent detection and rely on array processing to eliminate or reduce multipath. The third approach is based on purely phase-coherent detection and the use of equalization together with array processing for exploitation of the multipath and spatial diversity. Array processing for multipath suppression has been used at both the transmitter and the receiver end. Transmitter arrays can be used to excite only a single path of propagation, but very large arrays are required (3). To overcome the need for a large array, the use of parametric sources has been studied extensively (13). These highly directive sources rely on the nonlinearity of the medium in the vicinity of a transducer, where two or more very high frequencies from the primary projector are mixed. The resulting difference frequency is transmitted by a virtual array formed in the water column in front of the projector. A major limitation of such a source is its high power requirement. High directivity also implies the problem of pointing errors, and careful positioning is required to ensure complete absence of multipath. These systems have been employed in shallow-water channels where equalization is not deemed feasible due to the rapid time variation of the signal. Instead, a receiving array is employed to compensate for the possible errors. Binary and quaternary DPSK signals were used, achieving data rates of 10 and 20 kbps, respectively, with a carrier frequency of 50 kHz. The estimated bit-error rate was on the order of 10⁻² to 10⁻³, depending on the
UNDERWATER ACOUSTIC COMMUNICATION
actual channel length. In general, it was found that this technique is more effective at shorter ranges. Multipath rejection using adaptive beamforming at the receiver end only is another possibility. The beamformer (14) uses an LMS algorithm to adaptively steer nulls in the direction of a surface-reflected wave. As in the case of the transmitter beamformer, it was found that the receiver beamformer encounters difficulties as the range increases relative to depth. To compensate for this effect, an equalizer was considered to complement the performance of the beamformer. The equalizer is of a decision-feedback type, and it operates under an LMS algorithm whose low computational complexity permits real-time adaptation at the symbol rate. A separate waveform is transmitted at twice the data rate for purposes of time synchronization. The system was tested in shallow water at 10 kbps using a carrier frequency of 50 kHz. An estimated bit-error rate of 10⁻² was observed without the equalizer, improving to 10⁻³ with it. A different method, based on purely phase-coherent detection, uses joint synchronization and equalization to combat the effects of phase variations and ISI (23,24). The equalization method is fractionally spaced decision-feedback equalization used with an RLS algorithm. The system incorporates spatial signal processing in the form of multichannel equalization based on diversity combining. The phase-coherent methods have been tested in a variety of highly time-spread underwater channels, showing superior performance regardless of the link geometry. The achieved data rates of up to 2 kbps over long-range channels, and up to 40 kbps over shallow-water medium-range channels, are among the highest reported to date. These methods are discussed in more detail in the following section.
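Several of the receivers above adapt their taps with the LMS rule. A minimal sketch of an LMS-trained linear equalizer follows (the toy channel and step size are illustrative; the fielded systems use decision-feedback and multichannel structures):

```python
import numpy as np

def lms_equalize(received, training, num_taps=8, mu=0.01):
    """Train a linear equalizer with the LMS rule w <- w + mu * e * conj(x).
    Complexity per symbol is O(num_taps), which is what makes LMS attractive
    for real-time use despite its slow convergence."""
    w = np.zeros(num_taps, dtype=complex)
    out = np.zeros(len(training), dtype=complex)
    for n in range(num_taps - 1, len(training)):
        x = received[n - num_taps + 1:n + 1][::-1]  # tap-delay-line contents
        y = np.dot(w, x)                            # equalizer output
        e = training[n] - y                         # error against known symbol
        w += mu * e * np.conj(x)                    # stochastic-gradient update
        out[n] = y
    return w, out

# Toy example: BPSK symbols through a two-tap ISI channel
rng = np.random.default_rng(0)
syms = 2.0 * rng.integers(0, 2, 2000) - 1.0
rx = np.convolve(syms, [1.0, 0.4])[:len(syms)]
w, out = lms_equalize(rx, syms)
err = np.mean(np.abs(syms[1000:] - out[1000:]) ** 2)
assert err < 0.05   # residual MSE after convergence
```

After convergence the taps approximate the inverse of the channel; in practice the training symbols are replaced by past decisions once the error is small enough, exactly as described for the decision-directed mode below.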
Design Example: Multichannel Signal Processing for Coherent Detection

In many underwater acoustic channels, the multipath structure may exhibit one or more components that carry energy similar to that of the principal arrival. As time progresses, it is not unusual for these components to exceed the principal arrival in energy (e.g., see Fig. 2). The fact that the strongest multipath component may not be well defined makes the extraction of a carrier reference a difficult task in
such a channel. To establish coherent detection in the presence of strong multipath, a technique based on simultaneous synchronization and multipath compensation (23) may be used. This technique is based on joint estimation of the carrier phase and the parameters of a decision-feedback equalizer, where the optimization criterion is minimization of the mean-squared error (MSE) in the data estimation process. In addition, the equalizer/synchronizer structure can be extended to include a number of input channels (9,24). Spatial diversity combining has shown excellent performance in a number of underwater environments, as well as potential for dealing with several types of interference. In Fig. 5, the multichannel equalizer is shown, preceded by an additional precombiner, which may or may not be used depending on the application and the number of available input channels. The input signals to the baseband processor are the A/D-converted array signals, brought to baseband using the nominal carrier and lowpass filtering. The signals are frame-synchronized using a known channel probe (usually a short sequence, such as a Barker code, transmitted in-phase and in quadrature at the data rate). Baseband processing begins with downsampling, which may be carried out to as few as two samples per symbol interval (Ns = 2), because the signals are shaped at the transmitter to have a raised-cosine spectrum which limits their maximal frequency to less than 1/T. Because there is no feedback to the analog part of the receiver, the method is suitable for an all-digital implementation. For applications where the transmitter and receiver are not moving but only drifting with the water, no explicit adjustment of the sampling clock is needed; it is implicitly accomplished during the process of adaptive fractionally spaced equalization. The front section of the equalizer also performs adaptive matched filtering and linear equalization.
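Frame synchronization against a known Barker probe amounts to a sliding correlation; a minimal sketch (the stream length, noise level, and probe offset are illustrative):

```python
import numpy as np

# 13-chip Barker code: its aperiodic autocorrelation sidelobes are all +/-1,
# giving a sharp, unambiguous correlation peak for frame alignment.
BARKER13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def frame_sync(received, probe=BARKER13):
    """Locate the channel probe in the received baseband stream by
    correlating against the known sequence and picking the peak."""
    corr = np.correlate(received, probe, mode="valid")
    return int(np.argmax(np.abs(corr)))

# Toy example: probe buried at offset 37 in noise
rng = np.random.default_rng(4)
stream = 0.2 * rng.standard_normal(200)
stream[37:37 + 13] += BARKER13
assert frame_sync(stream) == 37
```

The low sidelobes of the Barker sequence are what make the peak location reliable even when individual chips are noisy.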
To correct for the carrier offset, the signals in all the channels are phase-shifted by the amount estimated in the process of joint equalization and synchronization. After coherent combining, the ISI caused by previously transmitted symbols (postcursors) is cancelled in the feedback section of the equalizer. This receiver structure is applicable to any linear modulation format, such as M-PSK or M-QAM, the only difference being the manner in which the symbol decision is performed.

[Figure 5. A multichannel equalizer for phase-coherent detection. Each of the K inputs is resampled (at Ns/T) with carrier correction, optionally passed through a spatial precombiner, and filtered by a feedforward filter; the combined output, after subtraction of the feedback-filter output, drives the symbol decision (data out) and, when coding is used, a trellis decoder.]

In addition to combining and equalization, signal processing at the receiver includes the operation of decoding if the signals at the transmitter were encoded. Trellis-coded modulation, compatible with PSK and QAM signals, is an effective means of improving performance on a band-limited channel (29). In addition to coded modulation, error-correction coding may be employed. The receiver parameters that are adaptively adjusted are the weights of the precombiner, the tap weights of the feedforward filters, the carrier-phase estimates, and the tap weights of the feedback filter. A single estimation error is used for the adaptation of all parameters. This error is the difference between the estimated data symbol at the input to the decision device and its true value. During the initial training mode, the true data symbols are known. After the training period, when the receiver parameters have converged, the online symbol decisions are fed back to the equalizer and used to compute the error. The adaptive algorithm used to update the receiver parameters is a combination of a second-order digital phase-locked loop (PLL) for the carrier-phase estimates and the RLS algorithm for the multichannel equalizer tap weights. The complexity of the multichannel equalizer grows with the number of its input channels. For this reason, the spatial precombiner may be used to limit the number of equalizer channels while still making use of the multichannel gain available from an array of sensors. The precombiner weights can be estimated jointly with the remaining adjustable parameters; the details of the joint adaptation are given in Ref. 24. The multichannel receiver is adaptively adjusted to coherently combine multiple signal arrivals, and thus exploits both spatial and temporal (multipath) diversity gain. In this manner it differs from a receiver based on adaptive beamforming, which is adjusted to null out the signal replicas arriving from angles different from that of the desired path.
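The second-order PLL component of this joint update can be sketched in isolation. In the sketch below, the loop gains `k1` and `k2` and the test scenario are illustrative, and the RLS tap adaptation that runs alongside the phase loop is omitted:

```python
import numpy as np

def track_phase(received, symbols, k1=0.1, k2=0.01):
    """Second-order digital PLL: the phase-detector output (angle between the
    derotated sample and the known or decided symbol) drives a proportional
    branch (k1) and an integrating branch (k2); the integrator absorbs a
    constant frequency offset, which is what makes the loop second order."""
    theta, freq = 0.0, 0.0
    estimates = np.zeros(len(received))
    for n, (r, s) in enumerate(zip(received, symbols)):
        err = np.angle(r * np.exp(-1j * theta) * np.conj(s))  # phase error
        freq += k2 * err                # integrator tracks the frequency offset
        theta += k1 * err + freq        # loop update
        estimates[n] = theta
    return estimates

# Toy example: QPSK symbols under a constant offset plus frequency drift
rng = np.random.default_rng(1)
syms = np.exp(1j * np.pi / 2 * rng.integers(0, 4, 500))
true_phase = 0.3 + 0.01 * np.arange(500)
rx = syms * np.exp(1j * true_phase)
est = track_phase(rx, syms)
assert abs(est[-1] - true_phase[-1]) < 0.05   # loop locks to the drift
```

In the actual receiver the reference symbols come from the training sequence and later from the symbol decisions, and the resulting phase error also drives the equalizer update.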
The signal isolated by a beamformer usually has to be processed by a separately optimized equalizer to compensate for the residual ISI that arises because the beamformer cannot completely eliminate the multipath interference (14). The adaptive multichannel equalizer also has the capability to form spatial notches, but rather than cancelling all multipath, this receiver permits a range of solutions whose judicious combination is determined implicitly through adaptation. Because it is not constrained by angular resolution, the method of multichannel equalization may be used with as few as two input channels. It is also applicable to a variety of underwater acoustic channels, regardless of the channel range-to-depth ratio. In applications where large arrays are available, the precombiner reduces the receiver complexity while preserving the multichannel diversity gain. The method of adaptive multichannel combining and equalization was demonstrated to be effective in underwater channels with fundamentally different mechanisms of multipath formation. Experimental results include data rates of 2 kbps over three convergence zones (200 km or 110 nautical miles) in deep water, 2 kbps over 90 km (50 nautical miles) in shallow water, and up to 40 kbps over 1 to 2 km in rapidly varying shallow-water channels (6).

ACTIVE RESEARCH TOPICS

At this stage in the development of underwater acoustic communication techniques, with the feasibility of high-rate communications established, numerous research topics are foreseen that will influence the development of future systems. Such topics include: reduced-complexity receiver structures and algorithms suitable for real-time implementation; techniques for interference suppression; multiuser underwater communications; system self-optimization; development of modulation/coding methods for improved bandwidth efficiency; and mobile underwater acoustic communication systems.

Reducing the Receiver Complexity

Although the underwater acoustic channels are generally confined to low data rates compared to many other communication channels, the encountered channel distortions require complex signal processing methods, resulting in a high computational load that may exceed the capabilities of the available programmable DSP platforms. Consequently, reducing the receiver complexity to enable efficient real-time implementation has been a focus of active research. The problem may be addressed on two levels: the design of an efficient receiver structure, and the design of an efficient adaptive algorithm. For application in time-varying channels, the receiver, whether it is based on array processing, equalization, or both, must use an adaptive algorithm for adjusting its parameters. Two commonly used algorithms are based on the LMS and the RLS estimation principles. In a majority of recent studies, the LMS-based algorithms are considered the only alternative for real-time implementation due to their low computational complexity, which is linear in the number of coefficients N (12,14,30). However, the LMS algorithm has a convergence time that may become unacceptably long when large adaptive filters are used (on the order of 20N symbols, as opposed to 2N for the RLS algorithm). The total number of coefficients may be very large (more than 100 taps is often needed for spatial and temporal processing in medium- and long-range shallow-water channels).
In addition, the LMS algorithm is very sensitive to the choice of step size. To overcome this problem, self-optimized LMS algorithms may be used (30), but these bring increased complexity and increased convergence time. RLS algorithms, on the other hand, have better convergence properties but higher computational complexity. The quadratic complexity of the standard RLS algorithms is too high when large adaptive filters must be implemented. In general, it is desirable that the algorithm be of linear complexity, a property shared by the fast RLS algorithms. A numerically stable fast RLS algorithm (31) has been used for the multichannel equalizer (9). Despite its quadratic complexity, a square-root RLS algorithm (32) has been used for real-time implementation (33). The advantage of this algorithm is that it allows the receiver parameters to be updated only periodically rather than at every symbol interval, reducing the computational load per detected symbol. In addition, the updating intervals can be determined adaptively by monitoring the estimated mean squared error. Such adaptation methods are especially suitable for high transmission rates, where long ISI requires large adaptive filters but short symbol intervals make per-symbol updates costly. The square-root RLS algorithm has excellent numerical stability, which makes it preferable for practical implementation. A different class of adaptive filters, which also have the desired convergence properties and numerical stability, are the lattice filters, which may use either LMS or RLS algorithms. These algorithms have been proposed in Ref. 34 but have not yet been applied to underwater acoustic channel equalization. The selection of an appropriate receiver adaptation method will receive more attention in future acoustic modem design.

Regardless of the adaptive algorithm used, its computational complexity is proportional to the number of receiver parameters (tap weights). Rather than focusing only on low-complexity algorithms, one may search for a way to reduce the receiver size. Although the use of array processing reduces residual ISI and allows shorter equalizers to be used, a broadband combiner may still require a large number of taps to be updated, limiting the practical number of receiving channels to only a few. The use of a precombiner (26) is a method for reducing a large number of input channels to a smaller number for subsequent multichannel equalization. By careful design, full diversity gain can be preserved by this technique. Usually, more than one channel at the output of the combiner is required, but this number is often small (e.g., three). The fact that diversity gain may be preserved is explained by multipath correlation across the receiver array. In addition to the reduced computational complexity, smaller adaptive filters result in less noise enhancement, contributing to improved performance. A different approach in the design of reduced-complexity receiver structures is to reduce the number of equalizer taps. A conventional equalizer is designed to span the entire channel response.
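The exponentially weighted RLS recursion discussed above, whose per-symbol cost is quadratic in the number of taps, can be sketched as follows (real-valued for clarity; fast and square-root variants restructure exactly this computation):

```python
import numpy as np

def rls_step(w, P, x, d, lam=0.99):
    """One exponentially weighted RLS update.  The O(N^2) per-symbol cost
    comes from the rank-one update of the inverse correlation matrix P;
    fast RLS variants exploit the shift structure of the tap-delay line
    to reach O(N)."""
    k = P @ x / (lam + x @ P @ x)       # gain vector
    e = d - w @ x                       # a-priori estimation error
    w = w + k * e                       # coefficient update
    P = (P - np.outer(k, x @ P)) / lam  # inverse-correlation update
    return w, P, e

# Toy example: identify a 3-tap channel from input/output data
rng = np.random.default_rng(2)
h = np.array([0.5, 1.0, -0.3])
x_in = rng.standard_normal(300)
d_out = np.convolve(x_in, h)[:300]
w, P = np.zeros(3), 1e3 * np.eye(3)
for n in range(2, 300):
    x = x_in[n - 2:n + 1][::-1]         # [x[n], x[n-1], x[n-2]]
    w, P, e = rls_step(w, P, x, d_out[n])
assert np.allclose(w, h, atol=1e-3)     # converges to the true taps
```

The forgetting factor `lam` (here 0.99, an illustrative value) controls the effective memory of the estimator and hence the tracking speed on a time-varying channel.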
However, if the channel is characterized by several distinct multipath arrivals separated in time by intervals of negligible reverberation, an equalizer may be designed with fewer taps. Such a method, termed sparse equalization, was applied to the detection of experimental signals and showed an order-of-magnitude reduction in computational load (35,36). By reducing the number of adaptively adjusted parameters, this approach also makes it possible to use simple updating algorithms, such as the standard RLS algorithm, which have good numerical stability. Finally, in channels that are naturally sparse, discarding the low-magnitude equalizer taps improves performance because no unnecessary noise is processed.
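The tap-selection step at the heart of sparse equalization can be sketched as a simple thresholding of a channel estimate (the threshold rule and the toy channel below are illustrative, not the selection rule of Refs. 35,36):

```python
import numpy as np

def sparse_taps(channel_estimate, keep_fraction=0.1):
    """Select equalizer tap positions from a channel estimate: keep only the
    strongest arrivals and skip the stretches of negligible reverberation
    between them.  Adapting only these taps cuts the per-symbol cost from
    O(N_full) to O(N_sparse)."""
    mags = np.abs(channel_estimate)
    threshold = keep_fraction * mags.max()
    return np.flatnonzero(mags >= threshold)

# A channel with three distinct arrivals spread over 100 symbol intervals
h = np.zeros(100)
h[[0, 40, 85]] = [1.0, 0.6, 0.3]
h += 0.01 * np.abs(np.random.default_rng(3).standard_normal(100))
active = sparse_taps(h)
assert set(active.tolist()) == {0, 40, 85}   # only the real arrivals survive
```

A full-span equalizer would adapt 100 taps here; the sparse version adapts 3, illustrating the order-of-magnitude savings reported in the text.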
Interference Cancellation and Multiuser Communications

The sources of interference in underwater acoustic channels include both external interference and interference generated within the system. The external sources include noise from on-board machinery or other nearby acoustic sources. In the specific scenario of mobile communications, these sources also include the propulsion and flow noise associated with the underwater vehicle launch process. The internal noise, which has signal-like characteristics, arises in the form of an echo in full-duplex systems and in the form of multiple-access interference generated by other users operating within the same network. Methods for cancellation of interference arising in the form of band-limited white noise and multiple sinusoids were investigated in Ref. 37. It was found that the multichannel receiver of Fig. 5 was most effective in canceling the interference while simultaneously detecting the desired signal. Noise cancellation is performed by providing a reference of the noise signal to one of the multichannel combiner inputs, which may be accomplished by the use of a reference hydrophone. Cancellation of a sinusoidal interferer may be performed even without the reference signal: by virtue of the training sequence, the multichannel combiner is capable of adaptively filtering out the interfering signal and extracting the desired one.

A multiple-access communication system represents a special case of a structured interference environment. Due to the bandwidth limitation of the underwater acoustic channel, frequency-division multiple access may not be an efficient technique. Time-division multiple access is associated with the problem of efficient time-slot allocation, which arises because of the long propagation delays. A possible solution in such a situation is to allow a number of users to transmit simultaneously in both time and frequency. This approach resembles code-division multiple access; however, only very low code-division processing gains are available due to bandwidth constraints. The receiver thus has to be designed to deal with the resulting multiple-access interference, which may be very strong in an underwater acoustic network. The fact that transmission loss varies significantly with range, and the fact that only low spreading ratios are available, both contribute to an enhanced near-far effect in the underwater acoustic channel. The multiuser detection methods investigated for underwater acoustic channels rely on the principles of joint synchronization, channel equalization, and multiple-access interference cancellation (38). Two categories of multiuser receivers have been considered: the centralized receiver, in which the signals of all users are detected simultaneously (e.g., uplink reception at a surface buoy that serves as a central network node), and the decentralized receiver, in which only the desired user's signal must be detected (e.g., downlink reception by an ocean-bottom node). As in the case of interference cancellation, the adaptive multichannel receiver of Fig. 5 was shown experimentally to have excellent capabilities in the role of a decentralized multiuser detector, operating without any knowledge of the interfering signal. Array processing plays a crucial role in the detection of multiuser signals but is associated with the problem of computational complexity.

System Self-Optimization

A receiver algorithm uses a number of parameters that must be adjusted according to the instantaneous channel conditions before the actual signal detection can begin. These parameters include the number and location of array sensors that provide good signal quality, the sizes of the equalizer filters, and their tracking parameters. The optimal values of the receiver parameters depend not only on the general link configuration but also on the time of operation. In addition, an increase in the background noise level caused, for example, by a passing ship may temporarily disable the communication link. If the adaptive receiver algorithms are to be used in autonomous systems, external assistance in algorithm initialization, or reinitialization, should be minimized. For this reason, the development of self-optimized receiver algorithms is of interest to future research.
The first steps in this direction are evident in the implementation of self-optimized LMS algorithms (14,30), in which the step size is adaptively adjusted, and in the implementation of a periodically updated RLS algorithm (28), self-adjusted to keep a predetermined level of performance by increasing the tracking rate if the channel condition worsens. These strategies provide the receiver with the capability to adjust to fine channel changes. However, they depend on the availability of a reliable reference of the desired signal. Because a training sequence is inserted only periodically in the transmitted signal, a loss of synchronization or convergence during detection of a data packet will cause the entire packet to be lost. As an alternative to periodic reinsertion of known data, which increases the overhead, methods for self-optimized or blind recovery may be considered. A blind equalization method based on the cyclostationary properties of oversampled received signals (39), which requires only the estimation of second-order signal statistics, provides a practical solution for recovering the data sequence in the absence of clock synchronization. Originally developed for linear equalizers, this method has been extended to the decision-feedback equalizer necessary for application in underwater acoustic channels with extreme multipath. It has proved successful in preliminary tests with real data (6). Further work on blind system recovery for underwater acoustic channels will focus on methods for array processing and carrier-phase tracking.

Modulation and Coding

Achieving high throughput over band-limited underwater acoustic channels is conditioned on the use of bandwidth-efficient modulation and coding techniques (29). Related results documented in the contemporary research literature are confined to signaling schemes whose bandwidth efficiency is at most 3 to 4 bps/Hz.
Higher-level signal constellations, together with trellis coding, are being considered for use in underwater acoustic communications. While trellis-coded modulation is well suited for vertical channels, which have minimal dispersion, its use on horizontal channels requires further investigation. First, conventional signal mapping into a high-level PSK or QAM constellation may be associated with increased sensitivity of detection on a time-varying channel. Recent results in radio communications show that certain types of high-level constellations, which permit differential encoding in both phase and amplitude, are more robust to channel fading and phase variations than the conventional rectangular QAM constellations (40). Another issue associated with the use of coded modulation on channels with long ISI is the design of a receiver that takes full advantage of the available coding gain. Specifically, the delay in decoding poses problems for an adaptive equalizer that relies on the feedback of instantaneous decisions. The use of maximum-likelihood sequence estimation for both decoding and equalization entails the prohibitive complexity of the Viterbi algorithm for many applications. Receiver structures that deal with these problems as they apply to underwater channels are currently being studied. In addition to bandwidth-efficient modulation and coding techniques, future underwater communication systems will rely on data compression algorithms to achieve high data rates over severely band-limited underwater acoustic channels. This is another active area of research which, together with sophisticated modulation and coding techniques, is expected to provide solutions for high-rate underwater image transmission.

Mobile Underwater Communications

The problem of channel variability, already present in applications with a stationary transmitter and receiver, becomes a major limitation for a mobile underwater acoustic communication system. The ratio of the vehicle speed to the speed of sound (about 1/10² for a vehicle speed of 30 knots, or 54 km/h) greatly exceeds its counterpart in mobile radio channels (about 1/10⁸ for a mobile moving at 60 miles per hour, or 100 km/h), making the problem of time synchronization very difficult in the underwater acoustic channel. Apart from the carrier phase and frequency offset, mobile underwater acoustic systems will have to deal with motion-induced pulse compression and dilation. Successful missions of experimental AUVs that use commercial FSK acoustic modems for vehicle-to-vehicle communication have been reported (41). For phase-coherent systems, algorithms for continuous tracking of the time-varying symbol delay in the presence of underwater multipath are under development. While many problems remain to be solved in the design of high-speed acoustic communication systems, recent advances in this area serve as an encouragement for future work, which will make possible the remote exploration of the underwater world. More information about underwater acoustics can be found in the following entries: OCEANOGRAPHIC EQUIPMENT; SONAR SIGNAL PROCESSING; SONAR TARGET RECOGNITION; and SONAR TRACKING.

BIBLIOGRAPHY

1. L. Brekhovskikh and Y. Lysanov, Fundamentals of Ocean Acoustics, New York: Springer, 1982.
2. S. Flatte (ed.), Sound Transmission Through a Fluctuating Ocean, Cambridge, UK: Cambridge University Press, 1979.
3. A. Quazi and W. Konrad, Underwater acoustic communications, IEEE Commun. Mag., March, pp. 24–29, 1982.
4. J. Catipovic, Performance limitations in underwater acoustic telemetry, IEEE J. Oceanic Eng., 15: 205–216, 1990.
5. A. Baggeroer, Acoustic telemetry—an overview, IEEE J. Oceanic Eng., 9: 229–235, 1984.
6. M. Stojanovic, Recent advances in high rate underwater acoustic communications, IEEE J. Oceanic Eng., 125–136, 1996.
7. A. Kaya and S. Yauchi, An acoustic communication system for subsea robot, Proc. OCEANS'89, Seattle, WA, pp. 765–770, 1989.
8. M. Suzuki and T. Sasaki, Digital acoustic image transmission system for deep sea research submersible, Proc. OCEANS'92, Newport, RI, pp. 567–570, 1992.
9. M. Stojanovic, J. A. Catipovic, and J. G. Proakis, Adaptive multichannel combining and equalization for underwater acoustic communications, J. Acoust. Soc. Amer., 94(3): Pt. 1, 1621–1631, 1993.
10. D. F. Hoag, V. K. Ingle, and R. J. Gaudette, Low bit-rate coding for underwater video using wavelet-based compression algorithms, IEEE J. Oceanic Eng., 22: 393–400, 1997.
11. A. Goalic et al., Toward a digital acoustic underwater phone, Proc. OCEANS'94, Brest, France, pp. III.489–III.494, 1994.
12. B. Woodward and H. Sari, Digital underwater voice communications, IEEE J. Oceanic Eng., 21: 181–192, 1996.
13. R. F. W. Coates, M. Zheng, and L. Wang, BASS 300 PARACOM: A ''model'' underwater parametric communication system, IEEE J. Oceanic Eng., 21: 225–232, 1996.
14. G. S. Howe et al., Sub-sea remote communications utilising an adaptive receiving beamformer for multipath suppression, Proc. OCEANS'94, Brest, France, pp. I.313–I.316, 1994.
15. M. Stojanovic, Phase-coherent digital communications for rapidly varying channels with applications to underwater acoustics, Ph.D. thesis, Northeastern University, Boston, 1993.
16. M. Stojanovic, J. G. Proakis, and J. A. Catipovic, Performance of a high rate adaptive equalizer on a shallow water acoustic channel, J. Acoust. Soc. Amer., 100(4): Pt. 1, 2213–2219, 1996.
17. R. H. Owen, B. V. Smith, and R. F. W. Coates, An experimental study of rough surface scattering and its effects on communication coherence, Proc. OCEANS'94, Brest, France, pp. III.483–III.488, 1994.
18. A. Essebbar, G. Loubet, and F. Vial, Underwater acoustic channel simulations for communication, Proc. OCEANS'94, Brest, France, pp. III.495–III.500, 1994.
19. A. Falahati, B. Woodward, and S. Bateman, Underwater acoustic channel models for 4800 b/s QPSK signals, IEEE J. Oceanic Eng., 16: 12–20, 1991.
20. C. Bjerrum-Niese et al., A simulation tool for high data-rate acoustic communication in a shallow-water, time-varying channel, IEEE J. Oceanic Eng., 21: 143–149, 1996.
21. J. Catipovic et al., An acoustic telemetry system for deep ocean mooring data acquisition and control, Proc. OCEANS'89, Seattle, WA, pp. 887–892, 1989.
22. S. Coatelan and A. Glavieux, Design and test of a multicarrier transmission system on the shallow water acoustic channel, Proc. OCEANS'94, Brest, France, pp. III.472–III.477, 1994.
23. M. Stojanovic, J. A. Catipovic, and J. G. Proakis, Phase coherent digital communications for underwater acoustic channels, IEEE J. Oceanic Eng., 19: 100–111, 1994.
24. M. Stojanovic, J. A. Catipovic, and J. G. Proakis, Reduced-complexity multichannel processing of underwater acoustic communication signals, J. Acoust. Soc. Amer., 98(2): Pt. 1, 961–972, 1995.
25. T. Birdsall, Acoustic telemetry for ocean acoustic tomography, IEEE J. Oceanic Eng., 9: 237–241, 1984.
26. G. Ayela, M. Nicot, and X. Lurton, New innovative multimodulation acoustic communication system, Proc. OCEANS'94, Brest, France, pp. I.292–I.295, 1994.
27. J. Fischer et al., A high rate, underwater acoustic data communications transceiver, Proc. OCEANS'92, Newport, RI, pp. 571–576, 1992.
28. M. Johnson, D. Herold, and J. Catipovic, The design and performance of a compact underwater acoustic network node, Proc. OCEANS'94, Brest, France, pp. III.467–III.471, 1994.
29. J. Proakis, Coded modulation for digital communications over Rayleigh fading channels, IEEE J. Oceanic Eng., 16: 66–74, 1991.
30. B. Geller et al., Equalizer for video rate transmission in multipath underwater communications, IEEE J. Oceanic Eng., 21: 150–155, 1996.
31. D. Slock and T. Kailath, Numerically stable fast transversal filters for recursive least squares adaptive filtering, IEEE Trans. Signal Process., SP-39: 92–114, 1991.
32. F. Hsu, Square root Kalman filtering for high-speed data received over fading dispersive HF channels, IEEE Trans. Inf. Theory, IT-28: 753–763, 1982.
33. M. Johnson, D. Brady, and M. Grund, Reducing the computational requirements of adaptive equalization for underwater acoustic communications, Proc. OCEANS'95, San Diego, CA, pp. 1405–1410, 1995.
34. F. Ling and J. G. Proakis, Adaptive lattice decision-feedback equalizers—their performance and application to time-variant multipath channels, IEEE Trans. Commun., 33: 348–356, 1985.
35. M. Kocic, D. P. Brady, and M. Stojanovic, Sparse equalization for real-time digital underwater acoustic communications, Proc. OCEANS'95, San Diego, CA, Oct. 1995.
36. M. Johnson, L. Freitag, and M. Stojanovic, Improved Doppler tracking and correction for underwater acoustic communications, Proc. ICASSP'97, Munich, Germany, pp. 575–578, 1997.
37. J. Catipovic, M. Johnson, and D. Adams, Noise cancelling performance of an adaptive receiver for underwater communications, Proc. 1994 Symp. AUV Technol., Cambridge, MA, pp. 171–178, 1994.
38. M. Stojanovic and Z. Zvonar, Multichannel processing of broadband multiuser communication signals in shallow water acoustic channels, IEEE J. Oceanic Eng., 156–166, 1996.
39. L. Tong, G. Xu, and T. Kailath, Blind identification and equalization based on second-order statistics, IEEE Trans. Inf. Theory, IT-40: 340–349, 1994.
40. W. T. Webb and R. Steele, Variable rate QAM for mobile radio, IEEE Trans. Commun., COM-43: 2223–2230, 1995.
41. S. Chappell et al., Acoustic communication between two autonomous underwater vehicles, Proc. 1994 Symp. AUV Technol., Cambridge, MA, pp. 462–469, 1994.

MILICA STOJANOVIC
Northeastern University
UNDERWATER ROBOTICS. See UNDERWATER VEHICLES.
UNDERWATER SENSORS. See OCEANOGRAPHIC EQUIPMENT.
Wiley Encyclopedia of Electrical and Electronics Engineering
Underwater Sound Projectors (Standard Article)
William J. Marshall, BBN Technologies, New London, CT
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W5410. Online posting date: December 27, 1999.
UNDERWATER SOUND PROJECTORS
Electroacoustic transducers convert electric signals into acoustic signals, or vice versa. Most transducers, in principle, function as either transmitters or receivers. However, they are usually specialized for one task or the other. In the underwater realm, transducers that are specialized to emit sound are called projectors, and those specialized as receivers are called hydrophones. This article describes many devices meeting the definition of projector but excludes sound-generating mechanisms that do not respond to a drive signal, such as sirens, gongs, or direct mechanical-to-acoustical converters (the cam-driven piston, for example), and the parametric array, which relies on the nonlinearity of the medium to form the desired acoustic waveform. Although some underwater sound projectors use the same driver types found in loudspeakers, several factors dictate the use of specialized, more rugged transduction methods than are used in air. One factor is the significant static pressure difference usually experienced on opposite sides of the radiating surface in the underwater environment; in contrast, loudspeakers are nearly always statically balanced. A second factor is that the specific acoustic impedance (the ratio of pressure to particle velocity) is about 3500 times higher in water than in air, so underwater projectors must work at higher stress levels. Finally, the underwater environment is generally harsher with respect to temperature extremes, corrosion, and obstacle impact. Together these considerations lead to higher static and dynamic stresses underwater, which in turn lead to more rugged construction and drivers having intrinsically high mechanical impedance. Although many of the transducer types described here may be designed for use at ultrasonic frequencies, this article concentrates on the sonar frequency range, roughly 20 Hz to 20 kHz.
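The quoted factor of 3500 can be sanity-checked from the definition of specific acoustic impedance, ρc; the density and sound-speed values below are typical handbook numbers, not taken from this article:

```python
# Specific acoustic impedance rho*c (rayls) for water and air.
# Values are representative; they vary with temperature, salinity, and pressure.
RHO_WATER, C_WATER = 1000.0, 1500.0  # kg/m^3, m/s
RHO_AIR, C_AIR = 1.21, 343.0         # kg/m^3, m/s

def specific_acoustic_impedance(rho, c):
    """Ratio of acoustic pressure to particle velocity for a plane wave."""
    return rho * c

ratio = (specific_acoustic_impedance(RHO_WATER, C_WATER) /
         specific_acoustic_impedance(RHO_AIR, C_AIR))
# ratio comes out on the order of 3600, consistent with the ~3500x figure.
```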
The low end of this range presents particular challenges to the projector designer because the size of the transducer grows quite dramatically at low frequencies. See Refs. 1 and 2 for details. The first successful underwater sound projector was a moving coil device designed by R. A. Fessenden and used to detect an iceberg in 1914. The onset of World War I stimulated the
Figure 2. Moving coil (electrodynamic) transduction. Nonlinearities arise when the coil travels beyond the region of uniform radial magnetic field.
development of higher-power sound sources to search for submarines, and the first active sonar detection of a submarine was achieved by French physicist Paul Langevin in 1918 at a target range of 8 km, using a quartz mosaic sandwich projector (3).

Types of Excitation (Driver Classification)

Several transduction methods can be considered for underwater projectors. One class of drivers, known as surface-force transducers, converts electricity to useful force at a discontinuity in material properties. Examples of these are the moving armature or variable reluctance type (Fig. 1), in which an electromagnet periodically attracts and releases a sprung radiator plate; the moving coil or electrodynamic type (Fig. 2), the basis of most loudspeakers today; and the electrostatic type (Fig. 3), a capacitor with moving plates. In each case vertical motion of mass M causes sound radiation. For clarity, all supporting structures and waterproofing details have been omitted from Figs. 1 to 5.

Figure 1. Moving armature (variable reluctance) transduction. The external driving circuit features a blocking coil that prevents the ac driving signal from flowing through the dc bias branch. Similarly, the bias current I0 is isolated from the signal source by a capacitor.

Figure 3. Electrostatic transduction is nonlinear at all amplitudes because electrostatic attraction varies with the square of gap width.

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.

The other broad class are body-force transducers, in which mechanical distortion is produced in an electrically or magnetically active material. Piezoelectric materials have the property that an electric field applied to the material causes mechanical strain and, reciprocally, an applied stress produces a voltage. The effect is linear for small and moderate signals. Natural piezoelectric crystals, such as quartz, were used extensively in the early years, but soon piezoelectric ceramics (initially barium titanate, but later mixtures of lead zirconate and lead titanate, generally known by their commercial name PZT) were introduced. By the 1960s various PZT formulations became the dominant active materials for underwater projectors. In raw form these materials are termed "electrostrictive" and show a square-law strain/field relationship, but by applying a polarizing dc field during one stage of manufacture, they become "poled." They then show linear strain/field behavior and are called piezoelectric ceramics. Ceramic transducers are usually designed to have the excitation signal applied in the poled direction. Because ceramic thickness in the poling direction is limited to about 15 mm, placing the polarized axis along a large dimension of the active material requires segmenting it into slabs, inserting electrodes perpendicular to the desired electric field, and connecting the electrodes in parallel. This permits lower voltage levels and provides a better electrical impedance match to the amplifier. Because they are produced in a ceramic firing process, piezoelectric ceramics have some variability in properties and are quite brittle (they have little tensile or shear strength). On the other hand, improved process control limits piece-to-piece variation, compressive prestress can protect against tensile stresses, and casting allows making a great variety of shapes. Figure 4 shows the essential features of a simple segmented-stack piezoelectric driver.

Figure 4. A basic piezoelectric (electrostrictive) driver. Ceramic polarity alternates (up, down, up, down) in adjacent slabs so the electric field excites the same polarity of strain in every slab.

Magnetostrictive materials undergo strain upon application of a magnetic field, although the mechanical response is square-law, not linear. To achieve quasi-linearity, magnetostrictive drivers are also polarized, either by permanent magnets, by a separate dc winding, or by superimposing a dc voltage on the ac driving signal. Magnetostrictive transducers were largely eclipsed by piezoelectric transducers until the discovery in the early 1970s that some rare earth/iron alloys exhibit giant magnetostrictive strains. Continued refinement of these materials culminated in the alloy "Terfenol D," which is now the material of choice for magnetostrictive drivers.

Figure 6. An air-backed cylinder. The ceramic ring may be poled and excited either radially or circumferentially.
Although its properties are stress-sensitive and it has a somewhat lower coupling coefficient than PZT, the enormous strain amplitudes possible with this material and its low sound speed make it appealing in low-frequency applications. Figure 5 is a simple form of a magnetostrictive driver.
Figure 5. A basic magnetostrictive driver. Both end pieces must have high permeability to complete the magnetic circuit. The function of the blocking coil is described in Fig. 1.
The basic driver types depicted in these five figures are examined in more detail following. Reference 4 is recommended as a thorough introduction to transduction theory, and Refs. 5 and 6 provide good introductions to the principles and methods of electroacoustic design. Practical Transducer Types This section gives brief nontechnical descriptions of some popular projector types. The sketches emphasize mechanical connections. Wiring schemes are omitted for clarity. Most of these examples are body-force transducers. However, one important surface force type concludes the list. Presenting design methods for every type is beyond the scope of this article. Details will be found in the references. Air-Backed Cylinder. A thickness-poled, air-backed piezoelectric cylinder (Fig. 6) is suspended between rigid end caps and excited into uniform radial vibration. The surrounding waterproofing sheath is either a sheet rubber boot or a potted encapsulant. The main advantages of this approach are design simplicity, good efficiency, and efficient use of ceramic (because dynamic stresses are uniform throughout the cylinder). A variant, called the multimode transducer, has the electric field applied with opposite polarity in different sectors of the cylinder, thus causing circumferential flexure which produces a directive acoustic output pattern (7). Sphere. A radially-poled ceramic sphere (Fig. 7) makes a simple source having omnidirectional patterns, good bandwidth, and high efficiency. Because hydrostatic pressure produces only compressive shell stresses in a sphere, adequate depth capability is usually obtained. The wire from the electrode on the inner surface is brought out through a small hole in the wall, and the entire assembly is encapsulated. The electroacoustic design of a piezoelectric sphere is straightforward because the radiation impedance of a sphere is known exactly.
Figure 7. A sphere of radially polarized ceramic. The breathing mode radiates omnidirectionally.
This transducer cannot be hard mounted because the entire outside surface vibrates. It may be suspended by its cable or in a string bag. Small spheres are usually formed from two cast hemispheres bonded at the equator, and larger spheres consist of triangular plates segmented to form polygons. Longitudinal Vibrator. The most widely used transducer for high-power shipboard and torpedo sonars is the Tonpilz ("sound mushroom" in German) (Fig. 8). A ceramic cylinder, either a single radially poled piece or a stack of thickness-poled rings, is clamped between two masses by a tie rod. The forward (head) mass flares slightly to form the radiating surface. The rearward (tail) mass is isolated from the acoustic medium. The vibrating assembly is normally encased in a container that provides resilient supports and waterproofing for the wetted face. Placing the tie rod in tension applies a compressive bias stress to the ceramic element. The advantages of this design are efficient utilization of ceramic (dynamic stresses in the drive element are nearly uniform) and the ability to isolate the ceramic stack from hydrostatic pressure. Because of the large number of design parameters, optimizing the design is a challenge. Practical complications include avoiding head flexure, providing for adequate ceramic cooling, and coping with variations in ceramic properties during mass production. Chapters 7 and 8 of Ref. 6, a large part of Ref. 8, and all of Ref. 9 are devoted to longitudinal vibrators. Flexural Disk. A way to obtain lower operating frequencies for a given size transducer is to shift from longitudinal to flexural modes of motion. Bonding two thickness-poled ceramic disks back to back and wiring them so that one expands radially as the other contracts results in flexure of the bilaminar pair. Because ceramic near the neutral plane is underutilized, a trilaminar configuration, as shown in Fig. 9, is more typical.
The inert central plate extends beyond the radius of the two active plates and attaches to an annular hinge which must be radially compliant but axially stiff. Because radial ceramic strains are converted to flexure in the composite disk, high-shear-strength bonds between the three plates are essential. Various means are used to apply circumferential prestress to the ceramic disks. Report (10) is the standard reference work for flexural disk transducers. Most practitioners use flexural disks in back-to-back configurations with a small air-filled cavity between the disks.
Figure 9. Opposing pairs of flexural disks are usually wired to produce an acoustic monopole (i.e., both outer disks flex outward in phase).
This provides dynamic balancing of the hinge reaction forces and results in a compact low-frequency source. Hydrostatic pressure places the inner ceramic plate in tension, however, thus limiting its depth capability. One solution is to omit the inner ceramic plate. This shifts the neutral plane to be near the bond line, and only the metal central plate experiences tension. The price of this fix is lower sensitivity. Another approach is to place a single trilaminar disk over the mouth of a long flooded pipe that has its far end capped. This organ pipe source has unlimited depth capability but very small bandwidth, a combination of qualities which matches the requirements for certain sonic beacons and tomography sources. Flexural Bar. Changing the geometry of the previous vibrator from circular to rectangular produces the flexural (or ‘‘bender’’) bar (Fig. 10) (11). These are normally arranged like barrel staves around a relatively compliant oil-filled cavity capped by rigid end pieces. The purpose of the central cavity is to absorb the out-of-phase pressure generated by the inner surface of the bars, and the compliance of this cavity is increased by filling it with flattened air-filled metal tubes. Very low frequency designs sometimes have the cavities filled with pressurized gas. Bender bar projectors are often chosen for high-power, low-frequency, moderate depth applications. Flextensional. Placing the frequency-controlling member in flexure does produce a lower resonance in a given size. Operating the driving element in an extensional mode while only the radiating surface is in flexure yields even lower frequencies and greater relative bandwidth, depending on the materials used. The term flextensional alone generally refers to the
Figure 8. The longitudinal vibrator (Tonpilz), a projector type used in many high-power sonar systems.
Figure 10. A cylinder of flexural bars. Individual ceramic stacks may be bilaminar (as shown) or trilaminar with an inert central sheet.
Figure 11. The Class IV flextensional transducer. A single shell/stack assembly is shown, but often these are built with several identical shell/stack assemblies stacked axially and covered by a continuous boot.
configuration depicted in Fig. 11, a driver stack of ceramic or magnetostrictive material placed inside the major axis of an elliptical cylinder. Several other geometries have been tried, resulting in families of flextensionals: football-shaped, dog-bone-shaped, a ring between two concave or convex plates, and other variants on these ideas. But the Fig. 11 shape, known as the Class IV flextensional, has proven most popular. Chapter 13 in Ref. 12 discusses the nomenclature for the different classes, and Ref. 13 reviews the history of this transducer type. Like bender bars, flextensionals are reliable and provide high source levels at low frequencies from small packages, but in addition they usually provide more bandwidth than benders. Their bandwidth advantage results because the flexing, higher-velocity component is made of a lower-density material than the electrically active driving component. The ceramic is usually prestressed by statically deforming the surrounding shell rather than by tie rods parallel to the stack(s). Analytical descriptions of the Class IV flextensional are available (14), but most designers now rely on finite-element analysis when designing these complex projectors. One drawback of the classical flextensional is that, at the main resonance, radiation from the sharp ends of the shell near the major axis is out of phase with that from the sides near the minor axis. This can be remedied by making the primary radiating surface concave rather than convex, in which case it is known as a barrel stave flextensional (see Ref. 12, Chaps. 13, 14, and 15). Ring Shell. One flextensional variant is called the ring-shell projector (Fig. 12) (15,16). In this design the radiating surfaces are dome-shaped shells affixed to the rim of a segmented ceramic ring. The open space between the domes contains an air bladder pressurized by sea water fed through a hydraulic low-pass filter to provide a compliant but statically balanced interior. Flooded Ring. If the active element in Fig. 6 were removed from its housing, waterproofed, and placed in an acoustic free field, one would not expect it to make a very effective sound source, because radiation from the inner surface of the ring would mostly cancel that from the outer surface. For particular frequencies, ring diameters, and ring heights, however, it can be a fairly efficient radiator. The radiation impedance for a ring radiating from all surfaces is difficult to predict, so most often the designs are based on McMahon's empirical findings (17). The overwhelming advantage of this design is that it should work at any ocean depth. The radiation pattern is omnidirectional in the plane containing the ring and has some directionality (which is beneficial in many applications) in the plane of its axis. Placing a flooded ring next to a hard baffle or coaxially with other flooded rings produces other useful radiation patterns. Slotted Cylinder. A rectangular bilaminar plate wrapped into a cylinder with the ceramic on the inside is called a slotted-cylinder projector (Fig. 13). Originally developed to fit down oil drilling holes, this type has recently gained popularity for certain sonar applications. The advantages are small size for its frequency and good power-handling ability. The disadvantages are small relative bandwidth (a result of the small radiating area) and difficulty sealing the slot (where relative motion between opposite sides of the shell is high). Hydroacoustic.
Occupying a different camp than all of the other transducers reviewed, this high-power projector uses a piezoelectrically driven spool valve to modulate a high-pressure flow of hydraulic fluid which then vibrates opposing circular radiating faces (18) (Fig. 14). The input power to the actuator valve is small, and all of the acoustic power is extracted from the dc hydraulic pump. Therefore no electronic power amplifier is required, which reduces system cost somewhat. These sources are designed specifically for high output in the low and very low frequency ranges (10 Hz to 300 Hz) and are bulky and rugged. They are often rigged for towing from research vessels. Their main disadvantages are higher distortion than more conventional types and a reputation for poor reliability.
Figure 12. A ring shell projector. The mostly empty interior may contain a pressure relief system.
Figure 13. A slotted cylinder projector. The central post and end caps, similar to Fig. 6, have been omitted for clarity.
Figure 14. A vastly simplified cross section of one half of a hydroacoustic projector. The hydraulic modulator may be a piezoelectrically-driven spool valve.
Moving Coil. By adapting the drive mechanism used in a standard loudspeaker (Fig. 2) to underwater use one obtains a very low resonance frequency and, because it is usually operated above resonance, flat response over a very wide band. This transduction technique was one of the first to be developed, and it is still in use and being steadily improved (19) because of the advantages cited. Because of the electrodynamic driver, it has low electrical impedance. The main disadvantages are low efficiency, low source level capability, and sensitivity to operating depth.

Performance Requirements

Specifications of greatest interest to users and designers of underwater sound projectors are:

• Frequency range. This may be specified as center frequency and Q (the reciprocal of the fractional bandwidth) or as upper and lower band edge frequencies (usually at the −3 dB points of the transmitting response curve). Measurement factors affecting this quantity are which drive parameter (input voltage, current, or power) is to be held constant during the frequency sweep and where the monitoring hydrophone is placed.

• Depth range. Usually minimum and maximum depths for full-power use and a maximum nonoperating (survival) depth are specified.

• Impedance range. Maximum power transfer from an electrical source depends on matching the source to the load impedance. Underwater transducers, especially highly efficient ones, show wide swings in both magnitude and phase of their electrical impedance with frequency, and this generally affects amplifier operation and achievable bandwidth. If long cables are involved, cable effects must also be considered.

• Output source level. This is normally specified as acoustic sound pressure level (SPL) in a certain direction over a given frequency band, scaled as if measured at a range of 1 m from the acoustic center of the source. The designer must be alert to several ancillary properties affected by drive level: voltage and current limits in the driver, cables, and connectors; mechanical stress levels within the transducer; thermal effects (see later); and the potential for acoustic cavitation.
• Directivity patterns. Specified at several frequencies and in different planes through the acoustic center of the source.

• Weight in air and in water.

• Ambient temperature range and duty cycle. These are related because the internal heating rate depends on the duty cycle and power level, and the cooling rate is related to internal and ambient temperatures.

EVALUATION METHODS AND METRICS

Methods of Analysis

Three basic approaches are used to analyze a transducer. The first is directly solving the equations of motion for the electromechanical system. This proceeds like a forced vibration problem with electrically connected forcing functions either in the boundary conditions (for surface-force transduction) or in the stress/strain relations for the active material (for body-force transduction). Acoustic effects are accounted for by specifying a radiation impedance at the radiating surface. The second method is finite-element analysis (FEA), using an FEA code that handles coupled elastic/acoustic problems and offers electrically or magnetically active elements. The third is translating the electroacoustic system into an equivalent electrical circuit by using an electromechanical analogy, then analyzing the equivalent circuit. Like the first, this method also requires knowledge of the radiation impedance. Finite-element methods are gaining in popularity as specialized FEA programs become more widespread. These codes are expensive to acquire and use, however, and many runs are required to understand how critical performance parameters respond to variations in dimensional and material choices. Reference 20 is a recent compilation of FEA programs suited for electroacoustic analysis. If the problem is amenable to equivalent circuit analysis, this method has the advantage of simplicity and provides immediate insight into design parameter sensitivities.
The following simplified equivalent circuit analysis demonstrates the usefulness of the technique and illustrates several important transducer design principles.

Electromechanical Analogies and the Two-Port Network

Two analogies are commonly used in associating mechanical variables with conventional electric circuit quantities (see Table 1). The impedance analogy that connects current, the
Table 1. Electromechanically Analogous Quantities and Their Symbols

Mechanical Quantity         | Electrical Quantity (Impedance Analogy) | Electrical Quantity (Mobility Analogy)
Force F                     | Voltage E                               | Current I
Velocity U                  | Current I                               | Voltage E
Displacement x = ∫U dt      | Charge q = ∫I dt                        | ∫E dt
Impulse ∫F dt               | ∫E dt                                   | Charge q
Mass M                      | Inductance L                            | Capacitance C
Compliance C_m              | Capacitance C                           | Inductance L
Mechanical Resistance R_m   | Resistance R                            | Conductance G
"through" quantity, with velocity is best suited to electric field transducers and will be used in what follows. The mobility analogy regards current as analogous to mechanical force, and it is more convenient for magnetic field transducers. In both cases the mechanical quantities are associated with ideal mechanical components: lossless, massless springs connected to perfectly rigid point masses and ideal massless dashpots. Consider the transducer as a linear two-port network, as in Fig. 15, and assume a lumped-parameter system in which the essential mechanical behavior is described by a single vibrating mass. The projector is driven by voltage E and current I applied at the electric port. The resulting mechanical output appears at the opposite port, where one can measure a force F and velocity U. There are six ways to formulate a pair of linear equations relating the four port variables E, I, F, and U. For electric field transducers it is convenient to choose E and U as the independent variables, and the equations are
I = Y_b E − φU
F = φE + Z_m^E U

The names and SI units of the coefficients are as follows: Y_b ≡ (I/E) with U = 0 is the blocked electrical admittance (in siemens, the reciprocal of the ohm); Z_m^E ≡ (F/U) with E = 0 is the short-circuit mechanical impedance (in newtons per meter per second, or kilograms per second); and φ ≡ −(I/U) with E = 0, equal to (F/E) with U = 0, is the electromechanical transformation ratio (in newtons per volt or amperes per meter per second). Physical analysis of simple linear electric field transducers shows that the circuit parameters always have the following properties: Y_b = G_b + jωC_b is a capacitive susceptance shunted by a (usually small) conductance; φ is real and independent of frequency; and

Z_m^E = R_m + jωM + 1/(jωC_m^E)
where R_m is the internal mechanical resistance, M is the moving mass of the transducer, and C_m^E is the compliance (displacement over force) seen at the mechanical port when the electrical terminals are short-circuited.

Transition to a Single-Port Electrical Circuit

The mechanical port is terminated in a short circuit when the transducer is operated in a vacuum (nearly the same as in air) and in its radiation impedance when in water. The radiation termination is depicted as a series R-L circuit having impedance Z_r = R_r + jωM_r, where the mechanical power dissipated in the real part R_r represents acoustic power radiated into the far field, and the kinetic energy stored in the radiation mass M_r represents acoustic energy stored in the near field. In general, Z_r varies with frequency. Combining all of the above relationships, the electrical behavior of the simple
Figure 15. A linear two-port network having electrical variables (voltage and current) at the left hand port and mechanical variables (force and velocity) at the right hand port.
Figure 16. Simplified equivalent circuit of a lumped parameter, single degree-of-freedom, piezoelectric drive transducer. Subscript b denotes the blocked electrical components; Y the transformed mechanical components.
single degree-of-freedom projector, in water, is the same as that of the circuit of Fig. 16. Capacitance C_b is called the blocked capacitance, which is shunted by a frequency-dependent conductance G_b to represent dielectric losses. These two elements form the entire circuit when the transducer is clamped (blocked) so that U = 0. The remaining three elements form the motional branch of the circuit and are transformed mechanical quantities given by
L_Y = (M + M_r)/φ²
C_Y = φ²C_m^E
R_Y = (R_m + R_r)/φ²

Note that these transformations yield the proper electrical units. For instance, the units of φ²C_m^E are (N/V)²(m/N) = N·m/V² = J/V² = farad. The current flowing in the motional branch equals φU.
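As a numerical sketch of these transformations (all parameter values are illustrative, not from the article):

```python
# Transformed motional-branch elements, per the relations above:
#   L_Y = (M + M_r)/phi^2,  C_Y = phi^2*C_m,  R_Y = (R_m + R_r)/phi^2
def motional_elements(M, Mr, Cm, Rm, Rr, phi):
    """Map mechanical parameters into the equivalent electrical circuit."""
    L_Y = (M + Mr) / phi**2   # henries
    C_Y = phi**2 * Cm         # farads
    R_Y = (Rm + Rr) / phi**2  # ohms
    return L_Y, C_Y, R_Y

# Illustrative values: 1 kg moving mass, 0.2 kg radiation mass,
# 2 nm/N short-circuit compliance, 50 + 450 kg/s resistances, phi = 10 N/V.
L_Y, C_Y, R_Y = motional_elements(M=1.0, Mr=0.2, Cm=2e-9, Rm=50.0, Rr=450.0, phi=10.0)
```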
This single degree-of-freedom circuit has a single resonance and a number of auxiliary parameters which define the transducer near that resonance. Setting Z_r to zero supplies the in-air quantities. Resonance frequencies in air and in water are given by

ω_ra^E = 2πf_ra^E = 1/√(L_Y C_Y) = 1/√(M C_m^E)
ω_rw^E = 2πf_rw^E = 1/√((M + M_r) C_m^E)

The mechanical storage factor Q_m has meaning only at resonance (the second subscript indicates which one). It describes the bandwidth over which the projector transfers power to the load and is defined as the ratio of energy stored in the inductive reactance per cycle to that dissipated in the resistance. High Q_m indicates a sharp resonant peak, and low values are usually desired, but this must be accomplished by making R_r rather than R_m large so as not to degrade the efficiency:

Q_mw^E = ω_rw^E L_Y / R_Y = ω_rw^E (M + M_r) / (R_m + R_r)
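Evaluating the resonance and Q relations numerically (again with illustrative parameter values, not values from the article) shows the expected downward shift of resonance under water loading:

```python
import math

# f_ra = 1/(2*pi*sqrt(M*Cm)); f_rw = 1/(2*pi*sqrt((M + Mr)*Cm));
# Q_mw = w_rw*(M + Mr)/(Rm + Rr), per the equations above.
def resonances_and_q(M, Mr, Cm, Rm, Rr):
    w_ra = 1.0 / math.sqrt(M * Cm)         # in-air resonance, rad/s
    w_rw = 1.0 / math.sqrt((M + Mr) * Cm)  # in-water resonance, rad/s
    q_mw = w_rw * (M + Mr) / (Rm + Rr)     # mechanical storage factor in water
    return w_ra / (2 * math.pi), w_rw / (2 * math.pi), q_mw

f_ra, f_rw, q_mw = resonances_and_q(M=1.0, Mr=0.2, Cm=2e-9, Rm=50.0, Rr=450.0)
# The radiation mass lowers the resonance: f_rw < f_ra.
```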
The electromechanical coupling factor k is a pivotal parameter related to bandwidth and power handling. For the case under study, k² is defined as the ratio of the energy available in the motional branch to the total energy contained in that branch and the coupling element (the blocked capacitance in the case of ceramic transducers), ignoring losses:

k² = C_Y/(C_Y + C_b) = φ²C_m^E/(φ²C_m^E + C_b)
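A minimal numerical check of this definition (the capacitance values are invented for the example; the point is that k grows as the motional capacitance C_Y comes to dominate the blocked capacitance C_b):

```python
# k^2 = C_Y/(C_Y + C_b): the fraction of stored electrical energy
# that is available for electromechanical conversion (losses ignored).
def coupling_factor(C_Y, C_b):
    return (C_Y / (C_Y + C_b)) ** 0.5

k = coupling_factor(C_Y=2e-7, C_b=6e-7)  # k^2 = 0.25, so k = 0.5 here
```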
The dielectric dissipation factor tan δ is the ratio of conductance to susceptance B_b for the blocked ceramic, the same as for ordinary capacitors. tan δ is observed to be independent of frequency, so G_b must vary linearly with frequency:

tan δ = G_b/B_b = G_b/(ωC_b)
The electric storage factor Q_e is the ratio of input susceptance to conductance at resonance. It is related to the input power factor and, therefore, to the bandwidth over which the transducer accepts power from the amplifier. To achieve wide system bandwidth, Q_e for the water-loaded transducer should also be low. Denoting the electrical input admittance by G_i + jB_i,

Q_e = B_i(ω_r)/G_i(ω_r) = 1/(tan δ + k²Q_m/(1 − k²))

Note that because a good quality transducer has both a low dielectric loss factor and a high Q_m in air, tan δ ≪ Q_m k²/(1 − k²), which leads to a formula for calculating the coupling factor from in-air admittance data:

k²/(1 − k²) = 1/(Q_e Q_m)

Because either Q, when high, restricts the bandwidth, a sensible goal is to have their product as small as possible, and this implies maximizing k. Overall projector efficiency is composed of two factors: η_ma, which varies slowly with frequency and accounts for losses in the motional branch, and η_em, which is strongly frequency-dependent and accounts for losses on the electrical side. The η_em equation following is evaluated at mechanical resonance, where it is a maximum; therefore the following expression for η_ea is also valid only at resonance:
Mechanoacoustic efficiency:
\[
\eta_{ma} = \frac{R_r}{R_m + R_r} \quad \text{(at any frequency)}
\]

Electromechanical efficiency:
\[
\eta_{em} = \frac{1/R_Y}{G_b + 1/R_Y} = \frac{1}{1 + \tan\delta\,\dfrac{1 - k^2}{k^2 Q_m}} \quad \text{(at resonance)}
\]

Electroacoustic efficiency:
\[
\eta_{ea} = \eta_{ma}\,\eta_{em} \quad \text{(at resonance)}
\]

Measurement of Equivalent Circuit and Transducer Evaluation Parameters

Most transducer parameters are obtained solely from measurements at the electric terminals, with the source first in air and then in water. The basic measurement is either impedance Z (measured at constant current) or admittance Y (at constant voltage). Generally ceramic-based drivers are best evaluated from admittance data, and magnetic drivers from impedance data. In certain situations plotting the magnitude of Z or Y versus frequency is sufficient. If practical, choose a drive level in air that excites mechanical amplitudes similar to those expected in water at the rated source level. (A limiting factor, however, may be tolerability of in-air sound levels in the vicinity of the measurement.) The remainder of this section outlines some simple rules for extracting transducer parameters from Y and Z data. More detailed instructions are in Sections 2.7–2.9 of Ref. 6 and in Ref. 21. These procedures are sufficient to find all parameters of the one-port electric circuit of Fig. 16, but electric measurements alone cannot determine the electromechanical ratio, for which it is necessary to make both a mechanical and an electric measurement.

Complex Z and Y data can be presented in two ways: parametrically, with the real part on one axis versus the imaginary part on the other and with frequency as the parameter; or as two separate curves plotted against frequency. The first method provides more diagnostic information to the practiced eye and can be the basis of all electric measurements if the frequency points are plentiful near resonance. For reasonable parameter values the electrical admittance of the simple transducer of Fig. 16 produces a slightly distorted circle (a loop) in the Y-plane, and measuring certain geometrical properties of this loop is the basis of transducer admittance analysis.

The blocked capacitance C_b cannot be measured directly; practical clamps are not stiff enough to immobilize a typical underwater projector. Instead one measures both the free capacitance, C_F = C_b + C_Y, at some frequency far below resonance, and the coupling factor (at resonance), then calculates C_b = (1 − k²)C_F. This procedure is invalid if C_Y varies with frequency (i.e., not a single-degree-of-freedom lumped-parameter system).

Q_m is obtained by finding three frequencies: the frequency of maximum G_i (resonance) and the two frequencies on either side of resonance where G_i drops to half its maximum value. Calling these, in increasing order, f_1, f_r, and f_2, Q_m = f_r/(f_2 − f_1); that is, Q_m is the unitless ratio of the peak frequency to the spread between those frequencies where the input power is half what it is at resonance. The two resonant efficiency factors can be estimated from the air and water admittance loop diameters D_Ya and D_Yw. These expressions for efficiency are valid only at isolated resonances of lumped-parameter transducers. The final determination of efficiency always requires both acoustical and electrical measurements:

\[
\eta_{ma} = 1 - \frac{D_{Yw}}{D_{Ya}}, \qquad
\eta_{em} = \frac{D_{Yw}}{G_i(\omega_{rw})} \quad \text{(at resonance)}
\]

Acoustical Power Derived from the Simple Equivalent Circuit

Two useful quantities related to the projector in Fig. 16 are the voltage-limited acoustic output power and the associated input volt-amperes, both at resonance:

\[
P_{AC} = \eta_{ma}\,\omega_{rw} Q_{mw}\,\frac{k^2}{1 - k^2}\,C_b E^2,
\qquad
|E||I| = \frac{P_{AC}}{\eta_{ma}}\sqrt{1 + Q_e^2}
\]
where E and I are the rms drive voltage and current at the input terminals. A few observations are in order:

• although low Q_m is desirable for wide bandwidth, the resonant output boost of a high-Q device relates directly to increased source level per volt;
• the factors C_b and E² separately depend on electrode spacing, but their product depends only on electric field strength, ceramic dielectric constant, and volume of active material;
• a high coupling factor greatly benefits electrically limited output power;
• √(1 + Q_e²) is the reciprocal of the power factor of the transducer, as seen by the amplifier.

Centrality of Coupling Factor

The transducer coupling factor k plays a crucial role in many aspects of projector performance. The previous equations show how it influences both electrically limited output power levels and bandwidth. More fundamentally, it is an index of physical realizability for all transducers (0 ≤ k < 1). This inequality is a consequence of static stability criteria for surface-force types and of the thermodynamics of the active material for body-force types. The coupling factor is an easily measured single quantity which indicates overall transducer quality and shows the relative impact of design or construction modifications. Furthermore, it serves as a means of comparing different transducer types.

The connection between k and bandwidth originated with Mason (22), who stated that an optimally loaded and electrically tuned projector has an attainable fractional bandwidth given by k/√(1 − k²). Stansfield (8) explored this topic in greater detail and found that the upper limit on system bandwidth depends on the properties of the power amplifier and the transducer. Assuming optimum tuning and an amplifier which tolerates a 2:1 variation in the magnitude of load impedance and a ±37° variation in phase angle (requirements corresponding to a power factor of better than 0.8), the optimum kQ_m product is ≈1.2, and the corresponding system bandwidth limit equals 0.8k/√(1 − k²). Note that achieving a certain Q_m does not by itself produce the desired bandwidth. The coupling must also be close to its optimum value of 1.2/Q_m. In view of these facts, projector designers are advised to pay attention to design choices which affect k.

The first rule for increasing k is to minimize the impact of electrical or mechanical elements that store energy but do not participate in the coupling process. For example, a small electric-field transducer on a long cable suffers reduced coupling because the cable capacitance stores uncoupled electric energy. Coupling reduction also arises from added compliance, such as stress rods, waterproofing seals, and pressure relief systems. The reduction occurs whether the parasitic element is added mechanically in series or in parallel with the main vibrating member, and even if it resonates at the transducer resonance. For distributed-parameter body-force transducers, k can be improved by design adjustments that produce greater strain uniformity in the active material. This is why ceramic can be removed from the central, low-stress region of a bender plate without incurring a coupling penalty.

Electrical Tuning

Most water-loaded projectors display poor power factors (large electric phase angle) near resonance. This may be improved by inserting a tuning network between the amplifier and the transducer. Magnetic-drive transducers have a net inductive impedance at resonance and so are tuned with added capacitance, whereas electric-field transducers are the reverse and are tuned with inductors. Tuning does not degrade coupling if the external tuning elements add reactance of the opposite sign to that used in the electromechanical coupling process. Hybrid magnetostrictive/piezoelectric transducers (23) combine both types of driver materials and have the interesting property of being self-tuning.

NEW HIGH-POWER DRIVER MATERIALS

Three challenges continue to motivate underwater projector technology: obtaining smaller size-to-wavelength ratios, higher output power, and wider bandwidth. During the past forty years a series of low-frequency transducer innovations (bender bar, flextensional, Terfenol-driven flextensional, barrel-stave flextensional, slotted-cylinder) have steadily, but incrementally, advanced our capabilities. Recently the focus has shifted toward finding improvements in active materials to make bigger strides in performance. The development of Terfenol-D was the first step in this direction. Its low sound speed and high energy density permit smaller or lower frequency sources without compromising output power level. Recently new classes of electrostrictive ceramics have emerged. Many of these materials are based on lead magnesium niobate (PMN) mixed with various additives, notably titanates of lead, strontium, and barium. Another nascent material is single-crystal lead zinc niobate mixed with small amounts of lead titanate (PZN-PT). Some of these new materials exhibit astoundingly high dielectric constants and electrically induced elastic strains, but these benefits are paid for by other less desirable qualities, such as frequency dispersion, strong temperature dependence, and a quadratic strain/field relationship. If the desirable large-signal properties can coexist with low tan δ and good mechanical strength, and if they can be preserved during the transition from laboratory specimens to production lots, these materials could propel new advances in the state of the art for high-power, low-frequency projectors.

These emergent materials may be compared with conventional ones through the concept of field-limited energy density. All projectors have some limit on output power level. Depending on frequency and operating environment, the limit may be mechanical (stress, displacement, or cavitation limits), electrical (voltage or current limits), or thermal (runaway heating). It is usually desirable to arrange things so that the electric limit controls in the usual operating domain, and in this case one can compare material power handling capacities based on the electromechanical energy density, \(k_{33}^2 \varepsilon_{33}^T E_3^2\).

Table 2. Properties of Active Materials*

| Property | Unbiased PZT-8 | Biased PZT-8 | Terfenol-D | PMN-PT (1% La) | PZN-PT (no prestress) |
|---|---|---|---|---|---|
| ρ (kg/m³) | 7600 | 7600 | 9100 | 7800 | 8300 |
| s_33^E (pm²/N) | 17 | 16 | 34.5 | 13 | 150 |
| ε_33^T/ε_0 | 1500 | 1900 | – | 13000 | 2800 |
| d_33 (pm/V) | 300 | 280 | – | 515 | 1800 |
| k_33 | 0.69 | 0.59 | 0.67 | 0.42 | 0.93 |
| max E_3 (rms MV/m) | 0.39 | 0.85 | – | 0.62 | 0.7 |
| k_33² ε_33^T E_3² (kJ/m³) | 0.81 | 3.6 | 4.9 | 8.0 | 10.5 |

* All except PZN-PT prestressed to 40 MPa.
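As a numerical check of the last row of Table 2, the field-limited energy density k_33² ε_33^T E_3² can be evaluated directly from the tabulated material constants. A minimal sketch (values taken from the table; small differences from the tabulated energies reflect rounding of the inputs):

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def energy_density_kj(k33, eps_r, e3_mv_per_m):
    """Field-limited electromechanical energy density k33^2 * eps33T * E3^2, in kJ/m^3."""
    e3 = e3_mv_per_m * 1e6                      # rms field, V/m
    return k33**2 * eps_r * EPS0 * e3**2 / 1e3  # J/m^3 -> kJ/m^3

print(energy_density_kj(0.42, 13000, 0.62))  # PMN-PT: ~7.8 (table: 8.0)
print(energy_density_kj(0.93, 2800, 0.70))   # PZN-PT: ~10.5
```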
In this expression k_33 is the material coupling coefficient (similar to the transducer coupling factor k applied to the material itself), ε_33^T is its dielectric permittivity at zero stress, and E_3 is the maximum allowed rms electric field strength. Table 2 lists pertinent properties for four active materials: a standard high-power piezoelectric ceramic (PZT-8), the modern magnetostrictive material Terfenol-D, one variety of PMN-PT, and single-crystal PZN-PT. The analogous magnetic field-limited energy density, \(k_{33}^2 \mu_{33}^T H_3^2\), is given for Terfenol-D. All data are for the material under a reasonable compressive prestress, except for PZN-PT, where such data did not exist when the data survey was made (24,25). Because the new high-strain materials require a dc bias to achieve quasi-linear operation, PZT-8 is evaluated both in its normal, prepolarized (unbiased) state and with an external dc bias like the other entrants. In every case the bias field (not stated) is chosen to optimize the material's high ac-field properties. Loss parameters are not included in the table.

The materials in Table 2 are arranged in order of increasing energy density in the last row. The first observation is that operating PZT-8 with a dc bias results in slightly reduced coupling k_33 and piezoelectric strain coefficient d_33 but a significant increase in energy density because of the higher allowed driving field. Terfenol-D has coupling comparable to unbiased PZT-8, but much higher energy density and about twice the elastic compliance, which, together with its higher mass density, leads to a much lower sound speed and a corresponding size advantage. Although it has a lower coupling than PZT, PMN-PT offers similar elastic properties and much larger dielectric constants, resulting in a greatly increased energy density. These properties, combined with its low hysteresis and very high strain capabilities, make PMN an attractive new material for future projector designs. Though unproved in actual use, PZN offers the promise of even greater energy density and very impressive coupling. However, its extremely high compliance may have implications for the mechanical design.

BIBLIOGRAPHY

1. J. E. Blue and A. L. Van Buren, Transducers, in M. J. Crocker (ed.), Encyclopedia of Acoustics, Vol. 1, New York: Wiley, 1997, pp. 600–604.
2. R. S. Woollett, Basic problems caused by depth and size constraints in low-frequency underwater transducers, J. Acoust. Soc. Amer., 68: 1031–1037, 1980.
3. F. V. Hunt, Electroacoustics, Cambridge, MA: Harvard Univ. Press, 1954.
4. E. L. Hixson and I. J. Busch-Vishniac, Transducer principles, in M. J. Crocker (ed.), Encyclopedia of Acoustics, Vol. 4, New York: Wiley, 1997, Chap. 159.
5. R. S. Woollett, Sonar Transducer Fundamentals, Section I: General Transducer Theory, Newport, RI: Naval Underwater Syst. Center, 1988.
6. O. B. Wilson, Introduction to Theory and Design of Sonar Transducers, 2nd ed., Los Altos, CA: Peninsula, 1988, Sect. 5.2.
7. R. S. Gordon, L. Parad, and J. L. Butler, Equivalent circuit of a ceramic ring transducer operated in the dipole mode, J. Acoust. Soc. Amer., 58: 1311–1314, 1975.
8. D. Stansfield, Underwater Electroacoustic Transducers, Bath, UK: Bath Univ. Press, 1991.
9. R. S. Woollett, Sonar Transducer Fundamentals, Section II: The Longitudinal Vibrator, Newport, RI: Naval Underwater Syst. Center, 1988.
10. R. S. Woollett, Theory of the piezoelectric flexural disk transducer with applications to underwater sound, U.S. Navy Underwater Sound Laboratory, USL Res. Rep. No. 490, 1960.
11. R. S. Woollett, The Flexural Bar Transducer, Newport, RI: Naval Underwater Syst. Center, 1986.
12. M. D. McCollum, B. F. Hamonic, and O. B. Wilson (eds.), Transducers for Sonics and Ultrasonics, Lancaster, PA: Technomic, 1993.
13. K. D. Rolt, History of the flextensional electroacoustic transducer, J. Acoust. Soc. Amer., 87: 1340–1349, 1990.
14. G. A. Brigham, Analysis of Class-IV flextensional transducer by use of wave mechanics, J. Acoust. Soc. Amer., 56: 31–39, 1974.
15. G. W. McMahon and B. A. Armstrong, A pressure-compensated ring-shell projector, Conf. Proc., Transducers Sonar Appl., Birmingham, UK: Inst. of Acoustics, 1980.
16. B. A. Armstrong and G. W. McMahon, Discussion of the finite element modeling and performance of ring shell projectors, IEE Proc., (Part F) 131: 275–279, 1984.
17. G. W. McMahon, Performance of open ferroelectric ceramic cylinders in underwater sound transducers, J. Acoust. Soc. Amer., 36: 528–533, 1964.
18. J. V. Bouyoucos, Hydroacoustic transduction, J. Acoust. Soc. Amer., 57: 1341–1351, 1975.
19. B. S. Willard, A towable, moving-coil acoustic target for low frequency array calibration, U.S. Naval Underwater Syst. Center, NUSC Tech. Rep. No. 6369, 1981.
20. C. Scandrett (ed.), Proc. Transducer Modeling Workshop, Monterey, CA: Naval Postgraduate School, 1997.
21. G. E. Martin, Determination of equivalent circuit constants of piezoelectric resonators of moderately low Q by absolute-admittance measurements, J. Acoust. Soc. Amer., 26: 413–420, 1954.
22. W. P. Mason, Electromechanical Transducers and Wave Filters, 2nd ed., New York: Van Nostrand, 1948.
23. J. L. Butler, S. C. Butler, and A. E. Clark, Unidirectional magnetostrictive/piezoelectric hybrid transducer, J. Acoust. Soc. Amer., 88: 7–11, 1990.
24. M. B. Moffett et al., Biased lead zirconate-titanate as a high-power transducer material, U.S. Naval Undersea Warfare Center Division, NUWC-NPT Reprint Rep. 10766, 1997.
25. M. B. Moffett and J. M. Powers, Single crystal PZN/PT as a high-power transduction material, U.S. Naval Undersea Warfare Center Division, NUWC-NPT Tech. Memo. 972127, 1997.
WILLIAM J. MARSHALL
BBN Technologies
Wiley Encyclopedia of Electrical and Electronics Engineering
Underwater Vehicles, Standard Article
Junku Yuh, University of Hawaii, Honolulu, HI
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5402
Article Online Posting Date: December 27, 1999
UNDERWATER VEHICLES

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.

The ocean covers about 70% of the earth. Ocean-related activities are extremely diverse: aquaculture, commercial fishing, ocean research, seafood marketing, ocean recreation, marine mining, marine biotechnology, and ocean energy. Living and nonliving resources of the ocean are abundant. For example, it is estimated that there are about 2,000 billion tons of manganese nodules on the floor of the Pacific Ocean near the Hawaiian Islands. The ocean also plays a critical role in global environmental issues such as pollution and carbon cycles, and the ocean retains more heat than the atmosphere. Therefore, it is not difficult to predict that the ocean will have a great effect on the future existence of all human beings. In spite of its importance, the ocean is generally overlooked as we focus more of our attention on land and atmospheric issues, and we have not been able to explore the full depths of the ocean and its resources. Only recently have we discovered, by using manned submersibles, that a large amount of carbon dioxide comes from the seafloor and that extraordinary groups of organisms live in hydrothermal vent areas. Underwater vehicles can help us better understand marine and other environmental issues, protect our ocean resources from pollution, and efficiently utilize them for human welfare. However, ocean travel is difficult because of unpredictable and hazardous undersea environments, even though technology has allowed humans to land on the moon and allowed exploration of other planets.

TYPES OF UNDERWATER VEHICLES

Underwater vehicles can be manned or unmanned submersibles. Manned submersibles include military submarines and smaller manned submersibles, while unmanned submersibles include remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs). Unmanned underwater vehicles (UUVs) are often called underwater robots. This article emphasizes UUV technology.

Since manned submersibles are used primarily for military purposes, the details of their engineering design are not available. Manned submersibles are controlled by on-board human operators. One example of such a vehicle is the NAUTILE, developed by IFREMER, France. NAUTILE is a three-man submersible capable of descending to a depth of 6,000 m. This vehicle was used to conduct reentry operations into deep sea boreholes, about 800 of which have been drilled all over the ocean floor by the Ocean Drilling Program for scientific missions. An existing borehole is located by NAUTILE, and then NADIA, a nonpropelled, free-falling device, is dropped into the water from the mother ship. NAUTILE then moves NADIA from the landing point and places it into the borehole.

ROVs draw power from and are controlled through an umbilical line from a mother vessel. A human operator on the mother vessel generates desired vehicle motion signals that are fed into a ship's computer to calculate the ROV's thruster control input signals. These input signals are sent to the vehicle thruster systems via a tether. About 70% of ROVs are equipped with one or two manipulator arms, ranging from simple grabbers to highly sophisticated robot arms. Scorpio, an ROV developed by Ametek Straza in 1977, is used for offshore oil-drilling support. Operating to a depth of 1,000 m, Scorpio has two manipulators controlled by a master-slave system. The slave arm is mounted on the ROV, and a smaller replica, the master, is located in the support ship's control room. The human operator moves the master arm to generate desired arm motions; a computer measures the new coordinates, computes control signals for each joint actuator, and sends these control signals to the slave arm via a tether. More than 100 different types of commercial ROV models exist worldwide, some of which are listed in Table 1.

Table 1. Development of Remotely Operated Vehicles (ROVs)

| Year | Vehicle | Purpose | Depth (m) | Developer |
|---|---|---|---|---|
| 1974 | RCV | Inspection | 412 | Honeywell, San Diego, CA |
| 1977 | Scorpio | Drilling, construction | 1000 | Ametek Offshore Ltd., Aberdeen, Scotland |
| 1979 | Filippo | Inspection | 300 | Gaymarine, Italy |
| 1982 | Pinguin | Mine countermeasures | 100 | MBB/VFW, West Germany |
| 1984 | Sea Hawk | Drilling, inspection | 500 | Scandinavian Underwater Technology, Sweden |
| 1985 | Dragonfly | Construction | 2000 | Offshore Systems Engineering Ltd., Norfolk, UK |
| 1985 | Triton | Drilling, construction | 3050 | Perry Offshore, Riviera Beach, FL |
| 1985 | Trojan | Drilling, survey | 3000 | Slingsby Engineering Ltd., York, England |
| 1986 | SeaRover | Mine countermeasures | 259 | Benthos, North Falmouth, MA |
| 1986 | Phantom | Inspection, survey | 600 | Deep Ocean Engineering, San Leandro, CA |
| 1986 | Delta | Observation | 150 | QI, Tokyo, Japan |
| 1986 | Trail Blazer | Military applications | 915 | International Submarine Engineering Ltd., Port Moody, B.C., Canada |
| 1986 | MUC | Trench digging, cable/flow line burial, seabottom work | 200 | Travocean, France |
| 1987 | RCVIWO | Investigation and inspection of cooling water outfalls from nuclear power plants | N/A | Hytec, Montpellier, France |
| 1987 | Buster | Inspection | 500 | ROVTECH, Laksevag, Norway |
| 1987 | Hysub | Drilling, construction | 5000 | International Submarine Engineering, Port Moody, B.C., Canada |
| 1987 | Achilles | Inspection and observation | 400 | Comex Pro, France |
| 1988 | ARMS | Mine countermeasures | 305 | AMETEK, El Cajon, CA |
| 1988 | RTV-KAM | Inspection of long power plant conduits | 30 | Mitsui Engineering & Shipbuilding Co., Ltd., Tokyo, Japan |
| 1988 | Dolphin 3K | Construction, survey | 3300 | Mitsui Engineering & Shipbuilding Co., Ltd., Tokyo, Japan |
| 1991 | (no name) | Nuclear power plants | N/A | Deep Ocean Engineering, San Leandro, CA |
| 1992 | (no name) | Nuclear power plants | N/A | RSI Research Ltd., Canada |

AUVs, in contrast with ROVs, carry their own power supplies and have some degree of intelligence. There are more than 46 AUV models. Most current AUVs are survey research vehicles without manipulators. Only a few of them have performed in deep water and under ice, so their performance capabilities are still embryonic. The development of AUVs is listed in Table 2. One AUV is the AE1000, developed by a Japanese telecommunication company, KDD, in 1992. The vehicle was designed to inspect undersea telecommunication cables and is controlled by an on-board central processing unit (CPU) (MC68040) and equipped with various sensors such as a gyroscope, obstacle avoidance sonar, and AC magnetometer. The vehicle's sensor detects the undersea cable, enabling this AUV to automatically navigate along the cable and inspect its condition. Pictures of three AUVs (ODIN, Phoenix, and Pteroa150) are shown in Fig. 1.

Table 2. Development of Autonomous Underwater Vehicles (AUVs)

| Year | Vehicle | Purpose | Depth (m) | Developer |
|---|---|---|---|---|
| 1963 | SPURV 1 | Water measurement | 3658 | APL, University of Washington, Seattle, WA |
| 1972 | UARS | Under-ice mapping | 457 | APL, University of Washington, Seattle, WA |
| 1973 | SPURV 2 | Water measurement | 1524 | APL, University of Washington, Seattle, WA |
| 1975 | SKAT | Ocean research | NA | Institute of Oceanology, Moscow, USSR |
| 1975 | OSR-V | Ocean research | 250 | JSPMI, Tokyo, Japan |
| 1977 | No Name | Testbed | 100 | JAMSTEC, Yokosuka, Japan |
| 1979 | EAVE II | Testbed | 914 | MSEL, Univ. of New Hampshire, Durham, NH |
| 1979 | EAVE EAST | Testbed | 150 | MSEL, Univ. of New Hampshire, Durham, NH |
| 1979 | EAVE WEST | Testbed | 610 | Naval Ocean Systems Center, San Diego, CA |
| 1979 | RUMIC | Mine counter-measurements | NA | Naval Coastal Systems Center, Panama City, FL |
| 1979 | UFSS | Search | 357 | Naval Research Laboratory, Washington, DC |
| 1980 | SPAT | Acoustic training | 240 | Westinghouse Oceanics |
| 1980 | PINGUIN A1 | Search | 200 | MBB GmbH, Bremen, West Germany |
| 1980 | CSTV | Submarine control tests | NA | Naval Coastal Systems Center, Panama City, FL |
| 1982 | Rover | Structure inspection | 100 | Heriot-Watt University, Edinburgh, Scotland |
| 1982 | Robot II | Bottom survey | 91 | MIT, Cambridge, MA |
| 1982 | B-1 | Drag characteristics | 90 | NUSSC, Newport, RI |
| 1983 | AUSS | Search/identification | 6000 | Naval Ocean Systems Center, San Diego, CA |
| 1983 | Telemine | Vessel destruction | 150 | Teksea, Lugano, Switzerland |
| 1983 | TM 308 | Structure inspection | 400 | Technomare, S.p.A, Venice, Italy |
| 1983 | EPAULARD | Bottom photography/topography | 6000 | IFREMER, Paris, France |
| 1983 | AUV | Hydrodynamic studies | NA | DARPA, Washington, DC |
| 1984 | AUV | Hydrodynamic drag studies | NA | Rockwell International, Anaheim, CA |
| 1984 | ARCS | Under-ice mapping | 400 | ISE, Ltd., Pt. Moody, BC, Canada |
| 1985 | Submarine Robot | Testbed-hydrodynamic flow | 500 | JAMSTEC, Yokosuka, Japan |
| 1985 | PLA 2 | Nodule collection | 5000 | C.E.A. and IFREMER, France |
| 1986 | ELIT | Structure inspection | 1000 | IFREMER/COMEX, France |
| 1986 | No Name | Feasibility | NA | Simrad Subsea A/S, Horten, Norway |
| 1987 | EAVE III | Testbed | 200 | MSEL, Univ. of New Hampshire, Durham, NH |
| 1987 | LSV | Submarine testing | NA | Naval Coastal Systems Center, Panama City, FL |
| 1988 | Sea Squirt | Testbed | 61 | MIT, Cambridge, MA |
| 1988 | XP-21 | Testbed | 610 | Applied Remote Tech., San Diego, CA |
| 1988 | MUST | Testbed | 610 | Martin Marietta, Baltimore, MD |
| 1988 | ACTV | Water measurements | 250 | APL, University of Washington, Seattle, WA |
| 1989 | UUV (I) | Testbed | NA | Draper Laboratory, Cambridge, MA |
| 1989 | FSMNV | Mine neutralization | NA | Naval Ocean Systems Center, San Diego, CA |
| 1989 | MT-88 | Bottom/water | 6000 | IMSTP, Vladivostok, USSR |
| 1989 | AUV | Testbed | 500 | BC Marine Robot Project, Canada |
| 1989 | Pteroa150 | Survey | 2000 | IIS, University of Tokyo, Tokyo, Japan |
| 1989 | Waterbird | Survey | 100 | Sasebo High Tech. Company, Sasebo, Japan |
| 1990 | UROV-2000 | Bottom survey | 2000 | JAMSTEC, Yokosuka, Japan |
| 1990 | No Name | Testbed, precise control vehicle | 10 | JAMSTEC, Yokosuka, Japan |
| 1990 | Musaku | Testbed, precise control vehicle | 10 | JAMSTEC, Yokosuka, Japan |
| 1990 | UUV (II) | Testbed | NA | Draper Laboratory, Cambridge, MA |
| 1991 | AROV | Search and mapping | NA | SUTEC, Linkoping, Sweden |
| 1992 | AE1000 | Cable inspection | 1000 | KDD, Japan |
| 1992 | Twin Burger | Testbed | 50 | IIS, University of Tokyo, Tokyo, Japan |
| 1992 | ALBAC | Water column | 300 | IIS, University of Tokyo, Tokyo, Japan |
| 1992 | MAV | Mine counter-measurements | NA | DARPA, Washington, DC |
| 1992 | Doggie | Bottom/sub-bottom survey | 6000 | Yard Ltd., Glasgow, Scotland |
| 1992 | Dolphin | Water characteristics monitoring | 6000 | Yard Ltd., Glasgow, Scotland |
| 1992 | ABE | Bottom survey | 6000 | WHOI, Woods Hole, MA |
| 1992 | Phoenix | Testbed | 10 | Naval Postgraduate School, Monterey, CA |
| 1992 | ODIN | Testbed | 30 | ASL, University of Hawaii, Honolulu, HI |
| 1993 | Ocean Voyage II | Science mission | 6000 | Florida Atlantic University, Boca Raton, FL |
| 1993 | Odyssey II | Science mission | 6000 | MIT Sea Grant, Cambridge, MA |
| 1993 | ARUS | Bottom survey | NA | EUREKA (European Consortium) |
| 1993 | ODAS | Survey | 900 | Marconi Underwater Systems, UK |
| 1993 | Marius | Survey | 600 | IST, Lisbon, Portugal (w/France and Denmark) |
| 1994 | Large-D UUV | Military/testbed | 300 | Naval Undersea Warfare Center, Newport |
| 1994 | OTTER | Testbed | 1000 | MBARI, CA |
| 1995 | ODIN II | Testbed | 30 | ASL, University of Hawaii, Honolulu, HI |
| 1995 | R1 | Bottom survey | 400 | Mitsui Engineering, IIS, U. of Tokyo, Japan |

Figure 1. (a) Omni-Directional Intelligent Navigator (ODIN) AUV. Courtesy of the Autonomous Systems Laboratory (University of Hawaii). (b) Phoenix AUV. Courtesy of the Center for Autonomous Underwater Vehicle Research (Naval Postgraduate School). (c) PTEROA150 AUV. Courtesy of Ura Laboratory (University of Tokyo, Japan).

Extensive use of manned submersibles and remotely operated vehicles is currently limited to a few applications because of very high operational costs, operator fatigue, and safety issues. The demand for advanced underwater robot technologies is growing and will eventually lead to fully autonomous, specialized, reliable underwater robotic vehicles. During recent years, various research efforts have been made to increase the autonomy of the vehicle and minimize the need for the presence of human operators. A self-contained, intelligent, decision-making AUV is the goal of current research in underwater robotics. Achieving this goal requires advances in various areas, including high-resolution 3-D imaging systems; artificial intelligence and knowledge-based computer systems; adaptive and learning control systems; acoustic-laser telemetry systems; highly dexterous manipulator systems; lightweight structures able to withstand high pressure; and high-density power sources.

VEHICLE SUBSYSTEMS

Various subsystems such as navigation sensors, mission sensors, computers, mechanical systems, and manipulators are needed for unmanned underwater vehicles (Table 3).

Dynamics

The dynamics of underwater vehicles, including hydrodynamic parameter uncertainties, are highly nonlinear, coupled, and time-varying. Several modeling and system identification techniques for underwater vehicles have been proposed by researchers (1,2). When one or more manipulators are attached
30
UNDERWATER VEHICLES
Table 3. Subsystems of Unmanned Underwater Vehicles Systems Mission
Subsystems Sensors Planner World modeling Data fusion
Computer
Software Hardware
Platform
Fault-tolerance Hull
Propulsion Power Workpackage Emergency
Vehicle Sensor
Navigation OAS Self-diagnostic
Communication Development and Support
Logistic support Simulation
User interface
Needs/Requirements
Methods/Models
Long range information for detecting and inspecting a target of interest Plans for the mission goals, unexpected events or system failures Set of models for the AUV system and its mission environment Meaningful and correct information from massive data of multi-sensors Tools for developing computer codes for the vehicle, support and simulation systems, fault-tolerance operation Integration of electronic modules in a powerful, robust and flexible manner Accommodation of hardware and software failures Platform for mission package; depth and power requirements; stability; modularity for different mission parameters; materials; drag reduction Navigation/stationkeeping Power for propulsion, mission systems, and payload Tools for cutting, sampling, cleaning, marking, stabilization, docking, retrieval and launch Initiating appropriate action in response to the abnormal vehicle condition and providing means for locating a disabled AUV AUV position relative to a fixed coordinate system Detecting and avoiding obstacles; order of 50m and order of 10 degrees Monitoring and evaluating the vehicle operational parameters for subsystem status Transferring commands and data between a surface station and vehicles Organization, equipment, spares, repair and maintenance, documentation, etc. Tools for testing the vehicle design and interface mechanism for the analysis of the vehicle operations
Tools for displaying data, inputting command data
to the vehicle, it becomes a multibody system and modeling becomes more complicated. The effect of the hydrodynamics of each link of the manipulator on vehicle motion has to be considered in modeling the vehicle and manipulator (3,4). The effect of thruster dynamics on the vehicle also becomes significant, especially when the vehicle has slow and fine motion (5). Therefore, accurate modeling and verification by simulation are required steps in the design process (6,7). Integrated simulation with actual parts of the vehicle and the environment is more desirable than completely numerical standalone simulation. Integrated simulation packages, including 3-D graphics and virtual reality capabilities, will be useful for developing advanced underwater vehicles since actual fieldtesting is very expensive (8–10).
Corresponding design approaches include: traditional planners; objective and subjective models; analytic methods and AI; system software and application software; system architecture, communication networks, and mass storage; redundancy design; steel, aluminum, titanium, composite, and ceramic materials; manipulators; emergency buoys, drop weights, flame smoke, beacons, and water dye; acoustic, Doppler, fiber-optic gyro, GPS, and inertial navigation systems; sensors for voltage, thruster rpm, speed, leakage, and temperature; fiber-optic, acoustic, radio, and laser links; stand-alone simulation, integrated simulation, and hybrid simulation in a virtual environment; and virtual reality devices, joysticks, and 3-D graphics.
Table 4. Acoustic Long Baseline Navigation System Error Sources

Random errors
  Transponder detection delay            ~0.3 ms
  Transponder turnaround time variation  ~0.1 ms
  AUV receiver detection delay           0.3 ms
Bias errors
  Compass error            ~1 deg
  Depth sensor error       ~0.25%
  Sound velocity           0.2 m/s
  Transponder calibration  ~1 m

Intelligent Systems

Unlike ROVs or manned submersibles, AUVs operating without human intervention and supervision require sufficient onboard intelligence and must reliably perform the required tasks. The intelligent system is a high-level control system for the vehicle. Valuable information has to be extracted and identified from the massive amount of signals obtained by various sensors. With information about control state, system status, environmental conditions, and mission plans and goals, an intelligent system should be able to cope with unanticipated situations, support automated reasoning in real time, and guide and control the vehicle. Therefore, an intelligent system
UNDERWATER VEHICLES
should be designed with flexible communication, efficient solutions to temporal planning and resource allocation, information integration and recognition in multisensor operation, planning ability for a given task, and the capability to adapt to changes in the system and environment. D. R. Blidberg and R. Turner (11) reviewed some artificial intelligence (AI) techniques for underwater vehicle mission planners.

Table 5. Attitude Angle and Motion Sensing Systems

Representative commercial systems include the Watson Industries AHRS-C303 (12 VDC/350 mA; 907.2 g; 146.8 × 79.5 × 117.6 mm; outputs 3-axis heading rate, roll, pitch, and south/north heading; rate accuracy ±0.2 deg/s static and ±2% dynamic; attitude accuracy ±1 deg static; heading accuracy ±2 deg static; RS232 or analog interface; 12–70 Hz output rate; $8,518), the Precision Navigation AX100 (outputs azimuth, pitch, roll, and magnetic field), the Watson Industries IMU600AD (the AHRS-C303 outputs plus 3-axis linear acceleration), the Precision Navigation TCM2 compass module (45.36 g; 63.5 × 50.8 × 28 mm; outputs heading, roll, and pitch), the Systron Donner MotionPak (907.2 g; 77.5 × 77.5 × 91.5 mm; outputs 3-axis rate and 3-axis linear acceleration), and the KVH Industries DGS3 (1,740 g; 193.5 × 103 × 84.3 mm; outputs heading, roll, and pitch). Power draws range from a few watts to about 7 W, and prices range from $1,199 to $13,000.

Control Systems

Control systems in current unmanned underwater vehicles are quite immature compared with those of land-based systems. The vehicles have preprogrammed controllers for repetitive, routine work or are controlled by human operators. These control systems therefore have to be reprogrammed for different tasks, or a well-trained operator has to be hired. Operating periods and performance of ROVs for a given task are limited by operator fatigue. Major factors that make it difficult to control
underwater vehicles include the highly nonlinear dynamic behavior of the vehicle and manipulator, difficulty in determining hydrodynamic coefficients, and disturbances to the vehicle main body from ocean currents and manipulator motion. It is difficult to obtain high performance using conventional control strategies. The control system should be able to learn and adapt to changes in the dynamics of the vehicle and its environment. Various studies have been done on advanced underwater vehicle control systems, such as sliding control, adaptive control, neural network control, and fuzzy control (12–18).

Sensors

The sensory system is one of the major limitations in developing vehicle autonomy. The vehicle's sensors can be divided
Table 6. Communication Methods

Method    Advantages                        Disadvantages
Acoustic  Useful in water                   Moderate data rate; high error rate
Radio     Well-developed technology;        Surface only
          high data rate; low error rate
Laser     High data rate; reduced noise     Under development; short range

Table 7. Specific Energy Comparison of Batteries and Fuel Cells

System               Energy/Weight (Watt-hr/lb)
Lead–Acid            10–18
Ni–Cd                12–20
Ni–Fe                20–25
Ag–Cd                18–45
Ag–Zn                40–48
Ni–H2                80–90
Acid fuel cells      70–460
Alkaline fuel cells  110–430
Table 8. Comparison of Pressure Hull Materials

                         Steel      Aluminum   Titanium   Graphite    Ceramic
                         Alloy      Alloy      Alloy      Composite
Ultimate stress (kpsi)   60         73         125        100         100
Density (lb/in³)         0.283      0.1        0.16       0.057       0.13
Fabrication              excellent  very good  good       fair        fair
Corrosion resistance     poor       fair       very good  excellent   excellent
Magnetic susceptibility  very high  medium     high       very low    very low
Relative cost            very low   very low   moderate   moderate    moderate
into two groups: (1) system sensors, for sensing the motion of the vehicle, and (2) mission sensors, for sensing the operating environment. Different tasks require different sensors: optical, x-ray, acoustic imaging, and laser scanners for inspection; Doppler, sonar, inertial systems, and gyroscopes for navigation; sonar, magnetometers, laser scanners, magnetic scanners, and chemical scanners for recovery; and force, tactile, and proximity sensors for construction. Blidberg and Jalbert (19) described mission and system sensors, and reviewed current navigation sensors and sonar imaging sensors.

Multiple sensors are often needed for the same task. For instance, information concerning the objects and local terrain surrounding the vehicle can be gathered via a combination of sonar imaging, laser triangulation, and optical imaging. Sonar can provide most of the obstacle avoidance information. Video images plus specialized machine vision algorithms can provide high-resolution information concerning the shape and range of near objects and terrain. Laser triangulation can provide the same type of data at a slower rate, but with the additional capability of operating in turbid water. Geometric information concerning the vehicle's surroundings from multiple sensing systems may be redundant and conflicting. This resulting sensor fusion problem must be handled by the intelligent system.

An absorbing, backscattering, and color-distorting medium such as the ocean causes difficult problems in using video images, since the illumination is highly nonuniform and multidirectional. Additional complexities arise because the artificial light sources mounted on the vehicle move with it. The movement of both plants and fish also creates confusion in perceived bottom topography. Another difficulty is in x-y position sensing, because there are no internal system sensors for the x-y vehicle position.
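To make the control strategies cited in the Control Systems discussion concrete, here is a minimal sliding-mode-style depth controller of the kind surveyed in (12–18), simulated against a one-degree-of-freedom heave model. The plant parameters, gains, and setpoint are illustrative assumptions, not values from any cited study.

```python
import math

# 1-DOF heave plant: m*dv/dt = u - d*v*|v|, dz/dt = v.
# Sliding surface s = v + lam*e with tracking error e = z - z_ref.
# The control u = -K*tanh(s/phi) is a smoothed sliding-mode law:
# tanh replaces the discontinuous sign function to avoid chattering.
# All numbers below are illustrative assumptions.

def dive_to(z_ref=10.0, m=100.0, d=50.0, K=500.0,
            lam=0.5, phi=0.1, dt=0.01, t_end=60.0):
    z, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = z - z_ref
        s = v + lam * e
        u = -K * math.tanh(s / phi)       # bounded control effort
        v += dt * (u - d * v * abs(v)) / m
        z += dt * v
    return z

depth = dive_to()   # settles near the 10 m setpoint
```

Once the state reaches the surface s = 0, the error decays exponentially at rate lam regardless of the (uncertain) drag coefficient, which is the robustness property that motivates sliding control for vehicles with poorly known hydrodynamic coefficients.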
The most common approach in current vehicles is the acoustic long baseline or short baseline method, which requires external transponders. However, signal attenuation varies with distance,
frequency, and temperature. Error sources of the acoustic long baseline navigation system are listed in Table 4. Commercial sensing systems for attitude angle and motion are summarized in Table 5.

Communications

The most common approach for ROV communications uses an umbilical line with coaxial cables or fiber optics. This tether supplies duplex communications. While coaxial cables are effective for simple operations with limited data transmission, fiber-optic cables can transmit more data with less electromagnetic interference and are lighter and thinner. This is important, since cables cause substantial drag and often become snagged. About ten percent of ROVs are lost because of broken tethers. A tethered vehicle also requires an operating base, the surface mother ship, whose operating cost may be more than $20,000 per day. Research and development of untethered autonomous vehicles is therefore needed, but communicating with AUVs presents formidable challenges. Different approaches to untethered communication are compared in Table 6.

The main approach today for through-water transmission involves acoustics, in which transducers convert electrical energy into sound waves. Since the ocean weakens acoustic energy rapidly as the frequency is increased, relatively low frequencies are desirable for longer-range communications. But at very low frequencies, the required transducer size is impractically large and the data rates are lower. The speed and direction of sound signals also vary depending on surface waves, temperature, tides, and currents. Josko Catipovic and his research staff at the Woods Hole Oceanographic Institution have studied the characteristics of the water channel through which a signal travels in order to adjust the signal accordingly (20). Acoustic modems operating at 1,200 baud were developed, a rate adequate for sending oceanographic data and transmitting video images.
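The frequency dependence of seawater absorption described above is often estimated with Thorp's well-known empirical formula. The sketch below uses that approximation (frequency in kHz, loss in dB/km); the specific frequencies evaluated are merely illustrative.

```python
# Thorp's empirical approximation for seawater absorption
# (f in kHz, result in dB/km), a standard engineering estimate
# for roughly the 100 Hz to 1 MHz band.

def thorp_db_per_km(f_khz):
    f2 = f_khz * f_khz
    return (0.11 * f2 / (1.0 + f2)
            + 44.0 * f2 / (4100.0 + f2)
            + 2.75e-4 * f2
            + 0.003)

# Absorption rises steeply with frequency, which is why long-range
# acoustic links use low carrier frequencies at the cost of data rate.
for f in (1.0, 10.0, 100.0):
    loss = thorp_db_per_km(f)   # roughly 0.07, 1.2, and 34 dB/km
```

At 100 kHz a 1 km path already loses tens of decibels to absorption alone, while at 1 kHz the absorption over the same path is negligible; this is the quantitative form of the range-versus-rate tradeoff discussed in the text.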
Table 9. Comparison of Various Pressure Hull Shapes

Shape          Advantages                                 Disadvantages
Single sphere  Low weight/volume ratio; excellent for     Low optimum vehicle L/D ratio
               deep-diving vehicles
Cylinder       Ease of fabrication; high optimum          High W/V ratio; end closures;
               vehicle L/D ratio                          inefficient structure
Saucer         Improved hydrodynamics in horizontal       Low controllability; limited to
               plane; ease of hovering in currents        shallow depths
Egg            Good hydrodynamics; good W/V ratio         Difficult to design and fabricate
Table 10. Potential Applications of Underwater Vehicles

Science
• Seafloor mapping
• Rapid response to oceanographic and geothermal events
• Geological sampling

Environment
• Long-term monitoring (e.g., hydrocarbon spills, radiation leakage, pollution)
• Environmental remediation
• Inspection of underwater structures, including pipelines, dams, etc.

Military
• Shallow-water mine search and disposal
• Submarine off-board sensors

Ocean Mining and Oil Industry
• Ocean survey and resource assessment
• Construction and maintenance of undersea structures

Other Applications
• Ship hull inspection and ship tank internal inspection
• Nuclear power plant inspection
• Installation and inspection of underwater communication and power cables
• Entertainment: underwater tours
• Fisheries: underwater ranger
Power Systems

While tethered ROVs can be powered by the mother ship, the operating hours of untethered vehicles are limited by the onboard power system. Most power systems for current AUVs rely on batteries that supply limited energy. A typical battery type is lead–acid. Silver–zinc offers roughly double the energy density of lead–acid batteries. However, silver–zinc batteries are expensive: a 325-kWh silver–zinc battery costs about $400,000. Low-cost, high-density batteries that provide the vehicle with more than 24 hours of endurance are desired. Fuel cells or fuel-cell-like devices, which are more energetic than silver–zinc batteries, are being considered. Specific energy comparisons of batteries and fuel cells are listed in Table 7.
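A rough endurance estimate follows directly from the specific energies in Table 7: battery energy divided by average electrical load. The pack weight and load below are illustrative assumptions, not figures for any particular vehicle.

```python
# Rough endurance estimate from the Table 7 specific energies (Wh/lb).
# Pack weight and average load are illustrative assumptions.

SPECIFIC_ENERGY_WH_PER_LB = {
    "lead-acid": 14,    # midpoint of the 10-18 Wh/lb range in Table 7
    "silver-zinc": 44,  # midpoint of the 40-48 Wh/lb range
}

def endurance_hours(chemistry, pack_weight_lb, avg_load_w):
    """Hours of operation = stored energy / average electrical load."""
    energy_wh = SPECIFIC_ENERGY_WH_PER_LB[chemistry] * pack_weight_lb
    return energy_wh / avg_load_w

# A hypothetical 100 lb pack feeding a 400 W hotel-plus-propulsion load:
t_pb = endurance_hours("lead-acid", 100, 400)    # 3.5 h
t_ag = endurance_hours("silver-zinc", 100, 400)  # 11.0 h
```

The factor-of-three gap between the two chemistries at equal weight is why silver–zinc is attractive despite its cost, and why the 24-hour endurance goal pushes designers toward fuel cells.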
Mechanical Manipulators

Mechanical manipulators are needed for underwater intervention missions. While many ROVs are equipped with one or two arms, most AUVs do not have arms and are limited to survey-type applications. Unlike stationary industrial manipulators in factories, underwater manipulators are attached to vehicles that are constantly moving. Therefore, it is quite difficult and tedious to operate these manipulators with accuracy. Teleoperation using a master/slave system is a common approach. In the offshore oil industry, teleoperated manipulators are used on tethered ROVs. These vehicles often use two arms: one to latch onto the structure for stability and the other to perform tests and maintenance. For multitask operations, more than one type of manipulator end-effector may be needed. With the current vehicle system, the vehicle must be brought to the surface and the end-effector changed for each task. This procedure is time-consuming and expensive. A flexible and dexterous design of the end-effector and workpackage is necessary to carry out multitask and sophisticated operations.

Pressure Hulls

Water pressure on the vehicles can be enormous. The deep oceans range from 6,000 to 11,000 m in depth. At a mere 10 m depth, the pressure is already twice the normal one-atmosphere pressure, or 203 kPa. The chemical environment of the sea is highly corrosive, thus requiring special materials that have rigidity, strength, and environmental resistance. Many ROVs use open-frame structures with a few pressure hulls, while many AUVs have torpedo-shaped fairings that enclose a few pressure hulls for onboard electronics and batteries. The most common materials are aluminum and titanium. Recently, composite materials have been considered. The potential advantages of composite materials for undersea pressure hulls are well known, and numerous research and development efforts are underway (21–24). Pressure hull materials and shapes are summarized in Tables 8 and 9.

APPLICATIONS

As shown in Tables 1 and 2, underwater vehicles have performed various underwater tasks such as seafloor mapping, environmental monitoring, submarine surveillance, underwater pipe and cable inspection, and entertainment (25–35). The Titanic was explored by an ROV, the Argo/Jason. ROVs helped retrieve black boxes and other wreckage from airplane crashes such as the TWA flight that went down off Long Island, New York. For military applications, unmanned underwater vehicles are efficient tools to help salvage downed aircraft, test torpedoes, and conduct mine detection and hunting. The offshore oil industry has been a major customer of unmanned underwater vehicle manufacturers. One of the newer application areas is nuclear power plants (36–38). Current use of ROVs by GE Nuclear Energy Co. includes visual inspections in reactor vessels, equipment pools, and fuel storage pools. Potential applications of underwater vehicles are summarized in Table 10, and configurations of some existing AUVs are summarized in Table 11.
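The depth-pressure relation quoted in the Pressure Hulls discussion, and the wall stress it produces, can be sketched with the standard hydrostatic and thin-wall formulas. The seawater density, hull radius, and wall thickness below are illustrative assumptions.

```python
# Hydrostatic pressure and thin-wall hull stress sketch.
# p(h) = p_atm + rho*g*h; for a thin-walled cylinder the hoop
# stress is sigma = p*r/t (a sphere sees half that, p*r/(2t)).
# Density, radius, and thickness are illustrative assumptions.

RHO = 1025.0      # seawater density, kg/m^3 (assumed)
G = 9.81          # gravitational acceleration, m/s^2
P_ATM = 101325.0  # one atmosphere, Pa

def pressure_pa(depth_m):
    return P_ATM + RHO * G * depth_m

def cylinder_hoop_stress_pa(depth_m, radius_m, thickness_m):
    return pressure_pa(depth_m) * radius_m / thickness_m

# At 10 m the absolute pressure is roughly twice one atmosphere,
# consistent with the ~203 kPa figure in the text.
ratio = pressure_pa(10.0) / P_ATM                      # ~2.0
sigma = cylinder_hoop_stress_pa(6000.0, 0.3, 0.02)     # ~0.9 GPa
```

The 6,000 m example shows why hull material matters: the computed hoop stress in a 0.3 m radius, 20 mm thick cylinder already exceeds the ultimate stress of the steel alloy in Table 8, forcing thicker walls, smaller radii, spherical sections, or stronger materials.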
INFORMATION RESOURCES

More information about recent developments in unmanned underwater vehicles can be obtained from various resources.
The technical committee on Underwater Robotics of the IEEE Robotics and Automation Society continually updates its World Wide Web homepage (http://www.eng.hawaii.edu/ME/Research/URTC/URTC.html) with recent research and development activities such as conferences and workshops, and the page provides links to research institutions worldwide that are involved in underwater robotics. Related technical societies include the Marine Technology Society (MTS), the IEEE Oceanic Engineering Society, and the IEEE Robotics and Automation Society. Technical meetings sponsored by these societies include the IEEE Symposium on Autonomous Underwater Vehicle Technology, the International Symposium on Unmanned Untethered Submersible Technology, Underwater Intervention, ROVs, and Oceans. Regular journals and magazines include the IEEE Journal of Oceanic Engineering and Sea Technology. Two books on underwater robotics were recently published: Underwater Robotic Vehicles: Design and Control, TSI Press (1995) (39), and Underwater Robots, Kluwer (1996) (40).

Table 11. Configurations of Some Existing Autonomous Underwater Vehicles

AE 1000 (KDD, Japan, 1992): VxWorks on a VME MC68040/4M with 3 DSPs and an image processor; lead–acid batteries; 3 thrusters; max 2 knots; 1,000 m depth.
Phoenix (NPS, USA, 1992): OS-9 on a GESPAC MC68030/2M; lead–acid gel batteries; 6 thrusters with 8 control fins; max 1 knot; 10 m depth.
ABE (WHOI, USA, 1992): OS-9 on a 68HC11 with T800 processors and a SAIL network; lead–acid gel, alkaline, and lithium batteries; 6 thrusters; 2 knots; 6,000 m depth.
Ocean Voyager II (FAU, USA, 1993): VxWorks on a VME MC68030/8M with Neuron chips and a LONTalk network; lead–acid or silver–zinc batteries; 1 thruster with servo-controlled rudder and stern plane; max 5 knots; 600 m depth.
Odyssey II (MIT, USA, 1993): OS-9 on an MC68030/8M with an MC68HC11 and a SAIL network; silver–zinc batteries; 1 thruster with servo-controlled rudder and elevator; 6,000 m depth.
OTTER (MBARI, USA, 1994): VxWorks on an MVME167 (68040) with the NDDS protocol; nickel–cadmium batteries; 8 thrusters; max 4 knots; 1,000 m depth; 1 mechanical arm.
ODIN II (UH, USA, 1995): VxWorks on a VME MC68040; lead–acid batteries; 8 thrusters; max 2 knots; 30 m depth; 1 mechanical arm.

Sensory systems across these vehicles include AC magnetometers, cameras and VCR recorders, lasers, obstacle-avoidance and altitude sonars (e.g., Datasonic PSA900, ST1000, ST725), gyros and fluxgate compasses, inclinometers, Watson and MotionPak 3-axis angle/rate sensors, whisker sonars, sonic speedometers, pressure and temperature sensors, altimeters, stereo CCDs, acoustic transponders and modems, radio beacons, and sonic ranging and positioning systems.
BIBLIOGRAPHY

1. T. I. Fossen, Underwater vehicle dynamics. In J. Yuh (ed.), Underwater Robotic Vehicles: Design and Control, Albuquerque: TSI, 1995.
2. K. Goheen, Techniques for URV modeling. In J. Yuh (ed.), Underwater Robotic Vehicles: Design and Control, Albuquerque: TSI, 1995.
3. M. Mahesh, J. Yuh, and R. Lakshmi, A coordinated control of an underwater vehicle and robotic manipulator, J. Robotic Systems on Underwater Robotics, 8: 339–370, 1991.
4. S. McMillan, D. E. Orin, and R. B. McGhee, DynaMechs: An object-oriented software package for efficient dynamic simulation of URVs. In J. Yuh (ed.), Underwater Robotic Vehicles: Design and Control, Albuquerque: TSI, 1995.
5. D. N. Yoerger, J. G. Cooke, and J. E. Slotine, The influence of thruster dynamics on underwater vehicle behavior and their incorporation into control system design, IEEE J. Oceanic Eng., OE-15: 167–178, 1990.
6. D. J. Lewis, J. M. Lipscomb, and P. G. Thompson, The simulation of remotely operated underwater vehicles, ROV '84, 1984.
7. G. Pappas et al., The DARPA/Navy unmanned undersea vehicle program, Unmanned Systems, 9: 24–30, Spring 1991.
8. S. K. Choi and J. Yuh, Design of advanced underwater robotic vehicle and graphic workstation, Proc. IEEE Int. Conf. on Robotics and Automation, vol. 2, 1993, pp. 99–105.
9. D. P. Brutzman, Y. Kanayama, and M. J. Zyda, Integrated simulation for rapid development of autonomous underwater vehicles, IEEE AUV '92, Washington, DC, 1992.
10. Y. Kuroda et al., A hybrid environment for the development of underwater mechatronic systems, IECON, 1995.
11. D. R. Blidberg and R. Turner, Mission planner. In J. Yuh (ed.), Underwater Robotic Vehicles: Design and Control, Albuquerque: TSI, 1995.
12. D. N. Yoerger and J. E. Slotine, Robust trajectory control of underwater vehicles, IEEE J. Oceanic Eng., OE-10: 462–470, 1985.
13. J. Yuh, Modeling and control of underwater robotic vehicles, IEEE Trans. Syst., Man Cybern., 20: 1475–1483, 1990.
14. J. Yuh, A neural net controller for underwater robotic vehicles, IEEE J. Oceanic Eng., 15: 161–166, 1990.
15. J. Yuh, Learning control for underwater robotic vehicles, IEEE Control Syst. Mag., 14: 39–46, 1994.
16. R. Cristi, F. A. Papoulias, and A. J. Healey, Adaptive sliding mode control of autonomous underwater vehicles in the dive plane, IEEE J. Oceanic Eng., 15: 462–470, 1991.
17. N. Kato, Applications of fuzzy algorithm to guidance and control of underwater vehicles. In J. Yuh (ed.), Underwater Robotic Vehicles: Design and Control, Albuquerque: TSI, 1995.
18. A. J. Healey and D. B. Marco, Slow speed flight control of autonomous underwater vehicles: experimental results with NPS AUV II, Proc. ISOPE, 523–532, 1992.
19. D. R. Blidberg and J. Jalbert, AUV mission & system sensors. In J. Yuh (ed.), Underwater Robotic Vehicles: Design and Control, Albuquerque: TSI, 1995.
20. J. R. Fricke, Down to the sea in robots, Technology Review, 10: 46, 1994.
21. J. M. Walton, Advanced unmanned search systems, Oceans '91, 1392–1399, 1991.
22. Du Pont Co., Advanced Submarine Technology—Thermoplastic Materials Program, Phase IIA Final Report, DARPA Contract #MDA972-89-0043, 1991.
23. S. M. Anderson et al., Design, analysis and hydrotesting of a composite-aluminum cylinder joint for pressure hull applications, ASTM/STP on Compression Response of Composite Structures, 1992.
24. P. Davies et al., Durability of composite materials in a marine environment—a fracture mechanics approach, Proc. ICCM-9, II: Madrid, Spain, 308–315, 1993.
25. S. Smith et al., Design of AUVs for coastal oceanography. In J. Yuh (ed.), Underwater Robotic Vehicles: Design and Control, Albuquerque: TSI, 1995.
26. K. Adakawa, Development of AUV: Aqua Explorer 1000. In J. Yuh (ed.), Underwater Robotic Vehicles: Design and Control, Albuquerque: TSI, 1995.
27. D. R. Yoerger, A. M. Bradley, and B. B. Walden, The autonomous benthic explorer, Unmanned Systems, 9: 17–23, Spring 1991.
28. D. R. Blidberg, Autonomous underwater vehicles: a tool for the ocean, Unmanned Systems, 9: 10–15, Spring 1991.
29. J. G. Bellingham and C. Chryssostomidis, Economic ocean survey capability with AUVs, Sea Technology, 12–18, April 1993.
30. A. Dane, Robots of the deep, Popular Mechanics, 104–105, June 1993.
31. J. A. Adam, Probing beneath the sea, IEEE Spectrum, 55–64, April 1985.
32. R. C. Robinson, National defense applications of autonomous underwater vehicles, IEEE J. Oceanic Eng., OE-11: 1986.
33. J. B. Tucker, Submersibles reach new depths, High Technology, 17–24, February 1986.
34. J. D. Adam, Using a micro-sub for in-vessel visual inspection, Nuclear Europe Worldscan, 5–6, 10, 1991.
35. S. Ashley, Voyage to the bottom of the sea, Mech. Eng., 115: December 1993.
36. H. T. Roman, Robot applications in nuclear power plants, Newsletter of the IEEE Robotics and Automation Society, 8–9.
37. J. Judge, Jr., Remote operated vehicles—a driving force for improved outages, Nucl. Eng. Int., 37: 34–36, July 1992.
38. Kok et al., Application of robotic systems to nuclear power plant maintenance tasks, Proc. of the 1984 National Topical Meeting on Robotics and Remote Handling in Hostile Environments, 161–168, 1984.
39. J. Yuh (ed.), Underwater Robotic Vehicles: Design and Control, Albuquerque, NM: TSI, 1995.
40. J. Yuh, T. Ura, and G. A. Bekey (eds.), Underwater Robots, Boston, MA: Kluwer, 1996.

JUNKU YUH
University of Hawaii