Φ(x) = Φ₀ e^(−γx)   [W]   (6.3)
[Figure 6.4 shows a beam of flux Φ passing through an infinitesimal slab of atmosphere of thickness dx.]

Figure 6.4 Transmission through an infinitesimal distance of atmosphere.
where τ = e^(−γx) is the transmittance of the atmosphere over a distance x. This principle is known as the Beer-Lambert law (sometimes shortened to Beer's law). As mentioned above, the extinction coefficient is highly dependent on wavelength, so Beer's law is usually written as
τ(λ) = e^(−γ(λ)x)   [unitless]   (6.4)
Example
The transmittance of air τ_atm(λ) is the product of the molecular extinction transmittance and the aerosol extinction transmittance:
τ_atm(λ) = τ_m(λ) τ_a(λ)   [unitless]   (6.5)
A CO₂ laser (10.6-μm wavelength) is propagated through a 2-km horizontal path length. Determine the molecular transmittance given that the molecular extinction is 0.383 per kilometer (defined for the given wavelength and relative humidity). The solution gives the transmittance as
τ_m(10.6 μm) = e^(−(0.383)(2 km)) = 0.465   [unitless]   (6.6)
If the aerosol extinction is negligible (i.e., its transmittance is unity), a 10-W laser provides 4.65W at a distance of 2 km.
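Beer's law is simple enough to script directly. The following is a minimal Python sketch (the function name is our own, not from any standard atmospheric library) that reproduces the numbers of (6.5) and (6.6):

```python
import math

def transmittance(extinction_per_km, path_km):
    # Beer's law (6.4): tau = exp(-gamma * x)
    return math.exp(-extinction_per_km * path_km)

# Example of (6.6): CO2 laser at 10.6 um, 2-km path,
# molecular extinction 0.383 per km.
tau_m = transmittance(0.383, 2.0)
print(f"molecular transmittance: {tau_m:.3f}")   # ~0.465

# (6.5): total transmittance is the product of the molecular
# and aerosol terms; here the aerosol transmittance is unity.
tau_atm = tau_m * 1.0
print(f"power from a 10-W laser at 2 km: {10.0 * tau_atm:.2f} W")  # ~4.65 W
```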
6.3 ABSORPTION
0"'\
pvJ
-=- f (') X 0 .1(65 - t( G5 v.>
\,/\../
Absorption is fundamentally a quantum process whereby an atmospheric molecule absorbs the energy from some incident photon, increasing its energy and resulting in a change of its internal state.
When discussing absorption, it is best to consider the quantum nature of the atmospheric molecule. From introductory physics, the energy structure of a molecule is illustrated in Figure 6.5. The energy gap E_g required for absorption is given in Einstein's equation, E = hν, where E is the energy of the photon, h is Planck's constant (6.626 × 10⁻³⁴ J-sec), and ν is the frequency of the incident light.
[Figure 6.5 shows a photon of energy E_photon exciting a transition across the molecular energy gap E_g.]

Figure 6.5 Absorption of a photon.
E_g is the energy gap that must be exceeded by a photon for an electron to change energy states within a molecule. There are a large number of "allowed" energy state changes within a molecule's energy structure, so its absorption properties contain a spectral dependence. For a photon to be absorbed by a molecule, it must be at a wavelength for which a sufficient energy difference between molecular states exists. This energy difference is related to the different rotational and vibrational states of the particular atmospheric molecules. If the appropriate molecular energy band does not exist, the corresponding photons at that particular wavelength will not interact (to the first order) with that particular atmospheric molecule. Because of the quantum mechanical makeup of atmospheric gases, there is no significant absorption of radiation in the visible wavelengths (except for H₂O absorption between 0.65 and 0.85 μm), while in the infrared region of the spectrum a number of molecules present are highly absorptive. The classic atmospheric components that are absorptive in the infrared include the diatomic molecules nitrogen and oxygen; water; and carbon dioxide. Plots of the absorption characteristics for the primary components versus wavelength are shown in Figure 6.6.
Example
A molecule is sensitive [2] to photons with energies equal to, or slightly greater than, its molecular bandgaps. Lower energy photons pass and are not absorbed. Determine the wavelength with the most sensitivity (absorption) for a molecule with a gap at 0.18 eV. Planck's constant comes in two forms: 6.63 × 10⁻³⁴ J-sec or 4.14 × 10⁻¹⁵ eV-sec. The speed of light is 3.00 × 10⁸ m/s. The solution is determined by Einstein's relationship
E = hν   [eV]   (6.7)
With E at 0.18 eV, the frequency is 4.35 × 10¹³ 1/s, or Hertz. The wavelength can then be determined from
λ = c/ν   [meters]   (6.8)
yielding a wavelength of 6.9 μm.
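The arithmetic of (6.7) and (6.8) can be checked with a few lines of code; this sketch assumes only the constants quoted above:

```python
H_EV = 4.14e-15   # Planck's constant, eV-sec
C = 3.00e8        # speed of light, m/s

def peak_absorption_wavelength(gap_ev):
    # (6.7): E = h*nu, so nu = E/h; (6.8): lambda = c/nu
    nu = gap_ev / H_EV          # Hz
    return C / nu               # meters

lam = peak_absorption_wavelength(0.18)
print(f"frequency:  {0.18 / H_EV:.3e} Hz")     # ~4.35e13 Hz
print(f"wavelength: {lam * 1e6:.1f} um")       # ~6.9 um
```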
[Figure 6.6 shows three panels of transmission (percent) versus wavelength (μm) from 2 to 16 μm, one each for carbon dioxide, water, and ozone.]

Figure 6.6 Transmission of atmospheric gasses for a 1-km path length.
The spectral graphs shown in Figure 6.6 are low-spectral-resolution graphs, where individual absorption lines cannot be seen (i.e., transmission has been averaged over some small wavebands). High-resolution transmission graphs (1/16 wavenumber) are shown in Figure 6.7 for a 1-km horizontal path at sea level.
Frequently, transmission is plotted as a function of wavenumber, where the wavenumber is the inverse wavelength 1/λ and the wavelength is given in centimeters. The graphs are produced by FASCODE, a high-resolution atmospheric spectral analysis program by Ontar.
[Figure 6.7 shows two FASCODE (PcLnWin) panels for a 1-km horizontal path at sea level, no aerosols, U.S. standard atmosphere: transmission versus wavenumber (roughly 2385 to 2400 cm⁻¹) and transmission versus wavelength (roughly 4.17 to 4.20 μm).]

Figure 6.7 High-resolution atmospheric transmission (courtesy of J. Schroeder, Ontar).
6.4 SCATTERING
Scattering is the result of photons colliding with atmospheric particles, whereby the photon's energy is reradiated in all directions. The fraction of the photon's energy extracted, and the angular pattern in which it is reradiated, is a function of the size of the particle relative to the wavelength of the light.
[Figure 6.8 shows the angular distribution of energy scattered from an incident beam for three particle sizes: small particles less than 1/10 wavelength (Rayleigh scattering), large particles around 1/4th wavelength (Mie scattering), and larger particles larger than a wavelength (geometric optics).]

Figure 6.8 Scattering by particles.
/'" '..... -"<,The first type gf s~Clttering ~l\~I~jg~ __~_~~!!_~~i~g~-HJ.? used w~~en the .J . ,P,~~1.~I~.ra~~~_~ is ~P'5q.~}Jl1_~te!YJ~s~Ub~n 9I)~~~~l1th gfa__~'!Y~I~n.,gth ..Ih~"?5:att~!ing ~ .' .~.9~f~fl;nt is prop()rtlonal t9_~ -4.: As shown J~ligure 6.8, the angular distribution . is s)~fumetrical. It is Rayleigh scattering that feads to. the blue' color of the sky. The blue wavelengths have a strong scattering inter'hdt'iQn with the atmosphe~e. The blue is scattered in all directions so that the sky appe-ars blue, where the red end of the visible spectrum provides little scattering. Ibe attenuation of light for Rayleigh-_ can be represented by the first order approximation, given by simRlification of
t/.
(6.3)------ .. ~;),·,~;~F
..
.. ~~.. "--'l~:rT'-~'i--~'-'-'-'
.
Φ(x) = Φ₀ e^(−γx)   [W]   (6.9)
where only the scattering coefficient γ is considered. The Rayleigh scattering coefficient is given by
γ = γ_r N   [1/km]   (6.10)
where γ_r is the Rayleigh scattering cross-section and N is the molecular number density. Examples of the molecular number density are shown in Table 6.2. The Rayleigh scattering cross-section is given by

γ_r = 8π³(n² − 1)² / (3N²λ⁴)   [km²]   (6.11)

where n is the index of refraction. The λ⁻⁴ dependence means that Rayleigh scattering dominates more in the shorter, visible wavelengths than in the infrared.
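As an order-of-magnitude check on (6.10) and (6.11), the sketch below evaluates them for green light in sea-level air. The refractive index and number density are assumed round values (the number density from Table 6.2), and the function name is our own:

```python
import math

def rayleigh_cross_section(n, N_per_km3, wavelength_km):
    # (6.11): gamma_r = 8*pi^3*(n^2 - 1)^2 / (3*N^2*lambda^4)  [km^2]
    return (8 * math.pi**3 * (n**2 - 1)**2
            / (3 * N_per_km3**2 * wavelength_km**4))

n = 1.0003                # assumed index of refraction of air
N = 1e19 * 1e15           # Table 6.2: ~1e19 per cm^3, converted to per km^3
lam = 0.55e-6 * 1e-3      # 0.55-um green light expressed in km

gamma_r = rayleigh_cross_section(n, N, lam)
gamma = gamma_r * N       # (6.10): scattering coefficient, per km
print(f"gamma_r = {gamma_r:.2e} km^2, gamma = {gamma:.3f} per km")
```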
Table 6.2
Atmospheric particles

Particle type               Radius (μm)    Density (per cm³)
Air molecules (Rayleigh)    10⁻⁴           10¹⁹
Haze particles (Rayleigh)   10⁻²–1         10–10³
Fog droplets (Mie)          1–10           10–100
Raindrops (Geometric)       10²–10⁴        10⁻⁵–10⁻²
The second type of scattering to be discussed is Mie scattering, or aerosol scattering. It occurs when the wavelength is approximately the same size as the particle. Detailed Mie theory is based on classical electromagnetic equations with continuity conditions at the boundary between the particles and their surroundings. Most atmospheric models make certain assumptions to simplify the theory. One such assumption is that the atmospheric particles can be represented by simple shapes such as cylinders, ellipsoids, and spheres. It is beyond the scope of this book to derive the details of Mie theory; instead, a first-order approximation is provided. This type of scattering is concentrated in the forward direction, as shown in Figure 6.8. The attenuation given by Mie scattering is of the same form as Rayleigh scattering and is expressed by
Φ(x) = Φ₀ e^(−γ_m x)   [W]   (6.12)
where γ_m is the aerosol attenuation coefficient. The aerosol attenuation coefficient is a function of the aerosol density and is related by

γ_m(h) = [M(h)/M(0)] γ_m(0)   (6.13)
where M(h) denotes the aerosol density at a height h, and γ_m(0) is the aerosol coefficient at sea level (the density at sea level is approximately 200 cm⁻³). The aerosol coefficient also varies with wavelength. Several examples of the aerosol attenuation coefficient are given in Table 6.3 [5]. The third type of scattering occurs when the particles are much larger than the wavelength. An example of this scattering type is that of raindrops. For this case, the light is primarily scattered in the forward direction. This case can be approximated with geometrical optics.
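A small sketch of the height scaling in (6.13). The exponential density profile used here is purely illustrative (the text does not specify one); only the ratio M(h)/M(0) matters:

```python
import math

def aerosol_coefficient(h_m, gamma_sea_level, scale_height_m=1200.0):
    # (6.13): gamma_m(h) = M(h)/M(0) * gamma_m(0).
    # Illustrative density profile: M(h) = M(0) * exp(-h / scale_height).
    ratio = math.exp(-h_m / scale_height_m)
    return ratio * gamma_sea_level

# 0.5-um aerosol coefficient at sea level taken from Table 6.3:
print(f"{aerosol_coefficient(1000.0, 0.167):.3f} per km at 1-km altitude")
```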
Table 6.3
Aerosol attenuation coefficients

Wavelength (μm)    Aerosol coefficient at sea level
0.3                0.26
0.4                0.20
0.5                0.167
0.6                0.150
0.7                0.135
0.8                0.127
0.9                0.120
6.5 PATH RADIANCE
Path radiance is the natural emission of radiation by atmospheric particles within a sensor field-of-view. Given a path length between a source and a sensor, the atmospheric particles in this path contribute radiation to the total flux on the detector. To fully describe the flux on the detector, the path radiance must be taken into consideration. Path radiance can be described in a number of different ways, but the simplest method is to model the particles as blackbodies. If the atmospheric particles are considered a blackbody at a particular temperature, with radiance L_BB(T_a), then the amount of path radiance seen by the sensor is
L_path(λ) = [1 − τ_path(λ)] L_BB(λ, T_a)   [W/cm²-sr-μm]   (6.14)
where τ_path(λ) is the atmospheric transmission of the path. Given any sensor viewing geometry where the source fills the detector (i.e., the image of the object is larger than the detector), the total flux on the detector can be calculated using an effective total radiance:
" i.
iL
:..:.~... '., y
.!•
LIIIIlI/(A) == L,\·lIurce(/.)tpalh(A) + Lpa,h(A)
.(
[W/cm2-sr-).lm] \
./,..-
/'''''''''' .. /
f' l
'(6.15)
,",f,
,Ct<-
If the source does not fill the detector and the background contributes flux on the detector, then background radiance must also be added to the total radiance. There are many arguments on when to include path radiance and when to ignore it. From (6.14), the path radiance becomes small when the path transmission is high. With a large transmission and cold air particles (compared with the source), the path radiance may be negligible. However, a small path transmission, a high air temperature (with respect to the source), or both can provide significant path radiance. An additional note is that large path transmission corresponds to short paths, so path radiance may be negligible for short ranges.
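The bookkeeping of (6.14) and (6.15) is easy to express in code. The radiance values below are illustrative placeholders, not measured data:

```python
def path_radiance(tau_path, L_bb_air):
    # (6.14): the path emits like a blackbody weighted by its
    # effective emissivity, which is (1 - path transmission).
    return (1.0 - tau_path) * L_bb_air

def total_radiance(L_source, tau_path, L_bb_air):
    # (6.15): transmitted source radiance plus path radiance.
    return L_source * tau_path + path_radiance(tau_path, L_bb_air)

# Illustrative numbers (W/cm^2-sr-um): a warm source through cooler air.
L = total_radiance(L_source=1.0e-3, tau_path=0.8, L_bb_air=4.0e-4)
print(f"effective total radiance: {L:.2e} W/cm^2-sr-um")   # 8.8e-04
```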
6.6 TURBULENCE
Turbulence is a term used to describe time-varying temperature inhomogeneities of the atmosphere. The temperature (and pressure) variations result in index of refraction inhomogeneities. The index of refraction variations cause the direction of light propagation to "bend" in various directions. These directional changes vary with time. Turbulence is responsible for a variety of effects, such as temporal intensity fluctuations (i.e., scintillations). An example of scintillations is the twinkling of the stars (a small source through a large atmospheric path length). Turbulence affects imaging not only by visible sensors but also by infrared sensors. An example of infrared turbulence can be considered when viewing a distant object across a parking lot in the middle of the summer. The sun heats the pavement, causing the air above it to become heated. This heated air is unstable, and radiation passing through it deviates from its original path. This phenomenon causes image motion and blurs the image. It is especially noticeable when viewing small, distant objects. It differs from extinction since motion or blurring is not uniform across the image. Overall, extinction affects the entire image in a uniform manner.

As with other sections of this chapter, a complete development of turbulence is out of scope, hence the focus here is on several key parameters. These parameters are defined along the optical path for small-scale atmosphere turbulence. Large-scale atmosphere effects are not generally considered in atmospheric modeling for imaging sensors. Turbulence consists of time-varying fluctuations that change the localized index of refraction. The fluctuations of the refractive index in the atmosphere can be described by the index structure function:
D_n(r) = ⟨(n₁ − n₂)²⟩   (6.16)
where n₁ and n₂ are the indices of refraction at two points separated by a distance r. The notation ⟨ ⟩ describes the ensemble average. In a similar manner, a temperature structure function is defined by
D_T(r) = ⟨(T₁ − T₂)²⟩   (6.17)
where T₁ and T₂ are the temperatures at two points separated by distance r. The Kolmogorov-Obukhov law states that differences in indices and temperatures are proportional to the distance to the two-thirds power:

D_n(r) = C_n² r^(2/3)   (6.18)

and

D_T(r) = C_T² r^(2/3)   (6.19)
where C_n² and C_T² are constants called the index structure parameter and the temperature structure parameter, respectively. Solving for the temperature structure parameter using (6.17) and (6.19) shows that the temperature structure parameter can be measured directly:

C_T²(r) = ⟨(T₁ − T₂)²⟩ / r^(2/3)   (6.20)
The two-thirds power rule holds for many cases. It describes the heat transfer between the surface and the atmosphere. The two-thirds power functions are valid for stable conditions (night), neutral conditions (sunrise and sunset), and for some daytime conditions where there is warm air over cold ground (e.g., snow or ice on
the ground). For unstable conditions (free convection over hot surfaces during the day), a four-thirds power is more appropriate. The index structure parameter C_n² is related to the temperature structure parameter by

C_n² = (∂n/∂T)² C_T²   (6.21)
I.:"
.,-',"
L·
... ,
"T,,-
foP
,'.....!J
For dry air and optical wavelengths, (6.21) can be approximated by [6] (6.22) . where P is the atmospheric pressure in millibars and T is the temperature in Kelvin. The index structure parameter ranges from 1 x 10. 15 m· 2/3 for weak turbulence to 5 x 10. 14 m· 213 for strong turbulence. Factors that increase the index structure paramet~r are strong solar heating, very dry grounds, clear nights with little wind, low ~ltih;d~s, and surface t(jlr~·finess. Factors that pr'o~ide reductions in the index structure parameter are heavy overcast, wet surfaces, high winds, and • . , , r-Q.-> ''J/ ' hIgh altItudes. , ,~~;., '\ It is useful to estimate C;, as a function of weather conditions. Computer programs such as IMTURB and PROTURB have been developed by the U.S. Anny Atmospheric Sciences Lab for these types of estimations. Also, a simple model [7, 8] has been developed using data taken by the U.S. Anny Night Vision and Electronic Sensors Directorate. It is based on meteorological parameters and d' )L~I the Talmudic~dnc'ept of temporal hours. '. 'Je ',i/Talmudic hours are not 60 minutes, but are 1I12th of the time between sunrise and sunset. In the winter~ temporal hours are shorter than 60 minutes and in , the summer they are longer than 60 minutes. An example is that 3 temporal hours /'~corresponds to 1I4th of the daylight time.' A weighting factor is used in the estiinate of C~ and is related to the temporal hour. The weighting factor is given in Table 6.4. A regression model was _ perforI1)ed on C~ as a function of tl)is weighting factor, the temperature, relative ." humidity, and wind speed. l?ased' on, the temporal hour and this meteorological data, the index structure parameter in m~2/3 can be estimated by Nf
C_n² = 3.8 × 10⁻¹⁴ W + 2.0 × 10⁻¹⁵ T − 2.8 × 10⁻¹⁵ RH + 2.9 × 10⁻¹⁷ RH² − 1.1 × 10⁻¹⁹ RH³
       − 2.5 × 10⁻¹⁵ WS + 1.2 × 10⁻¹⁵ WS² − 8.5 × 10⁻¹⁷ WS³ − 5.3 × 10⁻¹³   (6.23)
where W is the weight given for the temporal hour in Table 6.4, T is the air temperature in Kelvin, RH is the relative humidity in percent, and WS is the wind speed in meters per second.
,';1 "'I Table 6.4 Temporal hour and index structure parameter weight Event Temporal hour Relative weight interval (W) Night ... until -4 O. I I Night -4 to -3 0.11 Night -3 to -2 0.07 Night -2 to -1 0.08 Night -1 to 0 0.06 Sunrise o to I 0.05 Day 1 to 2 0.1 Day 2 to 3 0.5 I Day 3 to 4 0.75 Day 4 to 5 0.95 Day 5 to 6 1.00 Day 6 to 7 0.9 Day 708 0.8 , ,Day 8 to 9 0.59 Day 9 tolD 0.32 IOta 11 Day 0.22 Sunset 11 to 12 0.1 Night 12 to 13 0.08 over 13 Night 0.13 .-J'~l>.
c:)
I
,
",
;': '
l
,J >,1
\ ',\rY',
9
I
()'
'
'I'
i,"t, ...J " ,! ._f"'.} .-o[ ;.
~:
,.T"
This model has been validated over desert sand as well as vegetation surfaces. It is valid for temperatures of 9 to 23 deg C, relative humidities of 14 to 92%, and wind speeds of up to 10 m/s. The model applies to index structure parameters at heights of up to 15m. For heights that are not near Earth's surface (beyond 15m), C_n² can be scaled according to the Tatarski [9] model

C_n²(h) = C_n²(0) h^(−4/3)   (6.24)

where h is the height in meters and C_n²(0) is the ground-based index structure parameter. A modification to this model is provided by Sadot, et al. [8] for cases that include significant aerosol effects.

The effect of turbulence on EO systems is most pronounced for coherent systems (lasers), where the propagation of the optical wavefront is most critical. Strong turbulence also affects incoherent imaging systems by image blurring and smearing, with the loss of high-frequency information. The effects of turbulence on imaging through the atmosphere can be characterized by atmospheric modulation transfer functions (MTF). While good progress has been seen in the modeling of atmospheric turbulence, there are no accepted standard turbulence models.
Example

Determine the index structure parameter for mid-day at a temperature of 27 deg C, relative humidity of 40%, and a wind speed of 2 m/s. The solution is determined by using the following parameters in (6.23):

W = 1.0, T = 300 deg K (27 deg C), RH = 40%, WS = 2 m/s

These parameters give an index structure parameter of 3.5 × 10⁻¹⁴ m⁻²/³.
Example

Determine the index structure parameter for a height of 50m given that the ground-based index structure parameter is 2 × 10⁻¹⁴ m⁻²/³. The solution is determined using the Tatarski model of (6.24):

C_n²(50) = 2 × 10⁻¹⁴ (50)^(−4/3) = 1.09 × 10⁻¹⁶ m⁻²/³

Note that the index structure parameter decreases significantly for a height of only 50m.
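The Tatarski scaling of (6.24) in the same style, checked against the example above:

```python
def cn2_at_height(cn2_ground, h_meters):
    # (6.24): Cn2(h) = Cn2(0) * h^(-4/3), h in meters (h > 0)
    return cn2_ground * h_meters ** (-4.0 / 3.0)

print(f"{cn2_at_height(2e-14, 50.0):.2e} m^-2/3")   # ~1.09e-16
```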
6.7 ATMOSPHERIC MODULATION TRANSFER FUNCTION
In the previous sections, various atmospheric phenomena have been reviewed that affect the optical radiation. Absorption and scattering can be viewed as a reduction in the amount of radiation that reaches a sensor, while scattering and turbulence result in image blurring and loss of detail. The blurring is quantified to describe its overall degradation effect on sensor performance. This degradation is characterized in terms of an atmospheric MTF. Atmospheric MTF can roughly be described as a reduction in contrast as a function of spatial frequency. The atmospheric MTF can be separated into two MTFs: an aerosol MTF and a turbulence MTF. This approach is not exact for atmospheric modeling but provides a first-order approximation. A limitation to this approach exists because MTF theory is based on linear shift-invariant processes. Turbulence is not necessarily uniform across an image but is often assumed so for modeling purposes.

The atmospheric turbulence MTF has two forms depending on the exposure time. For the long-exposure-time case, the MTF is given by

MTF_LE(ξ) = exp[−57.3 C_n² λ^(−1/3) R ξ^(5/3)]   (6.25)
where ξ is the spatial frequency in cycles per radian, R is the path length in meters, and λ is the wavelength in meters. If R is given in kilometers, then ξ can be plotted in cycles per milliradian. The term short-exposure time is defined by Goodman [10] as the "time required to freeze the atmospheric degradation and thus eliminate the time-averaging effects." This depends on wind velocity but is generally less than 0.01 to 0.001 sec. The short-exposure turbulence MTF is given by

MTF_SE(ξ) = exp[−57.3 C_n² λ^(−1/3) R ξ^(5/3) (1 − μ(λξ/D)^(1/3))]   (6.26)
where μ is 1 for the nearfield and 0.5 for the farfield, and D is the sensor's entrance pupil diameter. The nearfield is defined when D ≫ (λR)^(1/2) and the farfield when D ≪ (λR)^(1/2).
Example

Determine the turbulence MTF for a long-exposure-time sensor that operates in the longwave over a horizontal ground path length of 8 km. The atmospheric conditions are such that the index structure parameter is 5 × 10⁻¹⁴ m⁻²/³. The solution is given by the long-exposure MTF of (6.25). Because the sensor is a longwave sensor, a 10-μm wavelength is assumed. The MTF is shown in Figure 6.9. Note that while (6.25) was developed for units of cycles per radian, the MTF of Figure 6.9 has been plotted against cycles per milliradian.
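A sketch of the turbulence MTFs as reconstructed in (6.25) and (6.26), evaluated for the conditions of Figure 6.9; the constant 57.3 follows the form of (6.25) given above:

```python
import math

def mtf_long_exposure(xi_cyc_per_rad, cn2, wavelength_m, range_m):
    # (6.25): long-exposure turbulence MTF
    return math.exp(-57.3 * cn2 * wavelength_m ** (-1.0 / 3.0)
                    * range_m * xi_cyc_per_rad ** (5.0 / 3.0))

def mtf_short_exposure(xi, cn2, lam, R, D, mu):
    # (6.26): short-exposure form; mu = 1 (nearfield) or 0.5 (farfield)
    correction = 1.0 - mu * (lam * xi / D) ** (1.0 / 3.0)
    return math.exp(-57.3 * cn2 * lam ** (-1.0 / 3.0) * R
                    * xi ** (5.0 / 3.0) * correction)

# Figure 6.9 conditions: Cn2 = 5e-14, 10-um wavelength, 8-km path.
for f_mrad in (5, 10, 15, 20):
    xi = f_mrad * 1e3        # cycles/mrad -> cycles/rad
    print(f_mrad, round(mtf_long_exposure(xi, 5e-14, 10e-6, 8000.0), 3))
```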
[Figure 6.9 plots the turbulence MTF, falling from 1.0 toward zero over spatial frequencies of 0 to 20 cycles/mrad.]

Figure 6.9 Turbulence MTF.
The aerosol MTF is a strong function of sensor FOV, sensitivity, and limiting bandwidth. The aerosol MTF comprises two components: a low spatial frequency function and a high spatial frequency function. The aerosol MTF is approximated [11] by

MTF_a(ξ) = exp[−A_a R − S_a R (ξ/ξ_c)²]   for ξ ≤ ξ_c
MTF_a(ξ) = exp[−(A_a + S_a) R]            for ξ > ξ_c   (6.27)

where ξ_c is the cutoff spatial frequency, which is approximately a/λ, and a is the particulate radius. A_a is the atmospheric absorption coefficient and S_a is the atmospheric scattering coefficient. These quantities are usually measured, but Sadot and Kopeika [11] have outlined a detailed method for calculating these parameters. Holst [12] states that aerosol contributions to atmospheric MTF are significant when the target radiance scattering approaches the radiance scattering attributed to the background. For small-signal targets such as tanks in tactical background scenarios in the infrared, the target scattered signal is small compared with the background scattered signal, so the aerosol MTF can be neglected. For significant target scattering such as rocket plume signals against sky backgrounds in electro-optical wavelengths, aerosols must be considered.
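The piecewise aerosol MTF of (6.27) in code. The coefficients, path length, and cutoff frequency below are assumed illustrative values, not measurements:

```python
import math

def aerosol_mtf(xi, A_a, S_a, R_km, xi_c):
    # (6.27): exp[-A_a*R - S_a*R*(xi/xi_c)^2] for xi <= xi_c,
    #         exp[-(A_a + S_a)*R]             for xi >  xi_c.
    # The two branches meet continuously at xi = xi_c.
    if xi <= xi_c:
        return math.exp(-A_a * R_km - S_a * R_km * (xi / xi_c) ** 2)
    return math.exp(-(A_a + S_a) * R_km)

# Assumed values: absorption/scattering coefficients per km, 5-km path.
A_a, S_a, R, xi_c = 0.05, 0.10, 5.0, 2.0
for xi in (0.5, 1.0, 2.0, 4.0):
    print(xi, round(aerosol_mtf(xi, A_a, S_a, R, xi_c), 3))
```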
6.8 MODELS

An accurate atmospheric transmission calculation requires that all molecular, aerosol, and precipitation effects be taken into account. When exact results are required, such as the design and manufacturing of a sensor system, a detailed model must be used. Three such models are LOWTRAN, FASCODE, and MODTRAN. These models are widely used in both government and private industry. The intent here is to make the reader aware of these models and their basic functionality. Until recently, LOWTRAN was the most widely used and accepted model for atmospheric transmittance. The original computer code was developed at the Air Force Geophysics Laboratory in Cambridge, Massachusetts. The commercial software was developed by Ontar Corporation. This model was broken down into two components: absorption and scattering. The resolution of the LOWTRAN model in wavenumbers (1/λ) is 20 cm⁻¹. The code calculates transmittance from 0.25 to 28.5 μm. Four example curves generated with LOWTRAN are shown in Figure 6.10. The curves shown are the transmission of a 1-km horizontal atmospheric path length for the climates given. It is readily apparent that water plays a large part in the decrease in longwave infrared transmission. In the cooler climates, the absolute humidity is much lower, so the water absorption is less. In the warmer climates, the air can hold more gaseous water, so the transmission is less. The water does not affect the visible or midwave as much as it does the longwave. This can be verified in LOWTRAN with the water absorption values. Also, note that the three sensor bands addressed in this text (electro-optical, midwave, and longwave) have atmospheric windows where the transmission is good. Also, longer path lengths (e.g., 10 km) produce a much more severe transmission curve.

MODTRAN is currently the most widely used and accepted model for atmospheric transmittance. It evolved from LOWTRAN and computes both transmission and radiance at 2-cm⁻¹ wavenumber resolution. An example of MODTRAN transmission is shown in Figure 6.11. The transmission shown is for a 1-km horizontal path at sea level. FASCODE is a line-by-line model that computes the transmission of each absorption line using the spectroscopic parameters in the HITRAN database. MODTRAN uses a two-parameter band model, derived from HITRAN, as its database of absorption parameters. Both models have the capability to handle arbitrary geometries and view conditions, and allow the user to select from several model atmospheres, aerosol profiles, and other weather conditions. The commercial versions of MODTRAN and FASCODE are available from the Ontar Corporation (www.Ontar.com). More information about the HITRAN database can be obtained from their web site www.HITRAN.com.
6.9 DISCUSSION

To determine an exact amount of radiation originating from an object and propagating to the sensor detector (i.e., less path radiance and background radiance), one must consider the atmospheric transmission as a function of wavelength. The radiation received by the detector within a sensor can be expressed by

Φ_sensor = ∫ from λ₁ to λ₂ of M_source(λ) A_source Ω_sensor τ_atm(λ) τ_sensor(λ) dλ   (6.28)

where

M_source(λ) = the source (Lambertian) radiance in W/cm²-sr-μm,
A_source = the source area seen within the sensor field-of-view in cm²,
Ω_sensor = the solid angle of the sensor entrance pupil as seen by the source in sr,
τ_atm(λ) = the transmission of the atmosphere between the source and the sensor, and
τ_sensor(λ) = the transmission of the sensor optical path (transmission is unitless).

Once the wavelength integration of the object radiance, atmospheric transmission, and sensor transmission is performed, there is no method to separate the source and atmospheric terms unless one of two assumptions holds: (1) the source exitance is constant within the spectral band, or (2) the atmospheric transmission is constant over the spectral band.
[Figure 6.10 shows four LOWTRAN plots of atmospheric transmission for a 1-km horizontal path versus wavelength (2 to 16 μm): Midlatitude Summer, Midlatitude Winter, Tropical, and Subarctic Winter.]

Figure 6.10 LOWTRAN transmission curves.
[Figure 6.11 shows a MODTRAN (PcModWin) plot of transmission versus wavelength (5 to 20 μm, with a wavenumber scale of 4000 to 500 cm⁻¹) for a 1-km horizontal path at sea level.]

Figure 6.11 MODTRAN results (courtesy of J. Schroeder, Ontar Corp.).
A common, but incorrect, practice is to perform a band transmission measurement of the atmosphere by viewing a near blackbody source and then viewing a distant blackbody source with a broadband sensor that matches the band of interest. The blackbodies are both at the same temperature to provide a similar signal, and the collection geometries are accounted for. The ratio of the near power to the distant power is considered the average atmospheric band transmission. Even assuming the sensor transmission is constant and taking the constants outside the (6.28) integral, this approximation gives
τ̄_band = [∫ from λ₁ to λ₂ of M(λ) τ_atm(λ) dλ] / [∫ from λ₁ to λ₂ of M(λ) dλ]   (6.29)
This measurement gives a weighted transmission that is source dependent. An average transmission that is independent of source distribution cannot be taken from (6.29). However, the assumption that the source distribution is relatively constant for low-temperature blackbody sources over the 8- to 12-μm spectral band has been known to give some reasonable error bounds. This is not the case for EO sources and 3- to 5-μm sensor bands.
A second method for determining average band transmission has become popular with the spectral radiometer. The spectral radiometer measures the power on the detector (i.e., equation 6.28) in fine spectral slices such that the ratio of the near- and far-detector responses can be normalized over wavelength to give τ_atm(λ). The average band transmission is then computed:

τ̄_band = [1/(λ₂ − λ₁)] ∫ from λ₁ to λ₂ of τ_atm(λ) dλ   (6.30)
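The difference between the source-weighted average of (6.29) and the unweighted average of (6.30) is easy to demonstrate numerically. All spectra below are made-up illustrations on a coarse 5-sample band:

```python
def band_average(values, weights):
    # Discrete version of (6.29)/(6.30): ratio of weighted sums.
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

tau_atm  = [0.2, 0.4, 0.6, 0.8, 0.9]    # illustrative spectral transmission
src_low  = [2.0, 1.0, 0.5, 0.2, 0.1]    # source peaked where tau is low
src_high = [0.1, 0.2, 0.5, 1.0, 2.0]    # source peaked where tau is high

print(round(band_average(tau_atm, [1.0] * 5), 3))   # (6.30): 0.58
print(round(band_average(tau_atm, src_low), 3))     # (6.29): ~0.36
print(round(band_average(tau_atm, src_high), 3))    # (6.29): ~0.79
```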
"
('j'
;'; :
'j
~.-:.
I"" i
'
"
!.
L [
';'--
I ''
I ,
f,
L
While this technique gives a non-source-weighted atmospheric transmission over a band of interest, it still does not describe the detector response through the atmosphere for a given source. Again, (6.28) describes the detector response as a function of a spectral atmospheric transmission and source-radiance product. This response is not equal to the average source radiance and average atmospheric transmission product. Now that average atmospheric band transmission quantities have been presented in a negative light, it is time to realize that we do not live in a perfect world. One does not always have the luxury of working with spectral quantities, and instruments for spectral measurements are not always available. It is a good idea, when it is necessary to work with average values, to quantify the errors associated with particular assumptions. While the above approximations may provide acceptable errors in the longwave infrared band, they may not in the electro-optical and midwave infrared bands. Either way, quantification of error bounds is always a good sanity check. Atmospheric transmission is the subject of many research projects, and our knowledge is continually becoming more refined. MODTRAN, LOWTRAN, and other models are being updated frequently to account for more atmospheric phenomena. The data presented in this chapter are only a brief introduction to the issues of atmospheric transmission applied to I²R and EO systems. While simple models are useful for back-of-the-envelope analyses, more exact system responses can be accomplished with sophisticated computer models.
" EXERCISES
6.1 A CO₂ laser (10.6 μm) provides a peak intensity of 10 W/sr and is directed towards a detector 1 km away. Provide a rough estimate of the laser irradiance on the detector on a typical day (use Figure 6.2).

6.2 A particular atmospheric extinction coefficient for a given wavelength is 3.2 per kilometer. Calculate the transmission for a 4-km path length.
6.3 Determine the dominant absorption component of light with a (i) 4.2-μm wavelength, (ii) 6-μm wavelength, and (iii) 9-μm wavelength.
6.4 Determine the most sensitive wavelength of light for a material with a strong bandgap energy of 0.1 eV.
6.5 Describe the type of scattering pattern caused by fog droplets for (i) a He-Ne laser at 0.6328 μm, (ii) an Nd:YAG laser at 1.06 μm, and (iii) a CO₂ laser at 10.6 μm.
6.6 Calculate the temperature structure parameter for a 10 deg K differential temperature over a 2m vertical path length.

6.7 Calculate the index structure parameter for a typical desert day using the temperature structure parameter found in Exercise 6.6.
6.8 Determine the index structure parameter for a cool midnight with a temperature of 15 deg C, a relative humidity of 30%, and calm (negligible) winds.
,,t"
1
6.9 Determine the ground-based index structure parameter given that the index structure parameter at a 25m height above the ground is 4.0 × 10⁻¹⁶ m⁻²/³.

6.10 A longwave infrared system (10-μm average wavelength) is operating in a farfield, long-exposure scenario. The ground-based index structure parameter is estimated to be 1 × 10⁻¹⁵. The sensor is at an altitude of 25m and is viewing a target that is 5 km away in a horizontal direction. Determine the long-exposure turbulence MTF of the system. Determine the MTF for EO and midwave IR systems in the same conditions.
REFERENCES

[1] Neilburger, Edinger, and Bonner, Understanding Our Atmospheric Environment, W.H. Freedman, pp. 36-41, 1982.
[2] Streetman, B., Solid State Electronic Devices, New York, NY: Prentice-Hall, 1980.
[3] Waldman, G., and J. Wootton, Electro-Optical Systems Performance Modeling, Norwood, MA: Artech House, 1993.
[4] Seyrafi, K., and S. Hovanessian, Introduction to Electro-Optical Imaging and Tracking Systems, Norwood, MA: Artech House, p. 79, 1993.
[5] Seyrafi, K., and S. Hovanessian, Introduction to Electro-Optical Imaging and Tracking Systems, Norwood, MA: Artech House, p. 80, 1993.
[6] Military Handbook, Quantitative Description of Obscuration Factors for Electro-Optical and Millimeter Wave Systems Metric, DOD-HDBK-178(ER), pp. 2-11, 1986.
[7] Sadot, D., S. Shamriz, I. Dror, and N. Kopeika, "Prediction of Overall Atmospheric Modulation Transfer Function With Standard Weather Parameters: Comparison With Measurements With Two Imaging Systems," Optical Engineering, Vol. 34, No. 11, Nov. 1995.
[8] Sadot, D., and N. Kopeika, "Forecasting Optical Turbulence Strength on the Basis of Macroscale Meteorology and Aerosols: Models and Validation," Optical Engineering, Vol. 31, No. 2, p. 200, Feb. 1992.
[9] Tatarski, V., Wave Propagation in Turbulent Medium, New York: McGraw-Hill, 1961.
[10] Goodman, J., Statistical Optics, New York: Wiley, 1985.
[11] Sadot, D., and N. Kopeika, "Imaging Through the Atmosphere: Practical Instrumentation-Based Theory and Verification of Aerosol Modulation Transfer Function," Journal of the Optical Society of America A, Vol. 10, No. 1, p. 1017, Jan. 1993.
[12] Holst, G., Electro-Optical Imaging System Performance, Orlando, FL: JCD Publishing, 1996.
[13] Lenoble, J., Atmospheric Radiative Transfer, Deepak Publications, 1993.
[14] LOWTRAN Manual, Ontar Corp, North Andover, MA.
Chapter 7

Optics
,",
. !:
,
--
... , I
L.:~
! ;;
. ,I
, .
I
,··L
01' .
,..-.
I,"
The optics of an IR or EO system collect radiation from a scene and project an image of the scene onto the system detector array. In the design of the optics, it is important to understand the limitations that optical components introduce in the overall system parameters. Among the parameters that the optics define are spatial and spectral properties of the sensor system, resolution, and field of view (FOV). While the optics play a key role in defining these variables, other sensor system components, such as detector array characteristics, also have a strong system influence.

There are four basic goals in the design of an optical system. The first goal is to maximize overall system performance. The second is to maximize the resolving power for the desired task while maintaining good area coverage. The third goal is to maximize the amount of image flux that is collected, and the fourth goal is to minimize system complexity and weight. We focus here on the first two goals.

The effects of the optics (Figure 7.1) on overall system performance can be described by (1) the diffraction effects associated with the optics, (2) the geometric blur caused by imperfect imaging lenses, and (3) the amount of signal collected by the optical system. Before discussing these effects, we provide some background material on the interaction of light and glass.
[Figure 7.1 is a block diagram of the sensor system: targets and background, optics (marked "You are here!"), scanner, electronics, display, and human vision.]

Figure 7.1 Optics in the sensor system.
7.1 LIGHT REPRESENTATION AND THE OPTICAL PATH LENGTH
In the analysis of optical systems, one must consider how to represent light at each stage of the analysis process. There are three primary methods, each with its own attributes. The first concept of light is that it is made up of a stream of photons. Each photon has an energy [1]:
E = hν   (7.1)
where E is the energy of the photon in Joules, h is Planck's constant of 6.6252 × 10⁻³⁴ Joule-seconds, and ν is the frequency of the light in Hertz. This representation is convenient in discussions where light interacts with a material, such as the detector material. The second representation of light is that of an electromagnetic wave, which is written as

E = E₀ sin(ωt − kz)   (7.2)
Wh!E~""
Eo is the wave amplitude, ffi is the angular frequency, t is 0.!.n:'\_~-. is the propagation constant of 21T./;"", and z is distance. Here, the light is referred to as a traveling wave because it is a function of both time and space. The wave nature of _. light is very different from the particle or photon nature of light, just as the molecular properties of water are different from the wave characteristics of ocean water at the beach. In fact, we can calculate the approximate number of water molecules in a beach wave, and it has been argued how many photons actually make up a wave of light. ~ ___ The third representation of light is through the use of rays, as shown in Figure 7.2.
[Figure 7.2 shows a set of wavefronts with rays drawn perpendicular to them.]
Figure 7.2 Light rays.
Rays are defined as lines in the direction of propagation that are everywhere perpendicular to the wavefronts. Rays depict light as traveling in straight lines and allow optical systems to be analyzed with simple techniques in a field called geometric optics. These three treatments of light have been summed up by Saleh and Teich [2]: "The theory of quantum optics provides an explanation of virtually all optical phenomenon. The electromagnetic theory of light provides the complete treatment within the confines of classical optics.... Ray optics is the limit of wave optics when the wavelength is short." One of the laws that governs the propagation of light is known as Fermat's principle. Fermat's principle, or the Principle of Least Time, states that "a light ray extending from one point to another will, after any number of reflections and refractions, follow the path requiring the least transit time [3]." Consider the ray in Figure 7.3.
[Figure 7.3 shows a ray traveling from point A in a medium of index n₁ across a boundary into a medium of index n₂ to point B.]

Figure 7.3 Optical path.

The path of the ray is not the path of least distance if more than one medium is involved. The speed of light in a material is c/n, where c is the speed of light in a vacuum and n is the material's index-of-refraction (or refractive index). Fermat's principle can be restated in terms of an optical path length (OPL):
OPL = ∫ from A to B of n(x) dx   (7.3)
The refractive index is given as a function of position along the path. It can be shown that Fermat's principle corresponds to the path with the smallest OPL.
7.2 REFLECTION AND SNELL'S LAW OF REFRACTION
When a ray of light strikes a boundary of two dissimilar, transparent surfaces, it is divided into two components. One component is transmitted into the second medium, and the other is reflected from the boundary surface. A diagram of this is shown in Figure 7.4.
Figure 7.4 Reflection and refraction at an interface. (Incident ray I, reflected ray R, and transmitted ray T at the boundary between media n₁ and n₂, with surface normal N and angles θᵢ, θᵣ, and θₜ.)
I is the incident light ray, R is the reflected light ray, and T is the transmitted light ray. N is the normal to the boundary surface, n₁ is the index of refraction of medium 1, and n₂ is the index of refraction in medium 2. θᵢ is the angle of incident light, θᵣ is the angle of reflected light, and θₜ is the angle of transmitted light. These angles are defined as the angle between each ray and the normal to the surface. In the case shown, n₂ > n₁. The law of reflection is stated as follows [4]: the reflected ray lies in the plane of incidence, and the angle of reflection equals the angle of incidence. From the variables defined in the figure above, the law of reflection states

θᵣ = θᵢ   (7.4)
The law of refraction, also known as Snell's law, is defined [5] as: the refracted ray lies in the plane of incidence, and the sine of the angle of refraction bears a constant ratio to the sine of the angle of incidence. This means that when light passes from one medium to another, the speed of the light as well as the propagation direction changes. The change is the ratio of the speed of light in one
medium to the speed of light in the other medium. This ratio is defined as the index of refraction. That is:
n₁ sin θᵢ = n₂ sin θₜ   (7.5)
The index of refraction of a vacuum is 1, and ordinary air has an index of refraction of approximately 1.0003. Refractive indices given in glass catalogs are usually with respect to air, so vacuum calculations in lens design codes use 1/1.0003 as the refractive index of the vacuum. The index of refraction of a material is also a function of wavelength, but correction can be applied in the design of an imaging system to minimize spectral variation over a wavelength band of interest. Typical window glass has a refractive index of 1.5, whereas water has a refractive index of around 1.33.
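A two-line sketch of Snell's law as reconstructed in (7.5):

```python
import math

def refraction_angle(theta_i_deg, n1, n2):
    # (7.5): n1*sin(theta_i) = n2*sin(theta_t)
    s = n1 * math.sin(math.radians(theta_i_deg)) / n2
    return math.degrees(math.asin(s))

# Air (n ~ 1.0003) into window glass (n ~ 1.5):
print(round(refraction_angle(30.0, 1.0003, 1.5), 2))   # ~19.5 degrees
```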
7.3 GAUSS'S EQUATION

Ray analysis and geometrical optics are analogous. They are first-order analyses based on the geometric refraction of light with glass. The analyses often contain the thin-lens approximation, where lenses are simplified to perfect imaging devices. This simplification is performed by ignoring aberrations and diffraction. One of the more common first-order analysis tools is Gauss's [6] thin-lens equation:
1/o + 1/f = 1/i   (7.6)
The sign convention for this equation is as follows. o is the distance from the lens to the object, i is the distance from the lens to the image, and f is the lens focal length, which can be positive or negative. An o or i to the left of the lens is negative, and an o or i to the right of the lens is positive. Remember the convention that light waves move from left to right across the analysis page. Lenses may be represented in many ways, as illustrated in Figure 7.5. The positive and negative lens types are given along with their representations. The single focal-length actions of the lenses are also shown. For the cases shown, the light is assumed to traverse from left to right. Parallel rays, or collimated light, entering a positive lens from the left will focus to a point at one focal length to the right of the lens. A point source located one focal length to the left of a positive lens will produce collimated light to the right of the lens. For negative lenses, collimated light entering the lens will diverge as if the light had originated from a point located one focal length to the left of the lens. Light coming to a focus one focal length to the right of the lens results in collimated light in the space to the right of the negative lens.

[Figure 7.5 shows lens types and their schematic representations: positive (or converging) focal length (biconvex, plano convex, meniscus convex), with collimated light into the lens focusing at f and expanding source light from f emerging collimated; and negative (or diverging) focal length (biconcave, plano concave, meniscus concave), with collimated light into the lens diverging from f and converging source light at f emerging collimated.]

Figure 7.5 Lens nomenclature and representations.

Example

Gauss's equation can be demonstrated in the example of Figure 7.6. An object is placed 10 cm to the left of a 5-cm focal-length lens. From (7.6),

1/(−10) + 1/5 = 1/i

so the image location is determined to be i = 10 cm.

[Figure 7.6 shows the object at −10 cm and the inverted image at +10 cm along the optical axis, with f = 5 cm.]

Figure 7.6 Example of Gauss's equation.
This configuration is known as a 2f-2f, unity magnification system. If an object is placed two focal lengths to the left of a positive thin lens, the image is located at two focal lengths to the right of the lens. The transverse, or lateral, magnification of a lens system is defined as the image distance over the object distance:
M_T = i/o   (7.7)
In the previous example, M_T = 10/(−10) = −1. The image is the same size as the object, but the image is inverted and reverted (i.e., top is down and left is right). Note that the arrow representing the image in Figure 7.6 is pointing downward to show the inversion.
Example

Another example is given in Figure 7.7. An object is placed 30 cm from a thin lens of focal length 20 cm. Determine the transverse magnification. Using (7.6) with f = 20 cm,

1/(−30) + 1/20 = 1/i,   i = 60 cm

M_T = 60/(−30) = −2

[Figure 7.7 shows the object at −30 cm and the inverted, magnified image at +60 cm along the optical axis.]

Figure 7.7 Magnification example.
Again, Gauss's equation is used to determine that the image is located 60 cm to the right of the lens. The transverse magnification gives an inverted image that is twice as large as the object.
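Both worked examples follow from (7.6) and (7.7) and can be scripted with the book's sign convention (distances to the left of the lens negative); a minimal sketch:

```python
def image_distance(o, f):
    # Gauss's equation (7.6): 1/o + 1/f = 1/i
    return 1.0 / (1.0 / o + 1.0 / f)

def magnification(o, i):
    # (7.7): M_T = i / o
    return i / o

for o, f in ((-10.0, 5.0), (-30.0, 20.0)):
    i = image_distance(o, f)
    print(f"o={o} cm, f={f} cm -> i={i:.0f} cm, M_T={magnification(o, i):.0f}")
# o=-10 cm, f=5 cm  -> i=10 cm, M_T=-1
# o=-30 cm, f=20 cm -> i=60 cm, M_T=-2
```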
Example
A virtual image occurs when the object is inside the first focal length of a lens. Consider Figure 7.8. An object 4 cm in height is placed in front of a 20-cm focal-length lens by 16.7 cm. Determine the size and location of the image.
I
:- '" 20
l
CIll
I
.
I
-16.7
2':
i
'1 .1
';
- - . . . -=j '"
-IO! 2
I
CIlI
()
I
t
!
Optical axis
~I
.l
-'i
The size of. the. virtu.'.ll imag~is 6.06 x 4 cm, or 24:2 em high .. Virt~al ima~es cannot provIde tmage flux onto film or a detector wIthout other Imagmg OptICS. However, one may view a virtual image through the right side of the lens. Fdr the right side of the lens shown in Figure 7.8, the image would appear at the given image position and 24.2 em in size. Finally, we discuss negative lenses (lenses that are generally thinner in the center than on the periphery). Negative lenses alone cannot provide a real image (images that can be formed on a physical surface) of an object, .only a virtual image. However, real images can be formed with a c6mbMation of positive and negative lenses (if the composite is considered and the object distance is greater than the total focal length).
J
Example
j
Consider Figure 7.9. An object 5 em in.height is placed 33.3 em from a diverging lens of focal length -12.5 centimeters. Calculate the position and size of the tmage.
I· !
f=-12.5em
I I 1 -+-=-33.3 -12.5 i
()
i =
-33.3 em
-9.1 em
Figure 7.9 Negative lens example.
r
I
Figure 7.8 Virtual image example.
.!
\
Air = (-lOll -16. -) = 6.06
-16.7 em
-10 J.2 cm
f
_i)
I elll
Optical axi$
r '.
~l!: .. -
~
r .
I:
~lr l
~'l' .·F
.
r.
Optics
L
. !
'L "~
[... 'it
·" C [ ,
.
... ...
':'
161 I
Sometimes lenses are specified by optical power rather than by their focal length. The power of a lens is simply 1/f. If the focal length is given in meters, the power of a lens is specified in diopters, which is the same as 1/meters. This specification is common for eyeglass lenses. The use of diopters is limited in other applications.

7.4 VERGENCE
,\ \,
".'
! --
, .... '11' )."./"""
Recall that the convention for the direction of light propagation (on paper) is from left to right. Consider the three cases shown in Figure 7.10, where this convention holds true. The first case is an expanding beam that is diverging from a point source of light. The second case is light that is neither converging nor diverging, but propagates as collimated light. The third case is a beam that is coming to a focus at a point. All these cases can be described with the analytical technique known as vergence [7].
i"" l..i: t •
I"L.
[Figure 7.10 shows three wavefront cases: diverging (negative vergence), plane (zero vergence), and converging (positive vergence).]

Figure 7.10 Examples of vergence.
The curvature of the light wavefront is inversely proportional to the length of the wavefront radius:

C ∝ 1/R   (7.8)
Vergence can be thought of as wavefront curvature, with a modification for the medium refractive index:

V = n/R   (7.9)
Usually, units of vergence are in 1/meters, or diopters. Some simple rules govern the vergence sign convention:

1. Light travels from left to right.
2. All distances are measured from a reference surface (where vergence is calculated).
3. Distances to the left of the reference surface are negative and distances to the right are positive.
4. The radius of curvature is measured from a wavefront to the center of curvature: (i) radii to the left are negative; (ii) radii to the right are positive; (iii) plane waves give zero vergence.

Example
An example here is appropriate, so consider the point source shown in Figure 7.11. Determine the vergence of light 2 m from a point source. The reference is the location at which vergence is calculated (shown by the dotted line). The radius is to the left of the wavefront, so it is negative:

\[ V = \frac{n}{R} = \frac{1}{-2\ \text{m}} = -0.5\ \text{diopters} \]
Figure 7.11 Vergence example.
Example

A second vergence example shows (in Figure 7.12) the positive vergence that results in a focus. Determine the vergence of light 8 cm before the light comes to a focus in air. The center of curvature (the focus) lies to the right of the wavefront, so R = +0.08 m and V = 1/0.08 = +12.5 diopters. The vergence is high for high curvature; that is, the vergence becomes higher as the position nears the point source or focus position. Vergence is a function of location, and not a parameter of the system.
Figure 7.12 Converging example.

To determine the change in vergence from one plane to another without the requirement of a point source or focus position, we derive a vergence propagation relationship. Consider the vergence positions shown in Figure 7.13.

Figure 7.13 Vergence propagation.
We know that V₁ (the vergence at plane 1) corresponds to a focus at a distance l₁:

\[ V_1 = \frac{n}{l_1} \tag{7.10} \]
The same type of relationship can be written for the vergence at plane 2, where the focus is a distance t closer to the wavefront at plane 2 than to the wavefront at plane 1:

\[ V_2 = \frac{n}{l_1 - t} \tag{7.11} \]
With the substitution of (7.10) into (7.11),

\[ V_2 = \frac{n}{\frac{n}{V_1} - t} = \frac{V_1}{1 - \frac{t}{n}V_1} \tag{7.12} \]

Equation (7.12) is known as the vergence propagation equation. Consider the example of an expanding wavefront propagating through water (n = 1.33). The known wavefront has a vergence of −0.5 diopters. We are interested in the vergence of the light 10 m away from the known wavefront. Using (7.12), we can compute the new-position vergence to be −0.105 diopters.

The next property associated with vergence is that of surface or lens interaction. The power of a surface or lens is simply added to the input vergence to obtain an output vergence:
\[ V' = V + P \tag{7.13} \]

where V is the input vergence, V′ is the output vergence, and P is the surface or lens power. The power of the curved surface shown in Figure 7.14 is (n₂ − n₁)/R.
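Equations (7.12) and (7.13) lend themselves to a two-function toolkit. The sketch below is not from the text and is only a minimal illustration; it reproduces the water example above:

```python
# Vergence propagation (7.12) and lens/surface interaction (7.13),
# with vergence in diopters and distance in meters.

def propagate(v: float, t_m: float, n: float = 1.0) -> float:
    """Vergence after propagating a distance t through index n, per (7.12)."""
    return v / (1.0 - (t_m / n) * v)

def through_lens(v: float, power_diopters: float) -> float:
    """Output vergence V' = V + P, per (7.13)."""
    return v + power_diopters

# A -0.5-diopter wavefront propagated 10 m through water (n = 1.33):
print(propagate(-0.5, 10.0, 1.33))    # -> about -0.105 diopters
```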
Recall that the power of a lens is the inverse of the lens focal length. The Lensmaker's formula [8] allows one to find the focal length and power of a lens using the lens curvatures and refractive indices:

\[ P = \frac{1}{f} = \frac{n_l - n_m}{n_m}\left(\frac{1}{R_1} - \frac{1}{R_2}\right) \tag{7.14} \]

where n_l is the refractive index of the lens, n_m is the refractive index of the medium surrounding the lens, and R₁ and R₂ are the radii of curvature of the lens surfaces (a center of curvature to the right is positive and to the left is negative). For a biconvex lens surrounded by air, R = R₁ = −R₂, and if n_m = 1, then
\[ P = \frac{1}{f} = \frac{2(n_l - 1)}{R} \tag{7.15} \]

Figure 7.14 Vergence and surface power.
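As a hedged illustration of (7.14) and (7.15), the following is a minimal sketch, not from the text; the lens values below are assumptions chosen for round numbers:

```python
# Lensmaker's formula (7.14): power from curvatures and indices.
def lens_power(n_lens: float, n_medium: float, r1_m: float, r2_m: float) -> float:
    return (n_lens - n_medium) / n_medium * (1.0 / r1_m - 1.0 / r2_m)

# Assumed biconvex lens in air: n_l = 1.5, R = R1 = -R2 = 0.10 m.
p = lens_power(1.5, 1.0, 0.10, -0.10)
print(p, 1.0 / p)   # -> 10 diopters, f = 0.10 m; matches P = 2(n_l - 1)/R
```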
Consider the case of the single 2f-2f example given in Figure 7.6. The vergence corresponding to the object light just to the left of the lens is V = 1/(−0.10 m) = −10 diopters. The lens has a power of P = 1/f = 1/(0.05 m) = 20 diopters. The vergence to the right of the lens is the vergence to the left added to the lens power, or V′ = −10 + 20 = 10 diopters. The image is located at the center of curvature of the exiting wavefront, at i = 1/V′ = 0.1 m, or 10 cm.
The transverse magnification can be derived for vergence using (7.7). Simply using the image distance over the object distance, it can be shown that

\[ M = \frac{V}{V'} \tag{7.16} \]

Again, the size of the image is given by the transverse magnification times the size of the object.
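The 2f-2f case above, restated numerically (a minimal sketch, not from the text):

```python
# The 2f-2f imaging case of Figure 7.6 worked with vergence.
v_in = 1.0 / -0.10        # object 10 cm to the left -> -10 diopters
v_out = v_in + 20.0       # the f = 5 cm lens adds P = 20 diopters
print(1.0 / v_out)        # image distance: 0.1 m (10 cm)
print(v_in / v_out)       # M = V/V' = -1: inverted, unity magnification
```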
7.5 MULTIPLE-LENS SYSTEMS
For multiple-lens systems, each lens is considered separately, and the system is worked from left to right (again, assuming that the light path is from left to right). The principles are the same as those presented in the previous sections. To illustrate the concepts, we will work a multiple-lens system first using vergence and then using Gauss's equation. Consider the multiple-lens system shown in Figure 7.15, where the system is imaging a scene very far away (i.e., the object distance is infinity).
Figure 7.15 Multiple-lens system: f₁ = 20 cm, f₂ = −6.25 cm, and f₃ = 10 cm, with 6-cm spacings between the lenses.
We are interested in the position of the system image. First, we know that the vergence to the left of the first lens is V₁ = 1/∞ = 0. In other words, the scene lies
far enough away from the input aperture of the optical system that the input wavefronts can be considered plane waves. Next, we can compute the vergence to the right of the first lens:
\[ V_1' = V_1 + P_1 = 0 + \frac{1}{0.20\ \text{m}} = 5\ \text{diopters} \tag{7.17} \]
The vergence on the right side of the first lens is then propagated to the left side of the second lens:
\[ V_2 = \frac{V_1'}{1 - t\,V_1'} = \frac{5}{1 - (0.06)(5)} = 7.14\ \text{diopters} \tag{7.18} \]
Again, the power of the second lens is added to V₂ to yield the vergence to the right of the second lens: V₂′ = V₂ + P₂ = 7.14 − 1/(0.0625 m) = −8.86 diopters. In the same manner, V₃ is found to be −5.78 diopters by using the propagation equation for the 6-centimeter distance, and V₃′ is found to be 4.22 diopters by adding the power of the third lens to V₃. Finally, the image distance (from the right of the third lens) is found:

\[ i = \frac{1}{V_3'} = \frac{1}{4.22\ \text{m}^{-1}} = 0.237\ \text{m} = 23.7\ \text{cm} \tag{7.19} \]
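Chaining the vergence operations sketched earlier reproduces this result; again, a minimal sketch, not from the text:

```python
# The Figure 7.15 system: three lenses in air, 6-cm spacings.
def propagate(v, t_m, n=1.0):
    return v / (1.0 - (t_m / n) * v)

v = 0.0                          # V1 = 0 (collimated input)
v = v + 1.0 / 0.20               # f1 = 20 cm    -> V1' = 5
v = propagate(v, 0.06)           # to lens 2     -> V2 = 7.14
v = v + 1.0 / -0.0625            # f2 = -6.25 cm -> V2' = -8.86
v = propagate(v, 0.06)           # to lens 3     -> V3 = -5.78
v = v + 1.0 / 0.10               # f3 = 10 cm    -> V3' = 4.22
print(1.0 / v)                   # image distance: about 0.237 m (23.7 cm)
```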
Note that the light to the right side of the first lens is converging and corresponds to a positive vergence. Once the light passes the second lens, the light diverges until it passes the third lens. The light is then refocused to a converging beam.

The same problem can be worked using Gauss's equation. Each lens is considered a separate imaging system. We take the first lens as the first imaging system with an object distance of o₁ = −∞. We can now find the location of the image from the first lens:

\[ \frac{1}{-\infty} + \frac{1}{20} = \frac{1}{i_1} \tag{7.20} \]
where the units are consistent in centimeters. The image distance of the first lens is found to be 20 cm to the right of the first lens, i₁ = 20 cm. If all other optical lenses were removed, a single real image would occur at this position. Because it is a real image, it can be viewed on a screen or detector array. The image of the first lens now becomes the object of the second lens: o₂ = i₁. Be careful here, because the second lens is not located at the same position as the first lens. The second lens is 6 cm to the right of the first lens, so i₁ is really 14 cm to the right of the second lens. That is, o₂ = 20 − 6 = 14 cm. The image of the second lens can now be found:
',i.J" r
L .:.1'
I·
;
'~'""'~J4.w.rr;::)AL.r..r:~!'lFlt-W""~~."" •• ".,',,)~...
(
'
.•
,..
..... _
'
... ..,.,....---~._,........~ ••••.. ~~ •• __ .......-~~~~_:~
r
----I .
Optics
\
\.
167
\
-+--=\4 -6.25 i2
I.
r·
The second image distance is found to be −11.29 cm, or 11.29 cm to the left of the second lens. Note that this image is virtual and can only be viewed through the second lens. The second image is also the object of the third lens (shifted, of course). Because the second image is 11.29 cm to the left of the second lens, and the third lens is 6 cm to the right of the second lens, o₃ = −11.29 − 6.0 = −17.29 cm. The third and final image can now be found:
\[ \frac{1}{-17.29} + \frac{1}{10} = \frac{1}{i_3} \tag{7.22} \]
where i₃ is 23.7 cm. This is the same result that was found using the vergence technique. For multiple-lens systems, there is a corresponding transverse magnification that describes the effect of all lenses. The overall transverse magnification is simply the product of the individual lens magnifications:

\[ M_{sys} = M_1 M_2 \cdots M_N \tag{7.23} \]
This quantity does not provide a useful value for the system in the above example, because the first object distance is infinite. However, it does apply to the second and third lenses; that is, the product of those two magnifications describes the ratio of the first image size to the last image size. For systems like the one given, the imaging behavior is often described by the FOV. Also, for collimated (infinite object distance) input to the lens, these systems have an "effective" focal length, or single-lens equivalent.
7.6 FIELD-OF-VIEW

The FOV of an I²R or EO system is one of the most important design parameters. It is the parameter that describes the angular space in which the system accepts light. The system FOV is specified because many systems image objects at great distances, so the transverse magnification would be extremely small. The system FOV and the distance, or range, from the sensor to the object determine the area that a system will image. Consider the optical system in Figure 7.16.
Figure 7.16 Field-of-view.
For the system shown, FOV_h and FOV_v are the horizontal and vertical FOVs, respectively. For systems that view objects at a range R much larger than the focal length f, the FOVs are the arctangents of the image size divided by the focal length:
\[ FOV_h = 2\tan^{-1}\!\frac{a}{2f} \quad \text{and} \quad FOV_v = 2\tan^{-1}\!\frac{b}{2f} \tag{7.24} \]
For small angles, the FOVs can be estimated as a/f and b/f. Then, the side lengths of the object area are simply FOV_h·R and FOV_v·R.

The image size is bounded by a field stop. The field stop is located in an image plane (or an intermediate image plane) and is specified by a and b. The light-sensitive material is limited to the area inside the field stop, so the field stop can be thought of merely as a frame (like a picture frame). This is usually the case for film-based sensors. Sometimes the field stop is placed in an intermediate (not the final) image plane. However, I²R and EO systems exploit light with detectors. The detectors take the form of a two-dimensional array of individual detectors, called a staring array, or a single detector or rows of detectors that are scanned across the image space. In these cases, the size of the image plane is defined by the light-sensitive area of the array and the possible light-sensitive positions of the scanned systems. A field stop larger than the detector array size would not limit the FOV and hence would not be required.

The optics of a system usually comprise a number of lenses with a single effective focal length (sometimes shortened to just focal length). This is the equivalent focal length of the system that, given the true field stop or detector array size, yields the correct FOV. For example, a system has a 35-mrad (horizontal) by 35-mrad (vertical) FOV and uses a 512 by 512 detector array where
each side of the detector array is 1 cm long. The optical system may have five lenses in the optical train, but the effective focal length is 28.6 cm. To calculate the effective focal length for a multiple-lens system, one needs the field stop size (or detector specifics) and the FOV. The FOV can be computed by using the field stop (or detector) as an object and working its images back through the optical system (the reverse of the technique given in Section 7.5). This process is employed up to the last lens (just before the light leaves the sensor). The angle created by the image size of the field stop before the last lens and the last lens focal length gives the FOV. Consider the two-lens example given in Figure 7.17.
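Before turning to that example, a quick check of the 512 × 512 system's numbers; a minimal sketch, not from the text:

```python
# Effective focal length from detector size and FOV.
detector_cm = 1.0             # side of the detector array
fov_rad = 0.035               # 35 mrad
print(detector_cm / fov_rad)  # -> about 28.6 cm
```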
::"1......:_
FOY==2 em/30 em == 67mrads
f' = lem/67mrads = 15em
Sid e
e ff ,";
f=30 em
of
f=5cm ".
detector
'--.: ..
<-------~~--------)~~<------->~<---~
30cm
IS-:!l1
7,5cm
t
/1
--
, 1...:.•.
f
Figure 7.17 Effective focal length example.
The system has two lenses: a 30-cm objective lens and a 5-cm demagnifier (relay lens). The FOV is computed only for one direction to illustrate the concept. We are interested in the effective focal length of the entire system. First, we find the image of the detector array to be located 15 cm from the relay lens using the material presented earlier. The magnification of the detector array by the 5-cm focal length lens is 15 cm/7.5 cm = 2, so the detector image is 2 cm in height. The FOV of the system is this detector image divided by the 30-cm focal length of the objective lens, which is equal to 0.067 rad, or 67 mrad. Finally, the actual detector array size divided by the FOV gives the effective focal length of 15 cm. The effective focal length is used frequently because a detector can be replaced with a different-size detector and the new FOV can be quickly computed.

Now we discuss the design nature of the FOV. A large, or wide, FOV allows the sensor to view a larger area, but the detector elements are spread over this larger area, yielding lower resolution (i.e., image fidelity). A narrow FOV gives better resolution, but it is difficult to find objects of interest over a large area. Many military applications require the sensor to have two FOVs. The first is a
medium FOV for finding, or detecting, objects of interest, and the second is a narrow FOV that can be used to interrogate potential targets. For tactical systems, large FOVs are about 10 deg and up, medium FOVs are around 5 deg, and narrow FOVs are less than 3 deg. The tradeoffs for system FOV include sensor resolution, sensitivity, and area coverage. The optics and detectors are designed to provide sufficient resolution and sensitivity for a given task. Overall sensor performance is evaluated to accomplish discrimination of a particular target, and finally the largest possible FOV is selected that still allows the required target discrimination. Chapters 11, 12, and 13 cover the procedure for accomplishing both discrimination performance and FOV selection.
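The Figure 7.17 example, restated as a minimal sketch (not from the text):

```python
# Working the detector back through the relay to get FOV and f_eff.
detector_cm = 1.0
relay_magnification = 15.0 / 7.5     # image at 15 cm, object at 7.5 cm
detector_image_cm = detector_cm * relay_magnification   # -> 2 cm
fov_rad = detector_image_cm / 30.0   # divide by the objective focal length
print(fov_rad)                       # -> about 0.067 rad (67 mrad)
print(detector_cm / fov_rad)         # effective focal length -> about 15 cm
```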
7.7 RESOLUTION
The concept of diffraction was presented in Chapter 4, so we know that a lens cannot provide a perfect image of a point source. Even for perfect lenses with no aberrations (imaging imperfections caused by refraction or reflection), the images of point objects are not points but are optical psfs. With or without aberrations, there is an ultimate resolving limitation for a lens or optical system. The resolution of a lens or optical system can be specified by either Rayleigh's criterion or Sparrow's criterion [9].

If we consider the geometrical optics representation, the light is characterized as traveling in straight lines. The ray representation of light from an ideal point source at infinity and imaged by a lens is shown in Figure 7.18.
Figure 7.18 Perfect point imaging system (aperture stop D and focal length f).
From the approach of a perfect imaging system, the point source would be focused to an ideal point and hence would produce infinite resolution. For real-world systems, however, this is not the case. The actual size of the imaged point source is determined by three contributions: (1) diffraction, (2) aberrations, and (3) manufacturing defects and assembly tolerances. The actual image of a point source by a real system is often referred to as a blur circle. For many optical systems, diffraction produces the primary constraint on resolution because aberrations and manufacturing defects can be minimized by good optical design and tight manufacturing tolerances or alignment procedures. Figure 7.19 shows an illustration of the relative intensity across the space of an imaged point.
Figure 7.19 Cross-section of imaged point intensity.
The normalized intensity is

\[ I(r) = \mathrm{somb}^{2}\!\left(\frac{D\,r}{\lambda f}\right) \tag{7.25} \]
where r is the radial distance from the peak, D is the lens diameter, f is the focal length of the lens, and λ is the wavelength. This function is the psf of the optics, where the somb function is defined in Chapter 2. Given that a point source is not imaged as a perfect point but is spread out in the image plane, we want to be able to define just how well an optical system can resolve objects that are placed very close together. In other words, what is the spatial resolution of the sensor system? Consider two separate, adjacent, equal-intensity point sources that are imaged very close together, as shown in Figure 7.20.
Figure 7.20 System imaging two point sources.
We know that each point source is not focused to a perfect point, but is spread out. A simple profile of two closely spaced point sources in the image plane is shown in Figure 7.21.
Figure 7.21 Intensity of two point images just resolved by the Rayleigh limit (peaks separated by 1.22λf/D).
The Rayleigh resolution limit defines two point sources as just resolved when the peak of one point source image coincides with the first zero of the second point source image. This distance is defined mathematically by

\[ r_R = \frac{1.22\,\lambda f}{D} \tag{7.26} \]

where r_R is the distance in meters at the image plane. The Rayleigh criterion is given by

\[ \theta_R = \frac{1.22\,\lambda}{D} \tag{7.27} \]
where θ_R is the angular distance between perfect point sources at an infinite distance from the lens. Note that the Rayleigh criterion is not a mathematical law but a rule of thumb, though it is very commonly used to define the diffraction-limited resolution of an optical sensor. Another common technique for defining optical resolution is Sparrow's criterion, which states that "two image points A′ and B′ in the image plane can be just resolved if the intensity is a constant as we move from the image point A′ to point B′ in the image plane." The Sparrow angular resolution is given by
\[ \theta_S = \frac{\lambda}{D} \tag{7.28} \]
The angular separation is less than that of the Rayleigh criterion. Both of these criteria, Rayleigh and Sparrow, were derived assuming incoherent sources. Another common term for defining resolution is the blur circle, or the Airy disk. This definition uses the same Bessel-based sombrero intensity equation defined above but defines the central blur as containing 84% of the energy. The angular size of the blur is given by
\[ \theta_{blur} = \frac{2.44\,\lambda}{D} \tag{7.29} \]
The diameter of the blur in the image plane is given by
\[ d_{blur} = \frac{2.44\,\lambda f}{D} \tag{7.30} \]
Because the F/# (f-number) is approximately f/D, the spot size is not a function of optics diameter or focal length alone, but only a function of the wavelength and F/#. The overall goal in high-resolution sensor design is to minimize the separation distance of the two imaged point sources, which equates to minimizing the diameter of the blur. In I²R and EO systems, the blur spot is designed with the detector and sensor performance characteristics in mind. As seen in (7.30), the separation parameter (e.g., blur diameter) depends on the optical parameters of focal length and aperture size.
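A minimal sketch (not from the text) of the resolution measures in this section; the example numbers are assumptions:

```python
def rayleigh(lam_m: float, d_m: float) -> float:
    """Just-resolved angle in radians, per (7.27)."""
    return 1.22 * lam_m / d_m

def sparrow(lam_m: float, d_m: float) -> float:
    """Sparrow angular resolution, per (7.28)."""
    return lam_m / d_m

def blur_diameter(lam_m: float, f_number: float) -> float:
    """Blur (Airy disk) diameter in the image plane, per (7.30)."""
    return 2.44 * lam_m * f_number

# Assumed example: 10-um light, a 15-cm aperture, F/2 optics.
print(rayleigh(10e-6, 0.15))       # -> about 8.1e-5 rad
print(sparrow(10e-6, 0.15))        # -> about 6.7e-5 rad
print(blur_diameter(10e-6, 2.0))   # -> 4.88e-5 m (about 49 um)
```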
7.8 MODULATION TRANSFER FUNCTION
In the previous section, we discussed the resolution of an optical system using the spatial separation of the point-source image blur. This blur spot has been identified by two other names: the incoherent impulse response and the psf. These are equivalent quantities that are used in different contexts. This spatial function can be converted to the spatial frequency domain. Consider the optical system shown in Figure 7.22.
Figure 7.22 Optical system with pupil function p(x,y) between the object plane and the image plane.
The lens system shown [11] provides an image one focal length from the entrance pupil, given that the object plane is sufficiently far away. In typical imaging systems, the collection aperture is located at the limiting pupil function, p(x,y). We know from Chapter 4 that the system shown has both a coherent transfer function and an incoherent transfer function (or optical transfer function (OTF)). While we are not usually interested in the coherent transfer function, it is easier to start with it to develop the incoherent transfer function. For the rectangular aperture shown, the coherent transfer function is
\[ H(\xi, \eta) = \mathrm{rect}\!\left(\frac{X}{L_x}, \frac{Y}{L_y}\right)\bigg|_{X = \xi\lambda f,\; Y = \eta\lambda f} \tag{7.31} \]
Recall that the Fourier transform of this coherent transfer function gives the coherent impulse response (i.e., a two-dimensional sinc function). A large aperture here corresponds to a small impulse response and consequently higher spatial resolution in the image plane. Also, from Chapter 2, the rect function transitions from 1 to 0 at arguments of 1/2. This transition occurs when ξ = L_x/(2λf) and η = L_y/(2λf). These are the cutoff spatial frequencies in the image plane. Note that the units are cycles per meter or cycles per millimeter, depending on the units of length used for aperture size, wavelength, and focal length. To determine the optical transfer function of the system shown in Figure 7.22, the normalized autocorrelation of the coherent transfer function must be found:

\[ OTF(\xi, \eta) = \frac{\displaystyle\iint H(\xi', \eta')\, H^{*}(\xi' - \xi,\; \eta' - \eta)\, d\xi'\, d\eta'}{\displaystyle\iint \left|H(\xi', \eta')\right|^{2} d\xi'\, d\eta'} \tag{7.32} \]
While (7.32) appears complicated, the OTF is simply the autocorrelation of the coherent transfer function with the peak value (at the function origin) set to 1. With the substitutions given in (7.31), the system OTF is

\[ OTF(\xi, \eta) = \mathrm{tri}\!\left(\frac{\xi\lambda f}{L_x}, \frac{\eta\lambda f}{L_y}\right) \tag{7.33} \]
The cutoff spatial frequencies for the OTF occur at ξ = L_x/(λf) and η = L_y/(λf), which are twice as large as in the coherent case.

Three very important characteristics of the transfer functions must be discussed here. First, the functions describe the transfer of spatial frequencies onto the image plane, where they are defined in cycles per distance. It is not always convenient to represent spatial frequencies in these units, and they can be converted to angular spatial frequencies simply by dropping the f in (7.31) through (7.33). While the cycles-per-distance unit is convenient for image plane calculations, many times we are interested in the angular spatial frequency response of the system. For example, many sensor performance specifications are given in units of cycles per milliradian.

The second characteristic is the segmentation of the OTF into the modulation transfer function (MTF) and the phase transfer function (PTF). The MTF is the magnitude, or modulus, of the OTF, and the PTF is the phase of the OTF. In some cases, the OTF actually gives a negative value for some range of spatial frequencies. When this occurs, the PTF provides the phase shift necessary to change bright regions in the image to dark and dark regions to bright (i.e., phase reversal); this occurs at the spatial frequencies where the OTF is negative. This is not a desirable design characteristic, so it is not the intent of sensor designers to have their systems operate in this mode. Severe defocus is one of the primary causes of phase reversal.

Finally, the third important characteristic is that of signal transfer. Consider a slice through the OTF in the horizontal direction for both the coherent and optical transfer functions (Figure 7.23). For the coherent transfer function, input spatial frequencies higher than dc but less than the cutoff frequency transfer through undegraded. This is not the case for the OTF, where the amplitude of the higher spatial frequencies is reduced; the amplitude of an output signal with a frequency half the cutoff will be reduced by one-half. This transfer is relative to the dc (zero frequency) signal, because there is some transmission loss of even the dc signal through the optical system. In summary, the resolution of the coherent system is less than that of the incoherent system, but signals of all frequencies less than the cutoff transfer through the system in the same manner. The resolution of the incoherent system is twice that of the coherent system, but higher spatial frequency signals suffer more degradation as they transfer through the system. In either system, no signals higher than the cutoff frequencies are seen in the output image.
Figure 7.23 Coherent and optical transfer functions (cutoffs at Lx/(2λf) and Lx/(λf) cycles per meter, or Lx/(2λ) and Lx/λ cycles per radian).
While the rectangular aperture was selected to illustrate OTF and MTF concepts for simplicity, most apertures are circular. An unobstructed circular aperture placed in the system of Figure 7.22 yields a coherent transfer function

\[ H(\rho) = \mathrm{cyl}\!\left(\frac{r}{D}\right)\bigg|_{r = \rho\lambda f} \tag{7.34} \]

where ρ is the circularly symmetric spatial frequency. The OTF is the normalized autocorrelation of this function,

\[ OTF(\rho) = \frac{\gamma_H(\rho)}{|\gamma_H(0)|} \tag{7.35} \]

and the MTF is the magnitude of this function. While a closed-form solution for the autocorrelation of the cylinder function is a straightforward geometry problem, the results are cumbersome. We know that the function is twice the width of the original cylinder function, so the cutoff frequency is twice that of the coherent system. Also, we know that the correlation degrades the higher frequencies because of the smoothing operation. The solution of this autocorrelation is provided by Gaskill [12] as

\[ MTF(\rho) = \frac{2}{\pi}\left[\cos^{-1}\!\left(\frac{\rho}{\rho_c}\right) - \frac{\rho}{\rho_c}\sqrt{1 - \left(\frac{\rho}{\rho_c}\right)^{2}}\,\right] \tag{7.36} \]
where ρ_c = D/(λf) in cycles per unit length (usually, millimeters or meters), or ρ_c = D/λ in cycles per radian. D is the diameter of the circular pupil function. Equation (7.36) is valid only for spatial frequencies less than the cutoff frequency, and the MTF is zero at spatial frequencies above the cutoff frequency. A comparison of this function with the MTF of a rectangular aperture of the same size is shown in Figure 7.24.
Figure 7.24 MTFs for rectangular and circular apertures.

The Fourier transform of either function shown yields the psf, or blur spot, of the diffraction-limited optical system. The transfer functions for optics with aberrations must be treated separately.
Example
Determine the MTF, plotted with units of cycles per milliradian and cycles per millimeter, for a diffraction-limited optical system in the longwave. Assume that the entrance pupil has a diameter of 20 cm and a focal length of 60 cm. Also, plot the psf in milliradians and in millimeters. We obtain a solution with an assumed wavelength of 10 μm, which gives an angular spatial cutoff frequency of 20 cycles per milliradian and a focal plane spatial cutoff frequency of 33.3 cycles per millimeter. We use (7.36) for the MTF approximation of a sensor with a circular entrance pupil and (7.25) for the psf. The MTF is first plotted in angular spatial frequency and then in distance spatial frequency (Figure 7.25). The corresponding psfs are also shown; recall that they are the Fourier transforms of the transfer functions. All plots shown are circularly symmetrical.
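The example's cutoff frequencies, and (7.36) itself, can be checked with a minimal sketch (not from the text):

```python
import math

def mtf_circular(rho: float, rho_c: float) -> float:
    """Diffraction-limited circular-aperture MTF of (7.36); zero past cutoff."""
    if rho >= rho_c:
        return 0.0
    x = rho / rho_c
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

D, f, lam = 0.20, 0.60, 10e-6            # meters
print(D / lam / 1000.0)                  # angular cutoff: 20 cycles/mrad
print(D / (lam * f) / 1000.0)            # image-plane cutoff: 33.3 cycles/mm
print(mtf_circular(10.0, 20.0))          # MTF at half cutoff -> about 0.39
```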
Figure 7.25 MTF and psf results.
7.9 ABERRATIONS AND VIGNETTING
The first-order imaging-system analytical techniques given in the earlier sections were only approximations of actual optical system behavior. The differences between the actual behavior and the first-order estimates are known as aberrations [13]. Hecht and Zajac provide two main classifications of aberrations: chromatic aberrations and monochromatic aberrations.
Chromatic aberrations occur because the index of refraction of a particular material is a function of frequency (color in the visible spectrum). It can be shown using the Lensmaker's formula, given in (7.14), that the focal length of a lens and its corresponding image plane location change with a change in the lens index of refraction. The effect of chromatic aberration within a lens system can be determined by performing a number of imaging calculations over the band of refractive indices corresponding to the waveband. A number of computer ray-tracing programs can easily perform this task.

Aberrations can occur even when the light is monochromatic. They can degrade the clarity of the image and can even deform the image. Spherical aberration, coma, astigmatism, Petzval field curvature, and distortion are all monochromatic aberrations.

Spherical aberrations evolve from the first-order approximation that we made of surface power. In reality, the refractive action at a spherical surface has higher order terms:

\[ \frac{n_1}{o} + \frac{n_2}{i} = \frac{n_2 - n_1}{R} + h^2\left[\frac{n_1}{2o}\left(\frac{1}{o} + \frac{1}{R}\right)^{2} + \frac{n_2}{2i}\left(\frac{1}{R} - \frac{1}{i}\right)^{2}\right] \tag{7.37} \]
The higher order term shown here would give a third-order approximation. Note that the additional term is a function of the ray height h above the optical axis. That is, the error between the first-order and third-order approximations is a function of the ray height. A large h causes the light to focus at a shorter distance from the surface, as shown in Figure 7.26.
Figure 7.26 Spherical aberration.
The paraxial focus position is valid for those rays that enter the surface near the optical axis. For a given application that uses spherical lenses, h is limited, and a "circle of least confusion" is found where the image plane is set between the marginal focus and the paraxial focus. This position provides an image with a minimum diameter. In some cases, where large relative apertures are required (relative aperture is the aperture diameter over the focal length), surfaces and lenses are constructed with aspheric surfaces that correct for spherical and other aberrations. In lens systems, the resultant spherical aberration is a summation of each surface contribution, so the overall spherical aberration can sometimes be corrected with a series of spherical surfaces. Also, optical designers can minimize spherical and other aberrations by adding more lenses of lower power to the system.
Spherical aberrations occur on the optical axis, or on-axis. An off-axis aberration is coma, which occurs for off-axis image points. Coma comes from a Greek word meaning "tail of a comet." The imaging planes, or principal planes, within a lens are actually curved, giving a transverse magnification that differs for rays traversing off-axis. In short, coma is the result of an error in off-axis magnification. The resulting image of a point appears to be a comet-like image (a round spot with a trailing tail).

Astigmatism is another image-degrading aberration. It occurs when an image point lies an appreciable distance from the optical axis such that rays strike the lens in an asymmetric manner. The plane defined by the center ray, or chief ray, of the bundle and the optical axis is called the meridional plane. The plane defined by the chief ray but normal to the meridional plane is called the sagittal
plane. Astigmatism occurs when the image location in the meridional plane does not occur in the same position as that of the sagittal plane. For example, if the image of a point in the sagittal plane is a vertical line, the image at the meridional focus will be a horizontal line. The best image is between these two line images. Astigmatism also occurs when the curvature of the lens is different in one direction versus another. Lens astigmatism is purposely used in prescription lenses to correct vision from a distorted eye.

Two other aberrations are field curvature and distortion. Field curvature, or Petzval field curvature, results from too many positive lenses. The image surface is actually curved, or spherical in shape, so images projected onto flat surfaces appear sharp near the optical axis and blurred on the edges. Field curvature can be flattened by balancing the number and power of the positive and negative lenses in the system. Distortion occurs when the magnification varies as a function of radial image location. Therefore, the image size varies as a function of distance from the optical axis. The two types of distortion are pincushion and barrel, as shown in Figure 7.27.
Figure 7.27 Distortion: pincushion and barrel distortion of an original object.
Holst points out that it is convenient to represent aberrations as a single transfer function, where the overall optical MTF can usually be represented as

\[ MTF_{optics} = MTF_{diff}\; MTF_{aberrations} \tag{7.38} \]
MTF_diff is the diffraction-limited MTF of the optics. Aberrations can be approximated by a Gaussian function, where the MTF is given as

\[ MTF_{aberrations}(\rho) = \mathrm{Gaus}\!\left(\sqrt{2\pi}\,\sigma\rho\right) \tag{7.39} \]

The FLIR92 model, developed by the U.S. Army Night Vision and Electronic Systems Directorate (NVESD), uses this approach, but Holst [14] states that the scaling factor σ is not directly measurable because any optical measurement includes both the diffraction and the aberration characteristics. It can be determined given a
measurement with the calculation of the diffraction effects. Also, the approximation of (7.39) does not yield good results if the aberration MTF and diffraction MTF are similar. Ray-tracing programs provide a means for determining an overall MTF that includes diffraction and vignetting.

Vignetting [15] occurs when there is a mismatch between elements of an optical system, resulting in a partial obstruction of the aperture for some points in the image plane. It is not necessarily a design defect, as it is sometimes a calculated way to eliminate aberrated rays at the expense of flux throughput. Vignetting is much more common in wide field-of-view systems than in telescopes. An example of this is shown in Figure 7.28.
Figure 7.28 Vignetting in an optical system.
The bundle of rays from point source A passes through the two-lens system and is imaged at A′. The bundle of rays from point source B is partially blocked by the aperture. This vignetting effectively results in a different-size aperture, so the object is reduced in brightness.

7.10 OPTICAL MATERIALS
The selection of lens materials involves the consideration of many issues. A primary consideration is that of optical transmission. Many materials that transmit visible light are opaque in the infrared wavelengths; examples here are normal window glass, Plexiglas, and quartz. Germanium is transmissive from 2 μm well past 15 μm, but it is opaque over EO wavelengths and over the visual spectrum. Zinc selenide is a broadband glass that transmits EO, midwave, and longwave light. There are families of glasses with a wide range of transmissions; an example is that of the IRTRAN glasses, which are tailored for use in infrared systems. EO system lenses are constructed of fused silica, BK7, Zerodur, barium fluoride, and many other types of glass.
The transmissions of system lenses are usually grouped into a single system optical transmission, τ_optics(λ). The transmission of an optical system affects the amount of flux that propagates through the system and falls on the detector. A large transmission in the band of interest is desired. Also, the transmission is usually a strong function of wavelength, so the spectral transmission must be known along with the source spectrum and atmospheric transmission to determine the flux on the detector. The detector response is also a function of wavelength. All these spectral quantities are integrated over wavelength in order to give accurate radiometric quantities.

Lens selection must also include index of refraction considerations. If the index varies as a strong function of wavelength, large chromatic aberrations occur. Sometimes a lens is constructed from two different materials with spectral index functions (dispersions) that cancel each other; such lenses are called achromats. Typical indices of refraction are around 1.5 for EO lens materials. Infrared lens materials usually have a higher index of refraction. Zinc selenide's index varies (depending on wavelength) from 2.4 to 2.7, and germanium has an index of refraction of around 4 in the midwave. Other material considerations are thermal properties, elastic constants, hardness, and other mechanical properties. Lens design and material selection is an extremely complicated process. Usually, a system-level engineer or scientist does not perform this level of development, as there are optical professionals who spend their careers producing optical designs. These professionals are usually consulted during the development of an I²R or EO system.
7.11 A TYPICAL OPTICAL SYSTEM
This chapter would not be complete without a description of a typical optical system. There are too many different types of optical systems to address the characteristics of each; these include microscopes, telescopes, night sights, target acquisition sensors, navigation sensors, cameras, camcorders, and the like. Here, we describe one classical system as an example: the common module FLIR [16].

Consider the optical diagram shown in Figure 7.29. The common module FLIR was developed in the 1970s as an attempt to standardize infrared technology across many military platforms. The FLIR comprises an infrared side and an electro-optical side. The infrared side includes three optical systems. The first system is the infrared afocal, which consists of the first two lenses in the optical train. Note that the collimated light entering the first lens is also collimated on the output of the second lens (i.e., the light bundle is smaller). That is, there is not a focus for the two-lens system; thus afocal means "without a focal length." The second optical system is the scanner. The scanner is an infrared mirror (a mirror with a coating that is reflective at IR wavelengths) on one side and an electro-optical mirror on the other. The collimated light from the afocal is reflected
from the scanner into the third optical system, an infrared imaging lens. The scanner causes the object scene in the field of view to be scanned across a line of 180 detectors (a 180-detector linear array). The result is 180 line-output voltages that correspond to image scene irradiance on the image plane. The scanner then tilts in angle such that the space between the detectors is now filled with the other half of the scene and scans back across the detectors. This 2:1 interlacing provides an effective 360 lines of spatial information.
Figure 7.29 Common module FLIR.
The output voltage from the detector array and amplifier bank is input to a 180-element linear light-emitting diode (LED) array that emits energy in the visible wavelengths. A higher drive voltage corresponds to a brighter LED output. The output light of the LED array is collimated by a visible collimator and reflected off the backside of the scan mirror in the same manner as the infrared light. The output of the scanner is reoriented with a prism and demagnified for human viewing. The overall system provides a 30-frame-per-second image scan that is presented to the human viewer in real time. The technique of scanning the visible LED outputs in the same manner as the signal input to the infrared detectors is called an electro-optical multiplexer, or an EO MUX. The linear detector array of 180 elements is indicative of first-generation FLIR technology.
EXERCISES
7.1 Calculate the OPL from point A to point B in the system shown in Figure 7.30.

Figure 7.30
7.2 Calculate the angle of the refracted ray in the system shown in Figure 7.31, where a ray at 43 deg in a medium of n = 1 is incident on a medium of n = 1.5.

Figure 7.31
7.3 Determine the image position and size (assume an object size of 1 cm) corresponding to the object shown in Figure 7.32 (a lens with f = 20 cm) when the object distance is
(a) x = ∞
(b) x = 60 cm
(c) x = 40 cm
(d) x = 25 cm
(e) x = 10 cm

Figure 7.32
7.4 Determine the image position and size (assume an object size of 1 cm) corresponding to the object shown in Figure 7.33 (a lens with f = −20 cm) when the object distance is
(a) x = ∞
(b) x = 10 cm
(c) x = 20 cm

Figure 7.33
7.5 Calculate the vergence of a wavefront that is 10 cm away from a point source.

7.6 Calculate the vergence of a wavefront that is 30 cm before coming to a focus.
7.7 A lens is dropped upright into a fish tank and is lodged in the sand 5 cm from the tank glass. An object is placed 10 cm from the tank glass. Assuming the glass thickness to be negligible, find the image distance from the lens. The biconvex lens refractive index is 1.67, and the radius of curvature for both sides is 6.8 cm (the water index is 1.33).
Figure 7.34

7.8 Determine the FOV for a system that has an effective focal length of 80 cm and a detector staring array size of 1.5 cm by 2.0 cm.
7.9 Three lenses with focal lengths 5 cm, −5 cm, and 5 cm, respectively, are placed 1 cm apart from each other. Given that parallel light enters the first lens, how far behind the third lens does the light come to a focus?
7.10 Determine the diameter of a telescope aperture required to resolve two stars separated by 100 million km, where the pair of stars are both 10 light-years away from Earth. Assume a wavelength of 0.5 μm.
7.11 Determine the angular resolution for diffraction-limited infrared sensors with apertures of 5 in, 10 in, and 15 in. Assume a wavelength of 10 μm.

7.12 Determine both the MTF and psf for a 3- to 5-μm midwave infrared sensor with a 5-in aperture and a 10-in focal length. Specify the MTF in both cycles per milliradian (angular spatial frequency) and cycles per millimeter (spatial frequency in the detector plane). Specify the psf in both angular space and detector plane size.
REFERENCES
[1] Pinson, L. J., Electro-Optics, New York: Wiley, p. 3, 1985.
[2] Saleh, B. E. A., and M. C. Teich, Fundamentals of Photonics, New York: Wiley, p. 2, 1991.
[3] The Photonics Dictionary, Pittsfield, MA: Laurin Publishing, 1996.
[4] Jenkins, F. A., Fundamentals of Optics, New York: McGraw-Hill, 1957.
[5] Jenkins, F. A., Fundamentals of Optics, New York: McGraw-Hill, 1957.
[6] Yu, F. T., and I. C. Khoo, Principles of Optical Engineering, New York: Wiley, p. 66, 1992.
[7] Meyer-Arendt, J. R., Introduction to Classical and Modern Optics, New Jersey: Prentice-Hall, 1984.
[8] Banerjee, P. P., and T. C. Poon, Principles of Applied Optics, Boston, MA: Aksen Associates (IRWIN), p. 34, 1991.
[9] Guenther, R., Modern Optics, New York: Wiley, 1990.
[10] Pinson, L. J., Electro-Optics, New York: Wiley, 1985.
[11] Boreman, G. D., class notes from the SPIE course Electro-Optical Systems for Engineers, Orlando, FL: personal notes, 1988.
[12] Gaskill, J., Linear Systems, Fourier Transforms, and Optics, New York: Wiley, pp. 305-307, 1978.
[13] Hecht, E., and A. Zajac, Optics, Reading, MA: Addison-Wesley, p. 175, 1979.
[14] Holst, G. C., Electro-Optical Imaging System Performance, Orlando, FL: JCD Publishing, 1995.
[15] Smith, W. J., Modern Optical Engineering: The Design of Optical Systems, New York: McGraw-Hill, 1966.
[16] FLIR Common Module Design Manual, Night Vision and Electronic Systems Directorate (NVESD), Ft. Belvoir, VA, 1978.
Chapter 8
Detectors and Scanners
The detector is the component of the system that transforms the optical signal into an electrical signal. This electrical output is proportional to the incident optical power. The detector component in the sensor plays a key role in determining system-level parameters, including spectral operating band, sensitivity, and resolution. Spectral response is determined by the detector material characteristics and the operating temperature. A strong energy band for photon acceptance must be present for the energy conversion process to occur. The detector sensitivity is ...
Figure 8.1 Detector and scanner components of the system (targets and background, atmosphere, sensor — scanner, detector, electronics — display, automatic target recognition (ATR) processor, and human vision).
8.1 TYPES OF DETECTORS
There are two general classes [1,2] of detectors: photon (or quantum) detectors and thermal detectors. Photon detectors convert absorbed photon energy into released electrons (from their bound states to conduction states). The material bandgap describes the energy necessary to transition a charge carrier from the valence band to the conduction band. The change in charge carrier state changes the electrical properties of the material, and these electrical property variations are measured to determine the amount of incident optical power. Thermal detectors absorb energy over a broad band of wavelengths. The energy absorbed by a detector causes the temperature of the material to increase. Thermal detector materials have at least one inherent electrical property that changes with temperature, and this temperature-related property is measured electrically to determine the power on the detector. Figure 8.2 shows the detectors described in this chapter.
Figure 8.2 Types of detectors.

Photon detectors include photoconductive, photovoltaic (PV), and photoemissive detectors. The photoconductive detector is constructed of semiconductor material that converts absorbed photons into free-charge carriers. The change in free carriers varies the conductivity (and resistance) of the detector. A simple circuit provides a bias current that reflects the change in conductivity.

The photovoltaic detector absorbs photons at a P-N junction. Hole-electron pairs are generated that alter the junction voltage, and the change in voltage describes the amount of optical power on the detector. The photovoltaic detector does not require any external bias voltage; the inherent voltage of the P-N junction changes without bias and can be read out directly. Like photoconductors, the photovoltaic detector has spectral characteristics that are derived from the material's molecular energy levels.

Photoemissive devices are different from photoconductors and photovoltaic detectors in that an electron physically leaves the detection material when a photon is absorbed. Usually placed in a vacuum tube with high voltage, a
photocathode absorbs the photon, and the energy absorbed is enough (higher than the electron work function) to release the electron into the vacuum chamber. The voltage potential moves it toward the anode. Once electrons reach the anode, they are measured as a current that reflects the optical flux on the photocathode.

Thermal detectors [3] include bolometers and pyroelectric detectors. Bolometers are sometimes called thermistors, because the material provides a resistance that changes with temperature. Semiconductors are used with high temperature coefficients of resistance. The material is usually a long, narrow strip coated with a highly absorbent coating; the electrical resistance of the strip is measured with a bias current to determine the amount of absorbed radiation. Pyroelectric detectors are the most common thermal detectors. The pyroelectric detector is a ferroelectric material between two electrodes. The material exhibits a change in polarization with temperature, so it can be thought of as a capacitor whose capacitance is temperature sensitive. As with any capacitor, measurement requires an ac signal, so most pyroelectric detectors are chopped to vary the incident flux.

In this chapter, we discuss each of the photon and thermal detectors mentioned above. The list is not exhaustive, but it does contain most of the commonly specified detectors.
8.2 PHOTON DETECTORS
Photon detectors generate free-charge carriers through the absorption of photons. The absorption occurs without any significant increase in the detector temperature [4]. Detection occurs by a direct interaction of the photon with the atomic lattice of the material. Material parameters that can be changed by the interaction are resistance, inductance, voltage, and current. The concepts presented here apply to detectors that sense wavelengths from 0.2 to 1,000 μm.

A variety of bandgaps (and corresponding wavelength sensitivities) can be achieved with the selection of materials. Recall that E = hν, where ν = c/λ, h is Planck's constant, and c is the speed of light. The wavelength of a photon has to be short enough so that the photon energy exceeds a material's bandgap energy level for absorption. Intrinsic detectors are those that are constructed from semiconductor crystals with no introduced impurities or lattice defects [5]. Extrinsic detectors have impurities introduced to change the effective bandgap energy. Cryogenic cooling (with closed-cycle coolers or liquid nitrogen) is typically required for infrared photon detectors to reduce dark-current-generated electron noise; the reduction in temperature reduces the level of dark-current noise to below that of the photon-generated signal. Photon-detector responsivity is sometimes characterized by a quantum efficiency that describes the number of electron transitions per absorbed photon. Intrinsic detectors have quantum efficiencies of around 60%. Typical quantum efficiencies for photoconductors
range from 30 to 60%. Photovoltaic and photoemissive detectors have quantum efficiencies of around 60% and 10%, respectively. The detection mechanisms for the three most common types of photon detectors are covered in this chapter. The following sections describe photoconductive, photovoltaic, and photoemissive detectors.
",I
,
"'[
;-1 ,
."[
Photoconductors if Y "
/'.
( ;
,
\ . /"',.. 'L'.
0 ....... ;.;
I ....
Photoconductors sense optical power with a change in conductance (or inverse resistance). Longer wavelengths can be detected by doping a pure material with impurities, giving an extrinsic detector. The energy gap transition for the intrinsic detector is depicted in Figure 8.3(a), and an extrinsic bandgap is shown in Figure 8.3(b). A p-type extrinsic detector decreases the conduction band energy level, and an n-type extrinsic detector raises the valence band energy level. The result is a smaller-bandgap material that can detect longer wavelengths. Quantum efficiencies are lower (around 30%) for extrinsic detectors because the dopant is less abundant than the host material.
Figure 8.3 Intrinsic (a) and extrinsic (b) detection.
For photoconductors, an increase in the incident optical power increases the number of free-charge carriers, so the effective resistance of the detector decreases. The differential charge provided by the impinging photons is given by [1]

\[ \Delta q = E_p\, w_d\, l_d\, \eta_q\, \tau_l\, e \quad \text{[Coulombs]} \tag{8.1} \]
where E_p is the incident irradiance in photons per second per square centimeter, w_d is the detector width in centimeters, l_d is the detector length in centimeters, η_q is the quantum efficiency, τ_l is the charge carrier mean lifetime in seconds, and e is the charge on an electron (1.602 × 10⁻¹⁹ Coulombs). Figure 8.4 shows a typical photoconductor.
Figure 8.4 Photoconductive detector.
The photo-induced current is related to the transit time of the charge carrier across the detector length l_d, where the transit time is estimated to be dt = l_d²/(μV_bias). V_bias is the bias voltage across the detector, and μ is the carrier mobility in cm²/(V-s). Now the photo-induced current is

\[ i_\lambda = \frac{\Delta q}{dt} = \frac{E_p\, w_d\, \eta_q\, \tau_l\, e\, \mu\, V_{bias}}{l_d} \quad \text{[A]} \tag{8.2} \]
The photo-induced change in resistance is the bias voltage divided by the photo-induced current:

\[ \Delta R_\lambda = \frac{V_{bias}}{i_\lambda} = \frac{l_d}{E_p\, w_d\, \eta_q\, \tau_l\, e\, \mu} \quad \text{[Ohms]} \tag{8.3} \]
Finally, a photo-induced conductivity can be found using the conductivity-resistance relationship

\[ \Delta R_\lambda = \frac{l_d}{\Delta\sigma_\lambda\, w_d} \quad \text{[Ohms]} \tag{8.4} \]

where Δσ_λ is the photo-induced conductivity of the detector.
"N
Example
An indium antimonide photoconductor detector is 100 μm long and 50 μm wide. Its quantum efficiency is 0.45, its mean carrier lifetime is 5×10⁻⁷ seconds, and its mobility is 3×10⁴ cm²/(V·s). Determine the detector's change in resistance for an in-band incident irradiance of 3×10¹⁷ photons/(s·cm²). The solution is given by (8.3):

ΔR_λ = (0.01)/[(3×10¹⁷)(0.45)(0.005)(5×10⁻⁷)(1.602×10⁻¹⁹)(3×10⁴)] = 6.2 kOhms

The conductivity is found, using (8.4), to be 0.064 mho/cm.
Photovoltaic
Photovoltaic detectors are constructed with the formation of a P-N junction in a semiconductor. The P-N junction has an inherently generated voltage and does not require a bias voltage or current. A typical photovoltaic detector energy diagram is shown in Figure 8.5.
Figure 8.5 Photovoltaic energy diagram.
The photons are absorbed at the junction of the P- and N-type materials, an area called the depletion region. The junction is the same as a diode with the added effect that voltage is generated with illumination. The energy diagrams of Figure 8.5 show the diode in the dark and illuminated states. The voltage in the dark state is related to the differences in conduction (and corresponding valence) energy levels for the P- and N-type materials across the depletion region. Pinson [1] provides the following voltage description. The voltage is given by
V = (kT/e)·ln(n_n/n_p)   [V]   (8.5)
where k is Boltzmann's constant (1.38×10⁻²³ J/deg K or 8.62×10⁻⁵ eV/deg K), T is the junction temperature in degrees Kelvin, e is the charge on an electron, and n_n and n_p are the free-electron densities in the N- and P-type materials, respectively. The Fermi level shown describes the energy at which 50% of the energy levels are likely to be occupied. Under illumination, the excess carriers Δn change the voltage to
V_ill = (kT/e)·ln[(n_n + Δn)/(n_p + Δn)]   [V]   (8.6)
The changes in the number of excess electron-hole pairs can be approximated by

Δn = η_q·E_q·τ_l/d   [electrons/cm³]   (8.7)

where d is the detector depth and τ_l is the carrier lifetime. A differential incident photon irradiance gives a differential voltage of
"
\;.,'
f'
t.'.
Ii
~
(8.7)
,
. !,'
I
'
i ;','
l,:
j' .. t. ...
dr h
d
kT
'TJIIC,'
-dE" = -[-In dE. e
(nll+-d-I II ) Ill/C,
]
(l1 p - -d- l lI )
TJofE,,'
T]l/E"
(11 11 +-d-III)(n"--d- llI )
]
(8.8) I
-~--'-'~"-'--
!
l ..
-[ e
J"III (nll+nl')
If the number of excess electron-hole pairs generated by the irradiance can be assumed to be small compared with n_p, then
ΔV/ΔE_q ≈ (kT/e)·η_q·τ_l·(n_n + n_p)/(d·n_n·n_p)   (8.9)
Pinson states that this differential quantity is a constant that relates the photo-induced voltage to the incident irradiance. The current-voltage characteristics of photovoltaic detectors are very similar to those of a diode. These characteristics are given in Chapter 9. Because of this similarity, photovoltaic detectors are sometimes referred to as photodiodes. Photovoltaic detectors offer several advantages over photoconductor detectors, including better responsivity, simpler biasing, and a better theoretical signal-to-noise ratio (SNR).
The photovoltaic detector is normally operated in the open-circuit mode, where the photo-induced voltage is measured directly. This mode has SNR advantages over a system that requires external amplification. However, the
"
:'·1 '.' I
d~tector
can be operated in the reverse-bias direction, known as the photoconductive mode. The electric :ield str~ngth and the depletion region width are increased. Advantages are higher speed, lower capac itance, and better linearity. The disadv'aritage is that dark currents are greater. . ~_e-:?j The avalanche photodiode (APD) [6] is a specialized photovoltaic detector that is operated in the photoconductive mode. Large reverse-bias voltages are applied to the P-N junction. This causes some of the electrons passing through the large electric field to gain added energy,).,£..C'''~, ~o that addi.tional electron-hole pqifs )t. are generated. This process Is . . kno>VTI as impact iomiation. If the newly created electon-hole pairs afufn suffi~1tntly high energy levels, they create more electron-hole pairs. This process is known as avalanche multiplication and is the means for large internal gains. Typical internal gains are from 50 to 500.
Photoemissive
Photoemissive detectors release electrons from a photosensitive surface during the detection process. The incident photon of energy hν provides the energy necessary to release an electron from the surface. The process is shown in Figure 8.6.
Figure 8.6 Photoemissive detection.
The photon enters a glass envelope that houses the photocathode and anode. Usually the envelope comprises a vacuum tube where conductive material deposited on the glass surface acts as the cathode. The photon energy is absorbed, producing a free electron that is accelerated to the anode through the vacuum. Only electrons that have sufficient energy to overcome the work function of the photocathode material become released electrons. The electrons are collected at the anode and the corresponding current is measured to determine the amount of light on the photocathode. Photocathodes are designed with high optical absorption and small work functions.
The photomultiplier tube (PMT) is a special case of the photoemissive detector. The PMT is used when large gains are required to sense very low light levels. The PMT consists of a photocathode, electron multipliers (dynodes), a collection anode, and an evacuated glass or metal envelope. The photomultiplier is shown in Figure 8.7.
Figure 8.7 Photomultiplier.
An electron is released from the photocathode and is accelerated to the first dynode. The electron strikes the first dynode with enough energy to release a number of electrons that are accelerated to the next dynode. This multiplication occurs through a number of dynodes, resulting in a large number of electrons collected on the anode. The gain of a PMT is estimated by

G = Xⁿ   (8.10)

where G is the current gain, X is the secondary emission ratio for the dynodes, and n is the number of dynodes. A nine-stage PMT with a secondary emission ratio of 6 gives a gain of around 10⁷. This large gain and low noise make the PMT an attractive detector for many applications.
8.3 THERMAL DETECTORS
For thermal detectors, the detector signal is related to the temperature of the detector material, which is altered because of incident radiation. These detectors typically operate at room (ambient) temperature and do not require cryogenic cooling. The change in temperature produces one of three electrical-property changes in the detector material: resistance, voltage, or capacitance. Thermal detectors are grouped by the type of electrical property change.
In the past, thermal detectors have been characterized by their slow response time. Significant developments in recent years, however, have made these detectors viable candidates for many applications. Thermal detectors today exhibit many good characteristics, including quick response time, low noise, and linear operation. While the sensitivity of thermal detectors has improved dramatically in recent years, they are still behind the performance of photon detectors. In this section, we discuss two common types of thermal detectors for imaging sensors: the bolometer and the pyroelectric detector.
Bolometers
Bolometer detection is derived from a change in the resistance of the detector material. A simple bolometer detector consists of a thin, blackened metal or semiconductor filament, as shown in Figure 8.8.
Figure 8.8 Bolometer configuration.
The change in the bolometer resistance due to incident radiation is given by [7]

ΔR = α·R₀·ΔT   [Ohms]   (8.11)
where ΔT is the change in temperature, R₀ is the resistance when ΔT = 0, and α is the temperature coefficient of resistance. The temperature coefficient is positive for metals and negative for semiconductors. In the absence of incident radiation, there are changes in the detector temperature from conduction from the surrounding environment. These changes are random in nature and contribute to detector noise. For this reason, the detector is isolated from the local environment to help ensure that signals are a result only of incident radiation. Isolation is achieved through special manufacturing techniques and by placing the bolometric detector in a vacuum. The incremental temperature difference is defined by
ΔT = η·Φ_d/(K·√(1 + 4π²f²C_v²/K²))   [deg K]   (8.12)

where η is the fraction of the incident radiative power Φ_d that is absorbed, K is the thermal conductance (W/deg K), C_v is the heat capacity, and f is the modulation frequency of the incident radiation. The voltage responsivity for a bias current i then follows from (8.11):

R = i·R₀·α·ε/(K·√(1 + 4π²f²C_v²/K²))   [V/W]   (8.13)

where ε is the detector material emissivity. At low frequencies, f << 1/(2π·C_v/K), (8.13) can be simplified to

R = i·R₀·α·ε/K   [V/W]   (8.14)
Example
Calculate the responsivity for a metal bolometer at low frequencies given the following parameters:

i = 15 mA
K = 5×10⁻⁴ W/deg K
R₀ = 50 Ohms
α = 0.01/deg K
ε = 0.9

Inserting the values into (8.14), R is determined to be

R = (0.015)(50)(0.01)(0.9)/(5×10⁻⁴) = 13.5 V/W   (8.15)
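The same arithmetic as a short Python sketch (values from the example):

```python
# Low-frequency bolometer responsivity per (8.14): R = i * R0 * alpha * eps / K
i_bias = 15e-3  # bias current, A
K      = 5e-4   # thermal conductance, W/deg K
R0     = 50.0   # resistance at dT = 0, Ohms
alpha  = 0.01   # temperature coefficient of resistance, 1/deg K
eps    = 0.9    # emissivity

R = i_bias * R0 * alpha * eps / K
print(f"Responsivity: {R:.1f} V/W")  # 13.5 V/W
```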
Pyroelectric Detectors
Pyroelectric detectors have a change in surface charge with a change in temperature. This pyroelectric effect, as it is called, produces a current that is proportional to the change in temperature:

i_p = p·A_d·(dT/dt)   [A]   (8.16)

where p (C·cm⁻²·K⁻¹) is the pyroelectric coefficient at the temperature T Kelvin, A_d is the surface area of the detector in square centimeters, and dT/dt is the time rate of temperature change. When constant radiation is applied to the detector, no current can be sensed. A chopper must be used with the detector in order to provide the changing temperatures that produce a current, so that a constant signal can be sensed. Pyroelectric detectors are in essence capacitors. The responsivity for a pyroelectric detector is similar to that of the bolometer and is given by
R = ε·p·A_d·2π·f_ch/(K·√(1 + 4π²f_ch²C_v²/K²))   [V/W]   (8.17)

where p is the pyroelectric coefficient, A_d is the detector area, and f_ch is the chopping frequency.
Pyroelectric materials are ferroelectric crystals that show spontaneous changes in polarization along one axis. Two example materials used for this kind of detector are PZT (doped lead zirconate titanate) and BST (barium strontium titanate) [8]. The polarization of these materials varies with the temperature and is characterized by the pyroelectric coefficient. This family of thermal detectors is characterized by fast response times across a broad spectrum.

8.4 CHARGE-COUPLED DEVICES
The charge-coupled device (CCD) [9-11] has become a very familiar term due to the popularity of hand-held video cameras operating in the visible band, although there are also infrared CCDs (IRCCDs). The term CCD does not refer to a specific detector type but rather to a detector readout architecture. The basic concept of charge coupling is that, once the photo-generated charges are collected in potential wells, they are selectively transferred, or read out of, each detector location in a serial process. CCDs are typically metal-oxide-semiconductor (MOS) devices. The basic structure of these gates is shown in Figure 8.9. When a positive voltage V_p is applied between the top metal electrode and the substrate, the majority carriers (holes) are moved away from the oxide layer, creating a depletion region at the interface. If a photon is absorbed in the region, then an electron-hole pair is produced. For the photon to be absorbed, its energy must be greater than that of the bandgap.
Figure 8.9 Metal-oxide structure.
The CCD is an analog shift register in which an entire sequence of charge packets can be shifted simultaneously along the surface of the chip by applying clock voltages. The electric charge varies from one packet to the next. The sequence of charge packets is scanned as the packets move past an output electrode connected to an on-chip amplifier. To read out the charge packets, each component in the array is connected to sequential shift registers on the periphery.
The packets are then clocked out of the shift registers by a clock that controls the time delay. As can be expected, the readout process is not perfect and, hence, there is a small amount of charge lost with each transfer. This loss is quantified by the charge-transfer inefficiency η_ct. For a total of P charge transfers, the total fractional loss is given by α = 1 − (1 − η_ct)^P ≈ P·η_ct. The typical charge-transfer inefficiency is on the order of 10⁻⁴, and typical array sizes (i.e., charge transfers) are 1,000 by 1,000 elements.
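A short sketch comparing the exact loss expression with the P·η_ct approximation, using the representative values from the text:

```python
# CCD charge-transfer loss: alpha = 1 - (1 - eta_ct)**P, approximately P * eta_ct
eta_ct = 1e-4   # charge-transfer inefficiency per transfer
P      = 1000   # number of transfers (e.g., one side of a 1,000 x 1,000 array)

alpha_exact  = 1 - (1 - eta_ct) ** P
alpha_approx = P * eta_ct
print(f"exact loss: {alpha_exact:.4f}, approximation: {alpha_approx:.4f}")
# ~0.0952 vs 0.1000; the approximation degrades as P * eta_ct approaches 1
```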
8.5 DETECTOR RESPONSIVITY
The fundamental purpose of a detector is to convert an optical signal into an electrical current or voltage. The term responsivity is used to describe the amplitude of the electrical signal with respect to the incident flux. Responsivity is a function of wavelength and temporal frequency and can be determined by
R(λ,f) = V_out/Φ_incident = V_out/(E(λ,f)·A_d)   [V/W]   (8.18)
where V_out is the detector output voltage, Φ_incident is the incident flux in watts, E(λ,f) is the incident irradiance in watts per square centimeter, and A_d is the detector area in square centimeters. Current responsivity would have units of amperes per watt. A typical photon detector responsivity curve is shown in Figure 8.10.
Figure 8.10 Responsivity curve.
The responsivity is typically measured by the detector manufacturer and provided as a specification or calibration document. The band-averaged responsivity over a spectral band [λ₁, λ₂] is given by
R̄ = [1/(λ₂ − λ₁)]·∫ from λ₁ to λ₂ of R(λ)·dλ   (8.19)
This value is used frequently in cases where the incident flux and the responsivity are not expected to vary significantly over the spectral band. For EO systems, the responsivity is frequently given [10] in terms of photometric units:
R_photometric = ∫R(λ)·E(λ)·dλ / (683·∫V(λ)·E(λ)·dλ)   [V/lumen or A/lumen]   (8.20)
where V(λ) is the human eye's photopic efficiency. From Chapter 5, there are 683 lm/W at the peak visual efficiency. For EO systems that are closely matched to the human eye in spectral response, the use of photometric responsivity is reasonable. For systems with other-than-photopic spectral responses, however, large differences can be seen between actual voltages and those predicted by the photopic responsivity.

Example
The irradiance on a detector at a wavelength of 0.8 μm is 100×10⁻⁶ W/cm². The detector has a surface area of 5 cm² and the output voltage is 35 mV. Determine the detector's responsivity. The solution is determined by calculating the power on the detector. It is equal to (100×10⁻⁶)(5) = 5×10⁻⁴ W. The responsivity is then R = 0.035/(5×10⁻⁴) = 70 V/W.
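The example's arithmetic as a short sketch of (8.18):

```python
# Detector responsivity per (8.18): R = V_out / (E * A_d)
E     = 100e-6  # irradiance, W/cm^2
A_d   = 5.0     # detector area, cm^2
V_out = 35e-3   # output voltage, V

power = E * A_d    # flux on the detector, W
R = V_out / power  # responsivity, V/W
print(f"Power: {power:.1e} W, Responsivity: {R:.0f} V/W")  # 5.0e-4 W, 70 V/W
```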
In a manner similar to that of responsivity, the flux response of infrared systems is typically characterized by the system intensity transfer function (SITF). The SITF is measured with a large square target against a uniform background, as shown in Figure 8.11(a).
Figure 8.11 System intensity transfer function: (a) measurement, (b) curve.
~_""""'''-'~~''''''''''_'._''''''''''''''''''';~'i('''''''~-~-~'
__
.
,~-","".~_,~_ .......--.---..~
..
_.r~,_ _ _ ..-.-~ •.•• _ •• _ .• _ .•
• - • • • • - .•.••••.
,
(
,
~l'
r' ;t.J
;I
( !'"- [
:i r <:""L
,I'
,
-
,I . Defee/ors mILl Scanners
IT
J03
I -;
.[
r .
It ~';
;
·~_t
. ",.9
;/
The temperature difference between the target and the background (ΔT) is varied, and the corresponding output voltage is recorded. A typical SITF curve is shown in Figure 8.11(b). The SITF value is the slope of the curve in the linear region. This voltage-temperature relationship is important in determining the noise equivalent temperature difference (NETD) of the sensor. Given an rms noise voltage at the output of the sensor, the NETD is
NETD = V_noise/SITF   [deg C or deg K]   (8.21)

The NETD is a sensitivity measure of infrared systems that describes the noise in terms of source temperature variation. The definition and estimate of NETD are discussed in Chapter 11.
8.6 DETECTOR SENSITIVITY
The responsivity of a detector relates the signal level, in volts or amps, to the incident flux on the detector. A large responsivity does not mean that the detector can be used to easily discern small optical signals. An SNR is required to determine whether the signal can be differentiated from the noise. Because a large responsivity may not correspond to a large SNR, the term sensitivity was developed to describe a detector's SNR characteristics. The SNR at the output of a detector can be described [12] as
S/N = (Φ/i_n)·R   [unitless]   (8.22)
where R is the responsivity of the detector in amperes per watt, Φ is the flux on the detector in watts, and i_n is the noise current in amperes. If the SNR is set to 1 to determine a noise equivalent incident power (NEP), we can rewrite (8.22) as
NEP = Φ(S/N = 1) = i_n/R   [W]   (8.23)
If the responsivity is written in terms of volts per watt, then the NEP would be v_n/R. NEP is a function of wavelength and of the optical signal frequency (modulated or chopped light). That is, R = R(λ,f) and NEP = NEP(λ,f). NEP is also a useful quantity in that it provides a noise equivalent flux on the detector so that we can determine the overall system SNR, assuming that we know the noise current or the noise voltage on the output of the detector. A better figure of merit is D* ("dee star"), or normalized detectivity. D* is also a function of wavelength and modulation frequency and can be written as
D*(λ,f) = √(A_d·Δf)/NEP(λ,f)   [cm·Hz^½·W⁻¹]   (8.24)
where A_d is the detector area in square centimeters and Δf is the noise equivalent bandwidth of the system that limits the detector output noise. When D* is specified, it is normalized to a 1-cm² area and a 1-Hz noise equivalent bandwidth. With D*, a larger value corresponds to a higher sensitivity. The unit cm·Hz^½/W is also known as the Jones. In addition, there is a blackbody D*, where the radiant power on the detector is that of a blackbody, so the D* is a broadband response that is a function of temperature and frequency, D*(T,f). For photon detectors, spectral D* is an increasing function of wavelength up to a peak D* near the cutoff wavelength. Figure 8.12 shows the typical spectral D*. The cutoff wavelength occurs where the incident photon energy no longer exceeds the material bandgap energy. The blackbody D* is the equivalent sensitivity of the detector over all of the blackbody emitted wavelengths. The blackbody D* is always less than the peak D* because the detector does not respond to the blackbody wavelengths beyond the detector cutoff.
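A small sketch relating D* and NEP through (8.24); the detector area and bandwidth here are assumed illustrative values, not taken from the text:

```python
import math

# D* per (8.24): Dstar = sqrt(A_d * df) / NEP, so NEP = sqrt(A_d * df) / Dstar
A_d   = 0.005 ** 2  # detector area, cm^2 (a 50-um square detector, assumed)
df    = 1000.0      # noise equivalent bandwidth, Hz (assumed)
Dstar = 1e10        # detectivity, cm*Hz^0.5/W (a typical BLIP value from the text)

NEP = math.sqrt(A_d * df) / Dstar
print(f"NEP: {NEP:.2e} W")  # ~1.6e-11 W for these values
```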
Figure 8.12 Spectral D*.
One of the more common questions addressed by sensor designers and analysts involved with the use of detectors and SNR calculations is: "Does my SNR increase or decrease with a larger or smaller detector?" While this may seem like a simple question, the answer is, like many answers, "it depends." We consider the solution as an exercise in the detection of resolved and unresolved sources. Consider a simple sensor with a single lens and a single round detector, as shown in Figure 8.13. The sensor is a large distance from the source, so that the image of the source is one focal length from the lens. Also, the atmospheric and optical transmissions are assumed to be 1 (i.e., there are no losses). Resolved sources (sometimes called extended sources) are those that more than fill the detector's instantaneous field-of-view (IFOV), as shown in Figure 8.13. Because the target is larger than the detector's FOV, not all of the target's emitted radiation that enters the sensor's input aperture falls on the detector. Any radiation leaving the source from outside the sensor's FOV cannot fall on the detector.
Figure 8.13 Resolved source.
The resolved-source example is one that is common in imaging systems because a large number of detectors (i.e., a detector array or a scanned detector) are designed to image a single source. Therefore, the detectors may view only a small part of the entire target. The common problem seen in most applications for the resolved source is to determine the amount of the target flux impinging on the detector. We treat the sensor as a spectral instrument. The first source quantity considered, as usual, is the emittance of the source M(T,λ). If the target is not a blackbody, then the emittance is given by ε(λ)·M(T,λ), where M(T,λ) is the emittance of a blackbody at the temperature of the target. Assuming a Lambertian source, the radiance of the source is given by

L(T,λ) = ε(λ)·M(T,λ)/π   [W/(cm²·sr·μm)]   (8.25)
The radiance here applies to the entire source surface if the emissivity and temperature of the source are uniform over the source area. Only flux from within the source area corresponding to the sensor IFOV is seen by the detector. Therefore, an effective intensity (as seen by the sensor) can be given for the area viewed by the detector. The area of the source seen by the detector is
A_s = (IFOV)²·R²   [cm²]   (8.26)
L \vhere R is the fange from the source to the ~ensor and is given in centimeters. Also, the. small-angle appro~fl~tion is used where the si~' of the I FOV is equivalent to the angle itself when' measured 111 radians. This is a reasonable . . tor - ang Ies Iess-t han~. 5d '4.........-;//.9 apprOXll11atlOn ~, ( ,_.. _. . . c '.- -- r-'", / \..? .:> ~
The equivalent intensity of the area seen by the sensor can now be calculated. The intensity is given by the source area multiplied by the radiance:
I(T,λ) = L(T,λ)·A_s = ε(λ)·M(T,λ)·R²·(IFOV)²/π   [W/(sr·μm)]   (8.27)
This is only the intensity provided by the source area seen by the sensor. A sensor with a larger IFOV views a larger source area and hence sees a larger intensity. However, the given intensity radiates outward from the source over the entire hemisphere, and the flux collected into the entrance aperture of the sensor is only a small portion of it. The intensity is converted into flux on the detector by multiplying the sensor's solid angle by the intensity. The sensor's solid angle (as seen by the source) is given by

Ω_sensor = π·D₀²/(4·R²)   [sr]   (8.28)

where D₀ is the diameter of the sensor's entrance aperture.
The intensity from a flat source falls off with the cosine of the off-axis angle (see Chapter 5), so it is assumed that the diameter of the sensor's entrance aperture is small compared with the range. Now, the flux into the sensor's entrance pupil (and subsequently onto the detector) is

Φ(T,λ) = I(T,λ)·Ω_sensor = ε(λ)·M(T,λ)·D₀²·(IFOV)²/4   [W/μm]   (8.29)
This is an important equation because the power on the detector is not a function of range. With the assumptions given in this section, the equation shows that no change in flux is observed by moving the sensor closer to or farther from the target. The only significant changes in flux from geometric considerations are due to the diameter of the sensor's entrance aperture and the instrument's IFOV. For a rectangular FOV, the process is the same with the exception that the source area is calculated for a rectangular area. The rest of the above procedure applies. Broadband instruments collect flux over some band of wavelengths. To compute this broadband flux, (8.29) is integrated over wavelength:

Φ(T) = [D₀²·(IFOV)²/4]·∫ from λ₁ to λ₂ of ε(λ)·M(T,λ)·dλ   [W]   (8.30)
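As an illustration of (8.30), the following sketch numerically integrates the blackbody exitance over a band and applies the D₀²·(IFOV)²/4 geometry factor. The aperture, IFOV, band, and temperature are assumed values chosen for illustration; the Planck-law constants are standard:

```python
import math

def planck_exitance(lam_um, T):
    """Blackbody spectral exitance in W/(cm^2*um); wavelength in um, T in K."""
    lam = lam_um * 1e-6                           # wavelength, m
    c1 = 3.7418e-16                               # 2*pi*h*c^2, W*m^2
    c2 = 1.4388e-2                                # h*c/k, m*K
    M = c1 / lam**5 / math.expm1(c2 / (lam * T))  # W/m^3
    return M * 1e-10                              # -> W/(cm^2*um)

# Assumed sensor geometry and band (illustrative only)
D0, IFOV = 10.0, 1e-3                 # entrance aperture (cm), IFOV (rad)
lam1, lam2, T, eps = 9.0, 10.0, 300.0, 1.0

# Broadband flux per (8.30): trapezoidal integration of eps*M over the band
N = 200
step = (lam2 - lam1) / N
vals = [eps * planck_exitance(lam1 + step * k, T) for k in range(N + 1)]
integral = sum((vals[k] + vals[k + 1]) / 2 * step for k in range(N))
flux = D0**2 * IFOV**2 / 4 * integral
print(f"In-band exitance: {integral*1e3:.3f} mW/cm^2")  # ~3.119 at 300 K
print(f"Flux on detector: {flux:.2e} W")
```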
Because detector output voltage (or current) can ideally be considered directly proportional to the flux on the detector, we can measure this quantity directly. The output voltage of a detector is determined by placing the detector responsivity inside the integral of (8.30). The NEP for the detector is estimated by
NEP(λ) = √(A_d·Δf)/D*(λ)   [W]   (8.31)
where the detector frequency is operating in a flat-response region. The SNR can now be determined for a waveband:
SNR = Φ(T)/NEP = [D₀²·(IFOV)²/(4·√(A_d·Δf))]·∫ from λ₁ to λ₂ of ε(λ)·M(T,λ)·D*(λ)·dλ   [unitless]   (8.32)
Finally, we can simplify (8.32) by writing the detector area as π·d_det²/4 and setting the IFOV to d_det/f. In both cases, d_det is the linear diameter (in centimeters) of the detector. Now the SNR becomes
SNR = [D₀²·d_det/(2·√π·f²·√Δf)]·∫ from λ₁ to λ₂ of ε(λ)·M(T,λ)·D*(λ)·dλ   [unitless]   (8.33)

There are a number of qualitative relationships brought out by (8.33). Some general rules that describe the SNR of a detector viewing a resolved source are:

• A small focal length and a large optics diameter are desired (a small f-number). This relationship gives a squared (very strong) improvement in SNR.
• A larger detector diameter (or root of the area) gives a larger SNR, but poorer spatial resolution.
• A larger detectivity is desired (directly related).
• A smaller bandwidth is desired (related by the root).

Optimization of these parameters yields a system that can see very small changes in source exitance. When the background photon flux is much larger than the signal flux and photon noise is the dominant noise source, we have the case of background-limited infrared photodetection (BLIP). This is the case where the background (nonsource) light on the detector is larger than the signal flux. For terrestrial infrared systems, this case is common. The background objects, including ground, ocean, and sky path radiance, are those of 300 deg K objects. These take up the majority of the FOV. The targets, or sources of interest, in the FOV are small, and their temperatures (and corresponding emissions) are similar to those of the backgrounds, such that the overall flux contributions are small. This situation corresponds to the unresolved source case.
We now consider the case where the source is unresolved (i.e., the source is smaller than the IFOV). For sources that do not fill the sensor's IFOV, the analysis is very similar to the previous analysis. The exception here is that the source emittance is only considered over the size of the source for the signal calculation. The D* is given to be BLIP, where the D* is considered to be viewing objects that are around 300 deg K. Most ground-based sensors can be considered BLIP, as terrestrial objects are close to the temperature at which most D* values are specified. This is the same D* that was given in the resolved case, assuming that
the resolved source was at a typical terrestrial temperature. Consider the geometry of the unresolved target case in Figure 8.14.
Figure 8.14 Unresolved source.
o? \,.c-"
_
l
Small sources are typically specified by an intensity parameter. To obtain a better understanding of the radiometric configuration, however, we start with the exitance of the source. If the source is Lambertian, the source radiance is

L(T,λ) = ε(λ)·M(T,λ)/π   [W/(cm²·sr·μm)]   (8.34)

The intensity of the source is the radiance integrated over the source area. For a uniform radiance source,

I(T,λ) = L(T,λ)·A_s = ε(λ)·M(T,λ)·A_s/π   [W/(sr·μm)]   (8.35)

To determine the amount of flux on the detector, the sensor's solid angle (as seen by the source) is required:

Ω_sensor = π·D₀²/(4·R²)   [sr]   (8.36)

Again, assume that the diameter of the sensor's entrance pupil is small compared with the range. The flux on the detector is now

Φ(T,λ) = I(T,λ)·Ω_sensor = ε(λ)·M(T,λ)·A_s·D₀²/(4·R²)   [W/μm]   (8.37)

Note the flux dependence on range. The broadband flux is calculated by integrating the previous equation over wavelength:

Φ(T) = [A_s·D₀²/(4·R²)]·∫ from λ₁ to λ₂ of ε(λ)·M(T,λ)·dλ   [W]   (8.38)
where, again, this value can be measured directly because the detector output voltage is proportional to the input flux. Using (8.31) for detector background-limited noise, the SNR becomes
SNR = [A_s·D₀²/(4·R²·√(A_d·Δf))]·∫ from λ₁ to λ₂ of ε(λ)·M(T,λ)·D*(λ)·dλ   [unitless]   (8.39)
Where the source is smaller than the sensor FOV, the guidelines for SNR are:

• A decrease in range gives a squared improvement.
• A larger optics diameter is desired and gives a squared improvement.
• A larger detectivity is desired and is related directly.
• A smaller detector is desired and gives a root improvement.
• A small bandwidth is desired and gives a root improvement.
I" ... L-
I····
L· I'
L·.
f'
I
l
~ I" . L.
I
A BLIP scenario assumes that the target signal is small compared with the background signals. The background signal limits the detection of the signal through the specification of D*, where D* is usually given for 300 deg K backgrounds. If the background is cooler, D* becomes higher, and if the background is hotter, D* becomes lower. In some space applications, where a small signal is discerned from backgrounds that are extremely cold, the D* can be extremely high. If a detector is viewing a small target inside a furnace, where the background is high in temperature, then the D* can be very low. BLIP conditions are usually satisfied in most tactical scenarios and many strategic scenarios. Images with ground, ocean, and horizontal atmospheric backgrounds provide a large majority of BLIP conditions. However, space and vertical atmospheric backgrounds are cold and are characterized by different BLIP conditions.
The resolved SNR and the unresolved SNR can be compared directly if BLIP conditions are met. This would mean that the source temperature for the resolved case would be around 300 deg K. For the unresolved case, the background is also near 300 deg K. We also assume that the target signal is not too far from 300 deg K, so that the D* does not vary significantly. It is worthwhile to evaluate the SNR as a function of detector size to determine the optimal SNR configuration of a sensor. We provide this comparison through the following example.
Example
A round object at 310 deg K is positioned in front of a 300 deg K background. The source is 10 m in diameter at a range of 100 m. A round detector is operating in the 9- to 10-μm band, where the D* is considered to be a constant 1×10¹⁰ cm·Hz^½/W for BLIP conditions. The D* can be considered constant over this band and is approximately the same for the 310 and 300 deg K conditions. The signal is chopped with blades at 300 deg K so that no differential signal is seen
when the detector views the background. Assume that the emissivities of the blades, background, and signal are 1 (i.e., no losses). The noise equivalent bandwidth is 1,000 Hz. The optical system has a 10-cm entrance aperture diameter and a 20-cm focal length. Assume that the optical transmissions of the atmosphere and the sensor are 1. Determine the optimal round detector size for the largest SNR, along with an estimate of that SNR. The solution is provided by using (8.39) for the unresolved case, with band-averaged values for D*, emissivity, and differential emittance, and by using (8.33) for the resolved case with the same band-averaged assumptions.
The differential emittance can be found using the blackbody equation within the 9- to 10-μm band for a source at 300 deg K and a source at 310 deg K. The corresponding emittances are 3.119 mW/cm² and 3.672 mW/cm², respectively, giving a differential emittance of 0.553 mW/cm². If all of the above parameters are placed in the above equations and the detector size is varied, the resulting plot is shown in Figure 8.15.
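The example's differential emittance, and the resolved/unresolved behavior plotted in Figure 8.15, can be checked numerically. The sketch below uses the example's stated parameters with the band-averaged assumptions (unity emissivities and transmissions):

```python
import math

def inband_exitance(T, lam1=9.0, lam2=10.0, N=400):
    """In-band blackbody exitance, W/cm^2 (wavelengths in um), midpoint rule."""
    c1, c2 = 3.7418e-16, 1.4388e-2  # 2*pi*h*c^2 (W*m^2), h*c/k (m*K)
    total = 0.0
    for k in range(N):
        lam = (lam1 + (lam2 - lam1) * (k + 0.5) / N) * 1e-6      # m
        M = c1 / lam**5 / math.expm1(c2 / (lam * T))             # W/m^3
        total += M * 1e-10 * (lam2 - lam1) / N                   # W/(cm^2*um) * dlam
    return total

dM = inband_exitance(310.0) - inband_exitance(300.0)
print(f"Differential emittance: {dM*1e3:.3f} mW/cm^2")           # ~0.553

# Example parameters
D0, f, Dstar, df = 10.0, 20.0, 1e10, 1000.0  # cm, cm, cm*Hz^0.5/W, Hz
A_s, R = math.pi * 1000.0**2 / 4, 100e2      # source area (cm^2), range (cm)

for d in (0.5, 1.0, 2.0, 3.0, 4.0):          # detector diameter, cm
    A_d = math.pi * d**2 / 4
    snr_res = D0**2 * d * dM * Dstar / (2 * math.sqrt(math.pi) * f**2 * math.sqrt(df))
    snr_unres = (A_s * D0**2 / (4 * R**2)) * dM * Dstar / math.sqrt(A_d * df)
    print(f"d = {d:3.1f} cm: resolved {snr_res:8.0f}, unresolved {snr_unres:8.0f}")
# The smaller of the two values applies; the curves cross near d = 2 cm
# (SNR just under 20,000), matching the peak of Figure 8.15.
```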
r,."
,
:.!
SNR 20,000 18,000 16,000 .,r .
14,000
"'; L . ,<-:;;.)
12,000
I
10,000
Resolved
8,000
. .•J"
Unresolved
,,)
6,000
'l
:.r
4,000
r
i
..i
2,000 -
o
o
1..
.'J
0.5
1.0
1.5
2.0
2.5
3.0
3.5
J" .
4.0
Figure 8.15 SNR as a function of detector diameter.
\\"irh a small detector size, the source is resolved so that the signed grows with an increase in detector diameter. Once the detector diameter reaches the angular . subtense of the source. the SNR is at a peak. The source subtends an angle of 5.7 de£! that is matched bv a 2-c/11 detector diameter at a focal length of 20 cm. At this detector diameter. the SNR is just under 20,000. As the detector size becomes larger. the source becomes unresolved. The source power decreases with the source area where the noise decreases with the root of the source area, so the SNR decreases. An important SNR lesson in this example is that a detector's SNR is at a maximum when the detector is matched to view the source area. _
8.7 DETECTOR ANGULAR SUBTENSE
While the SNR is maximum when a detector is matched to a source, a single detector sample of a source does not provide an image of the source. That is, a single radiometric sample of the source exitance cannot be mapped into a reasonable spatial depiction of a target. A large number of detector samples is desired across the source so that a high-resolution picture of the source exitance can be constructed. The detector angular subtense (DAS) was developed to describe the resolution limitations of the detector size. For a square detector, there are two DASs: a horizontal DAS and a vertical DAS. Figure 8.16 shows that the DAS is the detector width or height divided by the focal length.
Figure 8.16 Detector angular subtenses.
For the horizontal DAS of a square detector, the DAS is the detector width divided by the effective focal length of the optics:

α = w/f   [rad]   (8.40)

and the vertical DAS is the detector height divided by the effective focal length:

β = h/f   [rad]   (8.41)
The horizontal and vertical DASs are usually specified in milliradians; the quantities described in (8.40) and (8.41) must therefore be multiplied by 1,000. The DAS describes the very best resolution that can be achieved by an EO or IR system due to detector size limitations. Two point sources of light separated by 0.1 mrad cannot be resolved as two sources if a system has a focal plane array with DASs of 0.25 by 0.25 mrad.
Example
Two stars are separated by 2 mrad and are imaged by an EO sensor with a CCD detector array. The CCD array has rectangular detectors that are 35 μm in width and 50 μm in height with a 100% fill factor (this means that the detectors are placed side by side with no gaps between detectors). The sensor has a focal length of 20 cm. Given that the sensor resolution is detector limited (the optical blur diameter is smaller than the detector), determine whether the two stars can be discriminated. The solution is determined simply by taking the DAS and comparing it with the star separation. The horizontal DAS is α = (35×10⁻⁶ m / 20×10⁻² m)×1,000 = 0.175 mrad and the vertical DAS is β = (50×10⁻⁶ m / 20×10⁻² m)×1,000 = 0.250 mrad. With a 100% fill factor, the images of the stars would be separated by around 11 detectors in the horizontal direction or around 8 detectors in the vertical direction.
Note also that the DAS for a circular detector is described only by the detector diameter divided by the effective focal length. The IFOV of a detector is sometimes confused with the DAS, but Holst [10] describes the IFOV as the solid angle (in steradians) subtended by a detector. In the tradition of other solid angles, the IFOV would be defined as the detector area divided by the square of the effective focal length. It is quite common, however, to see the IFOV described in the same manner as the DAS, where the IFOV is given as both DASs in milliradians. Both conventions are used throughout this book.
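A quick check of the example's DAS arithmetic using (8.40) and (8.41):

```python
# DAS per (8.40)/(8.41): alpha = w/f, beta = h/f (x1000 for mrad)
w, h, f = 35e-6, 50e-6, 20e-2  # detector width, height, focal length (m)
separation = 2.0               # star separation, mrad

alpha = w / f * 1000           # horizontal DAS, mrad
beta  = h / f * 1000           # vertical DAS, mrad
print(f"DAS: {alpha:.3f} x {beta:.3f} mrad")
print(f"separation: {separation/alpha:.0f} detectors (H), {separation/beta:.0f} (V)")
```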
8.8 SCANNING CONFIGURATIONS AND IMPLEMENTATIONS
A scan mechanism is required for sensor systems that have either a single element or a linear detector array in order to move the image formed by the optics across the detectors. In this way, the detector samples are distributed evenly over the image so that the image can be spatially represented. A scanner is not required for sensor systems that have large two-dimensional arrays. When infrared sensors were first developed and built in the 1970s and 1980s, the detector semiconductor manufacturing technology was not mature enough to build large-format arrays. Therefore, a variety of scanning approaches were derived to allow either single-element or linear-array detectors to be scanned to form an image. Since that time, the detector technology has matured significantly. Today, there are scientific EO CCD arrays that are 7,000 by 9,000 pixels on a single integrated wafer. The
infrared community has developed 640 by 480 arrays in the midwave, while the technology is still somewhat behind in the LWIR. With respect to their detector array configurations, imaging sensor systems can be broadly grouped into two categories: scanning and staring. Scanning systems can be further subdivided into serial and parallel systems. Table 8.1 summarizes the array configurations and scanning types.
Table 8.1
Detector/scanning configurations

Configuration      Serial scanning   Parallel scanning   Staring
Single element     X
Linear array       X                 X
2-D array                                                X
Staring systems, as the name implies, are the system approach where the image plane area is filled with detectors that have been distributed onto a single substrate. With this approach, there is no scanning required, although there is usually some type of periodic shuttering for gain-and-level correction. Staring-array systems offer the potential of a much higher sensitivity than scanning systems because they have much longer integration times. In the SNR calculation of (8.33), the bandwidth is replaced with 1/(2·t_int) for these systems. The higher sensitivity does come at the price of lower resolution because the detector elements have spacing limitations in both the horizontal and vertical directions. Note that a relatively new technique known as microscanning helps to circumvent this problem. Microscanning is a technique whereby the line-of-sight is stepped or dithered to increase the effective spatial sampling frequency.
For serial-scan systems, a scan mechanism is required to move the scene over a linear array of one or more fixed detector elements. The elements are aligned horizontally and scanned sequentially in a two-dimensional rectilinear raster pattern to form an image, as shown in Figure 8.17. This type of system requires a two-axis scanner to produce an image. One variation of the scanning pattern shown is to have a number of adjacent detectors in the active or in-scan direction to help increase the SNR. The number of adjacent detectors is defined as the number of time-delay-integration (TDI) detectors.
Figure 8.17 Two-dimensional serial (raster)-scan pattern.
One line corresponds to the detector output for a single active scan across the image (shown in Figure 8.17). The line direction, or in-scan direction, which is the direction of highest speed (15,750 lines per second for the U.S. RS-170 standard 525-line format), is usually the horizontal direction. The cross-scan, or vertical, direction is slower (60 fields per second for the U.S. RS-170 standard 525-line format). Sometimes the detector line rate and number of lines do not correspond to the RS-170 television standard, so a special display or a reformatter is required to interpolate the line information.
The parallel-scan system consists of a linear array that is orthogonal to the cross-scan direction. This linear array has one or more columns of detectors, while the number of detectors in the vertical direction is sufficient to cover the desired FOV. Three primary types of parallel-scan configurations are shown in Figure 8.18.
Figure 8.18 Parallel-scan configurations: (a) bidirectional interlaced scan; (b) bidirectional interlaced scan with TDI; (c) unidirectional scan with TDI.
Figure 8.18(a) shows a linear array of N detector elements. The image formed by the optics is scanned horizontally across the linear array, during which the detectors are sampled. At the end of the first scan, a slight tilt of the scanner in the vertical direction is applied and the image is scanned across the detector array in the opposite direction. This provides coverage in the region of the image plane that was not covered by the first scan. Each scan forms a field, and the two fields are combined to form a frame that represents a complete image. The field rate for these types of systems is typically 60 Hz, which produces a frame rate of 30 Hz. This scan configuration is used in the common module FLIR, which was developed by the military. The second figure, Figure 8.18(b), is similar to the first with the addition of serial (horizontal-scan) TDI. Two or more detectors sample the in-scan direction at the same positions, and the outputs of the detectors are added together. The noise adds in root fashion and the signal
"
L U
\
t,
,.
!"~i
L
adds directly, giving a √N_TDI improvement in SNR. The last configuration, Figure 8.18(c), includes a staggered array of detector elements along with TDI to allow the entire image plane to be covered with a single scan. In this detector array configuration, the increased number of detector elements in the vertical field of view (VFOV) gives redundant sample points.
There are several associated parameters that are used to describe scanning-type sensor systems. The scanner efficiency, denoted by η_sc, is the ratio of the amount of time that the detectors actively sample the sensor's FOV during one frame time t_s for a given frame rate F_r:
η_sc = t_s·F_r   [unitless]   (8.42)
where t_s is in seconds and F_r is in frames per second. There are two primary reasons for the efficiency being less than unity. The first is that the scanner traverses a distance larger than the horizontal field of view (i.e., dead space or radiometric reference scanning), and the second is that the turnaround time of the scanner is significant because of the mass of the mirrors. Typically, scanner efficiencies vary from as low as 50% to greater than 90%.
Two other parameters are the number of samples per horizontal DAS and the number of samples per vertical DAS. The number of samples per horizontal DAS is sometimes called the serial-scan ratio, N_ss, and the number of samples per vertical DAS is sometimes called the overscan ratio, N_os. There are other definitions for the serial-scan ratio [13]; however, we use the nomenclature described above. These parameters have been called by other terms, such as the number of samples per horizontal IFOV and the number of samples per vertical IFOV. Consider the detectors shown in Figure 8.19.
Figure 8.19 Serial-scan and overscan ratios.
An N_ss of less than 1 shows that a detector sample is taken at positions that are farther apart than the horizontal DAS (or, in linear dimensions, the separation of the detector samples is larger than the horizontal detector dimension in meters). The distance between samples in meters is sometimes called the pixel pitch. The sample spacing for N_ss equal to 1 gives a sample distance in radians that is equal to the horizontal DAS. Finally, the N_ss greater than 1 case gives a sample spacing that is less than the horizontal DAS. An overscan ratio N_os of less than 1 means that the vertical sampling distance is farther apart than the vertical DAS. An N_os equal to 1 gives a vertical sample spacing equal to the vertical detector angular subtense, and an N_os greater than 1 gives a vertical sample spacing smaller than the vertical DAS.
The angular distance between samples is related to the serial-scan ratio and the overscan ratio along with the horizontal and vertical DASs. The horizontal angle between samples is given by
δ_h = α/N_ss   [rad]   (8.43)
and in the vertical direction,

δ_v = β/N_os   [rad]   (8.44)
For staring arrays, δ_h ≥ α and δ_v ≥ β, so that N_ss and N_os are both no greater than 1 unless microscanning is employed. Microscanning takes the staring array and jitters it in a small pattern, as shown in Figure 8.20, so that N_ss and N_os are greater than 1.
Figure 8.20 Microscanning.

The entire array traverses the microscan pattern shown to the right of the detector array in Figure 8.20. The detectors are sampled a number of times (typically four times) during this scan pattern, which lowers the effective sampling distance. Another important parameter is the detector dwell time t_d. This is defined as the time required for a detector to respond to a single point. The detector dwell time is given by
t_d = α/V_s   [sec]   (8.45)

where α is the in-scan detector angular subtense in mrad and V_s is the angular scan velocity in mrad/sec. For staring arrays, the dwell time is sometimes approximated as 1/F_r if the scan efficiency is nearly 1 (the frame transition time is the only nondetection period). Also, the bandwidth of the SNR equations given in this chapter is typically matched to the detector dwell time, so that Δf = 1/(2·t_d).
A wide variety of methodologies (hardware) implement the scanning approaches discussed above. Scanning systems include mechanical, acousto-optical, and holographic scanning techniques. The most widely used are mechanical because they are the most efficient and well developed. There are two basic types of mechanical scanners: parallel-beam and converging-beam scanners. These types differ by where the scanner is placed in the optical path.
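A small sketch relating dwell time and the matched noise bandwidth through (8.45) and Δf = 1/(2t_d); the scan velocity here is an assumed illustrative value:

```python
# Dwell time per (8.45): t_d = alpha / V_s; matched bandwidth df = 1/(2*t_d)
alpha = 0.25    # in-scan DAS, mrad
V_s   = 5000.0  # angular scan velocity, mrad/s (assumed)

t_d = alpha / V_s        # dwell time, s
df  = 1.0 / (2.0 * t_d)  # matched noise equivalent bandwidth, Hz
print(f"dwell time: {t_d*1e6:.0f} us, bandwidth: {df:.0f} Hz")  # 50 us, 10000 Hz
```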
Figure 8.21 Common mechanical scanners: galvanometric scanner and rotating drum.
The rotating-drum scanner is ideal for systems that require a high constant velocity, with which it can operate with very little loss in distortion of the imagery. These scanners are used primarily with parallel-beam systems because, with a converging system, the oscillatory motion changes the focal point and hence causes severe distortion. The disadvantages of this scanner are cost and the inability to start and stop the scanner at a particular time and position. The galvanometric mirror oscillates periodically between two stops and can work with either parallel or convergent beams. This type of scanner is less expensive and more flexible than the drum scanners, but the oscillation becomes unstable near the field edges.
.= ,0"'-'--
y: '..'1, '\1;. "
"
.
_, l
"
'. \
l ,
r
<...(.
,- ' .
.-'....
'-;'
I
~.:1 :
,/
.:'
~
Whether a detector is scanned or staring, the detector integrates the incident flux over the spatial extent of the detector. That is, the output voltage of a detector can be determined by
v = ∫∫∫ i(λ,x,y)·R(λ,x,y)·dx·dy·dλ   [V]   (8.46)
where i(λ,x,y) is the incident image irradiance on the detector corresponding to the spatial image and R(λ,x,y) is the detector responsivity. From here, we are investigating the spatial transfer, so we drop the wavelength dependence, where it is understood that we have the integrated flux for the output voltage. Consider the rectangular detector in Figure 8.22, centered at (x',y'). The detector integrates the incident image flux and the output voltage is sampled. A single output value represents the image flux integrated by the detector:
o(x',y') = ∫∫ i(x,y)·rect((x−x')/a, (y−y')/b)·dx·dy   (8.47)
resulting in a single voltage value located at [x',y']. Equation (8.47) is similar to a convolution, where the output signal can be represented as
Figure 8.22 Detector spatial integration.

o(x',y') = i(x,y) ** rect(x/a, y/b), evaluated at x = x', y = y'   (8.48)
where the substitution is performed only after the convolution is performed. The limitation to a single value can be accomplished by sampling. The evaluation at the single point is a single sample that can be obtained by multiplication with δ(x−x', y−y') and then integration (see the sifting property in Chapter 2). For a staring array, a scanning detector, or a scanning linear array, the detector integrates the incident flux and, for now, we assume that the output is sampled at certain locations. Consider the detector sampling geometry of Figure 8.23.
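A small numerical sketch of the integrate-and-sample model of (8.47) and (8.48), with an assumed synthetic scene and detector footprint (pure Python, no external libraries):

```python
# Detector spatial integration per (8.47)/(8.48): integrate the image over a
# rect the size of the detector, then sample at the detector centers.
W = H = 64  # image size, pixels (assumed)
a = b = 4   # detector footprint, pixels (assumed)
c = d = 4   # sample spacing, pixels (N_ss = N_os = 1)

# Simple synthetic scene: a bright square on a dark background
image = [[1.0 if 20 <= x < 40 and 20 <= y < 40 else 0.0 for x in range(W)]
         for y in range(H)]

def detector_output(xc, yc):
    """Integrate the image over an a-by-b rect centered at (xc, yc)."""
    total = 0.0
    for y in range(yc - b // 2, yc + b // 2):
        for x in range(xc - a // 2, xc + a // 2):
            if 0 <= x < W and 0 <= y < H:
                total += image[y][x]
    return total

samples = [[detector_output(x, y) for x in range(a // 2, W, c)]
           for y in range(b // 2, H, d)]
print(f"{len(samples)} x {len(samples[0])} samples; center value {samples[7][7]:.0f}")
```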
Figure 8.23 Detector sampling geometry.
The detector array is shown, where the detector dimensions are a by b and the detectors are placed at distances of c by d. The array is limited by width w and height h. This geometry describes the majority of detector configurations, including the scanned types mentioned in the previous section. The sampling distances c and d are simply a/N_ss and b/N_os, where the sampling distances can describe detector overlapping. The output from the sampled system is a set of discrete values that can be represented by

o(x,y) = [i(x,y) ** rect(x/a, y/b)]·comb(x/c, y/d)·rect(x/w, y/h)   (8.49)
To understand the image transfer in linear-system terms, we take the Fourier transform of (8.49):

O(ξ,η) = {[I(ξ,η)·ab·sinc(aξ, bη)] ** cd·comb(cξ, dη)} ** wh·sinc(wξ, hη)   (8.50)
(8.50) describes many aspects of detector sampLng and provides 3 good deal of guidance in system design. If we assume an infinite detector array (not limited by lV and h), then the output spectrum is the image spectrulll multiplied by a sinc function and convolved with a comb. Figure 8.~4 shows a representation of the spectrum in the horizontal direction. ; :"\qU!:;t
L\tL'
1
I
, i
IConvolved With)
• I
I
'.
lie
,NyqUist rale
Nonaliased
, ,
'
Aliascd ".1
,'j '1
J
f. I
.
Figure 8.24 Detector output signa! spectrum, The sinc function has a width of 2/([ to the first zeroes. It is gt'!1er::-,i!y assumed Ihat the energy P.1st the first zeroes is insignificant. but in rea!it~\ i" ~ilOUlJ ~e given consideration. If this function is convolved '.\ ith the cenl!' fur~t.ion as in Figure 8.24, overlapping occurs (i.e .. aliasing) if f/u is larger th3!l /!~\" .:"150, the figure shows the case where I/o is equal to I/c (this is the case \vh(;re ~he sJ!11ples are spaced at exactly the detector width) and significant aliasing occurs. The shnded region shows the folded detectar spectrum that corresponds to high spatial frequencies being represented as aliased lower frequency signals. This c[!st' corresponds to a nonmicroscanned staring array with spacings equal to the detector
J--,
Dcleclors und Scanners
widths, To avoid or control aliasing and spurious responses, three characteristics can be exploited. . The first characteristic that we discuss is the least likely to be , controilable. If the input scene (input to the sensor) has a spatial freq!uency bandlimit that ensures the highest spatial frequencies are less than 1I2c and 1/2d in cycles per meter (note that c = /,,'.I·sa and d = Nosh) or less than N.n/a and No.JP in cycles per radian, very little aliasing occurs. This requires that the scene vie\yed by the sensor has negligible spatial frequencies above these requirements. For high-altitude surveillance sensors, this is extremely unlikely. For tactical close engagements, it is more likely. Closer objects have a lower spatial frequency. content. A better approaG:h is to Iim it the input spatia! frequencies by the optical design. This is to say that the optical MTF has cutoff frequency of d ll[1fic./"A that matches N.I)a or So)P in cycles per milliradian. Finally, tne best approach is to oversample the detector output with at least two samples per DAS in both directions. This ensures that little aliasing and few spurious responses occur. In · .this situation. the lim iting spatial frequencies are no longer N.I·.Ja and N{).J~, they are the first zeroes of the sine function, I/a and I if3. The best design includes an oversampled detector with an optical design that matches these spatial frequencies. With any of the three above requirements met the transfer function of the detector can be approximated by
L L
L
f;' l e-
LL
(8.51)
I' [
c
• I (
211
, L
I
In-here the DAS are given in milliradians and sand 11 are given in cycles per ,milliradian. In using (8.51), be careful to meet aliasing requirements. This equation · is commonly used with systems that inherently have significant aliasing, thus ·providing poor result::; in the estimation of system perfomlance. It is difficult to ensure meeting nonaliasing requirements. For commercial '(CDs, infrared 5l:aring arrays, and even fl.')r second-generation FLIRs, oversampling by a factor of two cannot be achieved so the sensor design results in limited aliasing. The reduction in input spectrum by the optics effectively reduces the resolution in the output image. Some aliasing can be tolerated and image quality is a tradeoff betvveen resolution and aliasin~. Some guidelines are given in Chapter 10 for these' tradeoffs in tenns of human perception of imagery. Finally, we did not consider the last function in (8.50), the window (unction that descri:-.es the number of detector samples in the spatial an"ay. Note rhat for a large arr~1y. the sine function width is ~mall. The convolution of any function with a function of SIn31l width decreases the resolution of the function I.lnly by a small amount. Remember that convolu:ion with the zero-width delta function yields the exact function. A staring arra;. of 1,000 by 1,000 detecto:-s, where the detector spacing is very close to the dc'tector width, produces a sine function 1/ 1.000th t.he width of the detector sine. The convolution of the Olltput spectrum \vith the window sine results in very little resolution effect. \\'ith an array
ir. iTOductiol1 to lnfi"ili'ed and Elecr:-o-Optict! Systems
F!.!
. - - - - - - - _ . _ - _..---_.
-
of 4 detectors bv 4 detectors, the convoluti,'11 could affect ~:le resolution :;ignificantly. '
~-1
I
:~~~ .-\. longwave (assume a 10-pm wavelength) i:~frared sensor has an entrance 3perture diameter (entrance pupil) of 6 em anj a horizontal det::ctor t:ngular 'subtense of 0,1667 mrad. The horizontal sampt ing frequency is 6 samples per milliradian. If the system images a point source, determine th:: percent of horizontal aliasing of the signal.
--',.
,
I
,I MTF_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _---. 1.2 .,--_ Sampling freq
Nyquist
1.0
I
I
I
I I
0.8
Detector/dptic5 product
I
I
.. 1(\
Detectdr t\lTF
0.6
Optics t\ITF 0.4
,
i
o i
i -
~-----i--
-0.2
o
,.
-',
0.2 _
2
I
4
6
8
..
10
-:-l , .~..-, l
Aliased signal
-
["
f
-0.4
-~ l
Spatial Frequency
,
,
[
Cycles/rnraJ
r
,.-L ,
·'.J'Y
r'
Figure 8.25 Ex;).mple sensor. ,
The solution (Figure 8.25) can be detennined by representing the point source as a delta function. Therefore, the input is i(x,y) ::: 8(x,y). This is a separable function so that iC-~)::: 8(x). The spectrum of the input is detem1ined by the Fourier transform of th~ spatial input. so that f(~) ::: ! . That IS, the spectrum is uniform over ail frequencies. The detector output signal is
;I I i
,.J I . where hop1ic.l is !he optics imjJulse response, krectCi) is the detector impulse response,o. is the horizontal DAS, and 6x is the angular distance between samples. The only limit 10 the output spectrum of the detector is the optics MTF and the detector MTF, before; detector sampling. In the frequency domain, the output spectrum is
i
I"\..
J:
Delectors wui ,)'canners
.~
I,
,
.-.1
6~'
,,:,
'.,
i, i
t: c'
;
(
...._... ;: .. ;.,
1
l...;...
[
I
I,
223
The optics MTF is assumed to be difh·action,·lilllii:ed C'!'()dl Chapter 7) and is plotted in Figure 8.25. The detector MTF is plotted in [ite saille figure for the given detector angular subtense. The optics and detector iVlTFs are multiplied before the sampling function. The sampling is characterized by the comb function (ref. Chapter 2), where the comb spikes are located at the origin and then every 6 cycles per m ilIiradian apart. rf the detector output spectrum is greater than 3 cycles per milliradian apart (this is the spatial Nyquist rate associated with the system), the' convolution of the detector output spectrum results in an a1iased 'signal. The amount of aliased signal, for the image of the point source, is the shaded area shown. This area divided by the total area of the output spectrum is calculated to be around 9%. We assume here that the atmospheric MTF is negligible. Because the sampling occurs after the optics MTF and the detector \1TF (i.e., the sensor front efld), the signal is aliased at this point in the system \-JTf cascade. The optical MTF, the detector MTF, and the sample spacing can be adjusted to reduce or eliminate aliasing. This is a common practice in sensor design, where the optics, detector, and sampling characterisLi':::5 are matched w reduce aiiasing while providing good image transfer. The ~!e(;tr0nics, dispiay, and human perception MTFs are perfonned after the scenei~))arnpkd and cann<)t eiiminate the aliasing .• 8.10 INFRARED DETECTORS
,': l~ I.
! il,_ ! I
;
,,\...:..... i
!
!
I ',1 I L .
I
L
Research into infrared detection bi-;gan oYer SO years ,.(go before Vv'orld War II, although few resources were devoted to this an!!:l, i1.~, a significant amount of research funds was devoted to radar technology" \Vhiii:;; r~Sc(:~j"(h continued over the years, it was not until the 1960s ti1at'.l)c first Fi.ll~ s>:: :isms VI/ere developed and fielded. These earliest FLIRs were :-ieri:.d scann~~rs, which were discussed in Section 8.8. The technology continueJ tn i:',atun-; iii tt.:.=; 19 ms and 19305, which resulted in the common module or firstg(:;neration ,FL: R. I'll,.:: ,)C!lSt)j'S were based on linear mercury cadmium telluride n-!gCdT:~) rki(';C <, ;!;'\ J') ':,:\a~ ':Oi1ta:~ned 60, 120, or 180 elements. These detector arrays ait (:iI2,\ic:~p.ri,:d! hy. !DW Juantum efficiency, limited dynamic range and s::nsitjvity, si:';le-bdnd ('~,~'er3tion (8 -12 ~tm). and are sensor-noise-limited. The late 1980s and early 199(l~: saw t::f,; ,:,'t:!upil1f:;nl 1)( rh'": second generation FUR, These were typic;:l1iy htl::;cd on ,ill "'''CAY thai was ',~8D ciements long and two or more detectors in the hC,l'I7.0'1tCli dit"r"_~iOfl "\)i' TD1. The:j~ FLIRs provided a number· of improvements o"er fir,'31"'.!!,fi ':::i":ij';:l S~ilS()rS, including improved resolution, seJlsitivity, dY!1(1;1 1ic r(ll1g~) ,I:; :~:'!;;'I tity, and detector performance approaching that of BLIP, Third generation FLIRs are still iiI j!1c j'::;jC;j :h:!lddc'l~lopment phase, with . only brass-board prototypes COl1lph:;JFrl. i'b(~ ~:' [I'd ·:plcration ')ensors a(e ~,
'
i
.2.:.'./
._---_.. - - - - - -
..
__. _ - - - - - - - - - - --
----.
.•...
primarily defined by large format arrays (e.g., 6·W by 480 dt"lectors) operating either in the \IlWIR or LWIR. These sensors will provide incrt"3sed performance over second-generation FURs in the are;}s of dynamic rJl1ge, uniformity, sensitivity, and resolution. All three detector aITays discussed above require cryogenic cooling. The future growth for infrared detectors is expected to included multicolor arrays (i.e., a detector array that can detect in both the midwave and the longwave) and increasing usage of uncooled detectors (both pyroelectric andbolometric) as their perfonnance continues to improve in resolution, responsivity, response time, and inherent noise. • I 1,
, : f.:-t)
~!/jl
f I
-!.I'
I
8.11 ELECTRO-OPTICAL SYSTEMS
CCDs are cUITently the detector of choice for EO systems. Since they were invented in 1970, they have replaced photoemissive tube devices and have evolved to a sophisticated level of performance. They are likely to progress for many years to come, however Fossum [14J states that they are likely to be replaced in the long tenn by active pixel sensors (APS). APSs are defined as detector aITays that have at least one transistor within the pixel unit cell. APS sensors will eliminate the need for perfect charge transfer. That is, amplification occurs berore the transfer of charge. APS sensors have been constructed at Toshiba that have 640- by 480element aITays with good performance. In the meantime, 2,048- by 2,048-pixel CCOs have been constructed [15] with dual-color focal plane aITays that operate in both the 400- to 640-' and 640- to BOO-nm bands. In fact, the world's largest singleband CCO was recently constructed [16] by Phillips at 7,000 by 9,000 pixels on a single wafer. The charge transfer efficiency (CTE) is currently 0.9999995 for a 1,620-electron charge packet [17J. Readout noise is down to 2- to 3-electron rms for systems with on-chip amplifiers. The largest arrays sizes that are physically possible (assuming ,current w~fer sizes and 5-11m pixels) is 16,000 by 21,000. Also, the quantum 'efficiency is better than 0.5 over most wavelengths of interest. The near term will bring a large number of improvements to CCDs. A .4,096- by 4,096-CCO array is being developed (the cinema CCO) to replace film I in motion pictures. A 4,096- by 4,096-detector camera is also being developed for deployment on the Hubble telescope. The pixel dimensions for both of these CCDs will be 9 pm' on a side. Dark current will be reduced further and the well capacity will' be further optimized for particular applications. Finally, high-speed clocking will continue to be developed. The pace of technology today yields many surprising developments in extremely short periods. EO systems have an added development thrust (over infrared systems) in that they are used widely in commercial and consumer applications. An example is the digital photographic camera, where the market continues to grow and the price of the system continues tn decrease. These and many other EO systems, will continue to improve at rapid rates.
I
~~;~n"'!~~~~_ff:"'~~~. -.-,....... ... __ ~ ........ ,_~ _____ ~_.~_._ !W ..
.....
,.~-.-r---
•. _. __ •
j .. ~
:
'J'.'
!: .
. '-'/'"
..
;.
~:-'-
r l.'-.,,' )
\
I
Detectors and Scanners
fT.'.
IJ
225
rU :e
rL i
,
:'["
: 2,;
:.d
~.>~': . ", ..
,
i
I
:
!':L.
I,
;I: :
i
f..' :-~
L~ I '. . L -. L
8.12 NOISE
Detector noise was treated in a general sense in this chapter, with detectivity providing the means to calculate the noise signal on the output of a single detector. Overall system noise includes photon noise (from both the signal and the background), fixed-pattern noise, generation-recombination noise, shot noise, Johnson noise, l/f noise, and many other noise sources. Usually, one of these noise sources dominates the SNR of the system. The cause and impact of each of these are very different. Photon noise is the fluctuation in detector output signal due to·the discrete arrival of photons, and the corresponding electron state' transition. For scanning single- and linear-detector infrared systems, the photon noise from the background flux is usually the performance-limiting noise source .. These scanning systems account for a large majority of longwave infrared systems. However, a large number of midwave infrared systems are currently built in the staring-array configuration. These systems are typically perfonnance-limited by fixed-pattern noise. Fixed-pattern noise is the spatial "fixed" noise caused by the nonuniformity in the detector (and electronics) responses over the array. Each of the detectors in an array has its own input flux to output voltage characteristic. These characteristics are caused by detector material VariatIOns, impurities, imperfections, electrical circuit tolerances, and so on. The result is a fixed, norsy pattern on the output of the sensor when viewing a unifonn scene. In the infrared, the fixed-pattern noise is treated as an increase factor of the background-limited case. In the EO wavelengths, the fixed-pattern noise is treated as an nTIS electron count in the output of a CCD charge transfer. The other noises described above are those· associated with readout electronics (generation-recombination noise, shot noise, Johnson noise. and l/f noise) and are described in the following chapter dealing with electronics. Noise is treated differently in EO systems and infrared systems. In EO systems. the dominant noise sources are readout noise (includes Johnson, l/fnoise, and other electronic noises) and fixe'd-patten1' noise. In infrared systems, the dominant noise source is the background photon noise (i.e., BLIP). Also, the approach to noise analysis in EO systems is in equivalent rms electrons in a charge packet. In the infrared. detectivity is'used to detennine an NETD and a minimum resolvable temperature difference (MRT). For fixed-pattern-noise-limited infrared systems. a three-dimensional noise analysis is perfomled that modifies the NETD values. Because the noise treatments are different in EO and infrared systems, we reserve' the treatments until Chapter II ("Infra['ed Systems") and Chapter 12 (" Electro-Optical Systems").
Introduction to Infrared and Elec\!,.Optical S>'stems ----
------,~----.-------.---- ...- - - - - . - - - - - - -..
_.
-------_ ..-------------------
r.
EXERCISES
8.1 Describe the differences between photon and thermal detectors, Provide a list of applications for both photon and thermal detectors. 8.2 Determine the bandgap energy of a detector material required to I.ietect I O-~lm monochromatic light. What occurs with a larger bandgap: A smaller
,; I r .
II ' f-'l
'
t'I,\ .
bandgap? 8.3 A Mercury cadmium teluride (HgCdTe) detector has intnnslc concentration s.uch that II" and np are both 1 x 10 13 per cm). The detector has a quantum efficiency of 60% once it is cooled to 77 deg K. For a 50-~lm detector depth, determine the change in voltage with respect to incident irradiance. 8.4 Determine the gain for a photomultiplier \vith six stages and a secondary emission ratio of 5. 8.5 Describe the difference between radiant and photopic responslvlty. Why is photopic responsivity used and what are the risks associated with its use? 8.6 A round InSb detector is 1 em in diameter with a perfect output filter of I kHz in bandwidth. Determine the noise equivalent power of the detector. State' your assumptions. 8.7 Derive the SNR equations for the resolved and unresolved source cases where the detector is square and the source is square. 8.8 List all possible noise-reduction activities that apply to both the resolved and unresolved source cases. 8.9 Calculate the vertical and horizontal detector angular subtenses for a 30- by 40-ll m detector located at the image plane of a 25-cl11 focal-length lens. Given that the detector is sufficiently sampled in both directions, how can the detector modulation transfer function be improved?
.i .' . "1-" ;'. f
8.10 Explain \vhy second-generation infrared sensors are BLIP limited as compared with first-generation sensors.
I
L .'
:. ~!'
o·
8.11 For a bidirectional scanning sensor system that has a 60-Hz field rate (30-Hz frame rate) and 480 samples in the scan direction, detennine the detector dwell time. (Assume that the scanner efficiency is unity.)
Deleclors and Scanners
227
REFERENCES [I]
[2]
[3 ] [4]
[5] (6]
[7] [8] [9] [10]
Pinson, L., Electro-Optics, New York: \Viley, p. 103. 1985. The Photonics Dictionary, Pittsfield, MA: Lauren Publishing Company, 1996. Dereniaj<, E., and G. Boreman, Infrared Detectors and Systems, New York: Wiley, p. 86, 1996. Crowe, D., P. Norton, T. Electro-Optics Components, Handbook, Bellingham, WA: streetman, B., Solid Slate
Limperts, and 1. Mudar, Detectors in the Vol. III of the Infrared and Electro-Optics SPIE and ERIM, p. 177,1993. Electronic Devices, Englewood Cliffs, NJ:
P:-entice Hall, p. 65, 1980. Hergert, E., "Expanding Photodetector Choices Pose Challenges for Designers," The J996 Photonics Design and Applications Handbook, Pittsfield, MA: Lauren Publishing Company, p. 119. 1996. Grum, F., and R. Becherer, Optical Radiation ;Heasurements, Vol. 1, Academic Press, 1979. Finney, P., "IR Imaging With Uncooled Focal Plane Arrays;" Sensors, The Journal of Applied Sensing Technology, pp. 36-52, Oct. 1996. Saleh. 8., and M. Teich, Fundamentals of Photonics, New York: Wiley, p. 664, 1991. Holst, G., CCD Arrays, Cameras, and Displays, \Vinter Park,
FL: JCD
Publishing, 1996. Aikens, R., "Replacing Charge-Coupled Devices in Many Imaging [I I] Applications," The Photonics Design and Applications Handbook, Pittsfield, [12] [13 ]
MA: Lauren Publishing, 1996. Dereniak, E., and G. Boreman, Infrared Deleclors CInd Systems, New York: \Viley, p. 152, 1996. Shumaker, D., 1. Wood, and C. Thacker, FLIR Petjormance Handbook, DCS
Corporation, 1988. [ 14] Fossum, E.. "Acti\'e Pixel Sensors and Are CCDs Dinosaurs?" SPIE Proceedings: CCD and Solid State Optical Sensors; San Jose, CA, 1993. SWbbs. C, ct al., "A 32 Megapixel Dual-Color CCD Imaging System," SPIE [15] Proceedings: CCD nne! Solid State Optical Sensors, San Jose, CA 1993. Holloran, M.. "7.000 by 9,000 Imaging on an Ir,tegrated CCD V/afer," [ 16]
A Jvanced Imaging, Jan. 1996.
[17] lanesick, Land T. Elliott, "Sandbox CCDs," SPI£ Proceedings: CCD and Solid State Optical SC.'nsors 5, San Jose, CA 1995.
I .
f' ,~
,
J' \
,
,-~
l
,
f'
r L
L:
L I,L (L I \
.
"l Chapter 9
11
Electronics
L
rL I' [
L.
i
I l. 1., I,
The function of electronics is to transfonn the de~ecto(.output into a signal that can be view~d or proce,ss.ed (Figure 9.1). This ,tra~ifomiation must be accomplished • .~)\ ,~('~ .~u.-> /''I'--''\vJ\ with mInimal degradatIon of system pert6nnance. Issues of primary importance are low noise, high gain, low-output imIfed~fi'ce, large dynamic range, and good linearity. While there are entire books written on sensor electronics, we will provide an introduction along with simplistic models that will pennit the inclusion of electronics in a system-performance estimate.
r------------------- I
I:' ~o.
'
r
i() oP';C~c~(te
L._.
J
[ : L_
~. .
I, '-. i
!
L 1
~
Electronics
I:
i :
~:::T:!i' Automatic target recognition I
. .£fl\ ~7f.
Atmosphere \ - - - - - - - -
';~::;,;r:, _---1ATIDJllQff~.9L-.j.
sensor;),!l You are
L:
Human vision I
1
I
here~
Targets and background
Figure 9.1 System electronics component.
i·'
l L J
l'
, i l_.
{,
L
9.1 DETECTOR CIRCCITS
1...
A large number of detector circuits are used to provide a voltage or cun-ent that is proportional to the in-adiance on a detector. Bipolar.. \ JFET, and MOSFET technologies are all used in detector amplification. We fO(o5 here on operational amplifiers (op-amps) because they are extremely common in detector circuit design_ The circuits given here are useful as amplifiers for photovoltaic, photoconductive, bolometer, and pyrometer detectors.
----------------_._-----229
Introduction to Infrared and E It'ctm-Optic;\l Sy,'.k'ms
230
'
:,
i•
--).x \ ,':;}\ '.. t ~
I
' ",)1' I
(3 '~-:---------'------'----
~
r
------------
/
{ -', For photov91taic detectors, there are (\\'0 types or circuits that can be used as detector pream1lifiers: the \'oltage-rnode l'irCllil and tilt: current-mode cirqlit. The photovoltaic detector can be modeled v/lth (l voltage-current relationship that is very similar to that of a diode, The difference benveen the light-sensitive diode and the typical diode is that the voltage-current relationship shifts with detector illumination. The magnitude of the voltage !s related to the number of incident photons on the detector. This voltage-current relationship is the nonlinear diode
· '!
·ii · : "
Ie',!
~
: iJ
1 !
<:
'
equation d'
" 1 = I.\' [e IlkT
-
(9.1)
[Amps]
I]
'f
where ~
1.1' is the detector saturation
:l
cu~t
in amps e is the charge on an electron (1.6x I 0'19 coulombs) ~ is an efficiency factor (l for ideal, and 1 or 3 for the real case) 3 k is Boltzmann's constant (1.38 x 10-: J/deg K) T is the detector temperature in degrees Kelvin.
r-'
r
',
I.
· Ir ~.l
,J
While voltage is proportional to the number of incident photons, current is not • linear with the number of photons. Consider the voltage-current relationship and the voltage-mode circuit shown in Figure 9.2. The circuit is the standard noninverting op-amp circuit where the input current is negligible (recall that one of the ideal op-amp characteristics is that no current enters the op-amp). When the current is held to zero, there is a voltage change when the detector is illuminateci. (This is illustrated in the voltage-current relationship of Figure 9.2.) Detector V-I characteristics
Det~or
~_ _ _-l
#~-="w ioaht ~
+
+ "4 .1
0
/'0
"r
Illuminated
T'
21
_-!.L--x''-----+V
I
-l--
I.
L
'il.'· !~
Voltage mode I
":1"'" .-
Figure 9.2 Voltage-mode detector circuit.
,
r
t~ "
"'-
t I
.
L
r
d
.~~
I
'U it. I
tl' j
,[
Electronics
The ideal op-amp has an infinite open-loop gain. When the op-amp gain is sufficiently large (i.e., much larger than the closed-loop gain), the gain of the circuit can be approximated by the closed-loop gain equal to ~
Ode ::1" •
Itt ode
I.: ·1 '
23/
(V
, "F' ~ .' l,>/,.
... ~\.J
..,..'.r
'
"
i
A l '-I'
~
VlJe/
-
-
ZI+Z2 ZI
(9.2)
[unitless]
'v9
'/;': "C'1
where ~ 1 and Z2 are the impedances shown in the blocks of Figure 9.2. For an ,op-amp with an open-loop gain that is not much larger than the closed loop gain, .' the voltage gain of the circuit is (9.3)
[unitless]
I
I
l ,
,
!
1
and A OL is the op-amp open-loop gain. The output voltage of this circuit is in phase with the input voltage. Also, the voltage-mode circuit is desirable when a high-input impedance and a low-output impedance is desired. ~ Jv v/J The second type of photovoltaic detector circuit is the...) .......current-mode D detector circuit shown in Figure 9.3. Recall that another 'lci~gL op-amp characteristic is that the voltage across the input terminals of the op-amp is essentially negligible (i.e., grounded in the configuration shown). Forcing the voltage to approach zero causes the detector to establish a current when' the detector, is illuminated, as pointed out on the detector voltage-current relationship graph. The input impedance of the detector is not used in the determination of the gain. For an ideal op-amp, where the open-loop gain is large compared with the closed-loop gain, the amplifier voltage gain is ~~. \.~
n l
'e'. '" . ') i~
f is ,
i
:he A 1 '--
~ VDet
--
_Z2 ZI
[ unitless]
(9.4)
Detector V-I char:lcteristics I
D'~
t
l
Illuminated
? ~
:-\ onillum ina ted --jw---.L-----~
V -I
f, . Figu re 9.3 Current-mode detector circuit.
This type of circuit is used for high~gain, nominal-input impedance, lUlU "'" output impedance. However, there is a signal inversion (i.e., the signals are I SO c1eg out of phase) from the detector voltage t.o'the uutput voltage.
!"" :~.
I
Example
(
Determine the voltage gain for the detector amplifier shown in Figure 9.4. Assume that the operational amplifier is an ideal amplifier (i.e" large open-loop gain). The solution is given as the current mode gain of (9.4), \vhere Z I is the impedance of R I and C I in parallel. The impedance Z2 is R2 and C1 in parallel. The impedance of a resistor and capacitor in parallel is
where
(t)
. i
,
is the angular electrical frequency. The voltage gain of the amplifier is
""I , 1
.-l: ~, r
Note that when the frequency is small, the voltage gain is -R11 R I , and when the frequency is large, the voltage gain is -C 1/C2. t. ' I~
r'
I
l ,
r. l
Detector
I
J
-r
r •
+
l
, .. ~'
L
r
J
,C
I
I
L
:.A
Figure 9.4 Example detector circuit.
___ -..-.-__ .... - .... _r"'_~_ ......~.,..-- • ..,,_~_._ .... ,_.
. ...•.••. ----~~- ---- .... __ . __ .• --
n'---'~'"
......
""0
".,
~
:;,:
..
I J'
,
Electronics
233
i<:~I)
~y
The photoconductive detector provides a change in condllctivity with incident phot~;;';vJhe change in conductivity is inversely proportional to the change irir~isti~ity, so the detector can be modeled as a resistor. Therefore, the detector has a variable resistance that changes inversely proportional to the number of incident photons. The detector circuit is shown in Figure 9.5.
L
' L .
-
,f
,
.
f
,
. I
t-
c
,
~
v
...-_-+ /t-______o
,, "
v
'L "
R
L
s
Det Pream pIi fier
,,I "
" L, ,-"
f
l
>
- - i .
f '-
Figure 9.5 Photoconductor circuit.
L,
If the circuit is operated in constant source current mode [1] and output voltage is proportional to the change in detector resistance:
RL
»
RDt!/,
the
i
\
,
>
4
r.:··
-
I ,.
1
L
(9.6) can be written in terms of the detector parameters glven a sinusoidal
irradiance incident on the detector:
.! \ . )
dy
",'
VI" = . .,)'
I':' ,.
L" . ,
'.
, I
L f
L
L,
(9.6)
[V]
(.I
I
9'~ ),,\ 1
,'D, ',i:'
"J v
"/
A"11qiDerT/~V]
I'RDer RL +IlDer N(l +00 2 (2) 1/2
J'1 't'
\1..(:11
y~'
cJ
~
)."'.,v
" / ; ,0;
(9.7)
'--,
)
The detector characteristics are: Ad is the area of the detector, 'll/ is the quantum efficiency of the detector in converting photons to electrons, EDel is the incident photon flux on the detector in photons per square centimeter per second, { is the average free-CatTier lifetime, N is the total number of frte charges in the element when there is no incident photon flux, and (D is the angular modulation frequency. The circuit has a capacitor on the output so that the circuit is ac-coupled. That is, the output voltage only represents the changes in the detector resistance. The capacitor can be removed to provide a signal proportional to the detector incident flux.
Introduction to Infrared and Electro-Optical Systems ,
-
l; ,
J
-
J
-r,-\- - - - '
',Vi }\-"
-.
,
/r
.
.
r
, i
'
1 he boloni.eter can also be modeled as a resIstor, but the resIstance does not change with a photon-level conversion. The resistance is a measure of resistive, material temperature (see thermal detectors in eh, 8). At ;1l1Y rate, the circuit shown in Figure 9.4 can also be used for the bolometer circuit. The capacitor
j ,
defines the time constant of the system and must be sized accordingly, taking into accOllnt the desired cutoff frequency. The pyrometer is modeled with a capacitor, where the capacitai1ce changes with the temperature of the capacitor. To detect the change in capacitance, the light is chopped \'lith rotating blades ..-\ typical circuit for the pyrometer is shown in Figure 9.6.
i,
:
r '
,
i
I
"
RL
'" i .
V
.. ,Ie'
s
V~
0
V
C
Oct
s
b
,,
0 D
[
,I "
,'I,
t
Pream~ifier
" I
.. !
"'J' ,
'
;·:1
Chopper Figure 9.6 Pyrometer circuit.
The photovoltaic circuits shown adequately amplify the input signals, but the other circuits discussed require additional amplification. This can be accomplished by placing the outputs of the circuits at the terminals of the photovoltaic detector. That is, the output of the bias circuits in Figures 9.5 and 9.6 can be considered inputs to the photovoltaic amplifiers in Figures 9.2 and 9.3 (i.e., either the voltage or current mode). This first amplifier is usually called the preamplifier because there are other amplifiers in the total system electronics. The preamplifier is the most important circuit in the entire electronics network. The noise associated (2] with the first stage in a network has the predominant effect on the overall noise figure for the network. The system designer should always try to minimize the noise produced in the first stage.
fo)' 0;..J .J/'f ~\~ 9.2 CONVERSIdN OF SPATIAL AND TEMPORAL FREQUENCIES For scanned detectors, there is a relationship (3] between the electronic temporal . frequency and the system spatial frequency. This relationship depends on the scan c0~i!X of the detector across the image (or the image velocity across the cretector). There are two conversions: one is a function of metric spatial frequency
,
'u;;/L'
'-J',~
'
l)
L
I
'
Electronics
235
i·i
; -r;:;.
.. i
.. , :') i
. titJI
(cyc le~ ~er ~eter) and. the other is a functio~ of .angular spatia~ (ve..qu"epcy (cycles per mtiliradlan). ConsIder the cases shown 111 FIgure 9.7. R~g.~rJle$s Q.f whether the detector is serial-scannec;1 or parallel-scanned, it takes a particular amount of time for the detector to trav~e the image. In the first case, the detector traverses the horizontal FOY in a time /.1('(111' The conversion from spatial frequency to electronics temporal frequency is
:ne {' [ff ] - ):: [ / d] FOf/Tlllrw!r) Z -s cycmra Ilwn[sec)
.;t;'·,j
[
[Hz]
je
(9.8)
I
),tJ
~J
The relationship is usually inverted to give the electronics transfer function terms of spatial frequency.
hr-------------------__,~, ~
Detector scan
Detector scan
,
.~
" Sensor FOY (milliradians)
Il1
L
"'
,
)-
Image size in imagc plane • (meters)
Angular space
Il ,. .r, ' t
.. ....
,
r.. . 1•
.:l ~ ' ..
Metric space
Figure 9.7 Detector scans across image space.
.... 1.._ .•.
The second case of Figure 9.7 shows a detector scanning across the image plane,· ·where the image plane is given in dimensions of meters. Sometimes it is convenient to work in the spatial : frequency domain where spatial frequency is gi\'en in cycles per met~rin the image plane. For this case, the cO!1\'crsion is .r
-
/e
[H . . ] _ >=[ . / . ] !/IIl/geSize(lIIe:"n) .:. - S CJC meter I [1 KMS~
In general, the electrical frequency velocity j~.
L
r
[Hz]
(9.9)
can be exp:-essed in terms of the scan
= vS
[Hz]
i
(9.10)
where the velocity is in meters per second or miLiradians per sL'cond and the spatial frequency is in cycles per meter or cycles per rnilliradian, respecrively. One "
L
Ie
.
•
.
I
of the basic understandings of an EO engineer is the relationship between spatial and temporal frequencies. The scan velocity can also be written in term.s of the detector angular subtense (DAS) and the detecto·r dwell time: 1':::
,DA.':! uwl!il
(m/s or mrad/s]
(9.11 )
The DAS is the detector. size in the scan direction divided by the system focal length. DAS is the angle subtended by the detecwr onto the angular object space. The dwell time (see Chapter 8) is the time required for a target edge to scan across the detector element.
i:: W1
J
Example ;:"\.~
The boards and spaces on a picket fence subtend a fundamental spatial frequency ()f 1.5 cycles per milliradian (as seen from a sensor's point of view) in the horizontal scanning direction. The sensor has a 5-deg (87.3 mrad) FOV and operates at,)O fr~mes/sec. Assuming a 100% scan effi,ciency, determine the electrical temporal frequency on the output of the detector that corresponds to the picket-fence fundamental frequency. The solution is given by (9.10) if the scan velocity can be determined. With a 100% scan efficiency, [he 30 frames/sec requires that (87.3mrads)x30/ sec, or 2,618 mrad's is the scan rate. With a picket fence fundamental of 1.5 cycles per milliradian: rJ ~ Ie = (2, 618mr/s)(1.5cy/mr) = 3,927 Hz (9.12)
~·'l
;:
~:l
"1
or just under 4 kHz. Higher spatial frequencies require higher electronic frequencies, and lower spatial frequencies require lower electronic frequencies. Now that temporal frequencies can be converted to spatial frequencies, we can address the image blur associated with the detector integration time as the detector is scanned across the FOV. The detector integration (sometimes called detector sample-and-hold) can be performed on the detector chip or off the detector chip on a circuitboard. The detector integration circuit sums the detector signal I for some integration time and then is reset for the next integration time. Image blur due to this integration is a rectangular function only in the scan direction with a blur size, in time, equal to the integration time tim.' The temporal blur (in milliradians) is converted to a spatial blur with the scan velocity, Uilll ==: vI illl. The spatial blur due to detector integration time is present only in scanned imaging systems (not staring systems as the detector remains stationary, for the integratidn time) and can be described with an impulse response:
r
:", :~
(9.13)
hjll,(x,y) = d-:-rect(J-) uri U,n/
...
..
'i~
: ~
.
!
,:.~
.,.,.,.,.....,..,...
" " '.,. .,. . . .". .',-. ~' ~~-'7......"....,.,....~~ ....,.....,.,..-.,..-~--.... ,
•
...•• - ....•
~
r
jL.:;
.. ,. _ ..•____ .•.• _ '.'...
~~~~~:\'!I!!'f'm~:''" J( c. l c.
.
..;.
~
-~
. :
Electronics ,
237
'
i
. I'
!
where the corresponding detector integration transfer function is
L"';
f
(9.14)
I,':
The image blur only occurs in the scan, or horizontal, direction. The function in (9.13) can be considered a separable function where My) = eSCV). The transfer function is also separable where H(ll) = I.
,.::
.. t" ~
~.
9.3 ELECTRONICS TRANSFER FUNCTION !
i,
The electronics transfer function for a particular sensor describes the frequency limitations of the electronic amplifiers and the corresponding filters. Usually, the electronics transfer function provides bandwidth well beyond any other component in the system, so filters can be added to bound the noise content without ,reducing system perfonnance. A common method for modeling the el~ctronic filter transfer functions is ....p • :-, '" ' ...... , \J'\> to consider the low-frequency response separately from the high-frequency response. The independent transfer function variables are then conVerted from temporal frequency to spatial frequency and aBp'lied as components of the horizontal, or scan-direction, transfer function of the system. A typical low-pass amplifier filter design [4] is the lagging (Qutput voltage lags the input voltage) low-pass single pole filter as shown in Figure 9.8(a}. The transfer function for this filter can be written in terms of its single pole response:
i
! ' L ..
r ;
c~ ~- ~
'
it,
, \
1
(9.15)
[unitless]
r
\
"
;
IH(~
R
,
Actual
)1
Bode OdB , ~ ~ Plot -3 dB - - - - - -' - - - ~ ,
(-~
I
\t !
-10 dB
I" (
,
1
C_I-'---_oollt -20 dB
:_il_1____
r
~--+---~--~~------
,I
0.0 log(~/p)
\ ,
1
l
(a)
(b)
\ Figu rc 9.8 Low-pass filter (a) circuit; (b) transfer func:ion.
The Inagnitude nC the transfer fUllction for this illrer The magnitude ill c!ecibels call be calculated by
IS
shown
IH(je)ldB = 20 log If/(f!!" (unitless]
III
Figure 9.8(b).
(9.17)
The transfer function can be easily converted to the spatial frequency domain for application as a system component simply be modifying p to a spatial pole 1'(2rr:vRC) and Ie with S in (9.15). Also note that v is the scan velocity in m illiradians per second. The MTF for the filter can then be written as JvITFl.lJlrP(/\s(S) = ~
[unitlessJ
F
,.
(9.18)
1+( ~)-
!,
The order of the filter can be increased using the same filter casczlded (the output of one filter feeds the input of the next filter). These are called ladder filters. There are a large nlllnber of different ladder circuits that have been developed, including both Butterworth and Chebyshev filters. The spatial frequency equivalent MTF corresponding to the magnitude function [5] for the nth order Butterworth filter is
:
i
1 I
!, ,
(9. 19)
A similar filter can be designed for high-pass applications. This is the case for aC-'coupled systems were the dc signals are removed from the output of the detector. In some cases, de is added back to the ac signals for later display. These types of systems are called dc-restored systems. A simple. single-pole high-pass filter is shown in Figure 9.9(a) and the corresponding Bode plot is shown in Figure 9.9(b). The circuit shown in Figure 9.9 is a leading high-pass filter where the transfer function can be written in terms of the pole: [unitless]
[,~
(9.20)
.,.>
(
.
!
where the pole is again at p= 1/(2rcRC). We are more interested, however, in the magnitude of the signal as it transfers through the system.
,i
I
I,
.~.
r
1
:
(
! ' " l '.:
f
L.
,
i
,
'
U . ,8
I
Electronics
239
.:
c
Bode Plot
-"'dB j
L L
- -
/'
- - - - - -.,. ...
-10 dB V. m
.
R -20 dB
L [
'Actual
log(~
(a)
/p)
(b)
l
Figure 9.9 High-pass filter (a) circuit; (b) transfer function.
I'
With some algebra, the magnitude of the function given in (9.20) is
..
L
fc
\
(
I
L i',. I
L.
~
r' . l.' f I
.
!
i ;;
}:;',
IH(fe) I = ~ I+C~)2
=
[unitlessJ
(9.21)
JI+(t)2
There are a large number of high-pass filters, with the Butterworth and Chebyshev filters being the more common. The transfer functions can be converted to the spatial frequency domain by changing the temporal frequency to a spatial frequency along with changing the pole to a spatial frequency with the. scan velocity tenn p = l/(21tvRC). The high-pass filter MTF for an nth order Buttef\\'orth filter is
L..._
\' '~
! .
[unitless]
(9.22)
i'.' . l~:..
I'
t. IL:
The combination of the high pass and the low pass can implemented simply by mUltiplying their MTFs. In circuit design, the output of one filter feeds the input of the other filter. From Chapter 3, we know that in the time .domain, the input signal of the first filter is convol\'ed with the filter impUlse response to result in the filter output signal. The output signal is then convolved with the impulse response of the second filter. The output of the second filter is the output of the filter system. Care must be taken in the characterization of the loading effects at each stag~ of analysis. In most cases, it is easier to work in the frequency domain where' the transfer functions multiply.
--_._--_
.....- ....
_-_ ..
9.4 NOISE L: '
Entire books have been written on noise anal;. sis in circuits. A complete treatment of circuit noise here is beyond the scope of this work; however. a cursory overvie'yv of the noise associated with detector elec::ronics is appropriate. The noise components associated with the detector a':-e described in C:1apter 8. Noise components that are specific to EO and I2R systems are presentej in Chapters II and 12, respectively. The noise components associated with the detector circuit and system electronics are Johnson, Ilf (or f1icker noise). sho~, microphonics, readout, and quantization noise. Each of these is addressed belm .... Johnson Noise
Johnson [6J noise exists because all conductors exhibit thennal noise. Any material over the temperature of 0 deg K exhibits electrical noise from the thermal activity of the electron states. A resistor at some temperature, T, provides a noise level that is related to its resistance and temperature. In theory, capacitors and inductors do not exhibit thennal noise. In practice, even high-quality capacitors and inductors generate some noise. Thermal noise sources associated with resistors can be modeled as either a voltage source or a current source, as shown in Figure 9.10.
I
I
:'fl~" 1i .. f
,.
.1
R
I .
nOise
~oise
Voltage model
Current model
Figure 9.10 Johnson noise models.
The, value of the rms noise voltage for the voltage model shown is VJolmsofl =:
J4kTR~f
(9.23)
(V]
whhe k is Boltzmann's constant of 1.38 x 10. 23 J/deg K, T is the temperature of the resistor in deg K, R is the resistance of the resistor, and D/ is the bandwidth over which the noise is measured. The equivalent current value for the second model is [
•
.,..,.,....~..".....,.,.,.....~~~-.~ ______c··~,.~.........".~--~.~~".------.. -.... " .. --~.- _... -
... - •. -"
.
It
-.--"~~,,, -"'..."'''''''''''),ir . ."
..
:~: '.' ..., .',i.'· ,:-y/~:- :H;'~;l!fr~;rHJ;4;ij;f:~.~/.
I
:. I
r:,
L.
Electronics
241
.b
;L L
r
i./oh/l.\(}11
= J4kT4ICIIR)
(9.24)
[AJ
.\'
Example
'"
Determ ine the noise voltage for a 10 kOhm resistor at room temperature (300 deg K). The noise bandwidth is 10kHz. The solution is given by (9.23), where
I
I
"
r
......j -.
L l
I
"
~IL..,
V}vlm,I'OIl
= J4(1.3 8x I0-23 JIK)(300K)(1 OOOOOhms)(1 OOOOHz) = 1.29J.l V
'Note that the rms thermal noise of the resistor is around a microvolt.' A reduction in resistance, temperature, or bandwidth decreases the noise voltage. I/f Noise
v
.1
l/j noise, or flicker noise, accompanies any system where electron 'conduction is present [7]. This noise is due to surface imperfections caused during the fabrication process. The noise has a power spectrum that decreases with frequency, hence its name, and is more important at lower frequencies (from 0 to 100 Hz): The long-term drift of transistor amplifiers is usually the result of.l/j noise. FHcker noise can be described by an rms current:
[AJ
(9.25) I
where K is a constant that is specified for a semiconductor device, IDe is the average (i.e., dc) current through the device, and!J.j is the bandwidth. Shot Noise :.J\~
Shot noise in the electronics is the result of random current fluctuations
caused by the discreteness of charge carriers when diffused across a semiconductor junction. The rms value for shot noise current is i.l'ho(
=2efDC!J.j
[AJ
(9.26)
where e is the , charge of an electron. 1.6 x 10- 19 coulom bs. Both shot noise and, Johnson noise have a power spectrum that is approximately' const31lt across the frequency band. These are examples of white noise. There is a difference in the way noise Sources add versus typical voltage and current sources. DC \'o!tages and currents are additive, and ac voltages and ~
1·/2
Introduction to Infrared mh':: Electro-Optical S;. stems
currents add vectorally (with amplitude and phase). However. noise sources add constructive Iy. For two-noise voltage sourc~s in series: [V]
,. '.'.:
,
';
--j
.t j
,
"
(,
(9.27)
where a is the con'elation coefficient for the noise sources. For uncorrelated noise sources, which includes the Johnson noise and the majority of other noise cases, the third teon is zero and the noise sources add in quadrature. Two identical waveforms have a correlation coefficient of I. In a few circ;umstances, noise sources are correlated, as in the case of ground-loop pickups of stray signals. The majority of sensor noise contributors are assumed to be uncorrelated. There are many other types of noise. Diode noise, BJT noise, and FET noise are all combinations of shot and lifnoise. Other noise sources include power source leakage, ground-loop pickup, and cable pickup (of electromagnetic fields) . Some general guidelines in the noise reduction process are:
~-"
'. i
." I
r
;".:
r
..... -;
( I· , "
, -~
.\ t
-,
1
:~
j I
1.
i
i
j
• Use filtering to limit the noise bandwidth. • Match the input resistance of the amplifier to the detector. • Reduce lead lengths as much as possible. Keep the power· source clean. Decouple and regulate the power input to each board. · Minimize any ground impedances. Ground planes are essential to i"solate digital circuits from sensitive analog circuits. • Keep digital grounds separate from analog grounds. They should connect only at one point (the point of lowest potential). • Isolate high impedances, high-gain inputs from strong drive currents. This usually means shielding. Minimize magnetic loops. Reduce the loop area of wires, preferably with' twisted pairs. While the minimization of noise is, in general, frustrating, these general guidelines can simplify the noise-reduction process.The noise bandwidth is defined differently than a network transfer function. It is not identical to the system bandwidth but describes the characteristics of the noise. The noise bandv,:idth is the frequency span of a rectangular-shaped power-gain curve equal in area to the area of the actual power gain versus frequency function. Consider the power spectrum (or power spectral density-PSD), G(j), oftlle noise shown in Figure 9.11.
'
i:;'j (
,
".
,
Electronics
I:'.·' "~
243
G(f)
IIf Noise
White noise
f<---->~,~<----------->~,-E-<----
. Got-------==--'-------------..;;:-------.
b./noise
~------------~------~>
iI
.
L
I' [ I J
Figure 9.11 Noise power spectrum and noise bandwidth. The noise bandwidth can be calculated by !J./lltJi.l'e
= ~" f~ G(f)df
[Hz]
(9.28)
,L:
[;
f' ,
/
L~_
Once the n
'"
'
" I.;
L,
i.
"'
ii
L.
V fill.\'
=
j
l,: I
L
I
J J
[VI Hz
(9.29)
Two parameters that describe the noise associated with an amplifier or filter stage are the noise factor and the noise figure. The noise factor is defined as the, input S~R normalized by the output SNR F=
r;
G of../l1lJ i.I"e
S,IN; S.,iN"
[unitless]
(9.30)
The noise factor provides the noise figure in decibels. The definition for noise figure is
L
! .
NF= IOlogF
I" L ,f
I',
L'
I
t, ~ It
[dB]
An amplifier or filter stage wilh no noise will have an NF of zero. A well-designed amplifier should have an NF of less than or equal to 3 dB. : The noise factor allows the determination of critical design guidance in the staging of amplifiers and filters. Consider the cascaded network shown in Figure 9.1::2.
24.1
Introduction to Infrared ,:Ild Ekcrro-Upllt.ai "'_'
J
_ _ ._
r
,......--.-
+
F
1--------
I
j 1
Ii
in
.
G
I
.c.,
f---
.,
-
i
~
G
lJ,
out
J
:";
-
1
. I
Figure 9.12 Cascaded network.
Let G describe the voltage gain and F describe the noise factor for a particular circuit in the cascaded network. The noise factor for the entire cascaded netv.:ork can be estimated to be (9.32)
The impact of the first stage on overall system noise is higher given that the cascaGed gains are larger than I. Each stage should have a gain greater than I with the highest gain occurring at the first stage, where the sensor noise dominates. 9.5 MTF BOOST FILTER
The MTF boost filter is an active filter that provides gain in the compensation of MTF rolloff. The boost filter allows a boost to particular spatial frequencies while maintaining a unity transfer for low and high frequencies. Boost [8] is available from a variety of circuits. Here, we present a common representation: (9.33)
where K is the boost amplitude, N is the boost order,! is the temporal frequency of the circuit, and!hol!s, is the frequency that benefits the most from the boost. Figure 9.13 shows the boost gain as a function of temporal frequency for various boost amplitudes with an N = 1. The boost filter can be converted to spatial frequency using the temporal-to-spatial conversion given in Section 9.2 to yield (9.3~)
The boost filter provides little gain at low frequencies and little gain at high frequencies. At the boost frequency, the signal is boosted by a factor of K. The
[ r
.L.· L
..
(
,
L
Electronics
245
spatial frequency version has the same shape and boost amplitude with a change in the independent variable,
L L i
. i .
,
1-:. L
MTF
4.0 ~------~------~~~------~-------~
3.5 3.0 -
K=3 2.5 2.0
K=2
I .5
K=l 0.5 0
i
-
I
I
I I
0
I
0.5
I
1
I
1.5
] 2
II/boost
Figure 9.13 MTF boosting filter.
9.6 EO l\'1UX MTF
J:I·':'
..
J~.'
.
'\'
,
i
1.
I c..
. - )1" f
I. I_
I . h:.
t '.ei.
l.
The electro-optical multiplexer (EO I11UX) is an image converter. It provides' infrared detector output signals from a linear array and their corresponding amplifiers to a visible linear light emitting diode (LED) arra:;. The infrared signals light up the LEOs to an intensity (in the visible wavelengths, hence EO mux) proportional to the signal voltage. Consider the EO mux shown in Figure9.14. The EO mux shown has a linear array of five' infrared detectors, corresponding electronics, and LEOs to show the concept In reality, common module FLIRs have 60, 120, or 180 detectors in a linear array. . . A two-dimensional irnage is formed by th.e infrared lens onto an image plane, The front side of the scanner is reflective in the infrared wavelengths, and the scanner 's\veeps the image across the linear detector array in one direction, When the edge of the image is reached. the scanner reverses direction and sweeps the image back across the linear alTay. This backs can is accompanied by a small tilt of the scan mIrror ill the vertical direction to fill in between the forward scan lines to compensate for the g3p between detector elements. The output voltages are taken from the infrared detectors, amplified, and fed to the visible LED alTay, The LED array is imaged by a visible lens and reflected off the bJ..:kside of the scanner converging to a visible image plane, The image plane can be viewed directly by a human, or it can be imaged onto a vidicon tube for conversion to video output.
Introduction to Infrar;:-d and Electro-OpticJI Systems r
A:T:l\ SCJIl
~i
-(_
LED array
-1 _-1
IR kns
.f
im~g..:
o Scanner
i
~l
I
Visible image
/0
o
LED array
Lens
, '.
~1
-j
··· 1·
..
'J
Amps
IR image
J
I
r:"( .~
Linear detector
I
I( .
array
1
»
1
Image scan
1
'!
i .~
Figu re 9.1.1 EO mUltiplexer. The LED spatial extent has an MTF associated with it. The EO mux display is continuous in the horizontal direction with a continuous voltage out ot' each diode. In the vertical direction, the output is sampled as there are a discrete number of LEDs, just as the d.etectors sampled the original infrared image. The MTF treatment of the EO mux LED spatial characteristics is discussed in Chapter 10.
9.7 DIGITAL FILTER MTF !O
j
j
There are two major classes of digital filters: infinite impulse. response (UR) and finite impulse response (FIR). IIR filters have excellent amplitude responses but are more difficult than FIR filters to implement. FIR filters usually have more ripple than fIR filters. \Ve focus here on FIR filters. We do not derive the transfer functions for these digital filters as there are references [7] that include these derivations. However, we are interested in the transfer functions as the; affect the overall perfonnance of our system .. The filters are shown in Figure 9.15 operating on a linear stream of pixel values. The output' pixel signal level of the odd filter, in the case shown, depends on the value of five input pixels. The input pixel signal levels are multiplied b; their corresponding coefficients and the products are summed to give an output pixel signal level. The input pixels are shifted by one pixel and the output pixel signal level is again calculated. This process continues through the stream of data.
, .• Jr . i
';"': L .
,r'r . !
Electronics
, ~l L L L
f
2-17
There are artifacts at datastream edges (e.g., frame- or line-scan edges), where tricks such as doubling back data have to be played to reduce the artifacts .. Pixel values
Pixel values
o t
~ ~ ~ ~
A 12
DOD
t t t
A 12
I
.-
2
, J F."
t
·· L ..
C Output
Output
(a)
(b)
Figure 9.15 Filters with odd (a) and even (b) number ofsamples. Note that the sum of the filter coefficients must be 1'. Otherwise the 'average frame gain and offset change. The filter MTF in the filtering direction for an odd number of samples [9, 10] is (9.35)
where c; \ = liS is in cycles per milliradian and S is the angular distance between pixel samples in milliradia:1s. Note that with oversampling. S may be smaller than .a DAS. The MTF for an 'e\"~n number of samples filter ·is _
l ,.
' . :'; L; I . :5 • 1.··. " ';; I '
,
r. '
I
L
>'
AhFdfiller(CJ = .
V/2
L AkCOS[ ·k=).
]!(2k-I)~
,
(9.36)
]
Ss
While these filters are wr,irten for one direction, they can je easily applied in the horizontal or vertical directioil if separable (see Chapter .3) flInctions are used. Nonsep2.rable two-dimensional filters are much more diffie:..!lt to analyze . l
lntrodll<'tion to Infrared and
t~:lecfro,·Optic;]1 Sys:~rns
';-.;
------------'-~--------
','1
.. ---'"
9.8 CCDs
Most visible and some infrared staring arrays are (CDs. These CCDs comprise an array of detectors that sum the irradiance on the Jerectors for some integration time. The integration time is less than the frame time of typically around l/30th of a second. The integrated charge is then movcJ out of the CCO along a predetermined path [I IJ under the control of clock pll !ses. The charge is stored and moved in a series of metal-oxide semiconductor (\10S) capacitors. Adapted from the Infrared Handbook [12], Figure 9.16 shows a typical three-phase readout cycle. The electrode potentials at four different times are shown to depict the charge transfer. At time I J , a low potential is held across the VI electrodes. At this time, the charge packets are located at the ~'I positions. A low potential is placed on the V2 positions at (2 and the potential is increased at the 1'1 positions. This process is increased at t] causing even more of the charge to transfer to the V~ positions. Finally at t~, the only low potentials are located at the V~ positions. This same process is followed to cause the charge packets to transfer to the V3 positions.
v
I1
't '2
'3
(4
!'l
r' 2 I
l'
2C
J (
I
O-Xiae n - Silicon
2 C
<::::
I
l'
1
J
(
;'-1 :
')-
.'
Cl
1 1: -
J
r
<
~r
]1 .
(
I
]
I
I
Figure 9.16 Three-phase CCD operation. The process described above has a transfer efficiency that describes the completeness of charge packet transfer. Charge is typically left behind, causing a smearing effect. Holst [7] describes an MTF for the smearing effect: MTF"un.vjer == exp{ -N,rclIIs(I - E)[ I - cos(~;;t~)]}
(9.3 7)
--:1\
N,rull.\: is the total number of charge transfers from '~a detector to the output ampl1ifier, E is the transfer efficiency, and ~,.\ is the clocking spatial frequency. For
....~-.......--- ..- .....
-_ ....... -------.- ..
,
'r ;
]t·:
\.
,
~r
I
.
.1
1<'
"'"
-----~......,.-...,.~ ~,..
:-l
~l
2
r
,-l
!
I
I ~ I j
I
T'
'"'
j
Q
:,::1
Figm"c 9. I 7
N(H!;'n if:Jt"lT! stari IE!"~IT!.1)·
JlIl(l ~e.
.. ,.
'"', '"".~.
.,,, . ;,
,:-.,L-.-__ . ....,•.
Figure 9.18 Nonuniform linear-array image.
..:.!,.. ,-:., •. !~.,~;-:.~~::"
: :.!/ ;."'
::.'<: : ~.; .:~
::
','
:',.:"...
.... : :
"
.:'
'.,
.. ':,.'! •
',"
I
.j
,
',!
.::.
,',
".:
:J
i:'
:! ':i
"~ i .
.. ~
;:
..•.. ,.
.~~}~(,: \ .. ... : .,:: .....::.:'" .. ~
.
.... , .
':
":
G
I
Ie i [
L L
Lleclronics
25/
EXERCISES
9.1 Determine the transfer function and the 3-dB cutoff frequency for the following detector amplifier. Assume an ideal operational amplifier.
..
<.
I';
r ' ~;: 1
{ , !
I
~,c. ..
I
Figure 9.19
I '. L:..
/
o
'<,u \,. .
t,
*' 1./ I
1\"1.
j "-
I • '-.. " ' "
..1,....,. .. -..........
'v:~
9.2 A HgCdTe photovoItaic detector has a resistance of 10 (kOhms. Design a voltage-mode detector amplifier for the detector with a bandwidth of 5 Hz to 1 MHz and a closed-loop, midband gain of30 dB. I
. !:',
!
"
I
I'
.
c':'·'
'.
I, L,
[
.,'
1 i:,
t·::'
(I) Scan velocity. (2)Temporal signal frequency corresponding to the fundamental harmonic of the fence.
L ..
I
~
..
t,
" L,
1 j.
~
!;
I
L,-,
I
I
1.. •••
I '0-.- •
I
I !
9.3 A common-module FLIR system has 180 detectors that scan in the horizontal direction at a rate of 60 times per second (30 Hz and a 2: I interlace pattern). The scan efficiency is 0.8 (20% of the time the scanner is performing"some other task than scanning the image across the detector: turnaround time, stop time, retrace time, etc.) If the horizontal FOY is "2.75 deg and the detector scans across a picket fence with equally spaced boards 1 mrad apart, what is the
~-
i
j
..
"
L"':'
;
9.4 Consider two resistors (R1 = 50 kOhms and R2 = 20 kOhms) that are connected in series. At a temperature of 400 deg K and a bandwidth of 5 MHz, calculate the rms noise voltage across each resistor and then across both resistors.

9.5 A system input SNR is 100 and the system output SNR is 30. Determine the system noise factor and noise figure.
9.6 Plot the MTF for a boost filter with a K of 4 and a boost temporal frequency of 2 MHz. Assume a 50-mrad HFOV and a 52-µsec HFOV scan time.
9.7 An odd digital filter is described with the following coefficients: A0 = 0.8, A1 = 0.9, A2 = -0.2, and A3 = -0.3. Sketch the MTF in terms of the normalized spatial frequency ξ/ξ_s.
9.8 Plot the MTF as a function of normalized spatial frequency (normalized to the clocking spatial frequency) for CCD transfer efficiencies of 0.99, 0.999, 0.9999, and 0.99999. Assume the number of transfers to be 300.

REFERENCES
[1] Wolfe, W., and G. Zissis, The Infrared Handbook, Environmental Research Institute of Michigan, Office of Naval Research, Washington, DC, pp. 11-32, 1993.
[2] Krauss, H., C. Bostian, and F. Raab, Solid-State Radio Electronics, New York: Wiley, 1980.
[3] Holst, G., Electro-Optical Imaging System Performance, Orlando, FL: JCD Publishing, p. 119, 1995.
[4] Valkenburg, M., Analog Filter Design, New York: Holt, Rinehart, and Winston, p. 80, 1982.
[5] Sedra, A., and K. Smith, Microelectronic Circuits, Philadelphia, PA: Holt, Rinehart and Winston, p. 771, 1991.
[6] Watts, R., Infrared Technology Fundamentals and System Applications: Short Course, Environmental Research Institute of Michigan, June 1990.
[7] Holst, G., Electro-Optical Imaging System Performance, Orlando, FL: JCD Publishing, p. 146, 1995.
[8] Savant, C., M. Roden, and G. Carpenter, Electronic Circuit Design, Menlo Park, CA: Benjamin Cummings Publishing, p. A125, 1987.
[9] FLIR92 Thermal Imaging Systems Performance Model, U.S. Army Night Vision and Electronic Sensors Directorate, p. ARG-6, Jan. 1993.
[10] Rabiner, L. R., and B. Gold, Theory and Application of Digital Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, pp. 81-84.
[11] Streetman, B. G., Solid-State Electronic Devices, Englewood Cliffs, NJ: Prentice-Hall, 1980.
[12] Wolfe, W., and G. Zissis, The Infrared Handbook, Environmental Research Institute of Michigan and the Office of Naval Research, Ch. 12, 1993.
[13] Scribner, D. A., K. A. Sarkady, M. R. Kruer, and J. T. Caulfield, "Adaptive Retina-Like Preprocessing for Imaging Detector Arrays," IEEE International Conference on Neural Networks, Mar. 1993.
Chapter 10
Displays, Human Perception, and Automatic Target Recognizers
Displays are the interface between the I2R or EO sensor and the human vision system (Figure 10.1). The display converts the electrical signal from the I2R or EO sensor to a visible signal that is subsequently presented to the viewer. Occasionally, systems are designed where humans do not interpret the data, so displays are not required. Imaging missile seekers with automatic target trackers are a good example. Various levels of human interpretation of imagery are seen in the targeting community. One example that is increasing in success and popularity is the targeting sensor coupled to a computerized automatic target recognizer (ATR), automatic target cueing (ATC) system, or aided target recognizer (AiTR). We collectively refer to these automated target exploitation systems as ATRs. The performance of displays can be characterized with a modulation transfer function (MTF), where the MTF depends on the resolution of the display. The human visual system also has a transfer function that is based on the resolution limits of the eye. Both these transfer functions are discussed; however, this chapter provides only an introduction on how to account for these "system" components.
Figure 10.1 The last components of the system.
Automatic targeting systems are in early technology development, so the performance of these systems is difficult to quantify. An introduction to the ATR nomenclature is provided. ATR systems are typically nonlinear and do not lend themselves to linear systems modeling. Figure 10.1 identifies these last components in the EO and I2R systems.
10.1 DISPLAYS
Much work is currently being performed in the analysis of displays. The evaluation of displays is still highly subjective because the primary basis of display design is viewer satisfaction. However, there is an increasing amount of work in the objective performance evaluation of these devices. Pinson [1] states that display criteria are usually given in terms of the probability that the viewer can perform specific tasks. These tasks involve extracting information from a displayed image. The probabilities are related to more tractable criteria, such as measurements of MTF, contrast, SNR, and resolution. The magnification of a system depends on the sensor's imaging characteristics and the observer/display geometry. The definition of system magnification (which is different from the optics magnification) is the ratio of the angular image size seen by the observer to the angular size of the image as seen by the sensor input pupil (see Figure 10.2).
M = θ_image / θ_object    (10.1)
This can be simplified so that the system magnification is not object dependent by writing

M = FOV_d / FOV_v    (10.2)
where FOV_d is the display field of view in the vertical direction and FOV_v is the sensor input pupil vertical field of view. Note that the magnification is larger if the viewer moves closer to the display. However, the image becomes objectionable because of the near-point limitation of the eye and the cognitive frame of the observer. Viewing distances are typically selected by observers to make the image display lines unnoticeable. In aviation systems, typical viewing distances vary from 20 to 28 in, depending on the height of the pilot. System displays include cathode ray tubes (CRT), light emitting diodes (LED), and liquid crystal displays (LCD). These display systems are described in the following sections along with their corresponding resolutions and MTFs. Keep in mind that there are a large number of display types, each with its own MTF, that must be accounted for in the wide variety of sensor systems. The ones covered here are some of the more common displays.
Figure 10.2 System magnification.
Example
A 3- by 4-deg sensor has an output image on a 15-cm-high display monitor. The observer is positioned 38 cm from the monitor. Determine the magnification of the system. The solution is given by (10.2) where the vertical FOV of the sensor is 3 deg. Typically, the horizontal FOV is larger than the vertical FOV and it is customary to specify vertical and then horizontal FOVs. The display vertical FOV is
FOV_d = tan⁻¹(15/38) = 21.5    [deg]
The magnification is then 21.5/3 = 7.2. Note that a smaller sensor FOV or a larger display FOV gives a larger magnification.
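The example's arithmetic is easy to script. The sketch below (a hypothetical Python helper, using the example's single-arctangent convention for the display FOV) reproduces the magnification of 7.2.

    import math

    def system_magnification(sensor_vfov_deg, display_height_cm, viewing_dist_cm):
        # Display vertical FOV seen by the observer, per the example:
        # FOV_d = arctan(height / viewing distance), in degrees.
        fov_d = math.degrees(math.atan(display_height_cm / viewing_dist_cm))
        return fov_d / sensor_vfov_deg   # (10.2): M = FOV_d / FOV_v

    print(round(system_magnification(3.0, 15.0, 38.0), 1))   # 7.2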
10.2 CATHODE RAY TUBES

CRTs are probably the most common display system. A CRT comprises an evacuated tube with a phosphor screen. An electron beam is scanned across the phosphor screen in a raster pattern as shown in Figure 10.3. The beam direction is controlled with horizontal and vertical magnetic fields that bend the beam in the appropriate direction. The phosphor converts the electron beam into a visible point, where the visible luminance of the phosphor is related to the electron beam current (and voltage). The standard raster scan and interlace pattern are also shown in Figure 10.3. First, the solid line shows a field pattern that is traced out on the screen. The dashed line shows a second field pattern that is interlaced between the first field lines. Both fields together provide 525 lines, where around 490 of the lines display information. The two fields in the standard 2:1 interlace pattern are
collectively called a frame. The fields are presented at a rate of 60 Hz and frames are presented at 30 Hz.
Figure 10.3 Cathode ray tube.
The output luminance of the phosphor spot is related to the tube voltage in the linear region (i.e., the region that is not saturated) by

L = k(V − V_T)^γ    (10.3)
where k is a constant, V_T is a threshold voltage, and γ is a display constant known as the display gamma. A high gamma provides more phosphor contrast between two different voltages. A common CRT resolution assumption is that the electron beam spot and the corresponding luminescent spot is Gaussian in shape. The spot can be approximated by

s(x, y) = [1/(2πσ²)] Gaus(√(x² + y²)/(√(2π) σ))    (10.4)
where x and y are the horizontal and vertical positions on the screen and σ describes the size of the spot (known as the spot-size parameter). (10.4) can be considered the impulse response of the CRT. The MTF is then determined by taking the Fourier transform of this expression, yielding

MTF_CRT(ξ, η) = Gaus(√(2π) σ √(ξ² + η²))    (10.5)
Gaussian functions are separable in x and y so that the MTF given in (10.5) can be separated to give an MTF in the horizontal and vertical directions:
MTF_CRT(ξ, η) = MTF_CRT(ξ) MTF_CRT(η) = Gaus(√(2π) σ ξ) Gaus(√(2π) σ η)    (10.6)
The spot size can be measured using the shrinking raster technique [2]. With this technique, the line spacing on the CRT is reduced until the observer can no longer resolve the scan lines. The distance between the lines, l, can be estimated by measuring the total height of all the lines and dividing by the number of lines. The spot-size parameter σ is related to the distance between the lines by

σ = 0.54 l    [m]    (10.7)
Note that σ is in meters, so the MTF given in the above equations has an argument of cycles per meter. To provide an MTF that can be used in an angular system MTF estimate, the argument must be given in terms of cycles per radian or cycles per milliradian. This conversion is determined with the sensor FOV in milliradians and the size of the CRT display in meters. An example of this conversion in the vertical direction is

σ_angular = σ (FOV_v / MonitorSize)    (10.8)
where FOV_v is in milliradians. This angular spot-size parameter provides an MTF in the same spatial frequency domain as that of the other sensor components.

Example
The spot-size parameter σ has been determined to be 200 µm on a 15-cm-high CRT monitor using the shrinking raster technique. The monitor displays the output of a 5- by 7-deg FOV sensor. Plot the MTF of the monitor for system-related object space in units of cycles per milliradian. The solution is given by first converting the spot-size parameter from micrometers to milliradians:

σ_angular = (200 × 10⁻⁶ m)(87.3 mrad / 15 × 10⁻² m) = 0.116 mrad
This angular spot size can be used in the horizontal component of (10.6) to give the horizontal MTF. The MTF is given by

MTF_CRT(ξ) = Gaus(√(2π) (0.116 mrad) ξ)

which is shown below. If the Gaus function is written in terms of Gaus[ξ/ξ_co], where ξ_co is the cutoff frequency, then it can be shown that ξ_co = 1/(√(2π) σ_angular). In this case, the cutoff frequency is 3.4 cycles per milliradian.
Figure 10.4 CRT example results.
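The example's numbers can be checked with a few lines of Python; the helper name is an assumption, and Gaus(x) = exp(−πx²) follows the convention used throughout the text.

    import numpy as np

    def crt_mtf(xi, sigma_angular):
        # Horizontal CRT MTF from (10.6): Gaus(sqrt(2*pi)*sigma*xi),
        # with Gaus(x) = exp(-pi * x**2).
        return np.exp(-np.pi * (np.sqrt(2.0 * np.pi) * sigma_angular * xi) ** 2)

    sigma_ang = 200e-6 * (87.3 / 15e-2)              # (10.8): ~0.116 mrad
    cutoff = 1.0 / (np.sqrt(2.0 * np.pi) * sigma_ang)
    print(round(sigma_ang, 3), round(cutoff, 1))     # 0.116, 3.4 cycles/mrad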
The CRT usually has a sample-and-hold circuit that gives a discrete number of horizontal pixels as the display spot scans the screen in the horizontal direction. This sample-and-hold samples the horizontal signal on a scan line into an integer number of samples and then holds the phosphor spot voltage constant over each sample as the spot is scanned across the horizontal direction. For example, many CRTs have 480 active lines in the vertical direction (out of the 525 possible lines). With a 1.33 horizontal-to-vertical image aspect ratio, the horizontal scan across the CRT screen would be broken into 640 samples. If the CRT has this type of sample-and-hold circuit, the psf for the sample-and-hold circuit is h(x, y) = (FOV_h/640) rect[x/(FOV_h/640)] δ(y) and the display sample-and-hold transfer function is H(ξ, η) = sinc[(FOV_h/640) ξ]. This transfer function must be accounted for in any system that includes a CRT display sample-and-hold circuit.

10.3 LIGHT EMITTING DIODES

LEDs are P-N junction devices that emit light when they are forward biased. Because LEDs [3] emit light, they are very visible in dark conditions and are less visible in bright conditions. LEDs are luminescent (self-emitting in visible wavelengths), where the luminescence is proportional to the current passing through the LED. The visibility of the LED depends on the emitted radiation and the external quantum efficiency of the eye at the LED emission wavelength. The efficiency of LEDs is limited because of the mismatch of refractive index between
air and the luminescent material (index of refraction around 3.6). Therefore, the emitted angle for total internal reflection is small. A lens can increase the efficiency by a factor of 2 to 3. LEDs are typically arranged in a linear array (a single line of diodes) and are parallel-scanned across the observer's viewing plane. Consider the system shown in Figure 10.5. As the scan traverses across the image plane, a current is applied to each LED and the corresponding luminescent signals are traced out in lines. An eyepiece adjusts the LED image to a distance from the viewer that is comfortable.
Figure 10.5 LED array display.

The scanning occurs at a rate higher than the rate at which flicker is seen by the human eye (over 20 Hz). Most LED displays, as used in I2R systems, form the back end of the EO mux subsystem (as described in Ch. 9). Here, the scan occurs at the same rate as that of the infrared linear-detector array scan. The vertical angular size of the image depends on the array height and the eyepiece focal length:
α = h_array / f_eyepiece    (10.9)
The vertical image size is related to FOV_v by the magnification:

α = M · FOV_v    (10.10)
In the system shown, a and b describe the size of the LEDs and c describes the distance between the centers of the LEDs in the vertical direction. The distance of the LED array from the left side of the image plane is described as s. Many scanned LED displays are analog in nature (i.e., the voltage and corresponding
luminescent signal are continuous across the scan), so there is no sampling in the horizontal direction. Therefore, the spatial function for the scanned array shown is

o(x, y) = [i(x, y) (1/c) comb(y/c) δ(x − s)] ** rect(x/a, y/b)    (10.11)
Here, the spatial parameters x and y are in meters in the image plane and must be converted to angular object space. This is performed using

o(x, y) = [i(x, y) (d_y/(c FOV_v)) comb(y d_y/(c FOV_v)) δ(x − s)] ** rect(x d_x/(a FOV_h), y d_y/(b FOV_v))    (10.12)

where x and y are in radians (object space). In addition, d_x and d_y are the width and height of the displayed image. The spatial frequency domain can now be compared directly with all the other system contributors. The Fourier transform of (10.12) gives the spatial frequency representation of the display:

O(ξ, η) = [I(ξ, η) ** comb(c(FOV_v)η/d_y)] [ab(FOV_h)(FOV_v)/(d_x d_y)] sinc(a(FOV_h)ξ/d_x, b(FOV_v)η/d_y)    (10.13)
If the input signal (the image to be displayed) is bandlimited so that all frequency components are less than half of d_y/(c FOV_v), then no aliasing occurs and the MTF of the LED array can be written as

MTF_LED(ξ, η) = sinc(a(FOV_h)ξ/d_x, b(FOV_v)η/d_y)    (10.14)
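Because (10.14) is a separable sinc, it is simple to evaluate numerically. The sketch below uses hypothetical parameter names; numpy's sinc is sin(πx)/(πx), matching the sinc convention used in the text.

    import numpy as np

    def mtf_led(xi, eta, a, b, d_x, d_y, fov_h, fov_v):
        # Separable LED-display MTF of (10.14); xi, eta in cycles/mrad,
        # a, b are the LED dimensions, d_x, d_y the displayed-image size.
        return np.sinc(a * fov_h * xi / d_x) * np.sinc(b * fov_v * eta / d_y)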
If the signal is not bandlimited, then aliasing occurs on the output image and artifacts are seen. Well-designed systems use the optics and detector MTFs to bandlimit the imagery so aliasing is controlled. Further discussion of acceptable image aliasing for human consumption is provided later in this chapter.

10.4 LIQUID CRYSTAL DISPLAYS
The LCD [4] comprises a thin, clear layer of twisted nematic crystals placed between two conducting plates that are transparent. Some common LCDs use crossed polarizers as the two conducting plates. With an applied voltage, the LCD material (twisted nematic crystals) then turns the light polarization to pass through the crossed polarizer. The degree of polarization twist determines the shade of brightness that passes through the polarizer. The LCD panel is an aggregate of these cells that form an electronically addressable display. LCDs can be small, light, and inexpensive. LCDs are passive devices that modulate passing light. Therefore, they work as well in bright, ambient light as they do in dim lighting. Their performance is strongly influenced, however, by the spatial distribution of lighting. Also, LCD
displays are usually viewed directly by the observer. LCDs require very little power dissipation because the power is only required to provide dynamic scattering in the LCD fluid. LCDs are not as fast as LEDs and typically require tens of milliseconds to turn off or on, compared with some LEDs in the nanosecond realm. The slow response time limits their utility in high-speed applications. The spatial display function and the transfer function can be derived from a simple model shown in Figure 10.6. The LCD array is a rectangular grid of LCD cells that are surrounded by an inactive area. The ratio of the LCD active cell area to the total array area is called the fill factor. The spatial sampling of the input image and the display of the image on the rectangular functions can be written as
o(x, y) = [i(x, y) (1/(cd)) comb(x/d, y/c)] ** rect(x/a, y/b)    (10.15)
Taking this function to angular object space gives

o(x, y) = [i(x, y) (d_x d_y/(d FOV_h c FOV_v)) comb(x d_x/(d FOV_h), y d_y/(c FOV_v))] ** rect(x d_x/(a FOV_h), y d_y/(b FOV_v))    (10.16)
where x, y, and the FOVs are in milliradians. This spatial function can be converted to the spatial frequency domain by taking the Fourier transform:
O(ξ, η) = [I(ξ, η) ** comb(d(FOV_h)ξ/d_x, c(FOV_v)η/d_y)] [ab(FOV_h)(FOV_v)/(d_x d_y)] sinc(a(FOV_h)ξ/d_x, b(FOV_v)η/d_y)    (10.17)
Note that this function is very similar to the spatial frequency representation of the LED array (10.13). The difference here is that the LCD array is sampled in both the horizontal and vertical directions. Also note that if the image spectrum is not bandlimited in both directions, aliasing occurs before the display of the imagery. If the display is bandlimited to half of the comb function spacing, then the MTF is identical to that described in (10.14).

10.5 SAMPLING AND DISPLAY PROCESSING

Any display that provides a sampled signal has an infinite theoretical replication of
the image spectrum in the frequency domain. This replication is seen as the first term in the spatial frequency models of the LED (10.13) and the LCD (10.17) displays. For an LSI system to provide an image without aliasing, Goodman and Gaskill teach (see Ch. 3) that the image spectrum must be bandlimited to half the sampling frequency. Consider the sampled function in Figure 10.7.
Figure 10.6 LCD display.
Figure 10.7 Sampling in space and frequency: spatial domain (a), frequency domain with no spectra overlap (b), spectra overlap (c).
The function f(x) is sampled at a spatial distance of a meters (radians) and mathematically depicted by the product of f(x) and the appropriate comb function. The result is a set of samples that represent the original function. In the frequency domain, the spectrum of f(x), represented by F(ξ), is convolved with the transform of the original comb function. The result is a replication of F(ξ) at intervals of 1/a cycles per meter (cycles per milliradian). Note that the location of the replicas depends only on the sampling distance a. To obtain the original function, f(x), without sampling artifacts, a "reconstruction filter" must be applied to the spectrum to eliminate all the higher order replicas of F(ξ). Again, the perfect theoretical filter is a rect filter that limits the spectrum to half the sampling frequency as shown by the dashed line in Figure 10.7(b). For spatial frequencies less than 1/(2a), the required sampling rate is 1/a cycles per meter or cycles per milliradian. If the spectrum F(ξ) is bandlimited to 1/(2a), then perfect reconstruction can occur. However, if F(ξ) spills over past 1/(2a), as depicted in Figure 10.7(c), then aliasing occurs. The F(ξ) content that is included within the rect filter is unaliased signal. The signal content from the second replication that spills over into the rect is aliased. This aliasing is a spurious, high-frequency signal that takes on the identity of lower frequencies, hence the term alias. It can be visually objectionable if it is not properly controlled. This aliased signal is really the higher frequencies in the original F(ξ) function that were "wrapped" back into lower frequencies because of the insufficient sampling rate. These lower frequencies were not present at this apparent contribution in the original image. To determine the sources of aliasing in a sensor system, consider Figure 10.8. Imagery is degraded by the atmospheric MTF, the optics MTF, and the detector MTF, and then the image signal is usually sampled by the detectors. If the image is bandlimited, or these MTFs provide sufficient bandlimiting, or the sample rate is high enough, then aliasing will not occur. However, if the image spectrum has content above 1/(2a) by the time the signal gets to the detectors, then aliasing occurs. Optical and detector MTFs are usually designed with sampling rates to control aliasing. Once the signal is sampled by the detector, the electronic MTF is applied and a display reconstruction filter may be applied (in the same manner as the MTFs). The discrete nature of the display makes it another sampling device. If the display is matched to the sampling characteristics of the detector, no reconstruction is required. However, if the display has a different sampling rate than that of the detector, then reconstruction is required. The reconstruction filter bandlimits the imagery for display at a lower resolution sampling grid or interpolates the signal for display on a higher resolution sampling grid. Finally, the display MTF and the eye MTF are applied to the image signal.
Figure 10.8 Imagery transfer.
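A one-dimensional numerical illustration of the wrap-around effect may help. In the sketch below (variable names are assumptions), a 3-cycle-per-unit tone sampled at 4 samples per unit (Nyquist of 2) produces exactly the same samples as a 1-cycle-per-unit tone, which is the alias the text describes.

    import numpy as np

    a = 0.25                               # sample spacing: 1/a = 4 samples/unit
    x = np.arange(0.0, 8.0, a)
    f_true = 3.0                           # above the 2 cyc/unit Nyquist limit
    samples = np.cos(2 * np.pi * f_true * x)
    alias = np.cos(2 * np.pi * (f_true - 1.0/a) * x)   # wrapped to 1 cyc/unit
    print(np.allclose(samples, alias))     # True: the tones are indistinguishable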
Now that we have described the image transfer through the system, we know that aliasing can take place at two places: the detector sampling action and the display sampling action. Optics, detector, and detector sampling control the detector aliasing. Reconstruction filters (if necessary) control the display aliasing. Imperfect sampling does not follow LSI principles, and Vollmerhausen [5,6] addresses a number of reconstruction issues in this regard. First, it is true that the reconstruction function must be applied to the sampled spectrum to bandlimit the information for display. There are a number of methods for approaching this, including a spectral filter similar to the rect bandlimiting function. In practical systems, the rect function in the frequency domain is a sinc function in the space domain that effectively has to be convolved with the sampled image. The input image is not infinite in width, so the ripple of the sinc tails at the edges gets folded back over and causes artifacts. There are more practical reconstruction functions, including a rect convolved with a Gaus function. Vollmerhausen reveals that a number of reconstruction functions work fairly well and are not objectionable to human perception of the bandlimited imagery. Second, image signals do not have to be bandlimited to half of the sampling rate. The sampling theorem in terms of the display of sampled imagery corresponds to an ideal case and is not practical in real systems. Vollmerhausen states that sampling at a rate double the point where the MTF reaches 10 to 40% is generally adequate for decent image quality. There are three well-known criteria [7] for the design of reconstruction filters. We consider an input image spectrum, a presampling MTF, and a sampling frequency as shown in Figure 10.9.
Figure 10.9 Image spectrum, presampling MTF, and sampling frequencies.
In Figure 10.9, an input image spectrum is shown that is well beyond the required sampling bandlimit for aliasing. Sometimes this bandlimit, 1/(2a), is called the Nyquist limit, as frequencies higher than this are wrapped back into the lower frequencies. This wrapping around is not shown here, in order to discuss the aliasing control criteria. However, any frequency content above the bandlimit, 1/(2a), is aliased. The input image spectrum shown results in significant aliasing if it is sampled directly. However, the presampling MTF (such as the optics and detector MTF before detector sampling) limits the input spectrum, performing a signal conditioning function. This MTF bandlimiting action reduces the amount of signal that is aliased. With respect to Figure 10.9, the three aliasing control criteria are Legault's criterion, Sequin's criterion, and Schade's criterion. Legault's criterion is the most strict in that less than 5% of the image spectrum through the system (i.e., the area of the curve shown in Figure 10.9) should be at frequencies greater than 1/(2a). Sequin's criterion states that the aliased signal should be no more than one-half of the nonaliased signal. Schade's criterion states that the MTF (before sampling) should have a magnitude of less than 15% at the 1/(2a) frequency. Finally, it is worth discussing the effects of phase and the finite width of the display. The sampling theorem is the solution to a signal that is sampled in an infinite time or space. For a finite-length signal (limited by the display width or height), the number of samples prescribed by the sampling theorem to replicate the signal is not sufficient. In fact, the smaller the length of the signal, the larger the number of samples required to replicate an accurate rendition of the signal. In these cases, the phases, or the locations of the equally spaced samples on the signal, become an important factor in determining whether the signal can be faithfully
reconstructed. Vollmerhausen develops a technique for determining the effects of sampling finite signals and discusses phasing between the samples and the signal. It is beyond the scope of this introductory book to teach non-LSI systems or system components; however, we should note that the LSI approximations are not always sufficient. It could be that the display of sampled imagery is the EO or IR characteristic that is least appropriate for LSI modeling. Nevertheless, the LSI approach has been an effective tool for analyzing sensor systems for some time and will continue to be so. A suggestion here is to learn the LSI modeling technique as a first-order approach to sensor analysis and to then refine these approaches to include non-LSI effects. Also, it is our responsibility to learn the limitations of our analytical models.
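The aliasing control criteria above reduce to simple numerical tests. The sketch below is one hypothetical way to code Schade's and Legault's checks for a given presampling MTF and image spectrum (both supplied as functions of spatial frequency); the 15% and 5% thresholds are taken from the discussion above.

    import numpy as np

    def schade_ok(mtf_presample, half_sample_freq):
        # Schade: presampling MTF magnitude < 15% at the 1/(2a) frequency.
        return mtf_presample(half_sample_freq) < 0.15

    def legault_ok(mtf_presample, image_spectrum, half_sample_freq,
                   xi_max=20.0, n=2000):
        # Legault: < 5% of the presampled image-spectrum area lies above 1/(2a).
        xi = np.linspace(0.0, xi_max, n)
        s = np.abs(image_spectrum(xi)) * mtf_presample(xi)
        above = np.trapz(np.where(xi > half_sample_freq, s, 0.0), xi)
        return above / np.trapz(s, xi) < 0.05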
10.6 HUMAN PERCEPTION AND THE HUMAN EYE
The observer must interpret the information that is presented on a display. To more fully understand the response of the eye to this information, we describe the eye in physical terms and then provide models that can be used to estimate the eye MTF. The human eye is similar to a camera. It forms a real image on a light-sensitive surface called the retina (see Figure 10.10). Light enters the eye through a transparent layer called the cornea, then traverses an anterior liquid chamber called the aqueous humor. The light from this anterior chamber is limited by the iris, which is the effective entrance pupil of the eye. The diameter of the iris changes in response to the amount of light received by the eye. In reality, the pupil is not that important in the adaptation of the eye to large variations in light level. The most popular theory is that the eye adapts to large changes in light levels through chemical changes in the photoreceptors. Some of the light passes through the iris to a crystalline lens that is an elastic material. The lens power is determined by the ciliary muscles that pull on the lens radially to change the lens curvature. This change in optical power allows image focus of near and far objects on the retina. After refraction by the lens, the light is imaged onto the retina through the vitreous humor. The eye is a spherical jelly-like mass that is protected by a tough membrane called the sclera. The cornea is the only transparent part of the sclera; the rest is white and opaque. Inside the sclera is a dark pigmented layer called the choroid. The choroid is an absorber of stray light, similar to the black paint on the inside of electro-optical devices. A thin layer of light-sensitive photoreceptors, like detectors, covers the inner surface of the choroid. The eye has two kinds of photoreceptors (125 million in all): rods and cones. Rods have characteristics of high-speed, coarse-grain, black-and-white film and are extremely light sensitive. These photoreceptors can image light that is too dim for cones to process, but the images are not well defined. Cones are like low-speed, fine-grain, color film that performs very well with bright light. The photoreceptor difference is one of the primary reasons that day and night vision provide two photometry curves:
photopic and scotopic (see Chapter 5). At the retinal center, a depression in the photoreceptor layer is called the macula. It is around 2.5 to 3 mm in diameter with a rod-free region in the center of about 0.3 mm in diameter called the fovea centralis. Hecht and Zajac [8] state that the moon subtends a spot of around 0.3 mm on the retina. In this region, the cones are closely packed and provide the sharpest detailed images. The photoreceptors are ac-coupled. The eye is continually moving the fovea centralis over objects of interest. The image is constantly shifted across the retina. If the image is held stationary on the retina, the image actually fades out.
Figure 10.10 The human eye.
The radii of curvature [9] for the cornea, crystalline lens first surface, and crystalline lens second surface are 7.8, 10, and 6.0 mm, respectively. The indices of refraction of the aqueous humor, the lens, and the vitreous humor are 1.336, 1.413, and 1.336. An interesting note is that one of the reasons humans do not see well under water is that the water refractive index is typically 1.33, very close to the eye indices. Recall that little change in the refractive index at a curved interface provides little focusing power of the input light. The crystalline lens is fascinating in that it is a graded index lens. The index is 1.406 in the center, 1.375 at the equator, and 1.387 (aqueous) and 1.385 (vitreous) at the vertices. Focusing the lens is called accommodation, resulting in a change of the lens shape. The lens is almost spherical until pulled by the ciliary muscles, where the lens takes on a flatter shape. The front surface radius can change from 10 mm to 5.3 mm. The total power of the eye is around 60 diopters. The cornea-air interface accounts for around 42 diopters of the total power [10].
The age degradation of the crystalline lens and the ciliary muscles tends to affect eye performance in a significant way. The closest point that the eye can image is around 7 cm for a teenager, 25 cm for a young adult, and 100 cm for middle-aged people. Lenses are prescribed for people who cannot accommodate for the necessary power. Eyeglass power is prescribed in diopters (recall that power is the reciprocal of the focal length). Myopia, or nearsightedness, is the case where the eye provides too much power, and a negative lens is prescribed to give a lower overall system power. Hyperopia, or farsightedness, is the case where the eye does not provide sufficient power to image on the retina, so a positive power prescription is the solution. Ablation of the corneal surface with excimer laser radiation to change the surface curvature is becoming a common solution to either of these accommodation problems. The band of sensitivity for the eye ranges from roughly 360 nm to 750 nm. There have been cases where people have seen images outside these wavelengths. The typical resolution of a human eye under certain conditions can be estimated by using the pupil size and a 0.5-µm wavelength with any one of the angular resolution criteria in the optics chapter. These estimates provide ballpark values but are probably never realized because of imperfections in the eye. Typical angular resolutions of the eye range from 200 to 400 µrad.

10.7 THE MODULATION TRANSFER FUNCTION OF THE EYE
The modulation transfer function of the eye is taken from Overington [11] and is the product of three distinct transfer function characteristics: optical refraction, retina, and tremor. These components are covered separately but must be cascaded (multiplied) to obtain the overall MTF of the eye. The optical refraction MTF component corresponds to the limiting resolution of the pupil diameter. The pupil diameter, however, is a function of the display light level and can be estimated by

d_pupil = 4.30 − 0.645 log(fL)    [mm]    (10.18)
where fL is the luminance of the display in foot-lamberts. There are 0.292 foot-lamberts in 1 lumen/m²-sr. The optical MTF of the eye is estimated by

MTF_eye,optics(ρ) = exp[−(43.69 ρ/ρ₀)^i₀]    (10.19)
where ρ₀ and i₀ are transfer function variables that are related to the pupil diameter of the eye. Here, ρ is the circularly symmetric spatial frequency in cycles per milliradian. Table 10.1 gives the transfer function variable values for given pupil diameters. The pupil diameters given are for use with one eye. When two eyes are used, the pupil diameter becomes around 0.5 mm smaller. Also, the MTF given is in the eye angular space. The spatial frequency parameter ρ is scaled (division of ρ in (10.19); recall the scaling theorem from Chapter 2) by a factor of the system magnification M to convert the MTF to the sensor object space.
Table 10.1 Eye optical MTF variables for given pupil diameter

Pupil diameter (mm)    ρ₀    i₀
1.5                    36    0.9
2.0                    39    0.8
2.4                    35    0.8
3.0                    32    0.77
3.8                    25    0.75
4.9                    15    0.72
5.8                    11    0.69
6.6                    8     0.66
The second MTF parameter associated with the human eye is the MTF of the retina. This parameter is similar to a detector MTF in that it is associated with the size of the retinal receptors and the focal length of the eye. A good approximation for the MTF is

MTF_retina(ρ) = exp[−0.375 ρ^1.21]    (10.20)
where the spatial frequency must be converted to the angular object space with the magnification of the system to use the MTF in a system manner. (10.20) gives the MTF of the eye in angular object space as seen by the eye. The sensor system angular object space is smaller by a factor of the system magnification M. Finally, the last eye MTF component considered here is that of tremor, which can be estimated by
MTF_tremor(ρ) = exp[−0.4441 ρ²]    (10.21)
where the MTF has to be corrected for the system magnification for placement in sensor object space. In (10.19) through (10.21), ρ is replaced with ρM to give the MTFs in sensor, or object, space. Tremor is associated with the normal movement of the eye. Eye movements comprise high-frequency tremor, low-speed drift, and flick. All these are small, involuntary movements that are necessary for vision. Experiments [10] have been performed where the eye was artificially stabilized
Example
Determine and plot the eye's optical MTF using the Overington model. Assume that an observer is viewing a monitor of 10-fL brightness. The solution is given by first determining the pupil diameter, using (10.18):

d_pupil = 4.3 − 0.645 log(10)    [mm]    (10.22)
or 3.655 mm. Using Table 10.1, the MTF variables are determined. Interpolating for a 3.655-mm pupil diameter, ρ₀ = 26.2 and i₀ = 0.754. The optical MTF associated with the pupil size is then estimated by (10.19) as
MTF_eye,optics(ρ) = exp[−(43.69ρ/26.2)^0.754]    (10.23)
This function is shown in Figure 10.11. Note that if two eyes were used, the pupil size would be smaller and the MTF would provide more degradation.
Figure 10.11 Example results.
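For readers who want to reproduce Figure 10.11 or cascade the full eye response, the sketch below (hypothetical helper names, with ρ₀ and i₀ taken from the example's interpolation of Table 10.1) multiplies the three components of (10.19) through (10.21).

    import numpy as np

    def eye_mtf(rho, rho0=26.2, i0=0.754, magnification=1.0):
        # Overington eye MTF: optics (10.19) x retina (10.20) x tremor (10.21).
        # rho is in cycles/mrad; multiplying by M places the result in sensor
        # (object) space, as described in the text.
        r = rho * magnification
        optics = np.exp(-(43.69 * r / rho0) ** i0)
        retina = np.exp(-0.375 * r ** 1.21)
        tremor = np.exp(-0.4441 * r ** 2)
        return optics * retina * tremor

    rho = np.linspace(0.0, 1.5, 7)
    print(eye_mtf(rho).round(3))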
10.8 CONTRAST THRESHOLD FUNCTION OF THE EYE
Blackwell [12] showed that the detection capability of the human eye is a strong function of target size, contrast, and brightness. His classic set of experiments included more than 200,000 observations by 19 women during World War II. Sources were presented that were circular targets against uniform backgrounds to determine a contrast threshold of 50% detection probability. Blackwell showed that the contrast threshold decreases with an increase in display brightness or target size. Many other experiments were performed that were similar to Blackwell's, and bar targets and sine wave targets were eventually used to relate contrast threshold to spatial frequency. The contrast threshold function (CTF) was developed and defined as "the level of contrast required for an observer to detect the presence of a sine wave pattern with a 50% probability level as a function of spatial frequency." The CTF follows what is known as a standard J curve, shown in Figure 10.12.
Figure 10.12 Contrast threshold function.

Essentially, the curve describes the minimum contrast for the human visual system to detect a spatial frequency within the eye's FOV. Karim [13] states that any system MTF that is below the CTF is not usable to the observer, so he describes a modulation transfer function area (MTFA) that is a measure of the usable contrast above the threshold value. Also, note that the units of CTF spatial frequency are in cycles per degree. This is an eye research convention where the minima of the J is around 3 to 8 cycles per degree. Keep in mind that these units must be converted to cycles per radian or cycles per milliradian to use the CTF as an analytical function within the sensor response. It then must be adjusted by the system magnification. Holst [14] describes the J function in two parts: spatial frequencies below the minima, called the inhibitory region, and spatial frequencies
above the minima, called the excitatory region. Many sensor performance estimates do not include the inhibitory region response because it corresponds to low frequencies. Most image details are captured in the higher frequencies. That is, edges and smaller objects contain significant energy in the high-frequency components of the spectrum. Ignoring the inhibitory region sometimes leads to poor sensor system performance estimates at low spatial frequencies. Holst also compares five eye models of the J curve and determines that the J can change significantly for observer head movement, viewing distance, and noise power spectral density. Holst [15] also states that the eye MTF takes the shape of an inverted CTF. He points out that the Nill model and the Kornfeld-Lawson model are common models. The Nill model includes the inhibitory response and the Kornfeld-Lawson model does not. The Nill model approximates the eye MTF using
MTF_eye(ρ) = 2.71[0.19 + 0.81(ρ/ρ_peak)] e^(−ρ/ρ_peak)    (10.24)
where ρ_peak is the location in spatial frequency of the eye's peak response. In this case, units are arbitrary and can be cycles per degree or cycles per milliradian. The Kornfeld-Lawson model is

(10.25)

where units of ρ are in cycles per degree. Γ is a light-dependent factor that can be estimated by

Γ = 1.444 − 0.344 log(B) + 0.0395 log²(B) + 0.00197 log³(B)    (10.26)

where B is the display brightness in fL. Figure 10.13 shows the Nill and Kornfeld-Lawson eye models. The Kornfeld-Lawson model is used in a number of sensor performance models. It has errors at the lower spatial frequencies, however, due to the missing inhibitory part of the model. Holst points out that the model was validated with tank-sized targets that were generally in the mid-spatial frequency region. Finally, Holst states that the MTF should be set to 1 when the observer is allowed head movement to optimize vision for presented imagery. That is, the observer can move closer for small images and farther for large images.
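A short numerical sketch of the Nill model (10.24) and the light-dependent factor (10.26) follows; the function names are illustrative, and taking log as base 10 is an assumption.

    import numpy as np

    def gamma_factor(brightness_fl):
        # Light-dependent factor of (10.26); B in foot-lamberts.
        # Base-10 logarithm assumed.
        logb = np.log10(brightness_fl)
        return 1.444 - 0.344 * logb + 0.0395 * logb**2 + 0.00197 * logb**3

    def nill_eye_mtf(rho, rho_peak):
        # Nill-model eye MTF of (10.24); rho in the same units as rho_peak.
        x = rho / rho_peak
        return 2.71 * (0.19 + 0.81 * x) * np.exp(-x)

    print(round(gamma_factor(10.0), 3))                     # ~1.141
    print(nill_eye_mtf(np.array([0.0, 4.0, 8.0]), 4.0).round(3))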
Figure 10.13 Nill and Kornfeld-Lawson eye models.
10.9 AUTOMATIC TARGET RECOGNITION
It has long been desired to automatically process the data provided by imaging sensors for a variety of functions. These functions range from such commercial applications as component inspection for manufacturing to military applications such as target detection and face recognition. The focus here is on military pattern recognition, though much of this material is applicable for both commercial and military systems. The terminology that is often used to describe automated pattern recognition in the military community is generically referred to as automated target recognition (ATR). ATR implies a sequence of mathematical operations where the result is the recognition of a target. The principal functions for the ATR sequence [16-18] are shown in Figure 10.14.
Figure 10.14 ATR process.
The definitions for each of the ATR functions are provided as follows.

Detection: A portion of an image, usually called a region of interest (ROI), has been determined to contain an object of interest. Another definition is simply that a region successfully traverses through a prefilter (i.e., the detection process is a spatial screener). This process is usually carried out by stepping a window through the entire image. The window is approximately the size of the target class of interest. At each step, a variety of tests (mathematical operations) are performed to determine if an object is present. Typical parameters of the window include contrast, various statistics, and power spectral density. Once a parameter is calculated, it is either compared with the background region around the window or compared with some predetermined threshold value. These values are determined based on some a priori knowledge of the target types of interest and by analysis of existing data that are similar to the test data. The primary purpose of detection is to eliminate most of the imagery data, as the processing power is usually not available to apply complex recognition algorithms to the large amount of imagery data. The real target decision is performed later.
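As a concrete illustration of the window-stepping prefilter described above, here is a toy sketch; all names and the single contrast test are assumptions, since fielded detectors apply many such tests with tuned thresholds.

    import numpy as np

    def detect_rois(image, win=32, step=16, k=2.0):
        # Flag windows whose mean differs from the global background mean
        # by more than k standard deviations -- a crude contrast screener.
        mu, sd = image.mean(), image.std()
        rois = []
        for r in range(0, image.shape[0] - win + 1, step):
            for c in range(0, image.shape[1] - win + 1, step):
                if abs(image[r:r+win, c:c+win].mean() - mu) > k * sd:
                    rois.append((r, c))   # upper-left corner of a candidate ROI
        return rois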
Segmentation: Once a potential target is localized through detection, it is extracted from the background as accurately as possible. Segmentation is generically referred to as the extraction of both low- and high-level features that are used in the recognition stage. Typical features that are extracted include the target silhouette and internal features of the vehicle. There are three methodologies that are used in segmentation: edge detection, thresholding, and region growing. Some segmentation approaches employ all three components. Basically, segmentation assumes that objects are distinguished from their surroundings by the presence of edges. Some segmentation methods employ thresholding to separate edges of the target from those of the background. A wide range of edge detectors are available, including the gradient, Laplacian, and Sobel operators and, more recently, wavelets. Morphological operators such as erosion and/or dilation are sometimes employed to help improve the segmented image. Erosion makes a region smaller by removing pixels from the edges, while dilation makes them larger by adding pixels.

Classification: After segmentation, additional features are calculated, usually from combinations of features extracted during segmentation. They are used for categorizing the object to a specified class or specific identification. There are three categories of classification algorithms: statistical, template correlation, and neural networks. All these approaches require extensive knowledge of the target set. A pictorial representation and basic description of the three classification approaches is shown in Figure 10.15.
While the above process is a classical description of the ATR approach, advances in segmentation [19] and other ATR processes are making it more difficult to separate the steps shown in Figure 10.14. Detection is increasingly becoming a data elimination process. Segmentation is also becoming more a part of this process by using more sophisticated analytical techniques.
Statistical classifiers use various features such as moments, frequency components, and statistical measures in order to distinguish one target from other targets in the same general class. This is done through probabilistic association by partitioning the feature space into various regions.

Template correlation algorithms use a set of templates stored in a database to develop correlations with the segmented object for purposes of classification. Very simply, the higher the correlation, the higher the probability of correct classification. The template database can be developed either from training data or CAD models of the targets of interest.

Neural networks are based on the use of a multitude of elemental nonlinear computing elements organized in a network reminiscent of the brain. The key components/aspects of a neural network include the input layer, hidden layer, output layer, a set of initial weights, and a learning algorithm.

Figure 10.15 Classification approaches.
In evaluating an ATR algorithm, it is desirable to use measures that completely describe its performance. A wide range of factors must be considered, including the environment in which the data was taken, the quality of the sensor data, the parameters of the sensor (resolution, sensitivity), and scene background clutter, just to name a few. A performance curve that is frequently used in the evaluation/design of ATR algorithms is shown in Figure 10.16. Historically, this curve has been known as the receiver operating curve (ROC). More recently, it has been called the ATR operating curve [20]. As can be seen from the graph, the probability of target detection (this could be recognition or identification) is plotted with respect to the false alarm rate (FAR). A false alarm occurs when the ATR incorrectly classifies a nontarget as a
target. As the probability of target detection (P_d) improves, the FAR increases. Ideally, one would like to have a very high probability of detection and a very low FAR. It is up to the algorithm designer to choose an acceptable P_d along with an allowable FAR.
Figure 10.16 ATR performance evaluation curve.

In summary, there have been tremendous improvements in the supporting elements of ATR (sensors, processors, mathematics). However, the algorithms are still far from mature. Performance limitations can be attributed to the wide range of target signatures and backgrounds that vary tremendously depending on the test location and the environmental conditions. Despite its limitations, ATR is proving to be extremely useful in cueing an operator to an area of an image that contains an important target. This cueing and partial recognition is important for many military applications where timelines are critical and a large amount of imagery is involved. The commercial sector has had better success with optical character recognition (OCR) for typed pages and component inspection in manufacturing. The success in the commercial arena can largely be attributed to having a controlled environment.
EXERCISES
". C ,.
r:
ri
10.1 An imaging system has a vertical FOV of 2 deg and the imagery is displayed to a human observer. Assuming a typical viewing monitor angle, determine the system magnification.

10.2 A CRT display is a 525-line, 14-in monitor that is given the shrinking raster test. The shrinking raster yields a total line height of 25 cm. Sketch the MTF of the CRT display in cycles per meter (or cycles per millimeter). Assuming that the monitor displays imagery from a 5-deg vertical FOV sensor, determine and sketch the CRT MTF in cycles per milliradian so that it can be compared with other system MTFs.
10.3 For the example given in Section 10.2, provide the necessary spot-size parameter for a cutoff frequency of 10 cycles per milliradian given the same system.
10.4 Show that the cutoff frequency for the CRT is 1/(√(2π) σ).
10.5 Sketch the horizontal MTF for an LED display with an LED horizontal dimension of 0.05 mm, an LED scan distance of 2 cm, and a horizontal FOV of 5 deg. Sketch the MTF as a function of spatial frequency in cycles per milliradian.

10.6 Using the Overington model, determine the MTF of the eye that is viewing a display brightness of 20 fL. Sketch the results as a function of spatial frequency in cycles per milliradian. Compare these results with the MTFs of the retina and tremor.
10.7 Discuss the best conditions for each ATR classifier approach. What would the best approach be for a manufacturing visual inspection? A desert battlefield with little clutter? An extremely cluttered and smoke-filled battlefield environment?

10.8 Of the three display devices described in this section, which one would lend itself more to a handheld device for field use? An aircraft cockpit?

REFERENCES
[1] Pinson, L., Electro Optics, New York: Wiley, p. 240, 1985.
[2] Waldman, G., and J. Wootton, Electro-Optical System Performance Modeling, Boston, MA: Artech House, p. 170, 1993.
[3] Biberman, L., "Image Display Technology and Problems With Emphasis on Airborne Systems," in The Infrared and Electro-Optical Systems Handbook, ERIM and SPIE Press, 1993.
[4] Goodman, L. A., "The Relative Merits of LEDs and LCDs," Proceedings of the Society for Information Display, 16(1), 1975.
[5] Vollmerhausen, R., "Impact of Display Modulation Transfer Function on the Quality of Sampled Imagery," SPIE Proceedings of Aerospace/Defense, Sensing and Controls, Orlando, FL, Apr. 1996.
[6] Vollmerhausen, R., Application of the Sampling Theorem to Solid State Cameras and Flat Panel Displays, U.S. Army Night Vision and Electronic Sensors Directorate Report NV-2-14, Ft. Belvoir, VA, Feb. 1989.
[7] Barbe, D., and S. Campana, "Imaging Arrays Using the Charged Coupled Concept," in Image Pickup and Display, Vol. 3, B. Kazan, Editor, Academic Press, 1977.
[8] Hecht, E., and A. Zajac, Optics, Reading, MA: Addison-Wesley, p. 138, 1979.
[9] Meyer-Arendt, J. R., Introduction to Classical and Modern Optics, Englewood Cliffs, NJ: Prentice-Hall, 1984.
[10] Yu, F. T., and I. C. Khoo, Principles of Optical Engineering, New York: Wiley, 1990.
[11] Overington, I., Vision and Acquisition, New York: Crane, Russak and Co., 1976.
[12] Blackwell, H., "Contrast Thresholds of the Human Eye," Journal of the Optical Society of America, Vol. 36, 1946.
[13] Karim, M., Electro-Optical Displays, New York: Marcel-Dekker, 1993.
[14] Holst, G., and A. Taylor, "What Eye Model Should We Use for MRT Testing," Infrared Imaging Systems: Design, Analysis, Modeling, and Testing, SPIE Vol. 1309, 1990.
[15] Holst, G., Electro-Optical Imaging System Performance, Orlando, FL: JCD Publishing, p. 130, 1995.
[16] Haykin, S., Neural Networks: A Comprehensive Foundation, Maxwell McMillan International, 1994.
[17] Fukunaga, K., Introduction to Statistical Pattern Recognition, Academic Press, 1972.
[18] Nadler, M., and E. Smith, Pattern Recognition Engineering, New York: Wiley, 1993.
[19] Morel, J., and S. Solimini, Variational Methods in Image Segmentation, Boston, MA: Birkhauser, 1994.
[20] Sherman, J., D. Spector, C. Swauger, L. Clark, E. Zelnio, M. Lahaut, and T. Jones, "Automatic Target Recognition Systems," in The Infrared and Electro-Optical Systems Handbook, ERIM and SPIE, p. 369, 1993.
Part 3
Systems
Chapter 11
Infrared Systems
"\.d)
Waldman and Wootton [1] state that infrared systems underwent initial development just before World War II. Due to the success of radar, imaging infrared (I2R) systems were investigated primarily in the laboratory until the 1960s. Once developed, their highest value was identified as providing battlefield observation in the dark. Infrared sensors allowed both targeting and fire control with complete day and night coverage. Huge successes were seen in Desert Storm with the night superiority provided by I2R systems. The I2R techniques presented in this chapter are the same as those used for the design and analysis of these military systems.

The forward-looking infrared (FLIR) sensor is a military version of the I2R. Both terms, FLIR and I2R, are used interchangeably in the military and strategic sensor communities. The progress in infrared imaging technology has been described in terms of the FLIR. The first-generation FLIR is widely fielded on many platforms including tanks, helicopters, and rifles (also, hand-held sighting systems). The first-generation FLIR is generally called a common-module FLIR. This stems from the use of system components made up of modules designed to be configured many different ways to be compatible with the platform constraints. The modules, or building blocks, allow system versatility in design while minimizing system uniqueness and, therefore, major expense throughout the life cycle of the equipment. Minimized logistic costs are realized as many of the components can be interchanged with similar fielded common-module systems. Due to specific system weight, power, and size constraints, some of the common modules are unique and cannot be interchanged. For example, there is a large IR imager module and a small IR imager module. The first-generation FLIRs all comprise a linear array of detectors that are optically swept across the sensor FOV. Three detector array modules consist of 60, 120, and 180 detectors each. The 120- and 180-element modules are interchangeable, whereas the 60-element module is unique to itself and is used for rifle and hand-held applications. The 120-element module is typically used for ground vehicle platforms, whereas the 180-element module is used for airborne applications. There are two unique and noninterchangeable scanner modules: one for the small 60-element detector module and the other for the 120- and 180-element detector modules. The scanners are designed in such a way that the scanning mirror tilts on successive scans, thus providing successive interlace fields to the infrared imagery.
The scanners are bidirectional in operation. All the electronic amplifiers and filters are analog. The infrared-to-visible-light conversion is performed with an electro-optical multiplexer (EO mux). The first-generation FLIR was very successful in Desert Storm. First-generation FLIRs have been fielded in Apache helicopters, Light Observation helicopters, M1A1 tanks, Bradley Fighting Vehicles, F-16 fighters, with troops (the AN/TAS-4A), and the Night Observation Device-Long Range (NODLR), to name a few.

Second-generation FLIRs have been under development for the last 10 years. They are well defined and have been patented [Blecha et al., U.S. Patent Number 5,510,618]. While the second-generation FLIR has been successfully developed and characterized for some time, the production and mass fielding of these systems have only recently begun. The systems are intended to replace the fielded first-generation systems initially, and then become integrated components in new weapon system platforms. The major difference is that the detector has the ability to perform time-delay-integration (TDI) processing with a number of detectors in a single scan line. Typically, the detector is 480 by 4, where the four-detector width is scanned and read out serially to perform the TDI function. In short, each of the four detectors samples the FOV at the same point and the signals are integrated to increase fidelity. The signal adds directly and the noise adds in quadrature, giving an increased signal-to-noise ratio (SNR) by a factor of 2. On-chip electronics are still analog, but digital processing is performed off the chip. Second-generation FLIRs are still single-band systems that operate in either the midwave or longwave spectral band.

The third-generation FLIR is currently under development. It is not yet defined to the point that a patent has been issued. The third-generation FLIR technologies are listed in Figure 11.1; however, whether these technologies are included in the third-generation FLIR is only speculation. It is widely accepted that the third-generation FLIR will include a large-format staring array. The details about digital technology on the chip are not currently known. Finally, there is some chance that the third-generation FLIR will include multiple spectral-band detectors on the same chip. These bands could be both midwave and longwave or a combination of subsets of these spectral bands.

Lloyd [2] points out that the imaging process has several different dimensions: intensity, time, wavelength, polarization, and the three spatial dimensions. With the number of variables contained in these dimensions, there are many correct solutions for a given imaging problem. Unfortunately, a large number of incorrect parameter sets can also be taken as solutions to a particular situation. Analysis and design go hand-in-hand; they form an iterative process. We assume that the analysis lends itself to linear shift-invariant principles in the development of the response for an infrared sensor design.
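The quadrature argument behind the second-generation TDI gain described above can be checked numerically. The sketch below sums N = 4 noisy samples of the same scene point: the signal grows as N while the rms noise grows only as the square root of N, so the SNR improves by sqrt(4) = 2. The signal and noise levels are assumed values for illustration only, not data from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tdi = 4            # detectors summed per scan line (480 x 4 array)
signal = 1.0         # single-detector signal from one scene point (assumed)
noise_rms = 0.5      # single-detector rms noise (assumed)
trials = 100_000

# Each trial: the same scene point is sampled by all four detectors,
# then the samples are summed (the TDI operation).
samples = signal + noise_rms * rng.standard_normal((trials, n_tdi))
tdi_sum = samples.sum(axis=1)

print(f"single-detector SNR: {signal / noise_rms:.2f}")             # 2.00
print(f"TDI SNR:             {tdi_sum.mean() / tdi_sum.std():.2f}")  # ~4.00, a 2x gain
```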
[Figure 11.1 here: a three-panel diagram contrasting the FLIR generations. First generation (common-module): bidirectional linear array, analog electronics, EO-MUX/E-MUX, analog filtering, single-wave band. Second generation: unidirectional TDI x 4 array, analog electronics on chip with digital off chip, digital signal processing, single-wave band. Third generation: large-format array, digital electronics on chip, on-chip processing, digital signal processing, multispectral.]
Figure 11.1 FLIR generations.

Performance measures are an excellent method for representing the collective response of a sensor system. However, infrared sensor performance measures do not include all the necessary information to determine how well a sensor will perform in an overall imaging environment (e.g., a tactical or strategic scenario). That is, the I2R system performance measures do not explicitly include target, background, or atmospheric characteristics. The measures only include the sensor components as outlined by the dashed box in Figure 11.2. While the sensor system does not include the target, background, and atmospheric components, it does include the display and human vision components (if they are intended to process the data). The idea here is to characterize the sensor with a number of system-level performance parameters that describe the fidelity of the infrared-to-visual transformation. These system-level metrics include the response of each sensor component. They can be used to determine how well a sensor performs in an overall imaging system scenario for particular targets, backgrounds, and atmospheric conditions, allowing a quick evaluation of the sensor in different environments.
[Figure 11.2 here: block diagram of the I2R sensor system. Targets and background are viewed through the atmosphere; the dashed box marking the sensor system encloses the sensor, display, human vision, and automatic target recognition (ATR) components.]
Figure 11.2 I2R sensor system.

The system-level performance measures describe the sensor in terms of sensitivity and resolution. Noise equivalent temperature difference (NETD) and three-dimensional noise parameters describe the sensor's general sensitivity. The system modulation transfer function (MTF) describes the imaging resolution of the sensor. However, sensitivity and resolution of a sensor are not separable. The resolution of an infrared sensor is strongly dependent on the sensitivity of the sensor and vice versa. Therefore, a combined sensitivity and resolution performance parameter was developed, called the minimum resolvable temperature difference (MRTD, or just MRT), that accurately describes the interrelationship for a particular sensor. MRT is used by many engineers and scientists, especially tactically oriented (fire control and targeting) organizations, as the primary performance measure. Many parameters are discussed in this chapter. The material is intended to pull together all the sensor components presented in the previous chapters to culminate in a system-level sensor description. This is then used along with the target and atmospheric parameters to give a probability of object discrimination.

11.1 SENSITIVITY AND RESOLUTION
The two general parameters of performance for any I2R or EO sensor are sensitivity and resolution. Sensitivity refers to how well a sensor can discriminate small radiance changes in the object space. Formally, sensitivity can be defined as the differential radiance that produces a unity SNR. For infrared sensors, this is usually described in terms of an equivalent blackbody temperature (i.e., the temperature of an ideal blackbody that delivers the prescribed radiance). Ideally, an infrared sensor would be able to resolve the radiant changes in targets and backgrounds to an infinite number of equivalent blackbody temperatures.
However, system noise and sensor dynamic range eliminate any chance of infinite sensitivity. NETD is a good measure of sensitivity, but it is a gross value that lumps noise into an equivalent bandwidth. More important, sensitivity is a function of resolution. MRT describes sensitivity as a function of spatial frequency (i.e., resolution), producing a more comprehensive characterization. Resolution refers to how well a sensor can "see" small spatial details in the object space. Resolution is characterized by the MTF of the system. However, the entire system MTF is not useful. Some MTF values decrease signals to less than the noise level of the system (for particular spatial frequencies). So, again, we find ourselves in a position where the true useful resolution of the system depends on noise (and therefore sensitivity). MRT was developed to address the need for a function relating the two. Because sensitivity and resolution are related, changes in sensor resolution affect sensor sensitivity. Most of the possible changes in sensor components give conflicting changes in sensitivity and resolution. For example, an increase in a sensor's focal length (with all other sensor parameters held constant) may increase resolution and decrease sensitivity. Such a parameter change can give a sensitivity and resolution relationship similar to the one shown in Figure 11.3.
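A minimal numerical sketch makes the focal-length tradeoff concrete. It uses two relations developed in this chapter: the detector angular subtense alpha = a/f, which shrinks (improving resolution) as focal length grows, and the detector flux of (11.3) below, which scales as alpha*beta*A_o and therefore falls as 1/f^2 when only focal length changes. All numerical values are illustrative assumptions, not figures from the text.

```python
import math

# Illustrative focal-length tradeoff: doubling f halves the detector
# angular subtense (better resolution) but quarters the collected flux.
a = b = 25e-6                      # detector dimensions [m] (assumed)
A_o = math.pi * (0.10 / 2) ** 2    # 10-cm entrance pupil area [m^2] (assumed)

for f in (0.25, 0.50):             # focal lengths [m]
    alpha, beta = a / f, b / f     # angular subtenses [rad]
    flux_scale = alpha * beta * A_o  # flux on detector scales with this product
    print(f"f = {f:.2f} m: DAS = {alpha * 1e3:.3f} mrad, "
          f"flux scale = {flux_scale:.2e}")
```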
Figure 11.3 Sensitivity and resolution relationship.

With such a parametric analysis, a sensor designer or analyst needs to determine an appropriate point to set the parameter. The decision may be a simple one because of the sensor's use. For example, the sensor may be used to view rocket plumes where plenty of signal is present, but high resolution is desired. The process may be an iterative one requiring convergence to a compromise in requirements. Not all parameters are conflicting in sensitivity and resolution. There are a few, such as entrance pupil diameter, that usually enhance both sensitivity and resolution. For a larger entrance pupil diameter, the resolution of a sensor may be increased because of a smaller diffraction spot (or psf) and the sensitivity of the
sensor may be increased because of collection efficiency (i.e., flux throughput). While this sounds like a great way to improve both parameters, imaging optics usually have a practical f-number lower limit of around 1, resulting in a limit on entrance pupil diameter. Also, optics with very low f-numbers tend to be accompanied by significant aberrations, so the resolution of the system may actually suffer. It is well known in the airborne sensor communities (both tactical and intelligence, surveillance, and reconnaissance [ISR]) that payload customers want sensors with infinite resolution, infinite sensitivity, and an infinite FOV. The bottom line is that sensor parameters involve significant tradeoffs in sensitivity, resolution, and area coverage. The following sections provide an introduction to the sensitivity and resolution parameters of IR sensors, the basis of most tradeoffs. We begin with the sensitivity parameter of NETD.

11.2 NOISE EQUIVALENT TEMPERATURE DIFFERENCE
NETD is a sensitivity parameter that is defined [3] as the target-to-background temperature difference in a standard test pattern that produces a peak-signal-to-rms-noise ratio of unity at the output of a reference filter (recent measurements use the system noise bandwidth as the linearity filter) when the system views a test pattern. The NETD is a measure of the rms noise amplitude in terms of target-to-background temperature differentials. The NETD is typically measured with an oscilloscope on the output of a detector while the infrared imager views a square target of differential temperature ΔT that produces a signal-to-background differential voltage V_s. The noise voltage on the output of the detector is then measured with a true rms voltmeter, giving V_n. The corresponding NETD of the detector signal is determined by
NETD = ΔT / (V_s / V_n)    [K]    (11.1)
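As a numerical illustration of (11.1), the sketch below computes NETD from a hypothetical bench measurement; the ΔT and voltage readings are assumed values, not data from the text.

```python
# NETD per (11.1) from a hypothetical oscilloscope measurement.
delta_T = 5.0    # square-target temperature differential [K] (assumed)
V_s = 2.0        # signal-to-background differential voltage [V] (assumed)
V_n = 0.1        # true-rms noise voltage [V] (assumed)

netd = delta_T / (V_s / V_n)
print(f"NETD = {netd:.2f} K")   # 0.25 K for these readings
```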
The overall frame NETD can then be taken as the average, or some other function [4], of the detector NETDs. There are actually a number of different techniques for measuring NETD that can lead to errors [4]. However, the basic concept is that NETD is a temperature-equivalent signal that corresponds to the system rms noise output. The following derivation of NETD is provided by Lloyd [3] with the following assumptions:
1. Detector responsivity is uniform across the detector's sensitive area.
2. Detector detectivity is independent of other parameters in the NETD equation.
3. NETD equivalence is in blackbody signal levels.
4. The detector angular subtense and system f-number can be expressed by small angles.
5. Electronic processing introduces no noise.
Consider the sensor system in Figure 11.4 that views part of a large blackbody source. The blackbody source has a spectral radiance of

L(T,λ) = M(T,λ) / π    [W/(cm²-sr-μm)]    (11.2)
The flux (or power) on the detector, whether the detector is scanned or staring, can be expressed as

Φ(T,λ) = L(T,λ) α β A_o τ_optics(λ)    [W/μm]    (11.3)

where α = a/f (horizontal detector angular subtense), β = b/f (vertical detector angular subtense), A_o is the area of the optics entrance pupil, and τ_optics(λ) is the transmission of the optics.
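The sketch below evaluates (11.3) for an assumed sensor, using Planck's law for the blackbody exitance M(T,λ) and L = M/π from (11.2). The detector size, focal length, pupil area, and transmission are illustrative assumptions, not values from the text.

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

def planck_exitance(T, lam):
    """Blackbody spectral exitance M(T, lam) in W/(m^2 * m)."""
    return (2 * math.pi * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

# Assumed scene and sensor values (illustrative only).
T, lam = 300.0, 10e-6          # blackbody temperature [K], wavelength [m]
a = b = 25e-6                  # detector dimensions [m]
f = 0.25                       # focal length [m]
A_o = math.pi * 0.05**2        # 10-cm-diameter pupil area [m^2]
tau = 0.8                      # optics transmission

L = planck_exitance(T, lam) / math.pi   # radiance per (11.2)
alpha, beta = a / f, b / f              # angular subtenses [rad]
phi = L * alpha * beta * A_o * tau      # spectral flux per (11.3)
print(f"spectral flux on detector: {phi:.2e} W/m ({phi * 1e-6:.2e} W/um)")
```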
[Figure 11.4 here: a blackbody source at range R imaged by optics of entrance pupil area A_o and focal length f onto the detector; the cone from the detector through the optics defines the area seen by the detector on the blackbody.]
Figure 11.4 Blackbody flux to detector.
Because the detector signal responds to flux, we are interested in the differential change in flux with target temperature:

∂Φ(T,λ)/∂T = (α β A_o τ_optics(λ) / π) ∂M(T,λ)/∂T    [W/(μm-deg K)]    (11.4)
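As a numerical check of (11.4), the following sketch evaluates ∂M/∂T with a central finite difference on Planck's law at 10 μm and 300 K, then scales it by αβA_oτ/π. The sensor values repeat the illustrative assumptions used in the previous sketch.

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck_exitance(T, lam):
    """Blackbody spectral exitance M(T, lam) in W/(m^2 * m)."""
    return (2 * math.pi * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

# Same illustrative sensor values as before (assumed, not from the text).
T, lam = 300.0, 10e-6
alpha = beta = 1e-4                  # detector angular subtenses [rad]
A_o, tau = math.pi * 0.05**2, 0.8    # pupil area [m^2], optics transmission

# Central finite difference for dM/dT, then scale per (11.4).
dT = 0.01
dM_dT = (planck_exitance(T + dT, lam) - planck_exitance(T - dT, lam)) / (2 * dT)
dphi_dT = (alpha * beta * A_o * tau / math.pi) * dM_dT
print(f"dPhi/dT = {dphi_dT:.2e} W/(m*K) at 10 um, 300 K")
```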
The detector responsivity relates the flux