40 • Nuclear Science
Articles in this section:
• Dosimetry
• Fission Chambers
• Fusion Reactor Instrumentation
• Ionization Chambers
• Light Water Reactor Control Systems
• Nuclear Engineering
• Nuclear Power Station Design
• Particle Spectrometers
• Photomultipliers
• Radiation Detection
• Radiation Monitoring
Wiley Encyclopedia of Electrical and Electronics Engineering
J. Webster (ed.)

Dosimetry
Standard Article
John W. Poston, Sr., Texas A&M University, College Station, TX
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5202
Article Online Posting Date: December 27, 1999
Abstract
The sections in this article are: Interactions of Radiation With Matter; The Bragg–Gray Principle; The Free Air Ionization Chamber; General Considerations; Ionization Chambers; Thermoluminescence Dosimetry; Electronic Dosimeters; Glossary.
DOSIMETRY
Radiation dosimetry can be defined as the theory and application of principles and techniques associated with the measurement of ionizing radiation (1). In the field of radiation protection/safety, radiation dosimetry is divided into two primary categories: external dosimetry and internal dosimetry. External dosimetry applies to measurements in which the radiation source (i.e., the radioactive material) is outside the body and the measurements can be made with any number of sensitive radiation detectors. In contrast, internal dosimetry applies to situations in which the radioactive material is taken inside the body and may be incorporated into organs and tissues of the body. Internal dosimetry requires very specialized calculations involving many assumptions, usually standardized in the field, because it is not possible to make direct measurements for internally deposited radioactive materials. Usually measurements are made of radioactive material excreted from the body or radiation emanating from the body using very sensitive detectors and, based on the data
obtained, a calculation is used to assess the dose. Because of the very specialized nature of internal dosimetry, this topic will not be discussed further. External dosimetry usually involves radiation detection instrumentation which can be used to assess one or more characteristics of the radiation field. These characteristics include measurements of the types of radiation, the total energy deposited in a radiation detector, the energy distribution (i.e., the energy spectrum) and/or the total fluence, the radiation intensity, the angular dependence of the field, the time dependence of the field, locations of the sources within the area, and many other specific parameters depending on the purpose of the measurement. Thus, the selection of an appropriate radiation detector depends on the purpose of the measurement. In addition, almost all radiation detectors (i.e., dosimeters) require careful calibration in known radiation fields before use in unknown radiation fields. The term dosimetry can best be understood by remembering that the term simply means ‘‘dose measurement.’’ In the simplest of terms, dose measurement involves the assessment of the energy deposited by ionizing radiation in a known amount of material. This article introduces the concepts associated with radiation dosimetry and provides a survey of some common radiation detectors used for this purpose. To assist in understanding the concepts presented in this article, a glossary of common terms is provided at the end. For detailed discussions of the quantities and units associated with radiation dosimetry, the publications of the International Commission on Radiation Units and Measurements (2–5) should be consulted.
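Since dose measurement reduces, in the simplest terms, to energy deposited per unit mass, the basic bookkeeping can be sketched as follows. This is an illustrative sketch only; the function name and the numbers are assumptions for the example, not values from the article.

```python
# Absorbed dose is deposited energy per unit mass: 1 gray (Gy) = 1 J/kg.

def absorbed_dose_gray(energy_joules: float, mass_kg: float) -> float:
    """Return the absorbed dose in gray for energy deposited in a mass."""
    return energy_joules / mass_kg

# Illustrative numbers (not from the article): 2 mJ deposited in 0.5 kg.
print(absorbed_dose_gray(2.0e-3, 0.5))  # 0.004 Gy, i.e., 4 mGy
```

The same ratio underlies every detector discussed below; what differs is how each device infers the deposited energy.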
INTERACTIONS OF RADIATION WITH MATTER

The basis for radiation dosimetry is obviously the ability to use a device to detect or measure the energy deposited by radiation (or a quantity which can be related to the deposited energy). The device can take many forms and the methods of detection vary widely. In order to understand how radiation can be detected and how energy is deposited in a radiation detector, it is necessary to have some understanding of the mechanisms through which radiation interacts with matter. These interaction mechanisms are dependent on the types of radiation to be measured and, in many cases, the energy of the radiation. The subsequent discussion is intended to provide some insight into the common types of radiation and the interaction mechanisms of some of the most common radiations. More detailed discussions can be found in several excellent texts (6–8).

Alpha Radiation

Alpha particles (α) are large, charged particles composed of two neutrons and two protons (some texts correctly identify this particle as a helium nucleus) and are less penetrating than the other radiations typically considered in radiation dosimetry. An alpha particle has a positive charge of two and, on an atomic scale, is quite massive. In general, a single neutron or proton is about 1840 times more massive than any of the electrons orbiting the nuclei of atoms. Thus, an alpha particle is more than 7000 times more massive than a single electron. This mass difference plays an important role in the way alpha particles
interact with matter and the hazard they present to living tissue. For the purposes of this discussion, it can be assumed that alpha particles emitted in the decay of a specific radionuclide are monoenergetic (i.e., they all have the same kinetic energy). The kinetic energy of an alpha particle is transferred to the medium through which it is passing by interactions between the particle and the orbital electrons of the atoms or molecules of the material. The major energy-loss mechanisms are electronic excitation and ionization of the orbital electrons. An alpha particle has a high electrical charge (+2) but a very low velocity (because of its mass). Thus, interactions with the medium are not spaced very far apart and the alpha particle does not travel far in most materials (i.e., alpha particles have a high specific ionization and a short range). These interactions are not actually collisions with the loosely bound electrons of the atom but are electrical or coulombic in nature. Since the particle is positively charged, it exerts an attractive force on the oppositely charged, negative electrons. The force exerted by the alpha particle is dependent on the distance between it and the electron as they pass (often called the impact parameter). The force and the probability of an ionizing event both increase as the distance decreases. In some cases, this attractive force is not sufficient to remove the electron from the atom and the electron simply is raised to a higher orbital position (i.e., the atom is excited). If the attractive force is sufficient to separate the electron from the atom, the atom is ionized and an ion pair is created. In some cases, for close encounters, it is instructive to imagine that the alpha particle exerts a huge attractive force and rips one or more electrons from the atom. Each interaction with an electron reduces the kinetic energy of the alpha particle. On the average, it takes approximately 34 eV to produce an ion pair in air.
This is about twice the first ionization potential for most gases because, as stated, the energy transferred is shared between excitation and ionization events in the material. A typical alpha particle has a high specific ionization and may produce 20,000 to 40,000 ion pairs/cm of travel in air. In addition, the range of the alpha particle is dependent on the material through which it is traveling. Thus, even a very energetic alpha particle, with an initial energy of 5 million electron volts (MeV), will travel only about 3.5 cm in air. The range of the 5 MeV alpha particle in human tissue is only a few tens of micrometers. A typical alpha particle does not have sufficient energy to penetrate the dead layer of skin on the human body. Thus, alpha particles are not of dosimetric concern if they remain outside the body, but must be given careful consideration if internally deposited (a situation which puts them in direct contact with living tissue). In general, alpha particles have straight paths through material and discrete ranges. The energy transfers of this massive particle are small relative to the total kinetic energy of the particle (until the alpha reaches the end of its travel); hence the straight path through material. Usually alpha particles associated with a specific radionuclide are characterized by specifying the mean range of the radiation. A number of empirical equations have been derived to relate the initial kinetic energy and the range of the alpha particle. Usually, these ranges are calculated for air and converted to other materials through simple relationships. For alpha particles with initial kinetic energies in the range 4 to 8 MeV, a useful equation is:

R = 1.24E − 2.62   (1)
where R is the range in air (cm) and E is the alpha particle energy (MeV). The range in tissue is obtained through a very simple ratio:

Rair × ρair = Rtissue × ρtissue   (2)
where Rair is the alpha particle range in air (cm); ρair is the density of air at STP (0.001293 g cm⁻³); Rtissue is the alpha particle range in tissue (cm); and ρtissue is the density of tissue (1 g cm⁻³). Thus, the range in tissue is simply:

Rtissue = Rair × ρair/ρtissue   (3)
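Equations (1) through (3) can be applied directly. The sketch below (with assumed function names) computes the air range for an alpha particle in the 4 to 8 MeV validity window of Eq. (1) and converts it to tissue via the density ratio of Eqs. (2) and (3):

```python
# Density of air at STP and of soft tissue, as given in the text (g/cm^3).
RHO_AIR = 0.001293
RHO_TISSUE = 1.0

def alpha_range_air_cm(energy_mev: float) -> float:
    """Empirical air range of an alpha particle, Eq. (1); valid for 4-8 MeV."""
    if not 4.0 <= energy_mev <= 8.0:
        raise ValueError("Eq. (1) applies only to 4-8 MeV alpha particles")
    return 1.24 * energy_mev - 2.62

def alpha_range_tissue_cm(energy_mev: float) -> float:
    """Tissue range from the density-ratio relation of Eqs. (2)-(3)."""
    return alpha_range_air_cm(energy_mev) * RHO_AIR / RHO_TISSUE

r_air = alpha_range_air_cm(5.0)        # 3.58 cm in air
r_tissue = alpha_range_tissue_cm(5.0)  # a few tens of micrometers in tissue
```

For a 5 MeV alpha the formulas give a few centimeters in air and tens of micrometers in tissue, consistent with the dead-layer-of-skin argument above.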
Once the alpha particle loses its kinetic energy, it acquires two free electrons to become a helium atom. Alpha-emitting radionuclides are usually those considered to be the heavy elements: these are the elements with a large number of nucleons (neutrons and protons) in the nucleus. Typical alpha-emitting radionuclides include Ra-226, Th-232, U-235, U-238, Pu-238, and Pu-239. Usually, alpha particle emission also is accompanied by the emission of electromagnetic radiation (gamma radiation). In some cases, spontaneous fission competes with alpha emission as a mode of transformation; however, usually alpha emission is dominant. Alpha decay results in a recoil nucleus that also slows by producing the same kind of dense ionization. In dose calculations, especially internal dose calculations, these contributions to ‘‘dose’’ also must be included. The alpha particle and the recoil nucleus are both assigned a quality factor (radiation weighting factor) of 20 to indicate the biological significance of these radiations.

Beta Radiation

Beta particles (β⁻) are identical to electrons in that these radiations have the same mass and charge as electrons. Beta radiation is more penetrating than alpha radiation and, in certain situations, must be considered carefully in radiation dosimetry. Beta particles originate in the nuclei of atoms which are unstable and are neutron rich (i.e., having too many neutrons in the nucleus). These radiations arise when a neutron in the nucleus is converted spontaneously into a proton, an electron, and an antineutrino. Since the electron does not normally exist in the nucleus, it is ejected and generally carries a significant kinetic energy. In contrast to alpha particles, beta radiation emitted from the nuclei of a material is not monoenergetic but has a range of energies (i.e., a continuum or a spectrum). The energy available in the transformation is shared between the beta particle and an antineutrino.
The antineutrino is thought to have no charge and a near zero mass (this is still a subject of great debate). Thus, the energy of the beta radiation emitted is distributed essentially from zero to some maximum energy associated with the specific transformation. At one extreme, the antineutrino carries away all the energy while at the other the beta particle carries away all the energy. Obviously, all other energy sharing arrangements are possible and the continuum of energies is produced. Antineutrinos have high penetrating power and low interaction probabilities and,
49
therefore, contribute little to the dose from beta radiation. For these reasons, antineutrinos are not normally considered in dose assessments. There are a few pure beta emitters, that is, radionuclides which reach a stable state with the emission of the beta particle and antineutrino and no other radiation. These include radionuclides such as H-3, C-14, S-35, and P-32. Often, other radiations (electromagnetic) may be emitted in this transformation because emission of the beta particle from the nucleus may not have resulted in putting the nucleus in its most stable state. In some situations, the nuclei of atoms may be unstable because they contain too many protons (i.e., proton rich). In this case, a proton is converted spontaneously into a neutron, a positron, and a neutrino. A positron (β⁺) is simply a positively charged electron which is emitted from the nucleus. As with the beta particle, the energy of the transformation is shared between the positron and the neutrino and, thus, the kinetic energies of positrons emitted in the transformation of a particular radionuclide are also distributed over a continuum. As was the case for antineutrinos, the dose contributions from neutrinos produced in beta decay are not considered in dose assessments. Radionuclides which are positron emitters include C-11, O-15, F-18, Na-22, and P-30. Positron emitters are rare but have some unique characteristics which make them very attractive for use in diagnostic nuclear medicine. These radionuclides typically have short half-lives, are produced by accelerators located in the medical facility, and their uses are limited to positron emission tomography (PET). Positrons interact with material in a manner similar to beta particles and only the differences will be discussed here. Beta particles also interact with matter through excitation and ionization.
However, in this case the interactions are those of two particles of the same mass with like charges, and the major interaction process is repulsion rather than attraction. It is useful to think of these as scattering interactions or inelastic collisions. Since the beta particle and the electron are essentially identical, there is a net repulsive force exerted and the orbital electron is either raised to a higher orbital position (excited) or separated from the atom (ionized). The net result is the creation of an ion pair, and approximately the same energy is required to create the ion pair. Since the beta particle and the electron are the same size (mass), in contrast to alpha particle interactions, the beta particle changes direction because of each interaction. There are many scattering events as the beta particle loses its kinetic energy and the path of the particle is far from straight (many authors use the word tortuous). Nevertheless, the term range is used to provide some insight into the distance a beta particle can travel through a material. However, the range of a beta particle is best thought of as the ‘‘crow-flight distance’’ or the linear thickness of material rather than the total distance (i.e., path length) traveled. It should be obvious that the path length is much larger than the range. A useful rule of thumb is that the range of a beta particle, in air, is about 4 meters per MeV. Beta particles also lose energy through radiative collisions with nuclei. This phenomenon, called bremsstrahlung production, describes the electromagnetic energy radiated when the beta particle is accelerated due to the presence of the nucleus. Bremsstrahlung is usually important only at very high beta particle energies and in high atomic number materials. However, production of bremsstrahlung must be recognized and considered in some radiation dosimetry situations. There are a number of empirical range–energy relationships for beta particles. One of these equations is:

R = 0.542E − 0.133   (4)
where R is the beta particle range (g cm⁻²) and E is the beta particle energy (MeV). This equation is useful for beta particles with a maximum energy greater than 0.8 MeV. Also, note the units on the range. This unit is called the density thickness of material and is a useful way to express the range. To convert to the range in a specific material in more conventional units, all one must do is divide this result by the density of the specific material. Thus, using density thickness to specify the range means that it is not necessary to specify the material, as was required in the discussion of alpha particles. At the end of travel, after each beta particle has lost its kinetic energy, it can exist in nature simply as a free electron. However, a positron cannot exist in nature and, when this particle has expended its kinetic energy, it combines with a free electron and these two particles annihilate. This process is a good example of conversion of mass into energy since the two electron masses disappear and the rest mass energy of the particles appears as two energetic photons of 0.511 MeV each. The production of these energetic photons (called annihilation radiation) must be considered in radiation dosimetry measurements as well as radiation shielding design.

X Rays and Gamma Radiation

X rays and gamma rays are both electromagnetic radiation. These radiations really differ only in their origin: X rays are produced during rearrangements in the electron orbitals (shells) of the atom, whereas gamma rays are produced as a result of nuclear rearrangements. Energy (or wavelength) differences are not important in that many low energy gamma rays have been discovered and it is now possible to produce very high energy X rays. Both of these radiations are called photons. A photon has been described both as a particle and a bundle of energy.
This is because photons possess both particle and wave-like properties: that is, a photon possesses energy but has no mass and has a wavelength and a frequency. X rays and gamma rays typically are the most penetrating radiation of those generally discussed in radiation dosimetry. The degree of penetration is dependent on the energy of the photons and the material through which the photons are passing. Very dense materials, such as lead, are excellent shields against photon radiation. Photons are indirectly ionizing radiations in that a primary interaction with the material must occur which produces a charged particle and it is this charged particle which produces additional ionization and excitation in the material. Photons interact with matter in many different ways but, usually, discussions are limited to three primary mechanisms: the photoelectric effect, Compton scattering, and pair production. These are three very distinct interactions and the type and probability of the interaction occurring depends on the photon energy and the material through which the photon is passing. The photoelectric effect occurs with highest probability at low photon energies and in high atomic number (i.e., high-Z) materials. The probability of a photoelectric interaction is
proportional to the atomic number of the material (Z⁴) and inversely proportional to the energy of the photon ((hν)⁻³). This interaction is considered relatively unimportant for photons with energies above about 1 MeV, except in very high-Z materials. The photoelectric effect can be considered to occur with the entire atom although the interaction is really with a tightly bound electron (typically the K-shell electrons). The incoming photon strikes the electron and transfers all of its energy to the electron (the photon disappears). If the energy transferred to the electron (often called a photoelectron) is greater than the electron binding energy, the photoelectron is ejected from the atom with the excess energy being manifested as kinetic energy. This charged particle then produces additional ionization and excitation through interactions with material as described previously for beta radiation. The photon interaction produces a vacancy in the electron structure of the atom which must be filled. An electron from a higher orbit will drop into the vacancy, leaving another vacancy which must be filled. Thus, there is a cascade of electrons as each succeeding vacancy is filled. As these electron vacancies are filled, photons are emitted with energies equal to the difference between the initial and final energy levels. These photons are called ‘‘characteristic X rays’’ because the photons are unique to the element from which they originate. Compton scattering is most probable for photons in the energy range 0.1 MeV to 1 MeV and in light materials (i.e., low-Z materials). Compton scattering occurs between a photon and a very loosely bound electron. The electron is in one of the outer electron orbits and is assumed to be essentially free from electrical influences from other electrons or the nucleus, since the binding energy is significantly less than the photon energy.
In this interaction, the conservation of momentum and energy makes it impossible for the photon to transfer all of its energy to the electron. The photon has a collision with the electron, dislodging the electron but transferring only a portion of its energy to the electron. The photon is deflected (scattered) from its original direction of travel and has a lower energy (longer wavelength). The Compton electron has a kinetic energy equal to the difference between the initial and final photon energy. The scattered photon may have additional Compton interactions or a photoelectric interaction in the material (depending on the photon energy). As with the photoelectron, the Compton electron produces additional ionization and excitation through interactions with material, as described previously for beta radiation. Pair production occurs with highest probability for photons with high energy (i.e., typically more than a few million electron volts). This interaction is the opposite of annihilation radiation production discussed previously. Here, the photon penetrates to the near vicinity of the nucleus and has a coulombic interaction with the charged nucleus. The photon disappears and two charged particles are produced. The charged particles are electrons of opposite signs (i.e., an electron and a positron), and the photon energy in excess of the rest mass energy of the pair is shared between the two particles as kinetic energy. Since each of these particles has the rest mass energy equivalent of 0.511 MeV, the interaction is not possible for photons with energies below a threshold of 1.022 MeV. In general, even above this threshold, pair production is not important for photons below about 4 MeV. Again, these charged particles produce additional ionization and excitation through interactions with material, as described previously for beta radiation. When the positron
has expended its energy in ionization and excitation in the material, it will annihilate with a free electron, as described previously.

Neutron Radiation

Neutron interactions are strongly dependent on the kinetic energy of the neutron and the material through which it is passing. Neutrons have approximately the same mass as a proton but possess no electrical charge. Thus, neutrons can penetrate to the nucleus of an atom and interact in a number of different ways. For thermal neutrons, capture is the most important interaction. In this interaction, the neutron is captured by the nucleus of an atom of the material and the nuclear structure is transformed. Typically, the nucleus is unstable and the excess energy, in the form of radiation, may be emitted. In dosimetry for radiation protection, two thermal neutron interactions with tissue are important. First, the neutron may be captured by the nucleus of a hydrogen atom and a 2.2 MeV gamma ray is emitted (this reaction is written ¹H(n, γ)²H). A second important interaction is between the thermal neutron and nitrogen, ¹⁴N(n, p)¹⁴C, which produces a 0.6 MeV proton. There are a number of other possible capture reactions which occur with a wide range of materials (typically called activation) but these will not be discussed here. Intermediate energy neutrons are usually in the process of slowing down from higher energies. The interaction processes include scattering, but capture and other nuclear reactions also may occur. Fast neutrons are usually the most important in terms of radiation dosimetry, especially since the concern is the deposition of energy in tissue. The primary neutron interactions considered are either elastic or inelastic collisions. An elastic collision of a neutron with the nucleus of an atom results in deflection of the incident particle and a transfer of a portion of the neutron energy to the struck nucleus. Energy losses depend on the size of the struck nucleus and the collision angle.
Sometimes, it is possible to transfer all the fast neutron energy to the struck nucleus in a head-on collision. The most important fast neutron interaction in tissue is elastic scattering with hydrogen. In these interactions, since the neutron and the hydrogen nucleus (a proton) have essentially the same mass, complete energy transfer is possible. In tissue, more than 90% of the fast neutron interactions and the energy transfer are due to elastic collisions between fast neutrons and hydrogen nuclei. As the neutron energy increases, inelastic collisions become important. These interactions occur for neutrons with energies typically above about 1 MeV and, above about 10 MeV, elastic scattering and inelastic scattering have equal probability of occurring. In tissue, the most important inelastic interactions are those with the nuclei of carbon, nitrogen, and oxygen. Most of these interactions result in the emission of gamma rays as the nuclei deexcite. However, in some cases, the deexcitation may include the emission of protons or alpha particles. These latter reactions typically take place with very high energy neutrons (i.e., above about 5 MeV). In the relativistic energy range (i.e., >10 MeV), inelastic scattering is more important than elastic scattering. For high-Z materials, the elastic probability (i.e., cross section) may be ignored entirely. But, even in this energy range, elastic collisions in tissue are still important.
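The dependence of elastic energy transfer on the size of the struck nucleus can be made concrete with a standard kinematics result not stated explicitly above: in a head-on elastic collision with a nucleus of mass number A, the maximum fraction of the neutron's kinetic energy transferred is 4A/(A + 1)². A minimal sketch:

```python
def max_elastic_transfer_fraction(mass_number: int) -> float:
    """Maximum fraction of a neutron's kinetic energy transferred in a
    head-on elastic collision with a nucleus of mass number A: 4A/(A + 1)^2."""
    a = float(mass_number)
    return 4.0 * a / (a + 1.0) ** 2

# Hydrogen (A = 1) allows complete energy transfer, as the text notes;
# the heavier nuclei in tissue take up far less energy per collision.
for symbol, a in [("H", 1), ("C", 12), ("N", 14), ("O", 16)]:
    print(symbol, round(max_elastic_transfer_fraction(a), 3))
```

The fraction is 1.0 for hydrogen but falls to roughly a quarter or less for carbon, nitrogen, and oxygen, which is why hydrogen dominates fast neutron energy deposition in tissue.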
THE BRAGG–GRAY PRINCIPLE

Many radiation detectors, if calibrated properly, can be used to measure the absorbed dose and the dose equivalent from exposure to ionizing radiation. Ionization chambers, filled with air, were one of the first detectors to be used to measure the absorbed dose in tissue. Accurate dosimetry with ionization chambers (or any gas-filled detector) has its foundation in the Bragg–Gray principle. This fundamental principle states that the energy deposited by secondary electrons per unit volume in a solid medium is equal to the product of the ionization per unit volume in a gas-filled cavity in the medium, the mean energy expended in the gas, and the ratio of the mass stopping powers of the secondary electrons in the medium and the gas. More succinctly, this simply means that the amount of ionization produced in the gas-filled cavity, in the medium, serves as a measure of the energy deposited in the surrounding medium. For the above statement to be true, four conditions must be met (1):

1. The cavity must have dimensions such that only a small fraction of the energy of the charged particle is dissipated in it. This requirement simply means that only a small fraction of the charged particles contributing to the ionization will enter the cavity with a range that is less than the dimensions of the cavity.

2. Contributions of radiation interactions in the gas filling the cavity to the total ionization in the cavity should be negligible. This requirement means that ionization in the cavity should be caused by charged particles produced in the medium, as opposed to the cavity gas. Generally, this requirement is satisfied if the first requirement is satisfied.

3. The cavity must be surrounded by an equilibrium thickness of the solid medium. This is the thickness that will result in the condition called electronic equilibrium.
Electronic equilibrium exists when the electrons that are produced by radiation interactions in the cavity, and that leave the cavity, are replaced by electrons produced by radiation interactions in the medium which enter the cavity and deposit a portion of their kinetic energy (see Fig. 1). Basically, the equilibrium thickness is equal to the range of the most energetic secondary electrons produced by interactions of the primary radiation. The principle of
Figure 1. Illustration of electronic equilibrium as applied to radiation dosimetry. The gas-filled cavity must be small in relation to the range of the electrons generated by photon interactions in the medium. Electrons that are produced in the cavity, and leave the cavity carrying away energy, must be replaced by electrons generated in the medium which enter the cavity and deposit energy.
electronic equilibrium is employed in the free air ionization chamber (see discussion below).

4. Energy deposition by ionizing radiation must be uniform throughout the solid medium immediately surrounding the gas-filled cavity.
If these requirements are met, the energy absorbed per unit mass of the medium is related to the ionization per unit mass of the gas in the cavity by:

Em = Jg × W × sm   (5)
where Jg is the number of ion pairs formed per unit mass of the gas (usually expressed in units of grams); W is the average energy required to produce an ion pair in the gas; and sm is the ratio of the mass stopping power of the medium to that of the gas in the cavity for the secondary electrons. As mentioned previously, the average energy required to produce an ion pair in most gases is about 34 eV. The factor sm can be expressed as:

sm = (Nm × Sm)/(Ng × Sg)   (6)
where Nm is the number of electrons per unit mass of the medium; Ng is the number of electrons per unit mass of the gas; Sm is the electron stopping power of the medium; and Sg is the electron stopping power of the gas. The factor sm indicates how much more frequently ionization will occur in the medium as compared to the gas in the cavity. Therefore, measurement of the factor Jg, the ionization per unit mass of the gas in the cavity, combined with knowledge of the values of sm and W, makes it possible to determine the energy deposited (i.e., to determine the absorbed dose) in the medium. If the medium of interest is tissue, then the Bragg–Gray principle allows measurement of the absorbed dose in the irradiated tissue.

THE FREE AIR IONIZATION CHAMBER

One of the first detectors used to measure the quantity exposure due to gamma and X ray sources was the ionization chamber (see IONIZATION CHAMBERS). Early ionization chambers employed metal electrodes and crude insulators, were filled with air, and were very simple in construction. The free air ionization chamber provides a good example of the simplicity of such detectors and, at the same time, illustrates the principles so necessary to measure the deposited energy. This detector is designed to measure exposure over a specific photon energy range and serves as a standard device at national standards institutes across the world. Basically, the free air ionization chamber is a parallel-plate detector with plate separation being one of the important variables (see Fig. 2). A potential difference is maintained between the high voltage electrode and the collecting electrode. The collecting electrode is surrounded by a grounded guard electrode to clearly define the electric field shape and limits. At one end of the detector is a collimator which serves to define the radiation beam (S) as precisely as possible. Between the plates of the detector, the collecting volume is defined by the collimator and the electric field lines
Figure 2. Schematic drawing of a free-air ionization chamber. The collimated photon source is located at S, the distance L is the necessary "thickness" of air to establish electronic equilibrium, and the plate separation is indicated by d. As the photon energy increases, both L and d must increase; thus, these detectors are designed for a specific range of photon energies.
between the two electrodes. The distance between the collimator and the collecting volume (L) also is variable to ensure that electronic equilibrium exists in this volume. As the energy of the photon beam is increased, the collimator must be moved farther away from the collecting volume (the range of the secondary electrons increases), and the plate separation (d) must be increased to accommodate the increase in the effective collecting volume of the chamber. These detectors have inherent limitations due to electronic equilibrium requirements, which dictate changes in the plate separation and in the distance between the collimator opening and the collecting volume. For some photon energies, these detectors, with appropriate shielding, can be quite large. For these reasons, free air ionization chambers are manufactured for use in specific photon energy ranges and their use is restricted to standards laboratories. Some corrections must be made in the use of the free air ionization chamber. These corrections include:

• Attenuation of the photons in the air between the collimator opening and the collecting volume;
• Recombination of ion pairs in the chamber;
• Changes in air density and humidity;
• Ionization produced by photons scattered from the beam; and
• Loss of ionization due to inadequate separation of the electrodes.

However, if these corrections are made, it is possible to measure radiation exposure to about ±0.5%. In dosimetry it is not always possible to establish electronic equilibrium in the manner used in the free air ionization chamber. Usually, the outer wall of the ionization chamber is constructed of solid materials which provide an equilibrium thickness for a specific photon energy range (see Fig. 3). In addition, equivalent materials are used in some applications. That is, materials which are classified as air-equivalent or tissue-equivalent are used in certain specific dosimetry applications.
This is accomplished by selecting a material which attenuates the radiation in the same manner as the equivalent material.
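The two relations above, the stopping-power ratio of Eq. (6) and the corrected free-air-chamber exposure, can be sketched in a few lines. All numeric inputs below (electron densities, stopping powers, ionization per unit mass, chamber volume, correction factors) are illustrative assumptions, not measured data; W for air is taken as the standard 33.97 eV per ion pair.

```python
# Sketch of the Bragg-Gray relation and the free-air-chamber exposure.
# All numeric values are illustrative assumptions, not measured data.

def stopping_power_ratio(Nm, Sm, Ng, Sg):
    """Eq. (6): sm = (Nm*Sm)/(Ng*Sg), the ratio of ionization in the
    medium to that in the cavity gas."""
    return (Nm * Sm) / (Ng * Sg)

def bragg_gray_dose(W, Jg, sm):
    """Absorbed dose in the medium (Gy): W [J per ion pair] times
    Jg [ion pairs per kg of gas] times the dimensionless ratio sm."""
    return W * Jg * sm

def free_air_exposure(Q, volume_cm3, corrections=()):
    """Exposure (C/kg): collected charge per unit mass of air in the
    collecting volume, times the correction factors listed in the text
    (attenuation, recombination, density/humidity, scatter, separation)."""
    rho_air_g_cm3 = 1.293e-3           # dry air at 0 deg C, 1 atm
    mass_kg = rho_air_g_cm3 * volume_cm3 / 1000.0
    X = Q / mass_kg
    for c in corrections:
        X *= c
    return X

W_air = 33.97 * 1.602e-19              # J per ion pair in air
sm = stopping_power_ratio(Nm=3.3e26, Sm=1.9, Ng=3.0e26, Sg=1.8)
dose_Gy = bragg_gray_dose(W_air, Jg=1.0e15, sm=sm)
X = free_air_exposure(Q=2.58e-8, volume_cm3=100.0,
                      corrections=(1.002, 0.998, 1.0, 0.999, 1.0))
print(dose_Gy, X / 2.58e-4)            # dose in Gy, exposure in roentgen
```

Note that the correction factors enter multiplicatively, which is how standards laboratories conventionally combine small independent corrections.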
Figure 3. The "air-wall" of an ionization chamber condensed around the collecting volume of the detector. The thickness of the wall, R, is essentially equal to the distance, L, in Fig. 2.

GENERAL CONSIDERATIONS

The basic requirement of any radiation dosimeter is that it measure the dose received (i.e., the energy deposited) with sufficient reproducibility and accuracy over the entire range of radiation energies, doses, or dose rates expected during its use. The dosimeter may be a standard device used to characterize a particular field (e.g., calibration of an x-ray machine) or it may be a monitoring device worn by a radiation worker to establish the occupational dose. The accuracy required of the dosimeter may vary depending on the intent of the measurement and the dose levels to which the dosimeter is exposed. For example, national and international guidance on personnel monitoring indicates that, for routine exposures, an accuracy of ±50% is acceptable. As the exposure level increases, the required accuracy of the dosimeter becomes more restrictive and, as doses approach the permissible exposure levels, the accuracy should be ±30%. At higher exposure levels, such as those approaching clinical significance (i.e., life-threatening), the desired accuracy becomes ±25%. However, these levels of accuracy would be totally unacceptable in a radiation therapy situation, where accuracies of a few percent are required. A clear understanding of the specific dosimeter requirements, and of the ability of the dosimeter to meet these requirements, is important in any radiation dosimetry program. Regardless of the intent of the dosimetry program, it is imperative that the performance characteristics and limitations of the dosimeters be completely understood by those responsible for the program. Quantitative measurements made with a particular type of dosimeter will depend on a number of factors, including:

• Variation of dosimeter response from the ideal;
• Reliability with which the dosimeter maintains its calibration or retains the recorded information; and
• Influence of environmental factors on dosimeter response.

Factors affecting dosimeter response include radiation quality, radiation intensity, energy dependence, angular dependence, and the presence of other radiations. Terms such as fading or leakage are used to describe the loss of information stored or recorded by the dosimeter. Environmental factors include temperature, humidity, dust, vapors, light, and other influences such as rough handling and radioactive contamination. All of these factors may influence measurements of the absorbed dose and, potentially, lead to invalid monitoring results. No dosimeter meets all the requirements for an ideal dosimeter, so the strengths and weaknesses of the system in use must be completely understood to ensure proper use and interpretation of the results obtained.

IONIZATION CHAMBERS
There are two broad categories of ionization chambers used in radiation detection: passive and active (a more detailed discussion is presented in IONIZATION CHAMBERS). Passive detectors have been applied routinely to monitoring radiation exposure of individuals engaged in activities at many types of nuclear facilities. Passive dosimeters are integrating detectors since they provide only an indication of the total exposure. No dose-rate information is given (although total dose divided by exposure time provides the average dose rate) and, if the dosimeter is exposed beyond its useful range, no useful information is provided. Active detectors are normally used for radiation surveys in the workplace and will not be discussed here. Some passive detectors require a number of steps to be taken to secure an indication of the radiation exposure and, ultimately, the absorbed dose. These dosimeters are often called indirect-reading or condenser-type dosimeters. Basically, the dosimeter is a right-circular-cylinder capacitor. The outer electrode is a right circular cylinder of conducting material and it surrounds, but is insulated from, a central electrode (usually a thin wire). The volume between the two electrodes is filled with air. The dosimeter must be prepared for use (charged) using an external circuit, exposed to radiation, and evaluated (usually in the same device used to charge the dosimeter). The measured exposure can be obtained only after the irradiation has ended and the dosimeter has been removed from the radiation field. The exposure can be evaluated from the relationship

CV = Q
(7)
where C is the electrical capacitance of the chamber; V is the change in voltage before and after the exposure; and Q is the charge collected during the exposure. If the chamber volume is known, the exposure can be calculated from the definition of exposure. Usually this is not necessary, since the charger-reader used to prepare the dosimeter for use is calibrated to read directly in units of exposure. These dosimeters typically have a useful range of 0 mR to 200 mR with a quoted accuracy of ±15% over a photon energy range of a few keV up to 3 MeV. However, dosimeters of this type have been manufactured to cover a number of exposure ranges. Although it is seldom done, a conversion factor can be applied to relate exposure to the absorbed dose. Certain types of the condenser-type dosimeters are used as secondary standard devices for calibration of the output of x-ray machines, radiation therapy sources, and radiation detector calibration sources. These detectors, often called R-chambers, are manufactured with different wall thicknesses and compositions for use in a wide range of photon fields. Usually the wall is air-equivalent and its thickness is adjusted for a particular photon energy range. For example, R-chambers are available for use in the energy range 6 keV to 35 keV and also for photons in the range 0.25 MeV to 1.4 MeV. The exposure range is normally controlled by selecting the chamber volume. Detectors are available for the measurement of exposure from 0.001 R up to 1000 R. These detectors also are available as active detectors coupled to a high-voltage supply and electrometer for immediate readout of the exposure rate. Other passive dosimeters (called direct-reading or self-reading dosimeters) incorporate a lens system into the dosimeter which allows the exposure to be evaluated visually by the wearer without the use of an external circuit. These detectors have a sealed collecting volume, filled with air, and a single electrode made in two pieces. One piece is relatively rugged and stationary, while the other is a thin, conducting fiber (e.g., a quartz fiber) which is movable in relation to the stationary electrode. In this dosimeter, the electrodes are charged to the same polarity and, thus, the stationary electrode and the fiber repel each other. The dosimeter is prepared for use (charged) by adjusting the voltage so that the movable electrode casts a shadow at the zero position on an internal scale. When the dosimeter is exposed to radiation, ion pairs produced in the sensitive volume reduce the charge and the fiber moves closer to the stationary electrode. The wearer can view the change in position of this shadow through a magnifying lens system incorporated into the dosimeter, while holding the dosimeter up to any light source. This movement is directly related to the exposure received. The direct-reading dosimeters typically are very rugged and are preferred over the indirect-reading devices. At one time, these detectors were used extensively in nuclear power facilities to provide day-to-day monitoring of the work force. These dosimeters have been manufactured for use in similar photon energy and exposure ranges, and with the same accuracy, as quoted previously. Some special dosimeters have been manufactured for low-energy photons and for detecting thermal neutron radiation.
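Eq. (7) and the self-reading fiber scale can be sketched as follows; the capacitance, voltage drop, chamber volume, and full-scale value are illustrative assumptions, not data from any particular instrument.

```python
# Condenser-dosimeter readout sketch: Eq. (7), C*V = Q, relates the
# chamber capacitance and the voltage drop over the exposure period to
# the collected charge; dividing by the mass of air in the chamber and
# applying the definition of exposure gives the reading in roentgen.
RHO_AIR_KG_CM3 = 1.293e-6     # density of air, kg/cm3
CKG_PER_R = 2.58e-4           # 1 R = 2.58e-4 C/kg

def condenser_exposure_R(C_farads, delta_V, chamber_volume_cm3):
    """Exposure in roentgen from the voltage drop of a charged chamber."""
    Q = C_farads * delta_V                          # Eq. (7)
    mass_air = RHO_AIR_KG_CM3 * chamber_volume_cm3  # kg of air
    return (Q / mass_air) / CKG_PER_R

def fiber_scale_position(exposure_mR, full_scale_mR=200.0):
    """Self-reading dosimeter: the fiber shadow moves linearly with
    exposure; return its fractional position on the internal scale."""
    return min(exposure_mR / full_scale_mR, 1.0)

X_R = condenser_exposure_R(C_farads=5e-12, delta_V=10.0,
                           chamber_volume_cm3=1.8)
print(X_R * 1000.0)                        # about 83 mR
print(fiber_scale_position(X_R * 1000.0))  # fraction of full scale
```

The linear fiber-position mapping is the simplest plausible model of the electroscope scale; real instruments are calibrated against known fields rather than computed from first principles.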
However, these are special applications of the general technology and such uses are not widespread. Currently, these condenser-type dosimeters are not used for routine monitoring of personnel radiation exposure in the workplace; they have been replaced by thermoluminescence dosimeters and, more recently, by sophisticated electronic dosimeters.
THERMOLUMINESCENCE DOSIMETRY

Thermoluminescence dosimetry (TLD) is a common and popular method applied to the measurement of personnel radiation exposure. These dosimeters are used widely in nuclear facilities in the United States and, in many cases, are used to establish the "dose of record" to satisfy regulatory requirements. TLDs have many of the characteristics of the ideal dosimeter, but there also are certain characteristics which influence the response of the dosimeter and significantly affect the results obtained. These characteristics must be completely understood for TLDs to be used effectively as radiation dosimeters (9). Some authors state that thermoluminescence (TL) has been observed for centuries, whenever certain limestones and fluorites were heated. Certainly some of the early research dates back to that of Sir Robert Boyle and others in the mid-1600s. However, it was not until the early 1950s that Daniels proposed the use of TL as a radiation dosimeter. The relation between x-ray exposure and TL had been reported in 1904 but, interestingly, no one proposed the logical application of TL to the measurement of radiation dose. In TLD the absorbed dose is determined by observing the light emitted by the previously exposed TLD phosphor (or crystal) as it is heated in a controlled manner. The light emitted is directly proportional to the radiation energy deposited in the phosphor and, thus, to the absorbed dose. However, no TLD phosphor is an absolute dosimeter; the phosphor and the evaluation system must be carefully calibrated to establish the relationship between the light emitted and the absorbed dose. Careful calibration is required of each TL material, the badge or holding device in which the TLDs are placed, and the particular system in which the material is to be evaluated. Even though the material may be the same, each batch of TL material may have slightly different characteristics which affect its response to radiation. In addition, there are a number of possible approaches to evaluating the TLDs, and each evaluation system has very specific characteristics. Calibration of a TLD system is a major task which must be completed before the system can be put into routine use for radiation dosimetry. Some authors suggest that the physical and chemical theories of TL are poorly understood. However, the basic phenomenon can be understood qualitatively using an explanation founded in solid-state physics. This approach is perhaps too simple when one considers the complexity of TL emission, but it will serve to illustrate the fundamental processes. Consider a hypothetical energy-level diagram of an insulating material, in which the valence band is assumed to be filled and the conduction band is assumed to be empty (see Fig. 4). If this insulating phosphor is exposed to ionizing radiation, interactions of the radiation in the phosphor will free electrons from their respective atoms (ionization). These electrons are raised from the valence band into the conduction band. The loss of electrons from the valence band creates holes (i.e., positively charged atoms or sites). The electrons and holes may migrate through the phosphor until they recombine or are trapped in metastable states. These metastable states may be associated with defects in the crystal structure or, to facilitate the trapping, impurity materials may be added intentionally to TLD phosphors. These materials typically occupy interstitial positions in the lattice and, in our energy
Figure 4. Schematic energy-level diagram of an insulating crystal exhibiting thermoluminescence: (a) exposure to ionizing radiation raises electrons into the conduction band, where they fall into electron traps (and holes into hole traps); (b) heating when the electron trap is less stable, with the hole trap as the emitting center of the TL photon; (c) heating when the hole trap is less stable, with the electron trap as the emitting center. The "traps" are located in the forbidden zone and retain the trapped electrons or "holes" until thermally stimulated.
diagram, occupy energy levels in the forbidden zone between the conduction and valence bands. These traps prevent the electrons from returning to the valence band and, in effect, the energy which raised the freed electrons is stored in the phosphor. If the phosphor can be stimulated so that the energy is released, and that energy can be measured, the phosphor can be used as a radiation dosimeter. Usually, the stimulation is through heating the phosphor in a controlled manner (thermo) and the stored energy is released in the form of visible light (luminescence). The measured emitted light is directly proportional to the energy deposited in the phosphor and, thus, the radiation absorbed dose. Actually, the stored energy may be released through two possible mechanisms. First, as the phosphor is heated, the trapped electrons may receive sufficient energy to release them from their traps, raising them back into the conduction band. These electrons may return to the valence band, recombine with a hole, and release the stored energy in the form of a luminescence photon. The light photon released has an energy proportional to the difference between the excited and stable electron energy levels. Second, the hole trap may be more unstable and heating of the phosphor provides sufficient energy for the hole to wander through the crystal until it can combine with a trapped electron. Again, when the hole and the electron combine, a luminescence photon is released. Often, since the two processes are similar, only the first possibility is mentioned in simplified discussions of TL theory. The temperature required to free the electrons and cause the emission of light is related to the energy gap between the valence and conduction bands. When exposed to radiation, the deposited energy produces many trapped electrons and holes. As the temperature of the phosphor is increased in a controlled manner, the probability of releasing electrons is increased. 
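The thermally stimulated release just described is commonly modeled with first-order (Randall–Wilkins) kinetics, in which the escape probability per unit time is p = s·exp(−E/kT). This model and the parameter values below (trap depth, frequency factor, heating rate) are standard textbook illustrations, not values given in this article.

```python
import math

# First-order (Randall-Wilkins) model of a single glow peak: the escape
# probability per second is p = s*exp(-E/kT); with a constant heating
# rate beta the trapped population empties as the temperature rises and
# the emitted light passes through a maximum. Illustrative parameters.
k = 8.617e-5          # Boltzmann constant, eV/K
E = 1.0               # trap depth, eV
s = 1.0e12            # frequency factor, 1/s
beta = 5.0            # heating rate, K/s

def glow_curve(T0=300.0, T1=600.0, dT=0.5, n0=1.0):
    """Return a list of (temperature, intensity) points (Euler steps)."""
    n = n0
    out = []
    T = T0
    while T < T1:
        p = s * math.exp(-E / (k * T))      # escape probability per second
        intensity = n * p                    # light output ~ detrapping rate
        n = max(n - intensity * (dT / beta), 0.0)  # time step dt = dT/beta
        out.append((T, intensity))
        T += dT
    return out

curve = glow_curve()
T_peak = max(curve, key=lambda x: x[1])[0]       # location of the glow peak
area = sum(i for _, i in curve) * (0.5 / beta)   # integrated light ~ dose
print(T_peak, area)
```

Integrating the curve (or reading the peak height) is exactly how a dosimetry peak is evaluated in practice, after calibration against a known dose.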
Finally, a temperature is reached at which all the electrons have been released. Thus, the emitted light from the heated phosphor will be weak at low temperatures, pass through one or more maxima as the temperature increases and, finally, decrease again to zero. A plot of the emitted light versus the heating temperature is called a glow curve. More recently, as heating cycles have become more controllable and reproducible, a glow curve may instead be a plot of the emitted light as a function of time (with a constant, known heating rate). Typical glow curves show one or more maxima (called glow peaks) as traps at several energy levels are emptied. The relative amplitudes of these peaks indicate approximately the relative populations of electrons in the various traps. For radiation dosimetry, either the total light emitted during a selected part or all of the heating cycle, or the height of one or more of the glow peaks, may be used to indicate the absorbed dose. Often, a single peak (called the dosimetry peak) is selected for use in the evaluation. Usually, the dosimetry peak is one of the more stable peaks. However, in some phosphors, only a single glow peak is present (e.g., Al2O3:C and CaSO4:Mn). Again, proper calibration is required of both the phosphor material and the device used to evaluate the TLD. A large number of materials exhibit TL and many have been studied as potential radiation dosimeters. Several of the most popular TLD materials include CaSO4:Mn, CaSO4:Dy, CaF2:Mn, CaF2:Dy, LiF:Mg,Ti,
Li2B4O7:Mn, and Al2O3:C. Of these TLD materials, LiF has been the most widely used. This phosphor will be discussed in some detail to illustrate the considerations important to dosimetry. Commercially available LiF (commonly called TLD-100) has been studied extensively because of its excellent characteristics for use as a radiation dosimeter. For photon radiation, these include:

• A relatively constant energy response per unit exposure over a wide range of photon energies. At low energies, LiF exhibits an over-response of about 25% in the 30 keV to 40 keV energy range. In other TLD materials, this over-response can be as high as a factor of 10 to 15. In addition, in special cases, the energy over-response can be reduced by using a simple energy-compensating shield.
• Even though LiF has a density of 2.64 g cm⁻³, the effective atomic number of LiF is about 8.2, which makes this TLD material nearly tissue-equivalent (Z = 7.4 to 7.6). Other TLD materials have effective atomic numbers in the range of 12 to 15.
• The main dosimetry peak (≅190°C) is extremely stable and shows little loss of information (fading) when stored at room temperature. Fading is estimated to be only about 5% per year. LiF actually exhibits up to six glow peaks of various magnitudes, but pretreatment annealing (i.e., heating) can reduce the influence of the less stable peaks on the measured results.
• The phosphor is useful over a wide range of exposures, typically from a few tens of mR up to hundreds or thousands of R. The actual range will depend on a number of factors which must be determined during calibration.

LiF has been used in a number of dosimetry and monitoring applications, from measurements in high-dose cancer therapy situations to many personnel monitoring applications. LiF also has applications in neutron dosimetry because the phosphor is available in three separate formulations.
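The effective atomic number of about 8.2 quoted above can be reproduced with a common power-law approximation (a textbook rule, not a formula from this article): Z_eff = (Σ aᵢZᵢᵐ)^(1/m) with m ≈ 2.94, where aᵢ is the fraction of electrons contributed by element i.

```python
def z_eff(elements, m=2.94):
    """Effective atomic number from (Z, atoms per formula unit) pairs,
    using the power-law rule Z_eff = (sum a_i * Z_i**m)**(1/m), where
    a_i is the electron fraction contributed by element i."""
    total_electrons = sum(Z * n for Z, n in elements)
    s = sum((Z * n / total_electrons) * Z ** m for Z, n in elements)
    return s ** (1.0 / m)

print(z_eff([(3, 1), (9, 1)]))   # LiF: Li (Z=3) + F (Z=9) -> about 8.2
print(z_eff([(1, 2), (8, 1)]))   # water, for comparison -> about 7.4
```

The water result falling in the 7.4 to 7.6 range quoted for tissue is what makes LiF "nearly tissue-equivalent."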
LiF (TLD-100) has the natural isotopic mix of the two lithium isotopes, 6Li (7.4%) and 7Li (92.6%). However, two other phosphors are available, called TLD-600 and TLD-700. TLD-600 is highly enriched in the isotope 6Li (95.6%), which has a very high thermal neutron cross section (about 945 barns) for the (n,α) reaction. In contrast, TLD-700 is made essentially of pure 7Li (99.96%). This isotope of lithium has essentially no sensitivity to thermal neutrons (the cross section is 0.033 barns). In this application, two different LiF phosphors are employed (either TLD-100 with TLD-700 or TLD-600 with TLD-700). The latter combination is preferred and will be used in the following discussion. Since TLD-600 has a sensitivity to thermal neutrons as well as photons, the output signal (the glow curve) from this phosphor represents the contributions to dose from both radiations. Since TLD-700 has essentially no sensitivity to neutrons, the output signal from this detector represents only the dose due to photons. Subtraction of the photon dose measured by TLD-700 from the neutron-plus-photon dose measured by TLD-600 provides an estimate of the thermal neutron component of the mixed radiation field. As with most radiation dosimetry systems, these detectors require proper calibration before use. This technique is not limited to measurement of the thermal neutron dose. The dose due to fast neutrons can be measured using the same approach. However, here the TLDs must be covered with a material that absorbs (i.e., captures) thermal neutrons incident on the dosimeter, which would otherwise alter the TLD response. One popular detector system is called the albedo dosimeter. This dosimeter takes advantage of the fact that fast neutrons incident on the human body may be moderated (i.e., slowed down) by the tissue and reflected out of the body at lower energies, back toward the incident direction. The albedo dosimeter is designed to measure the thermal neutrons that escape the body when it is irradiated with fast neutrons and, through calibration, provides an estimate of the fast neutron dose. Many different designs of albedo dosimeter, incorporating any number of TLDs and a variety of materials, have been reported in the literature.
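The TLD-600/TLD-700 subtraction described above amounts to a two-line calculation; the calibration factors and signal values here are illustrative placeholders.

```python
# TLD-600/TLD-700 pairing for mixed photon/thermal-neutron fields:
# TLD-700 responds to photons only; TLD-600 responds to photons plus
# thermal neutrons. Subtracting the two signals isolates the thermal
# neutron component. Calibration factors are illustrative placeholders.
def neutron_photon_split(signal_600, signal_700,
                         k_gamma=1.0, k_neutron=1.0):
    """Return (photon_dose, thermal_neutron_dose) from the glow-curve
    signals of a TLD-600/TLD-700 pair (arbitrary calibrated units)."""
    photon_dose = signal_700 * k_gamma
    neutron_dose = (signal_600 - signal_700) * k_neutron
    return photon_dose, max(neutron_dose, 0.0)  # clamp small negatives

print(neutron_photon_split(signal_600=12.5, signal_700=4.0))
```

Clamping the difference at zero reflects the practical point made in the text: the pair only estimates the neutron component, and statistical noise can drive the raw difference slightly negative in photon-only fields.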
ELECTRONIC DOSIMETERS Currently, electronic dosimeters are used widely in radiation safety/personnel monitoring applications around the world. Originally, the idea of a small electronic device, which could be worn comfortably by a worker, was restricted to detectors which simply issued an alarm (or "chirped") to warn the wearer of a potentially unknown radiation field. Even though some attempted to quantitatively relate the "chirp rate" to the dose rate, the primary use of the device was simply to provide a warning. Obviously, this restricted use was a direct consequence of the inability to build small radiation dosimetry systems. With the advent of microelectronics, electronic dosimeters now have replaced the direct-reading dosimeters which were used extensively in many nuclear facilities. In addition, the rapid development and incorporation of a number of attractive features into these dosimeters have caused many to consider replacing TLDs with these devices. Should this occur, the electronic dosimeter will become the dosimeter of record in terms of satisfying the regulatory requirements for personnel radiation monitoring. This rapidly growing popularity is due to a number of features which allow better control of radiation exposure in the workplace and facilitate record-keeping. The attraction of electronic dosimeters is that they usually are coupled to a computer, through a reading device, which allows the setting of alarm points before use and the interrogation of the device after use. The data obtained from the dosimeter can be routed directly to the computer records system and, ultimately, to the exposure file of the wearer. These devices will record, and display for the wearer, the accumulated dose or the dose rate to which the worker has been exposed. In some cases, several alarm set-points are available for use. Typically, electronic dosimeters are designed to detect photons, but dosimeters for beta and neutron radiation are being developed.
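The accumulate-and-alarm behavior described above can be sketched as a small bookkeeping loop; the calibration factor and alarm set-points are illustrative assumptions, not values from any real instrument.

```python
# Minimal bookkeeping loop of an electronic dosimeter: accumulate dose
# from detector counts, derive a dose rate, and compare both against
# alarm set-points. Calibration factor and set-points are illustrative.
class ElectronicDosimeter:
    def __init__(self, mR_per_count=0.001,
                 dose_alarm_mR=100.0, rate_alarm_mR_per_h=500.0):
        self.cal = mR_per_count
        self.dose_alarm = dose_alarm_mR
        self.rate_alarm = rate_alarm_mR_per_h
        self.total_mR = 0.0

    def record(self, counts, interval_s):
        """Process one readout interval; return (total, rate, alarms)."""
        increment = counts * self.cal
        self.total_mR += increment
        rate = increment * 3600.0 / interval_s      # mR/h
        alarms = []
        if self.total_mR >= self.dose_alarm:
            alarms.append("accumulated-dose alarm")
        if rate >= self.rate_alarm:
            alarms.append("dose-rate alarm")
        return self.total_mR, rate, alarms

d = ElectronicDosimeter()
print(d.record(counts=100, interval_s=60))      # quiet field, no alarms
print(d.record(counts=100000, interval_s=10))   # high field: both alarms
```

Real units add features the text mentions (multiple set-points, telemetry, readout to a central records system), but all of them build on this accumulate-compare-report cycle.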
Several different radiation detectors have been used in these dosimeters. The simplest are small (about 1 cm³ volume), halogen-quenched, energy-compensated Geiger–Mueller tubes. These detectors operate at about 550 V. Other dosimeters use PIN diodes or silicon semiconductor detectors, which require significantly lower voltages (about 1 V to 4 V). Typical energy response for these dosimeters is ±30% over a photon energy range of 60 keV to 1.2 MeV, and the accumulated exposure range is typically 0 mR to 9,999 mR. Dosimeters are exposure-rate limited, although at a very high rate, in the range of 10 R/h to 1000 R/h, depending on the detector selected for use. The entire unit is quite small; the weight of typical units ranges from 100 g to 200 g. Battery life has been reported to range from 100 h to about 6 months of use. Some dosimeters offer the capability of using telemetry so that a single radiation protection technician can monitor the activities of several workers in different locations in the facility. However, since this field is developing rapidly, the quality of these detectors can be quite variable and many anomalies have been reported in the literature. These include wide variations in the photon energy response of detectors which are nominally identical, poor quality control of some of the software, loss of stored information, audible alarm failures, and radio-frequency interference, to list only a few reported problems (10).

GLOSSARY

Absorbed dose. The amount of energy deposited by ionizing radiation in a material per unit mass of the material. Usually expressed in the SI unit, the gray; 1 Gy = 1 J/kg. The traditional unit for absorbed dose is the rad; 1 rad = 0.01 J/kg; therefore, 1 Gy = 100 rad.

Cross section (σ). A quantitative measure of the probability of a given nuclear reaction. The concept of a nuclear cross section can best be visualized as the cross-sectional area, or "target area," presented by a nucleus to the incident particle. The unit associated with the cross section is the barn, where 1 b = 10⁻²⁴ cm².

Directly ionizing radiation. Radiation composed of charged particles that interact directly with the electrons in the medium through coulombic interactions. These radiations include alpha particles, beta particles, positrons, electrons, and protons.

Dose equivalent. The product of the absorbed dose and the quality factor. This quantity is used to express the effects of the absorbed dose from the many types of ionizing radiation on a common scale. The SI unit for dose equivalent is the sievert; 1 Sv = 1 J/kg. The traditional unit is the rem; 1 Sv = 100 rem. More recently, the International Commission on Radiological Protection has renamed this quantity the equivalent dose and has redefined it as the product of the absorbed dose and the radiation weighting factor, wR.

Dosimeter. A device that may be worn or carried by an individual into a radiation field for the purpose of measuring the absorbed dose or the dose equivalent. The dosimeter may be either active or passive. In some cases, the dosimeter may measure the total dose over the exposure period and, in others, the dosimeter may provide dose-rate information.

Effective dose equivalent. The sum of the products of the dose equivalents in the organs and tissues of the body, HT, and the corresponding tissue weighting factors, wT.

Exposure. An early quantity defined as the total charge produced in air by photons interacting in a volume of air of known mass. The currently accepted SI unit is C/kg; the traditional unit was originally the roentgen, 1 R = 2.58 × 10⁻⁴ C/kg. This term also is used, in general, to indicate any situation in which an individual is in a radiation field.

Fluence. The number of particles incident on a sphere of a specified cross-sectional area. The unit associated with this quantity is m⁻².

Individual dose equivalent, penetrating, HP(d). The dose equivalent in soft tissue below a specified point on the body at a depth, d, that is appropriate for strongly penetrating radiation. The recommended depth for monitoring in terms of HP(d) is 10 mm, and HP(d) may then be written as HP(10).

Individual dose equivalent, superficial, Hs(d). The dose equivalent in soft tissue below a specified point on the body at a depth, d, that is appropriate for weakly penetrating radiation. The recommended depth for monitoring in terms of Hs(d) is 0.07 mm, and Hs(d) may then be written as Hs(0.07).

Indirectly ionizing radiation. Radiation composed of uncharged particles that must interact with the material to produce charged particles in order to transfer energy to the medium. These radiations include, among others, x rays, gamma rays, and neutrons.

Ionization. In this context, the process of removing (or adding) one or more electrons from (or to) an atom or molecule through interactions of radiation with the medium.

Ion pair. The resulting positively charged atom and free electron produced by the interaction of ionizing radiation with the medium.

Isotope. One of two or more atoms of an element with the same number of protons in the nucleus but with different numbers of neutrons; C-12, C-13, and C-14, for example, are isotopes of the element carbon. A radioisotope is an unstable isotope of an element that undergoes a spontaneous nuclear transformation, by the emission of nuclear particles or electromagnetic radiation, to reach a more stable nuclear state. This transformation usually produces an atom of a different element. A radioisotope is often called a radionuclide, although radionuclides are not necessarily isotopes.

Kerma. The sum of the initial kinetic energies of all charged ionizing particles liberated by uncharged ionizing particles in a specific mass of material. The units for absorbed dose may be used for this quantity (i.e., gray or rad). Absorbed dose and kerma may be equal, depending on the degree of charged-particle equilibrium and bremsstrahlung production.

Linear energy transfer. The mean energy lost by a particle, due to collisions with electrons, in traversing a specified distance. The unit for this quantity is J m⁻¹.

Mass stopping power. The rate of energy loss (i.e., dE/dx) by charged-particle radiation traversing the medium, divided by the density of the medium.

Neutron. A fundamental constituent of the nucleus of an atom. Neutrons can be produced in a number of ways and represent a significant source of indirectly ionizing radiation. Neutrons are classified according to their energy. A thermal neutron is at thermal equilibrium with its environment and, in special cases, has a Maxwellian distribution of velocities. In this distribution, the most probable velocity at 295 K is 2200 m s⁻¹, corresponding to an energy of 0.025 eV. Intermediate neutrons have energies in the range of 0.5 eV to 10 keV; these neutrons are also called resonance or epithermal neutrons. Fast neutrons have energies in the range 10 keV to 10 MeV; neutrons in this energy range interact with material primarily through elastic collisions (i.e., billiard-ball collisions). Relativistic (or high-energy) neutrons have energies in excess of 10 MeV; these neutrons interact with material primarily through inelastic collisions.

Quality factor. A factor used to weight the absorbed dose for the biological effectiveness of the radiation producing the absorbed dose. Currently accepted quality factors are: beta particles, positrons, electrons, x rays, gamma rays, and bremsstrahlung, Q = 1; protons and fast neutrons, Q = 10; alpha particles and recoil nuclei, Q = 20. Note that both national and international bodies have recommended new factors (now called radiation weighting factors), but these have not been adopted for use in the federal regulations of the United States.

Radiation. In this context, this term is used to designate ionizing radiation. Ionizing radiation has sufficient energy to cause ionization of the atoms and molecules with which it interacts.

Radiation weighting factor (wR). A factor used to weight the absorbed dose for radiation effectiveness in a similar fashion to the earlier use of quality factors.

Response. The ratio of the indicated reading of a radiation detection instrument to the actual value of the quantity being measured. Usually, the calibration factor is the reciprocal of the response.

Specific ionization. The number of ion pairs produced by charged particles per unit length of travel in the medium, usually expressed in units of ion pairs per centimeter.

Stopping power. The rate of energy loss (i.e., dE/dx) by charged-particle radiation traversing the medium.

Tissue weighting factor (wT). A factor which expresses the fraction of the total stochastic risk associated with the irradiation of a particular tissue. The risk is based on the probability of producing a fatal cancer, a nonfatal cancer, hereditary effects, and shortening of lifespan in the exposed individuals.
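Several numeric relations quoted in this glossary can be cross-checked directly with standard physical constants; this is an illustrative consistency check, not part of the original article.

```python
# Cross-checks of values quoted in the glossary, using standard
# (rounded) physical constants.
m_n = 1.675e-27       # neutron mass, kg
k_B = 1.381e-23       # Boltzmann constant, J/K
eV = 1.602e-19        # joules per electronvolt

# Thermal neutron: the most probable velocity of 2200 m/s at 295 K
# should correspond to about 0.025 eV, and should match kT at 295 K.
E_kinetic_eV = 0.5 * m_n * 2200.0**2 / eV
E_thermal_eV = k_B * 295.0 / eV

# Unit relations: 1 Gy = 100 rad and 1 R = 2.58e-4 C/kg.
def gray_to_rad(gy):
    return gy * 100.0

def roentgen_to_C_per_kg(R):
    return R * 2.58e-4

print(E_kinetic_eV, E_thermal_eV)               # both about 0.025 eV
print(gray_to_rad(0.01), roentgen_to_C_per_kg(1.0))
```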
JOHN W. POSTON, SR.
Texas A&M University
Wiley Encyclopedia of Electrical and Electronics Engineering
Fission Chambers, Standard Article
James F. Miller, GAMMA-METRICS, San Diego, CA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W5203. Online posting date: December 27, 1999.
Abstract. The sections in this article are: Construction; Electrode Coating; Fill Gas; Electrode Spacing; Electrode Geometry.
FISSION CHAMBERS
Since neutrons are uncharged particles, they must be detected by an indirect method. Slow (low-energy) neutrons may be converted into ionizing reaction products on being captured by an isotope with a high probability of interaction with the neutron, such as uranium-235. Absorption of the neutron causes the uranium nucleus to fission; the resulting energy release is approximately 200 MeV per fission, according to Knoll (1). This energy release causes ionization in a gas, which is then detected using normal pulse detection methods. Because of the substantial energy released, a neutron-induced fission reaction can be expected to produce a much greater ionization signal than other reactions, such as gamma radiation or alpha disintegration of the uranium. This very high signal-to-background ratio allows fission chambers to be used where the background signal is extremely low, enabling operation at very low counting rates.

According to Rossi and Staub (2), fission chambers are typically constructed as cylindrical ionization chambers with the inside surfaces of the chamber coated with a fissile material. For measurement of slow-neutron flux, the usual coating is uranium-235. The main purposes for which fission chambers are used are:

1. Measurement of the rate of fissions in a given neutron flux
2. Relative measurements of neutron flux from sources with identical neutron energy spectra
3. Investigation of the energy distribution of the fission fragments

For experimental purposes, all three types of measurements are important. For practical commercial use, fission chambers serve the first purpose (measurement of the rate of fissions in a given neutron flux) as part of the instrumentation for measuring the power level of a nuclear reactor: the neutron flux leaking out of the reactor core is proportional to the power level of the reactor.
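The size of the fission signal relative to the alpha background can be estimated with a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not values from this article: a fragment depositing roughly 160 MeV in the gas, a typical 5 MeV alpha particle, and about 26 eV expended per ion pair in argon.

```python
# Rough charge liberated in the fill gas per event (assumed, illustrative values).
W_ARGON_EV = 26.0      # assumed mean energy per ion pair in argon, eV
E_FRAGMENT_EV = 160e6  # assumed fission-fragment energy deposited in the gas, eV
E_ALPHA_EV = 5e6       # typical alpha-particle energy, eV
Q_E = 1.602e-19        # elementary charge, C

def charge_coulombs(deposited_ev):
    """Charge of one sign created by a particle depositing `deposited_ev` in the gas."""
    return (deposited_ev / W_ARGON_EV) * Q_E

q_fission = charge_coulombs(E_FRAGMENT_EV)  # about 1e-12 C
q_alpha = charge_coulombs(E_ALPHA_EV)       # about 3e-14 C, tens of times smaller
```

Only part of a fragment's energy escapes the coating into the gas in a real chamber, so practical per-pulse charges are smaller, but this factor-of-tens separation between fission pulses and alpha or gamma pulses is what makes clean discrimination possible.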
In nuclear reactor applications, fission chambers can be used in several modes of operation. For source-range reactor power measurements, fission chambers are used in the counting (pulse) mode of operation: the counting rate from the fission chamber is proportional to the magnitude of the slow-neutron flux, from low pulse counting rates up to the counting frequency at which individual pulses can no longer be differentiated. The fission chamber can also be used in a dc current mode of operation for measurement of full power in a nuclear reactor. The output current from a fission chamber is proportional to the total ionization produced by fission of the coating, plus a background current associated with gamma radiation and alpha disintegration of the uranium. The useful range of measurement in the current mode of operation is a linear measurement from 0 to 100% power. Fission chambers can also be operated in an ac signal mode, or mean-square-voltage mode, of operation: the mean square of the ac statistical fluctuation of the linear current signal is proportional to the slow-neutron flux. This signal is useful for measurements over a fairly wide range of operation, approximately 5 to 6 orders of magnitude, from 10⁻³% power to 100% power.

CONSTRUCTION

Baer and Swift (3) explain that typical fission chambers (see Fig. 1) are constructed as compact cylinders so that they can be mounted in close proximity to the neutron flux to be measured. For reactor measurements, these can range from 2.5 cm diameter by 15 cm length up to 10 cm diameter by 115 cm length. The construction is of metal cylinders, preferably with a low capture cross section for neutrons and with little activation of the materials. Usually, the construction is of concentric aluminum cylinders, with uranium coated on the inside surface of the cylinder. The outer cylinder is connected as one electrode and the inner cylinder as the second, the two separated by ceramic insulators. The spacing between electrodes is approximately 0.25 cm to 0.50 cm. The space is filled with a gas, usually argon mixed with nitrogen, at a pressure of 50 kPa to 300 kPa (0.5 atm to 3.0 atm).

ELECTRODE COATING

As explained by Graves and Froman (4), several methods have been developed for providing a uniform coating of fissionable uranium on the surface of the electrode. At least two methods are used commercially by manufacturers of fission chambers.
One method, which has provided good uniform results, utilizes a solution of uranyl nitrate suspended in an alcohol-based lacquer. The lacquer is applied in thin coats and then dried and fired in a furnace to drive off the lacquer and fix the uranium to the surface of the metal in the form of uranium oxide. This method produces a very uniform coating, and by applying several coats of lacquer solution, the coating thickness of uranium can be made up to 2 mg/cm². The disadvantages to this method include the labor required in applying the numerous layers of lacquer and the care that
Figure 1. Unguarded fission chamber. (The labeled parts are the insulator, the 90% argon-10% nitrogen fill gas, the uranium oxide coating, the high-voltage (HV) connection, and the outer and inner cylinders.)
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
must be taken to ensure that all lacquer residue has been removed in firing in the furnace. The second method that is used commercially is electroplating from a solution of uranyl nitrate and ammonium oxalate in water. The electrodeposition of uranium is accomplished through the use of a platinum electrode, a plating voltage of 8 V to 10 V, and a current density of about 120 mA/cm². The electroplating is carried out to completion; thus the quantity of uranium in solution becomes the quantity of uranium plated. Since the coated area is fixed, this provides good control over coating thickness.

FILL GAS

Several gas mixtures have been used in the design of fission chambers. Argon gas at atmospheric pressure, and mixtures of argon with 1 to 10% nitrogen, have been used with commercial success in fission chambers for certain magnitudes of the neutron flux to be measured. According to Colli and Facchini (5), the ionization potential of the gas and the drift velocity of electrons in the gas have greatly favored the use of argon for the fill gas in fission chambers. The drift velocity, however, is greatly influenced by the presence of small amounts of impurities in the gas (6). If aluminum is used for the construction of the electrodes in the fission chamber, the surface of the aluminum and the surface of the uranium coating trap oxygen. Under operation at temperatures above 90°C (about 200°F), the oxygen mixes with the argon fill gas, causing changes in the electron drift velocity and greatly affecting the output pulses from the fission chamber. The use of 1% to 3% nitrogen stabilizes the pulse output by reducing the effect of free oxygen within the argon fill gas. Using this combination, fission chambers have been used at temperatures up to 200°C (about 400°F). At higher temperatures, other materials such as titanium, nickel, or other high-temperature metals must be used in the construction of the fission chamber.
In fission chambers used in-core in research reactors or in power reactor applications, where neutron flux levels nv reach 10¹⁴ cm⁻² s⁻¹, pure argon must be used as the fill gas because of the tendency of nitrogen to dissociate under these high radiation fields.

ELECTRODE SPACING

Numerous experiments have been performed to test the effects of electrode spacing on fission chamber performance. One study is examined by Aves et al. (7); the testing involves measuring the pulse height versus count rate for various test conditions. The effects of electrode spacing, fill gas pressure, and coating thickness on output sensitivity are then measured. The effects on output pulse height of varying the coating thickness from 0.1 mg/cm² to 3.0 mg/cm², the electrode spacing from 1.25 mm to 7.6 mm, and the fill gas pressure from 750 mm Hg to 2400 mm Hg have been tested. At a fill gas pressure of 760 mm Hg, the output pulse height increases with electrode spacing up to a spacing of approximately 4.0 mm. Above this spacing, the pulse amplitude does not increase significantly, indicating that most of the ionization of the gas by fission fragments occurs within the first 4 mm of gas space. By then varying the coating thickness, a set of output pulse amplitudes versus coating thickness was obtained, which indicates that the counting sensitivity increases directly with coating
thickness up to a value ranging from 1.5 mg/cm² to 2.0 mg/cm². Above this thickness, the output count rate increases significantly more slowly with coating thickness, indicating that for thick coatings most of the energy of the fission fragment is lost before the fragment exits the uranium coating, while for thin coatings the fragment exits with most of its energy remaining to be given up within the fill gas.

ELECTRODE GEOMETRY

For fission chambers used for experimental purposes, the operation of the chamber is of primary importance (8). For this purpose, flat electrodes provide the best theoretical performance, by providing a uniform electric field between them; in that case, output pulses due to fission events can be measured with good correlation. For instrumentation applications in a nuclear reactor, however, the fission chamber must be designed to fit within as small a volume as practicable, necessitating the use of alternative construction geometries. For such purposes, the fission chambers generally are cylindrical. The use of cylindrical electrodes produces a varying voltage gradient between the electrodes; however, for electrode spacings less than 20% of the chamber diameter, the voltage gradient across the chamber is sufficiently uniform that there is no significant degradation in performance.

Associated Instrumentation

Fission chambers operate as pulse ionization chambers (see Fig. 2); that is, they detect the ionization in the gas caused by fission of the uranium coating on the electrode of the fission chamber. As explained by Pare (9), when the chamber is operated with a high potential between electrodes, the ionization of the gas produces a pulse of current. According to Taboas and Buck (10), this pulse of current can then be amplified in a high-gain pulse amplifier to produce an output voltage pulse corresponding to each fission event within the fission chamber.
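The pulse-height discrimination applied to these amplified pulses (described in the next paragraph) can be sketched in a few lines. This is an illustrative sketch only: the 0.5 V threshold and the pulse amplitudes are invented for the example and are not values from this article.

```python
# Count only pulses above the discriminator threshold, rejecting amplifier
# noise and the small pulses from alpha and gamma ionization of the gas.
def count_fission_pulses(pulse_heights_v, threshold_v=0.5):
    """Return how many pulses exceed the discriminator threshold (volts)."""
    return sum(1 for v in pulse_heights_v if v > threshold_v)

# Hypothetical amplifier output: volt-scale fission pulses stand well clear
# of the millivolt-scale background.
pulses = [2.1, 0.03, 1.8, 0.05, 0.02, 2.4]
n_counts = count_fission_pulses(pulses)  # counts only the three large pulses
```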
The output voltage pulses are then passed through a level detector, such that all pulses greater than a threshold height produce a digital output and smaller ones produce none. The result is a digital pulse train that has been discriminated to eliminate background caused by electrical noise in the amplifier and by other sources of ionization of the gas within the fission chamber. These digital pulses can then be counted directly to produce an output count rate proportional to the neutron flux. This is the basis of source-range neutron flux monitoring instrumentation for power reactor applications. For more information on this instrumentation, see Ref. 11. The output pulses from the fission chamber can also be summed into a linear current amplifier to produce an output signal proportional to the neutron flux. The output current
Figure 2. Operation of fission chamber. (The sketch shows a neutron entering the chamber, the ionized gas between the electrodes, the high-voltage (HV) supply, and the resulting current and output pulse.)
signal for each pulse is approximately 1 × 10⁻¹³ C (ampere-seconds); thus the output current is accurate only for neutron flux levels nv above 10⁸ cm⁻² s⁻¹, because of the inherent alpha background current from the naturally radioactive uranium coating. (Typical dc alpha background currents from large fission chambers are of the order of 10⁻⁹ A to 10⁻⁸ A.) This current measurement from fission chambers is the basis for some power-range neutron flux monitoring instrumentation for power reactor applications. The output pulses from the fission chamber can also be amplified in an ac voltage amplifier to produce an ac output signal that is statistical in nature. According to the statistical theory of noise, the mean square of this fluctuating signal is proportional to the pulse rate, and hence to the neutron flux, in the fission chamber. If this statistical output signal is passed through a squaring circuit and then through a logarithmic amplifier, the final signal output will be proportional to the logarithm of the neutron flux. This technique is useful for monitoring neutron flux over a fairly wide range, and it is the basis for intermediate-range neutron flux monitoring instrumentation for power reactor applications. For additional information, refer to the study performed by Valentine et al. (12). From the above examples it may be seen that, if desired, the various signals from a fission chamber can be used as inputs to several different instruments, providing the capability to measure the neutron flux over a wide range. By using a combination of counting, statistical noise measurement, and linear current measurement, the range of detection can cover more than 11 orders of magnitude, from reactor shutdown (zero power operation) up to full power. In boiling water reactors, three different types of miniature fission chambers are located within tubes inside the reactor to monitor source-range, intermediate-range, and linear-power-range neutron flux.
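The mean-square-voltage (Campbelling) mode rests on the proportionality between the variance of the fluctuating current and the fission rate. That proportionality can be demonstrated with a toy shot-noise simulation; the time step, rates, and charge per pulse below are invented for illustration and are not the article's data.

```python
import random

def current_samples(rate_hz, n=200_000, dt=1e-6, q=1e-13):
    """Toy chamber current: at most one pulse of charge q per interval dt,
    arriving with probability rate_hz * dt (a crude Poisson process)."""
    p = rate_hz * dt
    return [q / dt if random.random() < p else 0.0 for _ in range(n)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(0)  # deterministic for the example
v1 = variance(current_samples(10_000))
v2 = variance(current_samples(20_000))
ratio = v2 / v1  # close to 2: doubling the rate doubles the mean square
```

This is Campbell's theorem in miniature: because the mean-square signal tracks the event rate itself, the mode keeps working at rates where individual pulses pile up, which is what lets it bridge the gap between pulse counting and dc current measurement.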
In pressurized water reactors, fission chambers are located within holders placed outside the reactor vessel to provide similar functions for source-range and for intermediate-range neutron flux monitoring instrumentation and, in some reactors, also for power-range instrumentation. Major benefits for this application include high signal-to-noise ratio, good rejection of the signal caused by gamma radiation in comparison with the signal caused by neutron flux, and good usable signal over a wide range of operation.

BIBLIOGRAPHY

1. G. F. Knoll, Radiation Detection and Measurement, New York: Wiley, 1979.
2. B. Rossi and H. H. Staub, Ionization Chambers and Counters, New York: McGraw-Hill, 1949.
3. W. Baer and O. F. Swift, Some aspects of fission counter design, Rev. Sci. Instrum., 23 (1): 55–56, 1952.
4. A. C. Graves and D. K. Froman (eds.), Miscellaneous Physical and Chemical Techniques of the Los Alamos Project, New York: McGraw-Hill, 1952.
5. L. Colli and U. Facchini, Drift velocity of electrons in argon, Rev. Sci. Instrum., 23 (1): 39–42, 1952.
6. U. Facchini and A. Malvicini, A–N2 fillings make ion chambers insensitive to O2 contamination, Nucleonics, pp. 36–37, April 1955.
7. R. Aves, D. Barnes, and R. B. MacKenzie, Fission chambers for neutron detection, J. Nucl. Energy, 1: 110–116, 1954.
8. W. Abson, P. G. Salmon, and S. Pyrah, The design, performance and use of fission counters, J. Brit. Nucl. Energy Conf., 3: 201–209, 1958.
9. V. K. Pare, Model of a fission counter system for optimizing performance at high gamma dose rates, ORNL/TM-6408, Oak Ridge National Laboratory, November 1978.
10. A. L. Taboas and W. L. Buck, Neutron induced current pulses in fission chambers, ANL-CT-78-14, Argonne National Laboratory, January 1978.
11. L. C. Wimpee, Y. Dayal, and P. W. Swarz, Analytical study of a source range monitoring system for LMFBR, AEC-AT (04-3)-893, Task 9, General Electric, April 1982.
12. K. H. Valentine et al., Experimental study of fission counter design optimization with a simplified analytical model of performance, ORNL/TM-6926, Oak Ridge National Laboratory, November 1979.
JAMES F. MILLER
GAMMA-METRICS
Wiley Encyclopedia of Electrical and Electronics Engineering
Fusion Reactor Instrumentation, Standard Article
Kenneth M. Young, Princeton Plasma Physics Laboratory, Princeton, NJ
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W5207. Online posting date: December 27, 1999.
Abstract. The sections in this article are: Measurements for Control of the Plasma; Plasma Measurement Techniques Currently in Use; Development Issues to be Pursued; Summary; Acknowledgments.
FUSION REACTOR INSTRUMENTATION
At the present stage of development in the worldwide fusion program, the device with the highest promise of providing a power-producing fusion reactor is the tokamak (see FUSION PLASMAS). Significant levels of power and energy have recently been achieved in the Tokamak Fusion Test Reactor (TFTR) (1) and the Joint European Torus (JET) (2), and many critical design studies have been carried out for extrapolating from present-day tokamaks to the ultimate reactor. A major international design program is now in progress for a large tokamak, the International Thermonuclear Experimental Reactor (ITER), to provide the full capability of a self-sustaining "burning" plasma and the testing of necessary technology such as power-generating and tritium-breeding blankets. While the development of other concepts using magnetic confinement techniques is showing good progress, many aspects of their plasma measurements are similar to those used on tokamaks, and extrapolation of these measurements to a reactor will follow a similar path. The requirements of instrumentation for inertially confined plasmas using high-power pulsed lasers or beams of heavy particles, another potential route to a fusion reactor, are quite different because of the very short pulses and very localized reaction volume, and they will not be described here. Our concentration will be on the instrumentation necessary to maintain a steady-state burning plasma at the core of a power-producing reactor (see FUSION REACTORS).

The instrumentation for ITER must fulfill the needs of scientists aiming to achieve a full physics understanding of the plasma behavior while creating very long plasma pulses; the design specifications call for about 20 min. Hence the instrumentation set must be capable of giving very detailed physics information about the plasma behavior, but much of it must also be integrated into the automatic control system for maintaining the performance over these long periods. The expectation must be that the instrumentation required for a final thermonuclear reactor will be much simpler, partly because the developmental physics issues will have been fully explored, but also because the plasma parameters key to providing the necessary control information will have been identified. Since the optimal mode of operating the plasma in a tokamak for long-term stable behavior has not yet been determined, the actual instrumentation cannot be defined too tightly; we will therefore consider the instrumentation being conceived for the ITER device, and for other tokamaks now under design, as illustrating the requirements and implementation for a reactor. We will first address the requirements for instrumentation on these tokamak devices and, in particular, the role it is expected to play in the control of the plasma performance. Some of the techniques used in these diagnostic measurements will then be described briefly, to illustrate some of the design and integration challenges to be overcome in achieving the necessary quality of measurement for an active control system. For a detailed physics understanding of the measurements, the reader is referred to Refs. 3–5. No attempt will be made to address the instrumentation of relatively conventional parts of a fusion reactor, such as the electrical generation or the plasma-surrounding blanket modules for breeding tritium and converting neutron energy into thermal energy, assuming the type of deuterium-tritium (D-T) burning plasma expected to be exploited first. The temperature measurements, neutron measurements, and flow measurements there are expected to be conventional, at least in concept. After describing the plasma diagnostic techniques, some of the developments necessary for certain techniques, and demonstrations of the viability of other techniques, will be described.
The serious challenge of achieving industrial-quality reliability and durability from the set of sophisticated scientific instruments described here will not be addressed, but it is clearly an essential element in achieving an effective fusion reactor.
MEASUREMENTS FOR CONTROL OF THE PLASMA

In describing the instrumentation for the plasma and plasma-facing first wall of a next-step tokamak device, one can consider its main requirements under three categories: (1) to provide input for protection of the hardware inside the tokamak vacuum vessel, (2) to provide information for achieving and maintaining operation of the burning plasma at high output power, and (3) to provide the detailed data for physics understanding and optimization of the plasma performance. Some of the plasma diagnostics will participate in all three categories. The priority for providing redundancy, reliability, and maintenance will be highest for those measurements having a protective role. Table 1 shows the parameters whose measurement is presently considered to play a necessary role in the control of a tokamak reactor. Following the scheme developed for the ITER design (6–8), the measurements have been divided into (1) a basic control set needed to enable the device to achieve good short-pulse performance and (2) an advanced set which will provide the capability for long-time ignited-plasma operation. The combination should extrapolate fairly effectively to the required set for an operating tokamak fusion reactor.

Table 1. Proposed Matrix of Measurements for Control

Basic control (control component: measured parameters):
Plasma creation, shape, and equilibrium control: plasma current, poloidal flux, toroidal flux, line-averaged density.
Impurity content and radiated power: spectroscopic line radiation, total radiation, visible continuum radiation.
Operational issues: first-wall surface temperatures, "halo" currents, m = 2 and locked-mode magnetic perturbations, runaway electrons, H-mode transition by Hα spectroscopy, gas pressures and composition.
Performance goals: plasma pressure (beta), neutron flux, triton/deuteron densities in the plasma core.
Divertor and edge control: surface temperature, currents to tiles, radiated power from divertor and x-point, gas flow, gas pressure.

Advanced control (control component: measured parameters):
Kinetic profile control: electron density, electron temperature, ion temperature, rotational velocity.
Current profile control: current density distribution.
Impurity content and radiated power: helium density (ash), total radiation profile, visible continuum profile.
MHD activity (sawteeth, ELMs, high-frequency Alfvén modes): magnetic fluctuations, electron temperature fluctuations, electron density fluctuations.
Operational issues: alpha-particle loss detection, neutral-particle source density, neutron fluence.
Performance goals: alpha-particle source profile.
Divertor and edge control: edge density, radiation in divertor, heat deposition profile in divertor, electron density and temperature in divertor, radiation front position, surface erosion.

The basic control set should be considered at this time to be the set providing the protective role for the device, though it is almost certainly a larger set than will finally be required. The set does include all the diagnostics needed to start up the plasma operation. The table might need minor modification in the light of specific design attributes of the reactor. Two things should be made very clear at this point. First, the table implies that very complex control algorithms will be required, because of the interaction of the different plasma parameters and of the actuators, such as disparate heating and fueling systems, which have not been addressed here. Second, while all the parameters shown in the table have been measured on existing tokamaks with very good success, only a few of the basic control parameters have been used so far in a feedback loop. Considerable technical development is necessary to demonstrate control capability on a burning-plasma device. The maintenance of the so-called reversed-shear discharges in the operating devices is leading to the application of some of the advanced controls, where the profiles of some of the parameters are key to a long-lasting high-performance plasma without disruptions (see FUSION PLASMAS).

Figure 1 shows a cross section of the ITER device; its dimensions should be similar to those of a reactor (9). At full power, with the plasma generating ~1.5 GW of fusion power, the plasma will have both very large stored thermal energy and very large magnetic energy. The plasma in a tokamak reactor must operate for long times (>1000 s) with a steady confining magnetic field provided by (1) the large toroidal coils, (2) field windings providing vertical and horizontal fields, and (3) the plasma current. In addition to the plasma heating by the current, there will also be additional heating provided by neutral particle beams or by radio-frequency power matched to the plasma particles at one of the resonant frequencies in the magnetic field, and the effectiveness of this heating must be determined. To optimize the heating and minimize energy losses, it is necessary to ensure sufficient plasma density, low impurity content (non-fuel ions), so that the loss of power from the core plasma by line radiation of impurity ions is as small as possible, and minimal turbulence in the plasma. The plasma must be kept away from the first wall because of the very high power flows, for which even short contact (≤1 s) could cause local melting of the surface. But it must achieve high power levels, determined from the flux level of neutrons for a deuterium-tritium reactor, and sustain them with control of the plasma temperatures and densities, while avoiding additional plasma losses due to turbulence. An event that can be tolerated only very rarely, because of the potential damage it could cause to first-wall components, is a disruption: a catastrophically fast loss of the plasma current and its energy. Disruptions can arise from a number of causes, such as the plasma pressure rising above a specific value related to the device parameters (the Troyon limit), the plasma density exceeding a specific value (the Greenwald limit), or a growth in the magnetic turbulence, either at high plasma velocity or slowing down to "lock" and become purely growing (see, for example, Ref. 8). All these factors have to be taken into account in providing a control system for protecting the device. But there are many other requirements to optimize the performance of the plasma; these are, of course, the subject of much of the research currently in progress to advance the fusion program. The description of plasma control in terms of measurements of the different plasma or device parameters shown in Table 1 must now be related to instrumentation techniques
Figure 1. A poloidal cross-sectional view of the ITER device (9). (The labeled components include the bioshield, cryostat, pressurized water cooling, removable blanket modules, vacuum vessel, central solenoid, preload structure, equatorial and divertor port extensions, TF coil, gravity support, and PF coils; the axes are marked in meters.)
which can provide this information sufficiently quickly that it can be applied in control. Often significant analysis is required for interpreting a detector’s output in terms of the physics parameter being measured. In some cases more than one measurement is needed to be able to interpret a single plasma parameter. Hence considerable fast computing power will be required in such control, but such is the advance in computer capability that this is not considered an issue for devices to be built decades from now. For experimental work on current devices, either simple permissive switching or neural network techniques are being applied, the former mostly in protective roles and the latter in trying to improve the operational performance. Diagnostic techniques now in use for carrying out these measurements are listed in Table 2. They will be described in a little more detail in the next section. It is also necessary to consider the quality of the measurement required for the plasma control. Important issues will be the dynamic range of the measurement, the spatial resolution within the plasma, the temporal resolution, and the necessary accuracy. It has become the practice to define measurement requirements in such a way for the new tokamaks and some examples of the definitions set for ITER are shown in Table 3 (6). For these
tokamaks, there is a major physics mission requiring the best possible spatial and time resolution as well as accuracy. For some parameters, some relaxation of the requirements for control may be possible; but for others, such as the neutron count rate used in controlling the operational burn of the fusion reactor, it clearly is not. For an effective control system, most of the measurement time resolutions shown in Table 3 are much faster than can be applied in control, so the requirements are dominated by the need for physicists to be able to understand the observations. The time constant for the current-driving magnetic field to penetrate the steel structures surrounding the plasma could be longer than 1 s, and the current will take a similar time to penetrate to the core of the plasma. Using neutral beam particles or radio-frequency techniques to drive current, it may be possible to change the current density distribution in the plasma on a faster time scale, perhaps as short as 100 ms. Feeding raw signals directly to the control section of the computer systems, equipped with appropriate interpretive and averaging algorithms, will allow the plasma to be controlled, provided that the software also includes knowledge of the response of the plasma to imposed external actuation (10). In parallel, the signals will be fed to analysis areas where the
physics team can develop understanding of the plasma behavior and make changes to the control responses as the plasma performance develops. Thus for the generation of next-step fusion devices epitomized by ITER, physics understanding and control development will go hand in hand to provide the optimum performance. It is hoped that tokamak reactors beyond that stage will be able to rely on simpler sets of measurements for determining the necessary control and fault identification. This simpler instrumentation set will be definable after the operation of an ITER-like device.
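The "interpretive and averaging algorithms" mentioned above can be illustrated with a minimal sketch: a moving average smooths a noisy raw diagnostic signal before a proportional correction toward a setpoint is computed. The signal, gain, window length, and setpoint are all invented for illustration; a real system would add validated plant models, actuator limits, and interlocks.

```python
import numpy as np

# Sketch: a moving-average "interpretive and averaging" step on a noisy raw
# diagnostic signal, followed by a proportional correction toward a setpoint.
# Signal, gain, window, and setpoint are all invented for illustration.
rng = np.random.default_rng(1)
setpoint, gain, window = 1.0, 0.5, 20

signal = setpoint + 0.1 * rng.normal(size=1000)        # noisy raw measurement
kernel = np.ones(window) / window
smoothed = np.convolve(signal, kernel, mode="valid")   # averaging algorithm
actuator_command = gain * (setpoint - smoothed)        # proportional response
```

The averaging trades temporal resolution for noise rejection, which is acceptable precisely because, as noted above, control time scales are slower than the raw measurement time resolutions.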
Table 2. Plasma Measurements and Diagnostic Techniques for Plasma Control

Measured Plasma Parameter | Diagnostic Technique
Plasma current | Magnetic Rogowski coil
Plasma position and shape | Magnetic flux loops
Line-averaged electron density | Interferometer
Spectroscopic line radiation | Spectroscopic impurity monitors
Radiated power | Bolometers
Visible continuum radiation, H-mode transition | Filter spectroscopy
First-wall/divertor surface temperatures | Infrared imaging cameras
"Halo" currents, currents to tiles | Current monitors in first-wall/divertor structure
MHD activity and locked modes | High-frequency magnetic probes
Runaway electrons | Synchrotron radiation detectors
Neutral gas pressure, gas composition | Pressure gauges, residual gas analyzers
Plasma pressure (beta) | Diamagnetic loop
Neutron flux | Neutron flux monitors
Triton/deuteron densities | Neutral particle analysis, spectroscopy
Electron temperature profile | Thomson scattering, electron cyclotron emission
Electron density profile | Thomson scattering, interferometry, reflectometry
Ion temperature and rotational velocity | Spectroscopy, enhanced by a neutral beam
Current density profile | Motional Stark effect spectroscopy, polarimetry
Helium density (ash) | Spectroscopy, enhanced by a neutral beam
Radiated power profile (core and divertor) | Array of bolometers
Visible continuum profile | Array of filter spectroscopy
Electron temperature fluctuations | High-frequency electron cyclotron emission
Electron density fluctuations | Correlation reflectometry
Alpha-particle loss detection | Faraday cups at wall, infrared camera
Neutral density particle source | Fast ion gauges
Neutron fluence | Activation foils
Alpha-particle source profile | Neutron camera
Edge electron density | Reflectometry
Electron density in divertor | Interferometry, Thomson scattering
Electron temperature in divertor | Thomson scattering, Langmuir probes
Radiation front position | Visible imaging, filter spectroscopy array
Surface erosion | To be determined
PLASMA MEASUREMENT TECHNIQUES CURRENTLY IN USE

In the last 10 years, there have been major advances in the quality of measurement of plasma parameters, both in the number of spatial locations and in time resolution. The advances were largely the result of improvements in technology, but the need for them was largely driven by the discovery of new high-performance operating modes, by the computer codes developed for modeling and predicting the plasma performance, and by the associated theoretical studies. This improvement in theoretical prediction of the plasma behavior and the capability for simulating the plasmas also led to the requirement to measure some additional parameters, most notably the current density distribution and the finely spatially resolved ion temperature. Some of the diagnostic methods closely related to the needs of plasma control will be described here; more detailed information, both about these techniques and about the wide range of methods used on tokamaks, can be found in Refs. 4 and 11.

Magnetic Measurements

Magnetic measurements have provided most of the fundamental information for plasma control until now, giving information on the plasma current, its shape and position, and the total plasma pressure, as well as on magnetic turbulence inside the plasma (12,13). All of these measurements, except the last, require time integrals of the changing magnetic field, so that for very long pulse or steady-state operation some alternative technique is required.

Plasma Current. The plasma current is measured by a coil wound as a small-diameter solenoid that loops around the plasma in a plane normal to the current. This looped solenoid is called a Rogowski coil. The voltage measured in the solenoid is proportional to the rate of change of the magnetic flux through the solenoid, and this magnetic flux is proportional to the current passing through the loop.
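The relation just described can be made concrete with a short numerical sketch: given the coil voltage V = M dI/dt, the plasma current is recovered by integration. The mutual inductance and the simulated current waveform below are illustrative values, not parameters of any actual machine.

```python
import numpy as np

# Sketch: recovering plasma current from a Rogowski coil voltage, V = M dI/dt.
# M and the current waveform are illustrative values, not machine parameters.
M = 2.0e-7                                 # mutual inductance, V.s/A (assumed)
t = np.linspace(0.0, 1.0, 10001)           # time base, s
I_true = 15e6 * (1.0 - np.exp(-t / 0.1))   # simulated current ramp to 15 MA

V = M * np.gradient(I_true, t)             # what the coil electronics digitize

# Drift-free long-time integration is the hard part in hardware; numerically,
# a trapezoidal cumulative integral recovers the flux and hence the current.
flux = np.concatenate(([0.0], np.cumsum(0.5 * (V[1:] + V[:-1]) * np.diff(t))))
I_rec = flux / M
```

In hardware the equivalent step is an analog or digital integrator that must hold its zero over the full pulse length, which is exactly why the 1000 s integration requirement noted below is demanding.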
For a fusion reactor this solenoid must be mounted inside the vacuum vessel to provide sensitivity to rapidly changing currents. It must be made with relatively radiation-resistant ceramic insulation, and it must operate into electronics capable of integrating relatively small signals for times up to 1000 s.

Plasma Position and Shape. The plasma position and shape are measured using small discrete coils distributed in the plane of the plasma minor cross section. The concept proposed for ITER is shown in Fig. 2. The coils are normally oriented with their axes nearly parallel or nearly perpendicular to the field produced by the plasma current, and external coils are provided to control the plasma current position and shape. These discrete coils are mounted on structures inside the vacuum vessel. Magnetic reconstruction codes are used to optimize the number and location of these coils with respect to the main plasma, to the x-point in the flux surfaces induced in the field shaping for the divertor, and to the divertor itself. The insulating materials for the coils, including those in the cabling that carries signals out to long-time integrating electronics, must operate within a radiation environment. For steady-state operation, Hall probes could be used if they can be made to withstand the radiation environment. Some new ideas, in which mechanical movements induced by the magnetic field might be coupled with the electrical measurement described above, are being developed. Measurement of the density by microwave reflectometry to determine the vacuum–plasma boundary is also being considered for the position and shape measurement in ITER.

Table 3. Examples of Assessment of the Target Plasma Measurement Capability for ITER (6)

Parameter | Parameter Range | Spatial Resolution | Time Resolution | Accuracy
Plasma current | 0.1–28 MA | Not applicable | 1 ms | 1% (Ip > 1 MA)
Total neutron flux | 10^14–10^21 n·s^-1 | Integral | 1 ms | 10%
Neutron and α-particle source | 10^14–4 × 10^18 n·s^-1·m^-3 | 30 cm | 1 ms | 10%
Divertor surface temperature | 200–2500°C | 1 cm | 1 ms | 10%
Core electron temperature profile | 0.5–30 keV (a) | 10 cm | 10 ms | 10%
Edge electron density profile | (0.05–3) × 10^20 m^-3 | 0.5 cm | 10 ms | 5%
Radiation profile in main plasma | 0.01–1 MW·m^-3 | 20 cm | 10 ms | 20%
Radiation profile in divertor | ≤100 MW·m^-3 | 5 cm | 10 ms | 20%

(a) 1 keV ≈ 1.1 × 10^7 K.

Magnetic Turbulence Measurement. Similar small coils, designed for measurement of plasma-induced fluctuating magnetic fields, are needed. Magnetohydrodynamic (MHD) instabilities are always observed at some level in all tokamak plasmas, and their amplitude and structure provide significant clues about the plasma performance. They are usually in the frequency range of a few tens of kilohertz; but in the presence of fast ions, such as the alpha particles created in the D–T fusion reaction, the range can be several hundred kilohertz, so the design must be suitable for high frequencies. The coils must be mounted close to the plasma so that eddy currents created in nearby conducting material do not reduce
[Figure 2. The distribution of magnetic detectors planned for the ITER device (8). Labeled components include the back plate with blanket/shield modules, pickup coils, voltage loops, vacuum vessel, and divertor cassette.]
the high-frequency response. Because the structure of the instabilities inside the plasma must be determined, there must be many measurement locations, and there are normally many coils distributed in more than one plane around the torus. The fluctuations observed by the coils are caused by the rapid movement of the instabilities inside the plasma. The instabilities should align with the magnetic field lines passing around the toroidal plasma, and their frequency, phase, and amplitude at localized measurement locations can identify specific kinds of MHD modes. The strongest impact on plasma performance is caused by low-m modes (m being the number of nodes in the wave measured around the minor circumference) close to the axis of the plasma. Modes with m = 2, or the m = 1 sawteeth (so called because of the shape of the time-dependent signals observed at the coils), are of most concern, so their presence is an indicator for a control response. Sometimes the MHD modes slow down and "lock" in position, a condition which frequently results in a catastrophic termination of the plasma, known as a disruption.

Plasma Pressure. The total plasma pressure can be measured from the diamagnetic effect it has in reducing the toroidal vacuum magnetic field. This effect is, at best, only a few percent even for an effective tokamak reactor, so the measurement is not easy, though the concept for the magnetic measurement is very simple. A complete loop of wire surrounding the plasma in the plane of the minor cross section will give a measurement of the field change. But possible misalignment, or movement of the coil out of this plane, can give changes larger than the one being sought, so normally ingenious compensating loops are included in the design. The measurement is crucial, because exceeding a defined value is known to cause the plasma to terminate in a disruption.
Although this measurement is conceptually straightforward, it may well have to be replaced by a complex set of kinetic measurements of density and temperature, combined and integrated over space, to provide the key information. In current experiments, such data are routinely compared with the "diamagnetic" pressure, but only a long time after the relevant plasma discharge.

Measurement of "Halo" Currents (14). When a discharge terminates in a disruption, the current in the plasma decreases dramatically (at a rate as great as 1 × 10^9 A/s) and the pressure collapses. In so doing, currents pass into the structure of the first wall; these are found to be potentially large (hundreds of kiloamperes) and nonuniform. There is considerable concern that these short-lived currents, in the presence of the large steady toroidal magnetic field, will cause forces in the structure that could seriously damage it. The disruption process and the formation of these halo currents are subjects of urgent study, as is the development of instrumentation to measure them. So far, small Rogowski coils have been placed around structural members. It is assumed for the moment that such a measurement will be provided on a reactor device. No control response will be possible on the timescale of the disruption, if it should happen, but the data will enable engineers to compare the calculated forces with those used in their design.

Electron Density and Temperature

The electron density has three significant roles to play in the control system. Fueling of the plasma is clearly going to be controlled to match the required plasma densities. The sum of the electron and ion pressures, the pressure being the product of density and temperature, can be used to replace the measurement by the diamagnetic loop, particularly for long times. Minimum density levels are often necessary before noninductive power can be applied effectively, and high densities can also lead to a rapid termination of the plasma in a disruption. In the most recent successful experiments on plasma confinement in tokamaks, very low levels of energy transport have been found inside radii with very steep density gradients. There is thus a possible need for very good spatial resolution in the density measurement, though it is not presently clear how the data could be used in plasma control. The most straightforward measurement of the density is interferometry, since the refractive index of the plasma is linearly proportional to the density over a wide range, provided that the probing wavelength is selected appropriately. For large plasmas (r ≥ 1 m) and densities in the range ~10^20 m^-3, wavelengths from ~100 µm to ~4 mm, in the far-infrared to microwave region, are used.
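The interferometric relation can be sketched quantitatively. For a probing beam far from the cutoff density, the accumulated phase shift is, to a good approximation, Δφ = r_e λ ∫ n dl, with r_e the classical electron radius; inverting this gives the line-averaged density. The wavelength (a common far-infrared laser line) and the chord length below are illustrative assumptions, not design values.

```python
import numpy as np

# Sketch: line-averaged density from an interferometer phase shift, using the
# low-density (far-from-cutoff) relation  delta_phi = r_e * lambda * int(n dl).
# The wavelength and chord length are illustrative assumptions.
r_e = 2.8179e-15           # classical electron radius, m
lam = 118.8e-6             # probing wavelength, m (a common FIR laser line)
L = 3.0                    # chord length through the plasma, m (assumed)

n_bar = 1.0e20             # "true" line-averaged density, m^-3
delta_phi = r_e * lam * n_bar * L          # accumulated phase, rad (~16 fringes)

n_recovered = delta_phi / (r_e * lam * L)  # inversion used by the diagnostic
```

The ~16-fringe result shows why fringe counting must be robust against transients, and why the dual-wavelength vibration compensation described next is needed.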
The interferometer is usually in the Michelson or Mach–Zehnder configuration, the former usually being favored since it does not require a long compensation leg. Several sightlines vertically through the plasma are very desirable, and unfolding techniques have been developed to arrive at density profiles of the toroidal plasma. A dual-wavelength system is used so that defects due to movements of the mirrors can be corrected (15). The presence of divertors with intense heat loads in the reactor-grade devices tends to prevent many vertical lines of sight, so sightlines tangential to the minor axis of the plasma are being considered. The probing beams pass through the plasma, are reflected from conical retroreflectors mounted off the wall of the tokamak with as much protection from the plasma as possible, and return along the same path. The geometry does not allow for very accurate unfolding and requires additional information from the magnetic diagnostics about the position of the plasma axis. Because of the longer path length in the plasma, a shorter wavelength (e.g., the 10.6 µm line of a CO2 laser) is preferred. However, this arrangement has the advantage that the density can be obtained from the Faraday rotation of the polarization of the light. The rotation is proportional to ∫ n B_l dl, where n is the density, B_l is the component of the magnetic field along the line of sight, and dl is the path element along the line of sight. For a tangential sightline arrangement, the magnetic field
strength, B_l, is dominated by the large toroidal field, so that it is nearly constant and the rotation angle is therefore primarily related to the density. The density and electron temperature can also be measured by the scattering of intense laser light off the electrons (Thomson scattering). Because of the very small cross section for this scattering, very high power pulsed lasers are generally used; nowadays, Nd:YAG lasers with energies of several joules are employed. The intensity of the scattered light is a measure of the plasma density, and the broadening of the spectral line is a measure of the electron temperature. The spatial localization of the light emission to provide profile information is obtained by one of two techniques. In the first, an imaging technique, light is imaged onto a fan of fiber optics, each fiber identifying a particular spot in the plasma along the line of the laser light (16). The second technique, more effective for large plasmas with difficult access for imaging sightlines, is to use very short laser pulses (<1 ns), view along the laser beam, and measure the scattered signal for different short time delays (17). The discrete pulses of the laser limit the time-dependent nature of the measurement, but lasers of sufficient quality capable of firing at ~30 Hz for many minutes are now available commercially. A combination of the two types of system may be necessary to achieve the necessary spatial resolutions for the core, edge, and divertor plasmas (13). A continuous measurement of the electron temperature is highly desirable for following turbulent plasma behavior and is available by observing the radiation emitted by the plasma electrons in the microwave spectral region. The electrons, accelerating in the magnetic field, radiate at their fundamental resonant (cyclotron) frequency in the magnetic field, f ~ eB/me, where B is the field strength, e the electron charge, and me its mass.
The intensity of the emission is, to a first approximation over a wide range of density and temperature, proportional to the electron temperature. By selecting a frequency value, the position of the source of the emission can be determined, since for a toroidal plasma the toroidal magnetic field is inversely proportional to the major radius. Various instruments can be used; a grating instrument with, say, 20 detectors gives the time behavior at 20 locations simultaneously, while an instrument sweeping in frequency provides a spatial scan of the temperature (18). For devices with large plasma dimensions there may be significant difficulty due to overlap with higher-harmonic emission coming from a different location but at the same frequency.
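The frequency-to-position mapping just described can be sketched as follows, assuming observation at the fundamental cyclotron frequency f = eB/(2πme) and a purely 1/R toroidal field. Many real systems observe the second harmonic instead, and the field and radius values used here are only tokamak-scale illustrations, not a specific machine.

```python
import numpy as np

# Sketch: mapping an ECE channel frequency to major radius through the 1/R
# toroidal field, assuming observation at the fundamental cyclotron frequency
# f = e*B/(2*pi*me). B0, R0, and the channel frequency are illustrative only.
e, me = 1.602e-19, 9.109e-31
B0, R0 = 5.3, 6.2                        # field (T) at major radius R0 (m)

def radius_of_emission(f_hz):
    """Major radius (m) where the local cyclotron frequency equals f_hz."""
    return e * B0 * R0 / (2.0 * np.pi * me * f_hz)

R = radius_of_emission(140e9)            # a hypothetical 140 GHz channel
```

A bank of such channels, one per filter or grating output, yields the temperature at a set of radii simultaneously; the harmonic-overlap problem mentioned above arises when a harmonic of a smaller B (larger R) falls onto the same frequency.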
He-like or Li-like ions in the X-ray region could provide information on the ion temperature or plasma motion from the Doppler shift (19). A curved-crystal spectrometer with a multiwire proportional counter detector could be applied for each line of sight through the plasma, with high energy resolution. Most of the signal will come from the region in which the ionization state of the impurity ion is determined by the local electron temperature, so the measurement can be relatively local. A significant disadvantage of X-ray instruments, which require substantial radiation shielding for a reactor, is that they cannot provide very good spatial resolution. Hence a preferred technique is visible spectroscopy, making use of an artificially enhanced population of low-mass (e.g., carbon) ions with one electron in the plasma core. Some of the neutral particles from a neutral heating beam, or occasionally a beam specifically provided for diagnostic purposes, penetrate to the center of the plasma (20,21). There they can exchange an electron with the fully stripped ions, enabling them to emit characteristic line radiation. The optimum beam energy for this process is about 100 keV for hydrogen atoms. The Doppler-broadened spectral line provides the temperature of the impurity ions; the value can be corrected to infer the temperature of the fuel ions. An optical arrangement focusing the light from the plasma onto an array of optical fibers provides many sightlines and hence the possibility of unfolding to provide a profile of the ion temperature. At the plasma edge, it is possible to measure the ion temperature making use of the emission from the partially ionized ions in this cooler region.
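The Doppler-width relation underlying these measurements is simple enough to sketch: for a thermally broadened Gaussian line, FWHM/λ0 = sqrt(8 ln2 · kT/Mc²). The particular line (a helium-like krypton line near 0.945 Å) and the measured width below are illustrative assumptions.

```python
import numpy as np

# Sketch: ion temperature from the Doppler (Gaussian) FWHM of a spectral line,
#     FWHM / lambda0 = sqrt(8*ln2 * kT / (M*c^2)).
# The line choice and measured width are illustrative, not real data.
def ion_temperature_keV(fwhm, lam0, mass_amu):
    """Invert the Doppler-width relation; fwhm and lam0 in the same units."""
    mc2_keV = mass_amu * 931494.0        # ion rest energy, keV (931.494 MeV/u)
    return mc2_keV / (8.0 * np.log(2.0)) * (fwhm / lam0) ** 2

# e.g. a He-like krypton line near 0.945 angstrom with a hypothetical
# measured Doppler FWHM of 6e-4 angstrom:
T = ion_temperature_keV(fwhm=6.0e-4, lam0=0.945, mass_amu=83.8)
```

Because the fractional width scales as sqrt(T/Mc²), heavy impurity ions give narrow lines, which is one reason the high resolving power of a curved-crystal spectrometer is needed in the X-ray region.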
A practical deficiency of requiring a significant density of neutral hydrogen atoms in the core of the plasma is that their penetration to the center without ionization by electrons, or charge exchange with ions farther out, becomes increasingly difficult as the plasmas become larger and hotter, as, for example, in ITER. The energy of the heating and current-drive beams being developed for ITER is 1.2 MeV, very good for penetration and for producing plasma rotation. Unfortunately, the charge-exchange process that creates a singly charged ion able to radiate for the spectroscopic needs is optimum at ~100 keV and falls off very rapidly at higher energies. Thus development of a neutral beam designed with very high current for a short time is extremely desirable (22); this is discussed later.

Current Density Distribution

Two techniques that are relevant to reactor-grade plasmas have been used for measuring the spatial dependence of the magnetic field within the plasma and hence the current density distribution and the pitch of the magnetic field lines. The first is to measure the Faraday rotation of probing laser beams, but this time with the beams aligned in the plane of the plasma minor radius, where the magnetic field variation is dominated by the field of the plasma current (23). The technique is rather limited in spatial resolution, and the accuracy is affected by having to unfold an integral measurement of the product of two rapidly varying quantities. The second is to measure the polarization of one component of the light emitted by neutral beam atoms excited by electron collisions as the beam enters the plasma. The motion of the atoms in the local magnetic field creates an electric field which leads to Stark splitting of the spectral lines. The polarization of
one of the lines identifies the direction of the magnetic field locally, where the line of sight of the detector intersects the beam. This technique, named the motional Stark effect (MSE), has been used successfully in important plasma confinement studies on TFTR (24) and DIII-D (25), and it is planned to be used on ITER. The heating neutral beam provided for driving current and inducing toroidal rotation of the plasma has an energy of 1.2 MeV; it will penetrate readily to the axis of the large plasma and provides a strong electric field. The major challenge for its use is the ability to provide a viewing mirror that collects sufficient light and provides sightlines to the outer edge of the plasma while maintaining its optical quality.

Impurity Concentration

Impurity ions dilute the fuel ions in the plasma because of their higher charge and the requirement for approximate charge neutrality with the electrons. Because the heavier ions do not become fully ionized until the electron temperature is much higher, they emit a lot of energy in impurity line radiation characteristic of the particular ion and its charge state. Hence spectrometers viewing in the visible and ultraviolet spectral regions with fairly wide spectral coverage will provide spectra of the fuel particles and the impurities (26). In the visible region, an array of fiber optics can again provide the spatial dependence of the emission. This array can also be used with narrow-band spectral filters in front of photodetectors to isolate one line from selected fuel or impurity ions and so give a very precise time behavior. Such a technique is used to identify good performance patterns at the plasma edge, or to give quantitative information about the plasma's impact in the divertor, where an impurity gas may be used to radiate over a broad area and so reduce the power load impinging on a small surface area of the divertor plates.
In present-day devices, fiber optics can be used close to the plasma with a vacuum window and simple optical lenses. Unfortunately, the reactor radiation environment will produce so much absorption and prompt fluorescence in transmission optics (27) during the measurement that this preferred equipment will have to be moved beyond the main shielding of the tokamak. Mirrors in shielded labyrinths will be necessary to bring the light out to the vacuum windows and then on to the lenses and fiber optics of the main transmission system. As for all the plasma diagnostics, it is desirable to keep all the sensitive detectors and their associated electronics packages in areas well shielded from the tokamak, to allow as much commercial equipment to be used as possible. In addition to line radiation, free–free electron (bremsstrahlung) emission is always present and accounts for a significant fraction of the radiated power. This emission can be quantified and is normally observed using a narrow-bandpass filter centered at a wavelength where there is no line radiation (28). The ratio of the observed intensity to that calculated for a pure hydrogen plasma provides information on an averaged impurity level. It can be a very valuable control value, particularly when the dominant impurity atoms are relatively light, for example carbon, which is expected to be used as the material in parts of the first wall facing the plasma. A similar measurement can be made in the soft X-ray region with a pulse-height analysis system, which can also
identify lines of heavier ions, such as iron, which are present in structural elements inside the vacuum vessel.

Total Radiation by Bolometry

To measure the total power radiated from the plasma, bolometric techniques can be used. Emission of radiation from the plasma covers a very wide spectral range, so it is important to measure this radiation with detectors having a very flat spectral response over as wide a spectral range as possible. Much of the radiation comes from near the plasma edges, where the incoming particles are only partially ionized; but, as we have seen, much also comes from the center of the plasma as free–free radiation and from highly ionized states of impurities. It is necessary to view the plasma with arrays of bolometer detectors to be able to carry out tomographic analysis of the source of the radiation, both for control and for physics assessments of the energy transport in the plasma (29). The control requirements range from seeking to reduce the impurity concentration to trying to increase the radiation close to the divertor plates, which reduces the heat load conducted onto the plates by radiating the energy over a greater fraction of the wall. Introduction of a noble gas such as krypton into the divertor is proposed for producing this "radiating" divertor. The arrays of bolometers will be mounted at many locations on the first-wall support structure. Because of the asymmetric shape anticipated for the plasma of the reactor, hundreds of detectors, mounted in groups of ten or so, will be needed to provide a sufficiently good basis for the tomographic analysis. To meet the requirement for control, at least initially, a few lines of sight encompassing key regions may be sufficient for developing basic control data. The groups of detectors will be mounted behind pinholes delimiting the size of the viewed volume, in boxes with well-designed temperature control.
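The foil calorimetry involved can be illustrated with a toy sketch: the plasma-facing element absorbs radiated power plus volumetric nuclear heating, while a matched rear element sees only the nuclear heating, so the difference in their temperature rises isolates the radiation. All numbers below are invented for illustration.

```python
# Sketch: foil calorimetry with nuclear-heating compensation. The front,
# plasma-facing element absorbs radiated power plus nuclear heating; a matched
# rear element sees only the nuclear heating, so the difference in temperature
# rise isolates the radiation. All values are invented.
def radiated_power(dT_front, dT_back, heat_capacity, dt):
    """Absorbed radiative power (W) from the differential temperature rise."""
    return heat_capacity * (dT_front - dT_back) / dt

# front rises 0.50 K and back 0.20 K in 10 ms with a 2 mJ/K foil element:
P = radiated_power(dT_front=0.50, dT_back=0.20, heat_capacity=2.0e-3, dt=0.01)
```

In the actual instrument this subtraction is performed electrically by the bridge arrangement of the two resistance thermometers rather than in software.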
The detectors themselves will probably be gold-blackened etched platinum resistance thermometers, typically about 1 cm² in area (30). The platinum is mounted on one side of a ceramic substrate, with a matching platinum resistor on its back, facing away from the plasma. The second resistor has the necessary role of measuring other transient sources of heat, particularly the nuclear heating caused by the neutrons and gammas coming from the plasma and the surrounding structure. These two resistors make up one pair of a Wheatstone bridge, the other pair being on the electronics boards away from the tokamak environment. Such bolometer arrays are providing very valuable information in present-day tokamaks, but there is significant concern about their operation and survival in a reactor environment. Development and testing of a bolometer with a better ceramic interspace is an urgent requirement.

Fusion Products

The nuclear reaction products from the fusion of deuterium and tritium fuel ions are neutrons and fast helium ions, called alpha particles. The neutrons escape from the plasma and provide the source of energy for generating electrical power. The alpha particles, born with an energy of ~3.5 MeV, stay inside the plasma and slow down, giving up their energy to heat the plasma and sustain the burn. Integral measurement of the neutrons gives information about the overall performance of the discharge. It will play a key role in keeping a stable plasma burn at the center
of the fusion reactor. Additional information on the spatial distribution of the ion temperature and on the source region of the alpha particles can also be obtained (31). For present-day devices and the next-step fusion devices, the measurement of alpha particles will be very important for finding out how they diffuse in space, how they respond to instabilities in the plasma, and how the "ash" of alpha particles that have slowed down accumulates in the core of the plasma, reducing the performance (32). For the fusion reactor, it is probable that only the ash measurement and the prompt loss of alpha particles causing local hot spots on the first wall will be considered necessary, because much simpler measurements of other plasma parameters will provide sufficient control information. It is possible that external measurement of the partial pressure of helium beyond the divertor region may provide an adequate control parameter.

Neutron Diagnostics

The simplest neutron measurement, very suitable for feeding into a control system, is a total flux measurement using a standard proportional counter (33). A fission-chamber proportional counter, surrounded by a moderator, can be placed outside the tokamak so that it measures the flux of neutrons reaching it with relatively good time resolution. No spectral information is required, because the neutrons have been scattered many times on their route to the detector. There is a very significant requirement that the neutrons be measured over a very wide dynamic range (as much as seven or eight orders of magnitude) to allow full control. The most difficult aspect of this measurement is relating the flux at the detector quantitatively to the source of neutrons originating in the plasma. The massive structures of the tokamak and its shielding must be taken into account, and the geometry is such that nuclear codes alone are not adequate for providing this calibration, so a calibration source must be used inside the tokamak (34).
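As an illustration of how such a calibration is used, the sketch below converts a calibrated count rate into total neutron source strength and D–T fusion power (each D–T reaction releases 17.6 MeV, shared by the neutron and the alpha particle). The calibration factor, expressed as source neutrons per detector count, and the count rate are purely illustrative values.

```python
# Sketch: converting a calibrated fission-chamber count rate into the total
# neutron source strength and D-T fusion power. The calibration factor
# (source neutrons per detector count) and the count rate are illustrative.
E_DT_J = 17.6e6 * 1.602e-19                  # energy per D-T reaction, J

def fusion_power_MW(count_rate, neutrons_per_count):
    S_n = count_rate * neutrons_per_count    # total neutron emission rate, 1/s
    return S_n * E_DT_J / 1.0e6

P = fusion_power_MW(count_rate=1.0e5, neutrons_per_count=2.5e12)
```

The uncertainty in the measured power is dominated by the uncertainty in the calibration factor, which is why the in situ calibration described above matters more than detector statistics.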
This calibration requirement may extend the dynamic range even further, because available point sources are very weak compared with any relevant plasma. The time-dependent measurement can be enhanced by neutron activation foil techniques, whereby an elemental foil is exposed close to the plasma and its activation is measured later at a remote location (35). The mechanical response is too slow for the foil technique to be used for control purposes, but it does provide the opportunity for checking the reliability and calibration of the time-dependent instruments, in addition to providing a record of the total fluence. Neutron cameras, consisting of arrays of collimated tubes between the plasma and a variety of detectors or spectrometers, can provide information about the spatial distribution of the neutron source and the temperature of the ions (36). The different sightlines across the plasma cannot be very close together, since massive shielding is needed to prevent crosstalk between the observation channels. Thus the cameras are of some value for physics studies where the information is not otherwise available, but they are not likely to be useful for burn control.

Alpha-Particle Measurements

The measurement of the alpha particles confined inside the plasma is very important for the next-step device from the
FUSION REACTOR INSTRUMENTATION
point of view of physics understanding and hence performance improvement. But for a reactor the essential measurements are those that ensure the reactor can sustain operation. An urgent question to be resolved is how rapidly the helium ash (the residue of the alpha particles after they have transferred all their energy to the background plasma, thereby creating and sustaining the burning plasma) diffuses out of the core plasma and away through the divertor. Recent experiments (37) suggest that it is sufficiently fast, but more experiments are needed. Techniques using radio-frequency waves have been suggested for accelerating its removal, but these have yet to be demonstrated. This ash, if it accumulates, significantly reduces the densities of the fuel particles for a given controlled electron density. Only one measurement technique has so far been suggested. It makes use of the same active spectroscopic method with an incoming neutral beam discussed above for ion temperature measurement, but in this case the intensity of a visible spectral line of singly ionized helium is measured (21). Some alpha particles will be lost continuously from the plasma, though most are relatively well confined despite their very high energy. But in some circumstances, such as certain kinds of MHD instabilities, there will be enhanced loss to the first wall in relatively localized regions. Since it is conceivable that the total energy deposited will be sufficient to cause damage if the loss is allowed to continue, it is desirable to measure the loss directly for the control system. The first technique suggested is infrared imaging of as much of the first wall as possible. Such a system has already been proposed for detection of hot spots arising from misalignments of the first wall and for monitoring the heat load on the divertor plates. The data used for control would be selected from a few of the pixels of the camera image.
Conceptual designs of the imaging system indicate the use of periscopes with reflective optics to take the light outside the radiation environment to a fiber-optic/camera imaging system. The benefit of this technique is that it does not depend on any theoretical projection of where the escaping particles should go; its deficiencies are that it cannot discriminate between particles of different energies and that it may not be sufficiently sensitive until damage is close at hand, because the camera is already viewing a very hot wall. A second technique makes use of discrete detectors placed at strategically chosen locations in the first wall. The detectors could consist of small scintillators mounted in small cameras to provide information about the energy and origin of the alpha particles, as in the TFTR tokamak (38). A scintillator with better radiation survivability than that presently used, and an optical system insensitive to radiation, will have to be developed. Alternatively, small Faraday cup detectors measuring the total number of alpha particles entering them could be used, as in the JET device. In this case there is a much simpler electrical connection but a significant loss of information, which, nevertheless, is probably sufficient for control needs.
DEVELOPMENT ISSUES TO BE PURSUED

In the preceding section, many of the measurement techniques likely to be used in control of a fusion reactor have been discussed very briefly. Nearly all of these techniques will
be evolutions from the techniques used in the physics studies being carried out on currently operating tokamaks. However, the radiation environment forces many compromises. Many of these should be demonstrated during the operation of the next-step device, where the neutron fluxes to the first wall and the main components of the tokamak should be approximately the same as for a reactor. For most of the plasma diagnostic equipment, the prompt noise induced in the signals is the most severe concern, so this will provide a good demonstration of the capability. However, the total fluences in which the equipment will have to function and maintain calibration in a reactor may be considerably higher, and this should be taken into account in its design. Such possibilities as ensuring replaceability will have to be considered. But at this moment, the development issues that have to be addressed are those needed for measurements on a next-step ignited-plasma device. Some of these studies are in progress.

Radiation Effects in Materials

The sensitivity of components of diagnostic systems to the neutron and gamma radiation environment close to the plasma is a very serious concern for diagnostics that have to provide reliable, quantitative information with good accuracy throughout the plasma discharge. Ceramic insulating material must retain sufficiently good insulation despite the induced effects: radiation-induced conductivity (RIC), radiation-induced electrical degradation (RIED), and radiation-induced electromotive force (RIEMF) (39). At the flux level expected at the first wall in ITER (∼6 × 10³ Gy/s for ceramics), the conductivity can be increased by as much as six orders of magnitude relative to the radiation-free environment, with some ceramics more affected than others.
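To see why radiation-induced conductivity matters for insulators, the leakage through a small ceramic stand-off can be estimated as ohmic conduction through a slab. The conductivity, geometry, and bias voltage below are assumed illustrative values, not figures from the article.

```python
# Ohmic leakage through a ceramic stand-off whose conductivity has been
# raised by radiation-induced conductivity (RIC). Geometry and voltage
# are assumed illustrative values.

def leakage_current(sigma_s_per_m, area_m2, voltage_v, thickness_m):
    # Slab conductor: I = V / R, with R = d / (sigma * A)
    return sigma_s_per_m * area_m2 * voltage_v / thickness_m

# 10 mm^2 cross section, 1 mm thick, 100 V bias, sigma = 1e-7 S/m:
i_leak = leakage_current(1e-7, 1e-5, 100.0, 1e-3)   # 1e-7 A, i.e. 0.1 uA
```

A tenth of a microampere through a single stand-off shows why an induced conductivity much above this level would swamp the small signal currents of many diagnostics.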
The design criterion of an induced conductivity <1 × 10⁻⁷ S/m, necessary for small electrical stand-off pieces and cable insulation, has been met in in situ fission-reactor neutron/gamma exposures for some alumina samples. Adequate insulation has also been seen in some samples of mineral-insulated cable, but much more detailed development and testing is needed to ensure qualified components. There is concern that cable performance is severely affected by the quality of the terminations, which must be suitable for use in vacuum, and by the configuration of the wiring, so extensive further testing in the radiation environment is necessary. In addition to this prompt property of the ceramics in a radiation field, a newer concern is the RIEMF observed in reactor testing of magnetic coils made from mineral-insulated cable. The effect is not yet understood, but it causes slow drifting of the integrated signal of, for example, a magnetic coil used in position measurement. In addition, some ceramics show a cumulative damage effect (RIED) when irradiated with an applied voltage, particularly in electron-radiation testing, which could lead to catastrophic failures. This behavior does not seem to be as significant in reactor irradiations. Again, definitive qualification tests are required before the materials are used in components for the next-step device. Once those tests are completed, testing of the components integrated into a complete instrument will be highly desirable. Nuclear radiation also drastically affects the transmission of transparent optical components such as windows, lenses, and fiber optics, as well as causing fluorescence (27,39). The
latter tends to be a more serious issue in fiber optics because of the much greater length of transmission medium. The fluorescence occurs only during the radiation period, that is, during the plasma discharge. The absorption is transient during the discharge, varying with the intensity of the neutron emission, but is also cumulative. Many quantitative in situ radiation studies at high flux levels have been carried out, but perhaps the most informative have been relatively low-flux examinations of the behavior of fibers on the TFTR tokamak. The light signals from the plasmas, from spectroscopy, or from scintillators near the plasma are of relatively low intensity (for the escaping-alpha-particle diagnostic, the background radiation-induced pedestal in the fibers was about 20% of the maximum signal produced in the scintillator by the alpha particles), so the background generated by neutrons should be both small and relatively well quantified. Because fiber optics open up the possibility of imaging techniques, and because of their flexibility, they are very desirable for threading optical paths through the shielding material surrounding the plasma. Hence the development of new quartz fiber materials that are relatively radiation-hard, allowing the fibers to be placed closer to the neutron source, is highly desirable, and development of low-OH and F-doped fibers is in progress. This development is at an early stage; meanwhile, the best plan is to use quartz fibers with low defect levels, handled very carefully, with a metallic cladding to permit hot operation at ∼300°C to minimize the absorption buildup. Unfortunately, there do appear to be significant differences in the performance of fibers from apparently the same source, so each fiber will have to be characterized very carefully for its specific use.
It is very unlikely that the development of fibers for use in a radiation environment will ever achieve a quality that allows their use close to the first wall. Thus periscopes, with mirrors reflecting light through shielding labyrinths, are expected to form the first elements in the light paths from the plasma. The first mirror will necessarily look through a wide aperture into the plasma, to see a wide field of view and to collect sufficient light, particularly for spectroscopic measurement. This mirror is likely to be bombarded by a flux of order 10¹⁹ m⁻²·s⁻¹ of high-energy neutral atoms from charge-exchange reactions between hot ions and cold neutral gas atoms in the outer regions of the plasma, in addition to the impact of neutrons and gammas. The design of these front mirrors, most of which have to maintain high optical quality, is a major challenge. Research in support of this design effort has recently started (40). From the point of view of minimal sputtering, a reflecting surface of rhodium appears most desirable, but major development is needed first to create mirrors of sufficient optical and mechanical quality and then to test them thoroughly in high-energy neutral-particle sources. Other aspects of the reactor plasma, relative to the plasmas in today's most advanced devices, such as the much larger size and the higher electron temperatures arising from heating by alpha particles, lead to concern about the viability of some measurement techniques. For example, the spectroscopic techniques based on atomic interactions with ∼100 keV hydrogenic neutral particles will only operate with a beam intensity more than three orders of magnitude higher than in today's conventional heating neutral beams applied to tokamaks. A development program for a 5 GW, 1 ms beam has
been started (22). With the short pulse length, the total energy requirement and the impact of this probe beam on the plasma will be constrained. No techniques for real-time measurement of the erosion of first-wall surfaces, and of redeposition onto them, have been used on tokamaks. Some concepts, such as range-finding methods, may be applicable, but development will be needed to apply any technique with sufficient spatial coverage. But it is not only in technology that research and development are required to make some of the measurement requirements credible (8). Quantitative evaluation of the concentrations of impurity elements in the divertor will be very difficult because of the strong spatial variation, the partial self-absorption of spectral lines caused by high local plasma densities, and the fact that most of the radiated power will be in spectral lines emitted in the vacuum ultraviolet spectral range. Because of accessibility issues, it is extremely difficult to get good spatial resolution in this range. Hence, atomic-physics analysis to identify potentially good ways of extracting quantitative information from visible lines, and more detailed spectroscopy of cold, high-density plasmas in operating divertors, are clear areas for more research. New techniques should be found for measuring the core helium ash and the ratios of the hydrogenic fueling particles that do not depend on a very powerful diagnostic neutral beam. A direct measurement of the electric field inside the plasma, rather than inference from measured plasma motions, could provide more direct information for the control system.
SUMMARY

It is clear that the instrumentation required for a fusion reactor will be rather complex, particularly if the reactor is based on the tokamak concept, as appears likely at the moment. Many plasma parameters have to be measured, often with demanding requirements on resolution and accuracy, to provide the necessary information for control. While it is difficult to predict now exactly which of the measurements so important for physics understanding, and which of those used in improving plasma performance, will remain important for maintaining an ignited plasma for long times, many of those shown in Table 1 will be necessary to achieve the proper fueling, external heating, current drive, equilibrium, and stability properties. The instruments used in the many measurements now carried out for physics reasons are commercially available with proven reliability. It seems likely that similar instruments will be used for the reactor, with advances in computer power making feasible the use of large quantities of interpreted data in its feedback controls. The difficult step in providing a full instrumentation capability for a fusion reactor will be the interfacing with the plasma and the high-radiation environment. The demands for good spatial information about many plasma parameters set requirements for many penetrations in the shielding (to be compensated by labyrinths in the shielding and additional shields) and for components located in severe environmental conditions, not only of nuclear radiation but also of high temperatures and high vacuum. Integration of operating systems with sufficiently good signal-to-noise ratio, resistance to damage, and reliable calibration
integrity is the major engineering demand for the plasma diagnostics. Meanwhile, the physics program in magnetic fusion must continue to move forward to achieve an ignited plasma which is sustained for sufficiently long times to interest utilities in fusion as a power source. Part of that program will include improving the measurement capability through new techniques, better understanding of the physics interpretation of some measurement techniques, and adding better spatial and temporal resolutions. It is hoped that with better understanding of the details of the plasma behavior, it will be possible to greatly reduce the instrumentation demands for fusion reactor operation. There are many challenges associated with creating a fusion reactor. Among them is the provision of a capable set of instrumentation. There is every reason to think that this instrumentation will evolve with the fusion program, and many of the detailed problems apparent now will be resolved with further research and development.

ACKNOWLEDGMENTS

The author is grateful to the very many people who have been involved with him in the pursuit of the best possible plasma measurements. This work has been supported by DOE Contract DE-AC02-76-CHO-3073.

BIBLIOGRAPHY

1. R. J. Hawryluk, Results from deuterium–tritium tokamak confinement experiments, Rev. Mod. Phys., 70: 537–587, 1998.
2. M. Keilhacker et al., Nucl. Fusion, 1998.
3. P. E. Stott, G. Gorini, and E. Sindoni (eds.), Diagnostics for Experimental Thermonuclear Fusion Reactors, New York: Plenum Press, 1996.
4. I. Hutchinson, Plasma Diagnostics, Cambridge, UK: Cambridge Univ. Press, 1987.
5. A. Miyahara, Y. Hamada, and K. Ida (eds.), Fusion Plasma Diagnostics, Fusion Eng. Des., 34–35, 1997.
6. A. E. Costley et al., Requirements for ITER Diagnostics, in P. E. Stott, G. Gorini, and E. Sindoni (eds.), Diagnostics for Experimental Thermonuclear Fusion Reactors, New York: Plenum Press, 1996, pp. 23–37.
7. V. S. Mukhovatov et al., and the ITER Joint Central Team and Home Teams, Overview of the ITER Diagnostic System, in P. E. Stott, G. Gorini, and E. Sindoni (eds.), Diagnostics for Experimental Thermonuclear Fusion Reactors, New York: Plenum Press, 1998, pp. 25–40.
8. ITER Physics Basis Document, Nucl. Fusion, to be published 1998.
9. K. M. Young and A. E. Costley, Members of the ITER-JCT, ITER Home Teams and the ITER Diagnostics Expert Group, An overview of ITER diagnostics, Rev. Sci. Instrum., 68: 862–867, 1997.
10. J. Wesley et al., Plasma control requirements and concepts for ITER, Fusion Tech., 32: 495–524, 1997.
11. K. M. Young, Advanced tokamak diagnostics, Fusion Eng. Des., 34–35: 3–10, 1997.
12. A. J. Wootton, Magnetic Diagnostics for Tokamaks, in P. E. Stott, D. K. Akulina, G. Gorini, and E. Sindoni (eds.), Diagnostics for Contemporary Fusion Experiments, Bologna: Editrice Compositori, 1991, pp. 17–36.
13. ITER Physics Basis Document, Nucl. Fusion, to be published 1998, Chap. 7.
14. E. J. Strait et al., Observation of poloidal current flow in the vacuum vessel wall during vertical instabilities in the DIII-D tokamak, Nucl. Fusion, 31: 527–534, 1991.
15. R. T. Snider et al., Applications of interferometry and Faraday rotation techniques for density measurements on the next generation of tokamaks, Rev. Sci. Instrum., 68: 728–731, 1997.
16. D. Johnson et al., TFTR Thomson scattering system, Rev. Sci. Instrum., 56: 1015–1017, 1985.
17. H. Salzmann et al., The LIDAR Thomson scattering diagnostic on JET, Rev. Sci. Instrum., 59: 1451–1456, 1988.
18. D. V. Bartlett, Physics Issues of ECE and ECA for ITER, in P. E. Stott, G. Gorini, and E. Sindoni (eds.), Diagnostics for Experimental Thermonuclear Fusion Reactors, New York: Plenum Press, 1996, pp. 183–192.
19. K. W. Hill, M. Bitter, and S. von Goeler, Concepts and Requirements of ITER X-Ray Diagnostics, in P. E. Stott, G. Gorini, and E. Sindoni (eds.), Diagnostics for Experimental Thermonuclear Fusion Reactors, New York: Plenum Press, 1996, pp. 341–352.
20. R. J. Fonck, Charge exchange recombination spectroscopy as a plasma diagnostic tool, Rev. Sci. Instrum., 56: 885–890, 1985.
21. E. S. Marmar, Active Spectroscopy Diagnostics for ITER Utilizing Neutral Beams, in P. E. Stott, G. Gorini, and E. Sindoni (eds.), Diagnostics for Experimental Thermonuclear Fusion Reactors, New York: Plenum Press, 1996, pp. 281–290.
22. H. A. Davis et al., Progress toward a microsecond duration, repetitively pulsed, intense-ion beam for active spectroscopic measurements on ITER, Rev. Sci. Instrum., 68: 332–335, 1997.
23. H. Soltwisch, Current distribution in a tokamak by FIR, Rev. Sci. Instrum., 57: 1939–1950, 1986.
24. F. M. Levinton et al., Magnetic field pitch angle diagnostic using the motional Stark effect, Rev. Sci. Instrum., 61: 2914–2919, 1990.
25. B. W. Rice et al., Direct measurement of the radial electric field in tokamak plasmas using the Stark effect, Phys. Rev. Lett., 79: 2694–2697, 1997.
26. N. J. Peacock et al., Spectroscopy for Impurity Control in ITER, in P. E. Stott, G. Gorini, and E. Sindoni (eds.), Diagnostics for Experimental Thermonuclear Fusion Reactors, New York: Plenum Press, 1996, pp. 291–306.
27. A. T. Ramsey et al., Radiation effects on heated optical fibers, Rev. Sci. Instrum., 68: 632–635, 1997.
28. A. T. Ramsey and S. L. Turner, HAIFA: A modular fiber-optic coupled, spectroscopic diagnostic for plasmas, Rev. Sci. Instrum., 58: 1211–1220, 1987.
29. R. Reichle, M. DiMaio, and L. C. Ingesson, Progress for the Reference Design for ITER Bolometers and Development of a High Performance Alternative, in P. E. Stott, G. Gorini, and E. Sindoni (eds.), Diagnostics for Experimental Thermonuclear Fusion Reactors, New York: Plenum Press, 1998, pp. 389–398.
30. K. F. Mast et al., A low noise highly integrated bolometer array for absolute measurement of VUV and soft X-radiation, Rev. Sci. Instrum., 62: 744–750, 1991.
31. L. C. Johnson et al., and the ITER Joint Central Team and Home Teams, Overview of Fusion Product Diagnostics for ITER, in P. E. Stott, G. Gorini, and E. Sindoni (eds.), Diagnostics for Experimental Thermonuclear Fusion Reactors, New York: Plenum Press, 1998, pp. 409–418.
32. K. M. Young, Alpha-Particle Diagnostics, in P. E. Stott et al. (eds.), Diagnostics for Contemporary Fusion Experiments, Bologna: Editrice Compositori, 1991, pp. 573–594.
33. E. B. Nieschmidt et al., Effects of neutron energy spectrum on the efficiency calibration of epithermal neutron detectors, Rev. Sci. Instrum., 56: 1084–1086, 1985.
34. J. D. Strachan et al., Neutron calibration techniques for comparison of tokamak results, Rev. Sci. Instrum., 61: 3501–3504, 1990.
35. C. W. Barnes et al., Measurements of DT and DD neutron yields by neutron activation on the Tokamak Fusion Test Reactor, Rev. Sci. Instrum., 66: 888–890, 1995.
36. F. B. Marcus et al., A Neutron Camera for ITER: Conceptual Design, in P. E. Stott, G. Gorini, and E. Sindoni (eds.), Diagnostics for Experimental Thermonuclear Fusion Reactors, New York: Plenum Press, 1996, pp. 385–396.
37. E. J. Synakowski et al., Phys. Rev. Lett., 75: 3689–3692, 1995.
38. D. S. Darrow, S. J. Zweben, and H. W. Herrmann, Alpha particle loss diagnostics in TFTR and tokamak reactors, Fusion Eng. Des., 34–35: 53–58, 1997.
39. E. R. Hodgson, Radiation Problems and Testing of ITER Diagnostic Components, in P. E. Stott, G. Gorini, and E. Sindoni (eds.), Diagnostics for Experimental Thermonuclear Fusion Reactors, New York: Plenum Press, 1998, pp. 261–268.
40. V. S. Voitsenya et al., Imitation of Fusion Reactor Environment Effects on the Inner Elements of Spectroscopical, mm and Submm Diagnostics, in P. E. Stott, G. Gorini, and E. Sindoni (eds.), Diagnostics for Experimental Thermonuclear Fusion Reactors, New York: Plenum Press, 1996, pp. 61–70.
KENNETH M. YOUNG Princeton Plasma Physics Laboratory
Ionization Chambers
Standard Article, Wiley Encyclopedia of Electrical and Electronics Engineering
Charles D. Goodman, Indiana University, Bloomington, IN
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5217
Article Online Posting Date: December 27, 1999

The sections in this article are: Uses; Ionizing Radiation; Further Considerations.
IONIZATION CHAMBERS

An ionization chamber is a device comprising a defined volume of gas containing electrodes that facilitate the collection and detection of the free electrons and ions produced by the passage of ionizing radiation through the gas. A voltage is applied between the electrodes to produce an electric field in the gas volume. When ionization occurs, the freed electrons and the ions drift, respectively, to the anode and the cathode. The term "ionization chamber" is used only for those chambers where the electric field is weak enough throughout the volume that the electrons (and ions) do not attain sufficient energy between collisions to cause secondary ionization. This condition is typically met with field gradients less than 10⁶ V/m. When the field is stronger and secondary ionization is used to amplify the ion current, the device is called a proportional counter. When the field is so strong that the initial ionization triggers an ionization avalanche, the device is a Geiger counter, and the amplitude of the output current pulse is more or less independent of the number of ions in the initial ionization event. If the anode and cathode are well insulated from each other, and the ionization chamber is isolated from any external circuit, it is, in effect, a capacitor with a gas dielectric. It can be charged to a preset voltage and then disconnected from the charging circuit. As ions are collected, the voltage is reduced by an amount V = Q/C, where Q is the collected charge and C is the capacitance. When an ionization chamber is attached to an external circuit, the collected electrons and ions cause a current to flow in the circuit. This may be a continuous current or a pulse, depending on the time distribution of the incident radiation relative to the collection time of the ions. Typically the electrode configuration is either (a) cylindrical, with a center-wire anode and a coaxial cathode, or (b) a set of parallel plates. Other geometries are possible.
Multiwire configurations, with parallel anode wires strung between cathode plates or with alternating anode and cathode wires, are often used for charged-particle detection, but these configurations are used with gas multiplication, so they are classed as multiwire proportional counters.
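The capacitor relation V = Q/C above can be turned into a quick numerical sketch of an isolated air-filled chamber. The chamber volume, capacitance, and dose below are assumed round values; W ≈ 34 J/C is the familiar mean energy expended in air per unit of liberated charge.

```python
# Voltage drop of an isolated air-filled chamber, Delta_V = Q / C.
# Chamber volume, capacitance, and dose are assumed illustrative values.
W_AIR = 33.97      # mean energy expended in air per unit charge, J/C
RHO_AIR = 1.205    # air density at room conditions, kg/m^3

def voltage_drop(dose_gy, volume_m3, capacitance_f):
    energy_j = dose_gy * RHO_AIR * volume_m3   # energy deposited in the gas
    charge_c = energy_j / W_AIR                # liberated charge Q
    return charge_c / capacitance_f            # Delta_V = Q / C

# A ~2 cm^3 chamber at 5 pF receiving 1 mGy drops by roughly 14 V:
dv = voltage_drop(1e-3, 2e-6, 5e-12)
```

A drop of order ten volts for a mill<br></br>igray-scale dose is easily read on an electrometer, which is why this simple charged-capacitor scheme works well for pocket dosimeters.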
USES

Ionization chambers are most commonly used in radiation dosimeters and in portable radiation survey meters (see Fig. 1). They are also used as fixed radiation monitors, where they can be adapted to various kinds of radiation. An isolated ionization chamber is particularly useful for measuring time-integrated radiation dose. The device is simple and can be made quite small. Such chambers are used for monitoring the radiation dose of personnel working in radiation areas. A typical pocket dosimeter is a cylindrical ionization chamber, about the size of a thick pen, with air between the anode wire and the coaxial cathode. The device is charged to a preset voltage before the worker enters the radiation area. After the possible radiation exposure the voltage is read on an electrometer. By the well-known equation for a capacitor, Vi − Vf = Q/C, where Q is the total charge collected from the ionization and C is the capacitance of the chamber, the reduction of voltage is proportional to the number of ion pairs created in the gas volume. For a given type of radiation the collected charge is proportional to the radiation dose, and the device can be readily calibrated by exposure to standard radiation sources. Some pocket dosimeters have built-in electrometers that can be viewed by looking into one end. Ionization chambers are also commonly used in radiation survey meters to measure radiation dose rate. This application requires an external electronic circuit to measure, and indicate on a meter, the current from the chamber. For a given type of radiation the current is proportional to the dose rate. A virtue of an ionization chamber compared with a proportional counter or a Geiger counter is that the current is insensitive to the applied voltage. Thus it is a simple and stable device. Ionization chambers can also be used for detection of individual charged particles.
In this case the collection of the ionization from the particle traversal of the chamber causes a current pulse in the electronic circuit. As long as the average
time between particle traversals is longer than the collection time of the ionization, the output will appear as current pulses. For this purpose it is more common to use proportional counters or Geiger counters, because the pulses are much larger.

IONIZING RADIATION

The physical process that makes ionization chambers work is the ionization of gases by radiation. Different forms of radiation cause ionization through different mechanisms.

Ionization by Charged Particles

Charged particles (e.g., electrons, protons, and ionized atoms) lose energy mainly by scattering from electrons in the gas, causing excitation or ionization of the atoms. The interaction is through the Coulomb force, which is long range, so scattering can occur even when the charged particle is far outside the nominal radius of the atom. The theory of energy loss of charged particles in matter is a complicated subject in its own right. For the purpose of understanding the performance of ionization chambers it is necessary only to have some grasp of the magnitude of the rate of energy loss and how it depends on the particle type, the particle energy, and the kind of gas. An approximate formula due to Bethe for the rate of energy loss, also called stopping power, is

−dE/dx = (4πe⁴z²/mv²) NZ ln[1.123mv³/(ze²ω)]

where z and v are the charge number and velocity of the ionizing particle, m and e are the electron mass and charge, Z and N are the atomic number and the number of atoms per unit volume of the material being ionized, and ω is a constant characteristic of the atoms being ionized. A more complete formula, the Bethe–Bloch formula, can be found in many physics textbooks.
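The approximate formula can be evaluated directly in Gaussian (CGS) units. The gas parameters and the characteristic frequency ω below are assumed round values, chosen only to illustrate the z² scaling of the formula, not to reproduce tabulated stopping powers.

```python
import math

# Illustrative evaluation of the approximate stopping-power formula
# quoted above, in Gaussian (CGS) units, for a nonrelativistic heavy
# particle. omega and the gas parameters are assumed round values.
E_ESU = 4.803e-10   # electron charge, esu
M_E = 9.109e-28     # electron mass, g

def stopping_power(z, v, n_atoms, Z, omega):
    """-dE/dx in erg/cm for a particle of charge number z and speed v
    (cm/s) in a gas of n_atoms atoms/cm^3 with atomic number Z."""
    prefactor = 4.0 * math.pi * E_ESU**4 * z**2 * n_atoms * Z / (M_E * v**2)
    log_term = math.log(1.123 * M_E * v**3 / (z * E_ESU**2 * omega))
    return prefactor * log_term

# Neon-like gas at 1 atm (~2.7e19 atoms/cm^3, Z = 10); omega from an
# assumed ~15 eV characteristic excitation energy.
OMEGA = 15.0 * 1.602e-12 / 1.054e-27   # rad/s
v = 3.1e9                              # cm/s, roughly a 5 MeV proton
ratio = (stopping_power(2, v, 2.7e19, 10, OMEGA)
         / stopping_power(1, v, 2.7e19, 10, OMEGA))
```

At equal velocity the alpha-to-proton ratio comes out near 3.7: the z² = 4 prefactor, reduced slightly by the z inside the logarithm.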
Figure 1. Photograph (a) and sketch (b) of an ionization chamber used in a portable radiation survey meter. Part of the cathode cylinder has been cut away and the thin end window has been removed to show the interior. The cathode is a thin, conductive, carbon film deposited on the inside of a rigid plastic cylinder. (Photograph by C. C. Foster.)
An important characteristic of stopping power is that as a particle loses energy, its rate of energy loss increases, reaching a maximum just before the particle stops. The increase is quite sharp at low energy, creating the so-called Bragg peak. This is illustrated in Fig. 2 for protons and for alpha particles in neon gas. The number of ion pairs produced in a chamber therefore depends on the particle energy and the particle type, and it will be largest if the particle stops in the chamber. For particle kinetic energies considerably above the particle rest energy the rate of energy loss goes through a minimum, and at higher energies it is nearly independent of the particle energy. Cosmic-ray muons are minimum ionizing particles and constitute the major background radiation in environments away from artificial radiation sources.

Ionization by Electromagnetic Radiation (Photons)

An atom or molecule can be excited by absorbing or scattering a quantum (photon) of electromagnetic radiation. However, unless the photon can impart at least as much energy as the binding energy of the least bound electron in an atom, ionization will not occur. The energy of a photon is given by the Planck relationship E = hν = hc/λ, where h is Planck's constant, ν is the frequency, c is the velocity of light, and λ is the wavelength. The photons of radio waves and even of visible light are not energetic enough to ionize a gas. To get some feeling for the energies involved, the energy of a 1 GHz radio-wave photon is 4 μeV. The energy of a 460 nm photon (blue light) is 2.7 eV, but the energy required to ionize a nitrogen gas molecule is 15.5 eV. Thus, radio waves and visible light are not ionizing radiation, and ionization chambers are not suitable for detecting them. X rays and gamma rays are, however, ionizing. The ionization may occur through one of three processes: the photoelectric effect, Compton scattering, or pair production.
The dominant process depends on the energy of the photon. The electrons produced in the primary ionization event then lose their kinetic energy through ionization of the gas in the chamber. Thus the total number of ions produced by the traversal of an X ray or gamma ray depends in a complicated way on the energy of the photon.
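The Planck relation and the thresholds quoted above are easy to check numerically:

```python
# Photon energies from the Planck relation E = h*nu = h*c/lambda,
# compared with the ionization energy of nitrogen quoted in the text.
H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt

def photon_energy_ev(wavelength_m):
    return H * C / (wavelength_m * EV)

def photon_energy_ev_from_freq(freq_hz):
    return H * freq_hz / EV

blue_light = photon_energy_ev(460e-9)          # ~2.7 eV
radio_1ghz = photon_energy_ev_from_freq(1e9)   # ~4 micro-eV
N2_IONIZATION_EV = 15.5
# Neither visible light nor radio waves can ionize nitrogen:
assert blue_light < N2_IONIZATION_EV and radio_1ghz < N2_IONIZATION_EV
```

A 0.1 MeV X ray, by contrast, carries thousands of times the ionization energy, which is why each absorbed photon can liberate an energetic photoelectron that goes on to ionize many more gas molecules.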
2
MeV (mg/cm2)
1.5 Alpha particles 1
Response to Neutrons Neutrons, being electrically neutral, do not directly produce ionization. Nevertheless, ionization chambers are useful for dose measurements of slow neutrons. For this purpose a gas containing a nucleus with a very large cross section for absorbing thermal neutrons is introduced into the chamber. Boron trifluoride (BF3), which serves both as the chamber gas and as the reaction target, is commonly used for this purpose. The nuclear reaction that takes place is 10B(n, 움)7Li. The alpha particle and 7Li ion produce the ionization in the gas. Boron trifluoride chambers can be used for broad-spectrum neutron dose monitors. In this case, the chamber is surrounded with plastic or another material rich in hydrogen to moderate the neutrons down to thermal energy where the 10 B(n, 움) reaction will take place. Another useful reaction for thermal neutron dosimetry in a gas counter is 3He(n, p)3H. High-energy neutrons can produce nuclear reactions in any material, but high-energy neutron detection is generally done with liquid or solid organic scintillators rather than gas counters. Here the neutron scatters from a proton in the scintillation material. Through the same mechanism a hydrogenfilled gas counter will respond to high-energy neutrons, but, because of the low density of the gas the detection efficiency is extremely small. FURTHER CONSIDERATIONS
The photoelectric effect is the dominant ionization process for X rays and dominates for photon energies less than about 0.1 MeV. In this process a photon is absorbed by the atom and an electron is emitted, leaving a positive ion behind. The excess energy above that required to remove the electron from the atom appears as kinetic energy of the electron. For somewhat higher energies, up to several MeV, the dominant ionization process is the Compton effect. In this process, the photon scatters from a bound electron and imparts sufficient energy kinematically to free the electron. The scattered photon, since it has lost energy, emerges with a lower frequency or longer wavelength. The energy imparted to the electron depends on the angle of scattering, which is random. The scattering obeys the laws of conservation of energy and momentum, where, in the kinematical equations, the momentum of the photon is hν/c. For yet higher energies the dominant process is pair production, in which the photon is absorbed on an atom and an electron–positron pair is produced. This obviously requires a photon of energy greater than the rest mass of the electron–positron pair (1.022 MeV). In considering the response of an ionization chamber to photons, at the design stage one should note that a primary ionization event is more likely to occur in the dense walls of the chamber than in the low-density gas. However, this is not likely to be a practical problem, since calibration will be done with the complete chamber.
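As a rough rule of thumb, the dominant process can be bucketed by photon energy. In the sketch below, the 0.1 MeV boundary and the 1.022 MeV pair threshold come from the text, while the few-MeV crossover to pair production is an assumed illustrative value; the true boundaries depend strongly on the absorber's atomic number.

```python
PAIR_THRESHOLD_MEV = 1.022  # rest-mass energy of an electron-positron pair

def dominant_process(energy_mev, pair_crossover_mev=5.0):
    """Roughly classify the dominant photon interaction in a low-Z absorber.
    pair_crossover_mev is an assumed illustrative crossover, not a constant;
    the real value depends on the absorber material."""
    if energy_mev < 0.1:
        return "photoelectric"
    if energy_mev < pair_crossover_mev:
        return "Compton"
    return "pair production"

for e_mev in (0.05, 1.0, 10.0):
    print(f"{e_mev} MeV -> {dominant_process(e_mev)}")
```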
Figure 2. Rate of energy loss versus particle energy for protons and alpha particles. Data from Ref. 1.

Guard Rings
In parallel plate chambers, guard rings may be used to define the volume from which electrons are collected and/or to minimize leakage current. For example, in a circular chamber, the guard ring would be a flat metal ring of slightly larger diameter than the anode (collector) placed around the anode. This
is set to an electrical potential approximately equal to the anode potential. This extends the uniform electric field region outside the area of the anode so that edge effects are eliminated. Since the anode is insulated from the guard ring, a guard ring is also useful for minimizing the electric field across the anode insulator, thus reducing possible leakage current.

Drift Velocity

The electrons in the gas will gain energy as they are accelerated by the electric field and lose energy through collisions with gas molecules. In this process of gaining and losing energy they will acquire an average drift velocity that depends on the ratio of electric field strength to gas pressure. Very roughly, for nitrogen at atmospheric pressure, a voltage gradient of 10⁴ V/m results in an electron velocity of about 10³ m/s. The positive ions travel roughly a thousand times slower.
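These rough drift velocities translate directly into signal collection times. In the sketch below, the 2 cm electrode gap is an assumed illustrative value, not a figure from the text:

```python
GAP_M = 0.02              # assumed 2 cm electrode spacing (illustrative)
V_ELECTRON = 1e3          # m/s, rough electron drift velocity from the text
V_ION = V_ELECTRON / 1e3  # positive ions travel ~1000x slower

t_electron = GAP_M / V_ELECTRON  # time for electrons to cross the gap
t_ion = GAP_M / V_ION            # time for positive ions to cross the gap

print(f"electron transit: {t_electron * 1e6:.0f} us")  # 20 us
print(f"ion transit:      {t_ion * 1e3:.0f} ms")       # 20 ms
```

The thousand-fold separation of time scales is why pulse-mode chambers commonly derive their signal from the fast electron component alone.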
Reading List

The classical texts on ionization chambers are: B. B. Rossi and H. H. Staub, Ionization Chambers and Counters, New York: McGraw-Hill, 1949; and D. H. Wilkinson, Ionization Chambers and Counters, Cambridge: Cambridge University Press, 1950. These books are quite old but contain much detailed information. A newer reference intended for physics students that discusses ionization chambers, proportional counters, multiwire chambers, and drift chambers is W. R. Leo, Techniques for Nuclear and Particle Physics Experiments, Berlin: Springer-Verlag, 1994. Tables of ionization potentials of various gases can be found in the CRC Handbook of Chemistry and Physics, 75th ed., Boca Raton, FL: CRC Press.
CHARLES D. GOODMAN Indiana University
IONIZATION, ELECTRON IMPACT. See ELECTRON IMPACT IONIZATION.
Gridded Chambers

Because of the finite drift time of the electrons, the pulse shape from a chamber operated in the pulse mode depends on the position and orientation of the particle track. In the early days of ionization chambers, considerable attention was given to introducing grids at some potential between the anode and cathode. This divided the chamber into two parts. The primary ionization event would take place in the region between the grid and the cathode, and the electrons would drift through the grid and then always fall through the same potential difference to produce the pulse. This particular problem seems to be of little concern now, since essentially all particle detection applications use proportional counters. In fact, the drift time is often measured and exploited in multiwire drift chambers to determine the position of particle tracks between wires. A gridded chamber may, however, be useful for measuring drift times. Reference 2 shows such an application and also shows that it is possible to construct an ionization chamber with a liquid dielectric.

Recombination and Attachment

Two processes that can cause loss of electrons are recombination and attachment. If an electron encounters a positive ion on its path toward the anode, it may recombine and is thus lost. This is an unlikely process unless the density of ions is large. A situation in which this process might be of some significance is if the particle tracks are densely ionizing and parallel to the electric field. Attachment cross sections are very small for most gases, and this process is usually not important. However, oxygen has an appreciable attachment cross section, and this effect should be considered in a chamber containing oxygen.

BIBLIOGRAPHY

1. L. C. Northcliffe and R. F. Schilling, Range and stopping-power tables for heavy ions, Nucl. Data Tables, A7: 233–463, 1970.
2. E. Shibamura, A. Hitachi, T. Doke, T. Takahashi, S. Kubota, and M. Miyajima, Drift velocities of electrons, saturation characteristics of ionization and W-values for conversion electrons in liquid argon, liquid argon–gas mixtures and liquid xenon, Nucl. Instrum. Methods, 131: 249–258, 1975.
IONIZATION RADIATION DAMAGE TO SEMICONDUCTORS. See RADIATION EFFECTS. IONIZING RADIATION DETECTORS. See RADIATION DETECTION.
IONOSPHERE. See ELECTROMAGNETIC WAVES IN THE IONOSPHERE.
IONOSPHERE ELECTROMAGNETIC WAVES. See ELECTROMAGNETIC WAVES IN THE IONOSPHERE.
IONOSPHERIC RADIO PROPAGATION. See SKY WAVE PROPAGATION AT MEDIUM AND HIGH FREQUENCIES.
ION-SELECTIVE ELECTRODES. See ELECTROCHEMICAL ELECTRODES.
IRON-SILICON. See SOFT MAGNETIC MATERIALS. IRRADIATION MEASUREMENT. See RADIOMETRY.
Wiley Encyclopedia of Electrical and Electronics Engineering
Light Water Reactor Control Systems
Standard Article
John A. Bernard, Massachusetts Institute of Technology, Cambridge, MA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5206
Article Online Posting Date: December 27, 1999
Abstract
The sections in this article are: Neutron Life Cycle; Reactor Operation; Light Water Reactor Operation.
LIGHT WATER REACTOR CONTROL SYSTEMS

The control and operation of nuclear reactors that use uranium for fuel and light water for both moderator and coolant are discussed in this article. The starting point is the physics of the neutron life cycle, which determines the dynamic behavior of a nuclear reactor. Reactor startup, low-power operation while critical, and operation in the presence of feedback effects are then enumerated. Finally, specific aspects of the operation of both pressurized and boiling water reactors (PWRs and BWRs) are reviewed. The material in the first two sections of this article is extendible to reactors that utilize other fuels, moderators, and coolants.
NEUTRON LIFE CYCLE An isotope is defined as being fissile if its nucleus will split in two or, in other words, undergo nuclear fission, if struck by a neutron that has an extremely small kinetic energy, typically 0.025 eV, which corresponds to a speed of 2200 m/s. There are only four fissile isotopes (233 U, 235 U, 239 Pu, and 241 Pu). Of these, only 235 U, which constitutes 0.71% of all uranium atoms, is naturally occurring. Because light water absorbs neutrons, the fuel utilized in light water reactors (LWRs) must be enriched in the 235 U isotope in order to permit the establishment of a self-sustaining neutron chain reaction. If the LWR is large and therefore has a small surface-to-volume ratio, few neutrons will diffuse out of the fueled region, or core. The neutron leakage is said to be small and a low degree of enrichment is possible. This is the case with PWRs and BWRs, which have fuel enriched to 2% to 3%. In contrast, if the LWR is small and has a large surface-to-volume ratio, neutron leakage will be large and high enrichments will be needed. This is the case for research and test reactors, which may have enrichments of 20% or more. Figures 1 and 2 show the cross sections of 235 U and 238 U for neutrons over the energy range 1 eV to 10,000 eV. Cross sections are traditionally given in barns, with 1 barn equaling 10−24 cm2 . Cross sections may be defined for any type of interaction (scattering, absorption, fission, total) between a neutron and the nucleus in question. The cross section specifies the probability that the interaction will occur. For the 235 U fission cross section, the important features are that the probability of an interaction with neutrons at high energy (keV to MeV range) is quite small, a few barns, and that the probability of an interaction at low energy (≈0.025 eV) is high. 
For the 238 U total cross section, the important features are that some neutron absorption will occur at high energies and that there are several sharply defined resonances of large magnitude, with the first at 6.67 eV. The first of these features may result in fission if the incident neutron has an energy in excess of 1.5 MeV. The second results in neutron capture. The fissioning of 235 U produces an average of 2.5 neutrons. These are emitted over a certain energy distribution, the most probable energy being slightly below 1 MeV and the maximum at about 10 MeV. Neutrons in this energy range are classified as fast because of their high kinetic energies. The probability of these neutrons interacting with 235 U to cause fission is remote, given the small 235 U cross section in this energy range. Examination of Fig. 1 suggests that a self-sustaining neutron chain reaction is possible if the fast neutrons are slowed down, or thermalized, in order to take advantage of 235 U's large fission cross section for low-energy, or thermal, neutrons. The slowing-down process is called neutron moderation. Moderation is optimized by causing the fast neutrons to collide with something of similar mass. A head-on collision between two objects of the same mass will result in a complete transfer of energy. Thus, for nuclear reactors, the moderators of choice are hydrogen-rich substances such as light water. It is important to recognize that the slowing down occurs in discrete steps. Neutrons do not lose energy continuously. Rather they lose it every time they undergo a collision. The physics of the neutron life cycle that is needed to support a self-sustaining chain reaction can now be understood. This cycle is shown in Fig. 3. Assume that a certain number (n) of fast neutrons have been produced by the thermal fission of 235 U. What can happen to these neutrons? The fuel consists of the isotopes 235 U and 238 U. So, to understand the fate of the neutrons, consider what occurs as the fast neutrons move from high to low energy while subject to the interaction probabilities shown in Figs. 1 and 2. The possibilities are:
1. Some fast neutrons may be absorbed by 238 U and cause fission. This process is called fast fission and is quantitatively represented by the fast fission factor ε which is the ratio of the total number of neutrons produced from fast and thermal fission to the number produced from thermal fission alone. Values of ε depend on the enrichment, with ε approaching unity (from above) for fully enriched fuel. 2. Fast neutrons may also escape from the reactor core. This is referred to as fast-neutron leakage. Such neutrons are lost from the life cycle and will not cause a fission reaction. The quantity Lf is defined as the ratio of the total number of fast neutrons escaping leakage to the total number produced from fast and thermal fission. Lf is often called the non-leakage probability and therefore the fraction that does leak out is given by (1 − Lf ). 3. The remaining neutrons collide with moderator nuclei and, as a result, lose energy in a discontinuous manner. If, after a collision, a neutron’s energy corresponds to one of the 238 U resonances, it will be absorbed in 238 U. Such interactions do not continue the neutron chain reaction, because only neutrons with kinetic energies in excess of 1.5 MeV can cause 238 U to fission. Hence, during the slowing-down process, any neutrons that happen to have energies corresponding to a 238 U resonance are, like those that leaked out of the reactor core, lost from the life cycle. The quantity p is defined as the resonance escape probability. It is the ratio of the total number of thermalized neutrons to the total number of fast neutrons that escaped leakage. The quantity p is a function of the enrichment, and it approaches unity (from below)
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 2007 John Wiley & Sons, Inc.
Figure 1. Microscopic cross section of 235 U. Source: BNL-325.
Figure 2. Microscopic cross section of 238 U. Source: BNL-325.
for fully enriched fuel. 4. The next fate that might befall a neutron is that it leaks out of the core while at thermal energies. This is referred to as thermal neutron leakage. The quantity Lt is defined as the ratio of the total number of thermal neutrons escaping leakage to the total number of thermalized neutrons. 5. The neutrons are now at thermal energies. Some will be absorbed in the 235 U. Others will be absorbed in 238 U, core structural materials, control devices, or even the coolant. The thermal utilization factor, f, is defined as the ratio of the thermal neutrons absorbed in the fuel to the total number of thermal neutrons escaping leakage. This parameter f is the quantity that reactor operators alter when they manipulate control devices. 6. Of those neutrons absorbed in 235 U, only about 80% will cause fission. The rest result in a transformation to 236 U. The ratio of the fission and the total absorption cross sections gives the fraction that contribute to the neutron life cycle. For every fission, a certain number of neutrons are produced. This quantity is denoted by the symbol ν. The thermal reproductive factor η = νΣf/Σa is defined, where Σf and Σa are the macroscopic cross sections for fission and absorption respectively. (Note: The cross sections shown in Figs. 1 and 2 are the microscopic ones, denoted by the symbol σ. Macroscopic cross sections are the product σN, where N is the number density (nuclei/cm³) of the material, 235 U in this instance.) Equivalently, η is the ratio of the number of fast neutrons produced from thermal fission to the number of thermal neutrons absorbed in fuel.
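The roughly 80% figure and the value of η can be checked from thermal-neutron data for 235 U. The cross sections and ν below are common handbook values, assumed here for illustration (they are not given in the article); note that in the ratio νΣf/Σa the number density N cancels, so microscopic values suffice.

```python
# Thermal (0.025 eV) neutron data for 235U -- standard handbook values,
# assumed here for illustration:
NU = 2.44        # average neutrons emitted per thermal fission
SIGMA_F = 582.0  # microscopic fission cross section, barns
SIGMA_A = 681.0  # microscopic absorption cross section (fission + capture), barns

# Sigma = sigma * N, so N cancels in these ratios:
fission_fraction = SIGMA_F / SIGMA_A  # fraction of absorptions that cause fission
eta = NU * SIGMA_F / SIGMA_A          # thermal reproductive factor

print(f"fraction of 235U absorptions causing fission: {fission_fraction:.2f}")
print(f"thermal reproductive factor eta: {eta:.2f}")
```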
The product of the above six quantities is the core multiplication factor K. Thus,

K = ε Lf p Lt f η      (1)
Equation (1) is commonly referred to as the six-factor formula. If the core is surrounded by a reflector of such size that the probability of non-leakage becomes unity (Lf = Lt = 1.0), then Eq. (1) reduces to:

K∞ = ε p f η      (2)
which is called the four-factor formula. The symbol K∞ implies an infinite reflector. The quantity K may be interpreted physically as

K = (neutrons produced from fission in one generation) / (neutrons absorbed or lost to leakage in that generation)      (3)
It is useful, although admittedly artificial, to consider neutrons as moving through the life cycle shown in Fig. 3 in successive groups, or generations. In that case, the quantity K may also be written as

K = (number of neutrons in the present generation) / (number of neutrons in the preceding generation)      (4)
This interpretation follows because neutrons in the present generation were, aside from some small source contributions that are discussed below, all produced from fission, and those in the preceding generation were either absorbed (including absorption in fuel) or lost to leakage. If a reactor’s core multiplication factor is exactly unity, then the reactor is said to be critical. That is, the number of neutrons produced from fission is exactly balanced by those that are absorbed or lost to leakage. A reactor can be critical at any power level, because the number of neutrons in the life cycle does not matter to the achievement of criticality as long as a balance exists between production and removal. However, a reactor’s power level is proportional to the number of neutrons in the life cycle. More neutrons imply more fissions and hence a greater energy release. Thus, the reactor with the greater number of neutrons in its life cycle will be at the higher power level.
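Evaluating the six-factor formula is a one-line calculation. The sketch below uses illustrative round numbers for each factor (assumed values, not data from the article) and also evaluates the four-factor form by setting both non-leakage probabilities to unity.

```python
def core_multiplication(eps, L_f, p, L_t, f, eta):
    """Six-factor formula: K = eps * L_f * p * L_t * f * eta (Eq. 1)."""
    return eps * L_f * p * L_t * f * eta

# Illustrative (assumed) factor values for a large thermal reactor:
eps = 1.03   # fast fission factor
p = 0.75     # resonance escape probability
f = 0.71     # thermal utilization factor
eta = 2.02   # thermal reproductive factor
L_f, L_t = 0.97, 0.99  # fast and thermal non-leakage probabilities

K = core_multiplication(eps, L_f, p, L_t, f, eta)
K_inf = core_multiplication(eps, 1.0, p, 1.0, f, eta)  # four-factor formula (Eq. 2)

print(f"K     = {K:.3f}")  # exactly 1.000 would mean a critical core
print(f"K_inf = {K_inf:.3f}")
```

Leakage can only remove neutrons, so K is necessarily smaller than K∞ for any finite core.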
REACTOR OPERATION

If a reactor's core multiplication factor is known, the rate of change of its neutron population can be determined. For subcritical behavior such as occurs during a startup, it is useful to work with the parameter K directly. However, once critical, it is preferable to define a quantity called the reactivity ρ. The definition is

ρ = (K − 1)/K      (5)

Hence, ρ may be thought of as the fractional change in the neutron population per generation. A reactivity of zero implies that the reactor is exactly critical. If the reactivity is negative, the reactor is subcritical. If it is positive, the reactor is supercritical. Reactivity is a global property of a nuclear reactor. Nevertheless, it is common practice to associate a reactivity worth with each core component and feedback effect as if these components and/or effects existed independently. For example, movement of a control device will alter the core multiplication factor and hence insert a certain amount of reactivity. The actual magnitude of the change is usually expressed as a function of the distance over which the device is moved. In reality, it is a function of both that distance and the current core configuration. If the latter were changed, a different reactivity worth might be observed for the same movement of the device. The reactivity associated with a control device may be tabulated in terms of either a differential or an integral value. The former is the change in reactivity per unit distance of travel at a given position. It has units of millibeta per meter. The latter is the total change in reactivity generated by moving a device from one position to another. Its units are millibeta. Operation of a reactor may be subdivided into three distinct regimes. These are subcritical, critical but below the point-of-adding-heat, and critical at appreciable power. Each regime is associated with a different dynamic behavior. The reactor dynamics of the first regime are characterized by the subcritical multiplication process. The neutron population grows in response to the slow removal of the control mechanisms until a self-sustaining neutron chain reaction is achieved. The second regime is that in which the power is raised from whatever level existed when criticality was attained to the point-of-adding-heat, which is usually 1% to 3% of full power. The point-of-adding-heat is defined as the power level above which a change in power is reflected as a change in temperature. At lower powers, the plant heat capacity is such that power changes do not affect temperature. Reactor dynamics are described by the point kinetics equations. The distinguishing feature of operation in this regime is the absence of reactivity feedback effects associated with changes in temperature, void fraction, or fission product poisons such as xenon. The third regime, operation at power, is also describable in terms of the point kinetics equations provided that allowance is made for the many feedback effects.

Reactor Startup and Subcritical Operation

A prerequisite for a safe reactor startup is that there are sufficient neutrons present so that the nuclear instrumentation is operable. In a shut-down reactor, the ratio of photons (gamma rays) to neutrons may be as great as 100:1. Thus, it is important that the instruments be capable of distinguishing neutrons from photons and that the neutron population be sufficient to generate on-scale signals. Possible neutron sources in a shut-down reactor include spontaneous fission, photoneutron reactions, and installed sources. Spontaneous fission is important as a source of neutrons in both PWRs and BWRs because of the large amount of 238 U present. The magnitude of this source decreases as enrichment rises or as reactors become smaller, because in both instances there is less 238 U.
Figure 3. Neutron life cycle.
Photoneutrons are produced when gamma rays that are emitted by certain fission products interact with a deuterium (heavy hydrogen) nucleus to yield a neutron and a proton:

γ + ²H → ¹H + n

The photoneutron reaction is important in LWRs because deuterium oxide, which is heavy water (D2O), is present in ordinary water (H2O). The gamma rays needed to initiate the reaction require an energy of at least 2.2 MeV. Fission products that produce such gamma rays decay within a few months of a reactor shutdown. Thus, for a photoneutron source to be of use, the reactor must have been recently operated at full power for an extended time. There are several possible types of installed sources. One of the most common is plutonium–beryllium, or PuBe. The reaction is

²³⁹Pu → ²³⁵U + α, followed by α + ⁹Be → ¹²C + n
A PuBe source consists of a mixture of plutonium and beryllium powder that is doubly encased in stainless steel. Heat transfer from these sources is therefore poor, and they must be removed from a reactor prior to the production of any appreciable power. [It is important to recognize that removal of such a source from a critical reactor will cause a power increase because moderator (light water) replaces the space previously occupied by the steel casing. This is further explained below under feedback effects.] Sources of the type described by the chemical equations above can also be manufactured using certain other alpha emitters such as polonium. Another type of installed source is antimony–beryllium. The reaction is

γ + ⁹Be → ⁸Be + n
Radioactive antimony (124 Sb) is a prerequisite for the operation of this source. Both spontaneous fission and the photoneutron reaction provide a homogeneous source of neutrons. No consideration need be given to source–detector geometry. However, if an installed source is used, then its placement becomes an issue. Installed sources should be positioned at the center of the core, with neutron detectors located symmetrically on the core perimeter, one in each quadrant. A shut-down reactor that contains fissile material will have a core multiplication factor that is greater than zero but less than one. The actual neutron population in this reactor will be greater than the source population alone. This is evident from the neutron life cycle as shown in Fig. 3. Suppose a source emits S0 neutrons every generation. Once these S0 neutrons have completed their life cycle sequence, they will have contributed KS0 neutrons towards the next generation. The total neutron population after one generation will therefore be S0 + KS0. Similarly, two generations later there will be S0 + KS0 + K²S0 neutrons, the contribution from the original S0 neutrons now being the term K²S0. The original S0 neutrons contribute to each successive generation until Kⁿ approaches zero, where n is the number of generations. The existence of a neutron population in excess of the source level in a subcritical fissile medium is referred to as subcritical multiplication. After n generations, that population will be

N = S0(1 + K + K² + ··· + Kⁿ)      (6)
If the core multiplication factor K is less than unity, then the series 1 + K + K² + ··· + Kⁿ will converge to 1/(1 − K) as Kⁿ approaches 0. Thus, the total neutron population is S0/(1 − K). This relation may be used to calculate the equilibrium neutron level in a subcritical reactor. To do this, recognize that the count rate (CR) obtained over a designated time interval is proportional to the neutron population. Thus,

CR = CR0/(1 − K)      (7)
Several caveats apply. First, this formula for subcritical multiplication does not allow calculation of the time required to attain criticality, because time does not appear in it. Second, as the core multiplication factor K approaches unity, the number of generations and hence the time required for the neutron population to stabilize get longer and longer. This is one reason why it is important to conduct a reactor startup slowly. If the startup were to be done rapidly, there would not be sufficient time for the neutron population to attain its equilibrium value. Third, the equilibrium neutron level in a subcritical reactor is proportional to the neutron source strength. This is why it is important to have neutron count rates in excess of some minimum prior to initiating a startup. Fourth, the formula
is only valid while subcritical. This restriction applies because Eq. (6) will converge to a limit only if K is less than unity. Fifth, K is not zero in a shutdown reactor, but typically ranges between 0.90 and 0.95. Equation (7) is not very useful, because a method to measure K does not exist. However, both the count rate and the source level are measurable. The latter is simply the count rate with the reactor in a shut-down state. Thus, it is of benefit to rearrange Eq. (7) to

CR0/CR = 1 − K      (8)
where CR0 is the initial count rate. This is the equation of a straight line where K is the independent variable and CR0 /CR is the dependent variable. Often, the latter term is written as S0 /CR. It can be used to predict the attainment of criticality provided that S0 remains constant. For example, suppose an installed source is at the core center and fuel assemblies are being loaded. Neutron counts are recorded after every fifth assembly is installed. A plot of S0 /CR versus the number of assemblies can be used to estimate the fuel loading required to achieve criticality. This is illustrated in Fig. 4. The broken line is an extrapolation of the measured data. It shows that criticality will be achieved at about 60 assemblies. Such graphs are referred to as 1/M plots, where M stands for multiplication. Such plots can also be used to predict criticality during a reactor startup. In that case, the horizontal axis will be the position of the control devices. Actual 1/M plots may not be linear because of several factors. First, poor counting statistics may affect the initial portion of the plot. Second, the source–detector geometry may not result in a uniform distribution of neutrons throughout the core region. Third, the effect of the independent variable (number of assemblies, control device position, etc.) may not be linear. Fourth, the source level may not be constant. For example, if spent fuel is replaced with fresh, then the photoneutron contribution to the source will diminish because the fission products that produce the 2.2 MeV photons are being removed. Care must therefore be exercised in the use of these plots. If a reactor is to be loaded with fresh fuel, then two 1/M plots will be made: the first with all control devices fully inserted, and the second with certain devices removed from the core. The difference in the projections of the two plots represents the minimum amount, or margin, by which the reactor can be shut down. 
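A 1/M extrapolation of the kind shown in Fig. 4 amounts to a straight-line fit. Below is a minimal sketch in pure Python; the count-rate data are invented for illustration and chosen to reproduce the text's example of criticality near 60 assemblies.

```python
# 1/M extrapolation to predict criticality during fuel loading.
# Count-rate data are invented for illustration.
assemblies = [20, 25, 30, 35, 40, 45]
CR0 = 100.0  # counts per interval from the source alone (shut down)
count_rates = [150.0, 171.4, 200.0, 240.0, 300.0, 400.0]  # measured CR

one_over_m = [CR0 / cr for cr in count_rates]  # CR0/CR, i.e. 1 - K

# Least-squares straight-line fit of 1/M versus assemblies loaded:
n = len(assemblies)
mean_x = sum(assemblies) / n
mean_y = sum(one_over_m) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(assemblies, one_over_m)) \
        / sum((x - mean_x) ** 2 for x in assemblies)
intercept = mean_y - slope * mean_x

# Criticality (K = 1) corresponds to 1/M extrapolating to zero:
critical_loading = -intercept / slope
print(f"predicted critical loading: {critical_loading:.0f} assemblies")
```

With these data the fitted line crosses zero near 60 assemblies, matching the broken-line extrapolation described for Fig. 4.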
Fuel will be loaded only until the projection of the second plot is attained.

Figure 4. Example of 1/M plot.

Low-Power Operation While Critical

The starting point for the derivation of the time-dependent relations needed to describe neutron dynamics once criticality has been achieved is the fission process. The fission of a 235 U nucleus normally yields two fission fragments, an average of 2.5 neutrons, and an assortment of beta particles, gamma rays, and neutrinos. The neutrons that are produced directly from the fission event are referred to as prompt, because they appear almost instantly. Most of the neutrons produced in a reactor are prompt. However, certain fission fragments, which are called precursors, undergo a beta decay to a daughter nuclide that then emits a neutron. Neutrons produced in this manner are referred to as delayed. The delay is the time that must elapse for the precursor to undergo its beta decay. Delayed neutrons constitute an extremely small fraction of a reactor's total neutron population. The fraction of delayed neutrons that exists at fast energies is denoted by the symbol β. Figure 5 illustrates the fission process. There are three parallel paths whereby neutrons may be produced. These paths or mechanisms yield prompt neutrons, delayed neutrons, and photoneutrons respectively. Delayed neutrons and photoneutrons are born at energies that are less than those of their prompt counterparts. Nevertheless, all three types of neutrons are fast when born, and all three types require moderation in order to continue the neutron chain reaction. However, because they are born at somewhat lower energies, the delayed neutrons and photoneutrons are less likely to be lost from the core because of fast leakage than are the prompt neutrons. Hence, the fraction of these neutrons increases during the slowing-down process. To summarize: In absolute numbers, all three neutron populations decrease as they move through the neutron life cycle. However, the loss of the prompt neutrons is greatest because they are born at the highest energies. So the fraction of the delayed neutrons and photoneutrons increases. The fraction of these neutrons at thermal energies is denoted by the symbol β̄, which is commonly called the effective delayed neutron fraction. Its value for LWRs is typically 0.0065.
The effective delayed-neutron fraction β is a very small number. Nevertheless, delayed neutrons are crucial to the safe operation of a nuclear reactor. The reason is that they lengthen the average neutron lifetime so that control of the neutron chain reaction is possible. The time required for a prompt neutron to be born, thermalize, and cause a fission is about 1 × 10−4 s in an LWR. The corresponding average life time for a delayed neutron is about 12.2 s. Denote these times as tp and td . Then the average lifetime of a neutron is
°
°
(1 − β)tp + (β)td , or about 0.79 s. This is considerably longer than the 10−4 s prompt-neutron lifetime. Reactivity was defined earlier as (K − 1)/K, which is the fractional change in the neutron population per generation. It is common practice to quantify reactivity by reference to the delayed-neutron fraction. Thus, even though reactivity is a fraction and therefore dimensionless, several systems of units have been developed. The most common are K/K and the beta. The conversion for an LWR is that 1 beta of reactivity equals 0.0065 K/K. There are 1000 millibeta (mbeta) in 1 beta. Reactivity values are also sometimes quoted in dollars and cents, with 1 beta being equal to a reactivity of one dollar or 100 cents. The amount of positive reactivity present in a nuclear reactor should always be limited to some small percentage of the delayed neutron fraction. In this way, the delayed neutrons will be the rate-determining factor in any transient, and their 12.2 s lifetime will govern the reactor’s dynamics. The rationale for this approach is illustrated in Table 1. Shown are three cases, all with the initial condition that the reactor is critical with a population of 10,000 neutrons. For the first case, no change is made. One generation later there are 9935 prompt and 65 delayed neutrons. Criticality cannot be maintained without the delayed neutrons. Hence they are the rate-determining step. For the second case, 0.500 beta of reactivity is added. This corresponds to 0.00325 K/K, or 33 additional neutrons in the first generation. Thus, after one generation, there will be a total of 10,033 neutrons, of which 9968 will be prompt and 65 will be delayed. The reactor power, which is proportional to the number of neutrons in the life cycle, is rising. However, the delayed neutrons are still controlling, because it takes 10,000 neutrons to remain critical and there are only 9968 prompt ones. For the third case, 1.5 beta of reactivity is added. 
Light Water Reactor Control Systems

Figure 5. Fission process.

This corresponds to 0.00975 ΔK/K, or 98 neutrons in the first generation. Thus, after one generation there are 10,032 prompt and 66 delayed neutrons. There are more than enough prompt neutrons to maintain criticality. Thus, the prompt neutrons with their 10−4 s lifetime are the controlling factor. If an amount of reactivity equal to the delayed-neutron fraction is inserted, the reactor is said to be in a prompt critical condition. This terminology should not be construed to mean that reactivity insertions of less than that value are safe and those greater than it are unsafe. There is no sharp division between safe and unsafe. Rather, as reactivity is increased, there is a continuous decrease in the importance of the delayed neutrons to the reactor’s dynamic response. For that reason, transients in LWRs are normally performed with small reactivity additions such as 100 mbeta. Reactivity is not directly measurable, and hence most reactor operating procedures do not refer to it. Instead, they specify a limiting rate of power rise, commonly called a reactor period. The reactor period is denoted by the Greek letter τ and is defined as the power level divided by its rate of change:

τ(t) = P(t)/[dP(t)/dt]   (9)

where P(t) is the reactor power. Thus, a period of infinity (zero rate of change) corresponds to the critical condition. If the period is constant, the integration of Eq. (9) gives the relation

P(t) = P0 exp(t/τ)   (10)

where P0 is the initial power. A reactivity addition of 100 mbeta (0.00065 ΔK/K) would create a period of about 100 s. For such a period, Eq. (10) shows that an increase in reactor power by an order of magnitude (from 10% to 100%, for example) would require 230 s. Such a rate of rise is quite manageable. In order to understand the time-dependent behavior of a nuclear reactor, equations are needed that describe the response of the prompt and delayed neutron populations to changes in reactivity. This problem is mathematically complex, because the neutron population in a reactor is a function of both space (position in the core) and time. The spatial dependence is significant if the dimensions of the core exceed the distance that a neutron will travel while slowing down and diffusing. For many practical situations, it is acceptable to assume that the spatial and temporal behavior are separable. The result is the space-independent equations of reactor kinetics. These are often called the point kinetics equations. They are

dT(t)/dt = {[ρ(t) − β]/l*} T(t) + Σi λi Ci(t)   (11)

dCi(t)/dt = (βi /l*) T(t) − λi Ci(t),   i = 1, …, N   (12)

where
T(t) amplitude function
ρ(t) net reactivity
β effective delayed neutron fraction
βi delayed-neutron fraction of the ith precursor group
l* prompt neutron lifetime
λi decay constant of the ith precursor group
Ci concentration of the ith precursor group
N number of delayed-neutron precursor groups
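The numbers quoted above are easy to verify. The short sketch below uses the text's own LWR values (β = 0.0065, tp = 10−4 s, td = 12.2 s) to check the average neutron lifetime and the time for a tenfold power rise on a 100 s period:

```python
import math

beta = 0.0065      # effective delayed-neutron fraction
t_p = 1.0e-4       # prompt-neutron lifetime, s
t_d = 12.2         # average delayed-neutron lifetime, s

# Average neutron lifetime, (1 - beta)*tp + beta*td
l_avg = (1 - beta) * t_p + beta * t_d
print(round(l_avg, 3))                  # 0.079 s

# With P(t) = P0*exp(t/tau), a constant 100 s period raises power
# by a factor of 10 in tau*ln(10) seconds.
print(round(100.0 * math.log(10.0)))    # 230 s
```

The 0.08 s average lifetime, dominated entirely by the delayed-neutron term, is what makes manual and automatic control of the chain reaction practical.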
The reader is referred to one of the classic texts on reactor physics for the derivation of these equations (1, 2). The amplitude function T(t) is a weighted integral of all neutrons present in the reactor. It is common practice to equate T(t) to the reactor power, denoted here as P(t). However, this simplification ignores an important restriction on the validity of the point kinetics equations. Namely, the derivation requires that the shape (as opposed to the amplitude) of the neutron flux remain constant during a transient. This in turn implies that significant physical movement of a reactor’s control device would invalidate the point kinetics approach to the analysis of a transient. However, because the allowed magnitude of reactivity insertions is already limited for reasons of safety, this additional restriction often has little practical effect in the analysis of operational transients on LWRs. Table 2 gives the physical meaning of the quantities that appear in the point kinetics equations. The first kinetics equation describes the behavior of the neutrons. It states that the rate of change of the total neutron population equals the sum of the rates of change of the prompt and delayed neutrons. The second kinetics equation describes the behavior of the precursors. It says that the rate of change of the precursors is the difference between their production and loss, the latter being described by radioactive decay. Some twenty fission fragments may undergo a beta decay to produce a daughter nuclide that then emits a delayed neutron. Each has its own unique half-life. However, some of the half-lives are sufficiently similar so that the precursors can be represented as six effective groups. Thus, the value of N is usually six.
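The behavior of the two kinetics equations can be made concrete by integrating them numerically. The sketch below uses representative six-group delayed-neutron constants for thermal fission of 235U and a simple forward-Euler scheme; the specific group constants and the integration method are illustrative assumptions, not material from this article:

```python
# Representative six-group delayed-neutron constants for thermal fission
# of 235U (assumed values for illustration; they sum to beta ~ 0.0065).
beta_i = [0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273]
lam_i  = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]   # group decay constants, 1/s
beta   = sum(beta_i)
l_star = 1.0e-4                                       # prompt-neutron lifetime, s

def point_kinetics(rho, t_end, dt=1.0e-5):
    """Forward-Euler integration of the point kinetics equations for a
    step reactivity insertion rho, starting from critical equilibrium."""
    T = 1.0                                            # amplitude (power), relative
    # Equilibrium precursor concentrations: dCi/dt = 0 gives Ci = beta_i*T/(l*lam_i)
    C = [b * T / (l_star * l) for b, l in zip(beta_i, lam_i)]
    for _ in range(int(t_end / dt)):
        dT = ((rho - beta) / l_star) * T + sum(l * c for l, c in zip(lam_i, C))
        dC = [b * T / l_star - l * c for b, l, c in zip(beta_i, lam_i, C)]
        T += dt * dT
        C = [c + dt * dc for c, dc in zip(C, dC)]
    return T

# A +0.1 beta step: the power jumps almost instantly to roughly the
# prompt jump value, beta/(beta - rho) = 1.11, then climbs slowly.
print(point_kinetics(0.1 * beta, 0.5))
```

Notice the two very different time scales in the result: the prompt population adjusts in milliseconds (set by l*), while the subsequent slow rise is governed by the precursor decay constants.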
The next step in the analysis of reactor transients is to relate the reactor period to the reactivity. This has traditionally been achieved through use of the Inhour equation, which is valid only for step insertions of reactivity and then only after sufficient time has elapsed since the insertion to achieve asymptotic conditions (1, 2). A more general approach is to combine the point kinetics equations through a process of differentiation and substitution to obtain the dynamic period equation (3). This relation is rigorous, subject only to the aforementioned limitations on Eqs. (11) and (12). The derivation is done in terms of the quantity ω(t), which is the reciprocal of the instantaneous reactor period. Thus, τ(t) = 1/ω(t), where

ω(t) = [1/T(t)] dT(t)/dt   (13)
This definition is substituted into Eq. (11). The next step in the derivation is to define the standard effective multigroup decay parameter:

λe(t) = Σi λi Ci(t) / Σi Ci(t)   (14)
This parameter is time-dependent because the relative concentrations of the delayed-neutron precursor groups change as the power is raised and lowered. Specifically, as the power is increased, short-lived precursors dominate and the value of λe increases. As the power decreases, the reverse occurs. Application of this definition to the neutron kinetics equations yields

dT(t)/dt = {[ρ(t) − β]/l*} T(t) + λe(t) Σi Ci(t)   (15)
The next and crucial step in the derivation is to differentiate Eq. (15), the modified version of the neutron kinetics equation. So doing, and using the definition of the instantaneous period from Eq. (13) to eliminate the derivative of the amplitude function, we obtain
A series of substitutions is now made. First, the rate of change of the precursor concentrations is eliminated by substitution of Eq. (12):
Next, the quantity Ci (t) is eliminated by use of Eq. (14), the definition of the effective multigroup decay parameter. The quantity λi Ci (t) is then eliminated by substitution of Eq. (11) after modification by Eq. (13). The result is
Division by the amplitude function T(t) and consolidation of terms yields
Solving for ω(t) yields
The expression for the instantaneous reactor period is therefore
This relation is the standard dynamic period equation. It is a rigorously derived exact relation. Given that the prompt neutron lifetime is extremely small, terms containing l* can often be deleted and Eq. (21) reduces to
This result is a general relation that accurately predicts the instantaneous reactor period associated with any reactivity pattern provided that the prompt critical value is not approached. It can generally be considered valid provided that the reactor period is longer than 10 s. The instantaneous reactor period may also be expressed in terms of the alternate dynamic period equation. The two relations are mathematically equivalent. The alternate form avoids the presence of the term (λ̇e /λe )(β − ρ), where λ̇e denotes the time derivative of λe , which is difficult to evaluate numerically. The derivation is similar except that the differentiation is performed before substitution of the effective decay parameter and that parameter is defined differently as
The alternate form of the dynamic period equation is
Further information on both equations is given in Ref. 4. It has been shown that the Inhour equation is a special case (asymptotic conditions following a step reactivity insertion) of the dynamic period equation (5). Examination of either form of the dynamic period equation shows that the instantaneous reactor period is a function of the rate of change of reactivity, the reactivity, and the rate of redistribution of the delayed-neutron precursors within the defined groups. Furthermore, review of the derivation of this equation shows that the rate of change of reactivity is proportional to the prompt-neutron population, while the terms that contain the reactivity and the rate of redistribution of the precursors are related to the delayed-neutron population. Those realizations provide certain physical insights relative to reactor control and operation:

1. The period observed at any given moment in a reactor will depend on both the distance that a control device has been moved beyond the critical position and the rate at which that device is being moved. The former corresponds to the reactivity, and the latter to the rate of change of reactivity.

2. Changes in the velocity of a control device will have an immediate effect on the period because such changes alter the prompt-neutron population.

3. A high-speed insertion of the control devices over a short distance can be used to reduce the reactor power rapidly. Such power reductions, which are referred to as cutbacks, are used on some LWRs as an alternative to an abrupt reactor shutdown, or scram. Advantages of this alternative are that the reactor is subject to less thermal cycling and the time to restore full operation is reduced.

4. Reactivity cannot be changed on demand. Rather, a control device’s position or a soluble poison’s concentration has to be adjusted first. This takes time.

5. The dynamic response of a reactor is determined by that of its prompt- and delayed-neutron populations.
Prompt neutrons appear essentially simultaneously with the fission event and are therefore a function of the current power level. Delayed neutrons appear some time after the fission event and are therefore a function of the power history. This dependence on the power history means that delayed neutrons will not be in equilibrium with the observed power during a transient. Hence, upon attaining a desired power level, the delayed neutrons will continue to rise, and an overshoot will occur unless the controller is capable of reducing the prompt-neutron population at a rate sufficient to offset the still-rising delayed-neutron population.
These insights are best illustrated through analysis of a reactor’s response to step and ramp reactivity changes. The point kinetics equations can be solved analytically for step changes in reactivity. Figure 6 shows the behavior of both the neutrons and precursors following a step insertion such that ρ < β. The initial response is a rapid increase in the prompt-neutron population. This is called the prompt jump, and it represents the start of a nuclear runaway. However, the runaway cannot continue, because the reactivity is less than the delayed-neutron fraction. The prompt jump occurs almost instantly. (Note: The time scale on the figure is distorted.) This rapid rise is then followed by a more gradual one that corresponds to the growth of the delayed neutrons. Once asymptotic conditions are established, and if no feedback effects exist, the rise becomes exponential. The power after the prompt jump is given by

P1 = P0 β/(β − ρ)

where P0 is the power level before the step insertion. Note that the effect of a step insertion depends on the initial power level. Suppose a reactivity of 0.2β is inserted as a step, and also assume the initial power to be 10% of allowed. The final power is 12.5% of allowed, a minor change. But what if the initial power had been 90% of allowed? The final power would be 112.5% of allowed, a potentially serious problem. The power behavior following the prompt jump may be approximated as

P(t) = P0 [β/(β − ρ)] exp[λe ρ t/(β − ρ)]   (27)

where the rate of change of reactivity is zero, λe can be assumed to be 0.1 s−1, and ρ is the step reactivity insertion. Equation (27) is only valid for asymptotic conditions.

Figure 6. Response to a step change in reactivity.

Reactivity would never be deliberately added in a stepwise manner to an LWR. In contrast, ramp reactivity insertions are commonplace. These occur whenever control devices are moved or when the concentration of a soluble poison is adjusted. Numerical analysis is required to determine the effect of a ramp insertion accurately. The material presented here is qualitative and is shown with the objective of illustrating important aspects of a reactor’s dynamic behavior. Assume that reactivity is inserted at the rate of 10 mbeta/s for 10 s, held constant at 100 mbeta for 40 s, and then removed at the rate of −5 mbeta/s. The reactor is exactly critical at the start of the insertion. It is useful to calculate the period immediately before and after each change in the reactivity insertion rate. Thus, calculations are done at t = 0−, 0+, 10−, 10+, 40−, 40+, 60−, and 60+ s. In addition, a calculation is done at 50 s. The relation used is

τ(t) = [β − ρ(t)]/[dρ(t)/dt + λe(t)ρ(t)]

This relation is, as noted earlier, only approximate. Further, it is assumed that λe(t) is equal to 0.1 s−1. The problem is
solved with reactivity in millibetas. Thus, β equals 1000. At t = 0−, both the reactivity and its rate of change are zero. So the period is infinite. At t = 0+, the reactivity is still zero. But the rate of change of reactivity is now a positive 10 mbeta/s. Hence, τ = (1000 − 0)/[10 + (0.1)(0)] = 100 s. So the mere act of initiating a reactivity insertion has immediately placed the reactor on a positive period of 100 s. At t = 10−, there is 100 mbeta of reactivity present, and reactivity is still being added at the rate of 10 mbeta/s. The period has gone from 100 s to 45 s. The power is rising at an ever-increasing rate. At t = 10+, rod withdrawal stops and the rate of reactivity change becomes zero. This causes the period to lengthen to 90 s. The power is still rising but at a slower rate. At t = 40−, conditions are the same as at t = 10+. This means that the period was constant for 10 < t < 40. So, during that segment of the transient, the power rose on a pure exponential. At t = 40+, the reactivity is still 100 mbeta, but the rate of change of reactivity is now −5 mbeta/s. Hence, τ = (1000 − 100)/[−5 + (0.1)(100)] = 180 s. The reactor power is still rising. This is a very important observation. The fact that the control devices are being inserted does not necessarily mean that the reactor power is decreasing. It may still be rising, but at an ever-decreasing rate. This behavior is the result of the delayed-neutron precursors that have not yet achieved their equilibrium value for the current reactor power level. At t = 50, another interesting observation is made. The rate of change of reactivity is still −5 mbeta/s. The reactivity present is now 50 mbeta. Thus, τ = (1000 − 50)/[−5 + (0.1)(50)]. The period is infinite. The power increase has been halted even though there is still positive reactivity present.
A decreasing prompt-neutron population is offsetting the still rising contribution of the delayed neutrons so as to establish a point of unstable equilibrium. This is sometimes called the “point of power turning.” At t = 60− , the period is −200 s and the power is actually decreasing. At t = 60+ , the period is infinite and the reactor is again exactly critical. However, it is at a higher power level than it was at the outset of the transient.
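The period values in this worked example follow directly from the approximate relation τ = (β − ρ)/(dρ/dt + λeρ). A short script, with reactivity expressed in millibeta and λe fixed at 0.1 s−1 as in the text, reproduces them:

```python
def period(rho_mb, rho_dot_mb, beta_mb=1000.0, lam_e=0.1):
    """Approximate instantaneous reactor period in seconds.
    Reactivity and its rate are in millibeta (so beta itself is 1000)."""
    denom = rho_dot_mb + lam_e * rho_mb
    return float("inf") if denom == 0 else (beta_mb - rho_mb) / denom

# (label, reactivity present, insertion rate) at the times analyzed above
for label, rho, rate in [("0+", 0, 10), ("10-", 100, 10), ("10+", 100, 0),
                         ("40+", 100, -5), ("50", 50, -5), ("60-", 0, -5)]:
    print(label, period(rho, rate))
# 0+ 100.0 / 10- 45.0 / 10+ 90.0 / 40+ 180.0 / 50 inf / 60- -200.0
```

The infinite period at t = 50 s, with 50 mbeta of positive reactivity still present, is the point of power turning described above.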
Figure 7. Response to ramp reactivity insertion.
Table 3 summarizes the calculation, and Fig. 7 shows the results. The reciprocal of the period is plotted because of the difficulty in displaying an infinite value. Observe that whenever the rate of change of reactivity is altered, there is a step change in the period and a change of slope in the power profile. This calculation was idealized. In the actual case, the effect of the different half-lives of each precursor group would make the features shown in the figure less distinct. A digital control algorithm that utilizes the dynamic period equation as a model of a reactor’s dynamics and thereby permits a desired power trajectory to be tracked is described by Bernard and Lanning (6). This reference also enumerates some of the safety issues associated with the use of digital controllers for nuclear reactors.

Operation in the Presence of Feedback Effects
The equations that were developed to describe a reactor’s response while critical at low power are also applicable to high-power operation. The only difference is that the reactivity now becomes a function of the reactor power because of the presence of various power-dependent feedback effects. Thus, the point kinetics equations become nonlinear and can only be solved via numerical methods. Several of the more important feedback effects are described here.

Moderator Temperature. Negative reactivity feedback associated with the temperature of the moderator is one of the most important passive safety features in nuclear reactors. The physical basis for the effect in LWRs is that the coolant also functions as the moderator and, as the temperature of the coolant rises, it becomes less dense. This in turn results in both less moderation and more leakage, because there are fewer thermalizing collisions between the fast neutrons and hydrogen nuclei. Leakage is a minor effect in a large reactor, so the decrease in moderation is usually the dominant factor in a large LWR. The neutron spectrum becomes hardened, that is, it is shifted to higher energies. As a result, negative reactivity is generated. This effect makes LWRs inherently self-regulating. The time scale for the effect is the time required for coolant to circulate through the primary loop. The specific sequence is evident from Fig. 8, which is a schematic of a PWR. Suppose that the demand on the turbine increases. The following sequence then occurs:

1. The turbine first-stage steam pressure decreases.

2. The steam flow from the steam generator increases. This causes the steam generator pressure Psg to drop.

3. The steam generator is a saturated system. So its temperature Tsg also drops.

4. The decrease in steam generator temperature causes a decrease in the cold leg temperature TCL of the primary coolant. The cold leg is the piping through which the coolant flows on exiting the steam generator.

5. Cooler primary coolant enters the reactor core. This denser coolant increases neutron moderation.

6. The reactor power increases, and so does the temperature of the hot leg (THL). The hot leg is the piping through which coolant flows on exiting the core.

7. Hotter primary coolant reaches the steam generator. The steam generator temperature and pressure rise, and the steam supply equals the demand.

The above sequence may be summarized as follows:

turbine demand ↑ → steam flow ↑ → Psg ↓ → Tsg ↓ → TCL ↓ → moderation ↑ → reactor power ↑ → THL ↑ → Psg and Tsg restored
The final result is that the reactor power has increased to equal the demand. Also, the difference between the hot and cold leg temperatures has increased. Negative moderator temperature coefficients of reactivity are achieved by deliberately designing a reactor so that it is undermoderated. In the case of an LWR, this means that insufficient light water is present in the core to cause all of the fast neutrons produced from fission to slow down completely. The wisdom of this approach to reactor design is evident if the opposite is considered. If a core is overmoderated, then all neutrons attain thermal energies. If the reactor power becomes excessive, the coolant heats up and becomes less dense. So some moderator is lost from the core. However, this loss of moderator has no effect on neutron thermalization, because there was already more than enough moderator present. So the reactivity is unchanged, and the power excursion continues. Failure to design a reactor so that it is undermoderated may even result in the existence of a positive moderator temperature coefficient of reactivity. Materials selected to
be moderators are characterized by a high energy loss per neutron collision, a large cross section for scattering, and a small cross section for absorption. If sufficient moderator is present so that all neutrons are fully thermalized, then the absorption properties of the moderator, even though small, will affect the reactor’s dynamics. Specifically, a decrease in the moderator’s density will result in there being both fewer neutron scatters and fewer neutron absorptions. If the core was initially overmoderated, the loss of the scattering interactions will have no effect. However, the loss of some neutron absorption will generate positive reactivity. The power increase that caused the initial loss of moderator density will therefore accelerate. One of the design flaws in the Soviet-style RBMK reactors was the existence of a positive coolant temperature coefficient of reactivity over a portion of the allowed temperature range. This was one of the contributing factors to the Chernobyl accident. Most reactors are required by national law to be undermoderated. (The RBMK reactors were not.) It should be recognized that there are certain accident scenarios for which negative temperature coefficients of reactivity make the situation worse. These include steam line breaks and control rod drops. In both cases, the reactor will cool off if no corrective action is taken, and the negative coefficient will cause a positive reactivity insertion. Protection against these accidents is often provided by quick-closing shutoff valves on the steam lines and a requirement to scram the reactor on a dropped rod.

Void Coefficient. Negative void coefficients of reactivity are analogous to the negative moderator temperature coefficient. The formation of a void results in a decrease in the amount of moderator and hence the generation of negative reactivity. This effect is important in BWRs, which operate with a significant vapor fraction.
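Both the moderator temperature and void effects rest on the undermoderation argument above: the multiplication factor first rises and then falls with moderator density, and the operating point must sit on the rising side. The toy model below illustrates the sign of the feedback on each side of the peak; the functional forms and numbers are invented for illustration and are not LWR design data:

```python
import math

def k_eff(m, eps=0.25):
    """Toy multiplication factor vs. moderator density m (arbitrary units).
    Thermalization saturates as 1 - exp(-m); parasitic absorption in the
    moderator grows as 1/(1 + eps*m). Both forms are illustrative only."""
    return 1.8 * (1.0 - math.exp(-m)) / (1.0 + eps * m)

# Reactivity response to a 5% loss of moderator density at two operating points:
for m in (1.0, 6.0):        # undermoderated vs. overmoderated side of the peak
    dk = k_eff(0.95 * m) - k_eff(m)
    print("m=%.0f:" % m, "negative feedback" if dk < 0 else "positive feedback")
```

At the undermoderated point a density loss reduces k (self-regulating); at the overmoderated point the same density loss raises k, which is exactly the runaway tendency the RBMK design flaw exhibited.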
In fact, BWR power can be controlled by adjusting the recirculation flow, which in turn controls the rate at which voids (steam bubbles) are generated in the core. The time scale for the void effect in a BWR is that for the coolant to flow through the recirculation loop. Fuel Coefficient. If a reactor’s fuel heats up, it will expand and become less dense. The result is the generation of negative reactivity, because there will be fewer fissions and more leakage. The fuel coefficient of reactivity is a minor effect in LWRs, because the uranium dioxide fuel has a very small coefficient of thermal expansion. It is, however, a significant effect in some other reactor types, particularly ones with metallic cores. Doppler Coefficient. The Doppler effect is another means whereby a change in fuel temperature may alter reactivity. Unlike the fuel coefficient, it is a very important passive safety feature in LWRs. LWR fuel consists of both 235 U and 238 U. As shown in Fig. 2, one of the distinguishing features of the 238 U total cross section is the presence of six narrow resonances, with the first occurring at 6.67 eV. Neutrons with energies that correspond to one of these resonances will be absorbed in reactions that do not lead to fission. (The absorption of a neutron by 238 U will, after two successive beta decays, yield 239 Pu, which,
if it then absorbs a neutron, may fission. The half-lives for the two beta decays are 23.5 min and 56 h, respectively.)

Figure 8. Schematic of a pressurized water reactor.

The basis of the Doppler effect is that nuclei vibrate, at a frequency that increases with temperature. That is, the nuclei undergo thermal motion. The effect of the motion is to broaden the energy range over which the resonance is effective. This is illustrated in Fig. 9. If the 238U nuclei were at so low a temperature as to be almost at rest, the only neutrons absorbed would be those with energies of E0. Now suppose that a neutron with an energy that is slightly below the lower limit of the resonance strikes a vibrating 238U nucleus while that nucleus is moving towards the neutron. The collision’s effect will be the same as if the neutron had slightly more energy and had hit an almost stationary 238U nucleus. Hence, the neutron will be absorbed. So, as temperature increases, the effective width of the resonance increases. So far there is no net increase in absorption, because for every neutron that is now absorbed and that would not have been so previously, there is one that would have been absorbed and that now escapes. The increase in absorption occurs because the resonance has been broadened and because the fuel is heterogeneous, or lumped. Neutrons that scatter off 238U only lose a small amount of energy, because 238U is so massive. Moreover, because the fuel and moderator are spatially distinct, these neutrons may very well remain in the vicinity of the 238U and interact with it again. If the neutron’s energy relative to the vibrating 238U nuclei is in the vicinity of the resonance, these repeated interactions make it likely that an absorption will ultimately occur. The Doppler effect occurs on a very short time scale because there is no need for any heat transfer to occur. Fission energy is deposited directly in the fuel. Hence, the fuel temperature is immediately increased in the event of a power excursion.
This makes the effect of more value to reactor
safety than either the temperature or the void coefficient of reactivity. However, unlike those two effects, the Doppler coefficient is of little use in routine regulation of a reactor, because the temperature change needed in the fuel for the effect to occur is quite large.

Xenon. Xenon-135 has a cross section for thermal neutron absorption of 2.7 × 10⁶ barns. That of 235U is, by way of comparison, only 577 barns. Nuclides with exceptionally high absorption cross sections are called poisons. Unfortunately for reactor operation, 135Xe is a by-product of the fission process and, as a result, affects a reactor’s dynamic behavior. The time scale is on the order of hours. Figure 10 shows that 135Xe is produced both directly from fission and indirectly from the decay of iodine-135, which is in turn produced from the decay of the fission product tellurium-135. Production from iodine is the dominant of the two processes. (Note: 135Te is so short-lived that it is often assumed that 135I is produced from fission at the 135Te yield.) 135Xe is removed by decay to cesium-135 and by neutron absorption (burnup) to 136Xe. The dominant means of removal of 135Xe is by burnup. 135Xe affects all facets of a reactor’s operation, including startup, power maneuvers, shutdowns, and restarts. Assume that an LWR is initially xenon-free. On startup, the 135Xe concentration begins to rise, because it is produced directly from fission and, once an inventory of 135I is established, from that source as well. An equilibrium is eventually established between the production and loss mechanisms. This occurs in about 40 h, with the equilibrium value being given by

Xeq = γΣf φ/(λ + σφ)   (30)
where
Xeq equilibrium 135Xe concentration
γ fission product yield for 135Xe
Σf macroscopic fission cross section
φ neutron flux
σ microscopic absorption cross section for 135Xe
λ decay constant for 135Xe

Figure 9. Effect of Doppler broadening on a resonance.

Figure 10. Xenon-135 production and removal mechanisms.

For low neutron fluxes, which correspond to low power levels (σφ < λ), Eq. (30) indicates that the 135Xe concentration is proportional to the neutron flux. For high power levels (σφ > λ), the equilibrium concentration of 135Xe approaches a limiting value. LWR neutron fluxes approach but do not attain the magnitude needed for the 135Xe concentration to reach its limit. Thus, in an LWR, the 135Xe concentration at 50% of full power will be about 70% of the 135Xe value that exists at full power. The principal detriment associated with equilibrium 135Xe is that additional fuel must be loaded in order to provide enough reactivity to offset that associated with the 135Xe. This increases the size of the reactor and requires the presence of additional control devices. Reactivity transients occur as the result of the changing 135Xe concentrations that follow both power maneuvers and shutdowns. Assume that an LWR is shut down following extended full-power operation. This eliminates the major means of removal of 135Xe, which is burnup. However,
the major means of production, which is decay from 135I, continues. Production exceeds removal, so the 135Xe concentration rises. This rise continues for about 11 h, at which time the iodine supply is sufficiently depleted so that 135Xe removal by decay to 135Cs becomes dominant. At this time, the 135Xe concentration has peaked. Peak xenon causes operational problems in that, if an unplanned shutdown occurs late in the fuel cycle of an LWR, there may not be enough fuel present to permit restart until the 135Xe peak has decayed. Such reactors are referred to as being xenon-precluded. Further xenon-related reactivity transients occur when a reactor is restarted. The major means of removal (burnup) is instantly resumed. However, the major source of supply (iodine-135) is below its equilibrium value. The concentration of 135Xe therefore decreases to a level below its equilibrium value. While this occurs, the reactor’s control devices must be used to insert negative reactivity to offset the positive reactivity being generated by the decrease of 135Xe. This is a counterintuitive maneuver in that control devices are normally withdrawn slowly to offset fuel burnup. Figure 11 shows the 135Xe profile for a power history in which the reactor is started up, operated at full power for 40 h, shut down for 11 h, and then restarted to 50% of full power.

Figure 11. Effect of power history on xenon concentration.

135Xe first rises to its equilibrium value, then peaks on shutdown, drops below equilibrium on restart, and eventually settles out at about 70% of its full-power value. Varying 135Xe concentrations can cause spatial oscillations of the neutron flux in large LWRs. Consider an LWR as comprising several sectors, each of which behaves individually subject to the constraint that the total power for the reactor must be kept constant. Suppose the power level rises slightly in one sector. In order for the total power to remain constant, that in the other sectors must be decreased. 135Xe therefore starts to peak in those sectors. At the same time, it is burning out a little faster in the sector that is now at high power. So the change in 135Xe reinforces the power imbalance, and the condition worsens. A flux tilt and possible power peaking problems may result. Balance is eventually restored, because more 135I is generated in the high-power sector and less in the others. The tilt then reverses. Damping can be achieved through appropriate manipulation of the control devices or through the design of negative moderator temperature/void feedback mechanisms that are of sufficient magnitude to override the tilt. A final operational problem associated with xenon is that it may alter the shape of a reactor’s neutron flux distribution. Suppose an LWR is at full power. The shape of the radial neutron flux will be a sinusoid with the peak in the core center. The xenon distribution will have a similar shape. On shutdown, the xenon peaks with the peak being greatest in the core center. So, on restart, the neutron flux will be depressed in the core center and augmented on the core periphery. Control devices on the core perimeter will therefore have greater reactivity worths. The signal seen by nuclear instruments may be similarly affected.
That is, those on the core perimeter may read higher than normal and those in the center less for the same actual power production in the core as a whole.

Samarium. 149Sm is another poison. It has only one source of supply, a fission product decay chain, and one means of removal, burnup. The production sequence is

149Nd (β−, 1.7 h) → 149Pm (β−, 53 h) → 149Sm (stable)
The fission product yield of 149Nd is 0.0113. The absorption cross section of 149Sm for thermal neutrons is 40,800 barns. 149Sm attains an equilibrium concentration about 300 h after startup. This value is independent of the neutron flux and hence reactor power level. On shutdown of the reactor, the 149Sm peaks because production continues and there is no removal. The 149Sm remains at its peak value until the reactor is restarted. It then asymptotically approaches its original equilibrium value. 149Sm does not affect a reactor’s dynamics to the extent that 135Xe does, because it has only one means of production and removal. Fuel manufacturers often load 149Sm, which is stable, into the fuel so that its concentration and hence reactivity effect will not be a variable during the initial startup of the core.

LIGHT WATER REACTOR OPERATION

Thus far, the emphasis in this article has been on the attainment of criticality and the dynamic behavior of a reactor once critical. However, the thermal–hydraulic aspects of the reactor also require consideration. The need is for there to be integrated control of pressure and temperature while also observing heatup limits both during reactor startup and while maneuvering at power. The reader may find it useful to consult a reference on reactor design. That by Rahn et al. (7) is suggested.

Pressurized Water Reactor Operation

A distinguishing feature of a PWR is that both the initial pressurization of the plant and the control of pressure during operation are achieved independently of the production of energy from the reactor core. In addition, heatup of the coolant to the normal operating temperature can be done without the use of fission reactor power. As a result, two options exist for the startup of a PWR.
The reactor can be taken critical while at low temperature and pressure, and fission energy then used for plant heatup; or the reactor can be left subcritical until the normal operating pressure and temperature have been established. Both approaches have been used. At present, the latter is favored because it limits the consequences of certain accident scenarios. These include a continuous rod withdrawal with the reactor initially at low power, and a loss of pressure. The impact of the first scenario will be less severe under the second startup approach because the magnitude of the moderator temperature coefficient will be greatest with the plant at normal operating temperature, and hence negative reactivity feedback will be generated to offset the effect of the rod withdrawal. Also, under the second approach, the safety system's pressure sensors will be operable and will automatically shut the reactor down if pressure is lost. Under the other approach to startup (critical at low pressure and temperature), the pressure sensors would, by definition, be bypassed.

Pressure Control. Plant pressure in a PWR is produced by the pressurizer, which is a tank that is connected to the hot leg and physically located above the reactor vessel. As shown in Fig. 8, the tank is equipped with electric heaters and a spray valve. The heaters are used to keep the water in the pressurizer at saturated conditions. That is, the liquid in the pressurizer is kept boiling so that a liquid–vapor interface exists. In contrast, coolant in the reactor vessel and the primary piping is kept subcooled, with no bulk boiling allowed. The volume occupied by vapor in the pressurizer is called the bubble. It is this vapor pressure that is transmitted throughout the primary system. In addition to providing the source of pressure for a PWR's primary side, the pressurizer dampens pressure oscillations. For example, if the power is increased, the coolant exiting the core heats up and expands. This creates a surge of water that flows into the pressurizer and compresses the bubble. The pressure rises. This in turn activates the spray valve, which causes some of the vapor to condense, thereby restoring pressure to normal. The reverse occurs on a downpower, except that the heaters energize to raise the pressure.
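The heater/spray action just described amounts to a deadband controller on primary pressure. A minimal sketch follows; the 2235 psia setpoint is a representative PWR operating pressure and the band width is an assumed illustrative value, not plant data.

```python
def pressurizer_action(pressure_psia, setpoint=2235.0, band=15.0):
    """Deadband logic: heaters boil more water to raise the vapor pressure
    when pressure sags; the spray condenses steam to lower it on a surge."""
    if pressure_psia < setpoint - band:
        return "heaters on"
    if pressure_psia > setpoint + band:
        return "spray on"
    return "hold"
```

For example, a pressure sag to 2200 psia energizes the heaters, a surge to 2300 psia opens the spray valve, and pressure within the band requires no action.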
If a PWR is shut down and completely cooled down for maintenance, the pressurizer will be completely filled with water. This minimizes corrosion. Pressure control of the primary system is then achieved by coordinating flow into the piping from the charging system and flow out of the piping through the letdown system. When the pressurizer is completely filled with liquid, it is referred to as being solid. The process of reestablishing a vapor–liquid interface is referred to as drawing a bubble.

Temperature Control. Under full-power operating conditions, the temperature is controlled by balancing the heat produced from the core against the energy removed by the turbine. If the temperature drops out of a narrow band centered on a designated setpoint, control rods are withdrawn to increase the power. Likewise, if the temperature should rise, the rods are inserted. Plant heatup could, as noted earlier, also be done by using the energy produced from fission. The reactor would be critical at a few percent of full power, with the energy used to offset plant heat capacity so that a specified heatup rate is achieved. However, the preferred approach to plant heatup is to operate the primary coolant pumps with the reactor shut down. Friction losses provide a source of energy that raises the plant temperature to its normal operating value in about 6 h.
Reactivity Control. Five mechanisms are available for reactivity control. Of these, the first three are within the purview of the reactor operator; the latter two are design features. The five mechanisms are:

1. Movable Control Rods
   a. Full-Length Shutdown Rods. These are normally fully withdrawn at startup and kept withdrawn during operation. Their function is to protect the core against a sudden positive reactivity insertion.
   b. Full-Length Control Rods. These are used to create subcritical multiplication and to achieve criticality. They are also used to compensate for the reactivity associated with temperature changes when at power, to maneuver the reactor at up to 5%/min, and to compensate for reactivity changes associated with changes of reactor power.
2. Part-Length Rods. These are used for power shaping. They are not inserted on a trip.
3. Soluble Poison (Boric Acid). This is used to control slow, long-term changes of reactivity, including fuel depletion, long-lived fission product buildup, and reactivity effects associated with plant heatup and xenon.
4. Negative Moderator Temperature Coefficient. As described earlier, this is a design feature that promotes passive safety under normal conditions and makes reactors self-regulating.
5. Burnable Poison. The presence of the soluble poison reduces the magnitude of the negative temperature coefficient and might even result in a positive and hence destabilizing coefficient. In order to reduce reliance on the soluble poison, burnable ones are incorporated in the fuel. These are materials that burn out faster than the fuel and hence allow additional 235U to be loaded without increasing the core size.

PWR Startup

Certain limiting conditions, such as observance of a maximum linear heat generation rate, the avoidance of interactions between the UO2 fuel pellets and the surrounding cladding, and the need to balance power production in the upper and lower portions of the core, apply under all operating conditions.
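As a rough illustration of the soluble-poison mechanism, the boration or dilution needed to offset a slow reactivity change can be estimated from a differential boron worth. The −8 pcm/ppm figure below is a typical order of magnitude assumed for illustration; it is not taken from this article.

```python
def boron_adjustment_ppm(delta_rho_pcm, worth_pcm_per_ppm=-8.0):
    """ppm of boron to add (negative result = dilute) so that the boron
    reactivity cancels a core reactivity change of delta_rho_pcm."""
    return -delta_rho_pcm / worth_pcm_per_ppm
```

With these assumed numbers, canceling a +500 pcm insertion requires borating by about 62 ppm, while a −400 pcm change from fuel depletion is offset by dilution.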
During startup, operation is further restricted by the need to observe (1) a maximum pressure to avoid brittle fracture of the reactor vessel, (2) a minimum temperature for criticality safety, and (3) a minimum pressure for coolant pump operation.

Maximum Pressure to Avoid Brittle Fracture. Reactor pressure vessels are made of carbon steel, which has a body-centered cubic lattice structure and is therefore subject to brittle fracture. That is, should the vessel temperature drop below a certain value while the vessel is subject to a high pressure, the vessel could fail in a brittle or catastrophic manner rather than in a ductile one. The temperature at which the failure mode shifts from ductile to
brittle is called the nil ductility temperature or NDT. It is a function of the reactor vessel material and the integrated neutron exposure of the vessel’s inner wall. The NDT rises as the fluence seen by the vessel increases. The maximum allowed pressure of the primary system is determined by the ductility of the pressure vessel steel. As temperature decreases, so does the allowed pressure, because the vessel steel is more susceptible to failure at low temperature. Stresses are different for heatup and cooldown, and hence there are two separate curves of pressure versus temperature. The curve for cooldown is more limiting, because under that condition the thermal, pressure, and accumulated fatigue stresses are all tensile for the inner wall of the pressure vessel.
Minimum Temperature for Criticality Safety. This restriction reflects the need to ensure that the moderator temperature coefficient of reactivity is negative prior to the attainment of criticality. Also, if the coefficient is positive, then the reactor must be kept subcritical by an amount equal to or greater than the reactivity that would be inserted should there be a depressurization. (Note: There are some exceptions to these rules. The technical specifications of some plants do permit them to be taken critical in the presence of a positive moderator coefficient provided that the magnitude of the coefficient is small. All plants must have a negative moderator temperature coefficient at their normal operating temperature.) The moderator temperature coefficient of reactivity may not be sufficiently negative at temperatures below a certain minimum, because the coefficient of thermal expansion of water is small at low temperature and because of the presence of the soluble poison.

Minimum Pressure for Coolant Pump Operation. The net positive suction head (NPSH) is defined as the pressure at the pump suction position less the saturation pressure that corresponds to the temperature of the fluid being pumped. If it is sufficiently positive, a centrifugal pump can operate. Otherwise, water will flash to steam in the eye of the pump and cause the pump either to cavitate or to become gas-bound. Hence, there is a certain minimum pressure that must be maintained if the primary coolant pumps are being operated. This minimum pressure increases as plant temperature rises, because the saturation pressure rises with temperature.

Figure 12 shows the pressure–temperature limits for a PWR. There are two curves showing the maximum pressure to avoid brittle fracture, one for heatup and one for cooldown. The primary system pressure must be kept below the appropriate curve at all times. The brittle fracture curves will shift to the right and become more restrictive as the pressure vessel incurs additional neutron damage. There is one curve for the minimum temperature for critical operation. If the reactor is critical, then the system temperature must be maintained to the right of this curve so that the moderator temperature coefficient of reactivity will be sufficiently negative. As a result, the consequences of both depressurization and continuous-rod-withdrawal accidents will be less severe. There is one curve showing the minimum pressure at which the reactor coolant pumps may be operated; the plant pressure should be kept above this curve. Also shown in Fig. 12 is a typical trajectory for a plant startup in which the plant is taken from a cold shutdown condition with a solid pressurizer to a hot operating condition. Table 4 lists the principal steps in the startup of a PWR.
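The pressure–temperature window described above can be sketched as two bounding curves: a brittle-fracture ceiling that rises with vessel temperature and an NPSH floor that tracks the saturation pressure. In the sketch below, the ceiling shape and the NPSH margin are hypothetical stand-ins for plant-specific limits; only the saturation-pressure correlation (an Antoine fit for water, valid roughly 100–374 degC) is a standard relation.

```python
def p_sat_psia(t_degf):
    """Water saturation pressure from an Antoine fit (approx. 100-374 degC)."""
    t_c = (t_degf - 32.0) / 1.8
    p_mmhg = 10.0 ** (8.14019 - 1810.94 / (244.485 + t_c))
    return p_mmhg * 0.0193368  # mmHg -> psia

def max_pressure_brittle_psia(t_degf):
    """Hypothetical brittle-fracture ceiling: allowed pressure rises with
    temperature (shape only; real curves come from vessel analysis)."""
    return 600.0 + 0.009 * max(t_degf - 100.0, 0.0) ** 2

def min_pressure_pump_psia(t_degf, margin_psi=200.0):
    """Hypothetical NPSH floor: saturation pressure plus an assumed margin."""
    return p_sat_psia(t_degf) + margin_psi

def in_operating_window(t_degf, p_psia):
    """True if the (T, P) point lies between the floor and the ceiling."""
    return min_pressure_pump_psia(t_degf) <= p_psia <= max_pressure_brittle_psia(t_degf)
```

With these assumed curves, a representative hot operating point (about 557 degF, 2235 psia) lies inside the window, while points above the ceiling or below the NPSH floor are rejected.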
Figure 12. Pressure–temperature limits for a PWR.

Figure 13. Schematic of a boiling water reactor.

BWR Operation

Figure 13 is a schematic diagram of a boiling water reactor. Comparison of this diagram with that of a PWR shows several differences that affect reactor operation. First, BWRs are direct cycle plants. The steam that drives the turbine is produced by boiling action within the core. There is no use of a primary loop that transfers heat from the fuel to a secondary system via steam generators. Hence, BWRs can operate at lower pressures than PWRs while still achieving the same thermal efficiency. Second, coolant flow through the core is a mixture of both natural and forced circulation. The density difference between the liquid–vapor mixture in the core and the liquid in the downcomer (the region on the core perimeter where the jet pumps are located) causes flow. This flow can be enhanced by the recirculation pumps, which drive water through the jet pumps. A region of low pressure therefore exists at the jet pump suction, so that feedwater combines with liquid from the steam separators and flows through the core. A third difference between BWRs and PWRs is that the source of pressure in a BWR is the boiling that occurs in the core. Thus, the
option does not exist in a BWR to achieve normal operating temperature and pressure and then take the plant critical. There are two mechanisms available for reactivity control in a BWR:
Figure 14. BWR thermal power and core flow restrictions.
1. Movable Control Rods. One cruciform-shaped rod is present for every four fuel assemblies. This large number of rods provides a means to ensure that each set of fuel assemblies is properly operated. The boiling process could result in some assemblies producing too much power and others too little. In a PWR, the rods are driven from above the core. This is not possible in a BWR, because of the steam separators. So in a BWR the rods, which contain boron carbide, are driven upwards from the bottom of the core. Power changes in excess of 25% are accomplished with these rods.

2. Negative Moderator and Void Reactivity Coefficients. The negative moderator and void reactivity feedback coefficients are within the control of the operator. They are used to accomplish power changes of as much as 25% of rated. An increase in the recirculation pump speed increases the flow rate through the core, which in turn increases heat transfer and decreases the vapor fraction. Therefore, the average density of the liquid–vapor mixture that is flowing through the core increases, and so does the neutron moderation. The net effect is that positive reactivity has been added to the core. The reactor power rises until increased vapor formation and higher fuel temperatures reverse the process. The reactor then settles out, so that it is critical but at a higher power level. A decrease in the recirculation flow will have the opposite effect.
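The flow-to-power chain in item 2 can be reduced to a toy equilibrium model: a flow increase inserts positive reactivity (fewer voids), and the resulting power rise restores voids until the net reactivity returns to zero. The reference point and both coefficients below are purely illustrative assumptions.

```python
def settled_power(flow_pct, power_ref=50.0, flow_ref=45.0,
                  rho_per_flow=0.8, rho_per_power=0.8):
    """Power (% rated) at which void feedback cancels flow-induced reactivity.

    Equilibrium condition (toy linear model, hypothetical coefficients):
        rho_per_flow * (flow - flow_ref) = rho_per_power * (power - power_ref)
    """
    return power_ref + (rho_per_flow / rho_per_power) * (flow_pct - flow_ref)
```

With equal coefficients, raising core flow from 45% to 55% settles the reactor at a power ten points higher; reducing flow has the opposite effect, as the text describes.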
BWR Startup

The major steps in the startup of a BWR are:

1. The recirculation pumps are started and set at the desired speed, 28% of rated for the plant shown in Fig. 14.
2. The reactor is made critical by the withdrawal of the control rods.
3. The control rods are further withdrawn, thereby placing the reactor on a positive period. The power is raised to above the point of adding heat, and the core vessel is pressurized. In accordance with the safety limit specifications, power may not be increased above 25% until the plant pressure exceeds 800 psia.
4. Once the pressure rises above 800 psia, the reactor power is raised to 55% of rated. A check is then made of the estimated critical position and the rod pattern.
5. The recirculation pump speed is then increased so that total core flow rises to 100%. As it does, the core power also rises because of the moderator-temperature and void reactivity coefficients. Power is then leveled at the desired value.

This procedure takes approximately 18 h to complete. Not mentioned in the above summary are many other significant factors, such as heatup limitations associated with thermal stress.

Figure 14 shows the relations between thermal power and core flow that must be observed for a boiling water reactor. The initial conditions for startup are 0% power and 36% flow. The percentage of flow through the core exceeds the recirculation pump speed (28%) because of the effect of the jet pumps. Once the reactor is critical, power and flow are coordinated so that they move along the 28% pump speed line. The recirculation pump speed is kept constant while moving along this line. The reason for the increase in flow as
the power rises is natural circulation, which is also termed thermally induced flow. The line marked “maximum allowed power” must not be crossed until the plant pressure is above 800 psia. The power is then raised to 55% of rated. The corresponding core flow is 45%. This point is the intersection of the 28% pump speed line and the design flow control line. The control rods and the recirculating pump speed are then adjusted as needed to move the plant’s power and flow along the design flow control line until conditions of 100% power and 100% flow are attained. Also shown in the figure are net positive suction head limits for the jet and recirculation pumps and the rod block line. The latter represents the combinations of core flow and power for which control rod withdrawal is prohibited automatically. Certain combinations of core flow and thermal power have been associated with oscillatory behavior in BWRs. Accordingly, the operating curves shown in Fig. 14 have been modified to preclude or at least restrict operation in the knee of the curve. Analysis of the factors involved in this instability is given by Tong and Weisman (8).
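The startup constraints above can be combined into a simple interlock check of the kind the operating map implies. The 800 psia threshold and the 25% power figure are from the text; the placement of the rod block line (slope and intercept) is an assumed placeholder, not plant data.

```python
def rod_withdrawal_permitted(power_pct, flow_pct, pressure_psia):
    """True if the operating point allows further control rod withdrawal.

    The 800 psia / 25% limit is quoted in the text; the rod block line
    below is a hypothetical illustration of a flow-dependent power limit.
    """
    if pressure_psia <= 800.0 and power_pct >= 25.0:
        return False                            # low-pressure safety limit
    rod_block_limit = 0.66 * flow_pct + 44.0    # hypothetical rod block line
    return power_pct < rod_block_limit
```

For example, 55% power at 45% flow and full pressure is permitted, while 30% power at only 700 psia, or 90% power at 50% flow, would block further withdrawal.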
BIBLIOGRAPHY

1. A. F. Henry, Nuclear Reactor Analysis, Cambridge, MA: MIT Press, 1975.
2. J. J. Duderstadt and L. J. Hamilton, Nuclear Reactor Analysis, New York: Wiley, 1976.
3. J. A. Bernard, A. F. Henry, and D. D. Lanning, Application of the "reactivity constraint approach" to automatic reactor control, Nucl. Sci. Eng., 98 (1): 87–95, 1988.
4. J. A. Bernard, Formulation and experimental evaluation of closed-form control laws for the rapid maneuvering of reactor neutronic power, Report MITNRL-030, Massachusetts Inst. Technol., 1989.
5. J. A. Bernard and L. W. Hu, Dynamic period equation: derivation, relation to inhour equation, and precursor estimation, IEEE Trans. Nucl. Sci., 46 (3), Part I: 433–437, 1999.
6. J. A. Bernard and D. D. Lanning, Considerations in the design and implementation of control laws for the digital operation of research reactors, Nucl. Sci. Eng., 110 (1): 425–444, 1992.
7. F. J. Rahn et al., A Guide to Nuclear Power Technology, Malabar, FL: Krieger, 1992.
8. L. S. Tong and J. Weisman, Thermal Analysis of Pressurized Water Reactors, La Grange Park, IL: American Nuclear Soc., 1996.
JOHN A. BERNARD Massachusetts Institute of Technology, Cambridge, MA
Wiley Encyclopedia of Electrical and Electronics Engineering

Nuclear Engineering

Standard Article
John C. Lee, University of Michigan, Ann Arbor, MI
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5211.pub2
Online Posting Date: December 27, 1999
Abstract. The sections in this article are: Nuclear Power Plants; Nuclear Reactor Physics; Nuclear Reactor Safety; Thermal and Hydraulic Analysis of Nuclear Power Plants; Selection of Materials for Nuclear Power Plant Components; Interaction of Radiation with Matter; Neutron Diffusion Theory; Reactor Kinetics; Reactor Physics Analysis and Core Design; Thermal-Hydraulic Analysis for Reactor Cores and Power Plants; Fuel Cycle Analysis for Nuclear Power Plants; Radioactive Waste Disposal; Probabilistic Safety Analysis of Nuclear Power Plants; Instrumentation and Control Systems in Nuclear Power Plants; Temperature Coefficients of Reactivity and Inherent Reactor Safety; Numerical Solution of the MGD Equations; Neutron Transport Theory and Computational Algorithms; Advanced Reactor Designs and Challenges for Nuclear Engineers.
NUCLEAR ENGINEERING
NUCLEAR ENERGY
NUCLEAR REACTORS

Nuclear engineering is a branch of engineering concerned with peaceful uses of nuclear energy. It includes the study of processes related to the controlled release of fission and fusion energy and the conversion of nuclear energy into other useful forms of energy such as heat and electricity. Significant progress has been made over the past three decades in generating electricity from nuclear fission power plants, while intense research is still underway to achieve controlled release of nuclear fusion energy. We focus our discussion on fission reactor physics and engineering. In addition to accurate and timely control of the self-sustaining fission reaction in the nuclear reactor core, special attention must be given to the safe removal and efficient utilization of this unique, intense form of energy. Because high-energy ionizing radiations are emitted in the fission process, mechanical and structural properties of materials used in a nuclear power plant may degrade due to radiation exposure during the operating life of the plant. This requires accurate understanding of the mechanisms for interaction of radiation with matter and optimal selection of material compositions and structures for various components in nuclear power plants. Detection and monitoring of different forms of radiation in nuclear power plants form a key element in the overall effort to ensure the safety of the public associated with nuclear electricity generation. Nuclear engineers are also engaged in developing techniques that provide beneficial uses of ionizing radiations in industrial, scientific, and medical applications.
NUCLEAR POWER PLANTS

As of March 2006, 104 nuclear power plants provide an installed electrical generating capacity of 101 GW(electric) and account for about 20% of the electricity generated in the United States, while 444 nuclear power plants provide an installed capacity of 372 GWe worldwide. All of the nuclear power plants in the U.S. and 80∼85% worldwide utilize light-water cooled reactors (LWRs), which may be grouped (1–3) into pressurized water reactors (PWRs) and boiling water reactors (BWRs). We refer to common water as light water, in contrast to heavy water, in which the hydrogen is present as deuterium, the heavy isotope of hydrogen. About 70% of the LWRs operating in the U.S. and around the world are PWRs. In the bulk of nuclear power plants, energy released in the fission process is deposited as heat energy initially in fuel pins enclosed in metallic tubes. This energy is eventually transmitted through heat conduction and convection to fluid circulating through the reactor core, which is located within a steel pressure vessel. In the case of LWRs, water is used as the circulating fluid, known as the reactor coolant. In gas-cooled reactors, pressurized gases, e.g., helium or carbon
dioxide, may serve the role of reactor coolant, while circulating liquid metal, e.g., sodium or lead, picks up the heat in liquid-metal cooled reactors (LMRs) (2). The CANDU (Canadian Deuterium Uranium) reactor (2) may be cooled either with heavy or light water. Once the fission energy is picked up by the reactor coolant in the PWR, the coolant circulates through a heat exchanger, where the heat is transferred from the primary loop to the secondary loop, as illustrated (4) schematically in Fig. 1. The heat exchanger is functionally similar to the radiator in an automobile, where the heat produced in the internal-combustion engine is dissipated through a circulating fluid. The heat exchanger in the PWR is known as a steam generator, since the circulating fluid in the secondary heat transfer loop is allowed to boil and the resulting steam is separated from liquid. The steam is used to turn the steam turbines and electrical generators, thereby producing electricity. Included in the schematics of Fig. 1 is a pressurizer, which is essentially an extension of the primary loop to regulate the pressure of the primary system, and a reactor coolant pump which circulates the reactor coolant. The circulating fluid in the secondary heat transfer loop is known as feedwater and the steam that exits from the turbines is condensed into feedwater in the condenser and associated machinery. The condenser as well as the feedwater system is illustrated in Fig. 1. The feedwater system reheats the condensed steam and regulates the temperature of the feedwater before it recirculates into the secondary side of the steam generator. The heat transferred from the steam into the condenser is eventually rejected to the atmosphere through a cooling pond or cooling tower in a tertiary loop, which is the final heat transfer loop shown in Fig. 1. 
The reactor pressure vessel, coolant pump, steam generator, and pressurizer are enclosed in a concrete containment structure, built with an inner steel liner. The plant components located within the containment building are collectively known as the nuclear steam supply system (NSSS), while those located outside the containment are generally known as the balance of plant (BOP). Particular attention is given to the reliability and integrity of NSSS components, which are subject to specific regulations and oversight by the U.S. Nuclear Regulatory Commission. In modern BWRs employing a direct cycle, coolant water circulating in the primary loop is allowed to boil inside the reactor vessel. Steam is separated from liquid water in the reactor vessel and is used to turn the turbo-generators, in much the same way steam extracted from the steam generators in PWRs is used to generate electricity. Incorporation of a direct cycle in BWRs eliminates a heat transfer loop and allows for simplifications in the plant system design. Production of a significant amount of steam within the reactor core, however, requires a number of special considerations for the design and analysis of reactor core and fuel elements in BWRs. Located inside the reactor pressure vessel of a PWR plant, illustrated (4) schematically in Fig. 2 (a), is the reactor core comprising 150∼200 fuel assemblies, surrounded by steel plates which form the flow baffle. A cylindrical barrel separates the upward flow of coolant through the core from the inlet coolant flowing downward in the an-
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 2007 John Wiley & Sons, Inc.
Nuclear Reactors
Figure 1. Overall system layout for a PWR plant, indicating key components and illustrating connections between the primary, secondary, and tertiary loops. (Courtesy of Westinghouse Electric Corporation.)
Figure 2. Core and fuel assembly structure of a typical PWR plant. (a) Top view of the reactor core, comprising fuel assemblies and other structures inside the reactor vessel; (b) Sketch of a fuel assembly illustrating fuel rods, spacer grids, rod cluster control elements, and other components. (Courtesy of Westinghouse Electric Corporation.)
nulus formed by the barrel and pressure vessel. Neutron shield panels are located in the lower portion of the vessel to attenuate high-energy gamma rays and neutrons leaking out of the core, thereby reducing the radiation-induced embrittlement of the vessel. Specimens to monitor radiation exposure of the vessel are also indicated in Fig. 2 (a). Figure 2 (b) illustrates a typical PWR fuel assembly, with an array of approximately 250 fuel rods, each consisting of a stack of UO2 pellets loaded in zirconium-alloy tubes with a diameter of 10∼12 mm and an effective fuel length of 3.6 m. Other prominent structures for the fuel assembly include the spacer grids and clustered control absorbers inserted into the top of the assembly.

NUCLEAR REACTOR PHYSICS

Designing a nuclear reactor core involves the determination of nuclear fuel element configurations (5–7) and mechanical and control devices so that we may attain a self-sustaining chain reaction, i.e., a critical configuration, in the core and produce power over a substantial period of core life without refueling. This entails an initial selection of fuel material, composition, and geometry, together with devices to control the chain reaction, thermal and mechanical structures for fluid flow, the heat transfer system, and mechanical support for core components. Equations representing the balance of neutrons undergoing migrations in and out of the core and interacting with fuel and non-fuel materials in the core are solved (5–7) to arrive at an estimate of the critical core configuration with fresh fuel. This stage of design calculations invariably requires iterative adjustments of fuel, fluid, or control configurations until an initial criticality is attained. A key aspect of the neutron balance equation is the representation of the probabilities that neutrons interact with various materials in the core, including the probability that neutrons induce the fission chain reaction in fuel. Since neutrons are electrically neutral, they penetrate the electron orbits of an atom and interact with the nucleus. Thus, neutron interactions with matter entail nuclear reactions, and the reaction probability (5,6,8) is expressed in terms of the effective cross-sectional area of a given nucleus, or simply the nuclear cross section. The cross section depends heavily on the structure of the nucleus involved, the relative speed between the neutron and the nucleus, and the type of reaction, e.g., absorption or scattering collision. The probability of neutrons leaking out of the core also depends on the speed or energy of the neutrons. Neutrons are created in the fission process with high energy, typically around 2.0 MeV of kinetic energy on average, corresponding to a speed of 2.0 × 10^7 m/s.
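The quoted correspondence between 2 MeV and roughly 2.0 × 10^7 m/s follows from the classical kinetic-energy relation; at 2 MeV (about 0.2% of the neutron rest energy) the relativistic correction is negligible for this purpose.

```python
import math

MEV_TO_J = 1.602176634e-13   # joules per MeV
M_NEUTRON = 1.67492750e-27   # neutron mass, kg

def neutron_speed_m_per_s(energy_mev):
    """Classical speed from kinetic energy: v = sqrt(2E/m)."""
    return math.sqrt(2.0 * energy_mev * MEV_TO_J / M_NEUTRON)
```

The same relation gives the familiar 2200 m/s for a 0.025 eV thermal neutron.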
These high-energy neutrons undergo collisions with nuclei of the surrounding material and may either be absorbed by the nuclei or undergo scattering collisions
and emerge with a reduced kinetic energy and speed. The slowing down of neutrons through scattering collisions is known as moderation of neutrons, and materials comprising nuclei with low atomic weight and small absorption cross section, e.g., light water, heavy water, graphite, and beryllium, are known as moderators. Moderators are often introduced in nuclear reactor cores, because the neutron–nucleus reaction probability increases significantly as the neutron speed decreases, and light nuclei are more efficient in reducing the neutron energy through scattering collisions than heavy nuclei. Typically in an LWR core, about 75% of fissions are induced by neutrons of energy below 0.625 eV, often referred (7) to as thermal neutrons. Thus, the balance equation or criticality relationship for a reactor core has to account for the slowing down of high-energy neutrons born in the fission process, absorption collisions with the nuclei of incore and excore materials, and leakage of neutrons out of the core, as well as the chain reaction process itself. In a typical fission reaction, 2.45 neutrons are released on average. One of these 2.45 neutrons causes fission in a fuel nucleus to sustain the chain reaction, while the remaining 1.45 neutrons may either be absorbed in nonfission reactions in fuel or in non-fuel materials or may leak out of the core. The ratio of the number of neutrons available in a nuclear reactor core in the present generation of chain reactions to that in the previous generation is known as the effective multiplication factor keff. When keff = 1.0, the population of neutrons in each generation stays constant and the system is called critical. When keff > 1.0, the system is supercritical and the neutron population increases from one generation to the next, while in a subcritical system with keff < 1.0, the neutron population will die away given a sufficient period of time.
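The generation-by-generation definition of keff translates directly into the recurrence n(i+1) = keff · n(i); a minimal sketch:

```python
def population_history(k_eff, n0=1000.0, generations=5):
    """Neutron population in successive generations: n_{i+1} = k_eff * n_i."""
    pops = [n0]
    for _ in range(generations):
        pops.append(pops[-1] * k_eff)
    return pops
```

A critical system (keff = 1.0) holds its population constant, a supercritical one grows geometrically, and a subcritical one dies away, as described above.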
We may write keff = k∞PNL, where k∞ is the effective multiplication factor for an infinitely large chain reacting system and is known as the infinite multiplication factor, while PNL represents the probability that neutrons do not leak out of the chain reacting system of a finite size, e.g., a reactor core. To reduce the leakage of neutrons out of the core, i.e., to increase PNL, the core is surrounded with reflectors comprising moderating materials. The critical mass of a reflected core is therefore smaller than that for a bare, unreflected core of the same composition and layout. The detailed balance statement describing the time-dependent neutron behavior in a reactor core is obtained as a complex integro-differential equation, known as the Boltzmann neutron transport equation. Because of the complexity involved in and computational effort required for solving the transport equation for realistic problems, a series of approximations is usually employed to provide sufficiently accurate estimates for the spatial and energy distributions of neutrons and the criticality of a reactor core. The approximations typically involve the synthesis of detailed solutions for a subregion of the core with approximate global calculations for the entire core.
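Returning to the moderation discussion above, the claim that light nuclei slow neutrons more efficiently can be quantified with the standard average logarithmic energy decrement ξ: the mean number of elastic scatters needed to slow a neutron from energy E0 to E1 is ln(E0/E1)/ξ.

```python
import math

def xi(mass_number):
    """Average logarithmic energy loss per elastic scatter for a nucleus of
    the given mass number (standard slowing-down result; 1 for hydrogen)."""
    if mass_number == 1:
        return 1.0
    a = ((mass_number - 1) / (mass_number + 1)) ** 2
    return 1.0 + a * math.log(a) / (1.0 - a)

def scatters_to_slow(mass_number, e0_ev=2.0e6, e1_ev=0.625):
    """Mean number of scattering collisions to slow from e0 to e1."""
    return math.log(e0_ev / e1_ev) / xi(mass_number)
```

Slowing a 2 MeV fission neutron to the 0.625 eV thermal cutoff quoted in the text takes on the order of 15 scatters in hydrogen but roughly 95 in carbon (graphite), which is why water thermalizes neutrons in such a compact volume.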
NUCLEAR REACTOR SAFETY

When a neutron is absorbed in a uranium nucleus and the resulting compound nucleus splits, usually into two fragments, approximately 200 MeV of fission energy is released and carried away by several different types of ionizing radiation, neutrons, and fission fragments or products. The energy density in a nuclear reactor core is six orders of magnitude higher than that typically encountered in conventional, fossil-fueled power plants. This simple fact translates into an important observation that a 1.0-GWe nuclear plant consumes approximately 1.0 Mg of uranium per year compared with approximately 3.0 × 10⁶ Mg of coal consumed in a coal-fired plant of the same capacity. This high energy density in nuclear plants implies that various components in the core will be subject to intense heat and have to be properly cooled and maintained. Degradation of mechanical properties of materials, subject to intense radiation exposure, has to be duly considered in the design and operation of nuclear power plants. Since the majority of the fission products are radioactive, additional energy is released from the decay of fission products, even after the fission chain reaction is terminated. Removal of decay heat is a primary concern in postulated accidents (9) involving loss of cooling. Indeed, in the Three Mile Island accident of 1979, the failure to remove decay heat, long after the fission events were terminated, resulted in meltdown of a significant number of fuel elements in the core. Most of the risk to the general public due to nuclear energy production is associated with operating nuclear power plants and, to a lesser extent, with the disposal of radioactive waste generated. Since radiation release from normally operating nuclear plants is negligible, public risk associated with nuclear power plant operation is evaluated for postulated accidents.
The risk estimates (9, 10) are subject to considerable uncertainties, and the perception of nuclear power risks is highly subjective, as is the case for the bulk of our daily activities. We may consider the safety of nuclear power plants as a complex function of the calculated risk and public acceptability of the risk. The acceptability of any risk that we take in our daily life depends heavily on our perception of the risk as voluntary or involuntary and on whether the risk is distributed or acute. This is readily illustrated by the fact that annual traffic fatalities on the order of 50 000 on U.S. highways are simply accepted as a voluntary and distributed risk, while 100 to 300 deaths annually resulting from a few airline crashes generate immediate and visible anxiety among the public. To ensure safe and reliable operation, nuclear power plants are designed with multiple barriers for containing radioactivity generated in the fission process. This concept, known as defense in depth (9), may be illustrated for LWR cores: (1) UO2 fuel pellets are in a ceramic matrix retaining fission products, (2) fuel pellets are enclosed and sealed in metallic tubes, (3) fuel elements are located in a steel pressure vessel, and finally (4) the entire NSSS, including the pressure vessel, steam generator, and pressurizer, is located inside a leak-tight containment building. The defense-in-depth approach is used in other system designs and structures whenever possible, together with the
Nuclear Reactors
diversity and redundancy in the choice of signals, actuation mechanisms, and equipment for all plant monitoring and control systems. In particular, special attention is given to the design and testing of the reactor shutdown system so that the fission chain reaction in the core may be stopped, under any circumstance, with a high degree of reliability. Similarly, the containment structure is built and tested so that radionuclides released from the NSSS in worst credible accidents, known as design basis accidents, may safely be retained. Safe operation of any equipment, however, depends ultimately on the human operator, and effort is made to provide a high level of operator training and to enforce tight regulations and oversight by the U.S. Nuclear Regulatory Commission.

THERMAL AND HYDRAULIC ANALYSIS OF NUCLEAR POWER PLANTS

Determining the outcome and consequences of postulated accidents in nuclear power plants requires computational models that can account for rapid variations in fuel and coolant temperature distributions subject to significant disruptions in the coolant flow. The coupled thermal-hydraulic (TH) analysis (11) has to represent boiling of the reactor coolant for routine operation in BWR cores and in severe transients in PWR cores. A number of sophisticated computational fluid dynamics algorithms have been developed over the years to handle the complex nonlinear fluid flow conditions characteristic of transient reactor TH problems. Some of the well known TH codes include not only discretized first-principles fluid flow models for complex flow circuits in the primary and secondary heat transfer loops but also simplified lumped-parameter models for key power plant components, e.g., the pressurizer and coolant pump. Detailed TH analysis of the reactor core is very much necessary to obtain accurate estimates of the thermal design margin and to maximize the power output from a given fuel inventory.
Because of the high energy density existing in the core, as discussed earlier, we need to determine, to a high degree of accuracy, the spatial distribution of power density in the core and the resulting temperature distributions for fuel elements and reactor coolant. Since there are approximately 40 000 to 50 000 fuel rods in an LWR core, it requires considerable computational effort to represent coolant flow explicitly around each fuel rod. Actual TH analysis of reactor coolant channels typically involves detailed solutions for a small region of the core, combined with approximate global solutions for the entire cluster of coolant channels.

SELECTION OF MATERIALS FOR NUCLEAR POWER PLANT COMPONENTS

Various materials (1) used within and outside the nuclear reactor core are subject to intense heat and radiation and may also be exposed to a corrosive environment. Of particular concern is the degradation in physical properties of the reactor pressure vessel, associated internal structures, and coolant piping in the primary loop and small-diameter tubes used in PWR steam generators. Because the integrity of the pressure vessel plays a pivotal role in the overall safety of the plant, special attention is given to the design, material selection, fabrication, testing, and inspection of the vessel. Typically, the body of the vessel is low-alloy carbon steel, and inside surfaces in contact with primary coolant are clad with a layer of austenitic stainless steel or Inconel to minimize corrosion. The primary concern is that fast neutron bombardment may slowly raise the nil ductility transition temperature, below which the pressure vessel may suffer a significant loss of ductility, and that cooling of the vessel under pressure could develop cracks in the vessel. To minimize the potential problems associated with this pressurized thermal shock phenomenon (1), effort has been made in recent years to revise fuel loading patterns and to load dummy fuel elements as additional neutron shields. The possibility of annealing the vessel at radiation-sensitive welds to restore ductility is also under consideration. Typical steam generators in PWR plants comprise thousands of long tubes, either straight or U-shaped, depending on the type of heat exchanger design. The steam generators employ a tube-and-shell design, where the radioactive primary coolant flows within the tubes and heat is transferred across the tube wall to the nonradioactive feedwater flowing outside the tubes in the steam generator shell. Since failure of steam generator tubes provides a leakage path for radioactive nuclides from the primary loop to the secondary side, careful attention is given to the design of the steam generator and selection of the tube material. To minimize corrosion on the secondary side, where water purity is not as strictly enforced as on the primary side, nickel-based alloys, in particular Inconel 600, are often used for steam generator tubes in LWRs.
Over a period of time, the tubes may develop leaks or cracks due to flow-induced vibrations and high-cycle fatigue, or due to stress corrosion cracking (1), which results from a combination of mechanical and thermal stress and a corrosive chemical environment. Steam generator tube failures have been reduced through tighter water chemistry control, but PWR steam generators often operate with a number of leaky tubes plugged, and a few plants have undergone expensive replacements of entire steam generator units.
INTERACTION OF RADIATION WITH MATTER

The design and analysis of devices used in industrial and medical applications of radiation, as well as of nuclear power plant components, requires accurate representation of interactions of nuclear radiation with matter. For nuclear reactor physics analysis, the mechanisms for neutrons undergoing collisions with nuclei of core materials are of primary interest. For the study of radiation shields, interactions of both neutrons and γ-rays have to be accounted for, while the contributions from other ionizing radiations, including β- and α-particles, are minimal and are not considered. Our discussion on radiation interaction mechanisms focuses on neutron reactions, with only a brief review of γ-ray interactions with matter. Charged particle interactions play a major role in plasma physics and
controlled thermonuclear reactions and are not discussed here.

Radioactive Decay

For a sample of any radioactive species or nuclide, the number of particular radioactive nuclei decaying in unit time is proportional (5–8) to the number of the particular radioactive nuclei present at that instant. If N(t) is the number of the particular nuclei existing at t and the decay process is characterized by a proportionality constant λ, known as the decay constant, then the number −dN of the nuclei decaying in time interval dt around t is given by −dN = λN(t)dt, or equivalently we write:

dN(t)/dt = −λN(t).  (1)
Given the number N(0) of nuclei of the particular species present at t = 0, integration of Eq. (1) yields the number N(t) of the nuclei remaining at time t:

N(t) = N(0)e^(−λt).  (2)
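Equation (2) is simple to evaluate numerically; the sketch below uses an arbitrary decay constant (an illustrative value, not data for any specific nuclide) and anticipates the half-life t1/2 = ln 2/λ defined next:

```python
import math

def fraction_remaining(decay_constant, t):
    """N(t)/N(0) from Eq. (2): the fraction of nuclei still present at time t."""
    return math.exp(-decay_constant * t)

lam = 0.1                      # illustrative decay constant, 1/s
t_half = math.log(2.0) / lam   # time for the population to halve

print(f"t_half = {t_half:.3f} s")
print(f"fraction at t_half   = {fraction_remaining(lam, t_half):.3f}")        # -> 0.500
print(f"fraction at 2*t_half = {fraction_remaining(lam, 2.0 * t_half):.3f}")  # -> 0.250
```

Each additional half-life multiplies the remaining population by 0.5, as the printed fractions show.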
From Eq. (2), we obtain the half life t1/2 = ln 2/λ = 0.693/λ, defined as the time interval over which the number of radioactive nuclei is reduced to half its initial value, i.e., N(t1/2)/N(0) = 0.5. The activity of a radioactive substance is defined as λN, in units of either [Bq = 1 disintegration/s] or [Curie = 3.7 × 10¹⁰ Bq]. Since Eqs. (1) and (2) hold equally for a sample of unit volume, we consider N, from now on, as the number density of nuclei in units of [nuclei/cm³].

Neutron-Nucleus Reactions

We consider a simple experiment, where a collimated beam of neutrons of intensity I [neutrons/cm²·s] is incident uniformly on a slab of thickness x. The number of neutrons −dI suffering collisions in a thin layer dx of the slab per unit area per unit time will be proportional to the beam intensity I and the number Ndx of nuclei in unit cross sectional area of the layer exposed to the beam. With a proportionality constant for the interaction selected as σ, we obtain:

−dI = σIN dx,
(3)

or equivalently,

−dI/I = σ [cm²/nucleus] · N dx [nuclei/cm²].

The fraction −dI/I represents the fraction of the nominal slab cross sectional area that serves as the effective target area. This leads us to define the reaction probability σ as the microscopic cross section (6, 8), expressed in units of [barn = 10⁻²⁸ m²], together with the macroscopic cross section Σ = Nσ, in units of [cm⁻¹]. Equation (3) may be recast as:

−dI/dx = ΣI.  (4)
In analogy to Eq. (2), we may readily integrate Eq. (4) to calculate the intensity I(x) of the beam of neutrons that penetrate, without suffering any collision, the entire slab of thickness x:

I(x) = I(0)e^(−Σx).  (5)
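Equation (5) can be evaluated directly; in the sketch below, the macroscopic cross section is an arbitrary illustrative value, not data for a particular material:

```python
import math

def uncollided_fraction(Sigma, x):
    """I(x)/I(0) from Eq. (5): fraction of beam neutrons crossing a slab
    of thickness x (cm) without any collision; Sigma in 1/cm."""
    return math.exp(-Sigma * x)

Sigma = 0.25  # illustrative macroscopic cross section, 1/cm
for x in (0.0, 1.0, 5.0, 10.0):
    print(f"x = {x:4.1f} cm: uncollided fraction = {uncollided_fraction(Sigma, x):.4f}")
```

The uncollided intensity falls by a factor of e over each mean free path 1/Σ (4 cm for the value above).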
Equation (4) suggests that Σ represents the probability of neutron-nucleus reactions per unit distance of neutron travel. For a mixture of materials, Σ has to be constructed
as a sum of macroscopic cross sections calculated for constituent nuclides. The microscopic cross section σ is characteristic (6, 7) of each nuclide and is a function of reaction type, e.g., scattering, capture, fission, and depends heavily on the relative speed between the neutron and nucleus. A nuclear reaction of particular interest involves the formation of a compound nucleus when the neutron is absorbed by the target nucleus. When a compound nucleus is formed, the mass of the nucleus is less than the sum of the neutron mass and target nuclear mass. The compound nucleus is excited to an energy level equal to the sum of the energy corresponding to the mass defect and the neutron kinetic energy. The probability for the compound nucleus formation becomes markedly large when the excitation energy lies near a quantum level of the nucleus. When the compound nucleus (6) decays from the excited state to the ground state with the emission of a γ-ray, the reaction is called resonance absorption or radiative capture, often written as the (n, γ) reaction. The compound nucleus may also decay with the emission of a neutron in resonance elastic scattering. Neutrons with energies E > 0.1 MeV may experience inelastic scattering, whereby the neutron leaves the compound nucleus in an excited state, which subsequently decays to the ground state with γ-emission. Neutrons may also undergo potential scattering with nuclei as though the interacting particles are ordinary billiard balls undergoing elastic collisions. Neutrons may also undergo other types of reactions such as (n, 2n), (n, α), and (n, p) reactions. One absorption reaction of particular interest is nuclear fission, where the compound nucleus becomes so unstable that it immediately splits into two parts with the release of two or three neutrons. 
Some nuclides, including ²³³U, ²³⁵U, and ²³⁹Pu, allow fission with thermal neutrons and are known as fissile nuclides, while fertile nuclides, including ²³⁸U, need fast neutrons to induce fission. Approximately 200 MeV of energy is released per fission, including the energy recoverable from the radioactive decay of fission products. Many nuclides, especially those with high atomic number, exhibit a large number of complex and often overlapping resonances. The magnitude of resonance cross sections as well as the energy levels at which the resonances occur depends heavily on the nuclide, and neutron cross sections have to be experimentally determined. We illustrate the energy dependence of the absorption cross section σa for ²³⁸U in Fig. 3.

Figure 3. Absorption cross section of ²³⁸U plotted as a function of neutron energy. Note a large number of sharp resonances in the range of 6 eV to 150 eV, in particular, those at 6.7 eV, 20.9 eV, and 36.7 eV. (Courtesy of Academic Press, Inc.)

For scattering collisions, we have to measure not only the magnitude σs of the cross section, but also the changes in the energy and direction of neutron travel. For incident neutrons of energy E and direction of motion Ω emerging with energy E′ in direction Ω′, we determine the differential scattering cross section σs(E → E′, Ω → Ω′).

Gamma-Ray Interactions with Matter

Among several different ways γ-rays interact with matter, three mechanisms are of significance to radiation shielding and detection: photoelectric effect, pair production, and Compton scattering. In the photoelectric process, the γ-ray is absorbed by an atom and one of the orbital electrons is ejected from the atom. The photoelectric reaction probability is large for low-energy γ-rays, with E < 1.0 MeV. In the second type of γ-ray interaction, the photon interacts in the Coulomb field of a nucleus and produces an electron-positron pair. Since the rest-mass energy of two electrons is 1.02 MeV, pair production is possible for photon energy above this threshold, and the reaction probability increases rapidly as the photon energy increases further. In Compton scattering, the photon undergoes elastic scattering with an electron, and the reaction probability decreases as a function of photon energy. Accounting for all three γ-ray interaction mechanisms, we determine total macroscopic reaction cross sections for γ-rays in much the same way as we define Σ for neutron interactions. By tradition, macroscopic cross sections for γ-ray interactions are called the attenuation coefficients (12), with the symbol µ and in units of [cm⁻¹]. Gamma-ray interaction probabilities are written frequently in terms of the mass attenuation coefficient µ/ρ [cm²/g], where ρ is the physical density of the material. The particular form is often preferred because, for a given photon energy, variations in µ/ρ are small (13) for a variety of materials. Penetration of γ-rays through materials may be estimated through Eq. (5), with Σ replaced by µ. For simple shielding calculations, the uncollided beam intensity calculation of Eq. (5) has to be corrected for multiple photon interactions through an empirical factor, known as the buildup factor.

NEUTRON DIFFUSION THEORY

To derive a balance equation for neutrons moving around and undergoing collisions with nuclei of the surrounding medium, we extend Eq. (4) for the rate of neutrons suffering collisions in a collimated beam to the case of neutrons approaching the target in arbitrary directions. Since, for a collimated beam, the beam intensity I may be written as the number density n of neutrons, in units of [neutrons/cm³], times the speed v of neutrons, we define the neutron flux (6, 7) φ = nv, in units of [neutrons/cm²·s], where n now represents the number density of neutrons with arbitrary directions of motion. Then, the number of neutron reactions, ΣI, per unit cross sectional area of the slab, per unit time, and per unit distance of neutron travel for the beam may be extended to the reaction rate Σφ [number of reactions/cm³·s] by effectively collecting neutron reaction rates for all possible directions of neutron motion. The scalar flux φ = nv may also be interpreted as the total track length traveled in unit time by neutrons in unit volume regardless of their direction of motion; remembering that the macroscopic cross section represents the interaction probability per unit distance of neutron travel, we note that Σφ properly represents the desired reaction rate.

One-Group Neutron Diffusion Equation

In terms of the neutron number density n and flux φ, we may now set up a neutron balance equation by writing the time rate of change of the neutron population in unit volume as the difference between the rate of neutron production and the rate of neutron loss. With the absorption and fission cross sections Σa and Σf, respectively, together with the average number ν of neutrons produced per fission and the leakage rate written in terms of the current of neutrons J [neutrons/cm²·s], we obtain the balance equation:

∂n/∂t = νΣfφ − Σaφ − ∇·J.  (6)

Using Fick's law of diffusion, J = −D∇φ, with D = 1/(3Σtr), we rewrite Eq. (6):

(1/v) ∂φ(r, t)/∂t = D∇²φ(r, t) + (νΣf − Σa)φ(r, t).  (7)

The transport cross section Σtr = Σt − µ̄0Σs is not a physical cross section but is introduced as a convenient parameter in terms of the total cross section Σt, the scattering cross section Σs, and the average cosine of the scattering angle µ̄0. Equation (7) is perhaps the most useful form of neutron balance statement and is known as the single-speed or one-group neutron diffusion equation (6, 7).

Criticality Condition

For a critical reactor in steady-state operation, we write Eq. (7) as a wave equation:

∇²φ(r) + Bm²φ(r) = 0,  (8)

in terms of the material buckling Bm² = (νΣf − Σa)/D, which represents the curvature of the flux distribution φ(r). Standard separation-of-variables techniques yield a general solution to Eq. (7):
φ(r, t) = ∑_{n=0}^{∞} ψn(r)Tn(t).  (9)

Here, the spatial component of the solution results from an eigenvalue equation:

∇²ψn(r) + (Bm² + λn)ψn(r) = 0,  (10)

with the eigenvalues λn, n = 0, 1, 2, . . . , while the temporal solution Tn(t) = Tn(0)e^(−λn vDt) is obtained with the initial condition Tn(0). Equation (10), subject to conditions at the boundary, yields non-trivial eigenfunctions or spatial modes ψn(r) only for certain values, i.e., eigenvalues λn. If we arrange the eigenvalues in ascending order, we note that, for a reactor to be critical, i.e., for φ(r, t) to be constant in time, λ0 = 0 and all higher harmonics with λn > 0, n = 1, 2, . . . , will vanish in a short time. This means that, if we obtain the lowest eigenvalue Bg² of Eq. (10), corresponding to
the fundamental mode ψ0(r) with λ0 = 0, we should satisfy (7):

Bg² = Bm².  (11)
The eigenvalue Bg² is determined entirely by the geometry of the system, and hence is known as the geometrical buckling. Equation (11) is a succinct statement of criticality of a chain-reacting system: the lowest eigenvalue Bg² of the wave equation (10) equals the material buckling Bm² defined in Eq. (8). It is also clear that if Bg² > Bm², i.e., λ0 > 0, the system is subcritical, and the neutron flux will die away in due time. Likewise, if Bg² < Bm², i.e., λ0 < 0, the system becomes supercritical, resulting in an uncontrolled growth of the neutron population. Equation (8), together with the criticality condition of Eq. (11), yields:

νΣf − Σa = Bg²D, or

1 = νΣf/(Σa + DBg²) = (νΣf/Σa)·[Σa/(Σa + DBg²)] = k∞PNL = keff.  (12)

Here, we note that k∞ = νΣf/Σa yields the number of fission neutrons produced per neutron absorption and should properly represent the infinite multiplication factor. The term DBg² represents the neutron leakage rate relative to the absorption rate represented by Σa, and hence PNL = Σa/(Σa + DBg²) represents the non-leakage probability of neutrons in the reactor, yielding keff = k∞PNL. This is the most direct statement of neutron balance in a multiplying medium expressed in terms of one-group neutron diffusion theory. Even when the system is not exactly critical, i.e., when Bg² ≠ Bm², we may still wish to obtain an expression for the flux φ(r) as a solution to the eigenvalue equation (10), with the eigenvalue λ0 ≠ 0 and φ(r) = ψ0(r). Such a solution implies that the material composition and/or arrangement of the reactor should be adjusted until λ0 = 0 or Bg² = Bm². Alternatively, we may recast Eq. (10) for n = 0 in terms of a multiplicative eigenvalue λ:

−D∇²φ(r) + Σaφ(r) = (1/λ)νΣfφ(r).  (13)

Equation (13) is equivalent to Eq. (10), in the sense that there is an adjustable parameter introduced in either equation. Comparing Eqs. (8) and (13) shows that λ = keff and, when λ = 1, Eq. (13) again implies that the system has to be adjusted until a critical state is attained. Equation (13) is quite useful for non-critical systems, because we are able to obtain a solution φ(r), albeit with λ ≠ 1, and gain understanding of the degree of adjustments required to arrive at a critical configuration. Even without obtaining a precisely critical configuration, we may determine the relative changes in the eigenvalue or reactivity of the system due to perturbations in core parameters. Such perturbation calculations would not be possible without the introduction of the eigenvalue λ in Eq. (13), because Eq. (10) is an eigenvalue equation and renders a unique solution only if the criticality condition Bg² = Bm² is satisfied. Table 1 presents fundamental mode solutions of Eq. (10), corresponding to n = 0, for representative geometries. For a spherical mass of fissionable material, the critical radius Rc of the sphere can be estimated as Rc = π[D/(νΣf − Σa)]^(1/2). The
critical radius Rc applies to a bare reactor and, when the fissionable material is surrounded by a reflector, the corresponding critical radius will be less than Rc. We present in Table 2 number densities and microscopic cross sections for a PWR core fueled with UO2 containing 2.78 wt. % of ²³⁵U. We include ¹⁰B to represent boric acid dissolved in the coolant at a concentration of 2210 ppm by weight of natural boron in the water. From the data in Table 2, we obtain νΣf = 0.1570 cm⁻¹, Σa = 0.1532 cm⁻¹, D = 9.21 cm, k∞ = 1.025, and Bm² = 4.13 × 10⁻⁴ cm⁻², which yields, for a critical core with height H = 3.66 m, PNL = 0.975 and effective core radius R = 1.31 m. This compares favorably with a more accurate design calculation, R = 1.22 m.

Multi-Group Neutron Diffusion Equation

Although one-group diffusion theory provides many useful results both for steady-state and time-dependent behavior of a nuclear reactor, it cannot account for the energy dependence of neutrons as they undergo scattering collisions with core and reflector materials. To remedy this deficiency would require in general representing the slowing down or moderation of neutrons in terms of the neutron flux and reaction rates varying as an explicit function of the neutron energy. It would be necessary in particular to account for the absorption of neutrons, throughout the slowing down process, in resonances illustrated in Fig. 3. One approximate but practical approach to represent the energy dependence of the neutron population is to discretize the energy variable and establish a neutron balance statement in terms of a number of discrete energy groups (7). We consider a two-group formulation, which provides a sufficiently accurate representation of key phenomena of interest in many practical applications.
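Before moving to the two-group model, the one-group estimates quoted above for the bare cylindrical PWR core can be checked with a few lines of code, using the constants given in the text and the standard finite-cylinder buckling B² = (π/H)² + (2.405/R)²:

```python
import math

# One-group constants quoted in the text for the example PWR core
nu_Sigma_f = 0.1570  # nu*Sigma_f, 1/cm
Sigma_a = 0.1532     # Sigma_a, 1/cm
D = 9.21             # diffusion coefficient, cm
H = 366.0            # core height, cm

k_inf = nu_Sigma_f / Sigma_a         # infinite multiplication factor
B_m_sq = (nu_Sigma_f - Sigma_a) / D  # material buckling, 1/cm^2

# Criticality requires the geometrical buckling of the bare cylinder,
# (pi/H)^2 + (2.405/R)^2, to equal B_m^2; solve for the critical radius R.
R = 2.405 / math.sqrt(B_m_sq - (math.pi / H) ** 2)

# Non-leakage probability from the one-group criticality relationship
P_NL = Sigma_a / (Sigma_a + D * B_m_sq)

print(f"k_inf = {k_inf:.3f}")        # text quotes 1.025
print(f"B_m^2 = {B_m_sq:.2e} cm^-2")  # text quotes 4.13e-4 cm^-2
print(f"R     = {R / 100.0:.2f} m")  # text quotes 1.31 m
print(f"P_NL  = {P_NL:.3f}")
```

The script reproduces k∞ ≈ 1.025, Bm² ≈ 4.13 × 10⁻⁴ cm⁻², and R ≈ 1.31 m; it gives PNL ≈ 0.976 versus the quoted 0.975, the small difference being rounding.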
We set the boundary between the two energy groups so that practically all fission neutrons are emitted in the first group and the fast neutrons slow down from group 1 to become thermal neutrons in group 2. The source of neutrons in group 2 is entirely due to those escaping absorption in the fast group, and we allow both fast and thermal neutrons to induce the next generation of fissions. For many LWR applications, the group boundary is chosen typically at 0.625 eV. Introducing Σr to represent the slowing down of neutrons from group 1 into group 2, we extend Eq. (13) to obtain the two-group neutron diffusion equations (7):

−D1∇²φ1(r) + (Σa1 + Σr)φ1(r) = (1/k)[νΣf1φ1(r) + νΣf2φ2(r)],  (14)

−D2∇²φ2(r) + Σa2φ2(r) = Σrφ1(r).  (15)

Here we introduce, as in Eq. (13), an eigenvalue λ = keff = k into the fission source term so that we obtain a time-independent balance statement for a reactor which may not be exactly critical. Provided we generate accurate estimates of the two-group neutron cross sections, including the slowing down cross section Σr, Eqs. (14) and (15) could account for essentially all of the important aspects of neutron behavior in a chain-reacting system. We illustrate schematically in Fig. 4 two-group flux distributions in a reflected reactor and compare them with the corresponding one-group representation. Fast neutrons produced in the core from the fission process leak into the reflector, where they slow down into the thermal group and eventually return to the core to induce further fissions. This point is clearly indicated by the thermal flux peaking in the reflector, with a positive gradient of φ2 at the core-reflector interface. Since the current of neutrons in Eq. (6) is proportional to the negative gradient of flux, this positive gradient of φ2 at the core-reflector interface shows that there is a net current of thermal neutrons back into the core. This aspect of neutron slowing down and migration in a reflected reactor core obviously cannot be accounted for by one-group diffusion theory, which merely indicates, in Fig. 4, a monotonically decreasing flux distribution at the core-reflector interface.

Figure 4. Comparison of one-group and two-group flux distributions for a reflected slab reactor. (a) One-group flux distribution showing a monotonic decrease across the core-reflector interface; (b) Two-group flux distributions, indicating thermal flux peaking in the reflector.

With its ability to provide a much more accurate representation of neutron behavior in a nuclear reactor, two-group diffusion theory naturally provides an improved estimate of the critical mass or the effective multiplication
factor k. For a bare reactor, we assume the spatial flux distribution for each group is described by the wave equation (10) with a group-independent geometrical buckling B², which renders the partial differential equations (14) and (15) into a pair of algebraic equations. Dividing the fast-group equation by φ1 and replacing the flux ratio φ2/φ1 in the fission source term by the corresponding expression from the thermal-group equation, we obtain:

k = [1/(D1B² + Σa1 + Σr)]·[νΣf1 + ΣrνΣf2/(D2B² + Σa2)].  (16)

For an infinitely large reactor with B² = 0, Eq. (16) yields:

k∞ = νΣf1/(Σa1 + Σr) + [Σr/(Σa1 + Σr)]·(νΣf2/Σa2) ≡ k1 + k2,  (17)

where we recognize that k1 and k2 represent the contributions to k∞ from fast and thermal neutron fissions, respectively. For a finite reactor, Eq. (16) may be approximated by:

k ≈ k∞·[(Σa1 + Σr)/(D1B² + Σa1 + Σr)]·[Σa2/(D2B² + Σa2)] ≡ k∞PNLF PNLT = k∞PNL,  (18)

where PNLF and PNLT are the fast and thermal non-leakage probabilities, respectively, defined analogously to Eq. (12). The product of PNLF and PNLT yields the net non-leakage probability PNL, accounting for the slowing down of fast neutrons and the migration of neutrons in both groups. To amplify the physical interpretation of the two-group expression for k∞ in Eq. (17), we break up k2 by introducing the thermal absorption cross section ΣFa2 for fuel:

k∞ = νΣf1/(Σa1 + Σr) + [Σr/(Σa1 + Σr)]·(ΣFa2/Σa2)·(νΣf2/ΣFa2) ≡ k1 + pfη.  (19)
The parameter p = Σr/(Σa1 + Σr) represents the probability of fast neutrons escaping absorption during slowing down and is called the resonance escape probability. The ratio f = ΣFa2/Σa2 represents the fraction of thermal neutron absorptions taking place in fuel and is known as the thermal utilization, while the last ratio η = νΣf2/ΣFa2 describes the
number of neutrons released per thermal neutron absorption in fuel. Thus, for each thermal neutron absorbed in an infinitely large system, k2 yields the number of fission neutrons that are emitted, slow down without getting captured in group 1, and finally arrive in group 2. Hence, k2 represents the contribution to k∞ from thermal neutron fissions, and would be equal to k∞ if fast neutrons were not to cause any fission. Setting ε = 1 + k1/k2, we may interpret ε as a fast fission correction to k2, which would then put Eq. (19) in the form of the conventional four-factor formula k∞ = εpfη (5). Equation (17) is, however, preferable for LWR analysis, where k1 ≈ k2/3. Thus, in this case, ε cannot be considered a minor correction for fast fissions, as was the intent when it was first introduced as the fast fission factor in the early days of reactor development.
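These algebraic relationships, k2 = pfη and k∞ = εpfη, can be verified with a short script; the two-group constants below are made-up illustrative numbers, not design data:

```python
# Hypothetical two-group constants (illustrative values only)
nu_Sf1, nu_Sf2 = 0.008, 0.170  # nu*Sigma_f, fast and thermal groups, 1/cm
Sa1, Sa2 = 0.012, 0.120        # absorption cross sections, 1/cm
Sr = 0.025                     # slowing-down (removal) cross section, 1/cm
Sa2_fuel = 0.100               # thermal absorption cross section of fuel alone, 1/cm

k1 = nu_Sf1 / (Sa1 + Sr)                 # fast-fission contribution to k_inf
k2 = (Sr / (Sa1 + Sr)) * (nu_Sf2 / Sa2)  # thermal-fission contribution
k_inf = k1 + k2

p = Sr / (Sa1 + Sr)      # resonance escape probability
f = Sa2_fuel / Sa2       # thermal utilization
eta = nu_Sf2 / Sa2_fuel  # neutrons per thermal absorption in fuel
eps = 1.0 + k1 / k2      # fast fission correction factor

print(f"k_inf       = {k_inf:.4f}")
print(f"p*f*eta     = {p * f * eta:.4f}  (should equal k2 = {k2:.4f})")
print(f"eps*p*f*eta = {eps * p * f * eta:.4f}  (should equal k_inf)")
```

The identities hold because the fuel cross section cancels in the product f·η, and ε·k2 = k2 + k1 by construction.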
REACTOR KINETICS

To gain quantitative understanding of the dynamic behavior of a nuclear reactor, we now return to the one-group neutron diffusion equation (7), but with a slight modification to recognize that neutrons are released through radioactive decay of certain fission products as well as directly from the fission process. Hence, a fraction of the neutrons produced in the overall fission process appear with time delays; they are called delayed neutrons, while the fission products yielding delayed neutrons are called delayed neutron precursors. For 235U fission, the delayed neutron fraction β = 0.0065, with an effective decay constant λ = 0.08 s⁻¹, averaged over several (usually six) groups of delayed neutron precursors. In terms of the concentration C of delayed neutron precursors, Eq. (7) is now modified to:

(1/v) ∂φ(r, t)/∂t = D∇²φ(r, t) − Σaφ(r, t) + (1 − β)νΣfφ(r, t) + λC(r, t),  (19)

coupled with the balance equation for the precursor concentration:

∂C(r, t)/∂t = −λC(r, t) + βνΣfφ(r, t).  (20)

We introduce the approximation that the neutron flux and precursor concentration exhibit the same spatial dependence, described by the steady-state equation (13) with geometric buckling B², throughout a transient. This yields the point kinetics equations (7) describing the time dependence of the neutron number density n(t) and precursor concentration C(t):

dn(t)/dt = [(K(t) − 1)/Λ] n(t) + λC(t),
dC(t)/dt = −λC(t) + n(t)/Λ,  (21)

with the reactivity K and neutron generation time Λ defined as:

K = (k − 1)/(kβ),  Λ = ℓ/(kβ).  (22)

Here, k = keff defined in Eq. (12), while ℓ = [v(Σa + DB²)]⁻¹ represents the average time a neutron spends between its birth and loss due to either absorption or leakage. Both K and Λ are introduced in Eqs. (21) in units of β, and this unit of reactivity is known as the dollar. Reactivity is often expressed as ρ = (k − 1)/k, in units of [% Δk/k] or in other related units. Although the point kinetics equations (21) are derived in terms of the neutron number density n(t), we use n(t) conveniently to represent the flux, fission reaction rate, power density, or even total power output of the reactor, since any one of these quantities is proportional to n(t). When K = 1 dollar, the reactor is said to be at prompt criticality, which implies that the reactor is able to remain at steady state even without the help of delayed neutrons. This in turn suggests that, in practice with the delayed neutrons present, the power level will increase exponentially when K ≥ 1 dollar. For a step insertion of reactivity K0, we may obtain a solution to the point kinetics equations (21) by applying a Laplace transform and solving for the transform n(s) of the power n(t). The transform then is inverted to the time domain to yield two exponential terms with time constants:

s1 ≈ (K0 − 1)/Λ  and  s2 ≈ λK0/(1 − K0).  (23)

For a typical LWR configuration with λ = 0.08 s⁻¹ and Λ = 10⁻² s, corresponding to the unnormalized neutron lifetime ℓ = 6.5 × 10⁻⁵ s, if we introduce a step reactivity K0 = 0.5 dollars, we get s1 = −50 s⁻¹ and s2 = 0.08 s⁻¹. Similarly, for K0 = −1.0 dollar, we obtain s1 = −200 s⁻¹ and s2 = −0.04 s⁻¹. These simple examples illustrate that the first exponential term, corresponding to s1, will die away rapidly for reactivity insertions of practical interest, and the power level variations, after the initial transients, will be represented by s2. For |K0| < 1, s2 ≈ λK0, indicating that, for a small reactivity insertion, the e-folding time T of the power level variation is inversely proportional to K0 to a good approximation. A simple measurement of T = 1/s2, known as the reactor period, yields the inserted reactivity K0. This relationship is known as the inhour equation and forms the basis for routine reactivity measurements. Actual applications of the inhour equation, however, require a more accurate expression for T based on six groups of delayed neutron precursors, rather than Eq. (23) with one equivalent group of precursors.

Equation (23) shows that for K0 < −1 dollar, s2 ≈ −λ, which implies that when a reactor is shut down by inserting a large negative reactivity, the power level cannot decrease, after the initial transients, any faster than with a period 1/λ. In practice, this limiting shutdown period is governed by T ≈ 80 s, corresponding to the decay constant of the longest-delayed precursor group. Equation (23) indicates further that, for K0 > 1 dollar, i.e., for super-prompt critical transients, s1 will be positive and large, yielding an exponential increase of power output and essentially requiring no contributions from delayed neutrons. In practice, as the power level increases, the fuel and non-fuel materials heat up and the reactivity K(t) will decrease due to these temperature increases. The temperature feedback mechanisms are, in fact, an important inherent safety feature of LWRs and will be discussed further in connection with the temperature coefficients of reactivity.
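The two time constants of Eq. (23) can be checked numerically. A minimal sketch, using the typical LWR parameter values quoted above (Λ = 10⁻² s in dollar units, λ = 0.08 s⁻¹); the function name is just an illustration:

```python
# Time constants of the point kinetics solution for a step reactivity
# insertion, Eq. (23): s1 ~ (K0 - 1)/Lambda, s2 ~ lambda*K0/(1 - K0).
# Parameter values are the ones quoted in the text for a typical LWR.

LAMBDA_GEN = 1.0e-2   # neutron generation time Lambda in dollar units [s]
DECAY = 0.08          # effective one-group precursor decay constant [1/s]

def time_constants(k0_dollars):
    """Return (s1, s2) [1/s] for a step reactivity K0 given in dollars."""
    s1 = (k0_dollars - 1.0) / LAMBDA_GEN
    s2 = DECAY * k0_dollars / (1.0 - k0_dollars)
    return s1, s2

for k0 in (0.5, -1.0):
    s1, s2 = time_constants(k0)
    print(f"K0 = {k0:+.1f} $:  s1 = {s1:+.1f} 1/s,  s2 = {s2:+.3f} 1/s")
# K0 = +0.5 $:  s1 = -50.0 1/s,  s2 = +0.080 1/s
# K0 = -1.0 $:  s1 = -200.0 1/s,  s2 = -0.040 1/s
```

The printed values reproduce the worked numbers in the text, including the rapid decay of the s1 transient.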
Nuclear Reactors
Figure 5. Overall reactor physics calculational procedure, indicating computer codes employed in rectangular boxes and various data or databases in oval or rounded boxes. The coupling between MGD analysis and fuel depletion and T/H feedback calculations is illustrated, together with the cross section parameterization scheme.
REACTOR PHYSICS ANALYSIS AND CORE DESIGN

The multi-group neutron diffusion (MGD) equations, as illustrated by the two-group equations (14), are used quite extensively in routine nuclear reactor design tasks. The MGD equations may be solved in full three-dimensional geometry representing individual fuel rods, surrounding structural materials, and fluid flow. In many applications of the MGD equations, to lessen the computational requirements, a combination of two-dimensional and one-dimensional calculations may be utilized in a synthesis approach. Regardless of the details represented or the dimensionality retained in the core design analysis, such MGD calculations require macroscopic cross sections for individual materials or subregions of the core. The generation of multi-group cross sections, or multi-group constants as they are often called, consists of processing a compiled set of experimental data on neutron cross sections into a suitable discrete group structure, with due account given for the number density of every nuclide specified for the core and reflector regions. Figure 5 illustrates the entire process (14) of generating multi-group constants, including the thermal-hydraulic feedback and fuel depletion calculations that have to be incorporated in an overall reactor physics and core design analysis.

Neutron Cross Section Library

The primary source of neutron cross sections currently in use is the Evaluated Nuclear Data File, Part B, Version V or VI (ENDF/B-V or -VI). The ENDF cross section libraries (15–18) are generated by the National Nuclear Data Center (NNDC), located at Brookhaven National Laboratory. The Cross Section Evaluation Working Group monitors activities at the NNDC, which reviews raw cross section data collected in the ENDF, Part A, and compiles into Part B a single set of recommended cross section data in a consistent format. Various NNDC databases are available for online retrieval.
There are a number of other compiled neutron cross section libraries generated at cross section data centers in the U. S. as well as in other countries. The best known among them is the Joint Evaluated File (JEF) maintained (19) at Saclay, France, under the aegis of the Organization for Economic Cooperation and Development (OECD). The ENDF/B-V library was released around 1979 and is available as a three-volume book (15–17) in a combination
of tabulated and curve formats. The publication follows a long tradition of nuclear cross section data published under the Brookhaven report number, BNL-325, and is still informally referred to as BNL-325 or by its nickname, the barn book. The ENDF/B-VI library was released during the early 1990s and has been implemented in a number of reactor physics or neutronics computer codes. The release of the ENDF/B-VII library is expected before the end of 2006.

Cross Section Processing Codes

As indicated in Fig. 5, the processing of neutron cross section data into a multi-group structure takes two steps. First, the experimental data are processed and averaged over a number of fine-energy groups, where the averaging is performed with a set of approximate estimates of the neutron flux spectrum φ(E) used as weighting functions over a few broad intervals. For example, φ(E) is set equal to the energy spectrum χ(E) of fission neutrons for neutron energies E in the MeV range and to the Maxwellian distribution M(E) for E < 1.0 eV, together with φ(E) = 1/E for the intervening energy interval. The processed cross section data are then supplied to a lattice physics code, which accounts for the particular composition and geometry of each subregion or fuel assembly and produces microscopic or macroscopic cross sections suitable for global MGD analysis. Among several cross section processing codes available, the NJOY code (20) has gained popularity as a general tool applicable for a number of lattice physics codes. The number of fine groups selected in the processed cross section library varies anywhere between 30 and 2000 groups, depending on the requirements of the lattice physics code that the library is intended for. The fine-group library for the MC²-2 code (21) would be structured in 2000 groups so that the resonance cross sections and inelastic scattering cross sections of particular interest to LMR lattice physics analysis may be accurately represented.
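The flux-weighted averaging step described above can be sketched in a few lines. This is a toy illustration only: the 1/v-like cross section shape and the energy bounds are invented, and the 1/E weighting spectrum is the one mentioned in the text for the slowing-down range:

```python
import numpy as np

# Collapse of a pointwise cross section sigma(E) to one coarse group by
# flux weighting: sigma_g = int(sigma*phi dE) / int(phi dE).
# The sigma(E) shape is a made-up 1/v-like curve, not evaluated data.

def trapezoid(y, x):
    """Plain trapezoidal quadrature (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

E = np.logspace(0.0, 5.0, 2001)     # energy grid, 1 eV to 100 keV [eV]
sigma = 10.0 / np.sqrt(E)           # toy 1/v-like cross section [barn]
phi = 1.0 / E                       # 1/E slowing-down weighting spectrum

sigma_g = trapezoid(sigma * phi, E) / trapezoid(phi, E)
print(f"collapsed one-group cross section: {sigma_g:.3f} barn")
```

A real processing code performs this average over many fine groups and several spectral shapes (χ(E), 1/E, Maxwellian), but the weighting arithmetic is the same.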
Lattice Physics Codes

Once a set of fine-group cross sections is generated with an estimate of the space-independent φ(E) used as a weighting function, we wish to account for the specific composition and geometric details of each fuel assembly and determine the flux spectrum more accurately, which then allows us to collapse the fine-group cross sections into an MGD structure. In traditional lattice physics analysis, the entire set of 40 000 ∼ 50 000 fuel rods and the surrounding structures in an LWR core is represented by a few representative unit cells, each of which comprises a fuel rod and the surrounding moderator. In a unit cell analysis, the square lattice is cylindricized and the zirconium-alloy clad is homogenized with the pellet-clad gap. With a large number of identical fuel rods in the core, we assume that there is no net current of neutrons across the cell boundary. This idealized unit-cell construction allows us to determine a cell-average flux spectrum φ(E), while accounting for the spatial flux distribution across the fuel lattice for thermal neutrons with E ≤ 0.625 eV and the preferential absorption of fast neutrons in fuel resonances with E in the eV ∼ keV range. Each distinct fuel assembly may be represented by one unit cell, typically based on the average 235U enrichment
of the assembly. To account for extraneous materials, e.g., neutron absorbers, structural components, and extra water volumes associated with the instrumentation system, a fourth region, called the non-lattice region, is added to the three-region unit cell consisting of the fuel, clad, and moderator regions. This super-cell arrangement forms the basis for the well-known LEOPARD code (22) and its variants. Although the LEOPARD code performs, in a strict sense, only a zero-dimensional spectral calculation, it yields sufficiently accurate MGD constants for many LWR configurations and served as a primary tool for PWR lattice physics calculations for a couple of decades, until the late 1980s. The main limitation of the unit-cell lattice physics methodology lies in its approximate, ad hoc treatment of the non-lattice regions, especially when strong neutron absorbers are present in the region. Removal of this deficiency has resulted in the development of assembly-level lattice physics codes which solve an integral form (7) of the neutron transport equation. To derive the integral transport equation for the neutron flux φ(r) at position r in a reactor core of volume V, we assume that neutrons emerge isotropically, or equally distributed in all directions, from any fission or scattering event. In terms of the isotropic neutron source S(r′) and the transport kernel T(r′ → r), which yields the neutron flux at r due to a unit isotropic source of neutrons at r′, we obtain:
φ(r) = ∫V dr′ S(r′) T(r′ → r).  (24)
The transport kernel may be obtained by combining the exponential attenuation of Eq. (5) and the geometric attenuation of the flux of particles, isotropically released at the center r′ of a sphere and arriving at a spherical surface at the radius |r − r′|:

T(r′ → r) = exp(−Σt|r − r′|) / (4π|r − r′|²).  (25)
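A quick numerical sanity check on this kernel: integrating it over all space in spherical shells gives ∫₀^∞ exp(−Σt R) dR = 1/Σt, i.e., the total track length of a source neutron equals its mean free path. The value of Σt below is an arbitrary illustration:

```python
import numpy as np

# Integrate the transport kernel of Eq. (25) over all space:
#   int exp(-Sigma_t R)/(4 pi R^2) * 4 pi R^2 dR = 1/Sigma_t.
# The 4*pi*R^2 shell volume cancels the kernel's geometric factor.
# Sigma_t = 0.5 cm^-1 is an arbitrary illustrative value.

SIGMA_T = 0.5                        # total macroscopic cross section [1/cm]
R = np.linspace(0.0, 40.0, 400001)   # radius grid, out to 20 mean free paths
shell_integrand = np.exp(-SIGMA_T * R)
total = float(np.sum(0.5 * (shell_integrand[1:] + shell_integrand[:-1]) * np.diff(R)))
print(f"integral = {total:.4f} cm,  1/Sigma_t = {1.0 / SIGMA_T:.4f} cm")
```

The two printed numbers agree to the quadrature accuracy, confirming that the kernel is correctly normalized as an uncollided point-source flux.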
With Eq. (25) substituted into Eq. (24) and with S(r) written in terms of φ(r) for both the fission and scattering components, Eq. (24) forms the integral transport equation. In practice, the energy dependence has to be explicitly considered for the flux, source, and transport kernel, and the resulting integral equation for the energy-dependent flux φ(r,E) is discretized both in space and energy. The discretized equation is expressed in terms of the probability that neutrons produced in a subregion of the core will suffer collisions in another subregion. Hence, the integral transport approach to calculating φ(r,E) is often called the collision probability (CP) method. Although Eq. (24) is derived for an isotropic source S(r), we may account for, with sufficient accuracy, the anisotropy of source neutrons by replacing the total cross section Σt in Eq. (25) by the transport cross section Σtr of Eq. (7). Actual solution of the CP equations for the space- and energy-dependent neutron flux for a two-dimensional representation of distinct material regions in a fuel assembly, however, requires considerable computational effort. Hence, a combination of one- and two-dimensional CP formulations is used in the CPM-3 and CASMO-4 codes (23, 24) for both fast and thermal spectrum calculations at the assembly
level. The first step involves a one-dimensional fine-mesh, micro-group calculation for each of the distinct fuel and absorber rod types. Fine-group fluxes from the micro-group calculations yield macro-group unit-cell average cross sections for each rod in the assembly. This is followed by a two-dimensional CP calculation for the flux distribution using the coarse-mesh, macro-group constants, which represent in (x-y) geometry the actual locations of fuel rods and non-lattice regions of the assembly. The two-dimensional flux distributions are used, together with unit-cell flux distributions, to generate MGD constants averaged over the assembly.

Effects of Material Heterogeneities

Through a synthesis of fine-mesh, fine-group unit-cell calculations and a coarse-mesh, coarse-group assembly calculation, the CP formulations account for material heterogeneities, explicitly and with sufficient accuracy, both at the unit-cell and two-dimensional assembly levels. Material heterogeneities have to be explicitly considered (7) especially when the mean free path of neutrons is comparable to the characteristic dimension of such heterogeneities, as is the case for thermal neutrons in LWR cores. For neutrons in the eV ∼ keV range, the resonance absorption of neutrons is affected significantly by material heterogeneities. The thermal utilization f introduced in Eq. (19) has to be modified to represent the spatial flux distribution across the fuel assembly explicitly. Since the absorption cross section of fuel is usually much larger than those of non-fuel materials, thermal neutrons are preferentially absorbed in fuel, resulting in a lower thermal flux in the fuel region.
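The effect of this flux depression on the thermal utilization f can be illustrated with a two-region cell balance. All numbers below (cross sections, volumes, the depression factor) are invented for illustration only:

```python
# Heterogeneity effect on the thermal utilization f in a two-region
# (fuel + moderator) unit cell. Absorption rate in each region is
# Sigma_a * flux * volume; f is the fuel share of total absorption.
# All parameter values are hypothetical, for illustration only.

SIG_A_FUEL = 0.40    # macroscopic thermal absorption, fuel [1/cm]
SIG_A_MOD = 0.02     # macroscopic thermal absorption, moderator [1/cm]
V_FUEL, V_MOD = 1.0, 3.0   # region volumes per unit cell [cm^3]

def thermal_utilization(flux_fuel, flux_mod):
    """f = fuel absorption rate / total absorption rate in the cell."""
    a_fuel = SIG_A_FUEL * flux_fuel * V_FUEL
    a_mod = SIG_A_MOD * flux_mod * V_MOD
    return a_fuel / (a_fuel + a_mod)

f_homog = thermal_utilization(1.0, 1.0)   # flat flux: homogeneous limit
f_heter = thermal_utilization(0.8, 1.0)   # flux depressed in the fuel
print(f"f (homogeneous) = {f_homog:.3f},  f (heterogeneous) = {f_heter:.3f}")
```

Because the fuel flux is depressed while the moderator flux is not, the heterogeneous f comes out smaller than the homogeneous value, which is the point made in the surrounding text.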
Since the neutron reaction rate is given by the product of the macroscopic cross section and neutron flux, the fraction f of thermal neutrons absorbed in fuel is reduced in a heterogeneous unit cell compared with that in an equivalent homogeneous cell, where the flux distribution is uniform across the mixture of the fuel and non-fuel materials. Material heterogeneities similarly reduce the absorption of neutrons in fuel in the slowing down range, because the neutron flux is depressed in fuel due to large absorption resonances. This, however, has the opposite effect on k∞, since reduced resonance absorptions result in a significant increase in the probability that neutrons escape absorption during slowing down, i.e., the resonance escape probability p of Eq. (19). This increase in p, due to fuel lumping, typically exceeds the corresponding decrease in f in LWRs. Such was the case in the first critical chain-reacting system built by Enrico Fermi and his colleagues in 1942. In fact, only through a heterogeneous lattice consisting of natural uranium cylinders placed judiciously in graphite blocks, a concept classified during the war, was Fermi able to achieve a critical assembly. This is because a homogeneous mixture of natural uranium and graphite yields k∞ < 0.85, and even an infinitely large assembly comprising such a homogeneous mixture would have remained subcritical.

Overall Reactor Physics Calculation

We may assemble MGD constants generated through the lattice physics analysis for each distinct fuel assembly and
determine flux and power distributions through a global MGD calculation using Eqs. (14) or equivalent. With the power distribution P(r) obtained for a fresh load of fuel elements, we can proceed to calculate the amount of fuel consumed over a period of time. This provides new number densities N(r,t) for fuel nuclides, which are used in a new round of lattice physics and MGD calculations, as illustrated in Fig. 5. We define the fuel burnup (7, 25) E over an operating period Δt as a product of power density and Δt, and introduce a relationship between the fuel burnup E(r,t), at time t into a fuel cycle, and the time-dependent power distribution P(r,t):

∂E(r, t)/∂t = P(r, t).  (26)
The burnup equation (26) is typically integrated over time for each position r to yield E(r,t) in units of [MWd/kgU], corresponding to the power distribution calculated in units of [MW/kgU]. Likewise, given the power distribution P(r,t), we perform TH calculations to determine the temperature T(r) and density ρ(r) for fuel and non-fuel materials in the core for steady-state analysis, or the corresponding time-dependent distributions in transient analysis. The temperature and density data are used to update the number densities N(r,t) of every nuclide in the core, requiring another round of lattice physics analysis. In practical design analysis, the coupling between the lattice physics and MGD calculations, which accounts for both fuel depletion and TH feedback, becomes too costly and unwieldy. A table lookup approach (14, 26) is usually adopted to break up the coupling. Lattice physics calculations are first performed to generate a table of microscopic or macroscopic multi-group constants as a function of a few values of temperature T, density ρ, and fuel burnup E in the expected range of each variable. In MGD calculations, entries in the cross section table are interpolated to yield MGD constants corresponding to specific values of T(r,t), ρ(r,t), and E(r,t) at position r and at time t, coupled with fuel depletion and TH calculations. In BWR cores, control absorbers are actively utilized during full-power operation and the water density varies significantly throughout the core. Hence, for coupled nuclear-thermal-hydraulic (NTH) analysis of BWR cores, special attention has to be given to the cumulative effects of control and water density variations, in addition to the instantaneous values of the control and thermal-hydraulic variables.
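The table lookup scheme amounts to multilinear interpolation in the tabulated state variables. A minimal sketch in two of them, fuel temperature T and burnup E; every tabulated value below is hypothetical:

```python
import numpy as np

# Sketch of the cross-section table-lookup approach: group constants are
# pre-tabulated by lattice physics runs at a few fuel temperatures T and
# burnups E, then bilinearly interpolated during the coupled MGD /
# depletion / TH calculation. All tabulated values are hypothetical.

T_GRID = np.array([600.0, 900.0, 1200.0])   # fuel temperature [K]
E_GRID = np.array([0.0, 30.0, 60.0])        # burnup [MWd/kgU]
# thermal-group absorption cross section [1/cm], shape (T, E)
SIG_A = np.array([[0.100, 0.095, 0.088],
                  [0.102, 0.097, 0.090],
                  [0.104, 0.099, 0.092]])

def lookup(T, E):
    """Bilinear interpolation of the tabulated cross section."""
    i = int(np.clip(np.searchsorted(T_GRID, T) - 1, 0, len(T_GRID) - 2))
    j = int(np.clip(np.searchsorted(E_GRID, E) - 1, 0, len(E_GRID) - 2))
    tT = (T - T_GRID[i]) / (T_GRID[i + 1] - T_GRID[i])
    tE = (E - E_GRID[j]) / (E_GRID[j + 1] - E_GRID[j])
    return ((1 - tT) * (1 - tE) * SIG_A[i, j] + tT * (1 - tE) * SIG_A[i + 1, j]
            + (1 - tT) * tE * SIG_A[i, j + 1] + tT * tE * SIG_A[i + 1, j + 1])

print(f"sigma_a at T = 750 K, E = 15 MWd/kgU: {lookup(750.0, 15.0):.4f} 1/cm")
```

Production codes interpolate in more variables (coolant density, control state, etc.), but the lookup logic is the same.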
THERMAL-HYDRAULIC ANALYSIS FOR REACTOR CORES AND POWER PLANTS

Temperature and density distributions in a reactor core have to be calculated with a high degree of precision to ensure that the reactor operates with a sufficient margin and to properly account for the coupled NTH effects in both steady-state and transient conditions. The axial temperature distribution along the length of a fuel rod is coupled to the radial temperature distribution across the rod radius and to the axial temperature distribution of coolant water in the channel, as well as to those in other channels. We assume, however, that the coolant channels are decoupled from one another and introduce a closed, single-channel model (7, 11), which corresponds to the unit cell structure considered in the lattice physics analysis. We assume that the rod is infinitely long compared with its small radius and perform a one-dimensional radial temperature calculation for the rod. This is followed by a one-dimensional axial calculation for the coolant temperature distribution in a PWR coolant channel.

Figure 6. Radial temperature distribution across a fuel rod, illustrating a large temperature drop across the fuel pellet and fuel-clad gap.

Since the temperature rise across a fuel length of 3.6 m in a PWR core is 30∼60 K compared with 600∼1200 K across a radius of 5 mm, it is a reasonable approximation to neglect axial heat conduction in a fuel rod and concentrate on the radial heat conduction. For this one-dimensional heat conduction problem, we assume further that the entire fission energy of 200 MeV/fission is deposited in the fuel rod at a rate of S [kW/m³] and write the surface heat flux q [kW/m²] in terms of Fourier's law of heat conduction, q = −k∇T, where k is the thermal conductivity of the pellet in units of [kW/m · K]. Similar to the steady-state form of the diffusion equation (6), we obtain the steady-state heat conduction equation:

∇ · q = S,  or  −k∇²T = S.  (27)
Solving Eq. (27) in one-dimensional cylindrical geometry yields T(r) as a quadratic function of radius r. The linear heat generation rate P/L, i.e., the power produced per unit length of the fuel rod, is given as a function only of the fuel surface and centerline temperatures, and not of the fuel radius. The overall radial temperature distribution across a fuel rod is illustrated in Fig. 6 for a typical PWR design, showing the temperature variations across the pellet, clad, pellet-clad gap, and coolant volume near the fuel rod surface. For the coolant channel, with coolant mass flow rate W [kg/s], heat capacity Cp [kJ/kg · K], and wetted perimeter M [m], surrounding a fuel rod, we set up a steady-state energy balance for a length Δz of the channel by considering a coolant temperature rise ΔTc corresponding to surface heat flux q [kW/m²], to obtain WCpΔTc = qMΔz. Here, WCpΔTc represents the energy picked up by the fluid in traversing the distance Δz of the channel, which has to equal the heat transferred through the corresponding fuel rod surface area, MΔz. Rewriting the energy balance as

WCp dTc(z)/dz = Mq(z),  (28)
and recognizing that q(z) is proportional to the axial power distribution and to the axial neutron flux distribution φ(z) = cos(πz/H) of Table 1, we integrate Eq. (28) for Tc(z) for a channel of length H = 3.6 m, as shown in Figure 7.

Figure 7. Axial coolant and fuel surface temperature distributions along the length of a typical PWR fuel rod. The coolant temperature Tc(z) is proportional to an integral of the axial heat flux distribution q(z), while the fuel surface temperature Ts(z) is nearly proportional to q(z), adjusted slightly by Tc(z).

The axial temperature distribution within the fuel rod follows q(z), adjusted slightly by Tc(z), as illustrated by the pellet surface temperature Ts(z). For transient TH analysis (11), the energy balance equations for fuel rods and coolant channels have to be solved together in a fully coupled manner. Such coupled solutions would also be necessary in detailed steady-state TH analysis, often called subchannel analysis (11), where individual fuel rods and coolant channels are discretely represented for subregions of the core comprising several fuel assemblies.

The limiting TH conditions in LWR designs entail the design basis accident involving an instantaneous rupture of a primary coolant pipe with a diameter of approximately 1.0 m and the resulting loss of primary coolant. We assume that the severed sections of the pipe are misaligned from each other, so that coolant is lost from both sections, and that the rupture occurs in the cold leg of the primary piping. This postulated accident scenario (9) is known as the large-break or 200% loss-of-coolant accident (LOCA) and serves as the basis for the emergency core cooling system (ECCS) design. Since coolant lost from the cold leg cannot pick up any heat from the core, a cold-leg LOCA would result in greater heating of fuel elements and more severe damage to the core than a hot-leg LOCA. During a postulated LOCA, ECCS water will be injected through the unbroken pipe into the downcomer annulus between the reactor pressure vessel and core barrel, and will have to counter and quench the steam emerging from the overheated core. In the case of PWR accidents, overheating of steam generators must also be considered in determining the peak temperature and pressure during the accident.

Over the past three decades, a number of sophisticated power plant simulation models have been developed to represent the complex TH phenomena expected in postulated LOCAs in LWRs. These production TH codes (11, 27) include single- and two-phase fluid flow models of varying degrees of complexity and accuracy for a network of flow paths, as well as lumped-parameter models for key plant components, e.g., the steam generator and pressurizer. The simplest two-phase flow formulation uses the homogeneous equilibrium model (HEM), where the liquid and vapor phases are assumed to be in thermal equilibrium and travel with the same speed, while the more sophisticated two-fluid models represent the non-equilibrium thermodynamic conditions and distinct phase velocities of the liquid and vapor phases.

FUEL CYCLE ANALYSIS FOR NUCLEAR POWER PLANTS
Although the main objective of fuel management (25) in any nuclear power plant is to make efficient use of nuclear fuel and to ensure safe, reliable operation of the plant, there are a number of distinct steps that need to be considered preceding and following the production of energy. They are usually grouped into the front end of the cycle, including (1) mining of uranium ore, (2) milling and conversion of uranium to suitable forms, (3) isotope enrichment, and (4) actual fuel fabrication, and the back end of the cycle, including (1) storage of spent fuel, (2) reprocessing and refabrication of fuel, and (3) disposal of nuclear waste. In the United States, reprocessing of spent nuclear fuel is not currently performed, but efforts are underway to develop an underground repository for permanent disposal of spent nuclear fuel. The front and back ends of the cycle are also known as excore fuel management, in contrast to incore fuel management, which will be the focus of our discussion here.

Incore Fuel Management for LWR Plants

Incore fuel management (25) addresses the selection of design and operating parameters that impact nuclear fuel utilization. The placement of fuel rods and control absorbers in fuel assemblies is illustrated in Fig. 2(b), and the loading pattern for fuel assemblies in Fig. 2(a). Incore fuel management should include the specification of (1) fuel enrichment and control absorber designs, (2) the loading pattern for fuel assemblies within the core, (3) cycle length and refueling interval, and (4) overall control requirements and management strategy. LWR fuel designs may call for a uniform fuel enrichment for every rod in a fuel assembly or an enrichment varying within an assembly. In BWR assemblies, there are variations in the loading of control absorbers admixed with fuel over the axial length of a given rod as well as over different rod locations within an assembly.
The control absorbers, known as burnable absorbers, are loaded into LWR fuel assemblies to control the reactivity swing associated with fuel depletion and to flatten the core power distribution. The fuel assemblies are typically loaded in a modified checkerboard pattern, with fresh fuel assemblies loaded mostly near the periphery of the core. Selection of these design features and fuel management strategy is dictated primarily by the desire to flatten the power distribution throughout the core, thereby minimizing the peak fuel temperature and maximizing power output. Typically, LWR cores operate with a cycle length, or time interval between refueling operations, ranging anywhere from 12 to 24 months. At each refueling, one-third or one-fourth of the fuel elements are discharged, the remaining elements are shuffled, and new elements are loaded. Recently, increased attention has been given in fuel design and loading strategy to minimizing the neutron exposure of the pressure vessel, thereby reducing the degradation of this critical protection barrier in nuclear power plants.
Alternate Fuel Cycles

Currently, all LWRs in the United States operate with a once-through cycle based on UO2 fuel containing approximately 3 ∼ 5 wt. % of 235U. Through radiative capture, or the (n, γ) reaction, 238U is converted to 239Pu and other Pu isotopes during reactor operation. Spent nuclear fuel can be reprocessed to recycle the plutonium in the form of (U-Pu)O2 fuel, known as mixed oxide (MOX) fuel. A number of countries, in particular France and Japan, have made extensive use of MOX fuel in LWRs, thereby reducing the burden associated with spent nuclear fuel disposal and extracting extra energy out of the spent fuel. In addition to this basic uranium cycle, limited utilization has been made of alternate cycles involving Pu-U and U-Th fuel. In an LMR core, where a liquid metal, e.g., sodium, serves as coolant, the neutron flux spectrum stays hard, i.e., the bulk of fissions occur with high-energy neutrons, thereby allowing for an efficient conversion of 238U to Pu isotopes. An LMR core may produce more fissile plutonium, 239Pu and 241Pu, than it consumes, and such a reactor is known as a breeder. In yet another alternate fuel cycle, 232Th may be used as fertile material producing fissile 233U. In the Th-U cycle, the production of Pu nuclides may be reduced compared with the traditional once-through LWR uranium cycle. This feature could alleviate concerns regarding potential proliferation of nuclear weapons associated with plutonium recycling in LWRs and has contributed to a recent resurgence of interest in the Th-U cycle.
Estimation of Fuel Burnup

As defined in Eq. (26), fuel burnup E [MWd/kgU] is calculated as a parameter proportional to the energy produced in fuel. Recognizing that approximately 200 MeV of energy is released per fission and that 1 gram atom of U, weighing 0.238 kg, contains 6.022 × 10²³ U atoms, equal to Avogadro's number, we obtain E (MWd/kgU) = 939 fima. Here, fima is defined as the number of fissions per initial metal atom, with the word metal referring to heavy metal or actinides, i.e., nuclides with atomic number Z ≥ 89. For an LWR fuel cycle, fima may be written as a product of fifa, the number of fissions per initial fissile atom, and the enrichment e of fissile 235U. If we introduce F, representing the ratio of the total number of fissions to that occurring in the initial fissile 235U atoms, and determine the fraction β of initial 235U atoms fissioned, then fifa = Fβ, yielding E (MWd/kgU) = 939 fifa × e = 939 β F e. An LWR fuel element with e = 0.045 is discharged with β = 0.8, while 45% of the fissions occur in Pu averaged over a cycle, or F = 1.8, which yields fifa = 1.45, fima = 0.065, and E = 60 MWd/kgU. This fuel burnup corresponds to an LWR fuel batch discharged after 4.5 years of irradiation time. This simple analysis with fima ≈ 0.065 also implies that approximately 6.5% of the initial fuel loaded may undergo fission and produce energy in LWR cores. This may be contrasted with values of fima as large as 0.15 achievable in LMR designs utilizing (U-Pu-Zr) metallic fuel, indicating clearly that LMR designs, even without fuel reprocessing and recycling, could make significantly increased utilization of fuel resources.
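The burnup arithmetic above is easy to reproduce. A short sketch deriving the 939 MWd/kgU-per-fima conversion from first principles and then re-running the worked example:

```python
# Back-of-the-envelope burnup estimate from this section:
#   E [MWd/kgU] = 939 * fima,  with  fima = fifa * e = beta * F * e.
# The constants below rederive the conversion factor and then
# reproduce the worked LWR discharge example in the text.

MEV_PER_FISSION = 200.0
MEV_TO_MWD = 1.602e-19 * 1.0e6 / (1.0e6 * 86400.0)  # MeV -> J -> MWd
ATOMS_PER_KG_U = 6.022e23 / 0.238                   # Avogadro / (kg per gram atom)

# energy per fission [MWd] times atoms per kgU -> MWd/kgU per unit fima
e_per_fima = MEV_PER_FISSION * MEV_TO_MWD * ATOMS_PER_KG_U
print(f"E per unit fima: about {e_per_fima:.0f} MWd/kgU")  # close to 939

beta, F, e = 0.8, 1.8, 0.045  # 235U fraction fissioned, total/235U fission ratio, enrichment
fima = beta * F * e
print(f"fima = {fima:.3f},  E = {e_per_fima * fima:.0f} MWd/kgU")
```

The computed conversion factor and discharge burnup land within rounding of the 939 and 60 MWd/kgU figures quoted in the text.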
RADIOACTIVE WASTE DISPOSAL

Radioactive waste generated during the operation of a nuclear power plant and related fuel cycle activities is generally grouped (28) into high-level waste (HLW) and low-level waste (LLW). Another group of radioactive waste containing transuranics (TRUs), i.e., nuclides with atomic number Z > 92, has accrued from the weapons program. HLW refers to spent nuclear fuel or highly radioactive material resulting from fuel reprocessing, while LLW comprises contaminated clothing, tools, chemicals, and liquids that become radioactive during various phases of nuclear power plant operation and from medical procedures. The main concern behind radioactive waste disposal is the presence of long-lived radionuclides in spent nuclear fuel, although due care must be taken to dispose of LLW as well. A 1.0-GWe LWR plant operates with a fuel inventory of approximately 100 Mg, with one-third of the fuel discharged and reloaded every 18 months. For an initial fissile enrichment of 4.5 wt. % of 235U, the discharged fuel contains approximately 1 wt. % of 235U remaining and 1 wt. % of TRUs produced during the fuel lifetime. Plutonium makes up about 90% of the TRUs, with the remaining 10% comprising the minor actinides, i.e., Am, Np, and Cm nuclides. Primary nuclides of concern in the disposal or storage of unprocessed spent nuclear fuel are the actinides 239Pu, 240Pu, 237Np, 241Am, and 243Am, plus the fission products 99Tc, 129I, and 135Cs. For underground disposal of spent nuclear fuel, the risk these nuclides pose to the public should be analyzed in terms of the radioactive half-life, the radiation exposure or dose associated with the particular type and energy of radiation released, and the dissolution and transport properties of the species. With 104 LWRs operating in the U.S., the inventory of spent fuel accumulated by 2010 is expected to be 63 000 Mg.
This inventory of fuel would occupy a volume roughly equal to a football field with a depth of 3 m, although actual disposal would, of course, require a much larger space to allow for proper heat dissipation and engineered barriers. Current plans for HLW disposal focus on the underground repository under study at Yucca Mountain, in the vicinity of the Nevada nuclear weapons test site. Studies (29) have been made to explore the feasibility and advantages of reprocessing spent nuclear fuel as well as recycling and transmuting actinides and fission products in critical and subcritical reactor cores. Reprocessing is expected to improve the waste form so that the public risk associated with spent fuel disposal will be reduced. Spent fuel recycling for the purpose of waste disposal appears quite promising, especially in the hard neutron spectrum of LMR cores, but will require further engineering study. Argonne National Laboratory has been developing pyrochemical techniques (30), similar to common electrorefining processes, to reprocess metallic and oxide nuclear fuel. More recently, significant effort has been made to develop the UREX+ aqueous separation and reprocessing processes (31). The pyrochemical and UREX+ processes do not allow the separation of plutonium from highly radioactive fission products or other transuranic materials at any step of the reprocessing sequence, thereby minimizing the proliferation risk associated with spent nuclear fuel reprocessing.
Nuclear Reactors
Approximately 100–200 Mg of LLW, amounting to 500–1000 m3 in volume, is generated annually in a 1.0-GWe LWR plant. A considerable premium is placed on decreasing the volume of LLW, to reduce both storage space and disposal charges. Volume reduction of LLW by up to a factor of 10 may be achieved through a number of techniques, including compaction, evaporation, and incineration. The processed LLW is stored in above-ground facilities, either in the form of covered trenches or tumuli. Considerable effort will, however, be required in the future to clean up and manage mixed chemical-nuclear waste, including the TRU waste, from the nuclear weapons program.
PROBABILISTIC SAFETY ANALYSIS OF NUCLEAR POWER PLANTS

Safe and reliable operation has always played the most important role in the design and analysis of nuclear power plants, as exemplified by the multiple safety features and barriers installed to minimize the probability of accidents and the release of radioactive materials to the environment. Since it is not, however, possible to guarantee, under all circumstances, the operability of even highly reliable components, a probabilistic approach (10), known as probabilistic risk assessment (PRA), has been developed to estimate the risk associated with the failure of plant systems and components. Any estimates of such risk, usually calculated in terms of acute and chronic fatalities, are, however, subject to considerable uncertainties. Thus, PRA estimates of the risk due to operating a nuclear power plant should be used primarily (1) to see if the calculated risk appears acceptable, (2) to compare the risk of operating the plant with that of alternate energy sources or other nuclear plants, and (3) to determine if improvements in plant design or operating strategy should be made. The PRA technique (10) makes combined use of two semi-pictorial constructs, called fault and event trees, to estimate the probability of occurrence of rare events representing the failure of components with high reliability. An event tree follows a sequence of events starting from initiating failures through the stages of safety systems to be activated or processes to be invoked, with a success-failure binary branch constructed at each stage. Summing the probabilities associated with risk-significant sequences yields the overall risk of the system. The probability of failure at each branch point is calculated with a fault tree, which represents in Boolean logic the structural relationship between the failure of the system at the branch, considered the top event of the tree, and the components making up the system.
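As a minimal sketch of fault-tree quantification (the component names and probabilities below are hypothetical, not taken from any plant study), the top-event probability for independent basic events can be computed exactly by inclusion-exclusion over the minimal cutsets, i.e., the sets of basic events whose joint occurrence triggers the top event:

```python
from itertools import combinations

def top_event_probability(cutsets, p):
    """Exact top-event probability for independent basic events via
    inclusion-exclusion over minimal cutsets; taking the union of basic
    events in each combination applies the Boolean idempotence A*A = A."""
    total = 0.0
    for r in range(1, len(cutsets) + 1):
        for combo in combinations(cutsets, r):
            events = set().union(*combo)   # distinct basic events in this term
            term = 1.0
            for ev in events:
                term *= p[ev]
            total += (-1) ** (r + 1) * term
    return total

# Hypothetical fault tree: TOP = (pump AND valve) OR (pump AND sensor)
p = {"pump": 1.0e-3, "valve": 1.0e-2, "sensor": 5.0e-3}
cutsets = [{"pump", "valve"}, {"pump", "sensor"}]
print(top_event_probability(cutsets, p))
```

The rare-event approximation would simply sum the cutset probabilities (1.5 × 10^-5 here); the inclusion-exclusion term subtracts the small double-counted intersection.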
The component failures are treated as basic events of the tree contributing to the top event. Given the probability of each basic event, the determination of cutsets, i.e., the sets of basic events whose joint occurrence causes the top event, and the Boolean elimination of any redundancies between the cutsets yield the probability of the top event, supplying the desired branch probability to the event tree. Uncertainties in basic event probabilities, which are difficult to estimate for highly reliable components, are directly reflected in the top event probabilities and, eventually through the event tree, also in the risk calculated for
the entire system. A pioneering application of the PRA technique was made in the 1970s to assess the risk of operating one PWR and one BWR plant, which were intended to serve as surrogates for the entire population of U.S. nuclear power plants. The study (32), published as the U.S. Nuclear Regulatory Commission report WASH-1400, provided many valuable insights into nuclear power plant safety. In particular, WASH-1400 indicated that the probability of large-break LOCAs is rather small but that small-break LOCAs are much more likely to occur. This particular point was, in some way, validated only a few years later by the Three Mile Island accident (9) of 1979, which indeed was initiated as a small-break LOCA due to a valve failure. In this unfortunate accident, due to an incorrect diagnosis of the valve failure, operators turned off, during a critical period, the ECCS, which had been activated automatically as designed. This operator action resulted in meltdown of a large portion of the core, but with insignificant release of radioactivity to the environment. Following the catastrophic Chernobyl accident of 1986, which resulted from flagrant violations of safety procedures and basic design deficiencies, the U.S. Nuclear Regulatory Commission initiated a study (33) to determine the risk due to severe accidents in nuclear power plants. Included in the study was a comprehensive application of PRA techniques to five representative LWR plants, three PWRs and two BWRs, in the United States. Severe accident risk for each plant was determined by summing the product, (probability of accident) × (consequences of the accident), over all accidents leading to radiation release to the environment.
Figure 8 shows that the accident probability is further broken down into the probability P(I) of initiating events, the conditional probability P(D|I) of initiating events leading to core and plant damage, and the conditional probability P(A|D) of plant damage states leading to containment failures grouped in accident progression bins. The consequences of an accident are obtained by sequentially calculating the radionuclide release P(S|A) given containment failure and the health consequences P(C|S) associated with the radionuclide release. Each of the square boxes in Fig. 8 represents a set of event tree analyses, while the rounded boxes indicate event probabilities or consequences calculated through the event tree analyses. The severe accident risk study, released as NUREG-1150, has provided voluminous documentation on the characteristics and consequences of severe accidents, especially core meltdown accidents. The study suggests that the overall risk of operating the five representative plants is acceptable but that individual plant characteristics, rather than generic attributes, have to be specifically considered in risk studies. The uncertainties in the risk estimates are still quite large, especially because some of the failure probabilities used in the PRA study were determined through subjective expert judgment. One interesting illustration of the general usefulness of PRA studies is the identification of a balance-of-plant (BOP) system deficiency in the Zion PWR plant, which was promptly corrected even before the final NUREG-1150 report was released.
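The decomposition of Fig. 8 amounts to chaining conditional probabilities along each release sequence and summing probability × consequence over sequences; a sketch with entirely hypothetical numbers, in which the health-consequence factor P(C|S) is folded into a single consequence value:

```python
def severe_accident_risk(sequences):
    """Risk = sum over release sequences of
    P(I) * P(D|I) * P(A|D) * P(S|A) * consequence-given-release,
    mirroring the probability chain of Fig. 8 (illustrative only)."""
    risk = 0.0
    for s in sequences:
        prob = (s["P_I"] * s["P_D_given_I"]
                * s["P_A_given_D"] * s["P_S_given_A"])
        risk += prob * s["consequence"]
    return risk

# Two hypothetical sequences (per reactor-year; consequences in arbitrary units)
sequences = [
    {"P_I": 1e-2, "P_D_given_I": 1e-3, "P_A_given_D": 0.1,
     "P_S_given_A": 0.5, "consequence": 100.0},
    {"P_I": 1e-3, "P_D_given_I": 1e-2, "P_A_given_D": 0.05,
     "P_S_given_A": 0.2, "consequence": 500.0},
]
print(severe_accident_risk(sequences))
```

In an actual PRA each factor would itself come from event tree and fault tree analyses rather than from point values.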
Figure 8. Event tree structure of the NUREG-1150 PRA study, indicating sequential event tree analysis in square boxes, each of which represents the calculation of a conditional probability or consequence. Rounded boxes indicate event probabilities or consequences calculated in the PRA methodology.
INSTRUMENTATION AND CONTROL SYSTEMS IN NUCLEAR POWER PLANTS

Monitoring and controlling core reactivity is one of the major tasks in nuclear power plant operation. It is equally important to monitor and control the power distribution in the core so that safety margins are not compromised. We describe the principles of operation of representative radiation detectors, with emphasis on neutron detectors, and discuss how they are used in the nuclear instrumentation system to monitor the core reactivity and neutron population. The reactivity control system in a nuclear power plant handles long-term reactivity variations due to fuel depletion as well as those associated directly with power level changes.

Nuclear Instrumentation System

All devices that detect and measure ionizing radiation rely (34) on converting the products of radiation interactions in the detector volume into an electrical signal. For each γ-ray interaction with the gas or solid material in the detector volume, a free electron and an ion are produced, and the movement of this ion pair in an applied electric field is converted into an electrical current. Neutrons are detected by means of charged particles that are released as a result of neutron interactions within the detector volume. Detection of slow or thermal neutrons makes use of α-particles released through neutron absorption in 6Li or 10B. Through elastic scattering collisions with fast neutrons, hydrogen nuclei, i.e., protons, acquire part of the neutron energy, and the measurement of these recoil protons provides the desired signal for fast neutrons. Among the variety of neutron detectors using these basic approaches, gas-filled detectors are employed extensively in both incore and excore nuclear instrumentation systems in power plants, because they respond over a wide range of radiation intensity and offer sufficient resistance to radiation damage.
Another detector type unique to neutron detection is the self-powered neutron detector (SPND), in which electrons produced, either directly or indirectly, in neutron interactions with emitter materials are collected without an applied electric field. To provide continuous power level monitoring, both PWRs and BWRs use a set of neutron detectors covering broad power ranges but with different arrangements: PWR plants rely extensively on BF3-filled excore detectors, in contrast to the miniature incore fission detectors installed in BWR plants. Depending on the power level, excore neutron
detectors with different designs and characteristics are selected in the PWR instrumentation system to provide the necessary level of discrimination against the gamma background. Fission chambers used in the BWR system are gas-filled detectors which are lined with highly enriched uranium to increase the ionization current and thereby to enhance discrimination against the gammas. In a number of PWR plants, SPNDs, instead of fission chambers, are installed as part of the incore power distribution monitoring system. Fixed neutron detectors provide continuous information both on the core power distribution and power output. Movable fission chambers are employed to perform periodic calibrations of fixed incore and excore neutron detectors in LWR plants.
Reactivity and Power Distribution Control

Core reactivity K(t), defined in Eq. (24) in terms of keff, cannot be measured directly and has to be derived from the measurement of the power level variation n(t). Given n(t), we may determine K(t) either through the inhour equation or by inverting the point kinetics equations (21) for K(t). Since the response of a neutron detector is affected by the proximity of the source of reactivity perturbations, e.g., control rod movement, due care must be taken to ensure that the detector reading provides an accurate measure of the core-average neutron flux variation. For this reason, the point kinetics equations (21), derived with the assumption that the spatial flux distribution does not vary during a transient, have to be replaced by the time-dependent MGD equations, essentially combining Eqs. (14) and (20). Reactivity control in a nuclear power plant has to address short-term reactivity variations associated with power level changes, including emergency reactor shutdowns, and long-term effects due to fuel depletion and fission product buildup. The core power distribution has to be controlled and maintained so that the limitations (25) on peak fuel temperature or surface heat flux are not violated at any time during operation. In BWR plants, control blades inserted from the bottom of the core are responsible for the bulk of the reactivity and power distribution control tasks. Judicious control of the coolant mass flow rate is also used to vary vapor mass distributions throughout the core and complements the control absorber movement in performing the control tasks. The reactivity and power distribution control tasks in PWR plants are handled through a combination of control rod movement and variation in the concentration of boric acid dissolved in the coolant water. Although control rods are exercised during power level variations, the rods are kept essentially all the way out of the core during rated power operation.
The reactivity control requirements associated with fuel depletion and fission product buildup are reduced, in both PWR and BWR plants, by the use of burnable absorbers, either in the form of lumped neutron absorbers or absorber materials admixed in fuel.
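The inversion of the point kinetics equations for reactivity, mentioned above, can be sketched for a single delayed-neutron group; the kinetics parameters below are illustrative textbook-scale values, not plant data:

```python
BETA = 0.0065        # delayed neutron fraction (illustrative)
LAM = 0.08           # effective precursor decay constant, 1/s (illustrative)
GEN_TIME = 1.0e-4    # neutron generation time, s (illustrative)

def inverse_kinetics(n, dt):
    """Recover reactivity rho(t) from power samples n[k] by inverting the
    one-delayed-group point kinetics equations:
      dn/dt = (rho - beta)/Lambda * n + lam * c,  dc/dt = beta/Lambda * n - lam * c,
    assuming an initial steady state."""
    c = BETA * n[0] / (LAM * GEN_TIME)   # equilibrium precursor concentration
    rho = [0.0]
    for k in range(1, len(n)):
        dndt = (n[k] - n[k - 1]) / dt
        c += dt * (BETA / GEN_TIME * n[k - 1] - LAM * c)  # precursor balance
        rho.append(BETA + GEN_TIME * dndt / n[k] - LAM * GEN_TIME * c / n[k])
    return rho

# Sanity check: a flat power trace must reconstruct rho = 0 at every step
flat = [1.0] * 200
assert max(abs(r) for r in inverse_kinetics(flat, 0.01)) < 1e-12
```

A plant implementation would use six delayed groups and a measured, noisy n(t), but the structure of the inversion is the same.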
TEMPERATURE COEFFICIENTS OF REACTIVITY AND INHERENT REACTOR SAFETY

Among the many safety measures built into every operating nuclear reactor is an inherent safety feature associated with the temperature or power coefficients of reactivity. Reactivity coefficients (7) refer to changes in reactivity due to changes in power level or fuel and coolant temperatures, and every operating reactor should be designed so that an inadvertent increase in power level will not cause an uncontrollable increase in reactivity. All LWRs operating in the United States, and those designed according to U.S. technology, abide by this basic principle. One exception to this basic safety philosophy perhaps was the ill-fated Chernobyl reactor, where one particular reactivity coefficient, known as the void coefficient of reactivity, did not obey this fundamental guideline.

Temperature Coefficients of Reactivity

We invoke the two-group diffusion theory expression of Eq. (19) and approximate the core material as a mixture of fuel and moderator to discuss the key factors that control the reactivity coefficients in LWRs. In terms of the reactivity ρ = (k − 1)/k ≈ δk/k, we write the power coefficient of reactivity αp for PWRs in terms of the fuel temperature coefficient αf and the moderator temperature coefficient αm:

αp = (δk/k)/δP ≈ (∂ln k/∂Tf)(∂Tf/∂P) + (∂ln k/∂ρm)(∂ρm/∂Tm)(∂Tm/∂P) ≡ αf (∂Tf/∂P) + αm (∂Tm/∂P). (29)

We recognize here that a power level variation δP affects the reactivity through changes in the fuel temperature Tf and the moderator temperature Tm, and that Tm influences k primarily through the thermal expansion of water and the resulting change in water density ρm. For the ceramic UO2 fuel in LWRs, in the case of an increase in Tf, the bulk of the reactivity change is due to increased absorption in fuel resonances, known as Doppler broadening of resonances, with only minor contributions from the thermal expansion of the fuel. This ensures that the fuel temperature feedback is prompt and αf < 0.
For BWR designs, the moderator density term in Eq. (29) is replaced by a term representing changes in the core average vapor fraction Vm:

αp ≈ (∂ln k/∂Tf)(∂Tf/∂P) + (∂ln k/∂ln Vm)(∂ln Vm/∂P) ≡ αf (∂Tf/∂P) + αv (∂ln Vm/∂P), (30)
where αv is called the void coefficient of reactivity. We recognize that αm and αv are merely two different representations of the same moderator density effects on reactivity. If a power rise causes a decrease in ρm, the thermal utilization f of Eq. (19) will increase, because the number density of water, and hence the thermal absorption cross section Σa2 of the moderator, decreases. This increase in f is, however, accompanied by a decrease in the resonance escape probability p, since the reduction in water density decreases the ability to slow down fast neutrons and hence the slowing-down cross section Σr. Neglecting small changes in the two remaining parameters of k∞, η and ε, as well as in the nonleakage probability PNL of Eq. (18), we note that the net effect of the moderator temperature and density changes in LWRs is determined by the competition between the conflicting changes in f and p. Thus, for some moderator density, a peak in k∞ and keff will occur, as illustrated by the bell-shaped curve in Fig. 9. The left-hand half of the curve represents the under-moderated regime, where the water density does not allow optimal moderation of fast neutrons, while the right-hand half corresponds to the over-moderated regime. In a typical LWR design, the rated operating condition is chosen in the under-moderated regime, as marked in Fig. 9, so that any increase in Tm or decrease in ρm results in sliding down the keff curve. This implies that a PWR will have ∂ln k/∂ρm > 0 and hence αm < 0. Likewise, a BWR core operating in the under-moderated regime guarantees αv < 0. Combined with the inherently negative value of αf, negative values of αm and αv for PWRs and BWRs, respectively, always yield αp < 0, thus ensuring an automatic decrease in reactivity in the case of an inadvertent increase in power. Unfortunately, in the Chernobyl design, a positive value of αv was possible at low power with a small number of control rods inserted, and that is where the 1986 accident was initiated. This clearly illustrates the importance of maintaining proper design and operating parameters and, at the same time, should clarify the crucial point that a runaway power excursion is inherently impossible in any properly designed LWR plant.

Inherent Passive Safety Characteristics of Nuclear Reactors

This inherent safety characteristic of LWRs is extended further in LMR designs, where self-shutdown capability, even in the case of a primary sodium pump failure coupled with a scram failure, was demonstrated (35) at the 20-MWe Experimental Breeder Reactor Unit II (EBR-II) in 1986. In this type of under-cooling event, the resulting power transient is sufficiently slow that we may assume quasi-static neutronic behavior and consider the net reactivity δρ ≈ 0 during the transient.
Furthermore, the power transient primarily raises the fuel temperature, while the sodium coolant temperature is determined largely by the flow coastdown rate. This allows us to represent the reactivity balance in terms of αp decoupled from a coolant coefficient of reactivity αc:

δρ = (∂ln k/∂Tf)(∂Tf/∂P) δP + (∂ln k/∂Tc) δTc ≡ αp δP + αc δTc ≈ 0. (31)
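Equation (31) can be exercised numerically; the coefficient values in this sketch are purely illustrative assumptions, not LMR design data:

```python
def quasistatic_dP(alpha_p, alpha_c, d_Tc, d_rho_ex=0.0):
    """Asymptotic power change from the quasi-static reactivity balance
    d_rho = d_rho_ex + alpha_p * dP + alpha_c * dTc ~ 0, solved for dP."""
    return -(d_rho_ex + alpha_c * d_Tc) / alpha_p

# Under-cooling event: coolant heats up by 40 K, no external reactivity.
# With both coefficients negative, the balance drives the power down.
dP = quasistatic_dP(alpha_p=-2.0e-4, alpha_c=-5.0e-5, d_Tc=40.0)
print(dP)  # negative: the reactor settles at a lower power level
```

The same one-liner applied with a positive external insertion d_rho_ex shows the opposite design pull: a larger |alpha_p| then gives a smaller power rise.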
Since both αp and αc are negative, an under-cooling transient can be terminated at a low power level corresponding to δP < 0, even with a scram failure, provided that δTc > 0 results in an acceptable rise in sodium coolant temperature; this is exactly what was demonstrated in the EBR-II passive shutdown tests. The quasi-static analysis of Eq. (31) indicates that, to obtain the largest possible reduction in power, we desire to make αp as weakly negative as feasible, i.e., small in magnitude. This objective, however, should be contrasted with another objective we need to consider for a transient initiated by a positive reactivity insertion δρex. With a quasi-static reactivity balance δρ = δρex + αp δP ≈ 0, in order to minimize the power increase δP, we need to maximize the magnitude of the negative αp. This simple example illustrates that passive
Figure 9. The effective multiplication factor k plotted as a function of coolant water density in an LWR core. The dot in the undermoderated regime indicates typical LWR operating conditions, illustrating the inherent safety feature that any inadvertent overheating of the coolant results automatically in a decrease in k and hence a reduction in power output.
safety of nuclear power plants requires a careful balance between a number of conflicting objectives. This perhaps is merely one of many challenges that lie ahead for nuclear engineers in developing the next generation of nuclear power plants.

NUMERICAL SOLUTION OF THE MGD EQUATIONS

The basic approach for solving the MGD equations entails a standard finite-difference formulation (7) in each of the spatial dimensions represented. Since the coupling between the groups, as indicated in Eqs. (14), occurs through the source terms on the RHS of the MGD equations, the numerical solution of the MGD equations can proceed group by group, with the source terms and the core eigenvalue keff iteratively updated in a source or outer iteration. Finite-difference solution of the MGD equations in one-dimensional geometry requires the inversion of a tri-diagonal matrix group by group. Inversion of the matrix is usually known as the inner iteration, although tri-diagonal matrices are usually inverted, without iteration, through the Gaussian elimination algorithm comprising forward elimination and backward substitution steps. For two- and three-dimensional geometries, the inner iteration actually involves an iterative inversion of five- and seven-band matrices, respectively. Although fine-mesh MGD calculations may be performed to yield a three-dimensional power distribution across individual fuel pins in a large nuclear reactor core, such calculations still require considerable computational resources and are usually reserved for benchmark problems. Routine reactor physics analysis often involves coarse-mesh MGD calculations, generically known as nodal calculations (36), coupled with pin-to-pin power distributions obtained from CP calculations, e.g., with the CASMO code. In the nodal expansion method (NEM), which forms the basis for the SIMULATE code (37), polynomial expansions approximate the flux distributions within each node so that full-blown three-dimensional MGD solutions may be obtained with a coarse spatial mesh. When the coarse-mesh NEM solution φglobal for the global flux distribution is combined with the intra-assembly solution φform to form the pin-to-pin flux distribution φreactor, discontinuities are encountered in φreactor at assembly boundaries. This is because two adjacent assemblies, with distinct fuel and control absorber arrangements, entail different intra-assembly flux distributions φform, which would produce discontinuities in φreactor at the boundary between the assemblies if φglobal were obtained so as to preserve the continuity of flux at each mesh boundary. To avoid discontinuities in φreactor, an assembly discontinuity factor (ADF) is calculated (38) as the ratio of the assembly-boundary flux to the assembly-average flux at each assembly boundary. The ADFs are applied to NEM calculations, as an interface flux condition, so that a discontinuity is induced in φglobal at each assembly boundary, which then offsets the differences in φform and renders φreactor continuous across assembly boundaries. The ADF approach, combined with global NEM and assembly CP calculations, allows us to reconstruct sufficiently accurate pin-to-pin power distributions for LWR core configurations with minimal computational requirements. Further development will be required to apply this type of synthesis approach to other core configurations, especially with non-square assembly geometry, and to time-dependent problems.

NEUTRON TRANSPORT THEORY AND COMPUTATIONAL ALGORITHMS

Although neutron diffusion theory provides many valuable insights into neutron behavior in a chain-reacting system, we have recognized its limitations, especially in representing the material heterogeneities inherent at the unit-cell or assembly level. For this reason, we have discussed an integral form of the neutron transport equation in the CP lattice physics analysis.
We now consider the full-blown integro-differential form of the neutron transport equation and discuss the important role it plays in reactor physics and
numerical algorithms developed to solve the equation. We extend the concept of the neutron flux φ(r,t), which we now call the scalar flux, and define the angular flux ψ(r,Ω,t) in terms of the track length, similar to φ(r,t), but singling out neutrons traveling in direction Ω. This allows us to interpret ψ(r,Ω,t) as the neutron current relative to a unit surface area normal to Ω and substitute it for J in the balance Equation (6). Adding the rate of neutrons scattered out of the energy interval of interest to the absorption rate to obtain the total collision rate, and including the in-scattering rate of neutrons in the total source S(r,Ω,t), we obtain the Boltzmann neutron transport equation (7, 39):

(1/v) ∂ψ(r,Ω,t)/∂t = S(r,Ω,t) − Ω · ∇ψ(r,Ω,t) − Σψ(r,Ω,t), (32)

where we suppress the energy dependence for notational convenience and Σ = Σt. With the energy dependence included, the source term is given as an integral over both energy and directional variables, and Eq. (32) takes on the form of an integro-differential equation in seven variables: three in space, two in direction, energy, and time. Numerical solution of the transport equation was the primary motivation behind J. von Neumann's effort to develop computing machines during the Manhattan Project and still remains a challenge for super-computers. Equation (32), slightly modified to represent the proper interaction mechanisms, also describes the transport of photons, i.e., γ- or x-rays, in radiation shielding and medical applications. Numerical solution of Eq. (32) for time-dependent transport problems is still at a rather limited stage of development, but significant progress has been made over the past three decades in solving the steady-state form of Eq. (32). Computational algorithms for solving the transport equation (32) can be classified as either deterministic or stochastic.
Deterministic algorithms involve separation of variables techniques, discretization in the space of one or more variables, or a combination of both. Stochastic algorithms, often referred to as Monte Carlo algorithms, simulate a large number of neutrons undergoing collision and migration, and the mean behavior of the neutrons simulated yields the solution.

Deterministic Algorithms

We consider deterministic algorithms using a one-dimensional, steady-state form of Eq. (32), written in terms of µ, the cosine of the angle θ between the direction of neutron motion and the spatial coordinate z:

µ ∂ψ(z,µ)/∂z + Σψ(z,µ) = S(z,µ). (33)
In a separation of variables technique, known as the Pn method (7, 39), the angular flux ψ(z,µ), expanded as a function of Legendre polynomials Pm(µ), is substituted into Eq. (33), and a coupled set of ordinary differential equations is obtained for the expansion coefficients φm(z) by invoking the orthogonality properties of Pm(µ). The set of differential equations is truncated by retaining the angular dependence of the neutron population up to a certain order of anisotropy, i.e., by setting φm(z) = 0 for m > n. With the recognition that the scalar flux φ(z) = φ0(z) = ∫_{−1}^{1} dµ ψ(z,µ) and the current J(z) = φ1(z) = ∫_{−1}^{1} dµ µ ψ(z,µ), the two equations of the P1 approximation are identified as the diffusion Equation (6) and Fick's law of diffusion. The Pn equations are solved for φm(z) through discretization schemes similar to those for the neutron diffusion equation. Another approach to solving Eq. (33), called the Sn or discrete ordinates method (38), entails calculating the angular flux for a few discrete values of µ and approximating the integral ∫_{−1}^{1} dµ ψ(z,µ) by a summation Σn wn ψ(z,µn) in terms of a suitable set of quadrature weights wn.
in terms of a suitable set of quadrature weights wn . For discrete direction µn , approximating the derivative by a first-order difference over a mesh interval zj = zj+1/2 − zj−1/2 yields µn
ψ(zj+1/2 , µn ) − ψ(zj−1/2 , µn ) + ψ(zj , µn ) = S(zj , µn ), (34) zj
where the cell-center flux ψ(zj, µn) has to be obtained as a function of the mesh-boundary fluxes ψ(zj−1/2, µn) and ψ(zj+1/2, µn). In the diamond differencing scheme, a simple arithmetic average is used: ψ(zj, µn) = 0.5 [ψ(zj−1/2, µn) + ψ(zj+1/2, µn)]. Given the source term S(zj, µn), Eq. (34) is solved for the mesh-boundary fluxes, following the direction of neutron travel for each µn. To avoid numerical difficulties, including negative values of the flux, that are encountered in diamond differencing, a number of alternate high-order schemes have been developed. One popular scheme, called the linear discontinuous scheme, approximates ψ(z, µn) for each µn by a linear function which is discontinuous at the mesh boundaries. In this scheme, two difference equations, similar to Eq. (34), are solved for each spatial cell and each discrete direction. Once the angular flux ψ(z, µn) is obtained through diamond differencing or alternative approaches, the source term S(zj, µn) may be updated with the latest estimate of ψ(z, µn) and the process repeated until convergence is reached. In this traditional source iteration method, the convergence can be slow, since the spectral radius ρ, i.e., the largest magnitude among the eigenvalues of the governing iteration matrix, is equal to the scattering ratio c = Σs/Σ. To overcome this difficulty, a number of alternate iteration schemes have been developed. In the diffusion synthetic acceleration scheme (39, 40), the source iteration is accelerated by combining the discretized solution for ψ(z, µn) with a consistently discretized solution for a low-order approximation, usually diffusion theory or a low-order Pn formulation. Significant acceleration can be attained in this synthetic approach, with the spectral radius ρ reduced to approximately 0.23c, for slab-geometry transport problems.
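The diamond-difference sweep with source iteration described above can be condensed into a short one-group slab solver; this is a didactic sketch with vacuum boundaries and an isotropic, flat external source, not a production code:

```python
import numpy as np

def sn_slab(sigma_t, sigma_s, q0, width, nz, n_ang=8, n_iter=200, tol=1e-8):
    """One-group Sn source iteration in a slab with diamond differencing.
    q0 is a flat external source; vacuum boundaries on both faces."""
    mu, w = np.polynomial.legendre.leggauss(n_ang)  # quadrature nodes/weights
    dz = width / nz
    phi = np.zeros(nz)
    for _ in range(n_iter):
        q = 0.5 * (sigma_s * phi + q0)   # isotropic source per unit mu
        phi_new = np.zeros(nz)
        for n in range(n_ang):
            a = abs(mu[n]) / dz
            psi_in = 0.0                 # vacuum: no incoming angular flux
            cells = range(nz) if mu[n] > 0 else range(nz - 1, -1, -1)
            for j in cells:              # sweep along the direction of travel
                # diamond relation: cell-centre flux is the edge average
                psi_c = (q[j] + 2.0 * a * psi_in) / (sigma_t + 2.0 * a)
                psi_in = 2.0 * psi_c - psi_in  # outgoing edge feeds next cell
                phi_new[j] += w[n] * psi_c     # quadrature sum for scalar flux
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new
        phi = phi_new
    return phi

# Thick purely absorbing slab with a flat source: interior flux -> q0/sigma_t
phi = sn_slab(sigma_t=1.0, sigma_s=0.0, q0=1.0, width=50.0, nz=500)
print(round(float(phi[250]), 4))
```

With scattering added (sigma_s > 0), the same loop converges more slowly, in line with the spectral radius c = Σs/Σ noted above.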
Monte Carlo Algorithms

By sampling sequences of random numbers, Monte Carlo algorithms (38) simulate individual particles that follow the physical laws of particle interaction and transport, as represented by Eq. (32), without the need to discretize any of the spatial, energy, or direction variables. Monte Carlo algorithms offer the potential to provide accurate solutions for transport problems with complex geometries and material heterogeneities, with the solution accuracy limited only by the computing resources at our disposal. With rapid
advances made in computer hardware, there has been a significant increase in the popularity of Monte Carlo algorithms in both neutron and photon transport applications. This increased popularity owes in no small measure to the versatility that the MCNP5 code (41) offers: (1) simple description of complex geometries using well-defined surfaces, (2) separate or coupled neutron and gamma transport calculations, and (3) cross section libraries in continuous energy structure, rather than in a discrete group formulation. Although the accuracy of Monte Carlo calculations is limited by the number of particle histories simulated, the MCNP code running on workstations provides acceptable accuracy for many practical calculations, especially criticality calculations, where the eigenvalue is determined as an aggregate over the full collection of particle histories. Local flux or reaction rate calculations may, however, suffer from the statistical fluctuations inherent in Monte Carlo calculations, especially in deep-penetration shielding problems.
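A toy analog Monte Carlo calculation illustrates the stochastic approach; this is a one-speed slab sketch with isotropic lab-frame scattering assumed, and it is not meant to mimic MCNP:

```python
import math, random

def slab_transmission(sigma_t, c_scatter, thickness, n_hist=200_000, seed=12345):
    """Analog Monte Carlo: monoenergetic neutrons normally incident on a slab.
    Free paths are sampled from the exponential distribution exp(-sigma_t * s);
    at each collision the neutron scatters with probability c_scatter,
    otherwise it is absorbed."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_hist):
        z, mu = 0.0, 1.0
        while True:
            # sample the distance to the next collision along direction mu
            z += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if z >= thickness:
                transmitted += 1
                break
            if z < 0.0:
                break                       # leaked back out the entry face
            if rng.random() >= c_scatter:
                break                       # absorbed at the collision site
            mu = 2.0 * rng.random() - 1.0   # isotropic scattering (lab frame)
    return transmitted / n_hist

# Pure absorber: transmission should approach exp(-sigma_t * thickness)
t = slab_transmission(sigma_t=2.0, c_scatter=0.0, thickness=1.0)
print(t, math.exp(-2.0))
```

For a purely absorbing slab the transmitted fraction converges to exp(−Σt d), providing a simple statistical check; the deep-penetration difficulty mentioned above shows up here as the rapidly growing number of histories needed as the slab thickens.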
ADVANCED REACTOR DESIGNS AND CHALLENGES FOR NUCLEAR ENGINEERS

In spite of the proven safety record of LWR plants, both PWR and BWR, effort is continuing to develop new reactor and plant designs that reflect lessons learned from the current generation of power reactors. These advanced reactor designs cover a number of different features that may be classified as evolutionary in nature, as well as those representing more radical changes and providing enhanced passive safety characteristics. For example, several power plants featuring two evolutionary LWR designs (42), the ABB System 80+ and the General Electric Advanced BWR (ABWR), have been operating in Korea and Japan, respectively, for the past several years. Enhanced safety is clearly the focus of these new designs, which include improved ECCS features and an alternate emergency power source for the System 80+ and the installation of internal recirculation pumps for the ABWR. The elimination of external recirculation pumps, which are used in all BWR plants operating in the U.S., is expected to substantially reduce the likelihood of LOCAs in the ABWR. One key example of advanced power plant designs is the AP1000 plant, which offers, with a rated power of 1100 MWe, enhanced passive safety characteristics and competitive economics. The design includes passive features for safety injection of coolant during a LOCA, residual heat removal, and containment cooling. For example, cooling of the containment structure, both inside and outside, by natural circulation is effectively used. The design also features an increased size, and hence a larger coolant inventory, for the pressurizer and an increased reservoir of coolant in the in-containment refueling water storage tank. The AP1000 design received final design certification (43) from the U.S. Nuclear Regulatory Commission in December 2005, and utility companies will be able to expedite the process of combined construction and operating license applications with the certified design.
The AP1000 design certification process required nearly two decades of development, starting with its predecessor AP600, and a cumulative expenditure of $600 million by Westinghouse Electric Company. The total generation cost, including capital,
operating and maintenance, and fuel costs, is calculated to be $0.03–0.035/kWh of electricity for a twin-unit AP1000 plant.

Significant effort has been under way in the U.S. to develop a new breed of nuclear power plants, known as Generation IV plants, which could meet the demand for clean, economical electricity in the twenty-first century. The advanced reactor designs discussed above, including the System 80+, ABWR, and AP600, are classified as Generation III plants, in contrast to the Generation II plants comprising conventional PWR, BWR, and other plants operating currently in the U.S. and elsewhere. The AP1000 is the first Generation III+ design to complete the design certification process. Other Generation III+ designs under development include General Electric Company's ESBWR (44) and Areva's US-EPR (45), which offer power ratings around 1500 MWe. The Generation IV initiative (46) is built around innovative designs that will (a) increase economic competitiveness, (b) enhance safety and reliability, (c) minimize radioactive waste generation, and (d) increase nuclear proliferation resistance. Under the leadership of the U.S. Department of Energy (DOE), a multinational study was performed to develop the Generation IV roadmap (46) and to select the six most promising systems for detailed design and development. Among the six designs included in the roadmap, the DOE has chosen to focus on the very-high-temperature gas-cooled reactor (VHTR) and the sodium-cooled fast reactor (SFR) for development in the U.S. The VHTR design has the capacity to heat the helium coolant to temperatures in excess of 1100 K, suitable for the generation of hydrogen via dissociation of water. On the other hand, the SFR, operating with neutron energies around 0.1 MeV, offers the best potential for transmuting all transuranic elements, not just plutonium, from the LWR irradiated fuel inventory.
In consideration of this potential, the DOE initiated in February 2006 the Global Nuclear Energy Partnership (GNEP) (47), a focused effort to develop SFR transmuters together with pyroprocessing and UREX+ aqueous processing technology (30, 31). The GNEP will actively pursue the reprocessing and recycling of LWR spent fuel, thereby significantly reducing the burden on the planned Yucca Mountain repository. The initiative also proposes that supplier nations, including the U.S., provide slightly enriched LWR fuel to user nations, with the promise to take back spent fuel for reprocessing and recycling. This is a bold initiative that offers the potential to minimize the incentive to develop indigenous uranium enrichment facilities in every country that deploys nuclear power plants. Incorporating advanced passive safety features in new innovative designs, the next generation of nuclear power plants is expected to be highly competitive in the world energy market.

Realization of the goals enunciated for the Generation IV and GNEP initiatives will present new challenges to nuclear engineers. Power plant designs, including fuel, coolant, and engineered safety systems, should be optimized systematically. Many of the design approaches and computer code packages need considerable updating and improvement, both to increase the accuracy of design calculations in general and to accommodate passive safety features in particular. It has become increasingly necessary to represent the dynamics of power plant systems in a fully integrated manner, which will require the development of efficient, verifiable super-system models in the near future. Similar challenges have to be met in the reactor physics arena. Improved fuel economics and increased dependence on reactivity coefficients for passive safety, especially for SFRs, demand substantial enhancement of the subgrid modeling and synthesis approaches that characterize the reactor physics methodology in use today. Substantial effort will also have to be made to improve and optimize the methodology for fuel management and operations support systems. Another challenge is the safe disposal of radioactive waste, even with Generation IV plants. This will require parallel effort to establish safe disposal sites and to reprocess and recycle all transuranic materials via a synergistic use of LWR and SFR power plants. U.S. nuclear power plants have achieved an impressive record of safe operation and low electricity generation cost in recent years, especially as a result of the formation of large operating companies owning as many as 20 nuclear plants each. Coupled with the increasing worldwide need for clean, non-carbon-emitting energy, this record creates considerable expectation that new Generation III+ plants will be ordered in 2007 or 2008, leading to successful deployment of Generation IV plants over the next 20–30 years.
CROSS-REFERENCES: See (1) Fusion Reactors and (2) Nuclear Power Plant Design
BIBLIOGRAPHY

1. F. J. Rahn, A. G. Adamantiades, J. E. Kenton, and C. Braun, A Guide to Nuclear Power Technology, New York: Wiley, 1984.
2. R. A. Knief, Nuclear Engineering, 2nd ed., Washington: Hemisphere Publishing, 1992.
3. A. V. Nero Jr., A Guidebook to Nuclear Reactors, Berkeley, CA: University of California Press, 1979.
4. D. Testa (ed.), The Westinghouse Pressurized Water Reactor Nuclear Power Plant, Pittsburgh, PA: Westinghouse Electric Corporation, 1984.
5. R. L. Murray, Nuclear Energy, 4th ed., Oxford: Pergamon, 1993.
6. J. R. Lamarsh, Introduction to Nuclear Engineering, 2nd ed., Reading, MA: Addison-Wesley, 1983.
7. J. J. Duderstadt and L. J. Hamilton, Nuclear Reactor Analysis, New York: Wiley, 1976.
8. R. A. Serway, C. J. Moses, and C. A. Moyer, Modern Physics, Philadelphia: Saunders College Publishing, 1989.
9. J. G. Collier and G. F. Hewitt, Introduction to Nuclear Power, Washington, DC: Hemisphere Publishing, 1987.
10. N. J. McCormick, Reliability and Risk Analysis, Orlando, FL: Academic Press, 1981.
11. N. E. Todreas and M. S. Kazimi, Nuclear Systems I: Thermal Hydraulic Fundamentals; II: Elements of Thermal Hydraulic Design, New York: Hemisphere Publishing, 1990.
12. H. Goldstein, Fundamental Aspects of Reactor Shielding, Reading, MA: Addison-Wesley, 1959.
13. Reactor Physics Constants, ANL-5800, 2nd ed., Argonne, IL: Argonne National Laboratory, 1963.
14. P. J. Turinsky, Overview of reactor physics calculations, in Y. Ronen (ed.), CRC Handbook of Nuclear Reactors Calculations, Vol. III, 210–231, Boca Raton, FL: CRC Press, 1986.
15. S. F. Mughabghab, M. Divadeenam, and N. E. Holden, Neutron Cross Sections, Vol. 1, Neutron Resonance Parameters and Thermal Cross Sections, Part A, Z = 1–60, New York: Academic Press, 1981.
16. S. F. Mughabghab, Neutron Cross Sections, Vol. 1, Neutron Resonance Parameters and Thermal Cross Sections, Part B, Z = 61–100, Orlando, FL: Academic Press, 1984.
17. V. McLane, C. L. Dunford, and P. F. Rose, Neutron Cross Sections, Vol. 2, Neutron Cross Section Curves, San Diego, CA: Academic Press, 1988.
18. P. R. Rose (ed.), ENDF/B-VI Summary Documentation, ENDF-201, Upton, NY: Brookhaven National Laboratory, 1991.
19. P. J. Finck and C. Nordborg, The JEF evaluated data library—current status and future plans, Trans. Am. Nucl. Soc., 73: 422–424, 1995.
20. R. E. MacFarlane and D. W. Muir, The NJOY Nuclear Data Processing System, Version 91, LA-12740-M, Los Alamos, NM: Los Alamos National Laboratory, 1994.
21. H. Henryson II, B. J. Toppel, and C. G. Stenberg, MC2-2: A Code to Calculate Fast Neutron Spectra and Multigroup Cross Sections, ANL-8144, Argonne, IL: Argonne National Laboratory, 1976.
22. L. E. Strawbridge and R. F. Barry, Criticality calculations for uniform water-moderated lattices, Nucl. Sci. Eng., 23: 58–73, 1965.
23. D. B. Jones and K. E. Watkins, CPM-3 Computer Code Manual, EPR-CPM-001-M-002, Vol. 2, Rev. A, Palo Alto, CA: Electric Power Research Institute, 2000.
24. M. Edenius, K. Ekberg, B. H. Forssén, and D. Knott, CASMO-4, A Fuel Assembly Burnup Program, Studsvik/SOA-93/1, Newton, MA: Studsvik of America, 1993.
25. H. W. Graves Jr., Nuclear Fuel Management, New York: Wiley, 1979.
26. S. E. Aumeier, J. C. Lee, D. M. Cribley, and W. R. Martin, Cross-section parameterization using irradiation time and exposure for global depletion analysis, Nucl. Technol., 108: 299–319, 1994.
27. RELAP5/MOD3.3 Code Manual, Volume 1: Code Structure, Systems Models, and Solution Methods, NUREG/CR-5535, Rev. 1, Washington, DC: U.S. Nuclear Regulatory Commission, 2001.
28. N. Tsoulfanidis and R. G. Cochran, Radioactive waste management, Nucl. Technol., 93: 263–304, 1991.
29. N. C. Rasmussen, J. Buckham, T. J. Burke, G. R. Choppin, M. S. Coops, A. G. Croff, E. A. Evans, D. C. Hoffman, H. K. Forsen, G. Friedlander, B. J. Garrick, J. M. Googin, H. A. Grunder, L. C. Hebel, T. O. Hunter, W. M. Jacobi, M. S. Kazimi, C. J. King III, E. E. Kintner, R. A. Langley, J. C. Lee, G. E. Lucas, E. A. Mason, F. W. McLafferty, R. A. Osteryoung, T. H. Pigford, D. W. Reicher, J. E. Watson Jr., S. D. Wiltshire, and R. G. Wymer, Nuclear Wastes: Technologies for Separations and Transmutation, Washington, DC: National Academy Press, 1996.
30. J. J. Laidler, J. E. Battles, W. E. Miller, J. P. Ackerman, and E. L. Carls, Development of pyroprocessing technology, Prog. Nucl. Energy, 31: 131–140, 1997.
31. G. F. Vandegrift, et al., Lab-scale demonstration of the UREX+ process, Waste Management 2004 International Symposium, Tucson, AZ, 2004.
32. Reactor Safety Study—An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants, WASH-1400, Washington, DC: U.S. Nuclear Regulatory Commission, 1975.
33. Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants, NUREG-1150, Vol. 1, Washington, DC: U.S. Nuclear Regulatory Commission, 1990.
34. G. F. Knoll, Radiation Detection and Measurement, 3rd ed., New York: Wiley, 2000.
35. H. P. Planchon, J. I. Sackett, G. H. Golden, and R. H. Sevy, Implications of the EBR-II inherent safety demonstration test, Nucl. Eng. Design, 101: 75, 1987.
36. R. D. Lawrence, Progress in nodal methods for the solution of the neutron diffusion and transport equations, Prog. Nucl. Energy, 17: 271–301, 1986.
37. K. S. Smith, SIMULATE-3 Methodology, Advanced Three-Dimensional Two-Group Reactor Analysis Code, Studsvik/SOA-92/02, Newton, MA: Studsvik of America, 1992.
38. K. S. Smith, Assembly homogenization techniques for light water reactor analysis, Prog. Nucl. Energy, 17: 303–335, 1986.
39. E. E. Lewis and W. F. Miller Jr., Computational Methods of Neutron Transport, New York: Wiley, 1984.
40. E. W. Larsen, Diffusion-synthetic acceleration methods for discrete-ordinates problems, Transport Theory Stat. Phys., 13: 107, 1984.
41. F. B. Brown, MCNP—A General Monte Carlo N-Particle Transport Code, Version 5, LA-UR-03-1987, Los Alamos, NM: Los Alamos National Laboratory, 2003.
42. E. L. Quinn, New nuclear generation—in our lifetime, Nucl. News, 52, October 2001.
43. H. J. Bruschi, Westinghouse's AP1000 affirmed by NRC, Nucl. News, 10, February 2006.
44. D. Hinds and C. Maslak, Next-generation nuclear energy: The ESBWR, Nucl. News, 35, January 2006.
45. R. C. Twilley Jr., EPR development—an evolutionary design process, Nucl. News, 26, April 2004.
46. A Technology Roadmap for the Generation IV Nuclear Energy Systems, http://gif.inel.gov/roadmap, 2002.
47. Global Nuclear Energy Partnership, http://www.gnep.energy.gov, 2006.
JOHN C. LEE
University of Michigan, Ann Arbor, MI
Wiley Encyclopedia of Electrical and Electronics Engineering
Particle Spectrometers
Standard Article
Daniel M. Kaplan, Illinois Institute of Technology, Chicago, IL
Kenneth S. Nelson, University of Virginia, Charlottesville, VA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5213
Article Online Posting Date: December 27, 1999
The sections in this article are: Overview of Subatomic Particles; Overview of Particle Detection; Probabilistic Nature of Particle Reactions; Detailed Discussion of Particle Detectors; Particle Spectrometers; Summary; Acknowledgments.
PARTICLE SPECTROMETERS

This article introduces the reader to the field of high-energy physics and the subatomic-particle detection techniques that it employs. These techniques are of interest to the electrical engineer because they often entail sophisticated signal-processing and data-acquisition systems. We begin with an overview of the field, and then briefly introduce subatomic particles and their detection before treating particle detectors in more detail. We conclude with two examples that illustrate how a variety of detectors work together in typical high-energy-physics experiments.

The experimental study of subatomic particles and their interactions has revealed an unexpected layer of substructure underlying the atomic nucleus and has shed light on the evolution of the universe in the earliest moments following the Big Bang. This field of research is commonly referred to as elementary-particle physics or (because of the highly energetic particle beams employed) high-energy physics.

Modern subatomic-particle experiments employ elaborate spectrometry systems, often with state-of-the-art electronic instrumentation. While there is much variation among spectrometers, generally they measure the trajectories and energies of subatomic particles passing through them. In a typical experiment, a beam of subatomic particles is brought into collision with another particle beam or with a stationary target. Interactions between particles yield reaction products, some of which pass through the spectrometer. Measurements can include the momentum, angle, energy, mass, velocity, and decay distance of reaction products.

Particle-detection techniques pioneered in high-energy physics have received broad application outside that field. Important examples include nuclear physics, astronomy, medical imaging, X-ray scattering, diffraction, and spectroscopy, and the use of synchrotron radiation in biophysics, biochemistry, materials science, and the semiconductor industry (1,2). There has of course been intellectual traffic in both directions, for example, the pioneering use of semiconductor detectors in nuclear physics and of charge-coupled devices (CCDs) in astronomy (3).
OVERVIEW OF SUBATOMIC PARTICLES

Subatomic particles include the familiar electron, proton, and neutron, which are the components of the atom. In addition, dozens of less stable particles have been discovered since the 1930s that can be produced in reactions among electrons, protons, and neutrons and subsequently decay in a variety of ways. Each particle is characterized by a unique set of values for mass, electric charge, average lifetime, etc. Subatomic particles also possess a property called spin, which differs from the classical concept of angular momentum in that it is quantized (in units of ℏ/2) and immutable. Table 1 defines the units of measurement commonly used in high-energy physics for these quantities that are employed in this article.
Leptons, Hadrons, and Gauge Bosons
Since the 1960s a simple unifying principle for the plethora of subatomic particles has become generally accepted. Subatomic particles fall into three categories: leptons, hadrons, and gauge bosons [see Table 2 (4)]. The hadrons are made of quarks (described later). Leptons and quarks each have one quantum of spin, while gauge bosons have two or four spin quanta. Gauge bosons are responsible for the forces between particles. For example, the electromagnetic force arises from the exchange of photons among charged particles, and the strong force from the exchange of gluons. Leptons and hadrons can be distinguished experimentally by their modes of interaction. Hadrons are subject to the strong force (which also binds the nucleus together), while leptons are not. There are only six types of lepton: the (negatively charged) electron, muon, and tau and their neutral partners, the electron neutrino, muon neutrino, and tau neutrino. The three charged leptons all have charge −1e. For each type of lepton there exists a corresponding antiparticle. Lepton and antilepton have equal mass, spin, and lifetime, and electric charges equal in magnitude but (for charged leptons) opposite in sign.

Quarks
The hadrons are composed of quarks, of which (like the leptons) only six types are known. These are designated up and down, charm and strange, and top and bottom [see Table 3 (4)]. (For historical reasons, the top and bottom quarks are also designated by the alternative names truth and beauty; somewhat illogically, top and beauty are the names more commonly used.) Like the leptons, the quarks come in pairs, with the members of a pair differing in electric charge by one unit. The up, charm, and top quarks have charge +2/3 e. The down, strange, and bottom quarks have charge −1/3 e. For each type of quark there exists a corresponding antiquark with opposite electric charge. Quarks are bound together into hadrons by the strong force. This is observed to occur in two ways: a quark can bind to an antiquark to form a meson or antimeson, and three quarks or antiquarks can bind together to form a baryon or antibaryon. Bare quarks, as well as combinations of quarks other than those just mentioned, have never been observed and are presumed to be forbidden by the laws governing the strong force. (The possible existence of hadrons made up entirely of gluons is a subject of current experimental investigation but has not been definitively established.)

OVERVIEW OF PARTICLE DETECTION
Subatomic particles can be detected via their interactions with bulk matter. Most particles can interact via more than one of the four forces (in order of decreasing interaction strength): strong, electromagnetic, weak, and gravitational. It is typically the stronger forces that give the most dramatic and easily detectable signals. Since subatomic particles have such small masses, the gravitational force is entirely useless for their detection. All charged particles can be detected via the electromagnetic force, since they ionize nearby atoms as they pass through matter. Neutrinos (which as neutral leptons "feel" only the weak and gravitational forces) are exceedingly difficult to detect directly, and their production is typically inferred via conservation of momentum and energy by observing that some of the momentum and energy present before a reaction are missing in the final state.

Position Measurement: Hodoscopes and Telescopes
Detectors that measure particle position can be arranged as hodoscopes or telescopes. Hodoscopes are arrays of adjacent detectors typically used to measure the position of a particle along a direction perpendicular to the particle's path. Tele-
Table 1. Units Commonly Used in High-Energy Physics

  Quantity   Unit                 Value in MKS units (a)   Comment
  Charge     e                    1.60 × 10⁻¹⁹ C
  Energy     electron volt (eV)   1.60 × 10⁻¹⁹ J           Kinetic energy of particle of charge e accelerated through 1 V.
  Mass       GeV/c² (b)           1.78 × 10⁻²⁷ kg          Mass and energy related by E = mc².
  Momentum   GeV/c (b)            5.34 × 10⁻¹⁹ kg·m/s
  Spin       ℏ                    1.05 × 10⁻³⁴ J·s         Reduced Planck constant; spin quantum is ℏ/2.

  (a) Values are quoted to three significant digits, which is sufficient precision for most purposes.
  (b) 1 GeV = 10⁹ eV.
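The MKS equivalents in Table 1 follow directly from E = mc² and the definition of the electron volt. A quick check (an illustrative sketch, not part of the article):

```python
# Unit conversions underlying Table 1 (constants to four digits).
E_CHARGE = 1.602e-19   # C per elementary charge; also J per eV
C_LIGHT = 2.998e8      # speed of light, m/s

def gev_mass_to_kg(m_gev):
    """Convert a mass in GeV/c^2 to kilograms via E = m c^2."""
    return m_gev * 1e9 * E_CHARGE / C_LIGHT**2

def gev_momentum_to_si(p_gev):
    """Convert a momentum in GeV/c to kg*m/s."""
    return p_gev * 1e9 * E_CHARGE / C_LIGHT

# 1 GeV/c^2 is about 1.78e-27 kg and 1 GeV/c about 5.34e-19 kg*m/s,
# matching the table entries.
print(gev_mass_to_kg(1.0))
print(gev_momentum_to_si(1.0))
```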
Table 2. Properties of Selected Subatomic Particles (a,b)

Leptons
  Particle            Symbol   Charge (e)   Mass (GeV/c²)   Mean life (s)   Spin (ℏ)
  Electron            e⁻         −1         5.11 × 10⁻⁴     Stable          1/2
  Electron neutrino   νe          0         0               Stable          1/2
  Muon                μ⁻         −1         0.105           2.20 × 10⁻⁶     1/2
  Muon neutrino       νμ          0         0               Stable          1/2
  Tau                 τ⁻         −1         1.78            2.91 × 10⁻¹³    1/2
  Tau neutrino (c)    ντ          0         0               Stable          1/2

Hadrons
  Particle   Symbol    Charge (e)   Quark content   Mass (GeV/c²)   Mean life (s)   Spin (ℏ)
 Baryons
  Proton     p           +1         uud             0.938           Stable          1/2
  Neutron    n            0         udd             0.940           887             1/2
  Lambda     Λ            0         uds             1.12            2.63 × 10⁻¹⁰    1/2
  Cascade    Ξ⁻          −1         dss             1.32            1.64 × 10⁻¹⁰    1/2
  Cascade    Ξ⁰           0         uss             1.31            2.90 × 10⁻¹⁰    1/2
 Mesons
  Pion       π⁺, π⁻     +1, −1      ud̄, dū          0.140           2.60 × 10⁻⁸     0
  Pion       π⁰           0         uū, dd̄          0.135           8.4 × 10⁻¹⁷     0
  Kaon       K⁺, K⁻     +1, −1      us̄, ūs          0.494           1.24 × 10⁻⁸     0
  Kaon       K⁰, K̄⁰       0         ds̄, s̄d          0.498           (d)             0
  J/psi      J/ψ          0         cc̄              3.10            1.25 × 10⁻¹⁹    1
  B          B⁺, B⁻     +1, −1      ub̄, ūb          5.28            1.62 × 10⁻¹²    0
  B          B⁰, B̄⁰       0         db̄, d̄b          5.28            1.56 × 10⁻¹²    0

Gauge bosons
  Particle       Symbol    Charge (e)   Mass (GeV/c²)   Mean life (s)   Spin (ℏ)   Force mediated
  Photon         γ           0          0               Stable          1          Electromagnetic
  Weak bosons    W⁺, W⁻    +1, −1       80.3            1.59 × 10⁻²⁵    1          Weak
                 Z⁰          0          91.2            1.32 × 10⁻²⁵    1          Weak
  Gluons         g           0          0               Stable          1          Strong
  Graviton (c)   G           0          0               Stable          2          Gravitational

  (a) Data presented here are for illustrative purposes; more complete and detailed information is available in the Review of Particle Physics (4), published biennially and available on the World-Wide Web at http://pdg.lbl.gov.
  (b) Values are quoted to three significant digits, which is sufficient precision for most purposes.
  (c) The existence of these particles has been postulated but is not yet definitively established.
  (d) Due to mixing of neutral kaons with their antiparticles, these particles do not have definite lifetimes. Symmetric and antisymmetric linear combinations of K⁰ and K̄⁰, known as KS and KL, have lifetimes of 8.93 × 10⁻¹¹ s and 5.17 × 10⁻⁸ s, respectively.
scopes are arrays of detectors arranged sequentially along the particle’s path so as to track the motion of the particle. Commonly used position-sensitive detectors include scintillation counters, solid-state detectors, proportional tubes, and multiwire proportional and drift chambers. These produce electrical signals that can be digitized and processed in real time or recorded for further analysis using high-speed digital
computers. Specialized detectors less commonly used nowadays include the cloud chamber and bubble chamber, in which measurements are made continually as the particle traverses an extended gaseous or liquid medium, the spark chamber, and stacks of photographic emulsion. These detectors typically produce information on photographic film that must be processed optically, requiring scanning and measurement by trained personnel.
Table 3. The Three Generations of Quarks and Antiquarks (a)

                    Generation
                 1      2      3     Charge (e)   Spin (ℏ)
  Quarks         u      c      t       +2/3         1/2
                 d      s      b       −1/3         1/2
  Antiquarks     d̄      s̄      b̄       +1/3         1/2
                 ū      c̄      t̄       −2/3         1/2

  Approx. mass (GeV/c²): u ≈ 0.005, d ≈ 0.01, s ≈ 0.25, c ≈ 1.3, b ≈ 4.2, t ≈ 180.

  (a) After "Review of Particle Physics" (4).

Momentum and Energy Measurement
Magnetic Spectrometry. In a magnetic field, charged particles follow helical trajectories. The radius of curvature is proportional to the particle momentum and inversely proportional to the particle charge and the field strength. Given the radius r in meters, momentum p in GeV/c, charge q in units of the electron charge, and field strength B in tesla,

  r = p / (0.3 q B sin θ)    (1)

where θ is the angle between the field direction and the particle momentum vector. From measurements of the curvature
of the particle track within the field, the momentum can thus be determined. Even if no measurements are made within the field, the curvature within it (and hence the momentum) can be inferred by measuring the particle's trajectory before and after it traverses the field. The magnetic field, typically in the range 1 T to 2 T, is generally produced using an electromagnet, which may be air core or solid and have conventional (copper or aluminum) or superconducting coils.

Calorimeters. Calorimeters are detectors of thickness sufficient to absorb as large a fraction as possible of the kinetic energy of an incident particle. While for electrons and hadrons this fraction can approach 100%, there is usually some leakage of energy out the back of a calorimeter. An electrical signal is produced proportional to the deposited energy. Unlike tracking detectors, calorimeters can detect neutral as well as charged particles. Calorimeters also play an important role in electron identification and are sometimes used for muon identification, as described next.

Particle Identification
Of the charged subatomic particles, five are sufficiently stable to travel many meters at the energies typical in high-energy physics (1 GeV to several hundred GeV), so that their trajectories can be easily measured in a magnetic spectrometer. The problem of particle identification is thus that of distinguishing among these five particles: electrons, muons, pions, kaons, and protons. In experiments that identify particles, multiple particle-identification techniques are typically used together in order to enhance the efficiency of identification and lower the probability of misidentification.

Calorimetric Electron (and Photon) Identification. As discussed in more detail in later sections, in material of high atomic number (Z), high-energy electrons create characteristic electromagnetic showers consisting of a cascade of photons, electrons, and antielectrons (positrons).
Thus the pattern of energy deposition in a calorimeter, as well as the correlation of deposited energy with magnetically measured momentum, can be used to distinguish electrons from other charged particles. In a calorimeter optimized for this purpose, e–π rejection of 10⁻⁴ can be achieved (i.e., only 10⁻⁴ of pions mistaken for electrons) (5), while maintaining 75% efficiency for electrons (i.e., only 25% of electrons rejected as having ambiguous identification) (6). Since high-energy photons also create electromagnetic showers in high-Z materials, electromagnetic calorimetry can also be used to identify photons and measure their energy. Photons are distinguishable from electrons since they do not give observable tracks in tracking telescopes.

Muon Identification. Muons (and also neutrinos) are distinguished from other charged particles by their low probability to interact with nuclei: muons can pass through many meters of iron while depositing only ionization energy. A muon can thus be identified efficiently and with little background, with typical μ–π rejection of order 10⁻² (7), by its failure to shower in a calorimeter. Often for muon identification, instead of a full calorimeter, a crude structure is used consisting of thick shielding layers of steel or concrete interspersed with detectors; an example of such a muon-identification system is shown in Fig. 6.
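For a track perpendicular to the field (sin θ = 1), the momentum–curvature relation of Eq. (1) reduces to the familiar rule of thumb p[GeV/c] = 0.3 q B[T] r[m]. A minimal sketch (illustrative, not code from the article):

```python
def transverse_momentum(radius_m, b_tesla, charge_e=1):
    """Rule of thumb: pT [GeV/c] = 0.3 * q * B[T] * r[m] for a track
    whose momentum is perpendicular to the magnetic field."""
    return 0.3 * charge_e * b_tesla * radius_m

# A singly charged track curving with r = 3.33 m in a 1 T field
# carries about 1 GeV/c of transverse momentum.
print(round(transverse_momentum(3.33, 1.0), 2))  # 1.0
```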
Time of Flight and Ionization. If a particle's momentum is known from magnetic spectroscopy, measurement of its velocity determines its mass. At momenta up to a few GeV/c, particle velocity can be measured well enough for particle identification using time-of-flight measurement over a distance of order meters (8). This is typically accomplished using thick (several centimeters) scintillation counters (discussed later) to determine flight time to a fraction of a nanosecond. This information is often augmented by repeated measurements of ionization rate in proportional chambers, since (as described later) the rate of ionization in a medium is velocity dependent.

Cherenkov Detectors. Particle velocity can be measured (or limits can be placed on it) using the Cherenkov effect, by which a charged particle moving through a transparent medium at a speed greater than the speed of light in that medium emits photons at a characteristic velocity-dependent angle. (This process is mathematically analogous to the emission of a sonic boom by a supersonic object or the creation of a bow wave by a fast-moving boat.) The speed of light in a medium is slower than the speed of light in vacuum by the factor 1/n, where n is the medium's refractive index. Threshold Cherenkov counters (9) determine limits on a particle's speed by establishing that the particle does or does not emit Cherenkov photons in media of various refractive indices. Several threshold Cherenkov counters with appropriately chosen thresholds can be used together to distinguish pions, kaons, and protons within a given momentum range. This technique is typically useful from about 1 GeV/c up to several tens of GeV/c. Ring-imaging Cherenkov counters (10) measure the particle's speed by determining the photon-emission angle directly, and can be used up to a few hundred GeV/c.
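The threshold condition β > 1/n translates into a minimum momentum p = m/√(n² − 1), which is how the thresholds of a counter are matched to the particles of interest. A sketch with an assumed radiator index (n = 1.03, typical of silica aerogel; the numbers are illustrative, not from the article):

```python
import math

# Particle masses in GeV/c^2 (values as in Table 2)
MASSES = {"e": 0.000511, "mu": 0.105, "pi": 0.140, "K": 0.494, "p": 0.938}

def cherenkov_threshold(mass_gev, n):
    """Minimum momentum (GeV/c) at which a particle of the given mass
    radiates in a medium of refractive index n: beta > 1/n implies
    p > m / sqrt(n**2 - 1)."""
    return mass_gev / math.sqrt(n * n - 1.0)

# With n = 1.03, pions radiate well below the kaon threshold, so a
# signal/no-signal decision separates pi from K at intermediate momenta.
for name in ("pi", "K", "p"):
    print(name, round(cherenkov_threshold(MASSES[name], 1.03), 2))
# pi 0.57, K 2.0, p 3.8
```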
Note that Cherenkov detectors are rarely useful for muon identification, since muons and pions are so similar in mass that their Cherenkov thresholds (and photon-emission angles) are nearly indistinguishable in practice.

Transition-Radiation Detectors. Transition radiation consists of photons emitted when a charged particle crosses an interface between media of differing refractive index. Particles with "highly relativistic" velocity (i.e., with kinetic energy greatly exceeding their mass energy) produce detectable numbers of "soft" X rays (energy of order a few kiloelectronvolts) when traversing stacks of thin metal or plastic foils typically including hundreds of interfaces. These X rays can be detected in proportional chambers and used for e–π discrimination at momenta exceeding 1 GeV/c and hadron (π, K, or p) identification up to a few hundred GeV/c (11). Using calorimetry and transition-radiation detection together, e–π rejection of 10⁻⁵ has been achieved (12).

PROBABILISTIC NATURE OF PARTICLE REACTIONS

Since subatomic-particle spectrometers deal with the smallest objects we know of, they encounter directly the statistical aspects of quantum mechanics and the "microworld." It is a striking feature of the laws of quantum mechanics that they do not predict the outcome of individual particle reactions but only particle behavior on the average. Nevertheless, most aspects of particle detection can be understood using classical
physics, and quantum uncertainty is rarely a dominant contribution to measurement error.

Example 1: Elastic Scattering
As a first example, if we consider a proton colliding elastically with another proton, classical physics predicts exactly the scattering angle as a function of the impact parameter (the distance between the centers of the protons measured perpendicular to the line of flight). However, quantum mechanically the proton is described not as a hard sphere with a well-defined radius, but rather as a wave packet, with the square of the wave's amplitude at any point in space giving the probability for the proton to be at that location. The impact parameter in any given collision is thus an ill-defined quantity. It is not necessary to delve into the mathematical complexities of quantum mechanics to realize that in this situation the scattering angle for a given encounter is a random and unpredictable quantity. What the laws of quantum mechanics in fact predict is the probability for a proton to scatter through any given angle, in other words, the scattering-angle distribution. The random nature of quantum mechanics underlies Heisenberg's famous uncertainty relations, which give the fundamental limits to the accuracy with which any quantity can be measured.

Example 2: Inelastic Scattering
Next we consider an inelastic collision between two protons, in which one or both protons emerge in excited states that decay into multiple-particle final states. This is a common type of interaction event of interest in high-energy physics, since from the properties and probability distributions of the final state can be inferred various properties of the protons, their constituent quarks and gluons, and the interactions among them. In any given encounter between two protons, whether an inelastic collision will take place cannot in principle be predicted, nor, if so, what particles will be produced and with what momentum and spin vectors.
What quantum mechanics does predict (in principle) is the distributions of these quantities over a large number of collisions. However, since we do not yet have a completely satisfactory theory of the strong force, these distributions cannot as yet be predicted in detail from ‘‘first principles.’’

Classical Uncertainty

One should note that uncertainty in the outcome or measurement of an event is often not quantum mechanical in origin. For example, even classically, the angle of elastic scattering of a given proton incident on a target is in practice not predictable, since it is not feasible to measure the position of the proton with respect to the scattering nucleus with sufficient precision to know the impact parameter. Thus even a classical analysis of the problem predicts only the scattering-angle distribution.

Measurement Resolution

When measurement yields a distribution for some parameter rather than a definite value, we can characterize the quality of the measurement by the width of the distribution, that is, the measurement resolution or uncertainty. Common ways of characterizing the width of a distribution are the root-mean-square (rms) deviation and the full width at half maximum
(FWHM). Of course, if looked at in fine enough detail, any measurement yields a distribution, though the distribution may be extremely narrow in some cases. A broad distribution in the result of some measurement can reflect quantum-mechanical uncertainty or simply lack of knowledge of the exact input state. Later we consider examples of both classical and quantum contributions to measurement resolution.

Randomness and Experimental Instrumentation

For the designer of particle-spectrometry systems, a consequence of these uncertainties is that randomness must be taken into account. For example, one might be designing data-acquisition equipment for an experiment intended to operate at an event rate of 100 kHz. This means that on average one has 10 μs to acquire and process the information from each event, but the actual number of events occurring in a given time interval will be random and characterized by a Poisson probability distribution [see Eq. (3)]. Thus if a large fraction of all events are to be captured, data acquisition for each event must be accomplished in a time that is short compared to 10 μs (in this example), to keep sufficiently small the probability that a second event occurs while the first is being processed.

DETAILED DISCUSSION OF PARTICLE DETECTORS

All detectors of subatomic particles operate by virtue of the energy lost by charged particles as they traverse matter. Charged particles lose energy in matter by several mechanisms. These include ionization of nearby atoms, bremsstrahlung (emission of photons in the electric field of an atomic nucleus), Cherenkov and transition radiation, and strong nuclear interactions. For all charged particles except electrons, ionization typically dominates over other mechanisms. The key challenge to particle detection is amplification of the small signals (typically tens to thousands of photons or electrons) produced by these mechanisms.
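The event-overlap argument above can be made quantitative. The sketch below (the 100 kHz rate is the figure from the example; the deadtime values are illustrative) computes the probability that at least one additional Poisson-distributed event arrives while one event is still being processed:

```python
import math

def overlap_probability(rate_hz, deadtime_s):
    # For Poisson-distributed arrivals, the chance that one or more further
    # events occur within the processing (dead) time is
    # 1 - P(0) = 1 - exp(-rate * deadtime).
    return 1.0 - math.exp(-rate_hz * deadtime_s)

rate = 100e3  # 100 kHz event rate: mean spacing 10 us
for deadtime_us in (10.0, 1.0, 0.1):
    p = overlap_probability(rate, deadtime_us * 1e-6)
    print(f"deadtime {deadtime_us:5.1f} us -> overlap probability {p:.3f}")
```

Processing in 1 μs rather than the full 10 μs mean spacing reduces the overlap probability from about 63% to under 10%, which is the sense in which acquisition must be ‘‘short compared to 10 μs.’’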
Ionization Energy Loss

The rate dE/dx of ionization energy loss by a charged particle passing through material depends primarily on the particle’s speed, or more precisely on the quantity

βγ = (v/c) / √(1 − (v/c)²)   (2)
where β ≡ v/c is the particle’s speed expressed as a fraction of the speed of light in vacuum, and the time dilation factor γ = 1/√(1 − (v/c)²). (Note that βγ reduces to v/c in the nonrelativistic limit v ≪ c.) The rate of ionization is given by the Bethe–Bloch equation; see Ref. 4 for details. As shown in Fig. 1 (4), slow particles are heavily ionizing. The rate of ionization energy loss drops with increasing βγ approximately as (βγ)^(−5/3) to a minimum at a value of βγ that depends only slightly on the material. The ionization minimum is at βγ = 3.5 for nitrogen, which decreases to βγ = 3.0 for high-Z materials such as lead. While dE/dx per unit thickness varies substantially among materials, if the thickness is divided by density, thus being expressed as mass per unit area, the strongest part of
the dependence on material is eliminated. For particles of charge e (except electrons), and for all materials except hydrogen, the energy-loss rates at the ionization minimum range from 1 MeV/(g/cm²) to 2 MeV/(g/cm²). For βγ above minimum-ionizing, the ionization energy-loss rate rises approximately logarithmically. At βγ = 10⁴, the loss rate is less than double relative to its minimum at βγ ≈ 3. In this ultrarelativistic regime radiative energy loss (bremsstrahlung) becomes significant relative to ionization.

Figure 1. The dependence of the ionization energy-loss rate dE/dx (in MeV·g⁻¹·cm²) on the relativistic speed variable βγ for particles of charge e (except electrons) in liquid H2, He gas, C, Al, Fe, Sn, and Pb (after Fig. 22.2 of Ref. 4). The ionization rate first drops approximately as (βγ)^(−5/3), then rises logarithmically. Particles with βγ > 1 (speed greater than c/√2) can loosely be considered minimum-ionizing.

Radiation Length

In materials of high atomic number, there is a high probability per unit length for electromagnetic radiative processes to occur, that is, for electrons to radiate photons by bremsstrahlung and for photons to convert into electron–positron pairs in the electric field of a nucleus. This probability is characterized by the radiation length X0 of the material, defined as the thickness of material in which a high-energy electron will lose all but a fraction 1/e of its initial energy (13). Radiation length also characterizes the degree to which charged particles scatter randomly, due to multiple encounters with the electric fields of nuclei, in passing through material. If precise measurement of particle trajectories is to be achieved, this scattering effect must be minimized. Materials with short radiation length (e.g., lead, X0 = 0.56 cm, and tungsten, X0 = 0.35 cm) are thus desirable for use in electromagnetic sampling calorimeters (discussed later) and in some shielding applications, but in general should be avoided in other particle-detection contexts.

Scintillation Counters

Scintillators (14) are materials in which some of the ionization energy lost by a charged particle along its trajectory is converted into light via fluorescence. The light may be detected in a variety of ways, including (most commonly) photomultiplier tubes and solid-state photodetectors. Both organic and inorganic scintillators are in use.
Organic Scintillators

Organic scintillators typically consist of aromatic liquids dissolved in a plastic such as polystyrene, polyvinyltoluene, or polymethylmethacrylate. Liquid scintillators, more common in the past, have generally been abandoned in high-energy physics (except in specialized applications) in favor of the plastic scintillators, which offer greater ease of use. A common configuration is a piece of plastic of a few millimeters to a few centimeters thickness, a few to several centimeters width, and length ranging from several centimeters to a few meters, glued at one end to a plastic light guide that is in turn glued to or butted against the entrance window of a photomultiplier tube (PMT). A minimum-ionizing charged particle traversing the plastic deposits ionization energy at a rate of about 2 MeV/(g/cm²). As the ionized plastic molecules deexcite, they emit ultraviolet photons, most of which are quickly reabsorbed by the plastic. To provide a detectable light signal, the plastic is doped with a low concentration of dissolved aromatic ‘‘wavelength shifters’’ (fluors) such as p-terphenyl, 3-hydroxyflavone, and tetraphenylbutadiene. These absorb in the ultraviolet and reemit at visible wavelengths, where the plastic is transparent. (Since there is inevitably some overlap between the wavelength-shifter absorption and emission bands, too large a concentration of wavelength shifter would result in excessive attenuation of the light signal as it travels towards the photodetector.) In a counter of large length-to-width ratio, light collection is inherently inefficient, since only a narrow range of emission angle is subtended by the photodetector. Furthermore, the light is attenuated by absorption along the length of the counter. Light is typically emitted at a rate of about 1 photon per 100 eV of ionization energy, but often only a few percent of these reach the photodetector, where additional losses may be incurred due to reflection at the interfaces.
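The photon budget just described can be turned into a rough signal estimate. The numbers below are illustrative, taken from the figures quoted in this section (≈2 MeV deposited per centimeter of plastic, ≈1 photon per 100 eV, a few percent light collection, and a typical ≈20% photocathode quantum efficiency); for a Poisson-limited signal, the shot-noise signal-to-noise ratio is roughly the square root of the photoelectron count:

```python
import math

def photoelectron_yield(e_dep_mev, photons_per_100ev, collection_eff, quantum_eff):
    # Scintillation photons produced, then attenuated by light collection
    # and by the photodetector quantum efficiency.
    n_photons = e_dep_mev * 1e6 / 100.0 * photons_per_100ev
    return n_photons * collection_eff * quantum_eff

# ~2 MeV deposited by a minimum-ionizing particle in 1 cm of plastic
n_pe = photoelectron_yield(e_dep_mev=2.0, photons_per_100ev=1.0,
                           collection_eff=0.03, quantum_eff=0.20)
snr = math.sqrt(n_pe)  # single-photoelectron shot-noise limit
print(f"{n_pe:.0f} photoelectrons, shot-noise SNR ~ {snr:.0f}")
```

With these assumed numbers one obtains roughly a hundred photoelectrons and an SNR of order 10, consistent with the order 10 to 100 quoted below.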
The quantum efficiency of the photodetector further reduces the signal. For a PMT, the quantum efficiency is the probability that an incident photon causes the emission of an electron from the photocathode. The typical PMT visible-light quantum efficiency is about 20%, but solid-state photodetectors can have quantum efficiencies approaching 100% (15). Since photodetectors are subject to single-electron shot noise, the typical signal-to-noise ratio in a plastic scintillation counter of about 1 cm thickness is of order 10 to 100. With fast fluors, the light signal develops quite rapidly (rise times of order nanoseconds). Instantaneous counting rates of the order of 10 MHz can be sustained. With high-speed PMTs, thick scintillators can achieve subnanosecond timing accuracy, ideal for time-of-flight particle identification. Average counting rates are limited by the current ratings of the PMT and base. In high-counting-rate applications, transistorized bases (16) are crucial to avoid ‘‘sagging’’ of the dynode voltages.

Scintillating Fibers

In recent years advances in photodetectors and in the manufacture of plastic optical fibers have made scintillating optical fibers a practical detector for precision particle tracking at high rates (17). Scintillating fibers work by trapping scintillation light through total internal reflection. Since the fibers are typically less than 1 mm in diameter, detection of the scintillation signal is technically challenging: the ionization signal is only of the order of 10³ photons, and the light-trapping efficiency of the order of 1%. To convert the scintillation photons efficiently to visible light, wavelength shifters of large Stokes shift (i.e., large separation between the absorption and emission bands) are required so that they can be used in sufficiently high concentration (of order 1%) without excessive attenuation. If the light is detected with solid-state cryogenic visible-light photon counters (VLPCs) (18), advantage can be taken of their 80% quantum efficiency, so that a trapped-photon yield as low as several per minimum-ionizing particle suffices for good detection efficiency [see Eq. (3)]. Fibers as narrow as 800 μm in diameter can then be used over lengths of meters (19). An advantage of the large Stokes shift is operation in the green region of the visible spectrum (as opposed to the blue of conventional scintillators), so that yellowing of the plastic due to radiation damage in the course of a high-rate experiment has only a slight impact on performance.

Inorganic Scintillators

Inorganic scintillators include doped and undoped transparent crystals such as thallium-doped sodium iodide, bismuth germanate, cesium iodide, and lead tungstate. They feature excellent energy resolution and are typically employed in electromagnetic calorimetry (discussed later). Some notable recent applications (20) have featured silicon-photodiode readout, allowing installation in the cramped interior of colliding-beam spectrometers as well as operation in high magnetic fields.

Proportional and Drift Chambers

Developed starting in the 1960s by Charpak et al. (21), proportional and drift chambers have largely supplanted visualizing detectors, such as bubble chambers, as the workhorse detectors of high-energy physics due to their higher rate capability and their feasibility of manufacture and operation in large sizes.
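To see why a trapped-photon yield of only a few per minimum-ionizing particle can suffice for a scintillating-fiber tracker, one can apply Poisson statistics [Eq. (3)] with n = 0. A sketch using the fiber figures quoted above (≈10³ scintillation photons, ≈1% trapping efficiency, 80% VLPC quantum efficiency; all illustrative):

```python
import math

def detection_efficiency(mean_photoelectrons):
    # Probability of detecting at least one photoelectron: the Poisson
    # distribution gives P(0) = exp(-mu), so efficiency = 1 - exp(-mu).
    return 1.0 - math.exp(-mean_photoelectrons)

mu = 1000 * 0.01 * 0.80  # photons x trapping efficiency x quantum efficiency
print(f"mean photoelectrons: {mu:.1f}")
print(f"detection efficiency: {detection_efficiency(mu):.4f}")
```

Even a mean yield of 8 photoelectrons leaves the probability of detecting nothing at e⁻⁸, i.e., an efficiency above 99.9%.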
They typically can provide submillimeter spatial resolution of charged-particle trajectories over volumes of several m³ (22). Installations of these detectors are commonly realized as hodoscopic arrays of anode wires immersed in a suitable gas mixture and arranged so as to detect the ionization energy released when the gas is traversed by a charged particle. The passage of the particle causes an electrical pulse on the nearest wire, yielding, with the simplest type of signal processing, discrete coordinate measurements (i.e., the true position of the particle is approximated by the location of the wire). Continuous coordinate measurements can be achieved by more sophisticated signal processing that provides interpolation between anode wires. Although the primary use of these detectors is for position measurement, they also find use in particle identification in the detection of transition radiation and the measurement of ionization rate (dE/dx).

Proportional-Tube Operating Principle

Many proportional-chamber arrangements have been devised. The simplest conceptually is the proportional tube, in which a single thin anode wire is operated at a positive potential (of order kilovolts) with respect to a surrounding conducting cathode surface. The tube is filled with a gas suitable for detecting the particles of interest. For example, charged particles are readily detected in a variety of mixtures of argon with hydrocarbons, while detection of X rays, for example, from transition radiation or crystal scattering, is more efficiently accomplished using a higher-Z gas (such as xenon) as the major component. The exact choice of gas mixture also depends on such experimental requirements as rate capability, position resolution, and detector operating life-span (23).

Size of Primary Ionization Signal

A minimum-ionizing charged particle traversing a proportional tube deposits only a small fraction of its energy in the gas, in a number of ionizing collisions averaging about 0.5 to 5 per mm·atm, depending on gas composition. Due to the independent and random nature of the collisions, they are characterized by Poisson statistics, that is,

P(n) = μ^n e^(−μ) / n!   (3)
where P(n) is the probability to produce n ionizing collisions when the mean number produced is μ. Furthermore, because of the wide range of energies imparted in these collisions, the yield of electron–ion pairs is subject to large fluctuations (24). Consequently, amplification electronics designed to detect the passage of minimum-ionizing particles through the tube should be capable of handling the large dynamic range (typically exceeding 10) of these signals. In contrast, soft X rays interact in the gas primarily via the photoelectric effect, giving a narrower range of signal sizes since the amount of ionization is more closely correlated with the X-ray energy.

Electron and Ion Drift

Under the influence of the electric field in the tube, the electrons and positive ions produced by the initial interaction separate and drift toward the anode and cathode, respectively. In the range of electric-field strength E typically found in proportional tubes, the average drift velocity u₊ of the positive ions is proportional to E; it is often expressed in terms of the ion mobility μ₊ = u₊/E. This proportionality results from competition between two effects: acceleration of the ion by the electric field and randomization of its direction by collisions with gas molecules. A typical drift field E = 1 kV/cm gives an ion drift velocity in the range (0.5 to 2) × 10³ cm/s depending on ion species and gas composition. In weak electric fields, electrons are transported in a manner similar to that of positive ions. However, in a sufficiently strong electric field, the electron’s wavelength λ, which decreases in inverse proportion to its momentum p according to the de Broglie relationship λ = h/p, becomes comparable to the size of molecular orbitals. (Here h is Planck’s constant.) In this regime, the probability per encounter for an electron to scatter off of a molecule has a strong dependence on the electron momentum, displaying successive minima and maxima as the momentum increases.
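The disparity between ion and electron drift speeds is worth quantifying, since it shapes the pulse development discussed below. A sketch with representative values from the text (ion drift velocity ~10³ cm/s; electron drift taken at the saturated value of ~5 cm/μs quoted for argon-based mixtures; the 5 mm gap is illustrative):

```python
def transit_time_us(gap_cm, velocity_cm_per_us):
    # Time for a charge carrier to cross the gap at constant drift velocity.
    return gap_cm / velocity_cm_per_us

gap = 0.5                 # 5 mm anode-cathode distance, illustrative
v_electron = 5.0          # cm/us, saturated electron drift velocity
v_ion = 1e3 / 1e6         # 10^3 cm/s expressed in cm/us

t_e = transit_time_us(gap, v_electron)
t_i = transit_time_us(gap, v_ion)
print(f"electron transit ~ {t_e:.2f} us, ion transit ~ {t_i:.0f} us")
```

Under these assumptions the ions take thousands of times longer than the electrons to cross the gap, which is why the ion component dominates the slow part of the signal.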
In many gas mixtures, the net effect is that the electron drift velocity saturates, becoming approximately independent of electric field (25). This saturation typically occurs for fields in the neighborhood of 1 kV/cm and in argon-based mixtures results in a velocity of about 5 cm/μs. The saturation of the electron drift velocity is an important advantage for drift-chamber operation (discussed later), since it reduces the sensitivity of the position measurement to operating conditions.

Development of the Avalanche Signal

As the electrons approach the anode wire, the electric field increases inversely as the distance to the wire. Above an electric-field threshold whose value depends on the gas, the electrons are accelerated
between collisions to sufficient energy that they can ionize a gas molecule on their next collision. Subsequently, the produced electrons (along with the initial electron) are accelerated and produce further ionization. An avalanche multiplication of charge rapidly develops, with gain typically in the range 10⁴ to 10⁶ electron–ion pairs per initial electron. Unlike the case of Geiger tubes and spark chambers, in proportional and drift chambers the avalanche is normally not allowed to grow into a spark but remains proportional in size to the amount of energy lost by the particle. The avalanche develops essentially instantaneously (in a time interval less than 1 ns) within a few wire diameters of the anode. The time development of the anode-current pulse is determined by the increasing separation of the electron–ion pairs generated in the avalanche. Using Green’s reciprocity theorem (26), and assuming the anode is connected to a low-input-impedance amplifier, one can show that the increment dq of induced charge on the anode due to the vector displacement dr of a charge Q between the two electrodes is given by

dq = (Q/V0) E · dr   (4)
where V0 is the anode–cathode potential difference and E the electric field. [For cases with more than two electrodes, a more general result can be obtained from the weighting-field method (27).] At the instant of the avalanche, the electrons are collected on the anode, giving rise to a sharp initial current pulse. However, since the electrons are produced very near the anode, this initial pulse represents only a few percent of the total signal charge. Thus it is the slow drift of the positive ions from anode to cathode that provides most of the signal charge and determines the subsequent pulse development. [It should be noted that this imbalance between electron and ion contributions to the signal is specific to the cylindrical proportional-tube geometry with thin anode wire. In a parallel-plate geometry (28), the electrons contribute a much larger fraction of the signal charge.]

Multiwire Proportional Chambers

In order to register the positions of many particles spread over some area, one might employ a hodoscope made of individual proportional tubes. In the simplest approach, the true position of the particle is then approximated by the location of the wire that produces the pulse, that is, measured positions are ‘‘quantized’’ in units of the distance between adjacent anode wires. Proportional-tube hodoscopes have long been common in large-area, low-resolution applications such as muon detection (7,29), and have lately become popular in high-rate applications as the straw-tube array (30). However, a more common arrangement (which minimizes the detector material by eliminating the tube walls) is a multiwire proportional chamber (MWPC), consisting of a planar array of anode wires sandwiched between two cathode foils or grids. Although in such a device the electric field near the cathodes approximates a parallel-plate configuration, close to an anode wire the field shape resembles that in a proportional tube (see Fig. 2).
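The Poisson statistics of Eq. (3) also dictate how thick the gas gap of such a chamber must be: the detector is blind to a particle that happens to produce no primary ionization, which occurs with probability P(0) = e^(−μ). A sketch assuming, illustratively, 2.5 ionizing collisions per mm at 1 atm (within the 0.5 to 5 per mm·atm range quoted earlier):

```python
import math

def inefficiency(collisions_per_mm, gap_mm):
    # Poisson probability of zero primary ionizing collisions in the gap:
    # Eq. (3) with n = 0 and mu = collisions_per_mm * gap_mm.
    return math.exp(-collisions_per_mm * gap_mm)

for gap_mm in (1.0, 2.0, 4.0):
    print(f"{gap_mm:.0f} mm gap -> inefficiency {inefficiency(2.5, gap_mm):.1e}")
```

A gap of a few millimeters brings the probability of no ionization down to a negligible level, consistent with the few-millimeter minimum anode–cathode gap for efficient particle detection.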
The instrumentation of this detector usually involves individual signal detecting and coincidence/memory circuits for each anode wire. The strength of the MWPC is its ability to handle a high flux of particles while providing good position resolution over
Figure 2. Sketch of the electric-field configuration in a multiwire proportional chamber. The anode wires are seen end-on. Close to the anode wire the field lines are radial as in a proportional tube, while close to the cathode planes the field lines are parallel as in a parallel-plate capacitor. The presence of a signal on an anode wire determines the position of the particle along the x axis in units of the anode-wire spacing w.
a large detector area. Rates above 10⁶ particles per cm²·s have been achieved (31) while maintaining greater than 95% detection efficiency. The rate capability of an MWPC is a strong function of the anode-wire spacing and the anode–cathode gap width, both of which should be as small as possible for high-rate operation. The statistics of the primary ionization described in Eq. (3) imply a minimum anode–cathode gap of a few millimeters for efficient particle detection (i.e., for the probability of no ionization to be negligibly small). In large detectors the anode-wire spacing and anode–cathode gap are limited by electromechanical instabilities (32). Thus large MWPCs (anodes exceeding about 1 m in length) have typical anode spacing of a few millimeters, while for anode length under 10 cm, spacing down to 0.5 mm is feasible. The anode-wire spacing w determines the rms position resolution of an MWPC according to
σ = √[ (1/w) ∫_{−w/2}^{+w/2} x² dx ] = w/√12   (5)
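The w/√12 result of Eq. (5) can be checked with a quick Monte Carlo: draw true positions uniformly across one wire cell, take the wire (cell center) as the measured position, and compute the rms error. The 2 mm wire spacing below is illustrative:

```python
import math
import random

def quantization_rms(w, n_trials=200_000, seed=1):
    # True positions uniform across one wire cell; the measured position is
    # the cell center (the wire), so the measurement error is the offset.
    rng = random.Random(seed)
    errors = [rng.uniform(-w / 2, w / 2) for _ in range(n_trials)]
    return math.sqrt(sum(e * e for e in errors) / n_trials)

w = 2.0  # mm anode-wire spacing, illustrative
print(quantization_rms(w), w / math.sqrt(12.0))  # both ~0.577 mm
```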
This resolution is not always achievable due to the interplay between the pattern-recognition software and the occurrence of clusters of two or more wires registering hits for a single incident particle. Such hit clusters can arise when two adjacent wires share the ionization charge due to a track passing halfway between them, when several adjacent wires share the charge due to an obliquely inclined track, or when an energetic ‘‘knock-on’’ electron (also known as a δ ray) is emitted at a large angle by the incident particle and traverses several adjacent wires. When the best position resolution is required a proper treatment of hit clusters is necessary, but not always possible, leading to inefficiencies in the track reconstruction. Two shortcomings of the MWPC in measuring particle positions are that the anode plane measures only one position coordinate (the one perpendicular to the wire length), and that the measured positions can assume only those values corresponding to anode-wire positions. For minimum-ionizing particles, a common way to provide information in the perpendicular coordinate direction (i.e., the distance along the wire) is to employ several anode planes, each oriented at a different angle, thus viewing the particle trajectory in ‘‘stereo’’
Figure 3. Schematic illustration showing how track positions in two dimensions are determined from measurements in three successive MWPCs with anode wires at three different angles; in each MWPC plane, only the wire producing a pulse is shown. Two stereo views would be sufficient in principle, but the third view helps to resolve ambiguities when more than one particle is being measured simultaneously.
(Fig. 3). In a multiple-particle event, a software algorithm is then required to match up the signals corresponding to each particle in the various views. In X-ray detection another method must be used, since X rays interact in the gas via the photoelectric effect and are thereby absorbed, leaving a signal in only one plane. A technique for two-dimensional position measurement using only a single anode plane is charge division (33), in which the ratio of charges flowing out the two ends of a resistive anode wire specifies the position of the avalanche along the wire. However, in practice this technique yields only modest resolution (about 1% of wire length).

Cathode Readout of Proportional Chambers

Another technique for two-dimensional measurement of particle position using only one plane of anode wires is cathode readout. Cathode readout also improves position resolution by allowing interpolation between anode wires. When an avalanche occurs on an anode wire, the pulse induced on the cathode planes carries information about the avalanche location. If the cathode planes are segmented and the charge induced on each segment digitized, the avalanche location can be accurately determined from the center of gravity of the charge distribution. The simplest arrangement is that of cathode-strip planes. To obtain two-dimensional information, one cathode plane can have strips oriented perpendicular to the wire direction (illustrated in Fig. 4), and the other can have strips parallel to the wires. Computation of the center of gravity can be performed either ‘‘off-line’’ after the signals have been digitized or ‘‘on-line,’’ for example, via the
Figure 4. Schematic illustration of MWPC cathode-readout principle. (Dimensions are not to scale but are exaggerated for clarity.) The charges induced on the cathode strips, combined with the signals on the anode wires or on a second cathode plane (not shown) with strips parallel to the wires, allow localization of the avalanche in two dimensions.
transformation of signals from the spatial domain to the time domain using a specially patterned cathode (34). The accuracy of such delay-line position sensing is, in principle, independent of wire length. For cathode strips perpendicular to the wires, there is a small nonlinearity in position measurement due to the finite strip width (35). This effect can be mitigated to some extent using specially shaped segmentation patterns (36). For cathode strips parallel to the wires, the measured position is modulated by the discrete positions of the wires around which the ions are created. However, at gains low enough that the avalanche remains in the proportional regime, the angular spread of the avalanche around the wire is limited (37), allowing some interpolation between the wires (38). Typically the position resolution along the wire is of order 100 μm and that perpendicular to the wire approximately five times worse. The accuracy of the center-of-gravity method for cathode readout measuring the coordinate along the wire is at least an order of magnitude better than that of resistive charge division (for a given signal charge). For the coordinate perpendicular to the wire the electron-drift timing technique discussed in the next section is superior except when (as in X-ray detection) a timing reference is not available. The disadvantages of cathode readout are that it requires a large number of well-calibrated electronic channels and that one is still faced with the problem of correlating the coordinate pairs in a multiple-particle event. However, the latter problem is mitigated by the availability of a charge ‘‘signature.’’ Due to the large dynamic range of primary ionization energy, cathode pulses from different particles will tend to differ in total charge, while pulses from the same particle will be correlated in total charge in the two views.

Drift Chambers
One drawback of the MWPC is the large number of anode wires and associated readout circuits needed to give fine position resolution over a large area. Drift chambers can substantially reduce the wire and electronics-channel counts. However, the generally wider anode spacing reduces rate capability, and the need for time measurement increases the electronic complexity of each channel. The idea is to record not only the position of the wire closest to the particle trajectory, but also the distance of the particle from that wire, as measured by the time taken for the ionization electrons to drift along the electric-field lines to the wire. One thus records the time of occurrence of the avalanche relative to some reference time tied to the passage of the particle. For minimum-ionizing-particle detection, the reference time can be provided by a signal from a scintillation counter. Given the known drift velocity of the electrons, the drift time determines the distance from the anode wire to an accuracy (typically 100 μm to 200 μm) primarily limited by diffusion of the drifting electrons in the gas. (In special cases, such as high pressure or specially tailored gas mixtures, the diffusion contribution can be reduced so that the contribution from δ rays becomes the limiting factor, allowing sub-100 μm accuracy.) In a drift chamber the cathode planes are usually formed by wires at ‘‘graded’’ potentials, to provide a constant electric field for the drifting electrons (Fig. 5). Use of a gas mixture having a saturated drift velocity reduces the dependence of the position measurement on field inhomogeneities and operating conditions. As in an MWPC, a single anode plane of a
Figure 5. Sketch of the electric-field configuration in a drift chamber with ‘‘graded’’ cathode potentials. Wires are shown end-on. The anode plane is sandwiched between two planes of cathode wires whose high voltages (HV1, HV2, HV3) with respect to ground potential (Gnd) are stepped to produce an approximately constant drift field.
drift chamber measures only the particle coordinate perpendicular to the wire direction. However, there is a twofold (so-called left–right) drift-direction ambiguity, which can be resolved by using several planes of anode wires with positions staggered so as not to lie parallel to a possible particle trajectory. Although not often done, the left–right ambiguity can also be resolved using the asymmetry of induced charge on nearby electrodes mentioned earlier, assuming the avalanche is sufficiently localized in azimuthal angle around the anode (39). The particle coordinate along the anode wire can be obtained through similar means as in an MWPC: additional stereo-view planes, charge division, or induced cathode signals. Since for drift chambers position resolution is decoupled from anode-wire spacing, larger wire spacing than in MWPCs is typically used, making them more straightforward to construct and operate.

Time-Projection Chambers

A time-projection chamber (TPC) (40) is a gas-filled chamber in which ionization electrons produced along the path of a charged particle drift over a substantial distance (several centimeters to meters) before avalanching and being detected in an array of wire grids. With two-dimensional position measurement (e.g., anode and cathode readout), the entire particle trajectory through the chamber is recorded, with the third dimension ‘‘projected’’ into drift time. Such detectors are suitable when the average time between events is sufficiently long compared to the drift time. They are also beneficial when it is desirable to identify particles via their dE/dx ionization energy loss as discussed earlier.

Electronics for Proportional and Drift Chambers

Radeka (41,42) has analyzed the noise and resolution considerations in amplifiers for proportional chambers. Special operating conditions, such as high rates, bring additional concerns, some of which are discussed in Ref. 43.
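The drift-time measurement at the heart of these chambers reduces to a simple conversion. A sketch (the 5 cm/μs saturated drift velocity is the argon-mixture figure quoted earlier; the times are illustrative):

```python
def drift_distance_mm(t_anode_ns, t_reference_ns, v_drift_cm_per_us=5.0):
    # Distance from the anode wire = drift velocity x (avalanche time
    # measured relative to the scintillation-counter reference time).
    drift_time_us = (t_anode_ns - t_reference_ns) * 1e-3
    return v_drift_cm_per_us * drift_time_us * 10.0  # cm -> mm

# An avalanche 400 ns after the reference corresponds to ~20 mm of drift;
# the left-right ambiguity leaves two candidate positions, one on each
# side of the wire.
print(drift_distance_mm(t_anode_ns=450.0, t_reference_ns=50.0))
```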
Since the majority of the signal charge stems from the slow motion of the positive ions liberated in the avalanche, one can use Eq. (4) to show that the signal current has the form (44)

i(t) ∝ 1/(1 + t/t0)   (6)
assuming that the ions drift within a radial electric field directed outward from the anode. The characteristic time constant t0 (of the order of 1 ns) depends on ion mobility, the
electric-field strength at the anode, and the anode-wire diameter. In high-rate applications it is desirable to cancel the slow ‘‘1/t tail’’ in order to be ready for another incident particle as quickly as possible. To do this, elaborate pulse-shaping circuits are sometimes employed using multiple pole–zero filters (43). With well-designed amplifiers and pulse shapers, typical double-pulse resolution of about 100 ns is feasible. When designing such a circuit, one should keep in mind that fluctuations (due to the arrival at the anode of individual ions) with respect to the average pulse shape make perfect pulse-shaping impossible. One must also consider that the amplifier is to be connected to a wire operated at kilovolt potential with respect to a surface that may be only a few millimeters away. While sparks are highly undesirable and may even break a wire, their occurrence cannot be ruled out. Thus amplifiers should be provided with adequate input protection. In typical MWPC or drift-chamber operation, the amplifier output (or the output of a separate pulse-shaping circuit if used) is conveyed to a discriminator (a comparator driving a one-shot multivibrator) to produce a logic pulse when an input exceeding threshold occurs. The discriminator output may be used to set a latch in coincidence with a reference (gate) signal, as is typically done in MWPC installations. The readout circuitry then provides a list of wires having signals within the time interval of the reference signal. In drift-chamber operation one also needs to know the drift time of the ionization electrons; in this case the time interval between the discriminator output and the reference signal must be digitized. If cathode readout is desired, the pulse heights on all cathode segments must also be digitized. 
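The pole–zero tail cancellation mentioned above can be illustrated with a one-stage discrete-time sketch. The true ion tail follows the 1/(1 + t/t0) form of Eq. (6) and in practice is approximated by a sum of exponentials, each cancelled by its own pole–zero stage (43); a single exponential suffices to show the principle (all time constants here are invented):

```python
# Illustrative pole-zero cancellation (not the article's circuit): a long
# exponential tail exp(-t/tau_long) is cancelled by the zero of the filter
# y[n] = (x[n] - d_long*x[n-1]) + d_short*y[n-1], and a shorter pole
# returns the output to baseline quickly, ready for the next hit.
import math

def pole_zero(x, dt, tau_long, tau_short):
    d_long = math.exp(-dt / tau_long)    # zero: cancels the input tail
    d_short = math.exp(-dt / tau_short)  # pole: new, faster decay
    y, x_prev, y_prev = [], 0.0, 0.0
    for xn in x:
        yn = (xn - d_long * x_prev) + d_short * y_prev
        y.append(yn)
        x_prev, y_prev = xn, yn
    return y

dt, tau_long, tau_short = 10.0, 1000.0, 50.0   # ns, illustrative
tail = [math.exp(-n * dt / tau_long) for n in range(200)]
shaped = pole_zero(tail, dt, tau_long, tau_short)
# For this exact-exponential input, shaped[n] decays with tau_short
# instead of tau_long, shortening the occupied time per pulse.
```

With the input tail exactly matched by the zero, the output is a clean pulse decaying with the short time constant; the residual fluctuations from individual ion arrivals, noted above, are what make such cancellation imperfect in practice.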
The amplifier, pulse shaper, and discriminator may all be on separate multichannel circuit boards, combined on one or two circuit boards, or even all combined into a single hybrid or integrated circuit. Since for high-rate applications these circuits need to have both large gain and large bandwidth, and each is connected to an imperfectly shielded antenna (the anode wire), stabilizing large installations against parasitic oscillation is usually challenging and requires careful attention to grounding and circuit and detector layout. Readout of the induced signals on cathodes usually requires longer shaping time and hence less bandwidth. This is primarily due to the longer time required for development of the cathode signal, since the induced cathode charge increases as the ions drift away from the screening influence of the nearby anode wire. Nevertheless, since the accurate
computation of the center of gravity requires a large dynamic range, to guard against electromagnetic interference and cross-talk, the cautions just mentioned concerning system layout apply here as well. Solid-State Detectors Silicon-strip detectors (45) have come into increasing use for tracking applications near the interaction vertex, where tracks are close together and precise position measurements are needed. These detectors are multiple-diode structures fabricated on single wafers of high-resistivity silicon and operated under reverse bias. A center-to-center distance between adjacent strips as small as 10 μm (25 μm to 50 μm is common) allows position resolution an order of magnitude better than that of drift chambers. The resolution achieved depends on readout mode: With single-bit-per-strip digital readout (as for MWPCs) the resolution is as given in Eq. (5), while if analog pulse-height information is used, interpolation between strips is possible because of charge spreading over adjacent strips; then rms resolution of a few microns can be achieved. A charged particle traversing silicon creates electron–hole pairs from ionization energy at the rate of one pair per 3.6 eV. With typical 300 μm detector thickness, the signal is about 25,000 electrons, an order of magnitude smaller than in a proportional chamber. However, the reduced capacitance of a silicon strip and its associated readout electronics compared to that in an MWPC can allow improved noise performance. This is especially true for the recently developed silicon pixel detectors (46–48), in which an individual diode can have dimensions of 30 μm × 300 μm or less. Compared to strip detectors, pixel detectors also offer ease of track reconstruction, since the firing of a pixel determines a point in space along the particle trajectory rather than a line segment.
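The two strip-readout modes just described can be compared in a short sketch (the pitch and charge sharing are invented for illustration; the pitch/√12 result is the Eq. (5) resolution quoted for digital readout):

```python
# Sketch of silicon-strip position readout, with illustrative numbers.
# Binary (hit/no-hit) readout gives rms resolution pitch/sqrt(12);
# analog readout interpolates between strips via the charge-weighted
# center of gravity of the pulse heights.
import math

PITCH = 50.0  # assumed strip pitch in microns (a common value per the text)

def binary_resolution(pitch):
    """Rms resolution for single-bit-per-strip readout."""
    return pitch / math.sqrt(12.0)

def center_of_gravity(strip_charges):
    """Interpolated hit position (microns) from analog pulse heights.
    strip_charges: {strip_index: charge}."""
    total = sum(strip_charges.values())
    return sum(i * PITCH * q for i, q in strip_charges.items()) / total

# Charge shared 30%/70% between strips 10 and 11 puts the hit
# 70% of the way across the gap, i.e. at 535 um for a 50 um pitch.
pos = center_of_gravity({10: 0.3, 11: 0.7})
```

The interpolation is why analog readout reaches few-micron rms resolution while binary readout of the same 50 μm pitch is limited to about 14 μm.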
To achieve efficient and rapid charge collection from the full thickness of the detector requires fully depleting the diodes, leading to typical operating voltage of about 100 V. The signal out of the n-type side then develops in a few nanoseconds (49). With fast shaping time, extremely high particle rates (of order MHz per strip or pixel) can thus be handled, the limit to rate capability being radiation damage to the detectors and electronics over the long term. Charge-coupled devices (CCDs) have also been employed for space-point tracking close to the vertex (50). To achieve adequate signal-to-noise ratio they must be operated with cryogenic cooling. CCDs have the virtue of good position resolution (<10 μm rms) in both dimensions, at the expense of long (of order 100 ms) readout time. They are thus not well suited to high-rate experiments. Other materials have been considered for strip and pixel particle-position measurement. At present much development effort is focused on the problem of radiation damage in vertex detectors (51), since silicon detectors commonly become unusable after a few megarad of irradiation. Due to their larger band gaps, materials such as GaAs (52) or diamond (53) should be substantially more radiation-hard than silicon; however, they feature worse signal-to-noise ratio. Reverse-biased silicon (and germanium) detectors and CCDs are also in widespread use for X-ray and synchrotron-radiation detection (2), nuclear physics, etc. Amplifiers and signal-processing circuitry for silicon-strip and pixel detectors present challenges to the designer since
the small feature size of the detector implies very large channel counts (of order 10⁵ strips or 10⁸ pixels) in an experiment. The cost per channel is thus a key design criterion, and, since the circuits often need to be packed into a small volume, so also are circuit size, interconnections, and power dissipation. Various implementations have lately been developed as semicustom (54) and full-custom integrated circuits (55). Pixel detectors necessarily require custom very-large-scale integrated circuit (VLSI) readout electronics, either integrated onto the detector chip itself (47) or as a separate chip bump-bonded to the detector chip (48). Calorimeters Two common types of calorimeter are those optimized for the detection of electrons and photons (designated electromagnetic) and those optimized for strongly interacting particles (designated hadronic). Another important distinction is whether the output signal is proportional to all of the deposited energy or to only a portion of it; in the latter case the calorimeter is of the sampling type. Sampling Calorimeters. A common arrangement for a sampling calorimeter is a sandwich consisting of layers of dense material interspersed with particle detectors such as scintillation counters. Such a calorimeter can be electromagnetic or hadronic depending on the dense material chosen. Often a combined electromagnetic–hadronic device is built, consisting of an initial electromagnetic section using lead plates followed by a hadronic section using iron plates. Lead's short radiation length (0.56 cm) combined with its long (17 cm) mean free path for hadronic interaction means that electrons and hadrons can be well discriminated in such a structure. Electrons interact in the lead producing an electromagnetic shower as they radiate bremsstrahlung photons that produce electron–positron pairs that in turn radiate photons, etc.
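The shower multiplication just described is often illustrated with the classic Heitler toy model (an illustration, not from this article): the number of shower particles roughly doubles every radiation length, each carrying half the energy, until the particles reach the critical energy and stop multiplying. The critical energy of lead used below (about 7.4 MeV) is an assumed textbook value:

```python
# Toy Heitler-model estimate of electromagnetic shower depth in lead.
# Multiplication stops after n doublings when E0 / 2**n falls to the
# critical energy E_c, i.e. at depth log2(E0/E_c) radiation lengths.
import math

X0_LEAD = 0.56   # radiation length of lead in cm (from the text)
E_C = 0.0074     # assumed critical energy of lead, GeV (~7.4 MeV)

def shower_max_depth(e0_gev):
    """Depth of maximum multiplication, in radiation lengths."""
    return math.log(e0_gev / E_C) / math.log(2.0)

depth_x0 = shower_max_depth(100.0)   # roughly 14 X0 for a 100 GeV electron
depth_cm = depth_x0 * X0_LEAD        # i.e. under 10 cm of lead
```

Shower maximum near 14 radiation lengths, plus a further tail of low-energy particles, is consistent with the roughly 20-radiation-length thickness quoted below for electromagnetic sections.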
Almost all of the electron's energy is thus deposited in the electromagnetic section, which is typically about 20 radiation lengths thick. In a well-designed calorimeter, the ionization energy deposited by the shower of electrons and positrons in the interspersed scintillator (‘‘active’’) layers is proportional to the energy of the incident electron to good approximation. Most hadrons pass through the electromagnetic section leaving only ionization energy and proceed to interact strongly, producing a hadronic shower, in the iron plates of the hadronic section. Energy measurement in sampling calorimeters is limited in resolution due to statistical fluctuations in the ratio of the energy deposited in the active layers to that in the inactive layers. The percent resolution is inversely proportional to the square root of the deposited energy. Typical performance for electromagnetic showers is relative rms energy uncertainty σ(E)/E = 10%/√E (75%/√E for hadronic), where E is expressed in GeV. At the highest energies, as this quantity goes to zero, other contributions (for example, calibration uncertainties) dominate. It is difficult to measure energy in sampling calorimeters to better than a few percent. The poor energy resolution of hadronic sampling calorimeters arises from random fluctuations in the shower composition (e.g., in the relative numbers of neutral versus charged pions produced) and from energy-loss mechanisms (such as breakup of nuclei in the inactive layers) not yielding signal in the sampling medium. The decay of the neutral pion into a
pair of photons converts hadronic energy into electromagnetic energy, which degrades the energy resolution due to the differing response to electromagnetic and hadronic energy. In compensating calorimeters, design parameters are tuned to minimize this response difference and thereby optimize hadronic energy resolution (56). Techniques for calibrating calorimeters include injecting light using lasers as well as studying the response to high-energy muons. Since muons do not shower, they deposit only minimum-ionizing pulse height in the active layers. The need to measure with precision both muons and showers leads to stringent demands for analog-to-digital-converter linearity and dynamic range (57); 14 bits is not uncommon. Homogeneous Calorimeters. These include the inorganic scintillators discussed earlier as well as lead-glass arrays and liquid-argon and liquid-xenon ionization chambers. Lead glass is not a scintillator, but electrons and positrons from an electromagnetic shower occurring within it emit visible Cherenkov light that can be detected using PMTs. Since they are not subject to sampling fluctuations, homogeneous electromagnetic calorimeters generally have better energy resolution than sampling calorimeters, for example, the 2.7%/E^(1/4) (FWHM) that was achieved by the Crystal Ball collaboration using thallium-doped sodium iodide (58) and the 5%/√E achieved by the OPAL collaboration using lead glass (59). PARTICLE SPECTROMETERS Particle spectrometers are characterized by great variety in their purposes and layouts. Generically they may be divided into fixed-target spectrometers, in which a beam is aimed at a target that is stationary (or in the case of a gas-jet target, moving slowly) in the laboratory, and colliding-beam spectrometers, in which two particle beams moving in opposite directions are brought into collision. We consider next two typical examples to illustrate the use of the detectors and techniques described previously.
The Fermilab HyperCP Spectrometer As a simple example of a fixed-target spectrometer we consider that of the Fermilab HyperCP experiment (Fig. 6). The goal of the experiment is the precise comparison of decays of Ξ⁻ baryons (quark content ssd) with those of Ξ̄⁺ antibaryons (s̄s̄d̄), in order to search for a postulated subtle difference between matter and antimatter. The difference in properties between matter and antimatter has been observed through the behavior of only one particle type to date (the neutral kaon). Nevertheless, it is believed to be a general feature of the fundamental interactions among elementary particles and, furthermore, to be responsible for the dominance of matter over antimatter in the universe (60). Baryons containing strange quarks are known as hyperons. The Ξ⁻ and Ξ̄⁺ hyperons are produced by interactions of 800 GeV primary protons from the Fermilab Tevatron accelerator in a small metal target upstream of the Hyperon magnet (Fig. 6). That magnet is filled with brass and tungsten shielding, into which a curved channel has been machined such that charged particles of momenta in the range 125 to 250 GeV/c traverse the channel and emerge out the end to form the secondary beam, while neutral particles and charged
particles outside that momentum range curve either too little or too much and enter the shielding, where they shower and are absorbed. The field directions in the hyperon and analyzing magnets can be set to select either Ξ⁻ or Ξ̄⁺ events. Figure 7 shows the momentum distribution of charged particles emerging from the channel in the positive-beam (Ξ̄⁺) setting. Note that this distribution arises classically, not quantum-mechanically: To accept only a single value of momentum the channel would need to be infinitesimally narrow. Since its width is finite, it in fact accepts particles over some range of track curvature and momentum. The Ξ⁻ or Ξ̄⁺ hyperon undergoes ‘‘cascade’’ decay as each strange quark decays in turn via the weak force. As indicated in Fig. 6(b), the Ξ⁻ can decay into a Λ⁰ hyperon and a negative pion, and the Λ⁰ can decay into a proton and a negative pion. It is this decay chain that the HyperCP experiment studies. The events of interest thus contain a proton (or antiproton) of one charge and two pions of the opposite charge. (For simplicity, Ξ⁻ and Ξ̄⁺ are generically referred to simply as Ξ in the following discussion, and Λ⁰ and Λ̄⁰ as Λ.) Triggering and Data Acquisition. The pion and proton hodoscopes (Fig. 6) are arrays of vertical scintillation counters used to trigger data acquisition from the spectrometer. A trigger signal is created whenever counts are detected simultaneously in both hodoscopes and in the hadronic calorimeter. The state of all detector elements is then digitized and recorded on magnetic tape for later computer analysis. The role of the calorimeter is to suppress triggers that could occur when ambient muons or other low-energy particles count in the hodoscopes. Large numbers of such background particles are produced by particle interactions in the shielding, but their contribution to the trigger rate is effectively suppressed by the calorimeter trigger requirement.
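A minimal software model of the trigger coincidence just described (the function, its signature, and the calorimeter threshold are invented for illustration; the actual trigger is fast hardware logic):

```python
# Hypothetical sketch of a HyperCP-style trigger coincidence: accept an
# event only when both hodoscopes fire AND the hadronic calorimeter sees
# enough energy to reject ambient muons and other low-energy background.
CAL_THRESHOLD_GEV = 40.0  # assumed threshold, for illustration only

def trigger(pion_hodo_hit, proton_hodo_hit, calorimeter_energy_gev):
    return bool(pion_hodo_hit and proton_hodo_hit
                and calorimeter_energy_gev > CAL_THRESHOLD_GEV)

accepted = trigger(True, True, 120.0)  # hyperon-like event: passes
vetoed = trigger(True, True, 2.0)      # ambient muon in both hodoscopes: fails
```

The calorimeter requirement is what converts a two-counter coincidence, which background muons can satisfy, into a selective trigger.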
The 100 kHz rate of event triggers is dominated by interactions of secondary-beam particles in the material of the spectrometer that give counts in both hodoscopes. The HyperCP data acquisition system (61) has the highest throughput of any currently in use in high-energy physics. Digitization of event information typically is completed in less than 3 μs, giving average ‘‘live time’’ (the fraction of time that the system is available to process triggers) of about 70% at 100 kHz trigger rate. To minimize the amount of information that must be recorded to describe each event, the spectrometer design was kept as simple as possible, resulting in an average ‘‘event size’’ of just 580 bytes. Nevertheless, the average data rate is about 15 Mbyte/s and is streamed to 40 magnetic tapes in parallel by 15 single-board computers housed in five VME crates. (Since in fixed-target operation beam is extracted from the Tevatron for only about 20 s each minute, the data acquisition rate from the digitizing system is about three times the average rate to tape, with a 960 Mbyte buffer memory providing temporary data storage.) Coordinate Measurement. The trajectories of charged particles in the spectrometer are measured using a telescope of multiwire-proportional-chamber modules (C1 to C8). Since the channeled secondary-beam rate exceeds the rate of Ξ decays by a factor greater than 10⁴, the rate capability of these detectors is key to obtaining the desired large sample (of order 10⁹ events) of hyperon and antihyperon decays. To maximize rate capability, 1 mm anode-wire-spacing MWPCs
Figure 6. (a) Elevation and (b) plan views of the Fermilab HyperCP spectrometer. (Note the different horizontal and vertical distance scales.) Typical particle trajectories are shown for a cascade decay Ξ → Λπ, Λ → pπ. For graphical simplicity, the curvature of the charged-particle tracks within the analyzing magnet is approximated by a single sharp bend.
are employed for modules C1 and C2, with wire spacing ranging up to 2 mm for modules C7 and C8. With a gas mixture of 50% CF4–50% isobutane, module C1 (which experiences the highest rate per unit area) operates reliably at a rate exceeding 1 MHz/cm². To measure the particle positions in three dimensions, more than one measurement view is required. Each of the eight chamber modules contains four anode planes, two of which have vertical wires and two of which have wires at angles of ±27° with respect to the first two. This choice of stereo angle is found to optimize the measurement resolution for hyperon mass and decay point. Measurements are thus provided in x as well as in directions rotated by ±27° with respect to x, from which y coordinates can be computed. z coordinates are given by the known locations of the MWPC planes.
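The stereo-view computation just described can be sketched as follows (the rotation and sign conventions are assumptions; a real reconstruction would fit the track through all planes simultaneously):

```python
# Sketch of stereo-view coordinate recovery: vertical wires measure x
# directly, while a plane rotated by theta measures the coordinate
# perpendicular to its wires, u = x*cos(theta) + y*sin(theta). Given x
# and u, the y coordinate follows by inverting that relation.
import math

THETA = math.radians(27.0)  # the stereo angle quoted in the text

def u_from_xy(x, y, theta=THETA):
    """Coordinate measured by a plane whose wires are rotated by theta."""
    return x * math.cos(theta) + y * math.sin(theta)

def y_from_x_and_u(x, u, theta=THETA):
    """Recover y from the x-plane and stereo-plane measurements."""
    return (u - x * math.cos(theta)) / math.sin(theta)

x_true, y_true = 0.12, -0.45        # meters, invented hit position
u = u_from_xy(x_true, y_true)       # what the stereo plane would measure
y = y_from_x_and_u(x_true, u)       # recovered y coordinate
```

The 1/sin(theta) factor in the inversion shows why a small stereo angle degrades the y resolution, which is the trade-off the chosen ±27° balances against mass and decay-point resolution.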
Figure 7. Charged particles emerging from the HyperCP hyperon channel have momenta distributed about a mean value of about 170 GeV/c.
Event Reconstruction. Given the information from the MWPC telescope, three-dimensional reconstruction of the momentum vector of each charged particle can be carried out on a computer. Since momentum is conserved, the vector sum of the momenta of the Ξ decay products must equal the momentum vector pΞ of the Ξ itself, and likewise, since energy is conserved, the sum of the energies of the decay products must
equal the energy EΞ of the Ξ. From the relativistic relationship among mass, energy, and momentum, the mass of the Ξ can be reconstructed as

mΞc² = √(EΞ² − pΞ²c²)    (7)
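As a sketch of applying Eq. (7) (working in units where c = 1, so energies, momenta, and masses are all in GeV; the track three-momenta below are invented, and the mass assignments follow the pion/proton hypothesis the text describes):

```python
# Illustrative invariant-mass reconstruction per Eq. (7). The momenta are
# invented; a real analysis would compare the result with the known Xi
# mass, 1.3217 GeV/c^2, and the assignments would sometimes be wrong,
# producing the background continuum seen in Fig. 8.
import math

M_PROTON, M_PION = 0.93827, 0.13957  # GeV, assumed mass hypotheses

def energy(px, py, pz, mass):
    """Relativistic energy from momentum and an assumed mass."""
    return math.sqrt(px * px + py * py + pz * pz + mass * mass)

def invariant_mass(tracks):
    """tracks: list of (px, py, pz, assumed_mass) in GeV."""
    e_total = sum(energy(px, py, pz, m) for (px, py, pz, m) in tracks)
    p_total = [sum(t[i] for t in tracks) for i in range(3)]
    return math.sqrt(e_total * e_total - sum(c * c for c in p_total))

tracks = [(0.5, 0.1, 100.0, M_PROTON),  # proton candidate
          (-0.3, 0.05, 40.0, M_PION),   # pion candidate
          (-0.1, -0.1, 30.0, M_PION)]   # pion candidate
mass = invariant_mass(tracks)
```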
This calculation requires knowledge of the energies of the Ξ decay products. To calculate the energy of a decay product from its momentum, its mass must be known. While the masses could be determined using Cherenkov counters, we will see later that in this instance it is sufficient simply to assume that the two equal-charged particles are pions and the particle of opposite charge is the proton or antiproton. Of course, these assumed particle identities are not always correct, nor are all observed combinations of a proton (or antiproton) and two negative (or positive) pions in fact decay products of a Ξ⁻ (or Ξ̄⁺). Figure 8 shows the distribution in mass of a sample of p̄π⁺π⁺ combinations from the HyperCP experiment. A clear peak at the mass of the Ξ̄⁺ is evident, superimposed on a continuum of background events in which the assumed particle identities are incorrect. The width of the peak reflects uncertainties in the measurement of the particle momentum vectors. These arise from the wire spacings of the MWPCs and from multiple scattering of the particles as they pass through the material of the detectors (again an example in which the measurement uncertainty is not quantum-mechanically dominated). By requiring the reconstructed mass to fall within the peak, one can select predominantly signal events and suppress background. The signal-to-background ratio can be improved by carrying out constrained fits to the cascade decay geometry and constraining the momentum vectors of the Λ decay products to be consistent with the known Λ mass. While the signal-to-background ratio could be further improved by using Cherenkov counters for particle identification, the improvement would come at the expense of increased cost, complexity, and event size and is not needed for the purposes of the experiment. Also of interest in the HyperCP experiment is the distribution in the decay point of the Ξ hyperons. This is shown in
Figure 9. Distribution in decay distance of a sample of Ξ̄⁺ → Λ̄π⁺ → p̄π⁺π⁺ combinations from the HyperCP experiment. To enhance the signal relative to the background, the p̄π⁺π⁺ mass is required to fall within ±5 MeV/c² of the known Ξ̄⁺ mass.
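A toy Monte Carlo (not HyperCP data; the uniform momentum spectrum is an invented simplification of the channel acceptance) illustrates why such a decay-distance distribution is approximately exponential, with a mean decay length γβcτ that varies event by event:

```python
# Toy model of the decay-distance distribution: for momentum p and mass m,
# beta*gamma = p/(mc), so decay lengths are exponential with mean
# beta*gamma*c*tau. The Xi mass and lifetime are known values; the
# momentum spread and sample size are illustrative.
import random

C_CM_PER_S = 2.998e10               # speed of light, cm/s
M_XI, TAU_XI = 1.3217, 1.639e-10    # GeV/c^2 and seconds (known values)

def decay_distance_cm(p_gev, rng):
    beta_gamma = p_gev / M_XI                      # p/(mc), natural units
    mean_cm = beta_gamma * C_CM_PER_S * TAU_XI     # mean decay length
    return rng.expovariate(1.0 / mean_cm)

rng = random.Random(1)
dists = [decay_distance_cm(rng.uniform(125.0, 250.0), rng)
         for _ in range(10000)]
# At the ~170 GeV/c mean momentum the mean decay length is about 6 m,
# consistent with the distance scale plotted in Fig. 9.
```

Mixing exponentials of different means (one per momentum) is precisely the momentum-dependent time-dilation effect listed below as a source of deviation from a single exponential.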
Fig. 9. The Ξ decay point is reconstructed for each event by first locating the point of closest approach of the Λ decay products. This point represents the position at which the Λ decayed. The Λ trajectory is then extrapolated upstream to its point of closest approach with the pion track from the Ξ decay, which represents the position at which the Ξ decayed. We observe an (approximately) exponential distribution as expected for the decay of an unstable particle. This reflects quantum-mechanical randomness: although the Ξ has a definite average lifetime (as given in Table 2), the actual time interval from creation to decay of a given individual Ξ cannot be predicted but varies randomly from event to event. The deviations from exponential character arise from three sources: (1) some background events are present in the sample, (2) we have not corrected for the (momentum-dependent) relativistic time-dilation factor γ, which is different for each event, and (3) the detection probability is not entirely uniform for Ξ hyperons decaying at different points within the vacuum decay region. These effects can all be corrected in a more sophisticated analysis, but the simple analysis presented here serves to illustrate the key points. Collider Detector at Fermilab
Figure 8. Distribution in mass of a sample of p̄π⁺π⁺ combinations from the HyperCP experiment.
To indicate the wide range of possible spectrometer configurations, we next consider briefly the Collider Detector at Fermilab (CDF) spectrometer. This is an example of a colliding-beam spectrometer notable for its use (along with the D0 spectrometer) in the 1995 discovery of the top quark. A key difference between fixed-target and colliding-beam spectrometers is that in the former case the reaction products emerge within a narrow cone around the beam direction, whereas two beams colliding head-on produce reaction products that emerge in all directions. This leads to rather different spectrometer layouts in the two cases. A typical design goal of colliding-beam spectrometers is hermeticity, that is, as few as possible of the particles produced in the collisions should escape undetected. This of course contradicts the requirements
Figure 10. Schematic diagram of the Collider Detector at Fermilab spectrometer. One-quarter is shown; the rest is implied by rotational symmetry about the beam pipe and mirror symmetry about the plane through the interaction point perpendicular to the beam pipe.
that the detectors be supported in place and that the signals be brought out; thus compromises are necessary. The CDF detector has been described in the literature (62); space constraints preclude a detailed discussion here. Figure 10 shows schematically one-quarter of the spectrometer, which surrounds the point (actually a region about 0.5 m long) inside the Tevatron beam pipe at which the proton and
antiproton beams collide. Figure 11 shows the actual layout. Figure 12 is an event display, that is, a schematic diagram showing the particle tracks as reconstructed by the spectrometer; the event shown contains a high-momentum muon (μ⁻) and antimuon (μ⁺) resulting from the production and decay of a Z⁰ gauge boson. In the figure the beam axis runs into and out of the page, as does the magnetic field due to the superconducting solenoidal momentum-analyzing electromagnet. The curvature of charged-particle tracks due to the magnetic field is clearly evident. A positive identification of a muon can be made for those trajectories that pass through the massive hadronic calorimeter relatively unscathed and leave signals in surrounding scintillators and wire chambers. Figure 13 shows the distribution in muon-pair mass observed by the CDF detector. The prominent peak at 91 GeV/c² is due to the Z⁰ boson. Its width reflects both the (classical) measurement resolution of the magnetic spectrometer and the (quantum-mechanical) uncertainty of the Z⁰ boson's mass (intrinsic width Γ = 2.49 GeV/c² FWHM) due to its short lifetime. Since Γc² represents the Z⁰ boson's energy uncertainty and its lifetime τ the duration uncertainty, they satisfy a version of the Heisenberg uncertainty relation: Γc² · τ = ħ.

Figure 11. Photograph of the Fermilab Collider Detector Facility (CDF) spectrometer in its assembly hall; the forward detectors have been retracted to give access to the central portion.

Figure 12. End-view display of a CDF event containing a Z⁰ → μ⁺μ⁻ decay. The muon tracks are the two line segments emerging back-to-back from the interaction point at about 5 o'clock and 11 o'clock. They are identified as muons by the ×'s that indicate signals in the inner and outer muon detectors. Because of their high momentum (pμ = mZ⁰c/2 = 45.6 GeV/c), the muon tracks show little curvature as compared to the tracks of the remaining (lower-momentum) charged particles in the event. It is apparent that more tracks point down and to the left than up and to the right, suggesting that noninteracting electrically neutral particles (neutrinos) may have been produced, or that some neutral particles were missed due to cracks in the calorimeters. The ‘‘missing momentum’’ vector due to the undetected neutral particles is indicated by the arrow. (The ‘‘low-β’’ quadrupole magnets serve to focus the proton and antiproton beams at the interaction point.)

SUMMARY
Following an introduction to particle physics and particle detectors, we have considered two contrasting examples of subatomic-particle spectrometers, ranging from the relatively simple (HyperCP) to the complex (CDF). While the brief discussion just given illustrates the variety of issues encountered in designing particle spectrometers and their electronic instrumentation, the actual design process is quite involved. Extensive computer simulation is generally employed to tailor a solution optimized for the problem at hand. Requirements for performance and reliability often come up against practical constraints on cost and on development and assembly time. The ongoing development of new technology for particle detectors and their instrumentation, together with the development of increasingly intense particle beams, makes measurements possible that were previously not feasible. When new detector technology is employed, simulation studies must be combined with prototype tests both on the bench and at test beams. The investigation of matter and energy at ever deeper and more sophisticated levels exemplifies fruitful collaboration among scientists and engineers. ACKNOWLEDGMENTS The authors thank N. Gelfand for useful discussions and the Particle Data Group and the Fermilab CDF and HyperCP collaborations for permission to reproduce their results. This work was carried out with support from the U.S. Department of Energy under Grants DE-FG02-94ER40840 and DE-AS0589ER40518. BIBLIOGRAPHY 1. M. Mandelkern, Nuclear techniques for medical imaging: Positron emission tomography, Annu. Rev. Nucl. Part. Sci., 45: 205–254, 1995. 2. E. M. Westbrook and I. Naday, Charge-coupled device-based area detectors, Methods Enzymol., 276: 244–268, 1997.
Figure 13. Dimuon mass spectrum obtained by the CDF collaboration.
3. J. R. Janesick and S. T. Elliot, History and advancement of large area array scientific CCD imagers, Astron. Soc. Pacific Conf. Series, Tucson, AZ, 1991. 4. R. M. Barnett et al. (Particle Data Group), Review of particle physics, Phys. Rev. D, 54: 1–720, 1996. 5. J. A. Appel et al., Performance of a lead-glass electromagnetic shower detector at Fermilab, Nucl. Instrum. Methods, 127: 495–505, 1975. 6. I. Gaines, Hadrons and leptons at high transverse momentum, Ph.D. thesis, Columbia Univ., 1976, p. 59. 7. T. Murphy et al., Hadron showers in iron and muon identification, Nucl. Instrum. Methods A, 251: 478–492, 1986. 8. G. D'Agostini et al., Nucl. Instrum. Methods A, 219: 495–500, 1984.
9. J. Litt and R. Meunier, Cerenkov counter technique in high-energy physics, Annu. Rev. Nucl. Sci., 23: 1–43, 1973. 10. J. Seguinot and T. Ypsilantis, A history survey of ring imaging Cherenkov counters, Nucl. Instrum. Methods A, 343: 1–29, 1994; T. Ypsilantis and J. Seguinot, Theory of ring imaging Cherenkov counters, ibid., 343: 30–51, 1994. 11. B. Dolgoshein, Transition radiation detectors, Nucl. Instrum. Methods A, 326: 434–469, 1993. 12. KTeV Collaboration, E. Cheu et al., Proposal to continue the study of direct CP violation and rare decay processes at KTeV in 1999, proposal to Fermilab, 1997 (unpublished). 13. The radiation lengths of various materials have been calculated and tabulated by Y. S. Tsai and can be found in the Review of Particle Physics (Ref. 4), p. 72. 14. R. K. Swank, Characteristics of scintillators, Annu. Rev. Nucl. Sci., 4: 111–140, 1954. 15. P. H. Eberhard et al., Detection efficiency and dark pulse rate of Rockwell (SSPM) single photon counters, in G. A. Lampropoulos, J. Chrostowski, and R. M. Measures (eds.), Applications of Photonic Technology, New York: Plenum, 1995, pp. 471–474. 16. C. R. Kerns, A high rate phototube base, IEEE Trans. Nucl. Sci., NS-24: 353–355, 1977. 17. R. Ruchti, Scintillating fibers for charged-particle tracking, Annu. Rev. Nucl. Sci., 46: 281–319, 1996. 18. A variant of the solid-state photomultiplier described by M. D. Petroff and W. G. Stapelbroek, Photon-counting solid-state photomultiplier, IEEE Trans. Nucl. Sci., NS-36: 158–162, 1989; M. D. Petroff, M. G. Stapelbroek, and W. Z. Kleinhans, Detection of individual 0.4–28 μm wavelength photons via impurity-impact ionization in a solid-state photomultiplier, Appl. Phys. Lett., 51: 406–408, 1987. 19. B. Baumbaugh et al., Performance of multiclad scintillating and clear waveguide fibers read out with visible light photon counters, Nucl. Instrum. Methods A, 345: 271–278, 1994. 20. C. Bebek, A cesium iodide calorimeter with photodiode readout for CLEO-II, Nucl.
Instrum. Methods A, 265: 258–265, 1988. 21. G. Charpak et al., The use of multiwire proportional counters to select and localize charged particles, Nucl. Instrum. Methods, 62: 262–268, 1968. 22. G. Charpak and F. Sauli, High resolution electronic particle detectors, Annu. Rev. Nucl. Sci., 34: 285–349, 1984. 23. J. Kadyk, Wire chamber aging, Nucl. Instrum. Methods A, 300: 436–479, 1991; J. Wise, J. A. Kadyk, and D. W. Hess, A chemical model for wire chamber aging in CF4/iC4H10 gases, J. Appl. Phys., 74: 5327–5340, 1993. 24. H. Fischle, J. Heintze, and B. Schmidt, Experimental determination of ionization cluster size distributions in counting gases, Nucl. Instrum. Methods A, 301: 202–214, 1991. 25. S. F. Biagi, A multiterm Boltzmann analysis of drift velocity, diffusion, gain and magnetic-field effects in argon-methane-water vapour mixtures, Nucl. Instrum. Methods A, 238: 716–722, 1989. 26. D. J. Griffiths, Introduction to Electrodynamics, 2nd ed., Englewood Cliffs, NJ: Prentice-Hall, 1989, p. 156. 27. S. Ramo, Currents induced by electron motion, Proc. Inst. Radio Eng., 27: 584–585, 1939; see also Ref. 42. 28. A. Peisert, The parallel plate avalanche chamber as an endcap detector for time projection chambers, Nucl. Instrum. Methods, 217: 229–235, 1983. 29. C. Brown et al., D0 muon system with proportional drift tube chambers, Nucl. Instrum. Methods A, 279: 331–338, 1989. 30. P. Baringer et al., A drift chamber constructed of aluminized mylar tubes, Nucl. Instrum. Methods A, 254: 542–548, 1987.
31. J. Fischer et al., Proportional chambers for very high counting rates based on gas mixtures of CF4 with hydrocarbons, Nucl. Instrum. Methods A, 238: 249–264, 1985. 32. G. Charpak et al., Some features of large multiwire proportional chambers, Nucl. Instrum. Methods A, 97: 377–388, 1971; R. Bouclier et al., Proportional chambers for a 50,000-wire detector, ibid., 115: 235–244, 1974. 33. V. Radeka and P. Rehak, Second coordinate readout in drift chambers by charge division, IEEE Trans. Nucl. Sci., NS-25: 46– 52, 1978. 34. A. R. Erwin et al., Operational experience with a 2.5 ⫻ 1.5 meter delay line chamber, Nucl. Instrum. Methods A, 237: 493–500, 1985. 35. E. Gatti et al., Optimum geometry for strip cathodes or grids in MWPC for avalanche localization along the anode wires, Nucl. Instrum. Methods, 163: 83–92, 1979. 36. E. Mathieson and G. C. Smith, Reduction of non-linearity in position-sensitive MWPCs, IEEE Trans. Nucl. Sci., NS-36: 305–310, 1989. 37. T. J. Harris and E. Mathieson, Angular localisation of proportional chamber avalanche, Nucl. Instrum. Methods, 154: 183– 188, 1978. 38. G. Charpak et al., Progress in high-accuracy proportional chambers, Nucl. Instrum. Methods, 148: 471–482, 1978. 39. G. Charpak, F. Sauli, and W. Duinker, High-accuracy drift chambers and their use in strong magnetic fields, Nucl. Instrum. Methods, 108: 413–426, 1973. 40. D. R. Nygren, Future prospects of the TPC idea, Phys. Scr., 23: 584–589, 1981. 41. V. Radeka, Signal, noise and resolution in position sensitive detectors, IEEE Trans. Nucl. Sci., NS-21: 51–64, 1974. 42. V. Radeka, Low noise techniques in detectors, Annu. Rev. Nucl. Part. Sci., 38: 217–277, 1989. 43. R. A. Boie, A. T. Hrisoho, and P. Rehak, Signal shaping and tail cancellation for gas proportional detectors at high counting rates, Nucl. Instrum. Methods, 192: 365–374, 1982. 44. G. R. Ricker, Jr. and J. J. Gomes, Pulse risetimes in proportional counters, Rev. Sci. Instrum., 3: 227–233, 1969. 45. G. 
Hall, Semiconductor particle tracking detectors, Rep. Prog. Phys., 57: 481–531, 1994; G. Lutz and A. S. Schwarz, Silicon devices for charged-particle track and vertex detection, Annu. Rev. Nucl. Part. Sci., 45: 295–335, 1995. 46. T. Mouthuy, Silicon pixel detector research and development, Nucl. Instrum. Methods A, 368: 213–216, 1995. 47. C. J. Kenney et al., A prototype monolithic pixel detector, Nucl. Instrum. Methods A, 342: 59–77, 1994. 48. S. L. Shapiro et al., Silicon PIN diode array hybrids for charged particle detection, Nucl. Instrum. Methods A, 275: 580–586, 1989; M. Campbell et al., Development of a pixel readout chip compatible with large area coverage, Nucl. Instrum. Methods A, 342: 52– 58, 1994. 49. A. Rudge and P. Weilhammer, A very high bandwidth low noise amplifier for Si detector readout, in Proc. 3rd Int. Conf. Electron. Future Colliders, Chestnut Ridge, NY: LeCroy Res. Syst., 1993, pp. 181–197. 50. M. G. Strauss (for the SLD Collaboration), Performance of a silicon pixel vertex detector in the SLD, in C. H. Albright, P. H. Kasper, R. Raja, and J. Yoh (eds.), The Fermilab Meeting: DPF92, Singapore: World Scientific, 1993, pp. 1758–1760; C. J. S. Damerell, CCD vertex detectors in particle physics, Nucl. Instrum. Methods A, 342, 78–82, 1994. 51. For a recent summary see G. Hall, Radiation damage to silicon detectors, Nucl. Instrum. Methods A, 368: 199–204, 1995, and references therein.
PASSIVATION 52. K. M. Smith, GaAs detector performance and radiation hardness, Nucl. Instrum. Methods A, 368: 220–223, 1995. 53. F. Borchelt et al., First measurements with a diamond microstrip detector, Nucl. Instrum. Methods A, 354: 318–327, 1995. 54. D. Christian et al., The development of two ASIC’s for a fast silicon strip detector readout system, IEEE Trans. Nucl. Sci., NS36: 507–511, 1989; T. Zimmerman, A high speed, low noise ASIC preamplifier for silicon strip detectors, ibid., NS-37: 439–443, 1990. 55. J. Antos et al., The SVX II silicon vertex detector upgrade at CDF, Nucl. Instrum. Methods A, 360: 118–124, 1995; M. Tanaka et al., LSI design and data-acquisition architecture for a silicon microvertex detector at the KEK B-factory, ibid., 342: 149–155, 1994; W. J. Haynes, Silicon tracker data acquisition, in G. J. Blanar and R. L. Sumner (eds.), Proc. 6th Int. Conf. Electron. Particle Phys., Chestnut Ridge, New York: LeCroy Res. Syst., 1997, pp. 25–42. 56. R. Wigmans, High resolution hadronic calorimetry, Nucl. Instrum. Methods A, 265: 273–290, 1988. 57. R. J. Yarema et al., A high speed, wide dynamic range digitizer circuit for photomultiplier tubes, Nucl. Instrum. Methods A, 360: 150–152, 1995; T. Zimmerman and M. Sarraj, A second generation charge integrator and encoder ASIC, IEEE Trans. Nucl. Sci., NS-43: 1683–1688, 1996. 58. E. D. Bloom and C. Peck, Physics with the crystal ball detector, Annu. Rev. Nucl. Part. Sci., 33: 143–197, 1983. 59. M. A. Akrawy et al., Development studies for the OPAL end cap electromagnetic calorimeter using vacuum photo triode instrumented leadglass, Nucl. Instrum. Methods A, 290: 76–94, 1990. 60. For more on the search for matter-antimatter asymmetry see D. M. Kaplan, Fixed-target CP-violation experiments at Fermilab, in A. K. Gougas et al. (eds.), 1st Int. Four Seas Conf., CERN Yellow Report 97-06, Geneva, Switzerland: CERN, 1997, pp. 25– 33, and references therein. 61. D. M. 
Kaplan et al., The HyperCP data acquisition system, in G. J. Blanar and R. L. Sumner (eds.), Proc. 6th Int. Conf. Electron. Particle Physics, Chestnut Ridge, New York: LeCroy Res. Syst., 1997, pp. 165–177. 62. CDF Collaboration, F. Abe et al., The CDF detector: An overview, Nucl. Instrum. Methods A, 271: 387–403, 1988. Reading List Good introductory treatments of special relativity and particle physics may be found in textbooks on modern physics, for example: A. Beiser, Concepts of Modern Physics, 5th ed., New York: McGrawHill, 1995. K. S. Krane, Modern Physics, 2nd ed., New York: Wiley, 1995. H. C. Ohanian, Modern Physics, 2nd ed., Englewood Cliffs, NJ: Prentice-Hall, 1995. There are also more elementary and abbreviated treatments in general-physics textbooks, for example: D. Halliday, R. Resnick, and J. Walker, Fundamentals of Physics (Extended), 5th ed., New York: Wiley, 1997. H. C. Ohanian, Physics, Vol. 2, 2nd ed., New York: Norton, 1989, Expanded. Introductory texts on particle physics include D. J. Griffiths, Introduction to Elementary Particles, New York: Harper and Row, 1987. D. H. Perkins, Introduction to High Energy Physics, 3rd ed., Menlo Park, CA: Addison-Wesley, 1987. Detailed treatments of particle detection techniques may be found in R. M. Barnett et al. (Particle Data Group), Review of Particle Physics, Phys. Rev. D, 54: 1–720, 1996.
679
T. Ferbel (ed.), Experimental Techniques in High Energy Nuclear and Particle Physics, 2nd ed., Singapore: World Scientific, 1991. W. R. Leo, Techniques for Nuclear and Particle Physics Experiments, 2nd ed., New York: Springer, 1994.
DANIEL M. KAPLAN Illinois Institute of Technology
KENNETH S. NELSON University of Virginia
Photomultipliers
Standard Article
Wiley Encyclopedia of Electrical and Electronics Engineering
Bradley Utts and Phillip Rule, Burle Industries, Inc., Lancaster, PA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5218
Article Online Posting Date: December 27, 1999
Abstract: The sections in this article are Functions, Tube Operation, Tube Structure, Tube Characteristics, Stability, Advanced Section, and Temporal Characteristics.
PHOTOMULTIPLIERS

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.

Photomultiplier tubes are vacuum tubes that detect light energy in the ultraviolet (UV), visible, and near-infrared regions of the electromagnetic spectrum. The detected light energy is converted into electrical current, which is internally amplified to a measurable level. Output signal current is ideally proportional to input illumination. Photomultiplier tubes are extremely sensitive, and some types are capable of detecting and counting single photons. Photomultiplier tubes are widely used in medical, scientific, and industrial applications for the detection and measurement of low-intensity light. Some examples are cancer detection, exhaust emission monitoring, high-energy physics research, baggage inspection, and oil exploration. The light being detected can be of a variety of types
including incandescent, fluorescent, laser, and Cerenkov radiation. Nuclear radiation can also be detected by using a photomultiplier tube to detect light flashes in an optically coupled scintillating material. Material defects or optical density variations can be measured by passing a semitransparent test material in the path between the light (or radiation) source and photomultiplier tube. Photomultiplier tubes give excellent performance in many radiation detection applications by providing relatively noise-free gain and wide-bandwidth amplification. Optimum performance in each application is achieved through knowledge of tube design and tube operating characteristics. Information is the key to selecting the correct tube for the application. Photomultiplier tube manufacturers are the primary sources of such information. Manufacturers provide catalogues and technical handbooks with detailed information on physical and chemical principles, tube construction, tube operation, performance characteristics, and test methods. Textbooks, patents, and journal articles are also excellent sources of detailed information. Application requirements dictate choices in tube characteristics; size, spectral range, sensitivity, dark current, noise, gain, speed, linearity, and cost are the primary factors.
FUNCTIONS

The overall function of a photomultiplier tube is to detect light and generate an electrical signal. The process is best understood by analyzing the individual functions occurring within the tube. The first function is detection. Light is detected by the photocathode, which is formed within the vacuum environment. A window provides access for external light to reach the photocathode. Window materials are designed to be highly transparent in the wavelength region of interest. When incident light energy in or near the visible region is absorbed by the photocathode, photoelectrons are emitted into the vacuum space adjacent to the photocathode surface. The photocathode sensitivity range and the light transmission cutoff characteristics of the window material limit the useful range of tube sensitivity. The photocathode is typically the limiting factor at the red end of the spectrum; the window material, typically glass, tends to be highly absorbing in the blue end of the spectrum. Emitted photoelectrons are directed and focused toward the multiplier section by the focus element, which is positioned above the first dynode. A voltage difference generated between the photocathode and the focus element creates an electrostatic field, which draws the electrons toward the first dynode in the multiplier section. The multiplier section amplifies the signal by increasing the number of electrons as the electrons traverse the section. The multiplier section is composed of a series of plates, or dynodes, having successively higher applied positive voltage. These plates are made with secondary emitting surfaces and are arranged such that electrons emitted from one are attracted to the next. Primary electrons striking each dynode give rise to an increased number of secondary electrons, which are in turn attracted to the next dynode. The number of electrons grows geometrically as the charge pulse moves through the multiplier section.
Multiplier-section gain can range from about 1000 up to 100 million.
The anode is the last element in the photomultiplier tube and has the most positive applied voltage. The anode is typically positioned between the last two dynodes, so as to function as both accelerating grid and collector of electrons. The electron charge collected by the anode is conducted to external circuitry, which is used to further amplify or process the output signal. The photocathode, focus element, dynodes, and anode are electrically connected to external leads by vacuum feed-through wires.

TUBE OPERATION

Photomultiplier tubes are manufactured in a clean environment using metal, ceramic, and glass parts. The focus element, multiplier section, and anode are sealed inside an envelope (usually glass), leaving only a small vacuum port. Air is evacuated through the vacuum port. While under vacuum, the photocathode and secondary emitting surfaces are formed by vapor deposition of alkali metals. When the vapor deposition processing is completed, the vacuum port is sealed and tubes are given additional stabilization processing through application of voltage, light, and heat. Photomultiplier tubes are operated by applying a direct-current (dc) voltage to each of the tube elements and dynodes. One high-voltage dc source is normally used to power a photomultiplier tube. A resistor string voltage divider distributes voltage among the tube elements. External circuitry is connected to the anode to detect tube output signal. Extraneous light and other disruptive inputs are excluded before applying voltage. Measurements are made by exposing the tube's photocathode to the light source of interest. Photomultiplier tubes are easily damaged by improper use. For example, exposure to helium is to be avoided as helium readily diffuses through certain types of glasses, raising the internal gas pressure.
Damage also may result if photomultiplier tubes are operated outside the manufacturer's maximum rating for applied voltage, current, temperature, or other environmental conditions. Exposure to ambient lighting while the tube is under voltage also may damage the photocathode and dynodes. Personal precautions are recommended when working with high voltage. Eye and face protection is recommended when handling large tubes where danger of implosion is present.

TUBE STRUCTURE

Functional Parts

The photomultiplier tube is composed of five basic functional parts: (1) window, (2) photocathode, (3) focus element, (4) multiplier section, and (5) anode. The window allows light to reach the photocathode. The photocathode converts light energy into free electrons. The focus element directs the photoelectrons into the multiplier section. The multiplier section amplifies the photoelectrons, and the anode outputs the resulting amplified signal. Figure 1 shows a typical end-window photomultiplier tube and the location of functional parts. Tubes are evacuated to provide the proper environment for the formation and survival of the photocathode and secondary emitting surfaces. Internal vacuum in the range 10^-8 torr is needed for photomultiplier tube operation. Good vacuum is
also necessary to minimize the presence of gas molecules and to ensure that the mean free paths of the electrons far exceed the distance between dynodes. Electrons which strike gas molecules generate ions that disrupt tube function and damage the photocathode.

Figure 1. Photomultiplier-tube cross section showing basic structure and location of functional parts.

The most common window materials for photomultiplier tubes are borosilicate (hard) glass and lime (soft) glass. Window materials are chosen so as not to restrict photocathode sensitivity in the region of interest. The type of window material is important when working in or near the ultraviolet part of the spectrum because the transmission of most window materials cuts off at short wavelengths. Quartz and special ultraviolet-transmitting glasses are normally used in these applications. Magnesium fluoride and sapphire extend the range further into the ultraviolet region. Figure 2 shows the transmission characteristics of common window materials. The 10% cutoff points for these materials are 115 nm for MgF2, 140 nm for sapphire, 160 nm for quartz, 190 nm for UV glass, 270 nm for borosilicate glass, and 300 nm for lime glass. Optical losses may occur due to index of refraction mismatches between the tube window and the surrounding environment. Most of the common window materials have an index of refraction of about 1.5.

Figure 2. Window material cutoff characteristics may limit tube sensitivity at short wavelengths, even though photocathode sensitivity extends beyond the cutoff point.

Photocathode material may be deposited directly on the inside surface of the window, inside the tube but opposite to the window, or on an internal structure. Photocathodes of the first variety are called transmission mode photocathodes, and the light energy incident on one side produces photoelectrons on the opposite side. Light passing through the window may be absorbed or reflected by the photocathode, or it may pass on through the photocathode without generating photoelectrons. The other types of photocathodes are called reflective mode photocathodes and emit photoelectrons on the same side as the incident light energy. A reflective mode photocathode is deposited on a reflective nonwindow surface within the tube. Light striking the photocathode has two chances to be absorbed: first on the initial pass and again on the reflected pass if not absorbed on the first. Photoemission is the end product of a three-step process: (1) absorption of a photon, which transfers energy to an electron, (2) movement of the electron having increased energy toward the surface of the photocathode, and (3) escape of the energized electron from the photocathode surface into the vacuum. Photoelectrons generated deep inside the photocathode tend not to be emitted as they lose energy through lattice collisions and collisions with other electrons before reaching the surface. The ideal photocathode would produce one photoelectron for each incident photon. Typical photocathodes produce about one photoelectron for every three or four incident photons. High photocathode sensitivity, or quantum efficiency, is important in determining tube performance. Photoemission occurs when an excited electron has sufficient energy to escape from the surface of the photocathode. To generate a free electron, a photon striking the photocathode must have energy hν sufficient to raise a valence-band electron to the vacuum level. In the band-gap model shown in Fig. 3, this minimum required energy is the sum of band-gap energy (EG) and electron affinity (EA).

Figure 3. Band-gap model showing the minimum photon energy (EG + EA) required to achieve emission of a photoelectron.

Band-gap energy is the energy required to raise an electron from the valence band to the conduction band. Electron affinity is the energy required for an electron in the conduction band to escape into vacuum. Values for EG and EA are dependent on the photocathode type and composition. Light in the visual region has energy in the range of 1.6 to 3 eV. To be sensitive in the visual region, a photocathode material must be able to produce photoelectrons with input photons having energies of 3 eV or less. The relationship between energy and wavelength is

E = hc/λ, using λ = c/ν
where E is energy, h is Planck's constant, c is the speed of light in a vacuum, λ is the wavelength of light, and ν is frequency. Electron affinity can be reduced by lowering the surface work function. With sufficient downward band bending at the surface, the vacuum level becomes lower than the bottom of the conduction band and electron affinity becomes negative. This class of photocathodes is called negative electron affinity (NEA). For NEA photocathodes, lower-energy (longer-wavelength) photons are able to produce photoelectrons, thus extending sensitivity out into the near-infrared region. The two commonly used solid-state NEA photocathodes are GaAs(Cs) and InGaAs(Cs). The most common photocathodes are made from the alkali metals: lithium, sodium, potassium, rubidium, and cesium. Alkali metals are chosen for having low electron affinity. The photocathodes are composed of thin layers of alkali metal grown by vapor deposition on surface materials such as antimony or magnesium oxide. The pure alkali materials for the photocathode are produced by thermal reduction of alkali metal salts. Vacuum must be maintained during and after photocathode formation as the evaporated alkali metals quickly react with air, oxygen, or water vapor. Alkali metal combinations are used to create unique characteristics. Photocathodes composed of two mixed alkali metals are called bialkali cathodes. Mixture combinations of more than two alkali metals, called multialkali, are used to create photocathodes with wider band sensitivity extending further toward the red. Photocathodes are tailored for special applications such as matching sensitivity with a scintillating source or having low dark current at elevated temperatures for oil-well logging. Common bialkali cathodes are KCsSb or RbCsSb. NaKSb is used for high-temperature bialkali cathodes. Multialkali or extended red multialkali (ERMA) cathodes are made of NaKCsSb.
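The energy-wavelength relation E = hc/λ given above is easy to check numerically. A minimal sketch (the constants are standard CODATA values):

```python
H = 6.62607015e-34          # Planck's constant (J*s)
C = 2.99792458e8            # speed of light in vacuum (m/s)
E_CHARGE = 1.602176634e-19  # elementary charge (J per eV)

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a wavelength given in nm: E = hc/lambda."""
    return H * C / (wavelength_nm * 1e-9 * E_CHARGE)

# The visual region spans roughly 400 nm to 700 nm:
print(round(photon_energy_ev(400), 2))  # 3.1 eV (violet)
print(round(photon_energy_ev(700), 2))  # 1.77 eV (red)
```

The result matches the 1.6 eV to 3 eV range quoted above for light in the visual region.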
Photocathode radiant sensitivity is measured in terms of radiant flux, with units being amperes per watt. Figure 4 shows the photocathode radiant sensitivity and range of sensitivity for several common photocathodes, as a function of wavelength.
Figure 4. Sensitivity ranges of common photocathodes falling in the UV, visual, and near-infrared region. (Photocathode radiant sensitivity, mA/W, versus wavelength, nm, for bialkali, multialkali, extended red multialkali, and GaAs cathodes.)
Quantum efficiency (QE) is the ratio of the number of emitted photoelectrons to the number of incident photons at a given wavelength, expressed in percent:

QE(λ) = nk/np = Shν/e = Shc/(λe)

where nk is number of photoelectrons emitted, np is number of incident photons, S is cathode radiant sensitivity at the given wavelength, h is Planck's constant, ν is frequency, e is electron charge, c is speed of light in a vacuum, and λ is the wavelength of light. The peak quantum efficiency reaches about 25% to 30% for most photocathodes at the wavelength of maximum emission. The photocathode surface may be flat, or curved to equalize electron path lengths to dynode 1. The concave inner window surface shortens the electron path lengths at the edges of the photocathode and improves timing uniformity across the photocathode. The electron optics design of the photocathode, focus element, and first dynode is critical in determining collection efficiency and timing characteristics. The focus element is designed to attract and focus all possible photoelectrons to the prime area of the first dynode in the multiplier section. The focus element is physically positioned between the photocathode and dynode 1. Mechanical shape, distance from photocathode, and applied voltage are adjusted to optimize the electron optics. The focus element is sometimes given a voltage intermediate to the photocathode and dynode 1 voltages. Unipotential lenses are also commonly used, with the focus element voltage equal to first dynode voltage. The multiplier section is composed of a series of secondary electron emitting plates called dynodes. The physical processes of secondary emission are similar to those of photoemission except that electron bombardment causes electron emission. Incident electrons interact with, and transfer energy to, one or more electrons in the secondary emitting material. Some of the energized electrons move toward the material surface, and those reaching the solid–vacuum surface with sufficient remaining energy to escape are emitted into the vacuum. The average number of secondary electrons emitted per primary striking electron is called the secondary emission ratio.
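The QE formula above can be evaluated directly. A sketch, where the 80 mA/W radiant sensitivity is an illustrative value rather than a number from any particular datasheet:

```python
H = 6.62607015e-34          # Planck's constant (J*s)
C = 2.99792458e8            # speed of light in vacuum (m/s)
E_CHARGE = 1.602176634e-19  # elementary charge (C)

def quantum_efficiency_pct(s_amps_per_watt: float, wavelength_nm: float) -> float:
    """QE(lambda) = S*h*c/(lambda*e), expressed in percent."""
    return 100.0 * s_amps_per_watt * H * C / (wavelength_nm * 1e-9 * E_CHARGE)

# A cathode with radiant sensitivity S = 80 mA/W at 400 nm (illustrative):
print(round(quantum_efficiency_pct(0.080, 400), 1))  # 24.8 (percent)
```

This is consistent with the 25% to 30% peak quantum efficiency quoted above for typical photocathodes.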
The common dynode substrate materials are beryllium copper, nickel, and stainless steel. Nickel and stainless steel dynodes are typically coated with antimony, and the secondary emitting surface is alkali antimonide formed by the reaction between the surface antimony and the alkali metals present in the tube during the photocathode building process. Beryllium copper is treated to produce a uniform surface layer of beryllium oxide. Alkali metals are deposited to increase secondary emission properties by lowering the surface work function. The principles of negative electron affinity can also be applied to secondary emitting materials. Gallium phosphide treated with cesium, or cesium plus oxygen, makes an NEA material with useful photomultiplier tube properties. GaP is typically used as a first dynode to provide a high secondary emission ratio and improve energy resolution. Current amplification, or gain, is achieved through a cascade process as electrons move from dynode to dynode. The number of electrons is increased at each stage by the secondary emission ratio of the dynode at that stage. For dynode 1, we have

δ1 = I1/(aIk)

where δ1 is the secondary emission ratio of dynode 1, a is the collection efficiency, I1 is current leaving dynode 1, and Ik is current leaving the cathode. For each successive dynode, where unity collection efficiency is assumed, we have

δm = Im/Im−1

where m represents dynode position number. Total current amplification (gain) for the multiplier section is the product of the individual dynode gains, or anode current divided by photocathode current:

Gain = δ1 δ2 · · · δm = Ip/Ik

where Ip is anode current. For equal secondary emission ratios at each dynode, we have

Gain = δ^n

where n is the number of dynodes. Multiplier sections, also called cages, are constructed in several forms, with each having an advantage for certain characteristics. The basic multiplier section classifications are circular, box and grid, venetian blind, linear focused, and mesh. The circular cage takes little space and has fast time
response. The larger dynode box and grid-type cage has good collection efficiency and energy resolution. The venetian blind type is simple in design and has good collection efficiency. The linear focused type has fast time response and good pulse linearity. The mesh type is short in length and is much less affected by magnetic fields.

Voltage Divider

Photomultiplier tubes require external electrical circuits, called voltage dividers (also referred to as resistor networks or bleeder strings), to operate. The voltage divider is usually a string of resistors that provide successively increasing voltage potentials from the photocathode, through the dynode elements, and to the anode. Typical examples of voltage divider circuits are shown in Fig. 5. The application of the photomultiplier tube determines what the voltage distribution should be among the photocathode, dynodes, and anode. In many applications the resistors can be of equal value. The voltage difference between each stage is

V(m) = V(T)R(m)/R(T)

where V(m) is the voltage difference between stages separated by resistor R(m), V(T) is the photocathode-to-anode voltage, and R(T) is the total resistance between the photocathode and anode. For timing applications the voltage difference between the photocathode and the first dynode may be increased with respect to the voltage differences between the other elements to decrease the time of arrival of photoelectrons collected at the first dynode.
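The divider relation V(m) = V(T)R(m)/R(T) and the earlier cascade-gain relations can be put together in a short sketch. The numbers here (a 10-stage tube, equal 100 kΩ resistors, δ = 4 at every dynode) are hypothetical, chosen only for illustration:

```python
import math

def stage_voltages(v_total: float, resistors: list[float]) -> list[float]:
    """Per-stage voltage differences: V(m) = V(T) * R(m) / R(T)."""
    r_total = sum(resistors)
    return [v_total * r / r_total for r in resistors]

def multiplier_gain(deltas: list[float]) -> float:
    """Total gain is the product of the per-dynode secondary emission ratios."""
    return math.prod(deltas)

# Hypothetical 10-stage tube: equal 100 kOhm divider resistors, 1000 V applied,
# and a secondary emission ratio of 4 at every dynode.
volts = stage_voltages(1000.0, [100e3] * 10)
print(volts[0])                     # 100.0 (equal resistors give equal stage voltages)
print(multiplier_gain([4.0] * 10))  # 1048576.0, i.e. a gain of about 1e6
```

A gain of about 10^6 for ten stages at δ = 4 sits comfortably inside the 10^3 to 10^8 multiplier-section range quoted earlier.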
Figure 5. Examples of voltage divider circuits for positive and negative high voltage, showing how applied high voltage across the divider distributes voltage to each tube element.
V(T) and R(T) determine the total current, I, passing through the divider network. The value of R(m) represents a compromise between high and low divider chain current applications, but typically ranges between 50 kΩ and 5000 kΩ. Care should be exercised to choose the power rating of the resistors to ensure adequate heat dissipation. A voltage divider powers the photomultiplier tube using positive or negative high voltage. For positive high-voltage operation the photocathode is at ground potential, and the dynodes and anode are at positive voltage. In this mode of operation (also called the pulse mode), an anode capacitor blocks dc current and dc high voltage from the external electronics and allows the passage of only a charge pulse. The combination of the load resistor and anode capacitor creates a time constant that affects the shape of the output pulse. For negative high-voltage operation (also called the dc mode, or current mode) the photocathode is operated at negative voltage, the dynodes are at successively increasing voltage, and the anode is at ground potential. A load resistor may be placed between the anode and ground, and a capacitor between the anode and external electronics is not needed. External electronics can therefore measure dc current and charge pulses directly from the anode. The outside surface of the photomultiplier tube glass envelope should be at the same voltage as the photocathode. Negative high-voltage operation usually requires a conductive coating over the glass envelope which is connected at the photocathode voltage. This prevents ions from migrating through the glass envelope, which can result in loss of photomultiplier tube sensitivity. An insulator to protect the user from electrical shock is placed over the conductive covering. The conductive coating also serves as an electrostatic shield which reduces noise. Stray capacitance and inductance should be minimized in the divider chain since these distort pulse shape.
Coaxial cable is used to connect the anode signal lead to the processing electronics for high-frequency and pulsed signals.

TUBE CHARACTERISTICS

Sensitivity

Luminous sensitivity is a term given to describe photomultiplier tube output per unit luminance input. Luminous sensitivity is measured using a light source of suitable broadband spectral emission characteristics over a range of wavelengths where photomultiplier tubes are operated. A common practice is to specify the photomultiplier tube sensitivity using a tungsten lamp with a lime glass window operated at a color temperature of 2856 K as a light source. Sensitivity is also measured using the same light source with a blue band-pass filter to simulate the expected response within the emission spectrum range of a thallium-doped sodium iodide scintillator. Known constant values of luminous flux at the photocathode faceplate are used to determine luminous sensitivity. Sensitivity is reported in three ways by photomultiplier tube manufacturers. Photocathode sensitivity is a measure of the integral quantum efficiency of the photocathode. It is normally determined by measuring the current flowing between the negatively biased photocathode and remaining elements inside the photomultiplier tube at ground potential. The gain of the photomultiplier tube is not taken into account for this
measurement. Anode sensitivity is a measure of the integral quantum efficiency of the photocathode, the collection efficiencies and gains of the dynodes, and the collection efficiency of the anode, measured at the anode output. It is frequently used to determine the expected output of the photomultiplier tube when the user knows the input luminous flux. Both photocathode and anode sensitivity are expressed in units of amperes per lumen. Photocathode radiant sensitivity is the photocathode output current divided by the radiant flux input at a given wavelength or range of wavelengths. Photocathode radiant sensitivity indicates the spectral sensitivity of the photomultiplier tube. It is typically expressed in units of amperes per incident watt.

Dark Current and Noise

Dark current refers to the anode current present when the photomultiplier tube is kept completely in the dark while under bias. It is typically measured at a given high voltage and temperature. There are many contributions to the noise of a photomultiplier tube. Thermionic noise is the result of thermionic emission of single electrons from the photocathode. These single electrons are multiplied in the dynode chain and collected by the anode. This noise component is temperature-dependent, and reducing the temperature at which the photomultiplier tube is operated minimizes the thermionic emission rate. The relationship between this noise component and temperature can be generalized by

J = A T^2 e^(−W/kT)    (1)

where J is the thermionic current density, A is a constant, T is the absolute temperature in kelvins, W is the thermionic work function, and k is the Boltzmann constant. Ohmic noise is caused by leakage current across the various insulators used in the construction of the photomultiplier tube. It can be distinguished from other noise sources in that it generally has a linear relationship with the applied high voltage.
This may be caused by simple resistances such as contaminants on the inside or outside surface of the photomultiplier tube envelope. The presence of very long-lived radioactive impurities within the photomultiplier tube envelope contributes to the overall noise. Natural potassium contains 0.0118% of potassium-40, which has a half-life of 1.3 × 10^9 years. It is present in many bialkali and multialkali photocathode materials and in the glass used in making photomultiplier tube envelopes. The decay scheme of potassium-40 is such that beta particles and gamma rays can be emitted during the decay process. Glass used in manufacturing modern photomultiplier tubes is usually selected to have a low potassium content. Afterpulses and light emission are regenerative sources of noise and are kept to very low levels in modern photomultiplier tubes. Afterpulses are caused by the feedback of positive ions to parts of the dynode elements and the photocathode, and occur some time after the signal pulse. Afterpulses are caused by the presence of residual gases inside the photomultiplier tube envelope. Different residual gases give rise to afterpulses of different delay times. For example, impurity gases of Xe+, Ar+, N2+, and H2+ have afterpulses with characteristic delays of 2.5, 1.34, 1.17, and 0.32 μs, respectively (2).
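The temperature dependence in Eq. (1) is steep enough to be worth a quick numerical illustration. The effective work function used below is an assumed, illustrative value, not a figure from the text:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def thermionic_density_ratio(t1_k, t2_k, work_function_ev):
    """Ratio J(T1)/J(T2) from the Richardson-type relation J = A * T^2 * exp(-W/kT)."""
    j = lambda t: t ** 2 * math.exp(-work_function_ev / (K_BOLTZMANN_EV * t))
    return j(t1_k) / j(t2_k)

# Cooling from 25 C (298 K) to -25 C (248 K), with an assumed effective W = 1.5 eV:
ratio = thermionic_density_ratio(298.0, 248.0, 1.5)
print(f"thermionic emission falls by a factor of ~{ratio:.0f} on cooling 50 K")
```

The exponential term dominates the T^2 prefactor, which is why modest cooling can suppress thermionic dark noise by orders of magnitude.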
PHOTOMULTIPLIERS
Electron bombardment and subsequent luminescence of construction materials and impurities on the internal parts of the photomultiplier tube can cause light emission. Emitted photons may be detected by photosensitive elements inside the photomultiplier tube, giving rise to a nonsignal increase of photomultiplier tube current. Light emission and feedback in linearly focused photomultiplier tubes can be a source of noise when the photomultiplier tube is operated at very high gains (3). Field emission is the emission of electrons from the internal elements of the photomultiplier tube when localized electric fields become very high due to sharp edges or surface roughness. Good manufacturing and design practices reduce points of intense localized electric field and control field emission. Exposure of the photomultiplier tube to light may result in phosphorescence from its components. This phosphorescence has been attributed to metastable excitation mechanisms, primarily in the glass faceplate (3). It is a temporary source of noise that typically decays away over periods of minutes to hours and is particularly troublesome in low-light-level applications such as single-photon counting. Keeping the photomultiplier tube in the dark immediately before use may minimize this source of noise. Three different figures of merit are used to characterize dark current noise in generalized ways, rather than simply stating the dark current in amperes at a given high voltage. Equivalent anode dark current input (EADCI) characterizes the dark current for different values of anode sensitivity. EADCI is the value of the luminous flux incident on the photocathode required to produce an anode current equal to the observed dark current. It is the ratio of the dark current to the anode luminous sensitivity at a given high voltage.
Units reported for EADCI are lumens, watts at the wavelength of maximum cathode responsivity, or watts at a specified wavelength. Equivalent noise input (ENI) is a useful means of characterizing noise if the light source is modulated and its bandwidth is known. ENI is the value of luminous flux which, when modulated in a known manner, produces an rms output current equal to the rms noise current within the specified bandwidth. Noise equivalent power (NEP) is essentially the same as ENI, except that units of power (watts) are used instead of luminous flux (lumens) to characterize the light incident on the photocathode.

Linearity

Linearity refers to the linear curve obtained when the logarithm of the anode output current is plotted against the logarithm of the incident luminous flux. This curve remains linear over several orders of magnitude of incident luminous flux. Saturation of the anode output occurs at very high levels of incident flux. This is generally caused by a space-charge limiting effect between the last two dynode stages, near which the anode is situated, and may depend on the design of the photomultiplier tube. Linearity may be extended by increasing the voltage difference between the last few dynodes compared to the voltage differences between the other dynodes. Another cause of deviation from linearity is the use of an improper divider chain. Nonlinearity can result when the anode current is of the same order of magnitude as the divider chain current. The voltage divider should be designed to carry at least 10 times the anticipated anode output current to prevent this type of nonlinearity.
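The 10× rule above reduces to a simple design check; the margin factor and currents here are illustrative:

```python
def divider_supports_anode_current(divider_current_a, peak_anode_current_a, margin=10.0):
    """True if the divider chain current exceeds the peak anode current by `margin`."""
    return divider_current_a >= margin * peak_anode_current_a

# A 303 uA divider chain against two hypothetical peak anode currents:
print(divider_supports_anode_current(303e-6, 100e-6))  # False: 10x rule violated
print(divider_supports_anode_current(303e-6, 20e-6))   # True
```

When the check fails, the options are to raise the divider current (lower resistor values, at the cost of heat) or to stiffen the last stages with bypass capacitors or a tapered divider.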
STABILITY

Magnetics

The gain stability of a photomultiplier tube is sensitive to changes in surrounding magnetic fields, including the earth's magnetic field. Magnetic fields can deflect the paths of electrons moving inside the photomultiplier tube. The degree of sensitivity depends on the design of the photomultiplier tube and its orientation in the magnetic field. Magnetic sensitivity is usually high for photomultiplier tubes used in scintillation counting because of the long distance between the photocathode and the first dynode. For all types of photomultiplier tubes, stability is improved by enclosing the photomultiplier in a shield of high-magnetic-permeability metal, called a mu-metal shield. These shields can be cylinders of foil or solid metal. Optimum stability is obtained when the magnetic shield is at the same potential as the photocathode. Magnetic shields must be electrically isolated to prevent an electrical shock hazard when the photomultiplier tube is operated with its photocathode at negative potential.

Fatigue and Drift

Photomultiplier tubes that operate under conditions of excessively high anode current for long periods of time may exhibit an abnormal decrease in anode sensitivity. This decrease is referred to as fatigue (4). Fatigue of a photomultiplier tube is thought to be due to degradation of the secondary electron emission process on the dynode surfaces caused by intense electron bombardment and dynode surface polarization (5,6). Electron bombardment may result in cesium migration from the surface of the dynode, which causes a decrease in secondary electron emission. Some photomultiplier tubes undergo less fatigue than others because of their design or the materials used in their construction. Drift is a change in anode output under normal conditions of constant photocathode illumination. Individual photomultiplier tubes may increase or decrease in gain during the drift measurement period (7).
Charging of insulator elements internal to the photomultiplier tube after the light source is activated may cause short-term drift. A generalized method has been developed to measure drift, employing a light source that provides a constant signal input to the photomultiplier tube. Drift is then quantified according to the relation (8)
Drift (%) = (100/p̄) · (1/n) Σ_{l=1}^{n} |p̄ − p_l|
where p̄ is the mean pulse height averaged over n readings, p_l is the pulse height at the lth reading, and n is the total number of readings. Characterization of drift using this expression usually commences after an initial stabilization period of the photomultiplier tube. Readings are then taken at regular intervals thereafter.
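The drift expression above is just the mean absolute deviation of the pulse-height readings expressed as a percentage of their mean, as this sketch shows (the readings are illustrative):

```python
def drift_percent(readings):
    """Mean absolute deviation of pulse-height readings, as a percent of the mean."""
    n = len(readings)
    mean = sum(readings) / n
    return (100.0 / mean) * sum(abs(mean - p) for p in readings) / n

# Four pulse-height readings whose mean is 100:
print(drift_percent([100.0, 102.0, 98.0, 100.0]))  # 1.0 (percent)
```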
Changes in the input signal count rate can produce shifts in the amount of charge per pulse at the anode output. This is particularly true in the case of scintillation counting. Count rate shifts may be due to the design of the voltage divider and the photomultiplier tube. The voltage between dynodes may vary with count rate if the divider current is too low, resulting in a changing gain. Photomultiplier tubes employing different dynode materials and structures may have different count-rate-shift properties. External gain-correction circuitry may be required in photomultiplier tube applications that demand a high degree of stability over long periods of time, especially when external environmental conditions such as ambient temperature are known to vary.

ADVANCED SECTION

Output Characteristics

When noise can be neglected, the intensity and shape of the output pulse depend on the intensity and shape of the input pulse, the internal photomultiplier tube resistance and capacitance, and the total resistance and capacitance in the anode circuit. In general, the pulse viewed at the anode takes the form (9,10)

V(t) = (αQ/C) [1/(α − τ)] (e^(−τt) − e^(−αt))

where τ = 1/RC, R is the total anode output resistance, C is the total anode output capacitance, Q is the charge per pulse collected at the anode, α is the decay constant (with units of inverse time) of the input signal, and t is time. It is assumed that the input signal decays exponentially as e^(−αt), that its rise time is negligible with respect to its decay time, and that the input pulse shape is not disturbed by the photomultiplier tube. From this equation certain useful approximations can be deduced. The first is when τ ≫ α. For this case the shape of the output pulse may be approximated by

V(t) ≈ (Q/C)(α/τ)(1 − e^(−τt))

for short time periods t, and by

V(t) ≈ (Q/C)(α/τ) e^(−αt)

for long time periods t. In these situations the anode pulse has an amplitude proportional to Q, the rise of the anode pulse is dominated by τ, and the decay of the anode pulse approximates the decay constant of the signal pulse at longer times t. The intensity of the anode output pulse is usually small. The second useful approximation occurs when τ ≪ α. For this case the shape of the output pulse may be approximated by

V(t) ≈ (Q/C)(1 − e^(−αt))

for short time periods t, and by

V(t) ≈ (Q/C) e^(−τt)

for long time periods t. In these situations the anode pulse has an amplitude proportional to Q, the rise time of the anode pulse is dominated by the decay constant of the signal pulse, and the decay time of the anode pulse is dominated by τ. The intensity of the anode output pulse is usually large. The application of the pulse-counting system may determine which of the two approximations above should be employed; the anode output pulse is proportional to Q in both. Circuits designed with τ ≫ α may be considered when it is necessary to learn the decay characteristics of the input signal by simply viewing the decay of the anode output pulse, and also in timing applications. However, the anode signal level is usually small, and the noise contribution of the measurement system should be kept at a minimum to keep the signal-to-noise ratio high. Circuits designed with τ ≪ α may be considered when high anode outputs are needed for signal processing and gain is at a premium. However, the pulse rate capabilities of these circuits may be limited, since the decay time of the anode pulse may be longer than the decay time of the input signal.

Temporal Characteristics

Photomultiplier tubes are sometimes required to produce output pulses of specific temporal characteristics for applications requiring coincidence and anticoincidence circuits. This is especially true in applications involving positron emission tomography (PET) and time of flight (TOF). Useful properties that describe temporal characteristics are rise time, fall time, transit time, and transit time spread. These properties are usually measured using a light pulse having the approximate shape of a delta function. Practical light sources producing such pulses include light-emitting diodes, mode-locked lasers, and spark sources. Rise time is the time required by the leading edge of the pulse to increase in magnitude from 10% to 90% of the maximum amplitude of the pulse, as shown in Fig. 6. Fall time is the time required by the trailing edge of the pulse to decrease in magnitude from 90% to 10% of the maximum amplitude of the pulse.
Figure 6. Timing definitions for the photomultiplier tube output pulse, referenced from the time that light arrives at the photocathode.
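The anode pulse shape V(t) given above can be checked numerically. The component values in this sketch are illustrative assumptions (a 230 ns scintillator decay constant and a hypothetical 1 MΩ, 10 pF anode circuit): with τ ≪ α, the peak approaches Q/C, as the approximation predicts.

```python
import math

def anode_pulse(t, q, c, alpha, tau):
    """V(t) = (alpha*Q/C) * (1/(alpha - tau)) * (exp(-tau*t) - exp(-alpha*t))."""
    return (alpha * q / c) / (alpha - tau) * (math.exp(-tau * t) - math.exp(-alpha * t))

Q = 1.6e-12          # 1.6 pC per pulse (about 1e7 electrons)
C = 10e-12           # 10 pF total anode capacitance
alpha = 1 / 230e-9   # decay constant of a 230 ns scintillation pulse

# Large RC (tau << alpha): R = 1 Mohm, so tau = 1/RC = 1e5 per second
tau = 1 / (1e6 * C)
t_peak = math.log(alpha / tau) / (alpha - tau)   # time at which V(t) is maximum
v_peak = anode_pulse(t_peak, Q, C, alpha, tau)
print(f"peak ~ {v_peak:.3f} V, compared with Q/C = {Q / C:.3f} V")
```

Repeating the calculation with a 50 Ω load (τ ≫ α) gives a far smaller peak but a pulse whose decay tracks the scintillator, which is the trade-off described above.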
Transit time is the time interval between the arrival of the delta-function light pulse at the photomultiplier tube entrance window and the time at which the output pulse at the anode reaches peak amplitude. Transit time varies considerably for spot-source illumination across the face of the photocathode when the area of the spot source is much less than the active area of the photocathode. This is because the distance between different points on the photocathode and the first dynode varies, and the electric field intensity across the photocathode is not completely uniform, which results in different electron velocities. Transit-time difference is the maximum time difference between the peak current outputs of different regions of the photocathode under simultaneous small-spot illumination. Transit time spread is the full-width at half-maximum of the time distribution of a set of pulses, each of which corresponds to the photomultiplier tube transit time for that individual event. It is a measure of the distribution in time of a charge pulse collected at the anode. Transit time spread is influenced by the same factors that cause differences in transit time for spot illumination, and by the number of photoelectrons. A photomultiplier tube with poor transit-time-spread characteristics may yield inferior time information, since the pulse has been broadened in time. Rise time, fall time, transit time, and transit time spread may be evaluated using a 50 Ω load at the anode output. With a 50 Ω load the output of the photomultiplier tube approximates a current pulse, which tends to preserve time information and helps to minimize electrical pulse reflections in the associated measuring electronics.

Scintillation Counting

One of the more important uses of photomultiplier tubes is their application in scintillation detectors for the measurement of ionizing radiation.
A scintillation detector is a scintillator optically coupled to a photodetector, such as a photomultiplier tube or photodiode. A scintillator is a material that emits a light pulse when it absorbs ionizing radiation. The intensity and shape of the light pulse carry information about the ionizing radiation being absorbed in the scintillation material. Two of the more important solid scintillation materials presently used in nuclear medical imaging are crystals of thallium-doped sodium iodide (NaI(Tl)) and bismuth germanate (Bi4Ge3O12). Presently, tens to hundreds of thousands of photomultiplier tubes are consumed for these commercial and academic purposes each year. The scintillation detector produces a charge-pulse output whose intensity is proportional to the energy of a totally absorbed gamma ray. This charge pulse is then processed, and information such as energy is extracted for the absorbed gamma ray. The variance in the charge collected for successive pulses from the scintillation detector is an extremely important parameter to minimize, since it affects the ability to resolve the energy of the gamma ray. Statistically, the variance in the charge per pulse output of the scintillation detector can be separated into components arising from the photomultiplier tube and the scintillator, represented by the relationship (11)

σ²(D) = σ²(S) + σ²(P)
Figure 7. Energy distribution of photomultiplier output pulses as collected and displayed using a multichannel analyzer (counts per channel versus energy, or channel number). Percent pulse height resolution (PHR) is a figure of merit calculated as 100 times the full-width at half-maximum in channels divided by the pulse height in channels.
where σ²(D) is the variance of the charge per pulse output of the detector, σ²(S) represents the variance in the number of photons impinging on the photocathode from the scintillator, and σ²(P) represents the variance in the number of electrons per charge pulse due to the photomultiplier tube alone. Statistically, high photocathode quantum efficiency, good first-dynode collection efficiency, and high first-dynode gain are essential in reducing the variance of the charge per pulse collected at the anode (12). Frequently the charge output of the scintillation detector is converted into a voltage pulse for processing. Variances in the charge per pulse then cause fluctuations in the magnitude of the resulting voltage pulses. Pulse height resolution (PHR) is a number used in nuclear spectroscopy to quantify these voltage fluctuations and is normally measured using a multichannel analyzer. The multichannel analyzer displays the accumulation of voltage pulses in the form of a photopeak. A photopeak is a histogram whose x axis represents voltage (proportional to gamma-ray energy). The y axis of the histogram represents frequency of occurrence (number of counts per counting time interval per voltage increment). The position along the x axis of the centroid of the histogram represents the pulse height (the mean voltage of a Gaussian-shaped distribution corresponding to the energy of the gamma ray being measured). Referring to Fig. 7, PHR is calculated according to

PHR(%) = (FWHM/PH) × 100

where FWHM is the full-width at half-maximum of the histogram and PH is the position of the centroid. Mathematically, σ(D) and FWHM are directly proportional if sources of noise other than the detector can be neglected (10). Low PHR values are essential for the separation of photopeaks when gamma rays of many different energies are simultaneously present and for discriminating against lower-energy scattered gamma rays. This implies an advantage for low values of σ²(P).
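For a Gaussian photopeak, FWHM = 2·sqrt(2·ln 2)·σ ≈ 2.355σ, so PHR follows directly from the peak centroid and standard deviation. The channel numbers in this sketch are illustrative:

```python
import math

FWHM_PER_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))  # ~2.355 for a Gaussian peak

def phr_percent(centroid_channel, sigma_channels):
    """Pulse height resolution: 100 * FWHM / pulse height, for a Gaussian photopeak."""
    return 100.0 * FWHM_PER_SIGMA * sigma_channels / centroid_channel

# A photopeak centered at channel 662 with sigma of 19.7 channels:
print(f"PHR = {phr_percent(662.0, 19.7):.1f}%")  # ~7%, typical of NaI(Tl) at 662 keV
```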
Photomultiplier tube manufacturers typically publish PHR numbers, along with the conditions under which PHR is determined, as a figure of merit for σ²(P) to aid in the selection of photomultiplier tubes.
Future Developments

Future trends in photomultiplier tube applications involve the development of compact designs, position-sensitive photomultiplier tubes for nuclear medical imaging, microchannel plates, and hybrid photomultiplier tubes. Compact designs typically result in photomultiplier tubes that are shorter in overall length and weigh less than what is presently available (13). Position-sensitive photomultiplier tubes offer the prospect of providing discrete positional readout (14,15). Microchannel plates offer fast time response and insensitivity to magnetic fields (12). Hybrid photomultiplier tubes are compact devices consisting of an evacuated housing containing a window and photocathode, where the photoelectrons are accelerated onto a silicon target to create a charge output with gain (16).

BIBLIOGRAPHY

1. J. B. Birks, Theory and Practice of Scintillation Counting, New York: Macmillan, 1964.
2. G. A. Morton, H. M. Smith, and R. Wasserman, IEEE Trans. Nucl. Sci., NS-14 (1): 443–448, 1967.
3. H. R. Krall, IEEE Trans. Nucl. Sci., NS-14 (1): 455–459, 1967.
4. R. U. Martinelli and G. A. Morton, Theoretical study of the mechanisms of fatigue in photomultipliers, Phase II, NASA CR-66906, 19 Jan. 1970.
5. O. Youngbluth, Appl. Opt., 9 (2): 321–328, 1970.
6. J. Cantarell, IEEE Trans. Nucl. Sci., NS-11: 152–159, 1964.
7. D. E. Persyk, IEEE Trans. Nucl. Sci., 38 (2): 128–134, 1991.
8. IEEE Standard Test Procedures for Photomultipliers for Scintillation Counting and Glossary for Scintillation Counting Field, ANSI/IEEE Std. 398-1972 (reaffirmed 1982).
9. G. F. Knoll, Radiation Detection and Measurement, New York: Wiley, 1989.
10. Z. H. Cho, J. P. Jones, and M. Singh, Foundations of Medical Imaging, New York: Wiley, 1993.
11. P. Dorenbos, J. T. M. de Haas, and C. W. E. van Eijk, IEEE Trans. Nucl. Sci., 42 (6): 2190–2202, Dec. 1995.
12. Burle Photomultiplier Handbook, TP-136, 1980.
13. T. Hayashi, IEEE Trans. Nucl. Sci., 36 (1): 1078–1083, 1989.
14. H. Kume, S. Muramatsu, and M. Iida, IEEE Trans. Nucl. Sci., 33 (1): 359–363, 1986.
15. A. J. Bird and D. Ramsden, Nucl. Instrum. Methods, A299: 480–483, 1990.
16. A. J. Alfano, Appl. Spectrosc., 52 (8): 303–307, 1998.

Reading List

P. W. Nicholson, Nuclear Electronics, New York: Wiley, 1974.
A. H. Sommer, Photoemissive Materials, New York: Krieger, 1980.
N. Carleton (ed.), Methods of Experimental Physics: Astrophysics, Vol. 12, New York: Academic Press, 1974.
BRADLEY UTTS PHILLIP RULE Burle Industries, Inc.
PHOTON FLUX. See RADIOMETRY. PHOTONIC BAND GAP MATERIALS. See PHOTONIC CRYSTALS.
PHOTONIC CONTROL OF ELECTRONIC CIRCUITRY. See PHOTOCONDUCTING SWITCHES.
Wiley Encyclopedia of Electrical and Electronics Engineering: Radiation Detection, Standard Article. Gerald Entine and Michael R. Squillante, Radiation Monitoring Devices, Inc., Watertown, MA. Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W5214. Online Posting Date: December 27, 1999.
Abstract. The sections in this article are: Interactions of Ionizing Radiation with Matter; Mechanisms of Detection; Nonelectronic Mechanisms of Detection; Electronic Mechanisms of Detection; Semiconductor Detectors; Summary.
RADIATION DETECTION

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.

Of all the available sensors for detecting radiation, solid-state radiation detectors have always had the most aesthetic appeal. In principle, they provide the most sensitivity per unit volume, the most flexibility in packaging, and the most efficient conversion of ionizing radiation into electric signals suitable for measurement with modern instrumentation. Traditionally, solid-state detectors were defined as sensors made from semiconductor materials in which the electronic charge produced by ionizing radiation was collected with an electric field and amplified by external electronics for interpretation. More recently, this class of detectors has been expanded to include combinations of scintillator materials with solid-state photodetectors, as contrasted with sensors relying on vacuum tube optical detectors such as photomultiplier tubes. Until 25 years ago, the only solid-state detectors of importance were those made of germanium or silicon. Since that time, however, progress has been made in many other materials, and the list now includes such materials as CdTe, CdZnTe, PbI2, and HgI2, as well as scintillation-based sensors combined with silicon p–i–n photodiodes, silicon avalanche photodiodes, HgI2 photodiodes, PbI2 photodiodes, and amorphous silicon photodiode arrays. This article will review some of the mechanisms by which ionizing radiation interacts with matter, how these interactions are converted to electrical signals within the various types of solid-state sensors, the properties of many of the more common solid-state nuclear detectors, and a comparison of their properties as it relates to choosing among them for a particular application. More traditional radiation detectors, such as Geiger tubes and ion chambers, as well as important specialized detector systems, such as particle spectrometers, are addressed separately in other articles of this encyclopedia.

INTERACTIONS OF IONIZING RADIATION WITH MATTER

The major forms of ionizing radiation are X rays, gamma rays, charged particles, and neutrons. X rays and gamma rays are both electromagnetic radiation (photons) that differ only in their origin. X rays originate from transitions of electrons in the orbits of atoms, while gamma rays originate from transitions taking place within the nucleus. Charged particles include electrons or beta particles (electrons that originate from events in the nucleus), positrons, protons, alpha particles, fission fragments, and ions. Ionizing radiation deposits energy in matter through several different sets of interaction mechanisms, each of which is characteristic of the type of ionizing radiation involved. For the most common class of ionizing radiation, the high-energy photons (X rays and gamma rays), there are three primary mechanisms of interaction: photoelectric absorption, Compton scattering, and pair production. In each of these interactions, the photon loses all or part of its energy to a free or orbital electron, causing the electron to move through the material at high speed and transfer its newly found energy to other species in the solid. Each of these mechanisms is dominant over a different range of photon energy, the exact range being determined in part by the atomic number of the material. In general, the photoelectric effect is most probable at energies below about 100 keV, the Compton effect is dominant at energies between 100 keV and 2 MeV, and pair production is dominant at all higher energies.

In a photoelectric event, the photon loses all of its energy to an atomic electron (usually in the K shell), which is ejected from the atom. The energy of the ejected electron (Ee−) is the energy of the incoming photon (hν) minus the binding energy of the electron (Eb):

Ee− = hν − Eb    (1)

The ion left behind has a vacancy in a low orbital that is quickly filled by capturing a free electron from the medium or by rearrangement of the atom's electrons. This process results in the emission of one or more of the characteristic X rays of the material. The X ray may be reabsorbed by another photoelectric event or may escape from the material. The probability of a photon undergoing a photoelectric event, φ, is a strong function of both energy and atomic number, Z, as follows:

φ ∝ Z^n / E^3    (2)

where n varies from 4 to 5 depending on the energy. This strong dependence on Z is the reason why high-Z materials are preferred for shielding and for use in detector materials.

In Compton scattering, the photon loses only a portion of its energy to the electron, resulting in the release of both an energetic electron and a photon of lower energy. The energy of the scattered photon, hν′, depends on both the energy of the incident photon and the angle of the scattered photon (θ) as follows:

hν′ = hν / [1 + (hν/m0c²)(1 − cos θ)]    (3)

where m0c² is the rest-mass energy of the electron (511 keV). The scattered photon may undergo a photoelectric event or a second scatter event, or it may escape.

The third mechanism, pair production, can occur only at energies above 1.02 MeV, which equals twice the rest-mass energy of an electron. If a photon at such an energy is within the Coulomb field of a nucleus, it can transform into an electron and a positron, with the two particles carrying off any energy in excess of the amount needed to create them. In general, the electron will survive, but the positron, after slowing down as it moves through the material, will undergo an annihilation reaction with an electron and create two characteristic 511 keV photons that are emitted in opposite directions.

In contrast to photons, charged particles lose energy primarily through multiple Coulombic interactions with electrons in the medium. As the particle loses energy, it creates a wake of free electrons and characteristic X rays. The maximum distance that the charged particle travels before it loses all of its energy in a material (the range) is related to the
nature of the material (primarily its density) and is also related to the size, charge, and energy of the particle. For small particles such as electrons, the range can be substantial (millimeters to centimeters), while for larger particles involving nuclei, the range is typically very short (less than 1 μm) except at relativistic energies, with typical ranges for protons being intermediate. For fission fragments, which often have significantly higher atomic weights, almost all of the ionization takes place within angstroms of the surface.

In general, the probability of a neutron interacting with matter is much less than that for charged particles. Neutrons interact with matter through several mechanisms, all of which are highly dependent on the energy of the neutron. For slow neutrons, those with energies below about 0.5 eV, the primary means of interaction are elastic scattering and nuclear reactions. Only a relatively small number of atomic species have nuclei with a high probability of absorbing neutrons. The most common of these elements include tritium, boron, lithium, and cadmium. In materials that do not contain such elements, the path of slow neutrons is quite long (centimeters), and the neutron essentially becomes a particle moving through the matter with the thermal energy dictated by the temperature (about 0.025 eV at 20°C). The energy imparted to the surrounding matter by low-energy neutron elastic scattering is too low to be directly measured, and therefore slow neutrons can be detected only if they produce reaction products after being absorbed by the nuclei of the medium. A number of such absorption interactions are possible, including (n, γ), in which a gamma ray is emitted; (n, α), in which an alpha particle is emitted; (n, p), in which a proton is emitted; and (n, fission), in which fission fragments of the atom are emitted. As the energy of the neutron increases, the probability of its undergoing a nuclear reaction decreases.
Instead, higher-energy neutrons can collide with and impart energy to the nuclei in the material through simple scattering. A nucleus to which energy has thus been imparted can then create secondary radiation as it moves through the medium and interacts as an energetic charged particle. Eventually, the high-energy neutrons either escape from the material or lose enough energy to be absorbed and undergo a nuclear reaction.
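The energy a neutron can hand to a recoil nucleus in one elastic collision is bounded by standard two-body scattering kinematics (a textbook result, not a formula from this article): a neutron striking a nucleus of mass number A transfers at most a fraction 4A/(A + 1)² of its energy, which is why light nuclei (especially hydrogen) make the best moderators and recoil media.

```python
def max_fractional_transfer(mass_number: int) -> float:
    """Maximum fraction of a neutron's kinetic energy transferred to a
    nucleus of mass number A in a single elastic collision: 4A/(A + 1)**2."""
    a = float(mass_number)
    return 4.0 * a / (a + 1.0) ** 2

for name, a in (("H", 1), ("C", 12), ("Si", 28), ("Pb", 208)):
    print(f"{name:2s} (A={a:3d}): up to {max_fractional_transfer(a):.0%} per collision")
```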
MECHANISMS OF DETECTION

It should be evident from the discussion above that for all forms of ionizing radiation, detection is primarily based on sensing the charge created by the electrons and their complementary positive ions or positive "holes" (depending on the detector type) that are generated by either the primary or secondary interactions. In spite of this, there are significant differences among the detectors used to detect different forms of ionizing radiation. In most cases, it is the penetrating ability and the energy of the radiation that determine the specifications and requirements of the detector. For example, charged particles stop very quickly, so a detector for such particles must have its active region very close to the surface facing the source. Neutrons, on the other hand, are very penetrating, and neutron detectors must either be relatively large or rely on converter materials that convert the neutron energy into secondary radiation that is easier to stop. For X rays and gamma rays, the details of the detector structure are more dependent
on the energy of the incoming photons. Low-energy photons are easy to stop but require a very good signal-to-noise ratio in the detector, so the detector may have to be cooled or have internal gain (e.g., avalanche photodiodes). High-energy photons produce ample signals but require detectors that are larger and made of high-Z materials in order to achieve useful sensitivity. It is convenient to segregate the many types of detectors into two classes: (a) those that rely on the energy deposited by the ionizing radiation being converted into nonelectronic energy, such as light or chemical energy, and (b) those that rely on the energy being converted directly into detectable electronic charge. Useful detectors from both of these classes exist for virtually all types of ionizing radiation, and many of them involve semiconductor devices.
NONELECTRONIC MECHANISMS OF DETECTION

Scintillators

The most common nonelectronic methods for detecting ionizing radiation rely on the conversion of the ionizing radiation energy into visible light using a scintillating material. The resulting visible flux is then detected with photographic film, a photomultiplier tube, a photodiode, or some other optical detector. There are a variety of solid, liquid, and gaseous scintillating materials. Important characteristics in selecting a scintillating material include the stopping power of the material for the ionizing radiation of interest (usually a function of atomic number and density); the energy conversion efficiency, usually specified as the number of photons per unit of ionizing energy; and the speed of response, relating to the rise and fall times of the optical pulses produced for each ionizing event. The two major groups of scintillating materials are the organic and the inorganic. For organic scintillators, the fluorescence process arises from molecular electronic transitions, and thus the material can be a liquid, solid, or vapor. These molecular transitions are typically fast (a few nanoseconds) and have wavelengths shorter than 500 nm. These materials often have low energy conversion efficiencies because the processes that convert the ionizing radiation energy into light must compete with other, nonradiative electronic transitions. Many of these materials, being in the form of liquids or vapors, provide excellent flexibility in form, but at the cost of stopping power due to low density and low atomic number. Plastic scintillators made from solid scintillating polymers are also available in arbitrary shapes and sizes. These scintillators have relatively low light output, but they are easy to form into complex shapes and are particularly useful for certain neutron applications.
The energy conversion mechanism of inorganic scintillators, typically crystalline solids, is significantly different. In these cases, the secondary electrons are promoted in energy to the conduction band associated with the crystal lattice. Over a period of time (usually in the range of a few hundred nanoseconds to a few microseconds), the excited electrons lose energy by one of several mechanisms, resulting in heat and light generated in the scintillator. Light is generated when the excitation results in energy transfer to luminescent centers, which are often associated with impurities in the scintillator material. The light emitted can then be detected by an external detector. The efficiency of producing light is an important characteristic of a scintillator material: it can require from about 30 eV to hundreds of electronvolts to generate a single optical photon. The pulse shape of the resulting optical pulses is also very material dependent and can range from nanoseconds to seconds. Many of these materials exhibit emissions with multiple time constants, so it is not uncommon for 5% to 15% of the emitted light to have a time course hundreds of times longer than the primary light flash. The magnitude of this phenomenon, often referred to as afterglow, can be a determining factor in whether a scintillating material can be used for a particular application. Some of the more common scintillating materials are shown in Table 1.

Table 1. Properties of Scintillator Materials
Materials: NaI(Tl), CsI, CsI(Na), CsI(Tl), BGO, BaF2, CaF2(Eu), CdWO4, LiF(W), LiI(Eu), LuAP, LSO, GSO, GOS, PbSO4, CsF, CeF3, YAP, Anthracene, Plastic, Liquid
Atomic numbers, Z: 11, 53; 55, 53; 55, 53; 55, 53; 83, 32, 8; 56, 9; 20, 9; 48, 74, 8; 3, 9; 3, 53; 71, 13, 8; 71, 14, 8; 64, 14, 8; 64, 16, 8; 82, 16, 8; 55, 9; 58, 9; 39, 13, 8; organic; organic; organic
Density (g/cm³): 3.67, 4.51, 4.51, 4.51, 7.13, 4.88, 3.18, 7.90, 2.64, 4.08, 8.34, 7.4, 6.71, 7.34, 6.20, 4.64, 6.16, 5.55, 1.25, 0.9–1, 0.9–1
Emission wavelength (nm): 415, 315, 420, 550, 565, 480, 220/310, 435, 480, 430, 470, 485, 360, 420, 430, 510, 350, 390, 300/340, 447, 385–580, 385–430
Output (NaI = 100): 100, 6, 85, 110, 11, 2/15, 50, 38, 4, 35, 30, 63, 26, 50, 10, 4, 11, 46, 43, 10–25, 15–35
Decay time (μs): 0.23, 0.005, 0.63, 1.0, 0.6, 0.00008/0.6, 0.94, 1/10, 40, 1.4, 0.018, 0.04, 0.04/0.6, 3, 0.1, 0.005, 0.005/0.02, 0.024, 0.03, 0.001–0.02, 0.002–0.004
Index of refraction: 1.85, 1.95, 1.80, 1.84, 1.79, 2.15, 1.49, 1.44, 2.25, 1.4, 1.96, 1.82, 1.85, 1.88, 1.48, 1.68, 1.62, 1.4–1.5, 1.4–1.5
Hygroscopic: yes, slightly, slightly, slightly, no, no, no, no, no, yes, no, no, no, no, no, yes, no, no, no, no, no
BGO = Bi4Ge3O12; GOS = Gd2O2S; GSO = Gd2SiO5; LSO = Lu2SiO5; LuAP = LuAlO3; YAP = YAlO3.

When used as a spectrometer, the energy resolution that can be achieved with a scintillator depends on the characteristics of both the scintillator and the associated optical detector. Until recently, only photomultiplier tubes could provide the performance needed for most useful scintillation-based ionizing radiation detectors. These devices provide large diameters, high gain, and low noise with moderate power consumption. With these sensors, the limitations on the performance of the spectrometer as a whole are determined primarily by the properties of the scintillator. The variability in scintillator materials is such that "hand-picked" premium-grade samples can provide energy resolution twice as good as that of standard samples. More recently, solid-state optical detectors have been used in place of photomultipliers for several applications. These devices are attractive because they are much more compact than photomultipliers, are insensitive to magnetic fields, have
higher quantum efficiencies, and use much less power. However, they have inherently more noise than photomultipliers, as well as more stringent limitations on their maximum size. The most commonly used solid-state optical detectors in such applications are the low-noise silicon p–i–n diode and the high-gain silicon avalanche diode, both of which have been successfully applied with diameters of up to 1 in. Of these two, the avalanche diode has internal gain and a significantly better signal-to-noise ratio, but it is more expensive and requires a much higher operating bias voltage.

As the cost of sophisticated electronics has come down, there has been a corresponding increase in the use of scintillation detectors in the form of multielement arrays. Previously, the only widely used configurations of scintillation detector providing two-dimensional spatial resolution were the Anger camera and the single-probe scanner. In the first configuration, an array of 20 to 100 photomultipliers is attached to the back of a large disc of scintillator material, and the sums and ratios of the light detected by the sensors are used to compute the total energy and position of the incident ionizing radiation (most frequently a gamma ray from a medical isotope). The newer configurations consist of two-dimensional arrays of segmented scintillators with opaque separations between the segments and are now being used for position-sensitive detectors and for imaging. The details of the scintillator surface preparation influence the performance of these sensors, and several varieties of techniques are in current use. For example, the surface of the scintillation material that is attached to the optical detector is often highly polished and coupled with an index-matching fluid to improve light collection. The other surfaces of the scintillator, which are not in contact with the detector, are frequently covered by white, reflective materials, including typist correcting fluid, Teflon tape, and filter paper, to improve the collection and transfer of optical photons to the photosensor.

Film Screens and Other Films of Scintillating Materials. The first scintillators used to detect X rays over 100 years ago were in the form of thin films. This practice continues with the widespread use of film screens for radiography. Film screens are prepared by depositing a layer of a ground-up scintillating material in a binder. Until recently, these films were used primarily in conjunction with photographic film. There has been a dramatic increase in interest in coupling scintillator films to electronic imaging detectors. This has led to research on producing large-area thin films of scintillators with better spatial resolution than can be achieved with film screens, such as the production of CsI films made up of small columnar crystallites. These can be optically coupled to charge-coupled devices (CCDs) and amorphous silicon diode arrays and can achieve very high spatial resolution with higher X-ray detection efficiency than standard phosphor film screens.

Other Indirect Detection Techniques. Although the most common nonelectronic method of detecting ionizing radiation involves the conversion of the energy of the ionizing particles into light, there are other detection mechanisms that fall into this category. These methods involve the conversion of the energy of the electrons into heat or sound, or the use of that energy to induce chemical reactions or phase changes. In some techniques the signals are read immediately, but in many others the energy is stored for later readout. For example, the energy can be stored by trapping the secondary electrons in stable elevated energy states for later excitation by heat (thermoluminescent devices) or by light (phosphor image plates), with the resulting emitted light detected using optical sensors.
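The Anger-camera arithmetic described above, computing total energy and position from the sums and ratios of the photomultiplier signals, reduces to an energy-weighted centroid. A minimal sketch (the tube layout and signal values below are hypothetical, chosen only for illustration):

```python
def anger_position(tube_xy, signals):
    """Energy-weighted centroid of photomultiplier signals (Anger logic).
    Returns the summed signal (proportional to deposited energy) and the
    estimated (x, y) interaction position."""
    total = sum(signals)
    x = sum(s * xy[0] for s, xy in zip(signals, tube_xy)) / total
    y = sum(s * xy[1] for s, xy in zip(signals, tube_xy)) / total
    return total, (x, y)

# Hypothetical 2x2 tube layout and one event's light-sharing signals:
tubes = [(0, 0), (1, 0), (0, 1), (1, 1)]
event = [10.0, 30.0, 10.0, 30.0]        # more light collected on the x = 1 side
energy, (x, y) = anger_position(tubes, event)
print(energy, x, y)  # 80.0 0.75 0.5
```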
In materials with sufficient electron mobility, the electrons can also be transported to the surface of the detection medium and then detected by one of several external means. In fluoroscopes, the electrons are emitted from the inner surface of the medium (which can be similar to the CsI films described above) and accelerated in an electric field onto a secondary detection device that is sensitive to electrons, such as a phosphor screen, a CCD, or a microchannel plate coupled to a CCD. In an X-ray vidicon tube, an electron beam rasters across the surface of the medium and detects the areas of the medium that have absorbed energy, in exactly the same manner as a traditional video vidicon. In xeroradiography, the charges generated by X-ray absorption are detected and imaged in the same way that photocopies are made. A particularly sensitive method of detecting ionizing radiation relies on the use of superconductors. The energy released upon interaction breaks Cooper pairs in the superconductor, and the free electrons created can be detected. Because of the small amount of energy required to break a Cooper pair, these detectors can have extremely good energy resolution, better than 30 eV; however, they must operate at temperatures near absolute zero.

ELECTRONIC MECHANISMS OF DETECTION

The primary detectors used for the direct detection of radiation are ionization chambers, proportional counters, Geiger–
Mueller tubes, and a wide variety of semiconductor devices. This article emphasizes semiconductor detectors, since the other direct detection devices are discussed elsewhere in this encyclopedia and in many texts such as Knoll (1) and Tsoulfanidis (2); thus only a brief description of the other detectors is provided to contrast their capabilities with those of solid-state devices.

Gas Ionization Detectors

An ionization chamber is a gaseous detector in which the ions and electrons generated by an ionizing event are collected in an electric field. When the applied electric field is sufficiently strong to prevent the recombination of the electron–ion pairs, the signal produced is proportional to the amount of energy deposited. Since the signals are quite small, ionization chambers are typically used in the direct current (dc) mode, where the charges generated are integrated into a signal current that is proportional to the total energy deposition rate. Proportional chambers are gas-filled detectors in which the charge pulse detected is proportional to the energy deposited by the ionizing event. They are operated at a bias voltage high enough that the electrons generated accelerate in the electric field to sufficient energy to ionize additional atoms, creating charges that undergo the same process until there is an avalanche of charges. Under the proper conditions the gain is constant, and the signal is still proportional to the deposited energy. The charge gains that are achieved (10² to 10⁴) are sufficient that the signals are relatively large, and proportional counters are usually used in pulse mode as spectrometers. Geiger–Mueller counters (also called Geiger counters or G–M tubes) have been used since the late 1920s. They are also gas-filled tubes, but with much higher applied fields than ionization chambers or proportional counters. As with proportional chambers, the electrons generated accelerate in the electric field and create avalanches. In parallel, the excited gas molecules can return to their ground state by emission of an ultraviolet (UV) photon that can be absorbed at another location in the tube, setting off another avalanche. This series of avalanches produces gains of 10² to 10⁹ and results in a very large charge pulse. The pulses are not proportional to the amount of energy deposited by the ionizing radiation, so no energy information is available, and thus Geiger–Mueller tubes can be used only as counters. They are rugged, inexpensive, and reliable and are commonly used in survey meters and dosimeters.

SEMICONDUCTOR DETECTORS

In principle, semiconductor radiation detectors are the most attractive means of detecting ionizing radiation. They can be very sensitive, compact, stable, and rugged; they use little power, have low noise, and are insensitive to magnetic fields; and they provide exceptionally good information on the energy distribution of the incident radiation. However, practical limitations on their size, material uniformity, electrode structures, and cost have prevented them from replacing the more traditional sensors. In a semiconductor detector, the energy of the ionizing radiation is transferred to the charge carriers of the crystal lattice, resulting in a cloud of free charge which can be externally measured in the form of an electronic pulse. The magnitude of the charge cloud is directly proportional to the energy absorbed from the ionizing radiation, and the number of clouds is equal to the number of absorbed ionizing particles. Thus, the magnitude of the charge pulse measured by the external electronics is truly proportional to the energy deposited, and if the electronic noise from both the detector itself and the associated electronics is small, very high quality measurements can be made. The key properties affecting the performance of a semiconductor detector include its stopping power, size, sensitivity, gain stability, noise, energy resolution, operational requirements, susceptibility to radiation damage, and, if required, its ability to provide position information. The stopping power of a semiconductor depends on the nature of the incident radiation, the atomic number of its constituents, and the density. If the incoming radiation of interest consists of low-energy X rays or heavy ionizing particles, the energy transfer interactions all take place in the first few microns of the material, and only the area of the detector is critical in determining its sensitivity. If, however, the incoming radiation is of a more penetrating type, then the thickness (and hence the volume) of the detector becomes more of a consideration. Similarly, the atomic numbers of the constituents of the detector crystal and the density of the material help determine the size of crystal that must be used to achieve good detection efficiency. For penetrating radiation, the detection efficiency increases approximately linearly with the density and as the atomic number to the 4.5 power. The noise associated with a semiconductor detector arises from both the bulk material and the electrodes.
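The approximate scaling just stated, detection efficiency rising linearly with density and as the 4.5 power of atomic number, can be illustrated with a short sketch. The material constants below are taken from Table 2; the comparison is only as rough as the scaling law itself:

```python
def relative_photopeak_efficiency(z: float, density: float,
                                  z_ref: float, density_ref: float) -> float:
    """Relative detection efficiency for penetrating photons using the
    approximate scaling from the text: linear in density and as Z**4.5."""
    return (density / density_ref) * (z / z_ref) ** 4.5

# Ge (Z = 32, 5.33 g/cm^3) versus Si (Z = 14, 2.33 g/cm^3), from Table 2:
print(f"Ge/Si ~ {relative_photopeak_efficiency(32, 5.33, 14, 2.33):.0f}x")  # ~94x
```

This is why high-Z, high-density materials dominate gamma-ray applications even though Si is far easier to fabricate.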
The amount of this noise determines both (a) the lowest-energy ionizing particle that can be detected and (b) the precision with which the energy of such particles can be measured. The latter is referred to as the energy resolution and is a function of not only the noise, but also the magnitude and uniformity of the pulses produced throughout the volume of the detector. Oftentimes, the operational requirements of a detector determine its suitability for a particular application. For example, some semiconductor detectors can operate only under cryogenic conditions while others operate well at room temperature, but both types of detector will permanently degrade when exposed to even slightly elevated temperatures. Similarly, some semiconductor detectors can operate with moderately low operating voltages, while others require several thousand volts. The susceptibility of a detector to radiation damage depends on both the nature of the incident radiation and the material of the detector. In general, unless the flux levels are extremely high, X-ray and gamma-ray applications cause relatively little radiation damage, while high-energy particles cause substantially more. Applications involving neutrons are particularly challenging, because they are often associated with very high flux levels and can permanently damage many of the commonly used semiconductor materials. Since the usefulness of different semiconductor detectors is so dependent on the application, it is useful to first become familiar with the properties of semiconductor detectors before attempting to choose a detector for a specific requirement. Table 2 lists the relevant characteristics of materials used for fabricating radiation detectors. These characteristics form the basis for selecting a detector for a specific application. The key properties to be considered include stopping power and detection efficiency, sensitivity, energy resolution, noise, operational requirements, radiation damage, and position resolution for imaging applications.

Table 2. Properties of Semiconductor Materials Used for Radiation Detectors at 25°C
Materials: Ge, Si, CdTe, CdZnTe, CdSe, HgI2, GaAs, InI, Diamond, TlBr, PbI2, InP, ZnTe, a-Si, a-Se, CdS, SiC
Atomic numbers, Z: 32; 14; 48, 52; 48, 30, 52; 48, 34; 80, 53; 31, 33; 49, 53; 6; 81, 35; 82, 53; 49, 15; 30, 52; 14; 34; 48, 16; 14, 6
Density (g/cm³): 5.33, 2.33, 6.2, ≈6, 5.81, 6.4, 5.32, 5.31, 3.51, 7.56, 6.2, 4.78, 5.72, 2.3, 4.3, 4.82, 3.2
Bandgap (eV): 0.67, 1.12, 1.44, 1.5–2.2, 1.73, 2.13, 1.43, 2.01, 5.4, 2.68, 2.32, 1.35, 2.26, 1.8, 2.3, 2.5, 2.2
Melting point (°C): 958, 1412, 1092, 1092–1295, >1350, 250 (127)ᶜ, 1238, 351, 4027, 480, 402, 1057, 1295, 1477
E_pair (eV): 2.96, 3.62, 4.43, ≈5, 5.5ᵇ, 4.2, 4.2, 13.25, 6.5, 4.9, 4.2, 7.0ᵇ, 4, 7, 7.8ᵇ, 9.0ᵇ
Resistivity at 25°C (Ω·cm): 50, up to 10⁴, 10⁹, >10¹⁰, 10⁸, 10¹³, 10⁷, 10¹¹, 10¹², 10¹², 10⁷, 10¹⁰, 10¹², 10¹²
μτ(e) product (cm²/V)ᵃ: >1, >1, 3.3 × 10⁻³, 2 × 10⁻³, 7.2 × 10⁻⁴, 10⁻⁴, 8 × 10⁻⁵, 7 × 10⁻⁵, 2 × 10⁻⁵, 1.6 × 10⁻⁵, 8 × 10⁻⁶, 4.8 × 10⁻⁶, 1.4 × 10⁻⁶, 6.8 × 10⁻⁸, 5 × 10⁻⁹
μτ(h) product (cm²/V): >1, ≈1, 2 × 10⁻⁴, 1 × 10⁻⁵, 7.5 × 10⁻⁵, 4 × 10⁻⁵, 4 × 10⁻⁶, <1.6 × 10⁻⁵, 1.5 × 10⁻⁶, <1.5 × 10⁻⁵, 7 × 10⁻⁵, 2 × 10⁻⁸, 1.4 × 10⁻⁷
ᵃ Materials are listed in order of decreasing μτ(e) at room temperature. ᵇ Estimated. ᶜ Solid–solid phase transition.

Stopping Power and Detection Efficiency

As discussed above, the stopping power is a function of atomic number and density. A number of factors influence the detection efficiency in a detector. Not all of the radiation striking a detector is detected. As the atomic number or density increases, so does the efficiency; and at higher energies, where the photons can penetrate the detector, the efficiency increases with thickness. The ability to stop high-energy photons is proportional to the active area of the device and increases with thickness according to the familiar attenuation equation: I = I0 exp(−μt)
(4)
where I0 is the initial intensity of the radiation, I is the intensity after attenuation by the medium (i.e., the detector), μ is the linear attenuation coefficient in cm⁻¹, and t is the thickness in centimeters. The mass attenuation coefficient, in square centimeters per gram, is the linear coefficient divided by the density, μ/ρ. The amount of radiation stopped in the detector is simply I0 − I
(5)
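Equations (4) and (5) turn directly into a small efficiency estimate; the attenuation coefficient below is an assumed placeholder, not a tabulated value:

```python
import math

def fraction_absorbed(mu_cm_inv: float, thickness_cm: float) -> float:
    """Fraction of incident photons stopped in a detector of the given
    thickness, from I = I0*exp(-mu*t): absorbed = (I0 - I)/I0."""
    return 1.0 - math.exp(-mu_cm_inv * thickness_cm)

# Illustrative (assumed) linear attenuation coefficient at some photon energy:
mu = 0.4  # cm^-1, for demonstration only
for t in (0.5, 1.0, 2.0):   # candidate detector thicknesses, cm
    print(f"t = {t:3.1f} cm -> {fraction_absorbed(mu, t):.1%} absorbed")
```

Doubling the thickness does not double the efficiency; the returns diminish exponentially, which is one reason thickness trades off against the charge-collection and noise penalties discussed later.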
Tables of absorption coefficients are readily available for performing calculations of detector efficiency [such as in the Health Physics Handbook (3)], as is commercially available software. At low energies in real detectors, the measured count rate may be lower than calculated, since some counts may be lost in the noise or absorbed in surface layers or protective packaging. Heavy charged particles can typically penetrate only a fraction of a micron, so stopping power is usually not a factor; however, the ability of the particle to penetrate past inactive layers at the surface of the detector will limit the efficiency. All semiconductor materials have a "dead" layer at the surface. This dead layer results from damage that occurs during fabrication and from oxidation. In addition, the devices need some form of electrode and frequently a protective coating, both of which absorb energy. Sometimes it is possible to combine the electrode and the protective coating, as is done in Schottky barrier alpha-particle detectors, in which a thin layer of gold performs both functions.

Sensitivity

The sensitivity of a detector is the lower limit of detection in terms of energy and/or flux. Detector size affects sensitivity: as a detector gets larger, it will obviously interact with more radiation; however, as a general rule, noise and background effects increase with size. As the area of a detector gets larger, both the capacitance and the leakage current increase, and both of these factors decrease the signal-to-noise ratio. As a detector gets thicker, it stops more penetrating radiation and the capacitance decreases. However, charge collection effects such as ballistic deficit (where the amplifier integration time is not long enough to integrate the entire pulse) and trapping increase, as does the associated noise due to semiconductor generation–recombination effects (g–r noise).
These conflicting effects usually force the user to make trade-offs among the various parameters to optimize the performance for a specific application.

Energy Resolution

In many applications, energy resolution is an important property. The ability to resolve energy allows the user to identify
a source and also to separate the signal generated by the source, or the energy range of interest, from background or other unwanted radiation signals. The energy resolution of a detector is affected by noise, charge carrier properties, and detector uniformity. In an ideal detector, every signal pulse would be exactly proportional to the absorbed energy, and all photons of the same energy would generate identical signal pulses, within the statistics of charge generation. In a real detector, a number of factors influence the energy resolution that is attainable. The statistics of the charge generation process itself cause a broadening of the measured energy, electronic noise adds a Gaussian distribution to the photopeak, and charge collection effects and material nonuniformity add to this broadening in nonstatistical ways. Energy resolution is commonly expressed as the full width at half-maximum (FWHM) of a photopeak at a given energy, or as a percentage of that energy. The choice of detector type greatly affects the energy resolution attainable, with Ge detectors having better than 1% resolution (below 0.1% under favorable conditions) and other materials ranging from 2% to 10% depending on the energy.

Factors Affecting Device Performance of Semiconductor Detectors

Charge Carrier Properties. Sensitivity and energy resolution depend on the characteristics of the electronic signal generated in a detector and on electronic noise. The signal comes from the electronic charge generated by an ionizing event. In crystalline semiconductor materials, charges (electron–hole pairs) generated by ionizing interactions are promoted into the conduction band. The charges can be collected at electrodes on the surfaces of the device by applying an electric field across the device. Two of the most important characteristics of the electronic charge generated in a material are the charge carrier mobility (μ), in square centimeters per volt-second, and the charge carrier lifetime (τ), in seconds.
The product of the mobility and the lifetime, the "μτ product," is an important figure of merit for detector materials. A larger value of the μτ product results in lower losses from charge trapping, which in general permits the fabrication of larger detectors, since charges can be collected over greater distances. If either the electrons or the holes are not completely collected (as is the case in many compound semiconductor detectors), a larger μτ product results in a larger signal.

Resistivity and Noise. The other important electronic property is the resistivity of the material. Noise generated in the detector comes from the collection of electronic charge generated by means other than an ionizing event. A major component of this noise is the leakage current that flows through a semiconductor when it is under bias. Lower resistivity results in higher leakage current and more noise. The leakage current is proportional to the number of charge carriers present in the conduction band. The resistivity ρ of a semiconductor is given by the equation ρ = 1/neμ
(6)
where n is the density of charge carriers and e is the charge on the electron. The number of charge pairs is a function of temperature and bandgap. The probability (p) of thermally generating an electron–hole pair is given by:

p(T) ∝ T^3/2 exp(−Eg/2kT)
(8)
where T is the temperature in kelvin, Eg is the semiconductor bandgap, and k is Boltzmann's constant. Theoretically, the inherent resistivity increases with increasing bandgap and decreasing temperature. This dependence on temperature results in two broad classes of semiconductor detectors: room-temperature detectors and cooled detectors. Cooling a detector improves performance by decreasing the number of electrons in the conduction band, increasing the resistivity, and reducing the noise. In general, room-temperature detectors have bandgaps larger than 1 eV.

Statistics of Charge Generation. The statistical spread of charge generation may be described by a Poisson process, and because the average number of charges (N) is relatively large, it can be described by a Gaussian function G(H):

G(H) = [A/σ(2π)^1/2] exp[−(H − H0)²/(2σ²)]

(8)

where H0 is the centroid of the Gaussian (i.e., the average pulse height) and is proportional to N, so that H0 = KN with K a proportionality constant; A is the area of the peak; σ² is the variance; and σ is the standard deviation. For a Gaussian distribution, FWHM = 2.35σ. The resolution R due to the statistics of the charge generation is: R = FWHM/H0 = 2.35K(N)^1/2/KN = 2.35/(N)^1/2
(9)
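Equation (9), together with the Fano-factor correction discussed in the following paragraph (R = 2.35(F/N)^1/2), is easy to evaluate numerically. In the sketch below, E_pair for Ge is taken from Table 2, while the 662 keV line and the Fano factor of 0.1 are assumed illustrative values, not figures from this article:

```python
def statistical_resolution(energy_ev: float, epair_ev: float,
                           fano: float = 1.0) -> float:
    """FWHM energy resolution (as a fraction of the peak energy) from
    carrier statistics alone: R = 2.35 * sqrt(F / N), with N = E / E_pair."""
    n = energy_ev / epair_ev
    return 2.35 * (fano / n) ** 0.5

e_gamma = 662e3  # eV; an example gamma-ray energy (assumed for illustration)
print(f"Poisson limit (F = 1): {statistical_resolution(e_gamma, 2.96):.3%}")
print(f"With F = 0.1         : {statistical_resolution(e_gamma, 2.96, 0.1):.3%}")
```

Both values land well below 1%, consistent with the sub-1% Ge resolution quoted earlier; in practice electronic noise and charge-collection effects add to this statistical floor.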
In real detectors, the broadening due to this statistical spread can actually be lower than predicted, indicating that the processes by which charge carriers are produced are not independent events. This phenomenon is described by the Fano factor, F, an empirical scalar that accounts for the difference between the observed and the Poisson-predicted variance; it is always less than unity. Applying the Fano factor to Eq. (9) gives the resolution as 2.35(F/N)^1/2.

Noise. All detectors and detector electronics have noise associated with them. There are three main sources of noise originating in the detector and the amplifier used with the detector: shot noise due to fluctuations in the leakage current, thermally generated noise in the input amplifier, and 1/f noise generated by the detector itself and/or the amplifier. Shot noise (or parallel thermal noise) is given by: ENCp = (e/2)(τId/q)^1/2
(10)
where ENC is the equivalent electronic noise charge in electrons rms, e = 2.718 . . ., Id is the detector leakage current, q is the electronic charge, and τ is the integration time of the amplifier, which incorporates CR–RC (capacitance–resistance) filtering circuitry, common in radiation spectroscopy (4). The thermal series noise from the input FET (field-effect transistor) of the preamplifier is given by ENCs = (e/q)Ci[(kT/2τ)(0.7/gm)]^1/2
(11)
where Ci is the input capacitance of the preamplifier (the sum of the detector, gate, and stray capacitances), k is Boltzmann's constant, and gm is the transconductance of the input FET. The 1/f noise of the detector is given by ENCf = (e/q)Ci(Af)^1/2
(12)
where Af is the 1/f spectral noise density. These noise terms add in quadrature, and their sum likewise adds in quadrature to the statistical contributions described above. Figure 1 shows how these sources combine in a detector. Extensive discussions of noise in preamplifiers used with semiconductor detectors are available in the literature (e.g., see Refs. 4 and 5).

Charge Collection and Hole Tailing. After the charges are generated by an ionizing event, they need to be collected and detected. In direct detection devices, this is achieved by applying an electric bias voltage across the device. During collection, charges can be lost to trapping and recombination. Some of these effects are random, and some depend on the position of the interaction in the device. These effects decrease the total quantity of charge collected; they are not Gaussian and give rise to a "tail" of counts on the low-energy side of the spectrum obtained.

Nonuniformity. All materials exhibit some degree of nonuniformity. In semiconductors, for example, there are distributions of defect type and concentration, as well as of dopant concentration. For compound semiconductors, variations in composition are difficult to avoid. These variations can and do affect charge collection in the material.

Pulse Shape and Speed of Response

The collection of charges results in a signal pulse whose rise time, shape, and duration depend on the detector used. Pulses from detectors used in pulse mode can be characterized by their rise time and fall time. Typically, the rise time is a function of the detector type and design, while the fall time is determined by the characteristics of the detector and the readout circuitry. The speed of response is a measure of the
Figure 1. Contribution of various noise sources as a function of the measurement time (integration time) of the amplifier. At very short times (e.g., less than 0.1 μs for most semiconductors) the series noise dominates. As more of the signal is integrated (times over 5 μs for most semiconductors), the parallel noise dominates. These parameters differ for each semiconductor and at different temperatures, so the optimum operating point is usually determined experimentally by varying the integration time of the amplifier.
rise time of the detector, and the timing resolution is a measure of the ability to separate two sequential pulses in time. Time resolution is typically quoted as a full width at half maximum (FWHM) or a full width at tenth maximum (FWTM). The speed of response is a function of the capacitance of the device and the readout circuitry, the applied bias, and the mobility of the charge carriers. The ability to separate pulses determines the maximum count rate at which a detector can be used.

Radiation Damage

Another important consideration for detectors in many applications in research, medicine, and industry is their susceptibility to radiation damage. Radiation damage causes the size of the signal to change with exposure. Most materials can tolerate a large exposure before noticeable changes occur. The detrimental effects of radiation damage are due to defects created in the material. In both semiconductors and scintillators these defects trap charge and reduce the size of the resulting signal (6,7). The radiation dose at which such effects become significant varies with material and radiation type. Typically, fluences of 10^11 cm^-2 to 10^14 cm^-2 cause signal reduction. In semiconductors, the effect of damage becomes important when the concentration of radiation-induced defect sites in the bandgap approaches the same order of magnitude as the level of dopant added to the material or of other impurities and native defects. Typically, compound semiconductors are more radiation-resistant than elemental semiconductors because they normally contain higher concentrations of defects.

Fundamentals of Semiconductor Device Fabrication

Solid-state detectors are usually fabricated from ultrapure single crystals of the semiconductor material. A variety of methods are used to grow semiconductor materials for nuclear sensor applications (8). Si, Ge, GaAs, and InP are typically grown by the Czochralski method, although the float zone technique is also popular for Si growth (9).
In the Czochralski method, single crystals of the material are slowly pulled up out of the melt by touching the surface of the liquid with a seed crystal and raising it; as the material rises out of the pool of liquid, it cools and crystallizes. In float zone growth, a short zone of molten material is passed through a vertical ingot by moving a heater along the length of the material. Surface tension keeps the liquid in place, and the material melts and recrystallizes as the heater passes. The Bridgman crystal growth method is also used, especially in the early stages of development of a new material, because it is a relatively straightforward technique that can be implemented without a large capital investment. In this method, material is melted in a sealed ampoule in a furnace and slowly lowered out of the furnace. Vertical-zone and horizontal-zone melt growth techniques are used on a variety of materials, including cadmium telluride (CdTe) and lead iodide (PbI2). Solution growth techniques can also yield good results; the best CdTe available at the time of this article is grown by a vertical solution zone technique, the traveling heater method (THM) (10,11), which has also been used for other II–VI materials. The ternary material cadmium zinc telluride (CZT) is grown at high pressures by a process called high-pressure Bridgman (HPB) crystal growth (12).
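Zone techniques also purify the material as they grow it. As a hedged illustration (the formula is Pfann's classic single-pass zone-melting result, Ref. 9; the segregation coefficient and zone length below are illustrative, not from this article), the impurity concentration left in the recrystallized solid can be sketched as:

```python
import math

def zone_pass_profile(x, k=0.1, zone_len=1.0, c0=1.0):
    """Impurity concentration in the solid after one molten-zone pass
    (Pfann's single-pass result), at distance x from the starting end,
    valid up to the final zone length. k is the effective segregation
    coefficient; k < 1 means the impurity prefers the liquid, so the
    solid left behind is purified."""
    return c0 * (1.0 - (1.0 - k) * math.exp(-k * x / zone_len))

# Near the start of the ingot the solid retains only about k of the
# starting impurity level; the concentration rises toward c0 with depth.
profile = [zone_pass_profile(x) for x in (0.0, 2.0, 5.0, 10.0)]
```

Repeated passes compound the purification, which is how detector-grade material reaches the very low impurity levels discussed below.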
Vapor growth of single crystals has been investigated for some semiconductor materials used as radiation detectors, but only mercury iodide (HgI2) crystals are regularly grown from the vapor phase (13–15). HgI2 must be grown from the vapor because a solid–solid phase transition at 127°C destroys the quality of any crystals grown from the melt. In general, growth from the melt or from solution is preferred when possible, because vapor growth processes are generally much slower than liquid techniques. Thin films, however, are readily grown epitaxially on selected substrates from the vapor phase or the liquid phase. Most thin films of the materials of interest for nuclear detectors are presently grown by physical vapor deposition or by one of several modifications of the chemical vapor deposition (CVD) technique. Since the films are thin, the relatively slow growth rates from the vapor are not a detrimental factor, and CVD processes allow tight control over growth parameters and film stoichiometry.

A major part of any development effort on new materials is research to identify appropriate device fabrication procedures, such as etching recipes and workable electrode structures. Two generic types of devices are fabricated on the crystalline materials during the early stages of developing new materials: photoconductors and photodiodes. These devices have a relatively simple configuration: Parallel planar electrodes are vacuum evaporated, plated, or painted onto both surfaces of cut or cleaved wafers that have been cleaned, polished, and etched. The electrodes are selected from materials that form ohmic contacts for photoconductors and Schottky barrier contacts for diodes. As a material matures and its properties are better understood, more sophisticated electrode structures, diffused junctions, and specialized surface treatments are often used to modify and improve the performance.
Thin semiconductor films can also be tested by fabricating simple ohmic and diode structures; but for imaging sensors, device fabrication makes use of existing photolithographic technology to build arrays of sophisticated multilayer diodes, photodiodes, and transistors.

Materials Used in Semiconductor Detectors

Of the many semiconductor materials available, only three have been regularly used for commercial radiation detectors: Si, Ge, and CdTe (and its ternary alloy with zinc, Cd1-xZnxTe, or CZT), and only Si, CdTe, and CZT are used at room temperature. Germanium detectors must always be cooled, and to obtain the best performance from silicon detectors, some cooling is also needed.

Cooled Detectors. Whenever energy resolution is the highest priority, either Si or Ge is used at reduced temperature. They are attractive detector materials because of the very high μτ products of their electrons and holes. However, because of their relatively small bandgaps, thermally generated charges are a problem. The easiest solution is cooling the detector. Because Ge has a higher atomic number, Z, than Si (32 versus 14), has excellent charge-carrier properties, and is available in large sizes, high-purity germanium (HPGe) is usually the preferred detector for high-resolution gamma-ray spectroscopy. HPGe detectors must be cooled to cryogenic temperatures to obtain good energy resolution and to avoid damaging the crystal during operation. Thus an HPGe detector must be attached to a liquid nitrogen cryostat or else connected to multistage thermoelectric coolers. This adds considerably to system complexity and expense. Figure 2(a) shows the performance that can be obtained from an HPGe detector in comparison with that obtained using other detector technologies in Figs. 2(b) and 2(c). Cooled silicon detectors provide excellent performance for low-energy X-ray spectrometry. Cryogenically cooled and thermoelectrically cooled Si detectors can have energy resolutions better than 150 eV at 1 keV to 10 keV. Such detectors are commonly used in energy-dispersive X-ray analysis systems.

Germanium Detectors. Germanium detectors have been the most widely used detectors for high-resolution X-ray and γ-ray spectroscopy for over 30 years. HPGe detectors are currently used in applications as diverse as nuclear physics research, environmental monitoring, high-energy physics, materials science studies, geophysical exploration, health physics, and γ-ray astronomy. Early germanium detectors were fabricated by compensating p-type material (with impurity concentrations of about 10^13 cm^-3 to 10^14 cm^-3) using an interstitial donor (lithium) in order to produce material with lower carrier concentration (16). These were referred to as lithium-drifted germanium
and Ge(Li) (‘‘jelly’’) detectors. In the early 1970s, inherently pure single crystals of germanium became available (17). Such HPGe crystals achieved carrier concentrations as low as 10^10 cm^-3 (at 77 K) and resulted in a switch from Ge(Li) to HPGe for almost all germanium detector fabrication efforts. This was primarily because Ge(Li) detectors had to be kept cold (77 K) at all times, even when not in operation (to prevent decompensation), whereas HPGe detectors need to be cooled only during actual operation. As a result, almost all commercial suppliers of germanium detectors use HPGe material at present. Significant advances have been made during the last decade in high-purity germanium crystal growth, and as a result, large-volume crystals (both p-type and n-type) are routinely produced. A variety of germanium detectors are fabricated using such crystals, and the two popular geometries for HPGe detectors are the coaxial detectors and the planar detectors. The coaxial detectors are generally used for high-efficiency detection of X rays and γ rays with energies from a few kiloelectronvolts to about 10 MeV, while the planar detectors are used for high-resolution detection of lower-energy photons (a few kiloelectronvolts to about 200 keV). All HPGe detectors have a p+–i–n+ structure, where the intrinsic (i) region is the sensitive detection volume. In planar detectors, both the parallel-plate and wrap-around contact designs are used, as shown in Fig. 3. The wrap-around contact or LEGe design has the advantage of lower capacitance for a fixed detector area and thickness (as compared to the parallel-electrode design). In coaxial detectors, a closed coaxial structure is used, as shown in the figure (as opposed to an open coaxial structure, where the core is drilled through the entire detector thickness), because in closed coaxial detectors there is only one face with a junction, where the exposed germanium needs to be passivated.

Figure 2. Spectrum of the 122 keV and 136 keV gamma rays of 57Co. Spectrum (a) shows the results obtained using a cryogenically cooled Ge detector. The FWHM is about 1%. Spectrum (b) shows the results obtained using CdTe at room temperature. The FWHM is about 3%, but the effect of hole tailing is evident. The 136 keV photopeak is still well separated from the peak at 122 keV. Spectrum (c) was obtained using an APD coupled to a CsI scintillation crystal at room temperature. The resolution is dominated by the properties of the scintillator, and the FWHM is about 13%. The 136 keV photopeak cannot be resolved.
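The capacitance advantage of one electrode geometry over another can be sketched with the ideal parallel-plate and coaxial capacitor formulas. A minimal illustration, assuming a relative permittivity of about 16 for germanium; the dimensions are illustrative, not taken from this article:

```python
import math

EPS0 = 8.854e-12   # F/m, vacuum permittivity
EPS_GE = 16.0      # approximate relative permittivity of germanium

def planar_capacitance(area_cm2, thickness_cm):
    """Ideal parallel-plate capacitance of a fully depleted planar detector."""
    area_m2 = area_cm2 * 1e-4
    thickness_m = thickness_cm * 1e-2
    return EPS0 * EPS_GE * area_m2 / thickness_m   # farads

def coaxial_capacitance(length_cm, r_outer_cm, r_inner_cm):
    """Ideal capacitance of a cylindrical (coaxial) detector."""
    return (2 * math.pi * EPS0 * EPS_GE * length_cm * 1e-2
            / math.log(r_outer_cm / r_inner_cm))

# Illustrative planar Ge detector: 2 cm^2 area, 1 cm thick -> a few pF.
c_planar = planar_capacitance(2.0, 1.0)
```

Since the series noise grows with input capacitance [Eq. (11)], a geometry that lowers capacitance for the same active area and thickness, as the LEGe contact does, directly improves the noise floor.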
Figure 3. Schematic representation of various germanium detectors. The top figure shows a planar detector with parallel electrodes. The middle figures show modified planar or LEGe detectors with wrap-around outer junction contacts and reduced capacitance. The bottom figures show closed-end coaxial detectors, which are mainly used for detection of high-energy γ rays.
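The percentage resolutions quoted for spectra such as those in Figure 2 are FWHM values read off the recorded photopeak. A minimal numeric sketch (the Gaussian photopeak below is hypothetical test data, not from this article):

```python
import math

def fwhm(energies, counts):
    """Full width at half maximum of the largest peak in a spectrum,
    using linear interpolation at the half-maximum crossings."""
    peak = max(range(len(counts)), key=counts.__getitem__)
    half = counts[peak] / 2.0

    def crossing(index_range):
        prev = None
        for i in index_range:
            if prev is not None and (counts[prev] - half) * (counts[i] - half) <= 0:
                frac = (half - counts[prev]) / (counts[i] - counts[prev])
                return energies[prev] + frac * (energies[i] - energies[prev])
            prev = i
        return None

    low = crossing(range(peak, -1, -1))    # walk down-spectrum from the peak
    high = crossing(range(peak, len(counts)))  # walk up-spectrum from the peak
    return high - low

# Hypothetical Gaussian photopeak at 122 keV with sigma = 0.52 keV,
# i.e., FWHM = 2.3548 * sigma, about 1% as quoted for HPGe.
es = [100.0 + 0.05 * i for i in range(881)]
cs = [1000.0 * math.exp(-0.5 * ((e - 122.0) / 0.52) ** 2) for e in es]
width = fwhm(es, cs)
```

In practice the continuum under the peak must be subtracted first; the routine above assumes an isolated peak on a negligible background.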
Materials Used for Room-Temperature Semiconductor Detectors

Operating at room temperature dramatically increases the convenience and decreases the cost of operating a detector. Silicon is the semiconductor material most often used for ionizing particles; but for X rays and gamma rays, where room-temperature operation, small size, and high sensitivity are required, CdTe or CZT is often used. Other materials such as HgI2 and GaAs have been studied extensively for many years.

Cadmium Telluride (CdTe) and Cadmium Zinc Telluride (CZT) Detectors. CdTe is a very-high-atomic-number semiconductor and as such is very efficient at stopping X rays. For instance, at 60 keV (the average photon energy from a typical X-ray tube), 90% of X-ray photons stop in a 500 μm thickness of CdTe, as opposed to 3 mm of silicon. The more common application of CdTe detectors is as single-photon gamma-ray spectrometers. Single-photon CdTe spectrometers are constructed of very-high-resistivity CdTe with two nearly ohmic platinum electrodes applied to the faces of the CdTe crystal. These detectors are typically 2 mm to 4 mm thick and are operated at a bias of approximately 100 V. The resistivity of the CdTe crystals is made high by doping with a deep-level acceptor. The resistivity can be raised further by adding zinc to form the ternary CZT alloy.

Electronic Properties of CdTe and CZT. CdTe is a compound semiconductor with a bandgap of 1.44 eV. It has many attractive properties for use as a nuclear detector material. THM-grown material is usually doped with chlorine, which compensates the cadmium vacancies that are characteristic of CdTe grown at low pressure. Compensated material has a resistivity on the order of 10^9 Ω·cm. The μτ products typically obtained for CdTe are 3.5 × 10^-3 cm^2/V for electrons and 2 × 10^-4 cm^2/V for holes. Because the μτ product for holes is lower than that for electrons, it tends to have a more significant effect on the energy resolution.
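The influence of the μτ products on charge collection can be made concrete with the single-carrier Hecht relation, a standard model for the induced charge in a planar detector under a uniform field. A sketch using the μτ values quoted above; the detector thickness and bias are illustrative:

```python
import math

def hecht_cce(x_cm, d_cm, bias_v, mutau_e, mutau_h):
    """Charge-collection efficiency (Hecht relation) for an interaction at
    depth x from the cathode in a planar detector of thickness d under a
    uniform field E = V/d. Electrons drift d - x to the anode, holes drift
    x to the cathode; lambda = mu*tau*E is each carrier's drift length."""
    e_field = bias_v / d_cm
    lam_e = mutau_e * e_field
    lam_h = mutau_h * e_field
    eta_e = (lam_e / d_cm) * (1.0 - math.exp(-(d_cm - x_cm) / lam_e))
    eta_h = (lam_h / d_cm) * (1.0 - math.exp(-x_cm / lam_h))
    return eta_e + eta_h

# Illustrative 2 mm CdTe detector at 100 V with the mu-tau values above.
d, v = 0.2, 100.0
cce_front = hecht_cce(0.01, d, v, 3.5e-3, 2e-4)  # near the cathode: mostly electrons
cce_back = hecht_cce(0.19, d, v, 3.5e-3, 2e-4)   # near the anode: mostly holes
# cce_front > cce_back: deep interactions rely on the poorly collected
# holes, producing the low-energy tail seen in Fig. 2(b).
```

The spread of collection efficiencies over interaction depth is exactly the hole-tailing mechanism discussed below.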
The electronic properties of CZT depend on the fraction of Zn used. In the range of 5% to 20% zinc, the resistivity ranges from 10^10 Ω·cm to 10^12 Ω·cm. As the amount of Zn increases, however, the mobility of the holes decreases. For CZT with 5% Zn, the μτ products for electrons and holes are 2 × 10^-3 cm^2/V and 5 × 10^-5 cm^2/V, respectively.

Energy Resolution, Charge Trapping, and Hole Tailing. The effects of electronic noise and broadening due to charge collection interact, and the relative importance of each depends on the energy of the radiation. The use of RC filtering to optimize the signal-to-noise ratio can result in filtering parameters that do not permit all of the charges generated by an event to be collected during the amplifier integration time. At low energies (below about 100 keV) the photons stop near the surface of the detector, where the charges are collected uniformly, since all of the electrons travel about the same distance toward the positive electrode and all of the holes travel about the same distance to the negative electrode. Because of this, the width of the photopeak is dominated by electronic noise and is almost symmetrical. At higher energies the noise due to leakage current becomes less important because the signal is larger, but the effects of charge collection become more prominent. Charge-collection problems appear as an asymmetric broadening on the low-energy side of the photopeak. This is primarily because the μτ products of holes and electrons differ significantly and all of the charge may not be fully collected during the measurement
time of the amplifier. Typically, the μτ product of holes is significantly less than that of electrons. For compound semiconductors, charge-collection effects dominate the resolution at energies above a few hundred keV, and photopeaks can be quite broad and asymmetric. Figure 2(b) shows a spectrum obtained using CdTe. This low-energy broadening and low-energy plateau caused by poor charge collection is called ‘‘hole tailing.’’ When a gamma ray deposits energy in a semiconductor, it generates electron–hole pairs, with both the electrons and the holes participating in the charge collection. Since the mobility–lifetime product of the electrons is much higher than that of the holes, and since the distance each carrier must traverse varies with the depth of interaction, the collection time and thus the rise time depend upon the depth of interaction. In interactions that occur near the negatively biased front surface of the detector, the charge is mainly collected by electrons, with a very short collection time and high charge-collection efficiency. In interactions occurring deeper in the crystal, hole collection is more important, leading to slower rise times and lower charge-collection efficiency. This degrades both energy resolution and efficiency, since many of the events do not show up in the photopeak but in the low-energy tail, as can be seen in the spectrum in Fig. 2(b).

Room-Temperature Silicon Detectors

Silicon Diodes. Silicon is technologically the most mature semiconductor; it is readily available in high purity and is relatively inexpensive, and numerous preparation and fabrication technologies exist for making devices. Figure 4 shows a variety of silicon diode configurations. These features make silicon an attractive material for detectors. Its limitations are its low atomic number and relatively low resistivity at room temperature.
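The low-Z limitation can be quantified with the exponential attenuation law: the fraction of a photon beam stopped in a slab of thickness l is 1 − exp(−μl). As a sketch, the attenuation coefficient below is not tabulated data but the value implied by the 90%-in-500-μm CdTe figure quoted earlier:

```python
import math

def stopped_fraction(mu_per_cm, thickness_cm):
    """Fraction of a monoenergetic photon beam absorbed in a slab,
    from the exponential attenuation law 1 - exp(-mu * l)."""
    return 1.0 - math.exp(-mu_per_cm * thickness_cm)

def implied_mu(fraction, thickness_cm):
    """Linear attenuation coefficient implied by a quoted stopping
    fraction at a given thickness."""
    return -math.log(1.0 - fraction) / thickness_cm

# The text quotes 90% absorption of 60 keV photons in 500 um of CdTe;
# that corresponds to a linear attenuation coefficient of about 46 /cm.
mu_cdte = implied_mu(0.90, 0.05)

# With that coefficient, a 2 mm CdTe detector stops essentially everything.
frac_2mm = stopped_fraction(mu_cdte, 0.2)
```

The same arithmetic run with silicon's much smaller attenuation coefficient at these energies shows why millimeters of Si are needed where a fraction of a millimeter of CdTe suffices.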
There are applications where these factors do not preclude the use of silicon. For example, large-area silicon diodes are used as alpha-particle detectors, because alpha particles do not penetrate far and generally have high energies. A variety of diode structures are used on silicon. The formation of a good semiconductor barrier is important to reduce the leakage current. The most common diode structures are surface barrier diodes and p–i–n diodes. The p–i–n structure uses a relatively thick active region (100 μm to 500 μm) of very-high-resistivity silicon (1000 Ω·cm to 10,000 Ω·cm). This provides two significant advantages, the first of which is improved X-ray stopping power. More importantly, the thicker device has significantly lower capacitance than the surface barrier detector; and as seen in Eq. (11) above, the noise increases with increasing capacitance. Such devices are also attractive because of their very low cost (10 to 100 dollars).

Avalanche Photodiodes. At room temperature, low-energy performance is often limited by noise in the preamplifier. One method to overcome this problem is to design a device with internal gain; the avalanche photodiode (APD) is such a device, unique among solid-state radiation detectors in having internal gain. In its simplest form, an APD is a p–n junction formed in a silicon wafer, structured so that it may be operated near breakdown voltage under reverse bias. Compared to conventional solid-state sensors, relatively large output pulses are produced, along with an improved signal-to-noise ratio (18). APDs can be used directly to detect low-energy X rays and particles, or coupled to a scintillator for higher energies. Figure 2(c) shows a γ-ray spectrum taken using an APD coupled to a CsI scintillator. Very small APDs have been in routine use in the telecommunications industry for some years and have also been used in a variety of other applications such as optical decay measurement, time-domain reflectometry, and laser ranging. They are most often used for the detection of optical radiation, which was their original purpose. In recent years, research has led to improved devices, allowing application to a wide variety of nuclear spectroscopy tasks (19,20). In comparison to photomultiplier tubes, which are among the most commonly used low-light-level sensors, APDs are smaller, more rugged, and more stable; they have higher quantum efficiency and use much less power. They are far more sensitive and have inherently better signal-to-noise ratios than other semiconductor photosensors. In addition, they may be operated without cooling and are insensitive to magnetic fields and vibration. These detectors may be fabricated to directly sense both high- and low-energy beta particles with high efficiency, making them very attractive sensors for such instruments.

Figure 4. Various detector configurations used in silicon detector technology. (a) Simple planar structure used in Schottky diodes and photoconductors. (b) Simple pn diode. (c) A p–i–n diode, which provides a larger active volume and lower capacitance. (d) Avalanche photodiode (APD). (e) Silicon drift diode (SDD).

Figure 4(d) presents a schematic diagram of the cross section of an APD. It consists of several regions, including the
‘‘drift’’ region and the active junction, which contains the multiplication region. The ‘‘drift’’ region is typically 20 μm thick, the active region of the device is approximately 120 μm thick, and the multiplication region is less than 10 μm thick. Ionizing radiation, or light with an energy greater than the bandgap, that strikes the drift region generates free charge carriers, which then drift to the active region. No external field is present here, but a gentle gradient in dopant concentration causes a weak electric field, imparting a net drift of electrons toward the multiplication region. Since the dopant concentration in this region is relatively low, the carrier lifetimes are quite long, and efficient transport is easily attained with the high-quality silicon now available. Within the multiplication region, the number of charge carriers is amplified in accordance with the gain of the device. Electrons entering this region quickly attain velocities large enough to cause knock-on collisions with bound electrons in the lattice. This process frees additional electrons, which undergo new collisions. The multiplication process occurs many times, with the result that a single electron generates hundreds or thousands of free electrons, producing a significant net gain in the current. APD gains of up to 10,000 at room temperature are possible, and the signal is proportional to the gain. The noise has a more complicated dependence: For low values of gain, it is almost constant, but at high values it increases rapidly. There is therefore some optimum gain at which the maximum signal-to-noise ratio is obtained, and the signal-to-noise ratio at this optimum is much higher than is achievable in detectors without internal gain. The larger signal also relaxes the requirements on the subsequent electronics. APDs as Particle Detectors. 
Although avalanche diodes were initially investigated for use as large-area optical sensors, they are also sensitive to directly incident ionizing radiation, including X rays, alpha particles, and beta particles. The high signal-to-noise ratio due to the internal gain makes them particularly useful for detecting low-energy radiation. Higher-energy beta particles, including those from 90Sr, 35S, and 32P, are detected with very high efficiency. APDs can be used to detect the low-energy beta particles from 3H with about 50% efficiency. APDs can also be used to detect low-energy X rays, such as the 5.9 keV X rays from 55Fe. At 5.9 keV, an energy resolution of 550 eV is achievable. The noise level in APDs can be as low as about 200 eV, so noise does not limit sensitivity. APDs are sensitive to X rays over the range of about 1 keV to about 20 keV. Below 1 keV, most of the X rays are stopped in the front-surface dead layer; above 20 keV, the radiation begins to be too penetrating for high efficiency.

Silicon Strip Detectors. Silicon strip detectors and microstrip detectors are silicon devices which are fabricated using planar processing techniques and which can be made relatively large, with position sensitivity. They are used in high-energy physics experiments to determine the energy and position (track) of ionizing particles. One face of the device is fabricated with thin parallel strips, which provide one-dimensional position resolution on the order of a few micrometers.

Silicon Drift Detectors. Figure 4(e) shows the cross section of a silicon drift diode (SDD). As discussed above, lowering the capacitance of the device improves performance. The capacitance can be decreased significantly further using silicon drift diodes. This device uses a strip electrode structure similar to that of the microstrip detector, but strip electrodes are fabricated on both sides of the device. An electric field is created that forces the charges to drift laterally in the silicon. This relatively new Si device structure appears to hold great promise (21–23). The low capacitance of drift diodes results from the use of a very small anode in conjunction with a series of cathode strips held at varying bias voltages. The charges generated by an ionizing event drift laterally in the device until, close to the anode, they are collected. The high mobility and long lifetimes in silicon make it possible to collect these charges over devices many millimeters across. The capacitance is reduced to the point that performance is limited by the stray capacitance in the system. To address this limitation, device structures have been made with the first-stage FET fabricated on the silicon as part of the device structure.

New Detector Materials

Although no other materials have been studied as extensively as Si, Ge, CdTe, and HgI2, a considerable level of interest has recently developed in three materials: GaAs, TlBr, and PbI2 (24). GaAs has seemed like an ideal candidate for many years because it has a bandgap similar to that of CdTe but much higher electron mobility (8000 cm^2/V·s). However, the very short lifetimes limit the μτ product to below 10^-4 cm^2/V. GaAs has been considered for uses where very fast measurement times or very high intensities are encountered. TlBr and PbI2 are two of several materials that, like HgI2, have wider bandgaps than Si or CdTe. Neither material is as advanced as HgI2, but both can be grown from the melt and have high environmental and chemical stability. Some of the other materials that have been investigated are listed in Table 2. In addition to single-crystal detectors, several materials have been examined for use in radiographic imaging in polycrystalline or amorphous thin-film form.
For example, Se and CdS have been used in the photoconductive mode as xeroradiographic films, and thin films of a number of materials, including CdS, CdSe, and GaAs, have been used to fabricate arrays of diodes or transistors for digital imaging. Amorphous silicon (a-Si) device technology offers the promise of very-large-area imaging detectors for medical radiography. By itself, its stopping power is quite low, but it can be improved by adding an X-ray conversion layer. There is presently a great deal of interest in a-Se and PbI2 films for use in hybrid devices, in which a layer of high-Z conversion material is bonded to a readout device such as a CCD or an a-Si imaging device.
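The optimum-gain argument made for APDs above can be sketched numerically. A hedged illustration (not from this article): the signal grows linearly with gain M, the amplifier noise is gain-independent, and the multiplication noise grows through McIntyre's excess noise factor F(M) = kM + (2 − 1/M)(1 − k); all parameter values below are illustrative.

```python
def excess_noise_factor(gain, k=0.02):
    """McIntyre excess noise factor; k is the hole/electron ionization ratio."""
    return k * gain + (2.0 - 1.0 / gain) * (1.0 - k)

def snr(gain, n_signal=1000.0, amp_noise=500.0):
    """Signal-to-noise ratio: signal M*n over amplifier noise and
    multiplied shot noise added in quadrature (electron-count units)."""
    signal = gain * n_signal
    noise = (amp_noise ** 2
             + gain ** 2 * excess_noise_factor(gain) * n_signal) ** 0.5
    return signal / noise

# Scan gains: S/N first rises, as the signal outruns the fixed amplifier
# noise, then falls as the excess noise factor grows with gain.
best_gain = max(range(1, 2001), key=snr)
```

The existence of an interior maximum is the point: below it the preamplifier noise dominates, above it the multiplication noise does, matching the qualitative description in the APD section.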
SUMMARY Solid-state radiation detectors provide unique and diverse capabilities in many technological areas. Two factors have made this the fastest-growing segment of radiation detection technology. These are (i) the desire for digital radiography and (ii) the availability of low-cost, high-performance computers to handle the large amounts of data generated using a large number of solid-state detector elements. The limitations of existing technologies have made the search for new device structures (such as the drift diode) and new materials (such as PbI2 semiconductor films and LSO scintillators) very active areas of research and development which promise to advance the field at an unprecedented rate over the next decade.
BIBLIOGRAPHY

1. G. Knoll, Radiation Detection and Measurement, 2nd ed., New York: Wiley, 1989.
2. N. Tsoulfanidis, Measurement and Detection of Radiation, New York: Hemisphere, 1983.
3. B. Shleien and M. S. Terpilak (eds.), Health Physics Handbook, Olney, MD: Nucleon Lectern Assoc., 1984.
4. J. S. Iwanczyk and B. E. Patt, ‘‘Electronics for X-Ray and Gamma Ray Spectrometers,’’ in T. E. Schlesinger and R. B. James (eds.), Semiconductors for Room Temperature Nuclear Detector Applications, Semiconductors and Semimetals, Vol. 43, San Diego: Academic Press, 1995, Chap. 14.
5. A. J. Dabrowski et al., Nucl. Instrum. Methods, 212: 89, 1983.
6. S. Kronenberg and B. Erkila, IEEE Trans. Nucl. Sci., NS-32: 945, 1985.
7. J. Bruckner et al., IEEE Trans. Nucl. Sci., NS-38: 209, 1991.
8. G. J. Sloan and A. R. McGhie, Techniques in Chemistry, New York: Wiley-Interscience, 1988.
9. W. G. Pfann, Zone Melting, Huntington, NY: Krieger, 1978.
10. S. Brelant et al., Revue de Physique Appliquée (Strasbourg, France), 1977, p. 141.
11. F. V. Wald and G. Entine, Nucl. Instrum. Methods, 150: 13, 1978.
12. J. F. Butler, C. L. Lingren, and F. P. Doty, IEEE Trans. Nucl. Sci., NS-39: 129, 1992.
13. M. Schieber, W. F. Schnepple, and L. van den Berg, J. Cryst. Growth, 33: 125, 1976.
14. S. Faile et al., J. Cryst. Growth, 50: 752, 1980.
15. M. R. Squillante et al., in Nuclear Radiation Detector Materials, Mater. Res. Soc. Symp. Proc., Vol. 16, 1983, p. 199.
16. E. M. Pell, J. Appl. Phys., 31: 291, 1960.
17. J. Llacer, Nucl. Instrum. Methods, 98 (2): 259, 1972.
18. G. C. Huth, IEEE Trans. Nucl. Sci., NS-13: 36, 1966.
19. G. Reiff et al., in Nuclear Radiation Detector Materials, Mater. Res. Soc. Symp. Proc., Vol. 16, 1983, p. 131.
20. R. Farrell et al., Nucl. Instrum. Methods, A353: 176, 1994.
21. J. Kemmer et al., Nucl. Instrum. Methods, A253: 378, 1987.
22. B. S. Avset et al., IEEE Trans. Nucl. Sci., NS-36: 295, 1989.
23. F. Olschner et al., Proc. SPIE 1734, Gamma-Ray Detectors, 1992, p. 232.
24. M. R. Squillante and K. Shah, ‘‘Other Materials: Status and Prospects,’’ in T. E. Schlesinger and R. B. James (eds.), Semiconductors for Room Temperature Nuclear Detector Applications, Semiconductors and Semimetals, Vol. 43, San Diego: Academic Press, 1995, Chap. 12.
GERALD ENTINE MICHAEL R. SQUILLANTE Radiation Monitoring Devices, Inc.
Wiley Encyclopedia of Electrical and Electronics Engineering

Radiation Monitoring

Standard Article

Clinton L. Lingren (San Diego, CA), Daniel Weis (Escondido, CA), George L. Bleher (Encinitas, CA), and Donald P. Giegler (La Jolla, CA), Individual Consultants

Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.

DOI: 10.1002/047134608X.W5215

Article Online Posting Date: December 27, 1999
Abstract. The sections in this article are: Radiation Monitoring in Nuclear Power Plants; System Overview; Sources of Radioactive Material; Units of Measure Important to Radiation Monitoring; Typical Detectors and Monitor Types; Monitor-Performance Parameters.
RADIATION MONITORING
With the nuclear age came a need to monitor radiation levels for the protection of workers and the population, and monitoring instruments to fill this need were first developed for projects in government and university laboratories. Early radiation monitors built in the 1950s were based on vacuum-tube electronics, later modified to use industrialized components that would withstand the environment of nuclear power plants. The cumbersome vacuum-tube instruments generated a great deal of heat and required large, forced-air-cooled cabinets to house the readouts in the reactor control room. During the late 1950s and early 1960s, vacuum-tube-based electronics were replaced with transistor-based, solid-state electronics that reduced the space required for the control-room instruments. Each radiation detector had its own channel of electronics, including signal-conditioning circuitry, control-room readout, and associated power supplies. For a typical channel, the detector with a small amount of signal-conditioning electronics was located at the point of detection, and the balance of the channel was mounted in or near the reactor control room. These analog radiation-monitoring systems required a separate indicating system in the control room for each detection channel, with separate signal, high-voltage, low-voltage, and control-logic cables for each channel between the control room and the detector. The result was miles of multiconductor and shielded cabling in a typical plant to provide the required radiation indication at the control room. In the late 1970s and early 1980s distributed microprocessors were developed, replacing the early analog systems with digital logic. These microprocessor-based monitoring systems replaced the miles of multiconductor cables with single twisted-pair cables in a loop or star interconnecting configuration and replaced analog indicating instruments with a computer display.
Radiation is always around us: we are constantly bombarded by subatomic particles and electromagnetic rays. Sources of radiation from above include our sun and solar system as well as the rest of the vast universe; sources surrounding us include the soil and rocks of the earth and the plants, animals, and people near us, as well as materials derived from them. Radiation includes both ionizing radiation, in the form of X rays, gamma rays, alpha particles, beta particles, neutrons, protons, cosmic rays, and so on, and nonionizing radiation, the lower-energy portion of the electromagnetic spectrum, including electrical power in our homes, radio and video waves broadcast via cable or transmitted by air, visible light, and the infrared energy emitted by bodies according to their temperature. Radiation monitoring is concerned with measuring and monitoring ionizing radiation.

Little is known about the harmful effects of ionizing radiation at levels typically encountered in our environment (natural background levels). Effects of high doses of ionizing radiation have been documented from incidents such as the atomic bombs dropped on Japan, the reactor accident at Chernobyl, and observation of the effects of therapeutic uses of radiation. It is generally assumed that all radiation is harmful and that people should receive the minimum radiation exposure for what needs to be accomplished. Naturally occurring radiation at historically typical background levels serves as the reference from which allowable additional radiation-exposure limits are set. Medical diagnostic radiation is the primary source of increased radiation over background for most people. Radiation dose levels that cause death immediately or within a few weeks are well established. Radiation-induced malignant tumors have been noted since the earliest use of X rays and other forms of ionizing radiation. Hypothetical increased-risk forecasts for low-level radiation doses are based on linear or quadratic extrapolation of limited data in which relatively large populations received very high doses. However, studies have not yet validated those models, and risk from low-level radiation may be lower than is generally postulated.

Background radiation levels differ from place to place on the earth due to elevation, the makeup of the soil, and related factors. For example, background radiation in Denver, Colorado is about twice that of San Diego, primarily because the higher elevation leaves less air mass to absorb radiation from space, such as cosmic rays. People are typically exposed to higher background radiation in masonry buildings than in wood structures because of naturally occurring radioactivity in the materials in cement, brick, or rock. The human body contains radioactivity in the form of naturally occurring potassium-40, carbon-14, and other radioactive isotopes. According to current government regulations, if a laboratory animal were injected with the amount of radioactivity in an average person's body, it would be considered radioactive (see Reading List).

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
Commonly encountered radioactive objects include glow-in-the-dark radium watch dials made during the first half of the 20th century, thorium-oxide-coated gas-lamp mantles, smoke detectors containing about 1 microcurie of americium-241, fluorescent-lamp starters containing a minute amount of krypton-85, porcelain tooth caps colored with uranium-containing metal oxides to give an improved reflective appearance, radon gas in tightly built homes over radium-containing soil or rock and in water supplies, and potassium-containing fertilizers. Though these may give off ionizing radiation at levels that would require careful accountability in a laboratory or industrial environment under state or federal regulations, they are not usually considered hazardous and are not required to be monitored.

In today's society there are many areas where the use of ionizing radiation must be monitored according to state or federal government regulations to ensure that radiation does not pose a hazard to personnel or to the environment. These areas include radiation for instructional use in schools and universities, medicine, industrial gauging, sterilization of medical supplies or food, mining and milling of radioactive ores, steel mills, space flight, fusion facilities, and nuclear reactors. The use of radiation sources at schools and universities may range from simple isotopic sources for demonstration purposes, small enough to be exempt from regulations, to fully operational nuclear test reactors where reactor physics is taught and isotopic sources are produced. The use of ionizing radiation in medicine includes electron beams, X-ray sources, and gamma-ray sources for transmission imaging or for therapy, and gamma-ray sources tagged to pharmaceuticals and injected into the patient for determining organ function in nuclear medicine. The types of sources include isotopic sources, X-ray tubes, and linear accelerators. X-ray tubes and linear accelerators generate ionizing radiation
only when power is properly applied; isotopic sources, however, always emit radiation as a result of the natural decay of the radioactive isotope. The physicians or medical technologists are responsible for properly administering radiation to patients; they use radiation-monitoring devices to ensure that source strengths and photon energies are proper. Those who handle the isotopic sources must not be inadvertently exposed and receive doses above allowable limits.

Many industries use radiation sources for making routine measurements, such as material density, fill height of beverage containers, or material thickness. Radiation is also used to sterilize some medical supplies after the packaging is sealed. A typical radiation source is cobalt-60, with a half-life of just over 5.3 years and a photon energy of just over a million electron volts; a typical sterilization dose is a million rads. Some foods are also exposed to gamma radiation to kill bacteria and prevent spoilage. Some seeds or bulbs are irradiated with lower doses to enhance growth and increase production. Some gemstones are irradiated to enhance color and brilliance. These industries and radiation facilities use radiation monitoring to calibrate and control exposure doses and to monitor personnel exposure.

Increased incidence of lung cancer among underground miners exposed to radon and radon daughters in their occupations has been demonstrated in epidemiological studies of the inhalation of radon gas and its effects on the lung epithelium. Efficient control of radon and radon daughters in underground mines has been difficult. It is the role of radiation monitoring to identify radiation exposure and assist in its control in underground mines (both uranium mines and nonuranium mines) where radon gases may be present.
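The decay arithmetic behind isotopic sources like the cobalt-60 mentioned above (half-life about 5.3 years) follows the exponential-decay law A(t) = A0 · 2^(−t/T½). A minimal sketch; the initial activity and elapsed times below are illustrative values, not figures from this article:

```python
# Exponential decay of an isotopic source: A(t) = A0 * 2**(-t / half_life).
def remaining_activity(a0, half_life, elapsed):
    """Activity remaining after `elapsed` time, with half-life in the same units."""
    return a0 * 2 ** (-elapsed / half_life)

# Illustrative: a 1000 Ci cobalt-60 source (half-life ~5.3 years) retains
# half its activity after one half-life and a quarter after two.
print(remaining_activity(1000.0, 5.3, 5.3))   # ~500 Ci
print(remaining_activity(1000.0, 5.3, 10.6))  # ~250 Ci
```

The same relation governs how often a sterilization facility must recalibrate exposure times as its source decays.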
The need for radiation monitoring in steel mills is fairly recent and is the result of 49 known incidents since 1973 in which companies inadvertently melted shielded radioactive sources, typically cobalt-60 or cesium-137. These incidents have not caused worker injuries but have resulted in economic harm to the companies, with costs typically ranging from $5 million to $25 million per accidental melt of a radioactive source. These costs include loss of the melt, facility decontamination, and shutdown of steel production. The sources generally had been lost from licensees that had used them for industrial applications. It is also important to identify radioactively contaminated scrap, such as metal activated in a nuclear facility or contaminated in a melt from a lost source. Finding a shielded source in a carload or truckload of scrap is very difficult.

Radiation monitors and detectors are placed on spacecraft for several purposes. Measuring secondary emissions from the surface of planets or moons, induced by the absorption of protons or neutrons, can identify the elements on that surface. Monitoring the levels of radiation impinging on the spacecraft can provide information for predicting effects on the materials in the spacecraft and can be used to turn off sensitive electronic components during times of unexpectedly high radiation exposure, where radiation damage occurs when the components are powered.

Atomic fusion is another source of radiation; fusion combines light elements to create elements of greater atomic weight, neutrons, and excess energy. The neutrons are eventually absorbed by the surrounding materials, typically resulting in radioactive isotopes. Radiation monitoring is required to measure levels of radiation during facility operation as well as radiation levels from neutron activation.

In the fission process in nuclear test reactors and nuclear power reactors, heavy atoms, such as uranium-235, absorb a neutron and split into fission products; the process emits gamma rays, neutrons, alpha particles, beta particles, and lighter elements that are typically radioactive. Radioactive materials result from the fission process, as well as from activation by absorption of neutrons or other atomic particles. Radiation monitoring in nuclear power plants is discussed in detail later in this article.

Any of the facilities discussed above that has one or more licensable radiation sources must have instruments that can measure the radiation from the source(s) and must provide personnel-monitoring devices, such as film badges, ring badges, or thermoluminescent dosimeters (TLDs), to monitor the dose received by personnel who work where they may be exposed to radiation from the source(s) exceeding 10% of federally established limits. A personnel dosimeter badge is worn only by the individual to whom it is issued and, when not being worn, is stored where it will not be exposed to radiation. The badge provides a record of radiation exposure for evaluating potential adverse effects and for ensuring that no worker exceeds established limits, such as the annual total-body effective dose equivalent limit of 5 rem. Federally established limits in the United States of America can be found in the US Code of Federal Regulations, 10 CFR 20.
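The badge-record bookkeeping just described reduces to summing recorded doses and comparing against the annual limit (5 rem, as cited above). A hedged sketch; the badge readings are made-up illustrative values:

```python
ANNUAL_LIMIT_REM = 5.0  # annual total-body effective dose equivalent limit (10 CFR 20)

def exceeds_annual_limit(doses_rem):
    """True if the summed badge readings (in rem) exceed the annual limit."""
    return sum(doses_rem) > ANNUAL_LIMIT_REM

# Illustrative quarterly badge readings, in rem:
readings = [1.0, 1.5, 0.5, 1.0]
print(sum(readings), exceeds_annual_limit(readings))  # 4.0 False
```

A real dosimetry program tracks far more (deep vs. shallow dose, extremity dose, lifetime records), but the limit comparison itself is this simple.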
RADIATION MONITORING IN NUCLEAR POWER PLANTS

Radiation-monitoring (RM) systems are installed in nuclear power plants to satisfy U.S. Nuclear Regulatory Commission (USNRC) regulations and plant operating license requirements. The objective of those requirements is to protect both personnel and the environment from the effects of ionizing radiation. An installed system measures, displays, and records the presence and level of radiation and alerts plant personnel to excessive levels of radioactivity; control actions are initiated automatically for required functions when levels exceed their limits.

Monitoring radiation in nuclear power plants is often divided into categories according to application. Typical categories include area monitoring (1) for determining radiation levels in areas where personnel may be working or may have a need to enter; process monitoring (2,3) for determining radiation levels in processes in the plant; effluent monitoring (4) for determining amounts of radiation leaving the plant through any pathway; and perimeter monitoring for identifying any increase in radiation level at the perimeter of a plant. Both process monitors and effluent monitors can be further separated into two categories according to whether they monitor a gaseous stream or a liquid stream. The instruments can be grouped according to design as area γ-ray monitors, liquid monitors, and atmospheric gaseous, particulate, or iodine monitors.

A typical radiation-monitoring instrument has many functions to perform: it may detect, display, and record levels of radiation in the plant and provide alarms when selected radiation levels are exceeded; it may monitor process flow lines for detecting radioactive leakage; it may monitor effluent for recording radioactivity levels and inhibiting excessive releases from the plant; it may provide signals for control functions in other systems; it may provide samples for analysis for complying with USNRC Regulatory Guide 1.21; and it may provide postaccident monitoring in accordance with the requirements of NUREG 0737.

Electrical equipment in nuclear power plants is separated into safety categories according to the functions performed, in order to establish quality requirements for procurement, installation, operation, and maintenance. The typical categories are safety related and nonsafety related. The safety-related category implies that the equipment is essential to ensure the integrity of the reactor-coolant pressure boundary, the capability to shut down the reactor and maintain it in a safe shutdown condition, or the capability to prevent or mitigate the consequences of accidents that could result in potentially major offsite exposures to the public. Safety-related equipment has the highest quality requirements and must not cease to perform its functions when any single credible failure occurs in the equipment. Other categories are introduced for postaccident monitoring equipment that must operate following a design basis accident. (The design basis accident for a nuclear power plant is the worst credible accident postulated for the plant for the purpose of evaluating risks and potential hazards associated with siting the plant.) Most radiation-monitoring instrumentation is typically classed as nonsafety related, with some specific instruments identified either as safety related or as postaccident monitors (5) that may have quality requirements similar to safety-related equipment. Requirements for safety-related equipment typically include demonstrated performance under normal and extreme service conditions and installation of redundant channels that are powered from redundant, safety-related power sources.
Each channel may also be required to have demonstrated capability of performing its function under design basis service conditions following a design basis earthquake. The USNRC has established the Standard Review Plan (SRP) (6) as a guide for reviewing designs of nuclear power plants against requirements, including a review of radiation-monitoring systems. An owner of a nuclear power plant provides a safety analysis review to respond to all points of the SRP. The installed RM system must meet the commitments made in the final safety analysis review and the requirements of documents referenced therein.
SYSTEM OVERVIEW

A nuclear power plant must have radiation monitors installed at strategic locations throughout the plant for monitoring radiation levels in order to meet regulatory requirements. In addition, laboratory instruments are used for analyzing collected samples of liquids or gases, and portable or hand-held instruments are used for making surveys. Chemistry or health physics group members typically observe and record continuous radiation-monitor channels, but they typically rely on collecting and analyzing samples of effluents for final determination of the offsite dose, and likewise they rely on portable instruments for confirming radiation levels where people are working. Effluent samples are analyzed in laboratory instruments that measure ionizing radiation with excellent energy resolution for identifying the radioactive isotopes contained in the samples. These analyzers typically employ cryogenically cooled germanium detectors.

The descriptions of radiation-monitoring instruments contained herein relate to instruments installed in nuclear power plants for continuous monitoring and do not include laboratory or hand-held instruments. The installed systems typically monitor magnitudes of radiation and initiate alarms when radiation levels approach established set points; however, because of the need to operate continuously in the plant environment, the energy resolution of the detectors is much poorer than what is available with laboratory instruments.

Analog RM systems usually have a detector, with a small amount of signal-conditioning electronics, installed at each location where radiation levels are to be monitored; the detectors are connected by long instrumentation cables to a cabinet at the reactor control room in which are installed signal-conditioning electronics, alarms, readout devices, and power supplies associated with each detector. Digital RM systems are typically distributed computer systems that include a microcomputer and required power supplies located at or near each detector location; a communication cable connects each location to a central computer at the control room and at a health physics office or other location where the information is to be used. The communication cable is usually a simple twisted-shielded-pair cable.

Typical locations in a pressurized water reactor (PWR) nuclear power plant where area monitors may be installed include the control room, radiochemical laboratory, hot machine shop, sampling room, reactor-building personnel access, refueling bridge, in-core instrumentation area, fuel storage area, auxiliary-building-demineralizer area, waste-gas-decay-tank area, drumming area, waste-holdup-tank area, charging-pump area, turbine-building area, and main steam lines.
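The set-point alarm behavior described above (compare each channel's reading against its established set point and flag any excess) can be sketched as follows; the channel names, readings, and set points are invented for illustration, not taken from any plant:

```python
# Minimal sketch of a radiation-monitor alarm check across installed channels.
# All names and numbers below are illustrative.
CHANNELS = {
    "containment_area":  {"reading_mr_per_h": 2.4,  "setpoint_mr_per_h": 10.0},
    "fuel_storage_area": {"reading_mr_per_h": 12.7, "setpoint_mr_per_h": 10.0},
}

def alarming_channels(channels):
    """Return the names of channels whose reading meets or exceeds the set point."""
    return [name for name, ch in channels.items()
            if ch["reading_mr_per_h"] >= ch["setpoint_mr_per_h"]]

print(alarming_channels(CHANNELS))  # ['fuel_storage_area']
```

In a real digital RM system this comparison runs in the microcomputer at each detector location, with the result reported over the communication loop to the central computer.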
Typical monitoring points for gaseous process monitors include reactor-building-containment area for airborne gaseous and particulate monitoring, radioactive-waste-disposal-area vent, waste-gas header, fuel-handling-area vent, and control room for airborne-gaseous and -particulate monitoring. Monitoring points for liquid process monitors may include chemical- and volume-control-system-letdown line, radioactive-waste-condensate return, component cooling water, normal-sample-laboratory isolation, and main steam lines. Gaseous effluent monitoring may be performed in the plant vent stack, in the condenser air ejector, and in the containment purge stack. Liquid-effluent monitors are typically installed in the radioactive-waste discharge line, in the neutralization sump discharge line, in the turbine plant area sump, and in the steam-generator blowdown. The locations listed above are typical for a PWR plant but may be different in each plant.
SOURCES OF RADIOACTIVE MATERIAL

The primary sources of radioactive material in light-water-cooled nuclear power plants are the fission process in the reactor core and neutron activation. The fission process, the splitting of uranium atoms, emits neutrons, γ rays, and β particles directly and creates radioactive elements in the fuel-pellet regions. Those radioactive materials then decay primarily by emission of γ and β radiation.

Neutron activation occurs whenever any atom absorbs one of the neutrons that is emitted from a splitting uranium
atom. The absorbing atom gains atomic weight, thus becoming a new isotope of the same element, which may be unstable or radioactive and decay to a more stable state by emitting radiation. Neutron activation creates radioactive material in the fuel-pellet region; in the reactor-coolant region; in support structures, the reactor vessel, and piping; and in the regions surrounding the reactor vessel, including the air in that vicinity. In addition to neutron activation, some atoms may be activated by particle radiation emitted from fission products or from neutron-activated elements.

The plant is designed to keep radioactive material contained; however, if systems become unsealed, some radioactive material may leak into the coolant through the fuel cladding, then from the reactor coolant through the pressure boundary or from coolant-purification and radioactive-waste processing systems into secondary systems. It is this leakage that radiation-monitoring systems are expected to detect during normal reactor operation. γ rays and β particles are the forms of radiation that are most readily detected.

The concentration and quantities of radioactive material in various regions of the plant depend on the balance among production, leakage, and removal of individual isotopes. In the fuel-pellet region, production processes include fission-product production directly from fissioning uranium atoms, parent-fission-product decay, and neutron activation. Removal processes include decay, neutron activation, and leakage through cladding defects into the coolant. In the coolant region, production processes include (1) leakage of fission and activation products from the fuel-pellet region and from fuel cladding and core structures, (2) parent decay in the coolant, and (3) neutron activation in the coolant materials. Removal is by decay, by coolant purification, by feed and bleed operations, and by leakage.
The most abundant isotopes in the coolant are radioactive noble gases (85Kr, 133Xe, and 135Xe during normal operation) and radioactive halogens (in particular 131I). Neutron activation leads to two isotopes of particular interest, 16N and 14C. 16N is produced by a neutron-proton reaction with 16O and decays with high-energy γ-ray emission at 6.1 and 7.1 MeV. The half-life of 16N is 7.3 s. While there is substantial decay of 16N as it exits from the core and passes through the turbine, it must still be considered in the design of the turbine shielding for boiling water reactors (BWRs). The detection of 16N in the secondary side of PWRs may be used to monitor changes in steam-generator leakage. 14C is produced by neutron activation of 17O and 14N.

The principal generation of tritium (3H) is from fission and from neutron interaction with boron, lithium, and deuterium. The main leakage source is fission tritium released through fuel-cladding defects. Tritium produced in the coolant contributes directly to the tritium inventory, while tritium produced in control-element assemblies contributes only by leakage and corrosion.

Activation and corrosion of reactor core support structures produce corrosion products, forming a radioactive material commonly referred to as radioactive crud. Corrosion-product constituents are typically 60Co, 58Co, 54Mn, 51Cr, 59Fe, and 95Zr.

Leakage Sources

Any system containing radioactive materials in liquid form is a potential source of radioactive leakage, and radioactive leakage into the reactor-containment area comes from the reactor-coolant system and coolant-purification systems. Leakage from systems containing potentially radioactive liquids is collected and processed by liquid radioactive-waste systems. Noble gases that are dissolved in liquid leakage may come out of solution and into the local atmosphere.

Radioactive material can be released into effluents from secondary systems due to leakage. For PWRs, the amount of release depends on reactor-coolant radioactive material concentrations, reactor-coolant leakage rate, primary-to-secondary leakage rate, steam-generator blowdown rate, and secondary-system leakage rates. Abnormal leakage from the fuel region to the reactor coolant is commonly detected by monitoring the reactor-coolant-system (RCS) letdown stream, either continuously or on a sampling basis.

In PWRs, reactor coolant remains liquid under pressurization in a primary coolant loop and transfers its heat through a heat exchanger to a secondary coolant loop that is converted to steam in the steam generator. The secondary system is typically monitored on the steam-generator-blowdown, component-cooling-water, and liquid-radioactive-waste-processing systems to check for leakage from the primary to the secondary coolant loop. In BWRs, reactor coolant is converted directly to steam for use in the steam turbine. γ radiation levels external to the main steam lines are monitored to detect increased levels of radiation in the reactor coolant that may indicate problems such as significant fuel-cladding failure. A fuel-cladding failure would allow fission products, particularly noble gases, to be transported to the steam lines, which could cause the radiation level external to the steam lines to be well above normal background levels. In both PWRs and BWRs, condenser exhaust is monitored.
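The 16N considerations above are driven by its 7.3 s half-life: the fraction of 16N activity surviving a transit time t is 2^(−t/7.3). A short sketch; the transit times are hypothetical values chosen only to show the scale of the effect:

```python
# Fraction of nitrogen-16 activity surviving transit from core to steam lines.
N16_HALF_LIFE_S = 7.3  # half-life of 16N, in seconds

def surviving_fraction(transit_s, half_life_s=N16_HALF_LIFE_S):
    """Fraction of activity remaining after `transit_s` seconds."""
    return 2 ** (-transit_s / half_life_s)

# Illustrative: after one half-life (7.3 s) half the activity remains;
# after four half-lives (29.2 s) only about 6% remains.
print(surviving_fraction(7.3))
print(surviving_fraction(29.2))
```

This is why 16N matters for BWR turbine shielding (short steam transit) yet decays substantially where transit times are longer.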
Reactor-Coolant-System Leakage Detection

An increase in reactor-coolant-system leakage rate in a nuclear power plant of 1 gal/min must be identifiable within 1 h. USNRC Regulatory Guide 1.45, Reactor Coolant Pressure Boundary Leakage Detection System, outlines the means required for monitoring RCS leakage and indicates that this function must also be provided following a design basis earthquake. Three required means of monitoring leakage rate are (1) sump level and flow monitoring, (2) airborne-particulate radioactivity monitoring, and (3) either monitoring condensate flow rate from air coolers or monitoring airborne-gaseous radioactivity. The sump flow rate and airborne-particulate channels are found to be capable of indicating an increase in RCS leakage of 1 gal/min within 1 h under most operating conditions. However, airborne-gaseous monitoring was found to have a much longer response time. This shortcoming was identified generically in a USNRC staff memorandum and is mainly due to the long half-life of 133Xe, the major noble gas in the RCS, and the background radiation level from 41Ar, which is created by neutron activation of air around the reactor vessel. Most RM systems use gross energy measurement methods for RCS leakage detection. Potential leakage-detection improvements using spectral capabilities of new γ-ray-sensitive detectors for particulate and gaseous monitoring have been predicted, which might allow specific isotopes to be measured and separated from background radiation.

Liquid Effluent

Liquid-waste systems in a nuclear power plant collect and process radioactive liquid wastes generated during plant operation and reduce their radioactivity and chemical concentrations to levels of clean water acceptable for discharge to the environment or recycling in the plant. Radioactivity removed from the liquids is concentrated in filters, ion-exchange resins, and evaporator bottoms, and these concentrated wastes are sent to a radioactive-waste solidification system for packaging and eventual shipment to an approved offsite disposal location. If the water is to be recycled to the reactor-coolant system, it must meet the water-purity requirements for reactor coolant. If the liquid is to be discharged, the activity level must be consistent with the discharge criteria of the U.S. Code of Federal Regulations, 10 CFR 20. These liquids normally pass through liquid radiation monitors prior to being recycled or discharged.

UNITS OF MEASURE IMPORTANT TO RADIATION MONITORING

Becquerel. The becquerel (Bq) was adopted in 1975 as the unit of measure of activity, which is the measure of the rate of decay of a radioisotopic source. 1 Bq is one disintegration per second.

Curie. The curie (Ci) is a measure of the activity or number of disintegrations per second of a radioactive source. It was originally an estimate of the activity of 1 g of pure radium-226. 1 Ci is 3.7 × 10¹⁰ disintegrations per second (Bq).

Gray. The gray (Gy) is a measure of absorbed dose. 1 Gy is 1 joule per kilogram, or 100 rad.

Rad. The rad is a measure of absorbed dose in units of energy per unit mass of the absorbing material. 1 rad is 100 ergs per gram. The magnitude of dose depends on material properties as well as on the radiation source. Air is typically used as the basis of measurement. When water is substituted for air, the absorbed dose is nearly the same because the atomic number of water is nearly the same as that of air.

Rem. The rem is used to measure the effect of radiation on living organisms. Derived from the words radiation equivalent in man, it is equal to the absorbed dose times a quality factor, Q. For γ rays and β particles, Q = 1 and 1 rem is equal to 1 rad. For charged particles, Q is much larger than 1.

Roentgen. The roentgen (R) is a measure of γ-ray exposure in terms of the charge due to ionization by the exposing radiation in a unit mass of the material. 1 roentgen is 1 electrostatic unit of charge in 1 cubic centimeter of air at standard temperature and pressure.

Sievert. The sievert (Sv) is a measure of the effect of radiation on living organisms and is equal to 100 rem.

TYPICAL DETECTORS AND MONITOR TYPES

Area Radiation Monitors
Area radiation monitors continuously measure radiation levels at various locations within nuclear power plants including
reactor-containment-building work areas and fuel-storage facilities for ensuring personnel safety. Area monitors have historically used Geiger-Müller (GM) tubes, ionization chambers, or scintillation crystals coupled to photomultiplier (PM) tubes, depending on the manufacturer and the sensitivity or range requirements. A block diagram of a typical GM-tube-based area monitor is shown in Fig. 1.

In a GM tube (7) an avalanche breakdown occurs in the gas in the tube each time ionizing radiation is detected and then self-extinguishes. The magnitude of the resulting signal (an electronic pulse) is independent of the number of original ion pairs that initiated the process and therefore independent of the energy of the detected ionizing radiation. The electronics in a GM-tube-based area monitor senses these pulses and converts them to a signal that is proportional to their rate of occurrence. However, present-day GM-tube monitoring systems use energy-compensated GM tubes, for which the number of counts detected is nearly proportional to the total energy absorbed. Thin-walled GM tubes used for area monitors are normally energy compensated to a linear response of ±20% or better for γ-ray energies of 60 keV to 1.25 MeV. The filter is designed to attenuate the lower-energy γ rays below approximately 100 keV and to increase the responses of higher-energy γ rays by the effect of the high-Z material used for the filter. This energy-compensation effect is due to the complex contribution of primary photon transmission or attenuation and secondary-particle production or attenuation at various depths in the GM-tube wall or outer energy filter (8).

In an ionization chamber (9) ionizing radiation is absorbed in the gas in the chamber, and the number of electron-ion pairs thus created is proportional to the energy of the absorbed radiation.
The bias voltage on the ionization chamber sweeps the charge carriers to the electrodes, causing an electrical current to flow, and external circuitry typically measures the magnitude of the current from this ionization process. The current output signal from an ionization-chamber-based area monitor is proportional to the energy of the radiation absorbed in the chamber gas in the γ-ray flux field. These area radiation monitors are often calibrated in units of R/h. When ionizing radiation is absorbed in a scintillator, a light pulse is produced. In turn the light pulse is converted into an electrical pulse in a PM tube that is optically coupled to the scintillator. Over a broad range of energies, the amplitude of the electrical pulse is proportional to the energy of the absorbed radiation. In a scintillator-based area monitor the output signal is proportional to the number of absorbed-radiation events in the detector and independent of energy if the electronics simply counts events and provides a count-rate output signal. If instead the current from the PM tube is measured, the signal will be proportional to the energy of the absorbed radiation. A typical pulse-counting area monitor may have a range from 1 to 100,000 counts per minute. A typical ionization-chamber-based area monitor may have a range from 10⁻¹⁰ to 10⁻³ A, corresponding to a range from 1 to 10,000,000 rad/h. This high range may be used for applications such as postaccident monitoring inside containment.

Process Monitors

Process monitors provide information about radiation levels within the nuclear power plant's liquid, steam, and gaseous processing and storage systems. Liquid monitors may have the detectors mounted in the liquid or steam line, or may have them mounted off-line with a sample stream taken from the main line and flowing through the detector sample chamber. Gaseous monitors too can have the detectors mounted in the stream or in a sample chamber. If the monitor is in a sample chamber, the gas to be monitored, which could be from a duct or from an open area such as the control room, passes through that sample chamber. Process monitors typically measure radiation levels that are far below normal background levels, and, as a result, the detectors must be shielded from all external sources of radiation.

Effluent Monitors

Effluent monitors are similar to process monitors except that they monitor liquid or gaseous streams that leave the nuclear power plant boundary and may transport radioactivity. The monitors described below are applicable to both process and effluent monitors, according to the application to which they are dedicated.

Airborne Monitors

Off-Line Particulate and Noble-Gas β-Particle Scintillation Detectors. The β-particle-sensitive particulate and noble-gas monitors incorporate plastic scintillators coupled to PM tubes.
The gas-detector assemblies are of similar construction. These detectors use similar entrance windows, plastic scintillators, light pipes, and PM tubes. A typical particulate detector consists of a plastic scintillator coupled to a PM tube, with the 0.001-inch-thick aluminum entrance window used for β-particle detectors.
[Figure 1. Typical block diagram of an area radiation monitor: incident radiation strikes a GM tube (with housing and wiring connectors); local electronics and display feed a remote electronics and display, which drives audible and visual alarms (horn and lights).]
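The GM channel in Fig. 1 reduces to a pulse count, a rate conversion, and a single calibration constant. A minimal sketch in Python (the sensitivity default is a hypothetical order-of-magnitude value for an energy-compensated tube, not a figure from this article or any vendor):

```python
# Minimal sketch of the GM-channel count-rate stage in Fig. 1. The
# sensitivity default below is an illustrative assumption.

def gm_dose_rate_mr_per_h(counts, interval_s, sensitivity_cpm_per_mr_h=3300.0):
    """Convert raw GM pulses in a counting interval to indicated dose rate.

    GM pulses carry no energy information, so the channel needs only the
    pulse rate and a calibration constant in cpm per mR/h.
    """
    cpm = counts * 60.0 / interval_s
    return cpm / sensitivity_cpm_per_mr_h

# 3300 counts in 60 s -> 3300 cpm -> 1.0 mR/h at the assumed sensitivity
print(gm_dose_rate_mr_per_h(3300, 60.0))
```

A real monitor would add smoothing and alarm comparison on top of this conversion.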
During original prototype evaluation, detectors are tested with solid β-particle sources to obtain a high signal-to-noise ratio when setting the discrimination level. Without changing the detector's alignment, responses are then obtained for calibrated solid or gaseous β-particle sources. The solid alignment sources are serialized and kept for future use. These sources are used for aligning production detectors and prototype detectors prior to isotopic calibration. After alignment of a production monitor to the same counting efficiency as the prototype detector using the same solid source, the production and prototype monitors have nearly identical responses to radioactive gases in the sample chamber or to activity on the filter. After a monitor has been calibrated at a factory and shipped to a customer, corrections to the calibration may be required to account for atmospheric-pressure effects, because the response of the detector to β-particle radiation in the sample chamber is determined not only by the total activity in the sample chamber but also by self-absorption in the sample gas, which changes with gas pressure.

High-Temperature, In-Line, Noble-Gas β-Particle Scintillation Detectors. These detectors are designed to operate at high temperatures and to be installed directly into an air duct. The detector assemblies are similar to the off-line β-particle scintillation detectors with the following exceptions: a thin (0.007-inch-thick) CaF₂(Eu) crystal or a high-temperature plastic scintillator (0.010 inch thick) and a quartz light pipe are used in place of the plastic scintillator with a Lucite light pipe, and a high-temperature PM tube is used. These detectors use the same 0.001-inch-thick aluminum entrance window as the other β-particle detectors. Prototype detectors are aligned using the same methods described for the off-line β-particle scintillator.
After alignment, a prototype detector is installed into a test fixture to simulate the geometry of the intended installation.

Iodine Detectors. Detectors used for the iodine channels normally consist of a 2-inch-diameter by 2-inch-long sodium iodide crystal with a 2-inch-diameter photomultiplier tube. The detectors are typically specified to have a resolution of 8% or better for the 662 keV cesium-137 photopeak and may be supplied with an americium-241 pulser for pulse-height stabilization. The electronics associated with a typical iodine channel has an energy-window discriminator with adjustable lower and upper thresholds. When the amplitude of a pulse signal from the detector lies between the lower and upper thresholds, the signal is counted as a valid event. The typical output is the number of events per unit time that fall within the energy window. Since each detector has its own resolution and the system operates as a single-channel analyzer, each detector's response is unique. The response of each detector to a calibrated simulated iodine-131 source (barium-133) may be used when calculating that detector's expected response to iodine-131. Upon completing the alignment of the iodine channel as a single-channel analyzer on the 356 keV photons of barium-133, the window must be readjusted to be centered on the 364 keV photons of iodine-131. A typical pulse-height stabilizer consists of a small doped sodium iodide crystal, which provides a constant source of γ-ray-equivalent energy (GEE) in the form of light pulses. The light pulses are detected by the PM tube and converted
to an electrical pulse by the PM tube and the preamplifier. The GEE is produced by the 5 MeV α-particle decay of americium-241 in the pulser crystal. These high-energy light pulses are attenuated to the equivalent light energy of a 3 MeV γ-ray event when the pulser crystal is embedded in the mother crystal. A typical preamplifier may contain three window circuits, as shown in Fig. 1. One window is used to monitor the americium-241 stabilization signal from the detector, one to monitor the iodine peak, and one to monitor the background level at energies just above the iodine window. By monitoring the known stabilization source, compensation can be made for instabilities, such as temperature-induced variations in the gain of the PM tube and preamplifier. The background window can be used for active background subtraction.

Particulate, Iodine, and Noble-Gas Monitor. Often the measurement of airborne-particulate radiation, radioactive iodine, and radioactive noble gas is combined into a single instrument for such applications as containment-building or vent-stack airborne monitoring. A typical block diagram for such an instrument is shown in Fig. 2.

Typical Postaccident-Effluent, Wide-Range Gas Monitor. A stack gas monitor typically is mounted near the nuclear power plant vent stack and receives a sample of the gas in the stack, collected with isokinetic nozzles mounted in the stack. A normal-range monitor would be constructed as described previously and may cover a range of about five decades. A wide-range gas monitor may cover a range of about 12 decades through the use of multiple detectors. An isokinetic nozzle is used to sample the air in the stack because, by keeping the velocity of the air entering the nozzle the same as the velocity of the gas bypassing the nozzle, a more representative sample of gas is obtained for analysis in the monitor.
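The three-window preamplifier scheme described for the iodine channel can be sketched as single-channel-analyzer logic. All window bounds below are illustrative assumptions, not values from the article:

```python
# Sketch of three-window single-channel-analyzer logic: one window on the
# Am-241 stabilization pulser, one on the 364 keV I-131 photopeak, and one
# on background just above it. Window bounds (keV) are assumptions.

WINDOWS = {
    "stabilization": (2800.0, 3200.0),  # ~3 MeV gamma-equivalent light pulser
    "iodine":        (330.0, 400.0),    # centered near 364 keV
    "background":    (400.0, 470.0),    # same width, just above the iodine window
}

def classify(pulse_energies_kev):
    """Count pulses falling in each window."""
    counts = {name: 0 for name in WINDOWS}
    for e in pulse_energies_kev:
        for name, (lo, hi) in WINDOWS.items():
            if lo <= e < hi:
                counts[name] += 1
    return counts

def net_iodine(counts):
    """Active background subtraction: equal-width windows, one-for-one."""
    return counts["iodine"] - counts["background"]

c = classify([364.0, 370.0, 350.0, 420.0, 3000.0, 365.0, 430.0])
print(c, net_iodine(c))
```

In practice the stabilization-window count would feed a gain-correction loop rather than being merely tallied.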
The low-range detector may be a β-particle scintillator, to which the preceding description applies. The mid- and high-range detectors may be solid-state detectors of a material such as cadmium telluride.

Liquid Monitors

Liquid monitors typically employ a sodium iodide scintillator coupled to a PM tube to measure γ radiation in the liquid stream. The liquid monitor may be mounted in-line with the stream to be monitored, or a sample stream may be extracted from the main stream and routed through a sample chamber into which the detector of the off-line liquid monitor is mounted. An in-line liquid monitor typically is bolted directly into the liquid line with flanges on each end of the section of pipe that passes through the monitor. The monitor consists of a section of in-line pipe, a detector mounted adjacent to the pipe, and lead shielding surrounding the pipe and detector to prevent radiation from the surrounding area from entering the detector. An off-line liquid monitor typically has a sample chamber into which the detector is inserted, and liquid enters and leaves the sample chamber through small-diameter pipes. Lead shielding surrounds the sample chamber and detector
[Figure 2. Block diagram of an airborne-particulate, iodine, and noble-gas monitor: a sample nozzle in the ventilation duct feeds a sample line, pump, and flow instrument; the sample passes the particulate, iodine, and noble-gas detectors in turn, all supervised by local measurement and control electronics with external communication.]
to prevent radiation from the area around the monitor from entering the detector. The volume of water near the detector in a liquid monitor causes Compton scattering of the photons emitted by the radioactivity in the liquid, so the signal seen by the detector includes not only the primary γ-ray energies but also lower-energy contributions from scattering of the primary photons. Therefore, there is typically no effort to distinguish specific radiation energies in a liquid monitor.

Perimeter Monitors

Many nuclear power plants place radiation monitors around the perimeter of the plant site to measure dose levels at the site boundary. Communication with these monitors is often achieved by telephone lines or radio transmission. Perimeter monitors are typically high-sensitivity area monitors. At least one vendor has provided large-diameter, high-pressure ionization chambers, and another vendor offers energy-compensated GM tubes. Typical requirements include an on-scale reading at normal background levels, a wide range for detecting significant radiation releases, and battery backup to avoid loss of data during a power outage.

MONITOR-PERFORMANCE PARAMETERS

In order for the monitors in the RM system to perform their required functions, they must operate within specific bounds of range, sensitivity, accuracy, and response time. The range is usually described in terms of the smallest and greatest magnitudes of activity or concentration for which the output signal is a valid representation, together with the radiation energies that may be included in the measurement. Sensitivity is a statement of how the output signal responds to a change in the measured variable. Accuracy is a measure of the uncertainty in the output signal. And response time is a measure of the length of time required for the output signal to change as a result of a change in the measured variable.
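Range is commonly quoted as a span in decades (factors of 10). A one-line helper, using the 1 to 100,000 cpm channel discussed in this article as the example:

```python
import math

# Span of a logarithmic channel in decades (factors of 10).
def span_decades(low, high):
    return math.log10(high / low)

# A 1 to 100,000 cpm channel spans five decades.
print(span_decades(1.0, 100_000.0))
```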
Range and Sensitivity

The range of an instrument may be limited on the low end by noise or by instrument precision and accuracy, or it may be established by the scale and levels set for the output signal. The upper end of the range may be limited by the linearity or saturation characteristics of the detector or instrument. The scales of most radiation monitors are logarithmic, and the span of an individual radiation channel is typically five decades. A decade is a factor of 10, so a scale from 1 to 100,000 spans five decades. The American National Standards Institute (ANSI) Standard N42.18, Section 5.4.2, recommends that the span be at least four decades above the minimum detectable level (MDL). Spans of more than six decades often require multiple channels with overlapping ranges. Guidance for selecting the ranges for specific monitors is provided in ANSI N42.18 and ANSI/ANS-6.8.2 for effluent monitors, in ANSI/ANS-6.8.1 for area monitors, and in Regulatory Guide 1.97 for postaccident monitors. For monitors that measure the concentration of radioactivity in gases or liquids, range and sensitivity are normally specified for a certain isotope or a distribution of isotopes rather than over an interval of radiation energies. For monitors that measure dose rate, range and sensitivity are normally specified over an interval of radiation energies. Sensitivity is normally determined by the characteristics of the detector. For monitors that measure dose rate, the sensitivity is often given as the ratio of the change in output signal to the change in radiation level that caused it; for example, for an ionization chamber the sensitivity is normally given in units of A/(R/h), and for a GM-tube-based area monitor in units of cpm/(R/h), where cpm is counts per minute. For monitors that measure concentrations of radioactivity, the sensitivity is normally stated as the minimum detectable signal, which is a function of the detector characteristics, the effectiveness of radiation shielding, and the magnitude of background radiation.

Direct Measuring Instruments. A radioactive-material concentration estimate A for a direct measurement is given by

A = (y − Bkg)/Rd

where y is the detector response to the concentration A plus background, Bkg is the detector response to background, and Rd is the detector response per unit of concentration. A direct measurement is, for example, measurement of a gas or liquid volume in a fixed geometry. Typical units for y and Bkg are counts per minute, and typical units for Rd are counts per minute per μCi/cm³. The uncertainty in the concentration estimate due to counting statistics alone depends on the magnitudes of the total count and of the background count during some fixed time. The uncertainty due to counting statistics is then translated into an uncertainty in the concentration estimate by dividing by the detector response. The maximum sensitivity represents the lowest concentration of a specific radionuclide that can be measured at a given confidence level in a stated time (at a given flow rate, where applicable) under specific background radiation conditions (see ANSI N42.18, Section 5.3.1.4). The maximum sensitivity is commonly termed the minimum detectable level (MDL) and is defined in terms of the uncertainty in interferences (termed background in radiation detection) and the response of the radiation detector:

MDL = CL sb/Rd

where CL is the confidence level desired in the measurement (unitless) and sb is the background uncertainty in units of the detector output (e.g., counts per minute). An MDL termed the minimum detectable concentration (MDC) is based on ANSI N42.18:

MDC = 2 sb/Rd

Another MDL is termed the lower limit of detection (LLD) (10):

LLD = 4.66 sb/Rd

Indirect Measuring Instruments. Radiation monitors that view a medium through which a sample has been drawn (e.g., particulate channels that monitor the radiation buildup on a filter) have additional characteristics for the establishment of range. Detector response is stated in terms of output per unit of activity deposited on the filter medium. For isotopes with half-lives much longer than the sample collection time, the quantity of activity on the filter is the product of the concentration A, the sample flow rate f, and the sample collection time T. The concentration estimate then becomes

A = (y − Bkg)/(Ri f T)

where Ri is the detector output per unit activity on the filter [e.g., (counts per minute)/μCi]. Then

MDC = 2 sb/(Ri f T)

and

LLD = 4.66 sb/(Ri f T)
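The MDC and LLD expressions apply directly; a sketch with illustrative numbers (not taken from any specific monitor):

```python
# Direct application of the MDC and LLD formulas; all numbers illustrative.

def mdc_direct(sb, rd):
    """Minimum detectable concentration, direct channel (per ANSI N42.18)."""
    return 2.0 * sb / rd

def lld_direct(sb, rd):
    """Lower limit of detection, direct channel (per NUREG/CR-4007)."""
    return 4.66 * sb / rd

def mdc_filter(sb, ri, f, t):
    """MDC for a filter (indirect) channel; response scales with flow * time."""
    return 2.0 * sb / (ri * f * t)

# Hypothetical channel: sb = 10 cpm background uncertainty,
# Rd = 1e7 cpm per (uCi/cm^3) of concentration in the chamber.
print(mdc_direct(10.0, 1e7))   # ~2e-6 uCi/cm^3
print(lld_direct(10.0, 1e7))   # ~4.66e-6 uCi/cm^3
```

Note how the filter-channel MDC improves (decreases) with longer collection times and higher flow rates, exactly as the formula indicates.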
A specification of sensitivity also establishes the level above which the set-point value should be placed. Set points should be well above MDLs in order to avoid spurious alarm/trip outputs due to statistical fluctuations in the measurement (see ANSI/ANS-HPSSC-6.8.2, Section 4.4.8). Ambient background radiation is specified in order to determine the quantity of shielding required around the radiation detector, and this specified ambient level should be greater than the levels expected during plant operation. Detector-assembly performance can be stated in terms of detector response per unit background radiation for a specified isotope.

β-Particle Detectors for Airborne-Radiation Monitoring. Airborne effluent is frequently monitored by drawing a sample from the effluent stream into a lead-shielded sample chamber. The sample chamber is viewed by a thin (around 0.010 inch), predominantly β-particle-sensitive detector, since nearly all noble gases emit one β particle per disintegration. β particles with energies in excess of several hundred keV lose about 100 keV on entering normal to the detector face. The lower discriminator must be set above noise in the detection system. The upper discriminator needs to be open-ended or set above 500 keV because of the high-energy tail of the deposited-energy distribution of the β particles. This energy straggling of β particles results from infrequent, large-angle scattering, by which a β particle can lose up to one half of its initial energy (11). The number of particles emitted per disintegration is the first factor in determining the response of a detector. The isotopic distribution of radioactive elements during normal reactor operation is significantly different from that postulated for accident releases. The use of a β-particle-sensitive detector has the advantage that the number of β particles per disintegration changes little from normal-operation releases to the release postulated under accident conditions.
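The stability of the β response, and the corresponding swing in γ response, can be illustrated with the approximate yields discussed in this section (about 0.35 γ ray per disintegration for ¹³³Xe in normal operation, about 2 for the postulated accident mix, and roughly one β particle per disintegration throughout):

```python
# Shift in detector response per disintegration between normal operation
# and the postulated accident mix, using the approximate yields discussed
# in this section of the article.

gammas_normal = 0.35     # Xe-133 emits a gamma ray in ~35% of decays
gammas_accident = 2.0    # postulated initial-accident noble-gas mix
betas_any_mix = 1.0      # nearly all noble gases emit one beta per decay

gamma_shift = gammas_accident / gammas_normal  # gamma response changes ~5.7x
beta_shift = betas_any_mix / betas_any_mix     # beta response essentially unchanged
print(round(gamma_shift, 1), beta_shift)       # 5.7 1.0
```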
γ-ray-sensitive detectors are at a disadvantage because the number of γ rays released per disintegration rises dramatically from normal operation to postulated initial-accident conditions and then drops as the noble gases decay. The element seen in normal operation is predominantly ¹³³Xe, which emits a γ ray in about 35% of its decays. The postulated initial-accident noble-gas mix yields around two γ rays per disintegration.

Over-Range Condition

An RM instrument should operate over its range within the required accuracy, and when radiation levels are significantly above the range (above full scale), the instrument must survive and continue to present an appropriate readout. When an instrument is in an over-range condition, it is important that the instrument output not fall below full scale for input levels up to 100 times greater than full scale. Count-rate circuits are typically limited in their maximum counting rates by the resolving time required to distinguish two consecutive input pulses. That resolving time is sometimes referred to as dead time; if a second pulse occurs during the dead time, it is missed and not counted. This is sometimes referred to as count-rate loss due to pulse pileup. If count-rate loss becomes so severe that the output actually decreases while the input is still increasing, the condition is called foldover. Radiation emission from radioactive atoms is a random process and follows a Poisson statistical distribution.
The resolving time of most count-rate instruments is equal to or greater than the width of the incoming pulse. As the input rate increases, the count-rate loss increases until the output goes into saturation or even foldover. In some instruments, very high input rates can freeze the circuit and drive the output rate toward zero. Such instruments are sometimes referred to as "paralyzable" and those that overcome this failing as "nonparalyzable." The instrument output count rate n can be calculated, as shown by Evans (12), from the input rate N and the instrument dead time p, during which the circuitry cannot respond to a second input pulse, by the equation

n = N e^(−Np)

A useful method for estimating count-rate loss for a system that obeys Poisson statistics is the following approximation: when the output count rate is A% (any value below 10%) of the frequency represented by the inverse of the dead time, 1/p, the instrument has a count-rate loss of approximately A%. Many design methods have been used to eliminate or reduce the effects of foldover and to prevent the output of an instrument from going below full scale when the input levels are above full scale.

Accuracy

The definition of accuracy from ANSI Standard N42.18 is "the degree of agreement [of the observed value] with the true [or correct] value of the quantity being measured." Accuracy cannot be adjusted or otherwise affected by calibration; it is a performance specification against which a channel is tested. For channels with multiple components, the individual accuracies are combined as part of the overall accuracy. Accuracies for monitors measuring concentration-related quantities are normally specified for a certain isotope or a distribution of isotopes rather than over radiation energies. Accuracies for monitors measuring dose rate are normally specified over an interval of radiation energies.
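Evans's paralyzable-model equation and the A% rule of thumb are easy to check numerically; the 1 μs dead time below is an illustrative assumption:

```python
import math

# Paralyzable dead-time model: observed rate n = N * exp(-N * p) for true
# input rate N and dead time p. Assume p = 1 microsecond (illustrative).

def observed_rate(true_rate, dead_time):
    return true_rate * math.exp(-true_rate * dead_time)

p = 1e-6

# Foldover: once N exceeds 1/p, the output falls as the input keeps rising.
assert observed_rate(2e6, p) < observed_rate(1e6, p)

# Rule of thumb: when the output is A% of 1/p (A below 10), the loss is ~A%.
N = 0.05 / p                            # input chosen so output is near 5% of 1/p
loss = 1.0 - observed_rate(N, p) / N    # exact fractional loss: 1 - exp(-0.05)
print(round(loss * 100, 1))             # ~4.9%, close to the ~5% rule of thumb
```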
ANSI Standard N42.18, Section 5.4.4, provides a guideline that the instrument error for effluent monitors should not exceed ±20% of reading over the upper 80% of the dynamic range.

Response Time

For safety-related equipment, system response times used in safety analyses include the response times of the individual subsystems performing the protective function. These typically comprise the instrument response time and the mechanical-system response time, and the overall system response time must be allocated among the subsystems. For a radiation-monitoring channel, the response time depends on the initial radiation level, the increase in radiation level as a function of time, and the channel set point. ANSI Standard N42.18 recommends that radiation-monitoring-channel response time be inversely proportional to the final count or exposure rate. This characteristic comes naturally with analog count-rate circuits. The time constant is typically established by a resistance and a capacitance in the feedback path of an operational amplifier, and for logarithmic circuits the resistance is typically the forward resistance of a diode at its current operating point. A factor-of-10 increase in count rate will typically make the time response a factor of 10 faster. In circuits of this type, if the change in input activity is a step function, the time response is strictly a function of the end point and is not affected by the starting point. Digital circuits, including software algorithms, are typically designed to emulate the time response of their analog counterparts. Thus fast response times can be provided at high radiation levels and longer response times at lower levels. Both types of circuits usually offer the ability to adjust the time constant to match the requirements of the application. Slow response times are needed to provide stable, smoothed outputs at the low end of the range. ANSI Standard N42.18 recommends that response times at low radiation levels be long enough to maintain background readings within the required accuracy.
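The rate-dependent time constant described for analog count-rate circuits can be emulated digitally, as the text notes. A toy single-pole filter whose time constant varies inversely with the input rate (the constant k and sample period dt are illustrative assumptions):

```python
# Toy digital emulation of a rate-dependent smoothing circuit: a single-pole
# low-pass filter whose time constant tau = k / (input rate), so a
# factor-of-10 higher rate gives a factor-of-10 faster response.
# k and dt are illustrative, not taken from any real monitor.

def smooth(samples_cps, dt=0.1, k=1000.0, y0=1.0):
    y = y0
    out = []
    for x in samples_cps:
        tau = k / max(x, 1e-6)       # seconds; shrinks as the rate rises
        alpha = min(dt / tau, 1.0)   # clamp to keep the update stable
        y += alpha * (x - y)         # standard first-order update
        out.append(y)
    return out

# Step from 1 cps to 1000 cps: the time constant tracks the new (final)
# level, so settling speed depends on the end point, not the start.
trace = smooth([1000.0] * 50)
print(trace[0], trace[-1])
```

Keying the time constant to the input (the final level after a step) rather than the running output mirrors the "end point, not starting point" behavior described above.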
BIBLIOGRAPHY

1. ANSI/ANS Standard HPSSC-6.8.1-1981, Location and Design Criteria for Area Radiation Monitoring Systems for Light Water Nuclear Reactors.
2. ANSI/ISA Standard S65.03-1962, Standard for Light Water Reactor Coolant Pressure Boundary Leak Detection.
3. USNRC Regulatory Guide 1.45, Reactor Coolant Pressure Boundary Leakage Detection Systems.
4. ANSI Standard N42.18, Specification and Performance of On-Site Instrumentation for Continuously Monitoring Radioactivity in Effluents.
5. The requirements for postaccident monitoring equipment are outlined in USNRC Regulatory Guide 1.97, Revision 3, and NUREG-0737.
6. USNRC NUREG-0800, Standard Review Plan, Secs. 11.5, 12.3, and 12.4.
7. G. F. Knoll, Radiation Detection and Measurement, New York: Wiley, 1979, chap. 7.
8. D. J. Allard and A. M. Chabot, The N-16 gamma radiation response of Geiger-Mueller tubes, Health Phys. Soc. Annu. Meet., Washington, DC, 1991.
9. G. F. Knoll, Radiation Detection and Measurement, New York: Wiley, 1979, chap. 5.
10. USNRC NUREG/CR-4007, Lower Limit of Detection: Definition and Elaboration of a Proposed Position for Radiological Effluent and Environmental Measurements, 1984.
11. N. Tsoulfanidis, Measurement and Detection of Radiation, New York: Hemisphere, 1983, chap. 13.
12. R. D. Evans, The Atomic Nucleus, New York: McGraw-Hill, 1955, p. 785.
Reading List Fundamentals of Nuclear Medicine, N. P. Alazraki and F. S. Mishkin (eds.), New York: Soc. Nuclear Medicine.
CLINTON L. LINGREN DANIEL WEIS GEORGE L. BLEHER DONALD P. GIEGLER Individual Consultants