Radio Interferometry and Satellite Tracking
For a complete listing of titles in the Artech House Space Technology and Applications Series, turn to the back of this book.
Radio Interferometry and Satellite Tracking

Seiichiro Kawase
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.

Cover design by Vicki Kane
ISBN 13: 978-1-60807-096-1
© 2012 ARTECH HOUSE 685 Canton Street Norwood, MA 02062
All rights reserved. Printed and bound in the United States of America. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher. All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.
10 9 8 7 6 5 4 3 2 1
Contents
Preface

Part I  Radio Interferometer

1  Overview of Part I: Radio Interferometer

2  Receiving Antenna
   2.1  Receiving Points and the Baseline
   2.2  Reference Point
   2.3  Polarization
   2.4  Sidelobe
   2.5  Mechanical Stability

3  Receiving Equipment
   3.1  Frequency Conversion
   3.2  Receiving Routes
   3.3  Phase Stability
   3.4  Reference Correction
   3.5  Cable Stability Condition
   3.6  Reference Coupler
   Reference

4  Phase Detection
   4.1  Direct Phase Measurement
   4.2  Separate Measurement
   4.3  Fourier Transform
   4.4  Problem of Image Spectrum
   4.5  Signal Processing for Phase Measurement
   4.6  Noise Reduction
   4.7  Tracking Nonbeacon Signals
   Reference
   Appendix 4A: Window and Phase Measurement
   4A.1  Beacon Measurement
   4A.2  Nonbeacon Measurement

5  Signal, Noise, and Precision
   5.1  Required SNR
   5.2  Signal Power and Noise Power
   5.3  Beacon Downlink Budget
   5.4  Tracking a Weak Signal
   5.5  Estimates in PFD
   Reference

6  Error Factors
   6.1  Baseline Error
   6.2  Phase Ambiguity
   6.3  Atmospheric Refraction
   6.4  Effect of Rainwater
   Reference

7  Design and Installation
   7.1  System Layout
   7.2  Reflecting Interferometer

Part II  Geostationary Satellite Orbit

8  Overview of Part II: Geostationary Satellite Orbit
   Reference

9  Kepler's Laws
   9.1  Kepler's First Law
   9.2  Kepler's Second Law
   9.3  Kepler's Third Law
   9.4  Physical Meanings
   9.5  Significance of Kepler's Laws

10  Near-Stationary Orbit
   10.1  Geostationary and Near-Stationary Orbits
   10.2  Orbit with Small Eccentricity
   10.3  Motion Due to Small Eccentricity
   10.4  Motion Due to Nonstationary Radius
   10.5  Motions in an Orbital Plane
   10.6  Motion Perpendicular to an Orbital Plane
   10.7  Relative Position Coordinates
   Reference
   Appendix 10A: Width of Figure 8-Like Locus

11  Changing the Orbit
   11.1  Orbital Energy
   11.2  In-Plane Orbital Changes
   11.3  In-Plane Orbital Maneuver
   11.4  Inclination Maneuver

12  Orbital Perturbations
   12.1  Perturbing Forces
   12.2  Nonspherical Shape of the Earth
   12.3  Patterns of Longitudinal Drift
   12.4  Solar Radiation Pressure
   12.5  Position of the Sun
   12.6  Long-Term Effect
   12.7  Gravity of the Sun
   12.8  Tilting of the Orbital Plane
   12.9  Gravity of the Moon
   12.10  Sun-Moon Combined Effect
   Reference

13  Station Keeping
   13.1  EW Keeping for Drift-Rate Control
   13.2  EW Keeping for Eccentricity Control
   13.3  Combined EW Keeping
   13.4  NS Keeping
   13.5  Factors Depending on Satellites
   Reference

14  Overcrowding and Regulations
   14.1  Orbital Regulations
   14.2  Problem of Overcrowding
   Reference

Part III  Interferometric Tracking

15  Overview of Part III: Interferometric Tracking

16  Tracking and Orbit Estimation
   16.1  General Concept
   16.2  Styles of Orbit Estimation
   16.3  Choice of Estimation Style
   16.4  Software Units
   16.5  Meaning of Orbit Estimation
   16.6  Tracking Using an Interferometer
   Reference

17  Azimuth-Elevation Tracking
   17.1  Azimuth-Elevation Angles
   17.2  Azimuth-Elevation Interferometer
   17.3  Detection Unit Vector of a Baseline
   17.4  Orbit Estimation
   17.5  Accuracy Considerations
   17.6  Nonhorizontal Baseline

18  Longitude Tracking
   18.1  Satellite Longitudes
   18.2  Longitude-Monitoring Interferometer
   18.3  Orbit Estimation
   18.4  Interferometer Setup
   18.5  Monitoring Examples
      18.5.1  Single Satellite
      18.5.2  Two Satellites
      18.5.3  Different-Band Satellites
   Reference

19  Range-Azimuth Tracking
   19.1  Combined Tracking for Orbit Estimation
   19.2  Merit of Combined Tracking
   19.3  Interferometer Hardware and Performance
   19.4  Station Keeping with Safety Monitoring
   Reference

20  Differential Tracking
   20.1  Differential Tracking Concept
   20.2  Interferometer Hardware
   20.3  Orbit Estimation
   20.4  Possible Applications
   Reference

21  Rotary-Baseline Interferometer
   21.1  Rotary Baseline
   21.2  Rotary Baseline with Mirrors
   21.3  Rotary-Baseline Interferometer
   21.4  Operation and Data Processing
   21.5  Orbit Estimation
   21.6  Long-Term Monitoring
   21.7  Error Considerations
   21.8  Error Calibration
   21.9  Nongeometrical Error
   Reference

22  Geolocation Interferometer
   22.1  Geolocation: Principle and Problem
   22.2  Weak-Signal Detection
   22.3  Delay Limit and Delay Line
   22.4  Correlation Processing
   22.5  Time-Integration Effect
   22.6  Problem of Satellite-Transponder Phase
   22.7  Phase Measurement Accuracy
   22.8  Locating the Earth Station
   22.9  Transponder Frequency Errors
   22.10  Orbital Information
   22.11  Quick Orbit Estimation
   Reference

About the Author

Index
Preface

The worldwide growth of space telecommunications has caused a rapid increase in the number of satellites operating in geostationary orbits. Satellites are being placed in orbit with less and less distance separating them; sometimes the amount of separation is so small that satellite control needs to operate with extreme caution to ensure orbital safety. Satellites currently being planned for launch are competing for vacant orbital positions, with more and more effort being required for coordination with other satellites. Satellites are thus faced with the problem of an overcrowded orbit.

The purposes of this book are to address this problem and to show how radio interferometers can be used for tracking and monitoring the orbits of geostationary satellites in the overcrowded environment. Radio interferometry is a passive means of satellite tracking, with high accuracies theoretically possible for observing direction angles. Its potential use was noticed during the early years of artificial satellites. In actuality, however, there have been few or no cases of interferometric tracking, in particular for geostationary satellites. This is because interferometers had some inherent difficulties in establishing operational accuracy. This book will demonstrate that we can overcome the difficulties to make the interferometer truly capable of precise satellite tracking.

Satellites are faced with an additional problem. RF interference tends to occur when an Earth station emits unwanted signals to satellites. Locating such an Earth station on the map requires a special tracking method, which is based on the same principle as the satellite tracking interferometer. So this topic is also covered in this book.

Chapters of this book are grouped into three parts. Part I addresses the fundamentals of the interferometer. It starts by defining the concepts and terminology, such as baseline vector, reference points, and interferometric phase. Next it covers interferometer hardware, including antennas, receiving equipment, and signal processing for phase detection. The accuracy of the tracking measurements is discussed in terms of signal and noise and other systematic errors. The contents of Part I are the essential items that must be considered for every interferometer.

Part II discusses the orbital dynamics of geostationary satellites. Because our tracking targets are geostationary satellites, we need to know in what manner they move, if they move, in orbit. Discussions start with the fundamental laws of orbits, then go through maneuvers and perturbations, until finally reaching the station-keeping methods. Discussions are straightforward, without relying on complex mathematical equations, because we prefer an approach that makes comprehension easy while not losing exactness. One can regard Part II as a concise, understandable discourse on the theory of geostationary orbits.

Part III illustrates how interferometers are used for satellite tracking. Different types of interferometers are shown, because they have different purposes of tracking and orbit estimation. Parts I and II are frequently referred to, as they are put together to derive interferometer applications. Use of an interferometer for locating unwanted Earth stations is also discussed in Part III.

In regard to the content, Part I is categorized as electronic engineering, whereas Part II covers mechanical engineering. The author has made every effort to write Part I such that it can be followed without trouble by those who are in mechanical engineering, and Part II by those in electronic engineering, because an understanding of Part III requires both. Chapters are thus straightforward and meant to be self-contained; external material is referred to only if it is truly worth referral. For this reason, showing long lists of references is not the style of this book.

The author wishes to thank the National Institute of Information and Communications Technology (NICT), where he worked in satellite communications, tracking, and orbital dynamics. Kashima Space Technology Center, a local branch of NICT, was the operational site for the interferometers seen in Chapters 18, 20, and 21; all members of the engineering and administration departments who gave support to those interferometer projects are cordially thanked.

The author's deepest thanks go to the late Dr. Erik Mattias Soop. At his suggestion the author became interested in interferometric tracking and tried orbit estimation analysis during a visit to the European Space Operations Center in the 1980s, and that was the starting point for the author's involvement in interferometric tracking of geostationary satellites.

Interferometric satellite tracking is a relatively young technology in the history of geostationary satellites that started in the 1960s. The author hopes the present book will attract interest in this young technology, thus promoting its further development, for surely it will give us momentum to confront the problem of overcrowded geostationary orbits.
Part I Radio Interferometer
1 Overview of Part I: Radio Interferometer

The radio interferometer, or simply interferometer as we will refer to it, provides a means of measuring the directional angle of downlink microwaves from a target satellite. Its basic idea is illustrated in Figure 1.1. Antennas receive the satellite microwaves, and the relative phases measured between the antennas are used to point to the satellite direction. The pointing direction has two degrees of freedom, which are often expressed in azimuth and elevation angles. Correspondingly, the phases are measured between antennas (1) and (2) and between (2) and (3).

Figure 1.1 Basic interferometer.

Originally, there existed a method for measuring satellite direction by using a large-diameter parabolic antenna with autotracking. One can actually regard the interferometer as deriving from the autotracking antenna. The principle of the autotracking antenna may be understood as illustrated in Figure 1.2. If the satellite is right in front of the antenna, then by symmetry the satellite microwave arrives at feed elements (a) and (b) at the same time. If the satellite is at some slanted angle as indicated by the broken lines in Figure 1.2, the microwave will arrive earlier at B than at A of the antenna dish and, hence, will arrive earlier at element (b) than at (a). Let us assume here that we are receiving a beacon signal from the satellite. The relative time difference of signal arrivals at (a) and (b) is then detected as a relative phase difference. A drive motor then slews the antenna until the phase difference becomes zero. This makes the antenna point right at the satellite, and we determine the satellite direction by reading the slewing angle of the driving shaft.

Figure 1.2 Principle of autotracking.

Sometimes there is a single feed horn, instead of two elements, placed at the focal point. In this case the horn may be regarded as (a) and (b) combined, and the relative phase difference is detected by picking up a higher-order uneven mode excited in the horn. If the phases at A and B are different, the phase at the focal point shows an uneven distribution, thus exciting the higher-order mode. So, the tracking principle is similar to that of the two-element case.

Tracking the satellite direction thus relies on the existence of a relative phase difference between A and B of the antenna dish. If this is so, we can place small antennas at A and B, instead of using a large dish, to detect the relative phase of A and B. This is the basic principle of the interferometer, and because we can take the A and B pair horizontally and vertically across the large antenna dish, two pairs of small antennas should appear. In this way the interferometer takes the shape illustrated in Figure 1.1, with antennas (1) and (2) working as one pair and antennas (2) and (3) as another pair.

In the early period of satellite communications, Earth stations employed large-diameter antennas, because during that period satellites were small in size and mass, and so had less transmission power than nowadays. Over the decades satellites have evolved to have more and more transmission power; correspondingly, the use of large antennas in Earth stations has become less and less frequent. This means that Earth stations are now losing the ability to measure the direction of satellites, and this is where the interferometer can show its significance.

The interferometer has advantages over the autotracking antenna. First of all, the interferometer does not need a large-diameter antenna. Its small antennas are placed at fixed points and do not need drive mechanisms if the tracking target is a geostationary satellite. The accuracy of direction measurement improves with the distance between the antennas. Low-cost, accurate satellite tracking thus becomes possible.

In contrast, however, the interferometer has its own problems. The interferometer is based on precise phase measurement, whereas in reality it is no easy task to measure precise phases in a practical environment with various error sources present. The interferometer has only small parts of A and B taken out from the large dish, as was illustrated in Figure 1.2, with the major part between A and B being discarded. So, the interferometer lacks a sharp beam that should point toward the satellite, and this causes some indefiniteness in determining the direction. Getting over these problems is essential for realizing an interferometer.

Part I addresses these problems and considers how to solve them while discussing the design of interferometer hardware. Chapters 2 through 4 discuss the most basic elements of the interferometer, including antennas, receiving equipment, and phase detection. Chapter 5 discusses the quality of satellite downlinks, which is also a basic element. Chapters 6 and 7 discuss system design and installation, while considering how to eliminate error sources in order to eke out the best performance of the interferometer hardware.

The antennas we saw in Figure 1.1 make up pair (1)–(2) and pair (2)–(3). Both pairs have identical functions. For that reason, we focus our interest on a single pair of antennas throughout the discussions in Part I. For the time being we will consider receiving satellite beacons; later, we will also consider nonbeacon signals. In discussing the interferometer hardware, we will assume the frequency band of 3 to 4 GHz (C band), 11 to 12 GHz (Ku band), or both. This is because increasing numbers of satellites are now using these frequency bands, which adds to the overcrowding problem of orbital positions and frequency channels.

Interferometric tracking was once used when the earliest artificial satellites were put into low Earth orbits [1]. Its use, however, did not last long because it was soon replaced by Doppler, range rate, or ranging. Afterwards, interferometric satellite tracking was not often discussed. Using the interferometer for geostationary satellites is thus a new concept for us, and this is why the following discussions start with those basic elements. Discussions will proceed mostly in a self-contained manner, with concise information about satellite communication links, for instance by [2], given as background in the chapters, in particular in Chapter 5.
References

[1] Bate, R. R., D. D. Mueller, and J. E. White, Fundamentals of Astrodynamics, New York: Dover, 1971, pp. 135–136, 138.

[2] Agrawal, B. N., Design of Geosynchronous Spacecraft, Englewood Cliffs, NJ: Prentice-Hall, 1986, Chap. 7.
2 Receiving Antenna

Antennas used for interferometers are basically the same as those used in satellite communication Earth stations. However, our particular purpose of interferometric phase measurement requires us to consider the antennas from a different point of view. We must define a reference point on each antenna for measurement, and we must also consider antenna polarization and mechanical rigidity in a different way. These points are discussed in the following sections.
2.1 Receiving Points and the Baseline

The interferometer we are going to discuss is principally made up of two receiving points, #1 and #2, as illustrated in Figure 2.1, so that the phase difference can be measured between #1 and #2. The line segment connecting points #1 and #2 is called the baseline. The baseline is a vector quantity defined by its length and orientation. The interferometer of interest will have a baseline length of several meters or longer, but not much longer than a couple of tens of meters, because the interferometer will be placed on the premises of a satellite tracking station.

Figure 2.1 Principle of interferometer, with receiving points #1 and #2.

If the target satellite to be tracked is in the direction perpendicular to the baseline in Figure 2.1, then its downlink signals arrive at the two receiving points at the same time. This is because the satellite is distant enough with regard to the baseline, and so the lines of sight to the satellite at point #1 and point #2 are parallel to each other. If the satellite direction changes by an angle θ, then point #1 becomes more distant from the satellite than point #2, by B sin θ, where B is the baseline length. If λ is the wavelength of the satellite downlink signal, then the signal phase at point #1 will show a delay of 2π B sin θ/λ with regard to point #2. This kind of phase delay is called the interferometric phase, and measurement of the interferometric phases will provide information about the satellite's direction angles and, hence, the satellite's orbital motion.

For example, suppose we have a 10m-baseline interferometer receiving a beacon signal that has a 25-mm wavelength (in the 12-GHz band). If the angle θ changes from 0 to 0.001 deg, the interferometric phase will show a change of 2.5 deg. So, detecting the phase with a resolution of a few degrees provides good information about the satellite's orbital position, because the satellite position is supposed to be controlled within a limited band of 0.1 deg. This example may be thought of as a basic model for our interferometer.
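As a quick check of this arithmetic, the interferometric phase can be computed directly from the baseline length, wavelength, and direction angle. The sketch below is a minimal illustration (not from the book); the numbers reproduce the 10-m baseline, 25-mm wavelength example above.

```python
import math

def interferometric_phase_deg(baseline_m, wavelength_m, theta_deg):
    """Phase delay at receiving point #1 relative to #2, in degrees:
    phi = 2*pi * B * sin(theta) / lambda, expressed here in degrees."""
    return 360.0 * baseline_m * math.sin(math.radians(theta_deg)) / wavelength_m

B = 10.0      # baseline length (m)
lam = 0.025   # wavelength (m), 12-GHz band beacon
# Change of satellite direction from 0 to 0.001 deg
dphi = interferometric_phase_deg(B, lam, 0.001) - interferometric_phase_deg(B, lam, 0.0)
print(f"phase change: {dphi:.2f} deg")   # about 2.5 deg, as in the text
```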
2.2 Reference Point

The receiving points in Figure 2.1 are assumed to be dimensionless, whereas in reality, receiving antennas have dimensions. So, we need to define a reference point for each antenna before defining the baseline of the interferometer. The reference point is considered as follows. Suppose we have an ideal antenna, that is, an antenna with a structure that is ideally symmetric, as shown in Figure 2.2. The antenna has an axially symmetric main dish, and its primary center feed puts an axially symmetric radiation pattern onto the main dish. The antenna is receiving a signal from a satellite right in front of the antenna. Now, suppose we rotate this antenna around a pivot line P1 by a small angle; this makes the main dish move to the position shown by the broken line in Figure 2.2. This rotation will cause no changes in the phase of the signal received by this antenna. Similarly, we rotate the antenna around a pivot line P2, orthogonal to P1, by a small angle. Again this rotation causes no changes in the phase of the received signal. The cross point of P1 and P2 then has a good property and is valid as a reference point when tracking the changing directions of the satellite.
Figure 2.2 Reference point of an ideal antenna.
The reference point is defined in this way for an ideally symmetric antenna or a center-fed parabolic antenna. If the antenna has a nonsymmetric structure, as is the case for commonly used offset-fed parabolic antennas, its reference point can be found by testing. Suppose we have two antennas acting as an interferometer, and they are receiving a target satellite's signal. The test for one of the antennas is illustrated in Figure 2.3. We rotate the antenna around its elevation pivot P by a small angle ∆θ. This rotation will perhaps cause a small change in the interferometric phase, by ∆φ. Let R1 be a line parallel to the line of sight to the satellite, and assume that this line is away from the pivot by x. The reference point is then somewhere on the line R1 if x satisfies

∆φ = 2π x ∆θ / λ   (2.1)

Figure 2.3 Finding the reference point by conducting an elevation rotation test.
This test must proceed in a short time period, say, a few minutes, so that the change in the satellite's direction will be negligibly small during the test and so the phase change ∆φ will come from the antenna rotation only. Because Figure 2.3 is a side view, R1 is actually for a plane parallel to the line of sight to the satellite and parallel to the elevation rotation axis. The reference point thus exists somewhere in this plane R1. After setting the antenna back to its original pointing position, we do one more test as illustrated in Figure 2.4. We rotate the antenna by a small angle ∆θ around its azimuth axis P. This rotation causes a small phase change of ∆φ. The reference point then exists somewhere on a line R2 parallel to the line of sight to the satellite, and R2 is away from the azimuth axis by y, with y satisfying

∆φ = 2π y ∆θ / λ   (2.2)
Here again R2 is for a plane that is parallel to the line of sight to the satellite and parallel to the azimuth rotation axis; within this plane R2, the reference point exists. The reference point therefore exists on the line at which planes R1 and R2 cross each other. This line will cross the main dish surface, and this crossing point can be set as the reference point of the antenna. We do the same test for the other antenna to find its reference point, and finally, the baseline is defined as connecting the two reference points.

Figure 2.4 Finding the reference point by conducting an azimuth rotation test.

Note that the reference point we have defined is different from the phase center of an antenna. The phase center is a hypothetical point at which a spherical wave originates. So, it applies to horn antennas or omnidirectional antennas, but not to an antenna that radiates a parallel beam. The reference point is an abstract entity, but we need to define it by those tests.

If the satellite is not stationary, the antennas must rotate to keep pointing to the satellite. In such a case, the reference points may become moving points, which complicates the process. Because our target is a geostationary satellite, the antennas are fixed without driving. So, the reference points are regarded as fixed points.

If the two antennas have identical shapes and their primary feeds have identical patterns of radiation, then we do not need to know their exact reference points. Simply mark the geometric center of the main dish surface of each antenna. Then, connecting the two center points determines the baseline. This is valid because the target satellite is distant enough, and so a slight parallel shift in the baseline's placing makes no difference in directional tracking. Using identically designed antennas is thus a good choice in designing an interferometer.
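The two rotation tests amount to inverting (2.1) and (2.2) for the offsets x and y. A minimal sketch of that computation follows; the function name and the sample readings are illustrative, not taken from the book.

```python
import math

def offset_from_rotation_test(dphi_deg, dtheta_deg, wavelength_m):
    """Invert dphi = 2*pi * offset * dtheta / lambda, i.e. (2.1) or (2.2),
    to get the offset of plane R1 (or R2) from the rotation axis, in meters.
    Both angles are given in degrees."""
    dphi = math.radians(dphi_deg)
    dtheta = math.radians(dtheta_deg)
    return dphi * wavelength_m / (2.0 * math.pi * dtheta)

lam = 0.025   # 12-GHz band wavelength (m)
# Hypothetical test readings: a 0.5-deg elevation rotation changed the
# interferometric phase by 10 deg, a 0.5-deg azimuth rotation by -4 deg.
x = offset_from_rotation_test(10.0, 0.5, lam)   # offset of plane R1 from the elevation pivot
y = offset_from_rotation_test(-4.0, 0.5, lam)   # offset of plane R2 from the azimuth axis
print(f"x = {x*1000:.1f} mm, y = {y*1000:.1f} mm")
```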
2.3 Polarization

A microwave is characterized by what kind of shape its electric-field vector, or E vector, traces out as the wave propagates. If the E vector rotates in such a way that the vector traces out a screw-like shape, the wave is said to be circularly polarized, and according to the direction of the rotation, it is called either right-hand circular polarization (RHCP) or left-hand circular polarization (LHCP). If the microwave propagates with its E vector confined in a fixed plane, that is, the vector does not rotate, it is linearly polarized (LP). Old terrestrial microwave links used horizontal and vertical polarizations, and these names were later adopted in satellite links. Actually, the E vectors of LP downlinks from satellites are not precisely horizontal or vertical, but at some skew angles from horizontal or vertical according to the geometry of the satellite and the Earth station; however, it is practical to regard them as horizontal or vertical in an approximate, generic sense.

Thus a satellite communication downlink should be polarized as either RHCP, LHCP, horizontal LP, or vertical LP, and any Earth station antenna must have the same polarization as the downlink to be received. This condition, however, does not necessarily apply to the interferometer. Suppose our interferometer antennas are set to linear polarization. The interferometer can then operate for RHCP and LHCP downlinks at a loss of 3 dB, and for downlinks of vertical and horizontal LP at a loss of about 3 dB if the antenna polarizer is set at some intermediate angle. Such a setting is favorable if there are two or more satellites in the receiving antenna beam and we want to track every satellite in order to determine whether any close approach among satellites is going to happen; that is, if we want to try orbital safety monitoring. If two or more satellites are operating within a small orbital region, their beacons, or telemetry carriers, must have been assigned to different frequencies by coordination. It is then possible for the interferometer to track each beacon separately as a target.

Note here that the two antennas of the interferometer must be set at equal polarization angles. If they are not equal, a constant error will arise in the interferometric phase for a CP downlink, and this error will be positive or negative according to which polarization (RHCP or LHCP) the downlink has. This kind of error must never happen if we try the above-mentioned orbital safety monitoring.
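The 3-dB figures quoted above follow from the standard polarization mismatch relations: a linearly polarized antenna receives a circularly polarized wave with a power factor of 1/2, and two linear polarizations misaligned by an angle ψ couple with a factor cos²ψ. The snippet below is a small illustration of these textbook relations, not a procedure from the book.

```python
import math

def lp_lp_loss_db(psi_deg):
    """Mismatch loss between two linear polarizations offset by psi (deg)."""
    return -10.0 * math.log10(math.cos(math.radians(psi_deg)) ** 2)

def lp_cp_loss_db():
    """Mismatch loss of a linearly polarized antenna receiving a circularly
    polarized wave: half the power is lost regardless of orientation."""
    return -10.0 * math.log10(0.5)

print(f"LP antenna vs. CP downlink   : {lp_cp_loss_db():.1f} dB")    # 3.0 dB
print(f"LP at 45 deg vs. LP downlink : {lp_lp_loss_db(45.0):.1f} dB")  # 3.0 dB
```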
2.4 Sidelobe

Because our tracking targets are geostationary satellites, we can assume that the target satellites should stay within the beams of fixed receiving antennas if the antennas are not too large in diameter and if the antennas are set at right pointing positions. If one of the antennas has a pointing error such that the target downlink is received by the antenna's first sidelobe, then an error of 180 deg would arise in the interferometric phase. Sidelobe reception can occur because interferometric phase detections can have high sensitivities, as we will see later. If we are trying orbital safety monitoring, a fatal error occurs if one satellite is received by two antennas' mainlobes while another satellite is received by one antenna's mainlobe and the other antenna's sidelobe.
2.5 Mechanical Stability

The antennas must have rigid structures so as to withstand wind pressures. Because the wind pressure changes with time, the antenna structure will suffer deformations in a vibrating manner. It is important for the vibrating deformation to settle down to zero after the relief of the wind pressure; that is, the deformation must be elastic. Nonelastic deformations may occur if the vibration causes any slip between bolt-fixed parts of the antenna. The antenna must be rigid enough in this sense, and this requirement for rigidity becomes stricter for interferometers than for satellite communication antennas.
3 Receiving Equipment

The next consideration after the antennas is the receiving equipment. The receiving equipment transfers the satellite signals collected by the antennas to a phase-measuring unit. Microwaves from the satellite are at frequencies of gigahertz or higher, but the phase-measuring unit can only accept signals at frequencies of tens of megahertz, because the unit works at a frequency suitable for digital sampling. So, the receiving equipment must convert the signal frequency downward without losing any of the phase information contained in the original signal. This is again a particularity of the interferometer as compared with the case of satellite communications. In the following sections, we discuss how to maintain phase accuracy when receiving the satellite signals.
3.1 Frequency Conversion

The mechanism of converting a signal frequency is illustrated in Figure 3.1. Here, an incoming high-frequency signal, or a radio-frequency (RF) signal as it is often called, is converted down to a lower, intermediate-frequency (IF) signal. Suppose the RF signal is written as sin(ωR t + φR), with frequency ωR and phase φR. An oscillator, usually called a local oscillator, generates a sinusoidal signal, sin(ωL t + φL), with frequency ωL and phase φL. The RF and the local signals are multiplied together, or mixed, as it is often said:
sin(ωR t + φR) × sin(ωL t + φL)
   = (1/2) cos[(ωR − ωL)t + (φR − φL)] + (1/2) cos[(ωR + ωL)t + (φR + φL)]   (3.1)
The mixing yields two signals with different frequencies. One frequency is ωR − ωL, which is lower than the incoming RF, and the other is ωR + ωL, which is higher. This relationship is illustrated in Figure 3.2. We want the lower frequency for the IF, so we pick it up by using a filter, while cutting away the higher one. Sometimes the local frequency is set higher than the RF to obtain the IF frequency as |ωR − ωL|. Choosing the local frequency ωL thus allows us to obtain the IF at a desired frequency. This kind of unit is called a downconverter.

Now, turn to (3.1) and look at the relationship between phases. The original RF signal has a phase φR, while the downconverted IF has the phase φR − φL. That is, a shift of phase occurs when the RF is converted down to an IF, and the amount of shift equals the local signal's phase. This is an important property of the downconverter. The frequencies and phases of the signals are summarized in Figure 3.1.

Figure 3.1 Converting the frequency.

Figure 3.2 Relationships among frequencies.
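The phase-shift property stated in (3.1) can be checked numerically: mix a sampled RF sinusoid with a local oscillator, keep the difference-frequency term, and read off the IF phase. The following sketch does this with NumPy; all frequencies and phases are arbitrary illustration values, not parameters from the book.

```python
import numpy as np

fs = 1e6                                   # sample rate (Hz)
t = np.arange(0, 0.02, 1 / fs)
f_rf, phi_rf = 120e3, np.deg2rad(40.0)     # RF frequency and phase (illustrative)
f_lo, phi_lo = 100e3, np.deg2rad(25.0)     # local oscillator frequency and phase

rf = np.sin(2 * np.pi * f_rf * t + phi_rf)
lo = np.sin(2 * np.pi * f_lo * t + phi_lo)
mixed = rf * lo                            # contains (f_rf - f_lo) and (f_rf + f_lo) terms

# Pick the 20-kHz difference-frequency (IF) component with a single-bin DFT;
# per (3.1) it is 0.5*cos(2*pi*f_if*t + (phi_rf - phi_lo))
f_if = f_rf - f_lo
z = np.sum(mixed * np.exp(-2j * np.pi * f_if * t))
phase_if = np.rad2deg(np.angle(z))
print(f"IF phase: {phase_if:.1f} deg, expected {np.rad2deg(phi_rf - phi_lo):.1f} deg")
```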
3.2 Receiving Routes

The interferometer has two receiving antennas, which corresponds to two receiving routes that make up the receiving equipment, as illustrated in Figure 3.3. The two routes are identical. A receiving route begins with a low-noise amplifier (LNA), an amplifier with a special property. (The reason for its use will be clarified later in Chapter 5.) Next come the downconverters. Theoretically speaking, a single converter may convert the RF down to the desired frequency by the choice of the local frequency. Practically, a local frequency too near the RF may cause difficulties in the electronics, so it is common to use two or even more downconverters. In the present case, the RF is converted to an IF, and the IF is once more converted down to the frequency to be input to phase detection.

Figure 3.3 Receiving equipment diagram. LNA: low-noise amplifier; D/C: downconverter; LO: local oscillator; RO: reference oscillator; H: hybrid.

We know that a downconverter causes a shift of signal phase. This shift is additive if there are two converters. Our purpose is to measure the phase difference between the satellite signals at the end of route #1 and at the end of route #2. For this measurement to be correct, the phase shift due to frequency conversion in route #1 and route #2 should be equal. This is why the local signals are distributed from a common local oscillator (LO) to every downconverter, as illustrated in Figure 3.3. Actually, the first and the second converters need different frequencies for their locals. So, the LO distributes a signal, for example at 10 MHz, and each downconverter synthesizes the local signal it needs while keeping its local phase locked to the distributed signal.

Besides the LO, there is a reference oscillator (RO) that generates a sinusoidal signal and distributes it as a common signal into the receiving routes. This is primarily for testing the receiving routes, but it will turn out to have a more important function, which becomes critical for the interferometer, as we will see later.
3.3 Phase Stability

The above-mentioned requirement for equal phase should be stated more precisely: the total phase delay over route #1 should be constant, the total phase delay over route #2 should be constant, and the two constants should be equal. Could this requirement for stable phase be fulfilled if we use standard components made for satellite communications?

Normally, receiving routes in satellite communications are required to keep their phase delays stable enough so that the demodulation of phase shift keying (PSK) will work without error. This requirement is directed, however, at rapid phase fluctuations in the region of the PSK rate. Phase fluctuations occurring at slowly changing rates would not affect PSK demodulation. So, we must watch out for possible slow fluctuations in the total phase delay over the receiving route if we use standard components for satellite communications.

The possibility of slow phase fluctuation is tested as follows. Receiving equipment was assembled for the Ku band, following the diagram of Figure 3.3. The RO generates a simulated satellite beacon, and it is divided into two, to be input to the LNA in each route. By watching the phase detection results, we can determine whether the phase fluctuation in question exists. Figure 3.4 shows the result of this test. In this test, the RO generated a simulated Ku-band beacon at 12,500 MHz. The phase was detected every second, the phase data were collected for 20 sec in one session, and the session was cycled every 5 min. The data collected in this way for 2 days are shown in Figure 3.4.
Figure 3.4 Slow phase fluctuations in receiving routes for the Ku-band test case.
Here, we observe the phase fluctuating by more than 180 deg. The observed fluctuation may be attributed to the downconverters. A downconverter is actually not as simple as that illustrated in Figure 3.1; it can be more complex, with two or more conversion stages within it and two or more frequency synthesizers to make local signals internally. Any phase fluctuations in these synthesizers will add up to yield the total fluctuation. Also, any filter used for selecting the wanted signal has its own phase delay, and it may change gradually with temperature in the long term. The total fluctuations thus originating in the receiving routes are observed in a differential manner between route #1 and route #2 in Figure 3.3. The observed phase fluctuation is slow enough that it would not affect PSK demodulation, while it is not negligible at all in our interferometric phase measurement.
3.4 Reference Correction

We need to compensate for the phase fluctuations that occur in the receiving routes. To do this, we use the reference oscillator shown in Figure 3.3. Suppose that we are receiving a target satellite beacon and measuring its interferometric phase as φS. At the same time, we receive the RO signal and measure its interferometric phase as φR. If the phase fluctuation affects the satellite beacon and the RO signal in the same manner, then φS − φR will be the correct measurement of the interferometric phase that we want to know. This will be true if the satellite beacon and the RO signal are not wide apart in frequency, and if φS and φR are measured at the same time. This process is called the reference correction, and the RO signal is called the reference signal. The LO supplies its signal also to the RO, so that the RO will have its phase locked, because this is better for the phase coherence of the system.

If we receive a beacon coming from a real satellite, it would be difficult to see whether the reference correction is working well, because the satellite is in motion. So, we consider a test, slightly modifying the system of Figure 3.3 as illustrated in Figure 3.5. We place a simulating oscillator (SO) that simulates a satellite beacon, and combine its signal with the RO signal. The LO signal is not supplied to this SO, because in reality the satellite beacon's phase has no relationship with the LO's phase. The interferometer receives the simulated satellite beacon instead of the true satellite, and to this received signal we apply the reference correction using the RO. The simulated satellite is not in motion, so we can clearly observe the performance of the reference correction. The test result is shown in Figure 3.6. The simulated beacon and the reference signal were 2 MHz apart in frequency in this case. Here, the phase is stable to within a fraction of a degree. If we look closely at the data, they do not seem to be purely random but include some small systematic undulations.
Figure 3.5 Adding an oscillator for the reference correction test. SO: simulating oscillator for satellite beacon.
Figure 3.6 Result of reference correction for the Ku-band test case.
So it would be reasonable to say that the reference correction reduces the phase fluctuation to the order of 1 deg. Ideally speaking, the reference correction should be able to reduce the phase fluctuation to zero. In practice, the corrected phase settles to a nearly constant value, but not to zero. This result suggests that the phase delay of a receiving route may have frequency dependence, and the difference between the beacon and reference frequencies was the reason for that nonzero constant, or bias. Because it is impossible to set the satellite beacon and the reference signal at the same frequency, this kind of constant bias will be unavoidable in interferometric phase measurements. We must remember this kind of bias when applying our interferometer to satellite tracking.
To summarize the discussions so far, we can assemble the receiving equipment by using standard units and components for satellite communications, and the use of reference correction stabilizes the system phase to the order of 1 deg.
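In software, the reference correction is just a subtraction of two simultaneously measured interferometric phases, wrapped back into a convenient range. A minimal sketch follows; the sample values are invented for illustration.

```python
import numpy as np

def wrap_deg(phase_deg):
    """Wrap a phase (deg) into the interval (-180, 180]."""
    return (np.asarray(phase_deg) + 180.0) % 360.0 - 180.0

def reference_correct(phi_sat_deg, phi_ref_deg):
    """Reference correction: subtract the reference-signal interferometric
    phase from the satellite-beacon interferometric phase, sample by sample."""
    return wrap_deg(np.asarray(phi_sat_deg) - np.asarray(phi_ref_deg))

# Illustrative data: a common slow drift contaminates both measurements
drift = np.linspace(0.0, 150.0, 5)          # receiving-route fluctuation (deg)
phi_sat = wrap_deg(23.0 + drift)            # measured beacon phase
phi_ref = wrap_deg(-41.0 + drift)           # measured reference phase
print(reference_correct(phi_sat, phi_ref))  # constant 64 deg: the drift is removed
```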
3.5 Cable Stability Condition

If the reference correction is to work properly, we must not forget one condition. In the diagram of Figure 3.3, the reference RO signal is distributed, after being divided, to route #1 and to route #2 by cables (1) and (2). Let us refer to the phase of the reference signal at the end of cable (1) as reference phase (1), and to that at the end of cable (2) as reference phase (2). Because we know that a constant bias can exist in the interferometric phase even after reference correction, it is not a prerequisite to set reference phases (1) and (2) precisely equal. What is required is that the difference between reference phases (1) and (2) be constant; in other words, reference phases (1) and (2) should vary equally if they vary at all, and this becomes a problem of cable temperatures.

Suppose we have a sample of coaxial cable that is 1m in length. When its temperature goes from 0° to 30°C, which corresponds to a winter-summer temperature variation, the copper line inside the cable becomes longer, through linear thermal expansion, by 0.5 mm. Meanwhile, the permittivity of the polyethylene dielectric fill inside the cable changes with temperature [1]. The permittivity ε determines the velocity v of signal propagation along the cable as follows:

v = c/√ε   (3.2)

Here, c is the velocity of light. The cable's electrical length is inversely proportional to v, hence proportional to √ε. At 0°C the permittivity is 2.39, which makes the electrical length 1.55 times longer than the physical length. At 30°C the permittivity is 2.37, which makes the electrical length 1.54 times longer. Hence the electrical length becomes 10 mm shorter over that temperature range. As a combined effect, the electrical length will become 9.5 mm shorter. This estimate, though a rough one, can be used for examining the cable temperature condition, as follows.

Suppose, as an example, cables (1) and (2) in Figure 3.3 are both 5m long and are being used in a 10m-baseline interferometer. If the reference correction is to work properly, the difference between reference phases (1) and (2) must not change by more than 1 deg. Correspondingly, if the signal wavelength is 25 mm (in the 12-GHz band), the difference between the electrical lengths of cables (1) and (2) must not change by more than 0.07 mm. So, we must keep the temperatures of cables (1) and (2) uniform to within 0.2°C. This would not be too difficult if we place covers over the cables to prevent direct sunlight from irradiating the cables while allowing the air to flow around the cables so as to equalize their temperatures.

Other cables used for transmitting IF signals and distributing local oscillator signals have electrical lengths that may also change with temperature. The effects of their changing lengths will be removed by the reference correction. So, the possible imbalance in cables (1) and (2) as discussed above is the single source of cable phase errors in our interferometric phase measurement.
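The 0.07-mm tolerance quoted above is just the electrical-length imbalance that corresponds to 1 deg of phase at a 25-mm wavelength. The snippet below restates that conversion; it is an illustration, not a design tool from the book.

```python
def cable_phase_error_deg(delta_len_mm, wavelength_mm):
    """Interferometric phase error (deg) caused by a change delta_len_mm in the
    difference between the electrical lengths of reference cables (1) and (2)."""
    return 360.0 * delta_len_mm / wavelength_mm

lam = 25.0   # signal wavelength (mm), 12-GHz band
for dl in (0.07, 0.5, 1.0):
    print(f"electrical-length imbalance {dl:.2f} mm -> {cable_phase_error_deg(dl, lam):.1f} deg")
# 0.07 mm corresponds to about 1 deg, the budget used in the text
```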
3.6 Reference Coupler

The reference signal will be coupled to the LNA's input port normally by using a directional coupler. If our target satellite has sufficient transmission power, then we will choose small-diameter receiving antennas. For a small antenna, the LNA and downconverter are often combined into a single low-noise block converter that fits a small feed unit. In such a case, it is not practical to use a directional coupler for coupling the reference signal. A possible substitute is to use a test horn, as illustrated in Figure 3.7. The antenna dish has a 1.2m effective diameter, and it operates in the C band. The horn is attached to the edge of the dish and radiates the reference signal toward the feed unit. The radiation pattern of the feed unit is adjusted slightly wider than normal, so that the feed unit may pick up the reference radiation. This adjustment would cause a slight loss in antenna gain along with an increase in the antenna's effective noise temperature. Practically, this is not a problem because the phase detection will have a sufficient sensitivity.

Figure 3.7 Reference-coupling horn attached to an antenna dish. (Courtesy of NICT.)

In the test case shown in Figure 3.7, the coupling loss from the horn to the feed in the C band was 36 dB, and in this case the field strength required of the reference radiation was low enough that we did not need a radio license. The horn support must be rigid enough to prevent any change in the horn-feed distance, because otherwise the reference correction would not work correctly.
Reference

[1] Riddle, B., and J. Baker-Jarvis, "Complex Permittivity Measurements of Common Plastics over Variable Temperatures," IEEE Trans. on Microwave Theory and Techniques, Vol. 51, No. 3, 2003, pp. 727–733.
4 Phase Detection

In this chapter, we discuss how to measure the interferometric phase for a satellite beacon and for a reference correction signal. Before measuring the phase, we need to identify where the target beacon signal is by observing the signal spectrum. So, the input signal is processed by Fourier analysis, and the resulting spectrum is also used for determining the phase. Using this principle, we can create a diagram of a phase-measuring unit. The accuracy of phase measurements depends on the ratio of signal to noise. How to improve the accuracy of measurements by reducing the effect of noise is thus an important topic of this chapter. Our discussion starts with the measurement of beacon signals, and then nonbeacon signals are considered, to widen the tracking capability.
4.1 Direct Phase Measurement

A simple idea for measuring the phase is illustrated in Figure 4.1. Consider here a satellite beacon for measurement. A bandpass filter selects the beacon signal x(t) from receiving route #1 and, similarly, y(t) from receiving route #2. Signals x(t) and y(t) have identical waveforms, although they are at different positions along the time axis. A time counter starts at the moment signal x crosses zero from negative to positive, and stops at the moment signal y crosses zero from negative to positive. The time interval thus measured can be converted into a phase angle if the signal frequency is known. In this way, the phase difference between signals x and y can be measured directly.

Figure 4.1 Direct phase measurement by time-interval counting.

This kind of direct measurement might appear to be a clear and practical one, but in reality it has problems. First, any direct current (dc) component existing in the signal or any distortion of the signal waveform will cause an error in zero-cross timing. Second, the bandpass filter for signal selection has a delay time, and this delay time becomes longer if the filter's passband is set narrower for better signal selection. One cannot assume that this kind of delay time should never change; rather, we must regard the filter delay time as an error source in the time-interval measurement. These are the reasons why we do not adopt the idea of direct phase measurement for our interferometer.
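The conversion from a measured zero-crossing interval to a phase angle mentioned above is simply φ = 360° · f · Δt. A one-line sketch, with invented numbers:

```python
def interval_to_phase_deg(delta_t_s, freq_hz):
    """Convert a zero-crossing time interval into a phase difference (deg)."""
    return 360.0 * freq_hz * delta_t_s

# e.g., a 2.5-ns interval on a signal filtered at 70 MHz corresponds to 63 deg
print(f"{interval_to_phase_deg(2.5e-9, 70e6):.0f} deg")
```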
4.2 Separate Measurement

A different concept can be used for phase measurement in which the phases of x(t) and y(t) are measured separately, and their difference is calculated by subtraction. Consider a beacon signal x with frequency ω and phase φ, as follows:

x(t) = cos(ωt + φ)   (4.1)
To measure the phase of this signal, we use a circuit like that illustrated in Figure 4.2. We prepare local oscillator signals, cos ωt and −sin ωt, of which −sin ωt is made from cos ωt by using a 90-deg phase shifter. We then tune the local frequency ω to the incoming signal frequency. Signal x is then multiplied by the locals, and the results are made into time averages by integrating them over a period of time, to obtain Ix and Qx. The integration time periods are equal for Ix and for Qx, and the period is set long enough compared with the signal period, 2π/ω. The signal x(t) in (4.1) is written as

x(t) = cos ωt cos φ − sin ωt sin φ   (4.2)
Figure 4.2 Phase measurement for signal x. INT: time integration.
So, the Ix and Qx obtained after integration are proportional to cos φ and sin φ, respectively. This is because the squared terms, cos²ωt and sin²ωt, have the same dc components, while the cross-term cos ωt sin ωt vanishes. That is, Ix and Qx indicate the cosine and sine components of signal x(t); they represent the in-phase component and quadrature-phase component of x(t) with respect to the zero-phase reference of cos ωt. We can now calculate the phase of signal x, in reference to the local signal, as follows:

φx = tan⁻¹(Qx/Ix)   (4.3)
The phase of signal y(t) is measured in the same way by using an identical circuit prepared for y:

φy = tan⁻¹(Qy/Iy)   (4.4)
Here, phases φx and φy themselves do not have any physical meaning, since the local signal may be at an arbitrary phase. That is, the local signal has no relationship with the incoming satellite beacon. After differencing, the effect of the arbitrary local phase vanishes, and we obtain the interferometric phase:

φ = φx − φy   (4.5)
For this measurement process to work, the local frequency ω must be tuned right to the frequency of the incoming signal, because otherwise I and Q will both vanish after time integration such that determining the phase becomes impossible. This concept of phase measurement eliminates the problems that accompany the concept of direct measurement that was discussed earlier. The dc components in the incoming signals have no effect on I and Q. Signal distortions will have minimal effects, because harmonic components resulting from the distortion will be nullified after time integration. Narrow bandpass filters are not needed, since the signal to be measured is selected by the tuning of the local frequency.
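The separate I/Q measurement of (4.1) through (4.5) is easy to emulate numerically: multiply each route's signal by cos ωt and −sin ωt, time-average, and difference the two arctangents. The sketch below is illustrative only; the frequency, phases, and integration length are invented.

```python
import numpy as np

def iq_phase_deg(sig, omega, t):
    """I/Q phase of a sinusoid against local references cos(wt) and -sin(wt):
    time-average the products (the integration in Figure 4.2), then take atan2."""
    I = np.mean(sig * np.cos(omega * t))
    Q = np.mean(sig * -np.sin(omega * t))
    return np.rad2deg(np.arctan2(Q, I))

f = 1.0e6                                  # signal frequency (Hz), illustrative
omega = 2 * np.pi * f
t = np.arange(0, 1.0e-3, 1.0e-8)           # integration over 1,000 signal periods
x = np.cos(omega * t + np.deg2rad(50.0))   # route #1 signal, phase 50 deg
y = np.cos(omega * t + np.deg2rad(20.0))   # route #2 signal, phase 20 deg

phi = iq_phase_deg(x, omega, t) - iq_phase_deg(y, omega, t)   # (4.5)
print(f"interferometric phase: {phi:.1f} deg")                # about 30 deg
```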
4.3 Fourier Transform

The meaning of I and Q becomes clear if the signal processing in Figure 4.2 is written in terms of complex numbers. If we set

X(ω) = ∫ x(t) e^(−jωt) dt   (4.6)
then Ix is the real part of X(ω), while Qx is the imaginary part of X(ω), and φx is the argument of X(ω). Similarly, if we set

Y(ω) = ∫ y(t) e^(−jωt) dt   (4.7)
then Iy, Qy, and φy are, respectively, the real part, imaginary part, and argument of Y(ω). Equations (4.6) and (4.7) were written for one particular frequency ω and for particular signals x and y. If ω is regarded as a variable, then X is the Fourier transform of signal x, and this x is now regarded as the signal received from route #1. As ω varies, the local frequency in Figure 4.2 sweeps over the bandwidth of the received signal, to find a signal to be measured. In the same context, Y is the Fourier transform of y, the signal received from route #2.

We can now define our concept of interferometric phase measurement, as illustrated in Figure 4.3. The received signals, x and y, are made into Fourier transforms, X and Y. Observe the power spectrum |X(ω)| to find a peak at some frequency ω where a beacon signal exists, and at this ω determine the interferometric phase: φ = φx − φy. Finding the peak corresponds to the right tuning of the local oscillator as mentioned before. Note that arg[X*] = −arg[X]; that is, the argument of a complex number changes its sign for a complex conjugate. The interferometric phase is therefore calculated as

φ = φx − φy = arg[X(ω)Y*(ω)]   (4.8)

Figure 4.3 Interferometric phase measurement. FT: Fourier transform; CP: cross-conjugate product.
The cross-term X(ω)Y*(ω) is referred to as a cross-spectrum, and this is the key to the phase measurement. Note that |XY*| provides the power spectrum of the received signal. This is because x(t) and y(t) have identical waveforms, with only their phases being different. We will use this |XY*| for observing the power spectrum, because it is much better than |X| with regard to the signal-to-noise ratio; the reason will become clear later. To summarize, the phase is measured in the following steps:

1. Make signals x and y into Fourier transforms, X and Y.
2. Set the cross-conjugate spectrum XY*.
3. Observe the power spectrum |XY*| and find a peak at frequency ω.
4. Determine the interferometric phase as the argument of X(ω)Y*(ω).
Steps 3 and 4 must be done for a satellite beacon signal and for a reference correction signal. The frequency of the reference correction signal must be set so that it will not fall on other existing signals, and this is to be checked in step 3.
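The four steps above map directly onto an FFT-based routine: transform both routes, form X · conj(Y), locate the peak in |XY*|, and take the argument there. A minimal sketch, with a synthetic beacon standing in for real receiver data:

```python
import numpy as np

def cross_spectrum_phase(x, y, fs):
    """Steps 1-4: FFT both routes, form the cross-conjugate spectrum,
    find the strongest line, and return (frequency, phase in degrees)."""
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    Z = X * np.conj(Y)                     # cross-spectrum XY*
    k = np.argmax(np.abs(Z))               # peak of the power spectrum |XY*|
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return freqs[k], np.rad2deg(np.angle(Z[k]))

# Synthetic test: a 2-MHz "beacon" with a 40-deg interferometric phase, plus noise
fs = 20.48e6
t = np.arange(4096) / fs
rng = np.random.default_rng(0)
x = np.cos(2 * np.pi * 2e6 * t + np.deg2rad(40.0)) + 0.1 * rng.standard_normal(t.size)
y = np.cos(2 * np.pi * 2e6 * t) + 0.1 * rng.standard_normal(t.size)
f_peak, phi = cross_spectrum_phase(x, y, fs)
print(f"peak at {f_peak/1e6:.2f} MHz, phase {phi:.1f} deg")   # ~2 MHz, ~40 deg
```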
4.4 Problem of Image Spectrum

The Fourier analysis thus plays the key role in phase measurement. Handling the Fourier spectrum, however, needs care. Consider a signal, for example:

x(t) = cos(αt + β)   (4.9)
If we put this signal into the Fourier transform of (4.6), the resulting spectrum X(ω) will look like that shown in Figure 4.4, with two spectral lines at frequencies α and −α. This is because of the relationship

cos(αt + β) = (1/2) e^(j(αt + β)) + (1/2) e^(−j(αt + β))   (4.10)
Accordingly, the phase, or the argument of X(ω), shows different values: β at ω = α, and −β at ω = −α. Suppose that the signal has changed its phase slightly, from β to β + ∆β, in (4.9). This change causes the argument of X(ω) to change by +∆β at ω = α, and by −∆β at ω = −α. That is, the Fourier spectrum shows a nonhomogeneous response with regard to phase when a phase shift occurs in the signal.

Figure 4.4 Spectrum of a sinusoidal signal.

Now, consider a signal whose bandwidth spans from zero through a frequency B. This is actually the case for the signals output from the receiving routes after downconversion. Its Fourier spectrum will look like that shown in Figure 4.5. At any frequency component α, there is a corresponding negative frequency component of −α, owing to the relationship of (4.10). So, the spectrum has two parts, (1) and (2) in Figure 4.5, in symmetry with respect to ω = 0. Part (2) on the negative frequency side may be called an image spectrum of part (1) on the positive frequency side. It is the existence of an image spectrum that causes the above-mentioned nonhomogeneous response. The information contained in the signal is represented by part (1) alone, because part (2) simply mirrors part (1). That is, the image spectrum is a redundant part. The redundant part will consume storage memory and process time in data processing, without merit. So, we should consider eliminating the redundant part in signal processing.

Figure 4.5 Presence of image spectrum.
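The sign flip of the phase at negative frequencies is easy to see with a discrete Fourier transform of a real cosine: the two mirrored spectral lines carry +β and −β. A small illustration (values are arbitrary):

```python
import numpy as np

N, fs = 1024, 1024.0                   # samples, sample rate (Hz)
t = np.arange(N) / fs
alpha, beta = 100.0, np.deg2rad(30.0)  # line at 100 Hz with phase beta = 30 deg
x = np.cos(2 * np.pi * alpha * t + beta)

X = np.fft.fft(x)
freqs = np.fft.fftfreq(N, 1 / fs)
k_pos = np.argmin(np.abs(freqs - alpha))   # bin at +100 Hz
k_neg = np.argmin(np.abs(freqs + alpha))   # bin at -100 Hz (image spectrum)
print(np.rad2deg(np.angle(X[k_pos])))      # about +30 deg
print(np.rad2deg(np.angle(X[k_neg])))      # about -30 deg: the image mirrors the phase
```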
4.5 Signal Processing for Phase Measurement

On the basis of the discussions above, our phase-measuring unit has a diagram like that illustrated in Figure 4.6. This design follows the standard process of digital sampling with a fast Fourier transform (FFT). Signal processing is based on the operation of complex numbers, so as to match the Fourier transform, which operates on data in complex numbers. In the following, we trace how the processing proceeds while assuming specific design parameters for the unit, so as to demonstrate a practical measurement case. The FFT is treated as a given, established technique; its details can be found in a reference, typically [1].

Figure 4.6 Diagram of phase-measuring unit. AD: analog-digital sampling; LPF: lowpass filter; DS: down sampling; INT: time integration. Double lines indicate the flow of complex-number data.

The signal from each receiving route is assumed to have a bandwidth of 20 MHz, which has been downconverted into the span from 0 to 20 MHz. Signal x is from receiving route #1, and y from receiving route #2. We will trace the processing of signal x, which also applies to the processing of signal y. Because the highest frequency component of x is 20 MHz, the sampling rate for x must be at least 40 MHz; in the present case the sampling rate is set at 40.96 MHz to create a better relationship with the FFT cycle rate. The signal at this stage (a) has a spectrum that looks like Figure 4.7(a). Because of sampling, the spectrum pattern repeats along the frequency axis, as indicated by the broken lines. Here, the spectrum part that spans from −20 to 0 MHz is the image spectrum.

Next, in Figure 4.6, a local oscillator generates a signal with a −10-MHz frequency, that is, a function e^(jωt) with ω set at −10 MHz. Signal x and the local signal are multiplied together for mixing. At this stage (b), the signal spectrum becomes like Figure 4.7(b), with a shift of −10 MHz along the frequency axis. When the signal comes out of a lowpass filter with a passband of 10 MHz, its spectrum becomes like that shown in Figure 4.7(c′). Here, it is essential for the filter not to pass any frequencies higher than 10 MHz, so a slight drop-off just inside 10 MHz is tolerated. The filter thus cuts away the image spectrum.
Figure 4.7 Signal spectrums at stages of data processing.
The highest frequency component is now 10 MHz, so a sampling rate of 20.48 MHz suffices, and the spectrum will look like that shown in Figure 4.7(c). Although the sampling rate is halved, the signal now has real and imaginary parts, so the amount of information is the same as in the original signal.

In Figure 4.6, disregard the spreader for the time being, and assume that the signal goes through from (c) to (d). We collect sample data {xi} over 50 µsec, so as to prepare a set of 1,024 data points. This data set is made into spectrum data {Xi} of 1,024 points by using the FFT. The spectrum {Xi} corresponds to Figure 4.7(c′), which has a span of 20 MHz. Similarly, signal y is converted into spectrum data {Yi}, and finally, the cross-spectrum {Zi} = {Xi Yi*} is obtained. The data set {Zi} of 1,024 points is thus obtained every 50 µsec, and the data sets are integrated over a period of time, for example, over 1 sec, for smoothing. In other words, we obtain a vector Z every 50 µsec, and accumulate the vectors over 1 sec. We can now observe the power spectrum by |Zi|, determine where the object signals are, and determine the arguments of Zi for the satellite beacon and for the reference signal, at the repetition cycle of 1 sec. The phases of beacon and reference are thus measured at exactly the same timing, which is ideal for doing reference corrections.

The measurement process works if the FFT runs in a repetition period of 50 µsec or shorter. If the FFT is not that fast, the measurement process must run offline. For example, we track a satellite for 1 min and save the data {xi} and {yi}. In the following 9 min, we process the data to obtain {Zi} before again tracking the satellite for 1 min. That is, we track the satellite intermittently every 10 min. This is doable if we know the correct target signal frequency. If we do not know it, we must search for it in the spectrum data, but this would be impractical if the wait time is too long. Two possible means of relief are to reduce the number of data points or to slow the signal processing clock rate, but each would result in a narrower span of spectrum observation. Ultimately, the FFT should run in real time, and this is a matter of a trade-off between processing speed and spectrum observation span.

Now, the spreader in Figure 4.6 operates as follows. The vernier local generates a local signal of positive or negative frequency, so as to shift the spectrum slightly along the frequency axis. Downsampling is done to thin out the data points. For example, one data point is picked up out of every four points while three points are discarded, to make 1/4 downsampling. As a result, data {Xi} will show a spectrum with a span that is reduced to 1/4, that is, 5 MHz. This effect of the spreader corresponds to setting a center frequency and a frequency span when operating a spectrum analyzer. One can omit the spreader when designing a phase-measuring unit; however, the spreader is useful for searching for a target signal by fine tuning if we have to track an unknown satellite with an unknown downlink format.
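As a minimal numerical sketch of this processing chain (a Python/NumPy illustration, not the author's implementation; the filter design, block bookkeeping, and variable names are assumptions), the chain of Figure 4.6 without the spreader might look like the following:

```python
import numpy as np

FS_REAL = 40.96e6     # real sampling rate [Hz]
F_LO = -10.0e6        # complex local-oscillator frequency [Hz]
N_FFT = 1024          # block length: 50 usec at the 20.48-MHz complex rate

def to_complex_baseband(x_real):
    """Mix the real 0-20 MHz signal down by 10 MHz and decimate by 2."""
    t = np.arange(len(x_real)) / FS_REAL
    mixed = x_real * np.exp(2j * np.pi * F_LO * t)        # spectrum shift of -10 MHz
    # A plain windowed-sinc lowpass (cutoff ~10 MHz) stands in for the anti-image filter.
    n = np.arange(-64, 65)
    taps = np.sinc(0.5 * n) * np.hamming(len(n))
    filtered = np.convolve(mixed, taps / taps.sum(), mode="same")
    return filtered[::2]                                  # 20.48-MHz complex samples

def integrated_cross_spectrum(x_real, y_real, k):
    """Accumulate k blocks of Z = X * conj(Y), as in Figure 4.6 (spreader omitted)."""
    x = to_complex_baseband(x_real)
    y = to_complex_baseband(y_real)
    window = np.hanning(N_FFT)                            # data weighting (see Appendix 4A)
    z = np.zeros(N_FFT, dtype=complex)
    for i in range(k):
        xb = x[i * N_FFT:(i + 1) * N_FFT] * window
        yb = y[i * N_FFT:(i + 1) * N_FFT] * window
        z += np.fft.fft(xb) * np.conj(np.fft.fft(yb))     # time integration of {Zi}
    return z

# Usage (1-sec integration): z = integrated_cross_spectrum(x1, x2, k=20000)
# beacon_cell = np.argmax(np.abs(z))      # locate the beacon in the power spectrum
# beacon_phase = np.angle(z[beacon_cell]) # interferometric phase at that cell
```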
The window in Figure 4.6 sets proper weights for the data {xi} and {yi}. If the data values near x1 and those near x1024 differ too much from each other, the power spectrum suffers from distortion. To ease this problem, less weight is given to data xi near x1 and near x1024. Window patterns can be chosen from established ones [1]. Setting a window thus modifies slightly the apparent shape of the power spectrum, although it does not affect the phase measurement, as seen in the Beacon Measurement section of Appendix 4A at the end of this chapter.
4.6 Noise Reduction
We have so far focused our attention on signals only, but the received signals cannot be free of noise. So we need to examine how noise affects our phase measurement. The representative noise in receiving satellite signals is the thermal noise that originates from the first-stage amplifier, as we will see in Chapter 5. It is an additive noise superposed on the satellite signal. Because the Fourier transform is a linear process, signal and noise are in an additive relationship after the transform. Suppose we have found, after observing the spectrum {|Zi|}, that a satellite beacon exists in Zi. We can then write
Xi = b1 + n1   (4.11)
Yi = b2 + n2   (4.12)
where b1 and b2 are the components attributed to the beacon, and n1 and n2 are noise components. Considering that receiving routes #1 and #2 are of identical design, we assume that

|b1|² = |b2|² = S   (4.13)
|n1|² = |n2|² = N   (4.14)
where S and N denote the beacon power and noise power. If we were to determine the argument of Xi, the situation would look like that shown in Figure 4.8(a). Noise n1 adds to signal b1 to cause an error δφ in the argument. If N is smaller than S, we can say that n1's component orthogonal to b1 causes the error δφ. If we collect many samples of n1 and evaluate their orthogonal components as a root mean square (RMS), it will be equal to √(N/2).
Figure 4.8 Phase error caused by noise.
So, referring to Figure 4.8(b), we evaluate the phase measurement error level as follows:
RMS{δφ} = √(N/(2S))   (4.15)
That is, the error level is inversely proportional to the square root of the signal-to-noise ratio S/N. Let us now consider the determination of the argument of Zi. From (4.11) and (4.12), we can write
Zi = Xi Yi* = (b1 + n1)(b2 + n2)* = b1b2* + b1n2* + b2*n1 + n1n2*   (4.16)
The first term on the right-hand side is our desired term; the other three are undesired terms. Set the desired D and undesired U as follows:
D = b1b2*   (4.17)
U = b1n2* + b2*n1 + n1n2*   (4.18)
The undesired U adds to desired D to cause an error δφ in the argument, as illustrated in Figure 4.9, which is quite similar to Figure 4.8(a).
Figure 4.9 Phase error caused by undesired U.
With regard to our purpose of determining the argument, the desired D may be called a kind of signal, whereas the undesired U is a kind of noise. One can then consider their powers. The power of D is simply

PD = |D|² = |b1|²|b2|² = S²   (4.19)

The power of U is evaluated, as its expected value, as follows:

PU = |U|² = |b1|²|n2|² + |b2|²|n1|² + |n1|²|n2|² = SN + SN + N² = 2SN + N²   (4.20)
In this equation, on the right-hand side, there appear to be cross-terms too, for example, b1n2*b2n1*. Noises n1 and n2 are statistically independent of each other as random variables, because they originate from different receiving routes. So, this term vanishes because its expected value is zero. Similarly, all other cross-terms vanish, which allows this equation to hold.

Next, we examine the effect of time integration. While the {Zi} are being integrated, the desired and undesired terms accumulate in different ways, as illustrated in Figure 4.10. The D terms are constant, so they accumulate linearly, like [D] in Figure 4.10. After k-sample integration, the sum vector has a length k√PD, and its power equals k²PD. The U terms accumulate in a random manner, like [U] in Figure 4.10. As the samples of U are added one after another, the end point of the sum vector goes away from the start point, gradually and with fluctuations; this is called a random walk. In such a case, it is known that the end point will most likely be away from the start point by √(kPU) after adding k samples. Hence, the power of the summed U terms becomes kPU. In this way, k-sample integration makes the desired power increase to k²PD, and the undesired power increase to kPU, while the original values of PD and PU were as given in (4.19) and (4.20). So, the undesired-to-desired power ratio PU/PD after k-sample time integration will become
Figure 4.10 Effects of time integration on desired D and undesired U.
PU/PD = k(2SN + N²)/(k²S²) = (1/k)(2N/S + N²/S²)   (4.21)
By substituting this PU/PD into the N/S in (4.15), we can evaluate the phase measurement error level, as
RMS{δφ} = √[(1/k)(N/S + N²/(2S²))]   (4.22)
If S/N is better than a few decibels, this evaluation becomes simply
RMS{δφ} = √(N/(kS))   (4.23)
By comparing this evaluation with (4.15), we find that the measurement error is reduced by a factor of √(2/k) after integrating k samples. More precisely speaking, the factor √2 owes to making a cross-product of Xi and Yi, and the factor 1/√k owes to the integrating. In our present case, k = 20,000 for a 1-sec integration, so the reduction factor is 1/100. This is equivalent to saying that the effective signal-to-noise ratio is improved by 40 dB, which is a substantial improvement. For this reason, we use {Zi}, rather than {Xi} or {Yi}, to observe the signal spectrum.
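A quick Monte Carlo check of (4.23) can be written in a few lines of Python/NumPy (an illustrative sketch only; the beacon phase and the parameter values below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_phase_rms(snr_db, k, trials=200):
    """Monte Carlo estimate of RMS{dphi} for the k-sample integrated cross-spectrum."""
    s = 1.0                                    # beacon power S
    n = s / 10 ** (snr_db / 10)                # noise power N
    sigma = np.sqrt(n / 2)                     # per-quadrature noise standard deviation
    b1 = np.sqrt(s)                            # beacon component in route #1 (zero phase)
    b2 = np.sqrt(s) * np.exp(1j * 0.7)         # route #2 carries the phase 0.7 rad
    errs = []
    for _ in range(trials):
        n1 = sigma * (rng.standard_normal(k) + 1j * rng.standard_normal(k))
        n2 = sigma * (rng.standard_normal(k) + 1j * rng.standard_normal(k))
        z = np.sum((b1 + n1) * np.conj(b2 + n2))   # k-sample integration of Zi
        errs.append(np.angle(z) - (-0.7))          # arg Z should equal -0.7 rad
    return np.sqrt(np.mean(np.square(errs)))

snr_db, k = 10.0, 20000
measured = simulated_phase_rms(snr_db, k)
predicted = np.sqrt(10 ** (-snr_db / 10) / k)      # sqrt(N/(kS)), Eq. (4.23)
print(f"simulated RMS {measured:.2e} rad, predicted {predicted:.2e} rad")
```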
4.7 Tracking Nonbeacon Signals
We have so far assumed that we are tracking a beacon of a satellite. If we try to track an unknown satellite during orbital safety monitoring, we must first find out where its beacon is. This is no easy task if the downlink spectrum spans hundreds of megahertz. For such a situation we might prefer to track signals from a communication transponder, rather than a beacon.

If a communication transponder signal comes into our phase-measuring unit, its response will look like that shown in Figure 4.11, where the power spectrum {|Zi|} and the phase spectrum {arg Zi} are sketched. The span of spectrum observation is 20 MHz, and this span will perhaps cover part of one communication channel. We choose a subspan of the spectrum where the signal is at a good level and set it as our target. Over this target span, the phase shows a linear slope against the frequency axis, and this slope depends on the difference in distance from the two antennas to the satellite.
Figure 4.11 Measuring a signal that has a bandwidth.
The slope appears from the following mechanism. The phase φ is related to the above-mentioned relative distance l by

φ = 2π l / λ   (4.24)
where λ denotes the wavelength. We know that λ = c/f, with c being the speed of light and f the frequency, so that (4.24) becomes
φ = (2πf/c) l = ωT   (4.25)
The phase φ thus shows a slope against ω, with the coefficient T = l/c denoting the relative delay time of the signals.

Now, referring to Figure 4.11, we average the phases measured over the target span by fitting a line to the slope, to determine the value of the phase at the center frequency ωC. This is equivalent to tracking a fictitious beacon existing at the frequency ωC. In the meantime, we place the reference correction signal at a frequency where the satellite signal level is low enough, as suggested in the figure. So, the reference will typically be placed near the edge of a communication channel. The target and the reference can be set in this way with some degree of freedom, while they must be set in the same span of spectrum observation. Here, we recollect that the observed spectrum is slightly modified by the window applied to data {xi} and {yi}; see Figure 4.6. For the case of beacon
tracking, the window had no effects on phase measurements. If we are measuring a nonbeacon signal, the effect of the window can be neglected, as discussed in the Nonbeacon Measurement section of Appendix 4A.

The averaging of the phase as stated above has an effect on error reduction. If we use a single component Zi and its signal-to-noise ratio is S/N, the level of the phase measurement error is evaluated by (4.23). If we use m components of the Zi's to average the phases, and if these components are of uniform S/N, the error level will be reduced by 1/√m, because the error components of different Zi's are mutually independent. So, in this case the phase measurement error level becomes
RMS{δφ} = √(N/(mkS))   (4.26)
For example, if we use a target signal with a 10-MHz bandwidth, then m = 512 and the factor of error reduction owing to averaging is 0.044; in other words, the effective S/N improvement is 27 dB. Measuring a signal with bandwidth thus enjoys a twofold error reduction: by time integration and by frequency averaging.

If the signal-to-noise ratio can be improved that much, we will be able to track weak satellite signals. Suppose there is an unknown satellite in the proximity of our own satellite. The unknown satellite points its beam not to us but toward some different service area, while its sidelobe sends us a weak signal. Our interferometer will be able to detect and track this kind of weak signal.

Tracking a nonbeacon signal, however, requires caution. If the spectrum in Figure 4.11 seems to come from one unknown satellite, it is possible that actually some other satellite's signal comes in and adds to the spectrum in superposition. In such a case, we must search, by careful signal monitoring, for a frequency at which one satellite emits a signal while the other does not. The spreader in Figure 4.6 will surely assist in this. Tracking an unknown satellite may thus require some skill and patience in signal monitoring, while doubtlessly the capability of measuring nonbeacon signals will widen the range of our orbital safety monitoring.
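The line fit over the target span described above can be sketched as follows in Python/NumPy (illustrative only; the cell indices, the helper that maps cells to frequencies, and the span choice are assumptions, not values from the text):

```python
import numpy as np

def phase_at_center(z_cells, freqs_hz):
    """Fit a line to the phase slope across the target span (Figure 4.11) and
    return the phase at the span's center frequency plus the relative delay T."""
    phase = np.unwrap(np.angle(z_cells))          # unwrap to expose the linear slope
    f0 = freqs_hz.mean()
    slope, intercept = np.polyfit(freqs_hz - f0, phase, 1)
    delay_T = slope / (2 * np.pi)                 # from phi = 2*pi*f*l/c = omega*T
    phase_center = ((intercept + np.pi) % (2 * np.pi)) - np.pi   # wrap to (-pi, pi]
    return phase_center, delay_T

# Usage with the accumulated cross-spectrum {Zi} from the phase-measuring unit:
# target = slice(100, 612)                            # cells covering a 10-MHz subspan
# freqs = cell_index_to_hz(np.arange(1024))[target]   # hypothetical helper function
# phi_c, T = phase_at_center(z[target], freqs)
```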
Reference
[1] Bracewell, R. N., The Fourier Transform and Its Applications, Boston: McGraw-Hill, 1999.
Appendix 4A: Window and Phase Measurement
A window is a function of time, w(t), typically with a shape illustrated in Figure 4A.1(a). It is a real function, defined over the time span of the sampled data. If the time origin is set at the center of the data span, the function is symmetric with respect to t = 0. Its spectrum, W(ω), is then a real function, symmetric about ω = 0. Its shape will look like that of Figure 4A.1(b), which has a small width along the ω axis. The effect of the window on phase measurements is examined as follows.
4A.1 Beacon Measurement
When a signal x(t) is multiplied by the window w(t), the signal spectrum changes into a convolution of W(ω) and X(ω); let the resulting spectrum be denoted by X′(ω). If the input signal is a beacon with frequency ωB, then X′(ω) will show the same pattern as W(ω), while its center is placed at ω = ωB. Similarly, signal y(t) after being windowed has a spectrum Y′(ω), with X′ and Y′ showing identical patterns for the power spectrum. The cross-spectrum is Z′(ω) = X′(ω)Y′*(ω), with its peak power existing at ω = ωB. So, the phase is measured as φ = arg Z′(ωB). At ω = ωB we have
X′(ωB) = W(0) X(ωB)
Y′(ωB) = W(0) Y(ωB)
So, the measured phase is
φ = arg X′(ωB)Y′*(ωB) = arg [W²(0) X(ωB)Y*(ωB)]

Since W²(0) is a real number, we have
Figure 4A.1 Window function (a) and its spectrum (b).
φ = arg X(ωB)Y*(ωB) = arg Z(ωB)

That is, the window has no effects on the phase measurement.
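This beacon result is easy to verify numerically. The following Python/NumPy sketch (the parameter values are arbitrary assumptions) shows that the argument of the cross-spectrum at the beacon cell is the same for any window:

```python
import numpy as np

n = 1024
fs = 20.48e6
k_beacon = 200                               # the beacon sits in FFT cell 200
t = np.arange(n) / fs
phase_true = 0.9                             # interferometric phase to recover [rad]
x = np.exp(2j * np.pi * (k_beacon * fs / n) * t)   # beacon seen by route #1
y = x * np.exp(-1j * phase_true)                   # route #2: same beacon, delayed

for win in (np.ones(n), np.hanning(n), np.blackman(n)):
    X, Y = np.fft.fft(x * win), np.fft.fft(y * win)
    Z = X * np.conj(Y)
    print(np.angle(Z[k_beacon]))             # ~0.9 rad regardless of the window
```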
4A.2 Nonbeacon Measurement
If signal x(t) has a bandwidth, the convolution of X(ω) and W(ω) is obtained by the process illustrated in Figure 4A.2. Here, X(ω) and W(ω) have discrete sample values, and the coefficients a, b, and c correspond to those in Figure 4A.1(b). A sample value of X′ is calculated, for any i, as
X′i = ⋯ + cXi−2 + bXi−1 + aXi + bXi+1 + cXi+2 + ⋯
The convolution is thus a linear combination of {Xi} operating in a sliding manner. Here, for simplicity, we set
X′i = bXi−1 + aXi + bXi+1
This is to take only three terms; however, the following argument does not lose its validity if we take more terms for convolution. Similarly, for signal y(t), we set
Y′i = bYi−1 + aYi + bYi+1

The cross-spectrum is then
Z′i = X′i Y′i* = [bXi−1 + aXi + bXi+1][bY*i−1 + aY*i + bY*i+1]
Now, assume that we are receiving a white signal. When the data of Z′i are collected in large numbers and made into a time average, terms such as bXi−1·aY*i or
Figure 4A.2 Xi′ is obtained from { Xi } through convolution.
Figure 4A.3 Complex vectors in symmetry.
bXi−1·bY*i+1 or any other similar ones will vanish, because, for example, Xi−1 and Yi are statistically independent of each other. As a result, we have
Z′i = b²Xi−1Y*i−1 + a²XiY*i + b²Xi+1Y*i+1
The terms on the right-hand side, being complex vectors, will look like those shown in Figure 4A.3. Because the signal is white, the magnitudes of Xi−1Y*i−1, XiY*i, and Xi+1Y*i+1 are identical. The vectors are at equal angular separations, because Z = XY* has a linear phase slope against frequency, as given by (4.25). So, the vectors are in symmetry with respect to the vector a²XiY*i. Hence, we have, for any i,
arg Z′i = arg a²XiY*i = arg XiY*i = arg Zi

This is why the window has no effects on the phase measurement.
5 Signal, Noise, and Precision
In the previous chapter we discussed the precision of phase measurement as depending on the signal-to-noise ratio (SNR). To complete the discussion, we need to know the powers of signal and noise. So in this chapter we examine the parameters determining the signal and noise for the practical cases of satellite downlinks. This discussion allows us to estimate how sensitive our interferometer would be for detecting and tracking weak satellite signals.
5.1 Required SNR
As mentioned in Chapter 2, the basic model of our interferometer takes the form of Figure 5.1. The interferometer should detect a change in the direction of the target satellite to a resolution of δθ = 0.001 deg. This δθ corresponds to a change of δl = 0.18 mm in the relative path length if the baseline length is 10m. This variation δl then causes the interferometric phase to vary by
δφ = 2π δl / λ   (5.1)
where λ is the wavelength of the satellite signal. If we use phase detection as considered in Chapter 4, then from (5.1) and (4.23), we can estimate the minimum SNR required for detecting that change of δl:

kS/N = (1/δφ)² = (λ/(2π δl))²   (5.2)
We are considering here the effective SNR, because the original S/N is improved to kS/N by integrating k samples, with k acting as an integration gain.
Figure 5.1 Model interferometer.
We are interested in the C band and the Ku band as being the most congested cases of frequency and orbital uses. For these frequency bands, the evaluation by (5.2) becomes
36.4 dB … for the C band (4 GHz; λ = 75 mm)   (5.3)
26.9 dB … for the Ku band (12 GHz; λ = 25 mm)   (5.4)
These are the minimum required effective SNRs at which our interferometer can operate.
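Equation (5.2) is straightforward to evaluate. A small Python sketch (assuming the 10-m baseline and 0.001-deg resolution stated above) reproduces these figures to within the rounding of δl:

```python
import numpy as np

def required_effective_snr_db(baseline_m, wavelength_m, dtheta_deg=0.001):
    """Minimum effective (post-integration) SNR from Eq. (5.2):
    kS/N = (lambda / (2*pi*dl))^2 with dl = B*sin(dtheta)."""
    dl = baseline_m * np.sin(np.radians(dtheta_deg))
    return 20 * np.log10(wavelength_m / (2 * np.pi * dl))

print(required_effective_snr_db(10.0, 0.075))   # C band:  ~36.7 dB (text rounds dl to 0.18 mm, quoting 36.4 dB)
print(required_effective_snr_db(10.0, 0.025))   # Ku band: ~27.2 dB (text quotes 26.9 dB)
```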
5.2 Signal Power and Noise Power
Various parameters determine the SNR in a satellite downlink, as shown in Figure 5.2. Consider first the signal power. Suppose that the satellite is transmitting a beacon with a power PT, and that its radiation is isotropic. The radiated power will then be distributed uniformly over the whole surface of a sphere of radius d, when the radiation has propagated over the distance d. At this distance, place a receiving antenna with aperture area AR. That antenna will then receive a power PT·AR/(4πd²). The satellite actually has a transmitting antenna with an aperture area AT, so as to radiate the beacon in a beam pointed toward the receiving antenna. Accordingly, the power received by the antenna increases by a gain factor GT as given by
GT = 4πAT/λ²   (5.5)

The receiving power, S, then becomes
Figure 5.2 Parameters determining the downlink quality.
S = PT GT AR/(4πd²)   (5.6)
The receiving antenna has its own gain factor GR as given by
GR = 4πAR/λ²   (5.7)
We can rewrite (5.6), by using (5.7), into the form of
S = PT GT L GR   (5.8)
with

L = (λ/(4πd))²   (5.9)
This L is referred to as the free-space propagation loss, because it depends on the propagation distance and the signal wavelength only. The relationship
of (5.8) then tells us how much power is delivered from the satellite to an Earth station in terms of antenna gains and the propagation loss. Note that the areas AT and AR are the effective aperture areas of the antennas, and they are smaller than the areas measured geometrically. If the antenna radiates a microwave beam with its amplitude distributed uniformly over the aperture, then its effective area equals the geometrical area, but in reality this is not possible. The amplitude must become smaller near the rim of the aperture, and this makes the effective area smaller by a factor of efficiency. The efficiency is usually from 60% to 70%.

Consider next the noise power. The primary noise source in a satellite beacon downlink exists in the first-stage amplifier in the receiving equipment, that is, the amplifier that is right next to the receiving antenna's feed unit. Any amplifier contains some resistive circuit elements, and any resistive circuit element placed at a finite temperature generates thermal noise. The noise thus generated internally is output from the amplifier, superposed on the output signal. If the first-stage amplifier has a sufficient gain, then we can disregard the noise generated in later stage amplifiers. So, it is essential to use a first-stage amplifier that generates as little thermal noise internally as possible. In this context the first-stage amplifier is called a low-noise amplifier.

There is one more noise source, which exists in the receiving antenna. This noise comes from the antenna's sidelobe, because it picks up some of the thermal noise radiated from the ground, even if the antenna's main beam is pointing to the satellite. These kinds of thermal noises are summed up, and the total sum is modeled by a hypothetical resistor placed at a temperature T in Figure 5.2. That is, the resistor generates the equivalent thermal noise, and this noise is added to the received signal before being input to the amplifier. Because the hypothetical resistor represents all of the thermal noise sources, the receiving hardware is assumed to be noise free. The temperature T, which is referred to as the noise temperature, is thus a theoretical temperature, not necessarily equal to the surrounding temperature. Given a noise temperature T [K], the noise power N [W] is calculated as
N = kB T B   (5.10)
where kB is Boltzmann's constant, 1.38 × 10⁻²³ [J/K], and B [Hz] is the bandwidth that the signal to be received occupies. These are the minimum essentials for considering our interferometer; more information about how signal and noise behave in a satellite link can be obtained from [1], for example.
5.3 Beacon Downlink Budget
If we know the powers of the signal and noise, we can estimate their ratio. Estimating the quality of a satellite link in terms of its signal and noise is often referred to as link budgeting. A case of link budgeting for a C-band beacon is shown in Table 5.1.

Table 5.1 C-Band Beacon Downlink Budget (Frequency: 4 GHz; Wavelength: 75 mm)
Beacon transmitting power PT: −11.0 dBW (0.08W)
Transmitting antenna gain GT: 34.4 dB (diameter 1.5m, efficiency 0.7)
Propagation loss L: 196.3 dB (distance 39,000 km)
Receiving antenna gain GR: 36.0 dB (diameter 1.8m, efficiency 0.7)
Receiving power S: −136.9 dBW
Noise N: −165.6 dBW (temperature 100 K, bandwidth 20 kHz)
S/N: 28.7 dB
Integration gain: 43.0 dB (20,000 samples)
Effective S/N: 71.7 dB
Required S/N: 36.4 dB
S/N margin: 35.3 dB

Communication satellites have transponders with powers as high as hundreds of watts, whereas the beacon power is usually lower by orders of magnitude. Suppose the beacon power is 0.1W. The beacon is normally phase modulated so as to carry telemetry information; so a part of its power goes to the modulation sidebands and the residual carrier power works as the beacon. The beacon power is thus reduced slightly, for example, to that in Table 5.1. The transmitting antenna assumes a moderate size. The receiving antenna assumes the size of a VSAT (very small aperture terminal) antenna. The aperture-area efficiency and the noise temperature are those commonly assumed in the C band. The bandwidth is 20 kHz, as the beacon will be found in one frequency cell of the FFT and this cell is used for phase detection. The budget shown in Table 5.1 is thus practical, if not rigorous, for a beacon downlink.

Usually in satellite communications we use the carrier-to-noise ratio, C/N, rather than S/N, in link budgets, because the received carrier is still to be input to the demodulation process before obtaining any communication signals. Here we are using the S/N because our desired signal is the beacon itself for our interferometer. The link budget in Table 5.1 takes into account the effective S/N improvement that occurs in the phase-measuring unit.

The link budget as estimated in Table 5.1 shows an ample S/N margin. This margin owes to the integration gain in phase detection. Into this margin we can place possible losses, such as rain attenuation, atmospheric attenuation,
and receiving antenna polarization losses. One important possibility is the satellite antenna pointing loss. If a target satellite exists in our receiving antenna's beam while the satellite is not pointing its antenna beam toward us, we must receive the signal from its antenna sidelobe, thus at a low level. The ample S/N margin would allow us to track such a satellite if we knew its beacon frequency, in the scenario of orbital safety monitoring.

We are presently assuming a specific length for the interferometer baseline, as assumed in Figure 5.1. What would happen if we change the baseline length, for instance, to a half? The δl in (5.2) then becomes half, so the required effective S/N increases by 6 dB. Accordingly, we will rewrite the required S/N in Table 5.1. The baseline length, which is a basic design parameter, connects with the link budget in this manner.

Another case of link budgeting, for a Ku-band beacon, is shown in Table 5.2. This is for a higher frequency, where the aperture-area efficiency tends to become lower, and the noise temperature tends to increase. With these factors taken into account, Table 5.2 is again a practical estimate of the link budget. The S/N margin is even higher compared with the C-band case, because antenna gains are higher for a shorter wavelength.
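The budgets of Tables 5.1 and 5.2 can be reproduced with a short Python sketch built from (5.5) through (5.10) (an illustrative calculation; the default parameters simply restate the table entries):

```python
import numpy as np

K_B = 1.38e-23   # Boltzmann's constant [J/K]

def db(x):
    return 10 * np.log10(x)

def antenna_gain_db(diameter_m, wavelength_m, efficiency):
    return db(efficiency * (np.pi * diameter_m / wavelength_m) ** 2)

def beacon_snr_margin_db(wavelength_m, eff, temp_k, required_snr_db,
                         p_tx_w=0.08, d_tx=1.5, d_rx=1.8,
                         dist_m=39e6, bandwidth_hz=20e3, k_samples=20000):
    gt = antenna_gain_db(d_tx, wavelength_m, eff)
    gr = antenna_gain_db(d_rx, wavelength_m, eff)
    loss = -db((wavelength_m / (4 * np.pi * dist_m)) ** 2)   # free-space loss, Eq. (5.9)
    s = db(p_tx_w) + gt - loss + gr                          # received power [dBW], Eq. (5.8)
    n = db(K_B * temp_k * bandwidth_hz)                      # noise power [dBW], Eq. (5.10)
    effective_snr = (s - n) + db(k_samples)                  # add the integration gain
    return effective_snr - required_snr_db                   # S/N margin [dB]

print(beacon_snr_margin_db(0.075, 0.7, 100, 36.4))   # C band:  ~35 dB margin (Table 5.1)
print(beacon_snr_margin_db(0.025, 0.6, 200, 26.9))   # Ku band: ~50 dB margin (Table 5.2)
```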
5.4 Tracking a Weak Signal
Suppose we have to track an unknown satellite that does not point its beam toward us. We will then need to search out its spilt-over communication signal by spectrum monitoring, as mentioned in Chapter 4. This kind of signal can be extremely weak if the satellite antenna is duly cutting off its off-axis radiation. In such a case, we cannot establish a link budget in the style of Table 5.1 or 5.2. Then, what degree of weak signal can we find and track with our interferometer?
Figure 5.3 Finding an unknown, weak signal.
Table 5.2 Ku-Band Beacon Downlink Budget (Frequency: 12 GHz; Wavelength: 25 mm)
Beacon transmitting power PT: −11.0 dBW (0.08W)
Transmitting antenna gain GT: 43.3 dB (diameter 1.5m, efficiency 0.6)
Propagation loss L: 205.8 dB (distance 39,000 km)
Receiving antenna gain GR: 44.9 dB (diameter 1.8m, efficiency 0.6)
Receiving power S: −128.7 dBW
Noise N: −162.6 dBW (temperature 200 K, bandwidth 20 kHz)
S/N: 33.9 dB
Integration gain: 43.0 dB (20,000 samples)
Effective S/N: 76.9 dB
Required S/N: 26.9 dB
S/N margin: 50.0 dB
If we observe the spectrum of a weak signal in the phase-measuring process as given in Chapter 4, it will look like that shown in Figure 5.3. If the signal level is seen above the noise level, for instance, by 10 dB, we can recognize that the signal exists. If the signal has a bandwidth of 10 MHz, we set it as our measurement target, and integrate the data samples along the frequency axis. There are 512 data samples, which yields an integration gain of 27 dB. The effective SNR then improves to 10 + 27 = 37 dB, and this satisfies the required SNRs given as (5.3) and (5.4) for the interferometer to operate.

Now, that 10 dB mentioned above is the SNR for each frequency cell of the FFT, as obtained after the improvement by time integration. In that case, what was the original SNR before improvement? To estimate it, we refer to (4.21). Note here that the original S/N must have been small; so, we consider the term N²/S² while disregarding the term N/S. That is, the signal-to-noise ratio has changed from an original value S/N to an effective value 2kS²/N² after the processing of the cross-product and k-sample integration. If the effective ratio is 10 dB, the original S/N was −18 dB, and this is the minimum required S/N for a signal to be found. Figure 5.4 illustrates such a signal existing at the minimum required level below the noise.
5.5 Estimates in PFD
The minimum required level as discussed above can be expressed in a different form. Consider any bandwidth of 4 kHz within the target bandwidth, as illustrated in Figure 5.4. In a C-band case with noise temperature T = 100 K, the noise power existing in this 4-kHz bandwidth is, by (5.10), −172.6 dBW. The minimum required signal power in the same bandwidth is then −172.6 − 18 = −190.6 dBW. This is a power coming from the receiving antenna's effective
Figure 5.4 Signal existing at minimum required level.
aperture area, which was 1.78 m². Converting this to the power per unit area of aperture gives −193.1 dBW; to be exact, the unit should be written as dBW/m² per 4 kHz. This is a form called power flux density (PFD), which is used to express the strength of satellite downlink signals as measured on the Earth's surface. That is, we have figured out the minimum required PFD for a signal to be found by our interferometer. If we find a signal existing as such, and if the signal shows a bandwidth spreading over 10 MHz or more, then we can improve its effective SNR by applying the frequency-axis integration as mentioned earlier, to make the signal usable as a tracking target. Similarly, for the Ku band with T = 200 K, the minimum required PFD is calculated as −189.4 dBW/m² per 4 kHz.

Satellite communications and terrestrial communications sometimes share the same frequency band, and this is the case for the C band and Ku band. To prevent satellite downlinks from interfering with terrestrial links, regulations place a maximum allowable PFD for each frequency band [2], and any satellite downlink must not exceed the maximum PFD. Table 5.3 shows the PFD maximums, along with the minimums we have calculated.

Table 5.3 PFD Maximums and Minimums (in dBW/m²/4 kHz)
                     C Band   Ku Band
Maximum allowable:   −142     −138
Minimum required:    −193     −189

From the maximum down to the minimum, there is a wide dynamic range, and any signal coming within this range can be used as a tracking target. The PFD of a communication satellite's downlink will be lower, but not much lower, than the maximum if it is received in its service area. On the other hand, the PFD of an unknown, spilt-over signal can be much lower, by tens of decibels, due to the satellite antenna's
off-axis radiation cutoff, while it is reasonable to expect that the wide dynamic range may well cover such a low-level signal.
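The minimum required PFD figures can be reproduced with a few lines of Python (illustrative; the −18-dB threshold and the antenna parameters are those assumed in the text):

```python
import numpy as np

K_B = 1.38e-23   # Boltzmann's constant [J/K]

def min_required_pfd(temp_k, diameter_m=1.8, efficiency=0.7,
                     ref_bw_hz=4e3, min_snr_db=-18.0):
    """Minimum detectable PFD in dBW/m^2 per 4 kHz (Section 5.5)."""
    noise_dbw = 10 * np.log10(K_B * temp_k * ref_bw_hz)
    signal_dbw = noise_dbw + min_snr_db                 # signal may sit 18 dB below the noise
    area_m2 = efficiency * np.pi * (diameter_m / 2) ** 2
    return signal_dbw - 10 * np.log10(area_m2)          # refer the power to 1 m^2 of aperture

print(min_required_pfd(100, efficiency=0.7))   # C band:  ~-193 dBW/m^2 per 4 kHz
print(min_required_pfd(200, efficiency=0.6))   # Ku band: ~-189 dBW/m^2 per 4 kHz
```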
References
[1] Agrawal, B. N., Design of Geosynchronous Spacecraft, Englewood Cliffs, NJ: Prentice-Hall, 1986, Chap. 7.
[2] ITU, Handbook on Satellite Communications, New York: Wiley Interscience, 2002, Chap. 9.
6 Error Factors
In Chapters 4 and 5 we discussed the effect of thermally generated noise on phase measurements. This noise gives rise to random errors in phase measurements. Meanwhile, there are measurement errors of a different nature that are constant rather than random. They originate from baseline errors, phase ambiguity, and atmospheric refraction. These kinds of error factors are addressed in this chapter.
6.1 Baseline Error
The baseline of an interferometer is a vector quantity defined by its length and orientation. The baseline vector is determined in two steps: first, find the reference point for each antenna, and then survey the relative position of the reference points. The baseline vector as determined through this process will have an error of perhaps the order of a millimeter.

An error in the baseline vector gives rise to a measurement error, as illustrated in Figure 6.1. Consider first the ideal case with no baseline errors, shown in Figure 6.1(a). The angle θ is the satellite's direction with respect to the baseline. The interferometer detects the relative path length l = B sin θ. Actually the interferometer measures the phase delay caused by the relative path length, but here we consider l to be the variable being measured. If u is a unit vector pointing to the target satellite and B the baseline vector, the relative path is written as l = B ⋅ u, with the dot denoting an inner product.

Suppose the baseline vector has an error δB, as in Figure 6.1(b). Depending on the geometry of δB relative to B, the error δB can cause an error in the baseline's length or its orientation or, in general, both. As the baseline changes to B + δB, an error in the measurement arises:
Figure 6.1 Error in baseline vector.
δl = u ⋅ δB   (6.1)
Note that the error δl vanishes if u and δB are orthogonal to each other. Would this error δl vary if the target satellite moves? If the satellite is geostationary, its motion observed at an Earth station appears typically as a variation of ±0.1 deg in the direction angle. If this motion is denoted by ∆u, the variation in the error δl is estimated as follows:
∆(δl) = ∆u ⋅ δB ≤ |∆u||δB| = 0.0017 mm
where |∆u| = sin (0.1 deg) and |δB | = 1 mm are assumed. This variation corresponds to a phase-error variation of 0.02 deg for a Ku-band case with λ = 25 mm, or 0.008 deg for a C-band case with λ = 75 mm, both of which are negligibly small. So, we can assume that a baseline error induces a constant error in phase measurements. We know that a constant bias error is likely to be present in phase measurements even after the process of internal reference correction, as noted in Chapter 3. The error caused by the baseline error is similarly a bias error. Precisely speaking, there is one more possible factor of bias error. As illustrated in Figure 3.3, reference correction signals must be distributed to receiving routes. Here, distribution cables (1) and (2) should have identical electrical lengths, while practically they may have a small difference, and this difference gives rise to a constant bias in phase measurements. The overall phase bias, that is, the sum of those three biases, is an unknown constant. Such a constant of unknown bias exists in every interferometer, and this must be calibrated by using some external reference. This process is often called zero calibration, and its necessity is common to every measurement for satellite tracking. The process of zero calibration will be considered for different cases of orbit estimation in Part III.
6.2 Phase Ambiguity
If we have measured a phase and its result was, for example, 1 deg, then in reality, it may be that the true phase was 361 deg or 721 deg or any other like value. This fact brings about a problem, as illustrated in Figure 6.2. The target satellite is assumed to be at a direction angle θ, as marked by (a) in Figure 6.2, relative to the perpendicular of the baseline. For simplicity here, θ is assumed small so that the satellite direction is nearly perpendicular to the baseline. If B is the baseline length and λ the wavelength, the interferometric phase will take a value of
φ = (2π/λ) B sin θ ≈ (2π/λ) B θ   (6.2)
Suppose the satellite direction changes hypothetically from θ to θ + λ/B, as marked by (b) in Figure 6.2. The interferometric phase given by (6.2) should then change from φ to φ + 2π, but the 2π is disregarded in the phase measurement and so the output phase φ remains unchanged. That is to say, the interferometric phase φ cannot tell which of (a) and (b) is the true direction. Similarly, any direction θ + nλ/B with n an arbitrary integer makes a false direction. There thus appears a cycle of false directions with a period of λ/B. We have no means of excluding the false readings, so there is always a risk of considering a false one to be true, which causes an error. This problem is referred to as phase ambiguity, and every interferometer must face this problem.

We can avoid the ambiguity problem if the target satellite exists in a known, finite zone, as illustrated schematically in Figure 6.3. If the period of the ambiguity cycle, λ/B, is greater than the width of the known zone, then we can identify the true direction. This can be the case for a geostationary satellite, because the satellite is normally kept within a longitude zone with a width of 0.2 deg. Let us arrange it so that the ambiguity cycle period will be, with
Figure 6.2 Phase ambiguity problem.
Figure 6.3 Eliminating the false directions.
margin, 0.4 deg or greater. If λ = 75 mm (C band; 4 GHz), the baseline length must then be 10.7m or shorter. There is thus an upper limit to the baseline length. This upper limit makes a recommended baseline length, because shorter baselines lead to lower tracking accuracies. Adopting the recommended baseline allows us to forget about the phase ambiguity as long as the satellite stays within its keeping zone.

If the possibility exists for the target satellite to go out of its keeping zone, or if we need to use a longer baseline for enhanced tracking accuracy, then we should eliminate the ambiguity problem by using two baselines, as illustrated in Figure 6.4. The longer baseline with antennas A1 and A3 is for precise tracking measurements, although it gives false directions with a small period for the ambiguity cycle. The shorter baseline with antennas A1 and A2 is then used to identify the true direction by virtue of its enlarged ambiguity-cycle period. The baseline A1–A2 still has some degree of ambiguity, but this will be resolved by the finite width of the radiation pattern of the antenna.

Another idea for eliminating the ambiguity problem is to combine the interferometric measurement with a different kind of tracking measurement. For example, ranging and interferometry could be one possible combination. One
Figure 6.4 Interferometer with two baselines.
more idea is to use an interferometer with a mechanically movable baseline. These ideas will be shown in detail in Part III.
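The upper limit on the baseline length follows directly from requiring the ambiguity period λ/B to exceed the angular zone the satellite can occupy. A one-line Python sketch (the 0.4-deg zone width is the value assumed above) gives the figures quoted:

```python
import numpy as np

def max_unambiguous_baseline_m(wavelength_m, zone_width_deg=0.4):
    """Longest baseline whose ambiguity-cycle period lambda/B still exceeds
    the angular zone the satellite can occupy (Section 6.2)."""
    return wavelength_m / np.radians(zone_width_deg)

print(max_unambiguous_baseline_m(0.075))   # C band,  0.4-deg zone: ~10.7 m
print(max_unambiguous_baseline_m(0.025))   # Ku band, 0.4-deg zone: ~3.6 m
```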
6.3 Atmospheric Refraction
One more potential error factor exists that is not inherent to the interferometer, but is in its surrounding environment. Microwaves from the satellite propagate through the atmosphere before arriving at the receiving antennas. The density of the atmosphere changes with altitude, decreasing at higher altitudes. So, the ray of the microwave is refracted as it propagates through the atmosphere, as illustrated in Figure 6.5. Because of the refraction, the elevation angle of the satellite appears slightly higher than the geometrical elevation. This creates a problem because what we need for orbit estimation is the geometrical elevation. The excess elevation due to the refraction can be modeled as a function of elevation angle, and we can refer to a graph shown, for example, in [1], or alternatively use a fitting function:
δEL = 17.6 / (16 + 930 tan(EL))   [deg]   (6.3)
Here, EL is the elevation angle as observed. Subtracting the δEL from the observed elevation yields the geometrical elevation angle. The excess elevation due to the atmospheric refraction affects the interferometric measurement in the following manner. Consider an interferometer with a baseline vector B (see Figure 6.6). Vector B is decomposed into BA and BT, with BA being aligned with, and BT transverse to, the incoming microwave path. The relative path length measured by the interferometer is then given by
Figure 6.5 Atmospheric refraction.
l = BA cos(EL)   (6.4)
since the component BT has no sensitivity to elevation changes. Hence, we have
δl = −BA sin(EL) δEL   (6.5)
This is how an excess elevation δEL causes an error in the interferometric measurement. One can correct for the error δl by using (6.3) and (6.5).

Note, however, that the atmospheric correction may contain some uncertainty. Equation (6.3), or its original graph, is based on a representative model of the atmosphere. Precisely speaking, the atmospheric model should be different for different locations and different seasons, but it is no easy task to create a precise model that takes those variable conditions into account. In reality, we cannot find a precise model in a usable form for elevation refraction correction. Alternatively, what we can find is a precise model that tells us what amount of excess range is produced by the atmospheric refraction; see, for example, [2]. The interferometer measures the relative range from two antennas to the satellite; so, theoretically speaking, we could apply the excess-range model to the paths from the antennas to the satellite in a relative manner to correct for the atmospheric refraction. Applying the model, however, requires collecting atmospheric data, such as humidity, pressure, and temperature, while some errors may remain uncorrected depending on conditions. So, it is practical to use the simple model of (6.3) while allowing for some uncertainty, presumably on the order of 10%. The δl from (6.5) will then contain an uncertain part, and this makes an error factor with regard to atmospheric refraction.

The effect of the atmospheric refraction on the interferometer depends on the geometry of the baseline relative to the satellite, as suggested in Figure 6.6. No effect will appear if the baseline is placed transversely to the incoming microwave path. The maximal effect appears if the baseline is placed along the
Figure 6.6 Decomposing the baseline vector.
microwave path. For example, at an elevation of 30 deg, δEL becomes 0.032 deg and its potentially uncertain part of 0.003 deg becomes the error factor for the maximal case. This is a small error, but not small enough to be neglected. So, the atmospheric error factor will be considered separately for different cases of interferometer applications in Part III.
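For reference, the correction of (6.3) and (6.5) can be coded as follows (a Python sketch; the 10-m aligned baseline in the example is an assumed value, not one from the text):

```python
import numpy as np

def excess_elevation_deg(el_deg):
    """Fitting model of Eq. (6.3) for the refraction excess elevation."""
    return 17.6 / (16 + 930 * np.tan(np.radians(el_deg)))

def path_length_correction_mm(b_aligned_m, el_deg):
    """delta-l from Eq. (6.5); b_aligned_m is the baseline component B_A."""
    d_el_rad = np.radians(excess_elevation_deg(el_deg))
    return -b_aligned_m * np.sin(np.radians(el_deg)) * d_el_rad * 1000.0

print(excess_elevation_deg(30.0))              # ~0.032 deg, as quoted above
print(path_length_correction_mm(10.0, 30.0))   # correction for a 10-m aligned baseline
```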
6.4 Effect of Rainwater
If the atmosphere interests us, what would be the effect of rainfall? When it rains, the water falling onto an antenna dish will make a thin layer of running water. The layer is a dielectric medium, so it causes some phase delay. This effect, however, will not affect the interferometric phase if the delays are equal for the two antennas. If we compare an offset-fed parabolic antenna and a center-fed parabolic antenna, both pointing to a geostationary satellite, the offset-fed antenna has its dish placed in a position nearer to the vertical than the center-fed one. So, using offset-fed antennas will be a better choice because the water runs down more quickly, thus creating less unwanted phase delay.

Precisely speaking, the thickness of the water layer may not be constant; more likely, it will vary from moment to moment in a fluctuating manner. Accordingly, the interferometric phase will show an error, sometimes positive and sometimes negative, thus averaging zero. For this to occur, the antennas should have identical shapes. This suggestion accords with the suggestion we made in Chapter 2 of using identically designed antennas for the accurate determination of the baseline.
References
[1] Soop, E. M., Handbook of Geostationary Orbits, Norwell, MA: Kluwer, 1994, p. 229, Figure 8.2A.
[2] Katsougiannopoulos, S., et al., "Tropospheric Refraction Estimation Using Various Models, Radiosonde Measurements and Permanent GPS Data," in XXIII FIG Congress, Munich, Germany, October 8–13, 2006.
7 Design and Installation
The most fundamental parameter to consider when designing an interferometer is no doubt the length of the baseline. Theoretically speaking, longer baselines enable better tracking accuracies; however, practically speaking, they may also carry the risk of causing an unpredictable phase error. Baseline design is thus a sensitive problem. This chapter addresses this problem and suggests a possible solution.
7.1 System Layout
If we assume that our target satellite stays within its nominal longitude zone, we can then adopt the recommended baseline length, which is typically 10m for the C band or shorter for the Ku band. The antennas should be identical in size and shape, with VSAT-class antennas being suitable choices. Such an interferometer will be set in the layout suggested earlier in Figure 3.3. It occupies three sites: two antenna sites and one center site for phase measurement.

Installing the interferometer system requires special care in terms of the phase balance of the reference distribution cables (1) and (2). Their nominal lengths must be identical and their temperatures must be uniform; that is, thermal phase balancing is required. So, we must avoid a situation in which one cable is shaded from sunlight by a big tree or a building, while the other cable is not, because in such a case the cables may become thermally unbalanced even if the air flows freely around the cables. The center site must supply local reference signals and dc power sources to the antenna sites, and they can be sent through the IF cables by frequency multiplexing.
If we need a longer baseline in order to enhance tracking precision, the system layout changes from that shown in Figure 3.3 to the one shown in Figure 7.1. There is an additional antenna that has a short baseline for resolving the phase ambiguity. The antennas for the short baseline will be placed as a close pair, so the system practically uses three sites similar to the two-antenna case. Three cables are used for distributing the references, and they must satisfy the same requirement of thermal phase balancing. Phases must be measured for the long baseline and for the short baseline; a single measuring unit can do this by means of alternate switching. The motion of a geostationary satellite as viewed from a ground station is not fast, so the unit measures first the long-baseline phase over a couple of minutes, and then switches to the short baseline to measure the phase over a couple of minutes, thus continuing the switching cycle.
7.2 Reflecting Interferometer
Regardless of whether there are two antennas or three, the thermal phase balancing of the reference distribution cables is an essential requirement. This is not difficult to satisfy if the baseline is short enough. For longer baselines, however, the requirement becomes challenging. One cannot predict precisely what will happen to cable temperatures when long cables are routed in troughs or trenches, suspended in the air, or handled in any other manner. So, there is a risk that the cables may show some thermal imbalance when they are installed, but we have no means of measuring the degree of imbalance once the cables are installed. Overcoming this problem is crucial to the design and installation of an interferometer.
Figure 7.1 Layout of three-antenna interferometer. LNC: low-noise amplifier and downconverter as a combined unit; PD: power divider.
One idea to overcome the problem is illustrated in Figure 7.2. There is a plane mirror, which reflects the downlink microwave and guides it into one receiving antenna, while the other antenna receives the microwave directly. The receiving antennas can then be placed side by side, so the reference distribution cables become short enough. The problem of possible thermal imbalance thus vanishes. The system layout becomes compact if the antennas and measuring unit are placed at one site.

The reflecting system assumes that the mirror and antenna are placed within a near-field distance, so that the diffraction of the microwave beam over that distance may be neglected. That is, the microwave propagation can be regarded as virtually geometrical. This condition is well satisfied if the mirror-antenna distance is not more than tens of meters. The size of the mirror is determined geometrically to cover the antenna's aperture. The mirror must cover the aperture fully, because otherwise the receiving gain would suffer a loss and the scattering at the mirror edges would bring unwanted sidelobes.

The mirror surface is conductive, so the signal phase changes by π after reflection. Precisely speaking, there may be some effect of wave theory so that the phase may not propagate exactly as described by geometrical optics. This effect of phase discrepancy will be constant for a mirror and antenna placed at fixed positions. So, the phase discrepancy, along with that π change, will make a constant phase bias, and this bias can be treated as one of those bias-error factors existing in the interferometer. The plane mirror and the antennas must of course have the same mechanical quality, that is, surface precision and mechanical rigidity.

Note that the mirror affects the antenna polarization. If the downlink microwave is circularly polarized, its polarity changes from RHCP (right-hand circular polarization) to LHCP (left-hand circular polarization), or LHCP to RHCP, after reflection. If the microwave is linearly polarized, its polarization angle changes after reflection. This must be remembered when installing the antenna.
Figure 7.2 Interferometer with a plane mirror.
Now, let us examine the whereabouts of the baseline vector. In Figure 7.3, the mirror produces an image of antenna #2 at #2′, so the baseline vector connects #1 and #2′. (These points are the antennas' reference points.) The position of #2′ can be determined by surveying the geometry of antenna #2 and the mirror. Here, we should assume a possible error δθ in surveying the mirror's pointing orientation. This error then causes an error δB in the baseline vector. The error δB is orthogonal to the line of sight to the satellite, so it will cause no errors in interferometric phase measurements, as mentioned in Chapter 6.

Different versions are possible for the reflecting interferometer, as illustrated in Figure 7.4. A symmetric version is shown in Figure 7.4(a), where the
Figure 7.3 Baseline vector of a reflecting interferometer.
Figure 7.4 Versions of reflecting interferometers.
baseline vector connects the center points of the mirrors, and the antennas have the same polarity. The version shown in Figure 7.4(b) would be suitable if the mirrors are placed on a building's rooftop, while the center site is placed on the ground for better accessibility. Figure 7.4(c) shows a version that includes a short baseline for ambiguity resolution. Symmetric versions are better choices for the reasons we have already discussed.

Figure 7.5 shows an example of a plane mirror with 1.8m sides. A Ku-band test baseline was formed in the style of Figure 7.2 by using this mirror and 1.2m-diameter antennas, to make a 40m baseline. Comparing the reflecting route and the direct-receiving route shows that the loss in the receiving gain due to the insertion of the plane mirror was not more than 1 dB. We will see the real use of plane mirrors for satellite tracking in Part III, where they are discussed in more detail.
Figure 7.5 Example of plane mirror (Courtesy of NICT).
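Returning to the baseline geometry of Figure 7.3: the image point #2′ is simply the reflection of antenna #2's reference point across the mirror plane, so the effective baseline vector can be computed from the surveyed geometry. The following Python sketch illustrates that reflection (all coordinates and the mirror orientation below are made-up illustrative values, not data from the text):

```python
import numpy as np

def mirror_image(point, mirror_point, mirror_normal):
    """Reflect a receiving point across the plane mirror to obtain the image
    antenna position (#2' in Figure 7.3); all arguments are 3-vectors in meters."""
    n = mirror_normal / np.linalg.norm(mirror_normal)
    return point - 2 * np.dot(point - mirror_point, n) * n

ant1 = np.array([0.0, 0.0, 0.0])          # direct-receiving antenna #1 (reference point)
ant2 = np.array([2.0, 0.0, 0.0])          # antenna #2, fed via the mirror
mirror_pt = np.array([2.0, 40.0, 0.0])    # a surveyed point on the mirror surface
mirror_n = np.array([0.0, -1.0, 0.3])     # surveyed mirror surface normal

ant2_image = mirror_image(ant2, mirror_pt, mirror_n)
baseline = ant2_image - ant1              # effective baseline vector #1 -> #2'
print(baseline, np.linalg.norm(baseline))
```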
Part II Geostationary Satellite Orbit
8 Overview of Part II: Geostationary Satellite Orbit
Part II of this book discusses the orbits of geostationary satellites. The concept of a geostationary orbit may appear simple at first sight: the satellite and the Earth go around at the same pace, as illustrated in Figure 8.1, so that the satellite looks motionless when observed from the Earth. This is a simple, static image, but in actuality the orbit of a geostationary satellite is not that simple. The satellite is subjected to perturbing forces, such as the gravity of the Sun and the Moon. Accordingly, the orbit changes gradually with time, until it is no longer stationary. The satellite must then generate a restoring force, in order to counteract the perturbation and get back to its original stationary orbit. As a result of those perturbing and restoring forces acting on the satellite, the satellite will move relative to the Earth, and only if this motion is made small can the satellite be practically stationary. It is such dynamics that shape the stationary orbit.

Our discussion of the dynamics of orbital motions starts with Kepler's laws in Chapter 9. Usually in textbooks, Kepler's laws are derived from the fundamental law of universal gravitation, as illustrated in Figure 8.2(a). This derivation solves a differential equation named the equation of motion. Solving this equation, however, often relies on mathematical devices whose physical meaning is not very clear. So, we will choose another way, as shown in Figure 8.2(b). We regard Kepler's laws as given observational facts and input them to the equation of motion. This allows us to study what kind of force is acting on the satellite. We will of course find out the law of universal gravitation by ourselves, and this allows us to place Kepler's laws as the basis of our subsequent discussions.
Figure 8.1 Concept of geostationary orbit.
Figure 8.2 Understanding Kepler’s laws.
Our discussions will be focused on geostationary orbits in Chapter 10 onward. The orbit of a geostationary satellite is practically a near-stationary orbit, which is a near-circular orbit. If an orbit is not circular, then we assume it is elliptical. Actually, there is an idea for treating a near-circular orbit without using an ellipse. The shape of a near-circular orbit is virtually a circle, while only its center is displaced slightly from the Earth's center. This is an approximation, which works well for practical cases of near-stationary orbits. This idea will help us simplify our subsequent discussions.

How the orbit would change when those perturbing and restoring forces mentioned earlier act on the satellite will be discussed in Chapters 11 and 12. Discussed first is the changing of the orbit when the satellite fires a gas jet to generate a restoring force in an impulsive manner. The impulsive orbital change is then expanded to continuous, gradual orbital changes due to perturbing forces. In this way, we derive a theory of orbital perturbations, which is straightforward to follow because we are confining our object to geostationary orbits; otherwise, the theory would become much more complex. The resulting laws of orbital changes then allow us to consider how to keep the satellite stationary, as discussed in Chapter 13.

The topics and discussions in Part II will thus make up a concise theory of geostationary orbits, as illustrated in Figure 8.3. The discussions are
Figure 8.3 Topics and discussions in Part II.
self-contained, without the need for external references in principle. If the reader prefers to derive Kepler’s laws in the way used by standard textbooks, [1] or [2] should be consulted for examples. If the reader is interested in a perturbation theory that does not confine itself to geostationary orbits but covers any orbits in general, refer for example to [3] or particularly [4], where an abyss of mathematical analysis awaits. If the reader wants to understand geostationary orbits over a wide range from fundamental dynamics through operational practices of satellite control, refer to [5], which is a classic volume. The last chapter of Part II addresses the problem of overcrowding geostationary satellites. Though the problem is closely related to the regulation of the use of the orbit, it is better understood on the basis of orbital dynamics and station keeping. The background structure of the problem will be described, with a possible solution being suggested, because everyone who plans or operates geostationary satellites cannot disregard this problem.
References
[1] Bate, R. R., D. D. Mueller, and J. E. White, Fundamentals of Astrodynamics, New York: Dover, 1971, pp. 11–33.
[2] Prussing, J. E., and B. A. Conway, Orbital Mechanics, New York: Oxford University Press, 1993, pp. 3–19.
[3] Moulton, F. R., An Introduction to Celestial Mechanics, New York: Dover, 1970.
[4] Brouwer, D., and G. M. Clemence, Methods of Celestial Mechanics, New York: Academic Press, 1961.
[5] Soop, E. M., Handbook of Geostationary Orbits, Norwell, MA: Kluwer, 1994.
9 Kepler’s Laws The motion of planets around the sun was formulated by Kepler early in the 17th century. He discovered a set of three laws, which are known as Kepler’s laws. The laws apply as well to the motion of artificial satellites around the Earth, thus giving the sound basis of satellite orbits. The laws are shown one by one in the following sections, with their physical meanings clarified, where the original reference of the laws to “planets” and the “sun” has been changed to “satellites” and the “Earth.”
9.1 Kepler’s First Law A satellite around the Earth follows an elliptical orbit, with its one focus at the center of the Earth.
Figure 9.1 illustrates how to draw an ellipse. A moving point P is at a distance r from O and a distance r′ from O′, where O′ is a fixed point placed at some distance away from the origin O. If P moves while maintaining
r +r′ = L
with L being a constant, the locus of P then makes an ellipse, and its focal points are O and O′. This is often pictured by showing a pencil and a piece of thread of length L. The thread is pinned down at its ends to points O and O′, and the pencil is placed at P. By moving the pencil while keeping the thread stretched, we can draw an ellipse. The thread length L is equal to the major axis of the ellipse.
Figure 9.1 Drawing an ellipse.
Let us write the satellite position P in polar coordinates of radius r and angle θ, with respect to the Earth placed at the origin O. One can write the relationship for triangle OPO′ as follows:

r'^2 = r^2 + D^2 - 2rD\cos(\pi - \theta)
where D is the distance OO′. Since r′ = L − r, this equation becomes
2Lr + 2rD\cos\theta = L^2 - D^2
hence,

r = \frac{(L/2)\,(1 - D^2/L^2)}{1 + (D/L)\cos\theta}
If we set
a = L/2; \qquad e = D/L
then we have the equation of an ellipse:
r = \frac{a(1 - e^2)}{1 + e\cos\theta} \qquad (9.1)
where e is called the eccentricity of the ellipse. If e = 0, or namely D = 0, the orbit becomes a circle. If D increases and approaches L, then e approaches 1, and the orbit becomes elongated along its major axis. The eccentricity thus determines the shape of the orbit. The a in (9.1) is called the semimajor axis of the ellipse, and it reduces to the radius of a circle if e = 0. Hence a may be regarded as a generalized radius, and it determines the size of the orbit. The set of (a, e) thus specifies the size and shape of an elliptical orbit.
9.2 Kepler’s Second Law The radius of a satellite sweeps out equal areas in equal times.
Suppose that, in Figure 9.2, we have observed the satellite moving from A to B in a given length of time, and later, moving from C to D in the same length of time. In such a case we have two areas swept out by the moving radius of the satellite. The law states that the two areas will be equal, regardless of where the arcs AB and CD are. This is equal to stating that the rate of area-sweeping per unit time, or the area-sweeping rate, is constant wherever the satellite is in the orbit. So, the satellite moves faster when the radius becomes smaller, and vice versa. The speed of the satellite therefore becomes maximal at the perigee, and minimal at the apogee. If the orbit is circular, then equal areas means equal speeds, so the satellite moves at a constant rate of revolution. In Figure 9.3, line AB containing O is perpendicular to the major axis of the ellipse, and it divides the ellipse into areas S1 and S2. The time needed for the satellite to move from A to B via apogee C is proportional to area S1. Similarly, that time from B to A via perigee D is proportional to area S2. The satellite then spends more time in the S1 side than in the S2 side, and the former-to-latter proportion increases rapidly with the eccentricity of the orbit. Motions in elliptical orbits have this kind of dynamism.
Figure 9.2 Areas swept out by the satellite’s moving radius.
Figure 9.3 Apogee-side area S1 and perigee-side area S2.
9.3 Kepler’s Third Law The square of the orbital period is proportional to the cube of the semimajor axis.
The period here refers to the time required for the satellite to complete one revolution in the orbit. While the first and second laws refer to the motion of one satellite in its orbit, the third law refers to different orbits made by different satellites and states the relationship between these orbits. The third law states implicitly that the orbital period does not depend on the eccentricity. So, the orbits drawn in Figure 9.4 must have equal periods. This law is sometimes referred to as the law of power 3/2, because this is how periods are dependent on semimajor axes.
Figure 9.4 Orbits with equal periods.
9.4 Physical Meanings Kepler's laws describe observations about orbital motions. Here, describe signifies the following. Suppose we do some experiments in our laboratory and collect a set of measurement data. We would then try fitting a curve to the data set, for example, a curve y = ax² + bx + c, to determine the parameters a, b, and c. Kepler did the same kind of fitting of a curve. He found that an ellipse fits exactly the observed facts if the parameters, including semimajor axis and eccentricity, are chosen correctly. But why it should be an ellipse, or why the area-sweeping rate should be constant, was not part of the question; the laws just describe facts. This is because the law of motion, namely, the equation of motion, was not known at the time. We now know the equation of motion. So we can input Kepler's laws into the equation of motion and see what those three laws would mean in terms of dynamics. In other words, we will examine what kind of force is acting on the satellite if its motion obeys Kepler's laws. To do that, we must prepare an equation of motion written in polar coordinates. In Figure 9.5, there is a satellite at r and θ, and some force F yet unknown is acting on it in two components: Fr along the radius and Fθ orthogonal to the radius. Figure 9.6 is for the same satellite, with x being its position vector. A unit vector I is placed at the origin O, and it always points to the satellite. When the satellite moves, the vector rotates around O to track the changing direction of the satellite. Another unit vector J, orthogonal to I, is placed at O. When I rotates, J also rotates so that the two vectors will always be orthogonal to each other. Suppose the satellite has changed its position by ∆x in a unit time, and correspondingly its polar angle has varied by ∆θ. This motion causes the vector I to change by ∆I, which is parallel to J. At the same time J will change by ∆J, which is oppositely parallel to I. From these considerations we can set
Figure 9.5 Force F in two components.
Figure 9.6 I and J making a tracking frame.
\dot{I} = \dot{\theta}\,J, \qquad \dot{J} = -\dot{\theta}\,I \qquad (9.2)

and, hence,

\ddot{I} = \ddot{\theta}\,J + \dot{\theta}\,\dot{J} = \ddot{\theta}\,J - \dot{\theta}^{2} I \qquad (9.3)

Since x = r I, we can write

\ddot{x} = \ddot{r}\,I + 2\dot{r}\,\dot{I} + r\,\ddot{I} \qquad (9.4)

Substituting (9.2) and (9.3) into (9.4) yields

\ddot{x} = I\,(\ddot{r} - r\dot{\theta}^{2}) + J\,(2\dot{r}\dot{\theta} + r\ddot{\theta})

The equation of motion then takes the form

F = m\ddot{x} = m\,I\,(\ddot{r} - r\dot{\theta}^{2}) + m\,J\,(2\dot{r}\dot{\theta} + r\ddot{\theta})

where m is the mass of the satellite. Separating the components for I and J yields

F_r = m\,(\ddot{r} - r\dot{\theta}^{2}) \qquad (9.5)

F_\theta = m\,(2\dot{r}\dot{\theta} + r\ddot{\theta}) = m\,\frac{1}{r}\frac{d}{dt}\!\left(r^{2}\dot{\theta}\right) \qquad (9.6)

These are the equations of motion written in polar coordinates.
We are now ready to input Kepler's laws to the equations of motion, to find Fr and Fθ. First, refer to the second law. Suppose that, in Figure 9.7, the satellite has moved from A to B in a unit time, having caused a change ∆r in radius and a change ∆θ in polar angle. The area swept out by the satellite radius during that time is nearly equal to the area of triangle OAB. The height of B from OA equals (r + ∆r)∆θ ≅ r∆θ, so the triangle area is (r²∆θ)/2. If we set

C = r^{2}\dot{\theta} \qquad (9.7)
then it equals twice the area-sweeping rate. The second law states that C is a constant; hence, it follows from (9.6) that Fθ = 0. That is, the force is acting on the satellite along the radius, either toward the Earth or away from the Earth.

Figure 9.7 Area swept out in a unit time.

We next refer to the first law, and rewrite (9.1) as follows:

r = \frac{p}{1 + e\cos\theta} \qquad (9.8)

with

p = a(1 - e^{2}) \qquad (9.9)

From (9.8) we have

1 + e\cos\theta = \frac{p}{r}

Differentiating both sides with respect to time yields

-e\sin\theta\cdot\dot{\theta} = -\frac{p}{r^{2}}\,\dot{r}

hence,

\dot{r} = \frac{e r^{2}\sin\theta\cdot\dot{\theta}}{p}

We can eliminate θ̇ by using (9.7):

\dot{r} = \frac{eC\sin\theta}{p}

Differentiating both sides one more time yields

\ddot{r} = \frac{eC\cos\theta\cdot\dot{\theta}}{p}

Substitute this into the equation of motion (9.5) as follows:

F_r = m\left(\frac{e}{p}\,C\cos\theta\cdot\dot{\theta} - r\dot{\theta}^{2}\right)

Here, use cos θ = (p/r − 1)/e from (9.8), and eliminate θ̇ again. After arranging the terms, we find that

F_r = -m\,\frac{C^{2}}{p r^{2}} \qquad (9.10)
So, the force pulls the satellite toward the Earth, and it obeys an inverse-square law. If the force always points to the Earth as the satellite moves, then we can only think that the pulling force comes from the Earth. The above derivation so far is for one satellite revolving around the Earth. We now refer to the third law, which states that

P^{2} = K\,a^{3} \qquad (9.11)
holds for every satellite orbit around the Earth, with P being the orbital period and K a constant of proportionality. Referring to Figure 9.8, the area of an ellipse is πab, where b is the semiminor axis given by b = a\sqrt{1 - e^{2}}, or b = \sqrt{ap} by (9.9). So, the constant C, which was twice the area-sweeping rate, is calculated as

C = 2\,\frac{\pi a b}{P} = 2\pi\,\frac{a\sqrt{ap}}{\sqrt{K a^{3}}} = 2\pi\sqrt{\frac{p}{K}}
Figure 9.8 Semimajor axis a and semiminor axis b.
With this substitution, (9.10) becomes

F_r = -m\,\frac{4\pi^{2}}{K}\,\frac{1}{r^{2}}

By setting the constant part as

\mu = \frac{4\pi^{2}}{K} \qquad (9.12)

we have

F_r = -m\,\frac{\mu}{r^{2}} \qquad (9.13)
This is the equation for every satellite around the Earth. Now, if the Earth pulls a satellite with a force Fr, then the satellite must pull the Earth with the same force. This must be expressed in the form
F_r = -M\,\frac{\mu'}{r^{2}}
where M is the mass of the Earth and µ′ is some other constant. This relationship is possible only if the force is expressed in the form of
F_r = -G\,\frac{Mm}{r^{2}}
with G being a constant that does not depend on M or m. Note that this equation is exactly the law of universal gravitation. The constant µ in (9.13) equals GM, and to determine its value, we should observe a satellite orbit and measure
its period P and semimajor axis a. Then (9.11) and (9.12) will show that µ = 398,600 km³/s². Correspondingly, the third law is stated more specifically as

P = 2\pi\sqrt{\frac{a^{3}}{\mu}} \qquad (9.14)
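As a quick numerical check of (9.14), the following minimal Python sketch solves the third law for the semimajor axis of an orbit whose period equals the Earth's rotation period of 86,164.1 sec (the value used again in Chapter 10); the Earth radius used for the altitude is an assumed round value, not a figure from the text.

```python
import math

MU = 398600.0    # Earth's gravitational parameter, km^3/s^2 (value from the text)
P  = 86164.1     # Earth's rotation period relative to inertial space, s

# Kepler's third law, P = 2*pi*sqrt(a^3/mu), solved for the semimajor axis a
a = (MU * (P / (2.0 * math.pi)) ** 2) ** (1.0 / 3.0)

print(f"semimajor axis a = {a:.1f} km")                     # ~42,164 km, the synchronous radius
print(f"altitude above the equator = {a - 6378.1:.1f} km")  # assumes a ~6,378-km equatorial Earth radius
```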
9.5 Significance of Kepler’s Laws We have thus learned that the force acting on the satellite was the force of universal gravitation. Then at this stage we can consider a more fundamental problem: What should be the orbit of a satellite if it is pulled by the gravitation of the Earth? This is referred to as a two-body problem. It is now clear that Kepler’s three laws give us an exact solution to the two-body problem. This is why we can rely on the three laws as the basis for studying satellite orbits. Actually, the satellite motion may slightly differ from the two-body problem if perturbation is present, for example by the Moon’s gravity. Still in such a situation Kepler’s solution is effective if small corrections are taken into account for the perturbation, as we will see later. Kepler’s laws are able to visualize satellite motions by describing the geometry of the orbit and the variation pattern of orbital velocities, as we have seen. The set of semimajor axis, eccentricity, and other angular parameters that determine the orientation of the orbital ellipse in space is referred to as Keplerian orbital elements, and this set is used in daily operation of artificial satellites, despite Kepler’s laws having been discovered so long ago. Sometimes the semimajor axis is replaced by the orbital period through the relationship of (9.14), or sometimes the set of (a, e) is replaced by the perigee and apogee heights, to express the same contents as Keplerian elements.
Figure 9.9 Standing position of Kepler’s laws.
Figure 9.9 illustrates the standing position of Kepler's laws. There is the fundamental layer that contains the equation of motion and the law of universal gravitation, and from this layer Kepler's laws are derived. Kepler's laws then describe what should be observed. On studying the orbits of satellites, we would normally refer to Kepler's laws, rather than to the fundamental layer. This is analogous to studying electric circuits—normally we refer to Ohm's law or Kirchhoff's law rather than to the fundamental equations, even if we know that everything in electricity and magnetism obeys the fundamental equations.
10 Near-Stationary Orbit Geostationary satellites are placed in particular orbits, which are circular orbits right above the equator at a specific altitude. An ideally stationary satellite would not move at all when observed from the ground; practically, however, satellites undergo some motion around their supposed stationary position, thus the term near-stationary satellite. Understanding the motion of near-stationary satellites is essential for an understanding of geostationary orbits. In this chapter we will study the motion of near-stationary satellites on the basis of Kepler’s laws.
10.1 Geostationary and Near-Stationary Orbits A satellite is said to be geostationary if its position relative to the solid Earth is fixed without motion. More precisely speaking, a geostationary satellite has an invariant position in the Earth-fixed, rotating coordinate frame. Three conditions must be satisfied for a satellite to become geostationary. First, its orbital revolution must be synchronized with the Earth's rotation. The Earth rotates relative to the inertial space once in 86,164.1 sec, and this period must be the period of the orbit. Kepler's third law [see (9.14) in Chapter 9] then determines the semimajor axis of the orbit to be a = 42,164.2 km. Second, because the Earth rotates at a constant rate, the satellite also must revolve around the Earth at a constant angular rate. Kepler's second law then requires the orbit to be circular, with e = 0. The above-determined a then becomes the radius of the circular orbit, which is referred to as the synchronous radius. These two conditions make the satellite stationary in longitude. The third condition is to make the satellite stationary in latitude, by requiring the orbit to lie in the equatorial plane. That is, the inclination i—the angle of the orbital plane against the equa-
torial plane—must be zero. A satellite orbit satisfying these three conditions has only one parameter left for free choice: the satellite’s longitude measured along the equator, and this is called a stationary longitude. If a stationary longitude is given to a satellite, then its position is specified in three dimensions with its altitude being 35,786.0 km (i.e., a minus Earth’s radius) right above the equator at the given longitude. This position is referred to as the satellite’s geostationary position, or simply stationary position. The above discussion is for an ideally stationary satellite. Practically speaking, it is not easy to keep the three conditions perfectly all the time. So, tolerating small deviations from the ideal conditions is a common practice if it does not deteriorate the supposed mission of the satellite. Accordingly, the satellite goes into a near-stationary orbit and moves away slightly from its nominal stationary position and moves about the position. To understand this kind of orbital motion is to understand the nature of geostationary orbit. The motion breaks down into different kinds, of which some are constrained in the orbital plane and the other perpendicular to the orbital plane. We will analyze these kinds of motions separately in the following sections in order to clarify the motion of near-stationary satellites.
10.2 Orbit with Small Eccentricity We will start with the motion constrained in the orbital plane. So assume for the time being that the orbital inclination is zero. Consider what would happen if the eccentricity e differs from zero, while assuming the semimajor axis a to be geostationary. Even if e differs from zero, it is not arbitrarily large in practice. One can study the distribution of e for operational geostationary satellites, by referring to satellite orbital data that have been made public [1], and a result shown in Figure 10.1 tells us that e is as small as 0.001 or less. If e is this small, we can neglect e² in the formula for an ellipse given by (9.1), and rewrite the formula as follows:
r = a (1 - e cos θ )
(10.1)
Neglecting the term ae² causes an error not exceeding 40m in satellite position. The presence of nonzero e in (10.1) makes the radius r decrease by ae at θ = 0, and increase by ae at θ = π; these correspond to (1) and (3) in Figure 10.2. The radius neither increases nor decreases at θ = π/2 or θ = 3π/2; these are represented by (2) and (4) in Figure 10.2. One can then presume that the orbit has a circular shape while its center is displaced by ae. Let us show this by using Figure 10.3. A circle with radius a has its center at O′ which is away from the origin O by a small distance d. The radius r at an angle θ is then given by
Figure 10.1 Distribution of eccentricity, in the year 2010.
Figure 10.2 Orbit with a small eccentricity. Broken line: ideal orbit with e = 0.
r \approx a\left(1 - \frac{d}{a}\cos\theta'\right) \qquad (10.2)
where θ′= θ − ε, with ε being a small angle subtended by d. By approximating that
cos θ ′ = cos ( θ - ε) ≈ cos θ + ε sin θ
one can rewrite (10.2) as
Figure 10.3 Displaced circle representing the orbit.
r = a\left(1 - \frac{d}{a}\cos\theta - \varepsilon\,\frac{d}{a}\sin\theta\right)
Since |ε| < d/a, the third term on the right-hand side is small—of the order of (d/a)²—so it can be neglected. Hence, the radius is given by
r = a - d cos θ
(10.3)
Equations (10.1) and (10.3) become identical if d = ae. That is, the circle centered at O′ with radius a represents the shape of the orbit. In subsequent discussions and chapters, we will use this kind of displaced circle to represent the orbit, with a denoting the circle’s radius and d = ae the displacement. Note that this approximation is valid for small-eccentricity orbits, and a is for the semimajor axis in usual terminology.
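To see how well the displaced circle represents the true ellipse, here is a minimal Python sketch (the eccentricity value is an illustrative assumption) comparing the exact radius of (9.1) with the approximation r = a − ae cos θ; the largest difference is of order ae², consistent with the 40-m figure quoted above.

```python
import math

a = 42164.2e3   # orbital radius (semimajor axis), m
e = 0.001       # small eccentricity, typical of controlled geostationary satellites

worst = 0.0
for k in range(3600):
    theta = 2.0 * math.pi * k / 3600.0
    r_exact  = a * (1.0 - e * e) / (1.0 + e * math.cos(theta))   # ellipse, (9.1)
    r_approx = a - a * e * math.cos(theta)                       # displaced circle, (10.1)/(10.3)
    worst = max(worst, abs(r_exact - r_approx))

print(f"largest radius error of the displaced-circle model: {worst:.1f} m")  # ~ a*e^2 ~ 42 m
```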
10.3 Motion Due to Small Eccentricity We need to find the revolution angle θ as a function of time in order to determine the orbital motion. Refer to Figure 10.2 and draw the same orbit once more to make Figure 10.4, where shaded areas (1) through (4) are the areas swept out by the radius in unit times. Kepler’s second law then states that these areas are all equal. So, owing to the variation in radius by (10.3), the rate of orbital revolution is faster at (1) and slower at (3) compared with that of the ideally stationary orbit, whereas at (2) and (4) the rate is neither faster nor slower.
Figure 10.4 Applying Kepler’s second law.
The revolution rate θ̇ will then vary, over one revolution of the satellite, as illustrated in Figure 10.5. We presume that this variation is sinusoidal, because there was a sinusoidal term in (10.3). The revolution rate varies around Ω, the Earth's rotation rate (7.292115 × 10⁻⁵ rad/s), at which an ideally stationary satellite revolves. Thus we set, as a trial,

\dot{\theta} = \Omega + A\cos\Omega t \qquad (10.4)
with A being a constant. Turn to Figure 10.4 again, and let l1 ... l4 denote the arc lengths of the shaded areas (1) through (4). If we compare (1) with (2) or (4), we find that the radius is becoming shorter by the factor of (a−d )/a, so l1 must be longer than l2 or l4 by the factor of a/(a−d ). Hence, the revolution rate at (1) is faster than that of Ω by the factor of
Figure 10.5 Variation in revolution rate.
\frac{a}{a-d}\cdot\frac{a}{a-d} \approx 1 + \frac{2d}{a}

Similarly, the revolution rate at (3) is slower than that of Ω by the factor of

\frac{a-d}{a}\cdot\frac{a-d}{a} \approx 1 - \frac{2d}{a}

Hence, A = 2Ωd/a is suggested in (10.4), so we write

\dot{\theta} = \Omega + \frac{2d}{a}\,\Omega\cos\Omega t \qquad (10.5)

The solution for θ should then be

\theta = \Omega t + \frac{2d}{a}\sin\Omega t \qquad (10.6)
This is, however, a result based on a presumption. To prove it, we will show that the motion of (r, θ) satisfies Kepler’s second law. Let us write the radius r, by using (10.6) and (10.3), as follows:
r = a - d\cos\!\left(\Omega t + \frac{2d}{a}\sin\Omega t\right)
2 d d r = a 1 - cos Wt + 2 sin 2 Wt a a
The small term of (d/a)2 can be neglected, so we have
r = a - d cos Wt
(10.7)
We can now write the area-sweeping rate times two, by using (10.7) and (10.5), as follows:
Near-Stationary Orbit
89
2 2d r 2 θ = (a - d cos Wt ) W 1 + cos Wt a
Arranging the terms yields
2 3 d d 2 2 r θ = W a 1 - 3 cos Wt + 2 cos 3 Wt ≈ W a 2 a a 2
Here, we neglect the terms of (d/a)2 and (d/a)3 as being small enough. The area-sweeping rate then turns out to be constant, satisfying Kepler’s second law. As a result, we have (10.6) and (10.7) to describe the satellite motion with a small eccentricity.
10.4 Motion Due to Nonstationary Radius Next, we set the eccentricity to zero, while allowing the orbital radius a to differ from the stationary radius slightly by ∆a. Then, according to Kepler’s third law [see (9.14) in Chapter 9], the orbital period P changes by ∆P while satisfying the differential relationship of ∆P 3 ∆a = P 2 a
(10.8)
This change of ∆P causes the orbital revolution rate to change from its original Ω = 2π/P to a different Ω′, as
W′ =
2π 2π ∆P ∆P ≈ W 1 = W P + ∆P P P P
So, from (10.8) we have
W′ = W -
3 ∆a W 2 a
(10.9)
Now, the orbit of an ideally stationary satellite is written simply by setting d = 0 in (10.7) and (10.6):
r =a
(10.10)
90
Radio Interferometry and Satellite Tracking
θ = Wt
(10.11)
If a changes to a + ∆a, and Ω changes to Ω′ of (10.9), the orbit then changes to
r = a + ∆a
θ = Wt -
3 ∆a Wt 2 a
(10.12)
(10.13)
These equations describe the satellite motion when the orbit has a slightly off-stationary radius.
10.5 Motions in an Orbital Plane Two kinds of motions are thus possible in the orbital plane: the motion obeying (10.6) and (10.7), and the motion obeying (10.12) and (10.13). Any superposition of the two kinds can be the orbital motion of the satellite:
r = a + ∆a - d cos (Wt - α)
θ = θ 0 + Wt -
3 ∆a 2d Wt + sin (Wt - α) 2 a a
(10.14)
(10.15)
Here, an arbitrary constant α is introduced; this is related to the displacement of the orbital circle. In Figures 10.2 through 10.4 we have drawn the circle as being displaced to the leftward direction, while actually the direction may be any in the orbital plane. That is, the displacement d should be a vector, and its orientation is to determine the constant α. Another constant θ0 is introduced because the choice of the origin of time t should be arbitrary. Note here that we need a reference direction from which the revolution angle θ is to be measured. We assume, for the time being, simply that the reference direction exists somewhere until we give its proper definition later. A question may arise in the above discussion of superposition: In (10.6) and (10.7), what if Ω changes to Ω′ = Ω + ∆Ω with d not being zero? Look at the term, for example, (2d/a) sin Ωt in (10.6). This term should change to
Near-Stationary Orbit
91
2d 2d sin (Wt + t ∆W ) ≈ [sin(Wt ) + t ∆W cos(Wt )] a a
Here, t ∆Ω must be small enough, because otherwise the satellite will drift away from its nominal stationary position. So, (d/a) × t ∆Ω is small to a higher order and so can be neglected. This allows us to study the two kinds of motions separately and then combine them by superposition. Neglecting higher order small terms this way is thus a key to the orbital analysis in the present chapter.
10.6 Motion Perpendicular to an Orbital Plane Now that we have clarified the motion in the orbital plane, we turn to the motion perpendicular to the orbital plane. Assume that the orbit has a small inclination i, while the satellite is stationary in longitude; that is, a equals the stationary radius and e = 0. In Figure 10.6, the surface of the paper is for the equatorial plane, and the north is toward this side from the surface. The orbital plane intersects the equatorial plane at the line that passes the Earth’s center O, and the orbit inclines in such a way that the part marked “+” comes to the north, or to this side. As the satellite revolves around O, it goes periodically to the north and to the south of the equatorial plane. If z denotes the satellite’s displacement from the equatorial plane to the north, it varies periodically to positive and to negative. This is a sinusoidal motion if the inclination is small, and is written as
z = c sin ( Wt - β )
(10.16)
Here, c equals ai, and the constant β depends on the orientation of the line of intersection in Figure 10.6.
Figure 10.6 Orbital plane intersecting the equatorial plane.
92
Radio Interferometry and Satellite Tracking
Giving an inclination to a geostationary orbit sometimes causes the satellite to plot a ground locus in a particular shape, like a figure “8,” as illustrated in Figure 10.7. How such a motion occurs is explained in Figure 10.8, where the orbit is projected onto the equatorial plane. When the satellite is near (1) in Figure 10.8, its radius is becoming contracted by the projection factor of cos i, while its velocity vector suffers no contraction since it is parallel to the equatorial plane. So, its revolution rate appears faster than the Earth’s rotation; this is for (1) in Figure 10.7. When the satellite comes near (2) in Figure 10.8, the velocity suffers the projection contraction while the radius does not; so, the revolution rate appears slower than the Earth’s rotation, and this is for (2) in Figure 10.7. The same argument applies to (3) and (4), with north–south symmetry. In this way, the satellite’s longitude shows two cycles of oscillation when the satellite completes one revolution, and this is why the figure 8-like locus appears. This reasoning suggests that the figure 8-like locus becomes visible if the inclination i is large enough to make cos i differ distinctly from 1. Actually, the longitudinal
Figure 10.7 Ground locus of satellite when inclination is large.
Figure 10.8 Explaining the figure 8–like motion.
Near-Stationary Orbit
93
width of the figure 8-like locus depends on the inclination, as shown in Table 10.1. The width appears proportional to i2 for the inclinations shown in the table (see Appendix 10A at the end of the chapter for the reason). The figure 8-like motion is thus visible only for large-inclination satellites, presumably retired satellites or defunct satellites. The motion of a satellite under normal control is therefore described by (10.14) and (10.15), with (10.16) in addition.
10.7 Relative Position Coordinates The position of an ideally stationary satellite is at
r = a ; θ = Wt ; z = 0
as given by (10.10), (10.11), and (10.16) with i = 0. This position can be conveniently used as a reference point for describing the motion of the satellite. As illustrated in Figure 10.9, the satellite position is measured relative to the reference point O, with coordinate axes R and L pointing to the radial and longitudinal directions, respectively. The R-L axes make an Earth-fixed frame that rotates with the Earth. One more coordinate axis Z points to the north, or to this side of the surface of the paper. If the satellite is not far away from the reference point, its motion is written in the R-L-Z frame, from (10.14), (10.15), and (10.16), as
R = ∆a - ae cos (Wt - α)
L = L0 -
3 ∆a Wt + 2ae sin (Wt - α) 2
Z = ai sin ( Wt - β )
Table 10.1 Visibility of Figure 8-Like Motion Inclination Width of Figure 8 (deg) Locus (deg) 0.5 0.002 1 0.009 2 0.035 4 0.14 8 0.56
(10.17)
(10.18)
(10.19)
94
Radio Interferometry and Satellite Tracking
Figure 10.9 Relative coordinates R-L for describing the satellite motion. The Z-axis, although not shown here, points toward this side. Small eccentricity produces an elliptical motion in R-L plane.
where L0 = a θ0, and d = ae and c = ai were used. We have already seen the two kinds of motions in the orbital plane, while they become more clearly visible in the R-L coordinates. One kind is an elliptical motion produced by the cosine and sine terms in (10.17) and (10.18), as illustrated in Figure 10.9. The ellipse is elongated to double along the L-axis, and the satellite moves toward +L when R is negative. The other is a linear, drifting motion along the L-axis, as illustrated in Figure 10.10. The drift occurs toward −L if R is positive, or toward +L if R is negative. Combining these two kinds of motions by superposition makes the in-plane satellite motion in general. As seen from the forms of (10.17), (10.18), and (10.19), the motion in Z is independent of the motions in R and in L. To summarize, we have established two sets of formulations for describing the orbital motion of near-stationary satellites. One is given by (10.14), (10.15), and (10.16) in normal coordinates, and the other by (10.17), (10.18), and (10.19) in relative coordinates. Each has its own use; for example, the former
Figure 10.10 Off-stationary radius causes a linear drift motion in the R-L plane.
Near-Stationary Orbit
95
is suitable for considering the strategy of orbital station keeping, whereas the latter is suitable for analyzing the variations in range, azimuth, or elevation of the satellite observed at an Earth station.
Reference [1] “Space Track,” http://www.space-track.org/perl/login.pl.
Appendix 10A: Width of Figure 8-Like Locus Suppose that, in Figure 10A.1, satellite A is in a circular orbit. The orbit has a unit radius and lies in the x-y plane. Satellite B is also in a circular orbit with unit radius, while it has an inclination i; the inclined orbit is projected onto the x-y plane in Figure 10A.1. If satellites A and B depart at the same time from the x-axis, they will show a difference ε in their revolution angles as observed on the x-y plane. When A has traveled to the angle θ, B has not yet gone that far. Here, BC appears shorter than AC, by the projection factor of cos i. So,
AB = sin θ (1 - cos i )
This AB, or equally AB ′, subtends an angle ε at O. If i is small, the angle is approximately
Figure 10A.1 Finding the width of the figure 8-like locus.
96
Radio Interferometry and Satellite Tracking
ε = AB ′ = AB cos θ = sin θ cos θ (1 - cos i )
If i is small, the cosine is approximately cos i = 1 - i 2 2
So we have
ε = sin θ cos θ
i2 i2 = sin 2 θ 2 4
The angle ε thus shows a peak-to-peak variation width of i2/2, and this becomes the width of the figure 8–like locus. The exact solution of ε, not relying on approximations, is given by
(1 - cos i ) sin θ ε = θ - tan -1 cos θ
The approximate and exact solutions provide the same result within the significant digits shown in Table 10.1. So, the approximate is accurate enough for those inclinations shown in the table.
11 Changing the Orbit We have so far assumed that the Earth pulls our satellite with the force of the inverse-square law and that this is the only force acting on the satellite. That is, we have assumed a two-body problem, and under this assumption the satellite orbit does not change with time. The size and shape of the orbit, as well as the orbital plane orientation in the inertial space, are all invariant. If, on the other hand, an orbit shows a change, then it means some extra force other than the two-body force is acting on the satellite. We now study how this kind of extra force gives rise to orbital changes. In this chapter, the extra force is assumed to be acting on the satellite for a short duration of time to cause an instantaneous change in the satellite’s velocity. The model we use is that of an orbital maneuver that happens when a gas-jet thruster is used by a satellite to generate a velocity change. In the following we will see, in the context of near-stationary orbits, how an orbit can be changed to a desired orbit through such a maneuver.
11.1 Orbital Energy Let us consider first the energy of an orbit, because that has a close relationship to the size of the orbit. The energy of an orbiting satellite is written, in terms of kinetic energy and potential energy, as
E = \frac{v^{2}}{2} - \frac{\mu}{r} \qquad (11.1)
where v is the velocity and r is the radius. Note that the mass of the satellite, usually denoted by m, is omitted. The E from this equation is for the energy
per unit mass of the satellite, and if the satellite mass is m kilograms then its energy should be E times m. This idea comes from the fact that a 1-kg satellite and a 1,000-kg satellite will show the same orbital motion given the same initial condition. Omitting m this way is a common practice when discussing satellite orbits, and it is not only for energy but also for force, momentum, and angular momentum. The energy in the form of (11.1) thus denotes the energy associated with the orbit, rather than that associated with a particular satellite. Now, the motion of a near-stationary satellite was given in Chapter 10 by (10.7) and (10.5) as follows:
r = a - d\cos\Omega t

\dot{\theta} = \Omega + \frac{2d}{a}\,\Omega\cos\Omega t

One can evaluate the orbital energy at any moment of time, since it is conserved. We evaluate it at the moment such that Ωt = π/2, and at this moment the radius becomes a, while the velocity v is found to be

v^{2} = (\dot{r})^{2} + (r\dot{\theta})^{2} = (\Omega d)^{2} + (\Omega a)^{2} = \Omega^{2}a^{2}\left(1 + \frac{d^{2}}{a^{2}}\right) \approx \Omega^{2}a^{2}

The angular rate Ω is related to the orbital period P, and the period is related to Kepler's third law, (9.14), as follows:

\Omega = \frac{2\pi}{P} = \sqrt{\frac{\mu}{a^{3}}} \qquad (11.2)

Hence, the velocity satisfies

v^{2} = \frac{\mu}{a} \qquad (11.3)

The orbital energy from (11.1) therefore becomes

E = \frac{\mu}{2a} - \frac{\mu}{a} = -\frac{\mu}{2a} \qquad (11.4)
That is, the orbital energy is determined by the radius a alone, without depending on the eccentricity. This is an important property of the orbital energy.1
11.2 In-Plane Orbital Changes Suppose, in Figure 11.1, that a satellite is in a circular orbit with radius a and orbital velocity v. If the velocity increases instantaneously by ∆v at the moment the satellite passes point P, what would happen to the orbit? The ∆v is aligned to the tangential direction, and its magnitude is small compared to that of v. The ∆v makes the kinetic energy increase by
\Delta E = \Delta\!\left(\frac{v^{2}}{2}\right) = v\,\Delta v \qquad (11.5)
This increase becomes the increase in the orbital energy, because no change occurs in the potential energy at the moment the ∆v has occurred. If the orbital energy E increases, then according to (11.4) the orbital radius a must also increase, while satisfying the relationship of
\Delta E = \Delta\!\left(-\frac{\mu}{2a}\right) = \frac{\mu}{2a^{2}}\,\Delta a \qquad (11.6)
By equating (11.5) and (11.6), and using (11.3), we have
Figure 11.1 Effect of tangential ∆v. Broken line: new orbit after change.

1. We found this property for small-eccentricity orbits, although it is actually known that the property is for any eccentricities of elliptical orbits.
\Delta a = 2a\,\frac{\Delta v}{v} \qquad (11.7)
So, the new orbit after the velocity increase has an orbital radius of a + ∆a. Here, the orbital radius refers to the radius of the displaced orbital circle that expresses a near-circular orbit, as discussed in the previous chapter. The new orbit, drawn as a broken line in Figure 11.1, must pass the point P. So, the center of the orbital circle must move from the origin O to O′, with O′O being equal to
d = 2a\,\frac{\Delta v}{v} \qquad (11.8)
The new velocity at P, namely, v + ∆v, must be orthogonal to the new radius O′P. So, O′ exists on the line PO. Let us move on to Figure 11.2, where the satellite is in a circular orbit with radius a. A velocity change ∆v is applied to the satellite at the moment it passes P, while in this case the ∆v points in the radial direction. The velocity v1 after the change has virtually the same magnitude as the original v if ∆v is small. So, no change occurs in the orbital energy and, hence, no change in the orbital radius a. Meanwhile, the direction of v1 is deflected from that of the original v by a small angle ∆v /v. To match this deflection, the orbital circle must rotate around P by the angle ∆v /v. Consequently, the center of the circle moves from O to O′, until PO′ becomes orthogonal to the velocity v1. Hence OO′ equals
d = a\,\frac{\Delta v}{v} \qquad (11.9)

with the direction of OO′ being orthogonal to PO.

Figure 11.2 Effect of radial ∆v.
11.3 In-Plane Orbital Maneuver We have observed that the tangential ∆v is able to change both the radius and eccentricity. Hence, it is from the tangential ∆v that a practical strategy of orbital maneuvering develops, as discussed next. Figure 11.3 shows a maneuver that uses two ∆v’s for changing the orbital radius. The first ∆v takes place at time t, as marked with (1), and the second ∆v takes place at t + P/2 as marked with (2), where P is the orbital period (or 23 hours, 56 minutes, 4 seconds). The first ∆v makes the orbital radius increase by 2a∆v /v, and makes the center move from O to O′. The second ∆v makes the orbital radius increase once more by 2a∆v/v, while making the center move from O′ back to O. As a result, the maneuver makes the radius increase by 4a∆v /v, while leaving the eccentricity unchanged. The choice of the maneuver start time t does not affect the result. The radius can be decreased if the two ∆v’s are negative. Figure 11.4 shows a maneuver for changing the eccentricity. First at time t, as marked with (1), a positive ∆v makes the orbital radius increase by 2a∆v
Figure 11.3 Radius-changing maneuver.
Figure 11.4 Eccentricity-changing maneuver.
/v, and makes the center move from O towards O′ by a∆v/v. The second ∆v, which is negative and taking place at t + P/2 as marked with (2), makes the orbital radius decrease by 2a∆v/v, while making the center move toward O′ once more by a∆v/v. As a result, the maneuver makes the orbital-circle center move by 2a∆v/v from O to O′, while leaving the radius unchanged. The line OO′ can be set to any desired direction by choosing the maneuver start time t. Figure 11.5 shows a modified version, where the negative ∆v comes first and the positive ∆v comes next, to yield the same result. Thus, there are two kinds of maneuvers; one is for changing the radius alone, with the parameters being

+∆va, +∆va, unspecified t

and the other for changing the eccentricity alone, with

+∆vb, −∆vb, specified t

Then by the principle of superposition, a two-impulse maneuver with parameters

∆va + ∆vb, ∆va − ∆vb, specified t
changes an orbit to any targeted orbit with desired radius and eccentricity. This maneuver is doable if the satellite has thrusters facing the east and the west (i.e., no radial thrusters are required) and this is actually the way in which satellites control their orbits.
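For a sense of the velocity increments involved, the sketch below sizes the two burns using (11.7) for the radius change and the relation d = ae for the eccentricity change; the target values of ∆a and ∆e are illustrative assumptions, not figures from the text.

```python
import math

OMEGA = 7.292115e-5            # Earth's rotation rate, rad/s
a = 42164.2e3                  # stationary radius, m
v = a * OMEGA                  # orbital velocity of a stationary satellite, ~3075 m/s

target_da = 2.0e3              # desired radius change, m      (illustrative value)
target_de = 1.0e-4             # desired eccentricity change   (illustrative value)

# Radius-only pair: each tangential burn raises the radius by 2a*dv/v, so two burns give 4a*dv/v.
dva = v * target_da / (4.0 * a)

# Eccentricity-only pair (+dv, then -dv half a revolution later): the circle center moves by
# 2a*dv/v; since the center displacement is d = a*e, this is an eccentricity change of 2*dv/v.
dvb = v * target_de / 2.0

print(f"orbital velocity v       = {v:7.1f} m/s")
print(f"radius pair:       dv_a  = {dva*100:6.2f} cm/s per burn")
print(f"eccentricity pair: dv_b  = {dvb*100:6.2f} cm/s per burn")
print(f"combined burns: {dva + dvb:.4f} m/s and {dva - dvb:.4f} m/s, half a revolution apart")
```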
Figure 11.5 Eccentricity-changing maneuver—modified version.
11.4 Inclination Maneuver One more type of orbital maneuver is designed to change the inclination of the orbital plane, as illustrated in Figure 11.6. A satellite is in a circular orbit with a velocity v, and the orbit lies in the equatorial plane. At the moment the satellite passes point P, a ∆v pointing to the north is applied. This makes the orbital plane incline from the equatorial plane, by an angle
i = \frac{\Delta v}{v} \qquad (11.10)
Since the new velocity v1 has virtually the same magnitude as the original v, the orbital energy remains unchanged, and so does the orbital radius. Figure 11.7 illustrates a different view of the inclination maneuver. A satellite revolving around O at a velocity v has an angular momentum of
H = av
Here, a is the orbital radius, and H as a vector is perpendicular to the orbital plane. Applying the ∆v is equal to applying an impulsive torque T to the orbital plane, as illustrated in the figure, and it causes a change ∆H in the angular momentum. This makes the vector H become oblique as H1. To this H1 the orbital plane must be perpendicular. So, the orbital plane becomes inclined by
Figure 11.6 Inclination maneuver.
Figure 11.7 Inclination maneuver from a different view.
i = \frac{\Delta H}{H}
Since ∆H = a ∆v, we have the same result as i = ∆v / v. The orientation of H as a vector is important because it determines the orientation of the orbital plane. Let us then consider a unit vector made from H, and project the unit vector onto the equatorial plane. This is illustrated in Figure 11.8, where the projected vector lies as u in the equatorial x-y plane, with the magnitude of u measuring the angle of inclination. Any orbital plane is represented in this way by using a u vector. Suppose, in Figure 11.8, that u is for the orbital plane of our satellite, and uT is for a target orbital plane that should be reached next. Then we need a change ∆u to occur, and this occurs if an impulsive torque T acts on the orbital plane, as illustrated in Figure 11.9. For this torque to act, the satellite must generate a ∆v at the moment it passes either P or Q; P is 90 deg ahead of the direction of the desired ∆u, and Q is opposite P. If at P, the ∆v points to this side of the paper surface, or to the north, while at Q it points to the south. The
Figure 11.8 Projected unit vector u to represent orbital plane.
Figure 11.9 Maneuver takes place when satellite is at P or Q.
target is then reached if the magnitude of ∆v is set to ∆v = v |∆u|. In this way, a satellite changes its orbital plane to a targeted orbital plane through a single-impulse maneuver by using a thruster facing the north or south.
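A minimal numerical illustration of ∆v = v|∆u|, with the 0.1-deg plane change chosen only as an example:

```python
import math

OMEGA = 7.292115e-5          # Earth's rotation rate, rad/s
a = 42164.2e3                # stationary radius, m
v = a * OMEGA                # ~3075 m/s, velocity of a geostationary satellite

delta_i_deg = 0.1            # desired inclination change, deg (illustrative value)
dv = v * math.radians(delta_i_deg)   # single north/south impulse, dv = v * |delta u|

print(f"plane change of {delta_i_deg} deg needs dv = {dv:.2f} m/s")   # ~5.4 m/s
```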
12 Orbital Perturbations We saw in the previous chapter that an extra force impulsively acting on the satellite changes the satellite’s orbit. In this chapter we discuss various types of extra forces that are not generated by the satellite itself, but originate from various sources existing in the space environment. These forces are small in magnitude, and they act on the satellite continuously over long periods. The resulting orbital changes occur gradually, at a slow pace as time passes; such changes are called perturbations. Perturbations are small at first, and some types of perturbations grow larger with time, so that the orbit finally becomes nonstationary. This is a serious problem in the orbital operation of satellites. We analyze the problem in this chapter and determine the effects of such perturbations.
12.1 Perturbing Forces The forces that cause perturbations to the satellite orbit, or the perturbing forces as we will refer to them, depend on the type of orbit. For a geostationary orbit, four major forces act on the satellite:

• A force resulting from the nonspherical shape of the Earth;
• A force caused by solar radiation pressure;
• The gravity force of the sun;
• The gravity force of the moon.

These forces are extremely small compared with the two-body gravitational force of the Earth. So their effects on a satellite's orbit are small at first. In some cases, however, the small changes accumulate as time passes, to grow
larger and become visible. In other cases, the small changes partially add to each other and partially cancel each other and, hence, do not grow larger as time passes. We focus our interest here on those perturbations that grow larger with time. We refer to these as long-term perturbations. In the following sections, we analyze the long-term perturbations caused by the four major forces listed above. The forces produce perturbations through different mechanisms, so we will analyze them separately one by one. This discussion may appear lengthy at first sight, but using the simple diagrams from Chapter 11 will help us develop a concise, straightforward theory of perturbations.
12.2 Nonspherical Shape of the Earth If we cut the Earth into halves at its equator, the cross section looks almost circular. Precisely speaking, it is not circular but slightly elliptical, with its maximum radius and minimum radius differing by no more than 140m. Suppose our stationary satellite is placed in the geometry illustrated in Figure 12.1. Here, the elliptical shape of the equatorial cross section is exaggerated. If we assume a two-body problem, the satellite would be pulled toward the Earth’s center O, but actually the pulling force is slightly deflected owing to the extra mass existing near the bulging part A. So, the force acting on the satellite has a small, accelerating component F toward the tangential direction. We will show that this force F causes long-term perturbations in the satellite’s longitude. The force F vanishes, owing to symmetry, if the satellite is due above the bulging part (A or A′) or if it is away from the bulging part by 90 deg (B or B′). On the world map, B and B′ are located in the Indian Ocean and the Eastern Pacific Ocean. The tangential force F thus depends on the stationary longitude λ of the satellite, and it is known as F(λ) and is plotted in Figure 12.2.
Figure 12.1 Equatorial section of the Earth makes an ellipse.
Figure 12.2 Tangential force as a function of longitude.
The force F acting on the satellite is small wherever the longitude is, so the force would not cause any significant change to the orbit during one orbital revolution or so of the satellite. One can then assume that a constant F acts on the satellite during one revolution. This approximation allows us to analyze the orbital change as follows. The force F acting on the satellite over a short period ∆t causes its velocity to increase by ∆v = F∆t. Then by (11.7), the orbital radius a will increase slightly by
\Delta a = \frac{2a}{v}\,\Delta v = \frac{2a}{v}\,F\Delta t \qquad (12.1)
According to Kepler’s third law, or by (10.9), the orbital revolution rate Ω changes by
\Delta\Omega = -\frac{3\Omega}{2}\,\frac{\Delta a}{a} \qquad (12.2)
This ∆Ω becomes the change in the satellite’s drift rate, that is, the rate at which the satellite’s longitude λ drifts with time. This is simply written as
\Delta\dot{\lambda} = \Delta\Omega

So, from (12.1) and (12.2) we have
\Delta\dot{\lambda} = -\frac{3\Omega}{v}\,F\Delta t
Since v = aΩ, we have finally
\ddot{\lambda} = -\frac{3}{a}\,F(\lambda) \qquad (12.3)
Here, the force F is no longer constant, because λ will drift slowly. This is the equation that determines the drift motion of λ in a long-term perturbation in longitude. Meanwhile the tangential force F has the effect of moving the center of the orbital circle, as illustrated in Figure 12.3. The ∆v at 1 makes the center move as indicated by the (1) in Figure 12.3, and similarly that at 2 as (2), that at 3 as (3), and so on. These movements of the center will cancel each other when the satellite completes one revolution. So, there is no long-term increase in the orbital eccentricity; only some oscillatory variation is present. Let us turn to (12.3) for the perturbation in λ. Setting
G(\lambda) = -\frac{3F(\lambda)}{a}
simplifies the equation, as
\ddot{\lambda} = G(\lambda) \qquad (12.4)

Figure 12.3 Eccentricity perturbation does not grow larger.
Here, it looks as if λ is accelerated by some hypothetical force G. This G as a function of λ is shown in Figure 12.4. Though Figures 12.2 and 12.4 look similar, they have different physical meanings.
12.3 Patterns of Longitudinal Drift If force G is a function of position λ, it has a potential U such that
G = -\frac{\partial U}{\partial\lambda}
From G(λ) in Figure 12.4 we can derive the potential U(λ), as plotted in Figure 12.5. The potential curve has two peaks and two bottoms, and they correspond to the places in Figure 12.1 with the same markings. If a satellite is placed at B in Figure 12.5, it will not move because it is at the bottom of the potential curve. If the satellite is placed somewhere away from B, for example, at (1), then it will start moving toward B, pass B, and reach (1′). Then it turns back and henceforth will come and go between (1) and (1′) in an oscillating manner. Similar motion will occur, for example, between (2) and (2′). Consider an oscillatory motion with a small amplitude about point B. If the amplitude is small enough, then in Figure 12.4, G(λ) varies linearly with λ about B. Accordingly, the motion becomes a harmonic oscillation. One can then examine the period of the oscillation by measuring the slope of G(λ) against λ at B; the result becomes 740 days. In relation to this type of long-term motion, one opinion is that the oscillatory motion would attenuate gradually such that the longitude would finally
Figure 12.4 Hypothetical force G as a function of longitude.
Figure 12.5 Potential U of force G.
converge to B or B′. Drifting satellites would thus finally accumulate near B or B′. This opinion may come from the analogy that oscillating motions usually have damping factors if they are small; for example, in an electric resonance circuit there is some resistance that attenuates the oscillation, and in a mechanical vibration with a spring and a mass there is some friction to damp the oscillation. This is, however, not the case for orbital longitudinal oscillation; there is no damping in the oscillatory motion. The coming and going motion in longitude will never stop. Suppose the satellite is placed at (3) in Figure 12.5. It will pass B, get over the peak A′, and pass B′ to reach (3′). Then it turns back and will henceforth come and go between (3) and (3′) in near-round trips. This kind of near-round motion occurs if the starting point is in the region between 143 and 180 deg east. This region exists because peak A is higher than A′, or in other words, A–A′ and B–B′ in Figure 12.1 are not in perfect symmetry. The Earth's gravitational field is not so simple as to show neat symmetry. A model of the Earth's gravitational field, called the gravity model, requires a number of coefficients for spherical harmonic expansion. Gravity models were developed one after another in pursuit of better precision, with the pursuit starting from before the 1970s and continuing today. For our case of geostationary satellites, models in the 1980s are precise enough [1]. From this model the functions F(λ) and G(λ) were set. Using the model together with numerical integration makes it possible to calculate the exact orbital motion with perturbation. Figure 12.6 shows the applicable numerical calculations plotted against time in days. The numerical results agree with the qualitative discussions given earlier for the cases of (1)–(1′), (2)–(2′), and (3)–(3′).
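The kind of numerical integration referred to here can be sketched with a toy model. The snippet below is not the gravity model behind Figure 12.6; it keeps a single harmonic, G(λ) ≈ −K sin 2(λ − λB), with K calibrated to the 740-day small-oscillation period quoted above and with the stable longitude λB set to an assumed placeholder value. Even this crude model shows the undamped, pendulum-like libration.

```python
import math

DAY   = 86400.0
T_LIB = 740.0 * DAY                          # small-amplitude libration period from the text
K     = 0.5 * (2.0 * math.pi / T_LIB) ** 2   # so that small oscillations have period T_LIB

lam_B   = math.radians(75.0)                 # stable longitude (assumed placeholder value)
lam     = math.radians(75.0 + 20.0)          # start 20 deg away from the stable point
lam_dot = 0.0                                # start at rest relative to the Earth

dt = DAY / 8.0
for step in range(int(6.0 * 365.0 * DAY / dt)):      # integrate about 6 years
    acc      = -K * math.sin(2.0 * (lam - lam_B))    # toy single-harmonic G(lambda)
    lam_dot += acc * dt
    lam     += lam_dot * dt
    if step % int(180.0 * DAY / dt) == 0:            # print roughly every 180 days
        days = step * dt / DAY
        print(f"day {days:6.0f} : longitude = {math.degrees(lam):7.2f} deg")
```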
Figure 12.6 Perturbations in longitude, by numerical integration.
12.4 Solar Radiation Pressure Suppose, in Figure 12.7, that a satellite is in a circular orbit with radius a and velocity v. The sun exists at the right-hand side, and its light is incident in parallel rays on the satellite to generate a pressure force F. For the time being, we assume that the force F is constant in magnitude and direction. The effect of this force is to cause slow variations in the orbital eccentricity, or the perturbation of the orbital eccentricity. Our task is to examine how the center of the orbital circle moves.
Figure 12.7 Solar radiation pressure and radial velocity.
The force F acting over a period ∆t causes a velocity increase of F∆t. If the satellite is at (1) in Figure 12.7, this increase has a radial component:
∆v = F ∆t cos θ
Note that the revolution angle θ is measured in reference to the direction toward which F is acting. This ∆v causes no changes in the orbital radius. The ∆v causes, according to (11.9), the orbital circle center to move, from O to O1, by
d = a\,\frac{\Delta v}{v} = \frac{aF}{v}\cos\theta\,\Delta t
So, the center will move at the rate of
\dot{d} = K\cos\theta
with
K = \frac{aF}{v}
If we consider this rate of motion as a vector, it corresponds to (1) in Figure 12.8. The vector has its end point on a circle, whose diameter OP has a length K. As the satellite passes to (2), (3), and (4) in Figure 12.7, the vector end point moves to (2), (3), and (4) in Figure 12.8. Note that the sign of ∆v changes from positive to negative during the motion from (2) to (3) in Figure 12.7. When the satellite travels half an orbit, the vector end point traces a circle. The vector in Figure 12.8 can be decomposed into a constant vector and a rotary vector, as illustrated in Figure 12.9. The constant vector has a length K/2,
Figure 12.8 Vector representing the rate of motion.
Figure 12.9 Decomposing into constant and rotary vectors.
and its end point becomes the origin of the rotary vector that traces circularly through (1) to (4). The constant vector yields a steady motion of the center at the rate of K/2, thus yielding a long-term perturbation. The rotary vector will not yield such a long-term effect, because the vectors cancel each other during one revolution; it merely yields some periodic motion. Let us turn to Figure 12.10, to consider the tangential velocity. If the satellite is at (1) and the force F acts on it during ∆t, the increase in its tangential velocity will be ∆v = F ∆t cos θ
Note that angle θ is measured in reference to the direction orthogonal to F. This ∆v causes, by (11.7), a change in the orbital radius:
\Delta a = \frac{2a}{v}\,\Delta v = \frac{2a}{v}\,F\Delta t\cos\theta
Figure 12.10 Solar radiation pressure and tangential velocity.
When this change is integrated over one revolution, it becomes zero because cos θ is periodic. That is, the orbital radius does not change in the long term. Meanwhile, by (11.8), the ∆v causes the center to move, from O to O1 in Figure 12.10, by
d = 2a\,\frac{\Delta v}{v} = \frac{2aF}{v}\cos\theta\,\Delta t
So, the center will move at the rate of
\dot{d} = 2K\cos\theta
The rate of motion of the center as mentioned above is represented by a vector corresponding to (1) in Figure 12.11. The vector end point is on a circle, whose diameter OP has a length 2K. As the satellite moves through (2), (3), and (4) in Figure 12.10, the vector end point moves through (2), (3), and (4) in Figure 12.11. When the satellite travels half of an orbit, the vector end point traces a circle. Here again the vector is decomposed into a constant vector and a rotary vector. The constant vector has a length K, and this yields the long-term perturbation. Thus, there are two long-term motions, with K/2 and K being their moving rates, and they add to each other. So, the combined motion has the rate of (3/2)K or
\dot{d} = \frac{3aF}{2v} \qquad (12.5)
This motion is directed orthogonally to the direction toward which the force F is acting.
Figure 12.11 Vector representing the rate of motion.
12.5 Position of the Sun We now need to describe the exact position of the sun. Its position is measured with x-y-z coordinates that make an inertial frame, as illustrated in Figure 12.12. Here, S is the sun, and O is the center of the Earth. The z-axis contains the north pole of the Earth, so the x-y plane is the equatorial plane. Defining the orientation of the x-axis requires a reference direction, and it is set as follows. If we stand at O and observe the sun, then it moves around in 1 year to trace a circle with radius R. In reality, however, it is the Earth that goes around the sun, but here we describe what is observed. In Figure 12.12, arc AB is a quarter of the orbital circle of the sun. The orbital plane that contains the orbital circle has a fixed orientation in the x-y-z frame, and this plane is inclined from the equatorial plane by an angle of δ0 = 23.4 deg. The orbital plane is shown as if it is a solid plane so that its inclined geometry may be seen clearly. Now, the orbital plane of the Sun and the equatorial plane cross each other at line OA, and in this line we set the x-axis. The moment when the sun crosses the equatorial plane going from the south to the north is referred to as the vernal equinox1. The x-axis therefore points to the sun at the moment of the vernal equinox, and this is the standard definition for the x-axis. In Chapter 10 we mentioned the need to establish a reference direction, but left it for later; now this is done. This
Figure 12.12 Position and motion of the sun.
1. The vernal equinox is an instantaneous event. The day that contains this event is called the day of the vernal equinox. What we are referring to is the instantaneous event.
definition of the x-y axes is applied to all figures that appear subsequently in this chapter, unless otherwise specified. The position of the sun is alternatively measured by two angles that appear in Figure 12.12. One is α, the azimuth angle measured along the equatorial plane, which is called right ascension. The other is δ, the angle of separation from the equatorial plane, called declination. Here, the solid orbital plane is partially removed in order to show the geometry of α and δ. The sun leaves point A of the x-axis at the moment of the vernal equinox, and travels in time t to the revolution angle of Ψt. Three lengths are marked with “x” in the figure, and they satisfy the following relationships to Ψt :
cos δ cos α = cos Ψt
(12.6)
cos δ sin α = sin Ψt cos δ0
(12.7)
sin δ = sin Ψt sin δ0
(12.8)
These equations tell us how α and δ vary as the sun moves.
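A small sketch of (12.6) through (12.8); the value of Ψ is taken as one revolution per 365.25 days, close to the 1.99 × 10⁻⁷ rad/s used in Section 12.6.

```python
import math

PSI    = 2.0 * math.pi / (365.25 * 86400.0)   # sun's mean angular rate, ~1.99e-7 rad/s
DELTA0 = math.radians(23.4)                   # inclination of the sun's orbital plane, from the text

def sun_angles(t_seconds):
    """Right ascension alpha and declination delta from (12.6)-(12.8); t = 0 at the vernal equinox."""
    psi_t = PSI * t_seconds
    delta = math.asin(math.sin(psi_t) * math.sin(DELTA0))                     # (12.8)
    alpha = math.atan2(math.sin(psi_t) * math.cos(DELTA0), math.cos(psi_t))   # (12.6), (12.7)
    return alpha % (2.0 * math.pi), delta

for days in (0, 30, 91, 182, 274):
    alpha, delta = sun_angles(days * 86400.0)
    print(f"{days:3d} days after equinox:  alpha = {math.degrees(alpha):6.1f} deg,"
          f"  delta = {math.degrees(delta):6.1f} deg")
```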
12.6 Long-Term Effect We can now analyze more precisely the effect of the radiation force F. If the sun is away from the equatorial plane, namely, if δ > 0, we must consider the component of the pressure force along the equatorial plane. Hence, the magnitude of F should be in the form of
F = F0 cos δ;   F0 = C (A/M)    (12.9)
Here, C = 4.56 × 10⁻⁶ [N/m²] is the constant for the flux density of sunlight, and A and M are, respectively, the cross-sectional area and the mass of the satellite. If the satellite is a black body, then A equals its geometrical cross section. If the satellite reflects some of the incident light, then A becomes effectively larger, though never by more than a factor of two. The effective A thus depends on the shape and material of the surface of the satellite. This dependence may be complex for a large satellite with many antennas and appendages, and in such a case the evaluation of effective A may have some error. Precisely speaking, the effective A may vary when the direction of the incident light changes, but here we approximate A as being constant. If the reflection is specular, the
effective direction of force may change, but we assume for simplicity that the force is aligned with the direction of the sunlight. From (12.5) and (12.9), the rate of motion of the center is written as d = K cos δ
(12.10)
Here, the constant K has been reset to
K = 3aF0 / (2v) = 3F0 / (2Ω)
If the rate of motion from (12.10) is regarded as a vector, its direction depends on α, the right ascension of the sun, as illustrated in Figure 12.13. By writing the vector in x and y components, and using (12.6) and (12.7), we have
dx = K cos δ sin α = K cos δ0 sin Ψt
dy = −K cos δ cos α = −K cos Ψt
These equations tell us how the center would move in a long term. If the center was at the origin at the vernal equinox, namely, t = 0, it then moves as follows:
dx = d0 cos δ0 (1 − cos Ψt)    (12.11)
dy = −d0 sin Ψt    (12.12)
where
Figure 12.13 Rate of motion in reference to the sun.
d0 = K/Ψ = (3 / (2ΨΩ)) C (A/M)    (12.13)
The motion of the center thus traces out an ellipse in 1 year, as illustrated in Figure 12.14. Set the constants as Ψ = 1.99 × 10⁻⁷ rad/s, Ω = 7.29 × 10⁻⁵ rad/s, and the satellite parameter as, for example, A/M = 0.01 m²/kg. The ellipse size then becomes 2d0 = 9.4 km. Its minor axis is shorter than its major axis by the factor of cos δ0 = 0.92. If we assume δ0 = 0 for simplicity, the motion of the center becomes circular, and this would do as well for a preliminary study with moderate accuracy. The long-term behavior of orbital eccentricity is thus simple if the satellite parameter A/M is given properly.
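The ellipse size quoted above can be reproduced with a few lines of Python. The following sketch (added here for illustration) simply evaluates (12.13) for a given area-to-mass ratio; the constant names are arbitrary, and the result is only as good as the approximations behind (12.13):

import math

C, PSI, OMEGA = 4.56e-6, 1.99e-7, 7.29e-5     # N/m^2, rad/s, rad/s

def eccentricity_ellipse(a_over_m, delta0_deg=23.4):
    # d0 from (12.13); the yearly locus has major axis 2*d0 and minor axis 2*d0*cos(δ0)
    d0 = 1.5 * C * a_over_m / (PSI * OMEGA)   # meters
    return d0, 2.0 * d0, 2.0 * d0 * math.cos(math.radians(delta0_deg))

d0, major, minor = eccentricity_ellipse(0.01)
print(d0 / 1e3, major / 1e3, minor / 1e3)     # ≈ 4.7, 9.4, and 8.7 km for A/M = 0.01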
12.7 Gravity of the Sun
When gravity forces come from the sun and the moon to act on a satellite, the forces cause gradual changes in the inclination of the orbital plane. This is called the perturbation of the orbital plane. The mechanism of this perturbation is, in principle, the same for the sun and for the moon, only they cause different magnitudes of perturbation because they are different in mass and different in distance from the Earth. Let us consider first the gravity of the sun. Set its position as illustrated in Figure 12.15, where S is the sun and O is the Earth's center. In this figure the x-axis is temporarily set so that the sun will exist in the x-z plane. This is different from Figure 12.12, and is done this way to make our discussion easier. Our satellite in Figure 12.15 is at some position (x, y) in the x-y plane. The satellite is pulled by gravity toward the sun, and this pulling force has a z-component F. This F is the perturbing force that acts on the orbital plane.
Figure 12.14 Yearly perturbation of orbital eccentricity.
Figure 12.15 Position of the Sun in a temporary coordinate frame.
Let us denote the position of the sun by x = X and z = Z. The perturbing force F is then given by
F = [µ / ((X − x)² + y² + Z²)] · [Z / √((X − x)² + y² + Z²)]
  = µZ / [(X − x)² + y² + Z²]^(3/2)    (12.14)
Here, the constant µ is for the universal gravity constant times the mass of the Sun. The distance from O to S is R = √(X² + Z²), so (12.14) is rewritten as follows:
F = µZ / (X² + Z² − 2xX + x² + y²)^(3/2)
  = µZ / (R² − 2xX + x² + y²)^(3/2)
  = (µZ / R³) · 1 / (1 − 2xX/R² + x²/R² + y²/R²)^(3/2)
Since R is much larger than x or y, we neglect the second-order terms of x/R and y/R, hence obtaining
F = (µZ / R³) · (1 − 2xX/R²)^(−3/2) ≈ (µZ / R³) · (1 + 3xX/R²)
As a result we have
F = µZ/R³ + (µZ/R³) · (3Xx/R²)    (12.15)
Let us write the first term on the right-hand side as
F0 = µZ / R³    (12.16)
This F0 is for the value of F when it acts on O. If F and F0 are not equal, a torque arises that operates to rotate the orbital plane. Consider one component of this torque: T = (F - F0 ) ⋅ x
This is a torque trying to rotate the orbital plane about the y-axis, as shown in Figure 12.15. If our satellite is in a circular orbit with radius a, and its angular rate of motion is Ω, one can write x = a cos Ωt. Then from (12.15) and (12.16), the torque becomes
T = (µZ/R³) · (3X/R²) · x² = (µZ/R³) · (3X/R²) · a² cos²Ωt
We average this torque over one orbital revolution. The factor cos²Ωt then becomes 1/2, and by using the relationships Z = R sin δ, X = R cos δ, we have
T = (3µ / (2R³)) a² sin δ cos δ    (12.17)
This is the torque that causes long-term perturbations. We should also consider the torque about the x-axis, but the torque vanishes, owing to symmetry, after taking an average over one revolution.
12.8 Tilting of the Orbital Plane
If a satellite goes round the Earth, it has an angular momentum H, which is orthogonal to the orbital plane, as noted in the previous chapter. If a torque T acts on this orbital plane over ∆t, it gives an increment of ∆H = T∆t to H, as illustrated in Figure 12.16. Accordingly, the orbital plane tilts by an angle ∆H/H in ∆t. If the torque continues to act, the orbital plane continues its
Figure 12.16 Tilting motion of an orbital plane.
tilting motion at an angular rate T/H. To this orbital plane we attach a unit vector orthogonally, as illustrated in the figure. By observing how the unit vector changes, we can describe the tilting motion. This is illustrated in Figure 12.17, where the unit vector is projected onto the equatorial x-y plane. The projected unit vector should appear somewhere as u, but here its changing rate u is shown instead. This u represents the angular rate of the orbital plane's tilting motion. From (12.17) and H = a²Ω, we can write
u = T/H = L sin δ cos δ    (12.18)
with
L = 3µ / (2R³Ω)    (12.19)
Here, we reset our temporary coordinate frame to the original definition so that the x-axis will point to the sun at the vernal equinox. Regard u in (12.18)
Figure 12.17 The orbital plane has a rate of tilting motion.
as a vector, and see its direction in Figure 12.17. Then its x and y components are
ux = L sin δ cos δ sin α,   uy = −L sin δ cos δ cos α
Hence, by using (12.6) through (12.8), we have
ux = L sin δ0 cos δ0 (1/2 − (cos 2Ψt)/2)    (12.20)
uy = −L sin δ0 (sin 2Ψt)/2    (12.21)
These equations tell us how u would change in the long term. Assume u = 0 at the vernal equinox, namely, t = 0. Then we find two kinds of changes, or motions. One is from (12.20); its first term yields a motion
ux = (L/2) sin δ0 cos δ0 · t    (12.22)
This is a linear motion, as marked with a (1) in Figure 12.18, which means a steady increase in orbital inclination. Set the constants as R = 1.50 × 10⁸ km and µ = 1.33 × 10¹¹ km³/s². The increase in the inclination then becomes 0.27 deg per year, and this is the long-term perturbation by the sun. Meanwhile the second term in (12.20) together with (12.21) yields a periodic motion:
Figure 12.18 Orbital plane is tilting in linear and elliptical motions.
ux = −u0 cos δ0 sin 2Ψt,   uy = u0 (cos 2Ψt − 1)
where
u0 = L sin δ0 / (4Ψ)
This motion traces out an ellipse in half a year, as marked with a (2) in Figure 12.18. If motions (1) and (2) in the figure are generated for 1 year and they are combined by superposition, its locus becomes as illustrated in Figure 12.19. Here, u1 denotes the long-term motion per year, while u0 is for the lateral width of periodic motion, and their ratio is calculated as follows (Y is for 1 year):
2u0 / u1 = 2u0 / [(L/2) sin δ0 cos δ0 · Y] = 1 / (ΨY cos δ0) = 1 / (2π cos δ0) = 0.17    (12.23)
12.9 Gravity of the Moon
The gravity of the moon can be analyzed by the same procedure used above. In Figure 12.12, S is regarded as the moon, which moves around the orbital circle in 1 month. Set the constants for the moon as R = 3.84 × 10⁵ km, Ψ = 2.66 × 10⁻⁶ rad/s, and µ = 4.90 × 10³ km³/s². The resulting pattern of perturbation is the same as that shown in Figure 12.19, while the pattern is now for the period
Figure 12.19 Perturbation of the orbital plane due to the Sun, for 1 year.
of 1 month, which begins at the moment when the moon crosses the equatorial plane from the south to the north. The rate of increase of the inclination from (12.22) is 0.043 deg per month, or 0.58 deg per year. The per-year rate of increase is thus about two times larger for the moon than for the sun. Although the moon’s mass is much smaller, it is much closer to the Earth, and owing to this closeness, the constant L from (12.19) becomes larger for the moon. Precisely speaking, the case for the moon is a little more complicated. In Figure 12.12, the orbital plane is crossing the equatorial plane at line OA; this line is called the line of node. For the case of the sun, the line of node was always in the x-axis. For the case of the moon, however, the line of node may move away from the x-axis, for example, like OA′ in Figure 12.12. The angle between line OA′ and the x-axis, that is, the angle of node, varies maximally to ±13 deg. This is a periodic variation, with its period being 18.6 years. Correspondingly, the pattern of perturbation changes from Figure 12.19 to Figure 12.20. The pattern is rotated around O, and the angle of rotation is equal to the angle of node. As the angle of node changes to positive and then to negative, the pattern is sometimes like that of (1) and sometimes like that of (2). If we refer to Figure 12.12 once again, the angle of inclination δ0 is no longer constant for the moon, even though it was constant for the case of the sun. The angle δ0 varies between 18.3 and 28.5 deg, while averaging 23.4 deg in the long term. This is a periodic variation, with its period being the same 18.6 years. Consequently, the rate of increase of the inclination from (12.22) varies between 0.48 and 0.67 deg per year, while being centered at 0.58 deg per year. Hence, in Figure 12.20, the size marked with an asterisk (∗) becomes variable. Also, the ratio of the lateral width of periodic motion becomes variable, as suggested by (12.23). In short, the moon’s orbital plane varies its orientation in the inertial space, and for this reason the perturbation pattern is modulated from Figure 12.19 to Figure 12.20.
Figure 12.20 Perturbation of the orbital plane due to the Moon, for 1 month.
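The solar and lunar drift rates quoted in this section follow directly from (12.19) and the slope in (12.22). The short Python sketch below (added for illustration) evaluates both; note that for the moon it uses the long-term average inclination δ0 = 23.4 deg, so it reproduces only the mean rate of about 0.58 deg per year, not the 0.48 to 0.67 deg per year variation discussed above:

import math

OMEGA = 7.29e-5                     # satellite orbital rate, rad/s
DELTA0 = math.radians(23.4)         # long-term average inclination of the perturbing body
YEAR = 365.25 * 86400.0             # seconds

def inclination_rate_deg_per_year(mu, R):
    # mu: gravitational constant times the mass of the perturbing body, km^3/s^2
    # R : distance of the body from the Earth, km
    L = 3.0 * mu / (2.0 * R**3 * OMEGA)                    # (12.19), rad/s
    rate = 0.5 * L * math.sin(DELTA0) * math.cos(DELTA0)   # slope in (12.22), rad/s
    return math.degrees(rate * YEAR)

sun = inclination_rate_deg_per_year(1.33e11, 1.50e8)       # ≈ 0.27 deg/year
moon = inclination_rate_deg_per_year(4.90e3, 3.84e5)       # ≈ 0.58 deg/year
print(sun, moon, sun + moon)                               # combined ≈ 0.85 deg/year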
Figure 12.21 Combined sun-moon perturbation for different years.
12.10 Sun-Moon Combined Effect
The combined perturbation due to both the sun and the moon is obtained by superposing the separate perturbations; that is, by adding their effects together. The rate of increase of the inclination then becomes between 0.75 and 0.94 deg per year, or 0.85 deg per year on average. If the perturbation patterns for the sun (Figure 12.19) and for the moon (Figure 12.20) are prepared for 1 year and superposed, the result is the patterns shown in Figure 12.21. Each pattern shows large undulations and ripple-like smaller undulations. Large undulations are due to the sun, with a half-year period, and ripple undulations are due to the moon, with a half-month period. The moon's perturbation in Figure 12.20 shows variable directions like (1) or (2), and this is the reason why the patterns in Figure 12.21 show different directions in different years. The patterns for 2000 and 2020 are nearly identical; this is because the angle of inclination and the angle of node for the moon vary with a period of 18.6 years. The perturbations shown in Figure 12.21 were calculated by the numerical integration of orbital motion, starting at the day of the vernal equinox. The perturbation patterns we have derived by theory are thus in agreement with the numerical results. If the combined sun-moon perturbation is observed over a very long term, which means sufficiently longer than 18.6 years, it shows on average a steady trend of drift toward the positive direction of the x-axis, at the rate of 0.85 deg per year. The preceding discussion would suggest that the orbital inclination increases boundlessly as time passes, but this is not the case. There is one more extra force that begins to act on the satellite when its orbital inclination becomes larger. This force comes from the shape of the Earth. The Earth's shape is oblate, so that its polar radius is shorter than its equatorial radius. If the satellite stays in or near the equatorial plane, then because of symmetry, the oblateness does not produce any extra force. If the satellite goes away from the equatorial plane, the extra force begins to modulate the inclination perturbation. As a result, the increasing trend of the inclination will stop at a maximum of 14.8 deg, and then turns back to decrease, finally reaching 0 after 54 years, and this pattern recurs in cycles [2]. For this reason the present analysis limits its effectiveness to within a few to several degrees of orbital inclination, which is effective enough for the orbits of the near-stationary, operational satellites in which we are interested.
References
[1] Lerch, F. J., S. M. Klosko, G. B. Patel, and C. A. Wagner, "A Gravity Model for Crustal Dynamics (GEM-L2)," J. of Geophysical Research, Vol. 90, No. B11, 1985, pp. 9301–9311.
[2] Soop, E. M., Handbook of Geostationary Orbits, Norwell, MA: Kluwer, 1994, pp. 87–88.
13 Station Keeping
If a satellite is initially placed in a stationary orbit, it will sooner or later start moving about and moving away from its nominal stationary position, because perturbations make the orbit gradually change. So, we must make orbital corrections on a regular basis to keep the satellite stationary. This process is called station keeping. A common practice of station keeping is to determine a fixed boundary for the satellite's position, with boundary lines set at 0.1 deg in longitude and latitude relative to the intended stationary position. A satellite staying inside the boundary is regarded practically as stationary. In this chapter we discuss the station-keeping method, making use of the formulations of orbital maneuvers and perturbations that we established in earlier chapters. Perturbations are different for longitudinal and latitudinal motions, so station keeping is considered separately as east-west (EW) keeping and north-south (NS) keeping. Our discussions will clarify how the satellite moves under station keeping and also determine the cost of ∆v for performing the station-keeping function.
13.1 EW Keeping for Drift-Rate Control
If a satellite is regarded as near stationary, its longitude λ would vary by not more than a fraction of a degree from its nominal stationary longitude. So, in Figure 12.2 in Chapter 12, we can set the force F(λ) to a constant F at the nominal longitude of the satellite. Assume for the time being that F > 0. Then according to (12.1), the orbital radius a will increase at the rate of ∆a/∆t = 2aF/v. That is,
the radius a increases linearly with time, as illustrated in Figure 13.1. If F(λ) is constant, G(λ) is also constant and here G < 0. Then from (12.4), λ̈ = G yields a free-fall motion in λ, and it plots a parabola as marked with a (1) in Figure 13.1. We must keep λ inside the boundaries of λ1 and λ2, where λ1 and λ2 are usually separated by 0.2 deg minus some small allowance for guard bands. So, before λ comes near λ1, we need an orbital correction, #1. The correction is to decrease the radius a, because it has increased too much, and this correction can be done by means of the radius-changing maneuver described in Chapter 11. If the correction is done properly, the longitudinal drift rate λ̇, or simply the drift rate, changes its sign from negative to positive. The drifting λ will then plot the parabola marked by the (2) during the time until the next correction, correction #2, becomes necessary. The change from (1) to (2) is analogous to the motion of a free-falling ball when it bounces on the floor. The next correction, correction #2, takes place in the same way as #1, and this process is iterated regularly. The radius a will thus increase and decrease periodically, while it averages the stationary orbital radius, aS. If the value of F is negative, the curves in Figure 13.1 will be upside down. The parameters of station keeping are determined as follows. In Figure 13.1, T is the period between maneuvers. The parabola has the shape of free-falling motion as given by λ = Gt²/2, so its segment (2) has a height |G|(T/2)²/2. This equals W, the longitudinal width that the satellite occupies during the drift. That is,
W = |G| T² / 8
Figure 13.1 Controlling the longitudinal drift rate where a is the orbital radius; λ is the satellite longitude; and t represents time.
where |G| denotes the absolute value of G. Shorter maneuvering periods make the width W narrower. The velocity change ∆v required for a maneuver is equal to FT. That is, the orbital correction offsets the effect of force F as accumulated over the period T. Choose an example of λ = 117 deg east, at which F takes its maximal value of 66 × 10⁻⁹ m/s², with G = 0.0020 deg/day². Then T = 20 days makes W = 0.1 deg, which fits inside the boundaries with a margin. The required ∆v for each maneuver is 0.11 m/s. We are interested in the ∆v required per year, because it is a basic parameter for determining the budget for satellite propellant consumption. The per-year ∆v is estimated simply as F times a year, or 2.1 m/s. This is the maximal estimate of ∆v required for drift-rate control.
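The numbers in this example are easy to reproduce. The following Python sketch (an added illustration, with hypothetical function and variable names) evaluates the occupied width W = |G|T²/8, the per-maneuver ∆v = FT, and the maximal per-year ∆v of F times 1 year:

DAY = 86400.0

def drift_rate_keeping(F, G, T_days):
    # F: tangential acceleration magnitude, m/s^2; G: drift acceleration, deg/day^2
    W = G * T_days**2 / 8.0                 # occupied longitude width, deg
    dv_maneuver = F * T_days * DAY          # ∆v = F·T, m/s
    dv_per_year = F * 365.25 * DAY          # maximal yearly estimate, m/s
    return W, dv_maneuver, dv_per_year

print(drift_rate_keeping(66e-9, 0.0020, 20.0))
# ≈ (0.10 deg, 0.11 m/s per maneuver, 2.1 m/s per year), the values of the example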
13.2 EW Keeping for Eccentricity Control
If a satellite is initially placed in a circular orbit, the orbit will soon become noncircular, because it is perturbed by the solar radiation pressure force, as described in Chapter 12. If the orbit has an eccentricity e, then according to (10.18), the satellite position oscillates about its nominal stationary position to ±2ae along the tangential direction. Correspondingly, the longitude of the satellite oscillates to ±2e radians. This oscillation may cause a problem in EW keeping, as discussed next. Communication satellites tend to have large cross-sectional areas, because they need a lot of power for the mission equipment and, hence, need a wide area for the solar array. For example, a cross-sectional area of 120 m² with a mass of 2,000 kg makes an area-to-mass ratio of A/M = 0.06 m²/kg. Refer to Figure 12.14, and set here δ0 = 0 for simplicity. The orbital circle center will move away from O maximally to 56 km, thus giving a maximal eccentricity of 0.0013. The longitude then oscillates with an amplitude of 0.15 deg, or a peak-to-peak width of 0.3 deg. This exceeds the standard longitude-keeping width of 0.2 deg. That is, station keeping is impossible here. A simple idea for solving this problem is illustrated in Figure 13.2. In this figure, the x-y plane represents the equatorial plane, with the x-axis pointing to the vernal equinox. This setting of x-y coordinate axes applies to all figures that appear subsequently in this chapter. Now, the drift motion of the orbital circle center starts not from O but from somewhere else, so that its yearly locus (here approximated as a circle) will have its center at O. The eccentricity then becomes constant, while being reduced to half. The longitude oscillation becomes 0.15 deg peak-to-peak, and this would be adequate for station keeping. The ratio A/M, however, may become even larger as the design of communication satellites develops. So, we need more active control of the eccentricity.
Figure 13.2 Halving the eccentricity.
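The longitude-oscillation estimate above can be checked with the following Python sketch (added here for illustration; it assumes δ0 = 0 and takes d0 from (12.13), so it is only an approximation of the real perturbation):

import math

A_GEO = 42164.0e3                              # stationary orbital radius, m
C, PSI, OMEGA = 4.56e-6, 1.99e-7, 7.29e-5

def longitude_swing_pp(a_over_m, centered=False):
    # peak-to-peak longitude oscillation, deg, from the radiation-pressure eccentricity;
    # centered=True corresponds to the scheme of Figure 13.2 (yearly circle centered at O)
    d0 = 1.5 * C * a_over_m / (PSI * OMEGA)    # (12.13) with δ0 = 0, meters
    e = (d0 if centered else 2.0 * d0) / A_GEO
    return 2.0 * math.degrees(2.0 * e)         # oscillation amplitude is ±2e radians

print(longitude_swing_pp(0.06))                # ≈ 0.3 deg: too wide for a 0.2-deg slot
print(longitude_swing_pp(0.06, centered=True)) # ≈ 0.15 deg with the halved eccentricity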
The control of eccentricity is schematically described as follows. In Figure 13.3(a), the circle with radius d0 represents the above-mentioned yearly drift motion of the orbital circle center. We modify this yearly motion as follows. Divide the yearly circle equally into n segments: (1), (2), (3), …, (n). If n is large enough, each segment approximates a line. Now, look at the motion that makes segment (1). Let this motion start from point 1s, which is given in Figure 13.3(b). The motion will then draw the line segment 1s–1e. If point 1s is chosen right, this line segment has its middle point at O. That is, we shift segment (1) to segment 1s–1e to be centered at O. Similarly, we shift segment (2) to segment 2s–2e, segment (3) to segment 3s–3e, and so forth. This is done by a series of orbital corrections, as follows. When the center has traveled along 1s–1e and has reached 1e, the center is then moved to 2s by an orbital correction. When
Figure 13.3 Controlling the eccentricity where (b) and (c) are magnified about point O.
the center has traveled along 2s–2e and has reached 2e, the center is then moved to 3s by an orbital correction, and so forth. Each correction is done using the eccentricity-changing maneuver described in Chapter 11. Under this control, each line segment has the length of 2πd0/n, so every line segment fits inside a boundary circle with radius πd0/n. The maximal eccentricity is therefore reduced by a factor of π/n compared with that of d0 in Figure 13.2. This is ideal for eccentricity control, if we are ready to do frequent maneuvers. The ∆v required per year for the ideal control is estimated as follows. The circle in Figure 13.3(a) has a circumference of length 2πd0. Consider a hypothetical maneuver that is designed to move the orbital circle center over a linear path of the same length 2πd0. The ∆v required for this hypothetical maneuver then represents the ∆v required per year. From (11.8), set ∆v =
(v / (2a)) · 2πd0
Here, d0 is given by (12.13), to be
d0 = (3 / (2ΨΩ)) C (A/M)
with parameters defined as follows: Ψ: orbital revolution rate of the Sun; Ω: orbital revolution rate of the satellite; C: constant for the flux density of the sunlight, 4.56 × 10⁻⁶ [N/m²]. As a result, we have
∆v = (3v / (4a)) · (2π / (ΩΨ)) · C (A/M) = (3/4) Y C (A/M)
In this equation, v = aΩ was used, and Y = 2π/Ψ is 1 year (in seconds). If the area-to-mass ratio A/M is given in m²/kg, this result is written as ∆v [m/s] = 108 A/M. For our example of A/M = 0.06, the ∆v estimate is 6.5 m/s. This example suggests that control of eccentricity tends to require more ∆v than control of the drift rate.
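The coefficient 108 and the 6.5-m/s example follow directly from ∆v = (3/4)YC(A/M); a short check in Python (added for illustration) is:

C = 4.56e-6                        # N/m^2
YEAR = 365.25 * 86400.0            # s

def ecc_control_dv_per_year(a_over_m):
    # ideal eccentricity control: ∆v = (3/4)·Y·C·(A/M)
    return 0.75 * YEAR * C * a_over_m

print(0.75 * YEAR * C)                  # ≈ 108, the coefficient quoted in the text
print(ecc_control_dv_per_year(0.06))    # ≈ 6.5 m/s for A/M = 0.06 m^2/kg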
13.3 Combined EW Keeping
In the preceding discussions we assumed that the drift rate control and the eccentricity control were separately planned. Combining the two may lead to some economy of the ∆v cost, as follows. In Figure 13.1, we consider planning a drift-rate maneuver and an eccentricity maneuver as a combined set at #1, and similarly at #2, and so on. Let us look, for example, at #2 precisely. Here, the eccentricity maneuver is planned, as illustrated in Figure 13.4, so as to move the orbital circle center from 2e to 3s. Now, suppose here that the drift-rate maneuver is done with a single impulse. This makes the orbital circle center move, as illustrated in Figure 11.1 in Chapter 11. If the timing of the single impulse is chosen right, the orbital circle center moves, in Figure 13.4, from 2e to some 2e′ along the line 2e–3s. The task of the eccentricity maneuver is then to move it from 2e′ to 3s, resulting in some economy of ∆v. The drift-rate maneuver and the eccentricity maneuver can be combined into a single maneuver with two impulses, with their magnitudes not being equal here. One can consider a more practical way to control eccentricity, as illustrated in Figure 13.3(c). Here, in contrast to part (b), the lines do not pass through O. Accordingly, the distance between 1e and 2s, that between 2e and 3s, and so on, becomes smaller, and this means less ∆v is required for the maneuvers. Controlling eccentricity in this way would suit any satellite if its area-to-mass ratio A/M is not too large and if the eccentricity boundary is not too small. Now, look at a line, for example, 2s–2e in Figure 13.3(c). This line may look, in Figure 13.5(a), like (a) for one satellite, or like (b) or (c) for different satellites in reference to an eccentricity boundary circle. If we observe various satellites at a time, their orbital circle centers would then be distributed around area (d). This is consistent with Figure 13.5(b), the actual distribution for operational
Figure 13.4 Combined drift rate-eccentricity maneuver.
Figure 13.5 Distribution of eccentricity: theory (a) and actual (b).
satellites as observed from orbital data made public [1]. The orbital data are for a chosen season of the year 2011 when the sun is placed as marked on the figure, and the boundary circle is for e = 0.0005. Most satellites seem to use the practical control, although those distributed close to O may possibly be using the ideal control.
13.4 NS Keeping
The perturbation by the sun and the moon causes a tilting motion of the orbital plane, as illustrated in Figure 12.21. In that figure, the orientation of the orbital plane was represented by a projected unit vector, and its motion over a long-term average was to drift toward the positive direction of the x-axis. NS keeping is designed to confine the projected unit vector inside a boundary circle with the radius usually being set to 0.1 deg, as illustrated in Figure 13.6. The broken line represents the long-term average motion, so the orbital correction is to move it back, before it goes out of the boundary circle, toward −x as marked by the (a). This is done using a maneuver that gives the satellite a northward ∆v when the satellite is at A in Figure 13.7, or a southward ∆v when the satellite is at B. If the correction is for a full 0.2 deg like Figure 13.6(a), the ∆v is 10.7 m/s in magnitude, from (11.10). The rate of the long-term drift is between 0.75 and 0.94 deg per year, so the ∆v per year is between 40.1 and 50.3 m/s, averaging 45.2 m/s. That is, NS keeping requires more ∆v than EW keeping, by an order of magnitude. This is why the propellant consumption for NS keeping is the major factor used to determine the operational lifetime of a satellite. The tilting motion of the orbital plane also has y components, as seen in Figure 12.21 in Chapter 12. Correspondingly, the orbital plane may sometimes
Figure 13.6 NS keeping and the boundary circle.
Figure 13.7 NS maneuver timing.
have to be moved back like (b) or (c) in Figure 13.6. If we must choose a smaller boundary circle, the y-component motion becomes significant relative to the circle, and this may cause trouble. For example, consider a situation like that illustrated in Figure 13.8. A correction now takes place like (a), and the subsequent drift motion becomes like (b), but this motion reaches the boundary too soon. This trouble can be avoided if the correction takes place like (c); the subsequent motion then becomes like (d) and this will keep the drift motion inside the boundary longer. For this to occur, the maneuver is done at some point A′ or B′ in Figure 13.7. This maneuver, however, requires more ∆v because (c) is longer than (a) in Figure 13.8.
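The NS-keeping ∆v figures can be verified with the plane-change relation ∆v = 2v sin(θ/2), where v is the orbital speed and θ the inclination correction. The following Python sketch (added for illustration) reproduces the 10.7-m/s full correction and the roughly 40 to 50 m/s yearly range to within rounding of the values quoted above:

import math

V = 42164.0e3 * 7.29e-5            # orbital speed of a geostationary satellite, m/s

def plane_change_dv(theta_deg):
    # ∆v for rotating the orbital plane by theta, applied at a node (A or B in Figure 13.7)
    return 2.0 * V * math.sin(math.radians(theta_deg) / 2.0)

print(plane_change_dv(0.2))                        # ≈ 10.7 m/s for a full 0.2-deg correction
low, high = plane_change_dv(0.75), plane_change_dv(0.94)
print(low, high, 0.5 * (low + high))               # ≈ 40, 50, and 45 m/s per year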
13.5 Factors Depending on Satellites
Another factor may increase the required ∆v in relation to the design of the satellite. Theoretically speaking, the thrust for NS keeping should point to the north or to the south, but actually it may be off-pointing from the north or
Figure 13.8 NS keeping with a smaller boundary circle.
south, as illustrated in Figure 13.9. This is to avoid the problem of the thrust plume hitting the solar array panel to generate an attitude disturbance torque or to dirty the surface of the panel. The required ∆v increases inversely proportionally to the cosine of the off-pointing angle. If the NS thrust is off-pointed, the ∆v has a projected component in the orbital plane. If this in-plane component points to the tangential direction, then it causes an error in longitudinal drift rate; this must be avoided definitely. So, the NS thrust is usually set so that its in-plane projection will have a radial component only. Consequently, the NS maneuver yields a change in the eccentricity. This change, however, will be canceled if the NS maneuver is done with two equal impulses, with the first one being done at A in Figure 13.7 and the second one at B. If the thruster is nevertheless set so that it points directly to the north or south, the NS maneuver may be prohibited during some period of time so
Figure 13.9 Off-pointing NS thrust.
that the unwanted action of the thrust on the solar panel may be avoided. The maneuver timing is then shifted, for example, from the original A to somewhere like A′ in Figure 13.7. This makes the orbital correction less efficient, which means an increase in ∆v is required. In this way the ∆v cost of NS keeping may include some surplus that depends on the design of the satellite. A special satellite design may also affect the NS-keeping maneuver. A very large antenna on board the satellite may have a flexible structure if it is made as light as possible. Such a structure may start a slow, vibrating motion when excited by the strong impulse of the NS maneuver, and it takes time for the vibration to attenuate. If such a vibration is undesired, the NS maneuvers must be planned with increased frequency so that the impulse for each maneuver becomes small enough to excite a vibrating motion as small as desired. Ultimately, the maneuvers are planned to take place every time the satellite passes A or B, or both, in Figure 13.7. This style of NS maneuver would suit a satellite with electric or ionic thrusters. As a result, the orbital inclination will be kept small. In this way, the NS keeping will depend on individual satellites. Figure 13.10 shows the distribution of the orbital inclinations for operational satellites, as observed in January 2011, from orbital data made public [1]. The distribution spreads in the boundary circle of 0.1 deg, without showing any significant features. Hence, we can say simply that a satellite under NS keeping will undergo an oscillating NS motion with an amplitude mostly smaller than 0.1 deg, while how the oscillation amplitude varies with time will be different for individual satellites.
Figure 13.10 Distribution of orbital inclination.
Reference
[1] "Space Track," http://www.space-track.org/perl/login.pl.
14 Overcrowding and Regulations
If geostationary satellites were observed collectively as a whole, they would look like they formed a large ring that circles the Earth at an altitude of 35,786 km above the equator. This is referred to as the geostationary orbital ring. Its value is crucial for communications, broadcasting, and weather satellite services, but its spatial capacity for placing satellites is not infinite. There is growing concern about capacity as more and more satellites are launched into the orbital ring. In this chapter, we review the current regulations for the placement of satellites in orbit, mainly from the viewpoint of orbital safety. In the following discussion, the wording geostationary orbit, or simply orbit, does not signify any particular orbit of a satellite; it instead refers to the above-mentioned geostationary orbital ring as being a collective entity.
14.1 Orbital Regulations
The International Telecommunication Union (ITU) has established rules to regulate the use of the geostationary orbit. The rules apply to satellites that emit radio waves in the orbit. Because we can hardly think of a satellite that does not emit any radio waves, the regulations apply to virtually every satellite. The regulation prescribes the following procedure [1]: Anyone who plans to place a new satellite in the orbit must publish in advance, via ITU, its information including orbital position in longitude, the frequencies of the satellite's radio emissions, and other relevant items. This information is then input for
coordination; that is, to examine whether the new satellite would cause any radio interference with the radio frequencies already in use by existing satellites. If any possibility of interference is identified, the new satellite must modify its orbital position or radio frequencies so as to prevent the interference. This coordination is based on the first-come first-served principle. When the coordination is finished, the new satellite has its own orbital position and radio frequencies assigned. The assignment information is written into a database document, or the master register, as it is called. Frequencies recorded in the master register indicate that their use has been authorized by the ITU and that they are protected from interference as long as they are in use by the satellite operating at its assigned orbital position. This is a rough sketch of the coordination-assignment procedure; more details are described in [2].
The position in longitude assigned to the satellite becomes its nominal longitude when the satellite operates in the orbit. The satellite must be kept inside the ±0.1-deg boundaries relative to the nominal longitude, as recommended by the ITU [3]. The longitude-keeping zone thus has a width of 0.2 deg, which is called a longitude slot or an orbital slot. A narrower width for the slot would allow more slots to exist and, hence, allow more efficient use of the orbit. But, on the other hand, it increases the workload of station keeping, with EW maneuvers taking place in shorter periods, as discussed in Chapter 13. The slot width was determined from considering all of these factors. The slot width is thus prescribed in longitude, but not in latitude, by the regulation. In terms of the quality of communication services, the satellite should be stationary equally in longitude and in latitude, so the common practice of station keeping is to apply the same standard of ±0.1 deg to both longitude and latitude. This decision is made by the satellite operators, not by the ITU and, in fact, some satellites move in latitude by more than 0.1 deg if this does not cause trouble in terms of users' access to the satellite.
Another rule is applied to a satellite when it nears its end of life. The satellite must be transferred, before its propellant is used up, to an orbit outside the geostationary orbit so that it will never cross the geostationary orbit. The satellite is switched off after it has reached the outer orbit, as recommended by ITU [4]. The satellite then undergoes uncontrolled orbital motion, with its eccentricity changing with time owing to the solar radiation perturbation (see Chapter 12). Accordingly, its perigee altitude changes with time, but the perigee must not come close to the geostationary orbit. The recommendation considers this factor, and specifies a target radius for the outer orbit that is dependent on the satellite's area-to-mass ratio. Orderly, efficient, safe use of the geostationary orbit is thus maintained, in theory, if every satellite follows the regulations.
14.2 Problem of Overcrowding
The theory faces some difficulty, however, when there are too many satellites in the geostationary orbit. Because it is beneficial to be a first comer in the coordination procedure, early entries tend to crowd into the procedure, to complicate the coordination. Once coordination is finished and a new satellite has had its orbit and frequency assigned, sometimes the supposed satellite does not come into the orbit because its plan had a poor technical basis or poor financial basis or no basis at all. A satellite that exists only in the master register document is called a paper satellite. Paper satellites make the coordination task even more complex, thus hindering the efficient use of the precious orbital positions and frequencies.
Another problem arises when double assignment to an orbital slot occurs. The motive of coordination is solely to prevent radio interference, so the process does not check to determine if two or more satellites that use different frequencies are being assigned to one and the same slot. The satellites going into the same orbital slot may experience close encounters when they start operating. This problem belongs to the satellite operators, because ITU takes no responsibility; this makes the problem more serious if the satellites belong to different operators or different nations. Usually the operators are able to keep track of their own satellites only. So, determining if the satellites are coming too close to each other is a matter of concern for orbital safety.
One could consider either a regulatory approach or a technical approach to these problems. The former would require a reorganization of the rules and improvement in the procedure of coordination and assignment. If this attempt is too difficult, then we need to turn to a technical approach. One idea is satellite monitoring in which satellite downlinks are received that identify the satellites' orbital positions in some organized way. Exact monitoring provides the latest information about the uses of orbits and frequencies, which is valuable for coordination. Unused assignments in the master register, if any, should be determined. Also, precise satellite positions known from monitoring will help maintain orbital safety. This idea will have growing significance if there are still more satellites going into the orbit from now on.
References
[1] ITU, Handbook on Satellite Communications, New York: Wiley Interscience, 2002, Chap. 9.
[2] Elbert, B. R., The Satellite Communication Applications Handbook, Norwood, MA: Artech House, 2003, Chap. 12.
[3] ITU, "Station-Keeping in Longitude of Geostationary Satellites in the Fixed-Satellite Service," ITU-R Recommendation S.484-3, Geneva: ITU, 1992.
[4] ITU, "Environmental Protection of the Geostationary-Satellite Orbit," ITU-R Recommendation S.1003-1, Geneva: ITU, 2004.
Part III Interferometric Tracking
15 Overview of Part III: Interferometric Tracking
Part III of this book discusses how interferometers can be used for satellite tracking and orbit estimation and features actual application cases. Discussions are based on Parts I and II, so they will be referred to in many places. The first topic for us to discuss is this: What is orbit estimation? What information is gathered by tracking? How should the information be processed, using what kind of software? The concept and principle of orbit estimation is thus described in Chapter 16. The interferometer must gather good information in order to provide good orbit estimation, and this consideration leads to the idea of a prototype interferometer. The prototype interferometer is developed into a realistic form in Chapter 17, where we discuss interferometric tracking in terms of the usual azimuth and elevation angles, and connect it to the orbit estimation principle. These chapters thus show the physical meaning of tracking and orbit estimation. After those two chapters, our discussion turns to particular subjects. Satellite tracking is done for different purposes and, correspondingly, different kinds of interferometers can exist. If a number of satellites are crowded into an orbit, it is particularly important to control their orbital longitudes. Chapter 18 demonstrates the case of an interferometer that has been designed specifically for tracking satellite longitudes, and examines its use for precise longitude keeping. If station keeping must have increased accuracies from now on, yet avoid the use of sophisticated hardware, then one option is to combine the interferometer with another type of tracking. Thus, Chapter 19 illustrates the case of a combined system of interferometry and ranging capabilities. An interferometer that uses optical cables shows good tracking performance, which in turn can provide accurate orbit estimation.
If two or more satellites are coming too close to each other in the overcrowded orbit, their relative motion is a sensitive problem. Consequently, Chapter 20 covers the case of an interferometer that has been designed specifically for differential tracking. It tracks the relative motion of satellites with precision, while using simple hardware. Chapter 21 discusses a specially designed interferometer that has a mechanically movable baseline. Its special design is aimed at removing the problems of phase error and ambiguity that exist inherently in interferometry. As a result, the interferometer becomes able to watch and monitor the orbit of any satellite, known or unknown, with precision. A powerful tool of tracking and orbit estimation thus becomes reality. All of these interferometers are for real cases, not just for theories on the desk. In Chapter 14 we addressed the problem of overcrowded geostationary orbits, and suggested that accurate orbital monitoring should be the key to the problem. The interferometers we are now going to see will surely give us that key. Finally, Chapter 22 discusses a different kind of interferometer, one that has an inverted geometry—the antennas are high up in the orbit and they point toward the Earth. Its purpose is to track down an illegitimate Earth station that is emitting unwanted signals. This kind of interferometer is now in growing demand, as more and more cases of RF interference occur. The chapter discusses the fundamental principles of this important interferometer. Although this interferometer has a purpose other than satellite tracking, it has a close relationship to the interferometer for tracking and orbit estimation, as we will see. Part III thus draws a complete picture of what the application of interferometry can do for satellites in geostationary orbits that are dealing with orbital overcrowding and increasing RF interference.
16 Tracking and Orbit Estimation
If an interferometer or any other equipment is used in satellite tracking, its purpose is to acquire data needed for orbit estimation. So, before considering interferometers in detail, the reader should be informed about orbit estimation. This would include the principle of orbit estimation, an outline of estimation software, and operational conditions of orbit estimation. This chapter covers these topics in simple terms, so that they may be imagined with their physical meanings. In the following sections, tracking and orbit estimation are described first as a general concept. Later, the use of an interferometer is discussed, and finally, the significance of the interferometer in satellite tracking is pointed out.
16.1 General Concept
Tracking and orbit estimation of a geostationary satellite are illustrated, as a general concept, in Figure 16.1. There is a target satellite that emits radio signals, and the signals arrive at an Earth station with tracking equipment, which is a radio interferometer in our particular case of interest. In more general cases the tracking equipment is a ranging system, which uses signals that make the round trip between the Earth station and the satellite, and there may be two or even more Earth stations involved. In some cases an angle-pointing antenna is also used. Whatever the equipment may be, tracking continues over hours and days to yield a set of observation data, and the data set contains the information on the orbital motion of the satellite. Orbit estimation is a process of extracting the orbital information from the tracking observation data, and it works in a software system as follows. First, an initial guess of the position and velocity of the satellite is made. This is done
Figure 16.1 Satellite tracking and orbit estimation.
by assuming a satellite that is ideally stationary at its nominal position. Using the initial guess, a software unit called an orbit generator will determine where the satellite was supposed to be when the tracking observations were made. This is then input to a tracking model, which calculates theoretically what should have been the values of the observation data. One can then compare the calculated data with the observed data, to determine the difference between them. This is the “Observed minus Calculated,” or “O minus C,” entry that appears as O−C in Figure 16.1. The O−C would be small if the initial guess was correct, but this does not usually occur; some correction must be applied to the position and velocity. This correction is given as being the O−C times a coefficient, with the coefficient determined depending on how the tracking was accomplished. The orbit estimation software is thus a kind of simulator. It simulates the motion of the satellite and the function of the tracking equipment as existing in the real world. The smaller the O−C, the better the simulation is performing. If the O−C becomes small enough, the position and velocity represent the true orbital motion, and this is the principle of orbit estimation.
16.2 Styles of Orbit Estimation
The process of orbit estimation may be viewed along the time axis, as illustrated in Figure 16.2. The initial guess of position and velocity is set at some reference time t0. The observation data are collected at times t1, t2, …, tn. The data observed may be generally a complex of various types of measurements made at different Earth stations. The type and station for each data point must
Figure 16.2 Orbit estimation timeline for batch processing.
have been specified by some tracking schedule. Correspondingly, the calculated data are prepared such that they match the same tracking schedule, and this allows the O−C to be obtained over the tracking period of time from t1 to tn. This O−C determines how much the position and velocity should be corrected. Then after the correction, the newly obtained O−C will become smaller than the previous O−C. This process of correction is iterated, perhaps three or four times but not much more, until the O−C value is reduced enough that the position and velocity need no more correction. The position and velocity thus obtained supply the orbital elements at reference time t0. Orbit estimation is thus a process of improving the initial guess of the orbital elements until the guess becomes accurate. Another style of orbit estimation is possible, as illustrated in Figure 16.3. Observation data are collected in steps at times t1, t2, …, ti, ti+1, …, over a prolonged period of time; the position and velocity are also estimated at each time step. In this context the position and velocity are regarded as making a six-dimensional vector, which is referred to as a state vector. That is, a state vector will
Figure 16.3 Orbit estimation timeline for sequential processing.
be estimated at each step ti. Suppose that, at the present time step ti, we have estimated the state vector. When we have a new observation data point at next time step ti+1, the action for estimation is as follows. From the last estimate of the state vector at ti, predict the state vector at ti+1 using an orbit generator. From this prediction, make the calculated data at ti+1 using a tracking model. One can then make the O−C at ti+1. Note that this O−C is a small data set, because it corresponds to the data collected at a single step only. Using this O−C, improve the predicted state vector at ti+1. That is, we advance the state vector from ti to ti+1 by simple prediction, and improve it by using the observation data obtained at ti+1. This step-by-step improvement from ti to ti+1 will be repeated, from ti+1 to ti+2, from ti+2 to ti+3, and so on. In this way we obtain at each time step the most likely estimate of the position and velocity of the satellite. At t1 we set an initial guess of the state vector in the same way as mentioned before.
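The predict-and-improve cycle described above can be sketched compactly in code. The following Python example is a toy added for illustration, not the book's software: it tracks a two-element state, longitude and drift rate, from one noisy longitude observation per step; the matrices F and H stand in for the orbit generator and the tracking model, and the update step, written here in the standard sequential (Kalman-type) form mentioned later in Section 16.4, plays the role of the improvement applied at each ti+1:

import numpy as np

rng = np.random.default_rng(1)

G, dt = 0.002, 1.0                       # drift acceleration (deg/day^2), step of 1 day
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition ("orbit generator")
u = np.array([0.5 * G * dt**2, G * dt])  # effect of the known force over one step
H = np.array([[1.0, 0.0]])               # tracking model: longitude is observed only
Q = np.diag([1e-8, 1e-8])                # process noise (unmodeled accelerations)
R_meas = np.array([[0.002**2]])          # measurement noise, (0.002 deg)^2

x_true = np.array([0.0, -0.02])          # true initial state (hypothetical values)
x, P = np.array([0.0, 0.0]), np.eye(2)   # initial guess and its covariance

for k in range(30):
    # simulated truth and observation for this time step
    x_true = F @ x_true + u
    z = H @ x_true + rng.normal(0.0, 0.002, size=1)

    # predict the state vector to the new time step
    x = F @ x + u
    P = F @ P @ F.T + Q

    # O - C for this single step, then improve the predicted state
    o_minus_c = z - H @ x
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R_meas)
    x = x + K @ o_minus_c
    P = (np.eye(2) - K @ H) @ P

print("estimated:", x, " true:", x_true)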
16.3 Choice of Estimation Style
The first type of orbit estimation discussed above is called batch estimation; the second is called sequential estimation. Theoretically speaking, the two will yield the same result if the forces acting on the satellite are known exactly and are reflected in the orbit generator. The sequential estimation, however, may provide better performance, for example, in a case like the following. Suppose that, in Figure 16.2, an orbital maneuver takes place at some time between t1 and tn. The force applied to the satellite during the maneuver is known theoretically, but in reality the thruster may generate a force slightly more in magnitude, or less, than planned. To take this occurrence into consideration, the ∆v produced by the maneuver is set as an additional unknown parameter to be estimated together with the orbital elements. If more parameters are to be estimated, generally more observation data are needed. So, we set a longer tracking period. If the tracking period becomes longer, then another maneuver ∆v may occur. So, we need to set the tracking period even longer, thus falling into an endless spiral. This can happen if orbital maneuvers take place as frequently as in the case mentioned in Chapter 13 for NS keeping. This problem can be dealt with by means of sequential estimation. Suppose that, in Figure 16.3, a maneuver occurs at some time between ti and ti+1. The state vector is predicted from ti to ti+1, with the maneuver ∆v being taken into account, while some inaccurate part of ∆v will cause an error in the predicted state vector at ti+1. Correspondingly, the O−C made at ti+1 will show an increased value. Hence a correction is applied to the orbital state at ti+1. That is, the observation made at ti+1 is used to correct for the error caused by the inaccurate part of the maneuver ∆v. A correction will be made at ti+1 if the maneuver
inaccuracy was small and the observation data at ti+1 were good; otherwise, the correcting action will continue for more time steps. If we have to track an unknown satellite, such as a satellite from a different operator or a different nation, we do not know when and how the maneuver takes place, if it takes place at all. In such a case it is the ∆v itself, not its inaccurate part, that causes the error in the predicted state vector. A large O−C will then appear, thus telling us that a maneuver has occurred. The O−C will be reduced again, as time steps pass, and the rate of reduction would depend on the magnitude of the ∆v. Sequential estimation as such will be the only possible estimation method to use if we have to monitor the orbit of an unknown satellite in the overcrowded environment of the stationary orbit.
16.4 Software Units
The orbit estimation software includes various units, as illustrated earlier in Figure 16.1. Their functions are outlined concisely as follows. The orbit generator advances the position and velocity of a satellite, or the state vector, from any time t0 to any t1 while considering the forces acting on the satellite, as illustrated in Figure 16.4. The forces include the two-body attraction of the Earth and the perturbing forces. The major perturbing forces are the four that were discussed in Chapter 12. The forces are added up and integrated with respect to time by using a numerical integration method. Though we figured out the perturbations by theory in Chapter 12, here in orbit estimation we usually rely on numerical integration because the numerical method provides better accuracy than the theory. Calculation of the perturbing forces requires the use of various data sources: the ephemeris of the sun and moon, a table defining the nonspherical shape of Earth's gravity potential, and the instantaneous orientation of Earth in the inertial space at a given time. It refers also to a table of maneuvers that is used to determine if any ∆v is being planned. In
Figure 16.4 Orbit generator outline.
batch estimation (Figure 16.2) the orbit generator advances from t0 to any time between t1 and tn, whereas in sequential estimation (Figure 16.3) it is stepwise from ti to ti+1; these functions of time-advancing can be provided by the same orbit generator. The orbit generator may be used with any satellite, with the exception of the maneuver ∆v, which depends on the satellite's hardware. The numerical integration tends to require a complex software code, especially if its goal is high precision. To ease this problem, a simplifying method can be used. This simplified method decomposes the orbital motion into the two-body motion and perturbed motions. The motion resulting from the two-body attraction is solved exactly by Kepler's laws, whereas the motion resulting from the perturbation is calculated by numerical integration. The two results are then summed up. This reduces considerably the workload of numerical integration because the perturbing forces are very small compared with the two-body attraction. The result is a high computation speed accompanied by a reduction in the coding size and complexity of the numerical integration [1].
The tracking model of Figure 16.5 illustrates how the tracking equipment functions when linked with the satellite. If the equipment is an interferometer in our particular case of interest, there are two antennas or more in the Earth station. Their positions are given in the Earth-fixed coordinate frame. So, the position of the satellite is converted into the same Earth-fixed frame by referring to the Earth orientation. If the satellite and the antennas are positioned in a common coordinate frame, the signal propagation paths from the satellite to the antennas can be determined by geometry. This allows for theoretical calculation of the phase differences between the antennas, hence, the "calculated" values. The tracking model thus reflects the design and installation of the tracking equipment. As the signal from the satellite propagates through the
Figure 16.5 Tracking model outline.
atmosphere, its refraction effect is considered so that the calculation will correspond to the actual observation. The orbit generator can be regarded as an input/output system; that is, a state vector at t0 is input and a state vector at t1 is output. One can then create a differential relationship in the form of ∂(output)/∂(input), which makes a matrix. Similarly, the differential relationship is set for the tracking model, and by using these relationships we can determine the correction coefficient, which is also a matrix, that should be multiplied with the O−C. The correction-coefficient matrix is determined on the basis of the least-squares principle for batch estimation, or on the basis of an algorithm typically known as Kalman filtering for sequential estimation [2].
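As an illustration of how the O−C, the differential relationships, and the correction coefficient fit together, the following Python sketch (added here; the model and the parameter values are hypothetical, and it is only a stand-in for a real orbit generator and tracking model) uses a simple near-stationary longitude history, forms the partial-derivative matrix by finite differences, and applies a least-squares correction a few times:

import numpy as np

OMEGA = 2.0 * np.pi / 86164.0                    # rad/s (one sidereal day)

def calculated_data(p, t):
    # stand-in for "orbit generator + tracking model": longitude offset in degrees
    # as L0 + c*t + 2e*sin(Ωt - α), with p = [L0_deg, c_deg_per_s, e, alpha]
    L0, c, e, alpha = p
    return L0 + c * t + np.degrees(2.0 * e) * np.sin(OMEGA * t - alpha)

def correct_once(p, t, observed, eps=1.0e-7):
    # one cycle of Figure 16.1: O - C, finite-difference partials, least squares
    calc = calculated_data(p, t)
    o_minus_c = observed - calc
    H = np.empty((t.size, p.size))               # ∂(calculated)/∂(parameters)
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        H[:, j] = (calculated_data(p + dp, t) - calc) / eps
    correction, *_ = np.linalg.lstsq(H, o_minus_c, rcond=None)
    return p + correction, np.sqrt(np.mean(o_minus_c ** 2))

rng = np.random.default_rng(0)
t = np.arange(0.0, 86400.0, 600.0)               # one day of tracking, 10-min steps
p_true = np.array([0.02, -2.0e-7, 2.0e-4, 1.0])  # hypothetical "true" parameters
observed = calculated_data(p_true, t) + rng.normal(0.0, 1.0e-3, t.size)

p = np.array([0.0, 0.0, 1.0e-4, 0.5])            # initial guess
for i in range(4):                               # three or four iterations suffice
    p, rms = correct_once(p, t, observed)
    print(i, round(rms, 6), p)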
16.5 Meaning of Orbit Estimation
We saw that orbit estimation is used to determine a set of six parameters: three for position and three for velocity. Determining the six parameters has a specific meaning if the orbit is near stationary, as discussed next. We can represent the motion of a near-stationary satellite by using a relative coordinate frame centered at the satellite's nominal stationary position, as illustrated by Figures 10.9 and 10.10 in Chapter 10. The motion was given in (10.17), (10.18), and (10.19), which are reproduced here:
R = ∆a − ae cos(Ωt − α)    (16.1)
L = L0 − (3/2) ∆a Ωt + 2ae sin(Ωt − α)    (16.2)
Z = ai sin(Ωt − β)    (16.3)
Here, R, L, and Z are radial, longitudinal, and north axes, respectively. The nominal orbital radius a and the Earth rotation rate Ω are given constants; so we find here six parameters at our disposal: ∆a, e, α, L0, i, and β. The determination of six parameters as mentioned earlier corresponds to determining the six parameters found here. We know, from Chapter 12, that the orbit changes slowly with time owing to perturbations. Accordingly, the parameters found here must be regarded as changing slowly with time. For example, the gravity of the sun and moon makes the orbital plane tilt, thus making i and β in (16.3) change slowly. This is reflected in orbit estimation; so, the parameters are
The presence of perturbations brings an additional unknown parameter that should be estimated together with the orbital parameters. The perturbation by the solar radiation pressure included a factor A/M, with A being the cross section and M the mass of the satellite, as in (12.9). This A is an effective value that depends in a complex manner on the shape and surface material of the satellite, so its true value is often difficult to determine. Thus, one common practice is to set the factor A/M as an unknown parameter. Correspondingly, seven elements are estimated in batch estimation, and a seven-dimensional state vector is estimated in sequential estimation.
16.6 Tracking Using an Interferometer

We find, from the relationship of (16.2), that observing the satellite motion in L allows us to determine the parameters L0, ∆a, e, and α. This can be seen from Figure 16.6. The motion in L comprises a linear part and a sinusoidal part. From the linear part we know the slope (∆a) and the constant (L0), and from the sinusoidal part we know its amplitude (e) and phase (α); this reasoning follows [3]. Similarly, from (16.3), observing the motion in Z allows us to determine i and β.

This leads us to an idea for how to track using an interferometer, as illustrated in Figure 16.7. Our target satellite S is in the neighborhood of its nominal stationary position O, and the position of S is measured in relative coordinates: R for radial, L for longitudinal, and Z for north. The interferometer is placed on the ground at A, right under O, with its baselines AB and AC set horizontally. If baseline AB is parallel to the L-axis, it detects the satellite motion in L, hence allowing L0, ∆a, e, and α to be determined. If baseline AC is parallel to the Z-axis, it detects the satellite motion in Z, hence allowing i and β to be determined. Orbit estimation is thus made possible by the interferometer.
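This reasoning can be checked numerically. The sketch below generates an L motion of the form of (16.2), with invented parameter values, and recovers L0, ∆a, e, and α by linear least squares, using sin(Wt − α) = cos α sin Wt − sin α cos Wt.

```python
import numpy as np

A_GEO = 42_164e3                      # nominal orbital radius, m
W = 2 * np.pi / 86_164.0              # Earth rotation rate, rad/s

# Invented "true" parameters of a near-stationary satellite.
L0, da, e, alpha = 5_000.0, 800.0, 1.5e-4, np.radians(40.0)

t = np.arange(0.0, 86_400.0, 600.0)                           # one day, 10-min steps
L = L0 - 1.5 * da * W * t + 2 * A_GEO * e * np.sin(W * t - alpha)

# Linear model: L = L0 - (3/2) da W t + c_s sin(Wt) + c_c cos(Wt)
B = np.column_stack([np.ones_like(t), -1.5 * W * t, np.sin(W * t), np.cos(W * t)])
L0_est, da_est, c_s, c_c = np.linalg.lstsq(B, L, rcond=None)[0]

e_est = np.hypot(c_s, c_c) / (2 * A_GEO)                      # amplitude gives e
alpha_est = np.degrees(np.arctan2(-c_c, c_s))                 # phase gives alpha
print(L0_est, da_est, e_est, alpha_est)                       # recovers the four parameters
```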
Figure 16.6 Determining the orbital parameters.
The motion in L has a sinusoidal term with a period of 1 day, so observing the L motion for 1 day will enable us to determine the parameters, and the same holds for the motion in Z. Hence the period of time needed for tracking is basically 1 day, an important fact that applies generally to stationary-satellite tracking.

From the same reasoning we should point out that the interferometer must have two baselines. Suppose that there is only one baseline and it points in an arbitrary orientation between AB and AC. What this baseline detects is then a linear combination of the L component and the Z component of the satellite motion, so it is impossible to determine the L motion parameters and the Z motion parameters separately. Orbit estimation is therefore impossible, if orbit estimation means determining the six orbital parameters fully.

Now back to Figure 16.7; suppose that tracking continues for 2 days. From the first day of tracking we obtain a first set of six orbital elements by batch estimation, and from the second day a second set. The two sets should be consistent with each other if variations due to the perturbation are taken into account. If the two are not consistent, we know that the perturbation was not accounted for correctly, perhaps because the factor A/M for the solar radiation pressure was incorrect. We can then correct it so that the two sets become consistent. In other words, orbit estimation needs basically 2 days of tracking if the factor A/M is included as an estimation parameter.

The interferometer illustrated in Figure 16.7 may be regarded as a prototype interferometer. It is not practical because its location geometry is too special, but it was useful for theoretical reasoning about tracking and orbit estimation. Practical interferometers will be derived from this prototype, as we will see in the following chapters.
Figure 16.7 Prototype interferometer.
References

[1] Tanaka, T., and S. Kawase, "A High-Speed Integration Method for the Satellite's Ephemeris Generation," J. of Guidance and Control, Vol. 1, No. 3, 1978, pp. 222–223.

[2] Pocha, J. J., An Introduction to Mission Design for Geostationary Satellites, Norwell, MA: Kluwer, 1987, pp. 164–198.

[3] Soop, E. M., Handbook of Geostationary Orbits, Norwell, MA: Kluwer, 1994, pp. 252–256.
17 Azimuth-Elevation Tracking

In the previous chapter we pointed out that the prototype interferometer makes orbit estimation possible. In this chapter we take a step forward to show that any interferometer with two independent baselines makes orbit estimation possible. First we discuss why satellite directions are commonly measured in azimuth and elevation angles, and connect this discussion to interferometers with two baselines in general. We then analyze the geometry of baselines relative to the satellite and, finally, find the conditions for accurate orbit estimation.
17.1 Azimuth-Elevation Angles

If a satellite is viewed at an Earth station and its direction is measured, the direction should be measured by two angles, most commonly azimuth and elevation. As illustrated in Figure 17.1, the azimuth measures the angle from north to the satellite, clockwise in the horizontal plane, and the elevation is the angle between the satellite and the horizontal plane.

Azimuth and elevation angles are closely related to the pointing mechanism of antennas used in Earth stations. The antenna must rotate about two axes in order to point to the satellite accurately. Because it must rotate under the presence of gravity, it is reasonable to first set one axis vertical. This axis supports a rotary stage, on which the second axis, the horizontal one, is set. The second axis always remains horizontal when the first axis turns. The antenna will then rotate smoothly if the weight of the antenna is balanced about the horizontal axis. The vertical and horizontal axes thus become the axes about which azimuth and elevation angles are measured. That is, "azimuth and elevation" is used to name the angles and also to define the type of rotary mechanism of the antenna.
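For reference, the conversion from a direction, given by its local east-north-up components at the Earth station, to azimuth and elevation follows directly from these definitions; the direction used below is arbitrary.

```python
import numpy as np

def azimuth_elevation(east, north, up):
    """Azimuth (deg, clockwise from north) and elevation (deg) of a direction
    given by its local east-north-up components."""
    az = np.degrees(np.arctan2(east, north)) % 360.0
    el = np.degrees(np.arctan2(up, np.hypot(east, north)))
    return az, el

print(azimuth_elevation(0.35, -0.72, 0.60))   # about 154 deg azimuth, 37 deg elevation
```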
Figure 17.1 Azimuth and elevation angles. N: north; Z: zenith.
The azimuth-elevation antenna is able to point to a satellite in any direction, except for a satellite near the zenith. If the satellite comes near the zenith, the azimuth axis is forced to turn very quickly, causing trouble with the rotary mechanism. This occurs for satellites flying in low Earth orbits, for example, communication or Earth-observation satellites in polar orbits. A geostationary satellite, however, does not come near the zenith unless the Earth station is deliberately located right under the satellite. Using an azimuth-elevation antenna is thus reasonable for tracking and pointing to a geostationary satellite.

If an antenna operates exclusively for one stationary satellite, it does not need to rotate over a wide range of angles. Suppose that, in Figure 17.2, the satellite has its nominal stationary position at S0, while the satellite is actually at S. When the satellite moves from S0 to S, it moves by a horizontal angle H and a vertical angle V. If the satellite is subject to station keeping with the standard width of 0.2 deg, the angles H and V are limited to approximately the same width. The changing width of the elevation is thus 0.2 deg, while that of the azimuth becomes larger by a factor of 1/cos(elevation). If the antenna is able to change its azimuth and elevation by these widths with some margin, it keeps tracking and pointing to the satellite. It is thus common for an Earth-station antenna to have limited motion ranges for azimuth and elevation.
Figure 17.2 Horizontal H and vertical V angles.
17.2 Azimuth-Elevation Interferometer

If a limited-motion antenna is tracking a near-stationary satellite to measure azimuth and elevation angles, its measuring function can be substituted by an interferometer, as illustrated in Figure 17.3. Here, similar to Figure 17.2, S0 and S are the nominal and actual satellite positions, and the angular distance between them is given by H and V, the horizontal and vertical angles, respectively. The interferometer has two baselines, AB and AC, in the horizontal plane. Baseline AB is orthogonal to S0A, the path of the downlink, while AC aligns with the horizontally projected path of the downlink; so the two baselines are orthogonal to each other. Baseline AB then detects the angle H, and baseline AC detects the angle V. The interferometer thus detects the direction angles equivalent to azimuth and elevation, in reference to the nominal satellite position.

Consider what happens if the baseline geometry is changed, as illustrated in the example of Figure 17.4. Here, baselines AB and AC have been rotated by 45 deg in the horizontal plane. Let A, B, and C denote the phases of the downlink signal when received at antennas A, B, and C, respectively. Then what baseline AB detects is x = B − A, and what AC detects is y = C − A. Now, suppose we convert the detected x and y into x − y and x + y as follows:
x − y = B − C   (17.1)

x + y = 2[(B + C)/2 − A]   (17.2)
Figure 17.3 Azimuth-elevation interferometer.
Figure 17.4 Changing the baseline geometry.
The x − y is what a supposed baseline BC would detect, so it corresponds to the angle V, or the elevation. The x + y is what a supposed baseline AD would detect, where D is at the midpoint of B and C; so it corresponds to the angle H, or the azimuth. In other words, we have found one linear combination of x and y that provides the azimuth and another that provides the elevation. The same reasoning applies if baselines AB and AC have arbitrary lengths and orientations. Hence, if a two-baseline interferometer outputs a set of phase data x and y, we can find linear combinations of x and y that provide the H and V angles, or the azimuth and elevation, respectively. The coefficients of the linear combinations depend on the geometry of the baselines relative to the satellite. In this sense, any two-baseline interferometer may be called an azimuth-elevation interferometer.
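A small numerical sketch of this idea: if the 2×2 matrix relating the satellite's H and V angles to the two measured phases is known (the matrix below is invented), inverting it converts any (x, y) pair into (H, V).

```python
import numpy as np

# Invented sensitivity matrix: phase change (deg of phase) per degree of satellite motion.
# Row 1: baseline AB, row 2: baseline AC; columns: H and V directions.
M = np.array([[9000.0, 2000.0],
              [1500.0, 8500.0]])

def angles_from_phases(x, y):
    """Recover H and V (deg) from the two measured baseline phases (deg)."""
    return np.linalg.solve(M, np.array([x, y]))

# If the satellite moves by H = 0.02 deg and V = -0.01 deg ...
x, y = M @ np.array([0.02, -0.01])
print(angles_from_phases(x, y))   # ... the linear combinations give back (0.02, -0.01)
```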
17.3 Detection Unit Vector of a Baseline

Let us clarify how a baseline detects the motion of a satellite. Suppose that a baseline is placed in a relative geometry with respect to the satellite, as illustrated in Figure 17.5. The satellite has its nominal stationary position at S0, and the baseline AB has a length b. The position S0 is distant from the baseline by r, and is at an angle ψ relative to the baseline. The plane that contains AB and S0 is called P. Consider a unit vector u that lies in the plane P; this u is orthogonal to the line connecting the baseline and S0. (Which point of the baseline to connect does not matter, since b << r.) The satellite is actually at some position S, which may or may not be in the plane P. If the satellite moves from S0 away to S by a distance x, the baseline AB detects the x through its component along the vector u.
Figure 17.5 Detection unit vector u associated with a baseline.
If this component is denoted by xu, then it is detected by the interferometer as follows:
δl = xu b cos ψ / r   (17.3)
Here, l denotes the relative path length from the satellite to the antennas. The interferometer will output the change δl as measured by the phase angle:
δφ = 2π δl / λ   (17.4)
with λ being the wavelength. That is to say, the change in satellite position is detected in one dimension as xu along the vector u. In this sense, the vector u may be referred to as a detection unit vector of the baseline. Rewrite (17.3) as
δl = xu be / r   (17.5)

with

be = b cos ψ   (17.6)
denoting the length A′B′ in Figure 17.5. This A′B′ may be regarded as an effective baseline, and its length is shorter if the satellite is not right in front of the baseline AB. Let us move on to Figure 17.6, where an interferometer with baselines AB and AC is placed on the ground. The R-L-Z axes are the relative coordinates centered at the nominal satellite position S0. The interferometer may be located anywhere, with its baseline orientations being arbitrary, so this is a practical geometry for an interferometer. The baseline AB has its detection unit vector u1, and similarly AC has u2.
Figure 17.6 Practical interferometer geometry.
An interferometer collects tracking data through the vectors u1 and u2. Now, let us recollect our idea for a prototype interferometer that was presented in Chapter 16. If we refer to Figure 16.7, we see that the prototype interferometer has its detection unit vectors u1 and u2 aligned with the coordinate axes L and Z. In the present case of Figure 17.6, on the other hand, the vectors are skewed relative to the coordinate axes, and the degree of skew depends on the interferometer location and the baseline orientations. Our task is to show that orbit estimation is possible even if those detection unit vectors become skewed.
17.4 Orbit Estimation

We now show that any two-baseline interferometer makes orbit estimation possible. We have considered making linear combinations of the outputs x and y from two baselines, as in (17.1) and (17.2), for example. This corresponds to making the baselines' detection unit vectors u1 and u2 into a linear-combination vector, so as to make a new detection unit vector.
Figure 17.7 Making vector u3 from u1 and u2.
Let us then consider, in Figure 17.6, making a new detection unit vector from u1 and u2. This is illustrated in Figure 17.7. Here, from u1 and u2 we are making a unit vector u3 that lies in the R-L plane. This is done simply as follows: vectors u1 and u2 determine a plane Q; this plane Q crosses the R-L plane, and along this crossing line the vector u3 lies. We have thus made a new detection unit vector u3 lying in the R-L plane, and this u3 has an important role.

In Figure 17.8, we place a new coordinate axis L′ in the R-L plane that aligns with vector u3. The motion of the satellite projected onto the L′-axis is then detected by the vector u3. This means that our interferometer can detect the L′ component of the satellite motion in the R-L plane. Now, the satellite motion in the R-L plane was an elliptical motion (see Figure 10.9) and a linearly drifting motion (Figure 10.10) combined by superposition. This is illustrated in Figure 17.8, where the satellite is moving along the ellipse, and the ellipse is drifting at a constant rate along the L-axis.

In the case of the prototype interferometer, the satellite motion was projected onto the L-axis, and this L component was detected as tracking data. In the case of our practical interferometer, the satellite motion is projected onto the L′-axis, and the L′ component is detected as tracking data. If the prototype interferometer is able to determine the satellite motion in the R-L plane, then our practical interferometer can do so as well, since the direction of the L′-axis is known. This is how the practical interferometer makes it possible to determine satellite motion in the R-L plane.

Now, go back to Figure 17.7 and look at vector u2. What this vector detects is a linear combination of the satellite motion components in R, L, and Z. Because we know the motion components in R and L, we know the motion component in Z.
Figure 17.8 Determining the R-L motion.
As a result, we can determine the R-L motion and the Z motion, meaning that we can do orbit estimation by using the interferometer. In the real processing of orbit estimation, the satellite motion is modeled as it is in three dimensions, without any distinction of components. The reason we have considered the determination of the R-L motion and that of the Z motion separately is for convenience of geometrical interpretation of orbit estimation.
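The construction of u3 can be written directly with vector products: u1 × u2 is the normal of plane Q, and the crossing line of Q with the R-L plane (whose normal is the Z-axis) lies along the cross product of the two normals. The example vectors below are arbitrary.

```python
import numpy as np

def u3_in_RL_plane(u1, u2):
    """Unit vector lying both in the plane spanned by u1, u2 and in the R-L plane (z = 0)."""
    n_q = np.cross(u1, u2)                 # normal of plane Q
    z = np.array([0.0, 0.0, 1.0])          # normal of the R-L plane
    u3 = np.cross(n_q, z)                  # along the line where the two planes cross
    return u3 / np.linalg.norm(u3)

u1 = np.array([0.2, 0.9, 0.4]); u1 /= np.linalg.norm(u1)   # skewed detection unit vectors (examples)
u2 = np.array([0.1, -0.3, 0.95]); u2 /= np.linalg.norm(u2)
u3 = u3_in_RL_plane(u1, u2)
print(u3)                                  # third component is zero: u3 lies in the R-L plane
```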
17.5 Accuracy Considerations

We can now discuss the accuracy of orbit estimation. In Chapter 2 we assumed an accuracy of 0.001 deg for the angle observation using a model-case interferometer with an effective baseline length of 10m, and in Chapter 5 we confirmed that the assumed accuracy can be obtained if the interferometer is properly designed. That accuracy of angle observation allows us, as illustrated in Figure 17.9, to observe the satellite position to an accuracy level of ux = r sin(0.001 deg). This ux will essentially be the accuracy level of L and Z after determining the motion of the satellite. The ux will also be the accuracy level of R, because the motion in L and that in R are correlated, as can be seen by comparing (16.1) and (16.2). If the satellite position is written in longitude and latitude, its accuracy is ux/(orbital radius), which is essentially 0.001 deg.

Note that the above discussion of the orbit estimation accuracy assumes the following conditions. First, the effective baseline lengths should be essentially equal for the two baselines. Second, the detection unit vectors, u1 and u2 in Figure 17.6, should point in different directions, ideally differing by 90 deg. If their pointing directions are close to each other, the plane Q in Figure 17.7 becomes indefinite. In such a case it becomes difficult to determine the R-L motion accurately through the vector u3, causing a loss in orbit estimation accuracy.
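As a quick check of these accuracy figures, with a nominal slant range of 38,000 km (an assumed value) the 0.001-deg angle accuracy gives:

```python
import numpy as np

r = 38_000e3            # nominal slant range from the station to the satellite, m (assumed)
a = 42_164e3            # nominal orbital radius, m

ux = r * np.sin(np.radians(0.001))          # position accuracy from a 0.001-deg angle
print(ux)                                    # about 660 m
print(np.degrees(ux / a))                    # as longitude/latitude: roughly 0.0009 deg
```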
17.6 Nonhorizontal Baseline

We have so far assumed that interferometers have horizontal baselines. This assumption should not be forced, however, on an interferometer that tracks a satellite at a low elevation angle.
Figure 17.9 Accuracy level ux for satellite position; A′B′ is an effective baseline (see Figure 17.5).
Consider the example illustrated in Figure 17.10. The transversal baseline AB lies horizontally without problem, whereas the other baseline, AC, becomes too long if it lies horizontally and is to have a proper effective baseline length. One idea, then, is to incline AC, like AC′, to make the baseline shorter. The antenna C′ will perhaps find its place somewhere on a building's rooftop. Another idea is illustrated in Figure 17.11, where two antennas are on the rooftop, with the vectors u1 and u2 pointing in good directions. This idea may be modified by using the plane mirrors discussed in Chapter 7; two such mirrors are on the rooftop, as illustrated in Figure 17.12, with the receiving antennas placed at one ground site. Various baseline settings are possible in this way, on the condition that the effective baseline lengths and the u-vector pointing directions are properly considered.
Figure 17.10 Tracking a low-elevation satellite.
Figure 17.11 Setting nonhorizontal baselines.
Figure 17.12 Nonhorizontal baselines using plane mirrors.
18 Longitude Tracking

As more and more satellites are sent into geostationary orbit, the longitudinal spacing between satellites is decreasing. Precise control of satellite longitudes is thus a problem of growing importance for maintaining orbital safety. This chapter proposes the use of an interferometer specialized for tracking satellite longitudes [1]. This interferometer has a single baseline that is placed at a chosen orientation so that orbit estimation works selectively for longitude estimation. In the following sections, an experimental setup for the interferometer is described, along with orbit estimation results.
18.1 Satellite Longitudes

Any satellite that operates in geostationary orbit must have an orbital slot of its own, as illustrated in Figure 18.1. A slot has a width of 0.2 deg in longitude, and within the slot a satellite is allowed to move about during its station keeping. One slot is exclusively for one satellite's use, so any other satellite should have its own orbital slot, and some space should exist between the satellites to act as a safety zone. The width of the safety zone must be decreased, however, as more and more satellites are sent into the orbit. To accommodate the maximum number of satellites, the orbital slots would have to adjoin, as illustrated in Figure 18.2, meaning that there would no longer be any safety zones between satellites. Any satellite that strays out of its own slot could possibly experience a close encounter with other satellites. For this reason, the control of satellite longitudes must be precise.

Although one orbital slot is primarily for one satellite, in some cases two or even more satellites are forced to operate in one and the same orbital slot, as noted in Chapter 14. In such cases, a slot will often be divided into subslots in longitude, and each satellite will have its own subslot.
Figure 18.1 Orbital slots of satellites.
Figure 18.2 Adjoining orbital slots.
As a result, these narrower subslots must adjoin without safety zones, which means that satellite longitudes need to be controlled even more precisely than ever. Control of satellite longitudes will thus be a key issue for future uses of the geostationary orbit if the overcrowding tendency continues unchanged. In other words, there is a strong demand for accurate, reliable monitoring of satellite longitudes.
18.2 Longitude-Monitoring Interferometer

The demand for accurate, reliable monitoring of satellite longitudes can be answered by doing orbit estimation, for instance, by using an interferometer with two baselines. If we are interested solely in orbital longitudes, we can instead consider an interferometer with a single baseline that detects only satellite longitudes. Such an interferometer is possible if its geometry fulfills a certain condition, as discussed next.

Referring to Figure 18.3, let S0 denote the nominal stationary position of a target satellite, with R and L denoting the relative coordinate axes we introduced earlier. The interferometer is placed on the ground at A. Consider a plane P that contains S0 and is orthogonal to the line of sight AS0. The plane P intersects the R-L plane at a line L′. Consider a plane Q that contains the lines AS0 and L′. The baseline should be placed in this plane Q; this is the condition required for the interferometer. If the baseline fulfills this condition, its detection unit vector u lies in the R-L plane. The use of this baseline allows us to determine the satellite's R-L motion in the same way as we saw in relation to Figure 17.8. The L component of the motion then provides the satellite longitude.
Figure 18.3 Geometry of longitude-monitoring interferometer.
In Figure 18.3, the plane Q intersects the Earth's surface to make a curve, and a baseline, such as AB, placed along this curve is a horizontal baseline. If the baseline is placed horizontally in this way, its orientation will be one of those shown in Figure 18.4. The orientation is given in azimuth angles, where north-south is 0 and east-west is 90 deg. The azimuth depends on the latitude and longitude of the interferometer site, where the longitude is measured relative to the satellite. Suppose, in Figure 18.3, that the interferometer site is at the same longitude as the satellite. The azimuth is then, by symmetry, 90 deg (i.e., an east-west baseline) regardless of the latitude of the interferometer site. If the satellite and the interferometer site have different longitudes, the baseline azimuth differs from 90 deg; in practice the difference is not much more than 10 deg. If the interferometer site is far away from the subsatellite point, it will observe the satellite at a low elevation angle. In such a case a nonhorizontal baseline may be better, like AB′ in Figure 18.3, following the discussion related to Figure 17.10.
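The orientation of Figure 18.4 can be reproduced from the geometry of Figure 18.3: the plane Q is spanned by the line of sight and by L′, and a horizontal vector lying in Q gives the baseline direction. The following is a sketch of that computation under a spherical-Earth assumption; the site coordinates are examples only.

```python
import numpy as np

A_GEO, R_EARTH = 42_164e3, 6_378e3

def unit(v):
    return v / np.linalg.norm(v)

def baseline_azimuth(site_lat_deg, site_lon_deg, sat_lon_deg):
    """Azimuth (deg from north) of the horizontal baseline whose detection
    unit vector lies in the equatorial (R-L) plane; spherical Earth assumed."""
    lat, lon = np.radians([site_lat_deg, site_lon_deg])
    site = R_EARTH * np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])
    sat = A_GEO * np.array([np.cos(np.radians(sat_lon_deg)), np.sin(np.radians(sat_lon_deg)), 0.0])

    los = unit(sat - site)                            # line of sight A -> S0
    l_prime = unit(np.cross(los, [0.0, 0.0, 1.0]))    # plane P intersected with the equatorial plane
    n_q = unit(np.cross(los, l_prime))                # normal of plane Q
    up = unit(site)
    baseline = unit(np.cross(n_q, up))                # horizontal vector lying in plane Q

    east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    north = np.cross(up, east)
    return np.degrees(np.arctan2(baseline @ east, baseline @ north)) % 180.0

print(baseline_azimuth(35.95, 140.66, 140.0))   # close to 90 deg (nearly east-west)
```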
18.3 Orbit Estimation

If the interferometer has only one baseline, orbit estimation does not work in its original sense. But practically it will work for our purpose as follows. If monitoring continues over a prolonged period, orbit estimation should use the method of sequential processing.
Figure 18.4 Orientation of horizontal baseline.
On starting the estimation processing, we make an initial guess of the orbital state by assuming that the satellite is moving in the equatorial plane, that is, without north-south motion. As the processing continues, the orbital state is improved with regard to the motion in the equatorial plane, while no improvement occurs for the north-south motion, since the baseline has no sensitivity to it. That is, the information necessary and sufficient for estimating satellite longitudes is collected by the interferometer and processed by the orbit estimation.

We are thus supposing, fictitiously, that the satellite moves only in the equatorial plane and, correspondingly, we set up the estimation processing as follows. Sequential estimation processing requires the use of a matrix called the state variance-covariance matrix. Its diagonal elements represent to what degree of precision the state-vector elements have been determined. We give small values to the diagonal elements that correspond to the north-south position and velocity; for example, 0.1m for position and 0.01 mm/s for velocity. The estimation processing then assumes that the north-south satellite motion has already been determined. As a result, orbit estimation works to determine the four elements associated with the motion in the equatorial plane, rather than the full six elements, and this is exactly what we need for our purpose.

We noted in Chapter 6 that an error may exist in the modeling of atmospheric refraction. This error is maximized in a case where the baseline is placed along the incoming path of the satellite downlink. Such a case occurs if the interferometer site is located near the equator. If we rule out such a special case, the effect of atmospheric error on longitude monitoring is less than a few millidegrees, which may be practically ignored.
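A hedged illustration of this setting for a sequential (Kalman-type) estimator, with the state ordered as the R, L, Z positions followed by their velocities: the north-south elements get the small standard deviations quoted above, while the in-plane values are invented, loose guesses.

```python
import numpy as np

# State vector: [R, L, Z, Rdot, Ldot, Zdot] (positions in m, velocities in m/s).
sigma = np.array([
    2_000.0,   # R   : in-plane position, loosely known (invented value)
    20_000.0,  # L   : longitude direction, loosely known (invented value)
    0.1,       # Z   : north-south position, declared "already known"
    0.2,       # Rdot (invented value)
    0.2,       # Ldot (invented value)
    1e-5,      # Zdot: 0.01 mm/s, declared "already known"
])
P0 = np.diag(sigma**2)     # initial state variance-covariance matrix
print(np.sqrt(np.diag(P0)))
```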
18.4 Interferometer Setup

An experimental setup for the longitude-monitoring interferometer, operating in the Ku band (11–12 GHz), is illustrated in Figure 18.5. Downlink signals from the satellite are reflected by plane mirrors before being directed into fixed receiving antennas; this is the idea introduced in Chapter 7. Two mirrors are used for each receiving route in the present setup. The mirrors are squares with 2m sides, and the fixed antennas are 1.8m in diameter with G/T = 23.2 dB/K. Because plane mirrors are used, the fixed antennas can be placed side by side. This placement allows short cables to distribute the reference oscillator signals and the local oscillator signals, thus ensuring phase stability. The baseline of this interferometer is the line that connects the two front-end mirrors; it is horizontal and has a length of 13m.
18.5 Monitoring Examples

18.5.1 Single Satellite
Figure 18.6 shows a case of longitude monitoring for 3 days. The interferometer site is near the target-satellite longitude, so the baseline is set at an azimuth near 90 deg, according to Figure 18.4.
Figure 18.5 Interferometer setup with plane mirrors. LNA: low noise amplifier; D/C: downconverter; Phase: phase-measuring unit; H: hybrid; RO: reference oscillator; LO: local oscillator. Site longitude: 140.66 deg E, latitude: 35.95 deg N.
Figure 18.6 Longitude monitoring, with bias calibration.
A downlink beacon is received, with its phase data measured every second. Averaging 60 seconds of data makes one observation set, and observation sets are collected every hour for orbit estimation.

The length of the baseline and the wavelength of the downlink determine the ambiguity cycle of the interferometer, as noted in Chapter 6; in the present case it is around 0.1 deg in longitude. That is, the curve plotted in Figure 18.6 may be correct if it is shifted upward or downward by 0.1 deg times an arbitrary integer. This ambiguity is resolved if we know where the orbital slot of the satellite is.

We know that the interferometric phase contains some unknown bias. Calibrating the bias needs an external reference, for instance, asking the satellite operator to provide orbital elements. In the present case the reference was optical observations [2]. A telescope with a 35-cm diameter takes pictures of the satellite during the night, with a field of view of 1 deg. There are stars in the field of view, and their positions are found in a star catalog. The satellite direction is then determined in reference to the stars, to an accuracy of 0.001 deg. That is, the line connecting the telescope and the satellite is determined. This line crosses the geostationary sphere, whose radius is the nominal stationary radius, and the crossing point determines the longitude and latitude of the satellite to a good approximation. The optical data plotted in Figure 18.6 were obtained in this way. The phase bias was then adjusted so that the longitude from the interferometer coincides with the first three points of optical data. Once the phase bias has been calibrated, the interferometer and orbit estimation work well for longitude monitoring, as seen in the figure.
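The roughly 0.1-deg ambiguity cycle quoted above can be reproduced from the baseline length and the beacon wavelength; the beacon frequency and the slant range used below are nominal assumptions.

```python
import numpy as np

c = 299_792_458.0
wavelength = c / 11.45e9          # nominal Ku-band beacon frequency, m
baseline = 13.0                   # m
slant_range = 36_000e3            # station-to-satellite distance, m (site near the subsatellite point)
a_geo = 42_164e3                  # orbital radius, m

angle_cycle = wavelength / baseline                   # one 360-deg phase cycle, as a direction angle (rad)
longitude_cycle = angle_cycle * slant_range / a_geo   # the same cycle seen in orbital longitude
print(np.degrees(angle_cycle), np.degrees(longitude_cycle))   # about 0.12 and 0.10 deg
```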
Figure 18.7 shows a longer period of monitoring for the same satellite. An EW maneuver occurs on day 251, and the orbit estimation follows the subsequent orbital change without referring to any information about the maneuver. This is because the sequential estimation has been set up to consider a small force possibly acting on the satellite in any direction in the equatorial plane. Such a force is called a process noise, and its order of strength is given by a variance-covariance matrix. Setting this matrix properly, or tuning the processing as it is called, allows the orbit estimation to follow the orbital change if a maneuver occurs at an unknown time. The monitoring result shown in Figure 18.7 thus represents a cycle of EW station keeping, while the interferometer and orbit estimation simply run continuously without interruption. What is happening to the longitudinal control of a satellite is thus monitored clearly.
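The role of the process noise can be sketched as follows: at each prediction step the state covariance is inflated by a matrix Q that admits a small unmodeled acceleration in the equatorial plane only, so the estimation can follow a maneuver of unknown timing. The propagation model and the numbers are simplified illustrations, not the settings of the reported system.

```python
import numpy as np

def inflate_covariance(P, dt, accel_sigma=1e-7):
    """One prediction step of the covariance for the state [R, L, Z, Rdot, Ldot, Zdot]:
    P <- F P F^T + Q, with Q admitting a small random acceleration (m/s^2)
    in the R and L directions only (none in Z)."""
    F = np.eye(6)
    F[0, 3] = F[1, 4] = F[2, 5] = dt           # simple position-velocity propagation (a sketch)
    q = accel_sigma**2
    Q = np.zeros((6, 6))
    for i in (0, 1):                           # equatorial-plane components only
        Q[i, i] = q * dt**4 / 4
        Q[i, i + 3] = Q[i + 3, i] = q * dt**3 / 2
        Q[i + 3, i + 3] = q * dt**2
    return F @ P @ F.T + Q

P = np.diag([1e4, 1e6, 0.01, 0.04, 0.04, 1e-10])   # variances matching the earlier sketch
print(np.sqrt(np.diag(inflate_covariance(P, dt=3600.0))))
```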
18.5.2 Two Satellites

Figure 18.8 shows a case in which two satellites operating in the same orbital slot were monitored. The longitudes are plotted as relative values. The satellites are at the nominal longitude of 110 deg east; so, according to Figure 18.4, the baseline orientation had to be changed by a few degrees. This was done by moving the front-end mirrors, by about half a meter each, and adjusting their pointing axes. Phase calibration was made roughly, so that the satellite longitudes would come near their nominal longitude.
Figure 18.7 Long-term monitoring.
Figure 18.8 Monitoring two satellites in relative longitude.
Longitudes were then monitored for each satellite, and the results were made into relative values by subtraction.

The downlink beacon frequencies of the satellites were close to each other, with a difference of less than 10 MHz. In such a case the phase bias of the interferometer shows nearly the same value for the two satellites. If we make their longitudes into relative values by subtraction, the effect of the bias then vanishes. That is, relative longitude monitoring operates without relying on precise bias calibration. To check this supposition, optical observation data that were made into relative values are plotted in Figure 18.8. Here, the optical data exist both for day and night; this is because the optical data were processed by orbit estimation and orbit generation. The interferometer and the optical reference data agree with each other, showing that our supposition was correct.

If the two beacons have significantly different frequencies, for example, if one is in a higher part of the Ku band and the other in a lower part, the interferometric phase bias may differ; that is, the phase bias may be frequency dependent. This dependence can be evaluated by internal calibration. Referring to Figure 18.5, we set the RO to simulate one satellite beacon, and then the other, while blocking the downlinks from the satellites. Measuring the interferometric phase then allows us to calibrate the frequency dependence.
Correcting for the frequency dependence then allows us to monitor the relative longitude with proper accuracy.

Now, if we compare Figures 18.7 and 18.8, we see different patterns of orbital change. The changing pattern that looks like a parabola in Figure 18.7 comes mainly from the orbital perturbation caused by the nonspherical shape of the Earth, as discussed in Chapter 12. This perturbation has a common effect on two or more satellites if they are placed close to each other, which is why the parabola-like pattern is absent from the relative longitudes shown in Figure 18.8. The relative motion there has a linear trend, the satellites either coming closer to each other or moving away from each other. If we monitor relative longitudes in the style of Figure 18.8, we should remember that each satellite is undergoing orbital changes like those in Figure 18.7.
18.5.3 Different-Band Satellites

Figure 18.9 shows monitoring for two satellites, of which one has a beacon in the Ku band and the other a beacon in the C band. Phase calibration was done for each satellite separately. The absence of data for a few days for the Ku-band satellite is due to the absence of a beacon signal. The monitoring results suggest that the satellites have been doubly assigned to the same nominal position, as they use different frequency bands. This is why their orbital slots look narrower than the standard width of 0.2 deg, with the possibility that their slots are subdivided slots.
Figure 18.9 Monitoring two satellites in different frequency bands.
The monitoring results also suggest that the station keeping for one satellite and that for the other are coordinated, so as to maintain some space between the satellites for safety. Longitude monitoring thus allows us to understand what is happening to the satellite orbits in an overcrowded situation.

If the two satellites operate in different frequency bands, in principle we need two separate interferometers for monitoring. But we can in fact consider an interferometer that operates in different frequency bands with switching, which is the type of interferometer that was used in the present case. Switching the frequency bands while maintaining phase stability requires some technology, which will be detailed in Chapter 21.
References

[1] Kawase, S., "Interferometric Monitoring of Satellite Longitudes," Int. J. of Satellite Communications and Networking, Vol. 23, No. 1, 2005, pp. 67–76.

[2] Umehara, H., "Ground-Based Optical Scan and Parallel Orbit Determination of Near-Geosynchronous Objects," paper AIAA-2003-2373 presented at 21st Communications Satellite Systems Conference and Exhibit, Yokohama, Japan, April 15–19, 2003.
19 Range-Azimuth Tracking

We saw in the previous chapter that an interferometer with a single baseline makes orbit estimation partially possible in that it can be used to determine four orbital elements. Determining the full six elements requires adding one more baseline. Instead of doing that, however, we can consider using a different type of tracking in combination with a single-baseline interferometer. One idea is to combine ranging with the interferometer. This idea appeared first as a theory [1, 2] and was later put into practice [3], resulting in good orbit estimation performance [4]. In the following sections, we discuss how the ranging-interferometer combination makes orbit estimation possible, and review the interferometer hardware and its performance as discussed in published reports. We also consider the significance of the idea for safe station keeping of satellites.
19.1 Combined Tracking for Orbit Estimation

The distance of a satellite from an Earth station can be determined if a signal travels around the station-satellite path and its travel time is counted. This is called ranging, and it is an established way of tracking geostationary satellites. Let us examine how combining ranging with an interferometer allows for orbit estimation, by referring to Figure 19.1. The combined system is placed at an Earth station A, and the satellite has its nominal stationary position at S0. The relative coordinates R-L-Z are set similarly to those in Figure 17.6. A unit vector u1 is set at S0, along the same direction as line AS0.
Figure 19.1 Detection unit vectors for ranging and interferometer.
When the satellite is displaced away from S0, the range will show a change, and this change will be equal to the component of the displacement along the vector u1. So, the vector u1 is regarded as the detection unit vector for ranging. If u2 denotes the detection unit vector of the interferometer baseline, then u2 is orthogonal to the line AS0. The vectors u1 and u2 are therefore orthogonal to each other, and this is the important point of the combined system. We can reason that orbit estimation works well by following the same reasoning as that based on Figures 17.6 and 17.7.

Note that there are singular cases where the above reasoning does not hold. They include, in reference to Figure 19.1, the following two:

• Vectors u1 and u2 both lie in the R-L plane. This occurs if the Earth station is on the equator and the baseline points east-west. In this case the north-south motion of the satellite cannot be determined.

• Vectors u1 and u2 both lie in the R-Z plane. This occurs if the Earth station is at the same longitude as the satellite and the baseline points north-south. In this case the east-west motion of the satellite cannot be determined.

These cases must be excluded from our consideration. A desirable orientation for the baseline is near orthogonal to the line of sight to the satellite, because such an orientation gives a longer effective baseline. This corresponds to AB in Figure 17.3, which is for detecting the azimuth of the satellite; in this sense the combined system may be called a range-azimuth tracking system. The above-noted singularities will not occur if the baseline orientation is so chosen. A baseline placed in the geometry of azimuth tracking will also suffer no errors from uncertainty in the atmospheric refraction modeling, as was seen in Chapter 6.
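For geometric intuition, the sketch below forms u1 along the line of sight and u2 for a horizontal east-west baseline (u2 is the component of the baseline direction perpendicular to the line of sight), then prints the components that would reveal the singular cases. The station location is invented.

```python
import numpy as np

A_GEO, R_EARTH = 42_164e3, 6_378e3

def unit(v):
    return v / np.linalg.norm(v)

# Invented geometry: station at 35 deg N, 10 deg west of the satellite, east-west baseline.
lat, dlon = np.radians(35.0), np.radians(-10.0)
site = R_EARTH * np.array([np.cos(lat) * np.cos(dlon), np.cos(lat) * np.sin(dlon), np.sin(lat)])
sat = np.array([A_GEO, 0.0, 0.0])              # satellite on the x-axis of this frame

u1 = unit(sat - site)                          # ranging detects motion along the line of sight
east = np.array([-np.sin(dlon), np.cos(dlon), 0.0])   # local east = horizontal baseline direction
u2 = unit(east - (east @ u1) * u1)             # baseline detection unit vector, perpendicular to u1

# In this frame the R, L, Z axes at the satellite are x (radial), y (eastward), z (northward).
print("u1.u2 =", round(u1 @ u2, 6))            # ~0: the two detections are orthogonal
print("Z components:", u1[2], u2[2])           # both near zero would be singular case 1
print("L components:", u1[1], u2[1])           # both near zero would be singular case 2
```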
Figure 19.2 Range determines satellite longitude.
19.2 Merit of Combined Tracking

Ranging has an important effect, as can be seen in Figure 19.2. Suppose our satellite is at S1 in the orbit. Its range is measured at station A as ρ1, where A and S1 are supposed to be at different longitudes. The range comprises two parts: a constant part, and an oscillatory part coming from the oscillatory motion of the satellite. We consider here the constant part only, averaging out the oscillatory part so that ρ1 denotes the constant part. Suppose the satellite moves from S1 to S2 at a slightly different orbital longitude. The range then increases from ρ1 to a greater ρ2. That is, different ρ1 and ρ2 correspond to different satellite positions S1 and S2. Ranging is therefore able to determine the satellite longitude if the ranging station and the satellite's nominal position are at different longitudes.

The combined system thus makes orbit estimation possible without relying on external references for phase calibration. In more practical terms, this means that we may set the phase bias of the interferometer as an unknown parameter to be estimated together with the orbital elements. The satellite longitude is then estimated by relying on the absolute value of range, that is, on the accurate calibration of ranging. This must be done by correct positioning of the ranging antenna and by correct calibration of signal delays in the Earth station and in the satellite.
19.3 Interferometer Hardware and Performance

Figure 19.3 outlines the setup of the interferometer as described in published reports [3, 4]. Each receiving route operates as follows:
Figure 19.3 Interferometer setup. Operational frequency: Ku band (12 GHz); LNA: low-noise amplifier; OT: optical transmitter; OR: optical receiver; D/C: downconverter; LO: local oscillator; Phase: phase-measuring unit.
The downlink microwave at the LNA output is converted by an optical transmitter (OT) into an optical signal, which is transmitted through a phase-stable optical cable to a central site. At the central site, the signal is converted by an optical receiver (OR) back into a microwave signal, which is then downconverted for phase measurement. Phases are measured by the same kind of FFT-based processing described in Chapter 4. The use of optical cables allows the downconverters to be placed closely side by side, which in turn allows short cables to distribute common LO signals. The LNA and OT are placed in a feed-unit box whose internal temperature is held constant to within ±3°C. These designs ensure phase stability while allowing the baseline to be as long as 250m.

The operation and performance of the interferometer are reported as follows [3, 4]. Measured phase data are preprocessed before entering orbit estimation. Phases may vary beyond the range of 0 to 360 deg as the satellite moves. If the phase changes, for example, from 360 to 361 deg, the measured phase becomes 1 deg; or if the phase changes from 0 to −1 deg, the measured phase becomes 359 deg. That is, the measured phase may show a leap of 360 deg. This kind of leap is reconnected by measuring the phase continuously and tracing its changing pattern. The reconnected phase data, which make a curve like a sinusoid with a period of 1 day, are input to orbit estimation. One cannot prevent an unknown constant from entering the reconnected phase data, but this is not a problem because it will be estimated as an unknown parameter when the orbit estimation is made.
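Reconnecting the 360-deg leaps is the usual phase-unwrapping operation. A minimal version, assuming the phase is sampled often enough that the true change between samples stays well below 180 deg, is:

```python
import numpy as np

def reconnect(phases_deg):
    """Reconnect phases measured in [0, 360) into a continuous curve."""
    phases = np.asarray(phases_deg, dtype=float)
    steps = np.diff(phases)
    # Whenever the measured phase leaps by more than 180 deg, undo one 360-deg cycle.
    corrections = -360.0 * np.cumsum(np.round(steps / 360.0))
    return np.concatenate([[phases[0]], phases[1:] + corrections])

print(reconnect([350.0, 355.0, 2.0, 8.0, 355.0, 350.0]))
# -> [350. 355. 362. 368. 355. 350.]
```

NumPy's unwrap function performs the same operation on phases expressed in radians.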
Phases are measured to a resolution of 5% of a wavelength, which, for the given baseline length, corresponds to a resolution of 0.0003 deg in the angle pointing to the satellite. As a result, the "observed minus calculated" (O−C) value for the interferometer was less than 0.0002 deg in orbit estimation. This is an improvement of an order of magnitude or more compared with the azimuth and elevation angles obtained from conventional satellite-pointing antennas. The combination of ranging with an interferometer is thus proven to be a promising idea for orbit estimation.
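The quoted angular resolution follows from the phase resolution and the 250-m baseline (the wavelength is a nominal Ku-band value):

```python
import numpy as np

wavelength = 0.025          # roughly 12 GHz, m
baseline = 250.0            # m
phase_resolution = 0.05 * wavelength            # 5% of a wavelength, as a path length
angle = np.degrees(phase_resolution / baseline) # small-angle approximation
print(angle)                                     # about 0.0003 deg
```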
19.4 Station Keeping with Safety Monitoring

What would happen if the combined ranging-interferometer system imported the idea of longitude monitoring discussed in the previous chapter? A possible answer is illustrated in Figure 19.4. A combined system is placed at the control station for satellite S1. The baseline of the interferometer is set to the particular orientation specified for longitude monitoring. This orientation will differ from that of the azimuth-detecting baseline, but the difference is not large, so practically there is no problem with regard to effective baseline length. Suppose there is another satellite, S2, which is so close to S1 as to worry the control station. The interferometer receives signals from satellite S1 and from satellite S2 if they are in the same frequency band. This can occur if, for example, S1 uses an upper part of the Ku band and S2 uses a lower part of the same band as a result of frequency coordination.
Figure 19.4 Combined system with orbital safety monitoring.
The interferometer then monitors the orbital longitudes of S1 and S2 in a relative manner. The control station is thus able to pay attention to the longitude of S2 for orbital safety while doing station keeping for S1. The control station for S2 may have the same kind of system so as to monitor the longitude of S1. This is good for safe station keeping, particularly in cases where S1 and S2 belong to different operators or different nations, such that close coordination between control stations is difficult.
References

[1] Soop, E. M., Handbook of Geostationary Orbits, Norwell, MA: Kluwer, 1994, p. 230.

[2] Soop, E. M., "One-and-a-Half Tracking Systems; Minimum Requirement for Geostationary Orbit Determination," paper presented at International Symposium on Space Dynamics, CNES, Toulouse, June 19–23, 1995.

[3] Pedersen, F. H., "Interferometer for High Precision Orbit Determination," Proc. Data Systems in Aerospace, Prague, June 2–6, 2003, ESA SP-532, pp. 39.1–39.10.

[4] Rosengren, M., J. De Vicente-Olmedo, and F. Pedersen, "Keeping Track of Geostationary Satellites," ESA Bulletin, No. 119, August 2004, pp. 64–68.
20 Differential Tracking

We saw in Chapter 18 that the longitude-tracking interferometer was able to operate without external references for phase calibration if its purpose is to track relative longitudes. That is, relative tracking is a simplified way of using the interferometer. Pursuing this idea of simplification leads us to the concept of differential tracking: we estimate solely the relative motion of satellites, while paying no attention to the motions of the individual satellites. The setup and performance of an experimental differential tracking interferometer [1] are reviewed in this chapter. We also discuss possible applications of differential tracking.
20.1 Differential Tracking Concept

In this book thus far we have considered interferometers that have phase-stabilizing devices. The interferometer in Chapter 18 uses plane mirrors, and that in Chapter 19 uses optical transmission; those devices allow the downconverters to be placed side by side for stable phase measurements. What would happen if an interferometer had no such devices for phase stabilization, but instead had long baselines with coaxial cables simply connecting the antenna sites? The changing temperatures of the cables would surely cause phase errors, and thus errors in orbit estimation. If, however, the interferometer tracks two or more satellites that exist in the beam of the receiving antennas, and if those temperature-caused errors are common to the satellites, then we can expect the orbit estimation errors to be common to the satellites, meaning that their relative motions will be known correctly, without errors.
In other words, taking differences between tracking observations for two or more satellites has the effect of common error cancellation. This is referred to as differential tracking, and it is worth considering if we are interested primarily in estimating the relative orbital motions of satellites.

The common error mentioned above includes atmospheric refraction. Because the microwaves from the satellites travel along virtually the same path through the atmosphere, the refraction is common to them, and its effects will be removed as common errors. This, too, is important for differential tracking.
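A toy demonstration of common error cancellation: two synthetic phase curves share the same cable-like drift, which disappears in the difference while the relative orbital signal survives. All signals are invented.

```python
import numpy as np

t = np.linspace(0.0, 86_400.0, 1_441)                 # one day, 1-min steps
w = 2 * np.pi / 86_164.0

orbital_a = 40.0 * np.sin(w * t)                      # phase from satellite A's motion (deg, invented)
orbital_b = 40.0 * np.sin(w * t - 0.02) + 5.0         # satellite B: slightly offset orbit (invented)
common_error = 15.0 * np.sin(2 * np.pi * t / 7_200.0) # temperature-like drift common to both

phase_a = orbital_a + common_error
phase_b = orbital_b + common_error
differential = phase_a - phase_b                      # the cable drift cancels here

print(np.max(np.abs(differential - (orbital_a - orbital_b))))   # ~0: only the relative motion remains
```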
20.2 Interferometer Hardware

The reported case of the differential tracking interferometer is as follows. It has two baselines, of which one has the setup illustrated in Figure 20.1. The baseline has a length of 110m, and the antennas are 1.2m in diameter, receiving beacons from two satellites in the Ku band (11.7 GHz). The IF signals, in a 90-MHz band, are transmitted through cables from the antenna sites to the phase-measuring site, while LO signals are supplied to the antenna sites at 100 MHz, all cables being conventional coaxial cables. The measuring site is near one antenna site, so the cables for one side are much longer than those for the other side, and the cables are routed through ground trenches without particular consideration of temperature. Phases are measured for the two beacons with the same timing so that common error cancellation works best. The other baseline, 130m long, has the same setup, with one antenna site and the phase-measuring site used in common. The baselines are placed roughly in the geometry of azimuth-elevation tracking as illustrated in Figure 17.3.

Figure 20.2 shows examples of interferometric phase data obtained from the 110m baseline.
Figure 20.1 Interferometer baseline for differential tracking. LNC: low-noise converter; D/C: downconverter; LO: local oscillator; Phase: phase-measuring unit.
Figure 20.2 Tracking data for two satellites; separate and differential.
The panels marked "A" and "B" are for the separate satellites, their beacon frequencies differing by 7 MHz. Looking at ∗1 in Figure 20.2, the data show a local change. Such a change cannot originate from the motion of the satellite, because it is too quick compared with the orbital period of 1 day. So this change is regarded as an error from temperature variations in the cables or from some fluctuation in hardware components. Similar errors are seen at ∗2, ∗3, and elsewhere. Tracking data with such errors present everywhere would not yield quality orbit estimation. These errors, however, vanish when the data are made into differential data, as shown in panel "A−B." Note in this panel that the phase leaps of 360 deg have been reconnected. Common error cancellation is thus working, even though the interferometer has no devices for phase stabilization. Data from the other baseline showed much the same result. To what extent the error cancellation has actually worked will be known from the result of orbit estimation.
20.3 Orbit Estimation

If our orbit estimation is for the relative motion of satellites, then its process must change from the standard one, illustrated in Figure 16.1, to a different one, illustrated in Figure 20.3. Interferometric observation data for satellites A and B are made into differential data, marked by the O. Correspondingly, the calculated data for satellites A and B are made into differential data, marked by the C. The quantity O−C thus operates through a correction coefficient to improve the orbit of satellite B. Meanwhile the orbit of satellite A is assumed to be known; in the present case its orbital elements were provided by the satellite operator. When the orbit estimation is finished, the relative motion is the difference between orbit A and orbit B.

Orbit estimation processing must consider the problem of phase ambiguity. The period of ambiguity for the given baseline length and for the Ku-band beacon frequency is around 0.01 deg in satellite directional angles. Correspondingly, the orbit estimation has multiple solutions that differ from each other by 0.01 deg in satellite orbital position. This ambiguity is resolved by referring to orbital information for satellite B, although this information does not need to be much more precise than 0.01 deg of satellite orbital position. In the reported case this information also came from the satellite operator.

When the orbit estimation had converged, the O−C converted into satellite directional angle was at a level of 0.00012 deg, one sigma [1]. Three times the one-sigma level is practically the maximal value, which becomes 0.00036 deg. Meanwhile, the interferometer we saw in Chapter 19 had an orbit estimation result with O−C less than or equal to 0.0002 deg in satellite directional angles.
Figure 20.3 Relative orbit estimation using differential tracking. DIF: Where data are made into differential data.
These two levels of O−C are equivalent if the different baseline lengths are taken into account; one baseline is longer than the other by a factor of around 2. They became equivalent, evidently, because common error cancellation worked well in the differential interferometer. The preceding comparison of orbit estimation results suggests that the accuracy is essentially the same for the relative satellite position obtained from a differential interferometer and for the absolute orbital position obtained from a phase-stabilized interferometer, if the baseline lengths are equal; this is a reasonable summary of the results.
20.4 Possible Applications

We have already discussed the scenario in which an orbital slot can be divided in longitude into subslots if two or more satellites must be placed there. If even more satellites need to be placed, however, those subslots become too narrow for the satellites to perform east-west station keeping. At that point, the simple, one-dimensional dividing of longitude should be abandoned in favor of a more advanced policy: the volume of an orbital slot is divided in three dimensions in a dynamic manner, which is referred to as inclination and eccentricity separation [2]. Observed from the Earth, satellites following this policy look as if they form a ring like a necklace, with the satellites moving in synchronism along the ring; they also resemble synchronized swimmers making a ring-like formation. The formation of satellites as a whole is then put under station keeping inside one orbital slot.

Maintaining this kind of three-dimensional formation is assumed to be possible if the satellites are controlled by one and the same operator, because any error in orbit estimation is then common to the satellites, and the relative orbital control for formation keeping does not suffer from the error. This supposition fails, however, if the satellites are from different operators or different nations. If the key to satellite formation keeping is the relative control of their orbits, then differential tracking provides the means for relative orbit estimation. The interferometer has simple hardware and works if there are downlink beacons from the satellites in a common frequency band. Because the operators know the nominal orbits of the satellites, these can be used to resolve the ambiguity problem. Orbit estimation using sequential processing then allows operators to determine the present positions and motions of their satellites relative to others, thus securing the formation and station keeping of the satellites.

Satellites in the future may have on-board hardware that allows them to detect the relative motions of neighboring satellites.
For example, ranging between satellites with additional angle observations determines the relative orbital motion [3]. Carrying a GPS receiver is also a possible means of on-board orbit estimation [4]. How quickly the use of these types of on-board hardware will spread, if they spread at all in the cost-competing world of geostationary satellites, is a sensitive question. So, for the time being, ground-based differential tracking will be practical if the need arises to monitor the relative orbital motion of satellites.
References

[1] Kawase, S., and F. Sawada, "Interferometric Tracking for Close Geosynchronous Satellites," J. of Astronautical Sciences, Vol. 47, No. 1, 1999, pp. 151–163.

[2] Soop, E. M., Handbook of Geostationary Orbits, Norwell, MA: Kluwer, 1994, pp. 136–140.

[3] Kawase, S., "Intersatellite Tracking Methods for Clustered Geostationary Satellites," IEEE Trans. on Aerospace and Electronic Systems, Vol. 26, No. 3, 1990, pp. 469–474.

[4] Vetter, J. R., "Fifty Years of Orbit Determination: Development of Modern Astrodynamics Methods," Johns Hopkins APL Technical Digest, Vol. 27, No. 3, 2007, pp. 239–252.
21 Rotary-Baseline Interferometer

If we use an interferometer for monitoring the overcrowded satellite orbit, what will the purposes of monitoring ultimately be? They will include scanning the orbital arc from end to end to see which satellite is where, in order to correct and update the master register; determining and alerting if any satellites are coming too close to each other in the orbit; and collecting orbital information for any satellites in need of action in overcrowded situations. The interferometer must then be able to point its beam to any orbital position and determine satellite positions without ambiguity, and it will need to operate in two or more frequency bands. Is such an interferometer possible? This chapter introduces an interferometer that serves these purposes: an interferometer with a baseline that moves mechanically. In the following sections, we discuss the principle and design of such an interferometer, as published in a report [1], along with its operation and error calibration in detail.
21.1 Rotary Baseline

The problem of ambiguity is a fundamental problem that arises in any interferometer. Theoretically speaking, it can be solved by using a number of antennas to form longer and shorter baselines, as discussed in Chapters 6 and 7. Practically, however, it is no easy task to ensure phase stability for all of the baselines involved. Instead, we can consider using a single, movable baseline with two antennas, as illustrated in Figure 21.1.
Figure 21.1 Rotary baseline concept.
The antennas, A1 and A2, are mounted on a horizontal, rotary arm. When the arm rotates, with the angle θ slowly increasing, the antennas move. As they move, the phase φ measured between the antennas will vary, as marked by the “A” in Figure 21.2. The phase shows leaps of 360 deg because it is measured in the range of 0 to 360 deg. If the phase is measured continuously, we can reconnect the leaps as noted in Chapters 19 and 20. The reconnected phase plots a sinusoidal curve like the one marked “B” in Figure 21.2. The curve has a peak, and its position in θ determines the azimuth of the satellite, while the peak-to-peak amplitude of the curve determines the elevation. Note that the reconnection adds an unknown constant to the phase, a bias equal to 360 deg times an unknown integer.
Figure 21.2 Phase leaps and their reconnection.
This bias is not a problem, however, because it is the θ position and the shape of the curve, not the absolute values of φ, that determine the azimuth and elevation. The problem of ambiguity is thus resolved.
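As a minimal illustration of the leap reconnection, the following Python sketch (not from the book; the input array is a hypothetical series of phases measured at successive arm angles) restores the 360-deg leaps:

    import numpy as np

    # Minimal sketch: reconnect 360-deg leaps in a series of phases measured
    # at successive arm angles. Input values are assumed to lie in [0, 360) deg.
    def reconnect(phi_meas_deg):
        phi = np.asarray(phi_meas_deg, dtype=float)
        out = phi.copy()
        offset = 0.0
        for i in range(1, len(phi)):
            step = phi[i] - phi[i - 1]
            if step > 180.0:        # an upward wrap hides a downward leap
                offset -= 360.0
            elif step < -180.0:     # a downward wrap hides an upward leap
                offset += 360.0
            out[i] = phi[i] + offset
        return out                  # reconnected phase, plus the unknown bias

The output still carries the unknown bias of 360 deg times an integer mentioned above, which does not affect the azimuth-elevation determination.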
21.2 Rotary Baseline with Mirrors

Although the concept illustrated in Figure 21.1 looks simple, it does have difficulties. Placing the weight of the antennas at the ends of the arm may degrade the mechanical precision. When the arm rotates, the cables that connect the antennas to the receiving equipment must bend and twist, thus causing errors in the signal phases. To avoid these difficulties, the concept is modified as in Figure 21.3.

In the modified concept, a downlink microwave is first reflected by mirror M1 and then by mirror M2, before being directed to a fixed antenna A1. The same process occurs at mirrors M3, M4, and antenna A2. Mirrors M1 and M3 are on the rotary arm, while M2 and M4 are at fixed positions. The mirrors are all planar. When the arm rotates, the pointing angles of the mirrors are controlled so that the microwaves are guided correctly into the fixed antennas. The phase delay along the reflected path M1–M2–A1 and that along M3–M4–A2 are calculated from the geometric path lengths. If these delays are removed from the measured phase, one can regard M1–M3 as the baseline. The baseline is thus able to rotate in the horizontal plane.

The interferometer can be operated in two frequency bands if it is set up as illustrated in Figure 21.4. There is another pair of fixed antennas, A1′ and A2′, prepared for another frequency band.
Figure 21.3 Mirror-guided rotary baseline.
Figure 21.4 Switching the frequency band.
Mirrors M2 and M4 tilt so as to guide the microwaves either into A1 and A2 or into A1′ and A2′, thus switching the frequency band. Still another frequency band may be added if there is space for more antennas.
21.3 Rotary-Baseline Interferometer

On the basis of the modified concept, a rotary-baseline interferometer for the Ku and C bands was built, as shown in Figure 21.5. Mirrors M2 and M4 are not shown in this figure because trees obstruct the view. A scale model giving the complete view is shown in Figure 21.6. Note that there is only one pair of fixed antennas for the Ku band.
Figure 21.5 Ku/C-band rotary baseline interferometer. (Courtesy of NICT.)
Figure 21.6 Scaled model providing a complete view. (Courtesy of NICT.)
Figure 21.7 Signal routes for one frequency band. LNA: low-noise amplifier; D/C: downconverter; Phase: phase-measuring unit; H: hybrid; RO: reference oscillator; LO: local oscillator.
Signal routes are shown for one frequency band in Figure 21.7; in reality, different sets of LNA and D/C are used for the different frequency bands. The antennas through the downconverters are placed side by side, so the signals from the LO and RO are distributed using short cables. Specifications for the interferometer are summarized in Table 21.1. The mirror-antenna distances M2–A1 and M4–A2 are 16m, and the mirror-mirror distances M1–M2 and M3–M4 vary from 15 to 28m as the arm rotates. These are short distances, so the reflected microwave beams propagate with negligible spreading due to diffraction. Inserting the mirrors thus causes negligibly small losses.
Table 21.1 Interferometer Specifications
Frequencies: Ku band (11.70–12.75 GHz); C band (3.7–4.2 GHz)
Fixed antennas: 1.8m-diameter, linear polarization; G/T = 23.2 dB/K (Ku), 16.8 dB/K (C)
Rotary arm: Length: 13m; driving speed: 1 deg/s; angle precision: 0.001 deg
Plane mirrors: Size: 2m × 2m; driving speed: 1 deg/s; angle precision: 0.01 deg
Phase measurement: Bandwidth: 20 MHz; FFT points: 1,024; FFT repetition rate: 20,000/s
Site location: 140.7 deg E, 36.0 deg N
Note the following about the geometry of the mirrors in Figure 21.3. Mirrors M1 and M2 are paired, and M3 and M4 are paired, regardless of the arm rotation angle. The microwave paths M1–M2 and M3–M4 do not cross each other in the figure, although they do cross in some regions of the arm rotation angle. This does not matter, because the microwaves do not interact with each other when their paths cross. What does matter is the blocking of a path by a mirror: at one particular arm position, mirror M1 comes into the path M3–M4 and blocks it, making phase measurement impossible. This problem of mirror-path blocking is avoided by properly operating the arm, as discussed later.

Phase detection is based on the design and process discussed in Chapter 4. Owing to the FFT repetition rate, the downlink SNR improves equivalently by 43 dB (see Chapter 5). The equivalent SNR improves even more if a band-spread signal is chosen as the tracking target, by up to 30 dB if the signal is spread fully across the bandwidth. This allows the fixed antennas to be small in diameter. Too small a diameter would not suit our purpose, however, because too wide a beam would receive other satellites besides the target satellite. A beam width of 1 deg was chosen for the Ku band, and correspondingly the antenna diameter was specified as in Table 21.1. As a result, the interferometer has ample SNR margins, so that it detects the phases even if the target downlink contains only transponder noise with no communication signals present, or if the downlink with communication signals comes from a sidelobe of the satellite antenna.

Table 21.1 specifies the precision of angle pointing differently for the arm and for the mirrors, for the following reason.
Figure 21.8 Effect of mirror-pointing error.
Suppose, in Figure 21.8, that a microwave propagates from satellite S to antenna A via a mirror. The mirror makes an image of antenna A at A′, so we can regard SA′ as the propagation path. If the mirror has an error δ in its pointing angle, the antenna image A′ moves to A″. The path SA″ becomes shorter than SA′, so the measured phase will have an error of 2π(SA′ − SA″)/λ, where λ is the wavelength. This error becomes approximately 4πdδ²/λ for a distant satellite and a small δ, with d being the mirror-antenna distance. Meanwhile, an error ∆ in the arm rotation angle causes a phase error of the order of 2πL∆/λ, with L being the arm length. The error in the mirror pointing angle is thus less critical than that in the arm rotation angle.

The mirrors are designed so that the elevation rotation axis lies on the mirror's surface and the azimuth/elevation rotation axes cross each other at the center of the surface. This cross point is referred to when determining the geometry of the mirrors.

Polarization angles need care when mirrors reflect the microwave. For example, if the downlink microwave is horizontally polarized, its polarization angle may differ from the horizontal after being reflected by a mirror, and the difference varies when the mirror moves. This may cause an error in the phase measurement if the downlink microwave is circularly polarized. To avoid such an error, the antenna polarization angles must be adjusted every time the orientation of the arm is changed, or else corrections must be applied to the measured phases after collecting the data.

The arm and the mirrors are geometrically symmetric, so when they rotate around their axes the gravity forces acting on them are always balanced. This is good for mechanical precision, and as a result, very small motors smoothly drive the whole mechanism.
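The following rough numeric check compares the two phase-error expressions given earlier in this section, using the Table 21.1 precisions; the mirror-antenna distance d ≈ 16m and the Ku-band wavelength λ ≈ 2.5 cm are illustrative assumptions.

    import math

    # Rough numeric comparison of the mirror-pointing and arm-angle phase errors.
    lam = 0.025                     # Ku-band wavelength [m], assumed
    d, L = 16.0, 13.0               # mirror-antenna distance, arm length [m]
    delta = math.radians(0.01)      # mirror-pointing precision (Table 21.1)
    Delta = math.radians(0.001)     # arm-angle precision (Table 21.1)
    err_mirror = 4 * math.pi * d * delta**2 / lam     # approx 4*pi*d*delta^2/lambda
    err_arm = 2 * math.pi * L * Delta / lam           # approx 2*pi*L*Delta/lambda
    print(math.degrees(err_mirror), math.degrees(err_arm))

The result is roughly 0.01 deg of phase from the mirror against a few degrees from the arm, which is why the mirror specification can be ten times coarser than the arm specification.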
21.4 Operation and Data Processing

If our purpose is to determine the azimuth and elevation of a satellite, the interferometer operates as follows. First, choose a target satellite at some supposed nominal orbital longitude, with its latitude assumed to be 0. Point the receiving beam to that orbital position. Observe the downlink spectrum, find a target signal for tracking, preferably a beacon, and determine its frequency. If using a beacon is not possible, then find some band-spread signal and determine its center frequency. Next, advance the arm orientation step by step through angles θ1, θ2, …, θN, for one revolution in N steps, while controlling the mirror angles so that the receiving beam always points to the target orbital position. At each step θi, measure the phase and record it as φi(M). If one step is 15 deg and the phase measurement takes 10 sec at each step, then it takes around 15 min for the arm to make one revolution; this is called rotary-mode operation.
The step angles θi are chosen so that none of them falls at the particular arm position where the mirror-path blocking mentioned earlier occurs. The measured data are processed as follows. Define the interferometer geometry as illustrated in Figure 21.9, where θ is the arm rotation angle from the north. Prepare a function that represents geometrically the relative path length:
f(α, ε, θ) = [Satellite–M3–M4 path length] − [Satellite–M1–M2 path length]    (21.1)
where α and ε are the azimuth and elevation of the satellite. Actually, there are path lengths M2–A1 and M4–A2 as seen in Figure 21.3, but they are constant so their relative length can be treated as being a part of the constant bias that will be introduced later. The nominal azimuth and elevation of the satellite, (α0, ε0), are known to correspond to the supposed target orbital position. Using the path length function, set
φi(O) = φi(M) − (2π/λ) f(α0, ε0, θi),  i = 1, 2, …, N    (21.2)
where λ is the wavelength of the received signal. If the satellite is exactly at its nominal orbital position, then φi(O) will not vary with i. Conversely, an off-nominal satellite position makes φi(O) undulate with i; hence, φi(O) is called the off-nominal phase. An example of the off-nominal phase for a Ku-band satellite is shown in Figure 21.10, where the phase leaps by 360 deg at the asterisks (∗). Reconnecting these leaps yields the reconnected off-nominal phase φi(1), which looks like the plot shown in Figure 21.11. From the reconnected off-nominal phase, set
Figure 21.9 Interferometer geometry.
Figure 21.10 Off-nominal phase, Ku band.
Figure 21.11 Reconnected off-nominal phase.
φi(2) = φi(1) + (2π/λ) f(α0, ε0, θi),  i = 1, 2, …, N    (21.3)
This is called the reconstructed phase. To the reconstructed phase, fit a theoretical curve defined by
φi(T) = (2π/λ) f(α, ε, θi) + C    (21.4)
where α, ε, and C are parameters that are determined using the least-squares method. Precisely speaking, the azimuth α and elevation ε vary with time. Normally they vary little during one rotary-mode run if the satellite is stationary, so we assume they are constant.
If they are varying fast, we add two parameters that denote the variation rates of α and ε, to determine five parameters by the fitting. The parameter C, when determined by the fitting, will include the unknown bias due to phase ambiguity, the relative phase delay between paths M2–A1 and M4–A2 in Figure 21.3, and any other phase biases existing in the interferometer.

Figure 21.12 shows an example of the fitting residual, φi(2) − φi(T). The smaller the residual, the better the interferometer has operated. Here the residual is 8.9 deg in RMS, which corresponds to a length of 0.6 mm for the Ku band, and this is the equivalent mechanical precision with which the interferometer has operated. The azimuth α and elevation ε of the satellite are thus determined. Note that the elevation includes the effect of atmospheric refraction; applying the correction of (6.3) yields a geometrical elevation.

If the nominal α0 and ε0 were not accurate enough, the off-nominal phase shown in Figure 21.10 would undulate more, which would cause difficulty when reconnecting the phase. If this occurs, one may have to use a reconnection tool like the one illustrated in Figure 21.13. The tool plots the off-nominal phase φi(O) in the sense of φi(O) + 360n deg, with n = ±1, ±2, ±3, ..., so there are multiple points for the off-nominal phase at each i. Reconnecting the phase requires determining how the phase points connect when we go from i to i + 1. In Figure 21.13(a), the connections between phase points are not clear because the nominal α0 and ε0 were not very accurate. We then try to correct the supposed satellite position in longitude and latitude by making trial corrections in a searching manner. This is shown in Figure 21.13(b), where the latitude has been corrected by one degree. The connections between phase points are now clear, so the reconnection is done at once. Such a case occurs mostly when the satellite has an increased orbital inclination. Reconnecting the phase may thus require a manual operation, which takes time.
Figure 21.12 Residual of fitting.
Figure 21.13 Tool for search and reconnection.
In any case, monitoring an unknown satellite takes time, because it starts with searching for a beacon or any other usable target signal and proceeds to determining and setting its polarization. If the reconnection takes time, it will be a small part of the entire process of monitoring.
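The processing of (21.2) through (21.4) can be summarized in a short sketch. Here path_length stands for the geometric function f of (21.1) and is assumed to be available; the code is illustrative, not the actual software of the interferometer.

    import numpy as np
    from scipy.optimize import least_squares

    # Illustrative sketch of the rotary-mode processing (21.2)-(21.4).
    # path_length(az, el, thetas) plays the role of f(alpha, epsilon, theta).
    def rotary_mode_fit(phi_meas_deg, thetas, az0, el0, lam, path_length):
        k = 360.0 / lam                                            # deg of phase per unit length
        phi_off = phi_meas_deg - k * path_length(az0, el0, thetas) # (21.2) off-nominal phase
        phi_1 = np.degrees(np.unwrap(np.radians(phi_off)))         # reconnect 360-deg leaps
        phi_2 = phi_1 + k * path_length(az0, el0, thetas)          # (21.3) reconstructed phase

        def residual(p):                                           # model of (21.4)
            az, el, c = p
            return phi_2 - (k * path_length(az, el, thetas) + c)

        sol = least_squares(residual, x0=[az0, el0, 0.0])
        az, el, c = sol.x
        return az, el, c, residual(sol.x)   # fitted angles, bias C, fitting residual

The RMS of the returned residual corresponds to the fitting residual shown in Figure 21.12.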
21.5 Orbit Estimation

One set of azimuth and elevation is obtained by one rotary-mode operation. Repeating the rotary mode continuously to collect azimuth-elevation data sets would make orbit estimation possible, but this is too much for the driving mechanism of the interferometer. Orbit estimation should preferably work with less motion of the arm and its mirrors. One idea is to limit the arm motion to two positions, for example, 135 deg (position 1) and 225 deg (position 2). The arm moves from position 1 to position 2, as illustrated in Figure 21.14, and phase data are collected before and after the arm moves, that is, at ∗1 and at ∗2. One hour later, the arm moves back from position 2 to position 1, with phase data again being collected, at ∗3 and at ∗4. This is repeated every hour, with the arm moving back and forth between the two positions. Together, the two arm positions form a figure X; hence this is called the X mode. Positions 1 and 2 are good positions because the mirror-path blocking mentioned earlier does not occur there.
Figure 21.14 X-mode operation. Phase data are collected at the “o” marks.
The arm moves from one position to the other in a few minutes. So the set of data collected at ∗1 and ∗2, for example, is effectively data from two baselines separated by 90 deg, which is good for orbit estimation. The measured data are sent to the orbit estimation each time they are collected, so that the estimation runs sequentially. The arm-angle data θ are also sent to the orbit estimation, because the tracking model must know the exact arm angle at the time of phase measurement.

Before starting the X mode, the azimuth and elevation of the satellite must be determined by the rotary mode. This set of azimuth and elevation is used, on starting the X mode, for identifying the right solution out of the multiple solutions induced by the phase ambiguity. That is, the rotary mode serves as a starter for the X mode. The parameter C determined by the rotary-mode starter is also referred to by the tracking model in the X-mode estimations. In the rotary mode, the output elevation was made into a geometrical elevation by correcting for the atmospheric refraction. In the X mode, on the other hand, the atmospheric correction is done in the orbit estimation, because its tracking model contains a model of atmospheric refraction.
Figure 21.15 X-mode orbit estimation for two satellites. Relative satellite positions are shown for every hour, in stereograph. Axes: E for east longitude, N for north latitude, with 0.02-deg divisions. Altitude axis points to this side of the paper surface.
A case of X-mode orbit estimation is shown in Figure 21.15. Two satellites were tracked in the Ku band, and an orbit was estimated for each. The satellite motions were then converted into relative motions and plotted in the figure. Two similar-looking graphs are plotted in the box. Look at the left-side graph with your left eye and, at the same time, at the right-side graph with your right eye. It requires a little skill, but if it goes well you will see the relative motion of the satellites in three dimensions. The cross point of the E/N axes marks the collision point, and the stereograph shows clearly how the satellites avoid a collision by maintaining a certain separation distance between them. This is just one particular way of displaying the results for reference; the point is that the X mode is able to monitor the exact orbital motions of satellites, in real time, with precision.

We discussed a differential interferometer in Chapter 20, which was used solely to estimate the relative orbital motion. In the present case, the motions are estimated for each satellite separately before being converted into relative motions. If the orbital elements of a satellite are required, they are determined as follows. First, run the X mode and monitor the satellite motion in longitude and latitude for a while. Orbital maneuvers are then identified if there are any. Choose a period of 1 or 2 days in which no maneuvers have occurred, and for this period do orbit estimation by batch processing. The initial guess needed for it is obtained from the X-mode estimations.
21.6 Long-Term Monitoring

Orbital motions can be monitored over the long term, with even less arm-mirror motion, in the following way. First, the X mode runs for a couple of days. Next, the arm is fixed at the particular orientation specified in Chapter 18 for longitude monitoring. Orbit estimation continues so as to improve the motion in the equatorial plane, without improving the out-of-plane motion. This control of the orbit estimation is similar to that noted in Chapter 18. The longitude monitoring continues for an arbitrary length of time, with the out-of-plane motions being simply predicted. After that, the X mode is run again to improve the satellite motions in-plane and out-of-plane. (Here, the X mode does not need the rotary-mode starter.) The X mode and the longitude monitoring thus run by turns for any length of time. The longitude-monitoring interferometer that appeared in Chapter 18 was actually this rotary-baseline interferometer, with its arm fixed at the specified orientation and its mirrors at fixed pointing angles.

A case of long-term monitoring is shown in Figure 21.16. A satellite in the C band is tracked, with the X mode running for the periods marked with an “X.”
Figure 21.16 Long-term monitoring of satellite motions.
The satellite motions in longitude and latitude seen here are consistent with the motions assumed for a satellite under station keeping, which we learned about in Chapter 13. The X mode on day 305 provides the latitudes for later days by prediction, and the prediction agrees with the next X mode on day 310. The same is true for the next X mode, but a discrepancy appears at the X mode on day 325. This is because an NS maneuver occurred somewhere between day 318 and day 324, but we do not know that until the X mode runs on day 325. This is the cost of reducing the arm-mirror motion, but this style of long-term monitoring allows us to see how an orbital slot is being occupied by a satellite, or in what manner an unknown satellite is moving about in the proximity of our important satellite.

We noted in Chapter 18 that the longitude-monitoring process suffers a negligibly small error due to atmospheric refraction. In the X mode, the orbit estimation must rely on the approximate refraction model of (6.3).
If we look at the longitude data in Figure 21.16, there is no visible discrepancy between the X mode and the longitude monitoring. So, using the approximate model of (6.3) is practical for X-mode orbit estimation.
21.7 Error Considerations

Accurate processing of the measured phase data requires accurate modeling of the relative path length, that is, accurate setting of the function (21.1). The function refers to the geometrical positions of the mirrors, two of which move with the rotating arm. The geometry of the mirrors will be determined by using a survey tool, to an accuracy of a few millimeters, perhaps to 1 mm, but probably not better. So, we must know how geometry errors affect the accuracy of the interferometer operation. Potential sources of geometry errors include the following:

• Arm length errors: the lengths from the rotary pivot to the two ends may have errors L1 and L2.
• The arm rotation angle may have a bias A.
• The arm rotation plane may tilt from being horizontal. When one end of the arm points to the north it may be elevated by X, and when the same end points to the east it may be elevated by Y; thus X and Y define the tilt.

The path length function is then written in the form of f(α, ε, θ; L1, L2, A, X, Y), with L1, L2, A, X, and Y denoting the geometry error parameters. Using this function, we can simulate an error evaluation as follows. For a satellite assumed to be at azimuth α0 and elevation ε0, create a simulated data set:
φi(S) = (2π/λ) f(α0, ε0, θi; L1, L2, A, X, Y),  i = 1, 2, …, N

To this data set, fit a theoretical curve defined by
φi(T) = (2π/λ) f(α, ε, θi; 0, 0, 0, 0, 0) + C
to determine α and ε. The errors in azimuth and elevation are then evaluated as α − α0 and ε − ε0. Do this simulation by setting each one of L1, L2, A, X, and Y in turn to a small value while setting the others to zero, and by assuming the satellite at various orbital longitudes. Simulations were run in this way for the five error items, with the results shown in Figure 21.17. Because the interferometer is to monitor satellites that are supposed to be in 0.1-deg orbital slots, the azimuth-elevation error should be less than 0.01 deg, or preferably even less.
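A sketch of this simulation, reusing the hypothetical path_length function of the earlier sketch but now with the geometry-error parameters as keyword arguments, might look as follows.

    import numpy as np
    from scipy.optimize import least_squares

    # Illustrative error-evaluation simulation: inject one geometry error,
    # fit the error-free model, and report the resulting angle errors.
    def geometry_error_effect(path_length, az0, el0, thetas, lam, **one_error):
        err = dict(L1=0.0, L2=0.0, A=0.0, X=0.0, Y=0.0)
        err.update(one_error)                  # e.g. L1=0.001 (1 mm), others zero
        k = 2 * np.pi / lam
        phi_s = k * path_length(az0, el0, thetas, **err)   # simulated data set

        def residual(p):                       # error-free theoretical curve
            az, el, c = p
            return phi_s - (k * path_length(az, el, thetas,
                                            L1=0, L2=0, A=0, X=0, Y=0) + c)

        az, el, _ = least_squares(residual, x0=[az0, el0, 0.0]).x
        return az - az0, el - el0              # azimuth and elevation errors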
Figure 21.17 Error evaluation simulation results. α: azimuth error; ε: elevation error. The interferometer site is at 140 deg E.
This requirement allows none of the error items in the figure to be neglected. Note that the error patterns are the same for L1 and for L2. We therefore consider one parameter, L = L1 + L2, in place of separate L1 and L2; that is, we consider the error in the end-to-end length of the arm by assuming L1 = L2. The path length function is then written as f(α, ε, θ; L, A, X, Y). More precisely, the position errors of mirrors M2 and M4 may also be error items, as may a constant error in the relative heights of mirrors M1 and M3. When each of these items was given an error of 1 mm, the azimuth-elevation errors were less than 0.002 deg, so these error items can be neglected.

The length of the arm may change with the air temperature. The temperature varies over a range of 30 deg or more in a year, and correspondingly the steel arm varies its length by as much as 4 mm. So thermometers are placed to measure the arm temperatures, and the arm length is corrected each time the path length function is referred to in the data processing. The error parameter L is then regarded as a constant. If a large parabolic antenna is used for direction finding, its pointing accuracy is affected by thermal conditions when the sunlight heats up the structure unevenly and distorts it; such a distortion would be too complex for accurate modeling. In the present case, on the other hand, the heating causes a simple one-dimensional change in the length, which is simple to model.
21.8 Error Calibration

It is now clear that we must determine four unknown parameters, L, A, X, and Y, for error calibration. The error calibration uses reference satellites with known azimuths and elevations, as follows. There are five reference satellites: one roughly due south of the interferometer site, two at the east and west ends of the visible orbital arc, and the others in between. For each satellite #j, run the rotary mode and obtain the reconstructed phase data φj,i(2) for i = 1, …, N. When this is done for five satellites, we have the data set φj,i(2) for i = 1, …, N, j = 1, …, 5. To this data set, fit a theoretical curve defined by
φj,i(T) = (2π/λj) f(αj, εj, θi; L, A, X, Y) + Cj
where L, A, X, Y, and Cj are the fitting parameters, with αj and εj being the known azimuth and elevation and λj the wavelength for reference satellite #j. The constant term C may be different for different satellites, hence it is set as Cj. Errors were calibrated using reference satellites in the Ku band. Their azimuths and elevations were obtained from optical observations similar to those of Chapter 18. The calibration results were L = 2.1 mm, A = 0.017 deg, X = 0.8 mm, and Y = −1.4 mm.
Here, the parameters with the dimension of length are not far from a millimeter, so the result is reasonable considering the accuracy of the survey tool. The fitting residual, φj,i(2) − φj,i(T) for i = 1, …, N and j = 1, …, 5, showed a level not different from that of a normal Ku-band rotary mode, which is also a reasonable result. These results suggest that the calibration has performed well. The path length function that appeared in (21.1) through (21.4) is this function after error calibration.
21.9 Nongeometrical Error

We have so far assumed that the phase delay along a mirror-reflected path is calculated by geometry. This may not be precisely true, because geometrical optics is an approximation of the microwave propagation. Suppose that a plane wave is propagating from one mirror to another; a simplified model is illustrated in Figure 21.18. According to the Huygens–Fresnel principle, waves propagate from P1 to Q, P2 to Q, ..., Pn to Q, and they make the wave at Q by superposition. Their propagation distances are longer than the intermirror distance d, except for the wave from P2. So, the phase delay may not be equal to the geometrically calculated delay. Although this is an elementary discussion, it suggests that some nongeometrical error might exist in the phase delay and that it would depend on the intermirror distance and on the frequency band.

The suggested nongeometrical error is checked by experiment using a target satellite that has downlinks in the Ku band and the C band. Run the rotary mode in the Ku band and determine the azimuth and elevation (α, ε). Assuming this (α, ε), predict the reconstructed phase for the same satellite in the C band. Meanwhile, the reconstructed phase is obtained from the rotary mode in the C band. If the two show any difference, it is the nongeometrical error as seen relatively between the Ku and C bands. The result is shown in Figure 21.19, where three target satellites were used, one roughly due south of the interferometer site and the other two at the ends of the visible orbital arc. The suggested error does exist, and it varies as the intermirror distances vary with the rotating arm.
Figure 21.18 Plane-wave propagation between mirrors.
Figure 21.19 Nongeometrical error as seen relatively for Ku and C bands. ∗: measured; o: smoothed by curve fitting.
Owing to the errors shown in Figure 21.19, tracking in the Ku band and in the C band may yield different results for the same satellite. Such a situation should be avoided by the following treatment. The errors in the figure are smoothed by a fitting polynomial curve, a quadratic curve sufficing for the present case. If the measurement is in the C band, the error given by the smoothing curve is then added to the geometrically calculated phase delay in the data processing. This corrects for the C–Ku relative phase error, and after the correction, the rotary mode in the C band and that in the Ku band yield the same result for the same satellite.

There is then a question we must answer finally: What was the effect of nongeometrical errors on the calibration in the Ku band? When the arm makes one revolution in the rotary mode, an intermirror distance varies periodically between its maximum and minimum. Correspondingly, the nongeometrical error existing in this intermirror path would vary with the same periodicity. Meanwhile, errors with the same periodicity are produced if parameters L, X, or Y differ from 0. So, these parameters must have been determined so as to absorb the effects of nongeometrical errors when the calibration was made using reference satellites in the Ku band. This is inferred because otherwise the fitting residual (φj,i(2) − φj,i(T)) in the calibration would have shown a noticeably higher level. That is, parameters L, X, and Y are calibrated for geometrical and nongeometrical errors in a combined manner. For this reason, the error parameters cannot be determined by survey tools alone, no matter how precise.
Controlling geometrical and nongeometrical errors allows the interferometer to operate properly. Half a year after the calibration in the Ku band, the interferometer was compared with optical observations for an accuracy test. It was accurate to 0.005 deg or better in azimuth and elevation for the Ku and C bands, thus proving its durable accuracy for orbital monitoring purposes.
Reference
[1] Kawase, S., “Radio Interferometer for Geosynchronous-Satellite Direction Finding,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 43, No. 2, 2007, pp. 443–449.
22 Geolocation Interferometer

We are now going to discuss a different kind of interferometer. The geolocation interferometer has antennas that are placed in orbit, and its tracking target is an Earth station at some unknown location. The target station is tracked because it is emitting an unwanted signal that causes RF interference to a satellite communication link. Tracking is then used to locate the station on the map and take measures against the interference. This idea appeared decades ago [1, 2], and its importance is growing as RF interference tends to occur in the increasingly congested frequency channels of satellite communications. When compared with the satellite-tracking interferometer, the station-locating interferometer contains more complex technical elements. In this chapter, we study the most fundamental ideas underlying the station-locating interferometer, and draw a simple, understandable image of the locating process.
22.1 Geolocation: Principle and Problem

Communication satellites are able to cover wide service areas, but they are sometimes vulnerable to interference if an Earth station emits an unwanted uplink signal to them. This kind of unwanted signal may be the result of careless operation of the Earth station, such as a frequency, polarization, or antenna-pointing setting that has been set incorrectly. Sometimes the signal may be a deliberate jamming signal. If interference occurs, we want to locate the Earth station on the map (this process is called geolocation) to find the station and get the signal turned off.

The principle of geolocation is illustrated in Figure 22.1. An Earth station at P is transmitting an unwanted signal to satellite S1. The downlink of S1 is received at a monitoring station by antenna A1.
While the station at P is beaming its signal toward S1, a small portion of the signal will be directed to S2, a satellite adjacent to S1. If S2 happens to operate in the same frequency band as S1, it will relay the signal down to the monitoring station, where it is received by antenna A2. The monitoring station has a phase-measuring unit to detect the relative path delay: P–S1–A1 minus P–S2–A2. If we know the positions of S1 and S2, we know the relative path length S1–P minus S2–P, hence obtaining location information for P. This is similar to the principle of the interferometer we already know, only with the baseline S1–S2 up in orbit and the tracking target P on the ground.

The principle of geolocation may look simple at first sight, but it has practical problems:

1. Signals are weak. The antenna of station P sends the signal to S2 through its off-axis radiation pattern. If S1 and S2 operate in the same frequency band, their orbital positions are separated by at least 2 deg, and usually 4 deg or more. So, the signal relayed by S2 to A2 will have a level far lower than that of the signal relayed by S1 to A1, presumably lower by several tens of decibels. Could such a weak signal be detected and measured?
2. There is only one baseline, S1–S2, so how does this single baseline locate the Earth station in two dimensions?
3. Each satellite has its own local oscillator for frequency conversion. The oscillator runs with its own phase, which is unknown to us. What would happen to the phase measurement at the monitoring station?
Figure 22.1 Principle of geolocation.
Answering these questions will help our understanding of the geolocation principle, as discussed next.
22.2 Weak-Signal Detection

As we saw in Chapter 4, the interferometric phase is measured by the process illustrated in Figure 22.2. Signals x(t) and y(t) received from satellites S1 and S2 are made into Fourier transforms X(ω) and Y(ω). Their cross product X Y* = Z(ω) is then used to determine the phase. Here, x(t) and y(t) represent the data sampled and stored as x(ti), y(ti), i = 1…m. Similarly, X(ω) and Y(ω) are sample data X(ωi), Y(ωi), i = 1…m; hence, Z(ωi) = X(ωi) Y*(ωi). Because y(t) is a weak signal, Y(ω) has a low level and Z(ω) will have a poor SNR, as illustrated in Figure 22.3. The level of the signal Z may be well below the noise level, but we must detect it, as discussed next.

If the signal Z(ω) in the figure has a bandwidth B, then its phase angle, arg Z(ω), has a linear slope against frequency ω, as noted in Chapter 4. If the values of Z(ω) are written as complex vectors, they will look like Figure 22.4, (1), for example. We are interested in their phase angles, so their magnitudes are set equal for simplicity here. If we add those vectors together, they tend to cancel each other because they point in all directions. So, we try to correct their phases so that the vectors point in the same direction, as in Figure 22.4, (2). The vectors then add to each other in the same way as was illustrated in Figure 4.10 [D], while the noise components add as in [U], thus yielding an improved SNR. Because arg Z(ω) has a linear slope against ω, correcting the phases takes the form of e^{juω} Z(ω), where u sets the slope of the phase correction. The phase-corrected vectors are thus added together, as follows:
R(u) = Σ_ω Z(ω) e^{juω}    (22.1)

Figure 22.2 Interferometric phase measurement. FT: Fourier transform.
Figure 22.3 Signal below noise level.
Figure 22.4 Correcting the phases: (1) before; (2) after.
The function R(u) takes its maximum at some u = uC if this u makes the vectors align as in Figure 22.4, (2). If u moves away from uC, R(u) decreases rapidly. So, R(u) has a peak at u = uC, as in Figure 22.5. This figure indicates that the energy of the signal that was originally distributed over the bandwidth B is now concentrated in one place, u = uC. If the peak exceeds the noise level, one can recognize the existence of the signal and determine uC. We determine uC as a slope of phase, but physically this uC is the relative delay time of the signals arriving at the monitoring station. This is confirmed by the units: u multiplied by ω [rad/s] gives a phase [rad], hence the unit of u is seconds.

In Chapter 4, we considered tracking a target signal with a bandwidth. In that case, we determined the phase at the band-center frequency by taking an average. Upon taking the average, we looked at the slope of the phase, but after that we forgot about the slope. In the present case, we are interested in the slope of the phase, not in the value of the phase itself.
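A small numeric illustration (made-up numbers, not data from any actual system): even when the cross spectrum Z(ω) sits well below the noise level sample by sample, the sum of (22.1) concentrates the signal energy at the true delay.

    import numpy as np

    # Made-up illustration of (22.1): per-sample SNR is far below 1, yet the
    # phase-corrected sum R(u) peaks at the true relative delay.
    rng = np.random.default_rng(1)
    m = 2048
    w = np.linspace(-np.pi * 1e6, np.pi * 1e6, m)    # angular frequencies [rad/s]
    true_delay = 3.2e-6                               # relative delay [s]
    Z = 0.2 * np.exp(-1j * w * true_delay)            # weak signal: linear phase slope
    Z += (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)

    u_grid = np.linspace(-10e-6, 10e-6, 2001)         # trial delays [s]
    R = np.array([np.abs(np.sum(Z * np.exp(1j * w * u))) for u in u_grid])
    print(u_grid[np.argmax(R)])                       # close to 3.2e-6 s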
Figure 22.5 Detecting a weak signal.
The relative delay time thus determined will not be very precise. For example, suppose the signal has a bandwidth of 1 MHz, and the phases are measured with a poor resolution of ±180 deg. The delay time is then determined to be
2 × π [rad] / (2π × 1 × 10^6 [rad/s]) = 1 [µs]
or 300m in path length. This is a coarse resolution, but it is significant for geolocation because the interferometer baseline is as long as 1,000 km or more.
22.3 Delay Limit and Delay Line

The slope of the phase mentioned earlier must not be too steep. Suppose the slope is as steep as that illustrated in Figure 22.6, case (1) or case (2). Here, the phase increases by π for each step increment of the frequency ω. (We are interested solely in the slope, so disregard the absolute values of the phase.) Because phases that differ by 2π are identical phases, we cannot distinguish case (1) from case (2); that is, we cannot determine the slope. The slope can be determined only if it is less steep than (1) or (2), like (3). Hence, the slope is limited to mπ/B if Z(ω) contains m samples. The relative delay time that can be measured is thus limited by
TL = mπ/B    (22.2)
Now, go back to Figure 22.2 and look at the signals x and y as they are sampled; this part is illustrated more precisely in Figure 22.7. In this figure, the signals are sampled and stored for a period of time TS. If the maximum frequency of the signal is f, there are then f·TS waves in the period TS.
Figure 22.6 Slope of phase against frequency.
Figure 22.7 Signals x and y with relative delay T. DL: delay line.
The sampling theorem then states that m = 2f·TS, so the bandwidth of the signal is B = 2πf = mπ/TS. Hence, from (22.2) we have
TL = TS    (22.3)
This equation is important in Figure 22.7. Here, signals x(t) and y(t) have a relative delay T. This delay can be determined provided that x(t) and y(t) contain a common piece of waveform. If T exceeds TS, x(t) and y(t) cannot have such a common waveform, so there is no way to determine the delay. That is, T can be determined only if |T| < TS, and this is what (22.3) means. For this reason, delay lines are inserted, as illustrated in the figure. The amount of delay is set so that the sampled data of x(t) and y(t) will include a common piece of waveform.
We do not know the relative delay beforehand, because the target location is unknown. So, we set the delay by trial and error until a peak appears somewhere in Figure 22.5. Delay lines must exist both in the S1 route and in the S2 route in Figure 22.7, because the relative delay may be positive or negative. When T is determined as T = uC, then T plus the line delay gives the real delay that we want to determine.
22.4 Correlation Processing

The preceding discussion is written in mathematical terms as follows. If Z(ω) contains many samples, the adding together by (22.1) is virtually for
R(u) = ∫ Z(ω) e^{juω} dω = ∫ X(ω) Y*(ω) e^{juω} dω    (22.4)
Here, by definition,

Y(ω) = ∫ y(t) e^{−jωt} dt    (22.5)
So (22.4) becomes
R(u) = ∫ X(ω) [∫ y*(t) e^{jωt} dt] e^{juω} dω = ∫ [∫ X(ω) e^{jω(t+u)} dω] y*(t) dt    (22.6)
The definition of the inverse Fourier transform allows us to write
(1/2π) ∫ X(ω) e^{jω(t+u)} dω = x(t + u)    (22.7)
So, disregarding the factor of 2π, from (22.6) we have

R(u) = ∫ x(t + u) y*(t) dt    (22.8)
This is called the correlation function of x and y. The function has a peak if the waveforms of x(t + u) and y(t) match each other at some u = uC, and this peak corresponds to the one seen in Figure 22.5. The correlation function of (22.8) may be seen by analogy, as in Figure 22.8(a). Here, x and y are real functions, and they are printed on transparent films. The x-print is a bar chart, with opaque bars whose widths are modulated by the value of x(t). The y-print is made similarly, but it is a negative.
Figure 22.8 Correlation processing seen by analogy.
are placed against a white wall, and we are seeing through the films toward the wall. If we slide the relative position of films by changing u, then at some u = uC the bar chart patterns of x and y will match each other, to show total blackness like in Figure 22.8(b). This is for the correlation peak and, although it is a negative peak here, it determines the relative delay of x and y.
22.5 Time-Integration Effect

While the function R(u) now has an elevated peak, as seen in Figure 22.5, the peak level may still be lower than the noise level. If so, we must collect multiple samples of Z(ω) at times t1, t2, …, tk, to obtain Z(t1, ω), Z(t2, ω), …, Z(tk, ω) in succession, as illustrated in Figure 22.9.
Figure 22.9 Collecting k samples of Z.
Look at these samples at a fixed frequency, for example, at the band-center frequency ωC; that is, look at the samples Z(ti, ωC) for i = 1, 2, ..., k. If we write them as complex vectors, they will look, for example, like (1) in Figure 22.9. The phase angle drifts linearly with time, because the satellites are moving. Although they are referred to as “stationary,” they do move, as we saw in Chapter 13, and their motions are not negligible here on a timescale of, say, tens of seconds or more. We add those vectors together, while correcting their phases in the same way as we did in the case of Figure 22.4. The addition then takes the form of
R(v) = Σ_t Z(t, ωC) e^{jtv}    (22.9)
where e^{jtv} is the phase-correcting factor. The function R(v) will have a peak magnitude at some v = vC if this v makes the vectors align, as in (2) of Figure 22.9, for example. The parameter v has the unit [rad/s]; it is an angular frequency, and it corresponds to the Doppler frequency shift due to the satellites' motion. Although we set ω to a particular ωC in (22.9), actually ω can be any frequency in the signal bandwidth. Precisely speaking, the phase-correcting factor should then depend on ω; but practically, the common factor e^{jtv} operates equally well for every ω if the signal bandwidth is moderate, at tens of megahertz or so. We can now add the sample data of Z(t, ω) with respect to frequency ω and with respect to time t as follows:
R(u, v) = Σ_{t,ω} Z(t, ω) e^{jωu} e^{jtv}    (22.10)
This addition is virtually for
R(u, v) = ∫∫ Z(t, ω) e^{jωu} e^{jtv} dω dt    (22.11)
This is the inverse Fourier transform of Z(t, ω) in two dimensions if we disregard the 2π factors. The function R(u, v) will have a peak at some (u, v) = (uC , vC ), as illustrated in Figure 22.10. The peak appears there because the uC and vC make the Z(t, ω) vectors all align. A wide bandwidth of the signal makes the peak sharp in u, and collecting many successive samples of Z makes the peak sharp in v. As a result, the peak level is now expected to come above the noise level. Detecting a weak signal thus leads to measuring uC and vC . If we turn to Figure 22.8(a) as an analogy once more, the films are longer for the present case because they represent more sample data. The effect of the Doppler frequency shift has made a film slightly expand or contract in length; suppose this change has occurred to the y-film here. Owing to this change, the matching of the x-pattern and y-pattern becomes imperfect, as in Figure 22.8(c), so that the correlation peak becomes dim. We try to correct the length of the y-film by adjusting v, as in Figure 22.8(a), while sliding the relative position u. A sharp peak will then be found somewhere in the u-v plane, as shown in Figure 22.10.
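The search of (22.10) over the u-v plane can be sketched as follows. Here Z is a hypothetical k-by-m array of cross-spectrum samples Z(ti, ωj), and the grids of trial delays and trial Doppler shifts are assumptions; in practice the double sum is computed efficiently as the two-dimensional inverse FFT noted with (22.11).

    import numpy as np

    # Illustrative brute-force search for the peak of (22.10) in the u-v plane.
    # Z[i, j] = Z(t_i, w_j); t in seconds, w in rad/s.
    def caf_peak(Z, t, w, u_grid, v_grid):
        best = (0.0, None, None)
        for u in u_grid:                       # trial delays [s]
            pu = np.exp(1j * w * u)            # phase correction along frequency
            for v in v_grid:                   # trial Doppler shifts [rad/s]
                pv = np.exp(1j * t * v)        # phase correction along time
                R = abs(np.sum(Z * np.outer(pv, pu)))
                if R > best[0]:
                    best = (R, u, v)
        return best                            # peak |R|, u_C, v_C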
22.6 Problem of Satellite-Transponder Phase

When a satellite relays a signal, the signal is frequency converted by using a local oscillator on board the satellite. The oscillator has its own phase, which is unknown to us. The signals x and y are relayed by satellites before being measured at the monitoring station.
Figure 22.10 Correlation peak in the u-v plane.
So, some unknown phase factor, e^{j(αt+β)}, must enter the measured phase. Here, α is the frequency difference of the local oscillators of the two satellites, and β is a constant phase difference, with α and β being unknown parameters. The phase factor enters (22.11) and operates simply as
R(u, v) = ∫∫ e^{jαt} e^{jβ} Z(t, ω) e^{jωu} e^{jtv} dω dt    (22.12)
provided that Z contains no image spectra (refer to Chapter 4 for a discussion of image spectra). In this equation, α operates to shift the origin of v, thus changing the resulting value of vC. That is, α becomes an error in the measurement of the Doppler frequency shift. With this understood, one can regard (22.12) as changed to the following:
R(u, v) = ∫∫ e^{jβ} Z(t, ω) e^{jωu} e^{jtv} dω dt    (22.13)
Our intention was to calculate (22.11), but what is actually being calculated is (22.13), with the unknown factor e^{jβ} included. If β is constant, (22.13) becomes
R(u, v) = e^{jβ} ∫∫ Z(t, ω) e^{jωu} e^{jtv} dω dt    (22.14)
So, we watch the absolute value |R| when searching for a peak of R(u, v) from (22.13) or (22.12). In reality, the parameters α or β, or both, may gradually change with time. If the changes are not small, the function R(u, v) from (22.12) will have a dim peak, because the phase-aligning effect seen in Figure 22.9, from (1) to (2), becomes imperfect. So, the integration of (22.12) should be over a limited period of time during which α and β are practically unchanging, presumably tens of seconds to hundreds of seconds, depending on the stability of the satellite local oscillators.
22.7 Phase Measurement Accuracy

The delay time and Doppler frequency shift, uC and vC, are thus obtained from the processing given above. But what would be their accuracies? To discuss this, we refer to Chapter 4, with some modification. We first modify (4.13). Because the signal powers are different for satellites S1 and S2, we must write them separately as
b1 = S1²;  b2 = S2²    (22.15)
The downlink quality is good for satellite S1 but poor for S2, so we assume that

S1 ≫ N ≫ S2    (22.16)
Referring to (22.15), rewrite (4.19) and (4.20) as
PD = S1S2    (22.17)

PU = S1N + S2N + N²    (22.18)
Hence, (4.21) becomes
PU/PD = (S1N + S2N + N²)/(kS1S2) = (N/(kS2))(1 + S2/S1 + N/S1) ≈ N/(kS2)    (22.19)
Substitute this PU /PD into the N/S in (4.15), and consider the effect of m-component averaging that was seen in (4.26). As a result we have
RMS{δφ} = (1/√(2mk)) √(N/S2)    (22.20)
That is, the error level of the phase measurement is controlled by the signal-to-noise ratio of the S2 downlink. For this reason, using a large-diameter antenna for A2 in Figure 22.1 improves the phase measurement accuracy. Once we know the phase error level from (22.20), we know the error level of the phase slope against the frequency axis and against the time axis, and hence the error levels of uC and vC. In other words, the conditions required for doing geolocation are known from (22.20).
22.8 Locating the Earth Station

Suppose we have now measured uC and vC for the target Earth station. Let us consider how to make use of these measurements for locating the station. Assume that the positions of the satellites S1 and S2 are known, as illustrated in Figure 22.11. Consider the midpoint of S1 and S2; right under the midpoint on the ground is point O. At O place a horizontal plane H, and on this plane NS and EW are the north-south and east-west lines crossing each other at O. Here, we regard the plane H as the ground surface.
Figure 22.11 Locating the Earth station.
This is a rough assumption because the ground surface is curved, but it keeps our subsequent discussion simple while not causing much loss of exactness.

From the relative path delay uC, we know the difference of the path lengths from station P to the satellites, PS1 − PS2. This relationship determines a curve C on plane H, by the principle of hyperbolic navigation, and somewhere on this curve the station P must exist. This is the first step in locating the station, but it is not yet a location in two dimensions. As time passes, the satellites move. They might move, after hours, as illustrated in Figure 22.12. Correspondingly, the curve C would rotate, to position C′. Theoretically speaking, station P is located at the crossing point of C and C′. But in fact the satellites do not move much, so C and C′ cross each other nearly at a grazing angle. The figure illustrates an ideal case of satellite motion, but even so the location of P is likely to have an area of uncertainty elongated along the curves.

To obtain a more precise location, we refer to the Doppler frequency shift measured as vC. The Doppler shift is due to the relative velocity of the two satellites, so one can assume that S1 has a velocity while S2 does not, without loss of generality, as illustrated in Figure 22.13. Consider this first: What if the satellite has a velocity vZ to the north? The Doppler shift will then show different values: positive if the station is at P1, negative if the station is at P2, or zero if the station is at P0, depending on the station's north-south location. So, the measured vC narrows down the position of P to a single point somewhere on the curve C in Figure 22.11. If the satellite has a radial velocity vR in Figure 22.13, the Doppler shift will not show much difference whether the station is at P1, P0, or P2, so this velocity is not much help for locating the station.
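The first locating step (finding the curve C) can be sketched numerically as follows. Every coordinate and the measured delay below are made-up values; the zero-level contour of the computed difference is the curve C of Figure 22.11.

    import numpy as np

    # Made-up sketch of the hyperbolic curve C on the flat plane H.
    c = 299792458.0
    S1 = np.array([-750.0e3, 0.0, 35786.0e3])    # satellite positions [m], EW baseline
    S2 = np.array([ 750.0e3, 0.0, 35786.0e3])
    d_meas = c * 0.12e-3                          # assumed u_C of 0.12 ms -> 36 km

    x, y = np.meshgrid(np.linspace(-3e6, 3e6, 601),   # EW and NS axes on plane H [m]
                       np.linspace(-3e6, 3e6, 601))
    P = np.stack([x, y, np.zeros_like(x)], axis=-1)   # station assumed on the plane
    diff = np.linalg.norm(P - S1, axis=-1) - np.linalg.norm(P - S2, axis=-1)
    # The curve C is where diff equals d_meas; for example,
    # matplotlib.pyplot.contour(x, y, diff - d_meas, levels=[0.0]) draws it.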
Figure 22.12 Satellite motion causes curve C to rotate.
Figure 22.13 Satellite velocity and station location.
If the satellite has a longitudinal velocity vL, as illustrated in Figure 22.14, the Doppler shift shows different values depending on whether the station is at P1, P0, or P2, thus providing EW location information. This is of little use, however, because we already know that the station is on the curve given in Figure 22.11. We have thus used a simplified geometry to discuss the process of location. Those who are interested in a more exact, precise analysis of location should refer to, for example, [3, 4].

As can be seen from the preceding discussion, good location requires a good satellite velocity to the north or south [5]. The best condition with regard to the satellite velocities is shown in Figure 22.15(a), where the satellite velocities have maximal north-south components in opposite directions. If this condition holds at the present moment, then after 6 hours the velocities will both become zero.
Figure 22.14 Longitudinal velocity of satellite.
Figure 22.15 Satellite velocity conditions: (a) best; (b) worst.
This is because the satellite motions are periodic; after 12 hours the best condition comes again. That is, the best condition for location comes every 12 hours. The worst condition is shown in Figure 22.15(b), where the satellites have equal north-south velocity components in the same direction. In this case the relative north-south velocity is always zero, so two-dimensional location never becomes possible. The condition that occurs in reality is something between the best and the worst. For this reason, the location performance changes with time when a Doppler shift is used for location purposes.

If satellites S1 and S2 are owned by the same operator, there is a way to keep their motions near the best condition, as follows. Chapter 13 showed how to keep the orbital inclination within a limit. This was done, as illustrated in Figure 13.6, by keeping the projected unit vector inside a boundary circle. Now, the area inside the boundary circle is divided into two, one side being the positive-x side and the other the negative-x side. Keep one vector on the positive-x side and the other vector on the negative-x side, while separating them as much as possible along the x-axis.
This plan allows the satellites to have north–south velocities that always point in directions opposite to each other, which yields the maximum relative velocity.
22.9 Transponder Frequency Errors

We saw that the unknown parameter α in (22.12) causes an error in the Doppler shift measurement. If this error is present, station P will be located at an incorrect position somewhere on the curve in Figure 22.11. To solve this problem, we use a reference Earth station placed at a known position. It has uplinks to satellites S1 and S2 in the same frequency band as the target station. If we locate the reference station, the result will be an incorrect position on the curve because of the error α. One can then calibrate the error α so that the reference station is located at the correct position on the curve. This idea is similar to the common error cancellation that we discussed in Chapter 19. So, we should measure the target station and the reference station at the same time, in parallel, for the best performance of the error calibration.
22.10 Orbital Information

We saw that the existence of a Doppler shift due to the satellite's north-south velocity component is essential for two-dimensional location. The velocity also has radial and longitudinal components, however, and they produce Doppler shifts too. The measured vC is the sum of all of these Doppler shifts, so geolocation is possible only if every component of the velocity is known. Geolocation thus requires information on the position and velocity of the satellite, that is, the orbital elements of the satellite.

Orbital elements are available when they are made public [6], but there is a problem. A satellite undergoes orbital maneuvers from time to time for station keeping. At the moment a maneuver takes place, the orbital elements that had been obtained for the satellite become useless, and we must wait days for new elements to be provided. So it is a matter of chance whether we can obtain valid orbital elements for a particular satellite at a given moment. The situation is the same if we ask a satellite operator for orbital elements, because the operator also needs to collect tracking data for days before orbit estimation can be done. We cannot wait for days, because we need to take quick action against RF interference. Orbit estimation is thus a fundamental problem in geolocation.

One idea is to use two or more reference stations. The error α is calibrated in principle by using one reference station, so two or more reference stations bring redundant information, and this information may be used for estimating the satellite position and velocity.
This is a challenging idea, which requires many reference stations with good, stable uplink frequencies and good geometrical arrangements relative to the satellites. The idea is incorporated in some systems in use [7].
22.11 Quick Orbit Estimation

Let us consider here a direct approach to quick orbit estimation. This approach uses the rotary-baseline interferometer that we saw in Chapter 21. The concept is illustrated in Figure 22.16, where S stands for either satellite S1 or S2. The rotary mode of the interferometer determines the line of sight to S, and on this line the position of S is set at its nominal range, by assuming a nominal orbital radius. Next, determine the velocity components v1 and v2, which are transversal components. To do this, set the baseline orientation so that the detection unit vector points in the direction of v1 or v2, and measure the phase variation rate. This determines the direction-changing rate, and this rate times the nominal range is the velocity. The components v1 and v2 should be orthogonal to each other, but it is no problem if they are not aligned with the longitudinal or north directions. The third component v3, along the line of sight, is set to 0 because we cannot determine it. A rough, but quick, estimate of position and velocity is thus obtained.

What would happen if we use the quick estimate for location? The assumed nominal satellite range has some error, which can be regarded as equivalent to a bias in uC. This bias will be mostly offset by using a reference station. The problem is the velocity v3: we have set it to 0, so the vC measured for the reference station is attributed entirely to the transponder frequency. In reality, there is some velocity v3, as illustrated in Figure 22.17. If the target station P is near the reference station R, the effect of v3 on the Doppler shift is common to the two stations and so causes little error when locating P. This condition fails rapidly, however, as P moves away from R. The problem may be eased by using one more reference station, because doing so makes it possible to estimate v3 to some accuracy.
Figure 22.16 Quick estimation using rotary baseline interferometer.
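The bookkeeping behind the quick estimate is simple, and the sketch below (not from the book; the function name, the nominal range value, and the example numbers are assumptions) shows it: the line-of-sight direction fixes the position at a nominal range, two measured direction-change rates give the transversal velocity components, and the line-of-sight component is set to zero.

```python
# Schematic sketch of the quick position/velocity estimate from a
# rotary-baseline interferometer.  Names and values are assumptions.
import numpy as np

RANGE_NOMINAL = 38_000e3   # assumed interferometer-to-satellite range, m,
                           # derived from a nominal orbital radius

def quick_estimate(los_unit, e1_unit, e2_unit, rate1, rate2):
    """Rough satellite state relative to the interferometer site.

    los_unit        : unit vector toward the satellite (from rotary-mode operation)
    e1_unit, e2_unit: transversal detection unit vectors (orthogonal to the
                      line of sight and, ideally, to each other)
    rate1, rate2    : measured direction-change rates along e1 and e2, rad/s
    """
    position = RANGE_NOMINAL * np.asarray(los_unit, dtype=float)
    v1 = rate1 * RANGE_NOMINAL        # transversal velocity along e1
    v2 = rate2 * RANGE_NOMINAL        # transversal velocity along e2
    v3 = 0.0                          # line-of-sight component: not determinable
    velocity = (v1 * np.asarray(e1_unit, dtype=float)
                + v2 * np.asarray(e2_unit, dtype=float)
                + v3 * np.asarray(los_unit, dtype=float))
    return position, velocity

# Example: direction-change rates of a few 1e-8 rad/s correspond to transversal
# velocities on the order of 1 m/s at geostationary range.
los = np.array([0.8, 0.5, 0.33]); los /= np.linalg.norm(los)
e1 = np.cross(los, [0.0, 0.0, 1.0]); e1 /= np.linalg.norm(e1)
e2 = np.cross(los, e1)
pos, vel = quick_estimate(los, e1, e2, rate1=3e-8, rate2=-2e-8)
print(pos, vel)
```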
Figure 22.17 Problem of velocity component v3.
As time passes, the interferometer accumulates tracking data for orbit estimation, and a set of orbital elements becomes available within a day; this is probably quicker than waiting for new orbital elements to be made public. The wait can be shortened further by truncating the orbit estimation, that is, by setting the drift rate, or the ∆a terms in (16.2), to 0. Three elements then remain in the in-plane satellite motion. Such a truncated estimation converges quickly, in a quarter of a day or so, and its accuracy will be better than that of the quick estimation described above.

We know that the best condition for location recurs every 12 hours, so we may have to wait a few hours before doing a location; during that time we could attempt a quick estimation or a truncated orbit estimation, provided we have access to the interferometer. For this reason, it is worthwhile to install a rotary-baseline interferometer for common use through remote access, ensuring quick orbit estimation for any satellite.
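To make the truncation concrete, the toy sketch below (not the book's estimator) fits a drift-free in-plane model to a short arc of longitude observations by linear least squares; the three fitted parameters play the role of the three remaining in-plane elements. The model form, names, and numbers are assumptions, and the actual model in (16.2) is more general.

```python
# Toy illustration of truncated orbit estimation: the longitudinal drift is
# fixed at zero and only three in-plane parameters are fitted by linear least
# squares.  A sketch under assumed conventions, not the book's method.
import numpy as np

N_GEO = 2.0 * np.pi / 86164.0   # geostationary mean motion, rad/s

def fit_inplane_no_drift(t, longitude_deg):
    """Fit longitude(t) ~ l0 + A*sin(n t) + B*cos(n t), drift term omitted.

    Returns the mean longitude l0 (deg) and the daily-oscillation
    coefficients A, B (deg), which stand in for the eccentricity terms.
    """
    t = np.asarray(t, dtype=float)
    H = np.column_stack([np.ones_like(t),
                         np.sin(N_GEO * t),
                         np.cos(N_GEO * t)])
    params, *_ = np.linalg.lstsq(H, np.asarray(longitude_deg, dtype=float),
                                 rcond=None)
    return params   # [l0, A, B]

# Simulated quarter-day arc of longitude observations (illustrative numbers).
t_obs = np.arange(0.0, 6 * 3600.0, 300.0)
true_lon = 110.02 + 0.015 * np.sin(N_GEO * t_obs) - 0.008 * np.cos(N_GEO * t_obs)
noisy = true_lon + np.random.normal(0.0, 0.001, t_obs.size)
print(fit_inplane_no_drift(t_obs, noisy))   # approximately [110.02, 0.015, -0.008]
```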
References

[1] Smith, Jr., W. W., and P. G. Steffes, "Time Delay Techniques for Satellite Interference Location System," IEEE Trans. on Aerospace and Electronic Systems, Vol. 25, No. 2, 1989, pp. 224–231.

[2] "Method and System for Locating an Unknown Transmitter," U.S. Patent No. 5008679, 1991.

[3] Ho, K. C., and Y. T. Chan, "Geolocation of a Known Altitude Object from TDOA and FDOA Measurements," IEEE Trans. on Aerospace and Electronic Systems, Vol. 33, No. 3, 1997, pp. 770–783.

[4] Pattison, T., and S. I. Chou, "Sensitivity Analysis of Dual-Satellite Geolocation," IEEE Trans. on Aerospace and Electronic Systems, Vol. 36, No. 1, 2000, pp. 56–71.

[5] Haworth, D. P., et al., "Interference Localization for EUTELSAT Satellites—The First European Transmitter Location System," Int. J. of Satellite Communications, Vol. 15, No. 4, 1997, pp. 155–183.

[6] "Space Track," http://www.space-track.org/perl/login.pl.

[7] Hayes, M., "Locating and Resolving Sources of Satellite Interference to Improve Spectrum Efficiency," Satcom Technology, February 2011, pp. 3–5.
About the Author

Seiichiro Kawase was born in Hokkaido, Japan, in 1950. He received his B.E. (1972) and M.E. (1975) in mechanical engineering from Tokyo Institute of Technology, and his Ph.D. (1994) in electronic engineering from the University of Tokyo. He worked with the National Institute of Information and Communications Technology (formerly the Radio Research Laboratory/Communications Research Laboratory) from 1975 to 2010 in the fields of satellite communications, tracking and control, and orbital dynamics, and was appointed executive researcher in 2005. He was a flight dynamics leader when the first domestic communication satellites began to orbit in 1982–1983. He was a visiting scientist at the European Space Operations Center, ESA, in 1984–1985. He was a member of CCIR SG4 Working Party 4A in 1991, when he drafted the recommendation "Environmental Protection of the Geostationary-Satellite Orbit," which later became ITU-R Recommendation S.1003, main text. Since 2010 he has been at the National Defense Academy, where he is a professor of aerospace engineering. Dr. Kawase is a member of the American Astronautical Society.