(8.5)    R = lim_{N→∞} … < 1.
We will use the least-squares estimates d̂_1, d̂_2 of d_1, d_2, as they have been shown to be asymptotically efficient [8].

THEOREM 10. Under the conditions of Theorems 8 and 9 and (8.5), formulas (8.3) and (8.4) remain valid if F*(λ) is computed using x_t - d̂_t.

But we can choose k so large that ||s_k(λ) - R(λ)|| < δ with probability 1 - δ. This, together with the uniform continuity of G^{-1}, …
Assume for the moment that x_t has the spectral density (1/2π) g^{-1}(λ) ḡ^{-1}(λ)' and consider the asymptotic behavior of D_n^{-1} φ' G^{-1} φ D_n^{-1} as n → ∞. Now carry out a Gram-Schmidt orthogonalization procedure with respect to x_1, ..., x_n starting with x_1. The first p equations are

(4.20)    a_{11} x_1 = ξ_1
          a_{21} x_1 + a_{22} x_2 = ξ_2
          ..................................
          a_{p1} x_1 + ··· + a_{pp} x_p = ξ_p.

The residual ξ's are orthonormal. At the next step I have

(4.21)    g_p x_1 + ··· + g_0 x_{p+1} = ξ_{p+1} = η_{p+1}

since x_t satisfies the difference equation

(4.22)    g_0 x_k + g_1 x_{k-1} + ··· + g_p x_{k-p} = η_k

with the η's orthonormal. From then on we have the equations

(4.23)    g_p x_k + g_{p-1} x_{k+1} + ··· + g_0 x_{k+p} = ξ_{k+p} = η_{k+p}.

Let the matrix
(4.24)    A = ( a_{11}
                a_{21}  a_{22}
                ......................
                a_{p1}  a_{p2}  ···  a_{pp}
                g_p  g_{p-1}  ···  g_1  g_0
                     g_p  ···  g_2  g_1  g_0
                ................................. ).

Then

(4.25)    A x = ξ,

where ξ is a vector of orthonormal random variables. A is a nonsingular transformation taking x_1, ..., x_n into ξ_1, ..., ξ_n. On taking the covariance matrix of both sides of equation (4.25) I have

(4.26)    A G A' = I,
so that

(4.27)    G^{-1} = A'A.
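The factorization (4.27) is easy to check numerically. The following sketch (an illustration added here, not from the original text; the scalar AR(2) coefficients are assumed) builds the banded matrix A of (4.24) and verifies (4.26) and (4.27):

```python
import numpy as np

# Scalar AR(2) residual: g0*x_k + g1*x_{k-1} + g2*x_{k-2} = eta_k, eta's orthonormal.
# The coefficients are assumed and chosen stationary for this demo.
g = np.array([1.0, -0.5, 0.06])   # g0, g1, g2
p, n = len(g) - 1, 12

# Autocovariances of x_t from its (truncated) moving-average representation.
psi = np.zeros(500); psi[0] = 1.0 / g[0]
for k in range(1, len(psi)):
    psi[k] = -sum(g[j] * psi[k - j] for j in range(1, min(k, p) + 1)) / g[0]
r = np.array([np.dot(psi[:len(psi) - h], psi[h:]) for h in range(n)])
G = r[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]

# First p rows of A: Gram-Schmidt on x_1, ..., x_p, i.e. the inverse Cholesky
# factor of the leading p x p block of G, as in (4.20).
A = np.zeros((n, n))
A[:p, :p] = np.linalg.inv(np.linalg.cholesky(G[:p, :p]))
# Remaining rows: the difference equation (4.22)-(4.23).
for k in range(p, n):
    A[k, k - p:k + 1] = g[::-1]

print(np.allclose(A @ G @ A.T, np.eye(n)))        # (4.26): A G A' = I
print(np.allclose(np.linalg.inv(G), A.T @ A))     # (4.27): G^{-1} = A'A
```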
Let {g^{uv}; u, v = 1, ..., n} = G^{-1}. Then

(4.28)    g^{μν} = Σ_{α=-∞}^{∞} g_{μ+α} ḡ_{ν+α}

if μ or ν is greater than p. Here g_v is understood to be zero if v < 0 or v > p. But

(4.29)    Σ_{α=-∞}^{∞} g_{μ+α} ḡ_{ν+α} = (1/2π) ∫_{-π}^{π} e^{i(μ-ν)λ} ḡ(λ)' g(λ) dλ.
Now the (j, s)th element of D_n^{-1} φ' G^{-1} φ D_n^{-1} is

…

Here γ_{μν} = (1/2π) ∫ e^{i(μ-ν)λ} ḡ(λ)' g(λ) dλ. Now δ_n is the sum of at most 4p² terms of the form

…
Since every element of g^{μν} is smaller in absolute value than δ^{-1} for some small but fixed δ > 0, the elements of the remainder terms converge to zero as n → ∞. But then

(4.33)    Σ_{μ,ν} γ_{μν} φ_μ^{(j)} φ_ν^{(s)} (d_j d_s)^{-1} → ∫_{-π}^{π} ḡ(-λ)' g(-λ) dM_{js}(λ).
I have now shown that

(4.34)    lim_{n→∞} D_n^{-1} φ' G^{-1} φ D_n^{-1} = ∫_{-π}^{π} ḡ(-λ)' g(-λ) dM(λ).

In the same way one can show that

(4.35)    lim_{n→∞} D_n^{-1} φ' R_ε^{-1} φ D_n^{-1} = ∫_{-π}^{π} [h̄_ε(-λ)' h_ε(-λ) dM(λ)].

On letting ε ↓ 0 the desired result

(4.36)    lim_{n→∞} D_n^{-1} φ' R^{-1} φ D_n^{-1} = ∫_{-π}^{π} [f^{-1}(-λ) dM(λ)]

is obtained. The matrix (4.36) can be seen to be nonsingular by using the argument used at the end of the proof of Theorem 1.
5. Asymptotic efficiency

It is interesting to investigate those types of regression for which the least-squares estimate is asymptotically efficient in the class of linear unbiased estimates for all admissible spectra f(λ). In most cases the covariance matrix R is unknown and a reasonably large sample size is required to get adequate precision in estimating it. For this reason it would be convenient if one could use the least-squares estimate instead of the Markov estimate, since the least-squares estimate does not require knowledge of R. Even if R is known, it may be difficult to compute R^{-1}, which is required for the Markov estimate. In view of the results already obtained, the least-squares estimate will be asymptotically efficient if
(5.1)    T^{-1} ∫_{-π}^{π} [f(-λ) dM(λ)] T^{-1} ∫_{-π}^{π} [f^{-1}(-λ) dM(λ)] = I

for all admissible f(λ). The case of interest is that in which the process x_t and the regression vectors have real components. Because of this f(λ) and M(λ) must satisfy the additional restraints

(5.2)    f(λ) = f̄(-λ),    dM(λ) = dM̄(-λ).
When k = 1, asymptotic efficiency of the least-squares estimate has been discussed in U. Grenander [1] and U. Grenander and M. Rosenblatt [2], [3]. In the one-dimensional case it is convenient to set

(5.3)    T(λ) = M(λ+) - M(-λ-)
and rewrite (5.1) in the form

(5.4)    T^{-1} ∫_0^π f(λ) dT(λ) · T^{-1} ∫_0^π f^{-1}(λ) dT(λ) = 1.
Equation (5.4) is satisfied for all positive continuous f(λ) if and only if T(λ) increases only at a finite number of points 0 ≤ λ_1 < ··· < λ_q ≤ π, q ≤ s, and the jumps

(5.5)    T_i = ΔT(λ_i)

satisfy the relations

(5.6)    T_i T^{-1} T_j = δ_{ij} T_i.
These conditions are satisfied if one has a polynomial regression

(5.7)    m_t = β_1 + β_2 t + ··· + β_s t^{s-1},

a trigonometric regression

(5.8)    m_t = β_1 cos tλ_1 + ··· + β_s cos tλ_s + β_{s+1} sin tλ_1 + ··· + β_{2s} sin tλ_s
(with the points λ_i distinct), or more generally a mixed polynomial and trigonometric regression

(5.9)    m_t = Σ_{u=1}^{q} Σ_{v=1}^{l_u} β_{u,v} t^{v-1} cos tλ_u + Σ_{u=1}^{q} Σ_{v=1}^{l_u} γ_{u,v} t^{v-1} sin tλ_u

(with the points λ_u distinct) [3]. Obviously the sine terms in this last regression form disappear if λ_u = 0. Notice that these regression sequences include most of those used in
standard statistical work. It is easy to construct a regression sequence where the least-squares estimate is not asymptotically efficient. Consider

(5.10)    m_t = β(a_0 + a_1 cos tλ_1 + ··· + a_k cos tλ_k),    0 < λ_1 < ··· < λ_k,

where the constants a_i are known. Such a regression has a form similar to the pulse trains encountered in communication theory. In the case of such a regression T(λ) increases only at the points 0, λ_1, ..., λ_k. The jump of T(λ) at 0 is

(5.11)    a_0² [a_0² + (1/2) Σ_{i=1}^{k} a_i²]^{-1}

and the jump at λ_j, j = 1, ..., k, is

(5.12)    (1/2) a_j² [a_0² + (1/2) Σ_{i=1}^{k} a_i²]^{-1}.
The asymptotic efficiency of the least-squares estimate β_L in the class of linear unbiased estimates is

(5.13)    [Σ_{i=0}^{k} T_i f(λ_i) · Σ_{i=0}^{k} T_i f^{-1}(λ_i)]^{-1},

where λ_0 = 0 and T_i denotes the jump of T(λ) at λ_i.
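As a numerical illustration of (5.11)-(5.13) (a sketch added here; the amplitudes a_j, frequencies λ_j, and the AR(1) residual spectrum are assumptions), one can compare the asymptotic efficiency computed from the jumps of T(λ) with the exact finite-sample ratio of the Markov and least-squares variances:

```python
import numpy as np

# Efficiency of least squares for the pulse-train regression (5.10); the
# amplitudes a (a[0] = a_0 at frequency 0), frequencies lam, and rho are assumed.
rho = 0.6
a = np.array([1.0, 0.7, 0.4])
lam = np.array([0.0, 1.1, 2.3])
spec = lambda l: 1.0 / np.abs(1.0 - rho * np.exp(1j * l)) ** 2   # 2*pi*f(lambda) for AR(1)

# Asymptotic efficiency from the jumps (5.11)-(5.12) of T(lambda):
T = np.concatenate([[a[0] ** 2], a[1:] ** 2 / 2.0])
T = T / T.sum()
eff_asym = 1.0 / (np.sum(T * spec(lam)) * np.sum(T / spec(lam)))

# Exact finite-n ratio of Markov to least-squares variances:
n = 400
t = np.arange(1, n + 1)
phi = a[0] + (np.cos(np.outer(t, lam[1:])) * a[1:]).sum(axis=1)
R = rho ** np.abs(np.subtract.outer(t, t)) / (1.0 - rho ** 2)
var_ls = phi @ R @ phi / (phi @ phi) ** 2
var_m = 1.0 / (phi @ np.linalg.solve(R, phi))
print(eff_asym, var_m / var_ls)   # the two agree closely for large n
```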
In section 7 I shall discuss the question of how much additional information about the spectrum f(λ) is required to construct an estimate with the same asymptotic mean square error as the Markov estimate.

In the case of multidimensional time series, new phenomena arise. Consider first the case of a polynomial regression. If each component of the time series has a polynomial regression, that is,

(5.14)    m_t^{(i)} = Σ_{v=1}^{s} β_{iv} t^{v-1},    i = 1, ..., k,
the least-squares estimate β_L is still asymptotically efficient. However, if the different components have polynomial regressions of different orders, the least-squares estimate is no longer asymptotically efficient [5]. A simple example is that in which the mean value of the first coordinate of a two-dimensional time series is unknown while the mean value of the second coordinate is known to be zero. Then

(5.15)    m_t = β φ_t,    φ_t = (1; 0).
The function M(λ) increases only at zero and the jump at zero is

(5.16)    ΔM(0) = ( 1  0 ; 0  0 ).
Thus

(5.17)    ∫ [f(-λ) dM(λ)] = f_{11}(0),
          ∫ [f^{-1}(-λ) dM(λ)] = f_{22}(0) / (f_{11}(0) f_{22}(0) - f_{12}²(0)).
The asymptotic efficiency of the least-squares estimate is

(5.18)    1 - f_{12}²(0) / (f_{11}(0) f_{22}(0)).
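A quick check of (5.15)-(5.18) (an added sketch; the white-noise cross-correlation model is an assumption, chosen so that the spectral densities are constant and the asymptotic formula is exact):

```python
import numpy as np

# Two-dimensional series: x1_t = e_t + c*u_t, x2_t = u_t with e, u independent
# white noises (assumed), so f11 = (1+c^2)/2pi, f12 = c/2pi, f22 = 1/2pi.
c, n = 0.8, 200
I = np.eye(n)
R = np.block([[(1 + c ** 2) * I, c * I], [c * I, I]])   # covariance of (x1; x2)
phi = np.concatenate([np.ones(n), np.zeros(n)])         # mean regressor, as in (5.15)

var_ls = phi @ R @ phi / (phi @ phi) ** 2               # least squares: the mean of x1
var_m = 1.0 / (phi @ np.linalg.solve(R, phi))           # Markov estimate uses x2 as well
f11, f12, f22 = 1 + c ** 2, c, 1.0                      # 2*pi*f(0); the factor cancels
print(var_m / var_ls, 1 - f12 ** 2 / (f11 * f22))       # both equal 1/(1+c^2), as in (5.18)
```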
If there is a mixed polynomial and trigonometric regression (5.9), not only must the same regression form occur in each component, but the sine terms must appear to the same order as the cosine terms when λ_u ≠ 0. Thus, the least-squares estimate will be asymptotically efficient in the case of a regression

(5.19)    m_t = β_1 (cos tλ; 0) + β_2 (0; cos tλ) + β_3 (sin tλ; 0) + β_4 (0; sin tλ),

but not in the case of a regression

(5.20)    m_t = β_1 φ_t^{(1)} + β_2 φ_t^{(2)} = β_1 (cos tλ; 0) + β_2 (0; cos tλ),    λ ≠ 0.
It is worthwhile examining this last regression in a little more detail. The function M(λ) increases at two points, λ and -λ. The jumps are

(5.21)    ΔM_{11}(λ) = ΔM_{11}(-λ) = ( 1/2  0 ; 0  0 ),    ΔM_{12}(λ) = ΔM_{12}(-λ) = ( 0  1/2 ; 0  0 ),

(5.22)    ΔM_{21}(λ) = ΔM_{21}(-λ) = ( 0  0 ; 1/2  0 ),    ΔM_{22}(λ) = ΔM_{22}(-λ) = ( 0  0 ; 0  1/2 ).
Thus

(5.23)    ∫ [f(-λ) dM(λ)] = ( f_{11}(λ)   Re f_{12}(λ) ; Re f_{12}(λ)   f_{22}(λ) )

and

(5.24)    ∫ [f^{-1}(-λ) dM(λ)] = (f_{11}(λ) f_{22}(λ) - |f_{12}(λ)|²)^{-1} ( f_{22}(λ)   -Re f_{12}(λ) ; -Re f_{12}(λ)   f_{11}(λ) ).
It is worthwhile noting that one does have asymptotic efficiency of the least-squares estimate if Im f_{12}(λ) = 0.
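This can be seen directly by multiplying (5.23) and (5.24): the product is a scalar multiple of the identity, and the scalar is 1 exactly when Im f_{12}(λ) = 0. A small sketch (added here; the spectral values are assumptions):

```python
import numpy as np

# Product of (5.23) and (5.24) for assumed spectral values at one frequency.
def product_5_23_5_24(f11, f22, f12):
    re = f12.real
    A = np.array([[f11, re], [re, f22]])                 # (5.23)
    det = f11 * f22 - abs(f12) ** 2
    B = np.array([[f22, -re], [-re, f11]]) / det         # (5.24)
    return A @ B                                         # = ((f11*f22 - re^2)/det) * I

print(product_5_23_5_24(2.0, 1.5, 0.4 + 0.0j))   # Im f12 = 0: the identity
print(product_5_23_5_24(2.0, 1.5, 0.4 + 0.3j))   # Im f12 != 0: a multiple > 1 of I
```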
In the multidimensional case k ≥ 2 set

(5.25)    N(λ) = {(T^{-1/2} M(λ) T^{-1/2})_{jl};  j, l = 1, ..., s}.

Then equation (5.1) can be rewritten

(5.26)    ∫_{-π}^{π} [f(-λ) dN(λ)] ∫_{-π}^{π} [f^{-1}(-λ) dN(λ)] = I.
I have not been able to get simple necessary and sufficient conditions on N(λ) for equation (5.26) to be satisfied for all admissible f(λ). However, one can get simple conditions of this type when N(λ) is known to increase only at zero. This corresponds to the interesting case of a polynomial regression. Equation (5.26) can then be written as

(5.27)    (f*N)(f^{-1}*N) = I,

where

(5.28)    f = f(0) > 0

and

(5.29)    N = ΔN(0) ≥ 0.

Set
(5.30)    N_{ij} = {(N_{uv})_{ij};  u, v = 1, ..., s},    i, j = 1, ..., k,

so that N_{ij} is the s × s matrix formed from the (i, j)th elements of the blocks N_{uv}, and

(5.31)    W_{ij} = N_{ij} + N_{ji},    i ≠ j.

Here f*N denotes Σ_{i,j} f_{ij} N_{ij}.
Since f = f(0) and N = ΔN(0), the elements of f and N are all real.

THEOREM 3. Let f and N be positive definite and positive semidefinite symmetric matrices, respectively. The equation

(5.32)    (f*N)(f^{-1}*N) = I

is valid for all positive definite symmetric f if and only if

(5.33)    N_{ii} N_{jj} = δ_{ij} N_{ii},    Σ_{i=1}^{k} N_{ii} = I;
          W_{ij}² = N_{ii} + N_{jj},    N_{ii} W_{ij} = W_{ij} N_{jj},    i ≠ j;
          W_{ij} N_{kk} = N_{kk} W_{ij} = 0,    i, j ≠ k;
          W_{ij} W_{kl} = 0,    i, j ≠ k, l;
          W_{ki} W_{kj} + W_{kj} W_{ki} = W_{ij},    i, j, k distinct.
Consider first the case in which f is a diagonal matrix

(5.34)    f = ( λ_1  0  ···  0 ; 0  λ_2  ···  0 ; ··· ; 0  0  ···  λ_k ),    λ_i > 0.

Equation (5.32) then becomes

(5.35)    Σ_{i,j} λ_i λ_j^{-1} N_{ii} N_{jj} = I.
This equation is valid for all positive λ_i if and only if

(5.36)    N_{ii} N_{jj} = δ_{ij} N_{ii}

and Σ_i N_{ii} = I. Since N is positive semidefinite and N_{ii} N_{jj} = δ_{ij} N_{ii}, it follows that

(5.37)    W_{ij} N_{kk} = N_{kk} W_{ij} = 0

if i, j ≠ k and

(5.38)    W_{ij} W_{kl} = 0

if i ≠ k, l and j ≠ k, l. Now consider the case in which
(5.39)    f = ( f_1  f_2 ; f_2  f_3 )

and is positive definite. On differentiating equation (5.32) with respect to f_2 twice, equation

(5.40)    W_{12}² = N_{11} + N_{22}

is obtained. On differentiating equation (5.32) first with respect to f_1 and then with respect to f_2, equation

(5.41)    N_{11} W_{12} = W_{12} N_{22}
is obtained. Now let

(5.42)    f = ( f_1  f_2  f_3 ; f_2  f_4  f_5 ; f_3  f_5  f_6 )

and be positive definite. Differentiate equation (5.32) with respect to f_1, f_2, and f_3. I now get

(5.43)    W_{12} W_{13} + W_{13} W_{12} = W_{23}.

All the other equations (5.33) are obtained by taking some subscripts other than 1, 2, 3 or interchanging subscripts. By using the conditions (5.33), one can readily verify that equation (5.32) is satisfied with any positive definite symmetric f.
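Theorem 3 can be checked numerically. In the sketch below (added here; the block structure is an assumed example), f*N is interpreted as Σ_{i,j} f_{ij} N_{ij}, the choice N_{ij} = e_i e_j' satisfies the relations (5.33), and (5.32) holds for a random positive definite symmetric f; in fact f*N reproduces f itself in this example:

```python
import numpy as np

# Check of Theorem 3 with the assumed example N_ij = e_i e_j'.
k = 3
E = np.eye(k)
N = {(i, j): np.outer(E[i], E[j]) for i in range(k) for j in range(k)}

def star(f):
    # f*N = sum_{i,j} f_ij N_ij; with this N it reproduces f itself.
    return sum(f[i, j] * N[i, j] for i in range(k) for j in range(k))

rng = np.random.default_rng(0)
B = rng.standard_normal((k, k))
f = B @ B.T + k * np.eye(k)                     # random positive definite symmetric f
lhs = star(f) @ star(np.linalg.inv(f))
print(np.allclose(lhs, np.eye(k)))              # (5.32) holds

# Two of the conditions (5.33): N_ii N_jj = delta_ij N_ii and sum_i N_ii = I.
print(all(np.allclose(N[i, i] @ N[j, j], (i == j) * N[i, i])
          for i in range(k) for j in range(k)))
print(np.allclose(sum(N[i, i] for i in range(k)), np.eye(k)))
```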
6. Some computations

It is worthwhile looking at a process with a stationary residual of a special and simple form to see how good the asymptotic theory considered is for finite samples. Consider a process

(6.1)    y_t = x_t + β_1 + β_2 t,

where the residual x_t is a first-order autoregressive scheme with covariances

(6.2)    r_τ = ρ^{|τ|} / (1 - ρ²),    -1 < ρ < 1.
Here ρ is the correlation coefficient of the scheme. The regression coefficients β_1, β_2 of the scheme are unknown and are to be estimated. The covariance matrices of the least-squares estimate β_L and the Markov estimate β_M are considered. I have already noted that

(6.3)    E(β_L - β)(β_L - β)' = (φ'φ)^{-1} φ' R φ (φ'φ)^{-1}

and

(6.4)    E(β_M - β)(β_M - β)' = (φ' R^{-1} φ)^{-1}.
Here when the sample size is n

(6.5)    φ'φ = ( n   n(n+1)/2 ; n(n+1)/2   n(n+1)(2n+1)/6 ),

so that

(6.6)    (φ'φ)^{-1} = ( 2(2n+1)/(n(n-1))   -6/(n(n-1)) ; -6/(n(n-1))   12/(n(n²-1)) ).
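The inverse (6.6) can be confirmed directly (an added sketch):

```python
import numpy as np

# Check of (6.5)-(6.6) for the regression vectors (1, t), t = 1, ..., n.
n = 10
t = np.arange(1, n + 1)
phi = np.column_stack([np.ones(n), t])
closed = np.array([[2.0 * (2 * n + 1) / (n * (n - 1)), -6.0 / (n * (n - 1))],
                   [-6.0 / (n * (n - 1)), 12.0 / (n * (n ** 2 - 1))]])
print(np.allclose(np.linalg.inv(phi.T @ phi), closed))   # (6.6)
```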
Straightforward but tedious manipulations lead to

(6.7)    φ' R^{-1} φ = ( (n-2)(1-ρ)² + 2(1-ρ)   (n(n-1)/2)(1-ρ)² + (n+ρ)(1-ρ) ;
                         (n(n-1)/2)(1-ρ)² + (n+ρ)(1-ρ)   ((n-1)n(2n-1)/6)(1-ρ)² + n²(1-ρ) + nρ - ρ² ).
+ <»+p)(l-p) One can similarly show that n (l-p)» (6.8)/Ity-
»(»+!) 2(l-p)«
2p(l-pB) (l-p«)(l-p)»
w(»+l) 2(l-p)»
(«+l)p(l-pw) (l-p2)(l-p)2
(»+l)p(l-p») (l-p»)(l-p)»
*»(»+!) ( 2 » + l ) 6(l-p)«
'*
»(»+!) P
(1 + P ) ( 1 - P ) 3
, 2p2(l-pn+1) s (I+P)(I-P)* ' (i+P)(i-p) ; 2/*»+«(»+1)
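The closed forms (6.7) and (6.8) and the entries of Table I below can be reproduced by direct matrix computation (an added sketch; compare the printed values with the n = 10, ρ = +.2 rows of the table):

```python
import numpy as np

# Covariance matrices (6.3) and (6.4) for y_t = x_t + b1 + b2*t with the
# AR(1) covariances (6.2); reproduces the rows of Table I.
def covariances(n, rho):
    t = np.arange(1, n + 1)
    phi = np.column_stack([np.ones(n), t])
    R = rho ** np.abs(np.subtract.outer(t, t)) / (1.0 - rho ** 2)
    P = np.linalg.inv(phi.T @ phi)
    cov_ls = P @ phi.T @ R @ phi @ P                         # (6.3), rows (a)
    cov_m = np.linalg.inv(phi.T @ np.linalg.solve(R, phi))   # (6.4), rows (b)
    cov_asym = np.array([[4.0 / n, -6.0 / n ** 2],           # (6.9), rows (c)
                         [-6.0 / n ** 2, 12.0 / n ** 3]]) / (1.0 - rho) ** 2
    return cov_ls, cov_m, cov_asym

ls, m, asym = covariances(10, 0.2)
print(ls[0, 0], m[0, 0], asym[0, 0])   # 0.65196, 0.64406, 0.62500 as in Table I
```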
The covariance matrix as given by the asymptotic theory is

(6.9)    (1/(1-ρ)²) ( 4/n   -6/n² ; -6/n²   12/n³ )

to the first order. The (i, j)th elements of the covariance matrices of both the least-squares and Markov estimates, i, j = 1, 2, are given in Table I for the sample sizes n = 10, 15, 20, 50 and correlation coefficients ρ = -.8, -.6, ..., .8. The approximation suggested by asymptotic theory is also given.
TABLE I

COVARIANCE MATRICES OF THE LEAST-SQUARES AND MARKOV ESTIMATES (AND AN ASYMPTOTIC APPROXIMATION OF THE COVARIANCE MATRICES) OF A LINEAR REGRESSION, RESIDUAL FIRST-ORDER AUTOREGRESSIVE

Matrix elements
   ρ           (1, 1)       (1, 2) = (2, 1)     (2, 2)

n = 10
  +.2   (a)    0.65196      -0.091312           .016602
        (b)    0.64406      -0.090046           .016372
        (c)    0.62500      -0.093750           .018750
  -.2   (a)    0.35898      -0.052116           .0094753
        (b)    0.34109      -0.049425           .0089863
        (c)    0.27778      -0.041666           .0083333
  +.4   (a)    0.99189      -0.13464            .024880
        (b)    0.94826      -0.12785            .023245
        (c)    1.11111      -0.16667            .033333
  -.4   (a)    0.29685      -0.043812           .0079656
        (b)    0.27749      -0.040614           .0073844
        (c)    0.20408      -0.030612           .0061224
  +.6   (a)    1.69108      -0.21501            .039090
        (b)    1.54892      -0.19421            .035311
        (c)    2.50000      -0.37500            .075000
  -.6   (a)    0.27142      -0.040923           .0074402
        (b)    0.22345      -0.032950           .0059909
        (c)    0.15625      -0.023438           .0046876
  +.8   (a)    3.48150      -0.35878            .065229
        (b)    3.17040      -0.32391            .058893
        (c)   10.00000      -1.50000            .30000
  -.8   (a)    0.32541      -0.051326           .0093318
        (b)    0.18379      -0.027257           .0049559
        (c)    0.12346      -0.018518           .0037037

n = 15
  +.2   (a)    0.42883      -.040942            .0051178
        (b)    0.42456      -.040469            .0050587
        (c)    0.41667      -.041667            .0055556
  -.2   (a)    0.21958      -.021498            .0026873
        (b)    0.21716      -.021227            .0026534
        (c)    0.18519      -.018519            .0024692
  +.4   (a)    0.68983      -.064546            .0080684
        (b)    0.66268      -.061576            .0076970
        (c)    0.74075      -.074075            .0098767
  -.4   (a)    0.17508      -.017363            .0021704
        (b)    0.16642      -.016383            .0020478
        (c)    0.13606      -.013606            .0018141
  +.6   (a)    1.29297      -.11604             .014505
        (b)    1.18142      -.10428             .013034
        (c)    1.66669      -.16669             .022223

(a) Least-squares, (b) Markov, and (c) Asymptotic.
[Table I (continued): the n = 20 block and the first rows (ρ = +.2, -.2, +.4) of the n = 50 block are not recoverable from this copy.]
TABLE I (Continued)

Matrix elements
   ρ           (1, 1)       (1, 2) = (2, 1)     (2, 2)

n = 50 (continued)
  -.4   (a)    0.044081     -.0013209           .000051799
        (b)    0.043302     -.0012933           .000050719
        (c)    0.040816     -.0012245           .000048979
  +.6   (a)    0.46697      -.013595            .00053312
        (b)    0.44575      -.012856            .00050415
        (c)    0.50000      -.015000            .00060000
  -.6   (a)    0.035245     -.0010643           .000041737
        (b)    0.033457     -.0010010           .000039256
        (c)    0.031250     -.00093751          .000037500
  +.8   (a)    1.61368      -.045419            .0017810
        (b)    1.43485      -.039365            .0015437
        (c)    2.00000      -.060000            .0024000
  -.8   (a)    0.031137     -.00095752          .000037549
        (b)    0.026625     -.00079766          .000031281
        (c)    0.024691     -.00074074          .000029629
7. Some special examples

In section 5 a few special but interesting types of regression sequences were considered where the least-squares estimate of the regression coefficient was not asymptotically efficient in the class of linear unbiased estimates. I now consider two of these regression sequences to find out what information about the spectrum f(λ) is required to construct an estimate of the regression coefficient with the same asymptotic mean square error as the Markov estimate. The first example is that of a one-dimensional process

(7.1)    y_t = x_t + β φ_t,

where x_t is stationary and

(7.2)    φ_t = Σ_{j=1}^{k} a_j cos tλ_j,    0 < λ_1 < ··· < λ_k.

Note that

(7.3)    Σ_{t=1}^{n} φ_t² = (n/2) Σ_{j=1}^{k} a_j² + O(1).

An estimate of β which has the same asymptotic mean square error as the Markov estimate is
(7.4)    …

for

(7.5)    E β* = β + O(n^{-1})

and

(7.6)    E(β* - β)² ~ …

Notice that the only information about the spectrum f(λ) required for the construction of β* is knowledge of the ratios

(7.7)    f(λ_i)/f(λ_j),    i, j = 1, ..., k.
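The point can be illustrated with a weighted combination of the single-frequency least-squares estimates (an added sketch, not necessarily the β* of (7.4); the amplitudes, frequencies, and AR(1) residual are assumptions). The weights a_j²/f(λ_j) are normalized, so a constant factor in f cancels; only the ratios (7.7) enter:

```python
import numpy as np

# Combine single-frequency estimates with weights a_j^2/f(lam_j); assumed model.
rho, beta, n = 0.5, 2.0, 2000
a, lam = np.array([1.0, 0.6]), np.array([0.9, 2.1])
f = lambda l: 1.0 / np.abs(1.0 - rho * np.exp(1j * l)) ** 2   # AR(1) spectrum (up to 2*pi)

rng = np.random.default_rng(1)
e = rng.standard_normal(n + 200)
x = np.zeros(n + 200)
for i in range(1, n + 200):          # AR(1) residual with a burn-in stretch
    x[i] = rho * x[i - 1] + e[i]
t = np.arange(1, n + 1)
y = x[200:] + beta * (np.cos(np.outer(t, lam)) * a).sum(axis=1)

b = np.array([2.0 / (n * a[j]) * (y * np.cos(t * lam[j])).sum() for j in range(2)])
w = a ** 2 / f(lam)
w = w / w.sum()                      # normalization: only ratios of f matter
print(w @ b)                         # close to beta = 2.0
```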
The second example is a two-dimensional process (7.8)
Vt=**i + P
y y)
(37)
with
0. Any point k t K has the form k = (t, x, y). Let x = x(k), y — y(Jc) be the x, y coordinates of k. Further, let NXtV(k) be the collection of all points k (in K, of course) with x, y coordinates the same as those of points in N(k) r\ K. Take k' to be an element in the spectrum of fi. There is then a neighborhood Nx(k) such that N1(k)kfN1(k)
(38)
CNx,y(k).
This follows from the continuity of the multiplicative operation. But then μ(N_{x,y}(k)) > 0 for any neighborhood N(k) of k. For

(39)    μ(N_{x,y}(k)) = ∫ χ_{N_{x,y}(k)}(s_1 s_2 s_3) dν^{(n)}(s_1) dμ(s_2) dν^{(n)}(s_3)

and for some n

(40)    ν^{(n)}(N_{x,y}(k)) > 0.

Thus there is a point k'' ∈ N_{x,y}(k) in the μ spectrum. Given the neighborhood N(k) there is a point k' ∈ N(k) with a corresponding point k'' ∈ N_{x,y}(k), x(k'') = x(k'), y(k'') = y(k'), that is, in the μ spectrum. There are then points k_1'', k_2'' such that k_1'' k'' k_2'' = k'. This implies that there are neighborhoods N(k_1''), N(k''), N(k_2'') of k_1'', k'', k_2'' such that

(41)    N(k_1'') N(k'') N(k_2'') ⊂ N(k') ⊂ N(k)

by the continuity of the multiplicative operation and the structure of K. Then all of K is the spectrum of μ since

(42)    ν^{(n)} * μ * ν^{(m)} = μ.

The proof is complete.
The proof is complete. Let us now consider a product measure p on K of the form (43)
!z =
xXaX(3
where x is the normed Haar measure (x(T) == 1) of the compact topological group T and a and # are regular probability measures on X, Y respectively. 173
302
M. ROSENBLATT
We shall show that such a measure is an idempotent on K. First consider a set G ∈ 𝔅 which is a product set, that is, G = A × B × C where A, B, C are sets of the Borel fields on T, X, Y respectively. Let s^{-1}G = {s' | s s' ∈ G}. But

(44)    p * p(G) = ∫ p(s^{-1}G) p(ds).

Now

(45)    p(s^{-1}G) = ∫_B ∫ χ([t(x(s), y'')]^{-1} A) dα(x') dβ(y'') = χ(A) α(B) …

by the invariance of Haar measure. Thus

(46)    p * p(G) = ∫ χ(A) α(B) … p(ds) = χ(A) α(B) β(C) = p(G).

Since p * p(G) = p(G) for all product sets, the relation holds for all G in the field of finite sums of product sets and hence for all G in the Borel field 𝔅 generated by these sets.

LEMMA 7. Let K be a completely simple compact semigroup. Any measure p of the form (43) is an idempotent measure on K.

We shall now show that for a fairly wide class of completely simple compact semigroups, the p measures are the only idempotent measures.
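Lemma 7 can be illustrated on a finite completely simple (Rees matrix) semigroup (an added sketch; the group T = Z_m, the sets X, Y, the sandwich function, and the measures α, β are all assumptions): with multiplication (t_1, x_1, y_1)(t_2, x_2, y_2) = (t_1 + φ(y_1, x_2) + t_2, x_1, y_2) and χ uniform on T, the convolution p * p returns p.

```python
import numpy as np
from itertools import product

# K = T x X x Y, T = Z_m, (t1,x1,y1)(t2,x2,y2) = (t1 + phi(y1,x2) + t2 mod m, x1, y2).
m, nx, ny = 4, 2, 3
rng = np.random.default_rng(2)
phi = rng.integers(0, m, size=(ny, nx))     # assumed sandwich function Y x X -> T
alpha = rng.dirichlet(np.ones(nx))          # assumed probability measure on X
beta = rng.dirichlet(np.ones(ny))           # assumed probability measure on Y

K = list(product(range(m), range(nx), range(ny)))
p = {(t, x, y): alpha[x] * beta[y] / m for t, x, y in K}   # p = chi x alpha x beta, (43)

def mult(s1, s2):
    (t1, x1, y1), (t2, x2, y2) = s1, s2
    return ((t1 + phi[y1, x2] + t2) % m, x1, y2)

conv = dict.fromkeys(K, 0.0)
for s1 in K:
    for s2 in K:
        conv[mult(s1, s2)] += p[s1] * p[s2]

print(max(abs(conv[s] - p[s]) for s in K))   # ~ 0, so p * p = p as in Lemma 7
```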
THEOREM 5. Let K be a completely simple compact semigroup with the function …

… = ∫ … dF(λ), or the stronger assumption (for a later computation) …
1. PERIODIC SOLUTIONS

Consider a periodic solution of the Burgers equation with period 2π and spatial mean zero; that is, condition (1.8) is satisfied. For continuous initial data, it has already been noted that the solution can be given in terms of a positive solution φ of the heat equation (t > 0). The continuity of the initial data implies that Σ_s |γ_s| < ∞, so that the expansion

(2.1)    …

(2.2)    …

converges. Even if (2.4) or (2.4') are not satisfied, they will be satisfied at some later time t by γ_s exp(-s²νt), so that the qualitative features of our conclusions are valid for any periodic solution satisfying (1.8). Now

(2.7)    z_k(t) = Σ_{s_1 + ··· + s_{k+1} = k} γ_{s_1} ··· γ_{s_{k+1}} exp[-(s_1² + ··· + s_{k+1}²) ν t] …
If we formally look at the Burgers equation in the Fourier domain, …

(3.7)    … → 0 as t → ∞,

with φ(x, 0) a bounded process stationary in x, of absolute value less than c and mean zero. Here Z(·) is the random spectral function corresponding to φ(x, 0). We shall assume that φ(x, 0) has second-order spectral density f(λ) and fourth-order cumulant spectral density g(λ_1, λ_2, λ_3). These will be well defined and continuous if the second-order moments and fourth-order cumulants are integrable as functions of the one and three linearly independent differences of spatial arguments respectively (see Ref. 5 for a discussion of higher-order spectra). The energy content of the solution u of the Burgers equation given above is the same as that of the solution ψ^{(1)} of the heat equation as t → ∞. The rate of decay of total energy is governed by ∫ |u|² dx, with

(3.8)    u_x = … (c + φ)^{-1} …

Thus

∫ |u(x, t)|² dx ~ ν² c^{-4} ∫ … (c + φ) … dx

as t → ∞.

(3.12)    …

as t → ∞. The rate of dissipation of energy is given by (3.5), with u_x given by (3.8). Then … as t → ∞, and … with E |(c + …
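The energy statement can be illustrated numerically through the Cole-Hopf substitution u = -2ν φ_x/φ, with φ solving the heat equation (an added sketch; the initial data and viscosity are assumptions, and ψ below is the lowest-Fourier-mode heat solution built from φ, playing the role of ψ^{(1)}):

```python
import numpy as np

# Periodic Burgers solution via Cole-Hopf: u = -2*nu*phi_x/phi with phi solving
# the heat equation; initial data u0 and viscosity nu are assumed.
nu, N = 0.1, 256
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)                   # integer wavenumbers
u0 = np.sin(x) + 0.5 * np.sin(2 * x)               # mean zero, period 2*pi

U0_hat = np.fft.fft(u0)                            # spectral antiderivative of u0
U0_hat[1:] = U0_hat[1:] / (1j * k[1:])
U0_hat[0] = 0.0
phi0_hat = np.fft.fft(np.exp(-np.fft.ifft(U0_hat).real / (2 * nu)))

def energies(t):
    decay = np.exp(-nu * k ** 2 * t)
    phi = np.fft.ifft(decay * phi0_hat).real
    phi_x = np.fft.ifft(1j * k * decay * phi0_hat).real
    u = -2 * nu * phi_x / phi                      # Burgers solution
    # psi: heat solution carrying only the lowest Fourier mode of phi
    low = np.where(np.abs(k) == 1, 1j * k * decay * phi0_hat, 0)
    psi = -2 * nu * np.fft.ifft(low).real / phi0_hat[0].real
    dx = 2 * np.pi / N
    return np.sum(u ** 2) * dx, np.sum(psi ** 2) * dx

for t in (2.0, 5.0, 10.0, 20.0):
    Eu, Epsi = energies(t)
    print(t, Eu / Epsi)    # ratio tends to 1: the same energy content as t -> infinity
```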
… is the standard normal density. Let C = 1, x = x(T) = B(T) + z/(2 log T)^{1/2}. Then for a > 0,

LEMMA …    P[max{Y_T(t + akx^{-2a}), 0 ≤ k ≤ n} > x]

(A.8)    = …

But we can bound the expression on the left of (A.31) from above by

P[max{Y_T(v + s): 0 ≤ s ≤ t} > x] + P[min{Y_T(v + s): 0 ≤ s ≤ τ} < -γ]

and from below as in (A.26), where we add A_{n_a+1}, ···, A_{2n_a+1} with

A_{n_a+j} = {min{Y_T(v + kax^{-2a}): (j - 1)n ≤ k < jn} < -γ}.

Now by (A.10)

(A.32)    Σ_s P(A_s A_{n_a+j+1}) = o(x^{…})
|m| = Σ_j m_j. If moments up to order s exist there is a Taylor expansion

φ(t) = Σ_{|m| ≤ s} (i^{|m|} μ^{(m)} / m!) t^m + o(|t|^s)

with

t^m = Π_{j=1}^{k} t_j^{m_j},    m! = Π_{j=1}^{k} m_j!,

and the moments

μ^{(m)} = μ^{(m_1, ..., m_k)} = E(Π_{j=1}^{k} X_j^{m_j}).

There is a corresponding Taylor expansion for log φ …