one may consider in some cases that Δ(t) vanishes, and replace ω̄(t) by ω₁. In the case of a narrow-band spectrum, the probability density of the time τ during which the random function is above (below) the zero level ("the distribution law of the half-period") has the following approximate expression:

f(τ) ≈ πΔ²τ / {2[(π − ω₁τ)² + Δ²τ²]^{3/2}},

whose accuracy increases as the ratio Δ/ω₁ decreases.
SOLUTIONS FOR TYPICAL EXAMPLES
Example 37.1 Find the average number of passages per unit time for the random function Θ(t) =

(ν) are determined from Tables 9T and 8T. The significance level is 5 per cent.

43.17 In Table 70 the results of several measurements of a quantity X are given.
TABLE 70

  Limits of the interval x_i    m_i
  75 to 77                       2
  77 to 79                       4
  79 to 81                      12
  81 to 83                      24
  83 to 85                      25
  85 to 87                      32
  87 to 89                      24
  89 to 91                      23
  91 to 93                      22
  93 to 95                      20
  95 to 97                       8
  97 to 99                       3
  99 to 101                      1

  n = Σ_{i=1}^{13} m_i = 200
Using the chi-square test, test whether the data agree with the normal distribution law, and with the convolution of the normal and uniform distributions whose parameters are to be determined from the results of measurements. Remember that for the random variable X = Y + Z, where Y and Z are independent, Y obeys a normal distribution law with zero expectation and variance σ², and Z obeys a uniform distribution law in the interval (α, β), the probability density ψ(x) is given by the expression

ψ(x) = 1/[2(β − α)] · [Φ̂((x − α)/σ) − Φ̂((x − β)/σ)],

where Φ̂ is the Laplace function. To determine the estimates of the parameters σ, α, β appearing in the formula for ψ(x), it is necessary to derive, from the data, the estimates for the expectation x̄ and the second and fourth central
moments μ̄₂ and μ̄₄, after which the estimates of σ, α, β are given by the equations:

σ̄² = μ̄₂ − √[(5/2)μ̄₂² − (5/6)μ̄₄],
(β̄ − ᾱ)²/12 = μ̄₂ − σ̄²,
(β̄ + ᾱ)/2 = x̄.

43.18 For 602 samples, the distance r (in microns) of the center of gravity of an item to the axis of its exterior cylindrical surface is measured with the aid of a control instrument. The results of the measurements appear in Table 71.
TABLE 71

  Intervals of values r_i    m_i
  0 to 16                    40
  16 to 32                  129
  32 to 48                  140
  48 to 64                  126
  64 to 80                   91
  80 to 96                   45
  96 to 112                  19
  112 to 128                  8
  128 to 144                  3
  144 to 160                  1
Using the chi-square test, verify that the data obey a Rayleigh distribution

f(r) = (r/σ²) e^{−r²/(2σ²)};

the estimate of the parameter σ should be determined in terms of the estimate r̄ for the expectation by the formula

M[r] = σ√(π/2).

Use the 5 per cent significance level.

43.19 Table 72 gives the results of 228 measurements of the sensitivity X of a television set (in microvolts).
TABLE 72

  x_k    m_k        x_k    m_k        x_k    m_k
  200      1        450     33        650     19
  250      2        500     34        700     13
  300     11        550     31        750      8
  350     25        600     20        800      3
  400     28
43. TESTS OF GOODNESS-OF-FIT
Using the chi-square test, determine the better fit between the normal and the Maxwell distribution, whose probability density is defined by the formula

f(x) = √(2/π) · [(x − x₀)²/a³] exp{−(x − x₀)²/(2a²)}.

Assume the expectation M[X] of X and a are related by the formula M[X] = x₀ + 1.596a. For simplicity, select as x₀ the smallest observed value of X.

43.20 A lot of 200 light bulbs is tested for lifetime T (in hours) and gives results as in Table 73.
TABLE 73

  Class No. i    Limits of the class t_i to t_{i+1}    No. in the class m_i
  1              0 to 300                               53
  2              300 to 600                             41
  3              600 to 900                             30
  4              900 to 1200                            22
  5              1200 to 1500                           16
  6              1500 to 1800                           12
  7              1800 to 2100                            9
  8              2100 to 2400                            7
  9              2400 to 2700                            5
  10             2700 to 3000                            3
  11             3000 to 3300                            2
  12             more than 3300                          0
Using the chi-square test, test that the data obey an exponential distribution law whose probability density is expressed by the formula

f(t) = λe^{−λt}.

The significance level should be taken equal to 5 per cent. Consider the fact that the parameter λ of the exponential distribution law is related to the expectation of the random variable T by the formula

λ = 1/M[T].

43.21 A lot of 1000 electronic tubes is tested for lifetime. Table 74 gives the lifetime intervals (t_i, t_{i+1}) before breakdowns occur and the corresponding sizes m_i of the classes; the t_i are expressed in hours. Using the chi-square test, verify the hypothesis that the experimental data agree with the Weibull distribution law. The distribution function F(t) for this law is given by the formula

F(t) = 1 − exp{−(bt)^m},

where b = Γ(1 + 1/m)/l and Γ(x) is the Γ-function.
TABLE 74
  No. of the interval i    Limits of the interval t_i to t_{i+1}    No. in the class m_i
  1                        0 to 100                                  78
  2                        100 to 200                               149
  3                        200 to 300                               174
  4                        300 to 400                               165
  5                        400 to 500                               139
  6                        500 to 600                               107
  7                        600 to 700                                77
  8                        700 to 800                                50
  9                        800 to 900                                32
  10                       900 to 1000                               27
  11                       more than 1000                             2
The parameters l (the expected value of T) and m should be computed from the data. Take into account that m is related to the standard deviation σ through the coefficient of variation

v_m = σ̄/l̄.

In Table 32T there are given the values of b_m and v_m as functions of m. Knowing v_m, we can find m and b_m from this table. The following is a section of this table (Table 75).

TABLE 75

  m      1.7      1.8
  b_m    0.892    0.889
  v_m    0.605    0.575
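All of the goodness-of-fit problems above reduce to one computation: estimate the parameters, form expected class counts np_i, and evaluate χ² = Σ(m_i − np_i)²/(np_i). The Python sketch below is ours, not the book's; it carries this out for the exponential hypothesis of Problem 43.20, estimating λ from the grouped mean. Class pooling and the table look-up of the significance probability are left aside, and the midpoint used for the open tail class is an arbitrary convention.

```python
import math

# Data of Problem 43.20: lifetime classes (hours) and observed counts m_i.
bounds = [0, 300, 600, 900, 1200, 1500, 1800, 2100, 2400, 2700, 3000, 3300]
m = [53, 41, 30, 22, 16, 12, 9, 7, 5, 3, 2, 0]   # last class: t > 3300
n = sum(m)

# Estimate the parameter: lam = 1 / M[T], with M[T] taken as the grouped mean.
mids = [(a + b) / 2 for a, b in zip(bounds, bounds[1:])] + [3450.0]
t_mean = sum(mi * xi for mi, xi in zip(m, mids)) / n
lam = 1.0 / t_mean

# Expected class probabilities under f(t) = lam * exp(-lam * t).
p = [math.exp(-lam * a) - math.exp(-lam * b) for a, b in zip(bounds, bounds[1:])]
p.append(math.exp(-lam * bounds[-1]))            # open tail class

chi2 = sum((mi - n * pi) ** 2 / (n * pi) for mi, pi in zip(m, p))
k = len(m) - 1 - 1   # degrees of freedom: classes - 1 - one fitted parameter
print(t_mean, lam, round(chi2, 2), k)
```

With the 12 classes and one fitted parameter, the statistic would then be referred to a chi-square distribution with 10 degrees of freedom.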
43.22 The position of a point M in the plane is defined by rectangular coordinates X and Y. An experiment consists of measuring the angle φ made by the radius-vector of a point M with the y-axis (Figure 36). The results of 1000 measurements of φ, rounded off to the nearest multiple of 15 degrees, and the numbers m_i of appearances of a given value φ_i are shown in Table 76.
FIGURE 36

TABLE 76

  φ_i, degrees    m_i        φ_i, degrees    m_i        φ_i, degrees    m_i
  −82.5           155        −22.5            49        37.5             67
  −67.5           118         −7.5            48        52.5             66
  −52.5            73          7.5            48        67.5            111
  −37.5            59         22.5            53        82.5            153
If X and Y are independent normal variables with zero expectations and variances equal to σ² and (1/4)σ², respectively, then z = tan φ must obey the Cauchy distribution (the arctan law)

f(z) = 2/[π(z² + 4)].

Assuming that there are no errors in the measurements of φ and that the round-off errors may be discounted, test, by using the Kolmogorov test, the validity of the preceding assumptions made about X and Y if the significance level is 5 per cent.

43.23 To check the precision of a special pendulum clock at random times, one records the angles made by the axis of the pendulum and the vertical. The amplitude of oscillation is constant and equal to α = 15°. The results of 1000 such measurements, rounded off to the nearest multiple of 3°, appear in Table 77.
TABLE 77

  α_i, measured in degrees    m_i (no. of occurrences of α_i)
  −13.5                       188
  −10.5                        88
  −7.5                         64
  −4.5                         86
  −1.5                         62
  1.5                          74
  4.5                          76
  7.5                          81
  10.5                        100
  13.5                        181
Assuming that the round-off errors may be discounted, test, using the Kolmogorov test, the hypothesis that the data agree with the arcsine distribution law if the significance level is 5 per cent.

43.24 To check the stability of a certain machine, the following test is conducted every hour: a sample of 20 items selected at random is measured and, using the results of the measurements, one computes for the ith sample the unbiased estimate σ̃_i² of the variance. The values of σ̃_i² for 47 such samples are given in Table 78.
TABLE 78

  i    σ̃_i²       i     σ̃_i²       i     σ̃_i²       i     σ̃_i²
  1    0.1225    13    0.1444    25    0.1681    37    0.1089
  2    0.1444    14    0.1600    26    0.1369    38    0.1089
  3    0.1296    15    0.1521    27    0.1681    39    0.0784
  4    0.1024    16    0.1444    28    0.0676    40    0.1369
  5    0.1369    17    0.1024    29    0.1024    41    0.0729
  6    0.0961    18    0.0961    30    0.1369    42    0.1089
  7    0.1296    19    0.1156    31    0.0576    43    0.0784
  8    0.1156    20    0.1024    32    0.1024    44    0.5121
  9    0.1764    21    0.1521    33    0.0841    45    0.1600
  10   0.0900    22    0.1024    34    0.1521    46    0.1681
  11   0.1225    23    0.1600    35    0.0676    47    0.1089
  12   0.1156    24    0.1296    36    0.1225
Using the chi-square test, test at a 5 per cent significance level the hypothesis of proportionality of the variances; that is, test the assumption that there is no disorder, disorder meaning that the dispersion of the measured dimension of an item varies. Take into account the fact that if this hypothesis is valid, the quantity

q_i = (n_i − 1)σ̃_i²/σ̃²

obeys approximately a chi-square distribution law with (n_i − 1) degrees of freedom, where σ̃² is the unbiased estimate for the variance σ² of the entire main population and can be computed by the formula

σ̃² = Σ_{i=1}^m σ̃_i²(n_i − 1) / (N − m),

where n_i = n = 20 is the number of items in each sample, m = 47 is the number of samples, and N = Σ_{i=1}^m n_i = 940 is the total number of items in all samples.

43.25 There are m = 40 samples of n = 20 items each. For the ith group there are given: as an estimate for the expectation, x̄_i; a randomly selected value x_{i1} from the ith sample (for example, the first in each sample); and, for the variance, the unbiased estimate σ̃_i² of the variance of the dimension x of an item. The values of x̄_i, x_{i1}, σ̃_i² for the 40 samples appear in Table 79.
TABLE 79

  i     x̄_i    x_{i1}   σ̃_i²       i     x̄_i    x_{i1}   σ̃_i²
  1     …      …       …         21    …      …       …
  2     182    152     38        22    112    108     32
  3     195    145     40        23    49     97      52
  4     81     134     32        24    116    106     36
  5     149    124     37        25    138    124     36
  6     143    144     31        26    120    149     37
  7     133    142     31        27    120    129     41
  8     132    143     34        28    104    120     26
  9     111    109     42        29    121    105     26
  10    156    121     30        30    99     110     32
  11    103    93      35        31    123    105     37
  12    61     118     45        32    109    123     24
  13    149    116     38        33    100    116     32
  14    209    123     40        34    115    123     29
  15    124    106     39        35    108    109     27
  16    52     181     46        36    125    138     35
  17    147    102     32        37    170    126     33
  18    145    124     31        38    132    132     33
  19    128    125     34        39    114    131     28
  20    98     119     32        40    155    115     37
Using the Kolmogorov test, verify at the 10 per cent significance level the hypothesis that the normal distribution obtains for the dimension x. Note that in this case (for n > 4) the quantities

η′_i = η_i √(n − 2) / √(n − 1 − η_i²),   where η_i = (x_{ij} − x̄_i)/σ̃_i,

obey a Student's distribution law with k = n − 2 = 18 degrees of freedom; here x_{ij} is a randomly selected value from the ith sample (in our case x_{i1}).

43.26 The results of 300 measurements of some quantity x are included in Table 80.
TABLE 80

  Limits of the interval x_i    m_i
  50 to 60                       1
  60 to 70                       2
  70 to 80                       9
  80 to 90                      23
  90 to 100                     33
  100 to 110                    56
  110 to 120                    61
  120 to 130                    49
  130 to 140                    25
  140 to 150                    19
  150 to 160                    16
  160 to 170                     4
  170 to 180                     2
Using the chi-square test, test that the data agree with the normal distribution whose parameter estimates should be computed from the experimental data. Smooth the data with the aid of a distribution specified by a Charlier-A series and, using the chi-square test, verify that the data agree with the obtained distribution.

43.27 The measurements of light velocity c in the Michelson-Pease-Pearson experiment gave the results shown in Table 81. For brevity, the first three digits of c_i (in km/sec) are omitted (299 000).
TABLE 81

  Limits of the interval c_i    m_i        Limits of the interval c_i    m_i
  735 to 740                     3         775 to 780                    40
  740 to 745                     7         780 to 785                    17
  745 to 750                     4         785 to 790                    16
  750 to 755                     8         790 to 795                    10
  755 to 760                    17         795 to 800                     5
  760 to 765                    23         800 to 805                     2
  765 to 770                    29         805 to 810                     3
  770 to 775                    45         810 to 815                     4
The following estimates for the expected value c̄ and the standard deviation σ were obtained from the data:

c̄ = 299 733.85 km/sec,   σ̄ = 14.7 km/sec.

The chi-square test of the hypothesis that the data agree with a normal distribution law with parameters c̄ and σ̄ gives the value χ²_H = 18.52; the number of degrees of freedom in this case (small intervals being united) is k_H = 9, and P(χ² ≥ χ²_H) = 0.018. The hypothesis should be rejected. Smooth the observations with the distribution law specified by a Charlier-A series and test, with the chi-square test, that the experimental data obey the resulting distribution law.

43.28 Two lots, each containing 100 items, are measured. The numbers of items h_{ij} with normal, underestimated and overestimated dimensions are exhibited in Table 82.
TABLE 82

  Lot no. i    j = 1 (underestimated    j = 2 (normal    j = 3 (overestimated    h_{i0}
               dimension)               dimension)       dimension)
  1            25                       50               25                      100
  2            52                       41                7                      100
  h_{0j}       77                       91               32                      200
Using the chi-square test, determine whether the lot number and the character of the dimensions of the items are independent, at a 5 per cent significance level.
44. DATA PROCESSING BY THE METHOD OF LEAST SQUARES

Basic Formulas
The method of least squares is applied for finding estimates of parameters appearing in a functional dependence between variables whose values are experimentally determined. If the experiment gives n + 1 pairs of values (x_i, y_i), where the x_i are the values of the argument and the y_i are the values of the function, then the parameters of the approximating function F(x) are selected to minimize the sum

S = Σ_{i=0}^n [y_i − F(x_i)]².
If the approximating function is a polynomial, that is,

F(x) = Q_m(x) = a₀ + a₁x + ⋯ + a_m x^m   (m ≤ n),

then the estimates ā_k of its coefficients are determined from a system of m + 1 normal equations

Σ_{j=0}^m s_{k+j} ā_j = s_k ā₀ + s_{k+1} ā₁ + ⋯ + s_{k+m} ā_m = v_k   (k = 0, 1, 2, …, m),

where

s_k = Σ_{i=0}^n x_i^k   (k = 0, 1, 2, …, 2m),
v_k = Σ_{i=0}^n y_i x_i^k   (k = 0, 1, 2, …, m).
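As a concrete illustration of the system above, here is a minimal Python sketch (ours, not part of the original text; the data are invented) that accumulates s_k and v_k and solves the normal equations by elimination:

```python
# Polynomial least squares via the normal equations s_{k+j} a_j = v_k.
def polyfit_normal(x, y, m):
    s = [sum(xi ** k for xi in x) for k in range(2 * m + 1)]
    v = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(m + 1)]
    # Augmented matrix of the normal system, solved by Gauss-Jordan elimination.
    A = [[s[k + j] for j in range(m + 1)] + [v[k]] for k in range(m + 1)]
    for col in range(m + 1):
        piv = max(range(col, m + 1), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(m + 1):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [ar - f * ac for ar, ac in zip(A[r], A[col])]
    return [A[k][m + 1] / A[k][k] for k in range(m + 1)]

x = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
y = [1.0 + 2.0 * xi - 0.5 * xi ** 2 for xi in x]   # exact quadratic, no noise
a = polyfit_normal(x, y, 2)
print(a)   # close to [1.0, 2.0, -0.5]
```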
If the values x_i are given without errors and the values y_i are independent and equally accurate, the estimate for the variance σ² of y_i is given by the formula

σ̄² = S_min/(n − m),

where S_min is the value of S computed under the assumption that the coefficients of the polynomial F(x) = Q_m(x) are replaced by their estimates determined from the system of normal equations. If the y_i are normally distributed, then the method given is the best one for finding the approximating function F(x).

The estimates σ̄²_{a_k} of the variances of the coefficients ā_k and the covariances k̄_{a_k,a_j} are given by the formulas

σ̄²_{a_k} = σ̄² M_{k,k},   k̄_{a_k,a_j} = σ̄² M_{k,j},

where M_{k,j} = Δ_{kj}/Δ; here Δ = |d_{kj}| is the determinant of the system of normal equations, of order m + 1, with

d_{kj} = s_{k+j}   (j, k = 0, 1, 2, …, m),

and Δ_{kj} is the cofactor of d_{kj} in the determinant Δ.
In solving the system of normal equations by the elimination method, the quantities M_{k,j} may also be obtained without replacing the v_k by their numerical values: in the linear combination of the v_j used to represent ā_k, the coefficient of v_j is the desired number M_{k,j}. In the particular case of a linear dependence, m = 1, we have:

σ̄²_{a₀} = s₂ S_min / [(s₂s₀ − s₁²)(n − 1)],
σ̄²_{a₁} = s₀ S_min / [(s₂s₀ − s₁²)(n − 1)],
k̄_{a₀,a₁} = −s₁ S_min / [(s₂s₀ − s₁²)(n − 1)].
In the case in which the measurements are not equally accurate, that is, the y_i have different variances σ_i², all the previous formulas remain valid if S, s_k and v_k are replaced by

S′ = Σ_{i=0}^n p_i²(y_i − a₀ − a₁x_i − ⋯ − a_m x_i^m)²,
s′_k = Σ_{i=0}^n p_i² x_i^k   (k = 0, 1, 2, …, 2m),
v′_k = Σ_{i=0}^n p_i² y_i x_i^k   (k = 0, 1, 2, …, m),

where the "weights" p_i² of the y_i are

p_i² = A²/σ_i²,

and A² is a coefficient of proportionality.
If the "weights" p_i are known, the estimates of the variances of individual measurements y_i are computed by the formula

σ̄_i² = S′_min / [(n − m)p_i²].

If y_i is obtained by averaging n_i equally accurate results, then the "weight" of the measurement y_i is proportional to n_i, and one may take p_i² = n_i. All the formulas remain unchanged except the one for σ̄_i²; in this case,

σ̄_i² = S′_min / [(n − m)(n_i − 1)].

The confidence intervals for the coefficients a_k for any given confidence level α have the form

ā_k − γσ̄_{a_k} < a_k < ā_k + γσ̄_{a_k},

where γ is determined from Table 16T for Student's distribution for the value of α and k = n − m degrees of freedom.
In the case of equally accurate measurements, the confidence interval for the standard deviation σ and the confidence level α are determined from the inequalities

γ₁σ̄ < σ < γ₂σ̄,

where γ₁ and γ₂ are found from Table 19T for a chi-square distribution with entry value α and k degrees of freedom. For the same purpose one can use Table 18T; in this case

γ₁ = √[(n − m)/χ₂²],   γ₂ = √[(n − m)/χ₁²],

where χ₁² and χ₂² are determined from the equations

P(χ² ≤ χ₁²) = (1 − α)/2,   P(χ² ≤ χ₂²) = (1 + α)/2

for k = n − m degrees of freedom.

The confidence limits form a strip containing the graph of the unknown correct dependence y = F(x) with a given confidence level α; they are determined by the inequalities

Q_m(x_i) − γσ̄_y(x_i) < y(x_i) < Q_m(x_i) + γσ̄_y(x_i),

where σ̄²_y(x_i) is the estimate for the variance of y defined by the dependence y = Q_m(x) (it depends on the random variables represented by the estimates ā_k). In the general case the computation of σ̄²_y(x) is difficult, because it requires the knowledge of all the covariances k̄_{a_k,a_j}. For a linear dependence (m = 1),

σ̄²_y(x) = σ̄²_{a₀} + σ̄²_{a₁}x² + 2k̄_{a₀,a₁}x.

The value of γ is determined from Table 16T for Student's distribution for the entry α and k = n − m degrees of freedom.
The value of y is determined from Table 16T for Student's distribution for the entry a and k = n - m degrees of freedom. In the case of equidistant values X; of the argument, the computation of the approximating polynomial can be simplified by using the representation
where
P~c,n(x;)
are the orthogonal Chebyshev polynomials:
P~c n(x')
=
•
I (-l)iC£C£+j x'(x' (x' -_J + 1), n(n-l)···(n-]+1) 1) ...
i=o
x'
= X - Xmin
h Xmax' Xmin
h _
Xmax -
-
Xmin
n
'
are the maximal and minimal values of X;,
C~c
=
I
y,P~c,n(x;),
sk
=
i:
Ft,n(x;).
i=O
i=O
The estimates for the variances of the coefficients b1c are determined by the formula -Smin ---·
n- m
sk
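The analytic definition of P_{k,n}(x′) can be exercised directly; the sketch below (ours) evaluates it and confirms the orthogonality over the grid x′ = 0, 1, …, n that makes b_k = c_k/S_k work:

```python
from math import comb

# Orthogonal Chebyshev polynomial P_{k,n}(x') in the analytic (unscaled) form.
def cheb(k, n, xp):
    total = 0.0
    for j in range(k + 1):
        num, den = 1.0, 1.0
        for r in range(j):
            num *= (xp - r)          # x'(x'-1)...(x'-j+1)
            den *= (n - r)           # n(n-1)...(n-j+1)
        total += (-1) ** j * comb(k, j) * comb(k + j, j) * num / den
    return total

n = 9
for k in range(3):
    for l in range(k):
        dot = sum(cheb(k, n, z) * cheb(l, n, z) for z in range(n + 1))
        assert abs(dot) < 1e-9       # distinct polynomials are orthogonal
print(cheb(1, 9, 0), cheb(1, 9, 9))  # endpoints of P_{1,9}: 1.0 and -1.0
```

In this normalization P_{k,n}(0) = 1; the integer entries of Table 30T are these values multiplied by the scaling factors the text describes.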
The values of the Chebyshev polynomials multiplied by P_{k,n}(0), for k = 1 to 5, n = 5 to 20, x′ = 0, 1, …, n, are given in Table 30T. If the coefficients b_k are computed from Table 30T, then for the computation of the polynomials P_{k,n}(x′) in the formula for Q_m(x) it is also necessary to take the coefficient P_{k,n}(0) into account: either choose the ordinates of these polynomials from the same tables, or multiply the value of the polynomial obtained according to the preceding formula by P_{k,n}(0).

In some cases the approximating function is not a polynomial, but can be reduced to one by a change of variables. Examples of such changes are given in Table 83.

TABLE 83

  Ex. No.  Initial function                      Reduced form                      Change of variables
  1        y = Ae^{kx}                           z = a₀ + a₁x                      z = ln y; a₀ = ln A; a₁ = k
  2        y = Bx^a                              z = a₀ + a₁u                      z = log y; u = log x
  3        y = a₀ + a₁/x                         y = a₀ + a₁u                      u = 1/x
  4        y = a₀ + a₁x^n                        y = a₀ + a₁u                      u = x^n
  5        y = A exp{−(x − a)²/(2σ²)}            z = a₀ + a₁x + a₂x²               z = log y; a₀ = log A − (a²/(2σ²)) log e;
                                                                                   a₁ = (a/σ²) log e; a₂ = −(1/(2σ²)) log e
  6        y = a₀ + a₁/x + a₂/x² + ⋯             z = a₀ + a₁u + a₂u² + ⋯           u = 1/x
  7        y = a₀ + a₁x^n + a₂x^{2n} + ⋯         y = a₀ + a₁u + a₂u² + ⋯           u = x^n
  8        y = a₀x^{−m} + a₁x^n                  z = a₀ + a₁u                      z = yx^m; u = x^{m+n}
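Row 1 of Table 83 can be illustrated concretely; the sketch below (ours, with invented data) fits y = Ae^{kx} by passing to z = ln y and solving the resulting straight-line problem:

```python
import math

# Linearization of y = A * exp(k*x): z = ln y = ln A + k*x.
x = [0.0, 1.0, 2.0, 3.0]
y = [2.0 * math.exp(0.5 * xi) for xi in x]   # exact data, so the fit is exact

z = [math.log(yi) for yi in y]
N, sx, sz = len(x), sum(x), sum(z)
sxx = sum(xi * xi for xi in x)
sxz = sum(xi * zi for xi, zi in zip(x, z))

a1 = (N * sxz - sx * sz) / (N * sxx - sx ** 2)   # slope of z on x
a0 = (sz - a1 * sx) / N                           # intercept
A, k = math.exp(a0), a1
print(A, k)   # recovers 2.0 and 0.5 up to rounding
```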
If y is a function of several arguments z_k, then to obtain the linear approximating function

y = a₀z₀ + a₁z₁ + ⋯ + a_m z_m

corresponding to the values y_i and z_{ki} in (n + 1) experiments, it is necessary to find the solutions ā_k of the system of normal equations

Σ_{j=0}^m s_{kj} ā_j = β_k   (k = 0, 1, 2, …, m),

where

s_{kj} = Σ_{i=0}^n z_{ki} z_{ji}   (k, j = 0, 1, 2, …, m);
β_k = Σ_{i=0}^n y_i z_{ki}   (k = 0, 1, 2, …, m).
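The same elimination applies to several arguments; the sketch below is ours, with invented data chosen so that the exact coefficients are known:

```python
# Normal equations for y = a0*z0 + a1*z1 + ... + am*zm.
def fit_linear(Z, y):
    # Z[k][i] = value of argument z_k in experiment i.
    m1 = len(Z)
    s = [[sum(zk[i] * zj[i] for i in range(len(y))) for zj in Z] for zk in Z]
    beta = [sum(y[i] * zk[i] for i in range(len(y))) for zk in Z]
    A = [row[:] + [b] for row, b in zip(s, beta)]   # augmented matrix
    for c in range(m1):                              # Gauss-Jordan elimination
        p = max(range(c, m1), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(m1):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [u - f * v for u, v in zip(A[r], A[c])]
    return [A[k][m1] / A[k][k] for k in range(m1)]

z0 = [1.0] * 6
z1 = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
z2 = [1.0, 0.0, 2.0, 1.0, 3.0, 2.0]
y = [2.0 + 0.5 * u - 1.5 * v for u, v in zip(z1, z2)]   # exact plane
coefs = fit_linear([z0, z1, z2], y)
print(coefs)   # close to [2.0, 0.5, -1.5]
```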
If the values z_{ki} are known without error and the measurements of the y_i are equally accurate, the estimates of the variances of the ā_k are determined by the formula

σ̄²_{a_k} = σ̄² N_{k,k},

where σ̄² = S_min/(n − m) and N_{k,k} is the ratio of the cofactor of a diagonal element of the determinant of the system of normal equations to the value of the determinant itself. In solving the system without using the determinant, N_{k,k} will be the solution of this system if we replace β_k by 1 and all the other β_j by zeros. The role of the z_k can be played by any functions f_k(x) of some argument x. For example, if the function y, defined in the interval (0, 2π), is approximated by the trigonometric polynomial

y = λ₀ + Σ_{k=1}^m (λ_k cos kx + μ_k sin kx),

then for equidistant values x_i the estimates for the coefficients λ_k and μ_k are determined by the Bessel formulas:

λ₀ = [1/(n + 1)] Σ_{i=0}^n y_i;
λ_k = [2/(n + 1)] Σ_{i=0}^n y_i cos kx_i,
μ_k = [2/(n + 1)] Σ_{i=0}^n y_i sin kx_i   (k = 1, 2, …, m).
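For equidistant x_i the Bessel formulas are just discrete Fourier sums; the sketch below (ours) recovers known coefficients from a synthetic signal:

```python
import math

# Bessel formulas on an equidistant grid covering (0, 2*pi).
n = 23                                   # points x_0 .. x_n
xs = [2 * math.pi * i / (n + 1) for i in range(n + 1)]
ys = [1.0 + 3.0 * math.cos(x) - 2.0 * math.sin(2 * x) for x in xs]

lam0 = sum(ys) / (n + 1)
lam = [2.0 / (n + 1) * sum(y * math.cos(k * x) for x, y in zip(xs, ys))
       for k in (1, 2)]
mu = [2.0 / (n + 1) * sum(y * math.sin(k * x) for x, y in zip(xs, ys))
      for k in (1, 2)]
print(round(lam0, 6), [round(v, 6) for v in lam], [round(v, 6) for v in mu])
```

By the discrete orthogonality of the trigonometric system, the sums return the coefficients 1, 3 (for cos x) and −2 (for sin 2x) exactly, up to floating-point error.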
For a complex functional dependence and a sufficiently small range of variation of the arguments z_k, the computations are simplified if the function is expanded in a power series of the deviations of the arguments from their approximate values (for example, from their means).

If there are errors in the x_i and the y_i too, and these variables obey normal distributions, then, in the case of the linear dependence

y = a₀ + a₁x,

the estimate ā₁ is a root of the quadratic equation

σ_x²[v₁ − s₁r₁/(n + 1)] ā₁² + {σ_y²[s₂ − s₁²/(n + 1)] − σ_x²[r₂ − r₁²/(n + 1)]} ā₁ − σ_y²[v₁ − s₁r₁/(n + 1)] = 0,

and the estimate ā₀ is given by the formula

ā₀ = (r₁ − ā₁s₁)/(n + 1),

where σ_x², σ_y² are, respectively, the variances of the x_i and the y_i, and

s_k = Σ_{i=0}^n x_i^k,   r_k = Σ_{i=0}^n y_i^k   (k = 1, 2),   v₁ = Σ_{i=0}^n x_i y_i.

Of the two roots of the quadratic equation, we select the one that better fits the conditions of the problem.
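In central-sum form this is the classical errors-in-variables (Deming) straight line; the sketch below (ours, with invented data and variances) forms the quadratic in ā₁ and picks the root whose sign matches the cross sum. Treat it as an equivalent formulation under these assumptions rather than a transcription of the book's own notation.

```python
import math

# Errors-in-both-variables straight line with known error variances.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.2, 1.1, 1.9, 3.2, 3.9]
var_x, var_y = 0.04, 0.04
N = len(x)

xm, ym = sum(x) / N, sum(y) / N
sxx = sum((u - xm) ** 2 for u in x)
syy = sum((v - ym) ** 2 for v in y)
sxy = sum((u - xm) * (v - ym) for u, v in zip(x, y))

lam = var_y / var_x
# Quadratic in the slope: sxy*a1^2 - (syy - lam*sxx)*a1 - lam*sxy = 0;
# the root with the sign of sxy is the one that fits the data.
disc = (syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2
a1 = ((syy - lam * sxx) + math.sqrt(disc)) / (2 * sxy)
a0 = ym - a1 * xm
print(a0, a1)
```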
SOLUTIONS FOR TYPICAL EXAMPLES
Example 44.1 In studying the influence of temperature t on the motion w of a chronometer, the following results were obtained (Table 84).

TABLE 84

  t_i, °C:  5.0    9.6    16.0   19.6   24.4   29.8   34.4
  w_i:      2.60   2.01   1.34   1.08   0.94   1.06   1.25

If

w̄ = a₀ + a₁(t − 15) + a₂(t − 15)²

holds, where the w̄ are the computed values of w, determine the estimates for the coefficients a_k and the estimates for the standard deviations: σ of an individual measurement and σ̄_{a_k} of the coefficients ā_k. Establish the confidence intervals for the a_k and for the standard deviation σ characterizing the precision of an individual measurement, for a confidence level α = 0.90.

SOLUTION. We determine the normal equations for the coefficients ā_k and the quantities M_{k,k}. To decrease the sizes of the coefficients of the normal equations, we introduce the variable

x = (t − 15)/15
and seek the approximating function

w̄ = ā₀′ + ā₁′x + ā₂′x².

We then determine the coefficients s_k and v_k of the normal equations, as in the computations in Table 85.

TABLE 85

  i    x_i       x_i²      x_i³      x_i⁴      w_i     w_i x_i    w_i x_i²
  0    −0.667    0.4449    −0.2967   0.1979    2.60    −1.7342    1.1567
  1    −0.360    0.1296    −0.0467   0.0168    2.01    −0.7236    0.2605
  2    0.067     0.0045    0.0003    0.0000    1.34    0.0898     0.0060
  3    0.307     0.0942    0.0289    0.0089    1.08    0.3316     0.1017
  4    0.627     0.3931    0.2465    0.1546    0.94    0.5894     0.3695
  5    0.987     0.9742    0.9615    0.9490    1.06    1.0462     1.0327
  6    1.293     1.6718    2.1617    2.7949    1.25    1.6162     2.0898

We obtain:

s₀ = 7;   s₁ = 2.254;   s₂ = 3.712;   s₃ = 3.056;   s₄ = 4.122;
v₀ = 10.28;   v₁ = 1.215;   v₂ = 5.017.
The system of normal equations becomes

7ā₀′ + 2.254ā₁′ + 3.712ā₂′ = v₀,
2.254ā₀′ + 3.712ā₁′ + 3.056ā₂′ = v₁,
3.712ā₀′ + 3.056ā₁′ + 4.122ā₂′ = v₂.

Solving this system by elimination, without substituting the numerical values for the v_k, we obtain:

ā₀′ = 0.2869v₀ + 0.0986v₁ − 0.3314v₂,
ā₁′ = 0.0986v₀ + 0.7248v₁ − 0.6260v₂,
ā₂′ = −0.3314v₀ − 0.6260v₁ + 1.0051v₂.

Substituting the values of the v_k, we find:

ā₀′ = 1.404;   ā₁′ = −1.246;   ā₂′ = 0.8741.

The M_{k,k} are the coefficients of v_k in the equation for each ā_k′; that is,

M₀,₀ = 0.2869;   M₁,₁ = 0.7248;   M₂,₂ = 1.0051.

We compute the value S_min necessary for finding the estimates of the variance of an individual y_i and the variances of the coefficients ā_k; the computations are in Table 86.
TABLE 86

  i    ā₀′ + ā₁′x_i    ā₂′x_i²    w̄_i      ε_i       ε_i²
  0    2.2352         0.3889    2.624    −0.024    0.000576
  1    1.8527         0.1133    1.966    0.044     0.001936
  2    1.3207         0.0039    1.325    0.015     0.000225
  3    1.0217         0.0823    1.104    −0.024    0.000576
  4    0.6230         0.3436    0.967    −0.027    0.000729
  5    0.1745         0.8515    1.026    0.034     0.001156
  6    −0.2067        1.4613    1.255    −0.005    0.000025

  S_min = 0.005223
We obtain S_min = 0.005223. Furthermore, we find:

σ̄² = 0.001306;   σ̄ = 0.03614;
σ̄²_{a₀′} = M₀,₀σ̄² = 0.0003746;   σ̄_{a₀′} = 0.01936;
σ̄²_{a₁′} = 0.0009464;   σ̄_{a₁′} = 0.03076;
σ̄²_{a₂′} = 0.001312;   σ̄_{a₂′} = 0.03623.

Returning to the argument t, we obtain
w̄ = ā₀ + ā₁(t − 15) + ā₂(t − 15)²,

where

ā₀ = ā₀′ = 1.404;   ā₁ = ā₁′/15 = −0.08306;   ā₂ = ā₂′/15² = 0.003885,

and the corresponding estimates for the standard deviations σ̄_{a_k}:

σ̄_{a₀} = σ̄_{a₀′} = 0.01936;   σ̄_{a₁} = σ̄_{a₁′}/15;   σ̄_{a₂} = σ̄_{a₂′}/15² = 0.0001610.
We find the confidence intervals for the coefficients a_k for a confidence level α = 0.90. Using Table 16T, for this α and k = n − m = 4 degrees of freedom, we find γ = 2.132. The confidence intervals ā_k − γσ̄_{a_k} < a_k < ā_k + γσ̄_{a_k} become

1.363 < a₀ < 1.446,
−0.08581 < a₁ < −0.08031,
0.003542 < a₂ < 0.004228.

We find the confidence interval for the standard deviation σ, characterizing the precision of an individual measurement:

γ₁σ̄ < σ < γ₂σ̄,

where γ₁ and γ₂ are determined from Table 19T for k = 4, α = 0.90. We have γ₁ = 0.649, γ₂ = 2.37; hence

0.02345 < σ < 0.08565.
Similarly one can solve Problems 44.1 to 44.3, 44.5, 44.9, 44.10 and 44.13.

Example 44.2 The results of several equally accurate measurements of a quantity y, known to be a function of x, are given in Table 87.

TABLE 87

  x:  0.0     0.3     0.6     0.9     1.2     1.5     1.8      2.1      2.4      2.7
  y:  1.300   1.245   1.095   0.855   0.514   0.037   −0.600   −1.295   −1.767   −1.914
Select a fifth-degree polynomial that approximates the dependence of y on x in the interval [0, 2.7]. Use the orthogonal Chebyshev polynomials. Estimate the precision of each individual measurement as characterized by the standard deviation σ, and find the estimates of the standard deviations of the coefficients b_k of the Chebyshev polynomials P_{k,n}(x).

SOLUTION. We make the change of variable z = x/0.3 in order to make the increment of the argument unity. We compute the quantities S_k, c_k, b_k (k = 0, 1, …, 5) according to the formulas given in the introduction to this section. The tabulated values of the Chebyshev polynomials are taken from Table 30T. The computations are listed in Table 88.

TABLE 88

  z    P₀,₉(z)    P₁,₉(z)    P₂,₉(z)    P₃,₉(z)    P₄,₉(z)    P₅,₉(z)
  0    1          9          6          42         18         6
  1    1          7          2          −14        −22        −14
  2    1          5          −1         −35        −17        1
  3    1          3          −3         −31        3          11
  4    1          1          −4         −12        18         6
  5    1          −1         −4         12         18         −6
  6    1          −3         −3         31         3          −11
  7    1          −5         −1         35         −17        −1
  8    1          −7         2          14         −22        14
  9    1          −9         6          −42        18         −6

  S₀ = 10;  S₁ = 330;  S₂ = 132;  S₃ = 8580;  S₄ = 2860;  S₅ = 780
The computations, performed on a (keyboard) desk calculator with accumulation of the results, give:

S₀ = 10,   S₁ = 330,   S₂ = 132,   S₃ = 8580,   S₄ = 2860,   S₅ = 780;
c₀ = −0.530,   c₁ = 66.802,   c₂ = −7.497,   c₃ = −41.659,   c₄ = 14.515,   c₅ = −1.627.

For the estimates of the coefficients b_k we get:

b₀ = −0.0530,   b₁ = 0.20243,   b₂ = −0.05680,   b₃ = −0.00486,   b₄ = 0.00508,   b₅ = −0.00209.

Recall that if one uses the tabulated values of the Chebyshev polynomials, the formula for the required fifth-degree polynomial has the form

ȳ = b₀P₀,₉(z) + b₁P₁,₉(z) + b₂P₂,₉(z) + b₃P₃,₉(z) + b₄P₄,₉(z) + b₅P₅,₉(z).

However, if one uses the analytic formulas for the calculation of the Chebyshev polynomials, then the coefficients b_k should be replaced by b_k′ = b_k P_{k,n}(0), where P_{k,n}(0) is the tabulated value of P_{k,n}(z) for z = 0.

We compute the estimate

σ̄² = S_min/(n − m),
where we use the tabulated values of the Chebyshev polynomials from Table 88 for finding the values ȳ_i. The computation of S_min is indicated in Table 89.

TABLE 89

  i    x_i    z_i    y_i       ȳ_i       ε_i       ε_i²
  0    0.0    0      1.300    1.310    −0.010    0.000100
  1    0.3    1      1.245    1.236    0.009     0.000081
  2    0.6    2      1.095    1.098    −0.003    0.000009
  3    0.9    3      0.855    0.868    −0.013    0.000169
  4    1.2    4      0.514    0.514    0.000     0.000000
  5    1.5    5      0.037    0.017    0.020     0.000400
  6    1.8    6      −0.600   −0.602   0.002     0.000004
  7    2.1    7      −1.295   −1.263   −0.032    0.001024
  8    2.4    8      −1.767   −1.793   0.026     0.000676
  9    2.7    9      −1.914   −1.908   −0.006    0.000036

  S_min = 0.002499

We obtain:

S_min = 0.002499,   σ̄ = √[S_min/(n − m)] = 0.02503.
Next, according to the formula

σ̄_{b_k} = √{S_min/[(n − m)S_k]},

we find:

σ̄_{b₀} = 0.007917;   σ̄_{b₁} = 0.001378;   σ̄_{b₂} = 0.002179;
σ̄_{b₃} = 0.0002702;   σ̄_{b₄} = 0.0004680;   σ̄_{b₅} = 0.0008947.
Problems 44.4, 44.6 and 44.12 can be solved by following this solution.

Example 44.3 The readings of an aneroid barometer A and a mercury barometer B for different temperatures t are given in Table 90.

TABLE 90

  i    t_i, °C    A, mm    B, mm
  0    10.0       749.0    744.4
  1    6.2        746.1    741.3
  2    6.3        756.6    752.7
  3    5.3        758.9    754.7
  4    4.8        751.7    747.9
  5    3.8        757.5    754.0
  6    17.1       752.4    747.8
  7    22.2       752.5    748.6
  8    20.8       752.2    747.7
  9    21.0       759.5    755.6

If the dependence of B on t and A has the form

B = A + a₀ + a₁t + a₂(760 − A),
find estimates of the coefficients a_k, and construct the confidence intervals for the coefficients a_k and for the standard deviation σ of the errors in measuring B, for a confidence level α = 0.90.

SOLUTION. Let us use the notations z₀ = 1, z₁ = t, z₂ = 760 − A, y = B − A. Then the required formula becomes

y = a₀z₀ + a₁z₁ + a₂z₂.

The initial data in these notations are represented in Table 91.

TABLE 91

  i    z₀    z₁      z₂      y       ā₀ + ā₁z₁    ā₂z₂      ȳ        |ε|     ε²
  0    1     10.0    11.0    −4.6    −3.725      −0.739    −4.46    0.14    0.0196
  1    1     6.2     13.9    −4.8    −3.686      −0.934    −4.62    0.18    0.0324
  2    1     6.3     3.4     −3.9    −3.687      −0.228    −3.92    0.02    0.0004
  3    1     5.3     1.1     −4.2    −3.676      −0.074    −3.75    0.45    0.2025
  4    1     4.8     8.3     −3.8    −3.671      −0.558    −4.23    0.43    0.1849
  5    1     3.8     2.5     −3.5    −3.661      −0.168    −3.83    0.33    0.1089
  6    1     17.1    7.6     −4.6    −3.799      −0.511    −4.31    0.29    0.0841
  7    1     22.2    7.5     −3.9    −3.852      −0.504    −4.36    0.46    0.2116
  8    1     20.8    7.8     −4.5    −3.838      −0.524    −4.36    0.14    0.0196
  9    1     21.0    0.5     −3.9    −3.840      −0.034    −3.87    0.03    0.0009

  S_min = 0.8649
We determine the values s_kj = Σ_{i=0}^9 z_{ki}z_{ji} and β_k = Σ_{i=0}^9 y_i z_{ki} (k, j = 0, 1, 2):

s₀₀ = 10;   s₀₁ = s₁₀ = 117.5;   s₀₂ = s₂₀ = 63.6;
s₁₁ = 1902.59;   s₁₂ = s₂₁ = 741.97;   s₂₂ = 577.22;
β₀ = −41.7;   β₁ = −494.87;   β₂ = −276.75.
We write the system of normal equations, but for the time being we do not replace the β_k by their numerical values:

10ā₀ + 117.5ā₁ + 63.6ā₂ = β₀,
117.5ā₀ + 1902.59ā₁ + 741.97ā₂ = β₁,
63.6ā₀ + 741.97ā₁ + 577.22ā₂ = β₂.

Solving this system by elimination, we find:

ā₀ = 0.6076β₀ − 0.02289β₁ − 0.03754β₂,
ā₁ = −0.02289β₀ + 0.001916β₁ + 0.0000591β₂,
ā₂ = −0.03754β₀ + 0.0000591β₁ + 0.005792β₂.

Setting the numerical values of the β_k in these expressions, we find the ā_k; the coefficients of β_k in the expression for ā_k are the values of N_{k,k}:

ā₀ = −3.621;   ā₁ = −0.01041;   ā₂ = −0.06719;
N₀,₀ = 0.6076;   N₁,₁ = 0.001916;   N₂,₂ = 0.005792.
Furthermore, we find:

S_min = 0.8649 (see Table 91);   σ̄² = S_min/(n − m) = 0.12356;   σ̄ = 0.3515;
σ̄²_{a₀} = 0.07508;   σ̄_{a₀} = 0.274;
σ̄²_{a₁} = 0.0002368;   σ̄_{a₁} = 0.0154;
σ̄²_{a₂} = 0.0007156;   σ̄_{a₂} = 0.0268.
We construct the confidence intervals for the coefficients a_k and for the standard deviation σ, which determines the accuracy of an individual measurement, by using Student's distribution for the a_k (see Table 16T) and the chi-square distribution for σ (see Table 19T). The number of degrees of freedom is k = n − m = 7 and the confidence level is α = 0.90. We find: γ = 1.897, γ₁ = 0.705, γ₂ = 1.797. The confidence intervals for the a_k become

−4.141 < a₀ < −3.101,
−0.0396 < a₁ < 0.0188,
−0.1180 < a₂ < −0.0164,

and for the standard deviation σ,

0.2478 < σ < 0.6316.

Example 44.4 Table 92 contains the values x_i, y_i and the "weights" p_i² that determine the accuracy in measuring y_i for a given value x_i.
TABLE 92

  i    x_i     y_i      p_i²
  0    1.5     6.20     0.5
  1    1.1     3.45     1.0
  2    0.7     2.00     1.0
  3    0.3     1.80     1.0
  4    −0.1    2.40     1.0
  5    −0.5    4.55     1.0
  6    −1.0    8.85     1.0
  7    −1.5    15.70    0.5
  8    −2.0    24.40    0.25

If y is a second-degree polynomial in x,

y = a₀ + a₁x + a₂x²,

find the estimates for the variances of individual measurements of the y_i and the variances of the coefficients a_k (k = 0, 1, 2). Construct the confidence limits for the unknown true relation y = F(x) at a confidence level α = 0.90.
SOLUTION. We compute the quantities s~ and v~ for the system of normal equations but consider the "weight" of each measurement. The computations are given in Table 93.
TABLE 93

i | pi   | xi   | xi²  | xi³    | xi⁴     | yi    | yi·xi   | yi·xi²
0 | 0.50 | 1.5  | 2.25 | 3.375  | 5.0625  | 6.20  | 9.300   | 13.950
1 | 1.00 | 1.1  | 1.21 | 1.331  | 1.4641  | 3.45  | 3.795   | 4.174
2 | 1.00 | 0.7  | 0.49 | 0.343  | 0.2401  | 2.00  | 1.400   | 0.980
3 | 1.00 | 0.3  | 0.09 | 0.027  | 0.0081  | 1.80  | 0.540   | 0.162
4 | 1.00 | −0.1 | 0.01 | −0.001 | 0.0001  | 2.40  | −0.240  | 0.024
5 | 1.00 | −0.5 | 0.25 | −0.125 | 0.0625  | 4.55  | −2.275  | 1.138
6 | 1.00 | −1.0 | 1.00 | −1.000 | 1.0000  | 8.85  | −8.850  | 8.850
7 | 0.50 | −1.5 | 2.25 | −3.375 | 5.0625  | 15.70 | −23.550 | 35.325
8 | 0.25 | −2.0 | 4.00 | −8.000 | 16.0000 | 24.40 | −48.800 | 97.600
We obtain:

s0 = 7.250;  s1 = 0;  s2 = 6.300;  s3 = −1.425;  s4 = 11.837;
v0 = 40.100;  v1 = −24.955;  v2 = 64.366.
We write the system of normal equations:

7.250 ã0 + 0 · ã1 + 6.300 ã2 = 40.100,
0 · ã0 + 6.300 ã1 − 1.425 ã2 = −24.955,
6.300 ã0 − 1.425 ã1 + 11.837 ã2 = 64.366.
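As a numerical cross-check, the weighted normal equations can be assembled and solved directly from the data of Table 92. The following sketch (ours, not the book's; variable names are assumptions) reproduces the sums sk, vk and the coefficient estimates:

```python
# Weighted least-squares fit of y = a0 + a1*x + a2*x^2 (Example 44.4).
# Data from Table 92: abscissas x, ordinates y, and "weights" p.
x = [1.5, 1.1, 0.7, 0.3, -0.1, -0.5, -1.0, -1.5, -2.0]
y = [6.20, 3.45, 2.00, 1.80, 2.40, 4.55, 8.85, 15.70, 24.40]
p = [0.5, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.25]

# Weighted power sums s_k = sum p*x^k and moments v_k = sum p*y*x^k.
s = [sum(pi * xi**k for pi, xi in zip(p, x)) for k in range(5)]
v = [sum(pi * yi * xi**k for pi, xi, yi in zip(p, x, y)) for k in range(3)]

# Normal equations: sum_j s[k+j] * a_j = v[k], k = 0, 1, 2.
A = [[s[k + j] for j in range(3)] for k in range(3)]
b = list(v)

# Solve the 3x3 system by Gaussian elimination with partial pivoting.
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, 3):
        f = A[r][col] / A[col][col]
        for c in range(col, 3):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]
a = [0.0, 0.0, 0.0]
for r in (2, 1, 0):
    a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, 3))) / A[r][r]

print(a)  # approximately [2.096, -3.067, 3.953]
```

The small differences in the third decimal relative to the book come from the book's rounding of Δ and the cofactors.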
We find the numerical value of the determinant Δ of the system and the cofactors Dkj of the elements dkj = s(k+j) of this determinant:

Δ = 275.87;  D00 = 72.54;  D11 = 46.12;  D22 = 45.68.

We compute the estimates of the coefficients ãk and get

ã0 = 2.096;  ã1 = −3.068;  ã2 = 3.955.

We find Smin by performing the computations given in Table 94:
TABLE 94

i | yi    | ã0 + ã1·xi | ã2·xi² | ỹi     | εi     | εi²
0 | 6.20  | −2.5044    | 8.8945  | 6.390  | −0.190 | 0.0361
1 | 3.45  | −1.2775    | 4.7833  | 3.506  | −0.056 | 0.0031
2 | 2.00  | −0.0507    | 1.9370  | 1.886  | 0.114  | 0.0130
3 | 1.80  | 1.1762     | 0.3558  | 1.532  | 0.268  | 0.0718
4 | 2.40  | 2.4030     | 0.0395  | 2.442  | −0.042 | 0.0018
5 | 4.55  | 3.6298     | 0.9883  | 4.618  | −0.068 | 0.0046
6 | 8.85  | 5.1634     | 3.9531  | 9.116  | −0.266 | 0.0708
7 | 15.70 | 6.6970     | 8.8945  | 15.592 | 0.108  | 0.0117
8 | 24.40 | 8.2305     | 15.8124 | 24.043 | 0.357  | 0.1274

Smin = Σ pi εi² = 0.2208
We compute the estimates of the variances of individual measurements σ̃i² by the formula

σ̃i² = Smin / [(n − m) pi],

and obtain (here n − m = 6, Smin/(n − m) = 0.0368):

σ̃0² = σ̃7² = 0.0368/0.5 = 0.0736;
σ̃1² = σ̃2² = ... = σ̃6² = 0.0368;
σ̃8² = 0.0368/0.25 = 0.1472.
The estimates of the variances of the coefficients ãk and of their covariances are:

σ̃²a0 = 0.009336;  σ̃²a1 = 0.005936;  σ̃²a2 = 0.005879;
K̃a0,a1 = −0.001156;  K̃a0,a2 = −0.005108;  K̃a1,a2 = 0.001329.

We calculate the estimate of the variance σ̃²ỹ(x) of ỹ by the formula

σ̃²ỹ(x) = σ̃²a0 + σ̃²a1 x² + σ̃²a2 x⁴ + 2K̃a0,a1 x + 2K̃a0,a2 x² + 2K̃a1,a2 x³,

or

σ̃²ỹ(x) = 10⁻⁵ (933.6 − 231.2x − 428.0x² + 265.8x³ + 587.9x⁴).
The values σ̃²ỹ(xi) for all xi are calculated in Table 95. We construct the confidence limits for the unknown true relation y = F(x):

ỹi − γ σ̃ỹ(xi) < y < ỹi + γ σ̃ỹ(xi),

where γ is determined from Table 16T for α = 0.90 and k = n − m = 6 degrees of freedom: γ = 1.943. The confidence limits for y are computed as in Table 95. Similarly one can solve Problems 44.7, 44.8 and 44.11.
TABLE 95

i | xi   | 933.6 − 231.2xi | −428.0xi² | 265.8xi³ | 587.9xi⁴ | σ̃²ỹ(xi) | σ̃ỹ(xi) | ỹi     | ỹi − γσ̃ỹ(xi) | ỹi + γσ̃ỹ(xi)
0 | 1.5  | 586.8  | −963.0  | 897.1  | 2976.2 | 0.03497 | 0.187 | 6.390  | 6.023  | 6.753
1 | 1.1  | 679.3  | −517.9  | 353.8  | 860.7  | 0.01375 | 0.117 | 3.506  | 3.279  | 3.733
2 | 0.7  | 771.8  | −209.7  | 91.2   | 141.2  | 0.00794 | 0.088 | 1.886  | 1.715  | 2.057
3 | 0.3  | 864.2  | −38.5   | 7.2    | 4.8    | 0.00838 | 0.092 | 1.532  | 1.353  | 1.711
4 | −0.1 | 956.7  | −4.3    | 0.3    | 0.1    | 0.00953 | 0.098 | 2.442  | 2.252  | 2.632
5 | −0.5 | 1049.2 | −107.0  | 33.2   | 36.7   | 0.01012 | 0.101 | 4.618  | 4.422  | 4.814
6 | −1.0 | 1164.8 | −428.0  | 265.8  | 587.9  | 0.01590 | 0.126 | 9.116  | 8.871  | 9.361
7 | −1.5 | 1280.4 | −963.0  | 897.1  | 2976.2 | 0.04197 | 0.205 | 15.592 | 15.194 | 15.990
8 | −2.0 | 1396.0 | −1712.0 | 2126.4 | 9406.4 | 0.11217 | 0.335 | 24.043 | 23.392 | 24.694

(The four middle columns are the terms of 10⁵ σ̃²ỹ(xi).)
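The tabulated variance of the fitted ordinate can be re-evaluated directly from the polynomial above. A short sketch (ours, not the book's) that reproduces the rows of Table 95 with x > 0:

```python
# Estimate of the variance of the fitted ordinate (Example 44.4):
# sigma2(x) = 1e-5 * (933.6 - 231.2x - 428.0x^2 + 265.8x^3 + 587.9x^4).
def sigma2(x):
    return 1e-5 * (933.6 - 231.2 * x - 428.0 * x**2
                   + 265.8 * x**3 + 587.9 * x**4)

gamma = 1.943                 # Student coefficient for alpha = 0.90, k = 6
y_fit = 6.390                 # fitted value at x = 1.5 (from Table 94)
half_width = gamma * sigma2(1.5) ** 0.5
print(round(sigma2(1.5), 5), round(y_fit - half_width, 3), round(y_fit + half_width, 3))
```

The lower confidence limit comes out 6.027 rather than the tabulated 6.023 because the book rounds σ̃ỹ before multiplying by γ.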
Example 44.5 The values of the electric resistance ρ of molybdenum depend on the temperature T °K as shown in Table 96.

TABLE 96

T, °K | ρ, micro-ohm·cm | T, °K | ρ, micro-ohm·cm
2289  | 61.97 | 1489 | 37.72
2132  | 57.32 | 1286 | 32.09
1988  | 52.70 | 1178 | 28.94
1830  | 47.92 |      |

If ρ is linearly dependent on T, ρ = a0 + a1 T, determine the coefficients a0 and a1 by the method of least squares. The errors in measurements of ρ and T are specified by the standard deviations σρ = 0.8 and σT = 15°, respectively. Find the maximal deviation of the calculated values of ρ from the experimental ones.
SOLUTION. We calculate the quantities sk, rk (k = 1, 2) and v1 as shown in Table 97.

TABLE 97

i | Ti   | Ti²·10⁻² | ρi    | ρi²    | Ti ρi·10⁻¹ | ρ̃i    | εi
0 | 2289 | 52,395   | 61.97 | 3840.3 | 14,185     | 61.82 | 0.15
1 | 2132 | 45,454   | 57.32 | 3285.6 | 12,221     | 57.15 | 0.17
2 | 1988 | 39,521   | 52.70 | 2777.3 | 10,477     | 52.86 | −0.16
3 | 1830 | 33,489   | 47.92 | 2296.3 | 8,769      | 48.15 | −0.23
4 | 1489 | 22,171   | 37.72 | 1422.8 | 5,617      | 38.00 | −0.28
5 | 1286 | 16,538   | 32.09 | 1029.8 | 4,127      | 31.95 | 0.14
6 | 1178 | 13,877   | 28.94 | 837.5  | 3,409      | 28.73 | 0.21

s1 = 12,192;  s2 = 22,344·10³;  r1 = 318.66;  r2 = 15,490;  v1 = 58,805·10.
We obtain: s1 = 12,192; s2 = 22,344·10³; r1 = 318.66; r2 = 15,490; v1 = 58,805·10. We write the quadratic equation for the coefficient ã1:

ã1² + { [s1² − (n + 1)s2] − (σT²/σρ²)[r1² − (n + 1)r2] } / { (σT²/σρ²)[s1 r1 − (n + 1)v1] } · ã1 − σρ²/σT² = 0,

which, after the substitution of the numerical values, becomes

ã1² + 0.065708 ã1 − 0.0028444 = 0.

Solving this equation, we find two values for ã1:

ã11 = 0.029786;  ã12 = −0.095494.

Obviously, the negative root ã12 is extraneous, since the data contained in Table 97 show that ρ increases when T increases. Consequently, ã1 = 0.029786. We determine the coefficient ã0 by the formula

ã0 = (r1 − ã1 s1)/(n + 1) = −6.3558.

We calculate the values of ρ̃ in Table 97, where ρ̃ are the computed values of the quantity ρ:

ρ̃ = −6.3558 + 0.029786 T.

From the data of Table 97 we find that |εmax| = 0.28. One can solve Problem 44.15 similarly.
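The quadratic in ã1 can be formed and solved numerically from the raw data of Table 96. The sketch below is ours, not the book's; the small differences in the last digits (ã1 ≈ 0.02977 instead of 0.029786, ã0 ≈ −6.32 instead of −6.3558) come from the book's rounding of the tabulated sums s2 and v1:

```python
import math

# Example 44.5: fit rho = a0 + a1*T when both rho and T carry errors
# (sigma_rho = 0.8, sigma_T = 15), which leads to a quadratic for a1.
T = [2289, 2132, 1988, 1830, 1489, 1286, 1178]
rho = [61.97, 57.32, 52.70, 47.92, 37.72, 32.09, 28.94]
n1 = len(T)                       # this is (n + 1) in the book's notation
s1, s2 = sum(T), sum(t * t for t in T)
r1, r2 = sum(rho), sum(r * r for r in rho)
v1 = sum(t * r for t, r in zip(T, rho))
lam = 15.0**2 / 0.8**2            # sigma_T^2 / sigma_rho^2

# a1^2 + q*a1 - sigma_rho^2/sigma_T^2 = 0
q = ((s1 * s1 - n1 * s2) - lam * (r1 * r1 - n1 * r2)) / (lam * (s1 * r1 - n1 * v1))
a1 = (-q + math.sqrt(q * q + 4.0 / lam)) / 2.0   # keep the positive root
a0 = (r1 - a1 * s1) / n1
print(a1, a0)
```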
PROBLEMS

44.1 The results of several equally accurate measurements of the depth h of penetration of a body into a barrier for different values of its specific energy E (that is, the energy per unit area) are given in Table 98.

TABLE 98

i | Ei  | hi | i | Ei  | hi | i  | Ei  | hi
0 | 41  | 4  | 5 | 139 | 20 | 9  | 241 | 30
1 | 50  | 8  | 6 | 154 | 19 | 10 | 250 | 31
2 | 81  | 10 | 7 | 180 | 23 | 11 | 269 | 36
3 | 104 | 14 | 8 | 208 | 26 | 12 | 301 | 37
4 | 120 | 16 |   |     |    |    |     |

Select a linear combination of the form h = a0 + a1 E.
Determine the estimates σ̃²ak of the variances of the coefficients ak and the estimate σ̃² of the variance determining the accuracy of an individual measurement.

44.2 Solve the preceding problem by shifting the origin of E to the arithmetic mean of E and the origin of h to a point close to the expectation of h, thereby simplifying the computations.

44.3 The height h of a body in free fall at time t is determined by the formula h = a0 + a1t + a2t², where a0 is the height at t = 0, a1 is the initial velocity of the body and a2 is half the acceleration of gravity g. Determine the estimates of the coefficients a0, a1, a2 and estimate the accuracy of determination of the acceleration of gravity by the indicated method, using the series of equally accurate measurements whose results appear in Table 99.

TABLE 99

t, sec. | h, cm. | t, sec. | h, cm.
1/30 | 11.86 | 8/30  | 61.49
2/30 | 15.67 | 9/30  | 72.90
3/30 | 20.60 | 10/30 | 85.44
4/30 | 26.69 | 11/30 | 99.08
5/30 | 33.71 | 12/30 | 113.77
6/30 | 41.93 | 13/30 | 129.54
7/30 | 51.13 | 14/30 | 146.48
44.4 Solve the preceding problem by using (the orthogonal) Chebyshev polynomials.

44.5 Several equally accurate measurements of a quantity y at equally spaced values of the argument x give the results appearing in Table 100.

TABLE 100

x | −3    | −2    | −1   | 0    | 1    | 2    | 3
y | −0.71 | −0.01 | 0.51 | 0.82 | 0.88 | 0.81 | 0.49
If y is quite accurately approximated by the second-degree polynomial y = a0 + a1x + a2x², determine the estimates of the coefficients ãk, the variance σ̃² of an individual measurement and the variances σ̃²ak of the coefficients ãk.

44.6 The amount of wear of a cutter is determined by its thickness y (in millimeters) as a function of operating time t (in hours). The results are given in Table 101.
TABLE 101

t | y    | t  | y    | t  | y
0 | 30.0 | 6  | 27.5 | 12 | 26.1
1 | 29.1 | 7  | 27.2 | 13 | 25.7
2 | 28.4 | 8  | 27.0 | 14 | 25.3
3 | 28.1 | 9  | 26.8 | 15 | 24.8
4 | 28.0 | 10 | 26.5 | 16 | 24.0
5 | 27.7 | 11 | 26.3 |    |
Using (the orthogonal) Chebyshev polynomials, express y first as a first-degree and then as a third-degree polynomial of t. Considering that both representations are valid, estimate in each case the magnitude of the variance of an individual measurement and construct the confidence intervals for the standard deviation σ for a confidence level α = 0.90.

44.7 The values of the compression of a steel bar xi under a load yi and the values of the variances σ̃i², which determine the accuracy in measurements of yi, are given in Table 102.
TABLE 102

i       | 0     | 1     | 2     | 3     | 4
xi, μ   | 5     | 10    | 20    | 40    | 60
yi, kg. | 51.33 | 78.00 | 144.3 | 263.6 | 375.2
σ̃i²     | 82.3  | 25.0  | 49.3  | 51.3  | 46.7

Find the linear dependence y = a0 + a1x associated with Hooke's law. Construct the confidence intervals for the coefficients ak (k = 0, 1) and also the confidence limits for the unknown correct value of the load for x ranging from 5 to 60 μ if the confidence level is α = 0.90. The "weights" of the measurements corresponding to each value xi of the compression are taken inversely proportional to σ̃i².

44.8 Table 103 contains the average values ȳi corresponding to the values xi of the argument and also the number ni of measurements of y for x = xi.
TABLE 103

i | xi | ȳi   | ni | i | xi | ȳi   | ni
0 | 1  | 0.10 | 13 | 3 | 4  | 0.32 | 11
1 | 2  | 0.19 | 21 | 4 | 5  | 0.39 | 11
2 | 3  | 0.24 | 8  | 5 | 6  | 0.48 | 10
Construct the approximating second-degree polynomial and determine the estimates of the standard deviations σ̃ak of the coefficients ãk.

44.9 The net cost y (in dollars) of one copy of a book as a function of the number x (in thousands of copies) in a given printing is characterized by the data accumulated by the publisher over several years (Table 104).

TABLE 104

x | y     | x  | y    | x   | y
1 | 10.15 | 10 | 2.11 | 100 | 1.21
2 | 5.52  | 20 | 1.62 | 200 | 1.15
3 | 4.08  | 30 | 1.41 |     |
5 | 2.85  | 50 | 1.30 |     |
Select the coefficients for a hyperbolic dependence of the form y = a0 + a1/x and construct the confidence intervals for the coefficients ak (k = 0, 1) and also for the quantity y for different values of xi if the confidence level is α = 0.90.

44.10 A condenser is initially charged to a voltage U, after which it is discharged through a resistance. The voltage U is rounded off to the nearest multiple of 5 volts at different times. The results of several measurements appear in Table 105.
TABLE 105

i | ti, sec. | Ui, V | i | ti, sec. | Ui, V | i  | ti, sec. | Ui, V
0 | 0        | 100   | 4 | 4        | 30    | 8  | 8        | 10
1 | 1        | 75    | 5 | 5        | 20    | 9  | 9        | 5
2 | 2        | 55    | 6 | 6        | 15    | 10 | 10       | 5
3 | 3        | 40    | 7 | 7        | 10    |    |          |
It is known that the dependence of U on t has the form

U = U0 e^(−αt).

Select the coefficients U0 and α and construct the confidence intervals for U0 and α for a confidence level α = 0.90.

44.11 The following data obtained from an aerodynamical test of a model airplane (see Table 106) express the dependence of the angle of inclination δB (of the elevator ensuring rectilinear horizontal flight) on the velocity v of the air stream:

δB = a0 + a1/v².
TABLE 106

i | vi, m./sec. | δBi     | ni | i | vi, m./sec. | δBi     | ni
0 | 80          | −3°44′  | 8  | 5 | 140         | −0°38′  | 6
1 | 90          | −2°58′  | 12 | 6 | 160         | −0°07′  | 9
2 | 100         | −2°16′  | 11 | 7 | 180         | 0°10′   | 12
3 | 110         | −1°39′  | 9  | 8 | 200         | 0°35′   | 10
4 | 120         | −1°21′  | 14 |   |             |         |
Find the estimates of the coefficients a0 and a1 and their standard deviations. The ni denote the number of measurements for a given value of the velocity vi.

44.12 The results of several measurements of the dimension x of a lot of items are divided into intervals and the frequencies pi* in Table 107 are computed for them.

TABLE 107

Limits of the interval of x | pi*     | Limits of the interval of x | pi*     | Limits of the interval of x | pi*
50 to 60  | 0.00333 | 100 to 110 | 0.18667 | 140 to 150 | 0.06333
60 to 70  | 0.00667 | 110 to 120 | 0.20333 | 150 to 160 | 0.05333
70 to 80  | 0.03000 | 120 to 130 | 0.16333 | 160 to 170 | 0.01333
80 to 90  | 0.07667 | 130 to 140 | 0.08333 | 170 to 180 | 0.00667
90 to 100 | 0.11000 |            |         |            |
If the values of pi* refer to the midpoints xi of the intervals, select, by the method of least squares, the parameters for the relation

p = p0 exp{ −(x − x̃)² / (2σ̃²) }

that approximates the experimental distribution. Apply (the orthogonal) Chebyshev polynomials. Test whether the resulting dependence obeys a normal distribution law for x; that is, whether the following equation holds:

p0 = 10 / (√(2π) σ̃).
44.13 Table 108 contains the measured values of some quantity y as a function of time t (for a 20 hour period).

TABLE 108

t    | y   | t    | y  | t    | y
0.00 | −25 | 0.35 | 26 | 0.70 | −16
0.05 | −26 | 0.40 | 32 | 0.75 | 3
0.10 | −4  | 0.45 | 40 | 0.80 | −21
0.15 | 7   | 0.50 | 32 | 0.85 | −22
0.20 | 6   | 0.55 | 21 | 0.90 | −29
0.25 | 13  | 0.60 | 11 | 0.95 | −32
0.30 | −30 | 0.65 | −5 |      |
If y = a sin(ωt − φ), where ω = 360 degrees/24 hours, determine the estimates of the parameters a and φ. Find the maximal deviation of the measured quantity y from the approximating function ỹ.

Hint. First choose an approximate value φ′ and represent ỹ in the form ỹ = ã sin θ + b̃ cos θ, where θ = ωt − φ′ and b̃ = −ã(φ − φ′).
44.14 Table 109 contains the experimental data for the values of a function y = f(x) with period 2π.

TABLE 109

xi, degrees | yi   | xi, degrees | yi   | xi, degrees | yi    | xi, degrees | yi
15  | 1.31 | 105 | 2.12 | 195 | 2.89  | 285 | −2.30
30  | 1.84 | 120 | 2.38 | 210 | 2.01  | 300 | −2.22
45  | 2.33 | 135 | 2.98 | 225 | 0.92  | 315 | −1.57
60  | 2.21 | 150 | 3.44 | 240 | −0.24 | 330 | −1.03
75  | 2.24 | 165 | 3.51 | 255 | −1.23 | 345 | −0.01
90  | 2.39 | 180 | 3.33 | 270 | −1.98 | 360 | −0.82
Find the representation of this function by the polynomial

ỹ = ã0 + ã1 cos x + b̃1 sin x + ã2 cos 2x + b̃2 sin 2x

and the maximal deviation of the measured quantity y from the approximating function ỹ.
44.15 Table 110 contains the levels x and y of the water in a river at points A and B, respectively (B is 50 km. downstream from A). These levels are measured at noon during the first 15 days of April.

TABLE 110

i      | 0    | 1    | 2   | 3    | 4   | 5   | 6   | 7   | 8   | 9   | 10  | 11  | 12  | 13  | 14
xi, m. | 12.1 | 11.2 | 9.8 | 10.4 | 9.2 | 8.5 | 8.8 | 7.4 | 7.0 | 6.0 | 5.8 | 5.2 | 5.1 | 5.0 | 5.4
yi, m. | 10.5 | 9.3  | 8.3 | 9.6  | 8.6 | 7.1 | 6.9 | 6.6 | 6.4 | 6.5 | 5.8 | 5.0 | 4.6 | 4.4 | 3.9

If the relation y = a0 + a1x holds, determine the estimates of the coefficients ã0 and ã1 and the maximal deviation of yi from the calculated values ỹi, if it is known that the errors in measurements of x and y are characterized by the standard deviations σx = σy = 0.5 m.
45. STATISTICAL METHODS FOR QUALITY CONTROL

Basic Notions

Quality control methods permit us to regulate product quality by testing. A lot of items is sampled according to a scheme guaranteed to reject a good lot with probability α ("supplier's risk") and to accept a defective lot with probability β ("consumer's risk"). A lot is considered good if the parameter that characterizes its quality does not exceed a certain limiting value, and defective if this parameter has a value not smaller than another limiting value. This quality parameter can be the number l of defective items in the lot (with the limits l0 and l1 > l0), the average value of the parameter ξ or λ (with the limits ξ0 and ξ1 > ξ0, or λ0 and λ1 > λ0), or (for the homogeneity control of the production) the variance of the parameter in the lot (with the limits σ0² and σ1² > σ0²). In the case in which the quality of a lot improves with the increase of the parameter, the corresponding inequalities are reversed.

There are different methods of control: single sampling, double sampling and sequential analysis. The determination of the size of the sample and of the criteria of acceptance or rejection of a lot according to given values of α and β constitutes planning.

In the case of single sampling, one determines the sample size n0 and the acceptance number ν; if the value of the controlled parameter in the sample is ⩽ ν, the lot is accepted; if it is > ν, the lot is rejected.

If one controls the number (proportion) of defective items in a sample of
size n0, the total number of defective items in the lot being L and the size of the lot being N, then

α = P(m > ν | L = l0) = 1 − Σ (m=0 to ν) C(l0,m) C(N−l0, n0−m) / C(N, n0),
β = P(m ⩽ ν | L = l1) = Σ (m=0 to ν) C(l1,m) C(N−l1, n0−m) / C(N, n0),

where the values C(n,m) can be taken from Table 1T or computed with the aid of Table 2T. For n0 ⩽ 0.1N, it is possible to pass approximately to a binomial distribution law:

α = 1 − Σ (m=0 to ν) C(n0,m) p0^m (1 − p0)^(n0−m) = 1 − P(p0, n0, ν),
β = Σ (m=0 to ν) C(n0,m) p1^m (1 − p1)^(n0−m) = P(p1, n0, ν),

where p0 = l0/N, p1 = l1/N, and the values of P(p, n, d) can be taken from Table 4T or computed with the aid of Tables 2T and 3T. Moreover, if p0 < 0.1 and p1 < 0.1, then letting a0 = n0p0, a1 = n0p1 (passing to the Poisson distribution law), we obtain

α = Σ (m=ν+1 to ∞) (a0^m/m!) e^(−a0) = 1 − P(χ² ⩾ χq0²),
β = 1 − Σ (m=ν+1 to ∞) (a1^m/m!) e^(−a1) = P(χ² ⩾ χq1²),

where χq0² = 2a0, χq1² = 2a1; the sums Σ (m=ν+1 to ∞) (a^m/m!) e^(−a) are given in Table 7T, and the probabilities P(χ² ⩾ χq²) can be obtained from Table 17T for k = 2(ν + 1) degrees of freedom.

If 50 ⩽ n0 ⩽ 0.1N and n0p0 ⩾ 4, then one may use the more convenient formulas

α = 1/2 − (1/2) Φ̂( (ν − n0p0) / √(n0p0(1 − p0)) ),
β = 1/2 + (1/2) Φ̂( (ν − n0p1) / √(n0p1(1 − p1)) ),

where Φ̂(x) is the normalized Laplace function.
(If one controls by the average value x̄ of a normally distributed parameter with known variance σ², the analogous formulas hold with the arguments (ν − ξ0)/(σ/√n0) and (ν − ξ1)/(σ/√n0).) For ξ0 > ξ1, the lot is accepted if x̄ ⩾ ν and rejected if x̄ < ν, and in the formulas for α and β the minus sign is replaced by a plus sign.

If the controlled parameter has the probability density

f(x) = λ e^(−λx),

then

α = 1 − P(χ² ⩾ χq0²),  β = P(χ² ⩾ χq1²),

where χq0² = 2n0λ0ν, χq1² = 2n0λ1ν, and the probability P(χ² ⩾ χq²) is determined from Table 17T for k = 2n0 degrees of freedom. If n0 > 15, an approximate normal-distribution formula may also be used.
If one controls the product homogeneity and the quality parameter is normal, then

α = 1 − P(σ̃ ⩽ q0 σ0),  β = P(σ̃ ⩽ q1 σ1),

where q0 = ν/σ0, q1 = ν/σ1, and

σ̃² = (1/n0) Σ (i=1 to n0) (xi − x̄)²  if the expectation x̄ of the parameter is known, or
σ̃² = (1/(n0 − 1)) Σ (i=1 to n0) ( xi − (1/n0) Σ (j=1 to n0) xj )²  if x̄ is unknown;

the probabilities P(σ̃ ⩽ qσ) are calculated from Table 22T for k = n0 degrees of freedom if x̄ is known and for k = n0 − 1 if x̄ is unknown.

In the case of double sampling, one determines the sizes n1 of the first and n2 of the second samples and the acceptance numbers ν1, ν2, ν3 (usually ν1 < [n1/(n1 + n2)]ν3 < ν2). If in the first sample the controlled parameter is ⩽ ν1, the lot is accepted; if it is > ν2, the lot is rejected; in the other cases a second sample is taken. If the value of the controlled parameter found for the combined sample of size (n1 + n2) is ⩽ ν3, the lot is accepted; otherwise it is rejected. If one controls by the number of defective items in a sample, α and β are obtained from the hypergeometric distribution applied to the two samples.
As in the case of single sampling, in the presence of certain relations between the numbers n1, n2, N, l0, l1 an approximate passage is possible from the hypergeometric distribution to a binomial, normal or Poisson distribution law.
If one controls by the average value x̄ of the parameter in a sample, then for a normal distribution of the parameter of one item with given variance σ², in the particular case n1 = n2 = n, ν1 = ν3 = ν, ν2 = ∞, we have

α = 1 − p1 − 0.5(p2 − p1²),  β = p3 + 0.5(p4 − p3²),

where

p1 = 0.5 + 0.5 Φ̂( (ν − ξ0)/(σ/√n) ),   p2 = 0.5 + 0.5 Φ̂( (ν − ξ0)/(σ/√(2n)) ),
p3 = 0.5 + 0.5 Φ̂( (ν − ξ1)/(σ/√n) ),   p4 = 0.5 + 0.5 Φ̂( (ν − ξ1)/(σ/√(2n)) ).
For ξ0 > ξ1, the inequality signs appearing in the conditions of acceptance and rejection are reversed and, in the formulas for p1, p2, p3, p4, the plus sign appearing in front of the second term is replaced by a minus sign.

If one controls by x̄ and the probability density of the parameter X of one item is exponential, f(x) = λe^(−λx), with n1 = n2 = n, ν1 = ν3 = ν, ν2 = ∞, then

α = 1 − p1 − 0.5(p2 − p1²),  β = p3 + 0.5(p4 − p3²),

where

p1 = 1 − P(χ² ⩾ χq0²),  p2 = 1 − P(χ² ⩾ 2χq0²),
p3 = 1 − P(χ² ⩾ χq1²),  p4 = 1 − P(χ² ⩾ 2χq1²),

with χq0² = 2nλ0ν, χq1² = 2nλ1ν; the probabilities P(χ² ⩾ χq²) are computed according to Table 17T for k = 2n degrees of freedom (for p1 and p3) and k = 4n (for p2 and p4).
If one controls the homogeneity of the production when the controlled parameter is normally distributed, with n1 = n2 = n, ν1 = ν3 = ν, ν2 = ∞, then

α = 1 − p1 − 0.5(p2 − p1²),  β = p3 + 0.5(p4 − p3²),

where p1, p2, p3, p4 are determined from Table 22T for the entries q and k: q = q0 for p1 and p2, q = q1 for p3 and p4; for a known x̄, k = n for p1 and p3 and k = 2n for p2 and p4; for an unknown x̄, k = n − 1 for p1 and p3 and k = 2(n − 1) for p2 and p4.

In the sequential Wald analysis, for a variable sample size n and a random value of the controlled parameter in the sample, the likelihood ratio γ is computed and the control lasts until γ leaves the interval (B, A), where B = β/(1 − α) and A = (1 − β)/α. If γ ⩽ B, the lot is accepted; if γ ⩾ A, the lot is rejected; for B < γ < A the tests continue.

If one controls by the number m of defective items in a sample, then
γ = γ(n, m) = [ C(l1,m) C(N−l1, n−m) ] / [ C(l0,m) C(N−l0, n−m) ].

For n ⩽ 0.1N, a formula valid for a binomial distribution is useful:

γ(n, m) = [ p1^m (1 − p1)^(n−m) ] / [ p0^m (1 − p0)^(n−m) ],

where p0 = l0/N, p1 = l1/N.
In this case, the lot is accepted if m ⩽ h1 + nh3, rejected if m ⩾ h2 + nh3, and the tests continue if h1 + nh3 < m < h2 + nh3, where

h1 = log B / [ log(p1/p0) + log((1 − p0)/(1 − p1)) ],
h2 = log A / [ log(p1/p0) + log((1 − p0)/(1 − p1)) ],
h3 = log((1 − p0)/(1 − p1)) / [ log(p1/p0) + log((1 − p0)/(1 − p1)) ].

In Figure 37 the strip II gives the range of values of n and m for which the tests are continued, I being the acceptance region and III the rejection region. If n ⩽ 0.1N and p1 < 0.1, one may pass to the Poisson distribution with a0 = np0, a1 = np1. For the most part, the conditions for sequential control and the graphical method remain unchanged, but in the present case

h1 = log B / log(p1/p0),  h2 = log A / log(p1/p0),  h3 = 0.4343(p1 − p0) / log(p1/p0).
logPl Po If the binomial distribution law is acceptable, the expectation of the sample size is determined by the formulas M [n I Po ] -_ M [n I Pd
=
(1 - a) log B + a log A ' 1 Po logp 1 - (1 -Po) log 1 -Po Po - P1
f3 log B +
P1log~: -
(1 - ,8) log A
(1 - P1)
=
log~ ~:
m
FIGURE 37
The expectation of the sample size becomes maximal when the number of defective items in the lot is l = Nh3:

M[n]max = − log B · log A / [ log(p1/p0) · log((1 − p0)/(1 − p1)) ],

where p0 = l0/N, p1 = l1/N.
If one controls by the average value x̄ of the parameter in the sample and the parameter of one item is a normal random variable with known variance σ², the lot is accepted if nx̄ ⩽ h1 + nh3, rejected if nx̄ ⩾ h2 + nh3, and the tests are continued if h1 + nh3 < nx̄ < h2 + nh3, where

h1 = 2.303 [σ²/(ξ1 − ξ0)] log B,
h2 = 2.303 [σ²/(ξ1 − ξ0)] log A,
h3 = (ξ0 + ξ1)/2.

The method of control in the present case can also be represented graphically as in Figure 37 if nx̄ is used in place of m on the y-axis. For ξ0 > ξ1, we shall have h1 > 0, h2 < 0 and the inequalities in the acceptance and rejection conditions change their signs. The expected number of tests is determined by the formulas

M[n | ξ0] = [ h2 + (1 − α)(h1 − h2) ] / (ξ0 − h3),
M[n | ξ1] = [ h2 + β(h1 − h2) ] / (ξ1 − h3).
If the parameter of an individual item has the probability density f(x) = λe^(−λx), the lot is accepted if nx̄ ⩾ h1 + nh3, rejected if nx̄ ⩽ h2 + nh3, and the tests are continued if h2 + nh3 < nx̄ < h1 + nh3, where

h1 = −2.303 log B / (λ1 − λ0),
h2 = −2.303 log A / (λ1 − λ0),
h3 = 2.303 log(λ1/λ0) / (λ1 − λ0).
The graphical representation of the method of control differs from that in Figure 37 only in that here the region I represents rejection and III acceptance. The expected number of tests is computed by the formulas

M[n | λ0] = [ (1 − α) log B + α log A ] / [ log(λ1/λ0) − 0.4343(λ1 − λ0)/λ0 ],
M[n | λ1] = [ β log B + (1 − β) log A ] / [ log(λ1/λ0) − 0.4343(λ1 − λ0)/λ1 ],
M[n]max = − h1 h2 / h3².

If the production is checked for homogeneity (normal distribution law), then

γ = γ(n, σ̃) = (σ0/σ1)^n exp{ −(nσ̃²/2)(1/σ1² − 1/σ0²) }.

The lot is accepted (for a known x̄) if nσ̃² ⩽ h1 + nh3, rejected if nσ̃² ⩾ h2 + nh3, and the tests are continued if h1 + nh3 < nσ̃² < h2 + nh3, where

h1 = 4.606 log B / (1/σ0² − 1/σ1²),
h2 = 4.606 log A / (1/σ0² − 1/σ1²),
h3 = 2.303 log(σ1²/σ0²) / (1/σ0² − 1/σ1²).

The graphical representation is analogous to Figure 37 with the values of nσ̃² on the y-axis. If x̄ is unknown, then whenever n appears in the formulas it should be replaced by (n − 1). The expected number of tests is

M[n]max = − h1 h2 / (2h3²).

If the total number of defects of the items belonging to the sample is checked and the number of defects of one item obeys a Poisson law with parameter a, then all the preceding formulas are applicable for the Poisson distribution if we replace m by nx̄, p0 and p1 by a0 and a1, a0 and a1 by na0 and na1, χq0² by 2na0 and χq1² by 2na1, where n is the size of the sample. For n ⩾ 50, na ⩾ 4, it is possible to pass to a normal distribution.
To determine the probability that the number of tests is n < ng in a sequential analysis when α ≪ β or β ≪ α, one may apply Wald's distribution

P(y < yg) = Wc(yg) = √(c/(2π)) ∫ (0 to yg) y^(−3/2) exp{ −(c/2)(y + 1/y − 2) } dy,

where y is the ratio of the number of tests n to the expectation of n for some value of the controlled parameter of the lot (l, ξ, λ), yg = ng / M[n], and the parameter c of Wald's distribution is determined by the following formulas:

(a) for a binomial distribution of the proportion of defective product,

c = K | p log(p1/p0) − (1 − p) log((1 − p0)/(1 − p1)) | / [ p(1 − p) ( log(p1/p0) + log((1 − p0)/(1 − p1)) )² ],  p = l/N;

(b) for a normal distribution of the product parameter,

c = K | ξ − (ξ0 + ξ1)/2 | / (ξ1 − ξ0);

(c) for an exponential distribution of the product parameter,

c = K | 2.303 log(λ1/λ0) − (λ1 − λ0)/λ | / [ (λ1 − λ0)/λ ]²,

where

K = 2.303 |log B| if the selected value of the parameter is < h3 (α ≪ β);
K = 2.303 log A if the selected value of the parameter is > h3 (β ≪ α).
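The Wald density written above coincides with the inverse Gaussian density of unit mean, so it must integrate to 1 over (0, ∞) for any c > 0. A small numerical sketch (ours, not the book's) confirms the reconstructed normalization:

```python
import math

def wald_density(y, c):
    # w_c(y) = sqrt(c/(2*pi)) * y**(-3/2) * exp(-(c/2)*(y + 1/y - 2))
    return math.sqrt(c / (2 * math.pi)) * y**-1.5 * math.exp(-(c / 2) * (y + 1 / y - 2))

def wald_cdf(y_g, c, steps=200000):
    # crude trapezoidal integration of the density from ~0 up to y_g
    a, b = 1e-9, y_g
    h = (b - a) / steps
    total = 0.5 * (wald_density(a, c) + wald_density(b, c))
    total += sum(wald_density(a + i * h, c) for i in range(1, steps))
    return total * h

print(wald_cdf(50.0, 2.0))  # close to 1: the density is properly normalized
```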
A special case of control by the number of defective products arises in reliability tests of duration t, where the time of reliable operation is assumed to obey an exponential distribution law. In this case, the probability p that an item fails during time t is given by the formula p = 1 − e^(−λt). All the formulas of control for the proportion of defective products in the case of a binomial distribution remain valid if one replaces p0 by 1 − e^(−λ0 t) and p1 by 1 − e^(−λ1 t). If λt < 0.1, then it is possible to pass to a Poisson distribution if, in the corresponding formulas, one replaces a0 by nλ0t, a1 by nλ1t, χq0² by 2nλ0t and χq1² by 2nλ1t. The sequential analysis differs in the present case because, for a fixed number n0 of tested items, the testing time t is random. The lot is accepted if t ⩾ t1 + mt3, rejected if t ⩽ t2 + mt3, and the tests are continued if t2 + mt3 < t < t1 + mt3, where m is the number of failures during time t. To plot the graph, one represents m on the x-axis and t on the y-axis. The expectation of the testing time T for λt < 0.1 is determined by the formulas

M[T | λ0] = (tH/n0) M[n | p0],  M[T]max = (tH/n0) M[n]max,

where tH is a number chosen to simplify the computations and p0 = λ0tH, p1 = λ1tH. To determine the probability that the testing time T < tg when α ≪ β or β ≪ α, one applies Wald's distribution, in which one should set y = t/M[T | λ] and find the parameter c by the formula valid for a binomial distribution with the previously chosen value of tH.
SOLUTION FOR TYPICAL EXAMPLES
Example 45.1 A lot of N = 40 items is considered first grade if it contains at most l0 = 8 defective items. If the number of defective items exceeds l1 = 20, the lot is returned for repairs.

(a) Compute α and β for a single sampling of size n0 = 10 if the acceptance number is ν = 3; (b) find α and β for a double sampling with n1 = n2 = 5, ν1 = 0, ν2 = 2, ν3 = 3; (c) compare the efficiency of planning by the methods of single and double sampling according to the average number of items tested in 100 identical lots; (d) construct the sequential sampling plan for the α and β obtained in (a), and determine nmin for lots with L = 0 and L ≈ N.

SOLUTION.
(a) We compute α and β by the formulas

α = 1 − Σ (m=0 to 3) C(8,m) C(32, 10−m) / C(40, 10),
β = Σ (m=0 to 3) C(20,m) C(20, 10−m) / C(40, 10).

Using Table 1T for C(n,m), we find

α = 0.089,  β = 0.136.
(b) We compute α and β similarly from the hypergeometric distribution, applied to the first sample of n1 = 5 and, when 1 ⩽ m1 ⩽ 2, to the second sample of n2 = 5 as well (the lot being accepted if m1 = 0 or if m1 + m2 ⩽ 3), and obtain

α = 0.105,  β = 0.134.
(c) The probability that a first-grade lot will be accepted after the first sampling of five items in the case of double sampling is

P(m1 ⩽ ν1) = P(m1 = 0) = C(8,0) C(32,5) / C(40,5) = 0.306.

The expected number of lots accepted after the first sampling, out of 100 lots, is 100 · 0.306 = 30.6; for the remaining 69.4 lots a second sampling is necessary. The average number of items used in double sampling is therefore 30.6 · 5 + 69.4 · 10 = 847. In the method of single sampling, the number of items used is 100 · 10 = 1000. (In comparing the efficiency of the control methods, we have neglected the small differences between the values of α and β obtained by single and double sampling.)

(d) For α = 0.089 and β = 0.136 the plan of sequential analysis is the following:

B = β/(1 − α) = 0.149,  A = (1 − β)/α = 9.71,  log B = −0.826,  log A = 0.987.

To determine nmin when all the items of the lot are nondefective, we compute the successive values of log γ(n; 0) by the formulas

log γ(1; 0) = log(N − l1) − log(N − l0),
log γ(n + 1; 0) = log γ(n; 0) + log(N − l1 − n) − log(N − l0 − n).

We have:

log γ(1; 0) = −0.2041;  log γ(2; 0) = −0.4167;
log γ(3; 0) = −0.6386;  log γ(4; 0) = −0.8705;
log γ(5; 0) = −1.1136;  log γ(6; 0) = −1.3688;
log γ(7; 0) = −1.6377;  log γ(8; 0) = −1.9217.

Since the inequality log γ(n; 0) < log B = −0.826 is first satisfied for n = 4, it follows that nmin = 4.

For a lot consisting of defective items, n = m. We find log γ(1; 1) = 0.3979. For successive values of n, we make use of the formula

log γ(n + 1; m + 1) = log γ(n; m) + log(l1 − m) − log(l0 − m).

We obtain log γ(2; 2) = 0.8316 and log γ(3; 3) = 1.3087 > log A = 0.987; consequently, in this case nmin = 3.

Similarly one can solve Problem 45.1.
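The single-sampling risks of part (a) and the first-sample acceptance probability of part (c) can be verified with exact hypergeometric sums. A sketch (ours, not the book's), using `math.comb`:

```python
from math import comb

N, l0, l1, n0, nu = 40, 8, 20, 10, 3

def accept_prob(l, n, v):
    # P(m <= v) when drawing n items without replacement from a lot of
    # size N containing l defectives (hypergeometric distribution)
    return sum(comb(l, m) * comb(N - l, n - m) for m in range(v + 1)) / comb(N, n)

alpha = 1 - accept_prob(l0, n0, nu)   # supplier's risk
beta = accept_prob(l1, n0, nu)        # consumer's risk
p_first = comb(l0, 0) * comb(N - l0, 5) / comb(N, 5)  # accept after 1st sample of 5
print(round(alpha, 3), round(beta, 3), round(p_first, 3))  # 0.089 0.137 0.306
```

The exact β = 0.1367 rounds to 0.137; the book's 0.136 reflects its table look-up.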
Example 45.2 A large lot of tubes (N > 10,000) is checked. If the proportion of defective tubes is p ⩽ p0 = 0.02, the lot is considered good; if p ⩾ p1 = 0.10, the lot is considered defective. Using the binomial and Poisson distribution laws (confirm their applicability): (a) compute α and β for a single control with n0 = 47, ν = 2; (b) compute α and β for a double control with n1 = n2 = 25, ν1 = 0, ν2 = 2, ν3 = 2; (c) compare the efficiency of the single and double controls by the number of items tested per 100 lots; (d) construct the plan of sequential control, plot the graph, determine nmin for lots with p = 0 and p = 1, and compute the expectation of the number of tests in the case of sequential control.

SOLUTION.
(a) In the case of the binomial distribution,

α = 1 − Σ (m=0 to 2) C(47,m) 0.02^m 0.98^(47−m),
β = Σ (m=0 to 2) C(47,m) 0.10^m 0.90^(47−m).

Using Table 4T for the binomial distribution function and interpolating between n = 40 and n = 50, we get α = 0.0686, β = 0.1350. In the case of a Poisson distribution law, computing a0 = n0p0 = 0.94, a1 = n0p1 = 4.7, and using Table 7T (which contains the cumulative probabilities for a Poisson distribution, interpolating with respect to a), we find

α = 0.0698,  β = 0.159.
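Because the book reads α and β from interpolated tables, a direct computation is a useful cross-check. The sketch below (ours, not the book's) gives the exact binomial and Poisson values, which differ slightly from the interpolated ones quoted above:

```python
from math import comb, exp, factorial

n0, nu, p0, p1 = 47, 2, 0.02, 0.10

def binom_cdf(p, n, v):
    return sum(comb(n, m) * p**m * (1 - p) ** (n - m) for m in range(v + 1))

def poisson_cdf(a, v):
    return sum(a**m / factorial(m) for m in range(v + 1)) * exp(-a)

alpha_b = 1 - binom_cdf(p0, n0, nu)      # exact binomial supplier's risk
beta_b = binom_cdf(p1, n0, nu)           # exact binomial consumer's risk
alpha_p = 1 - poisson_cdf(n0 * p0, nu)   # Poisson approximation, a0 = 0.94
beta_p = poisson_cdf(n0 * p1, nu)        # Poisson approximation, a1 = 4.7
print(round(alpha_b, 4), round(beta_b, 4), round(alpha_p, 4), round(beta_p, 4))
# prints 0.0677 0.1383 0.0696 0.1523
```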
(b) For a binomial distribution law, using Tables 1T and 4T, we find

α = 1 − Σ (m1=0 to 2) C(25,m1) 0.02^m1 0.98^(25−m1) + Σ (m1=1 to 2) [ C(25,m1) 0.02^m1 0.98^(25−m1) ( 1 − Σ (m2=0 to 2−m1) C(25,m2) 0.02^m2 0.98^(25−m2) ) ] = 0.0704,

β = 0.90^25 + Σ (m1=1 to 2) [ C(25,m1) 0.10^m1 0.90^(25−m1) Σ (m2=0 to 2−m1) C(25,m2) 0.10^m2 0.90^(25−m2) ] = 0.1450.

In the case of a Poisson distribution law, using Tables 6T and 7T and computing a01 = a02 = 0.5, a11 = a12 = 2.5, we obtain

α = Σ (m1=3 to ∞) (0.5^m1/m1!) e^(−0.5) + Σ (m1=1 to 2) [ (0.5^m1/m1!) e^(−0.5) Σ (m2=3−m1 to ∞) (0.5^m2/m2!) e^(−0.5) ] = 0.0715,

and, computed similarly with a = 2.5, β = 0.162. The essential difference between the values of β computed with the aid of the binomial and Poisson distributions is explained by the large value of p1 = 0.10.

(c) The probability of acceptance of a good lot (p ⩽ 0.02) after the first sampling in the case of double control (we compare the results for the binomial distribution) is
v1 )
:(
=
P(m 1
=
0)
C8 5 0.02° · 0.98 25
=
=
0.6035.
The average number of good lots accepted after the first sampling from the total number of 100 lots is 100 · 0.6035 = 60.35. For the remaining 39.65 lots, a second sampling will be necessary. The average expenditure in tubes for a double control of 100 lots is equal to 60.35 · 25 + 39.65 · 50 ≈ 3491.
In a defective lot, the probability of rejection after the first sampling in the case of double control is

P(m1 > ν2) = P(m1 > 2) = 1 − Σ_{m1=0}^{2} C_25^m1 0.1^m1 0.9^(25−m1) = 0.4629.
The average number of lots rejected after the first sampling from a total of 100 lots is 100 · 0.4629 = 46.29. For the remaining 53.71 lots a second sampling will be necessary. The average expenditure in tubes for a double control of 100 lots will be 46.29 · 25 + 53.71 · 50 ≈ 3843.
For a single control, in all cases 100 · 50 = 5000 tubes will be consumed.
(d) For α = 0.0686, β = 0.1350, for a sequential control using a binomial distribution we get:

B = 0.1450, log B = −0.8388;    A = 12.61, log A = 1.1007.

Furthermore, h1 = −1.140, h2 = 1.496, h3 = 0.0503 (Figure 38). We find nmin for a good lot for p = 0:

nmin = −h1/h3 = 1.140/0.0503 = 22.7 ≈ 23;

for a defective lot when p = 1, nmin = h2 + nmin h3, whence

nmin = h2/(1 − h3) = 1.496/0.9497 = 1.5 ≈ 2.

We determine the average numbers of tests for different p:

M[n | 0.02] = 31.7;    M[n | 0.10] = 22.9;    M[n]max = 35.7.
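The boundary constants of the sequential plan follow from Wald's likelihood-ratio bounds; a sketch of that computation (the function name `sprt_lines` is ours), which reproduces h1, h2, h3 of this example:

```python
from math import log10

def sprt_lines(alpha, beta, p0, p1):
    """Wald sequential plan for a proportion: sampling continues while
    h1 + n*h3 < m < h2 + n*h3, where m defectives were found in n tests."""
    log_B = log10(beta / (1 - alpha))     # lower likelihood-ratio bound
    log_A = log10((1 - beta) / alpha)     # upper likelihood-ratio bound
    denom = log10(p1 / p0) + log10((1 - p0) / (1 - p1))
    h1 = log_B / denom
    h2 = log_A / denom
    h3 = log10((1 - p0) / (1 - p1)) / denom
    return h1, h2, h3

h1, h2, h3 = sprt_lines(0.0686, 0.1350, 0.02, 0.10)
n_min_good = -h1 / h3        # p = 0: every item good, accept boundary
n_min_bad = h2 / (1 - h3)    # p = 1: every item defective, reject boundary
```

With the example's α and β this gives h1 ≈ −1.140, h2 ≈ 1.496, h3 ≈ 0.0503, hence nmin ≈ 23 for a good lot and 2 for a defective one.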
METHODS OF DATA PROCESSING
FIGURE 38
Problems 45.2 to 45.5, 45.7, 45.8 and 45.10 can be solved by following this solution. Example 45.3 A large lot of resistors, for which the time of reliable operation obeys an exponential distribution, is subjected to reliability tests. If the failure parameter λ ≤ λ0 = 2·10⁻⁶ hours⁻¹, the lot is considered good; if λ ≥ λ1 = 1·10⁻⁵ hours⁻¹, the lot is considered defective. Assuming that λt0 < 0.1, where t0 is a fixed testing time for each item in a sample of size n0, determine for α = 0.005, β = 0.08 the value of n0 by the method of single sampling for different t0; find ν with the condition that t0 = 1000 hours, and also construct the plan of sequential control in the case n = n0 for t0 = 1000 hours. Compute tmin for a good lot and a defective one and M[T | λ], P(t < 1000), P(t < 500). SOLUTION. The size n0 of the sample and the acceptance number ν are determined by noting that λt0 < 0.1, which permits use of the Poisson distribution and, furthermore, permits passing from a Poisson distribution to a chi-square distribution. We compute the quotient λ0/λ1 = 0.2. Next, from Table 18T we find the values χ²_q0 for the entry quantities P(χ² ≥ χ²_q0) = 1 − α = 0.995 and k, and χ²_q1 for P(χ² ≥ χ²_q1) = β = 0.08 and k. By the method of sampling, we establish that for k = 15
χ²_q0 = 4.48,  χ²_q1 = 23.22,  χ²_q0/χ²_q1 = 0.1930;

for k = 16

χ²_q0 = 5.10,  χ²_q1 = 24.48,  χ²_q0/χ²_q1 = 0.2041.

Interpolating with respect to χ²_q0/χ²_q1 = 0.2, we find: k = 15.63, χ²_q0 = 4.87, χ²_q1 = 23.99. We compute ν = (k/2) − 1 = 6.815; we take ν = 6, 2n0λ0t0 = 4.87,
hence, it follows that n0t0 = 4.87/(2 · 0.000002) = 1.218·10⁶. The condition λ1t0 < 0.1 leads to t0 < 0.1/0.00001 = 10,000 hours (since λ1 = 0.00001).
Taking different values t0 < 10,000 hours, we obtain the corresponding values of n0 given in Table 111.

TABLE 111

t0 in hours:   100      500     1000    2500    5000
n0:            12,180   2436    1218    487     244
We compute B, A, t1, t2, t3 for the method of sequential analysis: B = 0.08041, ln B = −2.5211, A = 184, ln A = 5.2161. Taking n0 = 1218, we have

t1 = 258.7 hours;    t2 = −535.3 hours;    t3 = 165.2 hours (Figure 39).
The minimal testing time in the case when m = 0 for a good lot is tmin = 258.7 hours; for a defective lot tmin = −535.3 + 165.2m > 0; m = 3.24 ≈ 4; for m = 4, tmin = 125.5 hours. If for t < 125.5 hours m ≥ 4, then the lot is rejected. To compute the average testing time for n = n0 = 1218, we take tH = t0 = 1000 hours. Then p0 = λ0tH = 0.002; p1 = λ1tH = 0.010;
λ*tH = tH/(n0t3) = 0.00497.

Furthermore, we find

M[n | p0] = 505,    M[n | p1] = 572,    M[n]max = 1001;
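The boundary lines t1, t2, t3 of this exponential-life sequential plan can be reproduced from Wald's bounds; a sketch using the data of Example 45.3 (accumulated per-item test time t is accepted when t > t1 + m·t3 and rejected when t < t2 + m·t3, m being the number of failures):

```python
from math import log

# Data from Example 45.3
lam0, lam1 = 2e-6, 1e-5        # failure rates of good / defective lots
alpha, beta = 0.005, 0.08
n0 = 1218                      # items on test simultaneously

ln_B = log(beta / (1 - alpha))
ln_A = log((1 - beta) / alpha)

# Boundary lines per item, for accumulated test time with m failures
t1 = -ln_B / (n0 * (lam1 - lam0))
t2 = -ln_A / (n0 * (lam1 - lam0))
t3 = log(lam1 / lam0) / (n0 * (lam1 - lam0))
```

This gives t1 ≈ 258.7, t2 ≈ −535.3 and t3 ≈ 165.2 hours, matching the values quoted in the solution.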
FIGURE 39
then we compute M[T | λ0] = 415 hours, M[T | λ1] = 470 hours, M[T]max = 821 hours.
We find the probability that the testing time for a fixed number of items n = n0 = 1218 is less than 1000 hours and 500 hours. Therefore, for tH = 1000 hours, we compute the value of the parameter c of Wald's distribution and the value of y = tH/M[T | λ], with the condition that p0 = λ0t0 = 0.002, p1 = λ1t0 = 0.01. Taking p = p0, since α ≪ β, we obtain c = 2.37, y = 1000/415 = 2.406. We find (see Table 26T) that

P(T < 1000) = P(n < 1218) = W_c(y) = 0.9599.

For tH = 500 hours, we have y = 1.203, P(T < 500) = 0.725.
One can solve Problem 45.9 similarly.
Example 45.4 The quality of the disks produced on a flat-grinding machine is determined by the number of spots on a disk. If the average number of spots per 10 disks is at most one, then the disks are considered to be of good quality; if the average number is greater than five, then the disks are defective. A sample of 40 disks is selected from a large lot (N > 1000). Assuming that the number of spots on a disk obeys a Poisson distribution law: (a) determine α and β for ν = 9; (b) for these α and β construct the plan of sequential control, compute nmin for a good lot and a defective one and find the values of M[n | a]; (c) test a concrete sample, whose data appear in Table 112, by the methods of single and sequential control.
TABLE 112

n   Xn | n   Xn | n   Xn | n   Xn | n   Xn
1   0  | 9   1  | 17  2  | 25  4  | 33  4
2   1  | 10  1  | 18  2  | 26  4  | 34  4
3   1  | 11  1  | 19  3  | 27  4  | 35  5
4   1  | 12  1  | 20  3  | 28  4  | 36  5
5   1  | 13  2  | 21  3  | 29  4  | 37  6
6   1  | 14  2  | 22  3  | 30  4  | 38  6
7   1  | 15  2  | 23  3  | 31  4  | 39  7
8   1  | 16  2  | 24  4  | 32  4  | 40  7

SOLUTION. (a) Using the Poisson distribution, we have a0 = 0.1, a1 = 0.5, na0 = 4, na1 = 20. Using Table 7T for the total probabilities of Xn occurrences
of spots on disks in the sample, we find

α = Σ_{Xn=10}^{∞} 4^Xn e^(−4)/Xn! = 0.00813,    β = Σ_{Xn=0}^{9} 20^Xn e^(−20)/Xn! = 0.00500.

(b) For α = 0.0081, β = 0.0050, the characteristics of the sequential control (Figure 40) are: B = 0.005041, log B = −2.298; A = 122.8, log A = 2.089;

h1 = log B / log(a1/a0) = −3.29;    h2 = log A / log(a1/a0) = 2.99;
h3 = 0.4343 (a1 − a0) / log(a1/a0) = 0.248.

We compute nmin:

for Xn = 0,  nmin = 13.2 ≈ 14;
for Xn = n,  nmin = 18.7 ≈ 19.
The average number of tests in the case of sequential control is

M[n | a0] = 21.8;    M[n | a1] = 11.8;    M[n]max = 39.5.
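The Poisson risks and the sequential boundary constants of this example can be reproduced numerically; a sketch (helper name `poisson_cdf` is ours; 0.4343 is log10 e, as in the text's formula for h3):

```python
from math import exp, log10

a0, a1 = 0.1, 0.5       # mean spots per disk: good / defective lot
n = 40

def poisson_cdf(v, a):
    """P(X <= v) for a Poisson count with mean a."""
    s, term = 0.0, exp(-a)
    for m in range(v + 1):
        s += term
        term *= a / (m + 1)
    return s

# (a) single sampling with acceptance number v = 9 spots on n = 40 disks
alpha = 1 - poisson_cdf(9, n * a0)   # reject a good lot (mean 4 spots)
beta = poisson_cdf(9, n * a1)        # accept a defective lot (mean 20 spots)

# (b) sequential lines: accept when Xn < h1 + n*h3, reject when Xn > h2 + n*h3
log_B = log10(beta / (1 - alpha))
log_A = log10((1 - beta) / alpha)
h1 = log_B / log10(a1 / a0)
h2 = log_A / log10(a1 / a0)
h3 = 0.4343 * (a1 - a0) / log10(a1 / a0)
```

The computed values agree with α = 0.00813, β = 0.00500, h1 = −3.29, h2 = 2.99, h3 = 0.248 of the solution.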
(c) In a sample with n0 = 40, it turns out that Xn = 7 < ν = 9; consequently, the lot is accepted. Applying the method of sequential control (see Figure 40), for n = 30 we obtain that the point with coordinates (n, Xn) lies below the lower line; that is, the lot should be accepted. Indeed,

for n = 29:  Xn = 4,  h1 + nh3 = 3.90,  Xn > h1 + nh3;
for n = 30:  Xn = 4,  h1 + nh3 = 4.15,  Xn < h1 + nh3.
Similarly one can solve Problem 45.11. Example 45.5 The quality of punchings made by a horizontal forging machine is determined by the dispersion of their heights X, known to obey a
FIGURE 40
normal distribution law with expectation x̄ = 32 mm. (nominal dimension). If the standard deviation σ ≤ σ0 = 0.18 mm., the lot is considered good; if σ ≥ σ1 = 0.30 mm., the lot is defective. Find α and β for the method of single sampling if n0 = 39 and ν = 0.22 mm. Use the resulting values of α and β to construct a control plan by the method of sequential analysis. Compute nmin for a good lot and a defective one and find M[n | σ].

SOLUTION. We compute α and β by the formulas for k = n0 = 39, q0 = ν/σ0 = 1.221, q1 = ν/σ1 = 0.733. Interpolating according to Table 22T for the chi-square distribution, we find

α = 0.0303;    β = 0.0064.
We find the values of B, A, h1, h2, h3 for the method of sequential analysis:

B = 0.006601;  ln B = −5.021;  A = 30.10;  ln A = 3.405;
h1 = −0.528;  h2 = 0.345;  h3 = 0.0518.
We find nmin. For the poorest among the good lots, σ̃² = σ0² = 0.0324; nmin σ0² = h1 + nmin h3; nmin = 27.2 ≈ 28. For the best among the defective lots, σ̃² = σ1² = 0.0900; nmin σ1² = h2 + nmin h3; nmin = 9.3 ≈ 10. We compute the average numbers of tests M[n | σ] for different σ:

M[n | σ0] = 25.9;    M[n | σ1] = 8.8;    M[n]max = 34.0.
In a similar manner, one can solve Problem 45.12. Example 45.6 The maximal pressure X in a powder chamber of a rocket is normally distributed with standard deviation σ = 10 kg./cm.². The rocket is considered good if X ≤ ξ0 = 100 kg./cm.²; if X ≥ ξ1 = 105 kg./cm.², the rocket is returned to the plant for adjustment. Given the values α = 0.10 and β = 0.01, construct the plans for single control (n0, ν) and sequential control, and compute the probabilities P(n < n0) and P(n < (1/2)n0) that for the sequential control the number of tests will be less than n0 and (1/2)n0, respectively. SOLUTION. To compute the sample size n0 and the acceptance number ν for a single control, we use the formulas

Φ((ν − ξ0)/(σ/√n0)) = 1 − 2α,    Φ((ξ1 − ν)/(σ/√n0)) = 1 − 2β.
Substituting the values for α and β and using Table 8T for the Laplace function, we find

(ν − 100)√n0 / 10 = 1.2816,    (105 − ν)√n0 / 10 = 2.3264,

hence, it follows that n0 = 52, ν = 101.8 kg./cm.². For the sequential control, we find that B = 0.0111, ln B = −4.500, A = 9.9, ln A = 2.293, h1 = −90, h2 = 45.86, h3 = 102.5.
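Both the single plan and the sequential constants of this example follow from short closed-form computations; a sketch assuming the tabulated Laplace-function quantiles 1.2816 and 2.3264 quoted in the text:

```python
from math import log

sigma = 10.0
xi0, xi1 = 100.0, 105.0
z_a, z_b = 1.2816, 2.3264   # quantiles for 1 - 2*alpha = 0.80, 1 - 2*beta = 0.98

# Single control: add the two equations to eliminate nu, then back-substitute
sqrt_n0 = (z_a + z_b) * sigma / (xi1 - xi0)
n0 = sqrt_n0**2                       # sample size
nu = xi0 + z_a * sigma / sqrt_n0      # acceptance number, kg/cm^2

# Sequential control lines for the cumulative sum of X (alpha = 0.10, beta = 0.01)
ln_B = log(0.01 / (1 - 0.10))
ln_A = log((1 - 0.01) / 0.10)
h1 = sigma**2 * ln_B / (xi1 - xi0)
h2 = sigma**2 * ln_A / (xi1 - xi0)
h3 = (xi0 + xi1) / 2
```

This reproduces n0 ≈ 52, ν ≈ 101.8 kg./cm.², h1 ≈ −90, h2 ≈ 45.86, h3 = 102.5.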
We determine nmin. For the poorest among the good lots, when x̄ = ξ0 = 100,

nmin · 100 = h1 + nmin · 102.5;    nmin = 36;

for the best among the defective lots, when x̄ = ξ1 = 105,

nmin · 105 = 45.86 + nmin · 102.5;    nmin = 18.3 ≈ 19.
The average number of observations M[n | ξ] is equal to

M[n | ξ0] = 30.6;    M[n | ξ1] = 17.8;    M[n]max = 41.3.
To determine the probability P(n < 52), since β ≪ α, for x̄ = ξ1 = 105 we compute:

c = 1.146;  K = ln A = 2.293;  y11 = n0/M[n | ξ1] = 4.031;  y12 = (1/2)y11 = 2.016.

From Table 26T for Wald's distribution law we find that

P(n < 52) = 0.982,    P(n < 26) = 0.891.
By following this solution one can solve Problem 45.13. Example 45.7 The average time of operation of identical electron tubes represents t̄ ≥ t0 = 1282 hours for a good lot and t̄ ≤ t1 = 708 hours for a defective one. It is known that the time T of reliable operation obeys an exponential distribution law with the probability density

f(t) = λe^(−λt),
where the parameter λ is the intensity of failures, that is, the inverse of the mean time of operation of a tube in hours. Determine, for α = 0.001 and β = 0.01, the size n0 of the single sample and the acceptance number ν, construct the sequential control plan and find nmin, M[n | λ], P(n < n0), P(n < (1/2)n0). SOLUTION. Assuming that n0 > 15 (since α and β are small), we replace the chi-square distribution, which the quantity 2λn0/λ̃ obeys, by a normal distribution; i.e., we set
P(χ² ≥ χ²_q) = 0.5 − 0.5 Φ((χ²_q − 2n)/(2√n)),

since the number of degrees of freedom is k = 2n. We obtain the equations

0.5 − 0.5 Φ((χ²_q0 − 2n)/(2√n)) = 1 − α,    0.5 − 0.5 Φ((χ²_q1 − 2n)/(2√n)) = β;

hence, it follows, from Table 8T, that

(χ²_q0 − 2n)/(2√n) = −3.090,    (χ²_q1 − 2n)/(2√n) = 2.324,
which yields, for the acceptance number ν,

0.000780 − ν = −3.090 ν/√n0,    0.001413 − ν = 2.324 ν/√n0.

If we solve this system of equations, we obtain ν = 0.001141, n0 = 99.03 ≈ 100.
Since n0 > 15, the use of a normal distribution is permissible. For the sequential control, we find that:

B = 0.01001;  ln B = −4.604;  A = 990;  ln A = 6.898;
h1 = 7273;  h2 = −10,900;  h3 = 938.0;  λ* = 1/h3 = 0.001066.
We determine nmin. For the poorest among the good lots, t̄ = t0 = 1282 hours, nmin = 21.1 ≈ 22; for the best among the defective lots, t̄ = t1 = 708 hours, nmin = 47.4 ≈ 48. We find the average numbers of tests for different λ:

M[n | λ0] = 20.7;    M[n | λ1] = 46.6;    M[n]max = 90.0.
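The constants h1, h2, h3 and both nmin values of Example 45.7 can be reproduced from the exponential-life SPRT; a sketch (the continuation region for the accumulated lifetime sum is h2 + n·h3 < Σt < h1 + n·h3, as implied by the nmin equations of the solution):

```python
from math import log

lam0, lam1 = 1 / 1282, 1 / 708   # failure intensities of good / defective lots
alpha, beta = 0.001, 0.01

ln_B = log(beta / (1 - alpha))
ln_A = log((1 - beta) / alpha)

h1 = -ln_B / (lam1 - lam0)       # acceptance line intercept (hours)
h2 = -ln_A / (lam1 - lam0)       # rejection line intercept (hours)
h3 = log(lam1 / lam0) / (lam1 - lam0)

# nmin when every observed lifetime equals t0 (good) or t1 (defective)
n_min_good = h1 / (1282 - h3)
n_min_bad = h2 / (708 - h3)
```

The small differences from the printed 7273, −10,900, 938 come only from rounding of λ0, λ1 in the text.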
Since α ≪ β, we determine K = |ln B| = 4.604 and, then, the parameter c of Wald's distribution: c = 1.525; furthermore, we find y01 = 100/20.7 = 4.82; y02 = 2.41. From Table 26T, for y01 (y02) and c, we have

p = P(n < 100) > 0.99 (p < 0.999),    P(n < 50) = 0.939.

Similarly Problem 45.14 can be solved.

PROBLEMS
45.1 Rods in lots of 100 are checked for their quality. If a lot contains L ≤ l0 = 4 defective items, the lot is accepted; if L ≥ l1 = 28, the lot is rejected. Find α and β for the method of single sampling if n0 = 22, ν = 2, and for the method of double sampling for n1 = n2 = 15, ν1 = 0, ν2 = 3, ν3 = 3; compare their efficiencies according to the average number of tests; construct the sequential analysis plan and compute the minimal number of tests for a good lot and a defective one in the case of sequential control. Use the values of α and β obtained by the method of single sampling. 45.2 In the production of large lots of ball bearings, a lot is considered good if the number of defective items does not exceed 1.5 per cent and defective if it exceeds 5 per cent. Construct and compare
the efficiency of the plan of single control, for which the sample size n0 = 410 and acceptance number ν = 10, and the plan of double control, for which n1 = n2 = 220, ν1 = 2, ν2 = 7, ν3 = 11. Construct the sequential control plan with α and β as found for the plan of single control. Compare the efficiencies of all three methods according to the average number of tests and compute nmin for a good lot and a defective one for sequential control. 45.3 A large lot of punched items is considered good if the proportion of defective items p ≤ p0 = 0.10 and defective if p ≥ p1 = 0.20. Find α and β for the control by single sampling: use sample size n0 = 300 and acceptance number ν = 45. For the resulting values of α and β, construct the control plan by the method of sequential analysis and compute nmin for a good lot and a defective one; find M[n | p] and P(n < n0), P(n < (1/2)n0). Hint: Pass to the normal distribution. 45.4 For a large lot of items, construct the plan of single control (n0, ν) that guarantees (a) a supplier's risk of 1 per cent and a consumer's risk of 2 per cent, if the lot is accepted when the proportion of defective items is p ≤ p0 = 0.10 and rejected when p ≥ p1 = 0.20 (use the normal distribution), (b) α = 0.20, β = 0.10 for the same p0 and p1 applied to a Poisson distribution law. Construct the corresponding plans of sequential control and find the expectations for the number of tests. 45.5 For α = 0.05 and β = 0.10, construct the plans of single and sequential control for quality tests of large lots of rivets. The rivets are considered defective if their diameter X > 13.575 mm. A lot is accepted if the proportion of defective rivets is p ≤ p0 = 0.03 and rejected if p ≥ p1 = 0.08. Compute, for a Poisson distribution, the size n0 of the single sample and the acceptance number ν.
For the same α and β, construct the plan of sequential control, compute nmin for a good lot and a defective one and find the average number of tests M[n | p] in a sequential control. 45.6 Rivets with diameter X > 13.575 mm. are considered defective. At most 5 per cent of the lots whose proportion of defective items is p ≤ p0 = 0.03 may be rejected and at most 10 per cent of lots whose proportion of defective items is p ≥ p1 = 0.08 may be accepted. Assuming that the random variable X obeys a normal distribution whose estimates of the expectation x̄ and variance σ̃² are determined on the basis of sample data, find the general formulas for the size n0 of the single sample in dimension control and for z0 such that the following condition is satisfied

P(x̄ + σ̃z0 > l | p = p0) = α,    P(x̄ + σ̃z0 > l | p = p1) = 1 − β.
Compute n0 and z0 for the conditions of the problem. Consider the fact that the quantity

u = x̄ + σ̃z0

is approximately normally distributed with parameters

M[u] = x̄ + σz0,    D[u] ≈ σ²/n + z0²σ²/(2k),

where k = n − 1. Compare the result with that of Problem 45.5. 45.7 Using the binomial and Poisson distributions, construct the plan of double control for n1 = n2 = 30, ν1 = 3, ν2 = 5, ν3 = 8, if a lot is considered good when the proportion of defective items is p ≤ p0 = 0.10 and defective when p ≥ p1 = 0.20. For the values α and β found for the binomial distribution, construct the plans of single and sequential control, and compare all three methods according to the average number of tests. For the sequential control, find nmin for a good lot and a defective lot and compute the expectation of the number of tests M[n | p]. 45.8 Construct the control plans by the methods of single and sequential sampling for large lots of radio tubes if a lot with proportion of defective items p ≤ p0 = 0.02 is considered good and with p ≥ p1 = 0.07 is considered defective. The producer's risk is α = 0.0001 and the consumer's risk is β = 0.01. For the plan of sequential control, determine nmin for a good lot and a defective one, find the average number of tests M[n | p] and the probabilities P(n ≤ M[n | p0]), P(n ≤ 2M[n | p0]). 45.9 The time of operation T (in hours) of a transformer obeys an exponential distribution with an intensity of failures λ. Assuming that λt0 < 0.1, construct the plans of control by single sampling and sequential analysis for α = 0.10, β = 0.10. For the single control, find the acceptance number ν and the size n0 of the sample if the testing period of each transformer is t0 = 500, 1000, 2000, 5000 hours. (Replace the Poisson distribution by a chi-square distribution.) For the sequential control, take a fixed sample size n0 corresponding to t0 = 1000 hours and find the average testing time of each transformer M[T | λ]. Assume that a lot of transformers is good if the intensity of failures λ ≤ λ0 = 10⁻⁵ hours⁻¹ and defective if λ ≥ λ1 = 2·10⁻⁵ hours⁻¹.
45.10 A large lot of electrical resistors is subjected to control for α = 0.005, β = 0.08; the lot is considered good if the proportion of defective resistors is p ≤ p0 = 0.02 and defective if p ≥ p1 = 0.10. Applying a chi-square distribution instead of a Poisson one, find the size n0 and the acceptance number ν for the method of single sampling; construct the plan of sequential control for a good lot and a defective lot; compute the expectation of the number of tested items and the probabilities P(n < n0), P(n < (1/2)n0). 45.11 Before planting, lots of seed potatoes are checked for rotting centers. A lot of seed potatoes is considered good for planting if in each group of 10 potatoes there is at most one spot and bad if there are five spots or more. Assuming that the number of spots obeys a Poisson distribution, compute α and β for the method of double sampling if n1 = 40, n2 = 20, ν1 = 4, ν2 = 12, ν3 = 14. For the resulting values of α and β, construct the plans of single and sequential control. Compare the
efficiencies of all three methods according to the mean expenditures of seed potatoes necessary to test 100 lots. 45.12 The quality characteristic in a lot of electrical resistors, whose random values obey a normal distribution law with a known mean of 200 ohms, is the standard deviation σ, and the lot is accepted if σ ≤ σ0 = 10 ohms and defective if σ ≥ σ1 = 20 ohms. Construct the control plans by the method of single sampling with n0 = 16, ν = 12.92 and double sampling with n1 = n2 = 13, ν1 = ν3 = 12, ν2 = ∞. For the resulting values of α and β (in the case of single control), construct the plan of sequential control. Compare the efficiencies of all three methods of control according to the average number of tests. Compute nmin for the poorest among the good lots and the best among the defective lots. 45.13 Several lots of nylon are tested for strength. The strength characteristic X, measured in g./denier (specific strength of the fiber), obeys a normal distribution with standard deviation σ = 0.8 g./denier. A lot is considered good if x̄ ≥ x0 = 5.4 g./denier and bad if x̄ ≤ x1 = 4.9 g./denier. Construct the plan of strength control by single sampling with n0 = 100 and ν = 5.1. For the resulting values of α and β, construct the plan of control by the method of sequential analysis, and compute the mean expenditure in fibers and the probabilities P(n < n0), P(n < (1/2)n0). 45.14 It is known that if the intensity of failures is λ ≤ λ0 = 0.01, then a lot of gyroscopes is considered reliable; if λ ≥ λ1 = 0.02, the lot is unreliable and should be rejected. Assuming that the time T of reliable operation obeys an exponential distribution and taking α = β = 0.001, construct the plans for single (n0, ν) and sequential controls according to the level of the parameter λ. Find the average number of tested gyroscopes M[n | λ] for the case of sequential control. 45.15 A large lot of condensers is being tested.
The lot is considered good if the proportion of unreliable condensers is p ≤ p0 = 0.01; for p ≥ p1 = 0.06 the lot is rejected. Construct the plan of single control (n0, ν) for the proportion of unreliable items so that α = 0.05, β = 0.05. To establish the reliability, each tested condenser belonging to the considered sample is subjected to a multiple sequential control for α′ = 0.0001, β′ = 0.0001, and a condenser is considered reliable if the intensity of failures λ ≤ λ0 = 0.0000012 hours⁻¹ and unreliable for λ ≥ λ1 = 0.0000020 hours⁻¹ (n is the number of tests used to establish the reliability of a condenser for given α′ and β′). One assumes that the time of reliable operation of a condenser obeys an exponential distribution. 45.16 Construct the plans of single and sequential controls of complex electronic devices whose reliability is evaluated according to the average time T̄ of unfailing (reliable) operation. If T̄ ≥ T0 = 100 hours, a device is considered reliable, and if T̄ ≤ T1 = 50 hours, unreliable. It is necessary that α = β = 0.10. Consider that for a fixed testing time tT a device is accepted if tT/m = T̄ ≥ ν and rejected if T̄ < ν, where m is the number of failures for time tT, and ν is the acceptance number in the case of single control (n0 = 1; in case of failure the device is repaired and the test is continued). In this case, tT/T̄ obeys approximately a
Poisson distribution. In the case of sequential control, the quantity t depends on the progress of the test. (a) Determine the testing time tT and the acceptance number ν for a single control. (b) For the plan of sequential control, reduce the condition for continuation of the tests, ln B < ln γ(t, m) < ln A, to the form t1 + mt3 > t > t2 + mt3. For t1, t2, t3, obtain preliminary general formulas. (c) In the case of sequential control, determine the minimal testing time tmin for the poorest of the good lots and the best of the rejected ones.
46. DETERMINATION OF PROBABILITY CHARACTERISTICS OF RANDOM FUNCTIONS FROM EXPERIMENTAL DATA

Basic Formulas
The methods of determination of the expectation, the correlation function and the distribution laws of the ordinates of a random function by processing a series of sample functions do not differ from the methods of determination of the corresponding probability characteristics of a system of random variables. In processing the sample functions of stationary random functions, instead of averaging the sample functions, one may sometimes average with respect to time; i.e., find the probability characteristics with respect to one or several sufficiently long realizations (the condition under which this is possible is called ergodicity). In this case, the estimates (approximate values) of the expectation and correlation function are determined by the formulas
x̄ = (1/T) ∫₀^T x(t) dt,

K̃x(τ) = (1/(T − τ)) ∫₀^(T−τ) [x(t) − x̄][x(t + τ) − x̄] dt,

where T is the total time of recording of the sample function. Sometimes, instead of the last formula, one uses the practically equivalent formula

K̃x(τ) = (1/(T − τ)) ∫₀^(T−τ) x(t)x(t + τ) dt − x̄².
In the case when the expectation x̄ is known exactly,

K̃x(τ) = (1/(T − τ)) ∫₀^(T−τ) [x(t) − x̄][x(t + τ) − x̄] dt = (1/(T − τ)) ∫₀^(T−τ) x(t)x(t + τ) dt − x̄².
If x̄ and K̃x(τ) are determined from the ordinates of a sample function of a random function at discrete time instants tj = (j − 1)Δ, the corresponding formulas become

x̄ = (1/m) Σ_{j=1}^{m} x(tj),

K̃x(τ) = (1/(m − l)) Σ_{j=1}^{m−l} [x(tj) − x̄][x(tj + τ) − x̄]

or

K̃x(τ) = (1/(m − l)) Σ_{j=1}^{m−l} x(tj)x(tj + τ) − x̄²,
where τ = lΔ, T = mΔ. For normal random functions, the variances of x̄ and K̃x(τ) may be expressed in terms of Kx(τ). In practical computations, the unknown correlation function Kx(τ) in the formulas for D[x̄] and D[K̃x(τ)] is replaced by the quantity K̃x(τ). When one determines the value of the correlation function by processing several sample functions of different durations, one should take as the approximate value of the ordinates of K̃x(τ) the sum of the ordinates obtained by processing individual realizations, with weights inversely proportional to the variances of these ordinates.
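The discrete-time estimation formulas can be illustrated directly; a minimal sketch (function names are ours, and the toy record is chosen so the answers are exact):

```python
def mean_estimate(x):
    """x_bar = (1/m) * sum of the sampled ordinates."""
    return sum(x) / len(x)

def corr_estimate(x, l):
    """K(l*delta) = (1/(m-l)) * sum_j [x_j - x_bar][x_{j+l} - x_bar]."""
    m = len(x)
    xb = mean_estimate(x)
    return sum((x[j] - xb) * (x[j + l] - xb) for j in range(m - l)) / (m - l)

# Toy alternating record: x_bar = 0, K(0) = 1, K(delta) = -1
x = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
```

For a genuinely random stationary record the same calls give the time-average estimates described above.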
SOLUTION FOR TYPICAL EXAMPLES
Example 46.1 The ordinates of a stationary random function are determined by photographing the scale of the measuring instrument during equal time intervals Δ. Determine the maximal admitted value of Δ for which the increase in the variance of K̃x(0), compared with the variance obtained by processing the continuous graph of a realization of the random function, will be at most δ per cent, if the approximate value of Kx(τ) = a e^(−α|τ|) and the total recording time T ≫ 1/α. It is known that x̄ = 0 and the function X(t) can be considered normal.
SOLUTION. Since x̄ = 0, by use of the continuous recording, the value of K̃x(0) is determined by the formula

K̃1(0) = (1/T) ∫₀^T x²(t) dt.

For finding the variance of K̃1(0), we have

D[K̃1(0)] = M[K̃1²(0)] − {M[K̃1(0)]}² = (2/T²) ∫₀^T ∫₀^T Kx²(t2 − t1) dt1 dt2 = (4a²/T²) ∫₀^T (T − τ) e^(−2ατ) dτ.

If after integration we eliminate the quantities containing the small (by assumption) factor e^(−αT), we get

D[K̃1(0)] = (a²/(α²T²))(2αT − 1).
If the ordinates of the random function are discrete, the value of K̃x(0) is

K̃2(0) = (1/m) Σ_{j=1}^{m} x²(jΔ).

Determining the variance of K̃2(0), we find that

D[K̃2(0)] = (1/m²){Σ_{j=1}^{m} Σ_{l=1}^{m} M[X²(jΔ)X²(lΔ)] − m²Kx²(0)} = (2/m²) Σ_{j=1}^{m} Σ_{l=1}^{m} Kx²(lΔ − jΔ),

where for the calculation of the expectation one uses a property of moments of systems of normal random variables. Using the value of Kx(τ), we obtain

D[K̃2(0)] = (2a²/m²) Σ_{j=1}^{m} Σ_{l=1}^{m} e^(−2α|l−j|Δ) = (2a²/m²)[m + 2 Σ_{r=1}^{m−1} (m − r) e^(−2αrΔ)] = 2a²Δ [T(1 − e^(−4αΔ)) − 2Δ e^(−2αΔ)] / [T²(1 − e^(−2αΔ))²].

The limiting value of Δ is found from the equation

D[K̃2(0)] / D[K̃1(0)] = 1 + 0.01δ;

that is, from the equation

2α²Δ [T(1 − e^(−4αΔ)) − 2Δ e^(−2αΔ)] / [(2αT − 1)(1 − e^(−2αΔ))²] = 1 + 0.01δ.

For αΔ ≪ 1, we obtain approximately

Δ = ((2αT − 1)/(2αT − 3)) · (δ/(100α)).
PROBLEMS

46.1 Prove that the condition lim_{τ→∞} Kx(τ) = 0 is necessary in order that the function X(t) be ergodic. 46.2 Verify whether the expression

S̃x(ω) = (1/(2πT)) |∫₀^T e^(iωt) x(t) dt|²

may be taken as an estimate of the spectral density if X(t) is a normal stationary random function (x̄ = 0) and ∫₀^∞ |K(τ)| dτ < ∞.
46.3 To determine the estimate of the correlation function of a stationary normal stochastic process X(t) (x̄ = 0), a correlator is used that operates according to the formula

K̃x(τ) = (1/(T − τ)) ∫₀^(T−τ) x(t)x(t + τ) dt.

Derive the formula for D[K̃x(τ)]. 46.4 Determine the expectations and the variances of the estimates of correlation functions defined by one of the formulas

K̃1(τ) = (1/(T − τ)) ∫₀^(T−τ) x(t)x(t + τ) dt − (x̄)²,

K̃2(τ) = (1/(T − τ)) ∫₀^(T−τ) [x(t) − x̄][x(t + τ) − x̄] dt,

where x̄ = (1/(T − τ)) ∫₀^(T−τ) x(t) dt, if X(t) is a normal random function. 46.5 The correlation function of the stationary stochastic process X(t) has the form

Find the variance for the estimate of the expectation defined by the formula

x̄ = (1/T) ∫₀^T x(t) dt.
46.6 The spectral density S̃x(ω) is found by a Fourier inversion of the approximate value of the correlation function. Determine D[S̃x(ω)] as a function of ω if

K̃x(τ) = (1/T) ∫₀^(T−τ) x(t)x(t + τ) dt,    x̄ = 0,

the process is normal and, to solve the problem, one may use Kx(τ) = a e^(−α|τ|)(1 + α|τ|) instead of K̃x(τ) in the final formula. 46.7 The correlation function K̃x(τ) determined from an experiment is used for finding the variance of the stationary solution of the differential equation

Ẏ(t) + 2Y(t) = X(t).

Determine how σy will change if, instead of the expression

K̃x(τ) = σx² e^(−0.2|τ|)(cos 0.75τ + 0.28 sin 0.75|τ|),

representing a sufficiently exact approximation of Kx(τ), one uses

K̃x*(τ) = σx² e^(−α1|τ|) cos β1τ,

where α1 and β1 are chosen such that the position of the first zero and the ordinate of the first minimum of the expression of K̃x*(τ) coincide with the corresponding quantities for K̃x(τ).
46.8 An approximate value of K̃x(τ) is used to find D[Y(t)], where

Y(t) = dX(t)/dt.

Determine how σy will change if, instead of the expression

K̃x(τ) = σx² e^(−0.10|τ|)(cos 0.7τ + (1/7) sin 0.7|τ|),

which approximates quite accurately the expression Kx(τ), one uses

K̃x*(τ) = σx² e^(−α|τ|) cos βτ,

where α and β are chosen such that the positions of the first zeros and the values of the first minimum of the functions K̃x(τ) and K̃x*(τ) coincide. 46.9 The correlation function for the heel angle of a ship can be represented approximately in the form

KΘ(τ) = a e^(−α|τ|)(cos βτ + (α/β) sin β|τ|),

where a = 36 deg.², α = 0.05 sec.⁻¹ and β = 0.75 sec.⁻¹. Determine D[K̃Θ(τ)] for τ = 0 and τ = 3 sec. if Θ(t) is a normal random function and K̃Θ(τ) is obtained by processing the recorded rolling of the ship during time T = 20 minutes. 46.10 The ordinate of the estimate of the correlation function for τ = 0 is 100 cm.², and for τ = τ1 = 4.19 sec. its modulus attains a maximum, corresponding to a negative value of 41.5 cm.². According to these data, select the analytic expression for K(τ): (a) in the form K(τ) = a² e^(−α|τ|)(cos βτ + (α/β) sin β|τ|); (b) in the form K(τ) = a² e^(−α|τ|) cos βτ. Determine the difference in the values of the first zeros of the functions K(τ) in these two cases. 46.11 Determine D[K̃Θ(τ)] for τ = 0, 2.09, 4.18 and 16.72 sec. if

K̃Θ(τ) = (1/(T − τ)) ∫₀^(T−τ) θ(t)θ(t + τ) dt,

KΘ(τ) = a e^(−α|τ|) cos βτ,

where a = 25 deg.², α = 0.12 sec.⁻¹, β = 0.75 sec.⁻¹ and Θ(t) is a normal random function, θ̄ = 0. To determine K̃Θ(τ), one uses a 10 m. recording of Θ(t), where 1 cm. of the graph along the time axis corresponds to 1 sec. 46.12 The graph of a sample function of the random function X(t) is recorded on a paper tape by using a conducting compound passing at constant speed between two contacts, one shifted with respect to the other by τ seconds along the time axis. The contacts are connected to a relay system so that the relay turns on a stop watch when the ordinates of the sample function at the points where the contacts are located have the same sign and turns it off otherwise. Show that if
x̄ = 0 and X(t) is a normal stationary random function, the estimate of its normalized correlation function can be determined by the formula

k̃(τ) = cos π(1 − t1/t),

where t1 is the total reading of the stop watch and t is the total time the tape moves. 46.13 Under the assumptions of the preceding problem, determine D[k̃(5)] if for the determination of k̃(5) one uses the graph of the sample function corresponding to the recording time T = 10 minutes, α = 0.2 sec.⁻¹.
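The sign-correlator formula of Problem 46.12 rests on the arcsine law for Gaussian pairs, and it can be checked by simulation; a sketch assuming a synthetic Gaussian sequence with lag-one correlation ρ = 0.6 (all parameter choices here are illustrative):

```python
import random
from math import cos, pi, sqrt

random.seed(1)
rho = 0.6            # true normalized correlation at the chosen lag
n = 200_000

# Stationary Gaussian sequence with corr(x_i, x_{i+1}) = rho
x = [random.gauss(0, 1)]
for _ in range(n):
    x.append(rho * x[-1] + sqrt(1 - rho**2) * random.gauss(0, 1))

# t1/t: fraction of time the two shifted ordinates have the same sign
same = sum(1 for i in range(n) if x[i] * x[i + 1] > 0) / n
k_est = cos(pi * (1 - same))
```

By the arcsine law P(same sign) = 1/2 + arcsin(ρ)/π, so cos π(1 − t1/t) recovers ρ; the simulated estimate lands close to 0.6.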
46.14 As a result of processing three sample functions of a single stationary random function X(t) for durations T1, T2 and T3, three graphs of estimates of the correlation function were obtained. Assuming that the process is normal, derive the formula for finding the ordinates of the estimate of the correlation function K̃x(τ). Use all the experimental data with the condition that the variance of the error is minimal, if for each sample function the estimate of the correlation function is given by the formula

K̃j(τ) = (1/Tj) ∫₀^(Tj) x(t)x(t + τ) dt,    j = 1, 2, 3    (x̄ = 0).
46.15 Determine the variance of the estimate for the correlation function of a normal stochastic process with zero expectation if, to find K̃x(τ), one takes the ordinates of the sample function of the random function during equal time intervals Δ, the duration of recording is T = mΔ and in the final formula K̃x(τ) may be replaced by Kx(τ). 46.16 The ordinates of a random function are determined by photographing the scale of an instrument during equal time intervals Δ = 1 sec. Determine the ratio of D[K̃(0)] to the variance obtained by processing the continuous graph of the sample function if K(τ) = a e^(−0.5|τ|) (τ is expressed in seconds), the process is normal and the observation time T = 5 minutes. 46.17 An approximate determination of the ordinates of a sample function of a stationary random function X(t) with zero expectation and a known correlation function Kx(τ) is given by the formula
X(t)
=
~ (
;~ ,A;
27TjT
COST
. 27TjT) + B; Sill T a;,
where A;, B; are mutually independent random variables with unit variances and zero expectations and Tis a known number. Determine the constants a; so that e
=f
[Kx(r) - Kx(r)P dr =min.,
METHODS OF DATA PROCESSING
where K̃ₓ(τ) is the correlation function corresponding to the preceding approximate expression for X(t). Determine the magnitude of ε for optimal values of the constants.

46.18 To decrease the influence of the random vibration of the frame of a mirror-galvanometer used to measure a weak current, the readings are recorded during T = 10 sec. and the value J̄ of the average recorded ordinate is considered to be the required intensity of the current. Find the mean error of the result if the vibration of the frame is described by the correlation function of the intensity of current J(t):

K(τ) = a·e^(−α|τ|),

where α = 10⁻¹ sec.⁻¹
ANSWERS AND SOLUTIONS
I
RANDOM EVENTS
1. RELATIONS AMONG RANDOM EVENTS

1.1 By definition A ∪ A = A, AA = A. 1.2 The event A is a particular case of B. 1.3 B = A₆, C = A₅. 1.4 (a) A certain event U, (b) an impossible event V. 1.5 (a) At least one book is taken, (b) at least one volume from each of the three complete works is taken, (c) one book from the first work or three books from the second, or one from the first and three from the second, (d) two volumes from the first and second works are taken, (e) at least one volume from the third work and one volume from the first work and three from the second, or one from the second and three from the first. 1.6 The selected number ends with 5. 1.7 A means that all items are good, B̄ means that one or none of them is defective. 1.8 Using the properties of events (B ∪ B = B, BB = B, B ∪ B̄ = U, BU = B, BB̄ = V, B ∪ V = B), we get A = BC. 1.9 (a) A means reaching the interior of the region S_A, Ā means hitting the exterior of S_A. Then A ∪ B = U; that is, A = V, B = U. (b) AB means reaching the region S_AB common to S_A and S_B; Ā means falling outside S_A. Then AB = V; that is, A = U, B = V. (c) AB means reaching the common region S_AB; A ∪ B means hitting S_{A∪B}; S_AB = S_{A∪B} only if S_A = S_B; that is, A = B. 1.10 X = B. 1.11 Use the equalities A = AB ∪ AB̄, B̄ = AB̄ ∪ ĀB̄. 1.12 The equivalence is shown by passing to the complementary events. The equalities are proved by passage from n to n + 1. 1.13 No, since Ā ∪ B̄ is the complement of AB (De Morgan). 1.14 Use De Morgan's equality: A ∪ B is the complement of ĀB̄. 1.15 C means a tie. 1.16 C = A(B₁ ∪ B₂), C̄ = Ā ∪ B̄₁B̄₂. 1.17 D = A(B₁ ∪ B₂ ∪ B₃ ∪ B₄)(C₁ ∪ C₂), D̄ = Ā ∪ B̄₁B̄₂B̄₃B̄₄ ∪ C̄₁C̄₂. 1.18 C = (A₁ ∪ A₂)(B₁B₂ ∪ B₁B₃ ∪ B₂B₃).

2.
A DIRECT METHOD FOR EVALUATING PROBABILITIES
2.1 p = m/n. 2.2 4/9. 2.3 p = 0.25, since the first card may belong to any suit. 2.4 1/6⁵ ≈ 0.00013. 2.5 23/240. 2.6 The succession of draws under such conditions is immaterial and therefore p = 2/9.
2.7 One may consider that for control the items are taken from the total lot; p = (n − k)/(n + m − k). 2.8 One may consider one-digit numbers: (a) 0.2, (b) 0.4, (c) 0.04. 2.9 (a) N = a + 10b. This condition is satisfied only if a is even and a + b is divisible by 9; p = 1/18. (b) N = a + 10b + 100c. This number should be divisible by 4 and by 9; that is, a + b + c is divisible by 9 and a + 2b is divisible by 4 (m = 22); p = 11/360. 2.10 p = (10·9·8·7·6)/10⁵ ≈ 0.3. 2.11 p = (8·7!·3!)/10! = 1/15. 2.12 p ≈ 0.302.
2.13
2.14
(a)
5
9'
2
9'
(b)
7
9"
(c)
2.15 p =
c~c~-s
c~+m .
c~
2.16 Pk = -Ck (k = 1, 2, 3, 4, 5), P1 = 0.0556, P2 = 0.0025, Ps = 0.85 ·10-\ 80
P4 = 0.2·10- 5, p 5 = 0.2·10- 7 •
2.17
(a)
c1cn-1 2 2n-2 C~n
2.18 p =
= __ n_,
2n - 1
C~+~-m. c~+k
(b) 2
2.19 p =
1
c2cn-2 2
= ~.
2n-2
C~n
2n - 1
c~g3~Cl =
0.0029.
52
2.20 n = C~ 6 = 7140. The favorable combinations: (a) (7, 7, 7); (b) (9, 9, 3), (9, 6, 6); (c) (2, 8, 11), (2, 9, 10), (3, 7, 11), (3, 8, 10), (4, 6, 11), (4, 7, 10), (4, 8, 9), (6, 7, 8) and, therefore, m = 4 + 2·4·C~ + 4 3 ·8 = 564;p = 0.079.
(b)
=
C§C~ + CgC~
p
era
=
2_. 24
2.22 It is necessary to get n - m nickels from 2n buyers. The number of possible cases is CBn-m; p = 1 - (N/CBn-.m), where N is the number of cases when it is impossible to sell 2n tickets, N = 2:f,;r N;, N 1 = C~n--mc 2 m+ 1 > is the number of cases in which the first nickel came from the (2m + 2)nd buyer, N 2 = C~n-!(2,} + 3 > is the number of cases in which the first nickel came not later than from the (2m + 1)st buyer, and the second nickel from the (2m + 4)th buyer and so on; 1 n-m p = 1 - cn-m c~l-1· 2n
3.
.L
t=l
GEOMETRIC PROBABILITIES 3.1
p
= 1-
y;I
3.2 p
=
3 9.5 ~ 0.316.
3.3 p = 1 -
v'3
2
~
0.134.
3.4 Construction: AB is a segment of length 2h, C is the center of the disk. AD and BE are tangents to the disk, located on one side of the line AC. The triangles ADC and BEC coincide by rotation with angle q; = LDCE; therefore, LACB = q;, h = /tan(q;/2);p = (1/7T)arctan(h//). 3 .5 p 3.6
=
1 _ (1 _
(a) 0.0185,
2r : d) (1 _ 2r : d) .
(b) p = 16 ~
0;0~5 7T
= 0.076.
3.7 (a) 0.16, (b) 0.6. 3.8 x is the distance from the shore to the boat andy (with the corresponding sign) from the boat to the course of the ship. Possible values: x ~ 1 · v; for y < 0,
x + y ~ 1· v, for y < 0 IYl able values: IYI ~ (1/3)v; p
3.9
~ x (vis = 5(9.
the speed of the boat, 1
k(2 - k).
3.10 x values: IY-
=
xl
3.11 Two segments x, y. Possible values: 0 f/2, y ~ /(2, X+ y ~ /(2; p = 1(4.
X~
3.12 Two arcs x, y. Possible values: 0 7rR, y ~ 7rR, X+ y ~ 7rR; p = 1/4.
+
x a
~a, y ~a, ~ l, p = 1 -
y
~
3.14
~
~
x
+
y ~ l.
x + y
~
(x
Segments x, y, z. Possible values: 0 X + Z ~ y, y + Z ~ x; p = 1/2.
X
~
AL, y = AM. Possible values: 0 ~ x,p = 0.75.
X ~
3.13
1 hour). The favor-
+
~
y)
~
The favorable
l. Favorable values:
27rR. Favorable values: ~
(x, y, z)
l. Favorable values:
z,
AM = x, MN = y. Possible values~ 0 ~ x + y ~ l. Favorable values: x + y ~ l - a. For l/3 ~a~ l/2, p = [1 - (3a/l)]2; for l/2 ~ 3[1 - (a/1)]2.
3.15 x is an arbitrary instant, 0 ~ x ~ 12 minutes. The instants of arrival of a bus belonging to line A : x = 0, 4, 8; the instants of arrival of a bus of line B: y, y + 6, where 0 ~ y ~ 4. (a) Favorable values: for 0 < y ~ 2, we have y < x ~ 4, 6 + y ~ x ~ 12; for y > 2, we have y < x < 8 or y + 6 < x < 12; p = 2/3. (b) Favorable values: 2 ~ x ~ 4, 6 ~ x ~ 8, 10 ~ x ~ 12, 4 + y ~ x ~ 6 + y; for y < 2 we have 0 < x ~ y and for y > 2, y - 2 ~ x ~ y; p = 2/3. 3.16 0
~
y
~
x, y are the times of arrival of the ships. Possible values: 0 24. Favorable values: y - x ~ 1, x - y ~ 2; p = 0.121.
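The value p = 0.121 in 3.16 is the area of the strip {y − x ≤ 1, x − y ≤ 2} inside the 24 × 24 square of possible arrival times; its complement is two right triangles. An exact computation (function and parameter names are ours):

```python
from fractions import Fraction

def meeting_probability(T=24, a=1, b=2):
    """Exact area computation for problem 3.16: arrivals x, y uniform on
    [0, T]; favorable region {y - x <= a, x - y <= b}. The complement
    consists of two right triangles with legs (T - a) and (T - b)."""
    favorable = Fraction(T * T) - Fraction((T - a) ** 2 + (T - b) ** 2, 2)
    return favorable / (T * T)
```

The result is 139/1152 ≈ 0.1207, i.e. the 0.121 quoted above.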
-f)
3.17 p = 1 - ( 1 -
~
x
~
24,
2 •
3.18 x is the distance from the shore to the first ship, and y the distance to the second ship. Possible values: 0 ~ (x, y) ~ L. The favorable region lx - Yl ~ dV 1 + (v 2 (v 1 ) 2 is obtained by passage to the relative motion (the first ship remains fixed and the second ship moves with speed v = v2 - v1 ); for L ~ dV 1 + (v 2 /v 1 ) 2 , p = 1 - [1 - (d/L)V1 + (v 2 /v 1 ) 2 ]2; for L ~ dV1 + (v 2 /v 1 ) 2 , p = 1. 3.19 (a) p = 1 - (19/20) 2 = 0.0975, (b) x, y, z are the coordinates of the inflection points. Possible values: 0 ~ (x, y, z) ~ 200. Favorable values: lx - Yl ~ 10, lx - zl ~ 10, Iy - zl ~ 10; p = 1 - (180/200)3 = 0.271. _ 27rR 2 (1 - cos ex) _ . 2 ex 3.20 p sm 2. 47rR 2 3.21 p
= {
R2
r"
L~~3 cos cp dcp dif;}: { 2R
2
l f 2
13
"
cos cp dcp dif; }
=
0.21.
3.22 x is the distance from the midpoint of the needle to the nearest line and cp is the angle made by the line with the needle. Possible values: 0 ~ x ~ L/2, 0 ~ cp ~ 7r. Favorable values: x ~ (l /2) sin rp, p = 2l/L7r.
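The Buffon probability 2l/(Lπ) of 3.22 can be checked by simulating exactly the pair (x, φ) described in the solution; a minimal sketch assuming l = 1, L = 2 (so the exact value is 1/π):

```python
import math
import random

def buffon_estimate(l=1.0, L=2.0, n=200_000, seed=7):
    """Monte Carlo version of problem 3.22: drop a needle of length l (l <= L)
    on parallel lines spaced L apart; a crossing occurs when the distance x of
    the midpoint to the nearest line satisfies x <= (l/2)*sin(phi)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(0.0, L / 2)        # midpoint distance to nearest line
        phi = rng.uniform(0.0, math.pi)    # angle between needle and the lines
        if x <= (l / 2) * math.sin(phi):
            hits += 1
    return hits / n
```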
3.23
Possible values: n2,
lal
~
n,
lbl
~
m.
(a) Favorable values: b ~
(n
a2 da =
Form~
p
=! + 2
P
_1_
2nm
1 = 1 - -2 nm
Jo
lm vb o
!
2
+ !!:._.
db = 1 -
6m
-v-m
311'
a2 •
The roots will be positive if a:;::;: 0, b;;. 0. For m;;. n 2 , p = n 2 /12m; form:;::;: n2 , = 1/4 - V m/6n. (b) The roots of the equation will be real if b2 + a 3 :;::;: 0. The region for favorable values of the coefficients: a :;::;: 0, b2 :;::;: - a 3 • For n3 :;::;: m2 , 1 n312 p = -a 312 da = · 2nm o 5m
p
ln
p
=
! 2
_1-
rm b213 db
2nm )o
=
!
2
(1 - 0.6
m2/3).
n
3.24 Let A and B be the positions of the moving point and the center of the circle, u and v their velocity vectors and r the distance AB. From the point B we construct a circle of radius R. We consider that (3 > 0 if the vector v lies to the left of the line AB, -7T :;::;: (3 :;: ;: 7T. From the point A we construct tangents to the circle of radius R. The point A reaches the interior of the circle if the relative velocity vector falls into the resulting sector whose angle is 2e, e = arcsin (R/r). From A we construct the vector - v. Let 0 be the endpoint of this vector. From 0 we draw a circle whose radius coincides in magnitude with the velocity of the point A. The point A will lie in the circle only if the vector u - v lies in the sector. Let u > v. Then the required probability will be (Figure 41) p = a/27T. To determine a, we set 8 = LOCA, x = LOCD, y = LODC, y = LADO. Then a= 2e + 8 - y. Using the equalities
sin y v
sin ((3 - e) u
and
sin 8 v
sin ((3 + e), u
we obtain p =
L{ + 2e
arcsin
[~sin ((3
+
e)]- arcsin
[~sin ((3
- e)]}
The present formula is valid for any (3. For v > u, the problem may be solved similarly, but in this case one should consider several cases: (1) 1(31 ;;. e + (7r/2), p = 0. (2) (7r/2) + e :;::;: lf31 ;;. e: (a) for u :;::;: v sin (1(31 - e), we shall have p = 0, (b) for v sin (1(31 - e) :;::;: u :;::;: v sin (1(31 + e), we have p =
~arccos [~sin (1(31 - e)],
FIGURE 41
(c) for u > v sin p
(3) I.BI
~
(1,81
+ e), we shall have
=~{arccos [~sin(I,BI-
e: (a) for u
~ v
sin (e -
v sin (e -
e)]- arccos
[~sin(I,BI
+e)]}·
1,81), we shall have p = 1, 1,81) ~ u ~ v sin (e + 1,81),
(b) for
we shall have p = 1
(c) for u > v sin (e +
= 1-
p
4.
-~arccos [~sin (e
-
1,81)],
1,81), we shall have
~{arccos [~sin(e -1,81)]
+arccos
[~sin(e
+ I,BI)]}·
CONDITIONAL PROBABILITY. THE MULTIPLICATION THEOREM FOR PROBABILITIES n
4.1
p
= 1 - 0.3 ·0.2 = 0.94.
4.2 p = 1 -
TI c1
- Pk).
k=1
4.3 p = (1 - 0.2) 3 = 0.512.
4.4 2 4.5 p = 1 - (1 - 0.3)(1 - 0.2 ) = 0.328. 4.7
1 - 0.5n ;;. 0.9; n ;;. 4.
4.9
p
4.10
4.8
0.251. 4.6 p(l - p)n- 1. 1 - (1 - p) 4 = 0.5, p ;:::: 0.159.
S~; ) 4 729 = (wR2 = 256w4 = 0.029.
P = (1- ;2)(1- ;2)(1-
;2)(1-;2)(1- 1 ~ 2 )
···= : 2 ;:::: o.6o8. 1
4.11 From the incompatibility of the events it follows that P(A I B) = 0 and P(B I A) = 0; that is, the events are dependent. 4.12 P1P2· 4.13 p = 0.7·0.9 12 = 0.197. 4.14 p = 0.7 2 (1 - 0.6 2) = 0.314.
4.15
0.75.
4.16 p 1 = 0.9·0.8·0.7·0.9;:::: 0.45, p 2 = 0.7 2 ·0.8;:::: 0.39. 4.17
(a) 0,1 = (PtP3)n; that is, n = -1/(logp1 logp 3), (b) P = 1 - (1 - P1P3) 3(1 - P2P4) 3.
4.18
It follows from the equality P(A)P(B I A)
4.19 p = 4.21 4.22 4.23
2(~r[1- (~rJ.
9 8 7 (a) p = 1 - 10 . 9.8 = 0.3,
=
P(B)P(A I B).
4.20 p =
(b) p = 1 -
~+~+~·1
4 3 2 5 . 4 .3 =
=
3~0·
0.6.
_ 1 _ (n - m)! (n - k)!. n! (n- m- k)!
p-
1 (39) 3 39,997! 39,000! ( ) 1 a P = - 40,000! 38,997! ;:::: - 40 = 0 ·073 '
1 For solution see Yaglom, A. M., and Yaglom, I. M.: Challenging Mathematical Problems with Elementary Solutions. San Francisco, Holden-Day, Inc., 1964. Problem 92, p. 29 and solution to Problem 92, pp. 202-209.
(b) 0.5 ≥ [(40,000 − N)(39,999 − N)(39,998 − N)]/(40,000·39,999·39,998) ≈ [(40,000 − N)/40,000]³; N ≥ 8,252.
(100,000 - 170) (100,000 - 2 ·170) ( ) - 1 a P 100,000 · (100,000- 170) x · · ·
4 · 24
X
=
(100,000- 60·170- 10·230) (100,000- 59·170- 10·230)
1- (100,000- 60·170- 10·230)
0125 . ,
=
100,000 (b)
=
Psup
1- (100,000- 5·170- 230)(100,000- 11·170- 2·230) (100,000 - 5 ·170) (100,000 - 11 ·170 - 230) (1 00,000 - 59 ·170 - 10. 230)
(1 00,000 - 59· 170 - 9 · 230)
X
(c) P = 1 - (1 - Psup)(l - Preg), Preg = 1 - l P(A)
4.25
P(B)
=
=
=
P(C)
1 - p -
Psup
X •.•
~ 0 ·0246 '
= 0.1029.
!. 2
P(A I B)= P(B I A) = P(C I A) = P(A I C) = P(B I C)= P(C I B)=
i·
that is, the events are pairwise independent; P(A
I BC) =
P(B I AC) = P(C I AB) = 1,
that is, the events are not independent in the set. No (see for example, Problem 4.25).
4.26 428
=
·
4.27 p
=
n !/nn.
z.!!:...
n . (n- 1) (n- 1)_ ... !.1 = 2(n!) 2 _ 2n (2n - 1) (2n - 2) (2n - 3) 2 (2n)!
p
4.29 P
= CgCfo ClC~ C§Cg C~Ci l = 35 5! 10! = O 081 c315 c312 c39 c36 · · 15 ,.
4. 3 0 p
= --
C~C~ C~-1C~-1 ... --=--="-..o.:.:..._..:::. 1· C~-
4.31 p = (1/n)·[1/(n − 1)] ··· {1/[n − (k − 1)]} = (n − k)!/n!.

4.32 p = (1·3·5···99)/(2·4·6···100) = 100!/(2¹⁰⁰·(50!)²) ≈ 0.08.
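The two expressions in 4.32 can be checked to agree exactly, and to give ≈ 0.08, with rational arithmetic:

```python
from fractions import Fraction
from math import factorial

# Left-hand side of 4.32: the product (1/2)(3/4)...(99/100)
p_product = Fraction(1)
for k in range(1, 51):
    p_product *= Fraction(2 * k - 1, 2 * k)

# Right-hand side: 100! / (2**100 * (50!)**2)
p_closed = Fraction(factorial(100), 2 ** 100 * factorial(50) ** 2)
```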
4.33 Let a₁, a₂, ..., aₙ be the buyers who have five-dollar bills and b₁, b₂, ..., bₘ those with ten-dollar bills, and suppose that their numbers coincide with their order in the line. The event Aₖ means that one will have to wait for change only because of buyer bₖ (k = 1, 2, ..., m);

p = ∏ₖ₌₁ᵐ P(Āₖ) = [n/(n + 1)]·[(n − 1)/n] ··· [(n − m + 1)/(n − m + 2)] = (n − m + 1)/(n + 1).

4.34 It may be solved as one solves Problem 4.33;

P(Āₖ) = 1 − 2/(n − 2k + 3),   p = ∏ₖ₌₁ᵐ P(Āₖ) = (n − 2m + 1)/(n + 1).
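The answer (n − m + 1)/(n + 1) of 4.33 can be verified by brute-force enumeration of all queue orders for small n and m (the function name is ours):

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def no_wait_probability(n, m):
    """Enumerate all C(n+m, m) placements of the ten-dollar customers and
    count the queues in which the cashier always has a five for change."""
    good = 0
    for tens in combinations(range(n + m), m):
        tens = set(tens)
        fives_in_hand = 0
        ok = True
        for i in range(n + m):
            fives_in_hand += -1 if i in tens else 1
            if fives_in_hand < 0:
                ok = False
                break
        if ok:
            good += 1
    return Fraction(good, comb(n + m, m))
```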
4.35 The first ballot drawn should be cast for the first candidate. The probability of this is n/(n + m). Then the ballots must follow in succession so that the
number of drawn votes cast for the first candidate is always not smaller than for the second one. The probability of this event is (n - m)/n (see Problem 4.33); n (n - m) n- m p = (n + m) n = n + m· 5.
THE ADDITION THEOREM FOR PROBABILITIES 5.1
0.03.
5.6 p 5.7
5.2 1 -
=
P(AB)
0.55.
c117 a (Cfo +
=
= 2'!~ 1 pkj.
5.3 pk CroC§
5.4
2(r/R) 2.
5.5
+ CroC~ + Ctoc~ + CioC§C~ +
11/26.
C{oCg) ~ 0.4.
P(A) - P(AB).
5.8 P(B) = P(AB) + P(AB) = [P(A) + P(A)]P(B I A) = P(B I A). 5.9 P(B) = P(A) + P(BA) :;:, P(A). 5.10 0.323. 5.11 0.5. 5.12 npqm- 1 • 5.13 (a) 1/3, (b) 5/6. 5.14 A means that the first ticket has equal sums, B the second ticket. (a) P(A u B)= 2P(A) = 0.1105; (b) P(A u B)= 2P(A)- P 2(A) = 0.1075. 5.15 From P(A u B) ~ 1, it follows that P(B) - P(AB) ~ P(A) or P(A I B) >- 1 _ P(A) = a + b - 1 . ~ P(B) b 5.16 From Z =Xu Y, it follows that Z ~X+ I Yl, Z:;:, X- I Yl, P(Z ~ 11) :;:, P(X ~ 10 and I Yl ~ 1) = P(X ~ 10) + P(l Yl ~ 1) - P(X ~ 10 or I Yl ~ 1) :;:, 0.9 + 0.95 - 1 = 0.85, P(Z :;:, 9) :;:, 0.05, P(Z ~ 9) ~ 0.95. 5.17 0.44 and 0.35. 5.18 p(2 - p). 5.19 PB = 0.1 + 0.9·0.8·0.3 = 0.316; Pc = 0.9(0.2 + 0.8·0.7·0.4) = 0.3816.
5 20 ·
l _1_
-
P - n (n - 1)
+ (1 -
l) ln n -
n2 - n
+ 1.
n2 (n - 1)
5.21 PB ~ 0.8, Pc ~ 0.2. 5.22 G(m + n) = G(m) + [1 - G(m)]G(n I m); G(
5.23
I ) n m
P1 =
= G(n
111 l + 23 + 25
Another solution: P1
+ P2
5.24 P1
+ P2 + P3
5.25 p
+
5.26
P1
+
=
=
2
1
(1/2)p1; that is, P1 = 2/3, P2 1 . 4 1, P2 = 2 P1, P3 = 2 P2, I.e., P1 = 7, P2 =
1, p 2
1
2 p;
=
p =
m
1 P1 - '
11
+ ... = 3' P2 = 22 + 24 + ... = 3' 1
q = 1' q =
+ P2
m) - G(m).
1 - G(m)
n
+
m
=
= =
1/3. 2 1 7, P3 = 7·
2
3' P2. P = P1 '
n+m n +2m
= ---·
5.27 p 1 is the probability of hitting for the first marksman; P2 is the probability ofhittingforthesecondmarksman;p1 + P2 = 1,0.2p2 = 0.8·0.3P1;P = P1 = 0.455. 5.28 Use the condition of Problem 1.12. 5.29 If we calculate the number of identical terms, we get P
(0 Ak) 1
=
Ci;P(A1)-
C~P(A1A2) + C~P(A1A2A3)- · · · + ( -l)n- 1P(J]
Ak).
5.30 Using the equality Il~~ 1 A~c = 2:~~ 1 A~c from Problem 1.12 and the general formula for the probability of a sum of events, we obtain PCrJl
A~c)
=
1-
tt P(A~c)- :~:jJ+l P(A~cA;) i
2 _nf +n~ k=l
P(A~cA;A;)-···+(-l)n-lp(fr
J=k+l i=J+l
k=1
Ak)}·
Ak
However, according to Problem 1.12 we have Il~~ 1 = .L~~1 A~c and, hence, for any s, P(Il~~ 1 A~c) = 1 - P(.L~~ 1 A~c). Also considering the equality 1 - c; + c; - ... + ( -l)n = 0, we get the formula indicated in the assumption of the problem. 5.31 Use the equality P( Ao JJ
Ak)
=
P(JJ
A~c) -
P(J]
Ak)
and the formula from the condition of Problem 5.30. n
5.32 p = k~l
(
-1)k-1
k!
.
The probability that m persons out of n will occupy their seats is = 1/m!. The probability that the remaining n - m persons will not sit in their seats is n~m ( -l)k 1 n-m(-1)k 5.33
C;;'(n - m)/n!
L -k!·
L-k,;
k~O
5.34
p=m ~c~o
•
The event A 1 means that no passenger will enter the jth car,
P(A,)
= (
1
~r,
P(A;A;)
= (1 -
~r,
..
P(A;A;As) = ( 1 -
~r
n: r.
and so on. Using the formula from the answer to Problem 5.29, we obtain p = 1-
c~(1- ~r
+ c;(1-
~r-
+ (-l)n-lc~-1(1-
1
5.35 The first player wins in the following n cases: (1) in m games he loses no game, (2) in m games he loses one but wins the (m + 1)st game, (3) in m + 1 games he loses two, but wins the (m + 2)nd game, ... , (n) in m + n - 2 games he loses n - 1 and, then, he wins the (m + n - 1)st game. p = pm(l + C~r.q + C~+lq2 + ... + C~+;-2qn-1). 5.36 The stack is divided in the ratio p 1 / p 2 of probabilities of winning for the first and second players, P1
1 1 1 2 = 21m ( 1 + 2 Cm + 2 2 Cm+l + · · · +
P2
1 (1 + 2 1 cln + 22 1 = 2n
2
1 n-1 ) 2 n-1 Cm+n-2 ' 1
Cn+l + · · · + 2m-l
cm-1 ) m+n-2 ·
5.37 The event A means that the first told the truth, B means that the fourth told the truth; p
= P(A I B)= P(A)~~ I A)_
Let PJc be the probability that (in view of double distortions) the kth liar transmitted the correct information; P1 = 1/3, P2 = 5/9, Ps = 13/27, P4 = 41/81, P(A) = p 1 , P(B I A) = Ps, P(B) = p4; P = 13/41.
5.38 We replace the convex contour by a polygon with n sides. The event Aᵢⱼ means that the line will be crossed by the ith and jth sides; A = Σᵢ₌₁ⁿ Σⱼ₌ᵢ₊₁ⁿ Aᵢⱼ, p′ = Σᵢ₌₁ⁿ Σⱼ₌ᵢ₊₁ⁿ pᵢⱼ, where pᵢⱼ = P(Aᵢⱼ); p′ = (1/2) Σₖ₌₁ⁿ pₖ′, pₖ′ = Σᵢ₌₁ⁿ pₖᵢ − pₖₖ being the probability that the parallel lines are crossed by the kth side of length lₖ. From the solution of Buffon's problem 3.22, it follows that pₖ′ = 2lₖ/(Lπ); p′ = [1/(Lπ)] Σₖ₌₁ⁿ lₖ. Since this probability is independent of the number and size of the sides, we have p = s/(Lπ).

6.
THE TOTAL PROBABILITY FORMULA

6.1 p = (11/12)·(1/11) + (1/12)·(2/11) = 13/132.   6.2 p = (3/4)·(4/9) + (1/4)·(2/9) = 7/18.
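The two small total-probability sums p = (11/12)·(1/11) + (1/12)·(2/11) and p = (3/4)·(4/9) + (1/4)·(2/9), as reconstructed here, can be verified with exact rational arithmetic:

```python
from fractions import Fraction

# 6.1: two hypotheses, P(H1) = 11/12, P(H2) = 1/12
p_61 = Fraction(11, 12) * Fraction(1, 11) + Fraction(1, 12) * Fraction(2, 11)

# 6.2: P(H1) = 3/4, P(H2) = 1/4
p_62 = Fraction(3, 4) * Fraction(4, 9) + Fraction(1, 4) * Fraction(2, 9)
```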
6.3 H 1 means that among the balls drawn there are no white balls, H 2 means that one ball is white and H 3 that both are white;
! ( m1 + m2 ) . - 2 n1 + m1 n2 + m2 H; 1 means that a white ball is drawn from the jth urn; p _
6.4
P(H11 ) = _!!!_, m+k k P(Hd = - - k ' m+ P(H. ) _ 21
P(Hd
-
m
+ k)
(m
= m
k
+
(m + 1) (m + k + 1)
k
+
(m
+
+
k) (m
m
k
+
_ m 1) - m + k'
k.
Consider P(H;1) = m
m
+
k,
P(Hd = m
k
+
k·
Then P(H;+ 1,1) = m/(m + k). Therefore p = m/(m + k). 6.5 0.7. 6.6 2/9. 6.7 0.225. 6.8 0.75. 6.9 0.332. 6.10 The event A means getting a contact. The hypothesis Hk means that a contact is possible on the kth band (k = 1, 2). Let x be the position of the center of the hole and y the point of application of the contact. P(H1) = P(15 ~ x ~ 45) = 0.3, P(H2 ) = P(60 ~ x ~ 95) = 0.35. The contact is possible on the first band if for 25 ~ x ~ 35lx - Yl ~ 5, for 15 ~ x ~ 25, 20 ~ y ~ x + 5, for 35 ~ x ~ 45x - 5 ~ y ~ 45. Thus P(A I H 1) = 1/15. Similarly, P(A I H2) = 1/14, p = 0.045. 6.11 The event A means that s calls come during the time interval 2t. The hypothesis Hk (k = 0, 1, ... , s) means that during the first interval k calls came, P(Hk) = Pt(k). The probability that s - k calls come during the second interval will be
(k
=
6.12 The hypothesis Hk means that there are k defective bulbs, P(Hk) = 1/6 0, 1, ... , 5). The event A means that all 100 bulbs are good, c100
P(A
1
Hk) =
1 ~~~-k ~ o.9k c
(k =
o,
1000
1
p
5
= 6 k~O P(A I Hk) ~ 0.78.
1, ... , 5);
6.13 The hypothesis Hₖ means that there are k white balls in the urn (k = 0, 1, ..., n); the event A means that a white ball will be drawn from the urn,

P(Hₖ) = 1/(n + 1),   P(A | Hₖ) = (k + 1)/(n + 1);   p = (n + 2)/[2(n + 1)].
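6.13 is the Laplace rule-of-succession setup; the total-probability sum can be verified exactly for a range of n (the helper name is ours):

```python
from fractions import Fraction

def white_ball_prob(n):
    """Total probability for 6.13: uniform prior 1/(n+1) over the hypotheses
    H_k (k white balls among n), with P(A | H_k) = (k + 1)/(n + 1)."""
    return sum(Fraction(1, n + 1) * Fraction(k + 1, n + 1) for k in range(n + 1))
```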
6.14 The hypothesis Hk (k = 0, 1, 2, 3) means that knew balls are taken for the first game. The event A means that three new balls are taken for the second game, P(H)
=
k
=
Cfs
(9C~
6.15 P = 141q 6 · 16 p
c~c~-k +
P(A
'
1
H)= k
8c&c~
c~-k.
Cj\ '
oo89
P =
.
.
+ 7C}) = o.58.
25 24 (25 5 5 25) 24 30.29 + 30. 29 + 30.29 . 28
=
190 203"
6.17 P(A) = P(AB) + P(AB) = P(B)P(A I B) + P(B)P(A I B). The equality is valid only in several particular cases: (a) A = V, (b) B = U (c) B = A, (d) B = A, (e) B = V, where U denotes a certain event and V an impossible one. 6.18
By the formula from Example 6.2, it follows that m
6.19
In the first region there are eight helicopters, p
~
~
13, p
~
0.67.
0.74.
7.
COMPUTATION OF THE PROBABILITIES OF HYPOTHESES AFTER A TRIAL (BAYES' FORMULA)

7.1 p = (0.1·5/6)/(0.9·1/2 + 0.1·5/6) = 5/32.
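7.1 is a one-line Bayes computation; with the numbers appearing in the solution (prior 0.9/0.1, likelihoods 1/2 and 5/6) the posterior is exactly 5/32:

```python
from fractions import Fraction

prior = {"H1": Fraction(9, 10), "H2": Fraction(1, 10)}
like = {"H1": Fraction(1, 2), "H2": Fraction(5, 6)}

# Bayes' formula: posterior of H2 given the observed event
evidence = sum(prior[h] * like[h] for h in prior)
posterior_h2 = prior["H2"] * like["H2"] / evidence
```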
7.3 The hypothesis H 1 means that the item is a standard one and H2 that it is nonstandard. The event A means that the item is found to be good; P(Hl) = 0.96, P(H2) = 0.04, P(A I Hl) = 0.98, P(A I H2) = 0.05, P(A) = 0.9428; p = P(Hl I A) = 0.998. 7.4 The hypotheses Hk (k = 0, 1, ... , 5) means that there are k defective items. The event A means that one defective item is drawn;
~·
P(Hk) =
P(A I Hk)
=
~·
P(Hk I A) = P(Hk):(c:) I Hk).
The most probable hypothesis is H 5 ; that is, there are five defective items. 1 7.5 P(Ha I A) = 6 _0 _78 = 0.214 (see Problem 6.12). 7.6 The event A denotes the win of player D; the hypothesis Hk (k means that the opponent was player B or C; P(Hk) =
l 2;
P(A I Hl)
=
0.6
X
0.3 + (1 - 0.18)
X
0.7
X
=
1, 2)
0.5;
P(Hl I A) = 0.59; 0.3 + (1 - 0.06)0.4 X 0.7; P(H2 I A) = 0.41. 7.7 The second group. 7.8 The event A means that two marksmen score a hit, Hk means that the kth marksman fails; 6 p = P(H3 I A) = 13' P(A I H2)
=
0.2
X
7.9
The event A means that the boar is killed by the second bullet; 3
I P(Hk).
P(A) =
k~1
The hypothesis Hk means that the kth marksman hit (k = 1, 2, 3); P(H1) = 0.048, P(H2) = 0.128, P(H3) = 0.288, P(H1 I A)= 0.103, P(H2 A)= 0.277, P(H3 I A)= 0.620. 7.10 The fourth part. 7.12 The events are: M 1 that the first twin is a boy, M 2 that the second is also a boy. The hypotheses are: H 1 that both are boys, H 2 that there are a boy and a girl; 1
P(M1)
= a
+
z1 [1
- (a
+ b)];
7.13 A~c means that the kth child born is a boy and B~c that it is a girl (k = 1, 2); P(A1A2) + P(B1B2) + 2P(A1B2) = 1, P(A1A2 + B1B2) = 4P(A 1B2). Therefore, P(A1A2)
2
1
1
+ P(B1B2) = 3' P(A1B2) = 6' P(A1A2) = 0.51 - 6; 103
p = P(A2I A1) = 153 ·
7.14 5/11. 7.15 One occurrence. 7.16 Hypothesis H 1 means that the first student is a junior and H2 means that he is a sophomore. A denotes the event that the second student has been studying for more time than the first, B means that the second student is in the third year. P(H1) = ~1 ' n-
P(H2) = ~1 ' n1 P(A) = (n _ l) 2 [n1Cn2 p = P(B
P(A I H 1)
+
I A)
n3) =
+
P(A I H 2) = ~1 , nn3 P(AB) = n _ 1 ;
n2 + n 3 , n-1
=
n2n3], 1
-
n1
1
+-
n2
7.17 1/4 and 2/11. 7.18 The hypotheses H~c (k = 0, 1, ... , 8) mean that eight out of k items are nondefective. A denotes the event that three out of four selected items are nondefective:
P(H~c)
=
~·
I A)
=
C~CL~c ----cr (k
P(H~c
P(H; I A)= 0 (j
p = P(H4
8.
3
=
1A)· 4 +
=
0, 1, 2, 8),
3, 4, 5, 6, 7),
P(A) =
1 5;
1 3 PCH5 1A)· 2 = 14 ·
EVALUATION OF PROBABILITIES OF OCCURRENCE OF AN EVENT IN REPEATED INDEPENDENT TRIALS 8.1
(a) 0.9 4
=
8.2
(a) Cro
2~ 0
0.656, = ; :6 ,
(b) 0.9 4
+ 4·0.1·0.9 3 = 0.948.
(b) 1 -
2~ 0 (1
+ Ci:o + Cra + Cia +
1)
= ::;4 ·
8.3
8.5
8.7 p = 1- (0.8 4 8.8
1.35e- 2 = 0.18, (b) p ~ 0.09. 8.6 (a) 0.163, (b) 0.353.
= C~ 00 ·0.0P·0.99 197 ~
(a) p 0.17.
0.64.
+
4·0.8 3 ·0.2
mt C~pmqn-m[1-
Wn =
+
+ 2·0.8·0.2 3 )0.7 2 ·0.6 =
5·0.8 2 ·0.2 2
~rJ
(1-
=
0.718.
~r·
1- (1-
8.9 p = 1 - (0.7 4 + 4·0.7 3 ·0.3 ·0.4) = 0.595. 8.10 Hypothesis H 1 means the probability of hitting in one shot is 1/2, H 2 means that this probability is 2/3. The event A means that 116 hits occurred. P(H1 I A) ~ 2P(H2 I A); that is, the first hypothesis is more probable. 8.11 See Table 113. TABLE
113
p
0.01
0.05
0.1
0.2
0.3
0.4
0.5
0.6
R1o: 1
0.0956
0.4013
0.6513
0.8926
0.9718
0.9940
0.9990
0.9999
8.12 8.14
0.2. Rn:
8.13 0.73. 1 - e- 0 · 02 n (n > 10). See Table 114.
~
1
TABLE
n
1
Rn;l
0.02
114
10
20
30
40
50
60
70
80
90
100
0.18
0.33
0.45
0.55
0.63
0.70
0.75
0.80
0.84
0.86
- - - - - - - - - - - -- - - -- - - -
8.15 p = 1 - 0.95 10 = 0.4. 8.17 p = P~o 8.18 (a) p =
+ I
3PMPe P~:
k~O
8.20
8.16 p
+ Pa) +
=
1 - 0.9 5 = 0.41.
3P1oP~
= 0.0935. (b) 0.243.
kPb, = 0.311,
8.19
0.488.
A denotes the event that two good items are produced. The hypothesis
Hk means that the kth worker produces the items (k = 1, 2, 3); 3
p =
L
P(Hk
k~1
8.21
(a) p
8.22 P 1 8.23
=
-
p-
=
p4
1
-v'2: = +
0.794,
Clp 4 q
+
I A)
X
(b) 3p4
Cgp 4 q 2
+
P(A -
I Hk)
4p 3
~
0.22.
+2=
0, p
C~p 3 q 3 (p 2
en 22n-k 2n-k• 1
8.24
1
+
2k- 1
=
L
m=k
Pm
=
2k -1
npk
L
m=k
c~~}qm-k.
0.614.
2p 2q) = 0.723; Pn
0.784.
8.25 The 200 w. ones (R6,1 = 0.394; R1o,2 = 0.117). 8.26 0.64. 8.27 0.2816. 8.28 Pm = nC~~\pkqm-k form ;;. k; Pm = 0 form < k. 8.29 p
=
=
0.277.
8.30 We require: 0.1 ≥ 0.8ⁿ·[1 + n/4 + n(n − 1)/32]; n ≥ 25.
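Assuming the reconstructed inequality 0.1 ≥ 0.8ⁿ(1 + n/4 + n(n − 1)/32), the threshold n = 25 can be confirmed numerically:

```python
def at_most_two_bound(n):
    """The quantity 0.8**n * (1 + n/4 + n*(n - 1)/32) from the solution of 8.30."""
    return 0.8 ** n * (1 + n / 4 + n * (n - 1) / 32)

# smallest n for which the bound drops to 0.1 or below
n = 1
while at_most_two_bound(n) > 0.1:
    n += 1
```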
8.31 We require: 0.99·5 10 = 4 10 + Cl: 04 9 + · · · + C~ 0 4 10 -n; n = 5. 8.32 P 4 ,o = 0.3024, P 4 , 1 = 0.4404, P4, 2 = 0.2144, P 4 , 3 = 0.0404, P4, 4 = 0.0024. 8.33 0.26. 8.34 0.159. 8.35 95/144. 8.36 n = 29. 8.37 n ~ 10. 8.38 n ~ 16. 8.39 8. 8.40 8. 8.41 fL = 4; p = 0.251. 8.42 fL+ = 3, fL- = 1; p = 32/81. 9.
THE MULTINOMIAL DISTRIBUTION. RECURSION FORMULAS. GENERATING FUNCTIONS 9.1 p 9.2 P 9.3
=
+
Ps; 2,2,1
2Ps; 3,2,0 = 50/243.
P3; 1,1,r + P3; 2,r o + P3; r,2,o = 0.245. 9! 1 9! (a) p = ( 3 !) 3 • 39 = 0.085, (b) p = 6 413121 =
9.4 P
= :
9.5 p
=
X
1 39 = 0.385.
°i
1 1 0.15 6 0.22 3 ·0.13 =: 0.13·10- 4 •
1 - 2(0.0664 4
+ ~ 0.2561 4 +
4·0.0664·0.256!3
+
9.6
6 · 0.0664 2 · 0.256P + 4 · 0.2561 · 0.0664 3 ) = 0.983. 12! 6! 12! 1 (a) p = 26 _6 r 2 = 0.00344, (b) p = 2 .. 21 2131 41 · 6 r 2 = 0.138.
9.7
(a) Pr = (l
+m+
+ mr + nr)! lr! mr! nr! ·(l
(/r (c) p =
9.8
n)lr +mr +n1'
1
p = Pn, P1c = P~c-r·2
+
+
(b) p = 6ph
[1rmmrnnr m + n) 1r+mr+n1
1 (1- Pie-r) 2 = 0.5; p = 0.5.
9.9 Let Pic be the probability of a tie when 2k resulting games have been played; Plc+r = (lj2)pk (k = 0, 1, ... ), Po= 1, Pn-r = (lj2)n-r; P = (1/2)Pn-r = 1/2n. 9.10 The number n should be odd. Let P1c be the probability that after 2k games the play is not terminated; Po = 1, Pic=
(~r(k
=
1, 2, ... ,n ~ 3 );
p =
~P
=
+
1
~ (~r- 3 ll 2 •
9.11 Let pₖ be the probability of ruin of the first player when he has k dollars. According to the formula of total probability, pₖ = p·pₖ₊₁ + q·pₖ₋₁. Moreover, p + q = 1, p₀ = 1, pₙ₊ₘ = 0. Consequently, q(pₖ − pₖ₋₁) = p(pₖ₊₁ − pₖ). (1) p = q. Then pₖ = 1 − kc, c = 1/(n + m); that is, pₙ = m/(n + m) for the first player, and the ruin probability of the second is n/(n + m). (2) p ≠ q. Then pₖ − pₖ₋₁ = (q/p)^(k−1)·(p₁ − 1). Summing these equalities from 1 to n and from 1 to n + m, we obtain

1 − pₙ = (1 − p₁)·[1 − (q/p)ⁿ]/[1 − q/p],   1 = 1 − pₙ₊ₘ = (1 − p₁)·[1 − (q/p)ⁿ⁺ᵐ]/[1 − q/p].

Thus,

1 − p₁ = [1 − q/p]/[1 − (q/p)ⁿ⁺ᵐ],   1 − pₙ = [1 − (q/p)ⁿ]/[1 − (q/p)ⁿ⁺ᵐ],

i.e., pₙ = [(q/p)ⁿ − (q/p)ⁿ⁺ᵐ]/[1 − (q/p)ⁿ⁺ᵐ].
9.12 P = Pm;Pm = Oform ~ n;Pn = 1/2n-l;Pm = 1/2nforn < m < 2n- I. In the general case Pm is determined from the recurrent formula 1 1 1 Pm = 2Pm-1 + 22 Pm-2 +···+ 2 n_ 1 Pm-n+l• which is obtained by the formula of total probability. In this case, the hypothesis Hk means that the first opponent of the winner wins k games; 1)n-k (k = 1,2, ... ,n- 1). Pm-k = P(Hk) ( 2 9.13 Pk is the probability that exactly k games are necessary. Fork = 1, 2, 3, 4, 5, Pk = 0, P 6 = 2p 6 = 1/2 5 , P 7 = 2Cgp 6 q = 3/2 5 , Pa = 2C~p 6 q 2 = 21/2 7 , P 9 = 7/2 5 , P 10 = 63/2 9 ; (a) R = 2J~ 1 Pk = 193/256, (b) if n is odd, then Pn = 0. For even n, Pn = (1/2)Pcn-ll/2, where Pk is the probability that after 2k games the opponents have equal numbers of points; p 5 = 0(1/2 10 = 63/2 8, Pk+l = (1/2)pk; that is, 63 63 Pk = 2k+s (k = 5, 6, ... ), Pn = 2(n/2)+ 3 •
Cr
9.14 Expand (1 - u)- 1 into a series and find the coefficient of um. 9.15 The same as in Problem 9.14. 9.16 The required probability is the constant term in the expansion of generating function 1 ( G(u) = 4 n U
1)n +2+U
(1
=
+ u)2n 4 nun ;
P
=
1 4n
en2n•
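The constant-term identity of 9.16 — p = C₂ₙⁿ/4ⁿ for the step distribution {−1: 1/4, 0: 1/2, +1: 1/4} encoded by G(u) = [(u + 2 + 1/u)/4]ⁿ — can be confirmed by direct convolution; a sketch:

```python
from fractions import Fraction
from math import comb

def return_probability(n):
    """P(sum = 0) after n i.i.d. steps with P(-1) = 1/4, P(0) = 1/2, P(+1) = 1/4,
    computed by convolving the step distribution n times."""
    dist = {0: Fraction(1)}
    step = {-1: Fraction(1, 4), 0: Fraction(1, 2), 1: Fraction(1, 4)}
    for _ in range(n):
        nxt = {}
        for s, ps in dist.items():
            for d, pd in step.items():
                nxt[s + d] = nxt.get(s + d, Fraction(0)) + ps * pd
        dist = nxt
    return dist[0]
```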
9.17 The required probability is the sum of the coefficients of u raised to powers not less than m in the expansion of the function 1 2 G(u) = ( 16 u
1
3
+ 4u + 8 + P
=
1 42n
1 4u 4n
2
k = 2n +m
1
)n
+ 16u2
(1
+
u)4n.
(4u)2n
,
C~n·
For n = m = 3,p = 0.073. 9.18 The required probability is twice the sum of the coefficients of U 4 in the expansion of the function 1 ( G(u) = 520 u
1
+-u +
) 20 3 20!
p = 2 520
=
8
1 20 20-m 520
20! m. n.'(20- m - n)l. um-n320-m-n; 316-2k k)! k! (16 - 2k)! = 0· 104 ·
2 2
m~o
k~O (4 +
n~o
'
9.19 (a) The required probability Pchamp is the sum of the coefficients of nonnegative powers of u in the expansion of the function 1 1 1) 24 (1 + u) 48 G(u) = ( 4 u + 4u + 2 = 424u24 ; 48 Pasplrant = 0.4423. Pchamp = C~8 2. ~24 (2 48 + = 0.5577, 4 24 k~24
2
cw
(b) the probability of the complementary eve]\t is the sum of the coefficients of u whose powers range from - 4 to 3 in the expansion of the function
+
9.20 function
42o
=
p
1
+
u2
+ ... +
= 1
+
c~- 1 u
6" (u
=
Using the equality 1/(1 - u)n
uB)
Rm = 10,
+
C~:;:iu 2
=
~n (Cii;-
= 0.22.
+ · · · we obtain
= :L;;'~n P~c.
+ C~Ciii-12-
c;.Ciii-s
Using the equality
· · ·).
m = 20, R2o
=
1
610 ( C§g - C loCi2)
=
0.0029.
The desired probability is the coefficient of u21 in the expansion of the
G(u) = =
1~s
+
(1
u
+ ... +
1
106 (1 - C§u 10
+
1~6
ug)s =
( \ -_ u~or
Cgu 20 - · · ·)(1
1
p = 106 (C,f6 -
9.22
k == 16
Un(l - uB)n 6n(l - u)n .
=
and the series is cut off if m - 6k < n; (b) Rm 1 + c~- 1 + · · · + C'i~l = C';, we obtain
9.21 function
2: no
= 1 - 420
(a) The required probability Pm is found with the aid of the generating G(u)
For n
23
1
u)4o u2o ;
1 (1
G(u)
+
Cf,C:f6
+
Cgu
C~Cil)
=
+
C~u 2
+ · · ·);
0.04.
(a) PN is the coefficient of uN in the expansion of the function
~nn (\-=- u:r;
G(u) =
and the series is cut off when N - ms < n; (b)
P =
1
i
+ PN-
k==n
Pk = 1
+ PN- _!_ mn
C~CJJ-2m- · · ·)
(CJJ-
CkCJJ-m +
~3 (C~
- 3) = 0.1875;
(compare with Problem 9.20). 9.23
(a) G1(u)
= ~231
(\ -=_ :
v21
4
)
3
,
(b) G2(v) =
8"3 (1 +
v) 9 ,
(c)
G2(~)
p =
1
p = 83 C§ = 0.2461;
3 ~ 3 (1
G (u)
=
G1(u) x
P
=
3 ~ 3 (Cl2 + 3C,f2) =
=
+
0.1585.
u 2))1
+
u) 12 ,
9.24 Hypothesis Hk means that the numbers of heads for the two coins first become equal after k tosses of both coins (k = 1, 2,. 0 0, n); the event A means that after n throws the numbers of heads become equal (previous equality is not excluded).
P(A)
i, P(Hk)P(A
=
k~l
I Hk),
P(A
Hk)
1
P(Hn),
P =
=
4 }_k
P(A)
=
P(A I Ho),
ca;!2k·
Consequently, C2'n = l:~~1 4kCa;!2kP(Hk)o Using successful values for n, one can findp = P(Hn)o Let R(u) = 2:k'~ 1 ukP(Hk), Q(u) = l:f~o uip;, wherepn-; = P(A I H;). Adding together the terms containing un, we obtain: n
oo
Q(u)R(u) =
2
un
n=l
-
R(u)- 1-
2
oo
Pn-kP(Hk) =
2
unPn(A) = Q(u) - 1;
n=l
k=l
~ k 1- u- k~l u
.y-- -
22k
(2n - 2)!
(2k - 2)! . lk!(k- 1)!'
p
=
22 n
1
(n- 1)! n!.
9.25 Let JL be the number of votes cast for a certain candidate. The probability of this is P~ = C~p~qn-~0 The probability that at most JL votes are cast for this candidate is a~ = l:~~o P •. The probability that among k candidates l - 1 receive at least JL votes, k - l - 1 persons get no more than JL votes and two receive JL votes each is
k'
+
2(1- 1)! (k.- l - 1)! (1 kl
p
9.26 (a)
P~ - a~)l-1a~-l-1P~;
n
= 2(1- 1)! (k.- l - 1)! ~~o P~a~-l-1(1 + P~- a~)z-1.
The probability of winning one point for the serving team is 2/3. pk
= C[5 Gr GrS+k + Cf-1Cf5 Gr Grl+k +
C~-1Cr5 Gr (~r+k
+ .. + c~=~cr51 Grk-2 (~r7-k + Cf5 Grk Gr5-k 0
or Pk
(~r 5 '~k (4k- 1Cf5 + 4k- 2Cf-1Cr5 + 4k- 3 c~-1Cr5 + · · ·
=
Q ~.. __
+ 4Cii:-1Cr5 1 + Cf5); 4k + 4 k - 1c1c1 k 14 + 4k - 2c2c2 k 14 + ... , + 4c~-lcr4 1 + Cf4)
(1)3 (2)3 1( 14
6k
(k = 0, 1, ... , 13). The numbers Pk and Qk are given in Table 115. TABLE
115
k
0
1
2
pk Qk
0000228 0000114
0000571 0000342
0001047 0000695
0001623 0001159
0002260 0.01709
0002915 0002312
0003546 0002929
k
7
8
9
10
11
12
13
0004604 0004064
0004986 0004525
0005254 0004890
0005407 0005148
0005450 0005299
0005392 0005345
3
4
5
6
/
P,, Qk
0.04118 0003524
13
(b)
= .L
Pr
k~O
Pk
13
= 0.47401,
= .L
Qr
k~O
Qk
= 0.42056.
(c) let ak be the probability of scoring 14 + k points out of 28 + 2k for the first team (serving), which wins the last ball, f3k being the analogous probability for the second team;
+ C[3Cf4 (1)4 3 (2)24 3 + ···
1)2 (2)26 a 0 = C[4 ( 3 3
+ C[3Clj ( 31)26 (2)2 3 + (1)28 3 = 0.05198, that is, (ak ao
+ {3k)
+
f3o
Pk = ~ Pk
=
+
0.10543
1
3k (ao
=
(- 1)k gk+ 1
~-
+
(ak - {3k)
f3o), f3
(ao-
(e)
Prr
= .L
( -1)k0.00148
0.10543
qk = ~
gk+1
+
( -1)k0.00148 gk+1
;
00
k~o
P = P1
(-1)k
~ (ao - f3o);
_ ao + f3o ( -1)k {3 . qk- ~ - gk+ 1 (ao- a),
a),
00
(d)
=
Pk = 0.05257,
+ Prr =
Qrr
= .L
qk
k~o
+
Q = Q1
0.52658,
= 0.05286; Qrr
= 0.47342.
II RANDOM VARIABLES 10.
THE PROBABILITY DISTRIBUTION SERIES, THE DISTRIBUTION POLYGON AND THE DISTRIBUTION FUNCTION OF A DISCRETE RANDOM VARIABLE 10.1
See Table 116. TABLE
116
x,
I
0
I
Pi
I
0.7
0.3
-------
F(x)
10.2
=
0 { 0.7
for for
x ~ 0, 0 < x ~ 1,
1
for
x > 1.
See Table 117. TABLE Xi
0
I
117 2
3
- -- - - ----- Pi
0.125
0.375
0.375
0.125
392
ANSWERS AND SOLUTIONS
F(x) =
10.3
0 0.125 ( 0.500 0.875 1
See Table 118. TABLE
jj'
0.1
10.4
X.:; 0, 0 < X.:; 1, 1 < X .:; 2, 2 < X.:; 3, X> 3.
for for for for for
(a) P(X
118
2
3
4
5
0.09
0.081
0.0729
0.6561
= m) = qm-lp =
1/2m,
(b) one experiment.
10.5 X 1 is the random number of throws for the basketball player who starts the throws and X 2 is the same for the second player; P(X1 = m) = 0.6m- 1 ·0.4m } for all m ~ 1. P(X2 = m) = 0.6m+1.o.4m-1 10.6
See Table 119. TABLE
~1--=2_ p,
I o.oo8
119
3
8
9
14
15
19
20
25
30
0.036
0.060
0.054
0.180
0.027
0.150
0.135
0.225
0.125
10.7 P(X = m) = qm-•p = 1/2m-s for all m ~ 4, since the minimal random number of inclusions is four and occurs if the first device included ceases to operate. qn- 1 for m = 0, 10.8 (a) P(X = m) = { qn-m- 1 for O<m.:;n-1; pqm- 1 for 1.:;m.:;n-1, (b) P(X = m) = { qn- 1 for m = n. 10.9
P(X
= m) = C!,'(pmqn-m
P(X P(X P(X
10.13
See Table 120.
= m) = =
k)
=
for all 0 .:; m .:; n.
1 - 2·0.25m for all m ~ 1. (1 - pjw)k- 1p/w for all k ~ 1.
10.10 10.11 10.12
= m) = (np)m/m! e-np
for all m
TABLE
z,
0
~
0.
120
1
2
3
4
3
2
4
5 32
-
10 32
-
10 32
-
00
- -- -- - - - - - - -- l
P•
-
32
-
5 32
'
-
l
32
393
ANSWERS AND SOLUTIONS 10.14
See Table 121. TABLE
I0 I
x,
2
I
3
4
5
121 6
7
8
10
9
1I
12
13
---1--1-------------------------10 3 p, I 3 6 10 15 21 28 36 45 55 63 69 73 75
===1==----------------------------------------_-_-_ 14 15 16 17 18 19 20 21 22 23 24 25 26 27 X,
·~-----1---------
IO"p,
11.
75
73
69
63
55
1
45
36
28
21
15
10
6
3
1
THE DISTRIBUTION FUNCTION AND THE PROBABILITY DENSITY FUNCTION OF A CONTINUOUS RANDOM VARIABLE 11.1 11.2
F(x)
1 if x belongs to (0, 1), ={ 0
f(x)=_}
"
if x does not belong to (0, 1).
271'
v
1
e-x 212 .
11.4
(a) p = ;•
11.5
(a) a, ·
11.6
(a) f(x) = ~ (c)
11.7
1
(b) f(x) =
(b) a
J
'2log2 -.--
loge
Te-ttr.
~
1.18a,
xm-le-xm;xo
(c) f(x)
(x;;. 0),
(b)
=
~ e- x2t2a2. a
Xp = {
-x0 In (1 - p)}i 1m,
(m;;; I xofm
(a) 10,
(b) F(x) =
2 V27T'
1
2, b =
1
[ 18 e- 1212 dt, where t 8 = log x - log
Jo
a
+
11.8
(a) a =
11.9
1 a(f3 - a) (c) P(a < X < (3) = -arctan 2 f3 · 7T' a + a 1 1 1 11.10 (a) F(x) = 2 + .;;:arctanx, a=-;=·
.;;:;
Xo
a
(b) F(x) = 7T(X2
a2);
(b) P(IXI < 1) =
1
2.
V7T'
11.11
p =
1
2'
11.12 p
=
2 3
-·
11.13 Introduce the random variable X denoting the time interval during which a tube ceases to operate. Write the differential equation for F(x) = P(X < x), the distribution function of the random variable X. The solution of this equation for x = l has the form F(l) = 1 - e-kz. 11.4
(a)
~: (6P 00
11.5 f(x) = 1 ~
- sLz
+ 3z
1
2i 8(x
- Xt).
2),
Cb) 1 -
(~
_
:r+l.
394 12.
ANSWERS AND SOLUTIONS NUMERICAL CHARACTERISTICS OF DISCRETE RANDOM VARIABLES
12.1 x- p. 12.2 Xa = 1.8, xb = 1.7, xB = 2.0; the minimal number of weighings will be in the case of system (b). 12.3 M[X] = 2, D[X] = 1.1. 12.4 To prove this it is necessary to compute M[X] = dG(u)/dulu=I. where G(u) = (q1 P1tt)(q2 P2u)(qs Pstt). 12.5 We form the generating function G(u) = (q + pu)n; M[X] = G'(l) = np. 2 n 12.6 N t~ mtkt.
+
+
+
12.7 For the first, 7/11; for the second, -7/11 coins; that is, the game is lost by the second player. 12.8 Consider a, band cas the expected wins of players A, Band C under the assumption that A wins from B. For these quantities there obtain a = (m/2) + (b/2), c = a/2, b = c/2, forming a system of equations for the unknowns a, b and c. Solving the system, we obtain a = (4/7)m, b = (l/7)m, c = (2/7)m. In the second case, we obtain for the players A, Band C, (5/14)m, (5(14)m, (2(7)m, respectively. 12.9
~2
M[A] =
+(i4 +g5) +G7 +~B)+··· m
oo
= 3 M(C) = 2 2 M[X]
1 k(l - p)~<- 1 = -·
2
= p
p
2 00
M [X] = p
m(l - p )m- 4 = 4
1 + __ P = p
m=4
12.12
M[X]
2-
oo
1<=1
12.11
oo m 3 24 1 3 m=1 2 3 m = -2 - 49 = 1 98;
m + 1 3 1 48 +265+298+... = 43 m"2;o ------sm= 4 ( 1 _ ~r = 49. 00
12.10
2 -2m -
m=2
= k(p;
D[X]
s
= =
3
+ -1 p
=
8.
[k(l - p)(p]. The series
2 00
m. '
m=l< (m- k)!
qm-k
is summed with the aid of the formula dl< 00 k S = dql< m"2;0 qm = (1- q)k+1'
where q
=
1 - p.
12.13 (a) M[m] = w,wherew = 1/(1- e-a), mation of the series, we use the formulas
~
m=
me-am = 1
_!!._ ~ e-am da m=a
(b) M[m] = w
+ l.Forsum-
_!!._ (
1 ). 1 - e a 4.55, where P1 = 0.18, Ps
=
da
12.14 M[X] = 1/[p1 + P2Ps(l - P1)] = = P2 = 0.22. 12.16 M[n] = n + m L~= 1 (l(k). 12.15 M[X] = 4(2/3). 12.17 Find the maximum of the variance as a function of the probability of occurrence of an event.
395
ANSWERS AND SOLUTIONS
12.18 /Ls = np(l - p)(l - 2p) vanishes at p = 0, p = 0.5, and p = 1. 12.19 Treat the variance as a function of probability of occurrence of an event. 12.20 In both cases the expected number of black balls in the second urn is 5, and of white balls is 4 + 1/2 10 in the first case and 4 + e- 5 in the second case. 12.21 Two dollars. 12.22 For p < 3/4. 12.23 M[X] = [(n 2 - l)j3n]a. For finding the probabilities Pk = P(X = ka) that the random length of transition equals ka, use the formula of total probabilities and take as hypothesis At the fact that the worker is at the ith machine. 12.24 q = 0.9; P 10 = 1 - q 10 ~ 0.651. 12.25 M[X] = 3/2. 12.26 12.28
M[X]
12.29
X
= =
M[X]
2::::~ 1
(1/n 2 ) =
=
n/m; D[X]
= M + M1 N N
k
+
71" 2
/6.
+
n(m
n)/m
MN1 - NM 1 N + N1
+ + N 1)]N.
N1
y = 1/2p; y = 6.5 dollars.
12.27 2•
(
1 _
.!_ N
_!_)k·
+
N1
'
limk~"' Xk = [(M + M 1 )/(N Write the equation of finite differences for the expected number of white balls Xk contained in the first urn after k experiments:
xk+1- xk = 12.30 p = C/(, 12.31
x=
(~r
(n: 1r-\
qjp; D[X]
M[X]
1
"'
= n~l 2npn = n~1
x = ~;
D[X] = m(nn-; 1).
(2n - 2)! 22 n 2[(n _ 1)!]2
~ (~)n (2~)~
n~o
13.
(~1 + ~)xk.
M1
= q 2 jp 2 + qjp, where q = 1 - p.
"'
12.32 since
M ;
4
(n.)
= (1 _
"'
(2n)!
= n~ 22n(n!)2 = co,
x)-112.
NUMERICAL CHARACTERISTICS OF CONTINUOUS RANDOM VARIABLES
=a,
f2 V3 = 3 , E = a-y·
13.1
M[X]
13.2
M[X] = 0, D[X] =
13.3
M[X] =
p~;·
13.4
D[X] =
Z'
13.5
P(a < a-) = 1 -
e-n/4,
13.6
A= 4 h~, M[V]
=
v'"
a2
D[X]
~-
!: G- ~).
D[X] =
a
Vl.
E =
P(a > -) = -n/4 P(a < ii) = 0.544 a e ' P(a > a) 0.456
~.
13.7
M[X] =
hV'" D[X] = m + 1.
13.8
M[X] =
~ x0 ,
D[X] =
D[V] =
~ x5.
~ (~- ~)· h
2
'"
=
1 19 · ·
396
ANSWERS AND SOLUTIONS 13.9
M[X] = 0, D[X] = 2.
1
13.10
A = ~a+ 1 r(a
13.11
A = 1'(a)r(b)' M[X] = a
13.12
A
rca
+
1)' M[X] =(a+ 1)~, D[X] = ~ 2 (a
+
a
b)
r(~) =.
(
v;r ~ '
J: (1
To calculate the integral x =
1).
ab
+ b'
D[X]
M[X] = 0, D[X]
)
+
= (a + b)2(a + b + 1) 1
= - -2 (n > 2). n-
+ x 2)- en+ 1>12 dx, use the change of variables
V y/(1
- y) leading to the B-function and express the latter in terms of the
13.13
A=
13.14
Use the relation
r- function.
2(n-3)/2r(n ; 1)
_ v2 r(~) ' M[X]-
r(n; 2)'
D[X] = n - 1 -
x
2•
f(x) = dF(x) = _ d[1 - F(x)] dx dx
13.15 M [T] = 1/y. Notice that p(t) is the distribution function of the random time of search (T) necessary to sight the ship. 13.16 m(t) = m 0 e-Pt. Consider the fact that the probability of decay of any fixed atom during the time interval (t, t + 11t) is pl1t and work out the differential equation for m(t). 13.17
Tn
=
(1/p)(log 2)/(log e). Use the solution of Problem 13.16.
13.18 [P(T < T)]/[P(T > T)] = 0.79; that is, the number of scientific workers who are older than the average age (among the scientific workers) is larger than that younger than the average age. The average age among the scientific workers is T = 41.25 years. (2v - 1)(2v - 3) · · · 5 · 3 ·1 13.19 m2 v = ( )( ) ( ) n" for n > 2v + 1, m 2 v + 1 = 0. n- 2 n- 4 " · n - 2v
For the calculation of integrals of the form
l
oo
o
V n[y/(1 r -function. + q) + k).
make the change of variables x express the latter in terms of the r(p -:- k)r(p
+
=
13.20
mk = r(p)r(p
13.21
M[X] = 0, D[X] = 12
13.22
P.,k
q
x2)-Cn+1l
(
x 2 " 1 +n
77 2
dx,
y)] that leads to the B-function and
1
+ 2.
k
= 2: ( -l)k-i(x)k-imh where
m; = M[X 1].
j=O k
13.23
m"
=
2 0 C~(x)"- 1p.,;, where
t~
fLJ =
M[(X- x)i].
397
ANSWERS AND SOLUTIONS 14.
POISSON'S LAW 14.1
p
=
- e- 01
;;:;
0.095.
14.2 p
34
=
4 ! e- 3
14.3 p = 1 - e- 1 ;;:; 0.63. 14.4 p = e- 0 · 5 14.5 (1) 0.95958, (2) 0.95963. 14.6 0.9. 1 500 1 1 2 1 14.8 p = I ;;:; 1 - I ;;:; 0.08.
L
0.4.
14.10
;;:;
0.17.
0.61.
14.7
0.143.
L
em~3m.
14.9
;;:;
em~om.
sk
1
=
vli·
14.12 M[X] = D[X] = (log 2)/(log e)MN0 /ATn. Work out the differential equation for the average number of particles at the instant t. Equate the average number of particles with half the initial number. The resulting equation enables one to find the probability of decay of a given particle; multiplying it by the number of particles, we get M [X]. n1o
14.13
~ 1.02·10- 10 , p = 10! e -n ~
( a)
=
(b) p
1 - e-n- ne-n;;:;
~.673,
where
where s =
15.
:2:~
1
k;. In as much as
:2:f~ 1
,\ 1
and s is finite, then
THE NORMAL DISTRIBUTION LAW 15.1
p = 0.0536.
15.2 15.3
Pbelow =
15.5
Ex
15.6
See Table 122.
0.1725, (a) 1372 sq. m., =
0.4846, (b) 0.4105.
Pmside =
10 5 F ( x)
I -65
15.4
0.3429. 22 measurements.
2p J~ Ey ;;:; 0. 78Ey.
TABLE X
Pabove =
-55
1-----;-;- ----;;;-
-45
-35
-25
122
-15
-5
+5
+15
I
+25
+35
--------2,150 8,865 25,000 50,000 75,000 91,135 97,850 I 99,650 99,965
----
398
ANSWERS AND SOLUTIONS
15.7 E ~ 39 m. The resulting transcendental equation may be more simply solved by a graphical method. 15.8
E 1 =a
J!·
15.9 (a) 0.1587, 0.0228, 0.00135; (b) 0.3173, 0.0455, 0.0027. 15.10 p ~ 0.089. 15.11 p = 0.25. 15.12 (a) 0.5196, (b) 0.1281. 15.13 M [X] = 3 items. 15.14 Not less than 30 fL. 15.15 ~ 8.6 km. 15.16 (a) 1.25 mm., (b) 0. 73 mm.
X) _ Q) (b : x) for all x > b, 1_ Q)(b : X) Q)(x: x) _Q)(a: x) -'-:-----'---_;._ _.:... for a < x < b. Q)(b :X) _Q)(a: X) (x :
15.17
15.18
16.
(a) Fa(X)
=
(b) F6(x)
=
E= Jbln 2
P
a2 (bja) ·
CHARACTERISTIC FUNCTIONS 16.1
+ pe1u, where q = 1 -
E(u) = q
p.
n
= f1 (qk + pke 1u), where Pk + qk
16.2
E(u)
16.3
E(u) = (q
+ pe 1u)n;
16.4
E(u) = 1
+
16.5
E(u) = exp {a(e 1u
= 1.
i=l
M[X]
np, D[X] = npq.
1 a(l _ elu)' M[X] = a, D[X] = a(l +a). -
1)}, M [X] = D[X] = a.
a2u2)
T
16.6
E(u) = exp ( iux -
16.7
E(u)
16.8
E(u) = iu(b - a) , mk
16.9
E(u) = 1
1
= -1 - - . , -
IU
eiub -
+
=
mk
·
= k! bk+l -
eiua
=
(k
+
ak+l
1)(b - a)
vv; e-" 2 [i- $(v)], where v = uj2h and "'( ) -
..., v -
2 (" v; )a e
z2
d
z.
Integrate by parts and then use the formulas:
l
oo e-x 2 sm . 2px dx 16.10
V; e-P
= -
0
E(u) = ( 1 -
2
~)-A,
2
Ql(p),
Joo e-px 2 cos qx dx 0
mk = .:\(,\
+
1) · · ~~,\
+
k -
= -1
2
e-q 2 14 P
J;
2
-·
p
1).
2 See Jahnke, E., and Emde, R.: Tables of Functions with Formulae and Curves. 4th rev. ed. New York, Dover Publications, Inc., 1945.
399
ANSWERS AND SOLUTIONS 16.11
E(u)
11"
e'a" cos"' dcp
= -
7T
l 0 (au). Pass to polar coordinates and use
=
0
one of the integral representations of the Bessel function. 2 16.12 E(u) = exp [ixu - a I u]. By a change of variables it is reduced to the form a eixu E(u) = eixu- 2 - - 2 dx. 7T -oo x +a The integral in this formula is computed with the aid of the theory of residues, for which it is necessary to consider the integral
I+
a
-
f
7T
(1j
etzu
---dz
z2
+
a2
over a closed contour. For positive u the integration is performed over the semicircle (closed by a diameter) in the upper half-plane and for negative n over a similar semicircle in the lower half-plane.
~2 a
Ey(u) = exp {iu(b +ax) -
16.13
16.14 f.L 2 k 16.15
= a 2 k(2k-
f(x) =
( 7T
1)!!,
a
+ x 2)
a2
2
a 2},
= 0.
/L2k+1
(the Cauchy law).
for x > 0, for x > 0, f.(x) = {0 2 for x < 0, ex for x < 0. Solve this with the aid of the theory of residues; consider separate the cases of positive and negative values of x. 16.17 P(X = k) = 2-k, where k = 1, 2, 3, .... Expand the characteristic function in a series of powers of (1(2)etu and use the analytic representation of the 8-function given in the introduction to Section 11, p. 49. f(x) ={e-x 1 0
16.16
17.
THE COMPUTATION OF THE TOTAL PROBABILITY AND THE PROBABILITY DENSITY IN TERMS OF CONDITIONAL PROBABILITY 17.1 p
= !(() 2
b_
() ) 1
( In tan ()2 2
-
()1) · In tan 2
17.2 Denoting the diameter of the circle by D and the interval between the points by l, we obtain p = D(2tl-; D)
17.3 p
=
17.4 p
=
0.15.
i [1- d>(L ~ x)] + 2L:V; x
17.5
= 0.4375.
[exp { ~2-2} -p2
-
exp
{
-)2}] + ~ [<1>(L ~ x) + <1>(;)] ~ 0.67. -p2
In both cases we get the same result, P1
17.6 p
=
1- ~ rll {1- [<1>( 2;
i
(L £2x _
=
P2 = 0.4.
r)- <1>(2~ r)]f dz.
400
ANSWERS AND SOLUTIONS 17.7
F(w)
17.8 p
=
=
nr:f(y){f:+w /(x)dx}"- 1 dy.
128 I - 45 7T 2
~
0.712.
J+_
17.9 Pt = 2:f~Yt1 rt, where rt = 17.10
00
oo
.f.(x)fp(x - x 0 ) dx.
2(2a)mo + 1 + I)! e- 2 ".
f(a I mo) = (mo
III SYSTEMS OF RANDOM VARIABLES 18.
DISTRIBUTION LAWS AND NUMERICAL CHARACTERISTICS OF SYSTEMS OF RANDOM VARIABLES
f(x. y)
18.1
~ {~ -
~
for a
a;(d- c)
~
x
b, c
~
y
~
d,
outside the rectangle, F(x, y) = F1(x)F2(y), where
F1(x) =
1 for x ~ b, { x-a b _ a for a ~ x ~ b, 0
for x
F2(y) =
{1y-c
do _ c
~a,
(b) F(x, y)
(~arctan~
+
~)(~arctan~
(a) A
18.3
f(x, y, z) = abce-(ax+by+cz).
18.4
The triangle with vertices having coordinates:
20,
(~In a~c, 0, 0);
( 0,
=
~In a~c,
0);
(a) F(i, j) = P(X < i, Y < j) = P(X
18.5
~
( 0, 0, i - 1, Y
~
d,
for c ~ y ~ d, for y
18.2
=
for y
~
c.
+
~) ·
~In a~c). ~
j -
1).
For the values of F(i, j) see Table 123. TABLE
~
0
0 1 2 3
0.202 0.202 0.202 0.202
1
J
(b) 1 - P(X
~
6, Y
2
123 3
4
5
6
- -- -------- -- -
~
1)
0.376 0.475 0.475 0.475
=
0.489 0.652 0.683 0.683
1 - 0.887
=
0.551 0.754 0.810 0.811
0.113;
0.600 0.834 0.908 0.911
0.623 0.877 0.964 0.971
0.627 0.887 0.982 1.000
401
ANSWERS AND SOLUTIONS
(c)
M[X]
= 1.947;
= 0.504; Ilk11 I =
M[Y]
11
2 ' 610 0.561
0 ' 561 0.548
1/·
18.6 18.7
+ f(u,
P = f(u, v, w): [f(u, v, w)
18.8
w, v) + f(v, u, w) + f(v, w, u) + f(w, u, v)
+ f(w,
v, u)].
F(a1, b5) + F(a2, b1) - F(a2, bs) + F(a 3 , b4) - F(as, b2) + F(a4, b2) - F(a4, b4) + F(a5, b5) - F(a5, b1). a- 6 - a- 9 + a- 12 ,
18.9 P = F(a1o h 3 ) 18.10
P = a- 3
-
18.11
P=
7TR2 4ab R2 4 ab (7T - 2{3
4~: (7T -
+
sin 2{3)
2cx - 2{3
+
+
sin 2cx
sin 2{3)
for 0
~
R
~
b,
for b
~
R
~
a,
for a
~
R
~ Va + b
for R ~ where a
=
arc cos (a/R),
f3
V a 2 + b2 ,
~:
~~) ·
18.12
(a) c =
18 _13
(a) r
18.14
Consider the expectations of the squares of the expressions
=
xy
(b) p =
{+-11
ay{X- x)
+
(
1-
for n/m < 0, for n/m > 0,
ax(Y- y)
(b)
Ux
ay
=
\!!:._\· m
ay(X- x) - ax(Y- y).
and
18.15
Make use of the reaction kxy = M [XY] - xy.
18.16
llr;;ll =
18.17
(a) M[X] =ex+ y = 0.5,
-0.5 1 -0.5
1 -0.5 0.5
0.5 -0.5
(b) M[Y] =ex+ f3 = 0.45, D[X] = (ex + y)({3 + 8) = 0.25; D[ Y] = (ex + {3)(y + 8) = 0.2475; kxy = M[XY]- M[X]M[Y] =a- (a+ y)(a 1 18.18
M[X]
=
M[Y]
=
0;
2
llkt;ll =
0 18.19
f(x,y) = cosxcosy; M[X]
I kt; I
=
I\7T
~
3
2 ,
arc cos (b/R).
=
77~ 3 ,
2
7T
~ 311·
=
0
1
2
M[Y] = ~- 1;
+ {3)
=
0.175.
402
ANSWERS AND SOLUTIONS
+ !::.l arc cos!::.]· l
18.20 p = }}__ [1 - j1 - L 2 TTL [2 l 18.21 p = -[2(a TTab
+
b) - !].
Hint. Use the formula P(A u B) = P(A) + P(b) - P(AB), where the event A means that the needle crosses the side a and B that it crosses side b.
19.
THE NORMAL DISTRIBUTION LAW IN THE PLANE AND IN SPACE. THE MULTIDIMENSIONAL NORMAL DISTRIBUTION
=
~ [ 1 + ci>(x ~xx)] [1 + ci>(Y E~
19.1
F(x, y)
19.2
f(x, y) = 182TTV3
1
x exp { 19.3
(a) c
= 1.39,
(b)
2 [(x - 26) 2 196
-3
jjk!JII =
+
(x - 26)(y 182
0.132 -0.026
II
+
-0.02611 , 0.105
12)
+
(c) Sez
(y
+
12) 2 ] } 169 ·
= 0.162.
1 = 0.00560. 2TTe 3 2
v
19.4 !(2, 2) =
19.5
y)] ·
1 f(x, y, z) = 27TV 230 7T
x exp {/max
=
27T
2 ~ 0 (39x
1 V 2307T
=
2
+ 36y 2 + 26z 2
-
44xy
+ 36xz - 38yz)};
0.00595.
19.6
(a)
llkti 1 l
=
2 -1 0 0 0
-1 2 -1 0 0
0 -1 2 -1 0
0 0 -1 2 -1
0 0 0 -1 2
0
0
0
0
0
0 0 0 0
19.7 10 0 2 0
0 10 0 2
2 0 10 0
0 2 0 10
403
ANSWERS AND SOLUTIONS
19.10
fp2R2/E2 e-P 2 (d 2 /E 2 l Jo
P(R) =
(2 d'\;-)
10 ~ e-t dt,
where l 0 (x) is the Bessel function of an imaginary argument.
1
:2'
(a) P(X < Y) =
19.12
p =
19.13
(a) Pcirc = 1 - e-P
~
[
ID( c ~x x) - ID(a ~x x) ][ID(d ~/)
19.14 P =
o.s( 1 -
= 0.2035,
exp {- p 2
19.15
P
=
2: ( exp {- p 2
19.16
A
=
4dk, ex = Ex
19.17 P =
2
1D(V~l7r)~D(V~07T)
(c) Pun=
~D(~)~D(~)
19.18 Psat = 1 - q 3
-
1
(b) Psq =
iDe~/)]
-
['(,;;)]2
=
0.0335.
= 0.2030,
= 0.1411.
~:}).
~}
J
<
;:r1
(b) P(X < 0, Y > 0) =
19.11
+
-
exp {- p2
2p2d2 3£; ,
fJ
~D(;J~D(;J
=
;~}).
Ez
J
1
2p2k"
+ 3 £~
since ex > Ex,
+ Ps) 2 +
3q 2(1 - q) - 3q[(p2
fJ
•
> Ez.
2p2p4] - P~ = 0.379,
Pexcel = P~ + 3p;(pa + P4) + 3p2p5 = 0.007, where P2 = 0.196, Ps = 0.198, P4 = 0.148, p5 = 0.055, q = 0.403.
19.19 p =
~ [iD(k)]2.
19.20 p =
~D(~)
- ~~~
exp {- P2 ;:} -
[
1D(2~)
r.
19.21
19.22 19.23 19.24 19.25
25(x1 16(x1
2: n
; =1
-
-
(x1 -
+ 36(x1 2 2) + 5x~ + 10) 2
Xt-1) 2 =
-
10)(x2 - 10)
16(x 3
+
2) 2
+
+
8(x1
[ 5 - -log n - 2 (27T) ] · loge 2
The problem has no solution for n > 12.
= 7484.6. + 2) = 805.1.
36(x 2 - 10) 2 -
2)(x 3
404 20.
ANSWERS AND SOLUTIONS DISTRIBUTION LAWS OF SUBSYSTEMS OF CONTINUOUS RANDOM VARIABLES AND CONDITIONAL DISTRIBUTION LAWS 20.1 for a1 ~ x ~ a2, b1 ~ y ~ b2,
1 f(x, y, z)
= { (a2
C1 ~ z ~ c2,
- a1)(b2 - b1)(c2 - c1)
outside the parallelepiped;
0
(b
f(y, z) = {
b ;(
2 -
1 c2 - c1
)
for b1
o
lxl
~z~
c2,
for c1 ~ z ~ c2, outside the interval. The random variables X, Y, Z are independent.
_1_ f(z) = { c2 - c1
For
b2 , c1
outside the triangle;
0
20.2
~y~
IYl
~ R,
~ R, 2 ---x--;;2 2 V~R"'
fx(x) = Fx(x)
7TR2
=
Fy(y) =
,
J
~ [arcsin~
+
~
1 - ;:]
+
~·
~
+
i j1- ~:]
+
~;
for
lxl
for
lxl =
for
lxl
[arcsini
X and Yare independent, since f(x, y) i= fx(x)fy(y).
20.3
f(y I x)
1
2VR~- x
~ ~ [S(y +
2
+
R)
S(y
~
R)]
< R, R,
> R,
ll(z) being the ll-function. 20.4 X and Y are uncorrelated.
20.5
(a) f(x, y)
( b) f() x x
(c)
f(y
~ ~
inside the square,
{
outside the square. Inside the square;
= avl2-
I x) =
a2
2lxl ' Jy'() = avl2- 2IYI. Y a2 , 1
aV2 -
2lxl'
f(x
I y) =
1 aV2 -
21i
a2 (d) D[X] = D[Y) = 12 ,
(e) the random variables X and Yare dependent but uncorrelated.
405
ANSWERS AND SOLUTIONS
lzl
20.6 < R.
fz(z) = [3(R
2- z2)/4R3] for lzl
20.7
k = 4.fx(x) f(y I x) = fy(y), M[X]
20.8
M[X]
=
D[X] =
< R, f(x, y
I z)
=
1/[1r(R
2- z2)]
for
2xe-x 2 (x ~ O),fy(y) = 2ye-Y 2 (y ~ O),f(x I y) =fAx), M[ Y] = v:;;./2, D[X] = D[ Y] = 1 - 7T/4, kxy = 0.
= =
J_"'"'
M[X I y]fy(y) dy;
J_"'"'
D[X 1 y]fy(y) dy
+ J_"',
{x- M[X 1 y]} 2fy(y) dy.
20.9 Since M[X] = 5, M[Y] = -2, ax= a, ay = 2a, r = -0.8, it follows that: (a) M[XI y] = 5- 0.8/2(y + 2) = 4.2- 0.4y, M[YI x] = - 2 - 0.8 x 2(x - 5) = 6 - 1.6x, axlY = 0.6a, ay 1 x = 1.2a, (b) fx(x) =
2 . 11 _ exp { - (x 2- 25) } av 27T a
(c) f(x
I y) =
f( y
I x) =
20.10
,
~ 27T exp {- (x + ~% ~ . a +
1.6x - 6) 2 } 2 2.88a
Aj~exp{ -(a- !:)x
fx(x) =
•
4 · 2) 2 } •
0.6a
1 exp { - (y 1.2aV 27T
2 .11 - exp { - (y 8+ 22) } 2av 27T a
fy(y) =
2},
•
Aj~exp{ -(c-
fy(y) =
For the independence of X and Y it is necessary that
V ac exp { _ b 2 4
1rA
(xc
2
_
4xy b
+
y 2 )} = 1 .
This condition is satisfied for b = 0. In this case A 20.11
k
f(y
20.12
3V3 k
=-
7T '
I x)
1 f(x 18 '
J;
=
I y)
= -XY
a =
V ac/7T.
2
= - e-<2x+l.5y)
-;;
2
'
e-<x+syJ2.
1
(a) fx(x) = 40-/27T exp
{
-
(x - 125) 2 } 3200 '
1 { (y + 30) 2 } (b) fy(y) = 30-/27T exp 1800 '
1 { (x - 149) 2 } (c) f(x I 0) = 32-/27T exp 2048 , ' (d) j(y I 25)
20.13
M[X I y]
20.14 f(r) = 20.15
fr(r) =
1 { (y + 75) 2 } 24V27T exp 1152 .
0.8y
= 2 2r . _
1
=
~v~
+
exp
{
149, M[Y I x] r2 } - ~ 2a ,
r=
=
OA5x- 86.25.
M(R)
=
. 4a 1_
v~
·
:b exp { -~ (~2 + ;2 )}/o[~ (:2- ;2) ]}•
r
where / 0 (x) is the Bessel function of zero order of an imaginary argument; f
+ si~: rp)
1.
:;)y
2}·
406
ANSWERS AND SOLUTIONS
IV NUMERICAL CHARACTERISTICS AND DISTRIBUTION LAWS OF FUNCTIONS OF RANDOM VARIABLES 21.
NUMERICAL CHARACTERISTICS OF FUNCTIONS OF RANDOM VARIABLES 21.1
4a/w.
21.4
l M[cp] = arctan h
21.2 w(a/2). h In - 21
21.3 (1
+
[2)
h2
M[G] •
=
4.1 g., D[G] = 0.32 g 2 •
407
ANSWERS AND SOLUTIONS 21.6
M[Y]
21.5
40/7T em.
21.9
(n - 2)pq 2 (for n ;;. 3).
21.12
~rJ
(1 -
= 1.
3/7T.
.
21.15
n [1 -
21.17
f= T[1- exp{-a(l- e-am,
s= 21.18
21.7
n[1- (1-
~rJ
Jo
+
M[R] =
21.13
>.w.
21.16
n[l - (1 - p)m].
(n- k)[1-
a 2 /2.
EV;.
21.10
2p
+ exp{-a(1-
kT 2 [1- 2exp{-a(l- e-a)}
21.8
1.15 m.
(1-
~
n
e- 2a)}] 2.
krJP;;'(k),
where P;;'(k) is the probability that after the first series of cycles exactly k units will be damaged at least once;
i (-1)tC~ [1 -
C~
P;;'(k) =
21.19
(a) mp
+ 2 {(m -
2k)p
k; =0
i)]
p(n - k + m• n k[1 - (1 - p) 2]}P;;'(k) + P;;'(5)
t=o 4
+
+ 2P;;'(6) + P;;'(7)[1 -
x [3 - (1 - p) 2 - 2(1 - p) 3 ] X
[1 - (1 - p) 4 ]
(1 - p) 8 ],
where P;;'(k) = Cnpk(l - p)n-k for n = m = 8, (b) 2mp for n > 2m. a2
b + V a 2 + b2 b 2 a + V a 2 + b2 1 ..;--a + 6a In b + 3 a 2 + b2 •
21.20
6 b In
21.21
-2b a + b 2 + 3 a 2· ab2 In a + V ba 2 + b2 + a 2 3-a 22b 2 V 3
2
21.23
0.316g.
+ 225b 2 -
21.25
M[Z] = Sa; D[Z] = 100a2
21.26
M[Y]
21.27
M[ Y] = exp {- x(l - cos b)} cos (x sin b),
=
D[ Y] =
Ex_, D[Y] pV7T
~
[1
=
E~ p
+ exp {- x(l
- cos 2b)} cos (x sin 2b)] - y2.
21.29
M[Z] = 2a
21.30
M[Z] = 5(V3 - 1), D[Z] = 7600.
21.31
rxy
21.32
M[Z] = 0, D[Z] = 2L'. 2a 2 .
21.34
r
=
(b) 22.0 sq. m.,
E V7T, - D[Z] --7T 2p
(c) 10 sq. m. =
8) + -4p£22 (4- 7T).
a 2( 3 - 7T
n!!/V(2n- 2)!!, if n is even; rxy
a( 1
+
e;), D[R]
=
150ab.
2
(a) 26.7 sq. m.,
=
l /3; [2 /18.
(l_l). 7T
21.28
J2
21.24
a; 2 ( 1 -
21.33
~)
=
0 if n is odd. la
7T '
(l2 - 7T2..i.).
a2
(where e is the eccentricity).
408 22.
ANSWERS AND SOLUTIONS THE DISTRIBUTION LAWS OF FUNCTIONS OF RANDOM VARIABLES 22.1 Fx(y Fy(y)
~b)
{
=
for a> 0,
b)
(y -
for a < 0.
1 - Fx - a
22.2 22.3
fy(y)
=
fx(eY)eY.
z } 1 exp { - 2aa2 fz(z) = { ~v 211"az
for z > 0, for z.:;; 0.
22.4
:-v;
2} { -p 2(y) 2p --exp
fy(y) = {
for y ;. 0,
E
for y < 0. 22.5 11" 1ry sin { 0 fy(y) =
for
1
2 .:;;
for y <
2
y .:;; ; arctan e,
1
2 or
2
y > ; arctan e.
22.6 for 0 < v .:;; a3 ,
•
for v < 0 or v > a 3 • 22.7 fx(x)
1
l
=; [2 + x 2
(-CXJ.:;; x.:;; CXJ).
22.8 f~(y)
1
= { 7rY a2 - y2 0
22.9
(a) fy(y)
= 31r[l +
for IYl < a (the arcsine law)
for IYI ;;>a. 1 (1 _ y)sl2](l _ y)21s'
(b) if a > 0, then v~
fy(y)
= { ~(a + y)Vy
for y ;. 0, for y < 0;
if a < 0, then
{ fy(y) =
~(a
-Va for + y)Vy
y .:;; 0,
for y > 0; (c)
for
IYI.:;; ~·
for
IYI
h(y)- { :
> ::.
2
409
ANSWERS AND SOLUTIONS 22.10 For an odd n,
for even n, 2ay
for y > 0,
fy(y) = { mr(a2
0 22.11
(a) fy(y) = Iyle-Y 2
(
for·y:;;;: 0.
-oo :;;;: y :;;;: oo),
for y ~ 0, " 0 .or y < .
2ye-Y 2
(b) fy{y) = { 0
22.12
rck + 1.5) cos 2 k+.1. y --::::'---:__ Jy(y) = { v 7T rck + 1) 0
22.13
(a) fy(y) =
v1277 exp {
- y2} 2 ,
for
IYI :; ;: ~·
for
IYI
> ~-
1 { (y -a~y)2} · (b) fy(y) = ay V 2 277 exp -
22.14 fu(y) = {
r ) = 22.16 J.(z 22.17
.11 -exp { a;v 27T
(a) J.(z)
(b) f.(z)
22.18
= ("'! )a
1 = -2
X
1 for 0 :;;;: y :;;;: 1, 0 for y < 0 or y > 1 .
Z2} , -p a.
!(x, : _) dx - Jo ! !(x, !.) dx, X
exp { -lzl},
- ro X
(c) f,(z)
(d) f,(z) =
V~77 exp { -~}·
(a) f.(z) =
Sa'" yf(zy, y) dy -
(b) f,(z) = (1
J.(z)
f"'
X
= -
1 - exp
2axay
{-J:.L}, GxGy
yf(zy, y) dy,
2z
+
z2)2'
r(~) (c) f.(z) =
h 2 ax+ 2 ay.2 were a.=
2
r(~)-v;
(1
+ z 2)-
= 17 - -=[z-2-+_a.:...Y(.,..-:-:):--2::-] (Cauchy's distribution law).
410
ANSWERS AND SOLUTIONS 22.19
(a) fr(r) = r
J'
V 1
J
f(r cos r:p, r sin rp) dr:p;
r2
-r
= (b)
fr(r)
r
2 " 0
={
x2
-
+
f(x, -
for 0
0
for r < 0;
={
fr(r)
(d)
x 2)
-
~ 2 exp {- p 2 (~ r} 2r
(c)
Vr2
[f(x,
fr(r)
~
for 0
0
for r > a or r < 0;
a
= ar2 exp {
~
~
~r
r
-
x 2 )] dx
< oo,
a,
h ('h)
r + } la a 2 -~ 2
Vr2
2
'
where l 0 (z) is the Bessel function of zero order of imaginary argument;
22.20
_r_ exp { _ r 2 (a~ 4
/,( , r) =
( e)
axay
2- 2a~)]. 2+2a~)}la [r (a~ 4axay
2
axay
= (X-
x) cos ex+ (Y- ji) sin a; . V = (X- x) sm a+ (Y- ji) cos a;
U
a~
=
a~
cos 2 ex
+
a~
sin 2 a
+
tan 2cx
=
raxay
2 - 2- -2 ; ax- ay
raxay sin 2cx;
22.21 ' (2fa(cx) = { 4 0
for lcxl ~ 2,
l•ll
for lcxl > 2;
~ {:~In IPI
f,CJii
for 1,81
~
1,
for 1,81 >1. 22.22
f(t, r:p) = 2 7T
v 1t -
r~y
t 2 (1 - rxy s:n 2rp)}· 2(1 - rxY)
exp {
For rxy = 0,
f(s
I t) is the probability density of a normal distribution with parameters
M [S I t] = So
+ Vat + ii ~;
D[S I t] = D[So]
+
t4
t 2 D[Vo] (
22.24
- 2 fy(y) -
+4
D[a]
+
n)n/2
2
r(~)an y
n- 1
{
exp
+
2tks 0 v0
r
ny2j
-
2a2
t 2 ks 0 a
+
t 3 kvoa·
411
ANSWERS AND SOLUTIONS
The characteristic function of the random variable XJ if a 2 = 1, x1 = 0 is Ex/t) = (1 - 2t) -1/2. Then, the characteristic function of the random variable U = 2:7 ~ 1 x; will be Eu(t) = (1 - 2t)-n 12 and the probability density f(U) = __!_
j"'
27T -
etut(1 _ 2t)-nt• dt = 00
1
r(~)2nl2
u
If the random variables X; have the same variance a 2 and .X; = 0, then the random variable a2 U
Y=+
Consequently, fy{y)
=
Jn.
fu[rp(y)]I.Jl(y)l, where rp(y) = y 2 n/a 2 •
22.25
22.27
23.
f(r, &) = r 2 cos {}
J 2 " 0
f(r cos{} cos q;, r cos{} sin q;, r sin&) dq;.
THE CHARACTERISTIC FUNCTIONS OF SYSTEMS AND FUNCTIONS OF RANDOM VARIABLES 23.1
Make use of the fact that for independent random variables
23.2
E.(u) = Exl,X2o····xn(u, u, .. . , u).
23.5
Ey(U)
=
23.6
Ey(u)
= (1 + iu)- 1;
eiu(a +b)
.
-
eiub
zua
·
m, = M[Y'] = ( -1)'r!
23.7 Ey(u) = J 0 (au), where J 0 (x) = (1/277) f~" e1x cos"' dq; is the Bessel function of first kind of zero order;
J
oo
fy(y) =
J 0 (au) cos uy du =
-oo
23.9
Ex 1 ,x 2 , .. .. xn (ul>
V
1 · a 2 - y2
U2, . . . , Un)
=
23.10
7T
exp {ai
m~1
Urn- ; •
mt U~
2 . n(n + 1) E y( u) = exp { zu - n(2n 6 + 1) u•} · 2
-
aa 2
:~: UmUm+1}·
412
ANSWERS AND SOLUTIONS 23.12
(a) M [Xf X~ X}] (b) M[(X~ -
=
-
kts
+
k~s)
+
a6 ,
- .a2)] = 8k1 2k1 3k2s.
a 2 )(X~
= 0.
23.13
M[X1X2Xs]
23.14
M[X1X2XsX4] = k12ks4
23.15
For the proof, make use of the expansion of the characteristic function
+
k13k2•
in an infinite power series of uh u2, ... , 23.16
where E(u 1 , variables. 23.17
+
k14k2s·
Un.
For the proof, use the property
•.• ,
Un)
is the characteristic function of a system of normal random
E(ul, u2) = (plelulsl X
24.
+ 1a2(kt2 +
8k12k1sk2s
a 2 )(X~
+
+ q2el(ul +u2)S2)N2 + qsel(ul +u2)S2)Na(p4elu2sl + q4elu2S2)N •;
qle!ulS2)Nl(p2el(ul +u2)Sl
(psel
CONVOLUTION OF DISTRIBUTION LAWS 24.1 for z :s;; 1a,
0
fz(z) =
z - 1a (b - a) 2 1b - z (b - a) 2 0
for 1a :s;; z :s;; a for a
+b
for z
~
+ b,
:s;; z :s;; 1b,
1b.
24.2 0
for z
x+y+a+b-z for x 4ab fz(z)
=
for
1b
+y+b- a
fz(z)
=
[<~>(z 1 1(b - a)
where
-
a Ux
x+y -
:s;; z :s;; x
b :s;; z :s;;
+y- a- b
for z :s;;
0
x + y + a + b,
x+y+a-
a+b-x-y+z for x 4ab
24.3
~
x+y+
:s;; z :s;; x
x) _<~>(z -
b Ux
b - a,
+y +a-
a - b.
2exp {- 2t2} dt.
= V227T Jof
+ y + a + b,
x)] ,
b,
413
ANSWERS AND SOLUTIONS 24.4
f.(z) =
0
for z,;;; 3a,
(z - 3a) 2 2(b - a) 3
for 3a ,;;; z ,;;; 2a + b,
3a) 2 - 3[z - (b + 2a)] 2 for 2a + b ,;;; z ,;;; a + 2b, 2(b - a) 3
(z -
(3b - z) 2 2(b - a) 3
for a+ 2b,;;; z,;;; 3b,
0
for z ;. 3b.
24.5 The convolution of the normal distribution law with the uniform probability law has the probability density
Equating the expectation and variance for J.(z) and for the probability density of the normal distribution law we obtain,
f~(z)
) j ''( 2 z
=
{ - (z 22 - z) .11- e x p
a2 V
27T
2
}
,
Uz
where
z=
2x,
If x = 0, then the relative error of such a substitution at the point z = 0 is
~ 07
= /.(O) -
f~(O) 100 07
/.(0)
/o
/o
TABLE
(Table 124).
124
l
E
2E
3E
4E
~%
-0.30
-3.02
-9.70
-17.10
24.6
fz(z)
1
= :;;. -:-1-+----;-f2n:(c-z---c)'""2'
where c = a + b, I= hk/(h + k). (For solution make use of the characteristic functions of the random variables X and Y.) 24.7
f.(z)
=
2 7T2
z
sh z ·
24.8 f.(z) = {
24.9
fr(r)
r = v'X exp
{
e- 2 ' 3 (1 - e- 216 ) 0
for z ;. 0, for z < 0.
2 2 11 - (k +4 ~k22)r } la [ 4rll v• ; (k22 - ku) 2 + 4k 21 2] '
414
ANSWERS AND SOLUTIONS
where Io(z) is the Bessel function of zero order; Ll =
I ku
k21
k 11 = 21[2 p 2 a1
24.12
I"JI[X] = Fn(x)
+
a22
+
k121· k22 ' (b22 - a22)"2] sm a ;
k22 =
2 ~ 2 [b~
k12 =
4 ~ 2 (a~ - b~) sin 2a =
+ b~ + (a~- b~) sin 2 a];
_!_ + _!_ - 2; D[X] = _!_
P1
P2
k21 .
(J.. - 1) + P2_!_ (_!_P2 -
P1 P1
1{(1 - Pl)P2[1 - (1 - Pl)n] P2- P1 - (1 - P2)P1[1 - (1 - P2)n]}.
=-
24.13
The required reserve resistance is 0.37 ·ih = 7.4 kg.
24.14
(a) P =
4~
lL
x [exp{ (b) p
= 0.75 -
-p2(L ~ xr}- exp{ -p 2 (~r}]}•
(L 2-;_ x)
x [ exp { _
P(XA
> XB)
=
~
- 2L:v;.
P2(L ~ xr} - exp {- (p ~ r}] x
24.15
[1 + rD( )~1-/~J)] ·
24.16
/.(z) = (m- 1)!
24.17
F.(z) = _!_b [a(l - b)(1 - an) - b(1 - a)(l - bn)].
24.18
See Table 125.
e-1\z
a-
TABLE
24.20 24.21
z,
0
P(Z = z,)
6
1
1 11
24
125
.
2
+
4~
J: <j)(Y ~ X)<j)(Y - ~ + X)
zm-l)._m
24.19
1),
3
4
1
1
4
24
1 12
P(Z = m) = (2 a~m e- 2a. m. The random variable Y has a binomial distribution. F.(n) = P(Z < n) = 1 - (nj2n - 1) (n = 1, 2, ... ).
dy.
415
ANSWERS AND SOLUTIONS 25.
THE LINEARIZATION OF FUNCTIONS OF RANDOM VARIABLES 25.1
EQ :::::: 9100 cal.
25 ' 2
D[Q]:::::;
1:mi
[(~r + (~r + (7r
pm
_ 2ava~rvz
+
2ama:rmz] .
J3l 25.3
E. : : :;
Jr2En2
(-t
E~ w2e4
(t
ml
+ w w + + + --------~----.-===========~--~----~~
1 )2[-2£2z we
J_+ r
a~
25.4
(r +
2
zr
( _
w
w12c-)2Ero2]
1 )2 1- -we
aa,
+
N2(r +
zr
],
-
e
where j = - - - · w r + N
25.5 E :::::: 66.66 m.; Ey :::::: 38.60 m. :::::: 0.52 m./sec. 25.7 For the assumed conditions the function V1 = - V cos q cannot be linearized. 25.8 ax :=: : 23.1 m.; ay :=: : 14.3 m., a 2 :=::::25m. 25.9 ax = ay :=: : 8.66 m.; a2 :=: : 7.05 m. 25.10 Ev :::::; [U/(q + w)]Ero. 25.11 Eh = 43 m. 25.12 a. :::::; w-a. 25.13 Eh :::::: 12.98 m. 25.14 The standard deviation of errors in determination of distance by the formula using the data of the radar station is ::::::22.85 m.
~ cp"(x)D[X],
25.15
y :::::; cp(x) +
25.16
M[S]:::::;
a: (1 -1) siny,
D[S] :::::;
a2b2 [a~ cos 2 y + a4 { (1 + 4
bj~; sin 25.17
Ex =
2
&
a '1/
+
D[ Y] :::::: [cp'(x)]2D[X]
E~ cos
2
5 ] cos 2 Y) + 12 a~ cos 2 y
=
.
b a).
&
;x
+ ~ [cp"(x)] 2 D 2 [X].
arc sin(-= sin
a 2 - b 2 sin 2 & ex 25.18 (a) By retaining the first two terms of the expansion in the Taylor series of the function Y = 1I X, we obtain y :::::: - 0.2, D [ Y] :::::: 0.16; (b) By retaining the first three terms of the expansion in the Taylor series of the function Y = 1/X, we obtain y:::::: -1.00, D[Y] :::::: 1.44. 24.19 (a) By the exact formulas
v=
4; ' (3a? +
r 2 ),
t
D [ V] = 16
[3a~ +
12r2 ai + 3r 4 a?];
(b) according to the formulas of the linearization method
25.20 (a) Measuring the height of the cone, we get D[V] :::::: 47T2 , measuring the length of the generator, we get D[V] :::::; 3.577T2 • 25.21 19.9 mg.
(b) by
416
ANSWERS AND SOLUTIONS 2 L 25.22 E g -_ 41T ~
25.23 26.
J
m _ 4.67 em.Isec..
n24T 2 (Et2 + 2Et'2) + L 2
(1 D[Z] ~ 96(1
-
2
k)27T
+
k).
THE CONVOLUTION OF TWO-DIMENSIONAL AND THREEDIMENSIONAL NORMAL DISTRIBUTION LAWS BY USE OF THE NOTION OF DEVIATION VECTORS
26.1 A normal distribution law with principal semi-axes of the unit ellipse a = 48.4 m., b = 12.4 m., making angles of 19°40′ and 109°40′ with the deviation vector c₁.
26.2 For γ = 0, a degenerate normal law (deviation vector) c₁ + c₂ = 50 m. For γ = 90°, a normal distribution law with principal semi-axes of the unit ellipse a = c₁ = 30 m., b = c₂ = 40 m., coinciding with the directions of the deviation vectors.
26.3 The principal semi-axes a = 1.2 m., b = 1.1 m. make angles of 33° and 123° with the x-axis.
26.4 The principal semi-axes a = b = 100 m.; that is, the total dispersion is circular.
26.5 a = 30.8 m., b = 26.0 m., α = 18°15′.
26.6 (a) a = b = 25√5 m., (b) a = 68.9 m., b = 38.8 m., α = 15°.
26.7 From the system of equations for the conjugate semi-diameters m and n, m² + n² = a² + b², mn = ab/sin γ, we find m = 20 m., n = 15 m. and p = Φ̂(·)Φ̂(·) = 0.566.
26.8 |m| = 73.2 m., |n| = 68.1 m., ε = 74°21′.
26.9 (a) f(x, y) = 1.17·10⁻⁵ exp{−7.06·10⁻²(0.295x² − 0.670xy + 1.31y²)}, (b) a = 126.5 m., b = 53.8 m., α = 12°10′.
26.10 a = 880 m., b = 257 m., α = 39°12′.
26.11 The distribution law is defined by two error vectors (Figure 42):
a₁ = CC₁ = BE_p sin β₂ / sin²(β₁ + β₂),  a₂ = CC₂ = BE_p sin β₁ / sin²(β₁ + β₂),

FIGURE 42
α₁ = π − β₂, α₂ = β₁, as a consequence of which
k₁₁ = B²E_p²(sin²β₁ cos²β₁ + sin²β₂ cos²β₂)/[2ρ² sin⁴(β₁ + β₂)],
k₂₂ = B²E_p²(sin⁴β₁ + sin⁴β₂)/[2ρ² sin⁴(β₁ + β₂)],
k₁₂ = B²E_p²(sin³β₁ cos β₁ − sin³β₂ cos β₂)/[2ρ² sin⁴(β₁ + β₂)],
tan 2α = (sin²β₁ sin 2β₁ − sin²β₂ sin 2β₂)/(sin²β₁ cos 2β₁ + sin²β₂ cos 2β₂).
26.12 a = 18.0 km., b = 7.39 km., α = 85°36′.
26.13 To the error vectors a₁ and a₂ one should add another error vector
a₃ = E_B √(sin²β₂ + cos²β₂)/sin(β₁ + β₂),
making the angle α₃ = β₂ with the base, which gives at the point C a unit ellipse of errors with principal semi-axes a = 41.2 m., b = 19.7 m., making with the direction of the base the angles 74°20′ and 164°20′.
26.14 E_v = 2.1 m./sec., E_q = 0.042 rad.
26.15 a = 156 m., b = 139 m.; the principal semi-axes are directed along the course of the ship.
26.16 a = 64.0 m., b = c = 78.1 m.; the semi-axis a is directed along the course of the ship.
26.17 f(x, y, z) = [1/(120(2π)^{3/2})] exp{−[(x − 45)²/50 + (y − 15)²/32 + (z + 75)²/72]}.
26.18 The equation of the unit ellipsoid is
(x − 30)²/2100 + y²/1125 + z²/64 = 1.
26.19
          |  7421  −2568  −7597 |
‖k_jl‖ =  | −2568   8406   2322 |
          | −7597   2322   9672 |
26.20 p = −1.47·10⁷, q = −8.9·10⁹, φ = 65°45′, u₁ = 4106, u₂ = −622, u₃ = −3484; a = 89.3, b = 57.0, c = 19.3; cos(a, x) = ±0.6179, cos(a, y) = +0.3528, cos(a, z) = +0.7025.
FIGURE 43
FIGURE 44
26.21 If we take as the x-axis (Figure 43) the direction BK₂ and as the y-axis the direction perpendicular to it, then by the linearization method we find three error vectors. From this we find:
k₁₁ = (E_D²/2ρ²)[D₁² cos²α/(D₁² − H²) + D₂²/(D₂² − H²)],
k₁₂ = (E_D²/2ρ²) · D₁² sin α cos α/(D₁² − H²),
k₂₂ = (1/2ρ²)[E_D² D₁² sin²α/(D₁² − H²) + E_H²(D₂² − H²)].
26.22 The error vectors a₂ and a₃ remain the same in magnitude and direction as in the preceding problem. The magnitude of the error vector a₁, caused by the error in the distance D₁, and its direction α₁ = ∠K₁K₂B are determined from the formulas (Figure 44):
sin α₁ = (1/λ) sin α,
where
λ = √(1 + (2D₁ sin ε tan ε cos α + D₁² sin²ε tan²ε)/(D₁² sin²ε)).
V  ENTROPY AND INFORMATION

27. THE ENTROPY OF RANDOM EVENTS AND VARIABLES

27.1 Since H₁ − H₂ = −0.733 < 0, the outcome of the experiment for the first urn is more certain.
27.2 p = 1/2.
27.3 H₁ = 0.297 decimal unit,
H₂ = −(3√3/4π) log(3√3/4π) − (1 − 3√3/4π) log(1 − 3√3/4π) = 0.295 decimal unit;
that is, the uncertainties are practically the same.
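The entropies in 27.2–27.3 are values of the two-outcome entropy −p log p − (1 − p) log(1 − p); a minimal sketch in Python, using the decimal logarithms of the text:

```python
import math

def binary_entropy(p, base=10):
    """Entropy -p*log(p) - (1-p)*log(1-p) of a two-outcome experiment."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p, base) + (1 - p) * math.log(1 - p, base))

# Probability from the second case of 27.3: p = 3*sqrt(3)/(4*pi)
p = 3 * math.sqrt(3) / (4 * math.pi)
print(binary_entropy(p))  # close to the 0.295 decimal units quoted above
```

The function peaks at p = 1/2 (log 2 ≈ 0.301 decimal units), which is why the two values above are nearly equal.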
27.4 (a) H = −cos²(π/n) log_e cos²(π/n) − sin²(π/n) log_e sin²(π/n), (b) n = 4.
27.5 Since P(X = k) = p(1 − p)^{k−1}, then
H[X] = −[p log_e p + (1 − p) log_e (1 − p)]/p.
When p decreases from 1 to 0, the entropy increases monotonically from 0 to ∞.
27.6 (a) H[X] = −n[p log_e p + (1 − p) log_e (1 − p)], (b) H[X] = 1.5 log_e 2.
27.7 (a) log_e (d − c), (b) log_e (σ_x √(2πe)), (c) log_e (e/c).
27.8 H[X] = log_e (0.5√(2πe)).
27.9 H[X | y] = H_y[X] = log_e (σ_x √(2πe(1 − r²))),
H[Y | x] = H_x[Y] = log_e (σ_y √(2πe(1 − r²))),
where σ_x and σ_y are the standard deviations and r is the correlation coefficient between X and Y.
27.10 H = log_e √((2πe)ⁿ|k|), where |k| is the determinant of the covariance matrix.
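The closed form log_e(σ√(2πe)) used in 27.7(b) and 27.9 can be cross-checked by direct numerical integration of −∫ f ln f dx; a sketch (the value σ = 2 is an arbitrary test choice):

```python
import math

def normal_entropy_closed_form(sigma):
    # H[X] = ln(sigma * sqrt(2*pi*e)) for a normal random variable
    return math.log(sigma * math.sqrt(2 * math.pi * math.e))

def normal_entropy_numeric(sigma, steps=200000, span=12.0):
    # Midpoint-rule approximation of -∫ f ln f dx over x ± span·σ
    h = 0.0
    dx = 2 * span * sigma / steps
    for i in range(steps):
        x = -span * sigma + (i + 0.5) * dx
        f = math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))
        if f > 0:
            h -= f * math.log(f) * dx
    return h

print(abs(normal_entropy_closed_form(2.0) - normal_entropy_numeric(2.0)) < 1e-6)  # True
```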
27.11 H_x[Y] = H[Y] − H[X] + H_y[X].
27.12 The uniform distribution law:
f(x) = 1/(b − a) for a ≤ x ≤ b;  f(x) = 0 for x < a or x > b.
27.13 The exponential distribution law:
f(x) = (1/M[X]) exp{−x/M[X]} for x ≥ 0;  f(x) = 0 for x < 0.
27.14 f(x) = (1/√(2πm₂)) exp{−x²/(2m₂)}.
27.15 The normal law:
f(x₁, x₂, …, xₙ) = [1/√((2π)ⁿ|k|)] exp{−(1/2|k|) Σ_{j,l} A_{jl}(x_j − M[X_j])(x_l − M[X_l])},
where |k| is the determinant of the covariance matrix and A_{jl} are its cofactors.
27.16 p₁ᵢ = α/k, p₂ᵢ = (1 − α)/n.
27.17 log_a 1050 and log_a 30.
27.18 H[Y₁, Y₂, …, Yₙ] − H[X₁, X₂, …, Xₙ]
= ∫_{−∞}^∞ ⋯ ∫_{−∞}^∞ f_x(x₁, x₂, …, xₙ) log_a |J| dx₁ ⋯ dxₙ,
where J = |∂φ_k/∂x_j| is the Jacobian of the transformation from (Y₁, Y₂, …, Yₙ) to (X₁, X₂, …, Xₙ).
27.19 (a) The logarithm of the absolute value of the determinant |a|.
28. THE QUANTITY OF INFORMATION

28.1 (a) 5 binary units, (b) 5 binary units, (c) 3 binary units.
28.2 For a number of coins satisfying the inequality 3^{k−1} < N ≤ 3^k, k weighings are necessary. For k = 5, one may find the counterfeit coin if the total number of coins does not exceed 243.
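The bound in 28.2 is the information-theoretic one: each weighing has three outcomes, so k weighings distinguish at most 3^k cases. A sketch:

```python
def weighings_needed(n_coins):
    """Smallest k with 3**(k-1) < N <= 3**k, i.e. the ternary bound of answer 28.2."""
    k = 1
    while 3 ** k < n_coins:
        k += 1
    return k

print(weighings_needed(243))  # 5: five weighings suffice for up to 3**5 = 243 coins
print(weighings_needed(244))  # 6
```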
28.3 I = 500(−0.51 log₂ 0.51 − 0.31 log₂ 0.31 − 0.12 log₂ 0.12 − 0.06 log₂ 0.06) = 815 binary units.
28.4 The first experiment gives the amount of information
I₁ = H₀ − H₁ = log₂ N − (1/N)[k log₂ k + (N − k) log₂ (N − k)],
and the second experiment
I₂ = H₁ − H₂ = (1/N)[k log₂ k + (N − k) log₂ (N − k)]
− (1/N)[l log₂ l + (k − l) log₂ (k − l) + r log₂ r + (N − k − r) log₂ (N − k − r)].
28.5 The minimal number of tests is three; for example, in the sequences No. 6, No. 5 and No. 3. Hint: Determine the amount of information given by each test and select as the first test one of those that maximizes the amount of information. Similarly, select the numbers of successive tests until the entropy of the system vanishes. To compute the amount of information, use the answer to the preceding problem.
28.6 I/T = −Σᵢ P(Aᵢ) log₂ P(Aᵢ) / Σⱼ P(aⱼ)tⱼ, where P(aⱼ) = P(Aⱼ) if the code word aⱼ corresponds to the symbol Aⱼ of the alphabet.
For code No. 1, I/T = 1.782/5.85 = 0.304 binary units per time unit.
For code No. 2, I/T = 1.782/6.30 = 0.283 binary units per time unit.
28.7 For a more efficient code, the symbols of the code arranged in the order of increasing duration should correspond to the symbols of the alphabet arranged in the order of decreasing probability; that is, the symbols d, c, b and a of the code should correspond to the symbols A₁, A₄, A₃ and A₂. The efficiency of such a code is
I/T_min = 1.782/3.91 = 0.455 binary units per time unit.
28.8 The redundancy is
l = 1 + (0.8 log₂ 0.8 + 0.1 log₂ 0.1 + 0.1 log₂ 0.1)/log₂ 3 = 1 − H/H_max = 0.42.
28.9 (a) See Table 126.

TABLE 126
Letters            A     B
Probabilities      0.8   0.2
Coded notations    1     0

(b) See Table 127.

TABLE 127
Letter combinations   AA     BA     AB     BB
Probabilities         0.64   0.16   0.16   0.04
Coded notations       1      01     001    000

(c) See Table 128.

TABLE 128
Letter combinations   AAA    AAB    ABA    BAA    ABB    BAB    BBA    BBB
Probabilities         0.512  0.128  0.128  0.128  0.032  0.032  0.032  0.008
Coded notations       1      011    010    001    00011  00010  00001  00000

The efficiencies of the codes are, respectively:
(a) 0.722/1 = 0.722, (b) 1.444/1.56 = 0.926, (c) 2.166/2.184 = 0.992.
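The three efficiencies can be recomputed as (block length × source entropy) divided by the mean code length per block; a Python sketch (the probabilities P(A) = 0.8, P(B) = 0.2 and the code words are those of Tables 126–128):

```python
import math

# Block codes from Tables 126-128 for the source A (p = 0.8), B (p = 0.2).
codes = {
    1: {"A": "1", "B": "0"},
    2: {"AA": "1", "BA": "01", "AB": "001", "BB": "000"},
    3: {"AAA": "1", "AAB": "011", "ABA": "010", "BAA": "001",
        "ABB": "00011", "BAB": "00010", "BBA": "00001", "BBB": "00000"},
}

def efficiency(block_len):
    p = {"A": 0.8, "B": 0.2}
    h1 = -sum(q * math.log2(q) for q in p.values())      # entropy per source letter
    avg = sum(
        math.prod(p[ch] for ch in word) * len(code)
        for word, code in codes[block_len].items()
    )                                                    # mean code length per block
    return block_len * h1 / avg

for n in (1, 2, 3):
    print(round(efficiency(n), 3))  # 0.722, 0.926, 0.992
```

Coding longer blocks pushes the efficiency toward 1, in line with Shannon's source-coding theorem.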
28.10 (a) P(1) = 0.8, P(0) = 0.2, l_a = 1 − 0.722 = 0.278;
(b) P(1) = 0.96/1.56 = 0.615, P(0) = 0.385, l_a = 1 − 0.962 = 0.038;
(c) P(1) = 1.152/2.184 = 0.528, P(0) = 0.472, l_a = 1 − 0.9977 = 0.0023.
28.11 (a) See Tables 129 and 130.

TABLE 129
Letters            A     B     C
Probabilities      0.7   0.2   0.1
Coded notations    1     01    00

TABLE 130
Two-letter combinations   AA     AB     BA     AC     CA     BB     BC     CB      CC
Probabilities             0.49   0.14   0.14   0.07   0.07   0.04   0.02   0.02    0.01
Coded notations           1      011    010    0011   0010   0001   00001  000001  000000

(b) The efficiencies of the codes are 0.890 and 0.993, respectively. (c) The redundancies of the codes are 0.109 and 0.0007, respectively.
28.12 See Table 131.
TABLE 131
Letters and coded notations:
о 111     е 110     а 1011    и 1010    т 1001    н 1000    с 0111    р 0110
в 01011   л 01010   к 01001   м 01000   д 00111   п 00101   у 001101  я 001100
ы 001001  з 001000  ь,ъ 000111  б 000110  г 000101  ч 000100  й 000011  х 0000101
ж 0000100  ю 0000011  ш 0000010  ц 00000011  щ 00000010  э 00000001  ф 000000001
28.13 Use the fact that the coded notation of the letter A_j consists of k_j symbols.
28.14 In the absence of noise, the amount of information is the entropy of the input communication system:
I = −P(A₁) log₂ P(A₁) − P(A₂) log₂ P(A₂) = 1 binary unit.
In the presence of noise, I = 0.919 binary unit; it decreases by an amount equal to the average conditional entropy,
−P(a₁)[P(A₁ | a₁) log₂ P(A₁ | a₁) + P(A₂ | a₁) log₂ P(A₂ | a₁)]
− P(a₂)[P(A₁ | a₂) log₂ P(A₁ | a₂) + P(A₂ | a₂) log₂ P(A₂ | a₂)].
28.15 If the noise is absent, I = H₁ = log₂ m; when the noise is present,
I = H₁ − H₂ = log₂ m + p log₂ p + q log₂ [q/(m − 1)].
28.16 I = log₂ m + Σᵢ Σⱼ P(aⱼ)P(Aᵢ | aⱼ) log₂ P(Aᵢ | aⱼ).
VI  THE LIMIT THEOREMS

29. THE LAW OF LARGE NUMBERS

29.1 (a) P(|X − x̄| ≥ 4E) ≤ 0.1375, (b) P(|X − x̄| ≥ 3σ) ≤ 1/9.
29.2 It is proved in the same manner as Chebyshev's inequality. For the proof, make use of the obvious inequality
∫_Q f(x) dx ≤ ∫_Q e^{tx − t²} f(x) dx,
where Q is the set of all x satisfying the condition x ≥ t (so that e^{tx − t²} ≥ 1 on Q).
29.3 Using arguments analogous to those in the proof of the Chebyshev inequality, one obtains the chain of inequalities
P(X ≥ ε) ≤ e^{−aε} ∫_{e^{ax} ≥ e^{aε}} e^{ax} dF(x) ≤ e^{−aε} M[e^{aX}].
29.4
Use the Chebyshev inequality and note that x̄ = m + 1 and M[X²] = (m + 1)(m + 2); hence,
P(0 < X < 2(m + 1)) = P(|X − x̄| < m + 1) > 1 − D[X]/(m + 1)².
29.5 Denoting by Xₙ the random number of occurrences of the event A in n experiments, we have
P(|Xₙ − 500| < 100) > 1 − 250/100² = 0.975.
Consequently, all questions may be answered "yes."
29.6 The random variables X_k are mutually independent and have equal expectations x̄_k = 0 and finite variances D[X_k].
29.7 For s < 1/2, since in this case
lim_{n→∞} D[(1/n) Σ_{k=1}^n X_k] = lim_{n→∞} (1/n²) Σ_{k=1}^n k^{2s} = 0.
29.8 lim_{n→∞} (1/n²) ln (n!) = lim_{n→∞} (1/n²) ln {n^{n+1/2} e^{−n} √(2π)}
= lim_{n→∞} (1/n²) {(n + 1/2) ln n − n + ln √(2π)} = lim_{n→∞} (ln n)/n = 0,
which proves the applicability of the law of large numbers.
29.9 (a) Not satisfied, since
lim_{n→∞} D[(1/n) Σ_{k=1}^n X_k] = lim_{n→∞} 4(4ⁿ − 1)/(3n²) = ∞;
(b) satisfied;
(c) not satisfied, since
lim_{n→∞} D[(1/n) Σ_{k=1}^n X_k] ≥ lim_{n→∞} n(n + 1)/(2n²) = 1/2.
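The Chebyshev bound in 29.5 is quite loose; a Monte Carlo sketch (taking n = 1000 trials with p = 0.5 is an assumption consistent with x̄ = 500 and D = 250):

```python
import random

# Answer 29.5: Chebyshev gives P(|X_n - 500| < 100) > 1 - 250/100**2 = 0.975.
random.seed(1)

def within_band(trials=2000):
    hits = 0
    for _ in range(trials):
        x = sum(random.random() < 0.5 for _ in range(1000))  # one run of 1000 trials
        hits += abs(x - 500) < 100
    return hits / trials

print(within_band() > 0.975)  # True; the actual probability is far above the bound
```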
29.10 Applicable, since the inequality
0 ≤ D[(1/n) Σ_{k=1}^n X_k] ≤ (1/n²) Σ_{k=1}^n D[X_k] < C/n,
where C is the upper bound of D[X_k] for all k = 1, 2, …, n, holds for k_{jl} < 0. The relation
lim_{n→∞} D[(1/n) Σ_{k=1}^n X_k] = 0
follows from the inequality.
29.11 To prove this, it suffices to estimate D[(1/n) Σ_{k=1}^n X_k]. Replacing all a_k by their maximal value b, we obtain
D[(1/n) Σ_{k=1}^n X_k] < 3b²/n;
hence it follows immediately that
lim_{n→∞} D[(1/n) Σ_{k=1}^n X_k] = 0.
29.12 Applicable, since all the assumptions of Khinchin's theorem are satisfied.
29.13 Consider
D[Zₙ] = D[(1/n) Σ_{j=1}^n X_j] = (1/n²)|Σᵢ Σⱼ r_{ij}σᵢσⱼ| ≤ (σ²/n²) Σᵢ Σⱼ |r_{ij}|,
where σⱼ is the standard deviation of the random variable Xⱼ. Since r_{ij} → 0 for |i − j| → ∞, for any ε > 0 one may indicate an N such that the inequality |r_{ij}| < ε holds for all |i − j| > N. This means that in the matrix ‖r_{ij}‖, containing n² elements, at most Nn elements exceed ε (these elements are replaced by unity) and the rest are less than ε. From the preceding facts, we infer the inequality
(1/n²) Σᵢ Σⱼ |r_{ij}| < Nn/n² + (1/n²)(n² − Nn)ε = ε + (N/n)(1 − ε);
therefore lim_{n→∞} D[Zₙ] = 0, which proves the theorem.
29.14 The law of large numbers cannot be applied, since the series
(6/π²) Σ_{k=1}^∞ (−1)^{k−1} (1/k),
defining M[X₁], is not absolutely convergent.
30. THE DE MOIVRE–LAPLACE AND LYAPUNOV THEOREMS

30.1 P(0.2 ≤ m/n < 0.4) = 0.97.
30.2 P(70 ≤ m < 86) = 0.927.
30.3 (a) P(m > 20) = 0.5, (b) P(m < 28) = 0.9772, (c) P(14 ≤ m < 26) = 0.8664.
30.4 In the limiting equality of the de Moivre–Laplace theorem, set
b = −a = ε√(n/(pq))
and then make use of the integral representations of the functions Φ.
30.12 (a) Since the probability sought is
[1/2 − P(A)] × [1 − P(B | A)],
from the point of view of the law of large numbers both methods lead to correct results; (b) in the first case, 9750 experiments will be necessary and in the second case, 4500 experiments.
30.13 (a) 3100, (b) 1500.
30.14 In all three cases the limiting characteristic function equals e^{−u²/2}.
30.15 lim_{n→∞} E_{Yₙ}(u) = e^{−u²/2}.
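Answers 30.1–30.3 all come from the de Moivre–Laplace integral theorem; a small helper (the values n = 100, p = 0.78 below are placeholders, since the actual problem data are not reproduced here):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def binomial_range_prob(n, p, a, b):
    """P(a <= m < b) by the de Moivre-Laplace approximation (no continuity correction)."""
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    return phi((b - mu) / sigma) - phi((a - mu) / sigma)

print(binomial_range_prob(100, 0.78, 70, 86))
```

For a symmetric interval around np the result reduces to 2Φ(ε√(n/(pq))) − 1, the form used in 30.4.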
VII  THE CORRELATION THEORY OF RANDOM FUNCTIONS

31. GENERAL PROPERTIES OF CORRELATION FUNCTIONS AND DISTRIBUTION LAWS OF RANDOM FUNCTIONS

31.1 Denoting by f(x₁, x₂ | t₁, t₂) the distribution law of second order for the random function X(t), by the definition of K_x(t₁, t₂) we have
K_x(t₁, t₂) = ∫_{−∞}^∞ ∫_{−∞}^∞ (x₁ − x̄₁)(x₂ − x̄₂) f(x₁, x₂ | t₁, t₂) dx₁ dx₂.
Applying the Schwarz inequality, we get
|K_x(t₁, t₂)|² ≤ ∫_{−∞}^∞ ∫_{−∞}^∞ (x₁ − x̄₁)² f(x₁, x₂ | t₁, t₂) dx₁ dx₂ × ∫_{−∞}^∞ ∫_{−∞}^∞ (x₂′ − x̄₂)² f(x₁′, x₂′ | t₁, t₂) dx₁′ dx₂′ = σ_x²(t₁) σ_x²(t₂),
which is equivalent to the first inequality. To prove the second inequality, it suffices to consider the evident relation M{[X(t₁) − x̄(t₁)] − [X(t₂) − x̄(t₂)]}² ≥ 0.
31.2 The proof is similar to the preceding one.
31.3 It follows from the definition of the correlation function.
31.4 Since X(t) = Σ_{j=1}^n Δⱼ + c, where c is a nonrandom constant and n is the number of steps during time t, we have D[X(t)] = M[na²] = λta².
31.5 The correlation function K_x(τ) is the probability that an even number of sign changes will occur during time τ minus the probability of an odd number of sign changes; that is,
K_x(τ) = Σ_{n=0}^∞ [(λτ)^{2n}/(2n)!] e^{−λτ} − Σ_{n=0}^∞ [(λτ)^{2n+1}/(2n + 1)!] e^{−λτ} = e^{−2λτ}  (τ > 0).
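The identity used in 31.5, K_x(τ) = P(even number of sign changes) − P(odd) = e^{−2λτ}, can be checked by sampling the Poisson number of switch points directly; a sketch with assumed λ and τ:

```python
import math, random

# Random telegraph signal: X(t + tau) = X(t) * (-1)**N, N ~ Poisson(lambda*tau).
random.seed(7)

def poisson(lam):
    # Knuth's multiplication method (adequate for small lambda*tau)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def correlation_estimate(lam, tau, samples=200000):
    # Monte Carlo estimate of E[(-1)**N] = P(even) - P(odd)
    return sum((-1) ** poisson(lam * tau) for _ in range(samples)) / samples

lam, tau = 1.0, 0.5
print(abs(correlation_estimate(lam, tau) - math.exp(-2 * lam * tau)) < 0.01)  # True
```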
31.6 Since M[X(t)X(t + τ)] ≠ 0 only if (t, t + τ) is contained in an interval between consecutive integers, and since the probability of this event is 0 if |τ| > 1 and (1 − |τ|) if |τ| ≤ 1, we have for |τ| ≤ 1,
K_x(τ) = (1 − |τ|) M[X²] = (1 − |τ|) ∫₀^∞ x² · x^λ e^{−x}/Γ(λ + 1) dx = (λ + 2)(λ + 1)(1 − |τ|).
Consequently,
K_x(τ) = (λ + 2)(λ + 1)(1 − |τ|) for |τ| ≤ 1;  K_x(τ) = 0 for |τ| > 1.
31.7 Letting Θ₁ = Θ(t₁), Θ₂ = Θ(t₁ + τ), for the conditional distribution law we get
f(θ₂ | θ₁ = 5°) = f(θ₁, θ₂)/f(θ₁),
where f(θ₁, θ₂) is the normal distribution law of a system of random variables with correlation matrix
‖ K_θ(0)  K_θ(τ) ‖
‖ K_θ(τ)  K_θ(0) ‖.
Substituting the data from the assumption of the problem, we get
P = ∫₁₅^∞ f(θ₂ | θ₁ = 5°) dθ₂ = (1/2)[1 − Φ(2.68)] = 0.0037.
31.8 Denoting the heel angles at instants t and t + τ by Θ₁ and Θ₂, respectively, and their distribution law by f(θ₁, θ₂), for the conditional distribution law of the heel angle at the instant of the second measurement, we get
f(θ₂ | θ₁) = f(θ₁, θ₂) / ∫_{−∞}^∞ f(θ₁, θ₂) dθ₂.
31.9 Denoting X₁ = Θ(t), X₂ = Θ̇(t), X₃ = Θ(t + τ₀), the correlation matrix of the system X₁, X₂, X₃ becomes
          ‖ K_θ(0)    0         K_θ(τ₀)   ‖
‖k_jl‖ =  ‖ 0         −K̈_θ(0)   −K̇_θ(τ₀)  ‖
          ‖ K_θ(τ₀)   −K̇_θ(τ₀)  K_θ(0)    ‖,
which after numerical substitution has k₁₁ = k₃₃ = 36, k₁₃ = 36e^{−0.5}, k₂₂ = 36(0.25² + 1.57²). Determining the conditional distribution law according to
f(x₃ | x₁ = 2, x₂ > 0) = ∫₀^∞ f(x₁, x₂, x₃) dx₂ / ∫_{−∞}^∞ ∫₀^∞ f(x₁, x₂, x₃) dx₂ dx₃  (x₁ = 2),
we obtain for the required probability
P = ∫_{−10}^{10} f(x₃ | x₁ = 2, x₂ > 0) dx₃ = 0.958.
31.10 ȳ(t) = a(t)x̄(t) + b(t); K_y(t₁, t₂) = a*(t₁)a(t₂)K_x(t₁, t₂).
31.11 f(x) dx = ∫∫_{x ≤ a cos θ ≤ x + dx} f_a(a)f_θ(θ) da dθ;  f(x) = (1/(σ√(2π))) exp{−x²/(2σ²)}.
31.12 The probability that the interval T will lie between τ and τ + dτ is the probability that there will be n points in the interval (0, τ) and one point in the interval (τ, τ + dτ). Since by assumption these events are independent, we have
P(τ ≤ T ≤ τ + dτ) = [(λτ)ⁿ/n!] e^{−λτ} · λ dτ;
that is,
f(τ) = [λ^{n+1} τⁿ/n!] e^{−λτ}.
31.13 f(u) = (1/(15.8√(2π))) exp{−u²/498}.
32. LINEAR OPERATIONS WITH RANDOM FUNCTIONS

32.1 Since K_x(τ) has no discontinuity at τ = 0,
K_ẋ(τ) = −d²K_x(τ)/dτ².
32.2 K_y(τ) = α²σ²e^{−α|τ|}(1 − α|τ|);  D[Y(t)] = K_y(0) = α²σ².
32.3 K_y(τ) = σ²(α² + β²)e^{−α|τ|}[cos βτ − (α/β) sin β|τ|];  K_y(0) = σ²(α² + β²).
Using the definition of a mutual correlation function, we get
R_xẋ(τ) = M{[X*(t) − x̄*] dX(t + τ)/dτ} = (d/dτ) M{[X*(t) − x̄*][X(t + τ) − x̄]} = (d/dτ) K_x(τ).
32.4 Since any derivative of Kx(r) is continuous at zero, X(t) may be differentiated any number of times. 32.5 Twice, since (d 2 /dr 2 )Kx(r)J,~o and (d 4 /dr 4 )Kx(r)J,~o exist, (d 5 /dr 5 )Kx(r) has a discontinuity at zero. 32.6 Only the first derivative exists since (d 2 /dr 2 )Kx(r) exists for r = 0 and (d 3 /dr 3 )Kx(r) has a discontinuity at this point. 32.7 Rxx(r) = a 2 a;(r- t 0 )e-al'-ta1. 32.8 D[ Y(t)] = a;, D[Z(t)] = a 2 a;.
v=
32.9 Ky( r) = 2a 2 a;a 2 e- a 2 ' 2(1 - 2a 2r 2). 32.10 The distribution f(v) is normal with variance a; = a(a 2 0, p = 0.3085. 32.11 z(t) = x(t) + y(t); Kz(th t2) = Kx(t1, t2)
32.12.
=
x(t)
i
+
Ky(t1, t2)
=
x;(t); Kx(tl, t2)
J~l
Rxy(tl, t2)
+
Kx/tl, t2)
J~l
32.13
Ky{r) = Kx(r)
32.14
Kz(r)
+
d2 dr 2 Kx(r)
a;e-al'l{ 1 + aJrJ
=
i
+
2a 2
+3
+
+
+
fJ2) and
Ryx(tl, t2).
i i
1~1
Rx,x1(t 1 , 12 ).
J~l
l*J
d4 dr 4 Kx(r).
+ ~ a 2r 2 (a2r2 - aJrJ
to new variables of integration and performing the integration, we obtain
D[Y(t)] = K_y(t, t) = 2 ∫₀ᵗ (t − τ)K_x(τ) dτ.
32.16 Solving the problem as we did 32.15, after transformation of the double integral we get
K_x(t₁, t₂) = ∫₀^{t₂} (t₂ − τ)K_y(τ) dτ + ∫₀^{t₁} (t₁ − τ)K_y(τ) dτ − ∫₀^{t₂−t₁} (t₂ − t₁ − τ)K_y(τ) dτ.
32.17 R_xy(t₁, t₂) = ∫₀^{t₂} K_x(t₁, ξ) dξ.
32.18 ∫₀^{t₂−t₁} (t₂ − t₁ − τ)K_y(τ) dτ.
32.19 D[Y(20)] = 1360 cm².
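The reduction of the double integral in 32.15 to a single integral can be verified numerically; the kernel e^{−α|τ|} below is an assumed example, not one taken from the problems:

```python
import math

# D[Y(t)] = ∫∫ K(s2 - s1) ds1 ds2 over [0,t]^2 should equal 2∫ (t - τ)K(τ) dτ.
def k(tau, alpha=0.7):
    return math.exp(-alpha * abs(tau))

def single_integral(t, n=2000):
    dt = t / n
    return 2 * sum((t - (i + 0.5) * dt) * k((i + 0.5) * dt) for i in range(n)) * dt

def double_integral(t, n=400):
    dt = t / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += k((j - i) * dt)   # K depends only on the time difference
    return total * dt * dt

t = 3.0
print(abs(single_integral(t) - double_integral(t)) < 1e-2)  # True
```

For this kernel the exact value is 2[t/α − (1 − e^{−αt})/α²], which both approximations reproduce.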
32.20 …
32.21 Since the variance D[Θ(t)] is small, sin Θ ≈ Θ, and
D[ΔV(t)] = 2g² ∫₀ᵗ (t − τ)K_θ(τ) dτ = (2g²σ_θ²/α)[t − (1/α)(1 − e^{−αt})],
which after substitution of numerical values leads to σ_ΔV = 18.6 m./sec.
32.22 Using the definition of the correlation function as the expectation of the product of the deviations of the ordinates of a random function, and the formulas for the moments of normal random variables, we obtain K_x(τ) = a²K_θ(τ) …
32.24 K_y(τ) = e^{−α²τ²}[1 + 2a²(1 − 2α²τ²)];  R_xy(τ) = −(1/3)aσ²e^{−α|τ|} …
32.26 K_y(t₁, t₂) = a*(t₁)a(t₂)K_x(τ) + b*(t₁)b(t₂) d⁴K_x(τ)/dτ⁴ + [a*(t₁)b(t₂) + b*(t₁)a(t₂)] d²K_x(τ)/dτ².
32.27 It does not exist.
32.28 (a) Stationary, (b) nonstationary.
32.29 σ_y² = 6.5·10⁸ σ_ψ²[0.1t − 0.2 + 0.1 cos (2.48·10⁻³t) − 8.0 sin (2.48·10⁻³t)]. For t = 1 hour, σ_y ≈ 1.5 km.
32.30 D[α(t)] ≈ a₁t; D[β(t)] ≈ b₁t, where a₁ and b₁ are integrals of the form
(2/πq) ∫₀^∞ [(cos λτ + (2/q)k_θ²(τ)) arcsin k_θ(τ) + (cos λτ/p) k_ψ²(τ) arcsin k_ψ(τ)] dτ,
k_ψ(τ) and k_θ(τ) being the normalized correlation functions of Ψ(t) and Θ(t), and λ = √(pq).
32.31 D[Z(t)] = ∫₀ᵗ ∫₀ᵗ exp{a²(τ₁ + τ₂) + (σ⁴/4)[3 …]} K_x(τ₂ − τ₁) dτ₁ dτ₂, where …
33. PROBLEMS ON PASSAGES

33.1 … = 11.9.
33.4 … = 9.91 sec.
33.5 Starting with t = (√(4π²p₀² − a²))⁻¹.
33.6 The problem reduces to the determination of the number of passages of the random function X(t) beyond the level √(ω₀/k) (going up) and −√(ω₀/k) (going down). Answer:
(1/π)√(α² + β²) exp{−ω₀/(2kσ²)}.
33.7 Since the radius of curvature is v²/Ÿ(t), the sensitive element reaches a stop when Ÿ(t) leaves the limits of the strip ±v²/R₀, which leads to
(1/π)√(α² + β²) exp{−v⁴/(2σ²(α² + β²)R₀²)} sec.⁻¹
33.8 For h ≥ 54.5 m.
33.9 Q = exp{−(T/2π)√(α² + β²) exp{−a²/(2σ²)}}.
33.10 Denoting by f(x, x₁, x₂) the probability density of the system of normal variables X(t), Ẋ(t) and Ẍ(t), we get the required probability density
f(x) = ∫_{−∞}^∞ f(x, 0, x₂) dx₂ / ∫_{−∞}^∞ ∫_{−∞}^∞ f(x, 0, x₂) dx₂ dx.
Considering that the correlation matrix has the form
‖ K_x(0)    0         −K̈_x(0)  ‖
‖ 0         −K̈_x(0)   0        ‖
‖ −K̈_x(0)   0         K_x⁗(0)  ‖,
we find after integration that
f(x) = (1/(√(2π)σ)) exp{−x²/(2σ²)}[1 + Φ(x/(√2 σ))].
33.11 f(x) = (2/(σ√(2π))) exp{−x²/(2σ²)}[1 − Φ(x/(σ√2))].
33.12 The required number equals the number of passages (from both sides) beyond the zero level; consequently, …
33.13 …, where Δ_jl (j, l = 3, 4, 5)
are the cofactors of the determinant; Δ₂ and the k_jl are included in the answer to Problem 33.14.
33.14 n̄ is the probability density p of sign changes for ∂S/∂x and ∂S/∂y in the vicinity of the point with coordinates x, y. These are related as follows:
p dx dy = P(∂S(x, y)/∂x > 0; ∂S(x + dx, y)/∂x < 0; ∂S(x, y)/∂y > 0; ∂S(x, y + dy)/∂y < 0)
+ P(∂S(x, y)/∂x < 0; ∂S(x, y)/∂x + [∂²S(x, y)/∂x²] dx > 0; ∂S(x, y)/∂y < 0; ∂S(x, y)/∂y + [∂²S(x, y)/∂y²] dy > 0).
The probability p dx dy can be computed if one considers that K(ξ, η) uniquely defines the distribution law of ∂S/∂x, ∂S/∂y, ∂²S/∂x², ∂²S/∂y². Performing the computations, we obtain
n̄ = p = [Δ₂/(4π²√(Δ₁Δ₂))] [1 + (k₃₄/√Δ₂)(π/2 + arctan(k₃₄/√Δ₂))],
where
Δ₁ = k₁₁k₂₂ − k₁₂²;  Δ₂ = k₃₃k₄₄ − k₃₄²;
k₁₁ = ∫∫ S(ω₁, ω₂)ω₁² dω₁ dω₂;   k₁₂ = ∫∫ S(ω₁, ω₂)ω₁ω₂ dω₁ dω₂;
k₂₂ = ∫∫ S(ω₁, ω₂)ω₂² dω₁ dω₂;   k₃₃ = ∫∫ S(ω₁, ω₂)ω₁⁴ dω₁ dω₂;
k₃₄ = ∫∫ S(ω₁, ω₂)ω₁³ω₂ dω₁ dω₂;  k₃₅ = ∫∫ S(ω₁, ω₂)ω₁²ω₂² dω₁ dω₂;
k₄₄ = ∫∫ S(ω₁, ω₂)ω₁²ω₂² dω₁ dω₂;  k₄₅ = ∫∫ S(ω₁, ω₂)ω₁ω₂³ dω₁ dω₂;
k₅₅ = ∫∫ S(ω₁, ω₂)ω₂⁴ dω₁ dω₂,
all integrals being taken from −∞ to ∞.
34. SPECTRAL DECOMPOSITION OF STATIONARY RANDOM FUNCTIONS

34.1 K(τ) = 2a sin(bτ)/τ.
34.2 K(τ) = 2c²(2 cos ω₀τ − 1) sin(ω₀τ)/τ.
34.3 Denoting by S₁(ω) = α/(π(ω² + α²)) the spectral density corresponding to e^{−α|τ|} and noting that |τ|e^{−α|τ|} = −∂e^{−α|τ|}/∂α, we have
S(ω) = σ²[S₁(ω) − α ∂S₁(ω)/∂α] = (2σ²/π) · α³/(ω² + α²)².
34.4 S(ω) = (a²/2π)[sin(ω/2)/(ω/2)]².
34.5 S(ω) = (ασ²/π) · (ω² + α² + β²)/[(ω² − α² − β²)² + 4α²ω²].
34.6 S(ω) = (2σ²/π) · α(α² + β²)/[(ω² − α² − β²)² + 4α²ω²].
34.7 Solving this problem as we did 34.3, we get
S(ω) = (σ²/π) · 16α³ω⁴/(ω² + α²)⁴.
34.8 S(ω) = 2ασ²ω²/(π[(ω² + α² + β²)² − 4β²ω²]).
34.9 Two derivatives, since ω⁴S_x(ω) decreases as 1/ω² when ω increases.
34.10 S(ω) = Σ_{j=1}^l (2σⱼ²αⱼ/π) · 1/(ω² + αⱼ²).
34.11 The derivative is
dS(ω)/dω = (2ασ²ω/π[(ω² − α² − β²)² + 4α²ω²]²) · {4β²(α² + β²) − (α² + β² + ω²)²}.
Consequently, for ω = 0 there is always an extremum. If for ω = 0 the expression between braces is negative, the sign of the derivative at this point changes from plus to minus; there will be one maximum at this point and no other maxima. Thus, the condition for no maxima except at the origin is α² > 3β². For α² = 3β²,
S(ω) = (σ²α/π) · 1/(ω² + 4β²);
that is, S(ω) again has only one maximum, at the origin. Therefore, if α² ≥ 3β², there exists one maximum, at the origin; if α² < 3β², there is one minimum at the origin and two maxima at the points ω = ±ω₂, where
ω₂ = √(2β√(α² + β²) − α² − β²).
34.12 Since S_ẋ(ω) = ω²S_x(ω), then
D[Ẋ(t)] = ∫_{−∞}^∞ S_ẋ(ω) dω = πa/α².
34.13 Since
S_x(ω) = (σ²/(2α√π)) exp{−ω²/(4α²)}
and
R_xẋ(τ) = (d/dτ) ∫_{−∞}^∞ e^{iωτ}S_x(ω) dω,
then
S_xẋ(ω) = iωS_x(ω) = (iωσ²/(2α√π)) exp{−ω²/(4α²)}.
34.14
Since
K(τ) = σ²e^{−α|τ|}[(1 + α|τ|) + a²k²(1 − α|τ|)],
the Fourier transform leads to
S(ω) = (2ασ²/(π(ω² + α²)²))(α² + a²k²ω²).
34.15 R_xy(τ) = K_x(τ + τ₀) = ∫_{−∞}^∞ e^{iω(τ+τ₀)}S_x(ω) dω; hence S_xy(ω) = e^{iωτ₀}S_x(ω).
34.16 S_xy(ω) = (iω)^k e^{iωτ₀}[S_u(ω) + …].
34.17 Since
K_z(τ) = K_x(τ)K_y(τ) = σ₁²σ₂²√((α₁² + β₁²)(α₂² + β₂²)) e^{−(α₁+α₂)|τ|}(cos β₁τ − (α₁/β₁) sin β₁|τ|)(cos β₂τ − (α₂/β₂) sin β₂|τ|),
the Fourier inversion leads to
S_z(ω) = a{[α cos γ′ + (ω − β′) sin γ′]/[(ω − β′)² + α²] + [α cos γ′ + (ω + β′) sin γ′]/[(ω + β′)² + α²]
+ [α cos γ″ + (ω − β″) sin γ″]/[(ω − β″)² + α²] + [α cos γ″ + (ω + β″) sin γ″]/[(ω + β″)² + α²]},
where α = α₁ + α₂, β′ = β₁ + β₂, β″ = β₁ − β₂, γ′ = γ₁ + γ₂, γ″ = γ₁ − γ₂, tan γ₁ = α₁/β₁, tan γ₂ = α₂/β₂,
a = σ₁²σ₂²/(4π cos γ₁ cos γ₂).
34.18 Since K_ψ(τ) = K_φ(τ)K_θ(τ), the Fourier transform leads to
S_ψ(ω) = a{[α cos γ′ − (ω − β′) sin γ′]/[(ω − β′)² + α²] + [α cos γ′ − (ω + β′) sin γ′]/[(ω + β′)² + α²]
+ [α cos γ″ − (ω − β″) sin γ″]/[(ω − β″)² + α²] − [α cos γ″ − (ω + β″) sin γ″]/[(ω + β″)² + α²]},
with the same α, β′, β″, γ′, γ″ and a = σ₁²σ₂²β₁²β₂²/(4π cos³γ₁ cos³γ₂).
34.19 Since K_z(τ) = K_x(τ)K_y(τ) + x̄²K_y(τ) + ȳ²K_x(τ), then
S_z(ω) = σ₁²σ₂²(α₁ + α₂)/(π[ω² + (α₁ + α₂)²]) + x̄²σ₂²α₂/(π(ω² + α₂²)) + ȳ²σ₁²α₁/(π(ω² + α₁²)).
34.20 Applying the general formula
S_y(ω) = 2 ∫_{−∞}^∞ (ω − ω₁)²S_x(ω − ω₁) ω₁²S_x(ω₁) dω₁
and the results of Problem 34.17, we get
S_y(ω) = (2σ⁴αβ⁴/(π cos²γ)) {1/(ω² + 4α²) + 4(α² + β²)/[(ω² + 4α² − 4β²)² + 16α²β²]},  tan γ = α/β.
34.21 S_y(ω) = (4ασ²/π)[σ²/(ω² + 4α²) + x̄²/(ω² + α²)].
34.22 S_y(ω) = ω²[(σ⁴/(α√(2π))) exp{−ω²/(8α²)} + (2x̄²σ²/(α√π)) exp{−ω²/(4α²)}].
34.23 S_η(ω) = cos⁴q ∫_{−∞}^∞ S_ψ(ω − ω₁)S_θ(ω₁) dω₁
+ (1/4) sin²2q [∫_{−∞}^∞ S_ψ(ω − ω₁)S_ψ(ω₁) dω₁ + ∫_{−∞}^∞ S_θ(ω − ω₁)S_θ(ω₁) dω₁],
where S_φ(ω) = S₁(ω), S_θ(ω) = S₂(ω), S_ψ(ω) = S₃(ω),
S_j(ω) = (2σⱼ²αⱼ(αⱼ² + βⱼ²)/π) · 1/[(ω² + αⱼ² + βⱼ²)² − 4βⱼ²ω²],  j = 1, 2, 3,
and all the integrals may be computed in finite form. Because the final result is cumbersome in the present case, it is preferable to use numerical integration methods.
34.24 S(ω) has one maximum, at ω = 0.
34.25 …, where j₀ is the intensity of the photocurrent created when one hole coincides with the aperture of the diaphragm.
35. COMPUTATION OF PROBABILITY CHARACTERISTICS OF RANDOM FUNCTIONS AT THE OUTPUT OF DYNAMICAL SYSTEMS

35.1 Y(t) is a stationary function; consequently,
S_y(ω) = c²/(ω² + α²),
which after a Fourier inversion yields
K_y(τ) = (πc²/α) e^{−α|τ|}.
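The Fourier inversion in 35.1 can be confirmed numerically; a sketch with assumed values of c and α:

```python
import math

# Check that ∫ e^{iωτ} c²/(ω² + α²) dω = (π c²/α) e^{-α|τ|}
# (only the cosine part survives, since the integrand is even in ω).
def k_numeric(tau, c=1.0, alpha=1.5, w_max=400.0, n=400000):
    dw = 2 * w_max / n
    total = 0.0
    for i in range(n):
        w = -w_max + (i + 0.5) * dw
        total += math.cos(w * tau) * c * c / (w * w + alpha * alpha) * dw
    return total

def k_exact(tau, c=1.0, alpha=1.5):
    return math.pi * c * c / alpha * math.exp(-alpha * abs(tau))

print(abs(k_numeric(0.8) - k_exact(0.8)) < 1e-3)  # True
```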
35.2 Since Y(t) is stationary, taking the expectation of both sides of the equation, we obtain ȳ = (b₁/a₁)x̄. The spectral density is
S_y(ω) = [(b₀²ω² + b₁²)/(a₀²ω² + a₁²)]S_x(ω) = (σ_x²α/π) · (b₀²ω² + b₁²)/[(a₀²ω² + a₁²)(ω² + α²)],
which after integration between infinite limits gives
D[Y(t)] = σ_x²α(a₁b₀²α + a₀b₁²)/(a₀a₁(a₁ + a₀α)).
35.3 S_ε(ω) = n⁴ω²[S_ζ(ω) + c²ω²S_θ(ω)]/(g²[(ω² − n²)² + 4h²ω²]),
where
S_ζ(ω) = 2σ₁²α₁(α₁² + β₁²)/(π[(ω² − β₁² − α₁²)² + 4α₁²ω²]).
35.4 Since by the assumption of the problem α(t) can be considered stationary,
S_α(ω) = [e²/(ω² + e²)]S(ω),
where S(ω) is obtained as in Problem 35.3. Integrating S_α(ω) between infinite limits with the aid of residues, we get σ_α² = 2.13·10⁻⁶ rad.², σ_α = 1.46·10⁻³ rad.
35.5 S_y(ω) = 2σ²α(α² + β²)/(π[(ω² − α² − β²)² + 4α²ω²]), where α = h, β = √(k² − h²), σ² = πc²/(2hk²). Applying a Fourier inversion to S_y(ω), we get
K_y(τ) = σ²e^{−α|τ|}(cos βτ + (α/β) sin β|τ|).
35.6 S_θ(ω) = 2σ_θ²α(α² + β²)/(π[(ω² − α² − β²)² + 4α²ω²]),
K_θ(τ) = σ_θ²e^{−α|τ|}(cos βτ + (α/β) sin β|τ|),
where σ_θ² = kT²/D, …
35.7 S_y(ω) = 4(49ω⁶ + 25)/(π(ω² + 1)²(ω² + 4)(ω² + 9)).
35.8 No, since the roots of the characteristic equation have positive real parts and, consequently, the system described by the equation is nonstationary.
35.9 Since ζ_c(t) is stationary, it follows that
35.8 No, since the roots of the characteristic equation have positive real parts and, consequently, the system described by the equation is nonstationary. 3·5.9 Since ~c(t) is stationary, it follows that
=
s,c(w)
D[~ ( )] c t
=
aa(a2
l-w2
w6Sx(w) 2hiw +
+
w~l2'
+ (32)w6
[({3 1 - {3) 2 + (a1 - a) 2][({31 + {3) 2 + (a1 - a) 2] X
{ ( - {3~
+
+
{3 2
+ (- f32 + f3r + a1
X [({31 - {3) 2 + (a1 + a) 2][({31 + {3) 2 + (a 1 + a) 2] 2 a + a!} 2 + 4(a 2{3~ - 2atf3~ + a£ - 2a 2 a~ + atf3 2) a1 (ar + [3~)
a~
+
+
a2)2
4(arf32 - 2a2f32 a(a2 + [32)
+
a4 - 2ara2
+
a2f3D},
= h, {31 = Vw~- h 2 . a = 3 ·10- 4 g2 , we get D[e(t)] =
Letting wa = n, D[~c(t)], where is mentioned in the answer to Problem 35.9. Substituting the numerical data we get D[e(t)] = 0.06513; ae = 0.255. 35.11 The formula is a consc:
D[~c(t)]
35.13
PSx(w) Syx(w) = (k2 - w2) - 2hiw' R
yx
( ) _ k2 T
-
f"'
e
-"'
Sx(w) d (P - w2) - 2hiw w.
iwt
35.14 The independent particular integrals of the homogeneous equation are e^{−t} and e^{−7t}; the weight function is p(t) = (1/6)(e^{−t} − e^{−7t}).
~~ Ky(T) = 7exp {- T + 4:
1+
2}{
T
7
35.15
D[Y(t)- Z(t)]
= D[Z(t)] X
+fa~ L~ p*(T1)P(T2)
Kx(T2 - T1) dT1 dT2 - 2 Re
fa~
p*(T)Rx.(T) dT,
436
ANSWERS AND SOLUTIONS
where the minus sign in the lower limits of integration means that the point 0 is included in the domain of integration. D[ Y(t)] = (
35.16
a~
aa+a
a
35.17 origin;
)
[t
2
+ 2a~~a+a + a ) (1
- 2at)] .
const., whose value may be taken zero by a proper choice of the
=
w)2 t 2+
a~P2 ( D[a(t)]= H 2 1+g
2a'fP2 (t g 2H 2j 0 (t-r)Kw(r)dr.
35.18 Replacing X(t) by its spectral decomposition, we obtain the spectral decomposition of Yl(t)
= J_"'_
_ -w 2
+
1 2hiw
+
k2
[e-•t+irot + -(w + wo) +(a- h)i e-
+ where w 0 Ky 1 (th t2)
=
=
Vk 2
J"'
_"' X
-
-(wo
-1w~
(a- h)i e-
h2 • From this it follows that
Sx(w) (w2 _ k2)2 + 4h2w2 {e-a
+
_1_ e-h(tl-t2)
4w~
[[(w - wo)2 + (a -
X
x x
+
_1_
e 100 o
e-
2w 0
+
_1_ 2w 0
+ +
h)2]e-l"'o
- (w - ai
[w~
- (w
-ht2[(w - Wo
+
+
Wo -
e-
+ ( -w
+ [(w + wa)2 + (a hi) 2 ]
h)2]
+
ai - hi)2]e-l"'o
ai - hi)e -lroot2
+ ( -w - Wo + ai + hi)elrootl - w0
-
ai
+
ai-
hi)e 1"'o12]
1 11]} dw,
hi)e- "'o
which, after we substitute the expression for Sx(w) and integrate with the aid of residues, gives the final result in the finite form: KYl (tl, t2)
sin f1t 1 M J_- e -ht-' -{1-,
+
N 1 = e-ht,(cosf1t1
r=lh-al, 35.19
Ky(th t2)
=
a; exp X
{
H
[
(ti + -
~
h
asin,Bt1) ,
j
= 1, 2;
f1=wo.
t~ + 2a2)}
a) +
- V~7TJ: 1
exp {
-~ tt- a) 2}
t1.
437
ANSWERS AND SOLUTIONS
e-at 1
35.22
y(t)
=
+ (~ + a2:_2 t 2 + a
2]}• 2:_)e-at 3 cx
~ "'·~ 1 A 1"'y"'(t)e1 + J: p(t, ~)x(g) d~;
Ky(tl. t2)
1
n
= /1 2
Lm=
k, j, !,
AtmAlky:!;(t1)Yk(t2)k;z 1
+
J:l J: 2p(t1, ~)p(t2, 7])Kx(t, 7]) d~ d7],
where Y1Ct), ... , Yn(t) are the independent particular integrals of the corresponding homogeneous equation, 11 =
YI(O) y~(O)
Yn(O) y~(O)
y\.n -1l(Q)
y~n -1l(Q)
...
y~n
-1)(0)
and A 11 are the cofactors of this determinant. Since the solution of the system leads to
35.23 Y2(t)
= -2 J~
[e-
+
2[Y2(0)- Y1 (0)]e-t
+
[2Y1 (0)- Y2 (0)]e- 2t,
and then D[Y2(t)]
=
4[~ + (1
- 2t)e- 2 t
+ +
+ (~ t (2e-t- r
29°)e-st + e- t] 4
2
t) 2 D[Y2 (0)]
+
(2e- 2 t - 2e-t)2D[Y1 (0)]
2(2e-t - e- 2 t)(2e- 2 t - 2e-t)ky 1 (0),y 2 (o);
D[ Y2(0.S)]
=
0.624.
438
ANSWERS AND SOLUTIONS 35.24
D[Y1(t)] =
~2 e- 4 t + i9 (-t 2 +
4t- 20 3
)e-
31
). + ( 21 t 2 - 2t + 45) e - 2t + ( 91 t 2 - 61 t + 123 08 ' D[ Y2(t)]
=
3
2 e- 41
-
8 27 (3t 2 - 6t
+
+
14)e- 31
+
(2t 2 - 4t
+
l)e- 21
D[ Y1 (0.5)]
35.26
Since Y(t) and Z(t) can be assumed stationary,
yW
=
Q
7Tb2(w2
2
+
2 -
°
2 t 9
+
~!)·
0.01078, D[ Y2 (0.5)] = 0.00150.
35.25
S ( )
(~ t
=
2
CXUxW
2
a2) ( w2
+
1)'
b2
which after integration leads to D[Z(t)] =
a2 _x_.
35.27
ab + 1 A normal law with parameters y = 0, ay = 0.78.
35.28
Sx(w)
=
n: {w 2 S~c(w) + J"'
g
-<XJ
+
p~
[2
w~S,;Jw1)Sq;(w
J_"'"'
wHw - w1) 2 Sq;(w - w1)S.,(wl) dw1
+
J_"'"' (w
3
+ J_"'"' +
+
n4
Sy{w) = 2 w 2 [S~c(w) g
p~
+
- w1) dw1
4
Sl/l(w -
J_"'"' (w
[ w4 Sl/l(w)
- w1) 2S"'(w -
+ J_"'"'
+
4
J_"'"' (w
+
4
J_"'"' (w-
p~w 2 SI/I(w)];
1
w1)w~S"'(w1) dw1 w 1 )w~SI/I(w
-
w 1 )w~S"'(w 1 ) dw
- w 1 )S1/I(w 1) dw1]
(w - w1)4Sq;(w - wl)Se(wl) dw1
- w1) 2Sq;(w - w1)wtSe(w1) dw1 w1) 3 w1Sq;(w- w1}Se(w1) dw1]};
n4
Sxy{w) = 2 PxPzW 4 SI/I(w). g
35.29 To find the asymmetry and the excess, one should determine the moments of Y(t) up to and including the fourth. To find these moments it is necessary to find the expectations: M[X 2(t1)X 2(t2)], M[X 2(tl)X 2(t2)X 2(ts)] and M[X 2(tl)X 2(t2)X 2(ts)X 2(t 4 )],
for the determination of which one should take the derivatives of corresponding orders of the characteristic function of the system of normal random variables. For example,
~ 40
M[X 2(t1)X 2(t 2 )] = 0 Zl1
i
2 {exp [--21 J.z=l
ll2
k!lutuz]}l
Ul =u2=0
•
439
ANSWERS AND SOLUTIONS where
llk;zll
is the correlation matrix of the system of random variables X(t 1),
X(t1), X(t2), X(t 2).
M [X 2 (t1)X 2 (t 2 )] M [X 2 (tl)X 2 (t 2)X 2(t 3 )]
=
K~(O)
+
2K~(t2 -
tl)Kx(O)
+
= 2K~(t 2 -
2K~(ts -
+
M [X 2(tt)X 2(t 2)X 2(t 3 )X 2(t4 )]
+ +
+
+
K~(O);
t2)Kx(O) + 2K~(ts - t1)Kx(O) 8Kx(t2 - tl)Kx(ts - tl)Kx(ts - t2);
K~(t2 - t.) + K~(t2 - t1) + K~Cts - t2) + K~(t. - t1) 4[K;(t2 - t1)K~(t. - ts) + K;(ts - t 1 )K~(t 4 - t 2)
= K~(O)
+
t1)
2K~(O)[K~(ts -
+
t.)
+ 8Kx(O)[KxCts - t2)Kx(t4 - t2)Kx(t4 - fs) + Kx(tl + Kx(f2 + Kx(ts 16[Kx(tl - t2)Kx(t1 - fs)Kx(f2 - t.)KxCts + Kx(f2 - tl)Kx(tl + Kx(tl - fs)Kx(fl -
+
K~(t. -
fs)Kx(fl t1)Kx(f2 ft)Kx(ts f4) t.)Kx(f2 t.)Kx(t2 -
K~(ts -
t1)]
t1)K~(ts -
t2)]
t.)Kx(t4 - fs) t.)Kx(t4 - t1) t2)Kx(f2 - t1)] ts)Kx(ts - !4) fs)Kx(f2 - f4)].
Substituting the obtained expressions in the general formulas for moments of the solution of a differential equation, we get S
35.30  Sk_y and Ex_y are found from the moments obtained in Problem 35.29; in particular,

Ex = 3[(15k² + 25ka + 2a²)/((k + a)(3k + 2a)) − 1].

For τ > 0 we shall have

R_yz(τ) = [2π(k₁k₂c)²/ω₂] e^{−h₂τ} {2ω₂(h₁ + h₂) cos ω₂τ − [ω₂² − ω₁² − (h₁ + h₂)²] sin ω₂τ} / {[(ω₂ − ω₁)² + (h₁ + h₂)²][(ω₂ + ω₁)² + (h₁ + h₂)²]},

and for τ ≤ 0,

R_yz(τ) = [2π(k₁k₂c)²/ω₁] e^{h₁τ} {2ω₁(h₁ + h₂) cos ω₁τ + [ω₂² − ω₁² + (h₁ + h₂)²] sin ω₁τ} / {[(ω₂ − ω₁)² + (h₁ + h₂)²][(ω₂ + ω₁)² + (h₁ + h₂)²]}.
OPTIMAL DYNAMICAL SYSTEMS

36.1  Determining K_x(τ) as the correlation function of a sum of correlated random functions and applying a Fourier inversion to the resulting equality, we get

S_x(ω) = S_u(ω) + S_v(ω) + S_uv(ω) + S_vu(ω).

36.2  S_xz(ω) = iω[S_u(ω) + S_v(ω) + S_uv(ω) + S_vu(ω)].

36.3  L(iω) = iω e^{−iωτ}, D[ε(t)] = 0.
. 2
w
+ (32)2 +
2 x { w (w
+
b2(w2
+
(32)2 -iro>
e
x [ (m - in) (
(
a2)2
-
(w - ia)2(w - i(3)2 2m
m _ in .
m -
+ m+
+
if3)2e-Cn+iml>
.
m- 1a
(w
+
m
+
in)
. )(mm ++ inin +- iaif3) 2
ln
x e-Cn-iml>(w- m
+
in]}•
where
- jv
m-
/L2 + v2 - /L 2 '
a2fJ2 + b2a2
=
/L
a2
+
b2
L(iw)
36.5
=
n
v=
,
= a2
(a
c2 (a
jvp,2 +2v2 + 1-L,
ab[,8 2 -
+
a2
+ ,B)(w + d)(w
a2[
b2
.
- i,B)' - id)
where c = √(a²β² + b²α²)/√(a² + b²).

36.6  D[ε(t)] = ∫_{−∞}^{∞} |N(iω)|² S_n(ω) dω − ∫_{−∞}^{∞} |L(iω)|² [S_u(ω) + S_v(ω) + S_uv(ω) + S_vu(ω)] dω.
m + in 7 L(" ) = ia 2 { zw 2mc 2 [m + i(n + n1)]2 -
m~
.
w + m - in (w - m1 - in1)(w + m1 - in1)
-m +in
where
D[e(t)]
n
= V V,84 + y4
n1
= J J,84 +
y4
- ,82,
+ ::2
_
,82;
2 4 2 2 7T [IA[ a = 7Ta - - - - + Im ( - A - .) ] • 2 2
2n
2m c
n
m
+
zn
where A= [m 2
36.8 36.10
L(iw)
=
L(iw)
-
m~ -
m+in (n + n 1) 2] + 2im(n + n1)
36.9
e-at.
=
we-.:.tia { w [cos ,Br -
(1 -
L(iw)
=
e-t[iwT
+
(1
+
r)].
~) sin ,BT] + i [(2,8 - a) sin ,BT - a cos ,Br]};
D[e(r)]
a2 {(a2 +,8 2,82) = 772fJ 2 2
-
a) e- 2 fi< [ cos ,BT- ( 1 - ~ -
36.11
where
L(" ) a2(a zw = c 2 (d
~2 e-
+ ,8) + a)
2 fit
[cos ,Br
i,B w - id'
-at w -
e
+ (1
sin ,Br] 2 -
~) sin ,Brr}.
36.12  L(iω) =
c2
a 2( w 2
+
b2)
{(w
2
+ r:l 2 )e- 1"''o -
(b- {3) e-b'o(w - ia)(w- ir:l)} (a - b) !-' '
!-'
where 1 a 2 = 7T - (aa~
+ {3rr~) ,
1 ar:l ....!:. ({3a~
b2
= 2
a
+
7T
= e-at( cos ar + sin pr + i ~sin
36.13
L(iw)
36.14
L(iw) =
aa~),
c
aa~ =-· 7T
2
ar).
1 . e-ato{e- 1P'o[f3 - i(a - y)](w - {3 - ia) 2{3 (w-1y) + e1P'o[f3 + i(a - y)](w
n~(a2 + (32)M- 2a 2 n~7T[~ IAI
+
D[e{t)] = [ni
Im ({3
2 -
+ {3 -
ia)};
~ 2 ia)],
where 1
= 2{3 e-
A
iy) ,
36.15  The required quantity is characterized by the standard deviation of the error of the optimal dynamical system: 1.67, 0.738 and 0.0627 m./sec., respectively.
36.16  D[ε(t)] = 4σ_x²a²d, where γ = aT; this gives for σ_ε the values 1.62, 0.829, 0.0846 m./sec.
36.17
D[ ( )] _ 2( 2 e t - ae a
7Tki
a ) _
+ !-' 2
a2
{d +
2jc2l 2
a2 a !+ -'
av
2cl
+ (a+ a v)2 [{3b' - (a + av)a']},
X
where a~Ctv
2 a =· 7T'
C1
= (av
c2
=
2!-'r:l(r:l !-'
CXv
+ +
+ {31)[{32 +
a1)(cxv
.
Ia
+
• (cxv - cx) 2]'
-a+ i{3
. )fr:l 1a1 \1-'
+
. Ia
+ I.{31 )(r:l!-' +
ex! = a2 _ [32
+ ;2 +
J(
+ ~r
{3! =
+ ;2
J(
+ ;2
cx2 - [32
-
a2 _ (32
cx2 - [32
_
. ICX -
. ) lCXv
_(
a
,
+
"b'· 1
,
a2
+ (32)2
_
Pa~ ;
(cx2
+ [32)2
-
k2a~.
r-
36.18 L(zw) =
(w2
+
4as
a2)2
{-{3+ +
X
a
Y + + iw) 2 [e-aT(f3 + yT + a + iw
iw
X
(w - ia) 2 [3afJ - y 4 as
-
(w
-
.\1 _ A2 _ e-troT iw w2
(a
+
Y
(a
2a>.1
+ ia) 2 e-troT{e-aT[a([3 4as
+ .\2 +
+
iw) 2
-
.\ 2
yT)
iw[(f3
+
)
+
+
•
y]
.\2T _
iw
- zw(fJ
+
+
A1
+
.\2]} w2
+ ,\1)]
+ >.2T) + P1 + .\2 T)]},
2a(.\1
yT)e-aT
where 4(p.1 - fJ) 4 + aT -
.\1 =
fLl
D[e(t)]
=
= },
a~ [ 1 +
=
fL2
+
A1fL1
{3 =
To,
A2/L2 -
T
2 .\2 = - 0.015202 sec.
~ (2afJ
(1
+
- 2y
,
y = ae-a'o;
aro)e-a'o,
+
-1
a.\1 - .\2) - ;2 (y
+
>.2)]
=
0.4525.
36.19 The general formula for L(iw) is the same as in the preceding problem except that fJ- 2 = 1 ; fJ = - a 2 r 0 e- a
D[e(t)]
=
a~ [ a 2 +
=
A2 fL2 -
4.58·10- 3 ;
~ (2afJ
- 2y
>.2
+
=
-2.54·10-4;
a.\1 - .\2)
- .r. a2 (y + >.2)] 36.20
l(τ) = δ(τ), D[ε(t)] = 0.
36.21
For the first system
L(iw)
=
[(w2
+
a2 - [32)2
+
w
+ i.\4 2(0 - w)
+
A3 - i.\4 -troT -e 2(0 + w)
>.a + i.\4 WT A3 - i.\4 -wT]} + 2(0 - w) e + 2(0 + w) e [w 2 - (a 2 + {3 2) - 2iaw][2a(>.1 + .\ 4 ) - .\2 - .\3Q - i(.\1 + .\4)w] e-iroT[w 2 - (a 2 + (32) + 2iaw][2a(>. 1 + .\2T + As sin OT + .\4 cos OT) + .\2 + >.sO cos OT - >.40 sin OT + iw(.\1 + .\2 T + >.s sin OT + .\4 cos OT)], .\1 + .\ 2T .\2 x [ iw - w2
-
0.0110 sec. - 2 .
4a2fJ2]
,\1 A2 A3 x { ---+ 2 iw
=
the constants λ₁, λ₂, λ₃ and λ₄ are determined from the system
λ₁ + 10λ₂ + 0.1244λ₃ + 0.9903λ₄ = 0.000578,
λ₁ + 13.4034λ₂ + 0.1728λ₃ + 0.9620λ₄ = 0,
λ₁ − 0.8752λ₂ + 0.1657λ₃ + 0.9837λ₄ = 0,
λ₁ + 10.1831λ₂ + 0.1236λ₃ + 0.9889λ₄ = 0.000584,
which has the solutions λ₁ = −0.0018, λ₂ = 0.000011, λ₃ = −0.0106, λ₄ = 0.0036. The variance for the optimal system of the first type is D[ε(t)] = 0.135·10⁻⁴. For the second system the form of L(iω) remains the same, but λ₁ = λ₂ = 0, and λ₃, λ₄ are determined from the system
λ₃ + 5.937λ₄ = 0,
λ₃ + 8.003λ₄ = 0.0047,
which leads to λ₃ = −0.0136, λ₄ = 0.0023. The variance for this system is D[ε(t)] = 0.266·10⁻⁴.
36.22  a = e^{−ατ₀}, D[ε(t)] = (1 − e^{−2ατ₀})σ_x².

36.23  a = e^{−ατ}(cos βτ + (α/β) sin βτ), b = (1/β) e^{−ατ} sin βτ;
D[ε(t)] = σ_x²[1 − e^{−2ατ}(1 + 2(α/β) sin βτ cos βτ + 2(α²/β²) sin² βτ)].

36.25  a = −[(α² + β²)/β] e^{−ατ₀} sin βτ₀ = −0.09721 sec.⁻¹, b = e^{−ατ₀}(cos βτ₀ + (α/β) sin βτ₀) = 0.9736, c = 0, D[ε(t)] = 0.404 deg.²/sec.².

36.26  a = e^{−ατ₀}(cos βτ₀ + (α/β) sin βτ₀) = 0.99, b = (1/β) e^{−ατ₀} sin βτ₀ = 0.20 sec., c = 0.

37.
THE METHOD OF ENVELOPES
37.1  K_A(τ) = σ_x²{2E(q) − (1 − q²)K(q) − π/2},
where K and E are the complete elliptic integrals of the first and second kind,
r(τ) = (2α/π) ∫₀^∞ [sin ωτ/(ω² + α²)] dω = (1/π)[e^{−ατ} Ei(ατ) − e^{ατ} Ei(−ατ)],
and Ei(x) denotes the exponential integral function, Ei(x) = ∫_{−∞}^{x} (e^u/u) du.
37.2  Since ω₁ = 2α/π and ω₂ = α, we have
P₁ = (1/2)(1 + 2/π) = 0.818, P₂ = (1/2)(1 − 2/π) = 0.182;
both probabilities are independent of a.
37.3  P₁ = (1/2)[1 + (√(α² + β²)/(πβ))(π/2 + arctan((β² − α²)/(2αβ)))],
P₂ = (1/2)[1 − (√(α² + β²)/(πβ))(π/2 + arctan((β² − α²)/(2αβ)))].

37.4  P = 0.5; the probabilities are independent of α/β.

37.5  f(φ̇) = α²(1 − 4/π²) / {2[(φ̇ − 2α/π)² + α²(1 − 4/π²)]^{3/2}}.

37.6  The phase is uniformly distributed over the interval [0, 2π].

37.7  f(φ̇) = D / {2[(φ̇ − m)² + D]^{3/2}}, where
m = (√(α² + β²)/(πβ))(π/2 + arctan((β² − α²)/(2αβ))),
D = (α² + β²)[1 − (α² + β²)/(π²β²) (π/2 + arctan((β² − α²)/(2αβ)))²].

37.8  f(a, ȧ) = a/(ασ_x³√(2π)) exp{−(1/(2σ_x²))(a² + ȧ²/α²)}.
37.9  Since k(τ) = e^{−α|τ|}(1 + α|τ|), we have
k(2) = 1.2e^{−0.2} = 0.982,
r(2) = (2/π) ∫₀^∞ [2α³/(ω² + α²)²] sin 2ω dω = (1/π)[1.2e^{−0.2} Ei(0.2) − 0.8e^{0.2} Ei(−0.2)] = 0.122;
then
f(a₂ | a₁ = σ_x) = 48.08 (a₂/σ_x²) exp{−24.04 a₂²/σ_x² − 23.2} I₀(47.56 a₂/σ_x).
37.10  Since
Δ² = ω₂² − ω₁² = (α² + β²)[1 − (α² + β²)/(π²β²) (π/2 + arctan((β² − α²)/(2αβ)))²] = 0.0089
and Δ²/ω₁² = 0.0135 ≪ 1, the following formula is useful:
f(τ) ≈ 4.45π·10⁻³ τ / [(π − 0.693τ)² + 8.9·10⁻³ τ²]^{3/2}.

37.11  f(τ) ≈ 0.0417π τ / [(π − 0.647τ)² + 0.0814 τ²]^{3/2}.

37.12  The required average number of passages equals the probability of occurrence of one passage per unit time,
p = [√(ω₂² − ω₁²)/(2π)] e⁻² = 0.083α sec.⁻¹.

37.13  0.0424α sec.⁻¹.
37.14  f(φ₂ | φ₁) = [(1 − K²)/(2π(1 − K² cos²ψ))] {1 + K cos ψ (π/2 + arcsin(K cos ψ)) / √(1 − K² cos²ψ)}, ψ = φ₂ − γ,
where, for T = 4.53 sec., k(T) = −0.95,
r(τ) = (2/π) ∫₀^∞ {2α(α² + β²)/[(ω² − α² − β²)² + 4α²β²]} sin ωτ dω,
K = √(k²(T) + r²(T)) and γ = 179°;
D[X(t + τ)] ≤ M[A²]M[cos² Φ] − {M[A]M[cos Φ]}².
VIII MARKOV PROCESSES

38. MARKOV CHAINS

38.1  It follows from the equality 𝒫^(α+β) = 𝒫^α 𝒫^β.

38.2  p(3) = R′p(0), where R = 𝒫₁𝒫₂𝒫₃ = ‖r_ik‖;
r₁ = r₁₁ = r₂₂ = r₃₃ = a₁³ + a₂³ + a₃³ + 6a₁a₂a₃;
r₂ = r₁₂ = r₂₃ = r₃₁ = 3(a₁²a₂ + a₂²a₃ + a₃²a₁);
r₃ = r₁₃ = r₂₁ = r₃₂ = 3(a₁a₂² + a₂a₃² + a₃a₁²).

38.3  States: Q₁ means that all competitions are won, Q₂ that there is one tie, Q₃ that the sportsman is eliminated from the competitions. By the Perron formula,
p₂₁^(n) = p₃₁^(n) = p₃₂^(n) = 0, p₃₃^(n) = 1, p₁₁^(n) = αⁿ, p₂₂^(n) = γⁿ, p₂₃^(n) = 1 − γⁿ, p₁₃^(n) = 1 − p₁₁^(n) − p₁₂^(n),
p₁₂^(n) = β(αⁿ − γⁿ)/(α − γ) for γ ≠ α, and p₁₂^(n) = nβα^{n−1} for γ = α.

38.4  States: Q₁ means that the device is in good repair, Q₂ that the blocking system is out of order, Q₃ that the device does not operate;
p₂₁^(n) = p₃₁^(n) = p₃₂^(n) = 0, p₁₁^(n) = (1 − α − β)ⁿ, p₂₂^(n) = (1 − γ)ⁿ, p₃₃^(n) = 1, p₂₃^(n) = 1 − (1 − γ)ⁿ, p₁₃^(n) = 1 − p₁₁^(n) − p₁₂^(n),
p₁₂^(n) = α[(1 − α − β)ⁿ − (1 − γ)ⁿ]/(γ − α − β) for α + β ≠ γ, and p₁₂^(n) = nα(1 − γ)^{n−1} for α + β = γ.
38.5  The state Q_j (j = 0, 1, 2, 3) means that j members of the team participate in the competitions. For i < k, p_ik^(n) = 0 (i, k = 0, 1, 2, 3), p₀₀^(n) = 1; the remaining probabilities are expressed through
φ(α, β, γ) = [f(α, γ) − f(β, γ)]/(α − β),
φ(α, α, γ) = nα^{n−1}/(α − γ) − (αⁿ − γⁿ)/(α − γ)²,
φ(α, α, α) = n(n − 1)α^{n−2}.

38.6  Make use of Perron's formula for single eigenvalues: λ_k = p_k (k = 0, 1, ..., m),
|λℰ − 𝒫| = Π_{v=0}^{m} (λ − p_v).
For i > k, A_ki(λ) = 0; A_ii = 1; for k > i, A_ki(λ) = |λℰ − 𝒫| D_ki(λ).

38.7  Use Perron's formula when the eigenvalue λ = p has multiplicity m and the eigenvalue λ = 1 is not multiple:
|λℰ − 𝒫| = (λ − p)^m (λ − 1); λ_l = p (l = 0, 1, ..., m − 1).
For i > k, A_ki(λ) = 0; A_mm(λ) = |λℰ − 𝒫|/(λ − 1); for k > i, k ≠ m,
A_ki(λ) = |λℰ − 𝒫| D_ki(λ)/(λ − p)^{k−i+1}.
38.8  The state Q_j means that there are j white balls in the urn after the drawings. For j > i, p_ij = 0; for i ≥ j, p_ij is a ratio of binomial coefficients. The eigenvalues are λ₀ = 1 and λ_k as follows; by Perron's formula one obtains, in particular:
P2"2
=
1 sn'
(n)
-
Pss -
1 20 n'
(n)
-
P1o -
1
A"{=
2Gn- ;n)'
P~"{ =
3G"- ff" + 2~")·
1
(n) -
·- 2 n•
P~nJ =
1 -
P2o -
~n
P~"d =
1
-
+ :n-
1 2n- 1
2~n'
3U"- 2~")·
1 + sn'
38.9  The state Q_j means that the maximal number of points is N + j; p_ij = 0 for i > j, p_jj = j/m, p_ij = 1/m for i < j (see Example 38.1);
p_ii^(n) = (i/m)ⁿ; p_ik^(n) = 0 for i > k; p_ik^(n) = (k/m)ⁿ − ((k − 1)/m)ⁿ for i < k.
38.10  The state Q_j means that j cylinders (j = 0, 1, ..., m) remained on the segment of length L. The probability that the ball hits a cylinder is ja, where
a = 2(r + R)/L;
p_{j,j−1} = ja, p_jj = 1 − ja, and p_ij = 0 for j ≠ i and j ≠ i − 1 (i, j = 0, 1, ..., m). The eigenvalues are λ_k = 1 − ka (k = 0, 1, ..., m); p_ik^(n) = 0 for i < k. By Perron's formula, for i ≥ k,
p_ik^(n) = a^{i−k} (i!/k!) Σ_{j=k}^{i} (−1)^{j−k} (1 − ja)ⁿ / [(j − k)!(i − j)!].
38.11  The state Q_j (j = 1, 2, ..., m) means that the selected points are located in j parts of the region D; p_jj = j/m, p_{j,j+1} = 1 − j/m. The eigenvalues are λ_r = r/m (r = 1, 2, ..., m). From 𝒫H = HJ, H⁻¹𝒫 = JH⁻¹ and H⁻¹H = ℰ the elements h_jk and h_jk^(−1) are found, and 𝒫ⁿ = HJⁿH⁻¹; p_ik^(n) = 0 for i > k, and for i ≤ k,
p_ik^(n) = C_{m−i}^{k−i} Σ_{r=0}^{k−i} (−1)^{k−i−r} C_{k−i}^{r} ((r + i)/m)ⁿ
(for another solution see Problem 38.10).

38.12  Set ε = e^{2πi/m}. Then
λ_k = Σ_{r=1}^{m} a_r ε^{(r−1)(k−1)} (k = 1, 2, ..., m),
p_jk^(n) = (1/m) Σ_{r=1}^{m} λ_rⁿ ε^{(r−1)(k−j)}, p_j^(∞) = 1/m (j, k = 1, 2, ..., m).

38.13  Q_j represents the state in which the particle is at point x_j; p_{i,i−1} = i/m, p_{i,i+1} = 1 − i/m (i = 0, 1, ..., m). The matrix equation H⁻¹𝒫 = ‖δ_{jl}λ_l‖H⁻¹ is equivalent to the equations
(1 − ξ²) R_i′(ξ) = m(λ_i − ξ) R_i(ξ) (i = 0, 1, ..., m),
where
R_i(ξ) = Σ_{k=0}^{m} ξ^k h_{ki}^(−1) = C_i (1 − ξ)^{(m/2)(1−λ_i)} (1 + ξ)^{(m/2)(1+λ_i)}.
Since R_i(ξ) is a polynomial, the eigenvalues are λ_i = 1 − 2i/m (i = 0, 1, ..., m). From HH⁻¹ = ℰ, with C_i = 2^{−m/2}, the elements of H and H⁻¹ are obtained as the coefficients of these expansions, and the probabilities p_ik^(n) are the elements of the matrix 𝒫ⁿ = H‖δ_{jl}λ_lⁿ‖H⁻¹.
38.14  Q_j describes a state in which the container of the vending machine contains j nickels; p₀₀ = q, p_mm = p, p_{j,j+1} = p, p_{j,j−1} = q (j = 0, 1, ..., m). The eigenvalues are λ₀ = 1 and
λ_k = 2√(pq) cos(kπ/(m + 1)) (k = 1, 2, ..., m);
the elements h_jk of H and h_jk^(−1) of H⁻¹ are expressed through (q/p)^{k/2} sin(jkπ/(m + 1)), the constants C_j being determined from the condition H⁻¹H = ℰ. Then
p_ik^(n) = Σ_{j=0}^{m} λ_jⁿ h_ij h_jk^(−1) (i, k = 0, 1, ..., m).
38.15  The state Q₁ means hitting the target and Q₂ a failure;
p₁₁ = α, p₂₁ = β, p₁(0) = (1/2)(α + β), p₂(0) = 1 − (1/2)(α + β),
{p₁(n); p₂(n)} = (𝒫′)ⁿ{p₁(0); p₂(0)}. The eigenvalues are λ₁ = 1, λ₂ = α − β. By the Lagrange-Sylvester formula, for λ₂ ≠ 1,
𝒫ⁿ = [1/(1 − α + β)][𝒫 − (α − β)ℰ − (α − β)ⁿ(𝒫 − ℰ)],
p₁(n) = [1/(2(1 − α + β))][2β + (1 − α − β)(α − β)^{n+1}].
If λ₂ = 1, then 𝒫ⁿ = 𝒫.
38.16  From Σ_{j=1}^{m} p_j^(∞) = 1 and Σ_{i=1}^{m} p_ij p_i^(∞) = p_j^(∞) (j = 1, 2, ..., m), it follows that p_j^(∞) = 1/m (j = 1, 2, ..., m).

38.17  Q_j describes the state in which the urn contains j white balls;
p_jj = 2j(m − j)/m², p_{j,j+1} = (m − j)²/m², p_{j,j−1} = j²/m² (j = 0, 1, ..., m).
The chain is irreducible and nonperiodic, p_jk^(n) → p_k^(∞). From the system
Σ_j p_j^(∞) p_jk = p_k^(∞) (k = 0, 1, ..., m),
we get
p_k^(∞) = (C_m^k)² / C_{2m}^m (k = 0, 1, ..., m).
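The stationary law p_k^(∞) = (C_m^k)²/C_{2m}^m can be checked numerically for a small m by iterating the transition matrix of this urn chain until it converges (m = 4 is an arbitrary illustrative choice):

```python
from math import comb

m = 4  # small illustrative case
# transition probabilities of the urn chain above
P = [[0.0] * (m + 1) for _ in range(m + 1)]
for j in range(m + 1):
    P[j][j] = 2 * j * (m - j) / m**2
    if j < m:
        P[j][j + 1] = (m - j)**2 / m**2
    if j > 0:
        P[j][j - 1] = j**2 / m**2

# iterate p' = p P from an arbitrary initial distribution
p = [1.0] + [0.0] * m
for _ in range(2000):
    p = [sum(p[i] * P[i][k] for i in range(m + 1)) for k in range(m + 1)]

expected = [comb(m, k)**2 / comb(2 * m, m) for k in range(m + 1)]
assert all(abs(a - b) < 1e-9 for a, b in zip(p, expected))
```

The same identity also follows from detailed balance, since (C_m^j)²(m − j)² = (C_m^{j+1})²(j + 1)².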
38.18  Q_j describes the state in which the particle is located at the midpoint of the jth interval of the segment; p₁₁ = q, p_mm = p, p_{j,j+1} = p, p_{j,j−1} = q (j = 1, 2, ..., m). The chain is irreducible and nonperiodic. The probabilities p_k^(∞) are determined from the system
q p₁^(∞) + q p₂^(∞) = p₁^(∞),
p p_{k−1}^(∞) + q p_{k+1}^(∞) = p_k^(∞) (k = 2, 3, ..., m − 1),
p p_{m−1}^(∞) + p p_m^(∞) = p_m^(∞).
Then p_k^(∞) = (p/q)^{k−1} p₁^(∞); for p = q, p_k^(∞) = 1/m, and for p ≠ q,
p_k^(∞) = (1 − p/q)(p/q)^{k−1} / [1 − (p/q)^m] (k = 1, 2, ..., m).
The probabilities p_k^(∞) can also be obtained from p_ik^(n) as n → ∞ (see Problem 38.14).

38.19
The chain is irreducible and nonperiodic. From the system
Σ_{i=1}^{∞} u_i p_ij = u_j (j = 1, 2, ...),
it follows that u_j = u₁/j!. Since
Σ_{j=1}^{∞} |u_j| = u₁ Σ_{j=1}^{∞} 1/j! = u₁(e − 1) < ∞,
the chain is ergodic, and
p_j^(∞) = 1/[(e − 1) j!] (j = 1, 2, ...).
38.20  The chain is irreducible and nonperiodic. From the system
Σ_{i=1}^{∞} u_i p_ij = u_j (j = 1, 2, ...),
it follows that u_j = u₁(p/q)^{j−1}; since Σ_{j=1}^{∞} |u_j| < ∞ for p < q, the chain is ergodic; that is,
p_j^(∞) = (1 − p/q)(p/q)^{j−1} (j = 1, 2, ...).
The chain is irreducible and nonperiodic. From the system 00
2
UtPtt
=
(j = 1, 2, ... ) '
Ut
1~1
it follows that Ut
(j
= 2(j- 1)
=
2, 3, ... ).
The series
1~1 lu,l = u1 00
00
[
1
+ ;~
1 ] 2(j- 1)
is divergent; that is, the chain is nonergodic. This is a null-regular chain for which
plk'> = O(i,k = 1,2, ... ). 38.22 Q 1 means that the particle is located at the point with coordinate = 1, 2, ... ); Pn = 1 -a, Pt.i+1 =a, Pt+1,; = {3, P;; = 1 - a - (3 (j = 1, 2, ... ). The chain is irreducible and nonperiodic. From the system 2~ 1 u1p 11 = u" follows that uk = (a({3)k- 1u 1 (k = 1, 2, ... ). For (a/{3) < 1, we have j6. (j
and, consequently, the chain is ergodi;o;
that is, (k = 1, 2, ... ).
If a/{3 ;. 1, the Markov chain is null-regular; pj'(/> = 0 (j, k = 1, 2, ... ).
38.23  Since p_jk^(∞) = 0, p*_j = Σ_{n=1}^{∞} p_j^(n) = 1 (j = s + 1, s + 2, ..., m).

38.24  From the system we obtain
p*_j = β/[1 − α(m − r)] (j = r + 1, r + 2, ..., m).
38.25  Q_j represents the state in which player A has j dollars (j = 0, 1, ..., m); p₀₀ = 1, p_mm = 1, p_{j,j+1} = p, p_{j,j−1} = q (j = 1, 2, ..., m − 1). The probabilities p*_j = p_{j0}^(∞) of ruin of player A are determined from the system
p*₁ = p p*₂ + q,
p*_{m−1} = q p*_{m−2},
p*_j = q p*_{j−1} + p p*_{j+1} (j = 2, 3, ..., m − 2).
Setting p*_j = a − b(q/p)^j, we find for p ≠ q that
p*_j = [1 − (p/q)^{m−j}] / [1 − (p/q)^m],
and for p = q that p*_j = 1 − j/m (j = 1, 2, ..., m − 1). The probabilities of ruin of B are p*_j(B) = 1 − p*_j(A). Another solution of this problem may be obtained from the expression for p_j0^(n) as n → ∞ (see Example 38.2).

38.26  H = ‖h_jk‖ = ‖ε^{(j−1)(k−1)}‖,
H⁻¹ = (1/m)‖ε^{−(j−1)(k−1)}‖, where ε = e^{2πi/m};
p_jk^(n) = (1/m) Σ_{v=1}^{m} ε^{(v−1)(n+j−k)};
that is, p_jk^(n) = 1 if n + j − k is divisible by m and p_jk^(n) = 0 otherwise (j, k = 1, 2, ..., m); p_jk^(mn+r) = 1 if r + j − k is divisible by m and 0 otherwise (r = 0, 1, ..., m − 1);
p̄_jk = (1/m) lim_{n→∞} Σ_{r=0}^{m−1} p_jk^(mn+r) = 1/m (j, k = 1, 2, ..., m).
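The ruin probabilities of Problem 38.25 solve a tridiagonal boundary-value recursion; a sketch solving it directly (m = 10 chosen arbitrarily) confirms both closed forms:

```python
def ruin_probabilities(m, p):
    """Solve x_j = q x_{j-1} + p x_{j+1}, x_0 = 1, x_m = 0 (A's ruin probability)
    by a forward/backward sweep (Thomas algorithm)."""
    q = 1 - p
    c = [0.0] * m          # modified super-diagonal coefficients
    d = [0.0] * m          # modified right-hand sides
    c[1], d[1] = p, q      # x_1 = q*1 + p x_2
    for j in range(2, m):
        denom = 1 - q * c[j - 1]
        c[j] = p / denom
        d[j] = (q * d[j - 1]) / denom
    x = [0.0] * (m + 1)
    x[0] = 1.0
    for j in range(m - 1, 0, -1):
        x[j] = d[j] + c[j] * x[j + 1]
    return x

m = 10
# symmetric game: p*_j = 1 - j/m
sym = ruin_probabilities(m, 0.5)
assert all(abs(sym[j] - (1 - j / m)) < 1e-12 for j in range(m + 1))

# biased game: p*_j = (1 - (p/q)^(m-j)) / (1 - (p/q)^m)
p = 0.4
r = p / (1 - p)
biased = ruin_probabilities(m, p)
assert all(abs(biased[j] - (1 - r**(m - j)) / (1 - r**m)) < 1e-10
           for j in range(m + 1))
```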
38.27  |λℰ − 𝒫| = (λ − α)(λ³ − 1); λ₁ = 1, λ₂ = α, λ₃ = ε, λ₄ = ε², where ε = e^{2πi/3}; the period κ = 3. For j, k = 2, 3, 4, p_jk^(n) = 1 if n + j − k is divisible by 3 and p_jk^(n) = 0 otherwise. By the Perron formula p₁₂^(n) and p₁₃^(n) are expressed through αⁿ, εⁿ⁻¹ and ε²ⁿ⁻¹, and p₁₄^(n) = 1 − αⁿ − p₁₂^(n) − p₁₃^(n);
p̄_jk = (1/3) lim_{n→∞} [p_jk^(3n) + p_jk^(3n+1) + p_jk^(3n+2)] (j = 1, 2, 3, 4),
p̄_jk = 1/3 (k = 2, 3, 4; j = 1, 2, 3, 4).

38.28  The chain is irreducible and periodic with period κ = 2. The first group consists of states with odd numbers and the second of those with even numbers. Then lim_{n→∞} p_jk^(2n) = p_k and lim_{n→∞} p_jk^(2n+1) = 0 if j + k is even, and lim_{n→∞} p_jk^(2n) = 0, lim_{n→∞} p_jk^(2n+1) = p_k if j + k is odd. The mean limiting absolute probabilities p̄_k = 1/2m (k = 1, 2, ..., 2m) are determined from the equality 𝒫′p̄ = p̄, p_k = κp̄_k.
38.29  Q_j describes the state in which the particle is at point x_j (j = 0, 1, ..., m); p₀₁ = 1, p_{m,m−1} = 1, p_{j,j+1} = p, p_{j,j−1} = q (j = 1, 2, ..., m − 1). The chain is irreducible and periodic with period κ = 2;
q p̄₁ = p̄₀, p̄₀ + q p̄₂ = p̄₁, p p̄_{m−2} + p̄_m = p̄_{m−1}, p p̄_{m−1} = p̄_m,
p p̄_{k−1} + q p̄_{k+1} = p̄_k (k = 1, 2, ..., m − 1).
For p ≠ q, we have
p̄₀ = (1/2)(1 − p/q)/[1 − (p/q)^m],
p̄_k = (1/2p)(1 − p/q)(p/q)^k/[1 − (p/q)^m] (k = 1, 2, ..., m − 1),
p̄_m = (1/2)(1 − p/q)(p/q)^{m−1}/[1 − (p/q)^m].
For p = q, we have p̄₀ = p̄_m = 1/2m, p̄_k = 1/m (k = 1, 2, ..., m − 1).

39.
THE MARKOV PROCESSES WITH A DISCRETE NUMBER OF STATES

39.1  P_n(t) = (λpt)ⁿ/n! · e^{−λpt}.

39.2  P_n(t) = [(λ₁ + λ₂)t]ⁿ/n! · e^{−(λ₁+λ₂)t}.

39.3  P_n(t) = [A(t)]ⁿ/n! · e^{−A(t)}, where A(t) = λ ∫₀ᵗ [1 − F(x)] dx;
p_n = lim_{t→∞} P_n(t) = (λt̄)ⁿ/n! · e^{−λt̄},
where t̄ = ∫₀^∞ [1 − F(x)] dx is the expected flight time of the electron.

39.4  p = t̄/T.

39.5  F_n(t) = 1 − Σ_{k=0}^{n−1} (λt)^k/k! · e^{−λt} if t ≥ 0, and F_n(t) = 0 if t < 0;
f_n(t) = λ(λt)^{n−1}/(n − 1)! · e^{−λt} if t ≥ 0, and f_n(t) = 0 if t < 0;
m_k = n(n + 1)···(n + k − 1)/λ^k.

39.6  Solving the first system of equations
dP_ik(t)/dt = −λP_ik(t) + λP_{i,k−1}(t)
for the initial conditions P_ik(0) = δ_ik by induction from P_{i,k+1}(t) to P_ik(t), we obtain
P_ik(t) = (λt)^{i−k}/(i − k)! · e^{−λt} if 0 ≤ k ≤ i, and P_ik(t) = 0 if k > i.
39.7  For λ = μ, the inequality
p_m = (1/m!) / Σ_{n=0}^{m} (1/n!) ≤ 0.015
gives m = 4.
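The left-hand side is the Erlang loss probability with offered load λ/μ = 1; a two-line check shows the bound is first met (to the stated two figures, the value at m = 4 is about 0.0154) at m = 4:

```python
from math import factorial

def erlang_loss(m, a=1.0):
    """Erlang B: probability p_m that all m devices are busy, offered load a."""
    terms = [a**n / factorial(n) for n in range(m + 1)]
    return terms[m] / sum(terms)

# with lambda = mu (a = 1) the loss probability first drops to ~0.015 at m = 4
assert erlang_loss(3) > 0.015
assert abs(erlang_loss(4) - 0.015) < 1e-3   # approximately 0.0154
```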
m! -;;;---y,; ;
The system of equations for the limiting probabilities Pn: mApo = JLP1 } [(m - n)>.. + p.]Pn = (m- n + 1)Apn-1 + JLPn+1 , JLPm = Apm-1
has the solutions Pn
= (m ~! n)!
GrPo,
where Po is determined by the condition L:::'~o Pn = 1. The expected number of machines in the waiting line is A+p. Lq = m- -A-(1- Po). 39.9
The system of equations for the limiting probabilities Pn is:
mApo = JLP1 [(m- n)A + np.]Pn = (m- n + l)APn-1 + (n + 1)p.Pn+1 for 1 :;;;: n < r [(m- n)A + rp.]Pn
= (m- n + l)Apn-1 + YJLPn+1
for r:;;;: n :;;;: m -
YJLPm = Apm -1
and it has the solutions: Pn
= {n!
(:~ n)! (~rPo m!
if1:s;:n:s;:r,
(A)n
rn 'r! (m - n)! -;;, Po
if n > r;
the expected number of machines in the waiting line for repairs is
Lq =Po
m!
m
n- r
JT n~r rn-'(m
- n)!
(A)n -;;, .
39.10 The probability that the computer runs is the limiting probability that there are no calls for service in the system p 0 = e- hi~, where p. is the average number of repairs per hour. The expected efficiency resulting from application of more reliable elements during 1000 hours of operation is
= 161c39.11
(b- a).
(a) The system of equations for the limiting probabilities Apo = JLP1
(A + kp.)p"
= AP~c-1
(A + np.)p"
=
+ (k + 1)p.Pk+1
AP><-1 + np.P><+1
(1 :;;;: k < (k
~
n)
n)},
454
ANSWERS AND SOLUTIONS
has the solutions: ( 1 I>..)" k! \~ 1 Po {
for 1 ~ k ~ n,
P~c = ~ (~)kPo n.n fL
~ n,
for k
where Po is the probability that all devices need no service and can be determined from the condition 2.k'~ o Pk = 1;
f
1 ( ),.) k
n- 1
l~c~ 0 k! ~
Po=
fL ( ),.) (n- 1)!(nfL- ,\) ~
+
n}-
1
with the condition that ),. < nfL. (b)
p*
=
l
oo
k--;;n
P~c
-),.)n ·
(
f.lPo
=
(n- 1)! (nfL- >..)
00
fJ.,
L
1 - F(t) =
(c)
k-n
P~cP~c(T > t),
where P~c(T > t) is the probability that the waiting time in the line is longer than t if there are k calls for service in the system:
P~c(T >
= kin (p.,~t)i e-gnt.
t)
1•
i=O
Substituting this value, we get
since Pk!Pn = (>..fnfL)k-n, changing the order of summation we obtain as a result,
1 - F(t)
i
Pne-gnt
=
(,\.?;
;~o 1 ·
i
= np.,~n e-
(__.:)k-n-; np.,
~c~n+i
nfL
),.
and, since PniP* = 1 - (>..(nfL), then F(t) = 1 - p*e-
i ro
L~ n (k
(d)
k
=-
f
00
m2 = k~1 kpk
m3
=
n-1 L (n k~o
m1
=
+
=-· np., - ),.
tdF(t)
- n)pk = Pn
oo
L~ o k
k
npn 1 _ ),. nfL
k)pk = Po
(),.)k nfL
= Pn
n-1
+ Po k~ 1 1-
-
k.
fL
(
),.
~
' nfL 1 - nfL
1 (k _ 1)!
n-1 L -n _ k (),.)k k~o
0);
p*
ro
o
~
)r'
(~r·
•
39.12
Apply the formulas of Problem 39.11; i = 2/115 hours.
39.13
Select n so that p*e-
39.14  (a) The system of equations for the limiting probabilities,
λp₀ = μp₁,
(λ + kμ)p_k = λp_{k−1} + (k + 1)μp_{k+1} (1 ≤ k < n),
(λ + nμ)p_k = λp_{k−1} + nμp_{k+1} (n ≤ k ≤ l − 1),
λp_{l−1} = nμp_l,
where l = n + m, has the solutions
p_k = (1/k!)(λ/μ)^k p₀ if 1 ≤ k ≤ n,
p_k = [1/(n! n^{k−n})](λ/μ)^k p₀ if n ≤ k ≤ l,
where p₀ is the probability that there are no calls for service in the system:
p₀ = {Σ_{k=0}^{n−1} (1/k!)(λ/μ)^k + (1/n!)(λ/μ)ⁿ [1 − (λ/nμ)^{m+1}]/(1 − λ/nμ)}⁻¹.
(b) The probability of refusal is
p_l = (λ/μ)^{n+m} p₀/(n! n^m).
(c) The probability that all devices are busy is
p* = Σ_{k=n}^{l} p_k = p_n [1 − (λ/nμ)^{m+1}]/(1 − λ/nμ), where p_n = (1/n!)(λ/μ)ⁿ p₀.
(d), (e) The distribution function F(t) of the waiting time and its moments m₁, m₂ are obtained as in Problem 39.11, the summation now extending over n ≤ k ≤ l.
39.15  m₁ = 264/665, m₂ = 1550/665 ≈ 2.33.

39.16  The system of equations for the limiting probabilities,
mλp₀ = μp₁,
[(m − n)λ + nμ]p_n = (m − n + 1)λp_{n−1} + (n + 1)μp_{n+1} (1 ≤ n ≤ m − 1),
mμp_m = λp_{m−1},
has the solutions
p_n = C_m^n (μ/(λ + μ))^{m−n} (λ/(λ + μ))ⁿ.

39.17  The system of equations for the probabilities P_n(t),
dP_n(t)/dt = −nλP_n(t) + (n − 1)λP_{n−1}(t),
with the initial conditions P_n(0) = δ_{n1}, has the solution
P_n(t) = e^{−λt}(1 − e^{−λt})^{n−1}.
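The law P_n(t) = e^{−λt}(1 − e^{−λt})^{n−1} is geometric in n with success probability e^{−λt}; a sketch (λ and t chosen arbitrarily) checks normalization, the mean e^{λt} of this pure-birth (Yule) process, and the defining differential equation by a central difference:

```python
import math

lam = 0.7          # illustrative rate
def P(n, t):
    p = math.exp(-lam * t)
    return p * (1 - p)**(n - 1)

t = 1.3
# distribution over n = 1, 2, ... sums to 1
assert abs(sum(P(n, t) for n in range(1, 200)) - 1.0) < 1e-9
# mean population size is e^{lam t}
mean = sum(n * P(n, t) for n in range(1, 500))
assert abs(mean - math.exp(lam * t)) < 1e-6
# dP_n/dt = -n lam P_n + (n - 1) lam P_{n-1}
n, h = 5, 1e-6
lhs = (P(n, t + h) - P(n, t - h)) / (2 * h)
rhs = -n * lam * P(n, t) + (n - 1) * lam * P(n - 1, t)
assert abs(lhs - rhs) < 1e-6
```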
39.18
The system of equations
dP₀(t)/dt = μP₁(t),
dP_n(t)/dt = ...,
with the initial conditions P_n(0) = δ_{n1}, is solved with the aid of the generating function G(t, u) = Σ_{n=0}^{∞} P_n(t)uⁿ; G(t, u) satisfies the differential equation
∂G(t, u)/∂t = (λu − μ)(u − 1) ∂G(t, u)/∂u
with the initial condition G(0, u) = u. It has the solution
G(t, u) = [μK + u(1 − (λ + μ)K)]/(1 − uλK),
where
K = (e^{(λ−μ)t} − 1)/(λe^{(λ−μ)t} − μ) if λ ≠ μ, and K = t/(1 + μt) if λ = μ;
thus it follows that
P₀(t) = μK, P_n(t) = (1 − λK)(1 − μK)(λK)^{n−1} (n ≥ 1).
The system of equations dP 0 (t) _
, ( )Pof ( )
~--Aot
dPn(t)
An(t)Pn(t)
------;]( = -
with the initial condition Pn(O) Pn(t) = tn(l
+
+
An-l(t)Pn-l(t)
(n;;.
Bno has the solutions: P 0 (t) = (1
=
at)-
+
a)(l
+
2a)· ·1 ·[1
+
+
at)-lla,
(n- l)aJ.
n.
40. CONTINUOUS MARKOV PROCESSES

40.1  a_j(t, x₁, ..., x_n) = φ_j(t, x₁, ..., x_n); b_jl(t, x₁, ..., x_n) = ψ_j(t, x₁, ..., x_n)ψ_l(t, x₁, ..., x_n).

40.2  a_j = φ_j(t, x₁, ..., x_n), j = 1, ..., n; a_{n+1} = x_{n+2}; a_{n+2} = x_{n+3};
a_{n+3} = −a³x_{n+1} − 3a²x_{n+2} − 3a x_{n+3}; b_{n+3,n+3} = c²; the remaining b_jl = 0.

40.3  U(t) = U₁(t) is a component of a two-dimensional Markov process for which a₁ = x₂, a₂ = −(α² + β²)x₁ − 2αx₂, b₁₁ = c², b₁₂ = −2αc², b₂₂ = 4α²c².

40.4  a_j(t, x₁, ..., x_n) = φ_j(t, x₁, ..., x_n); b_jl = ψ_jl(t, x₁, ..., x_n).

40.5  The Markov process has r + n dimensions; a_j = φ_j(t, x₁, ..., x_r), j = 1, 2, ..., r; a_{r+l} = x_{r+l+1}, l = 1, 2, ..., n − 1;
a_{r+n} = −Σ_{j=1}^{n} c_{r+n+1−j}x_j; b_{r+p,r+q} = c_{r+p}c_{r+q};
the other b_jl = 0; here c_{r+k} = β_{k+m−n} − Σ_j a_{k−j}c_{j+r}, k = n − m, ..., n.
40.7
where ~ 1 (t) and ~ 2 (t) are mutually independent random functions with the property of" white noise." 40.8
where c is determined from the conditions of normalization. For rp(u) = fJ2u 3 f= clexp{-a42f3: Yia
40.9
2a22y~}. a
c11
= iflty) exp { 2 Jo"'
f(y)
-~;
2a2
=
a
·vaf3
J""
e-"'dYJ.
-oo
~~~~ dYJ }.
where cis determined from the condition J~"" f(y) dy = 1. 40.10 Setting U1 = l;;(t), U2 = U1 - U, for U2 we find an equation that is independent of U1 • The Kolmogorov equation for U2 will be of or -
a'{[Y2
fJy2
RC
1 +C F(y 2 )
J} f
2
and its stationary solution is:
f(Y2)
c exp { - ;
=
y~
a 8f 21 7TRC oy~ =
-
- 2;:-
J:
2
2
O
F(YJ) dYJ }.
where cis determined from the condition of normalization. The required probability density f(y) is the convolution of f(Y2) and the normal distribution law with zero expectation. In the particular case f(Y2 )
27TR k y 2 ( 1 - sgn Y2 )} , = c exp { - 7TY~ ~ - 7 4 2
where c1
f(u)
=
~~ { v1 ~ 27T [ 1 +
V27T
+
=
a(l
2v1 + kR ; + v1 + kR) 2
Cl>Cv 1u+ 2J] exp {
LkR
+
1 [1
+
(27T 7T: 1)a2 }
Cl>CV27T
~7TkR + 1)]
+
x exp { 40.11
f(r, y) = aa2:: ra2exp{
(27T
7T(1
+
+ kR)u 2 } } 27TkR + 1)a2 •
-2(aa2a~ ra2J·
The Kolmogorov equation for U = exp {-a V} has the form 2 2 82/ f) -of = -1- [ ( y I n y - az.0 R)ea 2 a 2 12 ] + -1 a~;-· OT RC fJy 2 c oy 2 The stationary solution is: 40.12
(tiio)
f(y) = N- 1 exp {-
(ai~~2 a~
[ y 2 (ln y -
~)
- 2ai0 Re-a 2 a 2 y]},
where
a~ = ~a ea
2 u2
[E1( -a2a 2) - 2ln aa - 0.57721· · ·]
(compare Stratonovich, 1961, p. 243).
458
ANSWERS AND SOLUTIONS
Jory rp(Y!) d"l ~' )
f(y) = c exp {- 22
40.13
a
where
40.14
The Kolmogorov equation is:
:y {[a(T) + ~(r)y]f} -
~+
~ :;2
[y 2 (r)f] = 0;
the equation for the characteristic function E( r, z) is: oE oE 07 - iza(r)E- zWr) oz
Y
=
exp
{f a~
40.15 of
"
vT
+
i
2
z 2E = 0,
=
f
f f
+
Wr1) dr1}{x
exp {- 2
2
. } l - 21 a~z 2 + izy , (
E(r, z) = exp
exp [-
f ~(r1) 2
dr1]a(r2) dr 2 } ,
{J( r 1) dr1} y 2 ( r 2 ) dr 2 •
The Kolmogorov equation is: 1 0 1 i5a~ 02/ - -;:r (yf) - -2 y 2 8:2 1 0 y 0 y
a
y
= X exp
{
-
--r;;-t r'1 T
=
•
f
0,
1
=
.
ay V
1-
27T
exp
{
= xe-n
4~~ 2 { (1 - ~; e- 2h'1)
+
+ : 0 sinwo(r-
t)
~; e- 2h'1(h COS 2 Wo/31 Wo
of o c2 o2f - - a-(fsgny)- - -2 or oy 2 oy
where fL =
=
= Vk2
-
- Wo Sin 2wael)};
h2 •
0.
~ (:2 - ~)
and L~;l(x) are the generalized Laguerre polynomials. W(T)
=
w( T1, Y1) =
fP
;
t)];
40.18
40.19
:1)2}
iMt), we find that the coefficients of the
=
T1 = T - f; 40.17
2ay2
J] .
where
at =
(y -
i5a~ [ { 2( T - t) l a~ = 2To 1 - exp To
-
40.16 Setting U1(t) = U(t), U2 (t) Kolmogorov equation are:
y1
-
w(aT, Y1) dy1;
;~ exp {- >.;'r1} exp { ·- ~ y'f} Da(Yl)c;,
459
ANSWERS AND SOLUTIONS
where Da(x) is an even solution of the Weber equation 2 (the parabolic cylinder function): J2y
dx2 a 1 is
(1
)
+ 4 x2
- a y
a root of the equation Da(f3) V2a
Y1 = - - y; c
N; =
40.20
W(T) w( T1, Y1)
=
=
{3
fP f"'
=
>.'f
= 0;
0, T1
0.5;
-
= aT;
v';:;
V2a
1
c, = - N Da1(0);
= - - Uo;
c
exp{
= a1
c
J
-~y~}Da1 (yl)dyl.
w(aT, Y1) dy1;
1 ~1 exp {- AJTl} exp {- ~ y~} V(y )c 1
1,
where Va(x)
=
J;2-a1 2{2-1!4r(~- ~a)D~1 >(x)sin7T(~ +~a) 2114r(~- ~a)D~2 >(x)cos7T(~ +~a)};
+ D~1 >(x)
and D~2 >(x) are the even and odd solutions of the Weber equation 2 : J2y
dx2
(1
a1 is the root of the equation Vaif3) V2a
Y1 = - - y; c
N, =
)
+ 4 x2
{3
J_"'oo
=
- a y
0; "A'f
V2a
= --
c
=
a1
0. 5;
-
c, = -
= txT;
T1
v';:;
ua;
exp {-
= 0;
c
~ Y~} Va/Yl) dy
1
N Va1(0); f
1 •
IX METHODS OF DATA PROCESSING 41.
DETERMINATION OF THE MOMENTS OF RANDOM VARIABLES FROM EXPERIMENTAL DATA 41.1
10.58 m. 41.2 (a) 814.87 sq. m., 41.3 f5 = 424.73 m.jsec., av = 8.84 m./sec. 41.4 f5 = 33 m./sec., Ev = 3.07 m.jsec. 41.5 x = 404.85 sq. m., ax = 133 sq. m. 41.6 For P(A) = 0.5, Dmax = 1/4n. 41.7
D[ar] = 2(n -;; 1) D 2 [X], n
(b) 921.86 sq. m.
D[a~] = ~1 D[X]. n-
2 See Tables of Weber Parabolic Cylinder Functions in Fletcher, A., eta!.: An Index of Mathematical Tables. Vol. II. Oxford, England, Blackwell Scientific Publications, Ltd., 1962.
460
ANSWERS AND SOLUTIONS 41.8 4
Sk
1.1°
41.11
k
= 0.85, E~ = 2.70. =
I
k = 2 (n _ I)·
41.9
J
(a) k
2n(n7I'- I)·
=
~n j:::, 2
I
(b) k =
J [I + ~ n
41.12
A1 =
ltfa'f, where
1)]
(n -
It is an arbitrary number.
I x =n
41.13
.
n
L
k~l
xk,
the values of kn being given in Table 23. 41.14 x = 48.3I m., ji = 53.3I m., Ex = 10.75 m., 1 n 1 n 41.15 x = - L xk, Yk, P = 1i E~ =
where
pV2
1 n - 1
a~=--
k~l
Va~ cos 2 a
+
-
=
a~=
x) 2 ;
-=- :L I
n
1
+ a~ sin2 a;
kxy sin 2a
n
L (xkk~l
kxy
= 12.50 m.
:L
k~l
n
Ey
_1_ n - 1
cxk - x)Ch -
k=l n and angle a is determined from the equation
i
(yk- y)2;
k=l
B
tan 2a = -22kxy -2. U x - Uy
41.16
x=
1m., .P = 40 m., E~ =23m., k _
41.17
En=
1.07 m.
r~fl) J~
First, show that the probability density of the random variable a is determined by the formula { n(a)2} ( n )
r(n; 1)
41.18
See Table I32. TABLE
It
1-10
J;;
0.107
F(x)
0.107
11-20
21-30
31-40
41-50
132 51-60
61-70
71-80
81-90
91-100
0.087
0.106
--- ---- --- --- --- --- ------
0.100
0.127
0.087
0.093
0.127
0.093
0.073
- -- ---- --- --- --- --- --- --- ---·
x=
0.207
0.334
48.50, D[X] = 829.18.
0.421
0.514
0.641
0.734
0.807
0.894
1.0
461
ANSWERS AND SOLUTIONS 41.19
See Table 133. 133
TABLE
x=
II
0--3
3-6
6-9
9-12
12-15
15-18
18-21
21-24
P1
0.000
0.002
0.006
0.040
0.070
0.114
0.156
0.164
II
24-27
27-30
30--33
33-36
36-39
39-42
42-45
PI
0.180
0.122
0.108
0.030
0.004
0.004
0.000
22.85, D[X] = 40.08. 41.20 ai and a~ are unbiased estimates of the variance 2a4 D[ - 2 ] _ 3n- 4 4 • D[at] = n- 1; a2 - (n - 1)2 a '
that is, D[ai] < D[an (see Table 134) for any n > 2. 134
TABLE
n D[a~] D[a~]
42.
3 0.80
5
7
10
15
00
0.73
0.71
0.69
0.68
0.67
- -- -- - - -- -
CONFIDENCE LEVELS AND CONFIDENCE INTERVALS 42.1
(92.36 m., 107.64 m.).
42.2
x=
116 ; 2 m., (115.53 m.; 116.57 m.).
0.55; 0.34. 42.4 (a) x = 10.57 m., D'x = 2.05 m., (b) 0.26, (c) 0.035. (5.249 sec., 5.751 sec.); (1.523 sec., 1.928 sec.). (867.6 m.jsec., 873.0 m./sec.). 42.7 Not less than 11 measurements. (24,846 m., 25,154 m.), (130.7 m., 294.9 m.). (4.761·10- 10 , 4.805·10- 10), x = 4.783·10- 10 • (a) (420.75 m./sec., 428.65 m.fsec.), (6.69 m.fsec., 12.70 m./sec.), (b) 0.61, 0.76. 42.11 Not less than three range finders. 42.12 Not less than 15 measurements. 42.13 0.44, 0.55, 0.71, 0.91. 42.14 See Table 135.
42.3 42.5 42.6 42.8 42.9 42.10
TABLE
n u ii
=20m. =20m.
135
3
5
10
25
±18.98 m. ±33.72 m.
± 14.71 m. ±19.05 m.
± 10.40 m. ± 11.59 m.
±6.58 m. ±6.84 m.
462
ANSWERS AND SOLUTIONS 42.15
i = 425 hours, (270.70 hours, 779.82 hours).
42.16
(410.21 hours, 1036.56 hours).
42.18
(0.123, 0.459).
42.20
(0.000, 0.149), (0.000, 0.206), (0.000, 0.369).
42.21
For marksman A (0.128, 0.872), for marksman B (0.369, 0.631).
42.22
(1.15, 3.24).
43.
42.23
(50. 75 hours, 85.14 hours).
(0.303, 0.503), (0.276, 0.534).
(3.721, 4.020).
For ex = 0.99 for r12 (0.42, 0.68), for r1s (0.13, 0.47), for r14 (0.21, 0.53),
42.25
42.26
42.19
42.17
42.24
(0, 4.6).
for ex = 0.95 for r 12 (0.45, 0.65), for r1 3 (0.17, 0.43), for r14 (0.25, 0.49).
9.82 < X < 11.18, 1.624 < ax < 2.632, 70.58 < y < 77.42, 8.12 < ay < 13.16, 0.369 < fxy < 0.796.
43. TESTS OF GOODNESS-OF-FIT

43.1 x̄ = 0.928, χ²q = 2.172, k = 4, P(χ² ≥ χ²q) = 0.705. The deviation is insignificant; the hypothesis that the observations agree with a Poisson distribution law is not contradicted.
43.2 x̄ = 1.54, χ²q = 7.953, k = 6, P(χ² ≥ χ²q) = 0.246. The deviation is insignificant.
43.3 x̄ = 5, p̃ = 0.5, χ²q = 3.156, k = 9, P(χ² ≥ χ²q) = 0.944. The hypothesis that at each shot the probability of hitting is the same is not disproved.
43.4 χ²q = 10.32, k = 7, P(χ² ≥ χ²q) = 0.176. The deviations are insignificant.
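The probabilities P(χ² ≥ χ²q) quoted throughout this chapter can be checked numerically. A hedged sketch (the closed form used here holds only for an even number of degrees of freedom; names are illustrative):

```python
import math

def pearson_statistic(observed, expected):
    # Pearson's chi-square statistic: sum of (m_i - n*p_i)^2 / (n*p_i)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def chi2_tail_even(x, k):
    # P(chi^2 >= x) for k degrees of freedom, k EVEN:
    # exp(-x/2) * sum_{i=0}^{k/2 - 1} (x/2)^i / i!
    term, total = 1.0, 1.0
    for i in range(1, k // 2):
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

# answer 43.1: chi_q^2 = 2.172 with k = 4 gives P close to 0.705
p = chi2_tail_even(2.172, 4)
```

For odd k the tail must be computed through the incomplete gamma function (or read from tables 17T-18T), so this sketch covers only the even-k entries.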
43.5 D_hyp = 0.1068, λ_hyp = 1.068, P(λ_hyp) = 0.202; D_bin = 0.1401, λ_bin = 1.401, P(λ_bin) = 0.039. The hypothesis that the observations agree with a hypergeometric distribution law is not disproved; the deviation of the statistical distribution from the binomial is significant, and the hypothesis of a binomial distribution should be rejected.
43.6 x̄ = 11.8 g., σ̃ = 4.691 g., k = 2, χ²q = 1.16, P(χ² ≥ χ²q) = 0.568. The hypothesis that the observations obey a normal distribution is not disproved.
43.7 x̄ = 22.85, σ̃ = 6.394, k = 6, χ²q = 5.939, P(χ² ≥ χ²q) = 0.436. The hypothesis that the statistical distribution agrees with a normal distribution is not disproved since the deviations are insignificant.
43.8 M[Z] = 4.5, D[Z] = 8.25, where Z is a random digit; M[X] = 22.5, D[X] = 41.25, σ = 6.423, D₀ = 0.0405, λ = 0.6403, P(λ) = 0.807. The hypothesis that the statistical distribution agrees with a normal distribution is not disproved.
43.9 χ²q = 5.012, k = 9, P(χ² ≥ χ²q) = 0.831. The deviations are insignificant; the hypothesis that the first 800 decimals of the number π agree with a uniform distribution law is not disproved.
43.10 D₀ = 0.0138, λ = 0.3903, P(λ) = 0.998. The hypothesis that the first 800 decimals of π obey a uniform distribution law is not disproved.
43.11 χ²q = 4, k = 9, P(χ² ≥ χ²q) = 0.91. The hypothesis that the observations obey a uniform distribution law is not rejected.
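Answers such as 43.8 and 43.10 use the Kolmogorov statistic λ = D√n and the tail probability P(λ) = 1 − K(λ) (table 25T). A small sketch of the series, with an illustrative function name:

```python
import math

def kolmogorov_p(lam, terms=50):
    # P(lambda) = 1 - K(lambda), where
    # K(lambda) = sum_{k=-inf}^{+inf} (-1)^k exp(-2*k^2*lambda^2)
    #           = 1 + 2 * sum_{k>=1} (-1)^k exp(-2*k^2*lambda^2),
    # so 1 - K(lambda) = -2 * sum_{k>=1} (-1)^k exp(-2*k^2*lambda^2)
    s = 0.0
    for k in range(1, terms + 1):
        s += (-1) ** k * math.exp(-2.0 * k * k * lam * lam)
    return -2.0 * s

# answer 43.8: lambda = 0.6403 gives P close to 0.807
p = kolmogorov_p(0.6403)
```

The series converges very quickly for λ above about 0.3, so a few dozen terms are far more than enough.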
43.12 D₀ = 0.041, λ = 0.5021, P(λ) = 0.963. The hypothesis that the observations agree with a uniform distribution is not rejected since the deviations are insignificant.
43.13 χ²q = 24.9, k = 9, P(χ² ≥ χ²q) = 0.0034. The deviations are significant; the hypothesis that the experimental data agree with a uniform distribution should be rejected. The results of the computations contain a systematic error.
43.14 x̄ = 8.75, σ̃ = 16.85, χ²qH = 11.86, kH = 5, P(χ² ≥ χ²qH) = 0.0398; an estimate δ̃ = √6 σ̃ = 41.28 is obtained for the parameter δ of the Simpson distribution law; χ²qC = 17.06, kC = 5, P(χ² ≥ χ²qC) = 0.00402. The hypothesis that the observations agree with the Simpson distribution is rejected, and the hypothesis that they agree with a normal distribution may be considered not rejected.
43.15 x = log y, x̄ = −0.1312, D̃ₓ = 0.3412, σ̃ₓ = 0.5841, n = 9, k = 6, P(χ² ≥ χ²q) = 0.890. The hypothesis that the experimental data obey a logarithmically normal distribution law is not disproved (the deviations are insignificant).
43.16 x̄ = 2.864, m̃₂ = 11.469, σ̃ = √(m̃₂/(1 + ν²)) = 2.094, ã = νσ̃ = 2.662, where ν is the root of the equation T(ν) = x̄/(2√m̃₂) = 0.4229 with T(ν) = [φ(ν) + 0.5νΦ(ν)]/√(1 + ν²); for ν = 1.2 we have T(ν) = 0.4200, for ν = 1.3, T(ν) = 0.4241, whence ν ≈ 1.271; χ²q = 5.304, k = 9, P(χ² ≥ χ²q) = 0.894.
The hypothesis that X is the absolute value of a normally distributed variable is not disproved.
43.17 x̄ = 87.46, σ̃ = 2.471, α̃ = 80.02, β̃ = 94.90, χ²qH > 500, kH = 7, P(χ² ≥ χ²qH) ≈ 0. The probability density Ψ(x) for the convolution of a normal and a uniform distribution has the form

Ψ(x) = (1/(2·14.88)) [Φ((x − 80.02)/2.471) + Φ((94.90 − x)/2.471)];

χ²qc = 2.949, kc = 6, P(χ² ≥ χ²qc) = 0.814. The hypothesis that the experimental data obey a normal distribution law is disproved. The hypothesis that the experimental data agree with the convolution of a normal distribution and a uniform one is not contradicted.
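The density Ψ(x) of answer 43.17 is easy to evaluate directly. A sketch assuming Φ is the Laplace function, Φ(z) = erf(z/√2), with the constants quoted in the answer; the function names are illustrative:

```python
import math

def laplace_phi(z):
    # Laplace function: (2/sqrt(2*pi)) * integral_0^z exp(-t^2/2) dt
    return math.erf(z / math.sqrt(2.0))

def psi(x, a=80.02, b=94.90, sigma=2.471):
    # convolution of N(0, sigma^2) with the uniform law on (a, b)
    return (laplace_phi((x - a) / sigma)
            + laplace_phi((b - x) / sigma)) / (2.0 * (b - a))
```

Well inside (α̃, β̃) the density is close to the uniform value 1/(β̃ − α̃) = 1/14.88, and it integrates to 1 over the whole axis.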
43.18 r̄ = 50.13, σ̃ = r̄√(2/π) = 40.0, χ²q = 2.73, k = 8, P(χ² ≥ χ²q) = 0.95. The hypothesis that the observations agree with a Rayleigh distribution is not contradicted.
43.19 x̄ = 508.6, σ̃ = 123.7, χ²qH = 2.95, kH = 7, P(χ² ≥ χ²qH) = 0.888. The parameter ã of a Maxwell distribution is determined from the formula ã = (x̄ − x₀)/1.596 = 193.4; χ²qM = 1.383, kM = 7, P(χ² ≥ χ²qM) = 0.986. The observations fit a Maxwell distribution better than they fit a normal distribution.
43.20 t̄ = 871.5 hours, λ̃ = 0.001148, k = 8, χ²q = 4.495, P(χ² ≥ χ²q) = 0.808. The hypothesis that the observations agree with an exponential distribution law is not disproved (the deviations are insignificant).
43.21 x̄ = 394.5 hours, σ̃ = 228.1 hours, vₘ = 0.5782, m̃ = 1.789, bₘ = 0.8893, χ²q = 13.44, k = 7, P(χ² ≥ χ²q) = 0.0629. The hypothesis on the agreement of the observations with a Weibull distribution is not disproved.
43.22 The arctan distribution law is F(z) = ∫_{−∞}^z f(z) dz = 1/2 + (1/π) arctan (z/2); D₀ = 0.0195, λ = 0.6166, P(λ) = 0.842. The hypothesis that the statistical distribution of the variable Z agrees with a Cauchy distribution and, consequently, that of the variable Y with a normal one is not disproved.
43.23 The arcsine distribution function is F(z) = 1/2 + (1/π) arcsin (z/a); D₀ = 0.0290, λ = 0.917, P(λ) = 0.370. The hypothesis that the pendulum performs harmonic oscillations is not disproved.
43.24 μ̃₂ = 0.1211, k = 2, χ²q = 1.629, P(χ² ≥ χ²q) = 0.59. The deviations are insignificant; the hypothesis that the observed values of qⱼ obey a chi-square distribution with k′ = 19 degrees of freedom and, consequently, the hypothesis on the homogeneity of the series of variances are not disproved. Hint: the values of qⱼ should be arranged in increasing order and divided into intervals so that each interval contains at least five values qⱼ.
43.25 F(η_l) = (l − 1/2)/n, D = 0.126, λ = 0.797, P(λ) = 0.549. The hypothesis that the observed values obey a Student distribution and, consequently, the hypothesis that the observed values of xⱼ obey a normal distribution law are not rejected.
43.26 x̄ = 115.3, σ̃ = 21.43, χ²qH = 10.20, kH = 10, P(χ² ≥ χ²qH) = 0.43, μ̃₃ = 2046, μ̃₄ = 6137·10², Sk = 0.2079, Ex = −0.0912. The distribution function for a Charlier-A series is: F(z) = 0.5 + 0.5
number of the lot is rejected. A systematic underestimate of dimensions is characteristic for the second lot.

44. DATA PROCESSING BY THE METHOD OF LEAST SQUARES

44.1 ỹ = 0.609 + 0.1242t, M₀,₀ = 0.3896, M₁,₁ = 0.00001156, σ̃² = 1.464, σ̃²_a₀ = 0.5704, σ̃²_a₁ = 0.0000169.
44.2 ỹ = 0.679 + 0.124t, σ̃² = 1.450, σ̃²_a₀ = 0.5639, σ̃²_a₁ = 0.00001672. The agreement with the results of Problem 44.1 is fully satisfactory. The accuracy of the result in Problem 44.2 is higher than in Problem 44.1, since in solving 44.1 a large number of computations was performed, among them subtractions of approximately equal numbers.
44.3 h̃ = 9.14 + 65.89t + 489.28t², σ̃² = 0.001245, σ̃_g = 1.177 cm./sec.².
44.4 h̃ = 65.021 + 5.176 P₁,₁₃(x) + 1.087 P₂,₁₃(x), where x = 30t − 1; or h̃ = 9.133 + 65.895t + 489.28t², σ̃_g = 1.167 cm./sec.².
44.5 ỹ = 0.8057 + 0.2004x − 0.1018x², σ̃² = 0.0002758, σ̃²_a₀ = 0.00009192, σ̃²_a₁ = 0.000009848, σ̃²_a₂ = 0.000003283.
44.6 ỹ = 26.97 + 0.3012 P₁,₁₆(t) = 29.38 − 0.3012t;
ỹ = 26.97 + 0.3012 P₁,₁₆(t) − 0.000916 P₂,₁₆(t) + 0.01718 P₃,₁₆(t) = 29.82 − 0.7133t + 0.06782t² − 0.002864t³,
where P_{k,16} are the tabulated values of the Chebyshev polynomials. For a linear dependence σ̃ = 0.3048; for α = 0.90 we have 0.2362 < σ < 0.4380. For a dependence of third degree σ̃ = 0.1212; for α = 0.90 we have 0.0924 < σ < 0.1800.
44.7 ỹ = 21.07 + 5.954x, σ_a₀ = 2.90, σ_a₁ = 0.0889, K_{a₀,a₁} = −0.2041. The confidence intervals for a_k for α = 0.90 are: 14.3 < a₀ < 27.9, 5.75 < a₁ < 6.16; σ̃²ỹ(x) = 2.900 − 0.4082x + 0.0889x². The confidence limits for y = F(x) for α = 0.90 are given in Table 136.
TABLE 136

  i                0       1       2        3        4
  ỹᵢ − γσ̃(xᵢ)    45.3    72.7    140.0    258.4    366.8
  ỹᵢ + γσ̃(xᵢ)    57.3    83.3    148.7    268.8    383.6

44.8 ỹ = 0.3548 + 0.06574x + 0.00130x²; σ_a₀ = 0.0147; σ_a₁ = 0.0106; σ_a₂ = 0.00156.
44.9 ỹ = 1.1188 + 8.9734/x; σ_a₀ = 0.2316, σ_a₁ = 0.6157, K_{a₀,a₁} = −0.0854; for α = 0.95 we have 1.065 < a₀ < 1.172, 8.831 < a₁ < 9.115. The confidence limits for y = F(x) for α = 0.95 are given in Table 137.
TABLE 137

  xᵢ               1       2      3      5      10     20     30     50     100    200
  ỹᵢ − γσ̃ᵧ(xᵢ)   10.03   5.55   4.06   2.87   1.97   1.52   1.37   1.25   1.16   1.11
  ỹᵢ + γσ̃ᵧ(xᵢ)   10.27   5.66   4.16   2.96   2.06   1.62   1.47   1.35   1.26   1.22
σ̃²ᵧ(x) = 0.05364 − 0.1708/x + 0.3790/x².
44.10 Ũ = 100.8 e^{−0.3127t}; 89.97 < U₀ < 112.9, 0.2935 < a < 0.3319.
44.11 θ̃_B = 204°.9 − 34′.05 t, σ_a₀ = 4′.36.
44.12 p̃ = 0.1822 exp {−(x − 117.25)²/(2·462.91)}, |ε_max| = 0.04633.
44.13 φ′ = 62° is chosen according to the formula y = a′ sin (ωt − φ′), where a′ = 33; ỹ = 30.75 sin (ωt − 59°59′), |ỹ − y|_max = 18°.4.
44.14 ỹ = 1.0892 − 1.2496 cos x + 2.0802 sin x + 0.9795 cos 2x + 0.4666 sin 2x, |ε_max| = 0.24 for x = 120°.
44.15 ỹ = −3.924 + 1.306x; |ε_max| = 1.41.
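Answers 44.1-44.7 all come from solving the least-squares normal equations. A minimal sketch for the linear case ỹ = a₀ + a₁x, with the residual variance σ̃² computed as in these answers; the data and function names are illustrative, not from the problems:

```python
def fit_line(xs, ys):
    # solve the normal equations for y = a0 + a1*x
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    d = n * sxx - sx * sx
    a1 = (n * sxy - sx * sy) / d
    a0 = (sy - a1 * sx) / n
    return a0, a1

def residual_variance(xs, ys, a0, a1):
    # unbiased estimate: sum of squared deviations over (n - 2)
    rss = sum((y - a0 - a1 * x) ** 2 for x, y in zip(xs, ys))
    return rss / (len(xs) - 2)
```

The Chebyshev-polynomial forms of 44.4 and 44.6 are algebraically equivalent fits; they are preferred in hand computation because the normal equations decouple, avoiding the loss of accuracy from subtracting nearly equal numbers noted in 44.2.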
45. STATISTICAL METHODS OF QUALITY CONTROL
45.1 For a single sample α = 0.0323, β = 0.0190; for a double sample α = 0.0067, β = 0.0100. The average expenditure of items for 100 lots in the case of a double sample is 48.36·15 + 51.64·30 = 2275 items; the expenditure for 100 lots in the case of single sampling is 2200 items. The expenditure of items is almost the same, but in the case of double sampling the probabilities of errors α and β are considerably smaller. A = 30.38, B = 0.01963, log A = 1.4825, log B = −1.7069. For a good lot, if p = 0, n_min = 13; log γ(12, 0) = −1.6288, log γ(13, 0) = −1.7771. For a defective lot, when p = 1, n_min = 2; log γ(1, 1) = 0.8451, log γ(2, 2) = 1.9590.
45.2 For a single sample α = 0.049, β = 0.009; for a double sample α = 0.046, β = 0.008; A = 19.8, B = 0.01053, h₁ = −3.758, h₂ = 2.424, h₃ = 0.02915; M[n | p₀] = 244.2, M[n | p₁] = 113.6, M[n]_max = 321.9. For 100 lots in the case of double sampling, the average expenditure of items is 35.1·220 + 64.9·440 = 36,278 items; in the case of single sampling, 41,000 items. In the case of sequential analysis, the average expenditure for 100 good lots is not greater than 24,420 items.
45.3 The normal distribution is applicable: α = 0.0023, β = 0.0307, A = 415.9, B = 0.03077, h₁ = −4.295, h₂ = 7.439, h₃ = 0.1452. For a good lot, if p = 0, n_min = 30; for a defective lot, if p = 1, n_min = 9; M[n | 0.10] = 94.52, M[n | 0.20] = 128.9, M[n]_max = 257.4, c = 2.153, P(n < 300) = 0.9842, P(n < 150) = 0.8488.
45.4 (a) n₀ = 285, ν = 39 (a normal distribution is applicable); A = 98, B = 0.0202, h₁ = −4.814, h₂ = 5.565, h₃ = 0.1452; M[n | p₀] = 102.1, M[n | p₁] = 101.0, M[n]_max = 219.4. (b) n₀ = 65, ν = 8; A = 8, B = 0.2222, h₁ = −1.861, h₂ = 2.565, h₃ = 0.1452; M[n | p₀] = 21.6, M[n | p₁] = 38.6, M[n]_max = 38.6.
45.5 Apply the passage from a Poisson distribution to a chi-square distribution: ν = 9, n₀ = 180, A = 18, B = 0.1053, h₁ = −2.178, h₂ = 2.796, h₃ = 0.05123; M[n | p₀] = 90.86, M[n | p₁] = 79.82, M[n]_max = 125.2. For a good lot, if p = 0, n_min = 43; for a defective lot, if p = 1, n_min = 3.
45.6
z₀ = (z_{1−α} z_{1−p₁} + z_{1−β} z_{1−p₀})/(z_{1−α} + z_{1−β}),
n₀ = (1 + z₀²/2) [(z_{1−α} + z_{1−β})/(z_{1−p₀} − z_{1−p₁})]²,
where z_p are the quantiles of the normal distribution: F(z_p) = 0.5 + 0.5Φ(z_p) = p; z₀.₉₇ = 1.881, z₀.₉₂ = 1.405, z₀.₉₅ = 1.645, z₀.₉₀ = 1.282, z₀ = 1.613, n₀ = 87. The single sample size in the case of magnitude control for the same α, β, p₀, p₁ is considerably smaller than in the case of control of the proportion of defectives.
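The quantile formula of 45.6 is easy to evaluate. A sketch using the quantiles quoted in the answer (z₀.₉₅ = 1.645 for 1 − α, z₀.₉₀ = 1.282 for 1 − β, z₀.₉₇ = 1.881 for 1 − p₀, z₀.₉₂ = 1.405 for 1 − p₁); the function name is illustrative:

```python
def single_sample_size(z_a, z_b, z_p0, z_p1):
    # z0 and n0 from the quantile formula of answer 45.6
    z0 = (z_a * z_p1 + z_b * z_p0) / (z_a + z_b)
    n0 = (1.0 + z0 * z0 / 2.0) * ((z_a + z_b) / (z_p0 - z_p1)) ** 2
    return z0, n0

z0, n0 = single_sample_size(1.645, 1.282, 1.881, 1.405)
```

With these inputs the computation reproduces z₀ ≈ 1.613 and n₀ ≈ 87 as stated.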
45.7 In the case of a binomial distribution law (with passage to the normal distribution law), α = 0.1403, β = 0.1776, n₀ = 49, ν = 6, A = 5.864, B = 0.2065, h₁ = −1.945, h₂ = 2.182, h₃ = 0.1452, M[n | p₀] = 30.3, M[n | p₁] = 26.4, M[n]_max = 34.2. The average expenditure in the case of double sampling for 100 lots is 64.34·30 + 35.66·60 = 4070 items. In the case of single sampling, the expenditure of items for 100 lots is 4900 items; in the case of sequential analysis, the average expenditure for 100 good lots is not greater than 3030 items. In the case of a Poisson distribution, α = 0.1505, β = 0.2176, n₀ = 49, ν = 6 (passage to a chi-square distribution).
45.8 Apply the normal distribution law: n₀ = 286, ν = 15, A = 9900, B = 0.01, h₁ = 3.529, h₂ = 7.052, h₃ = 0.04005, M[n | 0.02] = 176.0, M[n | 0.07] = 231.9, M[n]_max = 647.1, c = 3.608, P(n < M[n | 0.02]) = 0.5993, P(n < 2M[n | 0.02]) = 0.9476, P(n < n₀) = 0.8860.
45.9 For n₀ = 925, ν = 12. For t₀ = 1000 hours: ln A = 2.197, ln B = −2.197, t₁ = 237.6, t₂ = −237.6, t₃ = 74.99; M[T | 10⁻⁵] = 613.2, M[T | 2·10⁻⁵] = 482.9, M[T]_max = 750.6. See Table 138.

TABLE 138

  t₀, hours    500     1000    2000    5000
  n₀           1849    925     463     185
45.10 For the method of single sampling, apply the passage from a Poisson distribution to a chi-square distribution: ν = 6, n₀ = 122, A = 184, B = 0.08041, h₁ = −1.487, h₂ = 3.077, h₃ = 0.0503. For a good lot, if p = 0, n_min = 30; for a defective lot, if p = 1, n_min = 4. M[n | 0.02] = 48.3, M[n | 0.10] = 54.6, M[n]_max = 95.9, C = 5.286; P(n < n₀) = 0.982.
45.11 For a double sample α = 0.001486, β = 0.0009152; for a single sample n₀ = 62, ν = 13 (the passage to the normal distribution law); A = 671.0, B = 0.0009166, h₁ = −4.446, h₂ = 4.043, h₃ = 0.2485, M[n | a₀] = 29.2, M[n | a₁] = 16.0, M[n]_max = 70.7. The average expenditure of potatoes per 100 lots in the case
of double sampling is 62.88·40 + 37.12·60 = 4743 items. The expenditure of potatoes per 100 lots in the case of single sampling is 6200 items. In the case of sequential analysis, the average expenditure per 100 good lots is not greater than 2920 items.
45.12 For a double sample, α = 0.0896, β = 0.0233; for a single sample, n₀ = 15, ν = 12.45; A = 10.90, B = 0.02560, h₁ = −977.7, h₂ = 637.2, h₃ = 184.9; M[n | a₀] = 9.81, M[n | a₁] = 2.78, M[n]_max = 10. In the case of double sampling, the average expenditure of resistors per 100 good lots is 85.66·13 + 14.44·26 = 1488 items; in the case of single sampling, 1500 items; in the case of sequential analysis, not more than 981 items.
45.13 In the case of single sampling, α = 0.0000884, β = 0.00621; B = 0.00621, A = 1.124·10⁴, h₁ = 6.506, h₂ = −11.94, h₃ = 5.15; M[n | g₀] = 26.02, M[n | g₁] = 47.32, M[n]_max = 121.4, C = 2.542, P(n ≤ 300) > 0.99 (< 0.999); P(n ≤ 150) = 0.9182.
45.14 n₀ = 86, ν = 66.7 hours, A = 999, B = 0.001001, h₁ = 690.8, h₂ = −690.8, h₃ = 69.33, λ* = 0.01442, M[n | λ₀] = 22.48, M[n | λ₁] = 35.67, M[n]_max = 99.31.
45.15 For a single control of the proportion of unreliable condensers, n₀ = 246, ν = 5. For a sequential reliability control of condensers, A = 9999, B = 0.0001, h₁ = 1152·10⁴, h₂ = −1152·10⁴, h₃ = 6384·10², λ* = 0.000001566.
45.16 t_r = 952.6 hours, ν = 72.8 hours, ln A = 2.197, ln B = −2.197,
t₁ = T₀T₁ ln A/(T₀ − T₁) = 219.7 hours, t₂ = T₀T₁ ln B/(T₀ − T₁) = −219.7 hours,
t₃ = T₀T₁ ln (T₀/T₁)/(T₀ − T₁) = 69.3 hours.
For the poorer of the good lots (T̃ = T₀ = 100), t_min = 715.7 hours; for the better of the defective lots (T̃ = T₁ = 50), t_min = 569.2 hours.
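Most of the constants in this chapter (A, B, h₁, h₂, h₃) come from Wald's sequential probability ratio test. A hedged sketch for control of the proportion of defectives, checked against 45.10, where p₀ = 0.02 and p₁ = 0.10; the risks α = 0.005, β = 0.08 used below are an inference that reproduces A = 184 and B ≈ 0.0804, not values stated in the answer:

```python
import math

def wald_thresholds(alpha, beta):
    # Wald's stopping bounds: A = (1 - beta)/alpha, B = beta/(1 - alpha)
    return (1.0 - beta) / alpha, beta / (1.0 - alpha)

def sprt_lines(p0, p1, alpha, beta):
    # boundary lines for the number of defectives d after n items:
    # accept while d <= h1 + h3*n, reject once d >= h2 + h3*n
    A, B = wald_thresholds(alpha, beta)
    g = math.log(p1 / p0) + math.log((1.0 - p0) / (1.0 - p1))
    h1 = math.log(B) / g
    h2 = math.log(A) / g
    h3 = math.log((1.0 - p0) / (1.0 - p1)) / g
    return h1, h2, h3
```

The same thresholds A and B applied to likelihood ratios for Poisson, normal or exponential observations give the corresponding constants in 45.9 and 45.13-45.16.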
46. DETERMINATION OF PROBABILITY CHARACTERISTICS OF RANDOM FUNCTIONS FROM EXPERIMENTAL DATA

46.1 One should prove that if x̃ = (1/T) ∫₀ᵀ x(t) dt, then M[x̃] = x̄ and lim_{T→∞} D[x̃] = 0.
46.2 No, since lim_{T→∞} M[S̃ₓ(ω)] = Sₓ(ω), but D[S̃ₓ(ω)] = S²ₓ(ω) and, consequently, does not tend to zero as T increases.
46.3 D[K̃ₓ(τ)] = (2/(T − τ)²) ∫₀^{T−τ} (T − τ − τ₁)[K²ₓ(τ₁) + Kₓ(τ₁ + τ)Kₓ(τ₁ − τ)] dτ₁.

46.4
M[K̃₁(τ)] = K(τ) − (2/(T − τ)²) ∫₀^{T−τ} (T − τ − τ₁)K(τ₁) dτ₁;
M[K̃₂(τ)] = K(τ) − (2/(T − τ)²) ∫₀^{T−τ} (T − τ − τ₁)K(τ + τ₁) dτ₁;
D[K̃₁(τ)] = (2/(T − τ)²) ∫₀^{T−τ} (T − τ − τ₁)[K²(τ₁) + K(τ₁ + τ)K(τ₁ − τ)] dτ₁;
D[K̃₂(τ)] = (2/(T − τ)²) ∫₀^{T−τ} (T − τ − τ₁)[K²(τ₁) + K(τ₁ + τ)K(τ₁ − τ) + K(τ)K(τ + τ₁) + K(τ)K(τ₁ − τ)] dτ₁
  − (2/(T − τ)³) ∫₀^{T−τ} ∫₀^{T−τ} ∫₀^{T−τ} [K(t₂ − t₁)K(t₃ − t₁) + K(t₂ − t₁ + τ)K(t₃ − t₁ − τ)] dt₁ dt₂ dt₃
  + (4/(T − τ)⁴) {∫₀^{T−τ} (T − τ − τ₁)K(τ₁) dτ₁}².

46.5 D[x̃] = (2σ²ₓ/(αT)) [1 − (1 − e^{−αT})/(αT)].

46.6 D[S̃(ω)] = (1/(2π²T²)) ∫₀ᵀ {∫₀ᵀ (T − t)[K(t + η) + K(t − η)] sin (T − η)ω dη + |∫₀ᵀ e^{−iηω} K(t − η) dη|²} dt.
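For the exponentially correlated case of 46.5, the closed form can be checked against the general relation D[x̃] = (2/T²) ∫₀ᵀ (T − τ)K(τ) dτ. A sketch with illustrative names and symbolic parameters σ², α:

```python
import math

def var_mean_exact(sigma2, alpha, T):
    # closed form of 46.5 for K(tau) = sigma^2 * exp(-alpha*|tau|)
    aT = alpha * T
    return (2.0 * sigma2 / aT) * (1.0 - (1.0 - math.exp(-aT)) / aT)

def var_mean_numeric(sigma2, alpha, T, n=20000):
    # midpoint-rule check of D[x~] = (2/T^2) * integral_0^T (T - tau) K(tau) dtau
    h = T / n
    s = 0.0
    for i in range(n):
        tau = (i + 0.5) * h
        s += (T - tau) * sigma2 * math.exp(-alpha * tau)
    return 2.0 * s * h / (T * T)
```

For αT large the variance decays like 2σ²/(αT), which is the practical rule for choosing a realization long enough that the time average is a usable estimate of the mean.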
46.7 σ̃ᵧ will decrease by 2 per cent.
46.8 T̃ᵧ will decrease by 3 per cent.
46.9 D[K̃θ(0)] = 22 grad.⁴, D[K̃θ(3)] = 2.8 grad.⁴.
46.10 The value of the first zero of the function K̃(τ) equals (a) 2.20 sec., (b) 2.30 sec.
46.11 D[K̃θ(0)] = 5.82 grad.⁴, D[K̃θ(2.09)] = 5.35 grad.⁴, D[K̃θ(4.18)] = 4.80 grad.⁴, D[K̃θ(16.72)] = 2.92 grad.⁴,
and the corresponding standard deviations are 2.41, 2.32, 2.19 and 1.71 grad. 2. 46.12 When t increases the quotient, t 1 /t converges in probability to the probability P of coincidence of the signs of the ordinates of the random functions X(t) and X(t + r), related, for a normal process, to the normalized correlation function k( r) by k( r) = cos 7T(l - P), which can be proved by integrating the two-dimensional normal distribution law of the ordinates of the random function between proper limits. 46.13 Denoting by 1 [ X(t)X(t + r) 1 Z(t)
= 2 1 +
IX(t)X(t
+
r)iJ
and by P the probability that the signs of X(t) and X(t + r) coincide, we get kx( r) = cos 7T(l - z) ~ cos 7T(l - z) + 7I'(z - z) sin 7T(l - z).
z=
P;
Consequently,
D[k̃(τ)] ≈ π² sin² π(1 − P) D[z̃] = π²[1 − k²ₓ(τ)] D[z̃],
where D[z̃] is expressed through integrals of the form
∫∫∫∫ f(x₁, x₂, x₃, x₄) dx₁ dx₂ dx₃ dx₄ − P²,
f(x₁, x₂, x₃, x₄) being the distribution law of the system of normal variables X(t₁), X(t₁ + τ), X(t₂), X(t₂ + τ).
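The relation k(τ) = cos π(1 − P) of 46.12-46.13 gives a correlation estimate from sign coincidences alone. A minimal sketch, with an illustrative function name; pairs with a zero ordinate are skipped for simplicity:

```python
import math

def corr_from_signs(x, y):
    # fraction P of pairs with coinciding signs, then k = cos(pi*(1 - P));
    # valid as an estimate only for normal processes (see 46.12)
    pairs = [(a, b) for a, b in zip(x, y) if a * b != 0.0]
    p = sum(1 for a, b in pairs if a * b > 0) / len(pairs)
    return math.cos(math.pi * (1.0 - p))
```

In practice x would be the sampled ordinates X(t) and y the ordinates X(t + τ) shifted by the lag of interest; only the signs of the record are needed, which is what makes the method attractive for crude instrumentation.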
46.14 K̃ₓ(τ) = g₁K₁(τ) + g₂K₂(τ) + g₃K₃(τ), where we have the approximate relation
g_j = (1/σ̃²_j)/(1/σ̃²₁ + 1/σ̃²₂ + 1/σ̃²₃) (j = 1, 2, 3), σ̃²_j = (2/T²_j) ∫₀^{T_j} (T_j − τ)K²_j(τ) dτ.
For T_j exceeding considerably the damping time of Kₓ(τ), it is approximately true that σ̃²_j = (2/T_j)(a − b/T_j), where b = ∫₀^∞ τK(τ) dτ and K(τ) is a sample correlation function.

46.15 D[K̃ₓ(lΔ)] = (2/(m − l)²) Σ_{s=1}^{m−l−1} (m − l − s)[K²ₓ(sΔ) + Kₓ(sΔ + lΔ)Kₓ(sΔ − lΔ)] + (1/(m − l))[K²ₓ(0) + K²ₓ(lΔ)].

46.16 By 9 per cent.

46.17 ã₀ = (1/T) ∫₀ᵀ Kₓ(τ) dτ; ãⱼ = (2/T) ∫₀ᵀ Kₓ(τ) cos (2πjτ/T) dτ, j > 0; Δ̃ = (2/T) ∫₀ᵀ K²ₓ(τ) dτ − Tã₀² − (T/2) Σ_{j=1}^∞ ã²ⱼ.

46.18 Since J̃ = (1/T) ∫₀ᵀ J(t) dt, D[J̃] = (2σ²ₐ/(αT))[1 − (1/(αT))(1 − e^{−αT})] = σ̃²_J = (0.86·10⁻⁸)² A². The mean error is E_J = ρ√2 σ̃_J = 0.58·10⁻⁸ A.
SOURCES OF TABLES REFERRED TO IN THE TEXT*
1T. The binomial coefficients C_n^m: Beyer, W., pp. 339-340; Middleton, D., 1960; Kouden, D., 1961, pp. 564-567; Volodin, B. G., et al., 1962, p. 393.

2T. The factorials n! or logarithms of factorials log n!: Barlow, P., 1962; Beyer, W., pp. 449-450; Bronstein, I., and Semendyaev, K. A., 1964; Boev, G., 1956, pp. 350-353; Kouden, D., 1961, pp. 568-569; Segal, B. I., and Semendyaev, K. A., 1962, p. 393; Unkovskii, V. A., 1953, p. 311; Volodin, B. G., et al., 1962, p. 394.

3T. Powers of integers: Beyer, W., pp. 452-453.

4T. The binomial distribution function P(d < m + 1) = P(d ≤ m) = Σ_{k=0}^{m} C_n^k p^k (1 − p)^{n−k}: Beyer, W., pp. 163-173; Kouden, D., 1961, pp. 573-578.

5T. The values of the gamma-function Γ(x) or logarithms of the gamma-function log Γ(x): Beyer, W., p. 497; Bronstein, I., and Semendyaev, K. A., 1964; Hald, A., 1952; Middleton, D., 1960; Boev, G., 1956, p. 353; Segal, B. I., and Semendyaev, K. A., 1962, pp. 353-391; Shor, Ya., 1962, p. 528.

6T. The probabilities P(m, a) = (aᵐ/m!) e^{−a} for a Poisson distribution: Beyer, W., pp. 175-187; Gnedenko, B. V.; Saaty, T., 1957; Boev, G., 1956, pp. 357-358; Dunin-Barkovskii, I. V., and Smirnov, N. V., 1955, pp. 492-494; Segal, B. I., and Semendyaev, K. A., 1962.

7T. The total probabilities P(k ≥ m) = e^{−a} Σ_{k=m}^{∞} aᵏ/k! for a Poisson distribution: Beyer, W., pp. 175-187.

8T. The Laplace function (the probability integral) for an argument expressed in terms of the standard deviation.

9T. The probability density of the normal distribution φ(z) = (1/√(2π)) e^{−z²/2} for an argument expressed in standard deviations: Beyer, W., pp. 115-124; Gnedenko, B. V., p. 383.
* More complete information on the references is found in the Bibliography, which follows this section.
10T. The derivatives of the probability density of the normal distribution φ(x): φ₂(x) = φ″(x) = (x² − 1)φ(x); φ₃(x) = φ‴(x) = −(x³ − 3x)φ(x): Beyer, W., pp. 115-124.

11T. The reduced Laplace function for an argument expressed in standard deviations, Φ̂(z) = (2ρ/√π) ∫₀^z e^{−ρ²x²} dx − 2z(ρ/√π) e^{−ρ²z²}.

12T. The probability density of the normal distribution for an argument expressed in standard deviations, φ(z) = (ρ/√π) e^{−ρ²z²}: see 9T.

13T. The function ρ̂(z): see 8T, 9T.

14T. The Student distribution law
P(T < t) = (Γ[(k + 1)/2]/(Γ(k/2)√(kπ))) ∫_{−∞}^t (1 + x²/k)^{−(k+1)/2} dx:
Beyer, W., pp. 225-226; Gnedenko, B. V.; Yaglom, A. M., and Yaglom, I. M., 1964; Volodin, B. G., et al., 1962, p. 404; Segal, B. I., and Semendyaev, K. A., 1962.

15T. The probabilities

16T. The values of γ associated with the confidence level α = P(|T| < γ) and k degrees of freedom for the Student distribution: Arley, N., and Buch, K., 1950; Cramer, H., 1946; Laning, J. H., Jr., and Battin, R. H., 1956; Unkovskii, V. A., 1953, pp. 306-307; see also 14T.

17T. The probabilities P(χ² ≥ χ²q) for a chi-square distribution with k degrees of freedom.

18T. The values of χ²q depending on the probability P(χ² ≥ χ²q) and k degrees of freedom for a chi-square distribution: see 17T.

19T. The lower limit γ₁ and the upper limit γ₂ for the confidence level α and k degrees of freedom for a chi-square distribution: Laning, J. H., Jr., and Battin, R. H., 1956; Smirnov, N. V., and Dunin-Barkovskii, I. V., 1959, p. 405.

20T. The probabilities L(q, k) = P[√k/(1 + q) < χ < √k/(1 − q)] for a chi-square distribution: see 22T.

21T. The probability density of a chi-square distribution
f(y, k) = y^{k−1} e^{−y²/2}/(2^{(k−2)/2} Γ(k/2)):
see 5T, 9T.

22T. The probabilities P(y ≥ q√k) for the quantity y obeying a chi-square distribution:
P(y ≥ q√k) = (1/(2^{(k−2)/2} Γ(k/2))) ∫_{q√k}^∞ y^{k−1} e^{−y²/2} dy:
Beyer, W., pp. 233-239; Shor, Ya., 1962.

23T. The Rayleigh distribution law P(X < x) = 1 − e^{−x²/2σ²}: Bartlett, M., 1953.

24T. The function ρ̂(x) = 1 − e^{−ρ²x²}: Bartlett, M., 1953.
25T.
The probabilities P(D√n ≥ λ) = P(λ) = 1 − K(λ), where
K(λ) = Σ_{k=−∞}^{+∞} (−1)ᵏ e^{−2k²λ²},
for the Kolmogorov distribution law: Arley, N., and Buch, K., 1950; Gnedenko, B. V.; Milne, W. E., 1949; Dunin-Barkovskii, I. V., and Smirnov, N. V., 1955, pp. 539-540.

26T. The values of y (p-quantiles) depending on the parameter c and the Wald distribution function:
p = W_c(y) = √(c/(2π)) ∫₀^y u^{−3/2} exp{−(c/2)(u + 1/u − 2)} du:
Takacs, L., 1962; Basharinov, A., and Fleishman, B., 1962, pp. 338-344.

27T. Tables of random numbers: Beyer, W., pp. 341-345.

28T. The function η(p) = −p log₂ p: Wald, A., 1947.

29T. The orthogonal Chebyshev polynomials
P_{k,n}(x) = Σ_{j=0}^{k} (−1)ʲ C_k^j C_{k+j}^j [x(x − 1)···(x − j + 1)]/[n(n − 1)···(n − j + 1)]:
Middleton, D., 1960.

30T. Two-sided confidence limits for the estimated parameter in the binomial distribution law: Beyer, W., pp. 187-189.

31T. The values of z = tanh⁻¹ r = ½ ln ((1 + r)/(1 − r)): Dwight, H., 1958.

32T. The relations between the parameters bₘ, vₘ and m for the Weibull distribution law: Koshlyakov, N. S., Gliner, E. B., and Smirnov, M. M., 1964.
BIBLIOGRAPHY
Arley, N., and Buch, K.: Introduction to Probability and Statistics. New York, John Wiley and Sons, Inc., 1950.
Bachelier, L.: Calcul des Probabilites (Calculus of Probabilities). Paris, 1942.
Barlow, P.: Barlow's Tables of Squares, Cubes, Square Roots, Cube Roots, and Reciprocals of all Integer Numbers up to 12,500. 4th Ed. New York, Chemical Publishing Co., Inc., 1962.
Bartlett, M.: Philosophical Magazine, No. 44, 1953.
Basharinov, A., and Fleishman, B.: Metody statisticheskogo posledovatel'nogo analiza i ikh prilozheniya (Methods of statistical sequential analysis and their applications). Sovetskoe Radio, 1962.
Bernstein, S.: Teoriya Veroyatnostei (Probability Theory). Gostekhizdat, 1946.
Bertrand, I.: Calcul des Probabilites (Calculus of Probabilities). Paris, 1897.
Beyer, W.: Handbook of Tables for Probability and Statistics. Chemical Rubber Co., Ohio.
Boev, G.: Teoriya Veroyatnostei (Probability Theory). Gostekhizdat, 1956.
Borel, E.: Elements de la Theorie des Probabilites (Elements of Probability Theory). Paris, 1924.
Bronstein, I., and Semendyaev, K. A.: Guide Book to Mathematics for Technologists and Engineers. New York, Pergamon Press, Inc., 1964.
Bunimovich, V.: Fluktuatsionnye protsessy v radio-priemnykh ustroistvakh (Random processes in radio-reception equipment). Sovetskoe Radio, 1951.
Cramer, H.: Mathematical Methods of Statistics. Princeton, N.J., Princeton University Press, 1946.
Czuber, E.: Wahrscheinlichkeitsrechnung und ihre Anwendung auf Fehlerausgleichung, Statistik und Lebensversicherung (Probability Theory and its Application to Error-Smoothing, Statistics and Life Insurance). Leipzig and Berlin, 1910.
Davenport, W. B., Jr., and Root, V. L.: Introduction to Random Signals and Noise. New York, McGraw-Hill Book Co., Inc., 1958.
Dlin, A.: Matematicheskaya statistika v tekhnike (Mathematical statistics in technology). Sovetskaya Nauka, 1958.
Dunin-Barkovskii, I. V., and Smirnov, N. V.: Teoriya Veroyatnostei i Matematicheskaya Statistika v Tekhnike-Obshchaya Chast (Probability Theory and Mathematical Statistics in Technology-General Part). Gostekhizdat, 1955.
Dwight, H.: Mathematical Tables of Elementary and Some Higher Order Mathematical Functions. 3rd Rev. Ed. New York, Dover Publications, Inc., 1961.
Feller, W.: Introduction to Probability Theory and its Applications. New York, John Wiley and Sons, Inc., Vol. 1, 1957, Vol. 2, 1966.
Gantmakher, F. R.: The Theory of Matrices. New York, Chelsea Publishing Co., 1959.
Glivenko, V.: Kurs Teorii Veroyatnostei (Course in Probability Theory). GONTI, 1939.
Gnedenko, B. V.: Theory of Probability. New York, Chelsea Publishing Co. (4th Ed. in prep.).
Gnedenko, B. V., and Khinchin, A.: Elementary Introduction to the Theory of Probability. 5th Ed. New York, Dover Publications, Inc., 1962.
Goldman, S.: Information Theory. Englewood Cliffs, N.J., Prentice-Hall, Inc., 1953.
Goncharov, V.: Teoriya Veroyatnostei (Probability Theory). Oborongiz, 1939.
Guter, R. S., and Ovchinskii, B. V.: Elementy Chislennogo Analiza i Matematicheskoi Obrabotki Rezul'tatov Opyta (Elements of Numerical Analysis and the Mathematical Processing of Experimental Data). Fizmatgiz, 1962.
Gyunter, N. M., and Kuz'min, R. O.: Sbornik Zadach po Vysshei Matematike, Ch. III (Collection of Problems in Higher Mathematics, Part III). Gostekhizdat, 1951.
Hald, A.: Statistical Theory with Engineering Applications. New York, John Wiley and Sons, Inc., 1952.
Jahnke, E., and Emde, F.: Tables of Functions with Formulae and Curves. New York, Dover Publications, Inc., 1945.
Kadyrov, M.: Tablitsy Sluchainykh Chisel (Tables of Random Numbers). Tashkent, 1936.
Khinchin, A.: Raboty po Matematicheskoi Teorii Massovogo Obsluzhivaniya (Work in the Mathematical Theory of Mass Service [Queues]). Fizmatgiz, 1963.
Koshlyakov, N. S., Gliner, E. B., and Smirnov, M. M.: Differential Equations of Mathematical Physics. New York, John Wiley and Sons, Inc. (Interscience), 1964.
Kotel'nikov, V.: A nomogram connecting the parameters of Weibull's distribution with probabilities. Theory of Probability and Its Applications, 9: 670-674, 1964.
Kouden, D.: Statisticheskie Metody Kontrolya Kachestva (Statistical Methods of Quality Control). Fizmatgiz, 1961.
Krylov, V. I.: Approximate Calculations of Integrals. New York, The Macmillan Co., 1962.
Laning, J. H., Jr., and Battin, R. H.: Random Processes in Automatic Control. New York, McGraw-Hill Book Co., Inc., 1956.
Levin, B.: Teoriya sluchainykh protsessov i ee primenenie v radiotekhnike (Theory of random processes and its application to radio technology). Sovetskoe Radio, 1957.
Linnik, Y. V.: Method of Least Squares and Principles of the Theory of Observations. New York, Pergamon Press, Inc., 1961.
Lukomskii, Ya.: Teoriya Korrelyatsii i ee Primenenie k Analizu Proizvodstva (Correlation Theory and its Application to the Analysis of Production). Gostekhizdat, 1961.
Mesyatsev, P. P.: Primenenie Teorii Veroyatnostei i Matematicheskoi Statistiki pri Konstruirovanii i Proizvodstve Radio-Apparatury (Applications of Probability Theory and Mathematical Statistics to the Construction and Production of Radios). Voenizdat, 1958.
Middleton, D.: Introduction to Statistical Communication Theory. New York, McGraw-Hill Book Co., Inc., 1960.
Milne, W. E.: Numerical Calculus. Princeton, N.J., Princeton University Press, 1949.
Nalimov, V. V.: Application of Mathematical Statistics to Chemical Analysis. Reading, Mass., Addison-Wesley Publishing Co., Inc., 1963.
Pugachev, V. S.: Theory of Random Functions. Reading, Mass., Addison-Wesley Publishing Co., Inc., 1965.
Romanovskii, V.: Diskretnye Tsepi Markova (Discrete Markov Chains). Gostekhizdat, 1949.
Romanovskii, V.: Matematicheskaya Statistika (Mathematical Statistics). GONTI, 1938.
Rumshiskii, L. Z.: Elements of Probability Theory. New York, Pergamon Press, Inc., 1965.
Saaty, T.: Resume of useful formulas in queuing theory. Operations Research, No. 2, 1957.
Sarymsakov, T. A.: Osnovy Teorii Protsessov Markova (Basic Theory of Markov Processes). Gostekhizdat, 1954.
Segal, B. I., and Semendyaev, K. A.: Pyatiznachnye Matematicheskie Tablitsy (Five-Place Mathematical Tables). Fizmatgiz, 1961.
Shchigolev, B. M.: Mathematical Analysis of Observations. New York, American Elsevier Publishing Co., Inc., 1965.
Sherstobitov, V. V., and Diner, I.: Sbornik Zadach po Strel'be Zenitnoi Artillerii (Collection of Problems in Antiaircraft Artillery Firing). Voenizdat, 1948.
Shor, Ya.: Statisticheskie metody analiza i kontrolya kachestva i nadezhnosti (Statistical methods of analysis, quality control and reliability). Sovetskoe Radio, 1962.
Smirnov, N. V., and Dunin-Barkovskii, I. V.: Kratkii Kurs Matematicheskoi Statistiki (Short Course in Mathematical Statistics). Fizmatgiz, 1959.
Solodovnikov, V.: Statistical Dynamics of Linear Automatic Control Systems. Princeton, N.J., D. Van Nostrand Co., Inc., 1956.
Stratonovich, R. L.: Izbrannye voprosy teorii fluktuatsii v radiotekhnike (Selected questions in fluctuation theory in radio technology). Sovetskoe Radio, 1961.
Sveshnikov, A. A.: Applied Methods of the Theory of Random Functions. New York, Pergamon Press, Inc. (in prep.).
Takacs, L.: Stochastic Processes, Problems and Solutions. New York, John Wiley and Sons, Inc., 1960.
Unkovskii, V. A.: Teoriya Veroyatnostei (Probability Theory). Voenmorizdat, 1953.
Uorsing, A., and Geffner, D.: Metody Obrabotki Eksperimental'nykh Dannykh (Methods for Processing Experimental Data). IL, 1953.
Venttsel', E. S.: Teoriya veroyatnostei (Probability theory). Izd-vo Nauka, 1964.
Volodin, B. G., et al.: Rukovodstvo Dlya Inzhenerov po Resheniyu Zadach Teorii Veroyatnostei (Engineer's Guide for the Solution of Problems in Probability Theory). Sudpromgiz, 1962.
Wald, A.: Sequential Analysis. New York, John Wiley and Sons, Inc., 1947.
Yaglom, A. M., and Yaglom, I.
M.: Challenging Mathematical Problems with Elementary Solutions. San Francisco, Holden-Day, Inc., 1964.
Yaglom, A. M., and Yaglom, I. M.: Probability and Information. New York, Dover Publications, Inc., 1962.
Yule, G. U., and Kendall, M. G.: Introductory Theory of Statistics. 14th Rev. Ed. New York, Hafner Publishing Co., Inc., 1958.
Index
Absorbing state, 232
Addition, of probabilities, 16-22
Aftereffect, and Markov process, 248
Apollonius' theorem, 147
Arctan law, 321
Arithmetic mean deviation, 73
Asymmetry coefficient, 108
Bayes' formula, 26-30
Bessel formulas, 329
Binomial distribution, 30
Cauchy distribution, 321
Cauchy probability law, 53, 120
Central moment: computation of, 62; definition of, 54
Characteristic function, 74-79: of random variables, 108; subsystems of, 125; systems of, 124-128
Charlier-A series, 302
Chebyshev's inequality, 171
Chebyshev's polynomials, 327
Chebyshev's theorem, 171
Chi-square test, 301
Complementary events, 1
Composition, of distribution laws, 128-136
Conditional differential entropy, 157
Conditional distribution laws, 99-106
Conditional entropy, 157
Conditional mean entropy, 158
Conditional probability, 12-16
Conditional variance, 103
Confidence intervals, 286-300
Confidence levels, 286-300
Continuous Markov processes, 256-274
Continuous random variables, 48-53: numerical characteristics of, 62-67
Convolution, of distribution laws, 128-136
Correlation coefficient, 85
Correlation theory, of random functions, 181-230: properties of, 181-185
Covariance, of random variables, 85
D: computation of, 62; definition of, 54
δ-function, 49
Data processing, methods of, 275-374
Degenerate normal distribution, 145
De Moivre-Laplace theorem, 176-180
Dependent events, 12
Deviation vectors, use of, 145-156
Differential entropy, 157
Differential equations, 205
Discrete random variable, 43-48: numerical characteristics of, 54-62
Distribution ellipse, 146
Distribution function, 43-48
Distribution laws, 84-91: composition of, 128-136; convolution of, 128-136; of functions of random variables, 115-123; of random functions, 181-185; symmetric, 62
Distribution polygon, 43-48
Double sampling, 348
Dynamical systems, characteristics at output of, 205-216
Encoding, Shannon-Fano method, 163
Entropy: and information, 157-170; of random events and variables, 157-162
Envelopes, method of, 226-230
Erlang's formula, 253
Essential states, 232
Estimates, of random variables, 275
Excess, of random variable, 108
Expectation: computation of, 62; definition of, 54
Exponential distribution, 319
Fokker-Planck equation, 256
Generating function, 36-42
Geometric probability, 6-11
Goodness-of-fit, tests of, 300-325
Green's function, 206
Limit theorems, 171-180
Linearization, of functions, of random variables, 136-145
Linear operations, with random functions, 185-192
Linear operator, 185
Logarithmic normal distribution law, 53
Lyapunov theorem, 176-180
M: computation of, 62; definition of, 54
mₖ: computation of, 62; definition of, 54
μₖ: computation of, 62; definition of, 54
Markov chains, 231-246
Markov processes, 231-274: with discrete number of states, 246-256
Markov's theorem, 171
Maximal differential entropy, 159
Maxwell distribution, 319
Mean deviation, 62: arithmetic, 73
Mean error, 72
Mean-square deviation: computation of, 62; definition of, 54
Median, 49
Mode, 49
Moment(s): central, computation of, 62, definition of, 54; computation of, 62; definition of, 54; of random variables, 275-286
Multidimensional normal distribution, 91-99
Multidimensional Poisson law, 70
Multinomial distribution, 36-42, 70
Multiplication of probabilities, 12-16
Mutual correlation function, 182
Mutually exclusive events, 1
Homogeneous Markov chain, 231 Homogeneous Markov process, 297 Homogeneous operator, 185 Hypergeometric distribution, 313
Impulse function, 206 Independent events, 12 Independent trials, repeated, 30-36 Information, and entropy, 157-170 quantity of, 163-170 Integral distribution law, 43 Intersection, of events, I Irreducible Markov chain, 23I
Jacobian determinant, 116
Khinchin's theorem, 171 Kolmogorov equations, 256 Kolmogorov test, 30I
Lagrange-Sylvester formula, 231 Laplace function, 71 normalized, 71 Large numbers, law of, 171-175 Least squares, data processing by, 325-346
Nonhomogeneous operator, I85 Normal distribution law, 70-74, 91-99 Normalized covariance matrix, 85 Normalized Laplace function, 71
Optimal dynamical systems, 216-225 Ordinarity, of Markov process, 248
481
INDEX Pascal's distribution law, 78 Passages, problems on, 192-198 Pearson's law, 120 Pearson's tests, 302 Periodic Markov chain, 231 Perron formula, 232 Poisson's law, 67-70 Probability(ies), addition of, 12-16 characteristics of, determination of, 368374 conditional, 12-16 evaluation of, direct method for, 4-6 geometric, 6-11 multiplication of, 12-16 total, 22-26 Probability density, computation of, 80-83 Probability density function, 48-53 Probability distribution series, 43-48 Probability integral, 71
Quality control, definition of, 346 statistical methods for, 346-368 Quantile, 49
Random event(s), 1-42 relations among, 1-3 Random function(s), correlation theory of, 181-230 definition of, 181 distribution laws of, 181-185 linear operations with, 18 5-192 stationary, 181 Random sequence, 181 Random variable(s), 43-83 continuous, 48-53 numerical characteristics of, 62-67 discrete, 43-48 numerical characteristics of, 54-62 excess of, 108 functions of, 107-157 distribution laws of, 115-123 linearization of, 136-145 numerical characteristics of, 107-115 moments of, 275-286 systems of, 84-106 characteristics of, 84-91 uncorrelated, 85
Rayleigh distribution, 52, 318 Rayleigh's law, 119 Recursion formulas, 36-42 Regular Markov process, 247 Repeated independent trials, 30-36
computation of, 62 definition of, 54 Sequential analysis, 349 Set, of experiments, complete, 1 Shannon-Fano method of encoding, 163 Sheppard corrections, 277 Simpson distribution, 315 Single sampling, 346 Spectral decomposition, of stationary random functions, 198-205 Spectral density, 198 Standard deviation, 62 State, absorbing, 232 essential, 232 Stationarity, of Markov process, 248 Stationary random function, 181 spectral decomposition of, 198-205 Stochastic process, 181 Student's distribution, 287 Symmetric distribution law, 62 a,
Total probability, 22-26 computation of, 80-83 Transition probability, 231 Transitive Markov process, 248 Transmission function, 217 Triangular distribution, 315
Unbiased estimate, of random variables, 275 Uniform distribution, 52 Union, of events, 1
Variance, computation of, 62 definition of, 54
Wald analysis, 349 Weibull distribution function, 52, 319