TOPICS IN PROBABILITY

Narahari Prabhu
Cornell University, USA

World Scientific
New Jersey • London • Singapore • Beijing • Shanghai • Hong Kong • Taipei • Chennai
Published by World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

TOPICS IN PROBABILITY
Copyright © 2011 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

ISBN-13: 978-981-4335-47-8
ISBN-10: 981-4335-47-9

Typeset by Stallion Press
Email: [email protected]

Printed in Singapore.
Now I've understood Time's magic play:
Beating his drum he rolls out the show,
Shows different images
And then gathers them in again.

Kabir (1450–1518)
CONTENTS

Preface
Abbreviations

1. Probability Distributions
   1.1. Elementary Properties
   1.2. Convolutions
   1.3. Moments
   1.4. Convergence Properties

2. Characteristic Functions
   2.1. Regularity Properties
   2.2. Uniqueness and Inversion
   2.3. Convergence Properties
        2.3.1. Convergence of types
   2.4. A Criterion for c.f.'s
   2.5. Problems for Solution

3. Analytic Characteristic Functions
   3.1. Definition and Properties
   3.2. Moments
   3.3. The Moment Problem
   3.4. Problems for Solution

4. Infinitely Divisible Distributions
   4.1. Elementary Properties
   4.2. Feller Measures
   4.3. Characterization of Infinitely Divisible Distributions
   4.4. Special Cases of Infinitely Divisible Distributions
   4.5. Lévy Processes
   4.6. Stable Distributions
   4.7. Problems for Solution

5. Self-Decomposable Distributions; Triangular Arrays
   5.1. Self-Decomposable Distributions
   5.2. Triangular Arrays
   5.3. Problems for Solution

Bibliography
Index
PREFACE

In this monograph we treat some topics of particular importance and interest in probability theory. These include, in particular, analytic characteristic functions, the moment problem, and infinitely divisible and self-decomposable distributions.

We begin with a review of the measure-theoretical foundations of probability distributions (Chapter 1) and characteristic functions (Chapter 2). In many important special cases the domain of a characteristic function can be extended to a strip surrounding the imaginary axis of the complex plane, leading to analytic characteristic functions. It turns out that distributions that have analytic characteristic functions are uniquely determined by their moments. This is the essence of the moment problem. The pioneering work in this area is due to C. C. Heyde. This is treated in Chapter 3. Infinitely divisible distributions are investigated in Chapter 4. The final Chapter 5 is concerned with self-decomposable distributions and triangular arrays.

The coverage of these topics as given by Feller in his 1971 book is comparatively modern (as opposed to classical) but is still somewhat diffuse. We give a more compact treatment.

N. U. Prabhu
Ithaca, New York
January 2010
ABBREVIATIONS

Term                                 Abbreviation
characteristic function              c.f.
distribution function                d.f.
if and only if                       iff
Laplace transform                    L.T.
probability generating function      p.g.f.
random variable                      r.v.

Terminology: We write x =ᵈ y if the r.v.'s x, y have the same distribution.
Chapter 1
Probability Distributions
1.1. Elementary Properties

A function F on the real line is called a probability distribution function if it satisfies the following conditions:

(i) F is non-decreasing: F(x + h) ≥ F(x) for h > 0;
(ii) F is right-continuous: F(x+) = F(x);
(iii) F(−∞) = 0, F(∞) ≤ 1.

We shall say that F is proper if F(∞) = 1, and F is defective otherwise.

Every probability distribution induces an assignment of probabilities to all Borel sets on the real line, thus yielding a probability measure P. In particular, for an interval I = (a, b] we have P{I} = F(b) − F(a). We shall use the same letter F both for the point function and the corresponding set function, and write F{I} instead of P{I}. In particular F(x) = F{(−∞, x]}. We shall refer to F as a probability distribution, or simply a distribution.

A point x is an atom if it carries positive probability (weight). It is a point of increase iff F{I} > 0 for every open interval I containing x.
A distribution F is concentrated on the set A if F(Aᶜ) = 0, where Aᶜ is the complement of A. It is atomic if it is concentrated on the set of its atoms. A distribution without atoms is continuous. As a special case of the atomic distribution we have the arithmetic distribution, which is concentrated on the set {kλ (k = 0, ±1, ±2, . . .)} for some λ > 0. The largest λ with this property is called the span of F.

A distribution is singular if it is concentrated on a set of Lebesgue measure zero. Theorem 1.1 (below) shows that an atomic distribution is singular, but there exist singular distributions which are continuous.

A distribution F is absolutely continuous if there exists a function f such that

F(A) = ∫_A f(x) dx.

If there exists a second function g with the above property, then it is clear that f = g almost everywhere, that is, except possibly on a set of Lebesgue measure zero. We have F′(x) = f(x) almost everywhere; f is called the density of F.

Theorem 1.1. A probability distribution has at most countably many atoms.

Proof. Suppose F has n atoms x1, x2, . . . , xn in I = (a, b] with a < x1 < x2 < · · · < xn ≤ b and weights p(xk) = F{xk}. Then

Σ_{k=1}^{n} p(xk) ≤ F{I}.

This shows that the number of atoms with weights > 1/n is at most equal to n. Let Dn = {x : p(x) > 1/n}; then the set Dn has at most n points. Therefore the set D = ∪ Dn is at most countable.
Theorem 1.2 (Jordan decomposition). A probability distribution F can be represented in the form

F = pFa + qFc    (1.1)

where p ≥ 0, q ≥ 0, p + q = 1, and Fa, Fc are both distributions, Fa being atomic and Fc continuous.

Proof. Let {xn, n ≥ 1} be the atoms and put p = Σ p(xn), q = 1 − p. If p = 0 or if p = 1, the theorem is trivially true. Let us assume that 0 < p < 1 and for −∞ < x < ∞ define the two functions

Fa(x) = (1/p) Σ_{xn ≤ x} p(xn),   Fc(x) = (1/q)[F(x) − pFa(x)].    (1.2)

Here Fa is a distribution because it satisfies the conditions (i)–(iii) above. For Fc we find that for h > 0

q[Fc(x + h) − Fc(x)] = F(x + h) − F(x) − Σ_{x < xn ≤ x+h} p(xn) ≥ 0,    (1.3)

which shows that Fc is non-decreasing. Letting h → 0 in (1.3) we find that q[Fc(x+) − Fc(x)] = F(x+) − F(x) = 0, so that Fc is right-continuous. Finally, Fc(−∞) = 0, while Fc(∞) = 1. Therefore Fc is a distribution.

Theorem 1.3 (Lebesgue decomposition). A probability distribution can be represented as the sum

F = pFa + qFsc + rFac    (1.4)

where p ≥ 0, q ≥ 0, r ≥ 0, p + q + r = 1, Fa is an atomic distribution, Fsc is a continuous but singular distribution and Fac is an absolutely continuous distribution.
Proof. By the Lebesgue decomposition theorem on measures we can express F as

F = aFs + bFac,    (1.5)

where a ≥ 0, b ≥ 0, a + b = 1, Fs is a singular distribution and Fac is an absolutely continuous distribution. Applying Theorem 1.2 to Fs we find that Fs = p1 Fa + q1 Fsc, where p1 ≥ 0, q1 ≥ 0, p1 + q1 = 1. Writing p = ap1, q = aq1, r = b we arrive at the desired result (1.4).

Remark. Although it is possible to study distribution functions and measures without reference to random variables (r.v.'s), as we have done above, it is convenient to start with the definition F(x) = P{X ≤ x}, where X is a random variable defined on an appropriate sample space.
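The decomposition (1.1)–(1.2) is easy to carry out explicitly for a concrete mixed distribution. The following Python sketch is entirely ours (the mixture and all names are illustrative): it computes Fa and Fc for a distribution with two atoms plus a uniform component, and checks the identity F = pFa + qFc pointwise.

```python
# Sketch (the mixture and all names are ours): the Jordan decomposition
# (1.1)-(1.2) for F = two atoms (weight 0.2 at 0, weight 0.1 at 1) plus
# mass 0.7 spread uniformly over (0, 1).

atoms = {0.0: 0.2, 1.0: 0.1}       # atoms x_n with weights p(x_n)
p = sum(atoms.values())            # total atomic mass
q = 1.0 - p                        # mass of the continuous part

def F(x):
    """The mixed distribution function."""
    atomic = sum(w for a, w in atoms.items() if a <= x)
    uniform = 0.7 * min(max(x, 0.0), 1.0)
    return atomic + uniform

def Fa(x):
    """Atomic component, first formula in (1.2)."""
    return sum(w for a, w in atoms.items() if a <= x) / p

def Fc(x):
    """Continuous component, second formula in (1.2)."""
    return (F(x) - p * Fa(x)) / q

# F = p*Fa + q*Fc pointwise, and both components are proper distributions
for x in (-1.0, 0.0, 0.25, 0.5, 0.99, 1.0, 2.0):
    assert abs(p * Fa(x) + q * Fc(x) - F(x)) < 1e-12
assert Fa(2.0) == 1.0 and abs(Fc(2.0) - 1.0) < 1e-12
```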
1.2. Convolutions

Let F1, F2 be distributions and F be defined by

F(x) = ∫_{−∞}^{∞} F1(x − y) dF2(y)    (1.6)

where the integral obviously exists. We call F the convolution of F1 and F2 and write F = F1 ∗ F2. Clearly F1 ∗ F2 = F2 ∗ F1.

Theorem 1.4. The function F is a distribution.

Proof. For h > 0 we have

F(x + h) − F(x) = ∫_{−∞}^{∞} [F1(x − y + h) − F1(x − y)] dF2(y) ≥ 0    (1.7)

so that F is non-decreasing. As h → 0,

F1(x − y + h) − F1(x − y) → F1(x − y+) − F1(x − y) = 0;
since

|F1(x − y + h) − F1(x − y)| ≤ 2,   ∫_{−∞}^{∞} 2 dF2(y) = 2,

the right side of (1.7) tends to 0 by the dominated convergence theorem. Therefore F(x+) − F(x) = 0, so that F is right-continuous. Since F1(∞) = 1 the dominated convergence theorem gives F(∞) = 1. Similarly F(−∞) = 0. Therefore F is a distribution.

Theorem 1.5. If F1 is continuous, so is F. If F1 is absolutely continuous, so is F.

Proof. We have seen in Theorem 1.4 that the right-continuity of F1 implies the right-continuity of F. Similarly the left-continuity of F1 implies that of F. It follows that if F1 is continuous, so is F.

Next let F1 be absolutely continuous, so that there exists a function f1 such that

F1(x) = ∫_{−∞}^{x} f1(u) du.
Then

F(x) = ∫_{−∞}^{∞} dF2(y) ∫_{−∞}^{x} f1(u − y) du = ∫_{−∞}^{x} [ ∫_{−∞}^{∞} f1(u − y) dF2(y) ] du,

so that F is absolutely continuous, with density

f(x) = ∫_{−∞}^{∞} f1(x − y) dF2(y).    (1.8)
Remarks. 1. If X1, X2 are independent random variables with distributions F1, F2, then the convolution F = F1 ∗ F2 is the distribution of their sum X1 + X2. For

F(z) = P{X1 + X2 ≤ z} = ∫∫_{x+y≤z} dF1(x) dF2(y)
     = ∫_{−∞}^{∞} dF2(y) ∫_{−∞}^{z−y} dF1(x) = ∫_{−∞}^{∞} F1(z − y) dF2(y).

However, it should be noted that dependent random variables X1, X2 may also have the property that the distribution of their sum is given by the convolution of their distributions.

2. The converse of Theorem 1.5 is false. In fact two singular distributions may have a convolution which is absolutely continuous.

3. The conjugate of any distribution F is defined as the distribution F̃, where F̃(x) = 1 − F(−x−). If F is the distribution of the random variable X, then F̃ is the distribution of −X. The distribution F is symmetric if F = F̃.

4. Given any distribution F, we can symmetrize it by defining the distribution ◦F, where

◦F = F ∗ F̃.

It is seen that ◦F is a symmetric distribution. It is the distribution of the difference X1 − X2, where X1, X2 are independent variables with the same distribution F.
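Remark 1 can be verified mechanically for discrete distributions, where the convolution is a finite sum; the sketch below (our own construction, using exact rational arithmetic) also checks the symmetrization of Remark 4.

```python
# Illustration (assumed setup, not from the text): for discrete
# distributions, the convolution F1 * F2 is the distribution of X1 + X2
# for independent X1, X2.  Here F1 is a fair die, F2 a fair coin on {0, 1}.
from fractions import Fraction

pmf1 = {k: Fraction(1, 6) for k in range(1, 7)}   # die
pmf2 = {0: Fraction(1, 2), 1: Fraction(1, 2)}     # coin

def convolve(p, q):
    """Convolution of two probability mass functions."""
    out = {}
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] = out.get(x + y, Fraction(0)) + px * qy
    return out

conv = convolve(pmf1, pmf2)

# total mass 1, and the convolution is commutative (F1 * F2 = F2 * F1)
assert sum(conv.values()) == 1
assert conv == convolve(pmf2, pmf1)

# the symmetrization F * F~ (distribution of X1 - X2) is symmetric about 0
sym = convolve(pmf1, {-k: w for k, w in pmf1.items()})
assert all(sym[d] == sym[-d] for d in sym)
```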
1.3. Moments

The moment of order α > 0 of a distribution F is defined by

µα = ∫_{−∞}^{∞} x^α dF(x)

provided that the integral converges absolutely, that is,

να = ∫_{−∞}^{∞} |x|^α dF(x) < ∞;

να is called the absolute moment of order α. Let 0 < α < β. Then for |x| ≤ 1 we have |x|^α ≤ 1, while for |x| > 1 we have |x|^α ≤ |x|^β.
Thus we can write |x|^α ≤ |x|^β + 1 for all x, and so

∫_{−∞}^{∞} |x|^α dF(x) ≤ ∫_{−∞}^{∞} (1 + |x|^β) dF(x) = 1 + ∫_{−∞}^{∞} |x|^β dF(x).

This shows that the existence of the moment of order β implies the existence of all moments of order α < β.

Theorem 1.6. The moment µα of a distribution F exists iff

x^{α−1}[1 − F(x) + F(−x)]    (1.9)

is integrable over (0, ∞).

Proof. For t > 0 an integration by parts yields the relation

∫_{−t}^{t} |x|^α dF(x) = −t^α [1 − F(t) + F(−t)] + α ∫_{0}^{t} x^{α−1}[1 − F(x) + F(−x)] dx.    (1.10)

From this we find that

∫_{−t}^{t} |x|^α dF(x) ≤ α ∫_{0}^{t} x^{α−1}[1 − F(x) + F(−x)] dx,

so that if (1.9) is integrable over (0, ∞), να (and therefore µα) exists. Conversely, if να exists, then since

∫_{|x|>t} |x|^α dF(x) > t^α [1 − F(t) + F(−t)],

the first term on the right side of (1.10) vanishes as t → ∞ and the integral there converges as t → ∞.
the first term on the right side of (1.10) vanishes as t → ∞ and the integral there converges as t → ∞. Theorem 1.7. Let
∞
ν(t) = −∞
|x|t dF (x) < ∞
for t in some interval I. Then log ν(t) is a convex function of t ∈ I.
Proof. Let a ≥ 0, b ≥ 0, a + b = 1. Then for two functions ψ1, ψ2 we have the Hölder inequality

∫_{−∞}^{∞} |ψ1(x)ψ2(x)| dF(x) ≤ [ ∫_{−∞}^{∞} |ψ1(x)|^{1/a} dF(x) ]^a · [ ∫_{−∞}^{∞} |ψ2(x)|^{1/b} dF(x) ]^b,

provided that the integrals exist. In this put ψ1(x) = x^{at1}, ψ2(x) = x^{bt2}, where t1, t2 ∈ I. Then

ν(at1 + bt2) ≤ ν(t1)^a ν(t2)^b    (1.11)

or, taking logarithms,

log ν(at1 + bt2) ≤ a log ν(t1) + b log ν(t2),

which establishes the convexity property of log ν.

Corollary 1.1 (Lyapunov's inequality). Under the hypothesis of Theorem 1.7, νt^{1/t} is non-decreasing for t ∈ I.

Proof. Let α, β ∈ I with α ≤ β and choose a = α/β, t1 = β, b = 1 − a, t2 = 0. Then (1.11) reduces to

να ≤ νβ^{α/β}    (α ≤ β),

where we have written νt = ν(t).
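Lyapunov's inequality is easy to observe numerically. The following sketch (the three-point distribution and the grid of t values are our own choices) evaluates νt^{1/t} and confirms monotonicity.

```python
# A sketch (the three-point distribution is our own choice) checking
# Lyapunov's inequality: t -> nu_t^(1/t) is non-decreasing in t.
xs = [-2.0, 0.5, 3.0]          # atoms
ws = [0.3, 0.5, 0.2]           # their weights

def nu(t):
    """Absolute moment nu_t = sum |x|^t * weight."""
    return sum(abs(x) ** t * w for x, w in zip(xs, ws))

ts = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
vals = [nu(t) ** (1.0 / t) for t in ts]
# non-decreasing, as Corollary 1.1 asserts
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
```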
1.4. Convergence Properties

We say that I is an interval of continuity of a distribution F if I is open and its end points are not atoms of F. The whole line (−∞, ∞) is considered to be an interval of continuity. Let {Fn, n ≥ 1} be a sequence of proper distributions. We say that the sequence converges to F if

Fn{I} → F{I}    (1.12)

for every bounded interval of continuity of F. If (1.12) holds for every (bounded or unbounded) interval of continuity of F, then the convergence is said to be proper, and otherwise improper. Proper convergence implies in particular that F(∞) = 1.

Examples

1. Let Fn be uniform in (−n, n). Then for every bounded interval I contained in (−n, n) we have

Fn{I} = ∫_I dx/2n = |I|/2n → 0 as n → ∞,

where |I| is the length of I. This shows that the convergence is improper.

2. Let Fn be concentrated on {1/n, n} with weight 1/2 at each atom. Then for every bounded interval I we have Fn{I} → 0 or 1/2 according as I does not or does contain the origin. Therefore the limit F is such that it has an atom at the origin, with weight 1/2. Clearly F is not a proper distribution.

3. Let Fn be the convolution of a proper distribution F with the normal distribution with mean zero and variance n^{−2}. Thus

Fn(x) = ∫_{−∞}^{∞} F(x − y) (n/√(2π)) e^{−n²y²/2} dy = ∫_{−∞}^{∞} F(x − y/n) (1/√(2π)) e^{−y²/2} dy.

For finite a, b we have

∫_a^b dFn(x) = ∫_{−∞}^{∞} [F(b − y/n) − F(a − y/n)] (1/√(2π)) e^{−y²/2} dy → F(b−) − F(a−)

as n → ∞ by the dominated convergence theorem. If a, b are points of continuity of F we can write

Fn{(a, b)} → F{(a, b)}    (1.13)

so that the sequence {Fn} converges properly to F.
If X is a random variable with the distribution F and Yn is an independent variable with the above normal distribution, then we know that Fn is the distribution of the sum X + Yn. As n → ∞, it is obvious that the distribution of this sum converges to that of X. This justifies the definition of convergence which requires (1.13) to hold only for points of continuity a, b.

Theorem 1.8 (Selection theorem). Every sequence {Fn} of distributions contains a subsequence {Fnk, k ≥ 1} which converges (properly or improperly) to a limit F.

Theorem 1.9. A sequence {Fn} of proper distributions converges to F iff

∫_{−∞}^{∞} u(x) dFn(x) → ∫_{−∞}^{∞} u(x) dF(x)    (1.14)

for every function u which is bounded, continuous and vanishing at ±∞. If the convergence is proper, then (1.14) holds for every bounded continuous function u.

The proofs of these two theorems are omitted.
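Example 2 above can be reproduced in a few lines of Python (the function name and the test points are ours): half the mass drifts to the origin and half escapes to infinity, so the limit is defective.

```python
# Example 2 in code (a sketch; names are ours): Fn puts weight 1/2 at 1/n
# and 1/2 at n; the mass at n escapes to infinity, leaving a defective limit.

def Fn(x, n):
    """Distribution function concentrated on {1/n, n}, weight 1/2 each."""
    return 0.5 * (1.0 / n <= x) + 0.5 * (n <= x)

# mass assigned to the bounded interval (-1, 1] stabilizes at 1/2
masses = [Fn(1.0, n) - Fn(-1.0, n) for n in (2, 10, 100, 1000)]
assert all(m == 0.5 for m in masses)

# even far out, Fn captures no more than 1/2 of the mass for large n
assert Fn(10.0 ** 6, 10 ** 7) == 0.5
```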
Chapter 2
Characteristic Functions
2.1. Regularity Properties

Let F be a probability distribution. Then its characteristic function (c.f.) is defined by

φ(ω) = ∫_{−∞}^{∞} e^{iωx} dF(x)    (2.1)

where i = √−1 and ω is real. This integral exists, since

∫_{−∞}^{∞} |e^{iωx}| dF(x) = ∫_{−∞}^{∞} dF(x) = 1.
Theorem 2.1. A c.f. φ has the following properties:

(a) φ(0) = 1 and |φ(ω)| ≤ 1 for all ω.
(b) φ(−ω) = φ̄(ω), and φ̄ is also a c.f.
(c) Re φ is also a c.f.

Proof. (a) We have

φ(0) = ∫_{−∞}^{∞} dF(x) = 1,   |φ(ω)| ≤ ∫_{−∞}^{∞} |e^{iωx}| dF(x) = 1.    (2.2)
(b) φ̄(ω) = ∫_{−∞}^{∞} e^{−iωx} F{dx} = φ(−ω). Moreover, let F̃(x) = 1 − F(−x−). Then

∫_{−∞}^{∞} e^{iωx} F̃{dx} = ∫_{−∞}^{∞} e^{iωx} F{−dx} = ∫_{−∞}^{∞} e^{−iωx} F{dx}.

Thus φ(−ω) is the c.f. of F̃, which is a distribution.

(c) Re φ = ½φ + ½φ̄ = c.f. of ½F + ½F̃, which is a distribution.
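The properties in Theorem 2.1 hold exactly for the empirical characteristic function of any sample, which gives a quick numerical check (the sample and all names below are our own choices).

```python
# Quick numerical check of Theorem 2.1 (sample and names are ours): the
# empirical characteristic function of a sample satisfies (a) and (b).
import cmath, random

random.seed(1)
sample = [random.gauss(0.0, 1.0) for _ in range(2000)]

def phi(w):
    """Empirical c.f.: the average of e^{iwx} over the sample."""
    return sum(cmath.exp(1j * w * x) for x in sample) / len(sample)

assert phi(0.0) == 1.0 + 0.0j                         # (a): phi(0) = 1
for w in (0.3, 1.0, 2.5):
    assert abs(phi(w)) <= 1.0 + 1e-12                 # (a): |phi| <= 1
    assert abs(phi(-w) - phi(w).conjugate()) < 1e-9   # (b): conjugate symmetry
```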
Theorem 2.2. If φ1, φ2 are c.f.'s, so is their product φ1φ2.

Proof. Let φ1, φ2 be the c.f.'s of F1, F2 respectively and consider the convolution

F(x) = ∫_{−∞}^{∞} F1(x − y) dF2(y).

We know that F is a distribution. Its c.f. is given by

φ(ω) = ∫_{−∞}^{∞} e^{iωx} dF(x) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{iωx} dF1(x − y) dF2(y)
     = ∫_{−∞}^{∞} e^{iωy} dF2(y) ∫_{−∞}^{∞} e^{iω(x−y)} dF1(x − y) = φ1(ω)φ2(ω).

Thus the product φ1φ2 is the c.f. of the convolution F1 ∗ F2.

Corollary 2.1. If φ is a c.f., so is |φ|².

Proof. We can write |φ|² = φφ̄, where φ̄ is a c.f. by Theorem 2.1(b).
Theorem 2.3. A distribution F is arithmetic iff there exists a real ω0 ≠ 0 such that φ(ω0) = 1.

Proof. (i) Suppose that the distribution is concentrated on {kλ, λ > 0, k = 0, ±1, ±2, . . .} with the weight pk at kλ. Then the c.f. is given by

φ(ω) = Σ_{k=−∞}^{∞} pk e^{iωkλ}.

Clearly φ(2π/λ) = 1.
(ii) Conversely, let φ(ω0) = 1 for ω0 ≠ 0. This gives

∫_{−∞}^{∞} (1 − e^{iω0x}) dF(x) = 0.

Therefore

∫_{−∞}^{∞} (1 − cos ω0x) dF(x) = 0,

which shows that the points of increase of F are among 2kπ/ω0 (k = 0, ±1, ±2, . . .). Thus the distribution is arithmetic.
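Theorem 2.3 is easy to see numerically: for a lattice distribution with span λ the value φ(2π/λ) is exactly 1, while |φ(ω)| stays strictly below 1 at a frequency off the lattice. A sketch with an arbitrary five-atom distribution of our choosing on the even integers:

```python
# Sketch (the weights are our own choice): a distribution on the lattice
# {0, +-2, +-4} has span lambda = 2, so phi(2*pi/lambda) = 1 (Theorem 2.3).
import cmath, math

pk = {-4: 0.1, -2: 0.2, 0: 0.3, 2: 0.25, 4: 0.15}   # weights on 2Z

def phi(w):
    """c.f. of the lattice distribution: sum of pk * e^{iwk}."""
    return sum(p * cmath.exp(1j * w * k) for k, p in pk.items())

w0 = 2.0 * math.pi / 2.0          # 2*pi divided by the span
assert abs(phi(w0) - 1.0) < 1e-12
# at a frequency off the lattice the modulus is strictly less than one
assert abs(phi(0.7)) < 1.0 - 1e-6
```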
Corollary 2.2. If φ(ω) = 1 for all ω, then the distribution is concentrated at the origin.

Remarks. 1. If F is the distribution of a random variable X, then we can write φ(ω) = E(e^{iωX}), so that the c.f. is the expected value of e^{iωX}. We have φ(−ω) = E(e^{−iωX}), so that φ(−ω) is the c.f. of the random variable −X. This is Theorem 2.1(b).

2. If X1, X2 are two independent random variables with c.f.'s φ1, φ2, then φ1(ω)φ2(ω) = E[e^{iω(X1+X2)}], so that the product φ1φ2 is the c.f. of the sum X1 + X2. This is only a special case of Theorem 2.2, since the convolution F1 ∗ F2 is defined without reference to random variables.

3. If φ is the c.f. of the random variable X, then |φ|² is the c.f. of the symmetrized variable X1 − X2, where X1, X2 are independent variables with the same distribution as X.

Theorem 2.4. (a) φ is uniformly continuous. (b) If the n-th moment exists, then the n-th derivative exists and is a continuous function given by

φ⁽ⁿ⁾(ω) = ∫_{−∞}^{∞} e^{iωx} (ix)ⁿ dF(x).    (2.3)
(c) If the n-th moment exists, then φ admits the expansion

φ(ω) = 1 + Σ_{k=1}^{n} µk (iω)^k / k! + o(ω^n)    (ω → 0).    (2.4)

Proof. (a) We have

φ(ω + h) − φ(ω) = ∫_{−∞}^{∞} e^{iωx}(e^{ihx} − 1) dF(x)    (2.5)

so that

|φ(ω + h) − φ(ω)| ≤ ∫_{−∞}^{∞} |e^{ihx} − 1| dF(x) = 2 ∫_{−∞}^{∞} |sin(hx/2)| dF(x).

Now

∫_{x<−A, x>B} |sin(hx/2)| dF(x) ≤ ∫_{x<−A, x>B} dF(x) < ε

by taking A, B large, while

∫_{−A}^{B} |sin(hx/2)| dF(x) ≤ η ∫_{−A}^{B} dF(x) < η,
since |sin(hx/2)| < η for h small. Therefore |φ(ω + h) − φ(ω)| → 0 as h → 0, which proves uniform continuity.

(b) We shall prove (2.3) for n = 1, the proof being similar for n > 1. We can write (2.5) as

[φ(ω + h) − φ(ω)]/h = ∫_{−∞}^{∞} e^{iωx} · (e^{ihx} − 1)/h dF(x).    (2.5′)

Here

|e^{iωx} · (e^{ihx} − 1)/h| ≤ |(e^{ihx} − 1)/h| ≤ |x|

and

∫_{−∞}^{∞} |x| dF(x) < ∞

by hypothesis. Moreover (e^{ihx} − 1)/h → ix as h → 0. Therefore letting h → 0 in (2.5′) we obtain by the dominated convergence
theorem that

[φ(ω + h) − φ(ω)]/h → ∫_{−∞}^{∞} ix e^{iωx} dF(x)

as required. Clearly, this limit is continuous.

(c) We have

e^{iωx} = Σ_{k=0}^{n} (iωx)^k / k! + o(ω^n x^n)    (ω → 0),

so that

∫_{−∞}^{∞} e^{iωx} dF(x) = 1 + Σ_{k=1}^{n} (iω)^k µk / k! + ∫_{−∞}^{∞} o(ω^n x^n) dF(x),

where the last term on the right side is seen to be o(ω^n).
Remark. The converse of (b) is not always true: thus φ′(ω) may exist, but the mean may not. A partial converse is the following: suppose that φ⁽ⁿ⁾(ω) exists. If n is even, then the first n moments exist, while if n is odd, the first n − 1 moments exist.
2.2. Uniqueness and Inversion

Theorem 2.5 (uniqueness). Distinct distributions have distinct c.f.'s.

Proof. Let F have the c.f. φ, so that

φ(ω) = ∫_{−∞}^{∞} e^{iωx} dF(x).

We have for a > 0

(a/√(2π)) ∫_{−∞}^{∞} e^{−a²ω²/2 − iωy} φ(ω) dω = (a/√(2π)) ∫_{−∞}^{∞} dω e^{−a²ω²/2 − iωy} ∫_{−∞}^{∞} e^{iωx} dF(x)
  = ∫_{−∞}^{∞} dF(x) ∫_{−∞}^{∞} e^{iω(x−y)} (a/√(2π)) e^{−a²ω²/2} dω,

the inversion of integrals being clearly justified. The last integral is the c.f. (evaluated at x − y) of the normal distribution with mean 0
and variance a^{−2}, and therefore equals e^{−(x−y)²/(2a²)}. We therefore obtain the identity

(a/√(2π)) ∫_{−∞}^{∞} e^{−a²ω²/2 − iωy} φ(ω) dω = ∫_{−∞}^{∞} e^{−(y−x)²/(2a²)} dF(x)    (2.6)

for all a > 0. We note that, apart from the factor a√(2π), the right side of (2.6) is the density of the convolution F ∗ Na, where Na is the normal distribution with mean 0 and variance a². Now if G is a second distribution with the c.f. φ, it follows from (2.6) that F ∗ Na = G ∗ Na. Letting a → 0+ we find that F ≡ G as required.

Theorem 2.6 (inversion). (a) If the distribution F has c.f. φ and |φ(ω)/ω| is integrable, then for h > 0

F(x + h) − F(x) = (1/2π) ∫_{−∞}^{∞} e^{−iωx} · (1 − e^{−iωh})/(iω) · φ(ω) dω.    (2.7)

(b) If |φ| is integrable, then F has a bounded continuous density f given by

f(x) = (1/2π) ∫_{−∞}^{∞} e^{−iωx} φ(ω) dω.    (2.8)

Proof. (b) From (2.6) we find that the density fa of Fa = F ∗ Na is given by

fa(x) = (1/2π) ∫_{−∞}^{∞} e^{−a²ω²/2 − iωx} φ(ω) dω.    (2.9)

Here the integrand is bounded by |φ(ω)|, which is integrable by hypothesis. Moreover, as a → 0+, the integrand → e^{−iωx}φ(ω). Therefore by the dominated convergence theorem as a → 0+,

fa(x) → (1/2π) ∫_{−∞}^{∞} e^{−iωx} φ(ω) dω = f(x) (say).

Clearly, f is bounded and continuous. Now for every bounded interval I we have

Fa{I} = ∫_I fa(x) dx.
Letting a → 0+ in this we obtain

F{I} = ∫_I f(x) dx

if I is an interval of continuity of F. This shows that f is the density of F, as required.

(a) Consider the uniform distribution with density

uh(x) = 1/h for −h < x < 0, and = 0 elsewhere.

Its convolution with F has the density

fh(x) = ∫_{−∞}^{∞} uh(x − y) dF(y) = (1/h) ∫_x^{x+h} dF(y) = [F(x + h) − F(x)]/h

and c.f.

φh(ω) = φ(ω) · ∫_{−∞}^{∞} e^{iωx} uh(x) dx = φ(ω) · (1 − e^{−iωh})/(iωh).

By (b) we therefore obtain

[F(x + h) − F(x)]/h = (1/2π) ∫_{−∞}^{∞} e^{−iωx} φ(ω) · (1 − e^{−iωh})/(iωh) dω

provided that |φ(ω)(1 − e^{−iωh})/iω| is integrable. This condition reduces to the condition that |φ(ω)/ω| is integrable.
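The inversion formula (2.8) can be tested on a distribution whose c.f. is known in closed form. The sketch below is our own example: the two-sided exponential density e^{−|x|}/2 has the integrable c.f. 1/(1 + ω²), and a truncated midpoint-rule sum for (2.8) recovers the density (the truncation limit W and step count n are ad hoc choices).

```python
# Numerical sketch of the inversion formula (2.8) (our example, not the
# text's): the density e^{-|x|}/2 has c.f. phi(w) = 1/(1 + w^2).
import math

def invert(phi, x, W=2000.0, n=200000):
    """(1/2pi) * integral_{-W}^{W} e^{-iwx} phi(w) dw by the midpoint rule.
    Since phi here is real and even, only the cosine part survives."""
    h = 2.0 * W / n
    total = 0.0
    for i in range(n):
        w = -W + (i + 0.5) * h
        total += math.cos(w * x) * phi(w)
    return total * h / (2.0 * math.pi)

phi = lambda w: 1.0 / (1.0 + w * w)      # c.f. of the density e^{-|x|}/2
for x in (0.0, 1.0):
    assert abs(invert(phi, x) - 0.5 * math.exp(-abs(x))) < 1e-3
```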
2.3. Convergence Properties

Theorem 2.7 (continuity theorem). A sequence {Fn} of distributions converges properly to a distribution F iff the sequence {φn} of their c.f.'s converges to φ, which is continuous at the origin. In this case φ is the c.f. of F.
Proof. (i) If {Fn} converges properly to F, then

∫_{−∞}^{∞} u(x) dFn(x) → ∫_{−∞}^{∞} u(x) dF(x)

for every continuous and bounded function u. For u(x) = e^{iωx} it follows that φn(ω) → φ(ω), where φ is the c.f. of F. From Theorem 2.4(a) we know that φ is uniformly continuous.

(ii) Conversely suppose that φn(ω) → φ(ω), where φ is continuous at the origin. By the selection theorem there exists a subsequence {Fnk, k ≥ 1} which converges to F, a possibly defective distribution. Using (2.6) we have

(a/√(2π)) ∫_{−∞}^{∞} e^{−iωy − a²ω²/2} φnk(ω) dω = ∫_{−∞}^{∞} e^{−(y−x)²/(2a²)} dFnk(x).

Letting k → ∞ in this we obtain

(a/√(2π)) ∫_{−∞}^{∞} e^{−iωy − a²ω²/2} φ(ω) dω = ∫_{−∞}^{∞} e^{−(y−x)²/(2a²)} dF(x) ≤ F(∞) − F(−∞).    (2.10)

Writing the first expression in (2.10) as

(1/√(2π)) ∫_{−∞}^{∞} e^{−iω(y/a) − ω²/2} φ(ω/a) dω    (2.11)

and applying the dominated convergence theorem we find that (2.11) converges to φ(0) = 1 as a → ∞. By (2.10) it follows that F(∞) − F(−∞) ≥ 1, which gives F(−∞) = 0, F(∞) = 1, so that F is proper. By (i) φ is the c.f. of F, and by the uniqueness theorem F is unique. Thus every subsequence {Fnk} converges to F.

Theorem 2.8 (weak law of large numbers). Let {Xn, n ≥ 1} be a sequence of independent random variables with a common distribution and finite mean µ. Let Sn = X1 + X2 + · · · + Xn (n ≥ 1). Then as n → ∞, Sn/n → µ in probability.
Proof. Let φ be the c.f. of Xn. The c.f. of Sn/n is then

E(e^{iω(Sn/n)}) = φ(ω/n)^n = [1 + iµ(ω/n) + o(1/n)]^n → e^{iµω}

as n → ∞. Here e^{iµω} is the c.f. of a distribution concentrated at the point µ. By the continuity theorem it follows that the distribution of Sn/n converges to this degenerate distribution.

Theorem 2.9 (central limit theorem). Let {Xn, n ≥ 1} be a sequence of independent random variables with a common distribution and

E(Xn) = µ,   Var(Xn) = σ²

(both being finite). Let Sn = X1 + X2 + · · · + Xn (n ≥ 1). Then as n → ∞, the distribution of (Sn − nµ)/σ√n converges to the standard normal.

Proof. The random variables (Xn − µ)/σ have mean zero and variance unity. Let their common c.f. be φ. Then the c.f. of (Sn − nµ)/σ√n is

φ(ω/√n)^n = [1 − ω²/2n + o(1/n)]^n → e^{−ω²/2},

where the limit is the c.f. of the standard normal distribution. The desired result follows by the continuity theorem.

Remark. In Theorem 2.7 the convergence φn → φ is uniform with respect to ω in every finite interval [−Ω, Ω].

2.3.1. Convergence of types

Two distributions F and G are said to be of the same type if

G(x) = F(ax + b) with a > 0, b real.    (2.12)
Theorem 2.10. If for a sequence {Fn} of distributions we have

Fn(αn x + βn) → G(x),   Fn(an x + bn) → H(x)    (2.13)

for all points of continuity, with αn > 0, an > 0, and G and H non-degenerate distributions, then

αn/an → a,   (βn − bn)/an → b   and   G(x) = H(ax + b)    (2.14)

(0 < a < ∞, |b| < ∞).

Proof. Let Hn(x) = Fn(an x + bn). Then we are given that Hn(x) → H(x) and also Hn(ρn x + σn) = Fn(αn x + βn) → G(x), where

ρn = αn/an,   σn = (βn − bn)/an.    (2.15)

With the obvious notations we are given that

φn(ω) → φ(ω),   ψn(ω) ≡ e^{−iωσn/ρn} φn(ω/ρn) → ψ(ω)

uniformly in −Ω ≤ ω ≤ Ω. Let {ρnk} be a subsequence of {ρn} such that ρnk → a (0 ≤ a ≤ ∞). If a = ∞, then

|ψ(ω)| = lim |ψnk(ω)| = lim |φnk(ω/ρnk)| = |φ(0)| = 1

uniformly in [−Ω, Ω], so that ψ is degenerate, which is not true. If a = 0, then

|φ(ω)| = lim |φnk(ω)| = lim |ψnk(ρnk ω)| = |ψ(0)| = 1,

so that φ is degenerate, which is not true. So 0 < a < ∞. Now

e^{−iω(σnk/ρnk)} = ψnk(ω)/φnk(ω/ρnk) → ψ(ω)/φ(ω/a),

so that σnk/ρnk → a limit b/a (say). Also

ψ(ω) = e^{−iω(b/a)} φ(ω/a).    (2.16)

It remains to prove the uniqueness of the limit a. Suppose there are two subsequences of {ρn} converging to a and a′, and assume that
a < a′. Then the corresponding subsequences of {σn/ρn} converge to b/a and b′/a′ (say). From (2.16) we obtain

e^{−iω(b/a)} φ(ω/a) = e^{−iω(b′/a′)} φ(ω/a′)

and hence |φ(ω/a)| = |φ(ω/a′)|, or

|φ(ω)| = |φ((a/a′)ω)| = |φ((a²/a′²)ω)| = · · · = |φ((aⁿ/a′ⁿ)ω)| → |φ(0)| = 1.

This means that φ is degenerate, which is not true. So a ≮ a′. Similarly a ≯ a′. Therefore a = a′, as required. Since we have proved (2.16), the theorem is completely proved.
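The limit in Theorem 2.9 lends itself to a direct numerical check. The following sketch is our own construction (Bernoulli summands with illustrative parameters): it compares the exact distribution of the normalized sums with the standard normal limit, without any simulation.

```python
# Numerical sketch (our construction, assuming Bernoulli(p) summands):
# the exact distribution of (S_n - n*mu)/(sigma*sqrt(n)) is compared with
# the standard normal limit promised by Theorem 2.9.
import math

def binom_cdf(m, n, p):
    """P{S_n <= m} for S_n ~ Binomial(n, p), computed exactly."""
    return sum(math.comb(n, k) * p**k * (1.0 - p)**(n - k)
               for k in range(0, m + 1))

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n, p = 400, 0.3
mu, sigma = n * p, math.sqrt(n * p * (1.0 - p))

# P{(S_n - n mu)/(sigma sqrt n) <= x} versus Phi(x) at a few points
errs = [abs(binom_cdf(int(mu + x * sigma), n, p) - Phi(x))
        for x in (-1.0, 0.0, 1.0, 2.0)]
assert max(errs) < 0.05
```

The residual error here is dominated by the lattice nature of the binomial (of order 1/√n), which is consistent with the purely distributional statement of the theorem.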
2.4. A Criterion for c.f.'s

A function f of a real variable ω is said to be non-negative definite in (−∞, ∞) if for all real numbers ω1, ω2, . . . , ωn and complex numbers a1, a2, . . . , an

Σ_{r,s=1}^{n} f(ωr − ωs) ar ās ≥ 0.    (2.17)

For such a function the following properties hold.

(a) f(0) ≥ 0. If in (2.17) we put n = 2, ω1 = ω, ω2 = 0, a1 = a, a2 = 1 we obtain

f(0)(1 + |a|²) + f(ω)a + f(−ω)ā ≥ 0.    (2.18)

When ω = 0 and a = 1 this reduces to f(0) ≥ 0.

(b) f̄(ω) = f(−ω). We see from (2.18) that f(ω)a + f(−ω)ā is real. This gives f̄(ω) = f(−ω).

(c) |f(ω)| ≤ f(0). In (2.18) let us choose a = λf̄(ω), where λ is real. Then

f(0) + 2λ|f(ω)|² + λ²|f(ω)|²f(0) ≥ 0.

This is true for all λ, so

|f(ω)|⁴ ≤ |f(ω)|²[f(0)]²,   or |f(ω)| ≤ f(0), as required.

Theorem 2.11. A function φ of a real variable is the c.f. of a distribution iff it is continuous and non-negative definite.
Proof. (i) Suppose φ is a c.f.; that is,

φ(ω) = ∫_{−∞}^{∞} e^{iωx} dF(x),

where F is a distribution. By Theorem 2.4(a), φ is continuous. Moreover,

Σ_{r,s=1}^{n} φ(ωr − ωs) ar ās = Σ_{r,s=1}^{n} ar ās ∫_{−∞}^{∞} e^{i(ωr−ωs)x} dF(x)
  = ∫_{−∞}^{∞} ( Σ_{r=1}^{n} ar e^{iωr x} ) ( Σ_{s=1}^{n} ās e^{−iωs x} ) dF(x)
  = ∫_{−∞}^{∞} | Σ_{r=1}^{n} ar e^{iωr x} |² dF(x) ≥ 0,

which shows that φ is non-negative definite.

(ii) Conversely, let φ be continuous and non-negative definite. Then considering the integral as the limit of a sum we find that

∫_0^τ ∫_0^τ e^{−i(ω−ω′)x} φ(ω − ω′) dω dω′ ≥ 0    (2.19)

for τ > 0. Now consider

Pτ(x) = (1/τ) ∫_0^τ ∫_0^τ e^{−i(ω−ω′)x} φ(ω − ω′) dω dω′ = ∫_{−∞}^{∞} e^{−isx} φτ(s) ds,

where

φτ(t) = (1 − |t|/τ) φ(t) for |t| ≤ τ,   φτ(t) = 0 for |t| ≥ τ.    (2.20)
From (2.20) we obtain

ψΛ(t) = (1/2π) ∫_{−Λ}^{Λ} (1 − |λ|/Λ) e^{itλ} Pτ(λ) dλ
  = (1/2π) ∫_{−∞}^{∞} [ ∫_{−Λ}^{Λ} (1 − |λ|/Λ) e^{i(t−s)λ} dλ ] φτ(s) ds
  = (1/2π) ∫_{−∞}^{∞} [ 4 sin²(½Λ(s − t)) / (Λ(s − t)²) ] φτ(s) ds → φτ(t) as Λ → ∞.

On account of (2.19), ψΛ is a c.f., and φτ is continuous at the origin. By the continuity theorem φτ is a c.f. Again

φτ(t) → φ(t)   as τ → ∞,

and since φ is continuous at the origin it follows that φ is a c.f., as was to be proved.

Remark. This last result is essentially a theorem due to S. Bochner.

Remark on Theorem 2.7. If a sequence {Fn} of distributions converges properly to a distribution F, then the sequence {φn} of their c.f.'s converges to φ, which is the c.f. of F, and the convergence is uniform in every finite interval.
Let A < 0, B > 0 be points of continuity of F . We have ∞ ∞ eiωx Fn {dx} − eiωx F {dx} φn (ω) − φ(ω) = −∞
−∞
iωx
= x
B
B
+ A
e
iωx
e
We have I3 =
B
A
eiωx Fn {dx} −
iωx
= {e
[Fn (x) −
Fn {dx} −
Fn {dx} −
= I1 + I2 + I3 B A
F (x)]}B A
xB B
A
iωx
e
eiωx F {dx}
F {dx}
(say).
eiωx F {dx} − iω
B A
eiωx [Fn (x) − F (x)]dx
and so
    |I₃| ≤ |F_n(B) − F(B)| + |F_n(A) − F(A)| + |ω| ∫_A^B |F_n(x) − F(x)| dx.
Given ε > 0 we can make
    |F_n(B) − F(B)| < ε/9,  |F_n(A) − F(A)| < ε/9
for n sufficiently large. Also, since |F_n(x) − F(x)| ≤ 2 and F_n(x) → F(x) at points of continuity of F, we have for |ω| < Ω
    |ω| ∫_A^B |F_n(x) − F(x)| dx ≤ Ω ∫_A^B |F_n(x) − F(x)| dx < ε/9.
Thus |I₃| < ε/3. Also, for A, B sufficiently large and n large,
    |I₁| ≤ ∫_{x<A} F_n{dx} ≤ F_n(A) < ε/3,  |I₂| ≤ ∫_{x>B} F_n{dx} ≤ 1 − F_n(B) < ε/3.
The results follow from the last three inequalities.
2.5. Problems for Solution
1. Consider the family of distributions with densities f_a (−1 ≤ a ≤ 1) given by f_a(x) = f(x)[1 + a sin(2π log x)], where f(x) is the log-normal density
    f(x) = (1/√(2π)) x^{−1} e^{−(1/2)(log x)²} for x > 0,  f(x) = 0 for x ≤ 0.
Show that f_a has exactly the same moments as f. (Thus the log-normal distribution is not uniquely determined by its moments.)
2. Let {p_k, k ≥ 0} be a probability distribution, and {F_n, n ≥ 0} a sequence of distributions. Show that Σ_{n=0}^{∞} p_n F_n(x) is also a distribution.
3. Show that φ(ω) = e^{λ(e^{−|ω|} − 1)} is a c.f., and find the corresponding density.
4. A distribution is concentrated on {±2, ±3, ...} with weights
    p_k = c / (k² log |k|)   (k = ±2, ±3, ...),
where c is such that the distribution is proper. Find its c.f. φ and show that φ′ exists but the mean does not.
5. Show that the function φ(ω) = e^{−|ω|^α} (α > 2) is not a c.f.
6. If a c.f. φ is such that φ(ω)² = φ(cω) for some constant c, and the variance is finite, show that φ is the c.f. of the normal distribution.
7. A degenerate c.f. φ is factorized in the form φ = φ₁φ₂, where φ₁ and φ₂ are c.f.'s. Show that φ₁ and φ₂ are both degenerate.
8. If the sequence of c.f.'s {φ_n} converges to a c.f. φ and ω_n → ω₀, show that φ_n(ω_n) → φ(ω₀).
9. If {φ_n} is a sequence of c.f.'s such that φ_n(ω) → 1 for −δ < ω < δ, then φ_n(ω) → 1 for all ω.
10. A sequence of distributions {F_n} converges properly to a nondegenerate distribution F. Prove that the sequence {F_n(a_n x + b_n)} converges to a distribution degenerate at the origin iff a_n → ∞ and b_n = o(a_n).
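Problem 1 can be checked numerically. The sketch below (my own illustration, not part of the text; the grid bounds and moment orders are arbitrary) substitutes x = e^y, which turns each moment of f_a into a Gaussian integral, and verifies by quadrature that the perturbation a·sin(2π log x) leaves every moment equal to the log-normal value e^{n²/2}.

```python
import numpy as np

# Moments of the log-normal density f and of f_a(x) = f(x)*(1 + a*sin(2*pi*log x)),
# computed after the substitution x = exp(y) (the integrand becomes Gaussian).
y = np.linspace(-12.0, 12.0, 200_001)
dy = y[1] - y[0]
gauss = np.exp(-0.5 * y**2) / np.sqrt(2.0 * np.pi)

def moment(n, a):
    integrand = np.exp(n * y) * gauss * (1.0 + a * np.sin(2.0 * np.pi * y))
    return float(np.sum(integrand) * dy)   # trapezoid-accurate: integrand decays fast

for n in range(5):
    m_plain = moment(n, 0.0)
    m_pert = moment(n, 0.7)                # any a in [-1, 1] works
    assert abs(m_plain - m_pert) < 1e-8 * max(1.0, m_plain)
    assert abs(m_plain - np.exp(n * n / 2.0)) < 1e-6 * np.exp(n * n / 2.0)
print("all moments agree")
```

The cancellation is exact because E[sin(2π(n + Z))] = 0 for integer n and standard normal Z.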
Chapter 3
Analytic Characteristic Functions 3.1. Definition and Properties Let F be a probability distribution and consider the transform ∞ eθx dF (x) (3.1) φ(θ) = −∞
√ for θ = σ + iω, where σ, ω are real and i = −1. This certainly exists for θ = iω. Since B B θx ≤ e dF (x) eσx dF (x), (3.2) A A ∞ φ(θ) exists if −∞ eσx dF (x) is finite. Clearly, the integrals 0 ∞ eσx dF (x), eσx dF (x) (3.3) 0
−∞
converge for σ < 0, σ > 0 respectively. Suppose there exist numbers α, β (0 < α, β ≤ ∞) such that the first integral in (3.3) converges for σ < β and the second for σ > −α, then ∞ eσx dF (x) < ∞ for − α < σ < β. (3.4) −∞
In this case φ(θ) converges in the strip −α < σ < β of the complex plane, and we say (in view of Theorem 3.1 below) that F has an analytic c.f. φ. If α = β = ∞ the c.f. is said to be entire (analytic on the whole complex plane). 27
May 12, 2011
14:38
28
9in x 6in
Topics in Probability
b1108-ch03
Topics in Probability
The following examples show that a distribution need not have an analytic c.f. and also that there are distributions with entire c.f.’s. The conditions under which an analytic c.f. exists are stated in Theorem 3.5. Examples Distribution
c.f.
„ « n k n−k Binomial: f (n, k) = p q k 1 2 1 Normal: f (x) = √ e− 2 x 2π
Cauchy: f (x) =
Laplace: f (x) =
whole plane
e2θ
whole plane
e−|θ|
σ=0
„
xα−1 Γ(α)
1−
1 −|x| e 2
Poisson: f (k) = e−λ
(q + peθ )n 1 2
1 1 · π 1 + x2
Gamma: f (x) = e−λxλ α
Regions of existence
θ λ
«−α
σ<λ
(1 − θ2 )−1
λk k!
θ
eλ(e
−1 < σ < 1
−1)
whole plane
Theorem 3.1. The c.f. φ is analytic in the interior of the strip of its convergence. Proof.
Let φ(θ + h) − φ(θ) − I= h
∞ −∞
xeθx dF (x)
where the integral converges in the interior of the strip of convergence, since for δ > 0, ∞ ∞ ∞ θx σx xe dF (x) ≤ |x|e dF (x) ≤ eδ|x|+σx dF (x) −∞
−∞
−∞
and the last integral is finite for −α + δ < σ < β − δ. We have hx ∞ − 1 − hx θx e dF (x) e I= h −∞ ∞ eθx (h(x2 /2!) + h2 x3 /3! + · · · )dF (x). = −∞
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch03
Analytic Characteristic Functions
Therefore
|I| ≤
∞
−∞
≤ |h|
29
eσx |h||x|2 (1 + |hx|/1! + |hx|2 /2! + · · · )dF (x) ∞
−∞
eσx+δ|x|+|h||x|dF (x) < ∞
in the interior of the strip of convergence. As |h| → 0 the last expression tends to zero, so ∞ φ(θ + h) − φ(θ) → xeθx dF (x). h −∞ Thus φ (θ) exists for θ in the interior of the strip, which means that φ(θ) is analytic there. Theorem 3.2. The c.f. φ is uniformly continuous along vertical lines that belong to the strip of convergence. Proof.
We have
∞ σx iω x iω x e (e 1 − e 2 )dF (x) |φ(σ + iω1 ) − φ(σ + iω2 )| = −∞ ∞ eσx |ei(ω1 −ω2 )x − 1|dF (x) ≤ −∞ ∞ eσx |sin(ω1 − ω2 )(x/2)|dF (x). =2 −∞
Since the integrand is uniformly bounded by eσx and approaches 0 as ω1 → ω2 , uniformly continuity follows. Theorem 3.3. An analytic c.f. is uniquely determined by its values on the imaginary axis. Proof. φ(iω) is the c.f. discussed in Chapter 2 and the result fol lows by the uniqueness theorem of that section. Theorem 3.4. The function log φ(σ) is convex in the interior of the strip of convergence.
May 12, 2011
14:38
30
9in x 6in
Topics in Probability
b1108-ch03
Topics in Probability
Proof.
We have φ(σ)φ (σ) − φ (σ)2 d2 log φ(σ) = dσ 2 φ(σ)2
and by the Schwarz inequality ∞ 2 ∞ 2 1 1 2 σx σx σx xe dF (x) = e 2 · xe 2 dF (x) φ (σ) = −∞ −∞ ∞ ∞ σx ≤ e dF (x) · x2 eσx dF (x) = φ(σ)φ (σ). −∞
Therefore
d2 dσ2
−∞
log φ(σ) ≥ 0, which shows that log φ(σ) is convex.
Corollary 3.1. If F has an analytic c.f. φ and φ (0) = 0, then φ(σ) is minimal at σ = 0. If φ is an entire function, then φ(σ) → ∞ as σ → ±∞, unless F is degenerate.
3.2. Moments Recall that
µn =
∞
−∞
xn dF (x),
νn
∞ −∞
|x|n dF (x)
have been defined as the ordinary moment and absolute moment of order n respectively. If F has an analytic c.f. φ, then µn = φ(n) (0), and ∞ θn µn , φ(θ) = n! 0 the series being convergent in |θ| < δ = min(α, β). The converse is stated in the following theorem. θn Theorem 3.5. If all moments of F exist and the series µn n! has a nonzero radius of convergence ρ, then φ exists in |σ| < ρ, and inside the circle |θ| < ρ, φ(θ) =
∞ 0
µn
θn . n!
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch03
Analytic Characteristic Functions
31
θn Proof. We first consider the series νn n! and show that it also converges in |θ| < ρ. From Lyapunov’s inequality 1
1
n+1 νnn ≤ νn+1
we obtain 1
1
1
1
ν 2n µ 2n |µn | n νnn = lim sup 2n = lim sup 2n ≤ lim sup . lim sup n 2n 2n n Also, since |µn | ≤ νn we have 1
1
1
1
νnn |µn | n ≤ lim sup . lim sup n n Therefore νnn |µn | n = lim sup lim sup n n θn which shows that the series νn n! has radius of convergence ρ. For arbitrary A > 0 we have A ∞ ∞ |θ|n |θ|n A n ≥ νn |x| dF (x) = e|θx| dF (x) ∞> n! n! −A −A 0 for |θ| < ρ. So
A −A
e dF (x) ≤ θx
A −A
e|σx| dF (x) < ∞
for |σ| < ρ. Since A is arbitrary, this implies that φ(θ) converges in the strip |σ| < ρ.
3.3. The Moment Problem The family of distributions given by x α e−|y| {1 + ε sin(|y|α tan πα)}dy Fε (x) = k −∞
for −1 ≤ ε ≤ 1, 0 < α < 1 has the same moments of all orders. This raises the question: under what conditions is a distribution uniquely determined by its moments?
May 12, 2011
14:38
32
9in x 6in
Topics in Probability
b1108-ch03
Topics in Probability
Theorem 3.6. If F has an analytic c.f. then it is uniquely determined by its moments. θn Proof. If F has an analytic c.f., then the series µn n converges in |θ| < ρ = min(α, β) and φ(θ) is given by this series there. If there is a second d.f. G with the same moments µn , then by Theorem 3.5, G has an analytic c.f. ψ(θ), and ψ(θ) is also given by that series in |θ| < ρ. Therefore φ(θ) = ψ(θ) in the strip |σ| < ρ and hence F = G. The cumulant generating function The principal value of log φ(θ) is called the cumulant generating function K(θ). It exists at least on the imaginary axis between ω = 0 and the first zero of φ(iω). The cumulant of order r is defined by r 1 d log φ(iω) . Kr = i r dω ω=0 This exists if, and only if, µr exists; Kr can be expressed in terms of µr . We have K(iω) =
∞ 0
Kr
(iω)r r!
whenever the series converges. Theorem 3.7. Let φ(θ) = φ1 (θ)φ2 (θ), where φ(θ), φ1 (θ), φ2 (θ) are c.f.’s. If φ(θ) is analytic in −α < σ < β, so are φ1 (θ) and φ2 (θ). Proof.
We have (with the obvious notations) ∞ ∞ ∞ eσx dF (x) = eσx dF1 (x) · eσx dF2 (x), −∞
−∞
−∞
and since φ(σ) is convergent, so are φ1 (σ) and φ2 (σ).
Theorem 3.8 (Cram´ er). If X1 and X2 are independent r.v. such that their sum X = X1 + X2 has a normal distribution, then X1 , X2 have normal distributions (including the degenerate case of the normal with zero variance).
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch03
Analytic Characteristic Functions
33
Proof. Assume without loss of generality that E(X1 ) = E(X2 ) = 0. Then E(X) = 0. Assume further that E(X 2 ) = 1. Let φ1 (θ), φ2 (θ) be the c.f.’s of X1 and X2 . Then we have 1 2
φ1 (θ)φ2 (θ) = e 2 θ .
(3.5)
Since the right side of (3.5) is an entire function without zeros, so are φ1 (θ) and φ2 (θ). By the convexity property (Theorem 3.4) we have φ1 (σ) ≥ 1, φ2 (σ) ≥ 1 as σ moves away from zero. Then (3.5) gives 1
2
e 2 σ = φ1 (σ)φ2 (σ) ≥ φ1 (σ) ≥ |φ1 (θ)|. 1
(3.6)
2
Similarly |φ2 (θ)| ≤ e 2 σ . Therefore 1
1
2
e 2 σ |φ1 (θ)| ≥ |φ1 (θ)φ2 (θ)| = e 2 Re(θ
2)
1
= e 2 (σ
2 −ω 2 )
,
so that 1
2
|φ(θ)| ≥ e− 2 ω .
(3.7)
From (3.6) and (3.7) we obtain 1 1 1 1 − |θ|2 ≤ − ω 2 ≤ log |φ1 (θ)| ≤ σ 2 ≤ |θ|2 , 2 2 2 2 or, setting K1 (θ) = log φ1 (θ), 1 |Re K1 (θ)| ≤ |θ|2 . 2
(3.8)
From a strengthened version of Liouville’s theorem (see Lemma 3.1) it follows that K1 (θ) = a1 θ + a2 θ 2 . Similarly K2 (θ) = b1 θ + b2 θ 2 . Theorem 3.9 (Raikov). If X1 and X2 are independent r.v. such that their sum X = X1 + X2 has a Poisson distribution, then X1 , X2 have also Poisson distributions. Proof. The points of increase of X are k = 0, 1, 2, . . . , so all points of increase α1 and α2 of X1 and X2 are such that α1 + α2 = some k, and moreover the first points of increase of X1 and X2 are α and −α where α is some finite number. Without loss of generality we take
May 12, 2011
14:38
34
9in x 6in
Topics in Probability
b1108-ch03
Topics in Probability
α = −α = 0, so that X1 and X2 have k = 0, 1, 2, . . . as the only possible points of increase. Their c.f.’s are then of the form ∞ ∞ kθ ak e , φ2 (θ) = bk ekθ (3.9) φ1 (θ) = 0
0
ak = with a0 , b0 > 0, ak , bk ≥ 0 (k ≥ 1) and and φ1 (θ) = f1 (z), φ2 (θ) = f2 (z). We have
bk = 1. Let z = eθ
f1 (z)f2 (z) = eλ(z−1) .
(3.10)
Therefore a0 bk + a1 bk−1 + · · · + ak b0 = e−λ
λk k!
(k = 0, 1, . . .),
(3.11)
which gives ak ≤ Similarly |f2 (z)| ≤
1 −λ λk , e b0 k!
1 λ(|z|−1) . a0 e
|f1 (z)| ≤
1 λ(|z|−1) e . b0
(3.12)
Hence
1 λ(|z|−1) e |f1 (z)| ≥ |f1 (z)f2 (z)| = eλ(u−1) a0 where u = Re (z). This gives |f1 (z)| ≥ a0 e−λ(|z|−u) ≥ a0 e−2λ|z| .
(3.13)
From (3.12) and (3.13), noting that a0 b0 = e−λ we find that −2λ|z| ≤ log |f1 (z)| − log a0 ≤ 2λ|z|, or setting K1 (z) = log f1 (z), and log a0 = −λ1 < 0, |Re K1 (z) + λ1 | ≤ 2λ|z|.
(3.14)
Proceeding as in the proof of Theorem 3.8, we obtain λ1 +K1 (z) = cz, where c is a constant. Since f1 = 1, K1 (1) = 0, so c = λ1 and f1 (z) = eλ1 (z−1) , which is the transform of the Poisson distribution. Theorem 3.10 (Marcinkiewicz). Suppose a distribution has a c.f. φ(θ) such that φ(iω) = eP (iω) , where P is a polynomial. Then (i) φ(θ) = eP (θ) in the whole plane, and (ii) φ is the c.f. of a normal distribution (so that P (θ) = αθ + βθ 2 with β ≥ 0).
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch03
Analytic Characteristic Functions
Proof.
35
Part (i) is obvious. For Part (ii) let P (θ) =
n
ak θ k ,
n finite, ak real (cumulants).
1
From |φ(θ)| ≤ φ(σ) we obtain |eP (θ) | ≤ eP (σ) or eRe P (θ) ≤ eP (σ) . Therefore Re P (θ) ≤ P (σ). Put θ = reiα , so that σ = r cos α, ω = r sin α. Then an r n cos nα + an−1 r n−1 cos(n − 1)α + · · · ≤ an r n cosn α + an−1 r n−1 cos(n−1) α + · · · = 0. Dividing both sides of this inequality by r n and Suppose an π we letting r → ∞ we obtain an cos nα ≤ an cosn α. Putting α = 2n obtain π , an · 0 ≤ an cosn 2n so an ≥ 0 for n ≥ 2. Similarly, putting α = an ≤ an cosn
2π n
we find that
2π , n
and since cosn 2π n < 1 for n > 2 we obtain an ≤ 0. Therefore an = 0 for n > 2, P (θ) = a1 θ + a2 θ 2 , and φ(θ) is the c.f. of a normal distribution, the case a2 = 0 being the degenerate case of zero variance. Theorem 3.11 (Bernstein). Let X1 and X2 be independent r.v. with unit variances. Then if Y1 = X1 + X2 ,
Y2 = X1 − X2
(3.15)
are independent, all four r.v. X1 , X2 , Y1 , Y2 are normal. This is a special case of the next theorem (with n = 2, a1 = b1 = a2 = 1, b2 = −1). For a more general result see [Feller (1971), pp. 77–80, 525–526]. He considers the linear transformation Y1 = = 0, where ∆ is the a11 X1 + a12 X2 , Y2 = a21 X1 + a22 X2 with |∆|
May 12, 2011
14:38
36
9in x 6in
Topics in Probability
b1108-ch03
Topics in Probability
determinant
a12 . a22
a ∆ = 11 a21
If a11 a21 + a12 a22 = 0 then the transformation represents a rotation. Thus (3.15) is a rotation. Theorem 3.12 (Skitovic). Let X1 , X2 , . . . , Xn be n independent r.v. such that the linear forms L1 = a1 X1 + a2 X2 + · · · + an Xn , L2 = b1 X1 + b2 X2 + · · · + bn Xn ,
(ai = 0, bi = 0),
are independent. Then all the (n + 2) r.v. are normal. Proof. We shall first assume that (i) the ratios ai /bi are all distinct, and (ii) all moments of X1 , X2 , . . . , Xn exist. Then for α, β real we have (with obvious notations) (θ)
(θ)
(θ)
φαL1 +βL2 = φαL1 φβL2 so that n
(θ) φ(αai +βbi )Xi
=
i=1
n
(θ) φαai Xi
i=1
·
n
(θ)
φβbi Xi .
i=1
Taking logarithms of both sides and expanding in powers of θ we obtain n
Kr(αai +βbi )Xi
i=1
=
n i=1
Krαai Xi
+
n
Krβbi Xi
i=1
or n
Kr(Xi ) {(αai + βbi )r − (αai )r − (βbi )r } = 0
i=1
for all r ≥ 1. This can be written as r−1 n r Kr(Xi ) (αai )s (βbi )r−s = 0 s i=1
s=1
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch03
Analytic Characteristic Functions
37
for all r ≥ 1 and all α, β. Hence n
asi br−s Kr(Xi ) = 0 (s = 1, 2, . . . , r − 1, r ≥ 1), i
i=1
Let r ≥ n + 1. Then for s = 1, 2, . . . , n, i = 1, 2, . . . n we can write the above equations as Ar χr = 0
(3.16)
1 ≤ s, i ≤ n and χr is the column vector with where Ar = aSi br−s i (X1 )
elements Kr
(X2 )
, Kr
(Xn )
, . . . , Kr
. Since
|Ar | = (a1 a2 · · · an )(b1 b2 · · · bn )r−1
(cj − ci ) = 0,
j>i
the only solution of (3.16) is χr = 0. Therefore Kr(Xi ) = 0 for r ≥ n + 1, i = 1, 2, . . . , n.
(3.17)
Thus all cumulants of Xi of order ≥n + 1 vanish, and K (Xi ) (θ) reduces to a polynomial of degree at most n. By the theorem of Marcinkiewicz, each Xi has a normal distribution. Hence L1 and L2 have normal distributions. Next suppose that some of the ai /bi are the same. For example, let a1 /b1 = a2 /b2 , and let Y1 = a1 X1 + a2 X2 . Then L1 = Y1 + a3 X3 + · · · + an Xn , b1 L2 = Y1 + b3 X3 + · · · + bn Xn . a1 Repeat this process till all the ai /bi are distinct. Then by what has just proved, the Yi are normal. By Cram´er’s theorem the Xi are normal. Finally it remains to prove that the moments of Xi exist. This follows from the fact that L1 and L2 have finite moments of all orders. = 0, bi = 0 we can take a, c > 0 To prove this, we note that since ai such that |ai |, |bi | ≥ c > 0. Also, let us standardize the ai and bi so that |ai | ≤ 1, |bi | ≤ 1. Now if |L| = |a1 X1 +a2 X2 +· · ·+ an Xn | ≥ nM ,
May 12, 2011
14:38
38
9in x 6in
Topics in Probability
b1108-ch03
Topics in Probability
then at least one |Xi | ≥ M . Therefore P{|L1 | ≥ nM } ≤
n
P{|Xi | ≥ M }.
(3.18)
i=1
= i, then |L1 | ≥ M , Further, if c|Xi | ≥ nM and |Xj | < M for all j |L2 | ≥ M . Thus
nM
P{|Xj | < M } P{|L1 | ≥ M, |L2 | ≥ M } ≥ P |Xi | ≥ c j =1 n
nM
P{|Xj | < M }. ≥ P |Xi | ≥ c j=1
Summing this over i = 1, 2, . . . , n we obtain, using (3.18), n
n2 M
P{|Xj | < M }. n P{|L1 | ≥ M, |L2 | ≥ M } ≥ P |L1 | ≥ c j=1
Since L1 and L2 are independent, this gives 2 P |L1 | ≥ n cM P{|L2 | ≥ M } ≤ n n →0 P{|L1 | ≥ M } 1 P{|Xj | < M }
(3.19)
as M → ∞. We can write (3.19) as follows. Choose n2 /c = α > 1. Then P{|L1 | ≥ αM } → 0 as M → ∞. P{|L1 | ≥ M }
(3.20)
By a known result (Lemma 3.2), L1 , and similarly L2 , has finite moments of all orders. Lemma 3.1 (see [Hille (1962)]). If f (θ) is an entire function and |Ref (θ)| ≤ c|θ|2 , then f (θ) = a1 θ + a2 θ 2 . n Proof. We have f (θ) = ∞ 0 an θ , the series being convergent on the whole plane. Here n! f (θ) dθ (n = 0, 1, 2, . . .). (3.21) an = 2πi |θ|≤r θ n+1
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch03
Analytic Characteristic Functions
Also, since there are no negative powers n! f (θ)θ n−1 dθ (n = 1, 2, . . .). 0= 2πi |θ|≤r
39
(3.22)
From (3.21) we obtain n! an = 2πi or n! an r = 2π
n
0
2π
2π
0
f (reiα ) reiα idα r n+1 ei(n+1)α
f (reiα )e−inα dα
(n = 0, 1, . . .).
Similarly from (3.22) we obtain n! 2π f (reiα )einα dα 0= 2π 0 or n! 2π f (reiα )e−inα dα (n = 1, 2, . . .). 0= 2π 0 From (3.23) and (3.24) we obtain n! 2π n Ref (reiα )einα dα an r = π 0 Therefore |an |r n ≤
n! π
2π 0
(3.23)
(3.24)
(n ≥ 1).
ck2 dα = 2cn!r 2
or |an | ≤
2cn! → 0 as r → ∞ for n > 2. r n−2
This gives f (θ) = a0 θ + a1 θ + a2 θ 2 .
Lemma 3.2 (see [Lo´ eve (1963)]). For α > 1 if 1 − F (αx) + F (−αx) →0 1 − F (x) + F (−x) then F has moments of all orders.
as x → ∞
May 12, 2011
14:38
40
9in x 6in
Topics in Probability
b1108-ch03
Topics in Probability
Proof.
Given ε > 0 choose A so large that for x > A
1 − F (αx) + F (−αx) < ε and 1 − F (x) + F (−x)
1 − F (A) + F (−A) < ε.
Then for any positive integer r, r 1 − F (αr A) + F (−αr A) 1 − F (αs A) + F (−αs A) = < εr 1 − F (A) + F (−A) 1 − F (αs−1 A) + F (−αs−1 A) s=1
so that 1 − F (αr A) + F (−αr A) < εr+1 . Therefore 1 − F (x) + F (−x) < εr+1 for x > αr A. Now ∞ nxn−1 [1 − F (x) + F (−x)]dx A
= <
∞ r=0 ∞ r=0
αr+1 A αr A
εr+1
nxn−1 [1 − F (x) + F (−α)]dx
αr+1 A αr A
nxn−1 dx = εAn (αn − 1)
and the series converges for ε < α−n .
∞ (αn ε)r 0
3.4. Problems for Solution 1. If 1 − F (x) + F (−x) = 0(e−ρx ) as x → ∞ for some ρ > 0, show that F is uniquely determined by its moments. 2. Show that the distribution whose density is given by √ 1 e−| x| for x > 0 f (x) = 2 0 for x ≤ 0 does not have an analytic c.f.
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch03
Analytic Characteristic Functions
41
3. Proof of Bernstein’s theorem. Introduce a change of scale so that Y1 = √12 (X1 + X2 ), Y2 = √12 (X1 − X2 ). Then prove that 1 s s (X1 ) (Y1 ) + 1s Ks(X2 ) , 1 Ks Ks = √ 2 1 s s (X1 ) (Y2 ) + (−1)s Ks(X2 ) , 1 Ks Ks = √ 2 (X )
(X )
(Y )
(Y )
and similarly for Ks 1 , Ks 2 in terms of Ks 1 , Ks 2 . Hence show that (X ) Ks i ≤ 1 2Ks(X1 ) + 2Ks(X2 ) (i = 1, 2). s 2 (X )
This gives Ks i = 0 for s > 2, i = 1, 2. 4. If X1 , X2 are independent and there exists one rotation (X1 , X2 ) → (Y1 , Y2 ) such that Y1 , Y2 are also independent, then show that Y1 , Y2 are independent for every rotation.
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch04
Chapter 4
Infinitely Divisible Distributions 4.1. Elementary Properties A distribution and its c.f. φ are called infinitely divisible if for each positive integer n there exists a c.f. φn such that φ(ω) = φn (ω)n .
(4.1)
It is proved below (Corollary 4.1) that if φ is infinitely divisible, then φ(ω) = 0. Defining φ1/n as the principal branch of the n-th root, we see that the above definition implies that φ1/n is a c.f. for every n ≥ 1. Examples (1) A distribution concentrated at a single point is infinitely divisible, since for it we have φ(ω) = eiaω = (eiaω/n )n where a is a real constant. (2) The Cauchy density f (x) = πa [a2 +(x−γ)2 ]−1 (a > 0) has φ(ω) = eiωγ−a|ω| . The relation (4.1) holds with φn (ω) = eiωγ/n−a|ω|/n . Therefore the Cauchy density is infinitely divisible. (3) The normal density with mean m and variance σ 2 has c.f. φ(ω) = 1
2
2
1 σ2 2 ω n
eiωm− 2 σ ω = (eiωm/n− 2 infinitely divisible.
)n . Thus the normal distribution is
43
May 12, 2011
14:38
44
9in x 6in
Topics in Probability
b1108-ch04
Topics in Probability
(4) The gamma distribution (including the exponential) is infinitely divisible, since its c.f. is n φ(ω) = (1 − iω/λ)−α = (1 − iω/λ)−α/n . The discrete counterparts, the negative binomial and geometric distributions are also infinitely divisible. (5) Let N be a random variable with the (simple) Poisson distribution e−λ λk /k!(k = 0, 1, 2, . . .). Its c.f. is given by φ(ω) = eλ(e
iω −1)
,
which is clearly infinitely divisible. Now let {Xk } be a sequence of independent random variables with a common c.f. Φ and let these be independent of N . Then the sum X1 + X2 + · · · + XN − b has the c.f. φ(ω) = e−iωb+λ[Φ(ω)−1] , which is the compound Poisson. Clearly, this is also infinitely divisible. Lemma 4.1. Let {φn } be a sequence of c.f.’s. Then φnn → φ continuous iff n(φn − 1) → ψ with ψ continuous. In this case φ = eψ . Theorem 4.1. A c.f. φ is infinitely divisible iff there exists a sequence {φn } of c.f.’s such that φnn → φ. Proof. If φ is infinitely divisible, then by definition there exists a c.f. φn such that φnn = φ(n ≥ 1). Therefore the condition is necessary. Conversely, let φnn → φ. Then by Lemma 4.1, n[φn (ω) − 1] → ψ = log φ. Now for t > 0, ent[φn (ω)−1] → etψ(ω) as n → ∞. Here the expression on the left side is the c.f. of the compound Poisson distribution and the right side is a continuous function. Therefore for each t > 0, etψ is a c.f. and φ = eψ = (eψ/n )n , which shows that φ is infinitely divisible.
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch04
Infinitely Divisible Distributions
45
Corollary 4.1. If φ is infinitely divisible, φ = 0. This was proved in the course of the proof of Theorem 4.1. Corollary 4.2. If φ is infinitely divisible, so is φ(ω)a for each a > 0. Proof.
We have φa = eaψ = (aaψ/n )n .
Proof of Lemma 4.1. (i) Suppose n(φn − 1) → ψ which is continuous. Then φn → 1 and the convergence is uniform in ω ∈ [−Ω, Ω]. Therefore |1 − φn (ω)| < 12 for ω ∈ [−Ω, Ω] and n > N . Thus log φn exists for ω ∈ [−Ω, Ω], and n > N , and is continuous and bounded. Now log φn = log[1 + (φn − 1)] 1 1 = (φn − 1) − (φn − 1)2 + (φn − 1)3 − · · · 2 3 = (φn − 1)[1 + o(1)] and therefore n log φn = n(φn − 1)[1 + o(1)] → ψ or φnn → φ = eψ . (ii) Suppose φnn → φ. We shall first prove that φ has no zeros. It suffices to prove that |φn |2n → |φ|2 implies |φ|2 > 0. Assume that this symmetrization has been carried out, so that φnn → φ with φn ≥ 0, φ ≥ 0. Since φ is continuous with φ(0) = 1, there exists an interval [−Ω, Ω] in which φ does not vanish and therefore log φ exists and is bounded. Therefore log φn exists and is bounded for ω ∈ [−Ω, Ω] and n > N , so n log φn → log φ. Thus log φn → 0 or φn → 1. As in (i), n(φn − 1) → log φ = ψ. Theorem 4.2. If {φn } is a sequence of infinitely divisible c.f.’s and φn → φ which is continuous, then φ is an infinitely divisible c.f. Proof.
1/n
Since φn is infinitely divisible, φn is a c.f. Since 1/n n → φ continuous, φn
φ is an infinitely divisible c.f. by Theorem 4.1.
Theorem 4.3 (De Finetti). A distribution is infinitely divisible iff it is the limit of compound Poisson distributions.
May 12, 2011
14:38
46
9in x 6in
Topics in Probability
b1108-ch04
Topics in Probability
Proof. If φn is the c.f. of a compound Poisson distribution, and φn → φ which is continuous, then by Theorem 4.2, φ is an infinitely divisible c.f. Conversely, let φ be an infinitely divisible c.f. Then by Theorem 4.1 there exists a sequence {φn } of c.f.’s such that φnn → φ. By Lemma 4.1 en[φn (ω)−1] → eψ = φ. Here en[φn (ω)−1] is the c.f. of a compound Poisson distribution.
4.2. Feller Measures A measure M is said to be a Feller measure if M {I} < ∞ for every finite interval I, and the integrals ∞ −x+ 1 1 + − M {dy}, M (−x) = M {dy} (4.2) M (x) = 2 2 x− y −∞ y converge for all x > 0. Examples (1) A finite measure M is a Feller measure, since 1 1 M {dy} ≤ 2 [M {(−∞, −x)} + M {(x, ∞)}]. 2 y x |y|>x (2) The Lebesgue measure is a Feller measure, since 2 1 (x > 0). dy = 2 x |y|>x y (3) Let F be a distribution measure and M {dx} = x2 F {dx}. Then M is a Feller measure with M + (x) = 1 − F (x− ),
M − (−x) = F (−x+ ).
Theorem 4.4. Let M be a Feller measure, b a real constant and ∞ iωx e − 1 − iω sin x M {dx} (4.3) ψ(ω) = iωb + x2 −∞
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch04
Infinitely Divisible Distributions
47
(the integral being convergent). Then corresponding to a given ψ there is only one measure M and one constant b. Proof.
Consider ψ ∗ (ω) = ψ(ω) −
1 2h
We have
ψ(ω + s)ds (h > 0).
(4.4)
eiωx µ{dx}
(4.5)
−h
∗
h
∞
ψ (ω) = −∞
where
µ{dx} =
sin hx 1− hx
1 M {dx} x2
(4.6)
and it is easily verified that µ is a finite measure. Therefore ψ ∗ (ω) determines µ uniquely, so M uniquely. Since b = Im ψ(1), the con stant b is uniquely determined. Convergence of Feller measures. Let {Mn } be a sequence of Feller measures. We say that Mn converges properly to a Feller measure M if Mn {I} → M {I} for all finite intervals I of continuity of M , and Mn+ (x) → M + (x),
Mn− (−x) → M − (−x)
(4.7)
at all points x of continuity of M . In this case we write Mn → M . Examples (1) Let Mn {dx } = nx2 Fn {dx } where Fn is a distribution measure with weights 12 at each of the points ± √1n . Then 1 1 1 1 2 · + · =1 Mn {I} = nx Fn {dx } = n n 2 n 2 I if {− √1n , √1n } ⊂ I. Also Mn+ (x) = Mn− (−x) = 0 for x > √1n . Therefore Mn → M where M is a distribution measure concentrated at the origin. Clearly, M is a Feller measure.
May 12, 2011
14:38
48
9in x 6in
Topics in Probability
b1108-ch04
Topics in Probability
(2) Let Fn be a distribution measure with Cauchy density and consider Mn {dx } = πnx2 Fn {dx }. We have b n2 x2 dx → |b − a|, Mn {(a, b)} = 2 2 a 1+n x ∞ ∞ n2 dy + dy → , Mn (x) = 2y2 1 + n y2 x x −x −x n2 dy − dy → . Mn (−x) = 2 2 2 −∞ 1 + n y −∞ y
1 π
· 1+nn2 x2
Therefore Mn → M where M is the Lebesgue measure. Theorem 4.5. Let {Mn } be a sequence of Feller measures, {bn } a sequence of real constants and ∞ iωx e − 1 − iω sin x Mn {dx }. (4.8) ψn (ω) = i ωb n + x2 −∞ Then ψn → ψ continuous iff there exists a Feller measure M and a real constant b such that Mn → M and bn → b. In this case ∞ iωx e − 1 − iω sin x M {dx }. (4.9) ψ(ω) = i ωb + x2 −∞ Proof.
As suggested by (4.4)–(4.6) let
µn {dx } = K(x)Mn {dx },
−2
where K(x) = x
µn = µn {(−∞, ∞)} < ∞.
sin hx 1− hx
(4.10) (4.11)
Then Mn∗ {dx } =
1 µn {dx } µn
(4.12)
is a distribution measure. We can write ∞ iωx e − 1 − iω sin x K(x)−1 Mn∗ {dx }. ψn (ω) = i ωb n + µn 2 x −∞ (4.13)
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch04
Infinitely Divisible Distributions
49
(i) Let Mn → M and bn → b. Then ∞ K(x)M {dx } > 0 µn → µ = −∞
and Mn∗ → M ∗ ,
where M ∗ {dx } =
1 K(x)M {dx }. µ
Therefore from (4.13) we find that ∞ iωx e − 1 − iω sin x K(x)−1 M ∗ {dx } ψn (ω) → i ωb + µ 2 x −∞ = ψ(ω). (ii) Conversely, let ψn (ω) → ψ(ω) continuous. Then with ψn ∗ (ω), with ψ(ω) defined as in (4.4), ψn ∗ (ω) → ψ ∗ (ω); that is, ∞ eiωx µn {dx } → ψ ∗ (ω). (4.14) −∞
In particular µn = µn {(−∞, ∞)} → ψ ∗ (0). If ψ ∗ (0) = 0, then µn {I} and Mn {I} tend to 0 for every finite interval I and by (i) ψ(ω) = i ωb with b = lim bn . We have thus proved the required results in this case. Let µ = ψ ∗ (0) > 0. Then (4.14) can be written as ∞ eiωx Mn ∗ {dx } → ψ ∗ (ω). µn −∞
Mn∗
M∗
→ where M ∗ is the distribution measure correTherefore ∗ sponding to the c.f. ψ (ω)/ψ ∗ (0). Thus ∞ iωx e − 1 − iω sin x K(x)−1 Mn∗ {dx } µn x2 −∞ ∞ iωx e − 1 − iω sin x K(x)−1 M ∗ {dx }, →µ 2 x −∞ (the integrand being a bounded continuous function), and bn → b. Clearly, M {dx } = µK(x)−1 M ∗ {dx }
May 12, 2011
14:38
50
9in x 6in
Topics in Probability
b1108-ch04
Topics in Probability
is a Feller measure and
∞
ψ(ω) = i ωb + −∞
eiωx − 1 − i ω sin x M {dx } x2
as required.
4.3. Characterization of Infinitely Divisible Distributions Theorem 4.6. A distribution is infinitely divisible iff its c.f. is of the form φ = eψ , with ∞ iωx e − 1 − iω sin x M {dx }, (4.15) ψ(ω) = i ωb + x2 −∞ M being a Feller measure, and b a real constant. Proof.
(i) Let φ = eψ with ψ given by (4.15). We can write
1 ψ(ω) = i ωb − ω 2 M {0} + lim ψη (ω) η→0+ 2 where
(4.16)
eiωx − 1 − iω sin x M {dx } x2 |x|>η (eiωx − 1)G{dx } = i ωβ + c
ψη (ω) =
|x|>η
with cx2 G{dx } = M {dx } for |x| > η, and M {dx } sin x , β=− x2 |x|>η c being determined so that G is a distribution measure. Let γ denote the c.f. of G; then eψη (ω) = eiωβ+c[γ(ω)−1]
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch04
Infinitely Divisible Distributions
51
is the c.f. of a compound Poisson distribution. As η → 0, ψη → ψ0 , where eiωx − 1 − iω sin x M {dx } ψ0 (ω) = x2 |x|>0 is clearly a continuous function. By Theorem 4.3, eψ0 is an infinitely divisible c.f. Now we can write 1
eψ(ω) = eiωb− 2 ω
2 M {0}
eψ0 (ω) ,
so that φ is the product of eψ0 (ω) and the c.f. of a normal distribution. Therefore φ is infinitely divisible. (ii) Conversely, let φ be an infinitely divisible c.f. Then by Theorem 4.3. φ is the limit of a sequence of compound Poisson c.f.’s. That is, ecn [φn (ω)−1−iωβn ] → φ(ω) or
cn
∞
−∞
(eiωx − 1 − iwβn )Fn {dx } → log φ(ω)
where cn > 0, βn is real and Fn is the distribution measure corresponding to the c.f. φn . We can write this as ∞ iωx e − 1 − iω sin x Mn {dx } x2 −∞ ∞ sin xFn {dx } − βn → log φ(ω) + iωcn −∞
cn x2 Fn {dx }.
Clearly, Mn is a Feller measure. where Mn {dx } = By Theorem 4.5 it follows that ∞ sin xFn {dx } − βn → b Mn → M and cn −∞
where M is a Feller measure, b a real constant and ∞ iωx e − 1 − i ω sin x M {dx }. log φ(ω) = i ωb + x2 −∞ This proves that φ = eψ , with ψ given by (4.15).
May 12, 2011
14:38
52
9in x 6in
Topics in Probability
b1108-ch04
Topics in Probability
Remarks. (a) The centering function sin x is such that ∞ ix e − 1 − i sin x M {dx } x2 −∞ is real. Other possible centering functions are x (i) τ (x) = 1 + x2 and
−a for x < −a, (ii) τ (x) = |x| for −a ≤ x ≤ a, . a for x > a with a > 0.
(b) The measure Λ (L´evy measure) is defined as follows: Λ{0} = 0 and Λ{dx } = x−2 M {dx } for x = 0. We have ∞ min(1, x2 )Λ{dx } < ∞, −∞
as can be easily verified. The measure K{dx } = (1+x2 )−1 M {dx } is seen to be a finite measure. This was used by Khintchine. (c) The spectral function H is defined as follows: ∞ M {dy} for x > 0 − y2 x− H(x) = x+ M {dy} for x < 0, y2 −∞ H being undefined at x = 0. We can then write ∞ 1 [eiωx − 1 − i ωτ (x)]dH (x) ψ(ω) = iωb − w2 σ 2 + 2 0+ 0− [eiωx − 1 − iωτ (x)]dH (x), + −∞
where the centering function is usually τ (x) = x(1 + x2 )−1 . This is the so-called L´evy–Khintchine representation. Here H is nondecreasing in (−∞, 0) and (0, ∞), with H(−∞) = 0, H(∞) = 0.
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch04
Infinitely Divisible Distributions
Also, for each ε > 0
|x|<ε
53
x2 dH (x) < ∞.
Theorem 4.7. A distribution concentrated on (0, ∞) is infinitely divisible iff its c.f. is given by φ = eψ , with ∞ iωx e −1 P {dx } (4.17) ψ(ω) = i ωb + x 0 where b ≥ 0 and P is a measure on (0, ∞) such that (1 + x)−1 is integrable with respect to P. Theorem 4.8. A function P is an infinitely divisible probability generation function (p.g.f.) iff P (1) = 1 and ∞
log where ak ≥ 0 and
∞ 1
P (s) = ak sk P (0) 1
(4.18)
ak = λ < ∞.
Proof. Let (4.18) hold and P (1) = 1. Put ak = λfk where fk ≥ 0,
∞
∞ k 1 fk = 1. Let F (s) = 1 fk s . Then P (s) = P (0)eλF (s) and P (1) = P (0)eλ . Therefore P (s) = e−λ+λF (s) , which is the p.g.f. of a compound Poisson distribution, and is therefore infinitely divisible. Conversely, let P be an infinitely divisible p.g.f. Then Definition (4.1) implies P (s)1/n is also a p.g.f. (see [Feller (1968)]) for each n ≥ 1. Let P (s)1/n = q0n + q1n s + q2n s2 + · · · = Qn (s)
(say)
May 12, 2011
14:38
54
9in x 6in
Topics in Probability
b1108-ch04
Topics in Probability
or P(s) = (q_{0n} + q_{1n}s + q_{2n}s² + ⋯)ⁿ. In particular q_{0n}ⁿ = P(0) = p₀. If p₀ = 0, then q_{0n} = 0 and P(s) = sⁿ(q_{1n} + q_{2n}s + q_{3n}s² + ⋯)ⁿ. This implies that p₀ = p₁ = ⋯ = p_{n−1} = 0 for each n ≥ 1, which is absurd. Therefore p₀ > 0. It follows that P(s) > 0, and therefore P(s)^{1/n} → 1 for 0 ≤ s ≤ 1. Now
$$\frac{\log P(s) - \log P(0)}{-\log P(0)} = \frac{\log\sqrt[n]{P(s)/P(0)}}{\log\sqrt[n]{1/P(0)}} \sim \frac{\sqrt[n]{P(s)/P(0)} - 1}{\sqrt[n]{1/P(0)} - 1} = \frac{\sqrt[n]{P(s)} - \sqrt[n]{P(0)}}{1 - \sqrt[n]{P(0)}} = \frac{Q_n(s) - Q_n(0)}{1 - Q_n(0)}.$$
Thus
$$\frac{Q_n(s) - Q_n(0)}{1 - Q_n(0)} \to \frac{\log P(s) - \log P(0)}{-\log P(0)}.$$
Here the left side is seen to be a p.g.f. By the continuity theorem the limit is the generating function of a non-negative sequence {f_j}. Thus
$$\frac{\log P(s) - \log P(0)}{-\log P(0)} = \sum_{j=1}^{\infty} f_j s^j = F(s) \quad \text{(say)}.$$
Putting s = 1 we find that F(1) = 1. Putting λ = −log P(0) > 0 we obtain P(s) = e^{−λ[1−F(s)]}, which is equivalent to (4.18).
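The direct half of this proof — that P(s) = e^{−λ+λF(s)} is a genuine p.g.f. — can be checked numerically. The sketch below is not part of the text: the jump distribution and rate λ are arbitrary choices, and the probabilities are computed by the standard Panjer recursion for compound Poisson sums.

```python
import math

def compound_poisson_pmf(lam, f, n_max):
    """pmf of a compound Poisson sum via the Panjer recursion.

    f[k] = P(jump size = k) for k >= 1; lam = Poisson rate.
    The p.g.f. is P(s) = exp(-lam * (1 - F(s))), as in Theorem 4.8.
    """
    p = [math.exp(-lam)] + [0.0] * n_max
    for n in range(1, n_max + 1):
        p[n] = (lam / n) * sum(k * f.get(k, 0.0) * p[n - k]
                               for k in range(1, n + 1))
    return p

# illustrative jump law f1 = f2 = 1/2 and rate lam = 2 (arbitrary)
p = compound_poisson_pmf(2.0, {1: 0.5, 2: 0.5}, 80)
assert all(x >= 0 for x in p)              # a_k >= 0 forces non-negative masses
assert abs(sum(p) - 1.0) < 1e-12           # total mass 1 (truncation negligible)
assert abs(p[0] - math.exp(-2.0)) < 1e-15  # P(0) = e^{-lam}
```

The recursion p_n = (λ/n)Σ k f_k p_{n−k} is exactly the coefficient identity obtained by differentiating log P(s) in (4.18).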
4.4. Special Cases of Infinitely Divisible Distributions

(A) Let the measure M be concentrated at the origin, with weight σ² > 0. Then (4.15) gives
$$\varphi(\omega) = e^{i\omega b - \frac{1}{2}\omega^2\sigma^2},$$
which is the c.f. of the normal distribution.
(B) Let M be concentrated at h (≠ 0) with weight λh². Then
$$\varphi(\omega) = e^{i\omega r + \lambda(e^{i\omega h} - 1)}, \qquad r = b - \lambda\sin h.$$
Thus φ is the c.f. of the random variable hN + r, where N has the (simple) Poisson distribution e^{−λ}λ^k/k! (k = 0, 1, 2, …).

(C) Let M{dx} = λx²G{dx}, where G is a probability distribution with c.f. χ. Clearly, M is a Feller measure, and
$$\varphi(\omega) = e^{i\omega\gamma + \lambda[\chi(\omega) - 1]}, \qquad \gamma = b - \lambda\int_{-\infty}^{\infty}\sin x\,G\{dx\}.$$
We thus obtain the c.f. of a compound Poisson distribution.

(D) Let M be concentrated on (0, ∞) with density αxe^{−λx} (x > 0). It is easily verified that M is a Feller measure. We have
$$\int_0^{\infty}\frac{e^{i\omega x} - 1}{x^2}\,M\{dx\} = \alpha\int_0^{\infty}\frac{e^{-(\lambda - i\omega)x} - e^{-\lambda x}}{x}\,dx = \alpha\log\frac{\lambda}{\lambda - i\omega} = -\alpha\log\left(1 - \frac{i\omega}{\lambda}\right).$$
Choosing
$$b = \alpha\int_0^{\infty}\frac{\sin x}{x}\,e^{-\lambda x}\,dx < \infty$$
we find that
$$\varphi(\omega) = \left(1 - \frac{i\omega}{\lambda}\right)^{-\alpha}.$$
This is the c.f. of the gamma density e^{−λx}λ^α x^{α−1}/Γ(α).

(E) Stable distributions. These are characterized by the measure M, where
$$M\{(-y, x)\} = C(px^{2-\alpha} + qy^{2-\alpha}) \qquad (x > 0,\ y > 0)$$
where C > 0, p ≥ 0, q ≥ 0, p + q = 1, 0 < α ≤ 2. If α = 2, M is concentrated at the origin, and the distribution is the normal, as discussed in (A). Let 0 < α < 2, and denote by ψ_α the corresponding expression ψ. In evaluating it we choose
an appropriate centering function τ_α(x) depending on α. This changes the constant b and we obtain
$$\psi_\alpha(\omega) = i\omega\gamma + \int_{-\infty}^{\infty}\frac{e^{i\omega x} - 1 - i\omega\tau_\alpha(x)}{x^2}\,M\{dx\}$$
where
$$\gamma = b + \int_{-\infty}^{\infty}\frac{\tau_\alpha(x) - \sin x}{x^2}\,M\{dx\} \qquad (|\gamma| < \infty)$$
and
$$\tau_\alpha(x) = \begin{cases} 0 & \text{if } 0 < \alpha < 1\\ \sin x & \text{if } \alpha = 1\\ x & \text{if } 1 < \alpha < 2. \end{cases}$$
Substituting for M we find that
$$\psi_\alpha(\omega) = i\omega\gamma + C(2 - \alpha)[pI_\alpha(\omega) + q\bar{I}_\alpha(\omega)]$$
where
$$I_\alpha(\omega) = \int_0^{\infty}\frac{e^{i\omega x} - 1 - i\omega\tau_\alpha(x)}{x^{\alpha+1}}\,dx.$$
Evaluating the integral I_α we find that
$$\psi_\alpha(\omega) = i\omega\gamma - c|\omega|^\alpha\left[1 + i\beta\,\frac{\omega}{|\omega|}\,\Omega(|\omega|, \alpha)\right]$$
where c > 0, |β| ≤ 1 and
$$\Omega(|\omega|, \alpha) = \begin{cases} \tan\dfrac{\pi\alpha}{2} & \text{if } \alpha \ne 1\\[4pt] \dfrac{2}{\pi}\log|\omega| & \text{if } \alpha = 1. \end{cases}$$
In Sec. 4.6 we shall discuss the detailed properties of stable distributions. We note that when β = 0 and α = 1 we obtain ψ_α(ω) = iωγ − c|ω|, so that φ is the c.f. of the Cauchy distribution.
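Case (D) lends itself to a quick numerical check: integrating e^{iωx} against the gamma density should reproduce (1 − iω/λ)^{−α}. The sketch below is not part of the text; λ, α and the integration grid are arbitrary choices, and a plain Riemann sum stands in for the exact Fourier integral.

```python
import cmath
import math

def gamma_cf_numeric(omega, lam, alpha, h=1e-4, x_max=40.0):
    """Riemann-sum approximation of E[e^{i omega X}] for the gamma
    density lam^alpha x^{alpha-1} e^{-lam x} / Gamma(alpha)."""
    total = 0.0 + 0.0j
    x = h
    while x < x_max:
        dens = (lam ** alpha) * (x ** (alpha - 1)) * math.exp(-lam * x) / math.gamma(alpha)
        total += cmath.exp(1j * omega * x) * dens * h
        x += h
    return total

lam, alpha = 2.0, 3.0   # arbitrary illustrative parameters
w = 1.0
closed_form = (1 - 1j * w / lam) ** (-alpha)
assert abs(gamma_cf_numeric(w, lam, alpha) - closed_form) < 1e-3
```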
4.5. Lévy Processes

We say a stochastic process {X(t), t ≥ 0} has stationary independent increments if it satisfies the following properties:

(i) For 0 ≤ t₁ < t₂ < ⋯ < t_n (n ≥ 2) the random variables X(t₁), X(t₂) − X(t₁), X(t₃) − X(t₂), …, X(t_n) − X(t_{n−1}) are independent.
(ii) The distribution of the increment X(t_p) − X(t_{p−1}) depends only on the difference t_p − t_{p−1}.

For such a process we can take X(0) ≡ 0 without loss of generality; for if X(0) ≢ 0, then the process Y(t) = X(t) − X(0) has stationary independent increments and Y(0) = 0. If we write
$$X(t) = \sum_{k=1}^{n}\left[X\!\left(\frac{k}{n}t\right) - X\!\left(\frac{k-1}{n}t\right)\right] \qquad (4.19)$$
then X(t) is seen to be the sum of n independent random variables, all of which are distributed as X(t/n). Thus a process with stationary independent increments is the generalization to continuous time of sums of independent and identically distributed random variables.

A Lévy process is a process with stationary independent increments that satisfies the following additional conditions:

(iii) X(t) is continuous in probability; that is, for each ε > 0,
$$P\{|X(t)| > \varepsilon\} \to 0 \quad \text{as } t \to 0. \qquad (4.20)$$
(iv) There exist left and right limits X(t−) and X(t+), and we assume that X(t) is right-continuous: X(t+) = X(t).

Theorem 4.9. The c.f. of a Lévy process is given by E[e^{iωX(t)}] = e^{tψ(ω)}, where ψ is given by Theorem 4.6.

Proof. Let φ_t(ω) = E[e^{iωX(t)}]. From (4.19) we find that φ_t(ω) = [φ_{t/n}(ω)]ⁿ, so for each t > 0, φ_t is infinitely divisible and φ_t = e^{ψ_t}. Also, from the relation X(t + s) ≝ X(t) + X(s) (equality in distribution, with independent summands) we obtain the functional equation ψ_{t+s} = ψ_t + ψ_s. On account of (4.20), ψ_t → 0 as t → 0, so
we must have ψ_t(ω) = tψ₁(ω). Thus φ_t(ω) = e^{tψ(ω)} with ψ = ψ₁ in the required form.

Special cases. Each of the special cases of infinitely divisible distributions discussed in Sec. 4.4 leads to a Lévy process with c.f. φ_t(ω) = e^{tψ(ω)} and ψ in the prescribed form. Thus for appropriate choices of the measure M we obtain Brownian motion, the simple and compound Poisson processes, the gamma process and the stable processes (including the Cauchy process). A Lévy process with non-decreasing sample functions is called a subordinator. Thus the simple Poisson process and the gamma process are subordinators.
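Theorem 4.9 can be illustrated by simulation in the compound Poisson case: the empirical c.f. of X(t) should match e^{tψ(ω)} = e^{tλ[χ(ω)−1]}, where χ is the jump c.f. The sketch below is not from the text; the Exp(1) jump law, the rate and the sample size are arbitrary choices.

```python
import cmath
import math
import random

random.seed(7)

def compound_poisson_at(t, lam, jump):
    """One sample of X(t) for a compound Poisson Levy process:
    N ~ Poisson(lam * t) jumps, each drawn by jump()."""
    # inverse-transform sampling of the Poisson count
    n, p, u = 0, math.exp(-lam * t), random.random()
    c = p
    while u > c:
        n += 1
        p *= lam * t / n
        c += p
    return sum(jump() for _ in range(n))

t, lam = 2.0, 1.5
samples = [compound_poisson_at(t, lam, lambda: random.expovariate(1.0))
           for _ in range(20000)]
w = 0.7
empirical = sum(cmath.exp(1j * w * x) for x in samples) / len(samples)
chi = 1 / (1 - 1j * w)                       # c.f. of the Exp(1) jump law
theoretical = cmath.exp(t * lam * (chi - 1)) # e^{t psi(w)}
assert abs(empirical - theoretical) < 0.05   # Monte Carlo tolerance
```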
4.6. Stable Distributions

A distribution and its c.f. are called stable if for every positive integer n there exist real numbers c_n > 0, d_n such that
$$\varphi(\omega)^n = \varphi(c_n\omega)e^{i\omega d_n}. \qquad (4.21)$$
If X, X₁, X₂, … are independent random variables with the c.f. φ, then the above definition is equivalent to
$$X_1 + X_2 + \cdots + X_n \stackrel{d}{=} c_nX + d_n. \qquad (4.22)$$

Examples.
(A) If X has a distribution concentrated at a single point, then (4.22) is satisfied with c_n = n, d_n = 0. Thus a degenerate distribution is (trivially) stable. We shall exclude this case from consideration.
(B) If X has the Cauchy density f(x) = (a/π)[a² + (x − r)²]⁻¹ (a > 0), then φ(ω) = e^{iωr − a|ω|}. The relation (4.21) holds with c_n = n, d_n = 0. Thus the Cauchy distribution is stable.
(C) If X has a normal density with mean m and variance σ², then (4.22) holds with c_n = √n and d_n = m(n − c_n). Thus the normal distribution is stable.

The concept of stable distributions is due to Lévy (1924), who gave a second definition (see Problem 11).
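Example (B) is easy to test by simulation: for standard Cauchy variables, (X₁ + ⋯ + X_n)/n is again standard Cauchy by (4.22) with c_n = n, d_n = 0, so its empirical quantiles should match those of a single Cauchy sample. A sketch (not from the text; n, the sample sizes and the tolerance are arbitrary):

```python
import math
import random

random.seed(1)

def cauchy():
    # standard Cauchy via the tangent of a uniform angle
    return math.tan(math.pi * (random.random() - 0.5))

n, reps = 20, 40000
means = sorted(sum(cauchy() for _ in range(n)) / n for _ in range(reps))
singles = sorted(cauchy() for _ in range(reps))
# compare central quantiles; Cauchy tails are too noisy to match closely
for q in (0.25, 0.5, 0.75):
    i = int(q * reps)
    assert abs(means[i] - singles[i]) < 0.1
```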
Theorem 4.10. Stable distributions are infinitely divisible.

Proof. The relation (4.21) can be written as
$$\varphi(\omega) = \left[\varphi\!\left(\frac{\omega}{c_n}\right)e^{-i\omega\frac{d_n}{nc_n}}\right]^n = \varphi_n(\omega)^n,$$
where φ_n is clearly a c.f. By definition φ is infinitely divisible.

Domains of attraction. Let {X_k, k ≥ 1} be a sequence of independent random variables with a common distribution F, and S_n = X₁ + X₂ + ⋯ + X_n (n ≥ 1). We say that F belongs to the domain of attraction of a distribution G if there exist real constants a_n > 0, b_n such that the normed sum (S_n − b_n)/a_n converges in distribution to G. It is clear that a stable distribution G belongs to its own domain of attraction, with a_n = c_n, b_n = d_n. Conversely, we shall prove below that the only non-empty domains of attraction are those of stable distributions.

Theorem 4.11. If the normed sum (S_n − b_n)/a_n converges in distribution to a limit, then (i) as n → ∞, a_n → ∞, a_{n+1}/a_n → 1 and (b_{n+1} − b_n)/a_n → b with |b| < ∞; and (ii) the limit distribution is stable.

Proof. (i) With the obvious notation we are given that
$$[\chi(\omega/a_n)e^{-i\omega b_n/na_n}]^n \to \varphi(\omega) \qquad (4.23)$$
uniformly in ω ∈ [−Ω, Ω]. By Lemma 4.1 we conclude that
$$n[\chi(\omega/a_n)e^{-i\omega b_n/na_n} - 1] \to \psi(\omega),$$
where φ = e^ψ. Therefore χ_n(ω) = χ(ω/a_n)e^{−iωb_n/na_n} → 1. Let {a_{n_k}} be a subsequence of {a_n} such that a_{n_k} → a (0 ≤ a ≤ ∞).
If 0 < a < ∞, then 1 = lim|χ(ω/a_{n_k})| = |χ(ω/a)|, while if a = 0, then 1 = lim|χ_{n_k}(a_{n_k}ω)| = |χ(ω)|. Either implication would mean that χ is degenerate, which is not true. Hence a = ∞, so that a_n → ∞. From (4.23) we have
$$\chi\!\left(\frac{\omega}{a_{n+1}}\right)^{\!n+1}e^{-i\omega b_{n+1}/a_{n+1}} \to \varphi(\omega),$$
which can be written as
$$\chi\!\left(\frac{\omega}{a_{n+1}}\right)^{\!n}e^{-i\omega b_{n+1}/a_{n+1}} \to \varphi(\omega), \qquad (4.24)$$
since χ(ω/a_{n+1}) → 1. By Theorem 2.10 it follows from (4.23) and (4.24) that a_{n+1}/a_n → 1 and (b_{n+1} − b_n)/a_n → b.

(ii) For fixed m ≥ 1 we have
$$\chi\!\left(\frac{\omega}{a_n}\right)^{\!mn}e^{-i\omega mb_n/a_n} = \left[\chi\!\left(\frac{\omega}{a_n}\right)^{\!n}e^{-i\omega b_n/a_n}\right]^m \to \varphi^m(\omega).$$
Again by Theorem 2.10 it follows that a_{mn}/a_n → c_m, (b_{mn} − mb_n)/a_n → d_m, where c_m > 0 and d_m is real, while
$$\varphi(\omega) = \varphi\!\left(\frac{\omega}{c_m}\right)^{\!m}e^{-i\omega d_m/c_m},$$
or φ^m(ω) = φ(c_mω)e^{iωd_m}. This shows that φ is stable.

Theorem 4.12. A c.f. φ is stable iff φ = e^ψ, with
$$\psi(\omega) = i\omega\gamma - c|\omega|^\alpha\left[1 + i\beta\,\frac{\omega}{|\omega|}\,\Omega(|\omega|, \alpha)\right] \qquad (4.25)$$
where γ is real, c > 0, 0 < α ≤ 2, |β| ≤ 1 and
$$\Omega(|\omega|, \alpha) = \begin{cases} \tan\dfrac{\pi\alpha}{2} & \text{if } \alpha \ne 1\\[4pt] \dfrac{2}{\pi}\log|\omega| & \text{if } \alpha = 1. \end{cases} \qquad (4.26)$$
Here α is called the characteristic exponent of φ.

Proof. (i) Suppose φ is given by (4.25) and (4.26). Then for a > 0 we have
$$a\psi(\omega) - \psi(a^{1/\alpha}\omega) = i\omega\gamma(a - a^{1/\alpha}) - ac|\omega|^\alpha\,i\beta\,\frac{\omega}{|\omega|}\,[\Omega(|\omega|, \alpha) - \Omega(a^{1/\alpha}|\omega|, \alpha)]$$
$$= \begin{cases} i\omega\gamma(a - a^{1/\alpha}) & \text{if } \alpha \ne 1\\[2pt] i\omega\,\dfrac{2\beta c}{\pi}\,a\log a & \text{if } \alpha = 1. \end{cases}$$
This shows that φ is stable.

(ii) Conversely, let φ be stable. Then by Theorem 4.11 it possesses a domain of attraction; that is, there exist a c.f. χ and real constants a_n > 0, b_n such that as n → ∞,
$$[\chi(\omega/a_n)e^{-i\omega b_n}]^n \to \varphi(\omega).$$
Therefore by Lemma 4.1, n[χ(ω/a_n)e^{−iωb_n} − 1] → ψ(ω), where φ = e^ψ. Let F be the distribution corresponding to χ. We first consider the case where F is symmetric; then b_n = 0. Let M_n{dx} = nx²F{a_n dx}. Then by Theorem 4.5 it follows that there exist a Feller measure M and a constant b such that
$$\psi(\omega) = i\omega b + \int_{-\infty}^{\infty}\frac{e^{i\omega x} - 1 - i\omega\sin x}{x^2}\,M\{dx\}. \qquad (4.27)$$
Let
$$U(x) = \int_{-x}^{x} y^2\,F\{dy\} \qquad (x > 0). \qquad (4.28)$$
Then
$$\frac{n}{a_n^2}\,U(a_nx) = M_n\{(-x, x)\} \to M\{(-x, x)\}, \qquad (4.29a)$$
$$n[1 - F(a_nx)] = \int_{x-}^{\infty} y^{-2}\,M_n\{dy\} \to M^+(x), \qquad (4.29b)$$
$$nF(-a_nx) = \int_{-\infty}^{-x+} y^{-2}\,M_n\{dy\} \to M^-(-x). \qquad (4.29c)$$
By Theorem 4.11 we know that a_n → ∞, a_{n+1}/a_n → 1. Therefore U(x) varies regularly at infinity and M{(−x, x)} = Cx^{2−α}, where C > 0, 0 < α ≤ 2. If α = 2 the measure M is concentrated at the origin. If 0 < α < 2 the measure M is absolutely continuous.

In the case where F is unsymmetric we have
$$n[1 - F(a_nx + a_nb_n)] \to M^+(x), \qquad nF(-a_nx + a_nb_n) \to M^-(-x)$$
and an analogous modification of (4.29a). However, it is easily seen that b_n → 0, and so these results are fully equivalent to (4.29).

Considering (4.29b) we see that either M⁺(x) ≡ 0 or 1 − F(x) varies regularly at infinity and M⁺(x) = Ax^{−α}. Similarly F(−x) and 1 − F(x) + F(−x) vary regularly at infinity, and the exponent α is the same for both M⁺ and M⁻. Clearly 0 < α ≤ 2. If M⁺ and M⁻ vanish identically, then clearly M is concentrated at the origin. Conversely, if M has an atom at the origin, then a symmetrization argument shows that M is concentrated at the origin, and M⁺, M⁻ vanish identically.

Accordingly, when α < 2 the measure M is uniquely determined by its density, which is proportional to |x|^{1−α}. For each interval (−y, x) containing the origin we therefore obtain
$$M\{(-y, x)\} = C(px^{2-\alpha} + qy^{2-\alpha}) \qquad (4.30)$$
where p + q = 1. For α = 2, M is concentrated at the origin. For 0 < α < 2 we have already shown in Sec. 4.4 that the measure (4.30) yields the required expression (4.25) for ψ.
Corollary 4.3. If G_α is the stable distribution with the characteristic exponent α, then as x → ∞,
$$x^\alpha[1 - G_\alpha(x)] \to Cp\,\frac{2-\alpha}{\alpha}, \qquad x^\alpha G_\alpha(-x) \to Cq\,\frac{2-\alpha}{\alpha}. \qquad (4.31)$$

Proof. Clearly, G_α belongs to its own domain of attraction with the norming constants a_n = n^{1/α}. For 0 < α < 2, choosing n^{1/α}x = t in (4.29b) we find that t^α[1 − G_α(t)] → Cp(2 − α)/α as t → ∞. For α = 2, G_α is the normal distribution, for which we have a stronger result, namely x^β[1 − G_α(x)] → 0 as x → ∞.

Theorem 4.13. (i) All stable distributions are absolutely continuous. (ii) Let 0 < α < 2. Then moments of order < α exist, while moments of order > α do not.

Proof. (i) We have |φ(ω)| = e^{−c|ω|^α} with c > 0. Since this function is integrable over (−∞, ∞), the result follows by Theorem 2.6(b).

(ii) For t > 0 an integration by parts gives
$$\int_{-t}^{t}|x|^\beta\,F\{dx\} = -t^\beta[1 - F(t) + F(-t)] + \int_0^t \beta x^{\beta-1}[1 - F(x) + F(-x)]\,dx \le \int_0^t \beta x^{\beta-1}[1 - F(x) + F(-x)]\,dx.$$
If β < α, this last integral converges as t → ∞, since by Corollary 4.3 we have x^α[1 − F(x) + F(−x)] ≤ M for x > t with t large. It follows that the absolute moment (and therefore the ordinary moment) of order β < α is finite. Conversely, if the absolute moment of order β > α exists, then for ε > 0 and large t we have
$$\varepsilon > \int_{|x|>t}|x|^\beta\,F\{dx\} > t^\beta[1 - F(t) + F(-t)],$$
or t^α[1 − F(t) + F(−t)] < εt^{α−β} → 0 as t → ∞, which is a contradiction. Therefore absolute moments of order β > α do not exist.
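For the Cauchy law (α = 1) the tail limit in Corollary 4.3 can be seen in closed form, since 1 − F(x) = 1/2 − arctan(x)/π: the product x[1 − F(x)] tends to 1/π, the constant playing the role of Cp(2 − α)/α in the book's normalization. A quick check, not part of the text:

```python
import math

# Cauchy tail: 1 - F(x) = 1/2 - atan(x)/pi, so x * (1 - F(x)) -> 1/pi,
# illustrating the x^alpha tail behaviour of Corollary 4.3 with alpha = 1
for x in (10.0, 100.0, 1000.0):
    tail = 0.5 - math.atan(x) / math.pi
    print(x, x * tail)

assert abs(1000.0 * (0.5 - math.atan(1000.0) / math.pi) - 1 / math.pi) < 1e-3
```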
Remarks.
(1) From the proof of Theorem 4.12 it is clear that φ(ω)^a = φ(c_aω)e^{iωd_a} for all a > 0, where the functions c_a and d_a are given by (i) c_a = a^{1/α} with 0 < α ≤ 2, and
$$\text{(ii)}\quad d_a = \begin{cases} \gamma(a - a^{1/\alpha}) & \text{if } \alpha \ne 1\\ (2\beta c/\pi)\,a\log a & \text{if } \alpha = 1. \end{cases}$$
(2) If in the definition (4.21) d_n = 0, then the distribution is called strictly stable. However, the distinction between strict and weak stability matters only when α = 1, because when α ≠ 1 we can take d_n = 0 without loss of generality. To prove this we note that d_n = γ(n − n^{1/α}) for α ≠ 1, and consider the c.f. χ(ω) = φ(ω)e^{−iωγ}. We have
$$\chi(\omega)^n = \varphi(\omega)^ne^{-i\omega n\gamma} = \varphi(c_n\omega)e^{i\omega(d_n - n\gamma)} = \chi(c_n\omega)e^{i\omega(c_n\gamma + d_n - n\gamma)} = \chi(c_n\omega),$$
which shows that χ is strictly stable.
(3) Let α ≠ 1 and assume that γ = 0. Then we can write
$$\psi(\omega) = -a|\omega|^\alpha \ \text{ for } \omega > 0, \qquad \psi(\omega) = -\bar{a}|\omega|^\alpha \ \text{ for } \omega < 0, \qquad (4.32)$$
where a is a complex constant. Choosing a scale so that |a| = 1 we can write a = e^{i\frac{\pi}{2}\nu}, where tan(πν/2) = β tan(πα/2). Since |β| ≤ 1 it follows that
$$|\nu| \le \alpha \ \text{ if } 0 < \alpha < 1, \qquad |\nu| \le 2 - \alpha \ \text{ if } 1 < \alpha < 2. \qquad (4.33)$$
Theorem 4.14. Let α ≠ 1, and let the c.f. of a stable distribution be expressed in the form
$$\varphi(\omega) = e^{-|\omega|^\alpha e^{\pm i\pi\nu/2}} \qquad (4.34)$$
where in ± the upper sign prevails for ω > 0 and the lower sign for ω < 0. Let the corresponding density be denoted by f(x; α, ν). Then
$$f(-x; \alpha, \nu) = f(x; \alpha, -\nu) \quad \text{for } x > 0. \qquad (4.35)$$
For x > 0 and 0 < α < 1,
$$f(x; \alpha, \nu) = \frac{1}{\pi x}\sum_{k=1}^{\infty}\frac{\Gamma(k\alpha + 1)}{k!}\,(-x^{-\alpha})^k\sin\frac{k\pi}{2}(\nu - \alpha), \qquad (4.36)$$
and for x > 0 and 1 < α < 2,
$$f(x; \alpha, \nu) = \frac{1}{\pi x}\sum_{k=1}^{\infty}\frac{\Gamma(k\alpha^{-1} + 1)}{k!}\,(-x)^k\sin\frac{k\pi}{2\alpha}(\nu - \alpha). \qquad (4.37)$$

Corollary 4.4. A stable distribution is concentrated on (0, ∞) if 0 < α < 1, ν = −α, and on (−∞, 0) if 0 < α < 1, ν = α.

Proofs are omitted.

Theorem 4.15. (a) A distribution F belongs to the domain of attraction of the normal distribution iff
$$U(x) = \int_{-x}^{x} y^2\,F\{dy\} \qquad (4.38)$$
varies slowly. (b) A distribution F belongs to the domain of attraction of a stable distribution with characteristic exponent α < 2 iff
$$1 - F(x) + F(-x) \sim x^{-\alpha}L(x) \qquad (x \to \infty) \qquad (4.39)$$
and
$$\frac{1 - F(x)}{1 - F(x) + F(-x)} \to p, \qquad \frac{F(-x)}{1 - F(x) + F(-x)} \to q, \qquad (4.40)$$
where p ≥ 0, q ≥ 0 and p + q = 1. Here L is a slowly varying function on (0, ∞); that is, for each x > 0,
$$\frac{L(tx)}{L(t)} \to 1 \quad \text{as } t \to \infty. \qquad (4.41)$$
The proof is omitted.
Theorem 4.16. Let F be a proper distribution concentrated on (0, ∞) and F_n the n-fold convolution of F with itself. If F_n(a_nx) → G(x), where G is a non-degenerate distribution, then G = G_α, the stable distribution concentrated on (0, ∞) with exponent α (0 < α < 1), and moreover 1 − F(t) ∼ t^{−α}L(t)/Γ(1 − α). Conversely, if 1 − F(t) ∼ t^{−α}L(t)/Γ(1 − α), we can find constants a_n such that F_n(a_nx) → G_α(x). Here L is a slowly varying function.

Proof. (i) Suppose that F_n(a_nx) → G(x), and let φ be the L.T. of G. Denote by F* the L.T. of F. Then F*(θ/a_n)ⁿ → φ(θ), or −n log F*(θ/a_n) → −log φ(θ). This shows that −log F*(θ) is of regular variation at the origin; that is, −log F*(θ) ∼ θ^αL(1/θ) (θ → 0+) with α ≥ 0. Since log(1 − z) ∼ −z for small z, we find that 1 − F*(θ) ∼ θ^αL(1/θ). This gives 1 − F(t) ∼ t^{−α}L(t)/Γ(1 − α), as required. Moreover, −log φ(θ) = cθ^α (c > 0), or φ(θ) = e^{−cθ^α}, so that G is the stable distribution with exponent α. Here 0 < α < 1 since G is non-degenerate.

(ii) Conversely, let 1 − F(t) ∼ t^{−α}L(t)/Γ(1 − α) (t → ∞). This gives 1 − F*(θ) ∼ θ^αL(1/θ) (θ → 0+). Let us choose constants a_n so that n[1 − F(a_n)] → c/Γ(1 − α) with 0 < c < ∞. Then as n → ∞,
$$na_n^{-\alpha}L(a_n) = \frac{a_n^{-\alpha}L(a_n)}{[1 - F(a_n)]\Gamma(1 - \alpha)}\cdot n[1 - F(a_n)]\Gamma(1 - \alpha) \to c$$
and also
$$na_n^{-\alpha}L(a_n/\theta) = na_n^{-\alpha}L(a_n)\cdot\frac{L(a_n/\theta)}{L(a_n)} \to c.$$
Therefore 1 − F*(θ/a_n) ∼ cθ^α/n and
$$F^*(\theta/a_n)^n = [1 - c\theta^\alpha/n + o(1/n)]^n \to e^{-c\theta^\alpha}.$$
This shows that F_n(a_nx) → G_α(x).
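The tail–transform correspondence used in the proof can be checked for a concrete F. Take F on (1, ∞) with 1 − F(t) = t^{−1/2}, so α = 1/2 and L ≡ Γ(1/2) = √π in the theorem's normalization; an integration by parts gives the closed form 1 − F*(θ) = (1 − e^{−θ}) + √(πθ)·erfc(√θ), whose ratio to θ^{1/2}√π should tend to 1 as θ → 0+. A sketch, not from the text, with this F chosen purely for illustration:

```python
import math

def one_minus_lst(theta):
    """1 - F*(theta) for the Pareto-type F on (1, inf) with
    1 - F(t) = t**-0.5, computed in closed form via erfc."""
    return (1 - math.exp(-theta)) + math.sqrt(math.pi * theta) * math.erfc(math.sqrt(theta))

# 1 - F(t) ~ t^{-alpha} L(t)/Gamma(1-alpha)  <->  1 - F*(theta) ~ theta^alpha L(1/theta)
# here alpha = 1/2 and L is the constant Gamma(1/2) = sqrt(pi)
for theta in (1e-2, 1e-4, 1e-6):
    print(theta, one_minus_lst(theta) / math.sqrt(math.pi * theta))

assert abs(one_minus_lst(1e-6) / math.sqrt(math.pi * 1e-6) - 1) < 1e-2
```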
4.7. Problems for Solution

1. Show that if F and G are infinitely divisible distributions, so is their convolution F ∗ G.
2. If φ is an infinitely divisible c.f., prove that |φ| is also an infinitely divisible c.f.
3. Show that the uniform distribution is not infinitely divisible. More generally, a distribution concentrated on a finite interval is not infinitely divisible, unless it is concentrated at a point.
4. Let 0 < r_j < 1 and Σ r_j < ∞. Prove that for arbitrary a_j the infinite product
$$\varphi(\omega) = \prod_{j=1}^{\infty}\frac{1 - r_j}{1 - r_je^{i\omega a_j}}$$
converges, and represents an infinitely divisible c.f.
5. Let $X = \sum_{k=1}^{\infty} X_k/k$, where the random variables X_k are independent and have the common density ½e^{−|x|}. Show that X is infinitely divisible, and find the associated Feller measure.
6. Let P be an infinitely divisible p.g.f. and φ the c.f. of an arbitrary distribution. Show that P(φ) is an infinitely divisible c.f.
7. If 0 ≤ a < b < 1 and φ is a c.f., then show that
$$\frac{1-b}{1-a}\cdot\frac{1-a\varphi}{1-b\varphi}$$
is an infinitely divisible c.f.
8. Prove that a probability distribution with a completely monotone density is infinitely divisible.
9. Mixtures of exponential (geometric) distributions. Let
$$f(x) = \sum_{k=1}^{n} p_k\lambda_ke^{-\lambda_kx}$$
where p_k > 0, Σ p_k = 1, and for definiteness 0 < λ₁ < λ₂ < ⋯ < λ_n. Show that the density f(x) is infinitely divisible. (Similarly a mixture of geometric distributions is infinitely divisible.) By a limit argument prove that the density
$$f(x) = \int_0^{\infty}\lambda e^{-\lambda x}\,G(d\lambda),$$
where G is a distribution concentrated on (0, ∞), is infinitely divisible.
10. If X, Y are two independent random variables such that X > 0 and Y has an exponential density, then prove that XY is infinitely divisible.
11. Show that a c.f. φ is stable if and only if, given c′ > 0, c″ > 0, there exist constants c > 0, d such that
$$\varphi(c'\omega)\varphi(c''\omega) = \varphi(c\omega)e^{i\omega d}.$$
12. Let the c.f. φ be given by $\log\varphi(\omega) = \sum_{k=-\infty}^{\infty} 2^{-k}(\cos 2^k\omega - 1)$. Show that φ(ω)ⁿ = φ(nω) for n = 2, 4, 8, …, and that φ(ω) is infinitely divisible, but not stable.
13. If φ(ω)² = φ(cω) and the variance is finite, show that φ(ω) is stable (in fact normal).
14. If φ(ω)² = φ(aω) and φ(ω)³ = φ(bω) with a > 0, b > 0, show that φ(ω) is stable.
15. If F and G are stable with the same exponent α, so is their convolution F ∗ G.
16. If X, Y are independent random variables such that X is stable with exponent α, while Y is positive and stable with exponent β (< 1), show that XY^{1/α} is stable with exponent αβ.
17. The Holtsmark distribution. Suppose that n stars are distributed in the interval (−n, n) on the real line, their locations d_i (i = 1, 2, …, n) being independent random variables with a uniform density. Each star has mass unity, and the gravitational constant is also unity. The force exerted on a unit mass at the origin (the gravitational field) is then
$$Y_n = \sum_{r=1}^{n}\frac{\operatorname{sgn}(d_r)}{d_r^2}.$$
Show that as n → ∞, the distribution of Y_n converges to a stable distribution with exponent α = ½.
18. Let {X_k, k ≥ 1} be a sequence of independent random variables with the common density
$$f(x) = \begin{cases} \dfrac{2\log|x|}{|x|^3} & \text{for } |x| \ge 1\\[4pt] 0 & \text{for } |x| < 1. \end{cases}$$
Show that (X₁ + X₂ + ⋯ + X_n)/√(n log n) is asymptotically normal.
Chapter 5
Self-Decomposable Distributions; Triangular Arrays

5.1. Self-Decomposable Distributions

A distribution F and its c.f. φ are called self-decomposable if for every c in (0, 1) there exists a c.f. ψ_c such that
$$\varphi(\omega) = \varphi(c\omega)\psi_c(\omega). \qquad (5.1)$$
We shall call ψ_c the component of φ. The restriction of c to (0, 1) is explained in Problem 5.1. If φ is self-decomposable, then it can be proved that φ ≠ 0 (Problem 5.2). Thus the above definition implies that φ(ω)/φ(cω) is a c.f. for every c in (0, 1).

Examples.
1. Degenerate distributions are (trivially) self-decomposable, and all their components are also degenerate.
2. A stable c.f. φ is self-decomposable, since by P. Lévy's second definition (Problem 4.11) we have φ(ω) = φ(cω)φ(c′ω)e^{iωd} with 0 < c < 1, 0 < c′ < 1. Here ψ_c(ω) = φ(c′ω)e^{iωd}, with c′ and d depending on c; the component is also self-decomposable.

The concept of self-decomposable distributions is due to Khintchine (1936); they are also called distributions of class L.
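Example 2 covers the stable laws, but self-decomposability is wider; the exponential distribution gives a concrete non-stable illustration. For φ(ω) = (1 − iω/λ)⁻¹ one computes φ(ω)/φ(cω) = c + (1 − c)(1 − iω/λ)⁻¹, the c.f. of a variable that is 0 with probability c and Exp(λ) otherwise. The simulation below is not from the text (λ, c and the sample size are arbitrary); it checks the resulting distributional identity X ≝ cX′ + Z by comparing quantiles.

```python
import random

random.seed(3)
lam, c = 1.0, 0.4
reps = 60000

# X ~ Exp(lam) satisfies X =d c*X' + Z, where Z = 0 w.p. c and Z ~ Exp(lam) w.p. 1-c,
# since phi(w)/phi(cw) = (1 - ic w/lam)/(1 - i w/lam) = c + (1-c)/(1 - i w/lam)
lhs = sorted(random.expovariate(lam) for _ in range(reps))
rhs = sorted(c * random.expovariate(lam)
             + (random.expovariate(lam) if random.random() < 1 - c else 0.0)
             for _ in range(reps))
for q in (0.25, 0.5, 0.75, 0.9):
    i = int(q * reps)
    assert abs(lhs[i] - rhs[i]) < 0.08   # Monte Carlo tolerance
```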
Theorem 5.1. If φ is self-decomposable, it is infinitely divisible, and so is its component ψ_c.

Proof. (i) Let {X_k, k ≥ 1} be independent random variables, with X_k having c.f. ψ_{(k−1)/k}(kω). Let S_n = X₁ + X₂ + ⋯ + X_n (n ≥ 1). Then the c.f. of S_n/n is given by
$$E\!\left(e^{i\omega S_n/n}\right) = \prod_{k=1}^{n}\psi_{(k-1)/k}(k\omega/n) = \prod_{k=1}^{n}\frac{\varphi(k\omega/n)}{\varphi((k-1)\omega/n)} = \varphi(\omega),$$
so that φ is the c.f. of X₁/n + X₂/n + ⋯ + X_n/n. By the theorem on triangular arrays φ is infinitely divisible.

(ii) We also have
$$\varphi(\omega) = \varphi(m\omega/n)\prod_{k=m+1}^{n}\psi_{(k-1)/k}(k\omega/n).$$
In this let m → ∞, n → ∞ in such a way that m/n → c (0 < c < 1). Then
$$\varphi(\omega) = \varphi(c\omega)\lim\prod_{k=m+1}^{n}\psi_{(k-1)/k}(k\omega/n),$$
which shows that $\psi_c(\omega) = \lim\prod_{k=m+1}^{n}\psi_{(k-1)/k}(k\omega/n)$. Again by the theorem on triangular arrays ψ_c is infinitely divisible.

As a converse of Theorem 5.1 we ask whether, given a sequence {X_k, k ≥ 1} of independent random variables, there exist suitable constants a_n > 0, b_n such that the normed sums (S_n − b_n)/a_n converge in distribution. It is clear that in order to obtain this convergence we have to impose reasonable restrictions on the X_k. We require that each component X_k/a_n become uniformly asymptotically negligible (uan), in the sense that given ε > 0 and Ω > 0 one has, for all sufficiently large n,
$$|1 - E(e^{i\omega X_k/a_n})| < \varepsilon \quad \text{for } \omega \in [-\Omega, \Omega],\ k = 1, 2, \ldots, n. \qquad (5.2)$$
Theorem 5.2. If the normed sums (S_n − b_n)/a_n converge in distribution, then (i) as n → ∞, a_n → ∞, a_{n+1}/a_n → 1; and (ii) the limit distribution is self-decomposable.

Proof. (i) Let φ_k be the c.f. of X_k. We are given that
$$\chi_n(\omega) = e^{-i\omega b_n/a_n}\prod_{k=1}^{n}\varphi_k(\omega/a_n) \to \varphi(\omega) \qquad (5.3)$$
as n → ∞. Take a subsequence {a_{n_k}} of {a_n} such that a_{n_k} → a (0 ≤ a ≤ ∞). If 0 < a < ∞, then for each k, |1 − φ_k(ω/a)| < ε for ω ∈ [−Ω, Ω], or
$$|1 - \varphi_k(\omega)| < \varepsilon \quad \text{for } \omega \in \left[-\frac{\Omega}{a}, \frac{\Omega}{a}\right],$$
on account of the uan condition (5.2). Therefore φ_k(ω) = 1 in [−Ω/a, Ω/a], and (5.3) gives |φ(ω)| ≡ 1. This means that φ is degenerate, which is not true. If a = 0, then
$$1 = |\varphi(0)| = \lim_{k\to\infty}|\chi_{n_k}(a_{n_k}\omega)| = \lim_{k\to\infty}\prod_{j=1}^{n_k}|\varphi_j(\omega)|.$$
This gives |φ_j(ω)| = 1 for all ω, and again leads to a degenerate φ. Therefore a = ∞, which means that a_n → ∞. Proceeding as in the proof of Theorem 4.11 we find that a_{n+1}/a_n → 1.

(ii) Given c in (0, 1), for every integer n we can choose an integer m < n such that a_m/a_n → c, and m → ∞, n − m → ∞ as n → ∞. We can write (5.3) as
$$\chi_n(\omega) = \left[e^{-i\frac{a_m}{a_n}\omega\cdot\frac{b_m}{a_m}}\prod_{k=1}^{m}\varphi_k\!\left(\frac{a_m}{a_n}\cdot\frac{\omega}{a_m}\right)\right]e^{-i\omega\frac{b_n - b_m}{a_n}}\prod_{k=m+1}^{n}\varphi_k\!\left(\frac{\omega}{a_n}\right) = \chi_m\!\left(\frac{a_m}{a_n}\omega\right)\chi_{mn}(\omega) \quad \text{(say)}. \qquad (5.4)$$
Here χ_n(ω) → φ(ω) and χ_m((a_m/a_n)ω) → φ(cω). If we prove that φ ≠ 0, then the c.f. χ_{mn}(ω) → φ(ω)/φ(cω), a continuous function. It follows that φ(ω)/φ(cω) is a c.f., which means that φ is self-decomposable. To show that φ ≠ 0, note that φ(ω₀) = 0 for some ω₀ implies that φ(cω₀) = 0. By induction φ(cⁿω₀) = 0, so by continuity φ(0) = 0, which is absurd.

Theorem 5.3. A c.f. is self-decomposable iff it is infinitely divisible and its Feller measure M is such that the two functions M_c⁺, M_c⁻, where
$$M_c^+(x) = M^+(x) - M^+\!\left(\frac{x}{c}\right), \qquad M_c^-(-x) = M^-(-x) - M^-\!\left(-\frac{x}{c}\right),$$
are monotone for every c in (0, 1).
5.2. Triangular Arrays

For each n ≥ 1 let the random variables X_{1n}, X_{2n}, …, X_{r_n,n} be independent, with X_{kn} having the distribution F_{kn} and c.f. φ_{kn}. The double sequence {X_{kn}, 1 ≤ k ≤ r_n, n ≥ 1}, where r_n → ∞, is called a triangular array. Let S_{nn} = X_{1n} + X_{2n} + ⋯ + X_{r_n,n}. We are interested in the limit distribution of S_{nn} + β_n, where {β_n} is a sequence of real constants.

The array {X_{kn}} will be called a null array if it satisfies the uniformly asymptotically negligible (uan) condition: for each ε > 0 there exists a δ such that
$$P\{|X_{kn}| > \varepsilon\} < \delta \qquad (k = 1, 2, \ldots, r_n) \qquad (5.5)$$
for n sufficiently large. In terms of c.f.'s we can express this as follows: given ε > 0 and Ω > 0 we have
$$|1 - \varphi_{kn}(\omega)| < \varepsilon \quad \text{for } |\omega| < \Omega,\ k = 1, 2, \ldots, r_n \qquad (5.6)$$
for n sufficiently large. As special cases of the limit results we seek we have the following:

(i) Let X_{kn} = (X_k − b_n/n)/a_n (k = 1, 2, …, n; n ≥ 1), where {X_k} is a sequence of independent random variables. The problem is to find norming constants a_n > 0, b_n such that the distribution of the random variables (S_n − b_n)/a_n converges properly. We have seen that the limit distribution is (a) stable if the X_k are identically distributed, and (b) self-decomposable in general.
(ii) Let P{X_{kn} = 1} = p_n and P{X_{kn} = 0} = 1 − p_n, and suppose that p_n → 0, np_n → λ > 0. Then we know that
$$P\{S_{nn} = k\} \to e^{-\lambda}\frac{\lambda^k}{k!} \qquad (k = 0, 1, 2, \ldots).$$

We introduce the Feller measure M_n by setting
$$M_n\{dx\} = \sum_{k=1}^{r_n} x^2\,F_{kn}\{dx\}. \qquad (5.7)$$
For this we have
$$M_n^+(x) = \sum_{k=1}^{r_n}[1 - F_{kn}(x)], \qquad M_n^-(-x) = \sum_{k=1}^{r_n}F_{kn}(-x) \qquad (5.8)$$
for x > 0. We also introduce the truncation procedure by which a random variable X is replaced by τ(X), where
$$\tau(x) = \begin{cases} -a & \text{for } x < -a\\ x & \text{for } -a \le x \le a\\ a & \text{for } x > a. \end{cases} \qquad (5.9)$$
It is seen that Eτ(X + t) is a continuous monotone function of t and therefore vanishes for some t. For each pair (k, n) there exists a constant t_{kn} such that Eτ(X_{kn} + t_{kn}) = 0. We can therefore center the random variable X_{kn} so that
b_{kn} = Eτ(X_{kn}) = 0. Assume this has been done, so that
$$b_{kn} = E\tau(X_{kn}) = 0. \qquad (5.10)$$
Let
$$A_n = \sum_{k=1}^{r_n}E\tau^2(X_{kn}). \qquad (5.11)$$

Proposition 5.1. As n → ∞ we have
$$\log E\!\left[e^{i\omega(S_{nn}+\beta_n)}\right] = \sum_{k=1}^{r_n}[\varphi_{kn}(\omega) - 1] + i\omega\beta_n + O(A_n)$$
for |ω| < Ω.

Proof. We have
$$\varphi_{kn}(\omega) - 1 = \int_{-\infty}^{\infty}(e^{i\omega x} - 1)\,F_{kn}\{dx\} = \int_{-\infty}^{\infty}[e^{i\omega x} - 1 - i\omega\tau(x)]\,F_{kn}\{dx\}$$
$$= \int_{-a}^{a}(e^{i\omega x} - 1 - i\omega x)\,F_{kn}\{dx\} + \int_{-\infty}^{-a}(e^{i\omega x} - 1 + i\omega a)\,F_{kn}\{dx\} + \int_{a}^{\infty}(e^{i\omega x} - 1 - i\omega a)\,F_{kn}\{dx\},$$
so that
$$|\varphi_{kn}(\omega) - 1| \le \int_{-a}^{a}\frac{1}{2}\omega^2x^2\,F_{kn}\{dx\} + \int_{|x|>a}(2 + a|\omega|)\,F_{kn}\{dx\}$$
$$\le \frac{1}{2}\omega^2E\tau^2(X_{kn}) + (2 + a|\omega|)E\tau^2(X_{kn})a^{-2} = c(\omega)E\tau^2(X_{kn}),$$
where c(ω) = ½ω² + 2a⁻² + |ω|a⁻¹. Summing this over k = 1, 2, …, r_n we obtain
$$\sum_{k=1}^{r_n}|\varphi_{kn}(\omega) - 1| \le c(\omega)A_n. \qquad (5.12)$$
14:38
9in x 6in
Topics in Probability
b1108-ch05
Self-Decomposable Distributions; Triangular Arrays
75
The uan condition implies that φkn (ω) = 0 in (−Ω, Ω) so that log φkn exists in (−Ω, Ω) for n sufficiently large. Therefore log φkn (ω) = log[1 + φkn (ω) − 1] = [φkn (ω) − 1] +
∞ (−1)r−1
r
r=2
[φkn (ω) − 1]r
and
r rn n
log φkn (ω) − [φkn (ω) − 1]
k=1
k=1
≤
∞ rn 1 k=1 r=2
r
|φkn (ω) − 1|r ≤
rn 1 |φkn (ω) − 1|2 2 1 − |φkn (ω) − 1| k=1
≤ sup |φkn (ω) − 1| · 1≤k≤rn
rn
|φkn (ω) − 1|
k=1
(5.13)
< εc(ω)An
by the uan condition and (5.2). From (5.2) and (5.3) it follows that rn φkn (ω) log E eiω(Snn +βn ) = log eiωβn k=1
=
rn
log φkn (ω) + iωβn
k=1
=
rn [φkn (ω) − 1] + iωβn + 0(An ) k=1
as required.
Theorem 5.4. Let {Xkn } be a null array, centered so that bkn = 0, and {βn } a sequence of real constants. Then Snn + βn converges in distribution iff βn → b and Mn → M, a Feller measure. In this case the limit distribution is infinitely divisible, its c.f. φ being given by
May 12, 2011
14:38
76
9in x 6in
Topics in Probability
b1108-ch05
Topics in Probability
φ = eψ , with
ψ(ω) = iωb +
∞ −∞
eiωx − 1 − iωτ (x) M {dx}. x2
(5.14)
Proof. The desired result is a consequence of Theorem 4.5. In order to apply this theorem we define the distribution rn 1 Fkn {dx} Fn {dx} = rn
(5.15)
k=1
and its c.f. φn (ω) =
rn 1 φkn (ω). rn
(5.16)
k=1
Then Mn {dx} = rn x2 Fn {dx}, the associated c.f. being φn (ω) = eψn (ω) , where ψn (ω) = iωβn + rn [φn (ω) − 1] = iωβn +
rn [φkn (ω) − 1].
(5.17)
k=1
Using Proposition 5.1 we can therefore write log E eiω(Snn +βn ) = ψn (ω) + 0(An ) (n → ∞).
(5.18)
(i) Let Mn → M and βn → b. By Theorem 4.5 it follows that ψn → ψ where ∞ iωx e − 1 − iωτ (x) M {dx}. (5.19) ψ(ω) = iωb + x2 −∞ Furthermore An =
rn k=1
a
= −a
rn
Eτ 2 (Xkn ) = Mn {dx} +
a
k=1 −a
|x|≥a
a2
x2 Fkn {dx} +
rn k=1
|x|≥a
a2 Fkn {dx}
1 Mn {dx} → M {(−a, a)} x2
+ [M + (a) + M − (−a)]a2
May 12, 2011
14:38
9in x 6in
Topics in Probability
b1108-ch05
Self-Decomposable Distributions; Triangular Arrays
77
if (−a, a) is an interval of continuity of the measure M . Thus An tends to a finite limit and (5.8) gives log E eiω(Snn +βn ) → ψ(ω), with ψ given by (5.9). (ii) Conversely, suppose that Snn + βn converges in distribution. Then by (5.8) there is a c.f. φ such that ψn (ω) + 0(An ) → log φ(ω)
(5.20)
for |ω| < Ω. Since the convergence is uniform we integrate (5.20) over (−h, h) where 0 < h < Ω. The left side gives
h −h
dω
rn (eiωx − 1)Fkn {dx} + 2h0(An ) + βn k=1
=
rn
∞
k=1 −∞
h −h
iωdω
2 sin hx − 2h Fkn {dx} + 2h0(An ) x
and so sin hx Fkn {dx} 1− hx −∞ h 1 log φ(ω)λω. + 0(An ) → − 2h −h
rn k=1
∞
(5.21)
Now take h < 2; then 1 sin hx ≥ h2 x2 for |x| < 1, hx 10 1 sin hx > for |x| ≥ 1 1− hx 2
1−
Then the left side of (5.21) is ≥
rn h2 k=1
10
1
−1
2
x Fkn {dx} +
rn 1 k=1
2
|x|>1
Fkn {dx} + 0(An )
May 12, 2011
14:38
78
9in x 6in
Topics in Probability
b1108-ch05
Topics in Probability n h2 ≥ 10
r
k=1
n 1 x Fkn {dx} + 2 −1
1
r
2
k=1
|x|>1
Fkn {dx} + 0(An )
h2 An + 0(An ). 10 This shows that An is bounded as n → ∞. We can therefore write (5.20) as =
ψn (ω) → log φ(ω)
(5.22)
uniformly in |ω| < Ω. The required result now follows from Theorem 4.5.
5.3. Problems for Solution 1. If φ(ω) = φ(cω)ψc (ω) for c ≥ 1, where ψc (ω) is a c.f., then either ψc (ω) is degenerate or φ(ω) is degenerate. [If c = 1, then ψc (ω) ≡ 1. If c > 1 then since |φ(ω)| ≤ |φ(cω)| we obtain
ω ω
ω
≥ φ 2 ≥ · · · ≥
φ n
≥ |φ(0)| = 1, 1 ≥ |φ(ω)| ≥ φ
c c c which gives |φ(ω)| ≡ 1.] 2. A self-decomposable c.f. φ never vanishes. [If φ(2a) = 0 and φ(ω) = 0 for 0 ≤ ω < 2a, then ψc (2a) = 0. We have |ψc (a)|2 = |ψc (2a) − ψc (a)|2 ≤ 2[1 − Re ψc (a)]. Here ψc (a) = φ(a)/φ(ca) → 1 as c → 1, so we have a contradiction].
Bibliography
Feller, W., An Introduction to Probability Theory and its Applications, Vol. 1, 3rd Ed. (Wiley, New York, 1968).
Feller, W., An Introduction to Probability Theory and its Applications, Vol. 2, 2nd Ed. (Wiley, New York, 1971).
Hille, E., Analytic Function Theory, Vol. II (Boston, 1962).
Loève, M., Probability Theory, 3rd Ed. (Van Nostrand, New York, 1963).
Index
absolutely continuous, 2
analytic c.f., 27
arithmetic distribution, 2
atomic, 2
binomial, 28
Cauchy, 28
central limit theorem, 19
characteristic function, 11
continuity theorem, 17
continuous, 2
convergence of types, 19
convolution, 4
cumulant generating function, 32
defective, 1
density, 2
distributions of class L, 69
domains of attraction, 59
entire, 27
Feller measure, 46
gamma, 28
infinitely divisible, 43
Jordan decomposition, 3
Laplace, 28
Lebesgue decomposition, 3
Lévy measure, 52
Lévy processes, 57
Lévy–Khintchine representation, 52
Lyapunov's inequality, 8
moment problem, 31
moments, 6
non-negative definite, 21
normal, 28
Poisson, 28
probability distribution, 1
probability distribution function, 1
proper, 1
random variables, 4
Schwarz inequality, 30
selection theorem, 10
self-decomposable distributions, 69
singular, 2
stable distributions, 55
subordinator, 58
triangular arrays, 72
weak law of large numbers, 18