Sketch of Proof Apply Proposition 1.1.2(vii) and Definition 3.3.1 to get (i). (ii) follows from (i), and (iii) follows from (ii) and Definition 3.1.1. □

Proposition 3.3.2 (Dulmage and Mendelsohn, [77]) Let D be a primitive digraph and let A = A(D). Each of the following holds.
(i) For any integer m > 0, A^m is primitive.
(ii) Given any u ∈ V(D), there exists a smallest integer h_u > 0 such that for every v ∈ V(D), D has a (u,v)-walk of length h_u. (This number h_u is called the reach of the vertex u.)
(iii) Fix a u ∈ V(D). For any integer h ≥ h_u, D has a (u,v)-walk of length h for every v ∈ V(D).
(iv) If V(D) = {1, 2, ···, n}, then γ(D) = max{h_1, h_2, ···, h_n}.

Sketch of Proof (i) and (ii) follow from Definitions 3.2.3 and 3.3.1, and (iv) follows from (iii). (iii) can be proved by induction on h' = h − h_u. When h' > 0, since D is strong, there exists w ∈ V(D) such that (w,v) ∈ E(D). By induction, D has a (u,w)-walk of length h_u + h' − 1 = h − 1. □

Theorem 3.3.1 (Dulmage and Mendelsohn, [77]) Let D be a primitive digraph with |V(D)| = n, and let s be the length of a directed cycle of D. Then γ(D) ≤ n + s(n − 2).
Powers of Nonnegative Matrices
Proof Let C_s be a directed cycle of D with length s. Note that D(A^s) has a loop at each vertex in V(C_s). Let u, v ∈ V(D) = V(D(A^s)) be two vertices. If u ∈ V(C_s), then since D(A^s) is primitive (Proposition 3.3.2), and since D(A^s) has a loop at u, D(A^s) has a (u,v)-walk of length n − 1. By Proposition 1.1.2(vii), the (u,v)-entry of A^{s(n−1)} is positive. Hence D has a (u,v)-walk of length s(n − 1). If u ∉ V(C_s), then since D is strong, D has a (u,w)-walk of length at most n − s for some w ∈ V(C_s), and so D has a (u,v)-walk of length at most n − s + s(n − 1) = n + s(n − 2). It follows by Proposition 3.3.2 that γ(D) ≤ n + s(n − 2). □

Corollary 3.3.1A (Wielandt, [274]) Let D be a primitive digraph with n vertices. Then γ(D) ≤ (n − 1)² + 1.
Sketch of Proof By induction on n. When n ≥ 2, D has a cycle of length s, since D is strong. Since D is primitive, s ≤ n − 1, and so Corollary 3.3.1A follows from Theorem 3.3.1. □
Proposition 3.3.3 Fix i ∈ {1, 2} and let D be a primitive digraph with n ≥ 3 + i vertices. Each of the following holds.
(i) If γ(D) = (n − 1)² + 2 − i, then D is isomorphic to D_i.
(ii) There is no primitive digraph on n vertices such that n² − 3n + 4 < γ(D) < (n − 1)².

Proof (i) Let s be the length of a shortest directed cycle of D. By Theorem 3.3.1, (n − 1)² + 2 − i = γ(D) ≤ n + s(n − 2). It follows that s = n − 1. Build D from this (n − 1)-cycle to see that D must be D_1 or D_2. (ii) By (i), s ≤ n − 2, and so by Theorem 3.3.1, γ(D) ≤ n² − 3n + 4. □
Example 3.3.1 Let C_n = v_1 v_2 ··· v_n v_1 be a directed cycle with n ≥ 4 vertices. Let D_1 be the digraph obtained from C_n by adding an arc (v_{n−1}, v_1). Then by Proposition 3.3.2 and Theorem 3.1.2,

γ(D_1) = max{h_{v_i}} = h_{v_n} = n + φ(n, n − 1) = (n − 1)² + 1.

Thus the bound in Corollary 3.3.1A is best possible.

Example 3.3.2 (Continuation of Example 3.3.1) Assume that n ≥ 5. Let D_2 be obtained from D_1 by adding an arc (v_n, v_2). Note that γ(D_2) = (n − 1)². Proposition 3.3.3 indicates that D_i is the only digraph, up to isomorphism, with γ(D_i) = (n − 1)² + 2 − i (1 ≤ i ≤ 2). Moreover, there are some integers k with 1 ≤ k ≤ (n − 1)² + 1 such that no primitive digraph D satisfies γ(D) = k.
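For small n, the value γ(D_1) = (n − 1)² + 1 can be verified directly by iterating Boolean matrix powers. The sketch below (Python with NumPy; the helper names are ours, not the book's) builds the digraph D_1 of Example 3.3.1 and computes its exponent.

```python
import numpy as np

def bool_mul(P, Q):
    # Boolean matrix product: entry (i,j) is 1 iff P[i,k] = Q[k,j] = 1 for some k
    return (P.astype(int) @ Q.astype(int)) > 0

def exponent(A, limit=10_000):
    # gamma(A): least m with A^m all-positive; None if A is not primitive
    P = A.copy()
    for m in range(1, limit + 1):
        if P.all():
            return m
        P = bool_mul(P, A)
    return None

def wielandt_digraph(n):
    # adjacency matrix of D_1: the cycle v1 -> v2 -> ... -> vn -> v1
    # together with the extra arc (v_{n-1}, v_1)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        A[i, (i + 1) % n] = True
    A[n - 2, 0] = True
    return A

for n in (4, 5, 6):
    assert exponent(wielandt_digraph(n)) == (n - 1) ** 2 + 1
```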
Example 3.3.3 (Holladay and Varga, [129]) Let n ≥ d > 0 be integers. If A ∈ B_n is irreducible and has d positive diagonal entries, then A is primitive and γ(A) ≤ 2n − d − 1 (Proposition 3.3.4(ii) below). Let A(n,d) ∈ B_n be the matrix with 1's on the superdiagonal, a 1 in position (n,1), and 1's in the first d diagonal positions; that is, D(A(n,d)) is the directed cycle 1 → 2 → ··· → n → 1 with loops at the vertices 1, 2, ···, d, so that A(n,d) has exactly d positive diagonal entries. Then γ(A(n,d)) = 2n − d − 1.
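The value γ(A(n,d)) = 2n − d − 1 can be checked computationally. The sketch below assumes the consecutive-loop realization of A(n,d) described above (a full directed n-cycle with loops at the first d vertices); since the printed matrix display is not fully legible, that concrete form is our assumption.

```python
import numpy as np

def A_nd(n, d):
    # directed n-cycle with loops at the first d vertices (d positive diagonal entries)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        A[i, (i + 1) % n] = True
    for i in range(d):
        A[i, i] = True
    return A

def exponent(A, limit=10_000):
    # gamma(A): least m with A^m all-positive
    P = A.copy()
    for m in range(1, limit + 1):
        if P.all():
            return m
        P = (P.astype(int) @ A.astype(int)) > 0
    return None

for n in range(3, 8):
    for d in range(1, n + 1):
        assert exponent(A_nd(n, d)) == 2 * n - d - 1
```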
Proposition 3.3.4 Let D be a strong digraph with n = |V(D)| and let d > 0 be an integer. Then each of the following holds.
(i) If D has a loop, then D is primitive.
(ii) If D has loops at d distinct vertices, then γ(D) ≤ 2n − d − 1.
(iii) The bound in (ii) is best possible.

Sketch of Proof (i) follows from Theorem 3.2.1. (ii) By (i), D is primitive. For any u, v ∈ V(D), D has a (u,w)-walk of length n − d for some vertex w with a loop, and a (w,v)-walk of length at most n − 1. Thus h_u ≤ 2n − d − 1, and (ii) follows from Proposition 3.3.2(iv). (iii) Compute γ(A(n,d)) for the matrix A(n,d) in Example 3.3.3 to see that γ(A(n,d)) = 2n − d − 1. □

Definition 3.3.2 For integers b > a > 0 and n > 0, let

[a, b]⁰ = {k : k is an integer and a ≤ k ≤ b},
E_n = {k : there exists a primitive matrix A ∈ B_n such that γ(A) = k}.

By Theorem 3.3.1 and by Proposition 3.3.3, E_n ⊂ [1, (n − 1)² + 1]⁰.
Theorem 3.3.2 (Liu, [168]) Let n − 1 ≥ d ≥ 1 be integers and let P_n(d) be the set of primitive matrices in B_n with d positive diagonal entries. If k ∈ {2, 3, ···, 2n − d − 1}, then there exists a matrix A ∈ P_n(d) such that γ(A) = k.

Sketch of Proof For any integer k ∈ {2, 3, ···, n} we construct a digraph D whose adjacency matrix A satisfies the requirements. If 1 ≤ d < k ≤ n, consider the adjacency matrix of the digraph D in Figure 3.3.1.

[Figure 3.3.1: a digraph D on the vertices 1, 2, ···, k, k+1, k+2, ···, n−1, n]

Note that γ(i,j) = k when i = j = 1, and γ(i,j) ≤ k otherwise. Thus γ(A) = k in this case. Digraphs needed to prove the other cases can be constructed similarly, and their constructions are left as an exercise. □
Theorem 3.3.3 (Shao, [245]) Let A ∈ B_n be symmetric and irreducible.
(i) A is primitive if and only if D(A) has a directed cycle of odd length.
(ii) If A is primitive, then γ(A) ≤ 2n − 2, where equality holds if and only if D(A) is a path of n vertices with exactly one loop, attached at one end of the path.
Proof Let D = D(A). Since A is irreducible and symmetric, D is strong and every arc of D lies in a directed 2-cycle. Thus by Theorem 3.2.2, D is primitive if and only if D has a directed odd cycle.

Assume that A is primitive. Then A² is also primitive by Proposition 3.3.2. Since A is symmetric, a loop is attached at each vertex of V(D(A²)) = V(D). Thus by Proposition 3.3.4(ii), γ(A²) ≤ n − 1, and so A^{2(n−1)} > 0. Hence γ(A) ≤ 2n − 2.

Assume further that γ(A) = 2n − 2. Then in D(A²), there exists a pair of vertices u, v such that the shortest length of a (u,v)-path in D(A²) is n − 1. It follows that D(A²) must be a (u,v)-path with a loop at every vertex and with each arc in a 2-cycle. If D has a vertex adjacent to three distinct vertices u', v', w' ∈ V(D), then u'v'w'u' is a directed 3-cycle in D(A²), contrary to the fact that D(A²) is a (u,v)-path with a loop attached at each vertex. It follows that D(A) is a path of n vertices with at least one loop. By γ(A) = 2n − 2 again, D has exactly one loop, which is attached at one end of the path. □
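The extremal case of Theorem 3.3.3(ii), a path with a single loop at one end, is easy to test numerically; the sketch below (Python with NumPy; the helper names are ours) confirms γ = 2n − 2 for small n.

```python
import numpy as np

def path_with_end_loop(n):
    # symmetric adjacency matrix of the path v1 - v2 - ... - vn with a loop at v1
    A = np.zeros((n, n), dtype=bool)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = True
    A[0, 0] = True
    return A

def exponent(A, limit=10_000):
    # gamma(A): least m with A^m all-positive
    P = A.copy()
    for m in range(1, limit + 1):
        if P.all():
            return m
        P = (P.astype(int) @ A.astype(int)) > 0
    return None

for n in range(2, 8):
    assert exponent(path_with_end_loop(n)) == 2 * n - 2
```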
Theorem 3.3.4 (Shao, [245]) For all n ≥ 1, E_n ⊆ E_{n+1}. Moreover, if n ≥ 4, then E_n ⊂ E_{n+1}.

Proof Let t ∈ E_n, and let A = (a_{ij}) ∈ B_n be a primitive matrix with γ(A) = t. Construct a matrix B = (b_{ij}) ∈ B_{n+1} as follows: the n × n upper left corner submatrix of B is A; for 1 ≤ i ≤ n, b_{i,n+1} = a_{i,n}; for 1 ≤ j ≤ n, b_{n+1,j} = a_{n,j}; and b_{n+1,n+1} = a_{n,n}. Then D(B) is the digraph obtained from D(A), with V(D(A)) = {1, 2, ···, n}, by adding a new vertex n+1 such that (i, n+1) ∈ E(D(B)) if and only if (i, n) ∈ E(D(A)), such that (n+1, j) ∈ E(D(B)) if and only if (n, j) ∈ E(D(A)), and such that (n+1, n+1) ∈ E(D(B)) if and only if (n, n) ∈ E(D(A)). By Theorem 3.2.2, B is also primitive. By Definition 3.3.1 (or by Exercise 3.13(iv)), γ(B) = γ(A), and so t ∈ E_{n+1}. If n ≥ 4, then by Example 3.3.1, n² + 1 ∈ E_{n+1} \ E_n, and so the containment must be proper. □
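The vertex-duplication construction in the proof of Theorem 3.3.4 can be carried out mechanically: the new vertex n + 1 copies the in-arcs, out-arcs, and loop of vertex n, and the exponent is unchanged. A minimal sketch (Python with NumPy; the Wielandt digraph on 4 vertices serves as the primitive A):

```python
import numpy as np

def bool_mul(P, Q):
    return (P.astype(int) @ Q.astype(int)) > 0

def exponent(A, limit=10_000):
    # gamma(A): least m with A^m all-positive
    P = A.copy()
    for m in range(1, limit + 1):
        if P.all():
            return m
        P = bool_mul(P, A)
    return None

def duplicate_last_vertex(A):
    # new vertex n+1 receives copies of vertex n's in-arcs, out-arcs and loop
    n = A.shape[0]
    B = np.zeros((n + 1, n + 1), dtype=bool)
    B[:n, :n] = A
    B[:n, n] = A[:, n - 1]     # (i, n+1) iff (i, n)
    B[n, :n] = A[n - 1, :]     # (n+1, j) iff (n, j)
    B[n, n] = A[n - 1, n - 1]  # (n+1, n+1) iff (n, n)
    return B

# Wielandt digraph on 4 vertices (gamma = 10) as a concrete primitive A
A = np.zeros((4, 4), dtype=bool)
for i in range(4):
    A[i, (i + 1) % 4] = True
A[2, 0] = True
B = duplicate_last_vertex(A)
assert exponent(A) == exponent(B) == 10
```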
Ever since 1950, when Wielandt published his paper [274] giving a best possible upper bound on γ(A), the study of γ(A) has focused on the problems described below. Let A denote a class of primitive matrices.

(MI) The Maximum Index problem: estimate upper bounds of γ(A) for A ∈ A.

(IS) The Set of Indices problem: determine the exponent set

E_n(A) = {m : m > 0 is an integer such that for some A ∈ A, γ(A) = m}.

(EM) The Extremal Matrix problem: determine the matrices with maximum exponent in a given class A, that is, the set

EM(A) = {A ∈ A : γ(A) = max{γ(A') : A' ∈ A}}.
(MS) The Set of Matrices problem: for any γ_0 ∈ ∪_n E_n(A), determine the set of matrices MS(A, γ_0) = {A ∈ A : γ(A) = γ_0}. In fact, Problem EM is a special case of Problem MS.
We now present a brief survey of these problems, indicating the progress made on each of them so far. First, let us recall and name some classes of matrices.
Some Classes of Primitive Matrices

Notation   Definition
P_n        n × n primitive matrices in B_n
P_n(d)     matrices in P_n with d positive diagonal entries
T_n        matrices A ∈ P_n such that D(A) is a tournament
F_n        fully indecomposable matrices in P_n
DS_n       P_n ∩ Ω_n
CP_n       circulant matrices in P_n
NR_n       nearly reducible matrices in P_n
S_n        symmetric matrices in P_n
S_n⁰       matrices in S_n with zero trace
Problem MI This area seems to be the one that has been studied most thoroughly.

Notation   Result                                              Authors and References
P_n        γ(A) ≤ (n − 1)² + 1                                 Wielandt, [274]
P_n        γ(A) ≤ n + s(n − 2)                                 Dulmage and Mendelsohn, [77]
P_n(d)     γ(A) ≤ 2n − d − 1                                   Holladay and Varga, [129]
T_n        γ(A) ≤ n + 2                                        Moon and Pullman, [205]
F_n        γ(A) ≤ n − 1                                        Schwarz, [232]
DS_n       γ(A) ≤ ⌊n²/4⌋ + 1 if n = 5, 6 or n ≡ 0 (mod 4);
           γ(A) ≤ ⌊n²/4⌋ otherwise                             Lewin, [159]
CP_n       γ(A) ≤ n − 1                                        Kim-Butler and Krabill, [144]
NR_n       γ(A) ≤ n² − 4n + 6                                  Brualdi and Ross, [36]
S_n        γ(A) ≤ 2n − 2                                       Shao, [245]
S_n⁰       γ(A) ≤ 2n − 4                                       Liu et al, [177]
Problem IS Let w_n = (n − 1)² + 1. Wielandt (1950, [274]) showed that E_n ⊆ [1, w_n]⁰; Dulmage and Mendelsohn (1964, [77]) showed that E_n ⊂ [1, w_n]⁰. In 1981, Lewin and Vitek [157] found all gaps (numbers in [1, w_n]⁰ but not in E_n) in [⌊w_n/2⌋ + 1, w_n]⁰ and conjectured that [1, ⌊w_n/2⌋]⁰ has no gaps. Shao (1985, [247]) proved that this Lewin-Vitek Conjecture is valid for sufficiently large n and that [1, ⌊w_n/4⌋ + 1]⁰ has no gaps. However, when n = 11, 48 ∉ E_11, and so the conjecture has one counterexample. Zhang continued and completed the work: he proved (1987, [282]) that the Lewin-Vitek Conjecture holds for all n except n = 11. Thus the set E_n for the class P_n is completely determined.

Results concerning the exponent set in special classes of matrices are listed below.

Notation            Result                                                     Authors and References
P_n(n)              [1, n]⁰                                                    Guo, [110]
P_n(d), 1 ≤ d < n   [2, 2n − d − 1]⁰                                           Liu, [168]
T_n                 [3, n + 2]⁰                                                Moon and Pullman, [205]
F_n                 [1, n − 1]⁰                                                Pan, [209]
DS_n                Unsolved
CP_n                Unsolved
NR_n                Characterized                                              Shao and Hu, [249]
S_n                 [1, 2n − 2]⁰ \ S, S = {m : m odd, n ≤ m ≤ 2n − 3}          Shao, [245]
S_n⁰                [2, 2n − 4]⁰ \ S_1, S_1 = {m : m odd, n − 2 ≤ m ≤ 2n − 5}  Liu et al, [177]
Problem EM Proposition 3.3.3 indicates that

EM(P_n) = {P D_1 Pᵀ : P is a permutation matrix},

where D_1 is the adjacency matrix of the digraph defined in Example 3.3.1. However, for certain classes of primitive matrices, the extremal matrices may not be unique under the permutation-similarity equivalence ≅_p and are hard to characterize, even under the condition that the number of positive entries is minimized ([168]). Several such problems remain unsolved. Results concerning Problem EM in special classes of matrices are summarized below.
Notation   Status            Authors and References
P_n(d)     Solved            Liu and Shao, [182]
T_n        Solved            Moon and Pullman, [205]
F_n        Partially Solved  Pan, [209]
DS_n       Solved            Zhou and Liu, [290]
CP_n       Solved            Huang, [135]
NR_n       Solved            Brualdi and Ross, [36]
S_n        Solved            Shao, [245]
S_n⁰       Solved            Liu et al, [177]
Problem MS This problem appears to be harder. From the viewpoint of matrix equations, it asks for all primitive matrix solutions A to the equation A^k = J, for any k ∈ E_n. The study of this problem is just beginning. See [209], [292] and [270], among others.
3.4
The Index of Convergence of Irreducible Nonprimitive Matrices
Throughout this section, the matrices and the matrix operations are also Boolean.

Definition 3.4.1 Recall that p(A) denotes the period of a matrix A ∈ B_n and k(A) the index of convergence of A (Definition 3.2.2). For integers n ≥ p ≥ 1 with n > 1, define

IB_{n,p} = {A ∈ B_n : A is irreducible and p(A) = p}.

Note that IB_{n,1} = P_n, the set of all n by n primitive matrices. Denote

k(n,p) = max{k(A) : A ∈ IB_{n,p}}.
Theorem 3.4.1 (Heap and Lynn, [119]) Write n = pr + s for integers r and s such that 0 ≤ s < p. Then

k(n,p) ≤ p(r² − 2r + 2) + 2s.

Equality holds when s = 0.
Note that Wielandt's theorem (Corollary 3.3.1A) is the special case of Theorem 3.4.1 when p = 1. The next theorem, due to Schwarz (Theorem 3.4.2), implies Theorem 3.4.1. The proof here is also given by Schwarz.
Theorem 3.4.2 (Schwarz, [233]) Write n = pr + s for integers r and s such that 0 ≤ s < p, and let

w̃ = r² − 2r + 2 if r > 1, and w̃ = 0 if r = 1.

Then k(n,p) ≤ pw̃ + s.

Some notation and lemmas are needed in the proof of Theorem 3.4.2.

Definition 3.4.2 Let A ∈ B_n with p(A) = p. If A is not primitive, then by Corollary 3.2.3D, A is permutation similar to

    [ 0    A_1  0    ···  0       ]
    [ 0    0    A_2  ···  0       ]
    [ ⋮               ⋱           ]      (3.7)
    [ 0    0    ···  0    A_{p−1} ]
    [ A_p  0    ···  0    0       ]

where each A_i ∈ M_{n_i, n_{i+1}} (i = 1, 2, ···, p (mod p)) and each A_i(p) is primitive. Denote the matrix of the form (3.7) by (n_1, A_1, n_2, A_2, ···, n_p, A_p, n_1), or simply (A_1, A_2, ···, A_p), when no confusion should arise.

Fix an integer p ≥ 1. For any integers m ≥ 0 and n_1, n_2, ···, n_p > 0, and any Z_i ∈ M_{n_i, n_{i+m}} (i = 1, 2, ···, p (mod p)), define (Z_1, Z_2, ···, Z_p)_m to be the block matrix (A_{ij}) such that A_{ij} = Z_i if j − i ≡ m (mod p), and A_{ij} = 0 otherwise. Thus (A_1, A_2, ···, A_p) = (A_1, A_2, ···, A_p)_1. For any m, j ≥ 1, define

A_j(m) = A_j A_{j+1} ··· A_{j+m−1},

where the subscripts are taken mod p.
The following three lemmas are obtained from the definitions and by straightforward computations.

Lemma 3.4.1 If A = (A_1, A_2, ···, A_p), then
(i) A^m = (A_1(m), A_2(m), ···, A_p(m))_m.
(ii) A_i(ph) = (A_i(p))^h, for each i = 1, 2, ···, p (mod p), and for each integer h ≥ 1.
(iii) For integers k, j, l, q with k, j, l > 0, q ≥ 0, j + q ≤ p, and 1 ≤ l ≤ p, A_l(k) = A_l(q) A_{l+q}(k − q).

Lemma 3.4.2 Let A, B ∈ B_n. Each of the following holds.
(i) If AB is primitive, then A has no zero rows and B has no zero columns.
(ii) If A has no zero columns and B has no zero rows, and if AB is primitive, then BA is also primitive. Moreover, |γ(BA) − γ(AB)| ≤ 1.
(iii) If A = (A_1, A_2, ···, A_p) ∈ B_n is an irreducible matrix with p(A) = p, then each A_i(p) (1 ≤ i ≤ p) is primitive, and |γ(A_i(p)) − γ(A_j(p))| ≤ 1 (1 ≤ i, j ≤ p).

Lemma 3.4.3 Let A = (n_1, A_1, n_2, A_2, ···, n_p, A_p, n_1) = (A_1, A_2, ···, A_p) ∈ B_n be an irreducible matrix with p(A) = p. Then k(A) is the smallest integer k > 0 such that A_i(k) = J for all i = 1, 2, ···, p.
Lemma 3.4.4 Let A = (A_1, A_2, ···, A_p) ∈ B_n be an irreducible matrix with p(A) = p. Let t be an integer with 1 ≤ t ≤ p. If for 1 ≤ i_1 < i_2 < ··· < i_t ≤ p, γ_{i_j} = γ(A_{i_j}(p)) (1 ≤ j ≤ t), then

k(A) ≤ p·max{γ_{i_1}, ···, γ_{i_t}} + p − t.

Proof Let h = max{γ_{i_1}, γ_{i_2}, ···, γ_{i_t}} and k = ph + p − t. Since h ≥ γ(A_{i_j}(p)), A_{i_j}(ph) = (A_{i_j}(p))^h = J (1 ≤ j ≤ t). For any 1 ≤ l ≤ p, since

|{i_1, ···, i_t}| + |{l, l+1, l+2, ···, l+p−t}| = p + 1 > p,

there exist j and q (1 ≤ j ≤ t, 0 ≤ q ≤ p − t) such that some i_j ≡ l + q (mod p), and so A_{i_j} = A_{l+q}. It follows that for each l = 1, 2, ···, p,

A_l(k) = A_l(q) A_{l+q}(k − q)
       = A_l(q) A_{l+q}(ph) A_{l+q+ph}(k − ph − q)
       = A_l(q) A_{i_j}(ph) A_{l+q+ph}(k − ph − q)
       = A_l(q) J A_{l+q+ph}(p − t − q)
       = J.

Therefore k(A) ≤ k follows by Lemma 3.4.3. □
Lemma 3.4.5 Let A = (n_1, A_1, n_2, A_2, ···, n_p, A_p, n_1) ∈ B_n be an irreducible matrix with p(A) = p, and let m = min{n_1, n_2, ···, n_p}. Then

k(A) ≤ p(m² − 2m + 3) − 1.

Proof It follows from Lemma 3.4.4 (with t = 1) and Corollary 3.3.1A. □

Proof of Theorem 3.4.2 Since k(A) = k(PAPᵀ) for any permutation matrix P, we may assume that A = (n_1, A_1, n_2, A_2, ···, n_p, A_p, n_1), where n_1 + n_2 + ··· + n_p = n. Let m = min{n_1, ···, n_p}. Since n = rp + s where 0 ≤ s < p, m ≤ r.
Case 1: m ≤ r − 1. Then r ≥ m + 1 ≥ 2, and so by Lemma 3.4.5,

k(A) ≤ p(m² − 2m + 3) − 1 ≤ p(r² − 4r + 6) − 1 < p(r² − 2r + 2) + s.

Case 2: m = r. Since n_1 + n_2 + ··· + n_p = n = pr + s = pm + s, there must be indices 1 ≤ i_1 < i_2 < ··· < i_{p−s} ≤ p with n_{i_j} = r (1 ≤ j ≤ p − s). Suppose first that r > 1. By Corollary 3.3.1A, each γ_{i_j} = γ(A_{i_j}(p)) ≤ (r − 1)² + 1. By Lemma 3.4.4 (with t = p − s),

k(A) ≤ p·max{γ_{i_1}, ···, γ_{i_{p−s}}} + p − (p − s) ≤ p(r² − 2r + 2) + s.

When r = 1, n = p + s. Thus among any s + 1 members of {n_1, ···, n_p}, at least one of them is 1. Accordingly, one of the matrices A_i, A_{i+1}, ···, A_{i+s−1} is a J, since these matrices have no zero rows and no zero columns. It follows that A_i(s) = J, i = 1, 2, ···, p, and so k(A) ≤ s, by Lemma 3.4.3. □
Schwarz indicated in [233] that the upper bound in Theorem 3.4.2 can be reached when n = 7 and p = 2. Can the upper bound be reached for general n and p? Shao and Li [252] completely answered this question. Two of their results are presented below; further details can be found in [252].
Theorem 3.4.3 (Shao and Li, [252]) Let A ∈ IB_{n,p} with p = 2 and n = 2r + 1, r > 1. Then k(A) = k(n,p) if and only if there is a permutation matrix P such that PAPᵀ ∈ {M_1, M_2, M_3}, where

M_1 = [ 0   H_1 ]     M_2 = [ 0   H_1 ]     M_3 = [ 0   H_2 ]
      [ Y_1  0  ],          [ Y_2  0  ],          [ Y_1  0  ],

and where H_1, H_2 are (r+1) × r and Y_1, Y_2 are r × (r+1) 0-1 matrices, displayed explicitly in [252].
Theorem 3.4.4 (Shao and Li, [252]) When r > 1; r = 1 and s > 0; or r = 1 and s = 0, the matrices A ∈ IB_{n,p} with k(A) = k(n,p) can be partitioned into 2^s + 2^{s−1}, 2^s − 1, and 1 equivalence classes, respectively, under the relation ≅_p.
Theorem 3.4.2 can be viewed as a Wielandt type upper bound on the index of convergence of irreducible nonprimitive matrices. In order to obtain a Dulmage-Mendelsohn type upper bound, we need a few more notions.

Definition 3.4.3 Let A ∈ IB_{n,p}. For 1 ≤ i, j ≤ n, let k_A(i,j) be the smallest integer k ≥ 0 such that (A^{l+p})_{ij} = (A^l)_{ij} for all l ≥ k; and define m_A(i,j) to be the smallest integer m ≥ 0 such that (A^{m+ap})_{ij} = 1 for all a ≥ 0.
Example 3.4.1 Let A ∈ IB_{n,p} and let D = D(A) with V(D) = {v_1, v_2, ···, v_n}. By Proposition 1.1.2(vii), k_A(i,j) is the smallest integer k ≥ 0 such that for each l ≥ k, D has a directed (v_i, v_j)-walk of length l + p if and only if D has a directed (v_i, v_j)-walk of length l; and m_A(i,j) is the smallest integer m ≥ 0 such that for every a ≥ 0, D has a directed (v_i, v_j)-walk of length m + ap.

l1 z2
z.
we can study
Theorem 3.4.5 (Shao and Li [251]) Let A ∈ IB_{n,p}. Each of the following holds.
(i) k(A) = max_{1≤i,j≤n} {k_A(i,j)}, and
(ii) for 1 ≤ i, j ≤ n, k_A(i,j) = m_A(i,j) − p + 1 if m_A(i,j) ≥ p − 1, and k_A(i,j) = 0 if m_A(i,j) < p − 1.

Proof It suffices to prove (ii). Let m = m_A(i,j) and k = k_A(i,j), and let D = D(A) with V(D) = {v_1, ···, v_n}. Suppose that l ≥ m − p + 1 is an integer. If l ≡ m (mod p), then l = m + ap for some integer a ≥ 0, and by Definition 3.4.3, (A^l)_{ij} = (A^{l+p})_{ij} = 1. Assume that l ≢ m (mod p). By Proposition 1.1.2(vii), D has a directed (v_i, v_j)-walk of length exactly m. It follows by Lemma 3.2.1(ii) that D has no directed (v_i, v_j)-walk of length exactly l or l + p, and so (A^l)_{ij} = (A^{l+p})_{ij} = 0.
Thus for each l ≥ m − p + 1, (A^l)_{ij} = (A^{l+p})_{ij}, and so by Definition 3.4.3, k ≤ m − p + 1. On the other hand, by Definition 3.4.3, (A^{m−p})_{ij} = 0 ≠ 1 = (A^m)_{ij}, and so k ≥ m − p + 1. □
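Theorem 3.4.5 can be checked numerically on a small member of IB_{n,p}; the sketch below (Python with NumPy, our own helper names) computes k_A(i,j) and m_A(i,j) from Definition 3.4.3 for the complete bipartite graph K_{2,3} ∈ IB_{5,2} and verifies both parts of the theorem.

```python
import numpy as np

def bool_mul(P, Q):
    return (P.astype(int) @ Q.astype(int)) > 0

def k_and_p(A):
    # global index of convergence k(A) and period p(A) via cycle detection
    seen, P, m = {}, np.eye(A.shape[0], dtype=bool), 0
    while P.tobytes() not in seen:
        seen[P.tobytes()] = m
        P = bool_mul(P, A)
        m += 1
    return seen[P.tobytes()], m - seen[P.tobytes()]

# K_{2,3}: irreducible, symmetric, p(A) = 2, k(A) = 1
A = np.zeros((5, 5), dtype=bool)
A[:2, 2:] = True
A[2:, :2] = True

K, p = k_and_p(A)
H = K + 2 * p
pw = [np.eye(5, dtype=bool)]
for _ in range(H):
    pw.append(bool_mul(pw[-1], A))      # pw[l] = A^l

def m_local(i, j):
    # m_A(i,j): least m >= 0 with (A^{m+ap})_{ij} = 1 for all a >= 0
    for m in range(K + p):
        if all(pw[l][i, j] for l in range(m, H + 1, p)):
            return m

def k_local(i, j):
    # k_A(i,j): least k >= 0 with (A^{l+p})_{ij} = (A^l)_{ij} for all l >= k
    k = K
    while k > 0 and pw[k - 1][i, j] == pw[k - 1 + p][i, j]:
        k -= 1
    return k

locs = [(i, j) for i in range(5) for j in range(5)]
assert max(k_local(i, j) for i, j in locs) == K            # Theorem 3.4.5(i)
for i, j in locs:
    assert k_local(i, j) == max(m_local(i, j) - p + 1, 0)  # Theorem 3.4.5(ii)
```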
D Theorem 3.4.6 (Shao and Li [251]) Let A E IDn,, and D
= D(A).
Let l(D)
=
{it, la, ... 'z.}. For 1 :s; i,j :s; n,
Proof Let W be a shortest directed (v,, VJ)walk which intersects directed cycles of each length in l(D). Then by the definition of the Frobenius number, for an integer a ~ 0, D has a directed (v1, v1)walk of length exactly dL(A)(i,j) + ~(lt.l2 , • .. z.} + ap, and so the theorem follows by Definition 3.4.3. O By applying the techniques used in Theorems 3.4.5 and 3.4.6, Shao, Shao and Wu obtained the DulmageMendelsohn type upper bound. Theorem 3.4.7 (Shao, [242), [243), Wu and Shao [279)) Let A E IDn,P• D l =min{l1 E l(D)}. Then
= D(A) and
Theorem 3.4.8 (Shao, [242]) Let A e IDn,p and D = D(A). Write n = pr + 8, where 0 :S; 8 < p. H l(D) contains three distinct cycle lengths different from p, then
~r + 4 J+ 8.
k(A) :S; PL r2 Definition 3.4.5 Let A E IDn,p· Define m(A)
max
1$i,j$n
{m(A) : A E IDn,p}
IMn,, ln,p
mA(i,j)
=
{k(A) : A
e mn.p}
Shao and Li [251] investigated the structure of I_{n,p} and found that I_{n,p} may have gaps. The complete characterization of I_{n,p} was done by Shao and Li [251] and Wu and Shao [279]. Lemma 3.4.6 below can be obtained by quoting Lemmas 3.4.3 and 3.4.4 (with t = 1).

Lemma 3.4.6 (Shao and Li, [251]) Let A = (A_1, A_2, ···, A_p) ∈ IB_{n,p}. Then for i = 1, 2, ···, p,

p·γ(A_i(p)) − p + 1 ≤ k(A) ≤ p·γ(A_i(p)) + p − 1.
Theorem 3.4.9 (Shao and Li, [251]) Let n, p, k_1, k_2 be positive integers such that n = pr + s, where 0 ≤ s < p. If for all k with k_1 ≤ k ≤ k_2, k ∉ E_r, then for all m with k_1 p ≤ m ≤ k_2 p, m ∉ I_{n,p}. In particular (when k_1 = k_2 = k), if k ∉ E_r, then kp ∉ I_{n,p}.

Proof Suppose that there exists an m with k_1 p ≤ m ≤ k_2 p and m ∈ I_{n,p}. Then there exists an A ∈ IB_{n,p} with m = k(A). By Corollary 3.2.3D, we may assume that there exist matrices A_1, A_2, ···, A_p such that A = (n_1, A_1, n_2, A_2, ···, n_p, A_p, n_1) satisfies Theorem 3.2.3. Since n_1 + n_2 + ··· + n_p = n = pr + s < p(r + 1), there must be some n_i ≤ r. By Corollary 3.2.3B, A_i(p) is an n_i × n_i primitive matrix, and so by Theorem 3.3.4, γ_i = γ(A_i(p)) ∈ E_{n_i} ⊆ E_r. On the other hand, by Lemma 3.4.6,

pγ_i − p + 1 ≤ m ≤ pγ_i + p − 1,

and so k_1 ≤ γ_i ≤ k_2, contrary to the assumption that no k with k_1 ≤ k ≤ k_2 lies in E_r. □
Example 3.4.2 Dulmage and Mendelsohn [77] showed that if r ≥ 4 is even, then for any k with r² − 4r + 7 ≤ k ≤ r² − 2r, we have k ∉ E_r. By Theorem 3.4.9, for any p > 0, s ≥ 0 and n = pr + s, we have pk ∉ I_{n,p}, and so I_{n,p} may have gaps.
=pr+s with 0 $ s < p.
where It(n,p)
=
l2(n,p)
=
2 {1,2,···,p( !r+ 4 J+s} and
~p.~p2 ~·· {r2(~1),···,r2 (~2)+n} '"I
3.5
+ '"2 i!: (r + S)p pd(Plo pt) = P
The Index of Convergence of Reducible Nonprimitive Matrices
In 1970, Schwarz presented the first upper bound of k(A) for reducible matrices in B_n. Little had been done on this subject until Shao obtained a Dulmage-Mendelsohn type upper bound in 1990.
Theorem 3.5.1 (Schwarz, [233]) For each A ∈ B_n, k(A) ≤ (n − 1)² + 1. Moreover, if A is reducible, then k(A) ≤ (n − 1)².
Theorem 3.5.2 (Shao, [243]) Let A ∈ B_n, D = D(A), and let D_1, D_2, ···, D_c be the strong components of D. Let n_0 = max_{1≤i≤c}{|V(D_i)|}, and let s_0 be the maximum of the shortest lengths of directed cycles in each D_i, 1 ≤ i ≤ c, if D has a directed cycle, or s_0 = 0 if D is acyclic. Then

k(A) ≤ n + s_0(n_0 − 2).
Theorem 3.5.2 implies Theorem 3.5.1 and Theorem 3.3.1 (Exercise 3.17). In Theorem 3.5.3 below, Shao [243] applied Theorem 3.5.2 to estimate the upper bound of k(A) for reducible matrices A.

Lemma 3.5.1 Let X ∈ B_n have the following form:

X = [ B   0 ]
    [ xᵀ  a ]      (3.8)

where B ∈ B_{n−1}, x is a 0-1 vector, and a ∈ {0, 1}. Each of the following holds.
(i) If a = 0, then k(B) ≤ k(X) ≤ k(B) + 1.
(ii) If a = 1, then k(B) ≤ k(X) ≤ max{k(B), n − 1}.

Proof The relationship between X^{k+1} and B^{k+1} can be seen in Exercise 3.19. Thus by Definition 3.2.2, k(B) ≤ k(X). When a = 0, Lemma 3.5.1(i) follows immediately from direct matrix computation (Exercise 3.19(i)). Assume that a = 1. Since B ∈ B_{n−1}, for any k ≥ n − 2, I + B + ··· + B^k = I + B + ··· + B^{n−2}, by Proposition 1.1.2(vii). By matrix computation (Exercise 3.19(ii)), if m ≥ max{k(B), n − 1}, then X^m = X^{m+p(B)}, and so k(X) ≤ max{k(B), n − 1}. □
Lemma 3.5.2 Let n ≥ 3 be an integer and let A ∈ B_n be a reducible matrix. If k(A) > n² − 5n + 9, then there exists a matrix X ∈ B_n of the form (3.8) such that B ∈ B_{n−1} is primitive, such that D(B) has a shortest directed cycle of length n − 2, and such that either A ≅_p X or Aᵀ ≅_p X.

Proof Let D = D(A). If every strong component of D is a single vertex, then Aⁿ = A^{n+1} = 0, and so k(A) ≤ n < n² − 5n + 9, a contradiction. Hence D must have a strong component D_1 with |V(D_1)| > 1. By Theorem 3.5.2,

k(A) ≤ n + s_0(n_0 − 2),      (3.9)

where s_0 and n_0 are defined in Theorem 3.5.2. Since A is reducible,

s_0 ≤ n_0 ≤ n − 1.      (3.10)
If s_0 ≤ n − 3, then by (3.9) and (3.10), k(A) ≤ n + (n − 3)(n − 1 − 2) = n² − 5n + 9, a contradiction. If s_0 = n − 2 and n_0 = n − 2, then by (3.9), k(A) ≤ n + (n − 2)(n − 4) < n² − 5n + 9, another contradiction. If s_0 = n − 1, then by (3.10), n_0 = n − 1, and D has a strong component D_1 which is a directed cycle of n − 1 vertices. Therefore we may assume that A or Aᵀ has the form (3.8), and the submatrix B in (3.8) must be a permutation matrix. It follows that B^{n−1} = I_{n−1}, and so k(B) = 0. By Lemma 3.5.1, k(A) ≤ max{k(B) + 1, n − 1} ≤ n − 1 < n² − 5n + 9, a contradiction. Therefore it must be the case that s_0 = n − 2 and n_0 = n − 1, and so we may assume that A or Aᵀ has the form (3.8) and that the associated digraph D(B) of the submatrix B in (3.8) has shortest directed cycle length n − 2. It remains to show that B is primitive. If not, then by Theorem 3.2.2, every directed cycle of D(B) has length exactly n − 2. Since D(B) is strong, B ∈ IB_{n−1,n−2}, and so k(B) ≤ 1 by Theorem 3.4.2, which implies by Lemma 3.5.1 that k(A) ≤ n² − 5n + 9, a contradiction. Therefore, B must be primitive. □
Theorem 3.5.3 (Shao, [243]) Let A ∈ B_n be a reducible matrix. Then

k(A) ≤ (n − 2)² + 2.      (3.11)

Moreover, when n ≥ 4, equality holds in (3.11) if and only if A ≅_p R_n or Aᵀ ≅_p R_n, where R_n is the n × n matrix whose upper left (n − 1) × (n − 1) corner submatrix is the adjacency matrix of the digraph D_1 of Example 3.3.1 on n − 1 vertices, whose last column is zero, and whose last row is (1, 0, ···, 0, 0).

Proof Note that (3.11) holds trivially when n ∈ {1, 2}. Assume that n ≥ 3. Since n² − 5n + 9 ≤ (n − 2)² + 2, we may assume that k(A) > n² − 5n + 9. By Lemma 3.5.2, we may assume that A or Aᵀ has the form (3.8). By Theorem 3.5.1 or Theorem 3.5.2, k(B) ≤ (n − 2)² + 1, and so (3.11) follows by Lemma 3.5.1.

Now assume that n ≥ 4 and that A ≅_p R_n or Aᵀ ≅_p R_n. Then direct computation
yields k(R_n) = (n − 2)² + 2, and thus k(A) = k(R_n) = (n − 2)² + 2.

Conversely, assume that n ≥ 4 and k(A) = (n − 2)² + 2 > n² − 5n + 9. By Lemma 3.5.2, we may assume that A ≅_p X or Aᵀ ≅_p X, where X is of the form (3.8). Note that k(B) ≤ (n − 2)² + 1. If a = 1 in (3.8), then by Lemma 3.5.1,

k(A) = k(X) ≤ max{k(B), n − 1} ≤ (n − 2)² + 1,

contrary to the assumption that k(A) = (n − 2)² + 2. Therefore we must have a = 0, and so by

(n − 2)² + 2 = k(A) = k(X) ≤ k(B) + 1 ≤ (n − 2)² + 2,

k(B) = (n − 2)² + 1. By Proposition 3.3.1 and Proposition 3.3.3, D(B) must be the digraph in Example 3.3.1 with n − 1 vertices, and so we may assume that B is the (n − 1) × (n − 1) upper left corner submatrix of R_n. It remains to show that the vector in the last row of A in (3.8) is xᵀ = (1, 0, 0, ···, 0). By direct computation,

B^{(n−2)²} = [ 0            J_{1×(n−2)} ]
             [ J_{(n−2)×1}  J_{n−2}     ].

Since k(X) = k(A) = (n − 2)² + 2, it follows that xᵀ = (1, 0, 0, ···, 0). □
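The extremal value k(R_n) = (n − 2)² + 2 can be reproduced computationally. Since the printed display of R_n above is not fully legible, the sketch below makes one labeled assumption: the single arc out of the added vertex points at a vertex of maximum reach in the (n − 1)-vertex extremal digraph, which yields k = γ(B) + 1 = (n − 2)² + 2.

```python
import numpy as np

def bool_mul(P, Q):
    return (P.astype(int) @ Q.astype(int)) > 0

def k_and_p(A):
    # index of convergence and period via cycle detection on I, A, A^2, ...
    seen, P, m = {}, np.eye(A.shape[0], dtype=bool), 0
    while P.tobytes() not in seen:
        seen[P.tobytes()] = m
        P = bool_mul(P, A)
        m += 1
    return seen[P.tobytes()], m - seen[P.tobytes()]

def reach(A, u):
    # h_u: least h such that u has a walk of length exactly h to every vertex
    row = np.eye(A.shape[0], dtype=bool)[u]
    h = 0
    while not row.all():
        row = bool_mul(row[None, :], A)[0]
        h += 1
    return h

def R_matrix(n):
    # upper-left block: the extremal primitive digraph of Example 3.3.1 on n-1
    # vertices; the new vertex gets a single out-arc and no in-arcs.
    # Assumption (ours): the arc targets a vertex of maximum reach.
    m = n - 1
    B = np.zeros((m, m), dtype=bool)
    for i in range(m):
        B[i, (i + 1) % m] = True
    B[m - 2, 0] = True
    target = max(range(m), key=lambda u: reach(B, u))
    R = np.zeros((n, n), dtype=bool)
    R[:m, :m] = B
    R[m, target] = True
    return R

for n in (5, 6, 7):
    assert k_and_p(R_matrix(n)) == ((n - 2) ** 2 + 2, 1)
```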
Theorem 3.5.4 Let n ≥ 1 be an integer. There is no reducible matrix A ∈ B_n such that

n² − 5n + 9 < k(A) < (n − 2)².      (3.12)

Proof By contradiction, assume that there exists a reducible matrix A ∈ B_n satisfying (3.12). Then n ≥ 7. By Lemma 3.5.2, A ≅_p X or Aᵀ ≅_p X, where X is the matrix in (3.8) such that B is primitive and such that the shortest length of a directed cycle in D(B) is n − 2. By Proposition 3.3.3 and Proposition 3.3.1, there are exactly two such digraphs (as described in Proposition 3.3.3), and k(B) ≥ (n − 2)². By Lemma 3.5.1, (n − 2)² > k(A) = k(X) ≥ k(B) ≥ (n − 2)², a contradiction. □
~
13 be an integer. H n is odd, then there is no matrix A E Bn
Powers of Nonnegative Matrices
124 if n is even, then there is no matrix A E B,. such that
Sketch of Proof Assume such an A exists. By Proposition 3.3.1 and Proposition 3.3.3(ii),
A is not primitive. By Theorem 3.5.3, A is not reducible. Therefore, A must be irreducible and imprimitive, and so p(A) = p ;;:: 2. Write n = pr + 8 with 0 :5 8 < p. By Theorem 3.4.2, k(A)
:5 p(r2
=

2r + 2) +
8
p(r2 +5)2(pr+8)3(p8)
n2
:5 pr2 +5p2n3:5+3n3 p
n2
<_ 2 + 3n 3 < n2  4n + 6 ,
0
and so Theorem 3.5.5 obtains. □

For integers j, m, n ≥ 0, let

E_m + j = {a + j : a ∈ E_m};
RI_n = {k : k(A) = k for some reducible A ∈ B_n};
BI_n = {k : k(A) = k for some A ∈ B_n}.

Jiang and Shao completely determined RI_n and BI_n. For Problem IS in other classes of matrices, the study has just begun.

Theorem 3.5.6 (Jiang and Shao, [139])

RI_n = ∪_{i=1}^{n−1} ∪_{j=0}^{i} (E_{n−i} + j),
BI_n = ∪_{i=0}^{n−1} ∪_{j=0}^{i} (E_{n−i} + j).
Given a matrix A ∈ B_n, if D(A) has a directed cycle of length s and the positive entries of A lying on directed cycles of length s occupy exactly t rows, we say that A has s-cycle positive elements on exactly t rows.

Theorem 3.5.7 (Zhou and Liu, [288]) Let n ≥ t ≥ 1 be integers and let A ∈ B_n be a matrix with s-cycle positive elements on exactly t rows. If s = 1 or if s is a prime number, then

k(A) ≤ (n − t − 1)² + 1 if t ≤ n − √(s(n − 1) + 1/4) + 1/2,
k(A) ≤ (s + 1)n − t − s if t > n − √(s(n − 1) + 1/4) + 1/2.

The upper bound is best possible except when t > n − √(s(n − 1) + 1/4) + 1/2 and gcd(s, n) > 1.
Theorem 3.5.7 has been improved by Zhou [285]. An important case of Theorem 3.5.7 is when s = 1.
Theorem 3.5.8 (Liu and Shao, [183], Liu and Li, [179]) Let n ≥ d ≥ 1 be integers. Suppose that A ∈ B_n has d positive diagonal entries. Then

k(A) ≤ (n − d − 1)² + 1 if 1 ≤ d ≤ ⌊(2n − 3 − √(4n − 3))/2⌋,
k(A) ≤ 2n − d − 1 if d ≥ ⌈(2n − 3 − √(4n − 3))/2⌉.

Let I_n(d) = {k : k = k(A) for some A ∈ B_n with d positive diagonal entries}. Liu et al completely determined I_n(d) as follows.

Theorem 3.5.9 (Liu, Li and Zhou, [181])

I_n(d) = {1, 2, ···, 2n − d − 1} ∪ (a union of translated exponent sets E_{n−d} + j, with ranges specified in [181]) if 1 ≤ d ≤ (2n − 3 − √(4n − 3))/2, and

I_n(d) = {1, 2, ···, 2n − d − 1} if d > (2n − 3 − √(4n − 3))/2.
Theorem 3.5.10 (Liu, Shao and Wu, [184]) If A ∈ Ω_n, then

k(A) ≤ ⌈n²/4⌉ + 1 if n = 5, 6 or n ≡ 0 (mod 4),
k(A) ≤ ⌈n²/4⌉ otherwise.

Moreover, these bounds are best possible. The extremal matrices of k(A) in Theorem 3.5.8 and Theorem 3.5.10 have been characterized by Zhou and Liu ([288] and [287]).
3.6
Index of Density
Definition 3.6.1 For a matrix A ∈ B_n, the maximum density of A is

ρ(A) = max{‖A^m‖ : m > 0},

and the index of maximum density of A is

h(A) = min{m > 0 : ‖A^m‖ = ρ(A)}.

For matrices in IB_{n,p}, define

h̄(n,p) = max{h(A) : A ∈ IB_{n,p}}.
Example 3.6.1 Let A ∈ B_n be a primitive matrix. Then ρ(A) = n² and h(A) = γ(A). Thus the study of the index of density will mainly concern imprimitive matrices. For a generic matrix A ∈ B_n with p(A) > 1, ρ(A) < n² and h(A) ≤ k(A) + p − 1 (Exercise 3.23).
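Both ρ(A) and h(A) can be read off the (eventually periodic) sequence of Boolean powers; a minimal sketch (Python with NumPy), using the bipartite star K_{1,2} as an imprimitive example with block sizes (1, 2):

```python
import numpy as np

def density_profile(A):
    # iterate Boolean powers until they repeat; return (rho, h):
    # rho(A) = max_m ||A^m|| and h(A) = least m > 0 attaining it
    seen, P, counts = {}, np.eye(A.shape[0], dtype=bool), []
    m = 0
    while P.tobytes() not in seen:
        seen[P.tobytes()] = m
        P = (P.astype(int) @ A.astype(int)) > 0
        counts.append(int(P.sum()))    # counts[m] = ||A^{m+1}||
        m += 1
    rho = max(counts)
    return rho, counts.index(rho) + 1

# K_{1,2}: symmetric, irreducible, period 2, parts of sizes 1 and 2
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=bool)
assert density_profile(A) == (5, 2)    # rho = 1^2 + 2^2 = 5, first attained at A^2
```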
Let A = (n_1, A_1, n_2, A_2, ···, n_p, A_p, n_1) ∈ IB_{n,p}. Let

B_0 = diag(J_{n_1×n_1}, J_{n_2×n_2}, ···, J_{n_p×n_p}),

let B_1 = (J_{n_1×n_2}, J_{n_2×n_3}, ···, J_{n_p×n_1}), the matrix of the block form (3.7) with every block replaced by an all-ones matrix of the same size, and let B_i = B_1^i, 1 ≤ i ≤ p − 1.

Theorem 3.6.1 k(A) = min{m > 0 : A^m = B_j, j ≡ m (mod p), 0 ≤ j ≤ p − 1}.

Proof Let

m_0 = min{m > 0 : A^m = B_j, j ≡ m (mod p), 0 ≤ j ≤ p − 1}.

Let k = k(A) and write k = rp + j, where 0 ≤ j < p. Since each A_i(p) is primitive, γ(A_i(p)) exists. Let e = max_{1≤i≤p}{γ(A_i(p))}. Then A^{ep} = B_0, and so A^{(r+e)p} = B_0 also. Hence

A^k = A^{k+ep} = A^{(r+e)p+j} = B_j,

and thus m_0 ≤ k. On the other hand, write m_0 = lp + j with 0 ≤ j < p. Then A^{m_0} = B_j, and so A^{m_0+p} = B_j A^p = B_j, whence k ≤ m_0. □
Corollary 3.6.1 Let A = (n_1, A_1, n_2, A_2, ···, n_p, A_p, n_1) ∈ IB_{n,p} and let γ_i = γ(A_i(p)). Then for each i = 1, 2, ···, p,

p(γ_i − 1) < k(A) < p(γ_i + 1).

Proof Note that A^0 = I. For each i, by the definition of γ_i,

(A_i(p))^{γ_i − 1} < J ⟹ A^{p(γ_i − 1)} < B_0 ⟹ k(A) > p(γ_i − 1).

To show k(A) < p(γ_i + 1), it suffices to show that A_j(p(γ_i + 1) − 1) = J for each j with 1 ≤ j ≤ p. Write i ≡ j + t (mod p), where 0 ≤ t < p. Then A_i = A_{j+t}, and so (A_{j+t}(p))^{γ_i} = J. It follows that

A_j(p(γ_i + 1) − 1) = A_j(pγ_i + p − 1) = A_j(t) (A_{j+t}(p))^{γ_i} A_{j+t}(p − 1 − t) = J. □
D Definition 3.6.2 Let aT= (a1 ,G2, •· · ,a,) be a vector. The circular period of a, denoted by r(a) or r(at, a2, • • · , a,), is the smallest positive integer m such that
With this definition, r(a11 a2,··· ,a,)IP· Heap and Lynn [119] started that investigation of h(A) and (n,p). Sbao and Li [251] gave an explicit expression of h(A) in terms of the circular period of a vector, and completely determined ii(n,p). Theorem 3.6.2 (Heap and Lynn, [119], Shao and Li, [251]) Let A E ffin,p with the form A= (nt,At,··· ,n,,A,,nl), and letT =r(n11 n 2 ,··· ,n,). Each of the following holds. p
(i) p(A)
= L: n~, i=l
(ii) h(A)
= min{m
: m?: k(A), rim}= rlk~) J.
Sketch of Proof Let m?: 0 be an integer with m = j (mod p), such that 0?: j < p. Define n; = n;' whenever j j' (mod p). By Theorem 3.6.1,
=
p
m?: k(A) ~Am =B; ~ IIAmll
=Enini+i• i=l
and p
m
< k(A) ~Am< B; ~ IIAmll < Enini+i· i=l
p
L: n
p
=
1 p
2  E n,n,+i 2 E
As
=
=
128
Powers of Nonnegative Matrires
Theorem 3.6.3 (Shao and Li, [251]) For integers n ;?: p ;?: 1, write n 0~8
=max{k(A)
= rp + 8, where
: A E IBn,,}.
Then ii.(n,p)
= =
Plk(n,p) J p
p(r2 2r+ 2) { p(r2 2r+3) p
1
Moreover, if k(A)
ifr > 1,8 =0 if r > 1, 0 < 8 < p if r 1, 0 < 8 < p . ifr 1,8 = 0.
= =
= k(n,p), then h(A) = ii.(n,p).
Sketch of Proof For each A E 1Bn,p 1 A~, (nt, At.··· , n,, A,, n1).
LetT= tau(n1. n2, · · · , n,). By Theorem 3.6.2, k(A)
= rl k(A) J ~ Plk(A) J ~ Plk(n,p) J. p
T
p
Assume that for some A~, (n1, At.··· , n,, A,, n 1) E IBn,,, k(A) = k(n,p). By Theorem 3.4.2 and Theorem 3.4.3, wemayassumethat (n1,n2, · · · ,n,) (r+l, · · · ,r+1,r, · · · ,r),
=
and so h(A) =plk(n,p) J. p
O
Definition 3.6.3 The index set for h(A) is H(n,p)
= {h(A)
: A E IBn,,}.
Thus H(n, 1) =En. Theorem 3.6.4 (Shao and Li, [251]) For integers n ;?: p ;?: 1, write n = rp + 8, where 0 ~ 8 < p. Each of the following holds. (i) H k ¢ Er and if k1 ~ k ~ ~. then for each integer m with pk1 < m ~ p~, m¢ H(n,p). (ii) H r is odd and if r ;?: 5, then (p(r2  3r + 5) + 1,p(r2  2r}j0 n H(n,p) = 0. (iii) H r is even and if r;?: 4, then [p(r2  4r + 7) + 1,p(r2  2r})O n H(n,p) = 0. Definition 3.6.4 For integers n ;?: p ;?: 1, let SIBn,p denote the set of all symmetric imprimitive irreducible matrices. Example 3.6.2 Let A E S1Bn,2 and let D the diameter of Dis k(A) + 1.
= D(A). Then D(A) is a bipartite graph and
Powers of Nonnegative Matrices
129
=
_ { k(A) if n1 n2 h (A)1: A 2l¥J ifn1 #n:~. Proof This follows from Theorem 3.6.2.
O
Example 3.6.3 Define
SKn,2 = {k(A) : A
e SIBn.2} and SHn,2 =
{h(A) : A
e SIB,..2 }.
For integers n ;':::: k+2 ;':::: 3, let G(n,k) to be the graph obtained from Knlc,l by replacing an edge of Knlc,l by a path kedges. Then we can show that k(A(G)) = k, and so
SKn,2 = {1,2, · · · ,n 2}. The same technique can be used to show the following Theorem 3.6.6. Theorem 3.6.6 (Shao and Li, [251]) Let n ;':::: 2 be an integer. Each of the following holds. (i) H n is even, then SH,., 2 = [1, n  2] 0 • (ii) H n is odd, then SH,.,2 consists of all even integers in [2, n W. For tournaments, Zhang et al completely determined the index set of maximum density. Theorem 3.6.7 (Zhang, Wang and Hong, [284]) Let ST,. = {h(A) : A
{1} ST,.=
3. 7
{1,9} {1,4,6,7,9} {1,2,···,8,9}\{2} {1, 2, · · · , n + 2} \ {2} {1,2,··· ,n+2}
e T,.}.
Then
if n = 1,2,3 ifn=4 ifn=5 ifn=6 ifn=7,8,··· ,15 ifn;::: 16.
Generalized Exponents of Primitive Matrices
The main purpose of this section is to study that generalized exponents ezp(n, k), f(n, k) and F(n,k), to be defined in Definitions 3.7.1 and 3.7.2, and to estimate their bounds. Definition 3.7.1 For a primitive digraphD with V(D) = {vt,t/:a, · · · ,v,.}, and for v;,v; E V(D), define exJ>D(v;,vi) to be the smallest positive integer p such that for each integer t ;': : p, D has a directed (v;,v;)walk oflength t. By Proposition 3.3.1, this integer exists. For each i = 1, 2, · · · , n, define
Powers of Nonnegative Matrices
130
For convenience, we assume that the vertices of D are so labeled that
With this convention, we define, for integers n;;::: k;;::: 1, exp(n,k) =
max D Ia primitive a.ncl
IV(DII•
Let D be a primitive digraph with IV(D)I =nand let X ~ V(D) with lXI = k. Define expv(X) to be the smallest positive integer p such that for each u E V(D), there exists a v E X such that D has a directed (u, v )walk of length at least p; and define the kth lower multiexponent of D and the kth upper multiexponent of D as f(D,k)
=
min {expv(X)} and F(D,k) =
X~V(D)
max {expv(X)},
X~V(D)
respectively. We further define, for integers n ;;::: k ;;::: 1, f(n,k)
=
{f(D, k)} and
max IV(DII=•
F(n,k)
=
{F(D,k)}.
max D
1• primltl.e aad.
IV(D)I=•
These parameters exp(n,k), f(n,k) and F(n,k) can be viewed as generalized exponents of primitive matrices (Exercise 3.25). Example 3.7.1 Denote e:z:p(n) e:z:p(n) = (n 1) 2 + 1.
= e:z:p(n,n).
By Corollary 3.3.1A and Example 3.3.1,
Definition 3.7.2 Let D,. denote the digraph obtained from reversing every arc in the digraph D 1 in Example 3.3.1, and write V(D,.) = {v1 ,v2 , ... ,v,.}. For convenience, for j > n, define VJ =Vi if and only if j = i (mod n). For each Vi E V(D,.) and integer t :2:: 0, let Rt (i) be the set of vertices in D,. that can be reached by a directed walk in D of length t.
E V(D,.), lett :2::0 be an integer. Write t = p(n 1) +r, where r ~ n  1. Each of the following holds. (i) H t :2:: (n 2)(n 1) + 1, then Rt(1) V(D,.). (ii) H t ~ (n 2)(n 1) + 1, then Rt(l) {vr, Vtr, .. · , Vpr, Vpr+l}· (iii) H t;;::: (n 2)(n 1) + m, then Rt(m) = V(D,.).
Lemma 3.7.1 Let
Vm
p, r :2:: 0 are integers such that 0 ~
= =
Powers of Nonnegative Matrices
131
(iv) H 0 ~ t ~ m 1, then Rt(m) = {vmt}· (v) H m 1 ~ t ~ (n 2)(n 1) + m, then Rt(m)
=Rtm+1(1).
Proof (i) and (ii) follows directly from the structure of D,.. (iii), (iv) and (v) follows from (i) and (ii), and the fact that in D,., there is exactly one arc from vr. to v6 _ 1, 2 ~ k ~ n. Theorem 3. 7.1 Let n ;:: k ;:: 1 be integers. Each of the following holds. (i) ezpv,. (k) = n2  3n + k + 2. n1 n1 (ii) j(D,., k) = 1 + (2n k 2)lkJ  klkj 2 • Proof By Lemma 3.7.1, ezpv,.(k) = (n 2)(n1) +k = n 2 3n+k+2.
=
=
1, (ii) follows by Example 3.3.1. Assume that k < n. Write n  1 qk+8, where 0 ~ 8 < k. Then the right hand side of (ii) becomes (q1)(n1)+1+8(q+l). We construct two subsets X andY in V(D,.) as follows. Let X = { Vi 1 , vi.,· · · , v,.} such that i 1 = 1, and such that, for j ;:: 2,
Note that when k
.
z;
Let Y
= {vi1+q1
:
= { i;1 . +q+ 1 Zj1 +q
if2~j~8+1 if8+2~j~k.
1 ~ j ~ 8} andY= V(D,.) \ Y. We make these claims.
Claim 1 H X*= {u1,u2 , • • • ,u~:} ~ V(D,.), then
expv,. (X*) ;:: (q 1)(n 1) + 1 + 8(q + 1). Note that from any vertex v E X*, v can reach at most n 8 vertices by using directed walks of length (n 1)(q 1) + 1. H a vertex v~, where 1 ~ l < 8(q + 1)  1, cannot be reached from X* by directed walks of length (n 1)(q 1) + 1, then adding a directed walk of length 8(q + 1)  1 cannot reach the vertex v,.. Thus Claim 1 follows. Claim 2 Every vertex in Y can be reached from a vertex in X by a directed walk of length 1 + (n 1)(q 1) in D,.. In fact, by Lemma 3.7.1, Rl+(n1)(q1)(va.)
{vn1 1 V,.,V1,V2,··· ,Vi1 +q2}
Rl+(n1)(q1)(Vi2 )
=
{vio1 1 Vi2 , ·
Rl+(n1)(q1)(vi~:)
=
{vi.1,
••
,Vi2+q2}
v,., ···,
Vi•+q2}·
Thus Claim 2 becomes clear since (i;  1)  (ij1
+ q 2) =i; 
i;1  q + 1
={ ~
if2~j~8+1
if 8+2
~j ~
k.
Powers of Nonnegative Matrices
132
Claim 3 Every vertex ViE V(D,.) can be reached by a directed walk from a vertex in Y of length s(q + 1); but not every vertex can be reached by a directed walk from a vertex in Y of length s(q + 1) 1. It suffices to indicate that v,. cannot be reached by a directed walk from a vertex in Y of length s(q + 1)  1. In fact, since s(q + 1)  1 = i. + q 1, if v,. can be reached by a directed walk in D,. of length s(q + 1) 1, then the initial vertex of the walk must be E Y. By Claims 2 and 3, f(Dn,k) :5 expv,.(X) :5 (q1)(n1)+1+s(q+1). This, together with Claim 1, implies {ii). D
V•(q+l)I
Theorem 3. 7.2 Let n ;?:: k ;?:: 1 be integers. Then F(D,., k)
= (n 1)(n k) + 1.
Proof Let X'= {v1.v2,··· ,v~:I,v,.}. By Lemma 3.7.1, Dn has no directed walk of length (n 1)(n k) from a vertex in X' to v,.. Thus F(D,., k) ;?:: (n 1)(n k) + 1. On the other hand, by Lemma 3.7.1 again, for any vertex Vie V(D,.), the end vertices of directed walks from v, oflength (n 1) (n k) + 1 consists of n k + 1 consecutive vertices in a section of the directed cycle VI V2 • • • Vn v1 • Since any k distinct such sections must coverall vertices of Dn, for any X~ V(Dn) with lXI =k, expv.. (X) :5 (n1)(nk)+l. This proves the theorem. D Lemma 3.7.2 Let n;?:: k;?:: 1 be integers and let D be a primitive digraph with V(D,.) {v1,v2, · ·· ,v,.}. H D has a loop at v,, 1:5 i :5 r, then expv(k) :5 {
n1
ifk:5r
n1+kr
if k;?:: r.
=
Proof Assume that D has a loop at VI. V2, • • • , vr. Then expv(vi) :5 n 1, 1 :5 i :5 r. Thus if k :5 r, then expv(k) :5 n 1. Assume k >rand L ={vi,'" ,vr}· Since Dis strong, V(D) has a subset X with lXI = k  r such that any vertex in X can reach a vertex in L with a directed walk of length at most k  r, and any vertex in L can reach a vertex in X with a directed walk of length at most k r. Thus expv(v) :5 (n 1) + (k r), Vv e XU L. D Lemma 3. 7.3 Let n;?:: k ;?:: 2 be integers and let D be a primitive digraph with jV(D)I Then
=n.
expv(k) :5 expv(k 1) + 1.
Proof Assume that expv(vi) such that (vi,v) E E(D). D
= expv(i), 1 :5 i :5 n. Since Dis strong, D has a vertex v
Powers of Nonnegative Matrices
133
Theorem 3. 7.3 (Brualdi and Liu, [33]) Let n ~ k ~ 1 be integers and let D be a primitive digraph with IV(D)I = n. H 8 is the shortest length of a directed cycle of D, then (k)
ifk~8
< { 8(n 1)
expv
8(n1+k8)
ifk>8
Sketch of Proof Given D, construct a new digraph D<•> such that V(D<•>) = V(D), where (x, y) e E(D<•>) if and only if D has a directed (x, y)walk of length 8. Then D' has at least s vertices attached with loops, and so Theorem 3.7.3 follows from Lemma 3.7.2.
0 Theorem 3. 7.4 (Brualdi and Liu, [33]) Let n exp(n,k)
= n2 
~
k
~
1 be integers. Then
3n+ k + 2.
Proof Let D be a primitive digraph with IV(D)I = n. By Lemma 3.7.3, expv(k) ~ expv(1) + (k 1). Let 8 denote the shortest length of directed cycles in D. If 8 ~ n 2, then by Theorem 3.7.3, expv(l) ~ n 2  3n + 2 and so the theorem obtains. Since D is primitive, by Theorem 3.2.2, n ~ n 1. Assume now 8 = n 1. Since D is strong, D must have a directed cycle of length nand soD has D,. (see Definition 3.7.2) as a spanning subgraph. Theorem 3.7.4 follows from Theorem 3.7.1(i) and Theorem 3.7.3.
0 Shao et al [253] proved that the extremal matrix of exp(n, k) is the adjacency matrix of D,.. In [185] and [241], the exponent set for expv(k) was partially determined. Lemma 3.7.4 Let n ~ k > 8 > 0 be integers and let D be a primitive digraph with IV(D)I =nand with 8 the shortest length of a directed cycle of D. Then f(D, k) ~ n k.
=
Proof Let Y c V(D) be the set of vertices of a directed cycle of D with IYI s. Since Dis strong, V(D) has a subset X such that Y c X~ V(D) and such that every vertex in X \ Y can be reached from a vertex in Y by a directed walk with all vertices in X. Thus any vertex in V(D) can be reached from a vertex in X by a directed walk of length exactly n  k. D Lemma 3. 7.5 Let n s, then
~ 8
> k ~ 1 be integers. f(D,k)
~
H D has a shortest directed cycle of length
1+8(nk1).
Proof Let C, = x 1 x2 • • ·x.x1 be a directed cycle in D. Siilce Dis strong, we may assume that there exists z E V(D) \ V(C.) such that (x1, z) E E(D).
Powers of Nonnegative Matrices
134
Let X= {x1,x2, · · · ,x,}, and let Y be the set of vertices in D that can be reached from vertices in X by a directed path of length 1. Then {z, x2, · · · , XHt} ~ Y. V(D), where (u, v) E E(D<•>) if and Construct a new digraph D(•) with V(D<•>) only if D has a directed (u, v)walk of length 8. Then D' has a loop at each of the vertices x2, · · · x1c+1, and (x2, z) E E(D<•>). Thus, each vertex in D(•} can be reached from a vertex in Y by a directed walk of length exactly n k 1, and so every vertex in D can be reached from a vertex in X by a directed walk of length exactly 1 + 8(n k 1). O
=
Lemma 3.7.6 can be proved in a way similar to the proof for Lentma 3.7.5, and so its proof is left as an exercise. Lemma 3. 7.6 Let n > 8 ;::: k ;::: 1 be integers such that kl8. Let D be a primitive digraph with IV(D)I =nand with a directed cycle of length 8. Then /(D,k) :$;
1+ 8(n:1).
Theorem 3.7.5 (Brualdi and Liu, [33]) Let n
f(n,k) :$; n 2

> k;::: 1 be integers.
Then
(k+2)n+k+2.
Sketch of Proof Any primitive digraph on n vertices must have a directed cycle of length 8 :$; n 1, by Theorem 3.2.2. Thus Theorem 3.7.5 follows from Lemmas 3.7.4 and 3.7.5.
0
Theorem 3.7.6 Let n > k ;::: 1 be integers such that kl(n 1). Let f*(n,k) = max{/(D,k) : D is a primitive digraph on n vertices with a directed cycle of length 8 and kl8}. Then
f*(n,k)
= n2 
(k 2)n+2k+ 1_ k
Proof This follows by combining Lemma 3.7.6 and Theorem 3.7.1(ii).
0
=
Lemma 3.7.7 Let D be a primitive digraph with IV(D)I n, and let 8 and t denote the shortest length and longest length of directed cycles in D, respectively. Then
F(D,n 1) :$; max{n 8 1 t}. Proof Pick X c V(D) with lXI = n  1. H V(C) ~ X for some directed cycle C of length p, where 8 5 p 5 t, then any vertex in D can be reached by a directed walk from a vertex in V (C) of length n  p. Hence we assume tllat no directed cycle of D is contained in X. Let u denote tile only vertex in V (D) \ X. Then every directed cycle of D contains u.
Powers of Nonnegative Matrices
135
Let C1 be a directed cycle of length t in D. Then u E V (C1). Since D is strong, every vertex lies in a directed cycle of length at most t, and so every vertex in X can be reached from a vertex in X by a directed walk of length exactly t. Since D is primitive, and by Theorem 3.2.2, D has a directed cycle 0 2 of length q with 0 < q < t. Lett= mq + r with 0 < r ~ q. let v E V(Cl) be the (t r)th vertex from u. Then C1 has a directed (v, u)path. By repeating C2 m times, D has a directed (v, u)walk of length t. Hence expv(X) ~ max{n s, t}. D Theorem 3.7.7 F(n,n 1)
= n.
Proof By Theorem 3.7.2, F(n,n 1) ~ F(Dn,n 1) = n. By Lemma 3.7.7, for any primitive digraph D with IV(D)I =n, F(D,n1) ~ max{ns,t} ~ max{nl,n} = n.
0 Lemma3.7.8 Let n ~ m ~ 1 beintegersandletD be a primitive digraph with IV(D)I such that D has loops at m vertices. Then for any integer k with n ~ k ~ 1, F(D,k)
~{
=n
ifk>nm
n 1
2nmk
ifk~nm.
Proof Let X!;;; V(D) with lXI = k. Assume first that D has a loop at a vertex vEX. Then every vertex of D can be reached from v by a directed walk of length exactly n 1, and so F(D, k) ~ n 1. Note that when k > n m, X must have such a vertex v. Assume then k ~ n  m and no loops is attached to any vertex of X. Then X has a vertexx such that D has a directed (x,w)path of length at most nmk+1, for some vertex w E V(D) at which a loop of D is attached. Thus any vertex in D can be reached from a vertex in X by a directed walk of length exactly 2n  m  k. O Theorem 3.7.8 (Brualdi and Liu, [33]) Let n ~ k ~ 1 and 8 > 0 be integers. H a primitive digraph D with IV(D)I = n has a directed cycle of length 8, then F(D k) < { 8(n 1) ' 8(2n 8 k) Sketch of Proof Apply Lemma 3.7.8 to
ifk>n8 if k ~ ns.
n<•>. D
Theorem 3.7.9 (Liu and Li, [179]) Let n ~ k ~ 1 be integers, and let D be a primitive digraph with IV(D)) = n and with shortest directed cycle length s. Then F(D,k) ~ (n k)s + (n s). Proof It suffices to prove the theorem when n > k ~ 1. Let c. be a directed cycle of length s and let X !;;; V(D) be a subset with lXI = k < n. Let v e V(D) and let
Powers of Nonnegative Matrices
136
+ (n  s). We want to find a vertex x E X such that D has a directed (x, v)walk of length exactly t. Fix v E V(D). Then there is a vertex x' EX such that D has a directed (x',v)walk of length d::;; n s. Since is a directed cycle, then for any h;::: d, there exists a vertex x" E V(C.) such that D has a directed (x'',v)walk of length h. Note that in n<•l, x" is a vertex at which a loop is attached. Since lXI = k and since n<•l has a loop atx"' we can find X E X such that n<•l has a directed (x, x'')walk of length n k. Thus D has a directed (x,x")walk of length s(n k), and soD has a
t ;::: (n  k)s
c.
directed (x, v)walk of length t ;::: s(n k) + (n s), consisting of a directed (x, x")walk and a directed (x',v)walk. 0 Theorem 3.7.10 (Liu and Li, [179]) Let n > k;::: 1 be integers. Then
F(n,k)
= (n l)(n k) + 1.
Proof Let D be a primitive digraph with IV(D)I =nand let s denote the shortest length of a directed cycle of D. Since Dis primitive, s;::: n 1. Thus by Theorem 3.7.9,
F(D,k)
= =
s(nk)+ns=s(nk1)+n ~~~k~+n=~~~~+1.
Theorem 3.7.10 proves a conjecture in [33]. By Theorem 3.7.2, the bound in Theorem 3.7.10 is best possible. The extremal matrices for F(D, k) have been completely determined by Liu and Zhou [186]. The determination of f(n, k) for general values of nand k remains unsolved. Conjecture Let n ;::: k
+ 2 ;::: 4 be integers.
f(n,k)
3.8
Show that
n1 n1 = 1 + (2n k 2)LkJLkJ
2
k.
Fully indecomposable exponents and Hall expo
nents Definition 3.8.1 For integer n > 0, let F,. denote the collection of fully indecomposable matrices in B,., and P,. the collection of primitive matrices in B,.. For a matrix A E P,., define /(A), the fully indecomposable exponent of A, to be the smallest integer k > 0 such that AI: E F ,.. For an integer n > 0, define
f,. =max{/(A) : A E P,.}.
Powers of Nonnegative Matrices
137
The Proposition 3.8.1 follows from the definitions. Proposition 3.8.1 Let n
> 0 be an integer. Then
Pn ={A : A E Bn and for some integer k
> O,A11
E Fn}·
Schwarz [232] posed the problem to determine fn, and he conjectured that In However, Chao [53] presented a counterexample.
S n.
Example 3.8.1 Let
Ms=
0 0 0 1 1
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 1 0 0 0
Then we can compute M~, fori= 2,3,4,5 to see that f(Ms) ~ 6. In fact, Chao in [53] showed that for every integer n ~ 5, there exists an A E P n such that f(A) > n. However, Chao and Zhang [54] showed that if trA > 0, then /n S n. Example 3.8.2 For a matrix A E P n and an integer k Ak+I E Fn. Let
A=
1 0 0 0 0 1 0 1 0
0 0 0 0 1
0 1 0 0 0 0 0
> 1, that Ak E F n does not imply
0 0 0 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0
Then we can verify that A8 ,A9 E F 7, A 10 ,A11 ¢ F7, and Ai E F7 for all i ~ 12. Definition 3.8.2 For a matrix A E Pn, define f*(A), the strict fully indecomposable e:xponent of A, to be the smallest integer k > 0 such that for every i ~ k, A' E F n· Define f~ = max{f*(A) :
Thus in Example 3.8.2, f(A)
A E Pn}·
= 8, f*(A) = 12.
Proposition 3.8.2 Let A E Pn. Then !(A)
S J*(A) S y(A).
138
Powers of Nonnegative Matrices
Proof Let k =!(A). Then AI: E Fn, and so f(A) :::;; f*(A). By Proposition 3.3.1 1 for any and k' ~ y(A), we have AI:' > 0, and so AI:' E F n· Therefore, /*(A) :::;; y(A). O Lemma 3.8.1 Let A E Bn and D = D(A) with V(D) = {vt,v2,··· ,vn}· For a subset X ~ V(D), and for an integer t > 0, let .Rt(X) denote the set of vertices in D which can be reached from a vertex in X by a directed walk of length t, and let Ro(X) =X. Then for an integer k > 0, the following are equivalent. (i) A" E Fn. (ii) For every non empty subset X ~ V(D), IR~:(X)I > lXI. Proof This is a restatement of Theorem 2.1.2.
O
=
Lemma 3.8.2 Let D be a strong digraph with V(D) {VI. v2 , • • • , vn} and let W = {v,.,v12,··· ,v,.} ~ V(D) be vertices of D at which D has a loop. Then for each integer t>O,
IRt(W)I
~min{8+t,n}.
Proof Suppose that .Rt(W) ::f; V(D). Since D is strong, there exist u E Rt(W) and v E V(D) \ Rt(W) such that (u,v) E E(D). Since v ¢ .Rt(W), we may assume that the distance in D from v 01 to u is t, and the distance in D from v;; to u is at least t, for any j with 1 :::;; j :::;; 8. As the directed (v,.,u) in Rt(W) contains no vertices in W {tit,}, IRt(W)I ~ (8 1) + (t + 1) = 8 + t. 0 Theorem 3.8.1 (Brualdi and Liu, [31]} let n has 8 positive diagonal entries. Then
1 be integers. Suppose that A EP,.
~ 8 ~
f*(A) :::;; n s + 1.
Proof Let D = D(A) and let W denote the set of vertices of D at which a loop is attached. Let XC V(D) be a subset with n > lXI = k > 0. By Lemma 3.8.1, it suffices to show that
IRt(X)I ~ lXI + 1, for each t
~
n s + 1.
(3.13)
Since n > k, we may assume that IRt(X)I < n. If X n W ::f; 0, then by Lemma 3.8.2 and sincet~ns+1
IRt(X)I~
IRt(XnW)I ~ IXnWI +t ~ IXnWI +n s+ 1 ~ lXI + 1.
Thus we assume that X n W = 0. Let x• E X and w• E W such that the distance d from x• to w• is minimized among all x E X and w E W. By the minimality of d, d :::;; n + 1 lXI IWI
=n  + 1 8
k
< t.
Powers of NODllegative Matrices
139
Since w* E W, x• can reach every vertex in R~:(w*) by a directed walk in D of length exactly t. By Lemma 3.8.2,1Rt(X)I ~ I.Rt({w*})l ~ IR~:(w*)l ~ k+ 1, and so (3.13) holds also. D Corollary 3.8.1A (Chao and Zhang, (54]) Suppose A E P,. with tr(A) > 0. Then !(A)~ /*(A)~
n.
Corollary 3.8.1B Let A E P,. such that D(A) has a directed cycle of length r and such that D(A) has 8 vertices lying in directed cycles of length r. Then f(A) ~ r(n 8 + 1). In particular, if D has a Hamilton directed cycle, then f(A) ~ n. Corollary 3.8.1C Let A E P,. such that the diameter of D(A) is d. Then /(A)
~
2d(n d).
Corollary 3.8.1D Let A E P,. be a symmetric matrix with tr(A)
= 0. Then /(A) ~ 2.
Proof Corollary 3.8.1A follows from Theorem 3.8.1. For Corollary 3.8.1B, argue by Theorem 3.8.1 that (Ar)n• 1 E F,.. If the diameter of Dis d, then D has a directed cycle of length r ~ 2d, and D has at least d + 1 vertices lying in directed cycles of length r. Thus the other corollaries follow from Corollary 3.8.1B. D Theorem 3.8.2 (Brualdi and Liu, (31]) For n
~
1,
f,. ~ r
+ 1) ~ r(n r + 1).
When n is odd, since D is primitive, D must have a directed closed walk of length different from (n + 1)/2. Since r(n r + 1) is a quadratic function in r, we have n 2 t2n
/(A) ~ { n•t:n3
which completes the proof.
ifn is even ifn is odd,
D
=
Conjecture 3.8.1 (Brualdi and Liu, [31]) For n ~ 5, /n 2n 4. Example 3.8.1 can be extended for large values of nand so we can conclude that f,. ~ 2n 4. Liu (170] proved Conjecture 3.8.1 for primitive matrices with symmetric 1entries.
Powers of Nollllegative Matrices
140
Example 3.8.3 Let n ~ 5 and k ~ 2 be integers with n ~ k+3. Let D be the digraph with V(D) = {VlJ v:z, ••• , vn} and with E(D) = {(vi, Vi+l) : 1 $ i $ n k} U {(vn1:+1. 'Ill)} U {(vnkl>v;),(v;,vl) : nk+2 $j $ n}. LetA= A(D) and let X~;= {vl:1:+1•""" ,vn}· Then wecanseethat foreachi = 1,2, · · • ,k, I.R.cn1:)l(X~:)I i, and so f*(A) ~ k(nk). (See Exercise 3.25 for more discussion of this example.)
=
Lemma 3.8.3 Let D be a strong digraph with IV(D)I of D of length r > 0. (i) H X s;; V(C.), then R.r+;(X)
= n, and let Cr be a directed cycle
s;; R(i+l)r+;(X), (i ~ 0, 0 $ j $
(ii) H X= V(Cr), then Ri(X)
r  1).
s;; R(i+l)(X), for each i ~ 0.
Proof (i). Let z E Rir+;(X) and x E X. Since x E V(Cr), any direct (x,z)walk of length ir + j can be extended to a direct (x,z)walk of length (i + 1)r + j by taking an additional tour of (ii). Let z E R.(X) and x E X = V(C.). Let x' E V(Cr) be the vertex such that (x',x) E E(Cr)· Then D has a directed (x',z)walk oflength i + 1. D
c•.
Lemma 3.8.4 Let r > 8 > 0 be two coprime integers, and let D be a digraph consists of exactly two directed cycles Cr and c., of length rand 8, respectively, such that V(C.) n V(C,) :/: 0. H 0 :/:X s;; V(C.), then IR.(X)I ~ min{n, lXI + l}, i ~ lr and l
> 1.
(3.14)
Proof Let x<•> denote the vertices in V(Cr) that can be reached from vertices in X by a directed walk in Cr of length i. Thus if i j (mod r), x< •> = X. Assume first that r $ i < 2r. H X= V(Cr), then since i ~ r, IR.(X)I ~ min{n, lXI + i}, and so (3.14) holds. Thus we assume X:/: V(Cr) and l = 1. H R.(X) g; V(C.), then IR.(X)I ~ lXI + 1. Assume then Ri(X) s;; V(C.). H (3.14) does not hold when l = 1, then IR.(X)I = lXI, and so R.(X) = x<•>. Since Ri(X) s;; V(C.), we have R.s(X) s;; R.(X), whicll implies that x(i•) = x s;; V(Cr), then y<•> = Y, contrary to the
=
assumption that r and 8 are coprime. Now assume that l ~ 2 and argue by induction on l.
Note that (3.14) holds if
IRcll)r+i(X)J = n, and so assume that IRcll)r+;(X)I < n. H Rir+;(X) = R(ll)r+j(X), then for each t > l  1, Rtr+;(X) = R(ll)r+;(X). Since D is primitive, for t large enough, we have IRcll)r+;(X)I = I.Rtr+;(X)I = n, a contradiction. Hence by Lemma 3.8.3, R(ll)r+;(X) = Ri)r+;(X), for each j with
141
Powers of Nonnegative Matrices
0 ::; j
~
r  1. It follows
IRzr+;(X)I
which completes the proof.
~
IRczl)r+i(X)I + 1
~
min{n, lXI + (l 1)} + 1
~
min{n, lXI + l}
D
Theorem 3.8.3 (Brualdi and Liu, [31]) Let A E Pn· H D(A) has exactly 2 different lengths of directed cycles,then
Proof Let D(A) has directed cycles Cr and Cr, of lengths rands, respectively, such that rands are coprime and such that V(Cr) n V(C.) # 0. Let D* denote the subgraph of D induced by E(Cr) U E(C.). Let Y ~ V(D) be a subset, where 1 ~ k = IYI ~ n1. First assume that !YnV(Cr)l ~ p ~ 1. By Lemma 3.8.4, I.R;(Y)I ~ k + 1, (i ~ (k p + 1)r), and so by r ~ n (k p), it follows that
(k p+ 1)r ~
L41 (n+ 1) 2J.
Now assume that Y n V(Cr) = 0. Then r ~ n k and D has a directed (y, x)walk from a vertex y E Y to a vertex x E V( Cr) of length t, where t ~ n  r  k + 1. By lemma 3.8.4, j.R;({x})l ~ k + 1, i ~ kr. Therefore j.R;(Y)I ~ k + 1, fori~ kr + n r k + 1. It follows that
Hence for all Y 0 # Y ~ X, I.R;(Y)I which completes the proof.
~ IYI + 1, i ~ l
(n:
1)2 J,
D
From the discussions above on f~ (see also Exercise 3.25), we can see that the order of
1: will fall between O(n2 /4) and O(n2 /2). It was conjectured that f~ ~ l(n + 1) 2 /4J
and this conjecture has been proved by Liu and Li (178].
142
Powers of Nonnegative Matrices
Definition 3.8.3 A matrix A E Bn is called a Hall matriz if there exists a permutation matrix Q such that Q ~A. Let Hn denote the collection of all Hall matrices in Pn. Hn ={A E Bn : A" E Hn for some integer k}. For an matrix A E Hn, h(A), the Hall e:q,onent of A, is the smallest integer k that A 11 € Hn. Define
> 0 such
hn = max{h(A) : A E Hn n IBn}, where ffin is the collection of irreducible matrices in Bn. Similarly, for an matrix A E Hn, h*(A), the strict Hall e:q,onentof A, is the smallest integer k > 0 slich that A' E H,., for all integer i ;?: k. Define H~ = {A E Bn h*(A) exists as a finite number}, and h~ = max{h*(A) : A E Hn n IBn},
Example 3.8.4 In general, Pn C Hn. When n tr(P) = 0, then P E Hn \Pn.
> 1, if Pis
a permutation matrix with
Example 3.8.5 Let
A=
Then we can verify that A
0 0 0 0 0 0
0 0 0 0 0 0 1 1
e Pr \Hr.
1 1 0 0 0 0 1
0 0 1 0 0 0 1
0 0 1 0 0 0 1
0 0 1 0 0 0 1
0 0 0 1 1 1 0
A2 E Hr but A3 ¢ Hr, and A' E Hr, for all i ;?: 4.
Proposition 3.8.3 follows from Hall's Theorem for the existence of a system of distinct representatives (Theorem 1.1 in (222]); and the other proposition is obtained from the definitions and Corollary 3.3.1A. Proposition 3.8.3 Let A E Bn and let D = D(A). Each of the following holds. (i) A is Hall if and only if for any integers r > 0 and s > 0 with r + s > n, A does not have an O,.x• as a submatrix. (ii) For some integer k > 0, A 11 E Hn if and onlyifforeachnonemptysubset X~ V(D), IR~:(X)I ;::: lXI. Proposition 3.8.4 Each of the following holds: (i) If A E H~, then h(A) ~ h*(A)
< y(A)
~ n 2  2n + 2.
Powers of Nonnegative Matrices
143
(ii) If A e P,., then h(A)::; j(A) and h*(A)::; J*(A). (iii) For each n > 1, F n k H,.. Example 3.8. 7 Let
A~ ~ [
Then A11
0 0 0 1
1 1 0 0
1
e ~ if and only if 4lk, and so A e H,. \ H:.
Example 3.8.8 It is possible that h*(A) > f(A). Let
A=
0 0 0 0 0 0 0 0 1 1
0 0 0 0 0 0 0 0 1 1 1 1
0 0 0 0 0 0 0 0
Then A f. HIO, A2 e Fto, (and so A br any k ~ 4. Therefore, h*(A) Definition 3.8.4 Let A
1 1 1 0 0 0 0 0 1
0 0 0 1 0 0 0 0
0 0 0 1 0 0 0 0 1 1 1 1 1
0 0 0 1 0 0 0 0 1 1
0 0 0 1 0 0 0 0 1 1
0 0 0 0 1 1
0 0 0 0 1 1 1 1
1 1 0 0 0 0
e PIO and A2 e Hio), A 3 f. Hto but Ale e Fto k Hio.
= f*(A) = 4 > 2 = f(A) = h(A).
e B,. and let An A12 [
A,.I lethe Frobenius normal form (Theorem 2.2.1) of A. By Theorem 2.2.1, each A" is meducible, and will be called an irY"educible block of A, i 1, 2, · • · ,p. A block ~i is a .nrial block of A if Att 01x1·
=
=
By definition, we can see that if A has a trivial block, then A 1.8.5 obtains.
f. H,.,
and so Lemma
144
Powers of Nozmegative Matrices
Lemma 3.8.5 Let A E B,.. Then A E H,. if and only if every irreducible block of A is a Hall matrix. Theorem 3.8.4 (Brualdi and Liu, [33]) Let A e B,.. Then A e fl,. if and only if the Frobenius standard form of A does not have a trivial irreducible block. Proof We may assume that A is in the standard form. H A has a trivial irreducible block. Then for any k, A" also has a trivial irreducible block, and soAk f. H,.. Assume then that A has no trivial irreducible block. Then each vertex Vi e V(D(A)} lies in a directed cycle of length m 1, (1 5 i 5 n). Let p = lcm(m1, ma, · · · , m,.). Then each diagonal entry of AP is positive, and so A e fl,.. D Definition 3.8.5 Recall that if A 0 0
e B,. is irreducible, then A is permutation similar to B1 0
0 Ba
0 0 (3.15)
0 B,.
0 0
0 0
Bh1 0
where B, E M~:1 x~:,+, (1 5 i 5 h) and k~a+l = k1. These integers k,'s are the imprimitive parameters of A. Let P e B,. be a permutation matrix and let Y11 Y2, · · · , Y,. e B~: be h matrices. Then P(Yi, Y2 , • • • , Y,.) denotes a matrix in B~:h obtained by replacing the only 1entry of the ith row of P by Yo, and every 0entry of P by a Okxl:, (1 5 i 5 h). Theorem 3.8.5 (Brualdi and Liu, [33]) Let A imprimitive parameters are identical.
e IB,..
Then A
e H~ if and only if all the
= B 1Ba · • ·Bh, Xa = BaBs · · · B~aB1, · · ·, X,. = B,.Bl · · · B1a1· Suppose first that k = k1 = ka = ·· · k,.. Then the matrices X 1 , X 2 , • • ·, X,. are in P~:, and so there exists an integer e > 0 sucll that Xf = J~;, for any integer p ~ e and 15i$h. Let q ~ eh be an integer and write q = lh + r, where I ~ e and where 0 5 r
=
xf
Powers of Nonnegative Matrices O(n1:1 )xA:2
145
as a submatrix. Since (n k1 ) + k2 > n, All& ¢ Hn, by Proposition 3.8.3. D
The following results concerning the Hall exponent, analogous to those concerning the fully indecomposable exponent, are obtained by Brualdi and Liu [33] with similar techniques. Theorem 3.8.6 (Brualdi and Liu, [33]) Let A
e IBn.
Htr(A)
= s > 0, then h*(A) ::; ns.
Theorem 3.8.7 (Brualdi and Liu, [33]) Let n ;:::: 3. Then n 2 1
hn:5 L4 J. Theorem 3.8.9 (Brualdi and Liu, [33], Zhou and Liu, [286]) Let A
e Pn.
Then h*(A)::;
[n2 /4J. Shen et al [240] introduced the exponent of rindecomposability as a generalization of fully indecomposable exponents and Hall exponents. Several useful results have been obtained in [240] and [176]. Definition 3.8.7 The exponents can be defined in a weak sense. For a matrix A E IBn, the weak exponents are defined as follows. (i) The weak primitive exponents of A, denoted by eu,(A), is the smallest integer p > 0 such that A+ A2 +···+APE Pn. (ii) The weak fully indecomposable e:xponent of A, denoted by fw(A), is the smallest integer p > 0 such that A+ A2 + · · · + AP E F n· (iii) The weak Hall exponent of A, denoted by hw(A), is the smallest integer p > 0 such that A+ A 2 + ... + AP e Hn. Theorem 3.8.10 (Liu, [171]) For A
e IBn,
ew(A)::; 2, !w(A)::;
n
l2J + 1,
and !w(A) :5
n
f 2l
Moreover, each of these bounds is best possible. Theorem 3.8.11 (Liu, [171]) For A E IBn, let WE(n),FE(n) and HE(n) denote the set of integers that can be the value of eu,(A), fw(A) and hw(A), respectively. Then WE(n) FE(n) HE(n)
= = =
{1,2} n {1,2, ... , L2J+ 1} n {1,2, ... 'f2l}.
Powers of Nonnegative Matrices
146
3.9
Primitive exponent and other parameters
The investigation of the relationship between y(A) and the diameter of D(A), and between y(A) and the eigenvalues of A has recently begun and is on the rise. We use these notations in this section: For a matrix A e B,. and D = D(A), m = m(A) denotes the degree of the minimal polynomial of A; d = d(D) denotes the diameter of D, and d(Vi, v;) the distance from v1 to v; in D. Given a matrix A, let (A)i; denote the (i,j)entry of A. Therefore, if A= (as;), then (A)i,i =as;. Similarly, given anndimensional vector a, (a); denotes the jth component of a. Finally, for an integer k with 1 ::;; k :5 n, let e 11 denote the ndimensional vector whose kth component is a 1 and whose other COII!ponents are 0.
Proposition 3.9.1 Let A ∈ P_n, D = D(A), d = d(D) and m = m(A). Each of the following holds.
(i) γ(A) ≥ d. Moreover, if each diagonal entry of A is positive, then γ(A) = d.
(ii) If the length s of a shortest directed cycle of D is not bigger than d, then γ(A) ≤ dn.
(iii) (See [96]) If V(D) = {v1, v2, ..., vn}, then

(A + A² + ··· + A^m)_ij > 0   if i = j,
(I + A + ··· + A^{m−1})_ij > 0   if i ≠ j.
Proof (i) follows from the graphical meaning of γ(A). To prove (ii), consider the digraph D(A^s). Each vertex of a shortest directed cycle C_s is a loop vertex in D(A^s), from which any vertex can be reached by a directed walk of length at most n − 1. Thus in D, any vertex u can reach a vertex in C_s by a directed walk of length at most d, and any vertex in V(C_s) can reach any other vertex v by a directed walk of length at most s(n − 1). It follows that γ(A) ≤ d + s(n − 1) ≤ d + d(n − 1) = dn. □
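The quantities in Proposition 3.9.1 can be made concrete numerically. The sketch below computes γ(A) by Boolean powering and d(D(A)) by counting walk lengths; the 4-vertex digraph used (a 4-cycle with one extra chord, the Wielandt-type example) is an illustrative choice, not from the text.

```python
import numpy as np

def exponent(A):
    """gamma(A): smallest m with A^m entrywise positive; None if A is imprimitive."""
    M = (np.array(A) > 0).astype(int)
    n = len(M)
    P = M.copy()
    for m in range(1, (n - 1) ** 2 + 2):   # Wielandt: gamma <= (n-1)^2 + 1
        if P.min() > 0:
            return m
        P = (P @ M > 0).astype(int)
    return None

def diameter(A):
    """d(D(A)): largest distance d(v_i, v_j) over all ordered pairs."""
    M = (np.array(A) > 0).astype(int)
    n = len(M)
    dist = np.where(np.eye(n, dtype=bool), 0, -1)
    P = np.eye(n, dtype=int)
    for m in range(1, n):
        P = (P @ M > 0).astype(int)
        dist = np.where((dist < 0) & (P > 0), m, dist)
    return dist.max()

# arcs 1->2->3->4->1 plus 4->2: directed cycles of lengths 4 and 3, so primitive
A = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [1, 1, 0, 0]]
print(exponent(A), diameter(A))   # gamma(A) >= d, as in Proposition 3.9.1(i)
```

For this digraph the exponent attains Wielandt's bound (n − 1)² + 1 = 10, while the diameter is only 3, showing how far apart the two quantities can be.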
Problem 3.9.1 By examining the graph in Example 3.3.1, it may be natural to conjecture that if A ∈ P_n, and if d is the diameter of D(A), then

γ(A) ≤ d² + 1.   (3.16)

Note that the degree m of the minimal polynomial of A and d are related by m ≥ d + 1. A weaker conjecture will be

γ(A) ≤ (m − 1)² + 1.   (3.17)
Hartwig and Neumann proved (3.17) conditionally. Lemma 3.9.1 below follows from Proposition 3.9.1 and Proposition 1.1.2(vii).
Lemma 3.9.1 (Hartwig and Neumann, [117]) Let A ∈ P_n, D = D(A) and m = m(A). Suppose V(D) = {v1, v2, ..., vn}.
(i) If v_k is a loop vertex of D, then A^{m−1} e_k > 0.
(ii) If each vertex of D is a loop vertex, then A^{m−1} > 0.
Theorem 3.9.1 (Hartwig and Neumann, [117]) Let A ∈ P_n, D = D(A) and m = m(A). Then γ(A) ≤ (m − 1)² + 1 if one of the following holds for each vertex v ∈ V(D):
(i) v lies in a directed cycle of length at most m − 1,
(ii) v can be reached from a vertex lying in a directed cycle of length at most m − 1 by a directed walk of length one, or
(iii) v can reach a vertex lying in a directed cycle of length at most m − 1 by a directed walk of length one.

Sketch of Proof Let V(D) = {v1, v2, ..., vn} and assume that v_k lies in a directed cycle of length j_k < m. Then v_k is a loop vertex in D(A^{j_k}). Since A^{j_k} ∈ P_n with m(A^{j_k}) ≤ m(A) = m, it follows by Lemma 3.9.1 that (A^{j_k})^{m−1} e_k > 0, and so

A^{(m−1)²} e_k = A^{((m−1)−j_k)(m−1)} [(A^{j_k})^{m−1} e_k] > 0.

Thus (i) implies the conclusion by Lemma 3.9.1. Assume then that v_k can be reached from a vertex lying in a directed cycle of length at most m − 1 by a directed walk of length one; then argue similarly to see that

A^{(m−1)²+1} e_k = A^{(m−1)²} (A e_k) > 0,

and so (ii) implies the conclusion by Lemma 3.9.1 also.

That (iii) implies the conclusion can be proved similarly by considering A^T instead of A, and so the proof is left as an exercise. □
Theorem 3.9.2 (Hartwig and Neumann, [117]) Let A ∈ P_n with m = m(A). Then

γ(A) ≤ m(m − 1).

Proof Let D = D(A) with V(D) = {v1, v2, ..., vn}. By Proposition 3.9.1(iii), for each v_k ∈ V(D), there is an integer j_k with 1 ≤ j_k ≤ m such that v_k is a loop vertex of D(A^{j_k}). By Lemma 3.9.1, (A^{j_k})^{m−1} e_k > 0. It follows that

A^{m(m−1)} e_k = A^{(m−j_k)(m−1)} [(A^{j_k})^{m−1} e_k] > 0,

and so A^{m(m−1)} > 0, by Lemma 3.9.1(ii). □
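The bound γ(A) ≤ m(m − 1) can be checked numerically. In this sketch, m(A) is found as the smallest d for which I, A, ..., A^d are linearly dependent (a Krylov-style rank test); the 3 × 3 matrix used is an illustrative choice, not from the text.

```python
import numpy as np

def exponent(A):
    """gamma(A): smallest m with A^m entrywise positive (A assumed primitive)."""
    M = (np.array(A) > 0).astype(int)
    P, m = M.copy(), 1
    while P.min() == 0:
        P, m = (P @ M > 0).astype(int), m + 1
    return m

def minpoly_degree(A):
    """Degree of the minimal polynomial: smallest d with I, A, ..., A^d dependent."""
    A = np.array(A, dtype=float)
    n = len(A)
    powers = [np.eye(n).flatten()]
    P = np.eye(n)
    for _ in range(n):
        P = P @ A
        powers.append(P.flatten())
    for d in range(1, n + 1):
        if np.linalg.matrix_rank(np.array(powers[:d + 1]), tol=1e-8) <= d:
            return d
    return n

# arcs 1->2, 2->3, 3->1, 3->2: primitive (cycle lengths 3 and 2)
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 1, 0]]
print(exponent(A), minpoly_degree(A))
```

Here the characteristic polynomial x³ − x − 1 is irreducible, so m = 3 and the theorem guarantees γ(A) ≤ 6; the actual exponent is 5.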
Theorem 3.9.3 (Hartwig and Neumann, [117]) Let A ∈ P_n be symmetric with m the degree of the minimal polynomial of A. Then γ(A) ≤ 2(m − 1).

Sketch of Proof As A is symmetric, every vertex of D(A²) is a loop vertex. Then apply Lemma 3.9.1 to see (A²)^{m−1} e_k > 0. □

Theorem 3.9.4 (Hartwig and Neumann, [117]) Let A ∈ P_n be such that D(A) has a directed cycle of length k > 0, and let m and m_{A^k} be the degrees of the minimal polynomials of A and of A^k, respectively. Then γ(A) ≤ (m − 1) + k(m_{A^k} − 1).

Proof Let C_k denote a directed cycle of length k. Then the vertices of V(C_k) are loop vertices in D(A^k). Any vertex in D(A^k) can be reached from a vertex in V(C_k) by a directed walk of length at most m_{A^k} − 1, and so in D(A), any vertex can reach any other (via vertices in V(C_k)) by a directed walk of length at most k(m_{A^k} − 1) + (m − 1). □

Theorem 3.9.5 (Hartwig and Neumann, [117]) Let A ∈ P_n be such that A has r distinct eigenvalues. Then D(A) contains a directed cycle of length at most r.

Proof If ρ(A), the spectral radius of A, is zero, then A^n = 0. Thus r = 1 and, by Proposition 1.1.2(vii), D(A) has no directed cycles. Assume that ρ(A) > 0 and that

Spec(A) = ( λ1  λ2  ···  λr
            l1  l2  ···  lr ).

Argue by contradiction: assume that every directed cycle of D(A) has length longer than r. Then for each k with 1 ≤ k ≤ r, tr(A^k) = 0, by Proposition 1.1.2(vii). Thus

λ1^k l1 + λ2^k l2 + ··· + λr^k lr = 0,  k = 1, 2, ..., r.   (3.18)

Note that (3.18) is equivalent to the homogeneous system

[ 1         1         ···  1        ] [ λ1 l1 ]   [ 0 ]
[ λ1        λ2        ···  λr       ] [ λ2 l2 ] = [ 0 ]   (3.19)
[ ···                               ] [  ···  ]   [···]
[ λ1^{r−1}  λ2^{r−1}  ···  λr^{r−1} ] [ λr lr ]   [ 0 ]

The determinant of the coefficient matrix in (3.19) is a Vandermonde determinant with λi ≠ λj whenever i ≠ j. Thus the system in (3.19) can only have the zero solution λ1 l1 = λ2 l2 = ··· = λr lr = 0, a contradiction. □
Corollary 3.9.5A ([117]) Let A ∈ P_n with m = m(A). If A has at most m − 2 distinct eigenvalues, then γ(A) ≤ (m − 1)².
The conjecture (3.17) remains unsolved in [117]. In 1996, Shen proved the stronger form (3.16), and therefore also proved (3.17). For a simple graph G, Delorme and Solé [73] proved that γ(G) can have a much smaller upper bound.

Theorem 3.9.6 (Delorme and Solé, [73]) Let G be a connected simple graph with diameter d. If every vertex of G lies in a closed walk of odd length at most 2g + 1, then γ(G) ≤ d + g. In particular, if G is not bipartite, then γ(G) ≤ 2d.
Example 3.9.1 The equality γ(G) = 2d may be reached. Consider these examples: G is the cycle of length 2k + 1 (d = k and γ = 2k); G = K_n with n > 2 (d = 1 and γ = 2); and G is the Petersen graph (d = 2 and γ = 4).

The relationship between γ(A) and the eigenvalues of A is not yet clear. Chung obtained some upper bounds on γ(A) in terms of the eigenvalues of A. For convenience, we extend the definition of γ(A) and define γ(A) = ∞ when A is imprimitive.
Theorem 3.9.7 (Chung, [58]) Let G be a k-regular graph with eigenvalues λi so labeled that |λ1| ≥ |λ2| ≥ ··· ≥ |λn|. Then

γ(A) ≤ ⌈ log(n − 1) / (log k − log |λ2|) ⌉.
Proof Let u1, u2, ..., un be orthonormal eigenvectors corresponding to λ1, λ2, ..., λn, respectively, such that u1 = (1/√n) J_{n×1} and λ1 = k. Thus if (k/|λ2|)^m > n − 1, then

(A^m)_{r,s} = Σ_i λi^m (u_i u_i^T)_{r,s}
            = k^m/n + Σ_{i>1} λi^m (u_i)_r (u_i)_s
            ≥ k^m/n − |λ2|^m Σ_{i>1} |(u_i)_r (u_i)_s|
            ≥ k^m/n − |λ2|^m { Σ_{i>1} |(u_i)_r|² }^{1/2} { Σ_{i>1} |(u_i)_s|² }^{1/2}
            ≥ k^m/n − |λ2|^m (n − 1)/n > 0.

Therefore, if m > ⌊ log(n − 1) / (log k − log |λ2|) ⌋, then A^m > 0. □
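The quality of Chung's bound can be tested on the Petersen graph from Example 3.9.1 (k = 3, n = 10, |λ2| = 2). This sketch compares the actual exponent with the ceiling in Theorem 3.9.7.

```python
import numpy as np
from math import ceil, log

# Petersen graph: outer 5-cycle, inner pentagram, five spokes
edges = [(i, (i + 1) % 5) for i in range(5)] \
      + [(5 + i, 5 + (i + 2) % 5) for i in range(5)] \
      + [(i, i + 5) for i in range(5)]
A = np.zeros((10, 10), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1

k, n = 3, 10
lam = sorted(abs(np.linalg.eigvalsh(A)), reverse=True)
bound = ceil(log(n - 1) / (log(k) - log(lam[1])))   # Theorem 3.9.7

P, gamma = A.copy(), 1
while P.min() == 0:                                 # actual gamma(A)
    P, gamma = (P @ A > 0).astype(int), gamma + 1
print(gamma, bound)
```

For the Petersen graph the spectral bound evaluates to 6, while the true exponent is 4 (= 2d, as Example 3.9.1 states).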
With similar techniques, Chung also obtained analogous bounds for non-regular graphs and for digraphs.

Theorem 3.9.8 (Chung, [58]) Let G be a simple graph with eigenvalues λi so labeled that |λ1| ≥ |λ2| ≥ ··· ≥ |λn|. Let u1 be an eigenvector corresponding to λ1, let w = min_i {|(u1)_i|}, and let d(G) denote the diameter of G. Then

d(G) ≤ γ(A) ≤ ⌈ (log(1 − w²) − log w²) / (log |λ1| − log |λ2|) ⌉.
Theorem 3.9.9 (Chung, [58]) Let A ∈ B_n be such that each row sum of A is k, and A has n eigenvectors which form an orthonormal basis. Then

γ(A) ≤ ⌈ log(n − 1) / (log k − log |λ2|) ⌉.
For further improvements along this line, readers are referred to Delorme and Solé [73]. For a matrix A ∈ B_{m,n}, define its Boolean rank b(A) to be the smallest positive integer k such that A = FG for some F ∈ B_{m,k} and G ∈ B_{k,n}. With Lemma 3.4.2, Gregory et al. obtained the following.

Theorem 3.9.10 (Gregory, Kirkland and Pullman, [106]) Let A ∈ P_n. Then γ(A) ≤ (b(A) − 1)² + 2.

The next theorem can be obtained by applying Lemma 3.7.3 and Exercise 3.23.

Theorem 3.9.11 (Liu and Zhou, [185], Neufeld and Shen, [207]) Let A ∈ P_n and let r denote the largest outdegree among the vertices of a shortest directed cycle, with length s, in D(A). Then

γ(A) ≤ s(n − r) + n ≤ (n − r + 1)² + r − 1.
Open Problem From Proposition 3.9.1(i), one would ask: what are the matrices A with γ(A) = d? In other words, the problem is to determine the set

{A ∈ P_n : γ(A) = d, where d is the diameter of D(A)}.
3.10
Exercises
Exercise 3.1 Let a1, a2, a3 > 0 be integers with gcd(a1, a2, a3) = 1. Let d = gcd(a1, a2) and write a1 = a1′d and a2 = a2′d. Let u1, u2, x0, y0, z0 be integers satisfying a1′u1 + a2′u2 = 1 and a1x0 + a2y0 + a3z0 = n, respectively. Show that all integral solutions of a1x + a2y + a3z = n can be presented as

x = x0 + a2′t1 − u1 a3 t2
y = y0 − a1′t1 − u2 a3 t2
z = z0 + d t2,

where t1, t2 can be any integers.

Exercise 3.2 Let s ≥ 2 be an integer and suppose r1, r2, ..., rs are real numbers such that r1 ≥ r2 ≥ ··· ≥ rs ≥ 1. Show that
Exercise 3.3 Assume that Theorem 3.1.7 holds for s = 3. Prove Theorem 3.1.7 by induction on s.
Exercise 3.4 Let D be a strong digraph. Let d′(D) denote the g.c.d. of the lengths of directed closed trails of D. Show that d′(D) = d(D).

Exercise 3.5 Let D be a cyclically k-partite digraph. Show each of the following.
(i) If D has a directed cycle of length m, then k|m.
(ii) If h|k, then D is also cyclically h-partite.

Exercise 3.6 Show that if A ∈ M_n^+ is irreducible with d = d(D(A)) > 1, and if h|d, then there exists a permutation matrix P such that P A^h P^{−1} = diag(A1, A2, ..., A_h).

Exercise 3.7 Prove Corollary 3.2.3A.

Exercise 3.8 Prove Corollary 3.2.3B. For (i), imitate the proof of Theorem 3.2.3(iii).

Exercise 3.9 Prove Corollary 3.2.3C.

Exercise 3.10 Prove Corollary 3.2.3D.

Exercise 3.11 Show that in Example 3.2.1, A is primitive, B is imprimitive and A ∼ B.

Exercise 3.12 Let D1, D2 be the graphs in Examples 3.3.1 and 3.3.2. Show that γ(D_i) = (n − 1)² + 2 − i.

Exercise 3.13 Complete the proof of Theorem 3.3.2.

Exercise 3.14 Prove Lemma 3.4.1.

Exercise 3.15 Prove Lemma 3.4.2.

Exercise 3.16 Prove Lemma 3.4.3.

Exercise 3.17 Prove Lemma 3.4.5.
Exercise 3.18 Let A ∈ B_n and let n0, s0 be defined as in Theorem 3.5.2. Apply Theorem 3.5.2 to prove each of the following.
(i) If A ∈ IB_n, then k(A) ≤ n + s0(n0 − 2).
(ii) Wielandt's Theorem (Corollary 3.3.1A).
(iii) Theorem 3.5.1.

Exercise 3.19 Let X be a matrix with the form in Lemma 3.5.1. Show each of the following.
(i) If a = 0, then
(ii) If a = 1, then

Exercise 3.20 Suppose that A ∈ B_n with p(A) > 1. Show that p(A) < n² and h(A) ≤ k(A) + p − 1.
Exercise 3.21 Let n > 0 denote an integer, and let D be a primitive digraph with V(D) = {v1, ..., vn} such that exp_D(v1) ≤ exp_D(v2) ≤ ··· ≤ exp_D(vn). Show that
(i) F(D, 1) = exp_D(vn) = γ(D) and f(D, 1) = exp_D(v1).
(ii) f(n, n) = 0, f(n, 1) = exp(n, 1), and F(n, 1) = exp(n).
Exercise 3.22 Suppose that r is the largest outdegree among the vertices of a shortest directed cycle, with length s, in D. Show that exp_D(1) ≤ s(n − r) + 1.

Exercise 3.23 Let A ∈ B_n be a primitive matrix and let D = D(A). For each positive k ≤ n, show each of the following.
(i) exp_D(k) is the smallest integer p > 0 such that A^p has k all-one rows. (That is, J_{k×n} is a submatrix of A^p.)
(ii) f(D, k) is the smallest integer p > 0 such that A^p has a k × n submatrix which does not have a zero column.
(iii) F(D, k) is the smallest integer p > 0 such that A^p does not have a k × n submatrix which has a zero column.

Exercise 3.24 Let n ≥ k ≥ 1. Then

f(D_n, k) = 1 + (n − k − 1)(n − 1)   if n − 1 ≡ 0 (mod k),
f(D_n, k) = 2(n − k) − 1             if n/2 ≤ k < n − 1.
Exercise 3.25 Show that f(n, n − 1) = 1 and f(n, 1) = n² − 3n + 3.

Exercise 3.26 Prove Lemma 3.7.6.

Exercise 3.27 Let D be the digraph of Example 3.8.3.
(i) Show that f*(A) = k(n − k).
(ii) Show that for n ≥ 5,

Exercise 3.28 Let A ∈ P_n with m = m(A). If D(A) has a directed cycle of length at most m − 2, then γ(A) ≤ (m − 1)².

Exercise 3.29 Let A ∈ P_n with m = m(A) ≥ 4. If every eigenvalue of A is real, then γ(A) ≤ 3(m − 1) ≤ (m − 1)².

Exercise 3.30 Prove Corollary 3.9.5A.

Exercise 3.31 Let A ∈ P_n with m = m(A). If A has a real eigenvalue with multiplicity at least 3, or if A has a non-real eigenvalue of multiplicity at least 2, then γ(A) ≤ (m − 1)².

Exercise 3.32 Prove Theorem 3.9.10.

Exercise 3.33 Prove Theorem 3.9.11.
3.11
Hints for Exercises
Exercise 3.1 First, we can routinely verify that for integers t1, t2,

x = x0 + a2′t1 − u1 a3 t2
y = y0 − a1′t1 − u2 a3 t2
z = z0 + d t2

satisfy the equation a1x + a2y + a3z = n.

Conversely, let x, y, z be an integral solution of the equation a1x + a2y + a3z = n. Since a1(x − x0) + a2(y − y0) + a3(z − z0) = 0, we derive that

d(a1′(x − x0) + a2′(y − y0)) = −a3(z − z0).

Since gcd(a3, d) = 1, there exists an integer t2 such that z = z0 + d t2. It follows that a1′(x − x0) + a2′(y − y0) = −a3 t2, and hence that there exists an integer t1 such that x − x0 = a2′t1 − u1 a3 t2 and y − y0 = −a1′t1 − u2 a3 t2.
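The parameterization in Exercise 3.1 can be spot-checked numerically; the values a1 = 6, a2 = 10, a3 = 15 and the particular solutions below are illustrative choices, not from the text.

```python
from math import gcd

a1, a2, a3 = 6, 10, 15          # gcd(a1, a2, a3) = 1
d = gcd(a1, a2)                 # d = 2
a1p, a2p = a1 // d, a2 // d     # a1', a2' = 3, 5
u1, u2 = 2, -1                  # a1'*u1 + a2'*u2 = 3*2 + 5*(-1) = 1
n = 100
x0, y0, z0 = 0, 1, 6            # 6*0 + 10*1 + 15*6 = 100

for t1 in range(-10, 11):
    for t2 in range(-10, 11):
        x = x0 + a2p * t1 - u1 * a3 * t2
        y = y0 - a1p * t1 - u2 * a3 * t2
        z = z0 + d * t2
        assert a1 * x + a2 * y + a3 * z == n   # every (t1, t2) gives a solution
print("ok")
```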
Exercise 3.2 Argue by induction on s ≥ 2.

Exercise 3.3 First prove a fact about real number sequences: if u1, u2 are real numbers with u1 ≥ u2 ≥ 1, then

u1/u2 + u2 ≤ u1 + 1.

This can be applied to show the analogous bound when u1 ≥ u2 ≥ ··· ≥ uk ≥ 1. For numbers a1, a2, ..., as satisfying Theorem 3.1.7, let gcd(a1, a2) = d1, gcd(a1, a2, a3) = d2, ..., gcd(a1, ..., a_{s−1}) = d_{s−2}, where s − 1 > 2. Then

φ(a1, ..., as) ≤ a1a2/d1 + a3 d1/d2 + ··· + a_{s−1} d_{s−3}/d_{s−2} + as d_{s−2} − Σ_{i=1}^{s} a_i + 1.

This, together with the fact above about real number sequences, implies that

φ(a1, ..., as) < a1a2/d1 + a3 d1.

If, among a1, a2, ..., as, there are s − 1 numbers which are relatively prime, then by induction Theorem 3.1.7 holds. Therefore d1 > 1. If d1 = p^a is a prime power, for some prime number p and integer a > 0, then d_{s−2} = p^b for some integer b with 0 < b ≤ a. Hence gcd(a1, a2, ..., as) = p^δ for some integer δ > 0, a contradiction. Therefore d1 must have at least two prime factors. Thus d1 ≥ 6, a2 ≤ a1 − d1, a3 ≤ a2 − 2 ≤ n − 8. As d1 | a2, we have d1 ≤ n/2. As n(n − d1)/d1 is a decreasing function of d1, and as d1 ≥ 6, we have both n(n − d1)/d1 ≤ (n − 2)²/4 and (n − d1 − 2)d1 ≤ (n − 2)²/4.
Exercise 3.4 Note that d′|d since cycles are closed trails. By Euler, a closed trail is an edge-disjoint union of cycles, and so d|d′.

Exercise 3.5 Apply Definition 3.2.5 and combine the partite sets.

Exercise 3.6 By Corollary 3.2.3A, argue similarly to the proof of Lemma 3.2.2(i).

Exercise 3.7 By the definition of d(D) (Definition 3.2.4).

Exercise 3.8 For (i), imitate the proof of Theorem 3.2.3(iii). (ii) is obtained by direct computation.

Exercise 3.9 By Lemma 3.2.2(i), P A^d P^{−1} = diag(B1, ..., Bd). By Corollary 3.2.3B, each B_i is primitive. Therefore, B_i^{m_i} > 0 for some smallest integer m_i > 0. Let m = max_i {m_i}. Then B_i^m > 0 and B_i^{m+1} > 0, and so p(A) = d, by definition.

Exercise 3.10 (i) follows from the definition immediately. Assume that p > 1. Then by Corollary 3.2.3C, p = d = d(D(A)). Then argue by Theorem 3.2.3.

Exercise 3.11 Whether A is primitive can be determined by directly computing the sequence A, A², .... An alternative way is to apply Theorem 3.2.2. The digraph D(A) has a 3-cycle and a 4-cycle, and so d(D(A)) = 1, and A is primitive. Do the same for D(B) to see that d(D(B)) = 3. Move Column 1 of A to the place between Column 3 and Column 4 of A to get B, and so A ∼_p B.
Exercise 3.12 Direct computation gives γ(D1) = γ(v_n, v_n) and γ(D2) = γ(v1, v_n).

Exercise 3.13 Complete the proof of Theorem 3.3.2. The following takes care of the unfinished cases. If k ∈ {2, 3, ..., n} and k ≤ d ≤ n − 1, then write d = k + l for some integer l with 0 ≤ l ≤ n − k − 1. Consider the adjacency matrix of the digraph D in the figure below.

Figure: when k ∈ {2, 3, ..., n} and k ≤ d ≤ n − 1

Again, we have

γ(i, j) = k   if i = j = 1,
γ(i, j) ≤ k   otherwise,

and so γ(A) = k in this case also. Now assume that k ∈ {n+1, n+2, ..., 2n−d−1}. Note that we must have d < n − 1 in this case. Write k = 2n − l for some integer l with d + 1 ≤ l ≤ n − 1. Consider the adjacency matrix of the digraph D in the figure below.

Figure: when k ∈ {n+1, n+2, ..., 2n−d−1}

Thus

γ(i, j) = 2n − l = k   if i = j and j = n,
γ(i, j) ≤ 2n − l = k   otherwise,

and so γ(A) = k, as desired.

Exercise 3.14 Apply Definition 3.4.2.

Exercise 3.15 (i) follows from Definition 3.2.3. (ii) Use (BA)^{l+1} = B(AB)^l A and UJV = J. (iii) Apply (ii).

Exercise 3.16 Let k be an integer such that A_i(k) = J for all i = 1, 2, ..., p. Then by Lemma 3.4.1, A^k = (A1(k), ..., Ap(k))_k and A^{k+p} = (A1(k+p), ..., Ap(k+p))_{k+p}. Thus A_i(k) = A_i(k+p) for all i, and k + p ≡ k (mod p), and so A^k = A^{k+p}. It follows that k(A) ≤ k. Conversely, assume that for some j, A_j(k−1) ≠ J. Note that A^{k−1} = (A1(k−1), ..., Ap(k−1))_{k−1} and A^{k−1+p} = (A1(k−1+p), ..., Ap(k−1+p))_{k−1+p}. Since A_j(k−1+p) = J ≠ A_j(k−1), A^{k−1} ≠ A^{k−1+p}, and so k(A) > k − 1.

Exercise 3.17 Let m = n_i for some i. Then A_i(p) ∈ M_m is primitive, and so by Corollary 3.3.1A, γ(A_i(p)) ≤ m² − 2m + 2. Apply Lemma 3.4.4 with t = 1 and i1 = i to get the answer.

Exercise 3.18 (i) When A is irreducible, n = n0 and p = d(D). (ii) Theorem 3.3.1 follows from (i) with p = 1. (iii) When A is reducible, apply a decomposition.

Exercise 3.19 Argue by induction on k.
Exercise 3.20 By the definition (of primitive matrix), γ(A) ≤ n² if and only if A is primitive, and if and only if p(A) = 1. The inequality for h(A) follows from the definitions of p(A) and k(A).

Exercise 3.21 Apply the definitions of F(D, 1), f(D, 1), f(n, n), f(n, 1) and F(n, 1).
Exercise 3.22 Let w ∈ V(C_s) with d⁺(w) = r. Let V1 = {v : (w, v) ∈ E(D)}. Then |V1| = r. Denote V(C_s) ∩ V1 = {w1}. Then D has a directed path of length s from w1 to a vertex in V1. In D^s, there is at most one vertex, say x, which cannot be reached from the loop vertex w1 by a walk of length n − r. Thus a path of length n − r + 1 from w1 to x must pass through some vertex z (say) of V1, and so there is a path of length n − r from z to x. It follows that there is a walk of length s(n − r) + 1 from w to x in D(A).

Exercise 3.23 Apply the definitions.

Exercise 3.24 Apply Theorem 3.7.1.

Exercise 3.25 By Theorem 3.7.5, f(n, n−1) ≤ 1 and f(n, 1) ≤ n² − 3n + 3. By Theorem 3.7.1(ii), f(D_n, 1) = n² − 3n + 3.

Exercise 3.26 Let t = ⌊s/k⌋ and let C_s denote a directed cycle of length s. Pick X = {x1, x2, ..., xk} ⊆ V(C_s) such that C_s has a directed (x_i, x_{i+1})-path of length t, where x_j = x_{j′} whenever j ≡ j′ (mod k). Since D is primitive and since n > s, we may assume that (x1, z) ∈ E(D) for some z ∈ V(D) \ V(C_s). Let Y be the set of vertices that can be reached from vertices in X by a directed path of length 1. Then {x_{i_1}, ..., x_{i_k}, z} ⊆ Y. Construct ...
Exercise 3.27 First use Example 3.8.3 to show the upper bound f*(A) ≤ k(n − k). As a quadratic function in k, k(n − k) has a maximum when k = n/2. The other inequality of (ii) comes from Proposition 3.8.2 and Wielandt's Theorem (Corollary 3.3.1A).

Exercise 3.28 Apply Theorem 3.9.4 with m_{A^k} ≤ m and k ≤ m − 2.

Exercise 3.29 Since ρ(A) > 0 and tr(A²) > 0, D(A) must have a directed cycle of length 2 ≤ m − 2.

Exercise 3.30 Apply Theorem 3.9.5 and then Exercise 3.28.

Exercise 3.31 In either case, A has at most m − 2 distinct eigenvalues. Apply Corollary 3.9.5A.
Exercise 3.32 Suppose that A = X_{n×b} Y_{b×n}, where b = b(A). By Lemma 3.4.2, γ(A) = γ(XY) ≤ γ(YX) + 1 ≤ (b − 1)² + 2.

Exercise 3.33 Apply Lemma 3.7.3 and Exercise 3.22.
Chapter 4
Matrices in Combinatorial Problems
4.1
Matrix Solutions for Difference Equations
Consider the difference equation (also called a recurrence relation) with given boundary conditions

u_{n+k} = a1 u_{n+k−1} + a2 u_{n+k−2} + ··· + ak u_n + b_n,   (4.1)
u_l = c_l,  0 ≤ l ≤ k − 1,   (4.2)

where the constants a1, ..., ak, c0, ..., c_{k−1} and the sequence (b_n) are given. A solution to this equation is a sequence (u_n) satisfying (4.1) and (4.2). If b_n = 0 for all n, then the resulting equation is the homogeneous equation corresponding to (4.1).
Definition 4.1.1 The equation

x^k − a1 x^{k−1} − a2 x^{k−2} − ··· − ak = 0   (4.3)

is called the characteristic equation of the difference equation in (4.1), and the matrix

A = [ 0    1         0         ···  0  ]
    [ 0    0         1         ···  0  ]
    [ ···                              ]   (4.4)
    [ 0    0         0         ···  1  ]
    [ ak   a_{k−1}   a_{k−2}   ···  a1 ]

is called the companion matrix of equation (4.3). Note that, by the Hamilton-Cayley Theorem,

A^k − a1 A^{k−1} − a2 A^{k−2} − ··· − ak I = 0.
The usual way to solve (4.1) and (4.2) is to solve the characteristic equation of the difference equation, to obtain the homogeneous solution, which satisfies the difference equation (4.1) when the term b_n on the right-hand side is set to 0, and a particular solution, which satisfies the difference equation with b_n on the right-hand side. The homogeneous solution is usually obtained by solving the characteristic equation (4.3). However, when k is large, (4.3) is difficult to solve. The purpose of this section is to introduce an alternative way of solving (4.1), via matrix techniques.
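The matrix viewpoint can be sketched directly: iterate the state vector (u_n, ..., u_{n+k−1}) under the companion matrix (4.4). The helper below is an illustration (not from the text); Python integer objects are used so the arithmetic is exact.

```python
import numpy as np

def solve_recurrence(a, c, b, m):
    """u_m for u_{n+k} = a[0] u_{n+k-1} + ... + a[k-1] u_n + b(n),
    with initial values u_0, ..., u_{k-1} given by c, via the companion matrix (4.4)."""
    k = len(a)
    A = np.zeros((k, k), dtype=object)
    for i in range(k - 1):
        A[i, i + 1] = 1                      # superdiagonal of ones
    A[k - 1, :] = a[::-1]                    # last row: a_k, a_{k-1}, ..., a_1
    v = np.array(c, dtype=object)            # state (u_n, ..., u_{n+k-1})
    for n in range(m):
        v = A.dot(v) + np.array([0] * (k - 1) + [b(n)], dtype=object)
        # after this step v = (u_{n+1}, ..., u_{n+k})
    return v[0]

# Fibonacci numbers: u_{n+2} = u_{n+1} + u_n, u_0 = u_1 = 1
print([solve_recurrence([1, 1], [1, 1], lambda n: 0, m) for m in range(8)])
# -> [1, 1, 2, 3, 5, 8, 13, 21]
```

The same call handles the nonhomogeneous case: with a = [2], c = [0] and b(n) = 1 it reproduces u_m = 2^m − 1.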
Theorem 4.1.1 (Liu, [169]) Let A be the companion matrix in (4.4), and let

C = (c0, c1, ..., c_{k−1})^T,
B_j = (0, 0, ..., 0, b_j)^T,  j = 0, 1, 2, ...,

and let

A^m C + A^{m−1} B_0 + A^{m−2} B_1 + ··· + A^{k−1} B_{m−k} = (a^{(m)}, ···)^T.   (4.5)

Then (a^{(m)}) is a solution to (4.1).
Proof By (4.4) and the Hamilton-Cayley Theorem,

A^m C = Σ_{i=1}^{k} a_i A^{m−i} C,  and  A^{m−j−1} B_j = Σ_{i=1}^{k} a_i A^{m−j−1−i} B_j,  j = 1, 2, ....

Note that the first row of A^i, for i = 0, 1, ..., k − 1, is the (i+1)st unit row vector, and so

A^i B_j = (0, ···)^T for 0 ≤ i ≤ k − 2 and any j,  while  A^{k−1} B_j = (b_j, ···)^T.

Expanding (4.5) with m = n + k by the relations above and regrouping the terms with a common factor a_i, the terms A^i B_j with i ≤ k − 2 contribute nothing to the first component, and it follows that

(a^{(n+k)}, ···)^T = a1 (a^{(n+k−1)}, ···)^T + a2 (a^{(n+k−2)}, ···)^T + ··· + ak (a^{(n)}, ···)^T + (b_n, ···)^T.

Therefore a^{(n+k)} = a1 a^{(n+k−1)} + a2 a^{(n+k−2)} + ··· + ak a^{(n)} + b_n, and so (4.1) is satisfied. Moreover,

A^0 C = (c0, ···)^T,  A C = (c1, ···)^T,  ...,  A^{k−1} C = (c_{k−1}, ···)^T,

and so (4.2) is also satisfied. □
We shall find a combinatorial expression for the a^{(m)}'s. Let A^m = (a_{ij}^{(m)}). Then by (4.5) we have, for m ≥ k, that

a^{(m)} = Σ_{i=0}^{k−1} c_i a_{1,i+1}^{(m)} + Σ_{i=0}^{m−k} b_i a_{1k}^{(m−i−1)}.   (4.6)
Definition 4.1.2 For a matrix A = (a_ij) ∈ M_n, define a weighted digraph D (called the weighted associate digraph of A) as follows. Let V(D) = {1, 2, ..., n}. For i, j ∈ V(D), an arc (i, j) ∈ E(D), with weight w(i, j) = a_ij, exists if and only if a_ij ≠ 0. If T = v0 v1 ··· v_k is a directed walk of D, then the weight of T is w(T) = Π_{i=0}^{k−1} w(v_i, v_{i+1}). Therefore, if A^m = (a_{ij}^{(m)}), then a_{ij}^{(m)} is the sum of the weights of all directed (i, j)-walks of length m (the weighted version of Proposition 1.1.2(vii)).
Lemma 4.1.1 a_{1j}^{(m)} = a_{jj}^{(m+1−j)}.
Proof Note that for 1 ≤ m ≤ k − 1,

a_{1j}^{(m)} = a_{jj}^{(m+1−j)} = 1 if m = j − 1, and 0 if j ≤ m ≤ k − 1.

Now assume m ≥ k. Any directed (1, j)-walk of length m must have the form

1 → 2 → ··· → j → ··· → k → ··· → j,

which yields a one-to-one correspondence between the directed (1, j)-walks of length m and the directed (j, j)-walks of length m − j + 1. Since w(i, i+1) = a_{i,i+1} = 1 for all 1 ≤ i ≤ j − 1, we have a_{1j}^{(m)} = a_{jj}^{(m−j+1)}. □
j(t)
= 0 for each t < 0, j(o) = 1 and (
•• ~ 0, (&
Then for j
=
1. 2, . ..•
+ 82 + •••+ SJ:
81
)
als 1 112s 2 ••• al:•• •
St,82,""' ,8/c
•1+2•2+···+•··=·
•>
= 1,2,··· ,k, j
(m) _ "
ajj

.
L..J G/c•+1
/(m/c+i1)
(4.7)
i=l
Proof By Definition 4.1.2, D has these directed (k, k)walks:
Weight
k~k1~k
Length 1 2
k~k2~k1+k
3
as
Type Ct
k~k
02 Cs
Walk
a1
a2
k
Therefore, any directed (k, k)walk of length m must have 81 of Type Ct. C2, ... ,8/c of Type C~c. For any j with 1:::;; j:::;; k 1, D has these directed (j,j)walks: Type 0{
Walk j~···+k···~k~1~2~···~j
c~
;~
c~
;~
.. ~k···~k~2~3~ ... ~;
.. ·~k .. ·~k~3~4+···~;
82
of Type
Matrices in Combinatorial Problems
165
For each i with 1 ~ i ~ j, the first directed (j,k)wal.k of length k j and the last directed (k,j)wal.k of length j  i+ 1 of c; form a directed closed walk oflength k i + 1. Thus, for each j with 1 ~ j ~ k, (m)
aii i
E J=l
=
E
•t
+ 2•2 + ... + •••  "" •i ~O,t'l/:.•i+l
•t>O.c=••+•
=
i Eaki+1 J=l
E •t
+ 2•2 + • • • + ••• •s
~
+i
m  Jr O,(t = 1,2,··· ,It)
 1
Therefore the lemma follows by the definition of f(m).
O
Theorem 4.1.2 (Liu, [169]) The solution of (4.1) and (4.2) is

u_m = Σ_{j=1}^{k} c_{j−1} Σ_{i=1}^{j} a_{k−j+i} f^{(m−k−i+1)} + Σ_{j=1}^{m−k+1} b_{j−1} f^{(m−k−j+1)}.

Proof This follows from Theorem 4.1.1, (4.6) and (4.7). □
Corollary 4.1.2A Another way to express u_m is

u_m = Σ_{j=1}^{k} c_{j−1} Σ_{i=1}^{j} a_{k−i+1} f^{(m−k−j+i)} + Σ_{j=1}^{m−k+1} b_{j−1} f^{(m−k−j+1)}.
Corollary 4.1.2B (Tu, [261]) Let k and r be integers with 1 ≤ r ≤ k − 1. The difference equation

u_{n+k} = a u_{n+r} + b u_n + b_n,
u_0 = c_0, u_1 = c_1, ..., u_{k−1} = c_{k−1}

has the solution

u_m = Σ_{j=0}^{r−1} c_j b f^{(m−k−j)} + Σ_{j=r}^{k−1} c_j f^{(m−j)} + Σ_{j=1}^{m−k+1} b_{j−1} f^{(m−k−j+1)},

where

f^{(t)} = Σ_{(k−r)u + kv = t, u,v ≥ 0} ( (u + v)! / (u! v!) ) a^u b^v.

Proof Let a_k = b, a_{k−r} = a, and all other a_i = 0. Then Corollary 4.1.2B follows from Theorem 4.1.2. □

Corollary 4.1.2C Letting a = b = 1, b_n = 0, r = 1, and c_0 = c_1 = 1 in Corollary 4.1.2B, we obtain the Fibonacci sequence

u_m = f^{(m)} = Σ_{u + 2v = m, u,v ≥ 0} ( (u + v)! / (u! v!) ).
Example 4.1.1 Solve the difference equation

F_{n+5} = 2 F_{n+4} + 3 F_n + (2n − 1),
F_0 = 1, F_1 = 0, F_2 = 1, F_3 = 2, F_4 = 3.

In this case,

k = 5, r = 4, a = 2, b = 3, b_n = 2n − 1,
c_0 = 1, c_1 = 0, c_2 = 1, c_3 = 2, c_4 = 3,

and so

F_n = 3 Σ_{x=0}^{⌊(n−5)/5⌋} C(n−4x−5, x) 3^x 2^{n−5x−5}
    + 3 Σ_{x=0}^{⌊(n−7)/5⌋} C(n−4x−7, x) 3^x 2^{n−5x−7}
    + 6 Σ_{x=0}^{⌊(n−8)/5⌋} C(n−4x−8, x) 3^x 2^{n−5x−8}
    + 3 Σ_{x=0}^{⌊(n−4)/5⌋} C(n−4x−4, x) 3^x 2^{n−5x−4}
    + Σ_{j=1}^{n−4} (2j−3) Σ_{x=0}^{⌊(n−4−j)/5⌋} C(n−4x−j−4, x) 3^x 2^{n−5x−j−4},

where C(p, q) denotes the binomial coefficient.
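The closed form of Example 4.1.1 can be verified against direct iteration of the recurrence; this check is a sketch, valid for n ≥ 5.

```python
from math import comb

def f(t):
    # f^{(t)} for a_1 = 2, a_5 = 3, all other a_i = 0 (Corollary 4.1.2B with k=5, r=4)
    if t < 0:
        return 0
    return sum(comb(t - 4 * x, x) * 3 ** x * 2 ** (t - 5 * x)
               for x in range(t // 5 + 1))

def F_closed(n):
    s = 3 * f(n - 5) + 3 * f(n - 7) + 6 * f(n - 8) + 3 * f(n - 4)
    s += sum((2 * j - 3) * f(n - 4 - j) for j in range(1, n - 3))
    return s

def F_direct(n):
    F = [1, 0, 1, 2, 3]                                  # F_0, ..., F_4
    for i in range(5, n + 1):
        F.append(2 * F[-1] + 3 * F[i - 5] + (2 * (i - 5) - 1))
    return F[n]

print(all(F_closed(n) == F_direct(n) for n in range(5, 25)))
```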
4.2
Matrices in Some Combinatorial Configurations
Incidence matrices are a very useful tool in the study of some combinatorial configurations. In this section, we describe how incidence matrices can be applied to investigate the properties of systems of distinct representatives, of bipartite graph coverings, and of certain incomplete block designs.

Definition 4.2.1 Let X = {x1, x2, ..., xn} be a set and let A = {X1, X2, ..., Xm} denote a family of subsets of X. (Members of a family need not be distinct.)
The incidence matrix of A is the matrix A = (a_ij) ∈ B_{m,n} satisfying

a_ij = 1 if x_j ∈ X_i,  and  a_ij = 0 if x_j ∉ X_i,

where 1 ≤ i ≤ m and 1 ≤ j ≤ n.
Example 4.2.1 The incidence of elements of X in members of A can also be represented by a bipartite graph G with vertex partite sets X = {x1, x2, ..., xn} and Y = {y1, y2, ..., ym} such that x_i y_j ∈ E(G) if and only if x_i ∈ X_j, for each 1 ≤ i ≤ n and 1 ≤ j ≤ m. Let A be the incidence matrix of A. Note that a set of k mutually independent entries of A (entries no two of which lie in the same row or the same column; see Section 6.2 in the Appendix) corresponds to k edges of E(G) that are mutually disjoint (called a matching in graph theory). In a graph H, a vertex and an edge are said to cover each other if they are incident. A set of vertices covering all the edges of H is called a vertex cover of H. A line of the incidence matrix A of A corresponds to either an element of X or a member of A, either of which is a vertex of G. Therefore, Theorem 6.2.2 in the Appendix says that in a bipartite graph, the number of edges in a maximum matching is equal to the number of vertices in a minimum vertex cover.

Definition 4.2.2 A family of elements (x_i : i ∈ I) in X is a system of representatives (SR) of A if x_i ∈ X_i for each i ∈ I. An SR (x_i : i ∈ I) is a system of distinct representatives (SDR) of A if for each i, j ∈ I with i ≠ j, we have x_i ≠ x_j.

Example 4.2.2 Let X = {1, 2, 3, 4, 5}, X1 = X2 = {1, 2, 4}, X3 = {2, 3, 5} and X4 = {1, 2, 4, 5}. Then both D1 = {1, 2, 3, 4} and D2 = {4, 2, 5, 1} are SDRs for this family. However, for the same ground set X, if we redefine X1 = {1, 2}, X2 = {2, 4}, X3 = {1, 2, 4} and X4 = {1, 4}, then the family {X1, X2, X3, X4} does not have an SDR.
Example 4.2.3 Let A be the incidence matrix of A. By Definitions 4.2.1 and 4.2.2, a set of k mutually independent entries of A corresponds to a subset of k distinct elements of X such that, for some k members X_{i_1}, X_{i_2}, ..., X_{i_k} of A, we have x_j ∈ X_{i_j} for each 1 ≤ j ≤ k (called a partial transversal of A). Thus a partial transversal of |A| elements is just an SDR of A.

Several major results concerning transversals are given below. Proposition 4.2.1 is straightforward, while the proofs of Theorems 4.2.1, 4.2.2 and 4.2.3 can be found in [113].

Proposition 4.2.1 Let X = {x1, x2, ..., xn} be a set and let A = {X1, X2, ..., Xm} denote a family of subsets of X. Let A ∈ B_{m,n} denote the incidence matrix of A. Each of the following holds.
(i) The family A has an SDR if and only if ρ(A), the term rank of A, is equal to m.
(ii) The number of SDRs of A is equal to per(A).

Theorem 4.2.1 (P. Hall) A family A = {X_i : i ∈ I} has an SDR if and only if for each subset J ⊆ I, |∪_{j∈J} X_j| ≥ |J|.
Given a family X = {X1, X2, ..., Xm}, N(X) = N(X1, ..., Xm) denotes the number of SDRs of the family.
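Proposition 4.2.1(ii) and Hall's condition can be explored by brute force on the two families of Example 4.2.2. The recursive counter below enumerates SDRs directly; for larger instances one would evaluate per(A) of the incidence matrix instead.

```python
from itertools import combinations

def sdr_count(family):
    """Number of SDRs: tuples (x_1, ..., x_m) with x_i in X_i, all distinct."""
    def rec(i, used):
        if i == len(family):
            return 1
        return sum(rec(i + 1, used | {x}) for x in family[i] - used)
    return rec(0, frozenset())

def hall(family):
    """P. Hall's condition: every k-subfamily covers at least k elements."""
    m = len(family)
    return all(len(set().union(*sub)) >= k
               for k in range(1, m + 1)
               for sub in combinations(family, k))

good = [{1, 2, 4}, {1, 2, 4}, {2, 3, 5}, {1, 2, 4, 5}]   # Example 4.2.2
bad  = [{1, 2}, {2, 4}, {1, 2, 4}, {1, 4}]
print(sdr_count(good), hall(good))   # positive count, Hall holds
print(sdr_count(bad), hall(bad))     # 0 SDRs: four sets cover only {1, 2, 4}
```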
Theorem 4.2.2 (M. Hall) Suppose the family X = {X1, X2, ..., Xm} has an SDR. If |X_i| ≥ k for each i, then

N(X) ≥ k!  if k ≤ m,  and  N(X) ≥ k!/(k − m)!  if k > m.
Van Lint obtained a better lower bound in [162]. For integers m > 0 and n1, ..., nm, let F_m(n1, n2, ..., nm) be the function defined in [162].

Theorem 4.2.3 (Van Lint, [162]) Suppose the family X = {X1, X2, ..., Xm} has an SDR. If |X_i| ≥ n_i for each i, then

N(X) ≥ F_m(n1, n2, ..., nm).
Definition 4.2.3 Let S be a set. A partition of S is a collection of subsets {A1, A2, ..., Am} such that (i) S = ∪_{i=1}^{m} A_i, and (ii) A_i ∩ A_j = ∅ whenever i ≠ j. Suppose that S has two partitions {A1, A2, ..., Am} and {B1, B2, ..., Bm}. A subset E ⊆ S is a system of common representatives (abbreviated SCR) if for each i, j ∈ {1, 2, ..., m},

E ∩ A_i ≠ ∅ and E ∩ B_j ≠ ∅.
Theorem 4.2.4 Suppose that S has two partitions {A1, A2, ..., Am} and {B1, B2, ..., Bm}. Then these two partitions have an SCR if and only if for any integer k with 1 ≤ k ≤ m, and for any k subsets A_{i_1}, A_{i_2}, ..., A_{i_k}, the union ∪_{j=1}^{k} A_{i_j} contains at most k distinct members of {B1, B2, ..., Bm}.

Interested readers can find the proofs of Theorems 4.2.1-4.2.4 in [222] and [162].

Definition 4.2.4 For integers k, t, λ ≥ 0, a family {X1, X2, ..., Xb} of subsets (called the blocks) of a ground set X = {x1, x2, ..., xv} is a t-design, denoted by S_λ(t, k, v), if
(i) |X_i| = k, and (ii) for each t-element subset T of X, there are exactly λ blocks that contain T.

An S_λ(2, k, v) is also called a balanced incomplete block design (abbreviated BIBD, or a (v, k, λ)-BIBD). A BIBD with b = v is a symmetric balanced incomplete block design (abbreviated SBIBD, or a (v, k, λ)-SBIBD). An S_1(2, k, v) is a Steiner system.

Example 4.2.4 The incidence matrix of an S_1(2, 3, 7) (a Steiner triple system), with blocks {i, i+1, i+3} taken modulo 7:

A = [ 1 1 0 1 0 0 0 ]
    [ 0 1 1 0 1 0 0 ]
    [ 0 0 1 1 0 1 0 ]
    [ 0 0 0 1 1 0 1 ]
    [ 1 0 0 0 1 1 0 ]
    [ 0 1 0 0 0 1 1 ]
    [ 1 0 1 0 0 0 1 ]
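An incidence matrix of an S_1(2, 3, 7) can be built and checked mechanically. The cyclic construction below (blocks {i, i+1, i+3} mod 7, a standard realization) also exhibits the identity A^T A = (r − λ) I_v + λ J_v used in the proof of Theorem 4.2.5.

```python
import numpy as np

A = np.zeros((7, 7), dtype=int)
for i in range(7):
    for j in (0, 1, 3):                  # block {i, i+1, i+3} mod 7
        A[i, (i + j) % 7] = 1

assert (A.sum(axis=1) == 3).all()        # every block has k = 3 points
assert (A.sum(axis=0) == 3).all()        # every point lies in r = 3 blocks
G = A.T @ A                              # should equal (r - lambda) I + lambda J = 2I + J
assert (G == 2 * np.eye(7, dtype=int) + np.ones((7, 7), dtype=int)).all()
print("valid S_1(2,3,7)")
```

The off-diagonal entries of A^T A being all 1 is exactly the statement that every pair of points lies in λ = 1 common block.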
Proposition 4.2.2 The five parameters of a BIBD are not independent. They are related by these equalities: (i) rv = bk. (ii) λ(v − 1) = r(k − 1).

Theorem 4.2.5 Let A = (a_ij) ∈ B_{b,v} denote the incidence matrix of a BIBD S_λ(2, k, v). If v > k, then
(i) A^T A is nonsingular, and

(A^T A)^{−1} = ((r − λ) I_v + λ J_v)^{−1} = (1/(r − λ)) (I_v − (λ/(rk)) J_v).
(ii) (Fisher inequality) b ≥ v.

Proof By the definition of an S_λ(2, k, v), we have the following:

r(k − 1) = λ(v − 1),
A J_v = k J_{b,v},
J_b A = r J_{b,v},
A^T A = (r − λ) I_v + λ J_v.

If v > k, then r > λ, and so the matrix (r − λ) I_v + λ J_v has eigenvalues r + (v − 1)λ and r − λ (the latter with multiplicity v − 1). Since all v eigenvalues of A^T A are nonzero, A^T A is nonsingular, and the rank of A is v. The formula for (A^T A)^{−1} follows by direct computation.
Note that A ∈ B_{b,v} and the rank of A is v. The Fisher inequality b ≥ v now follows. □
Theorem 4.2.6 (Bruck-Ryser-Chowla, [222]) If a (v, k, λ)-SBIBD exists, and if v is odd, then the equation

z² = (k − λ) x² + (−1)^{(v−1)/2} λ y²

has a solution in integers x, y and z, not all zero.

We omit the proof of Theorem 4.2.6, which can be found in [222]. In the following, we shall apply linear algebra and matrix techniques to derive the Connor inequalities. From linear algebra, it is well known that the matrices P = A (A^T A)^{−1} A^T and Q = I_b − P are symmetric and idempotent, hence positive semidefinite. By Theorem 4.2.5,

Q = I_b − P = I_b − (1/(r − λ)) (A A^T − (λk/r) J_b).
By the definition of an incidence matrix, any m × m principal submatrix of Q can be obtained as follows. Pick a subfamily A′ = {X1′, X2′, ..., Xm′} of A. Let μ_ij denote the number of elements common to X_i′ and X_j′ (1 ≤ i ≤ m, 1 ≤ j ≤ m), and let U = (μ_ij). Then

Q_m = I_m − (1/(r − λ)) (U − (λk/r) J_m)

is an m × m principal submatrix. Since Q is positive semidefinite, det(Q_m) ≥ 0. Hence we have the following.

Theorem 4.2.7 (Connor inequalities) With the notation above, if Q_m is a principal submatrix of Q = I_b − P, then det(Q_m) ≥ 0.

Note that there are 2^b − 1 such inequalities. When m = 1, this yields the Fisher inequality. When m = 2, the Connor inequalities say that the number μ of elements common to two
blocks must satisfy

    (r − k)(r − λ) ≥ |λk − rμ|.

Readers interested in design theory are referred to [76].
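For the Fano plane (r = k = 3, λ = 1) the m = 2 inequality forces λk = rμ, i.e. any two blocks meet in exactly one point. A short enumeration confirms this; the block list below is a standard choice, not taken from the text.

```python
from itertools import combinations

# Fano plane S_1(2,3,7): r = k = 3, lam = 1 (standard block list).
blocks = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]
r, k, lam = 3, 3, 1

for B1, B2 in combinations(blocks, 2):
    mu = len(B1 & B2)                      # common points of the two blocks
    assert (r - k) * (r - lam) >= abs(lam * k - r * mu)
    # Here (r - k)(r - lam) = 0, so lam*k = r*mu, i.e. mu = 1:
    assert mu == 1
```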
4.3
Decomposition of Graphs
Definition 4.3.1 Let G be a loopless graph. A bipartite decomposition of G is a collection {G_1, G_2, ..., G_r} of edge-disjoint complete bipartite subgraphs of G such that E(G) = ∪_{i=1}^r E(G_i).
Since a graph with a loop cannot have a bipartite decomposition, we assume all graphs in this section are loopless.

Example 4.3.1 Let H be a graph and let v ∈ V(H). Denote by E_H(v) the edges in H incident with v. For a complete graph G = K_n with V(G) = {v_1, v_2, ..., v_n}, for 1 ≤ i ≤ n − 1, let

    G_i = (G − {v_1, ..., v_{i−1}})[E_{G − {v_1, ..., v_{i−1}}}(v_i)].

(Thus G_i is the subgraph of G − {v_1, ..., v_{i−1}} induced by the edges incident with v_i in G − {v_1, ..., v_{i−1}}.) Each G_i is a star, and hence complete bipartite. Then {G_1, G_2, ..., G_{n−1}} is a bipartite decomposition of K_n.

What is the smallest number r such that K_n has a bipartite decomposition of r subgraphs? This was first answered by Graham and Pollak. Different proofs were later given by Tverberg and by Peck.

Theorem 4.3.1 (Graham and Pollak, [105], Tverberg, [263], and Peck, [210]) If {G_1, G_2, ..., G_r} is a bipartite decomposition of K_n, then r ≥ n − 1.

Theorem 4.3.2 (Graham and Pollak, [105]) Let G be a multigraph with n vertices with a bipartite decomposition {G_1, G_2, ..., G_r}. Let A = A(G) = (a_ij) be the adjacency matrix of G, and let n_+, n_− denote the number of positive eigenvalues and the number of negative eigenvalues of A, respectively. Then r ≥ max{n_+, n_−}.

Proof A complete bipartite subgraph G_i of G with vertex partite sets X_i and Y_i can be obtained by selecting two disjoint nonempty subsets X_i and Y_i of V(G). Therefore we write (X_i, Y_i) for G_i, 1 ≤ i ≤ r. Let x_1, ..., x_n be n unknowns, let x = (x_1, x_2, ..., x_n)^T, and let

    q(x) = x^T A x = 2 Σ_{1≤i<j≤n} a_ij x_i x_j.

For each (X_i, Y_i), let

    q_i(x) = (Σ_{x_k ∈ X_i} x_k)(Σ_{x_l ∈ Y_i} x_l).
Since {G_1, G_2, ..., G_r} is a bipartite decomposition of G,

    q(x) = x^T A x = 2 Σ_{i=1}^r q_i(x).

By the identity 4ab = (a + b)² − (a − b)²,

    q(x) = Σ_{i=1}^r (l'_i(x))² − Σ_{i=1}^r (l''_i(x))²,

where l'_i(x) and l''_i(x) are linear combinations of x_1, x_2, ..., x_n. Note that l'_1, l'_2, ..., l'_r take zero values over a vector subspace W of dimension at least n − r, and therefore q(x) is negative semidefinite over W. Let E_+ denote the vector subspace spanned by the eigenvectors of A corresponding to the positive eigenvalues. Then E_+ has dimension n_+ and q(x) is positive definite over E_+. It follows that W ∩ E_+ = {0}, so (n − r) + n_+ ≤ n, and r ≥ n_+. Similarly, r ≥ n_−. □
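The eigenvalue count behind this argument is easy to verify: A(K_n) = J_n − I_n has one positive eigenvalue (n − 1) and n − 1 negative eigenvalues (all equal to −1), so Theorem 4.3.2 immediately gives r ≥ n − 1 for K_n. A minimal NumPy sketch:

```python
import numpy as np

n = 8
A = np.ones((n, n)) - np.eye(n)        # adjacency matrix of K_n
eig = np.linalg.eigvalsh(A)            # eigenvalues in ascending order

n_minus = int((eig < -1e-9).sum())     # number of negative eigenvalues
n_plus = int((eig > 1e-9).sum())       # number of positive eigenvalues
assert n_minus == n - 1 and n_plus == 1
# Theorem 4.3.2: any bipartite decomposition needs r >= max(n_plus, n_minus) = n - 1.
```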
Note that A(K_n) = J_n − I_n has n − 1 negative eigenvalues. Thus Theorem 4.3.2 implies Theorem 4.3.1.

Definition 4.3.2 A bipartite decomposition {G_1, G_2, ..., G_r} of a graph G may be viewed as an edge coloring of E(G) with r colors, such that the edges colored with color i induce the complete bipartite subgraph G_i. A subgraph H of G is multicolored if no two edges of H receive the same color.

Theorem 4.3.2 has been extended by Alon, Brualdi and Shader in [3]. We refer the interested readers to [3] for a proof.

Theorem 4.3.3 (Alon, Brualdi and Shader, [3]) Let G be a graph with n vertices such that A = A(G) has n_+ positive eigenvalues and n_− negative eigenvalues. Then in any bipartite decomposition {G_1, G_2, ..., G_r}, there is a multicolored forest with at least max{n_+, n_−} edges. In particular, any bipartite decomposition of K_n contains a multicolored tree.

Theorem 4.3.4 (Caen and Gregory, [44]) Let n ≥ 2 and let K*_{n,n} denote the graph obtained from K_{n,n} by deleting the edges in a perfect matching.
(i) If K*_{n,n} has a bipartite decomposition {G_1, ..., G_r}, then r ≥ n.
(ii) If r = n, then there exist integers p > 0 and q > 0 such that pq = n − 1 and each G_i is isomorphic to K_{p,q}.

Proof Let {X, Y} denote the bipartition of V(K*_{n,n}), where X = {x_1, x_2, ..., x_n} and Y = {y_1, y_2, ..., y_n}. For each i with 1 ≤ i ≤ r, G_i has bipartition {X_i, Y_i} with X_i ⊆ X and Y_i ⊆ Y. Let G'_i be the subgraph induced by the edge set E(G_i), and let A_i = A(G'_i) be recorded as the n × n matrix whose (k,l)-entry is 1 if and only if x_k y_l ∈ E(G_i). Note that

    J − I = Σ_{i=1}^r A_i.   (4.8)
For each i with 1 ≤ i ≤ r, let

    x_i = (x_1^{(i)}, x_2^{(i)}, ..., x_n^{(i)})^T and y_i = (y_1^{(i)}, y_2^{(i)}, ..., y_n^{(i)})^T

such that for k = 1, 2, ..., n,

    x_k^{(i)} = 1 if x_k ∈ X_i and 0 otherwise, and y_k^{(i)} = 1 if y_k ∈ Y_i and 0 otherwise.

Therefore, A_i = x_i y_i^T, 1 ≤ i ≤ r. Let A_X = (x_1, x_2, ..., x_r) ∈ B_{n,r} and A_Y = (y_1, y_2, ..., y_r)^T ∈ B_{r,n}. Then by (4.8),

    J − I = A_X A_Y.   (4.9)

By (4.9), n = rank(J − I) ≤ r, and so Theorem 4.3.4(i) follows.

By (4.9), for each i with 1 ≤ i ≤ r, y_i^T x_i = 0. For integers i, j with 1 ≤ i, j ≤ n, define U ∈ B_{n,n−1} to be the matrix obtained from A_X by deleting Column i and Column j from A_X and by adding an all-1 column as the first column of U; and define V ∈ B_{n−1,n} to be the matrix obtained from A_Y by deleting Row i and Row j from A_Y and by adding an all-1 row as the first row of V. Then
UV is a singular matrix. Let U_1 denote the n × 2 matrix with columns x_i and x_j, and let V_1 denote the 2 × n matrix with rows y_i^T and y_j^T. It follows that

    0 = det(UV) = det(I_n + U_1 V_1) = det(I_2 + V_1 U_1) = 1 − (y_i^T x_j)(y_j^T x_i),

and so for each i, j,

    y_i^T x_j = y_j^T x_i = 1.   (4.10)
If r = n, then by (4.9), we must have

    A_Y A_X = J − I,

and so by (4.9) and (4.10),

    A_X J = J A_X and A_Y J = J A_Y,

and so all the rows of A_X have the same row sum and all the columns of A_X have the same column sum. Hence there exists an integer p ≥ 0 such that A_X J = J A_X = pJ. Similarly, there exists an integer q ≥ 0 such that A_Y J = J A_Y = qJ. It follows that

    (n − 1)J = (J − I)J = (A_X A_Y)J = A_X (A_Y J) = A_X (qJ) = (pq)J,

and so Theorem 4.3.4(ii) obtains. □
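The rank computation in part (i) can be checked directly: J − I has eigenvalues n − 1 (once) and −1 (n − 1 times), all nonzero, so any factorization J − I = A_X A_Y through an n × r matrix forces r ≥ n. A quick sketch:

```python
import numpy as np

n = 6
M = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)   # J - I
assert np.linalg.matrix_rank(M) == n
# If J - I = A_X A_Y with A_X an n x r matrix, then
# n = rank(J - I) <= rank(A_X) <= r.
```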
Definition 4.3.3 Let m_1, m_2, ..., m_t be positive integers, and let G be a graph. A complete (m_1, m_2, ..., m_t)-decomposition of G is a collection of edge-disjoint subgraphs {G_1, ..., G_t} such that E(G) = ∪_{i=1}^t E(G_i) and such that each G_i is a complete m_i-partite graph.

Another extension of Theorem 4.3.1 is the next theorem, which was first proposed by Hsu [134].

Theorem 4.3.5 (Liu, [173]) If K_n has a complete (m_1, m_2, ..., m_t)-decomposition, then

    n ≤ Σ_{i=1}^t (m_i − 1) + 1.
Proof Suppose that {G_1, ..., G_t} is a complete (m_1, ..., m_t)-decomposition of K_n, where G_i is a complete m_i-partite graph with partite sets A_{i,1}, A_{i,2}, ..., A_{i,m_i}, (1 ≤ i ≤ t). Let x_1, x_2, ..., x_n be real variables, and for i = 1, 2, ..., t and j = 1, 2, ..., m_i, let

    L_{i,j} = Σ_{k ∈ A_{i,j}} x_k.

Note that

    Σ_{i=1}^t Σ_{1≤j<k≤m_i} L_{i,j} L_{i,k} = Σ_{1≤i<j≤n} x_i x_j.

Consider the system of equations

    L_{i,j} = 0, for 1 ≤ i ≤ t and 1 ≤ j ≤ m_i − 1,
    Σ_{k=1}^n x_k = 0.

By contradiction, assume that n > Σ_{i=1}^t (m_i − 1) + 1. Then the system has more variables than equations, and so it must have a nonzero solution (x_1, x_2, ..., x_n).
For such a solution, each product L_{i,j}L_{i,k} with j < k ≤ m_i vanishes, since L_{i,j} = 0 whenever j ≤ m_i − 1. Hence

    0 = (Σ_{k=1}^n x_k)² = 2 Σ_{1≤i<j≤n} x_i x_j + Σ_{k=1}^n x_k²
      = 2 Σ_{i=1}^t Σ_{1≤j<k≤m_i} L_{i,j} L_{i,k} + Σ_{k=1}^n x_k²
      = Σ_{k=1}^n x_k² > 0,

a contradiction. □
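The key identity in this proof can be tested on a concrete decomposition. In the star decomposition of K_n from Example 4.3.1, G_i is complete bipartite with parts A_{i,1} = {v_i} and A_{i,2} = {v_{i+1}, ..., v_n}, so L_{i,1} = x_i and L_{i,2} = x_{i+1} + ... + x_n, and the identity reduces to a short NumPy check:

```python
import numpy as np

n = 7
rng = np.random.default_rng(1)
x = rng.standard_normal(n)

# Star decomposition of K_n (0-based): G_i has parts {i} and {i+1, ..., n-1},
# so L_{i,1} = x[i] and L_{i,2} = x[i+1] + ... + x[n-1].
lhs = sum(x[i] * x[i + 1:].sum() for i in range(n - 1))

# Right-hand side of the identity: sum of x_i x_j over all pairs i < j.
rhs = sum(x[i] * x[j] for i in range(n) for j in range(i + 1, n))

assert abs(lhs - rhs) < 1e-9
```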
Corollary 4.3.5 (Huang, [136]) If K_n has a complete (m_1, m_2, ..., m_t)-decomposition with m_1 = m_2 = ... = m_t = m, then

    t ≥ (n − 1)/(m − 1).

Corollary 4.3.5 follows immediately from Theorem 4.3.5. When m = 2, Corollary 4.3.5 yields Theorem 4.3.1.
Definition 4.3.4 Let G be a simple graph and let x, y ∈ V(G). Let P_k(x,y) denote a collection of k internally disjoint (x,y)-paths:

    P_k(x,y) = {P_1, P_2, ..., P_k},

where the paths are so labeled that |E(P_1)| ≤ |E(P_2)| ≤ ... ≤ |E(P_k)|. The k-distance between x and y is

    d_k(x,y) = min |E(P_k)|,

where the minimum is taken over all such collections P_k(x,y), and the k-diameter of G is

    d_k(G) = max_{x,y ∈ V(G)} d_k(x,y).

The following problem was posed by Hsu [134]: Given two sequences of natural numbers L_t = {l_1, l_2, ..., l_t} and D_t = {d_1, d_2, ..., d_t}, is it possible to decompose a K_n (n > 2) into subgraphs F_1, F_2, ..., F_t such that each F_i is l_i-connected and such that d_{l_i}(F_i) ≤ d_i, 1 ≤ i ≤ t? Such a decomposition, if it exists, is called an (L_t, D_t)-decomposition of K_n.
Let f(L_t, D_t) denote the smallest integer n such that K_n has an (L_t, D_t)-decomposition. When l_1 = l_2 = ... = l_t = l and d_1 = d_2 = ... = d_t = d, we write f(l,d,t) for f(L_t, D_t).

Theorem 4.3.6 Let L_t = {l_1, l_2, ..., l_t} and D_t = {d_1, d_2, ..., d_t}, and let M = Σ_{i=1}^t (l_i + 1)l_i. Then

    f(L_t, D_t) ≥ (1 + √(1 + 4M))/2.   (4.11)

Proof Suppose that K_n has an (L_t, D_t)-decomposition F_1, F_2, ..., F_t. Since each F_i is l_i-connected, it has minimum degree at least l_i and at least l_i + 1 vertices, and so

    n(n − 1)/2 = Σ_{i=1}^t |E(F_i)| ≥ (1/2) Σ_{i=1}^t |V(F_i)| l_i ≥ (1/2) Σ_{i=1}^t (l_i + 1) l_i = M/2.

Therefore n(n − 1) ≥ M, and so (4.11) obtains. □
Corollary 4.3.6 f(l,d,t) ≥ (1 + √(1 + 4tl(l + 1)))/2.

Proof When l_1 = l_2 = ... = l_t = l,

    M = Σ_{i=1}^t (l_i + 1)l_i = tl(l + 1),

which implies Corollary 4.3.6. □

Theorem 4.3.7 Let L_t = {l_1, l_2, ..., l_t} and D_t = {d_1, d_2, ..., d_t} with each d_i > 1. Then

    f(L_t, D_t) ≤ Σ_{i=1}^t l_i + 1.   (4.12)
Proof Since a complete (l_i + 1)-partite graph is l_i-connected, it suffices to decompose K_n into t subgraphs F_1, F_2, ..., F_t such that each F_i is a complete (l_i + 1)-partite graph. Thus (4.12) follows from Theorem 4.3.5. □

By Theorem 4.3.6 and Theorem 4.3.7, we obtain the corollary below.

Corollary 4.3.6 Let M = Σ_{i=1}^t l_i(l_i + 1). Then
(i) (1 + √(1 + 4M))/2 ≤ f(L_t, D_t) ≤ Σ_{i=1}^t l_i + 1.
(ii) (1 + √(1 + 4tl(l + 1)))/2 ≤ f(l,d,t) ≤ tl + 1.

The upper bound in Corollary 4.3.6(ii) can be improved by applying a result of Huang [136].

Theorem 4.3.7 (Huang, [136]) Let b, m, n be integers such that n = b(m − 1) + m(b − 1) and 2 ≤ b ≤ m − 1. Then K_n can be decomposed into b + 1 complete m-partite subgraphs.

Theorem 4.3.8 For 3 ≤ t ≤ l + 1 and d ≥ 2,

    f(l,d,t) ≤ tl − t + 3.   (4.13)

4.4
Matrix-Tree Theorems
Let D be a digraph. Aside from the adjacency matrix A(D), there are other matrices associated with D. The study of these matrices can reveal combinatorial properties of the digraph D.
Definition 4.4.1 Let D be a digraph with n vertices and without multiple arcs and loops. Let A(D) ∈ B_n be the adjacency matrix of D. The path matrix of D is

    P(D) = Σ_{k=1}^n A^k(D),
where the multiplication and addition are Boolean.

Theorem 4.4.1 Let D be a digraph with V(D) = {v_1, v_2, ..., v_n} and denote P(D) = (p_ij).
(i) D is strongly connected if and only if P(D) = J.
(ii) Define the entrywise product

    P(D) ∘ P^T(D) = (p_ij p_ji).

If the nonzero entries in the ith row of P(D) ∘ P^T(D) are

    p_{i j_1} p_{j_1 i}, p_{i j_2} p_{j_2 i}, ..., p_{i j_k} p_{j_k i},

then D_1 = D[{v_i, v_{j_1}, v_{j_2}, ..., v_{j_k}}], the subgraph of D induced by the vertices {v_i, v_{j_1}, v_{j_2}, ..., v_{j_k}}, is a maximal strong subgraph of D containing v_i (that is, for any u ∈ V(D) − V(D_1), D[V(D_1) ∪ {u}] is not strong).

Proof We only need to prove Part (ii). Note that for each t = 1, 2, ..., k, p_{i j_t} = p_{j_t i} = 1. Therefore D has a directed (v_i, v_{j_t})-path and a directed (v_{j_t}, v_i)-path, and so D has a closed walk W_0 such that W_0 contains all the vertices v_i, v_{j_1}, ..., v_{j_k}. If v_j ∈ V(W_0) − {v_i}, then D has a directed closed walk containing v_i and v_j, and so p_ij = p_ji = 1. It follows that j ∈ {j_1, j_2, ..., j_k}, and so V(W_0) = V(D_1). Suppose that there exists a v_j ∈ V(D) − V(D_1) such that D[V(D_1) ∪ {v_j}] is also a strong subgraph of D. Then once again we have p_ij = p_ji = 1, and so j ∈ {j_1, j_2, ..., j_k}, a contradiction. □

Definition 4.4.2 Let D be a digraph. For a vertex v ∈ V(D) and an arc e ∈ E(D), write v = tail(e) if in D e is an out-arc of v, and write v = head(e) if e is an in-arc of v. The incidence matrix of D is B(D) = (b_ij) ∈ M_{n,m}, defined in Definition 1.1.3. Let B_1(D) denote a matrix obtained from B(D) by deleting a row of B(D). We can imitate the proof of Theorem 4.4.1 to obtain the following.

Proposition 4.4.1 Let c ≥ 1 be an integer and let G denote the underlying graph of D (that is, G is obtained from D by replacing each directed edge of D by an undirected
edge). Then G has c components if and only if

    r(B(D)) = r(B_1(D)) = n − c,

where r(B(D)) and r(B_1(D)) are the ranks of B(D) and B_1(D), respectively.

Theorem 4.4.2 Let D be a digraph with n vertices and let G denote the underlying graph of D. The arcs e_{i_1}, e_{i_2}, ..., e_{i_{n−1}} form a spanning tree of G if and only if the submatrix of B_1(D) consisting of the columns corresponding to these arcs has determinant 1 or −1.

Proof Let B_1 denote the submatrix of B_1(D) consisting of the columns corresponding to the arcs e_{i_1}, e_{i_2}, ..., e_{i_{n−1}}. The subgraph D_1 induced by these arcs has n vertices and n − 1 arcs, and B_1(D_1) = B_1. If det(B_1) ∈ {1, −1}, then r(B_1) = n − 1, and so by Proposition 4.4.1, D_1 is a spanning tree in G. Conversely, if D_1 is a spanning tree of G, then by Proposition 4.4.1, r(B_1) = n − 1, and so by Theorem 1.1.4, B_1 is a nonsingular unimodular square matrix. It follows that det(B_1) ∈ {1, −1}. □

Definition 4.4.3 Let G be a connected labeled graph. Define τ(G) to be the number of distinct labeled spanning trees of G. If D is a weakly connected digraph, and if G is the underlying graph of D, then define τ(D) = τ(G).

Theorem 4.4.3 (Matrix-Tree Theorem) If D is weakly connected, then the number of spanning trees of the underlying graph of D is

    τ(D) = det(B_1(D) · B_1^T(D)).

Proof By the Binet–Cauchy formula and by Theorem 4.4.2,

    det(B_1(D) · B_1^T(D)) = τ(D) · (±1)² = τ(D). □

Theorem 4.4.4 (Cayley) Let n ≥ 1 be an integer. Then τ(K_n) = n^{n−2}. Here K_n denotes a labeled complete graph on n vertices.
Proof Let B_1(K_n) = (b_ij). Then B_1(K_n) B_1^T(K_n) = (b'_ij) ∈ M_{n−1,n−1}, where for i, j = 1, 2, ..., n − 1,

    b'_ij = Σ_{k=1}^{n(n−1)/2} b_ik b_jk = Σ_k (b_ik)² = n − 1 if i = j, and −1 if i ≠ j.

It follows that

    B_1(K_n) B_1^T(K_n) =
    [ n−1   −1   ...   −1  ]
    [  −1  n−1   ...   −1  ]
    [ ...   ...  ...  ...  ]
    [  −1   −1   ...  n−1  ],

and so by Theorem 4.4.3, τ(K_n) = det(B_1(K_n) B_1^T(K_n)) = n^{n−2}. □
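Theorem 4.4.3 and Cayley's formula can be verified for a small case. The sketch below orients each edge of K_4 arbitrarily, forms the incidence matrix B (entries +1 at the tail, −1 at the head, as in Definition 1.1.3), deletes one row, and evaluates det(B_1 B_1^T) = 4^{4−2} = 16.

```python
import numpy as np
from itertools import combinations

n = 4
edges = list(combinations(range(n), 2))      # edges of K_n, oriented u -> v
m = len(edges)

# n x m incidence matrix: +1 at the tail, -1 at the head of each arc.
B = np.zeros((n, m), dtype=int)
for col, (u, v) in enumerate(edges):
    B[u, col] = 1
    B[v, col] = -1

B1 = B[1:, :]                                # delete one row
tau = round(np.linalg.det(B1 @ B1.T))
assert tau == n ** (n - 2)                   # Cayley: 16 spanning trees of K_4
```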
Definition 4.4.4 Let D be a digraph with V(D) = {v_1, v_2, ..., v_n}. Let d_i^+, d_i^− denote the out-degree and the in-degree of v_i in D, respectively, where 1 ≤ i ≤ n. Define M_out = diag(d_1^+, d_2^+, ..., d_n^+) and M_in = diag(d_1^−, d_2^−, ..., d_n^−), and define C_out = M_out − A(D) and C_in = M_in − A(D).

Definition 4.4.5 Let D be a digraph and let v ∈ V(D). A v-source tree is a weakly connected subgraph in which v has in-degree zero and every other vertex has in-degree exactly one. A v-sink tree is a weakly connected subgraph in which v has out-degree zero and every other vertex has out-degree exactly one.

The propositions below follow from Definitions 4.4.4 and 4.4.5.

Proposition 4.4.2 Each of the following holds.
(i) Each row sum of C_out is zero.
(ii) Each column sum of C_out is zero if and only if D is eulerian. (That is, D has a closed directed trail that uses each arc exactly once.)
(iii) Each column sum of C_in is zero.
(iv) Each row sum of C_in is zero if and only if D is eulerian.
(v) Write C_out = (c_ij). For each i with 1 ≤ i ≤ n, the cofactor of the element c_ij in det(C_out) is equal to the cofactor of the element c_i1 in det(C_out).
(vi) Write C_in = (c_ij). For each j with 1 ≤ j ≤ n, the cofactor of the element c_ij in det(C_in) is equal to the cofactor of the element c_1j in det(C_in).

Proposition 4.4.3 The underlying graph of a v-source tree or a v-sink tree is a tree.
Theorem 4.4.5 Let D be a digraph with V(D) = {v_1, v_2, ..., v_n}. Write C_out = (c_ij) and C_in = (c'_ij). Each of the following holds.
(i) For each j with j = 1, 2, ..., n, the cofactor of the element c_ij in det(C_out) is the number of spanning v_i-sink trees of D.
(ii) For each i with i = 1, 2, ..., n, the cofactor of the element c'_ij in det(C_in) is the number of spanning v_j-source trees of D.

Sketch of Proof By duality, only Part (i) needs a proof. By Proposition 4.4.2(v), the cofactors within a fixed row are all equal, so it suffices to show that the cofactor of the (1,1)-element of C_out equals the number of spanning v_1-sink trees; the other rows are handled similarly. Without loss of generality, assume that (v_i, v_1) ∈ E(D) for some i ≥ 2. Let D' and D'' denote the digraphs obtained from D by deleting and by contracting (v_i, v_1), respectively. Deleting (v_i, v_1) replaces c_ii by c_ii − 1 and c_i1 by c_i1 + 1; since (v_i, v_1) is contracted, C_out(D'') can be obtained from C_out(D) by merging v_i into v_1 and deleting the ith row and the ith column. Let C(D_1) and C(D'_1) denote the cofactors of the (1,1)-element in C_out(D) and C_out(D'), respectively, and let C(D''_1) denote the cofactor of the (1,1)-element in C_out(D''). The (1,1)-minors of C_out(D) and C_out(D') differ only in the (i,i)-entry, namely c_ii versus c_ii − 1, so writing row i of the minor as the sum of the corresponding row of C_out(D') and the unit row (0, ..., 1, ..., 0), and expanding the determinant linearly in row i, we obtain

    C(D_1) = C(D'_1) + C(D''_1).

Let τ(D_1) denote the number of spanning v_1-sink trees in D. Given a spanning v_1-sink tree T, either T is a spanning v_1-sink tree in D' if (v_i, v_1) ∉ E(T), or T'', obtained from T by contracting (v_i, v_1), is a spanning v_1-sink tree in D'' if (v_i, v_1) ∈ E(T). It follows that

    τ(D_1) = τ(D'_1) + τ(D''_1).

With this reduction formula, Part (i) of Theorem 4.4.5 can now be proved by induction on |E(D)| to show that τ(D_1) = C(D_1). □
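Theorem 4.4.5(i) can be checked on a small digraph. For the directed 4-cycle v_1 → v_2 → v_3 → v_4 → v_1, the unique spanning v_1-sink tree is the path v_2 → v_3 → v_4 → v_1, and the (1,1)-cofactor of C_out is indeed 1:

```python
import numpy as np

n = 4
arcs = [(0, 1), (1, 2), (2, 3), (3, 0)]      # directed cycle v1 -> v2 -> v3 -> v4 -> v1

A = np.zeros((n, n), dtype=int)
for u, v in arcs:
    A[u, v] = 1
C_out = np.diag(A.sum(axis=1)) - A           # M_out - A(D)

# Cofactor of the (1,1)-entry: delete row 0 and column 0.
minor = np.delete(np.delete(C_out, 0, axis=0), 0, axis=1)
cofactor = round(np.linalg.det(minor))
assert cofactor == 1    # the unique v1-sink tree is v2 -> v3 -> v4 -> v1
```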
4.5
Shannon Capacity
The Shannon capacity Θ(G) of a graph G, first introduced by Shannon [238], originates from the theory of error correcting codes. The determination of Θ(G) is very difficult even for graphs with very few vertices. In 1979, Lovász introduced a matrix method to study Θ(G) and successfully solved the long standing problem of determining Θ(C_5). We shall introduce this method of Lovász in this section.

Throughout this section, the vertices of a graph G will be letters in a given alphabet, where two letters (vertices of G) are adjacent if and only if these letters can be confused. Thus the maximum number of single letter messages that can be sent without the danger of confusion is α(G), called the independence number of G, which is the maximum cardinality of a vertex subset V' ⊆ V(G) such that the vertices in V' are mutually nonadjacent (a vertex subset with this property is called an independent vertex set).

Definition 4.5.1 Let H_1 and H_2 be two graphs with V_1 = V(H_1) and V_2 = V(H_2). The strong product of H_1 and H_2 is the graph H_1 * H_2, whose vertex set is the product V_1 × V_2, where two distinct vertices (u_1, u_2) and (v_1, v_2) are adjacent if and only if one of the following occurs:
(i) u_1 = v_1 and u_2v_2 ∈ E(H_2); or
(ii) u_2 = v_2 and u_1v_1 ∈ E(H_1); or
(iii) both u_1v_1 ∈ E(H_1) and u_2v_2 ∈ E(H_2).

Let k > 0 be an integer. For a graph G, define G^{(1)} = G and G^{(k)} = G^{(k−1)} * G.

Example 4.5.1 Call two k-letter words confundable if for each 1 ≤ i ≤ k, their ith letters are equal or can be confused. Thus in G^{(k)}, vertices are k-letter words, and two k-letter words are adjacent in G^{(k)} if and only if they are confundable.

Definition 4.5.2 The Shannon capacity of G is the number

    Θ(G) = sup_k (α(G^{(k)}))^{1/k} = lim_{k→+∞} (α(G^{(k)}))^{1/k}.

By Definitions 4.5.1 and 4.5.2, it follows that

    Θ(G) ≥ α(G).   (4.14)

Shannon in [238] showed that if a graph G is perfect (that is, χ(G) = ω(G)), then equality holds in (4.14). For other graphs, like the 5-cycle C_5, Shannon only showed

    √5 ≤ Θ(C_5) ≤ 5/2.   (4.15)
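The lower bound in (4.15) comes from an independent set of size 5 in C_5 * C_5, so that α(C_5^{(2)})^{1/2} ≥ √5. The sketch below builds the strong product explicitly and checks the standard set {(i, 2i mod 5)}:

```python
n = 5

def adj(u, v):                     # adjacency in the 5-cycle C_5
    return (u - v) % n in (1, n - 1)

def strong_adj(p, q):              # adjacency in the strong product C_5 * C_5
    (u1, u2), (v1, v2) = p, q
    if p == q:
        return False
    ok1 = u1 == v1 or adj(u1, v1)
    ok2 = u2 == v2 or adj(u2, v2)
    return ok1 and ok2

S = [(i, (2 * i) % n) for i in range(n)]    # candidate independent set
assert all(not strong_adj(p, q) for p in S for q in S if p != q)
# alpha(C_5^(2)) >= 5, hence Theta(C_5) >= 5 ** 0.5.
```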
Recall that if u = (u_1, u_2, ..., u_n)^T and v = (v_1, v_2, ..., v_m)^T are two vectors, then their tensor product is

    u ⊗ v = (u_1v_1, ..., u_1v_m, u_2v_1, ..., u_nv_m)^T.

Definition 4.5.3 Let G be a simple graph with vertex set {1, 2, ..., n}. An orthonormal representation of G is a system of unit vectors {v_1, v_2, ..., v_n} such that v_i and v_j are orthogonal if and only if the vertices i and j are not adjacent. The value of an orthonormal representation {v_1, v_2, ..., v_n} is

    min_c max_{1≤i≤n} 1/(c^T v_i)²,
where c runs over all unit vectors. If c yields the minimum, then c is called the handle of the representation {v_1, v_2, ..., v_n}. Let

    ϑ(G) = min{value of {v_1, v_2, ..., v_n}},

where the minimum runs over all orthonormal representations of G. A minimum-yielding representation {v_1, v_2, ..., v_n} is an optimal representation.

Proposition 4.5.1 Let G and H be graphs. Each of the following holds.
(i) Every graph G has an orthonormal representation.
(ii) If {u_1, u_2, ..., u_n} and {v_1, v_2, ..., v_m} are orthonormal representations of G and H, respectively, then the vectors {u_i ⊗ v_j : 1 ≤ i ≤ n, 1 ≤ j ≤ m} form an orthonormal representation of G * H.
(iii) ϑ(G * H) ≤ ϑ(G)ϑ(H).
(iv) α(G) ≤ ϑ(G).

Proof For (i), we can take an orthonormal basis of an n-dimensional real vector space. Proposition 4.5.1(ii) follows from the following fact in linear algebra: If u, v, x, y are vectors, then

    (u ⊗ v)^T (x ⊗ y) = (u^T x)(v^T y).   (4.16)

Let {u_1, u_2, ..., u_n} and {v_1, v_2, ..., v_m} be optimal representations of G and H, with handles c and d, respectively. By (4.16), c ⊗ d is a unit vector, and so by (4.16) again,

    ϑ(G * H) ≤ max_{i,j} 1/((c ⊗ d)^T (u_i ⊗ v_j))² = max_{i,j} 1/((c^T u_i)²(d^T v_j)²) ≤ ϑ(G)ϑ(H).

This proves (iii). Without loss of generality, assume that {1, 2, ..., k} is a maximum independent vertex set in G. Then u_1, ..., u_k are mutually orthogonal, and so

    1 = |c|² ≥ Σ_{i=1}^k (c^T u_i)² ≥ k/ϑ(G),

whence ϑ(G) ≥ k = α(G). Hence (iv) follows. □

By Proposition 4.5.1, for each k > 0, α(G^{(k)}) ≤ ϑ(G^{(k)}) ≤ (ϑ(G))^k, and so by Definition 4.5.2, Theorem 4.5.1 below obtains.

Theorem 4.5.1 (Lovász, [189]) Θ(G) ≤ ϑ(G).
Proposition 4.5.2 Let u_1, u_2, ..., u_n be an orthonormal representation of G and let v_1, v_2, ..., v_n be an orthonormal representation of G^c, the complement of G. Each of the following holds.
(i) If c and d are two vectors, then

    Σ_{i=1}^n ((u_i ⊗ v_i)^T (c ⊗ d))² = Σ_{i=1}^n (u_i^T c)²(v_i^T d)² ≤ |c|²|d|².

(ii) If d is a unit vector, then

    ϑ(G) ≥ Σ_{i=1}^n (v_i^T d)².

(iii) ϑ(G)ϑ(G^c) ≥ n.

Proof By (4.16), the vectors u_i ⊗ v_i, (1 ≤ i ≤ n), form an orthonormal system, and so Proposition 4.5.2(i) follows from

    |c ⊗ d|² ≥ Σ_{i=1}^n ((u_i ⊗ v_i)^T (c ⊗ d))²

and from (4.16). Let c be the handle for an optimal representation u_1, u_2, ..., u_n of G; then (c^T u_i)² ≥ 1/ϑ(G) for each i, and so Proposition 4.5.2(ii) follows from Proposition 4.5.2(i). Let d be the handle for an optimal representation v_1, v_2, ..., v_n of G^c. Then Proposition 4.5.2(iii) follows from Proposition 4.5.2(ii). □

Let G be a graph on vertices {1, 2, ..., n}. Let A denote the set of n × n real symmetric matrices A = (a_ij) such that

    a_ij = 1 if i = j or if i and j are nonadjacent,

and let B denote the set of all n × n positive semidefinite matrices B = (b_ij) such that

    b_ij = 0 for all pairs of adjacent vertices (i, j), and
    tr(B) = 1.

Note that tr(BJ) is the sum of all the entries in B.

Theorem 4.5.2 (Lovász, [189]) Let G be a graph with vertices {1, 2, ..., n}. Then each of the following holds.
(i) ϑ(G) = min_{A∈A} {λ_max(A)}.
(ii) ϑ(G) = max_{B∈B} {tr(BJ)}.
Proof Let {u_1, u_2, ..., u_n} be an optimal representation of G with handle c. Define, for 1 ≤ i, j ≤ n,

    a_ij = 1 − (u_i^T u_j)/((c^T u_i)(c^T u_j)) for i ≠ j, and a_ii = 1,

and let A = (a_ij)_{n×n}. Then A ∈ A. Note that the entries of the matrix ϑ(G)I − A are

    (ϑ(G)I − A)_{ij} = (c − u_i/(c^T u_i))^T (c − u_j/(c^T u_j)), for i ≠ j,

and

    (ϑ(G)I − A)_{ii} = (c − u_i/(c^T u_i))^T (c − u_i/(c^T u_i)) + (ϑ(G) − 1/(c^T u_i)²).

Since ϑ(G) ≥ 1/(c^T u_i)² for each i, it follows that ϑ(G)I − A is positive semidefinite, and so λ_max(A) ≤ ϑ(G).

Conversely, let A = (a_ij) ∈ A and let λ = λ_max(A). Then λI − A is positive semidefinite, and so there exist vectors x_1, ..., x_n and a matrix X = [x_1, ..., x_n] such that X^T X = λI − A. Since λ is an eigenvalue of A, rank(X) < n, and so there exists a unit vector c which is perpendicular to all the x_i's. Let

    u_i = (1/√λ)(c + x_i).

Then for 1 ≤ i ≤ n,

    |u_i|² = (1/λ)(1 + |x_i|²) = (1/λ)(1 + λ − a_ii) = 1,

and for nonadjacent i ≠ j, since a_ij = 1 and x_i^T x_j = −a_ij,

    u_i^T u_j = (1/λ)(1 + x_i^T x_j) = 0.

Therefore, {u_1, ..., u_n} is an orthonormal representation of G with

    1/(c^T u_i)² = λ, 1 ≤ i ≤ n.

Hence ϑ(G) ≤ λ. This completes the proof of Theorem 4.5.2(i).

By Theorem 4.5.2(i), there exists an n × n matrix A = (a_ij) ∈ A such that λ_max(A) = ϑ(G). Let B ∈ B. Then by the choice of A and by the definition of B,

    tr(BJ) = Σ_{i=1}^n Σ_{j=1}^n b_ij = Σ_{i=1}^n Σ_{j=1}^n a_ij b_ij = tr(AB),

and so

    ϑ(G) − tr(BJ) = tr((ϑ(G)I − A)B).   (4.17)
It follows that both ϑ(G)I − A and B are positive semidefinite. Let λ_1, λ_2, ..., λ_n be the eigenvalues of B, and let w_1, ..., w_n be mutually orthogonal unit eigenvectors corresponding to λ_1, ..., λ_n, respectively. Then

    tr((ϑ(G)I − A)B) = Σ_{i=1}^n w_i^T (ϑ(G)I − A)B w_i = Σ_{i=1}^n λ_i w_i^T (ϑ(G)I − A) w_i ≥ 0.
This, together with (4.17), implies that ϑ(G) ≥ max_{B∈B} tr(BJ).

Conversely, let E(G) = {i_t j_t : 1 ≤ t ≤ m}. Consider the (m+1)-dimensional vectors

    h̃ = (2h_{i_1}h_{j_1}, 2h_{i_2}h_{j_2}, ..., 2h_{i_m}h_{j_m}, (Σ_{k=1}^n h_k)²)^T,

where h = (h_1, h_2, ..., h_n)^T ranges over all unit vectors, and let H̃ denote the set of all such vectors h̃. Note that H̃ is a compact subset of the (m+1)-dimensional real space.

Claim Let z = (0, 0, ..., 0, ϑ(G))^T. Then there exist vectors h̃_1, h̃_2, ..., h̃_N ∈ H̃ and nonnegative real numbers c_1, c_2, ..., c_N such that

    Σ_{i=1}^N c_i = 1,   (4.18)

    Σ_{i=1}^N c_i h̃_i = z.   (4.19)

If not, then z lies outside the closed convex hull of H̃, and so there exist a vector a = (a_1, a_2, ..., a_m, y)^T and a constant α such that

    a^T h̃ ≤ α for all h̃ ∈ H̃, but a^T z > α.

Note that for h = (1, 0, ..., 0)^T, the corresponding h̃ satisfies a^T h̃ ≤ α, and so y ≤ α. On the other hand, a^T z > α implies ϑ(G)y > α, and so ϑ(G)y > α ≥ y. By Proposition 4.5.1(iv), ϑ(G) ≥ 1, and so α ≥ y > 0. We may assume that y = 1, and so ϑ(G) > α. Now define A = (a_ij) with

    a_ij = a_k + 1 if {i,j} = {i_k, j_k}, and a_ij = 1 otherwise;

then a^T h̃ ≤ α can be written as

    h^T A h = Σ_{i=1}^n Σ_{j=1}^n a_ij h_i h_j ≤ α.

Since λ_max(A) = max{x^T A x : |x| = 1}, this implies that λ_max(A) ≤ α. However, A ∈ A, and so by Theorem 4.5.2(i), ϑ(G) ≤ α, a contradiction. This proves the claim.

Therefore (4.18) and (4.19) hold. For p = 1, 2, ..., N, let h_p = (h_{p,1}, h_{p,2}, ..., h_{p,n})^T denote the unit vector giving rise to h̃_p, and set

    b_ij = Σ_{p=1}^N c_p h_{p,i} h_{p,j},  B = (b_ij).
Then B is symmetric and positive semidefinite. By (4.18), tr(B) = 1. By (4.19),

    b_{i_k j_k} = 0, (1 ≤ k ≤ m), and tr(BJ) = ϑ(G).

Therefore B ∈ B, and so ϑ(G) ≤ max_{B∈B} tr(BJ). This completes the proof of Theorem 4.5.2(ii). □

Corollary 4.5.2 There exists an optimal representation {u_1, ..., u_n} with handle c such that

    ϑ(G) = 1/(c^T u_i)², 1 ≤ i ≤ n.
Theorem 4.5.3 (Lovász, [189]) Let v_1, v_2, ..., v_n range over all orthonormal representations of G^c, and d over all unit vectors. Then

    ϑ(G) = max Σ_{i=1}^n (d^T v_i)².

Proof By Proposition 4.5.2(ii), it suffices to show that for some v_i's and some d,

    ϑ(G) ≤ Σ_{i=1}^n (d^T v_i)².

Pick a matrix B = (b_ij) ∈ B such that tr(BJ) = ϑ(G). Since B is positive semidefinite, there exist vectors w_1, w_2, ..., w_n such that

    b_ij = w_i^T w_j, 1 ≤ i, j ≤ n.

Since B ∈ B,

    Σ_{i=1}^n |w_i|² = tr(B) = 1, and |Σ_{i=1}^n w_i|² = tr(BJ) = ϑ(G).

Set

    v_i = w_i/|w_i|, (1 ≤ i ≤ n), and d = (Σ_{i=1}^n w_i)/|Σ_{i=1}^n w_i|.

Then the v_i's form an orthonormal representation of G^c, and so by the Cauchy–Schwarz inequality,

    Σ_{i=1}^n (d^T v_i)² = (Σ_{i=1}^n |w_i|²)(Σ_{i=1}^n (d^T v_i)²) ≥ (Σ_{i=1}^n |w_i| d^T v_i)² = (d^T Σ_{i=1}^n w_i)² = |Σ_{i=1}^n w_i|² = ϑ(G).
This proves the theorem. □

Corollary 4.5.3 χ(G) ≥ ϑ(G^c).

Proof Let v_1, v_2, ..., v_n be an orthonormal representation of G and d a unit vector. Suppose that χ(G) = k and let V_1, V_2, ..., V_k be a partition of V(G) corresponding to a proper k-coloring. For each m, the vectors {v_i : i ∈ V_m} are mutually orthogonal, and so Σ_{i∈V_m} (d^T v_i)² ≤ |d|² = 1. Then by Theorem 4.5.3,

    ϑ(G^c) ≤ max Σ_{i=1}^n (d^T v_i)² = max Σ_{m=1}^k Σ_{i∈V_m} (d^T v_i)² ≤ k = χ(G).

Thus the corollary follows. □
Theorem 4.5.4 (Lovász, [189]) Let G be a graph on vertices {1, 2, ..., n}. Let A* denote the set of n × n real symmetric matrices A* = (a_ij) such that

    a_ij = 0 whenever i = j or i and j are adjacent.

Let λ_1(A*) ≥ λ_2(A*) ≥ ... ≥ λ_n(A*) be the eigenvalues of an A* ∈ A*. Then

    ϑ(G) = max_{A*∈A*} {1 − λ_1(A*)/λ_n(A*)}.

Sketch of Proof Let A* = (a_ij) ∈ A* and let f = (f_1, f_2, ..., f_n)^T be an eigenvector of λ_1(A*) such that |f|² = −1/λ_n(A*) (note that since tr(A*) = 0, λ_n(A*) < 0). Define the matrices F = diag(f_1, f_2, ..., f_n) and B = (b_ij) = F(A* − λ_n(A*)I)F. Then B ∈ B, and so by Theorem 4.5.2,

    ϑ(G) ≥ tr(BJ) = Σ_{i=1}^n Σ_{j=1}^n a_ij f_i f_j − λ_n(A*) Σ_{i=1}^n f_i²
         = (−1/λ_n(A*))[λ_1(A*) − λ_n(A*)] = 1 − λ_1(A*)/λ_n(A*).

Conversely, by Theorem 4.5.2, choose B ∈ B so that ϑ(G) = tr(BJ), and then follow basically an inversion of the argument above. □

Corollary 4.5.4 (Hoffman, [123]) Let G be a graph with A = A(G), and let λ_1 ≥ λ_2 ≥ ... ≥ λ_n be the eigenvalues of A. Then

    χ(G) ≥ 1 − λ_1/λ_n.

Proof Since A(G) ∈ A*(G^c), this follows from Corollary 4.5.3 and Theorem 4.5.4. □
Theorem 4.5.5 (Lovász, [189]) Let G be a regular graph on n vertices and let λ_1 ≥ λ_2 ≥ ... ≥ λ_n be the eigenvalues of A = A(G). Then

    ϑ(G) ≤ −nλ_n/(λ_1 − λ_n).   (4.20)

Equality holds in (4.20) if the automorphism group of G is edge-transitive.

Proof Let v_i, (1 ≤ i ≤ n), be eigenvectors corresponding to λ_1, ..., λ_n, respectively. Since G is regular, v_1 = j = (1, 1, ..., 1)^T, and each v_i is also an eigenvector of J. Note that for any real number x, the matrix J − xA ∈ A, and so by Theorem 4.5.2, λ_max(J − xA) ≥ ϑ(G). However, the eigenvalues of J − xA are n − xλ_1, −xλ_2, ..., −xλ_n, and so λ_max(J − xA) is either the first or the last of these. Hence the optimal choice is x = n/(λ_1 − λ_n), which makes n − xλ_1 = −xλ_n, and so (4.20) follows from Theorem 4.5.2.

Assume now that the automorphism group Γ of G is edge-transitive. Let C = (c_ij) ∈ A be such that λ_max(C) = ϑ(G). Note that each element of Γ corresponds to a permutation matrix. Define

    C̄ = (1/|Γ|) Σ_{P∈Γ} P^T C P.

Then we can verify that C̄ ∈ A, that λ_max(C̄) ≤ λ_max(C), and that C̄ has the form J − xA. By Theorem 4.5.2, ϑ(G) = λ_max(C̄), and so equality holds in (4.20). □

Corollary 4.5.5A For an odd integer n > 0,

    ϑ(C_n) = (n cos(π/n))/(1 + cos(π/n)).

Corollary 4.5.5B Θ(C_5) = √5.

Proof Applying Theorem 4.5.5 to G = C_n, we have Corollary 4.5.5A. In particular, ϑ(C_5) = √5, and so by Theorem 4.5.1, Θ(C_5) ≤ √5. Thus Corollary 4.5.5B follows from (4.15). □

We conclude this section with an application of Shannon capacity, due to Lovász [189]. For integers n, r with n ≥ 2r > 0, let K(n,r) denote the graph whose vertices are the r-subsets of an n-element set S, where two subsets are adjacent in K(n,r) if and only if they are disjoint.

Theorem 4.5.6 (Lovász, [189])
    Θ(K(n,r)) = (n−1 choose r−1).

Proof For each s ∈ S, all the r-subsets containing s form an independent set in K(n,r), and so

    Θ(K(n,r)) ≥ α(K(n,r)) ≥ (n−1 choose r−1).
On the other hand, as the automorphism group of K(n,r) is edge-transitive, (4.20) may be used to derive the desired inequality. Since K(n,r) is regular, j is an eigenvector for the eigenvalue (n−r choose r) of K(n,r).

Let 1 ≤ t ≤ r. For each T ⊂ S with |T| = t, let z_T be a real number such that for every U ⊂ S with |U| = t − 1,

    Σ_{T ⊃ U, |T| = t} z_T = 0.   (4.21)

There are (n choose t) − (n choose t−1) linearly independent (n choose t)-dimensional vectors of the type (..., z_T, ...). For each such vector, and for each A ⊂ S with |A| = r, define

    x_A = Σ_{T ⊂ A, |T| = t} z_T.

Then there are (n choose t) − (n choose t−1) linearly independent (n choose r)-dimensional vectors of the type

    x = (..., x_A, ...).   (4.22)

We shall show that every vector x of the form (4.22) is an eigenvector of the adjacency matrix of K(n,r), with eigenvalue (−1)^t (n−r−t choose r−t).

In fact, for any A_0 ⊂ S with |A_0| = r, and for each 0 ≤ i ≤ t, define

    β_i = Σ_{|T| = t, |T ∩ A_0| = i} z_T.

Then we have

    Σ_{A ∩ A_0 = ∅} x_A = Σ_{T ∩ A_0 = ∅} (n−r−t choose r−t) z_T = (n−r−t choose r−t) β_0.

For each 0 ≤ i ≤ t − 1, summing (4.21) over all U ⊂ S with |U| = t − 1 and with |U ∩ A_0| = i, we obtain a recurrence relation for the β_i:

    (i + 1)β_{i+1} + (t − i)β_i = 0,

which yields

    β_i = (−1)^i (t choose i) β_0, and in particular β_0 = (−1)^t β_t.

Therefore,

    Σ_{A ∩ A_0 = ∅} x_A = (−1)^t (n−r−t choose r−t) β_t = (−1)^t (n−r−t choose r−t) x_{A_0},

as desired. Hence x is an eigenvector of the adjacency matrix of K(n,r). By this construction we have found

    1 + Σ_{t=1}^r [(n choose t) − (n choose t−1)] = (n choose r)

linearly independent eigenvectors of K(n,r), as eigenvectors belonging to different eigenvalues are linearly independent. It follows that all the eigenvalues of K(n,r) are

    (−1)^t (n−r−t choose r−t), t = 0, 1, 2, ..., r,

and so the largest and the smallest are the ones with t = 0 and t = 1, respectively. Now the proof for Theorem 4.5.6 is complete by applying (4.20). □
Corollary 4.5.6A The Petersen graph, which is isomorphic to K(5,2), has capacity 4.

Corollary 4.5.6B (Erdős, Ko, Rado, [82])

    α(K(n,r)) = Θ(K(n,r)) = (n−1 choose r−1).
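Both the odd-cycle formula of Corollary 4.5.5A and the Petersen case of Corollary 4.5.6A can be confirmed numerically by evaluating the right-hand side of (4.20):

```python
import numpy as np
from itertools import combinations

def lovasz_regular_bound(A):
    """Right-hand side of (4.20): -n * lam_min / (lam_max - lam_min)."""
    eig = np.linalg.eigvalsh(A)
    n = A.shape[0]
    return -n * eig[0] / (eig[-1] - eig[0])

# C_5: the bound equals theta(C_5) = sqrt(5).
n = 5
C5 = np.zeros((n, n))
for i in range(n):
    C5[i, (i + 1) % n] = C5[(i + 1) % n, i] = 1
assert abs(lovasz_regular_bound(C5) - 5 ** 0.5) < 1e-9

# K(5,2), the Petersen graph: eigenvalues 3, 1, -2, so the bound is 4.
verts = list(combinations(range(5), 2))
P = np.array([[1 if u != v and not set(u) & set(v) else 0
               for v in verts] for u in verts], dtype=float)
assert round(lovasz_regular_bound(P)) == 4     # = binom(4,1)
```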
4.6
Strongly Regular Graphs
In 1966, Erdős, Rényi and Sós proved the so-called friendship theorem [83]: In a society with a finite population, if every two people have exactly one common friend, then there must be a person who is a friend of every other member of the society. The friendship theorem can be stated in graph theory as follows; its proof will be postponed.

Theorem 4.6.1 (Erdős, Rényi and Sós, [83]) Let G be a graph on n vertices. If for each pair of distinct vertices x and y, there is exactly one vertex adjacent to both x and y, then n is odd and G has a vertex u ∈ V(G) such that u is adjacent to every vertex in V(G) − {u} and such that G − u is a 1-regular graph.

Definition 4.6.1 For a graph G with v ∈ V(G), let N(v) = N_G(v) denote the set of vertices that are adjacent to v in G. Let n, k, λ, μ be nonnegative integers with n ≥ 3. A k-regular graph G on n vertices is an (n, k, λ, μ)-strongly regular graph if each of the following holds:
(4.6.1A) If u, v ∈ V(G) are adjacent in G, then |N(u) ∩ N(v)| = λ.
(4.6.1B) If u, v ∈ V(G) are distinct and not adjacent in G, then |N(u) ∩ N(v)| = μ.
Example 4.6.1 (i) The 4-cycle is a (4,2,0,2)-strongly regular graph.
(ii) The 5-cycle is a (5,2,0,1)-strongly regular graph.
(iii) If n ≥ 6, the n-cycle is not a strongly regular graph.
(iv) The Petersen graph is a (10,3,0,1)-strongly regular graph.
(v) The disjoint union of two copies of K_3 is a (6,2,1,0)-strongly regular graph.
(vi) For m ≥ 2, the complete bipartite graph K_{m,m} is a (2m, m, 0, m)-strongly regular graph.

Proposition 4.6.1 Let G be a k-regular graph with n vertices and let A = A(G). Then G is an (n, k, λ, μ)-strongly regular graph if and only if one of the following holds.
(i) A² = kI + λA + μ(J − I − A).
(ii) A² − (λ − μ)A − (k − μ)I = μJ.

Proof By Proposition 1.1.2(vii) and by Definition 4.6.1, that G is an (n, k, λ, μ)-strongly regular graph is equivalent to Proposition 4.6.1(i), and it is straightforward to see that Proposition 4.6.1(i) and Proposition 4.6.1(ii) are equivalent. □
Proposition 4.6.2 Let G be an (n, k, λ, μ)-strongly regular graph. Then each of the following holds.
(i) k(k − λ − 1) = μ(n − k − 1).
(ii) The complement of G is an (n, n − k − 1, n − 2k + μ − 2, n − 2k + λ)-strongly regular graph.
(iii) μ = 0 if and only if G is the disjoint union of some copies of K_{k+1}.

Proof This follows from Proposition 4.6.1(i) and (ii). □
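These two identities are easy to test numerically. A minimal sketch (assuming numpy is available), checking Proposition 4.6.1(i) and Proposition 4.6.2(i) on the Petersen graph, a (10, 3, 0, 1)-strongly regular graph:

```python
import numpy as np
from itertools import combinations

# Vertices of the Petersen graph K(5,2): the 2-subsets of {0,...,4};
# two vertices are adjacent iff the corresponding subsets are disjoint.
verts = list(combinations(range(5), 2))
n = len(verts)
A = np.array([[int(i != j and not set(verts[i]) & set(verts[j]))
               for j in range(n)] for i in range(n)])

k, lam, mu = 3, 0, 1
I, J = np.eye(n, dtype=int), np.ones((n, n), dtype=int)

# Proposition 4.6.1(i): A^2 = kI + lam*A + mu*(J - I - A)
assert (A @ A == k * I + lam * A + mu * (J - I - A)).all()
# Proposition 4.6.2(i): k(k - lam - 1) = mu(n - k - 1)
assert k * (k - lam - 1) == mu * (n - k - 1)
```

The same check works for any of the graphs in Example 4.6.1 after swapping in the appropriate adjacency matrix and parameters.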
Theorem 4.6.2 Let G be a connected (n, k, λ, μ)-strongly regular graph, let l = n − k − 1, and let

d = (λ − μ)² + 4(k − μ),  δ = (k + l)(λ − μ) + 2k.  (4.23)

Then the spectrum of A = A(G) is

Spec(A) = ( k  ρ  σ )
          ( 1  r  s ),

with the eigenvalues in the first row and their multiplicities in the second, where

ρ = ((λ − μ) + √d)/2,  σ = ((λ − μ) − √d)/2,  (4.24)

and where

r = (1/2)(k + l − δ/√d),  s = (1/2)(k + l + δ/√d).  (4.25)
Proof Let λ₁(A) ≥ ··· ≥ λ_n(A) denote the eigenvalues of A. By Corollary 1.3.2A, λ₁(A) = k. Since G is connected, A is irreducible, and so the multiplicity of λ₁(A) = k is 1. By Proposition 4.6.1(i),

(A − kI)(A² − (λ − μ)A − (k − μ)I) = 0,

and so ρ and σ in (4.24) are the other eigenvalues of A. If d = 0, then since k ≥ μ, we must have k = μ and λ = μ = k. However, since G is an (n, k, λ, μ)-strongly regular graph, λ ≤ k − 1, a contradiction. Hence d > 0, and so k > ρ > σ. By (4.24), write

λ = k + ρ + σ + ρσ,  μ = k + ρσ.

Since k ≥ μ, we have ρ ≥ 0 and σ ≤ 0. If σ = 0, then λ = k + ρ ≥ k, contrary to λ ≤ k − 1. Hence σ < 0. Consider G̅, the complement of G. By algebraic manipulation, we obtain the corresponding parameters for G̅ as follows:

d̅ = d,  ρ̅ = −σ − 1,  σ̅ = −ρ − 1.

As for G̅, ρ̅ ≥ 0, and so we must have ρ ≥ 0 and σ ≤ −1. Finally, we note that if r and s are the multiplicities of ρ and σ, respectively, then

r + s = n − 1,  k + rρ + sσ = 0,

and by algebraic manipulation again, (4.25) obtains. □
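The formulas (4.23)-(4.25) can be compared against a numerically computed spectrum. A sketch for the Petersen graph (numpy assumed); for (10, 3, 0, 1) one gets d = 9, δ = −3, ρ = 1, σ = −2, r = 5 and s = 4:

```python
import numpy as np
from itertools import combinations

# Petersen graph as a (10, 3, 0, 1)-strongly regular graph.
verts = list(combinations(range(5), 2))
A = np.array([[int(i != j and not set(u) & set(v)) for j, v in enumerate(verts)]
              for i, u in enumerate(verts)])
n, k, lam, mu = 10, 3, 0, 1
l = n - k - 1
d = (lam - mu) ** 2 + 4 * (k - mu)          # (4.23)
delta = (k + l) * (lam - mu) + 2 * k
rho = ((lam - mu) + d ** 0.5) / 2           # (4.24)
sigma = ((lam - mu) - d ** 0.5) / 2
r = (k + l - delta / d ** 0.5) / 2          # (4.25)
s = (k + l + delta / d ** 0.5) / 2

eig = sorted(np.linalg.eigvalsh(A), reverse=True)
# Eigenvalue k once, rho with multiplicity r, sigma with multiplicity s.
assert abs(eig[0] - k) < 1e-9
assert (rho, sigma, r, s) == (1.0, -2.0, 5.0, 4.0)
assert all(abs(e - rho) < 1e-9 for e in eig[1:6])
assert all(abs(e - sigma) < 1e-9 for e in eig[6:])
```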
Theorem 4.6.3 Let G be an (n, k, λ, μ)-strongly regular graph, let l = n − k − 1, and adopt the notation in (4.23), (4.24) and (4.25).
(i) If δ = 0, then λ = μ − 1 and k = l = 2μ = r = s = (n − 1)/2.
(ii) If δ ≠ 0, then √d, ρ and σ are integers. Moreover, if n is even, then √d | δ but 2√d ∤ δ, and if n is odd, then 2√d | δ.

Sketch of Proof If δ = 0, then by (4.23), 2k/(μ − λ) = k + l > k, and so 0 < μ − λ < 2, which yields λ = μ − 1. The other equalities follow from Proposition 4.6.2(ii) and (4.25). The conclusion when δ ≠ 0 follows from (4.24) and (4.25). □
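The 5-cycle is the smallest case of Theorem 4.6.3(i), and the parameter relations there can be verified on it directly (a sketch, assuming numpy):

```python
import numpy as np

# C5 is a (5, 2, 0, 1)-strongly regular graph with delta = 0,
# the smallest conference graph (Theorem 4.6.3(i)).
n, k, lam, mu = 5, 2, 0, 1
A = np.array([[int(abs(i - j) in (1, n - 1)) for j in range(n)]
              for i in range(n)])
assert lam == mu - 1 and k == n - k - 1 == 2 * mu   # relations forced by delta = 0

eig = sorted(np.linalg.eigvalsh(A), reverse=True)
# Non-trivial eigenvalues (-1 +/- sqrt 5)/2, each of multiplicity (n-1)/2 = 2.
rho, sigma = (-1 + 5 ** 0.5) / 2, (-1 - 5 ** 0.5) / 2
assert abs(eig[0] - k) < 1e-9
assert all(abs(e - rho) < 1e-9 for e in eig[1:3])
assert all(abs(e - sigma) < 1e-9 for e in eig[3:])
```

Note that here √d = √5 is irrational, consistent with Theorem 4.6.3(ii) applying only when δ ≠ 0.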
A strongly regular graph satisfying Theorem 4.6.3(i) is also called a conference graph. By Theorem 4.6.2, if G is a connected strongly regular graph, then G has exactly 3 distinct eigenvalues.

Theorem 4.6.4 Let G be a connected regular graph. Then G is strongly regular if and only if G has exactly 3 distinct eigenvalues.

We are now ready to apply Theorem 4.6.2 to present a proof, due to Cameron [45], for Theorem 4.6.1.

Proof for Theorem 4.6.1 Let G be a graph satisfying the conditions of Theorem 4.6.1.

Claim 1 If u and v are not adjacent in G, then u and v must have the same degree.

Let u, v ∈ V(G) be distinct non-adjacent vertices. Then there exists a unique vertex w such that w is adjacent to both u and v. Also, G has vertices x ≠ v and y ≠ u such that x is the only vertex adjacent to both u and w, and y is the only vertex adjacent to both v and w. If z ∉ {w, x} is a vertex adjacent to u, then there exists a unique z' ∉ {w, y} such that z' is adjacent to both z and v (and the map z ↦ z' is injective, by the uniqueness of common neighbors). Note that we can exchange u and v to get the same conclusion on v. Therefore, u and v must have the same degree.

Claim 2 If G is a k-regular graph, then G = K₁ or G = K₃.

If G is k-regular, then G is an (n, k, 1, 1)-strongly regular graph. By Theorem 4.6.2 and (4.25), s − r = δ/√d = 2k/(2√(k−1)) = k/√(k−1) is an integer, and so (k − 1) | k². Since k² = (k − 1)(k + 1) + 1, it follows that (k − 1) | 1, so either k = 0 or k = 2, and G is either K₁ or K₃.

By Claim 2, we may assume that G is not a regular graph. Let u and v be two vertices with different degrees. Then by Claim 1, u and v must be adjacent. Let w be the only vertex in G adjacent to both u and v; renaming u and v if necessary, we may assume that w and u have different degrees. Let x ∈ V(G) − {u, v, w}. Then by Claim 1 and by the assumption that u and v have different degrees, x must be adjacent to either u or v. Similarly, x must be adjacent to either u or w. However, since u is the only vertex adjacent to both v and w, x must be adjacent to u. Therefore, u is a vertex in G that is adjacent to every other vertex in G, and each component of G − u is a K₂. □
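The graphs described by the conclusion of Theorem 4.6.1 are the "windmill" graphs: t triangles sharing one hub vertex. A quick numerical check (numpy assumed; the construction below with t = 4 is just for illustration) that such a graph satisfies the hypothesis:

```python
import numpy as np

# Windmill graph: t triangles glued at hub vertex 0;
# triangle i uses vertices 2i+1 and 2i+2.
t = 4
n = 2 * t + 1
A = np.zeros((n, n), dtype=int)
for i in range(t):
    a, b = 2 * i + 1, 2 * i + 2
    for x, y in ((0, a), (0, b), (a, b)):
        A[x, y] = A[y, x] = 1

S = A @ A                     # S[u, v] = number of common neighbours of u, v
off = S[~np.eye(n, dtype=bool)]
assert (off == 1).all()       # every pair has exactly one common friend
assert A[0].sum() == n - 1    # the hub is adjacent to all other vertices
assert n % 2 == 1             # n is odd, as Theorem 4.6.1 asserts
```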
=
C. W. H. Lam and J. H. van Lint [149] considered a generalization of the friendship theorem to loopless digraphs, first proposed by A. J. Hoffman.

Definition 4.6.2 A loopless digraph D is a k-friendship graph if for any pair of distinct vertices u, v ∈ V(D), D has a unique directed (u, v)-walk of length k, and for every vertex u ∈ V(D), D has no (u, u)-walk of length k.
Note that if A ∈ B_n satisfies A^k = J − I, then by Proposition 1.1.2(vii), D(A) is a k-friendship graph. The converse also follows from Proposition 1.1.2(vii). Therefore, to determine the existence of a k-friendship graph, it is equivalent to showing that the equation A^k = J − I has a solution.

Example 4.6.2 Let F₂ denote the directed cycle of length 2. Then for any odd integer k > 0, F₂ is a k-friendship graph (called a fish in [149]).

Proposition 4.6.3 Let D be a k-friendship graph with n vertices and let A = A(D). Then each of the following holds.
(i) tr(A) = 0.
(ii) A^k = J − I.
(iii) For some integer c ≥ 0, AJ = JA = cJ. (Therefore, A has constant row and column sums c, and the digraph D has in-degree and out-degree c at each vertex.)
(iv) The integer c in (iii) satisfies

n = c^k + 1.  (4.26)

Proof Since D is loopless, tr(A) = 0. By Proposition 1.1.2(vii) and Definition 4.6.2, A^k = J − I. Multiply both sides of A^k = J − I by A to get

A^{k+1} = JA − A = AJ − A,

which implies that AJ = JA = cJ for some integer c. Multiply both sides of A^k = J − I by J, and apply Proposition 4.6.3(iii) to get

c^k J = A^k J = J² − J = nJ − J = (n − 1)J,

and so c^k = n − 1. □
Theorem 4.6.5 (Lam and van Lint, [149]) If k is even, no k-friendship graph exists.

Proof Let k = 2l. Assume that there exists a k-friendship graph D with n vertices. Let A = A(D) and let A₁ = A^l. By Proposition 4.6.3(ii), A₁² = J − I. Since the diagonal of A₁² = J − I is zero, D(A₁) is loopless, and so D(A₁) is a 2-friendship graph; by Proposition 4.6.3, n must then satisfy n = c² + 1 for some integer c. The eigenvalues of A₁ must be c with multiplicity 1, and i and −i (where i is the complex number satisfying i² = −1) with equal multiplicities. This implies tr(A₁) = c > 0, contrary to Proposition 4.6.3(i) applied to A₁. □
For k odd, a solution of A^k = J − I is obtained for each n = c^k + 1, where c > 0 is an integer. Consider the integers mod n. For each integer v with 1 ≤ v ≤ c, define the permutation matrix P_v = (p_{ij}^{(v)}) by

p_{ij}^{(v)} = 1 if j ≡ v − ci (mod n), and p_{ij}^{(v)} = 0 otherwise (0 ≤ i, j ≤ n − 1),

and define

A = P₁ + P₂ + ··· + P_c.  (4.27)

In fact, the matrix A of order n has as its first row (0, 1, 1, ···, 1, 0, ···, 0), where there are c ones after the initial 0; subsequent rows of A are obtained by shifting c positions to the left at each step. The matrix A^k is the sum of all the matrix products of the form

P_{α₁} P_{α₂} ··· P_{α_k},

where (α₁, α₂, ···, α_k) runs through {1, 2, ···, c}^k, the set of all k-tuples with entries in {1, 2, ···, c}. The matrix P_{α₁} P_{α₂} ··· P_{α_k} is the permutation matrix corresponding to the permutation

x ↦ (−c)^k x + Σ_{i=1}^k α_i (−c)^{k−i} (mod n).  (4.28)
Theorem 4.6.6 (Lam and van Lint, [149]) The matrix A defined in (4.27) satisfies A^k = J − I for every odd integer k ≥ 1.

Proof Note that if (α₁, α₂, ···, α_k) ∈ {1, 2, ···, c}^k, then

1 ≤ Σ_{i=1}^k α_i (−c)^{k−i} ≤ n − 1

(the extreme values are attained by letting the α_i alternate between 1 and c, since by (4.26), c^k = n − 1). Similarly, if (β₁, β₂, ···, β_k) ∈ {1, 2, ···, c}^k, then 1 ≤ Σ_{i=1}^k β_i (−c)^{k−i} ≤ n − 1. It follows that

Σ_{i=1}^k α_i (−c)^{k−i} ≡ Σ_{i=1}^k β_i (−c)^{k−i} (mod n)

implies that the two sums are equal, which is possible only if (α₁, α₂, ···, α_k) = (β₁, β₂, ···, β_k). Since k is odd, (−c)^k = −c^k ≡ 1 (mod n), and so each permutation in (4.28) has the form x ↦ x + γ. Therefore, as (α₁, α₂, ···, α_k) runs through the n − 1 tuples of {1, 2, ···, c}^k, the permutations in (4.28) form the set of permutations x ↦ x + γ with 1 ≤ γ < n. This proves the theorem. □
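The construction (4.27) is easy to implement and test. A sketch (numpy assumed) that builds A from the permutations x ↦ (−c)x + v and checks Theorem 4.6.6 together with Proposition 4.6.3(iii) for a few small odd values of k:

```python
import numpy as np

def friendship_matrix(c, k):
    # A = P_1 + ... + P_c as in (4.27), where P_v is the permutation
    # x -> (-c)x + v (mod n) and n = c^k + 1, as in (4.26).
    n = c ** k + 1
    A = np.zeros((n, n), dtype=int)
    for v in range(1, c + 1):
        for i in range(n):
            A[i, (v - c * i) % n] += 1
    return A, n

for c, k in ((2, 3), (3, 3), (2, 5)):
    A, n = friendship_matrix(c, k)
    J, I = np.ones((n, n), dtype=int), np.eye(n, dtype=int)
    assert (np.linalg.matrix_power(A, k) == J - I).all()   # Theorem 4.6.6
    assert A.sum(axis=0).tolist() == [c] * n               # column sums c
    assert A.sum(axis=1).tolist() == [c] * n               # row sums c
```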
Theorem 4.6.7 (Lam and van Lint, [149]) Let A be defined in (4.27) and let D = D(A). Then the dihedral group of order 2(c + 1) is a group of automorphisms of the digraph D.

Proof In the permutation x ↦ (−c)x + v which defines P_v, substitute x = y + λ(c^k + 1)/(c + 1). The result is again the permutation y ↦ (−c)y + v. Hence for λ = 0, 1, 2, ···, c, we find a permutation which leaves A invariant. These substitutions form a cyclic group of order c + 1. In the same way one finds that the substitution

x = 1 − y + λ(c^k + 1)/(c + 1)

maps P_v to the permutation y ↦ (−c)y + (c + 1 − v) and therefore leaves A invariant. This, together with the cyclic group of order c + 1 above, yields a dihedral group of order 2(c + 1) acting on D. □

It is not known whether the solution of A^k = J − I is unique or not. In [149], it was shown that when k = 3 and n = 9, the dihedral group of Theorem 4.6.7 is the full automorphism group of the friendship graph D. However, whether the dihedral group in Theorem 4.6.7 is the full automorphism group of D in general remains to be determined.
We conclude this section by presenting two important theorems in the field. Let T(m) = L(K_m) denote the line graph of the complete graph K_m, and let L₂(m) = L(K_{m,m}) denote the line graph of the complete bipartite graph K_{m,m}. Note that T(m) is an (m(m−1)/2, 2(m−2), m−2, 4)-strongly regular graph, and L₂(m) is an (m², 2(m−1), m−2, 2)-strongly regular graph.

Theorem 4.6.8 (Chang, [51] and [52], and Hoffman, [126]) Let m ≥ 4 be an integer and let G be an (m(m−1)/2, 2(m−2), m−2, 4)-strongly regular graph. If m ≠ 8, then G is isomorphic to T(m); and if m = 8, then G is isomorphic to one of four graphs, one of which is T(8).

Theorem 4.6.9 (Shrikhande, [254]) Let m ≥ 2 be an integer and let G be an (m², 2(m−1), m−2, 2)-strongly regular graph. If m ≠ 4, then G is isomorphic to L₂(m); and if m = 4, then G is isomorphic to one of two graphs, one of which is L₂(4).
4.7 Eulerian Problems

In this section, linear algebra and systems of linear equations will be applied to the study of certain graph theory problems. Most of the discussion in this section will be over GF(2), the field of 2 elements. Let V(m,2) denote the m-dimensional vector space over GF(2). Let B*_{n,m} denote the set of matrices B ∈ B_{n,m} such that all the column sums of B are positive and even. For subspaces V and W of V(m,2), V + W is the subspace spanned by the vectors in V ∪ W. Let E = {e₁, e₂, ···, e_m} be a set. For each vector x = (x₁, x₂, ···, x_m)^T ∈ V(m,2), the map

x ↦ E_x = {e_i : x_i = 1, 1 ≤ i ≤ m}  (4.29)

yields a bijection between the vectors in V(m,2) and the subsets of E. Thus we also use V(E,2) for V(m,2), especially when we want to indicate that the vectors in the vector space are indexed by the elements in E. Therefore, for a subset E' ⊆ E, it makes sense to use V(E',2) to denote the subspace of V(E,2) which consists of all the vectors whose ith component is always 0 whenever e_i ∈ E − E', 1 ≤ i ≤ m. For two matrices B₁, B₂, write B₁ ⊆ B₂ to mean that B₁ is a submatrix of B₂. Throughout this section, j = (1, 1, ···, 1)^T denotes the m-dimensional vector each of whose components is a 1.
Definition 4.7.1 A matrix B ∈ B*_{n,m} is separable if B is permutation similar to

[ B₁₁  0  ]
[ 0   B₂₂ ],

where B₁₁ ∈ B_{n₁,m₁} and B₂₂ ∈ B_{n₂,m₂} with n = n₁ + n₂ and m = m₁ + m₂, for some positive integers n₁, n₂, m₁, m₂. A matrix B is nonseparable if it is not separable. For a matrix B ∈ B*_{n,m} with rank(B) ≥ n − 1, a submatrix B' of B is spanning in B if rank(B') ≥ n − 1. Note that for every B ∈ B*_{n,m}, each column sum of B is equal to zero modulo 2; thus if B ∈ B*_{n,m} is nonseparable, then over GF(2), rank(B) = n − 1. A matrix B ∈ B*_{n,m} is even if Bj ≡ 0 (mod 2); and B is eulerian if B is both nonseparable and even. A matrix B ∈ B_{n,m} is simple if it has no repeated columns and does not contain a zero column. In other words, in a simple matrix the columns are mutually distinct and no column is a zero column.
Example 4.7.1 When G is a graph and B = B(G) is the incidence matrix of G: G is connected if and only if B is nonseparable; every vertex of G has even degree if and only if B is even; G is a simple graph if and only if B is simple; and G is eulerian (that is, both even and connected) if and only if B is eulerian.

Proposition 4.7.1 Each of the following holds.
(i) If B₁, B₂ ∈ B_{n,m} are two permutation similar matrices, then B₁ is nonseparable if and only if B₂ is nonseparable.
(ii) Suppose that B₁ ∈ B_{n,m} and B₂ ∈ B_{n,m'} are matrices such that B₁ ⊆ B₂. If B₁ is nonseparable, then B₂ is nonseparable.
(iii) Suppose that B₁ ∈ B_{n,m} and B₂ ∈ B_{n,m'} are matrices such that B₁ ⊆ B₂. If B₁ has a submatrix B which is spanning in B₁, then B is also spanning in B₂.
(iv) Let B = B(G) ∈ B_{n,m} be the incidence matrix of a graph G. If B is nonseparable, then rank(B) = n − 1.

Proof The first three claims follow from Definition 4.7.1. If B = B(G) is nonseparable, then G is connected with n vertices, and so G has a spanning tree T with n − 1 edges. The submatrix of B consisting of the n − 1 columns corresponding to the edges of T is a matrix of rank n − 1. □

Definition 4.7.2 Let A, B ∈ B*_{n,m} be matrices such that A ⊆ B. We say that A is cyclable in B if there exists an even matrix A' such that A ⊆ A' ⊆ B, and that A is subeulerian in B if there exists an eulerian matrix A' such that A ⊆ A' ⊆ B. A matrix B ∈ B*_{n,m} is supereulerian if there exists a matrix B'' ∈ B*_{n,m''} for some integer m'' ≤ m such that B'' ⊆ B and such that B'' is eulerian. Let G be a graph, and let B = B(G) be the incidence matrix of G. Then G is subeulerian (supereulerian, respectively) if and only if B(G) is subeulerian (supereulerian, respectively).
Example 4.7.2 Let G be a graph and B = B(G) be the incidence matrix of G. Then G is subeulerian if and only if G is a spanning subgraph of an eulerian graph; and G is supereulerian if and only if G contains a spanning eulerian subgraph.

What graphs are subeulerian? What graphs are supereulerian? These are questions proposed by Boesch, Suffel and Tindell in [12]. The same questions can also be asked for matrices. It has been noted that the subeulerian problem should be restricted to simple matrices, for otherwise we can always construct an eulerian matrix B' with B ⊆ B' by adding additional columns, including a copy of each column of B. The subeulerian problem is completely solved in [12] and in [138]; Jaeger's elegant proof will be presented later. However, as pointed out in [12], the supereulerian problem seems very difficult, even just for graphs. In fact, Pulleyblank [214] showed that the problem of determining if a graph is supereulerian is NP-complete. Catlin's survey [48] and its update [57] are good sources on the literature of supereulerian graphs and related problems.

Definition 4.7.3 Let B ∈ B_{n,m} and let E(B) denote the set of the labeled columns of B. We shall use V(B,2) to denote V(E(B),2). For a vector x ∈ V(E(B),2), let E_x denote the subset of E(B) defined in (4.29). We shall write V(B − x, 2) for V(E(B) − E_x, 2), and write V(x,2) for V(E_x,2). Therefore, V(B − x, 2) is the subspace of V(B,2) consisting of all the vectors whose ith component is always 0 whenever the ith component of x is 1 (1 ≤ i ≤ m), while V(x,2) is the subspace consisting of all the vectors whose ith component is always 0 whenever the ith component of x is 0 (1 ≤ i ≤ m). For a matrix B ∈ B_{n,m} and a vector x ∈ V(B,2), we say that Column i of B is chosen by x if and only if the ith component of x is 1. Let B_x denote the submatrix of B consisting of all the columns chosen by x. If E' ⊆ E(B), then by the bijection (4.29), there is a vector x ∈ V(B,2) such that E_x = E'; define B_{E'} = B_x. Conversely, for each submatrix A ⊆ B with A ∈ B_{n,m₁}, there exists a unique vector x ∈ V(B,2) such that B_x = A; denote this vector x by x_A.

A vector x ∈ V(B,2) is a cycle (of B) if B_x is even, and is eulerian (with respect to B) if B_x is eulerian. Note that the set of all cycles, together with the zero vector 0, forms a vector subspace C, called the cycle space of B; C^⊥, the maximal subspace in V(B,2) orthogonal to C, is the cocycle space of B. For x, y ∈ V(m,2), write x ≤ y if y − x ≥ 0, and in this case we say that y contains x. Let B ∈ B_{n,m} be a matrix and let x ∈ V(B,2) be a vector. The vector x is cyclable in B if there exists a cycle y ∈ V(B,2) such that x ≤ y. Denote the number of nonzero components of a vector x ∈ V(n,2) by ||x||₀.

Theorem 4.7.1 Let B ∈ B*_{n,m}. A vector x is cyclable in B if and only if x does not contain a vector z in the cocycle space of B such that ||z||₀ is odd.

Proof Let C and C^⊥ denote the cycle space and the cocycle space of B, respectively. Let x ∈ V(B,2). Then, by the definitions, the following are equivalent.
(A) x is cyclable in B.
(B) There exists a y ∈ C such that x ≤ y.
(C) There exists a y ∈ C such that x = y + (y + x) ∈ C + V(B − x, 2).
(D) x ∈ C + V(B − x, 2).
Therefore, x is cyclable if and only if x ∈ C + V(B − x, 2). Note that

[C + V(B − x, 2)]^⊥ = C^⊥ ∩ V(B − x, 2)^⊥ = C^⊥ ∩ V(x, 2).

It follows that x is cyclable if and only if x is orthogonal to every vector in the subspace C^⊥ ∩ V(x,2). Since x contains every vector in V(x,2), and since for a vector z ∈ V(x,2) the inner product of x and z over GF(2) equals ||z||₀ modulo 2, we conclude that x is cyclable if and only if x does not contain a vector z in the cocycle space of B such that ||z||₀ is odd. □

Theorem 4.7.2 (Jaeger, [138]) Let A, B ∈ B*_{n,m} be matrices such that A ⊆ B. Each of the following holds.
(i) A is cyclable in B if and only if x_A does not contain a vector z in the cocycle space of B such that ||z||₀ is odd.
(ii) If, in addition, A is nonseparable, then A is subeulerian in B if and only if x_A does not contain a vector z in the cocycle space of B such that ||z||₀ is odd.

Proof By Definition 4.7.2 and since A is nonseparable, A is subeulerian in B if and only if the vector x_A is cyclable in B; therefore Theorem 4.7.2(ii) follows from Theorem 4.7.2(i). By Theorem 4.7.1, x_A is cyclable in B if and only if x_A does not contain a vector z in the cocycle space of B such that ||z||₀ is odd. This proves Theorem 4.7.2(i). □
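Theorem 4.7.1 can be verified exhaustively on a small instance. A brute-force sketch (numpy assumed) over B = B(K₄), using the standard facts that over GF(2) the cycle space of a graph's incidence matrix is its null space and the cocycle space is its row space:

```python
import numpy as np
from itertools import product

# B = B(K4): rows = vertices, columns = edges.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
n, m = 4, len(edges)
B = np.zeros((n, m), dtype=int)
for j, (u, v) in enumerate(edges):
    B[u, j] = B[v, j] = 1

evecs = [np.array(t) for t in product((0, 1), repeat=m)]
cycles = [y for y in evecs if not (B @ y % 2).any()]        # null space of B
cocycles = [np.array(r) @ B % 2 for r in product((0, 1), repeat=n)]  # row space

for x in evecs:
    cyclable = any((x <= y).all() for y in cycles)          # some cycle contains x
    has_odd = any(z.sum() % 2 == 1 and (z <= x).all() for z in cocycles)
    assert cyclable == (not has_odd)                        # Theorem 4.7.1
```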
Theorem 4.7.3 (Boesch, Suffel and Tindell, [12], and Jaeger, [138]) A connected simple graph G on n vertices is subeulerian if and only if G is not spanned by a complete bipartite graph with an odd number of edges.

Proof Let G be a connected simple graph on n vertices. Theorem 4.7.3 is obtained by applying Theorem 4.7.2 with A = B(G) and B = B(K_n) (Exercise 4.12). □
Definition 4.7.4 A vector b ∈ V(n,2) is an even vector if ||b||₀ ≡ 0 (mod 2). A matrix H ∈ B_{n,m} is collapsible if for any even vector b ∈ V(n,2), the system Hx = b has a solution x such that H_x is nonseparable and is spanning in H (such a solution x is called a b-solution). Let n, m, n₁, m₁, m₂ be integers with n ≥ n₁ ≥ 0 and m ≥ m₁, m₂ ≥ 0. Let B₁₁ ∈ B_{n₁,m₁}, B₁₂ ∈ B_{n₁,m₂}, B₂₂ ∈ B_{n−n₁,m₂} and H ∈ B_{n−n₁,m−(m₁+m₂)}. Let s_i denote the column sum of the ith column of B₂₂, 1 ≤ i ≤ m₂, and let B ∈ B_{n,m} have the following form:

B = [ B₁₁  B₁₂  0 ]
    [ 0   B₂₂  H ].  (4.30)

Define B/H ∈ B_{n₁+1, m₁+m₂} to be the matrix of the following form:

B/H = [ B₁₁  B₁₂ ]
      [ 0   v^T ],

where v^T = (v₁, v₂, ···, v_{m₂}) ∈ V(m₂,2) with v_i ≡ s_i (mod 2), 1 ≤ i ≤ m₂.
Proposition 4.7.2 Let B₁, B₂ ∈ B_{n,m} be such that B₁ is permutation similar to B₂. Then each of the following holds.
(i) B₁ is collapsible if and only if B₂ is collapsible.
(ii) B₁ is supereulerian if and only if B₂ is supereulerian.

Proof Suppose that B₁ = PB₂Q for permutation matrices P and Q. Let b ∈ V(n,2) be an even vector. Then b' = Pb ∈ V(n,2) is also even. Since B₁ is collapsible, B₁y = b' has a b'-solution y ∈ V(m,2), so that (B₁)_y is nonseparable and spanning in B₁. Let x = Qy. Then

B₂x = P⁻¹B₁Q⁻¹Qy = P⁻¹B₁y = P⁻¹b' = b,

and so B₂x = b has the solution x = Qy. Note that P⁻¹(B₁)_y = (B₂Q)_y = (B₂)_x. Therefore (B₂)_x is nonseparable and spanning in B₂, by Proposition 4.7.1 and by the fact that (B₁)_y is nonseparable and spanning. This proves Proposition 4.7.2(i). Proposition 4.7.2(ii) follows from Proposition 4.7.2(i) by letting b = 0 in the proof above.
Proposition 4.7.3 If H ∈ B*_{n,m} is collapsible, then H is nonseparable and rank(H) = n − 1.

Proof By Definition 4.7.4, the system Hx = 0 has a 0-solution x, so that H_x is nonseparable and spanning in H. Proposition 4.7.3 then follows from Proposition 4.7.1.
□

Theorem 4.7.4 Let H be a collapsible matrix and let B be a nonseparable matrix of the form in (4.30). Each of the following holds.
(i) If B/H is collapsible, then B is collapsible.
(ii) If B/H is supereulerian, then B is supereulerian.

Proof We adopt the notation of Definition 4.7.4. Let b be an even vector, and consider the system of linear equations

B (x₁^T, x₂^T, x₃^T)^T = (b₁^T, b₂^T)^T = b,  (4.31)

where B has the form (4.30), b₁ ∈ V(n₁,2), b₂ ∈ V(n − n₁, 2), x₁ ∈ V(m₁,2), x₂ ∈ V(m₂,2) and x₃ ∈ V(m − m₁ − m₂, 2). Define

δ = 0 if b₁ is even, and δ = 1 otherwise.

Then b' = (b₁^T, δ)^T is an even vector. Since B/H is collapsible, the system

(B/H) x₁₂ = (b₁^T, δ)^T = b', where x₁₂ = (x₁^T, x₂^T)^T,  (4.32)

has a b'-solution x₁₂. Therefore (B/H)_{x₁₂} is nonseparable and spanning in B/H. Since b is even, by the definition of δ, the vector b₂ − B₂₂x₂ is also even. Since H is collapsible, the system

Hx₃ = b₂ − B₂₂x₂  (4.33)

has a (b₂ − B₂₂x₂)-solution x₃. Therefore H_{x₃} is nonseparable and spanning in H.
Now let x = (x₁^T, x₂^T, x₃^T)^T. We have

Bx = (b₁^T, b₂^T)^T = b.

Thus, to see that x is a b-solution for equation (4.31), it remains to show that B_x is nonseparable and is spanning in B.

Claim 1 B_x is nonseparable.

Suppose that B_x is separable. We may assume that

B_x = [ X  0 ]
      [ 0  Y ],  where X ≠ 0 and Y ≠ 0.  (4.34)

By (4.33), H_{x₃} is nonseparable, and so H_{x₃} is a submatrix of exactly one of X and Y; we may assume that H_{x₃} is a submatrix of Y. By the definition of B*_{n,m}, B has no zero columns, and so

the columns chosen by x₃ are among the last m − (m₁ + m₂) columns of B.  (4.35)

Let x_{B/H} and y_{B/H} denote the parts of x₁₂ choosing the columns of B_x that meet X and Y, respectively. By (4.32) and (4.33), x_{B/H} ≠ 0. If y_{B/H} ≠ 0, then (B/H)_{x₁₂} is separable, contrary to (4.32). Therefore y_{B/H} = 0, and so

the columns chosen by x₁₂ are among the first m₁ + m₂ columns of B.  (4.36)

By (4.34), (4.35) and (4.36), Y = H_{x₃} and X consists of the first n₁ rows of (B/H)_{x₁₂}. This implies that the last row of (B/H)_{x₁₂} is a zero row, and so (B/H)_{x₁₂} is separable, contrary to (4.32). This proves Claim 1.

Claim 2 B_x is spanning in B.

It suffices to show that rank(B_x) = n − 1. Since (B/H)_{x₁₂} is spanning in B/H, there exist l₁ column vectors v₁, v₂, ···, v_{l₁} among the first m₁ columns of B and l₂ column vectors w₁, w₂, ···, w_{l₂} among the middle m₂ columns of B such that l₁ + l₂ = n₁ and such that v₁, ···, v_{l₁}, w₁, ···, w_{l₂} are linearly independent over GF(2) when restricted to the rows of B/H. Since H_{x₃} is spanning in H, there exist column vectors u₁, ···, u_{n−n₁−1} among the last m − (m₁ + m₂) columns of B such that u₁, ···, u_{n−n₁−1} are linearly independent over GF(2). It remains to show that v₁, ···, v_{l₁}, w₁, ···, w_{l₂} and u₁, ···, u_{n−n₁−1} are linearly independent. If not, there exist constants c₁, ···, c_{l₁}, c'₁, ···, c'_{l₂}, d₁, ···, d_{n−n₁−1}, not all zero, such that

Σ_{i=1}^{l₁} c_i v_i + Σ_{i=1}^{l₂} c'_i w_i + Σ_{i=1}^{n−n₁−1} d_i u_i = 0.  (4.37)

Consider the first n₁ equations in (4.37): since v₁, ···, v_{l₁}, w₁, ···, w_{l₂} are linearly independent there, we must have c₁ = ··· = c_{l₁} = 0 and c'₁ = ··· = c'_{l₂} = 0. This, together with the fact that u₁, ···, u_{n−n₁−1} are linearly independent, implies that d₁ = ··· = d_{n−n₁−1} = 0, a contradiction. Therefore rank(B_x) = n − 1, as expected. □
Definition 4.7.5 Let B ∈ B*_{n,m}. Let τ(B) denote the largest possible number k such that E(B) can be partitioned into k subsets E₁, E₂, ···, E_k such that each B_{E_i} is both nonseparable and spanning in B, 1 ≤ i ≤ k.

Example 4.7.3 Let G be a graph and let B = B(G). Then τ(B) is the spanning tree packing number of G, which is the maximum number of edge-disjoint spanning trees in G.

Proposition 4.7.4 Let B ∈ B*_{n,m} be a matrix with τ(B) ≥ 1. Then for any even vector b ∈ V(n,2), the system Bx = b has a solution.

Proof We may assume that rank(B) = n − 1, for otherwise by τ(B) ≥ 1 we can pick a submatrix B' of B that is nonseparable and spanning in B to replace B. Since b is even and since every column sum of B is even, every column of [B, b] lies in the (n − 1)-dimensional subspace of even-weight vectors of V(n,2); it follows that rank([B, b]) = n − 1 = rank(B). Therefore, Bx = b has a solution. □

Theorem 4.7.5 Let B ∈ B*_{n,m} be a matrix with τ(B) ≥ 2. Then B is collapsible.
Proof We need to show that for every even vector b ∈ V(n,2), Bx = b has a b-solution. Since τ(B) ≥ 2, we may assume that for some B₁ ∈ B*_{n,m₁} and B₂ ∈ B*_{n,m−m₁}, B = [B₁, B₂] such that each B_i is nonseparable and spanning in B. Let x₁ = (1, 1, ···, 1, 0, ···, 0)^T ∈ V(m,2) be such that the first m₁ components of x₁ are 1 and all the other components of x₁ are 0. Write B = [B₁, B₂] = [B₁, 0] + [0, B₂]. Since B₁ ∈ B*_{n,m₁}, the vector [B₁, 0]x₁ is even, and so the vector b − [B₁, 0]x₁ ∈ V(n,2) is also even. Since τ([0, B₂]) ≥ 1, by Proposition 4.7.4 the system [0, B₂]x₂ = b − [B₁, 0]x₁ has a solution x₂ such that the first m₁ components of x₂ are 0.

Let x = x₁ + x₂. Then since the last m − m₁ components of x₁ are 0 and the first m₁ components of x₂ are 0, we have

Bx = [B₁, 0]x₁ + [0, B₂]x₂ = b.

By the definition of x₁, B_x contains B₁ as a submatrix, and so B_x is both spanning in B and nonseparable, by Proposition 4.7.1. □
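Proposition 4.7.4 can be checked by brute force on a small incidence matrix. A sketch (numpy assumed; the graph, a 4-cycle with one chord, is an arbitrary illustrative choice):

```python
import numpy as np
from itertools import product

# B = B(G) for G = C4 plus the chord (0,2); G is connected, so B
# contains a nonseparable spanning submatrix and tau(B) >= 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, m = 4, len(edges)
B = np.zeros((n, m), dtype=int)
for j, (u, v) in enumerate(edges):
    B[u, j] = B[v, j] = 1

# Proposition 4.7.4: Bx = b is solvable over GF(2) for every even b.
for b in product((0, 1), repeat=n):
    b = np.array(b)
    if b.sum() % 2 == 0:
        sols = [x for x in product((0, 1), repeat=m)
                if ((B @ np.array(x)) % 2 == b).all()]
        assert sols
```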
Theorem 4.7.6 (Catlin, [47] and Jaeger, [138]) If a graph G has two edge-disjoint spanning trees, then G is collapsible, and supereulerian.

We close this section by mentioning a completely different definition of Eulerian matrix in the literature. For a square matrix A whose entries are in {0, 1, −1}, Camion [46] called the matrix A Eulerian if the row sums and the column sums of A are even integers, and he proved the following result.

Theorem 4.7.7 (Camion, [46]) An m × n (0, 1, −1) matrix A is totally unimodular if and only if the sum of all the elements of each Eulerian submatrix of A is a multiple of 4.
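One direction of Theorem 4.7.7 can be illustrated on the vertex-arc incidence matrix of a digraph, a standard example of a totally unimodular matrix. A small exhaustive sketch (numpy assumed; the arc list is an arbitrary illustration):

```python
import numpy as np
from itertools import combinations

arcs = [(0, 1), (1, 2), (2, 0), (0, 2), (1, 0)]
n = 3
A = np.zeros((n, len(arcs)), dtype=int)
for j, (u, v) in enumerate(arcs):
    A[u, j], A[v, j] = 1, -1          # +1 at tail, -1 at head

for k in range(1, n + 1):
    for rows in combinations(range(n), k):
        for cols in combinations(range(len(arcs)), k):
            sub = A[np.ix_(rows, cols)]
            # every square submatrix has determinant in {-1, 0, 1}
            assert round(np.linalg.det(sub)) in (-1, 0, 1)
            # Theorem 4.7.7, forward direction: Eulerian submatrices
            # (even row and column sums) have entry sum divisible by 4
            if not (sub.sum(axis=0) % 2).any() and not (sub.sum(axis=1) % 2).any():
                assert sub.sum() % 4 == 0
```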
4.8 The Chromatic Number

Graphs considered in this section are simple, and groups considered in this section are finite Abelian groups. The focus of this section is a certain linear algebra approach to the study of graph coloring problems.

Let Γ denote a group and let p > 0 be an integer. Denote by V(p,Γ) the set of p-tuples (g₁, g₂, ···, g_p)^T such that each g_i ∈ Γ, 1 ≤ i ≤ p. Given g = (g₁, g₂, ···, g_p)^T and h = (h₁, h₂, ···, h_p)^T, we write g ≁ h to mean that g_i ≠ h_i for every i with 1 ≤ i ≤ p. For notational convenience, we assume that the binary operation of Γ is addition and that 0 denotes the additive identity of Γ. We also adopt the convention that for the integers 1, −1, 0 and an element g ∈ Γ, (1)g = g, (0)g = 0, the additive identity of Γ, and (−1)g = −g, the additive inverse of g in Γ.

Let G be a graph, let k ≥ 1 be an integer, and let O(k) = {1, 2, ···, k} be a set of k distinct elements (referred to as colors in this section). A function c : V(G) → O(k) is a proper vertex k-coloring if c(u) ≠ c(v) whenever uv ∈ E(G). A graph G is k-colorable if G has a proper k-coloring. Note that a graph G has a proper k-coloring if and only if V(G) can be partitioned into k independent sets, each of which is called a color class. The smallest integer k such that G is k-colorable is χ(G), the chromatic number of G. If χ(G) = k and for every vertex v ∈ V(G), χ(G − v) < χ(G), then G is k-critical or just critical.
Let G be a graph with n vertices and m edges. We can use elements of Γ as colors. Arbitrarily assign orientations to the edges of G to get a digraph D = D(G), and let B = B(D) be the incidence matrix of D. Then a proper Γ-coloring is an element c ∈ V(n,Γ) such that

B^T c ≁ 0,

where 0 ∈ V(m,Γ). Viewing the problem in the nonhomogeneous way, for an element b ∈ V(m,Γ), an element c ∈ V(n,Γ) is a (Γ, b)-coloring if

B^T c ≁ b.  (4.38)

Definition 4.8.1 Let Γ denote a group. A graph G is Γ-colorable if for any b ∈ V(m,Γ), there is always a (Γ, b)-coloring c satisfying (4.38).

Vectors in V(|E(G)|,Γ) can be viewed as functions from E(G) into Γ, and vectors in V(|V(G)|,Γ) can be viewed as functions from V(G) into Γ. With this in mind, for any b ∈ V(|E(G)|,Γ) and e ∈ E(G), b(e) denotes the component of b labeled by the element e. Similarly, for any c ∈ V(|V(G)|,Γ) and v ∈ V(G), c(v) denotes the component of c labeled by the element v. Therefore, we can equivalently state that for a function b ∈ V(m,Γ), a (Γ, b)-coloring is a function c ∈ V(n,Γ) such that for each oriented edge e = (x,y) ∈ E(G), c(x) − c(y) ≠ b(e); and that a graph G is Γ-colorable if, under some fixed orientation of G, for any function b ∈ V(m,Γ), G always has a (Γ, b)-coloring.

Proposition 4.8.1 If for one orientation D = D(G), G is Γ-colorable, then for any orientation of G, G is also Γ-colorable.

Proof Let D₁ and D₂ be two orientations of G, and assume that G is Γ-colorable under D₁. It suffices to show that when D₂ is obtained from D₁ by reversing the direction of exactly one edge, G is Γ-colorable under D₂. Let B₁ = B(D₁) and B₂ = B(D₂). We may assume that B₁ and B₂ differ only in the column corresponding to the reversed edge e₁, where that column of B₂ equals the corresponding column of B₁ multiplied by (−1). Let b = (b₁, b₂, ···, b_m)^T ∈ V(m,Γ). Then b' = (−b₁, b₂, ···, b_m)^T ∈ V(m,Γ) also. Since G is Γ-colorable under D₁, there exists a (Γ, b')-coloring c' = (c₁, c₂, ···, c_n)^T ∈ V(n,Γ). Note that c = (c₁, c₂, ···, c_n)^T is then a (Γ, b)-coloring under D₂, and so G is also Γ-colorable under D₂. □
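Definition 4.8.1 can be tested by brute force for a small graph and group. A sketch (pure Python) checking that C₄, under a fixed orientation, has a (Γ, b)-coloring over Γ = Z₃ for every b, consistent with χ₁(C₄) = 3 from Theorem 4.8.1 below:

```python
from itertools import product

# Fixed orientation of C4: edges e_i = (i, i+1 mod 4).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, q = 4, 3                                   # q = |Gamma|, Gamma = Z_3

def has_coloring(b):
    # A (Gamma, b)-coloring requires c(x) - c(y) != b(e) on each edge e = (x, y).
    return any(all((c[x] - c[y]) % q != be for (x, y), be in zip(edges, b))
               for c in product(range(q), repeat=n))

# G is Gamma-colorable iff a coloring exists for EVERY b in V(m, Gamma).
assert all(has_coloring(b) for b in product(range(q), repeat=len(edges)))
```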
Definition 4.8.2 Let G be a simple graph. Define the group chromatic number χ₁(G) to be the smallest integer k such that whenever Γ is a group with |Γ| ≥ k, G is Γ-colorable.

Example 4.8.1 (Lai and Zhang, [152]) For any positive integers m and k, let G be a graph with (2m + k) + (m + k)^{m+k} − 1 vertices formed from a complete subgraph K_m on m vertices and a complete bipartite subgraph K_{r₁,r₂} with r₁ = m + k and r₂ = (m + k)^{m+k}. One can routinely verify that χ(G) = m and χ₁(G) = m + k (Exercise 4.19).

Immediately from the definition of χ₁(G), we have

χ(G) ≤ χ₁(G),  (4.39)

and

χ₁(G') ≤ χ₁(G), if G' is a subgraph of G.  (4.40)

More properties of the group chromatic number χ₁(G) can be found in the exercises. We now present the Brooks coloring theorem for the group chromatic number.
Theorem 4.8.1 (Lai and Zhang, [152]) Let G be a connected graph with maximum degree Δ(G). Then

χ₁(G) ≤ Δ(G) + 1,

where equality holds if and only if G = C_n is the cycle on n vertices or G = K_n is the complete graph on n vertices.

The proof of Theorem 4.8.1 has been divided into several exercises at the end of this chapter. Modifying a method of Wilf [275], we can apply Theorem 4.8.1 to prove an improvement of Theorem 4.8.1, yielding a better upper bound on χ₁(G) in terms of λ₁(G), the largest eigenvalue of G.

Lemma 4.8.1 Let G be a graph with χ₁(G) = k. Then G contains a connected subgraph G' such that χ₁(G') = k and δ(G') ≥ k − 1.

Proof By Definition 4.8.1, G is Γ-colorable if and only if each component of G is Γ-colorable. Therefore, we may assume that G is connected. By (4.40), G contains a connected subgraph G' such that χ₁(G') = k but for any proper subgraph G'' of G', χ₁(G'') < k. Let n = |V(G')| and m = |E(G')|.
If δ(G') < k − 1, then G' has a vertex v of degree d ≤ k − 2 in G'. Note that by the choice of G', χ₁(G' − v) ≤ k − 1. By Proposition 4.8.1, we may assume that G' is oriented so that all the edges incident with v are directed from v, and that v corresponds to the last row of B = B(G'). Let v₁, v₂, ···, v_d be the vertices adjacent to v in G', corresponding to the first d rows of B, and let e_i = (v, v_i), 1 ≤ i ≤ d, denote the edges incident with v in G', corresponding to the first d columns of B. Let Γ be a group with |Γ| = k − 1. For any b = (b₁, b₂, ···, b_d, b_{d+1}, ···, b_m)^T ∈ V(m,Γ), let b' = (b_{d+1}, ···, b_m)^T ∈ V(m − d, Γ). Since χ₁(G' − v) ≤ k − 1 = |Γ|, there exists a (Γ, b')-coloring c' = (c₁, c₂, ···, c_{n−1})^T ∈ V(n − 1, Γ). Since |Γ − {b₁ + c₁, b₂ + c₂, ···, b_d + c_d}| ≥ (k − 1) − d > 0, there exists a c_n ∈ Γ − {b₁ + c₁, b₂ + c₂, ···, b_d + c_d}. Setting c = (c₁, ···, c_{n−1}, c_n)^T gives c_n − c_i ≠ b_i for 1 ≤ i ≤ d, so c is a (Γ, b)-coloring of G'. Since the same argument applies to every group of order at least k − 1, this yields χ₁(G') ≤ k − 1, contrary to χ₁(G') = k. Hence δ(G') ≥ k − 1. □
Theorem 4.8.2 Let G be a connected graph and let λ1 be the largest eigenvalue of A(G). Each of the following holds.
(i)

χ1(G) ≤ λ1(G) + 1,   (4.41)

where equality holds if and only if G is a complete graph or a cycle.
(ii) (Wilf, [275])

χ(G) ≤ λ1(G) + 1,

where equality holds if and only if G is a complete graph or an odd cycle.

Proof By (4.39), and by the well-known fact that χ(Cn) ≤ 3 for any cycle Cn on n ≥ 3 vertices, with equality if and only if n is odd, it is straightforward to see that Theorem 4.8.2(ii) follows from Theorem 4.8.2(i). By Lemma 4.8.1, G has a connected subgraph G' satisfying the conclusion of Lemma 4.8.1, with χ1(G') = χ1(G) = k. By Theorem 1.6.1, λ1(G') ≥ δ(G') ≥ k − 1, and so

λ1(G) + 1 ≥ λ1(G') + 1 ≥ χ1(G),   (4.42)

and (4.41) obtains. Assume that equality holds in (4.41). Then

λ1(G') = λ1(G) = k − 1 = χ1(G) − 1.   (4.43)

Hence equalities hold everywhere in (4.42). By Corollary 1.3.2A, G' is (k−1)-regular. If k > 3, then by Theorem 4.8.1, G' = Kk. Denote

A = A(G) = [ A11  A12
             A21  A22 ]

such that V(G') = {v1, ···, vk} corresponds to the first k rows and columns, so that A11 = A(G'). We want to show that k = n, and so G = G'. If k < n, then let x = (1, 1, ···, 1, ε, 0, ···, 0)^T be an n-dimensional vector such that the first k components of x are 1, the (k+1)-th component is ε, and all the other components are 0. By Theorem 1.3.2,

λ1(G) ≥ (Ax)^T x / (x^T x) = [k(k − 1) + 2ε Σ_{j=1}^{k} a_{j,k+1}] / (k + ε²).

Note that Σ_{j=1}^{k} a_{j,k+1} ≥ 0. If Σ_{j=1}^{k} a_{j,k+1} > 0, then choose ε so that 2 Σ_{j=1}^{k} a_{j,k+1} > ε(k − 1), which results in λ1(G) > k − 1, contrary to (4.43). Therefore Σ_{j=1}^{k} a_{j,k+1} = 0, and so a_{j,k+1} = 0 for each j with 1 ≤ j ≤ k. Repeating this process yields A12 = 0, and so A21 = A12^T = 0, contrary to the assumption that G is connected. Therefore we must have n = k, and so G = G' = Kn. With a similar argument, we can also show that when k = 3, G = G' = Cn is a cycle. ∎
Corollary 4.8.2 Let G be a connected graph with n vertices and m edges. Then

χ1(G) ≤ 1 + √(2m(n − 1)/n).   (4.44)

Equality holds if and only if G is a complete graph.

Proof Let λ1, λ2, ···, λn denote the eigenvalues of G. By the Schwarz inequality, and by Σ_{i=1}^{n} λi = 0 and Σ_{i=1}^{n} λi² = 2m,

λ1² = (− Σ_{i=2}^{n} λi)² ≤ (n − 1) Σ_{i=2}^{n} λi² = (n − 1)(2m − λ1²),

so that λ1 ≤ √(2m(n − 1)/n). Therefore (4.44) follows from (4.41). Suppose equality holds in (4.44). Then by Theorem 4.8.2(i), G must be a complete graph or a cycle. But in this case we must also have λ2 = λ3 = ··· = λn, and so G must be Kn. (See Exercise 1.7 for the spectra of Kn and Cn.) ∎
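As a quick numerical sanity check (not part of the text; the helper name `edge_bound` is ours), the right-hand side of (4.44) can be evaluated on complete graphs, where the corollary says the bound is tight:

```python
import math

# Evaluate the bound (4.44): chi_1(G) <= 1 + sqrt(2m(n-1)/n),
# on complete graphs K_n, the equality case of Corollary 4.8.2.
def edge_bound(n, m):
    """Right-hand side of (4.44) for a graph with n vertices and m edges."""
    return 1 + math.sqrt(2 * m * (n - 1) / n)

for n in range(2, 8):
    m = n * (n - 1) // 2          # K_n has n(n-1)/2 edges
    b = edge_bound(n, m)
    # For K_n the bound evaluates to exactly 1 + (n-1) = n = chi_1(K_n).
    assert abs(b - n) < 1e-9
```

For any other graph with the same n and m the bound is the same number, which illustrates that (4.44) only uses edge counts, not structure.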
The following result unifies Brooks' coloring theorem and Wilf's coloring theorem.

Theorem 4.8.3 (Szekeres and Wilf, [258], Cao, [42]) Let f(G) be a real function on a graph G satisfying properties (P1) and (P2) below:
(P1) If H is an induced subgraph of G, then f(H) ≤ f(G).
(P2) f(G) ≥ δ(G), with equality if and only if G is regular.
Then χ(G) ≤ f(G) + 1, with equality if and only if G is an odd cycle or a complete graph.
We now turn to lower bounds of χ(G). For a graph G, ω(G), the clique number of G, is the maximum k such that G has Kk as a subgraph. We immediately have a trivial lower bound for χ(G):

χ1(G) ≥ χ(G) ≥ ω(G).

Theorem 4.8.4 below presents a lower bound obtained by investigating A(G), the adjacency matrix of G. A better bound (Theorem 4.8.5 below) was obtained by Hoffman [123] by working with the eigenvalues of G.

Theorem 4.8.4 If G has n vertices and m edges, then

χ(G) ≥ n² / (n² − 2m).
Proof Note that G has a proper k-coloring if and only if V(G) can be partitioned into k independent subsets V1, V2, ···, Vk. Let n_i = |V_i| (1 ≤ i ≤ k). We may assume that the adjacency matrix A(G) has the form

A(G) = [ A11  A12  ···  A1k
         A21  A22  ···  A2k
          ⋮    ⋮         ⋮
         Ak1  Ak2  ···  Akk ],   (4.45)

where the rows in [A_{i1}, A_{i2}, ···, A_{ik}] correspond to the vertices in V_i, 1 ≤ i ≤ k. Since V_i is independent, A_{ii} = 0 ∈ B_{n_i} (1 ≤ i ≤ k), and so

2m = ‖A(G)‖ ≤ n² − Σ_{i=1}^{k} n_i².   (4.46)

It follows by the Schwarz inequality that

k Σ_{i=1}^{k} n_i² ≥ (Σ_{i=1}^{k} n_i)² = n²,

and so Theorem 4.8.4 follows by (4.46). ∎
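The bound of Theorem 4.8.4 is easy to evaluate; the following check (our own small examples, with the helper name `chi_lower_bound` ours) shows it is tight for complete graphs but can be weak elsewhere:

```python
# Theorem 4.8.4's lower bound: chi(G) >= n^2 / (n^2 - 2m).
def chi_lower_bound(n, m):
    return n * n / (n * n - 2 * m)

# K_n: m = n(n-1)/2 and chi = n; the bound equals n^2/(n^2 - n(n-1)) = n exactly.
for n in range(2, 7):
    assert abs(chi_lower_bound(n, n * (n - 1) // 2) - n) < 1e-9

# C_5: n = 5, m = 5, chi = 3; the bound 25/15 = 5/3 is valid but weak.
assert chi_lower_bound(5, 5) < 3
```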
Lemma 4.8.2 (Hoffman, [123]) Let A be a real symmetric matrix with the form (4.45) such that each A_{ii} is a square matrix. Then

λmax(A) + (k − 1) λmin(A) ≤ Σ_{i=1}^{k} λmax(A_{ii}).

Theorem 4.8.5 (Hoffman, [123]) Let G be a graph with n vertices and m > 0 edges, and with eigenvalues λ1 ≥ λ2 ≥ ··· ≥ λn. Then

χ(G) ≥ 1 + λ1/(−λn).   (4.47)
Proof Let k = χ(G). Then V(G) can be partitioned into k independent subsets V1, ···, Vk, and so we may assume that A(G) has the form in (4.45), where A_{ii} = 0, 1 ≤ i ≤ k. By Lemma 4.8.2, we have

λ1 + (k − 1)λn ≤ 0.

However, since m > 0, we have λn < 0, and so (4.47) follows. ∎
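Hoffman's bound (4.47) can be checked numerically. The sketch below (entirely our own; the book does not give an algorithm) estimates λ1 and λn of a small symmetric adjacency matrix by plain power iteration and a spectral shift:

```python
# Numerical check of Hoffman's bound (4.47): chi(G) >= 1 + lambda_1 / (-lambda_n).
def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def top_eigenvalue(A, steps=2000):
    """Largest eigenvalue of a small symmetric matrix via power iteration."""
    n = len(A)
    x = [1.0 + 0.01 * i for i in range(n)]      # slightly asymmetric start vector
    for _ in range(steps):
        y = mat_vec(A, x)
        norm = max(abs(v) for v in y) or 1.0
        x = [v / norm for v in y]
    y = mat_vec(A, x)
    return sum(u * v for u, v in zip(x, y)) / sum(u * u for u in x)

def hoffman_bound(A):
    lam1 = top_eigenvalue(A)
    # Smallest eigenvalue via a shift: lambda_n = lam1 - (top eigenvalue of lam1*I - A).
    n = len(A)
    B = [[(lam1 if i == j else 0.0) - A[i][j] for j in range(n)] for i in range(n)]
    lam_n = lam1 - top_eigenvalue(B)
    return 1 + lam1 / (-lam_n)

# K_4: lambda_1 = 3, lambda_n = -1, so the bound is 4 = chi(K_4) (tight).
K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
assert abs(hoffman_bound(K4) - 4) < 1e-3
```

For K_n the bound is always tight (λ1 = n−1, λn = −1), which is one reason Hoffman's bound improves on Theorem 4.8.4 for dense graphs.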
4.9 Exercises
Exercise 4.1 Solve the difference equation

{ F_{n+3} = 2F_{n+1} + F_n + n,
  F_0 = 1, F_1 = 0, F_2 = 1.

Exercise 4.2 Let S_λ(t, k, v) be a t-design. Show that the number of blocks is b = λ (v choose t)/(k choose t).

Exercise 4.3 Show that for i = 0, 1, ···, t, a t-design S_λ(t, k, v) is also an i-design S_{λ_i}(i, k, v), where

λ_i = λ (v−i choose t−i) / (k−i choose t−i).

Exercise 4.4 Prove that bk = vr in a BIBD.

Exercise 4.5 Prove Theorem 4.3.8.

Exercise 4.6 Let A = (a_ij) and B = (b_ij) be the adjacency matrix and the incidence matrix of a digraph D(V, E), respectively. Show that

Σ_{i=1}^{|V|} Σ_{j=1}^{|E|} b_ij = 0 and Σ_{i=1}^{|V|} Σ_{j=1}^{|V|} a_ij = |E|.
Exercise 4.7 For graphs G and H, show that Θ(G * H) = Θ(G)Θ(H).

Exercise 4.8 If G has an orthonormal representation in dimension d, then Θ(G) ≤ d.

Exercise 4.9 Let G be a graph on n vertices.
(i) If the automorphism group Γ of G is vertex-transitive, then both Θ(G)Θ(G^c) ≥ n and Θ(G)Θ(G^c) ≤ n, so that Θ(G)Θ(G^c) = n.
(ii) Find an example to show that it is necessary for Γ to be vertex-transitive.
Exercise 4.10 If the automorphism group Γ of G is vertex-transitive, then each of the following holds.
(i) Θ(G * G^c) = |V(G)|.
(ii) If, in addition, G is self-complementary, then Θ(G) = √|V(G)|.
(iii) Θ(C5) = √5.
Exercise 4.11 Prove Proposition 4.6.2.

Exercise 4.12 Prove Theorem 4.7.3.

Exercise 4.13 (Cao, [42]) Let v be a vertex of a graph G. The k-degree of v is defined to be the number of walks of length k from v. Let Δ_k(G) be the maximum k-degree of the vertices in G. Show that
(i) Δ_k(G) is the maximum row sum of A^k(G).
(ii) For a connected graph G, χ(G) ≤ [Δ_k(G)]^{1/k} + 1, where equality holds if and only if G is an odd cycle or a complete graph.

Exercise 4.14 Let Γ be an Abelian group. Then a graph G is Γ-colorable if and only if every block of G is Γ-colorable.

Exercise 4.15 (Lai and Zhang, [152]) Let H be a subgraph of a graph G, and let Γ be an Abelian group. Then (G, H) is said to be Γ-extendible if for any b ∈ V(|E(G)|, Γ), and for any (Γ, b1)-coloring c1 of H, where b1 is the restriction of b to E(H) (as a function), there is a (Γ, b)-coloring c of G such that the restriction of c to V(H) is c1 (as a function). Show that if (G, H) is Γ-extendible and H is Γ-colorable, then G is Γ-colorable.

Exercise 4.16 (Lai and Zhang, [152]) Let G be a graph and suppose that V(G) can be linearly ordered as v1, v2, ···, vn such that d_{G_i}(v_i) ≤ k (i = 1, 2, ..., n), where G_i = G[{v1, v2, ···, v_i}] is the subgraph of G induced by {v1, v2, ···, v_i}. Then for any Abelian group Γ with |Γ| ≥ k + 1, (G_{i+1}, G_i) (i = 1, 2, ..., n − 1) is Γ-extendible. In particular, G is Γ-colorable.

Exercise 4.17 (Lai and Zhang, [152]) Let G be a graph. Then χ1(G) ≤ max_{H⊆G} δ(H) + 2.

Exercise 4.18 (Lai and Zhang, [152]) For any complete bipartite graph K_{m,n} with n ≥ m^m, χ1(K_{m,n}) = m + 1.

Exercise 4.19 (Lai and Zhang, [152]) For any positive integers m and k, there exists a graph G such that χ(G) = m and χ1(G) = m + k.
Exercise 4.20 Let G be a graph. Show that
(i) If G is a cycle on n ≥ 3 vertices, then χ1(G) = 3.
(ii) χ1(G) ≤ 2 if and only if G is a forest.

Exercise 4.21 Prove Theorem 4.8.1.
4.10 Hints for Exercises
Exercise 4.1 Apply Corollary 4.1.2B to obtain k = 3, r = 1, α = 2, β = 1, b_n = n, c_0 = 1, c_1 = 0, c_2 = 1.

Exercise 4.2 Count the number of t-subsets in S_λ(t, k, v) in two different ways.

Exercise 4.3 For any subset S ⊆ X with |S| = i, the number of ways of taking a t-subset containing S from X is (v−i choose t−i), while each t-subset belongs to λ of the X_j's in an S_λ(t, k, v). Thus the number of pairs (T, X_j), where T is a t-subset with S ⊆ T ⊆ X_j, is λ (v−i choose t−i). On the other hand, the number of t-subsets containing S taken from a block X_j containing S is (k−i choose t−i), where S belongs to λ_i of the X_j's, and so this number of pairs is also λ_i (k−i choose t−i). Hence

λ_i = λ (v−i choose t−i) / (k−i choose t−i).
Exercise 4.4 Count the repeated occurrences of the v elements in two different ways.

Exercise 4.5 Let m = l + 1 and b = t − 1, and apply Theorem 4.3.7. Then there exists a complete graph Kn with n = b(m − 1) + m = (t − 1)l + (l + 1) = tl + 1, which can be decomposed into t complete (l+1)-partite subgraphs F1, F2, ···, Ft. Clearly d_k(F_i) ≤ 2 ≤ d, and so (4.13) follows.
Exercise 4.6 In the incidence matrix, every column has exactly one +1 and one −1, and so the first double sum is 0. The second double sum is the sum of the in-degrees of all vertices, and so the sum is |E(D)|.

Exercise 4.7 By Proposition 4.5.1, it suffices to show that Θ(G * H) ≥ Θ(G)Θ(H). Let v1, ···, vn and w1, ···, wm be orthonormal representations of G^c and H^c, respectively, and let c and d be unit vectors such that

Σ_{i=1}^{n} (v_i^T c)² = Θ(G), Σ_{j=1}^{m} (w_j^T d)² = Θ(H).

Then the v_i ⊗ w_j's form an orthonormal representation of G^c * H^c. Since G^c * H^c ⊆ (G * H)^c, the v_i ⊗ w_j's form an orthonormal representation of (G * H)^c. Note that c ⊗ d is a unit vector. Hence

Θ(G * H) ≥ Σ_{i=1}^{n} Σ_{j=1}^{m} ((v_i ⊗ w_j)^T (c ⊗ d))² = Σ_{i=1}^{n} (v_i^T c)² Σ_{j=1}^{m} (w_j^T d)² = Θ(G)Θ(H).

Exercise 4.8 Let u1, ···, un be an orthonormal representation of G in dimension d, and let b = (1/√d) Σ_{i=1}^{d} e_i ⊗ e_i. Then |b| = 1 and (u_i ⊗ u_i)^T b = 1/√d. Hence Θ(G) ≤ d follows by the definition of Θ.
Exercise 4.9 (i) View the elements of Γ as n × n permutation matrices. By Theorem 4.5.2, there exists a B ∈ 𝔅 such that tr(BJ) = Θ(G), and consider the matrix

B̄ = (1/|Γ|) Σ_{P∈Γ} P^T B P.

Then, as PJ = JP = J, we have B̄ ∈ 𝔅 and tr(B̄J) = Θ(G). Since Γ is vertex-transitive, b̄_{11} = b̄_{22} = ··· = b̄_{nn} = 1/n. Imitate the proof of Theorem 4.5.3 to construct an orthonormal representation v1, ···, vn and the unit vector d. Note that in the proof of Theorem 4.5.3, equality in the Cauchy–Schwarz inequality held, and so we have in fact (using the notation in the proof of Theorem 4.5.3)

(d^T v_i)² = Θ(G)/n, 1 ≤ i ≤ n.

Hence, back in the proof of this exercise, we have

Θ(G^c) ≤ max_{1≤i≤n} 1/(d^T v_i)² = n/Θ(G).

The other assertion of (i) follows from Theorem 4.5.1.
(ii) Take G = K_{1,n−1} for n ≥ 3.
Exercise 4.10 It suffices to show (i). Note that the "diagonal" {(v, v) : v ∈ V(G)} of G * G^c is an independent vertex set in G * G^c. Hence Θ(G * G^c) ≥ α(G * G^c) ≥ |V(G)|. On the other hand, by Theorems 4.5.1 and 4.5.4 and by the previous exercise, Θ(G * G^c) ≤ Θ(G)Θ(G^c) = |V(G)|.
Exercise 4.11 Let j = (1, 1, ···, 1)^T ∈ B_{n,1}. Multiply both sides of the equation in Proposition 4.6.1(i) from the right by j to get k² = k + λk + μ(n − 1 − k), and so Proposition 4.6.2(i) obtains. Let l = n − k − 1. Then G^c, the complement of G, is l-regular. By Proposition 4.6.1,

(J − I − A)² = lI + (l − k + μ − 1)(J − I − A) + (l − k + λ + 1)A.

This implies Proposition 4.6.2(ii), by Proposition 4.6.1. If G is the disjoint union of some K_{k+1}'s, then it is routine to check that G is an (n, k, k−1, 0)-strongly regular graph. Conversely, assume that G is an (n, k, λ, 0)-strongly regular graph. By Proposition 4.6.2(ii) and since μ = 0, λ = k − 1, which implies that each component of G is a complete graph.
Exercise 4.12 By Theorem 4.7.2 with B' = B(Kn), a simple nonseparable matrix A ⊆ B(Kn) is subeulerian if and only if there exists no vector z in the cocycle space of B(Kn) such that ‖z‖_0 is odd.

Exercise 4.13 (i) follows from Proposition 1.1.2(vii). For (ii), note that [Δ_k(G)]^{1/k} satisfies properties (P1) and (P2) in Theorem 4.8.3.

Exercise 4.14 Let Γ be an Abelian group. If a graph G is Γ-colorable, then every subgraph of G is also Γ-colorable. In particular, every block of G is Γ-colorable. It suffices to prove the converse for connected graphs with two blocks. Let G be a connected graph with two blocks G1 and G2, and assume that G1 and G2 are Γ-colorable. Let v0 be the cut vertex of G. Then v0 ∈ V(G1) ∩ V(G2). Let m = |E(G)| and m_i = |E(G_i)| (1 ≤ i ≤ 2), and let V(m_i, Γ) denote the set of m_i-tuples whose components are in Γ and are labeled with the edges of G_i. From any b ∈ V(m, Γ), we obtain two vectors b1 ∈ V(m1, Γ) and b2 ∈ V(m2, Γ). Since G1 and G2 are Γ-colorable, there exist a (Γ, b1)-coloring c1 ∈ V(|V(G1)|, Γ) and a (Γ, b2)-coloring c2 ∈ V(|V(G2)|, Γ). Let c_i(v0) = g_i (1 ≤ i ≤ 2), and for each v ∈ V(G2), let c2'(v) = c2(v) + g1 − g2. Then define

c(v) = { c1(v),   if v ∈ V(G1),
         c2'(v),  if v ∈ V(G2).
It is routine to verify that c is a (Γ, b)-coloring of G.

Exercise 4.15 For any b ∈ V(|E(G)|, Γ), since H is Γ-colorable, H has a (Γ, b_H)-coloring c_H : V(H) → Γ, where b_H is the restriction of b to E(H). Since (G, H) is Γ-extendible, c_H can be extended to a (Γ, b)-coloring of G.

Exercise 4.16 Let D be an orientation of E(G_{i+1}) such that every e = v_{j1} v_{j2} ∈ E(G_{i+1}) is directed from v_{j1} to v_{j2} if j1 > j2, and from v_{j2} to v_{j1} otherwise. For any b ∈ V(|E(G_{i+1})|, Γ) and any (Γ, b_i)-coloring c_i of G_i, where b_i is the restriction of b to E(G_i), we define a function c : V(G_{i+1}) → Γ as follows. Assuming that v_{i1} v_{i+1}, v_{i2} v_{i+1}, ···, v_{ir} v_{i+1} are all the edges joining v_{i+1} (0 ≤ r ≤ k) in G_{i+1}, we let c(v) = c_i(v) if v ∈ V(G_i), and let c(v_{i+1}) = g', where g' ∈ Γ' = Γ − {c(v_{ip}) + b(v_{ip} v_{i+1}) | p = 1, 2, ..., r}. Since |Γ| ≥ k + 1, Γ' ≠ ∅, and so c can be defined. It is routine to verify that c is a (Γ, b)-coloring, and so (G_{i+1}, G_i) is Γ-extendible, where i = 1, 2, ..., n − 1. By Exercise 4.15 and since G1 is Γ-colorable, it follows that G is Γ-colorable.
Exercise 4.17 Let |V(G)| = n and k = max_{H⊆G}{δ(H)} + 1, and let v_n be a vertex of degree at most k. Put H_{n−1} = G − {v_n}. By assumption, H_{n−1} has a vertex, say v_{n−1}, of degree at most k. Put H_{n−2} = G − {v_n, v_{n−1}}. Repeating this process, we obtain a sequence v1, v2, ···, vn such that each v_i is joined to at most k vertices preceding it. Now Exercise 4.17 follows from Exercise 4.16.
Exercise 4.18 Assume that K_{m,n} has the vertex bipartition (X, Y) with X = {x1, x2, ···, xm} and Y = {y1, y2, ···, yn}. Let Γ be an Abelian group with |Γ| = m, and let D be an orientation of E(K_{m,n}) such that every e = x_i y_j ∈ E(K_{m,n}) is directed from y_j to x_i. Denote the set of all functions c : V(K_{m,n}) → Γ by C(K_{m,n}, Γ). From every function c ∈ C(K_{m,n}, Γ), we get a function c_X : X → Γ. Let C(X, Γ) = {c_X : c ∈ C(K_{m,n}, Γ)}. Since |Γ| ≤ m, |C(X, Γ)| = |Γ|^m ≤ m^m ≤ n. Assume that C(X, Γ) = {c1, c2, ···, cr}, where r = |Γ|^m. Now define f_l : E(K_{m,n}) → Γ (l = 1, 2, ···, r) as follows: if l ≠ j, let f_l(x_i y_j) = 0 for every i; otherwise, let f_l(x_i y_l) = a_{l,i} ∈ Γ such that {c_l(x_i) + a_{l,i} : i = 1, 2, ···, m} = Γ. Let f = Σ_{l=1}^{r} f_l. Then for any function c : V(K_{m,n}) → Γ, there exists at least one arc e = y_j x_i ∈ E(K_{m,n}) such that c(y_j) − c(x_i) = f(e). Hence χ1(K_{m,n}) ≥ m + 1. On the other hand, by Exercise 4.17, χ1(K_{m,n}) ≤ m + 1.

Exercise 4.19 Let G be the graph in Example 4.8.1. Since G contains a K_m, χ(G) = m. Apply Exercise 4.18 to show that χ1(G) = m + k.
Exercise 4.20 To see that, if G contains a cycle, then χ1(G) ≥ 3, it suffices to show that χ1(C_n) > 2 for any n ≥ 3, where C_n denotes a cycle on n vertices. Let Z2 denote the group of 2 elements. If χ1(C_n) = 2, then C_n is Z2-colorable. Denote V(C_n) = {v1, v2, ···, vn}, such that the orientation of C_n is {(v_i, v_{i+1}) : i = 1, 2, ···, n (mod n)}. Assume first that n is even. Let b = (0, 0, ..., 0, 1)^T : E(C_n) → Z2. (Note that b(v_n, v_1) = 1.) By the assumption that C_n is Z2-colorable, C_n has a (Z2, b)-coloring c : V(C_n) → Z2 such that c(v_i) − c(v_{i+1}) ≠ b(v_i, v_{i+1}), where i = 1, 2, ···, n (mod n). The arcs with b = 0 force the colors to alternate along v1, v2, ···, vn; since n is even, c(v_n) ≠ c(v_1), and so c(v_n) − c(v_1) = 1 = b(v_n, v_1), a contradiction. (This shows that χ1(C_n) > 2. Together with Exercise 4.17, the above implies that χ1(C_n) = 3.) If n is odd, then choose b = 0, and a similar argument also works. On the other hand, one can routinely argue by induction on |V(G)| to show that if G is a forest, then χ1(G) ≤ 2.
Exercise 4.21 If G is connected and not regular of degree Δ(G), then max_{H⊆G} δ(H) ≤ Δ(G) − 1, and so χ1(G) ≤ Δ(G). Without loss of generality, let G be 2-connected and Δ(G)-regular. If G is a complete graph, then χ1(G) = |V(G)| = Δ(G) + 1. If Δ(G) = 2, then G is a cycle, and so by Exercise 4.20(i), χ1(G) = 3 = Δ(G) + 1. If G is 3-connected and G is not complete, then there are three vertices v1, v2 and v_n (n = |V(G)|) in G such that v1 v_n, v2 v_n ∈ E(G) and v1 v2 ∉ E(G). If G is 2-connected but not 3-connected, let {v_n, v'} be a cut set of G. Then there are two vertices v1 and v2 belonging to different end-blocks of G − v_n. Now, arrange the vertices of G − {v1, v2} in non-increasing order of their distance from v_n, say v3, v4, ···, v_n. Then the sequence v1, v2, ···, v_n is such that each vertex other than v_n is adjacent to at least one vertex following it; namely, each vertex other than v_n is joined to at most Δ(G) − 1 vertices preceding it. Let D be an orientation of E(G) such that every e = v_i v_j ∈ E(G) is directed from v_i to v_j if i > j, and from v_j to v_i otherwise. For any b : E(G) → Γ, where |Γ| ≥ Δ(G), we define c : V(G) → Γ as follows. Assign a1 ∈ Γ to c(v1) and a2 ∈ Γ to c(v2) such that a1 + b(v1 v_n) = a2 + b(v2 v_n); for v_j (3 ≤ j ≤ n), let v_{i1} v_j, v_{i2} v_j, ···, v_{ir} v_j ∈ E(G) (r ≤ Δ(G) − 1 if j < n) be the edges joining v_j and having i_p < j (p = 1, 2, ···, r), and assign a_j to c(v_j) such that a_j ∈ Γ_j = Γ − {c(v_{ip}) + b(v_{ip} v_j) | p = 1, 2, ..., r}. If j < n, then r ≤ Δ(G) − 1 and so Γ_j ≠ ∅; if j = n, then Γ_n ≠ ∅, since a1 + b(v1 v_n) = a2 + b(v2 v_n). It is now routine to verify that c is a (Γ, b)-coloring.
Chapter 5
Combinatorial Analysis in Matrices

5.1 Combinatorial Representation of Matrices and Determinants
Definition 5.1.1 For a matrix A = (a_ij) ∈ M_{m,n}, the weighted bipartite graph of A, denoted by K_A, has vertex set V = V1 ∪ V2 and edge set E, where V1 = {u1, u2, ···, um} and V2 = {v1, v2, ···, vn}, such that u_i v_j ∈ E with weight a_ij if and only if a_ij ≠ 0. To represent a determinant by graphs, we adopt the convention of viewing K_A as a weighted complete bipartite graph K_{m,n} with partite sets V1 and V2, such that u_i v_j ∈ E with weight a_ij, for all u_i ∈ V1 and v_j ∈ V2. If A = (a_ij) ∈ M_n, then we can extend the definition of D(A) by defining D(A) as the weighted digraph of A such that V(D(A)) = {v1, v2, ···, vn}, where (v_i, v_j) ∈ E(D(A)) with weight a_ij if and only if a_ij ≠ 0. For the convenience of interpreting a determinant by digraphs, we can view D(A) as a complete digraph on n vertices with a loop at every vertex, by assigning a_ij to the arc (v_i, v_j) for every i, j = 1, 2, ···, n.
Example 5.1.1 For the matrix

A = [ 3  0  1.2  0
      0  ⋯
      ⋯  3   6      ],

the weighted bipartite graph is
Figure 5.1.1

Example 5.1.2 Let A = (a_ij) ∈ M_n be a square matrix. Let K_{n,n} denote the weighted bipartite graph of A with partite sets V1 = {u1, u2, ···, un} and V2 = {v1, v2, ···, vn}. For any permutation π in the symmetric group on n letters, the edge subset

F_π = {u_i v_{π(i)} : 1 ≤ i ≤ n}

is a perfect matching in K_{n,n}. Therefore, this yields a one-to-one correspondence between S_n, the symmetric group on n letters, and M_n, the set of all perfect matchings of K_{n,n}. For each π ∈ S_n, define

sign(π) = { 1,  if π is an even permutation,
           −1,  if π is an odd permutation.

Let w_A(F_π) = sign(π) Π_{(u_i v_j) ∈ F_π} a_ij. Then the determinant of A can be written as

det(A) = Σ_{F_π ∈ M_n} w_A(F_π).
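The perfect-matching expansion of Example 5.1.2 can be computed directly for small matrices. The sketch below (our own code; function names are not from the text) enumerates all permutations, i.e. all perfect matchings of K_{n,n}, and computes sign(π) from the inversion count:

```python
from itertools import permutations

# Determinant via the perfect-matching (permutation) expansion of Example 5.1.2.
def sign(pi):
    """sign(pi) = (-1)^(number of inversions of pi)."""
    inv = sum(1 for i in range(len(pi)) for j in range(i + 1, len(pi)) if pi[i] > pi[j])
    return -1 if inv % 2 else 1

def det_by_matchings(A):
    n = len(A)
    total = 0
    for pi in permutations(range(n)):        # each pi <-> a perfect matching F_pi
        w = sign(pi)
        for i in range(n):
            w *= A[i][pi[i]]                 # weight a_{i, pi(i)} of edge u_i v_{pi(i)}
        total += w
    return total

A = [[1, 2, 0], [3, 1, 4], [0, 2, 1]]
assert det_by_matchings(A) == -13            # agrees with cofactor expansion
```

The enumeration has n! terms, so this is only a pedagogical check, not a practical determinant algorithm.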
Example 5.1.3 Let A = (a_ij) ∈ M_n be a square matrix. Let D(A) denote the weighted digraph of A with V(D(A)) = {v1, v2, ···, vn}. For any permutation π in the symmetric group on n letters, the arcs

F_π = {(v_i, v_{π(i)}) : 1 ≤ i ≤ n}

form a subdigraph called a 1-factor in D(A). Therefore, this yields a one-to-one correspondence between S_n, the symmetric group on n letters, and D_n, the set of all 1-factors of D(A). Note that each 1-factor F_π is a disjoint union of directed cycles in D(A). For a directed cycle C, define

w_A(C) = − Π_{(v_i, v_j) ∈ E(C)} a_ij, and w_A(F_π) = Π_{C ∈ F_π} w_A(C).

Note that if k denotes the number of disjoint directed cycles in F_π, then

(−1)^k = (−1)^n (−1)^{n−k} = (−1)^n sign(π).

Therefore, the determinant of A can be written as

det(A) = (−1)^n Σ_{F_π ∈ D_n} w_A(F_π).
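The 1-factor (cycle cover) formula of Example 5.1.3 can also be checked mechanically. The code below (ours, with assumed helper names) counts the cycles of each permutation and applies the sign (−1)^k per 1-factor:

```python
from itertools import permutations

# Determinant via the 1-factor expansion of Example 5.1.3:
# det(A) = (-1)^n * sum over 1-factors F_pi of (-1)^k * (product of arc weights),
# where k is the number of directed cycles of F_pi.
def cycle_count(pi):
    """Number of cycles of the permutation pi (including fixed points)."""
    seen, k = set(), 0
    for s in range(len(pi)):
        if s not in seen:
            k += 1
            while s not in seen:
                seen.add(s)
                s = pi[s]
    return k

def det_by_cycle_covers(A):
    n = len(A)
    total = 0
    for pi in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][pi[i]]
        total += (-1) ** cycle_count(pi) * prod    # w_A(F_pi)
    return (-1) ** n * total

A = [[1, 2, 0], [3, 1, 4], [0, 2, 1]]
assert det_by_cycle_covers(A) == -13               # same value as Example 5.1.2
```

Comparing with the matching expansion above confirms the identity (−1)^k = (−1)^n sign(π) on every permutation.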
Definition 5.1.2 Two matrices A, B ∈ M_n are diagonally similar if there is a nonsingular diagonal matrix D = diag(d1, d2, ···, dn) such that DAD^{−1} = B.

Example 5.1.4 If A and B are diagonally similar, then D(A) = D(B), when weights are ignored and zero-weighted arcs dropped. However, it is possible that D(A) = D(B) but A and B are not diagonally similar. Consider the example:

A = [ 0  2      and    B = [ 0  1
      1  0 ],                1  0 ].

Clearly D(A) = D(B). If there were a D = diag(d1, d2) such that DAD^{−1} = B, then

2 d1 d2^{−1} = 1 and d2 d1^{−1} = 1.

A contradiction obtains.
A contradiction obtains. Theorem 5.1.1 (Fiedler and Ptak, [88]) Let A, B E Mn be two square matrices such that A is irreducible. Then A and B are diagonally similar if and only if each of the following holds: (i) D(A) = D(B), and (ii) For each directed cycle C in D(A), WA(C) WB(C).
=
220
Combinatorial Analysis in Matrices
=
Proof Assume first that A and B are diagonally similar. Then a_ij = 0 if and only if b_ij = 0, for any i, j = 1, 2, ···, n, and so D(A) = D(B). Let D = D(A) = D(B). By assumption, there is a D = diag(d1, d2, ···, dn) with each d_i ≠ 0 such that DAD^{−1} = B. Thus,

d_i a_ij d_j^{−1} = b_ij, for any i, j = 1, 2, ···, n.

It follows that for each directed cycle C = v_{i1} v_{i2} ··· v_{ik} v_{i1},

W_B(C) = b_{i1,i2} b_{i2,i3} ··· b_{ik,i1}
       = d_{i1} a_{i1,i2} d_{i2}^{−1} · d_{i2} a_{i2,i3} d_{i3}^{−1} ··· d_{ik} a_{ik,i1} d_{i1}^{−1}
       = a_{i1,i2} a_{i2,i3} ··· a_{ik,i1} = W_A(C).

Conversely, assume that both (i) and (ii) hold. Since A is irreducible, D(A) is strongly connected, and so by D(A) = D(B), B is also irreducible. We argue by induction on |E(D(A))| (or equivalently, the number of nonzero entries of A) to show that A is diagonally similar to B.

Suppose first that D = v1 v2 ··· vn v1 is a directed cycle. By assumption,

a_{1,2} a_{2,3} ··· a_{n,1} = W_A(D) = W_B(D) = b_{1,2} b_{2,3} ··· b_{n,1}.

Define d1 = 1 and

d_{i+1} = d_i a_{i,i+1} / b_{i,i+1}, for i = 1, 2, ···, n − 1.

Let D = diag(d1, d2, ···, dn). Then DAD^{−1} = B.

Now assume that D is obtained from a strong digraph D' by adding a directed path v_k v_{k−1} ··· v_2 v_1 v_n. (Thus D' may be viewed as D(A'), where A' is obtained from A by changing the nonzero entries a_{k,k−1}, ···, a_{2,1}, a_{1,n} to zeros. This can be done since D is strong, and so every arc of D lies in a directed cycle of D.) Therefore,

A = [ a_{1,n}                       B = [ b_{1,n}
      a_{2,1}                             b_{2,1}
        ⋱                                   ⋱
      a_{k,k−1}                           b_{k,k−1}
              A' ],                               B' ],

where A', B' ∈ M_{n−k+1} are irreducible, and, in A, only a_{k,k−1}, ···, a_{2,1}, a_{1,n} are nonzero entries outside A'; in B, only b_{k,k−1}, ···, b_{2,1}, b_{1,n} are nonzero entries outside B'. By induction, there is a nonsingular D' = diag(d_k, ···, d_n) such that D'A'(D')^{−1} = B'. Define

d_i = d_{i+1} a_{i+1,i} / b_{i+1,i}, for each i = k − 1, ···, 2, 1,

and let D = diag(d1, ···, dn). Then DAD^{−1} = B, and so A and B are diagonally similar. ∎

Shao and Cheng [248] modified the conditions in Theorem 5.1.1(ii) to obtain necessary and sufficient conditions for matrices A and B that are not necessarily irreducible to be diagonally similar. Interested readers are referred to [248] for further details.
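Theorem 5.1.1 also suggests an algorithm, sketched below under the assumption that A is irreducible (the construction and all names are our own, not from the text): propagate d1 = 1 along arcs of D(A) by the rule d_j = d_i a_ij / b_ij from the proof, then test whether DAD^{−1} = B; the test succeeds exactly when all cycle weights agree.

```python
from fractions import Fraction

# Sketch of Theorem 5.1.1 as an algorithm, assuming A is irreducible
# (so D(A) is strongly connected) and all entries are integers.
def diagonally_similar(A, B):
    n = len(A)
    if any((A[i][j] == 0) != (B[i][j] == 0) for i in range(n) for j in range(n)):
        return False                         # condition (i): D(A) = D(B) fails
    d = [None] * n
    d[0] = Fraction(1)
    stack = [0]
    while stack:                             # traverse D(A) from vertex 0
        i = stack.pop()
        for j in range(n):
            if A[i][j] != 0 and d[j] is None:
                d[j] = d[i] * Fraction(A[i][j], B[i][j])
                stack.append(j)
    # The propagation is consistent iff every cycle weight agrees (condition (ii)).
    return all(A[i][j] == 0 or d[i] * A[i][j] / d[j] == B[i][j]
               for i in range(n) for j in range(n))

# Example 5.1.4: same digraph, unequal cycle weights (2*1 vs 1*1), so not similar.
assert diagonally_similar([[0, 2], [1, 0]], [[0, 1], [1, 0]]) is False
# Equal cycle weights (4*1 = 2*2), so diagonally similar via D = diag(1, 2).
assert diagonally_similar([[0, 4], [1, 0]], [[0, 2], [2, 0]]) is True
```

Exact rational arithmetic (`Fraction`) is used so that the cycle-weight comparison is not disturbed by floating-point rounding.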
5.2 Combinatorial Proofs in Linear Algebra
Combinatorial methods have been applied to prove results in matrix theory. For example, the Jacobi identity was studied combinatorially by Jackson and Foata ([89]). Brualdi gave combinatorial proofs of the Jordan canonical form of a matrix ([20]), and he also showed that the elementary divisors of a matrix can be determined combinatorially ([21] and [22]). In this section, we present the combinatorial proofs of two matrix identities given by Zeilberger ([280]). Once again we adopt the following notational convention: for a matrix A, (A)_ij denotes the (i,j)-entry of A.

Theorem 5.2.1 (Cayley–Hamilton) Let A ∈ M_n. Then χ_A(A) = 0.
Therefore, it suffices to prove this matrix identity: (5.1) As in Example 5.1.3, let D(A) denote the weighted digraph of A with V(D(A)) = {vt. t/2, • • • , vn} (viewed as a weighted complete digraph on n vertices, where an arc (v,,vi) has weight a.1), 'Dn the set of alllfactors of D(A) and let 'D~ denote the set of aliifactors of all subdigraphs of D(A). Then det(A)
= }: F,e"D.
WA(F,..) and det(In A)= }: WA(F,..). F,e'D:
Note that in (5.1), ak is the sum of the weights of all subgraphs of D(A) induced by k element subsets of V(D(A)), and that the (i,j)entry of Ank is the total weight of all directed (Vi, VJ )walks of length n  k. Fix i and j, let A = A(i,j) denote the collection of such subgraph pairs (P,C) in D(A): (Al) P i&a directed (v,,vi)walk, (A2) 0 is an arc disjoint union of directed cycles,
222
Combinatorial Analysis in Matrices
(A3) IE(P)I +IE(C) I= n. Let o(C) denote the number of cycles in C. Then the weight of (P, C) can be defined as follows.
W(P,C)
IT
= (1)o(O)
a1:1·
(v.,v,)EE(P)UE(O)
To verify (5.1), it suffices to verify both of the following claims. Claim 1 The (i, j)entry of the left hand side of (5.1) is
W(A(i,j))
L:
=
W(P,C).
(P,O)E.A(i,j)
Fix a k with 0 ::; k ::; n. Suppose that IE(P)I
= n k.
II
Then
ami
is the
(vm,vl)EE(P)
total weight of the directed (vi,v;)walk P, and the (i,j)entry of AnA: is the sum of the total weight of all directed (vi, v;)walk of length n k. By (A3), IE(C) I k, and so E(C) corresponds to a k element subset of V(D(A)). The total weight of these kelement subsets is equal to the sum of all k x k principal submatrices of (A), which is a~:. In other words,
=
a,.(Ank)i.;
II ami) (
L:
=(
E .4(<,;) I.B(P)I = n 
(vm,VI)EE(P)
(P, 0)
wlth
II
Thus Claim 1 follows by summing up from k
L: E .4(<,;) with IB(C)I = • (P, 0)
II am•) ·
(vm,VI)EE(O)
=0, 1, • · · , n.
Claim 2 For each (i,j),
L:
W(P,C)
= 0.
(P,O)E.A(i,j)
Fix a (P, C) E A(i,j). Starting from v;, P may revisit a vertex that is already in P, in which case P contains a directed cycle 0 1 ; or P will visit a vertex that is inC, in which case P and a directed cycle C2 in C share a common vertex. Obtain a new pair (P',C') E A(i,j) as follows: In the former case, move 0 1 from P to C, and in the latter case, move 0 2 from C toP. Note that W(P,C) = W(P',C'), and that the correspondence between (P, C) and (P', C') is a bijection. Therefore, Claim 2 follows. O Theorem 5.2.2 Let A= (Gi;),B
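The identity (5.1) can be verified with exact integer arithmetic for any small matrix. The sketch below (our own code) computes each a_k as (−1)^k times the sum of the k × k principal minors of A, which is equivalent to the 1-factor description used in the proof, and evaluates the matrix polynomial:

```python
from itertools import combinations, permutations

# Integer check of (5.1): A^n + a_1 A^{n-1} + ... + a_n I = 0.
def det(A):
    n = len(A)
    total = 0
    for pi in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j])
        prod = 1
        for i in range(n):
            prod *= A[i][pi[i]]
        total += (-1) ** inv * prod
    return total if n > 0 else 1

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def char_poly_at_A(A):
    n = len(A)
    coeffs = [(-1) ** k * sum(det([[A[i][j] for j in S] for i in S])
                              for S in combinations(range(n), k))
              for k in range(n + 1)]         # a_0 = 1, ..., a_n = (-1)^n det(A)
    P = [[int(i == j) for j in range(n)] for i in range(n)]
    powers = [P]                             # powers[k] = A^k
    for _ in range(n):
        P = mat_mul(P, A)
        powers.append(P)
    out = [[0] * n for _ in range(n)]
    for k in range(n + 1):                   # coefficient a_k multiplies A^{n-k}
        Ak = powers[n - k]
        for i in range(n):
            for j in range(n):
                out[i][j] += coeffs[k] * Ak[i][j]
    return out

A = [[2, 1, 0], [0, 1, 3], [1, 0, 1]]
assert char_poly_at_A(A) == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

Since all computations are over the integers, the zero matrix is obtained exactly, not merely up to rounding.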
= (b,;) E Mn.
Then det(AB)
= det(A) det(B).
Proof Let Sn denote the symmetric group on n letters. For each
WA(7r) ws(7r)
= =
sign(7r)a1,,..(1)a2,,.(2) • · · an,,.(n)• sign(7r)b1,,.(1)~,,..(2) • • • bn,,..(n)·
11"
E Sn, define
Combinatorial Analysis in Matrices
223
Note the difference between the definitions of WA here and that of WA previously. With these notations, det(A)
= WA(8,.) = L
WA(7r), and det(B)
= ws(8,.) = L
ws(1r) .
...es.
Note that n
(AB)i,;
= :Ea.~:b~:;. k=O
Thus we can introduce a digraph D = D(A,B) with V(D) = {vlt t/2, • •• ,v,.}, where there are two parallel arcs from each v; to each v;, one with weight a;; (denoted by v; +A v;) and the other with weight b;; (denoted by v1 +s v;). With this model, we represent a;~:b~:; by a path Vi'IJ~tv;. This also motivates the following notation. Let Z(n) denote the set of pairs (j, 1r), where f is a function from {1, 2, · · · , n} into itself and where 1r E 8,.. Define
,.
w(j, 1r)
= sign(1r) IT (at/(i)b/(i),..(i))· i=l
Note that det(AB)
L
= w(Z(n)) =
(5.2)
w(J, 1r).
(/,1r)EZ(n)
H J E 8,., then r 1 7r E 8,., and so w(/,7r)
L
:E
w(/,11")
= WA(J)ws(J 111"). Hfollows that (5.3)
wA(f)ws(r 1 7r)
/,1rESn
=
L
WA(/)
/ES.
L
ws(1r)
= det(A)det(B).
•ES,.
By (5.2) and (5.3), it suffices to show that
L
w(j, 1r)
= 0.
(5.4)
/f/.S.. ,trESn
Since f fJ 8,., there exist b,i,i' E {1,2,··· ,n} such that /(i) = J(i') = b, where iIi'. Then, D has arcs v; +A V& and V;• +A vb. Choose a smallest b such that IJ1 (b)l ;;:::: 2, and after b is chosen, choose i, i' E 1 (b) such that i + i' is smallest. H v; and v;• are in the same cycle of 11', that is, 1r has a cycle
r
then there is a 11'1 E 8,., such that xis not in the cycle above, 1r'(x) the cycle above is broken into two cycles:
=1r(.z), and such that
Combinatorial Analysis in Matrices
224
H Vi and Vi• are not in the same cycle of 1r, that is, 1r has cycles Vi 4A Vh
+a
Vw(i) 4A •••
+a Vi and Vi1
+A Vb
+a
V>r(i') +A"'
then there is a 7r' E s.. , such that xis not in these two cycles, 1r'(x) these two cycles are combined into a cycle:
+a
Vi•,
= 1r(x), and such that
Thus, (j,1r) ++ (j,1r') is a one to one correspondence from the set {(/, 1r) and 11" e s..} onto itself such that
w(f, 1r) Therefore, (5.4) must hold.
5.3
=w(f,
f 'IS..
11"1 ).
O
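The expansion (5.2), which underlies Zeilberger's proof, can be checked by brute force for small n. The code below (ours; the name `zeilberger_sum` is our label for the sum in (5.2)) sums w(f, π) over all n^n functions f and all n! permutations π:

```python
from itertools import permutations, product

# Brute-force check of (5.2): det(AB) = sum over (f, pi) of
#   w(f, pi) = sign(pi) * prod_i a_{i, f(i)} * b_{f(i), pi(i)}.
def sign(pi):
    inv = sum(1 for i in range(len(pi)) for j in range(i + 1, len(pi)) if pi[i] > pi[j])
    return -1 if inv % 2 else 1

def det(M):
    n = len(M)
    total = 0
    for pi in permutations(range(n)):
        w = sign(pi)
        for i in range(n):
            w *= M[i][pi[i]]
        total += w
    return total

def zeilberger_sum(A, B):
    n = len(A)
    total = 0
    for f in product(range(n), repeat=n):    # all n^n functions f : [n] -> [n]
        for pi in permutations(range(n)):
            w = sign(pi)
            for i in range(n):
                w *= A[i][f[i]] * B[f[i]][pi[i]]
            total += w
    return total

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert zeilberger_sum(A, B) == det(AB) == det(A) * det(B)
```

The non-bijective functions f contribute zero in total, which is exactly the cancellation (5.4) established in the proof.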
5.3 Generalized Inverse of a Boolean Matrix
Definition 5.3.1 Let B_{m,n} denote the set of all m × n Boolean matrices. A generalized inverse (or just a g-inverse) of a matrix A ∈ B_{m,n} is a matrix B ∈ B_{n,m} such that

ABA = A.

A matrix A = (a_ij) ∈ B_{m,n} can be represented by a bipartite digraph B(R_m, S_n), called the bipartite digraph representation of A, where R_m = {u1, u2, ···, um} and S_n = {v1, v2, ···, vn}, such that (u_i, v_j) is an arc if and only if a_ij > 0. Conversely, given a bipartite digraph B(R_m, S_n) whose arcs are all directed from a vertex in R_m to a vertex in S_n, there is a matrix A ∈ B_{m,n}, denoted by M(B(R_m, S_n)), whose bipartite digraph representation is B(R_m, S_n). Note that in our notation, the arcs in a bipartite digraph B(V1, V2) are always directed from V1 to V2. Thus B(R_m, S_n) and B(S_n, R_m) have identical vertex sets but oppositely directed arcs. If A ∈ B_{m,n} has a bipartite digraph representation B(R_m, S_n) and if M(B(S_n, R_m)) is a generalized inverse of A, then B(S_n, R_m) is called the g-inverse graph of A (or of B(R_m, S_n)). If G = B(R_m, S_n) ∪ B(S_n, R_m), then G is called the combined graph of B(R_m, S_n) and B(S_n, R_m).
Example 5.3.1 The bipartite digraph representation of the matrix

A = [ ⋯ ] ∈ B_{4,3}

is the bipartite graph in Figure 5.3.1.
B(R4, S3)

Figure 5.3.1
B1
=[
1 0 0 0 00 01 0 1 0 1
l
Figure 5.3.2

Example 5.3.3 (Non-uniqueness of g-inverses) Each of the following is a g-inverse of the matrix A in Example 5.3.1:

B2 = [ 1 0 0 0        B3 = [ 1 0 0 0        B4 = [ 1 0 0 0
       0 0 0 0               0 0 0 0               0 0 0 1
       0 1 0 0 ],            0 1 0 1 ],            0 0 0 0 ].
Definition 5.3.2 The set of all g-inverses of A is denoted by 𝒜. A matrix B ∈ 𝒜 is a maximum g-inverse of A, denoted by max 𝒜, if B has the maximum number of nonzero entries among all the matrices in 𝒜; a matrix B ∈ 𝒜 is a minimum g-inverse of A, denoted by min 𝒜, if B has the minimum number of nonzero entries among all the matrices in 𝒜.
Note that both max 𝒜 and min 𝒜 are sets of matrices. For notational convenience, we also use max 𝒜 to denote a matrix in max 𝒜, and use min 𝒜 to denote a matrix in min 𝒜.

Example 5.3.4 Let A be the matrix in Example 5.3.1. Then max 𝒜 = B1, and both B2 and B4 are in min 𝒜. In fact, Zhou [291] and Liu [167] proved that while min 𝒜 may not be unique, max 𝒜 is unique as long as A ≠ 0.
=
Proof Denote M = (g_{ij}) = M(B(S_n, R_m)) and A = (a_{ij}). Suppose first that B(S_n, R_m) is a g-inverse graph of A. Then AMA = A, and so for each i, j,

Σ_{p,q} a_{ip} g_{pq} a_{qj} = a_{ij}.   (5.5)

If (u_i, v_j) is an arc in G with u_i ∈ R_m and v_j ∈ S_n, then a_{ij} = 1, and so by (5.5) there must be a pair (p, q) such that a_{ip} g_{pq} a_{qj} = a_{ij} = 1. Hence a_{ip} = g_{pq} = a_{qj} = 1, which implies that G has a directed (u_i, v_j)-walk of length 3. If (u_i, v_j) is not an arc in G, then by (5.5), a_{ip} g_{pq} a_{qj} = 0 for all choices of (p, q), and so G does not have any directed (u_i, v_j)-walk of length 3; thus (ii) holds. Conversely, assume that (ii) holds. Then AMA = A follows immediately by (5.5), and so (i) follows. ☐
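Over B_{m,n}, condition (5.5) says exactly that the Boolean product AMA equals A. A minimal sketch in NumPy (the function names are ours, not from the text):

```python
import numpy as np

def bool_mult(X, Y):
    # Boolean matrix product: (XY)_{ij} = OR_k (x_{ik} AND y_{kj})
    return (X.astype(int) @ Y.astype(int) > 0).astype(int)

def is_g_inverse(A, M):
    # M is a g-inverse of A over the Boolean semiring iff A M A = A
    return np.array_equal(bool_mult(bool_mult(A, M), A), A)
```

For instance, any Boolean matrix that happens to be its own g-inverse, such as the identity, passes this check directly.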
Definition 5.3.3 Let G = B(R_m, S_n) ∪ B(S_n, R_m) be a combined graph. For each pair of vertices (u, v), if (u, v) ∈ E(G) only if G has a directed (u, v)-walk of length 3, then we say that the pair (u, v) has the (1-3) property; similarly, if G has a directed (u, v)-walk of length 3 only if (u, v) ∈ E(G), then we say that the pair (u, v) has the (3-1) property. For a vertex u ∈ R_m, if for each v ∈ S_n the pair (u, v) has the (1-3) property (or the (3-1) property, respectively), then we say that u is a vertex with the (1-3) property (or the (3-1) property, respectively).

If each vertex in R_m has the (1-3) property (or the (3-1) property, respectively), then we say that G has the (1-3) property (or the (3-1) property, respectively). With this terminology, Theorem 5.3.1 can be restated as follows.

Theorem 5.3.1' Let A ∈ B_{m,n} and let B(R_m, S_n) be the bipartite digraph representation of A. Then a bipartite digraph B(S_n, R_m) is a g-inverse graph of A if and only if the combined graph G = B(R_m, S_n) ∪ B(S_n, R_m) has both the (1-3) property and the (3-1) property.

Corollary 5.3.1A Suppose that B(S_n, R_m) is a g-inverse graph of B(R_m, S_n). Then for every pair of vertices u ∈ R_m and v ∈ S_n in the combined graph G = B(R_m, S_n) ∪ B(S_n, R_m), either d(u, v) = 1 or d(u, v) = ∞.
Proof Note that if d(u, v) = k < ∞, then k must be an odd integer. By Theorem 5.3.1, if k > 1, then k ≥ 3. Take a shortest directed (u, v)-path P = v_0 v_1 ··· v_k in G, where v_0 = u and v_k = v. Then by Theorem 5.3.1, d(v_0, v_3) = 1, contrary to the assumption that P is a shortest path. ☐
Corollary 5.3.1B Suppose that B(S_n, R_m) is a g-inverse graph of B(R_m, S_n). Then for vertices u_1, u_2 ∈ R_m and v_1, v_2 ∈ S_n in the combined graph G = B(R_m, S_n) ∪ B(S_n, R_m), if (u_1, v_1), (u_2, v_2) ∈ E(G) and (u_1, v_2) ∉ E(G), then (v_1, u_2) ∉ E(G).
Lemma 5.3.1 In the digraph G = B(R_m, S_n) ∪ B(S_n, R_m), if for some u ∈ R_m and v ∈ S_n, d⁺(u) = 0 or d⁻(v) = 0, then the pair (u, v) has the (1-3) property and the (3-1) property.

Proof G does not have a directed (u, v)-path of length 1 or 3. ☐
Lemma 5.3.2 In the digraph G = B(R_m, S_n) ∪ B(S_n, R_m), for any u ∈ R_m, if for every v ∈ S_n at least one vertex in {u, v} lies in a directed cycle of length 2, then u has the (1-3) property.

Proof By Definition 5.3.3, we may assume that (u, v) ∈ E(G). If one of u or v lies in a directed 2-cycle, then G has a directed (u, v)-walk of length 3. ☐
Given a combined graph G = B(R_m, S_n) ∪ B(S_n, R_m), when (u_1, v_1), (u_2, v_2) ∈ E(G) and (u_1, v_2) ∉ E(G), we say that the pair {v_1, u_2} is a forbidden pair. An arc (u, v) ∈ E(G) with u ∈ R_m and v ∈ S_n is called a single arc if neither u nor v lies in a directed cycle of length 2.
We now present an algorithm to determine whether a matrix A ∈ B_{m,n} has a g-inverse, and to construct one if it does. The validity of this algorithm is proved in Theorem 5.3.2, which follows the algorithm.

Algorithm 5.3.1
(Step 1) For a given A ∈ B_{m,n}, construct the bipartite digraph representation of A, denoted B(R_m, S_n).
(Step 2) Construct a bipartite digraph B_0 = B(S_n, R_m) as follows: for each pair of vertices u ∈ R_m and v ∈ S_n, an arc (v, u) ∈ E(B_0) if and only if {v, u} is not a forbidden pair.
(Step 3) Let G = B(R_m, S_n) ∪ B_0.
If for each single arc (u, v) of G the pair {u, v} has the (1-3) property, then B_0 is a g-inverse graph of B(R_m, S_n), and M(B_0) is the maximum g-inverse of A; otherwise A does not have a g-inverse.
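Steps 2 and 3 can be phrased purely in matrix terms: by the definition of a forbidden pair, the entry m_{pq} may be 1 exactly when a_{ip} = 1 and a_{qj} = 1 force a_{ij} = 1, so B_0 is the largest candidate M with AMA ≤ A entrywise, and by Theorem 5.3.2 a g-inverse exists if and only if this candidate satisfies AMA = A. A sketch under that reading (helper names are ours, not the book's):

```python
import numpy as np

def max_g_inverse_candidate(A):
    # Step 2 of Algorithm 5.3.1 in matrix form: m_{pq} = 1 unless
    # {v_p, u_q} is a forbidden pair, i.e. unless there exist i, j with
    # a_{ip} = 1 and a_{qj} = 1 but a_{ij} = 0.
    m, n = A.shape
    M = np.zeros((n, m), dtype=int)
    for p in range(n):
        for q in range(m):
            rows = np.where(A[:, p] == 1)[0]   # i with a_{ip} = 1
            cols = np.where(A[q, :] == 1)[0]   # j with a_{qj} = 1
            if np.all(A[np.ix_(rows, cols)] == 1):
                M[p, q] = 1
    return M

def has_g_inverse(A):
    # A has a g-inverse iff the maximal candidate already works.
    M = max_g_inverse_candidate(A)
    AMA = ((A @ M @ A) > 0).astype(int)
    return np.array_equal(AMA, A), M
```

If the candidate fails, no smaller M can succeed, since any g-inverse is a subgraph of B_0 (Lemma 5.3.4 below).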
Example 5.3.5 For the matrix A and the graph B_0 displayed in the text (the displays of A and M(B_0) are not recoverable from the scan), G has no single arcs, and so M(B_0) is the maximum g-inverse of A.

Lemma 5.3.3 The graph G = B(R_m, S_n) ∪ B_0 produced in Step 3 of Algorithm 5.3.1 has the (3-1) property.
Proof Pick u ∈ R_m and v ∈ S_n. Suppose that G has a directed (u, v)-walk u u' v' v of length 3, but (u, v) ∉ E(G). Then {u', v'} is a forbidden pair, and so by Step 2, (u', v') ∉ E(G), a contradiction. ☐

Lemma 5.3.4 Let B'(S_n, R_m) be a g-inverse graph of A. Then B'(S_n, R_m) is a subgraph of B_0.

Proof Let G = B(R_m, S_n) ∪ B_0(S_n, R_m) and G' = B(R_m, S_n) ∪ B'(S_n, R_m). Let u ∈ R_m and v ∈ S_n. If (u, v) ∉ E(G), then {u, v} is a forbidden pair, by Step 2 of Algorithm 5.3.1. By Corollary 5.3.1B, and since B'(S_n, R_m) is a g-inverse of B(R_m, S_n), we have (u, v) ∉ E(G'). ☐

Theorem 5.3.2 Let A ∈ B_{m,n} with bipartite digraph representation B(R_m, S_n), and let B_0 = B_0(S_n, R_m) be the bipartite digraph produced by Algorithm 5.3.1. The following are equivalent.
(i) A has a g-inverse.
(ii) In the combined graph G = B(R_m, S_n) ∪ B_0, if for some u ∈ R_m and v ∈ S_n, (u, v) ∈ E(G) is a single arc, then the pair {u, v} has the (1-3) property.

Proof Assume first that B' = B'(S_n, R_m) is a g-inverse of A, and let G' = B(R_m, S_n) ∪ B'. If Part (ii) is violated, then there exists an arc (u, v) ∈ E(G') with u ∈ R_m and v ∈ S_n such that G' has no directed (u, v)-walk of length 3. By Lemma 5.3.2, (u, v) must be a single arc. It follows that G' does not have the (1-3) property, and so by Theorem 5.3.1', B' is not a g-inverse of A, a contradiction.
Conversely, by Theorem 5.3.2(ii) and by Lemma 5.3.2, B_0 has the (1-3) property. By Theorem 5.3.1', by Lemma 5.3.3 and by the fact that B_0 has the (1-3) property, B_0 is a g-inverse of A. ☐

Now we consider properties of a minimum g-inverse of A. We have the following observation, stated as Proposition 5.3.1. The straightforward proof of this proposition is left as an exercise.

Proposition 5.3.1 Let A ∈ B_{m,n}. Each of the following holds.
(i) Let B ∈ B_{n,m} be a matrix. If for some min A⁻ we have min A⁻ ≤ B ≤ max A⁻ entrywise, then B is also a g-inverse of A.
(ii) If B_0(S_n, R_m) is a max A⁻, then a minimum g-inverse B*(S_n, R_m) of A can be obtained from B_0(S_n, R_m) by deleting arcs, in such a way that G = B(R_m, S_n) ∪ B*(S_n, R_m) has the (1-3) property and B*(S_n, R_m) has the minimum number of arcs among all g-inverses of A.
(iii) Let (v, u) with v ∈ S_n and u ∈ R_m be an arc in B_0(S_n, R_m). If d⁺(u) = 0 or d⁻(v) = 0 in G = B(R_m, S_n) ∪ B_0(S_n, R_m), then the arc (v, u) can be deleted and the resulting graph B_0 − (v, u) is also a g-inverse of A.

Theorem 5.3.3 Let B(R_m, S_n) be the bipartite representation of A, let B*(S_n, R_m) be a minimum g-inverse of B(R_m, S_n), and let G* = B(R_m, S_n) ∪ B*(S_n, R_m). If (u, v) is a single arc of G*, then G* must have a directed K_{2,2} as a subgraph whose arcs are all directed from R_m to S_n, such that (u, v) is an arc of this directed K_{2,2}.

Proof Since B*(S_n, R_m) is a g-inverse, (u, v) has the (1-3) property. Since (u, v) is a single arc, G* has a directed (u, v)-path u v_1 u_1 v with u, u_1 ∈ R_m and v, v_1 ∈ S_n. If (u_1, v_1) is an arc in G*, then G* has the desired K_{2,2}. If (u_1, v_1) is not an arc in G*, then as (u, v_1) has the (1-3) property, G* has either a directed (u, v_1)-path u v_2 u_2 v_1 with u_2 ≠ u, or a directed 2-cycle v_1 u_2 v_1. In either case, since (u_2, v) has the (1-3) property, G* contains (u_2, v), and so the desired K_{2,2} must exist. ☐

By Theorem 5.3.3, we can first apply Algorithm 5.3.1 to construct a g-inverse B_0(S_n, R_m), and then obtain a minimum g-inverse by deleting arcs from B_0. Interested readers are referred to [174] for details.
5.4
Maximum Determinant of a (0,1) Matrix
What is the maximum value of |det(A)| if A ranges over all matrices in B_n? What is the least upper bound of |det(A)| if A ranges over all matrices in M_n? The Hadamard inequality (Theorem 5.4.1) gives an upper bound on |det(A)|, but determining the least upper bound seems very difficult. This section is devoted to the discussion of this problem.

Theorem 5.4.1 (Hadamard, [111]) Let A = (a_{ij}) ∈ M_n. Then

|det(A)|² ≤ Π_{j=1}^{n} Σ_{i=1}^{n} a_{ij}².

Moreover, if each a_{ij} ∈ {−1, 1}, then

|det(A)| ≤ Π_{j=1}^{n} (Σ_{i=1}^{n} a_{ij}²)^{1/2} = n^{n/2},   (5.6)
where equality in (5.6) holds if and only if AAᵀ = nI_n.

Definition 5.4.1 Let M_n(−1, 1) denote the collection of all n × n matrices whose entries are −1 or 1. A matrix H ∈ M_n(−1, 1) is a Hadamard matrix if HHᵀ = nI_n.

Proposition 5.4.1 Suppose that there exists an n × n Hadamard matrix. Each of the following holds.
(i) α_n = n^{n/2}.
(ii) n = 1, 2, or n ≡ 0 (mod 4).

Proof (i) follows by Theorem 5.4.1. Suppose that H = (h_{ij}) is a Hadamard matrix. By the definition of a Hadamard matrix, HHᵀ = HᵀH = nI_n. Hence, for n > 2,

Σ_{i=1}^{n} (h_{1i} + h_{2i})(h_{1i} + h_{3i}) = Σ_{i=1}^{n} h_{1i}² = n.

Since h_{1i} + h_{2i} = ±2 or 0 and h_{1i} + h_{3i} = ±2 or 0, the left-hand side of the equality above is divisible by 4, and so (ii) holds. ☐
It has been conjectured that an n × n Hadamard matrix exists if and only if n = 1, 2, or n ≡ 0 (mod 4). See [237]. For each integer n ≥ 1, define

α_n = max{det(H) : H ∈ M_n(−1, 1)},  β_n = max{det(B) : B ∈ B_n}.

When n ≢ 0 (mod 4), the value of α_n is determined by Ehlich [80]. Williamson [276] showed that for n ≥ 2, α_n = 2^{n−1} β_{n−1}. Therefore, we can study β_n in order to determine α_n. When A belongs to some special classes of (0,1) matrices, the study of the least upper bound of |det(A)| was conducted by Ryser with an algebraic approach and by Brualdi and Solheid with a graphical approach.
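A quick numerical illustration of Definition 5.4.1 and the equality case of (5.6), using Sylvester's doubling construction (not discussed in the text; it is one standard way to produce Hadamard matrices of order 2^k):

```python
import numpy as np

def sylvester_hadamard(k):
    # Sylvester doubling: H_{2m} = [[H, H], [H, -H]], starting from [1]
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(2)              # a 4 x 4 Hadamard matrix
n = H.shape[0]
assert np.array_equal(H @ H.T, n * np.eye(n, dtype=int))
# equality case of (5.6): |det H| = n^{n/2}
assert round(abs(np.linalg.det(H))) == n ** (n // 2)
```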
232
Combinatorial Analysis in Matrices
Example 5.4.1 Let A ∈ B_n be the incidence matrix of a symmetric 2-design S_λ(2, k, n). Then

AAᵀ = AᵀA = (k − λ)I_n + λJ_n.

It follows that |det(A)| = k(k − λ)^{(n−1)/2}. Note that the parameters satisfy λ(n − 1) = k(k − 1).
Ryser [224] found that the incidence matrix of a symmetric 2-design S_λ(2, k, n) in fact yields the extremal value of |det(A)| among a class of matrices in B_n with interesting properties.

Theorem 5.4.2 (Ryser, [224]) Let Q = (q_{ij}) ∈ B_n with ‖Q‖ = t, and let k ≥ λ ≥ 0 be integers such that λ(n − 1) = k(k − 1). If

t ≤ kn and λ ≤ k − λ, or t ≥ kn and k − λ ≤ λ,   (5.7)
then |det(Q)| ≤ k(k − λ)^{(n−1)/2}.

Proof For a matrix E ∈ B_n, let E(x, y) denote the matrix obtained from E by replacing each 1-entry of E by an x and each 0-entry of E by a y. With this notation, set

p = (k − λ)/λ,  Q_1 = Q(p, −1),

and

Q̄ = [ p  z ; zᵀ  Q_1 ],   (5.8)

where z = (√p, √p, ..., √p). For each i with 1 ≤ i ≤ n, let S_i denote the sum of the squares of the entries in the i-th column of Q_1. By Theorem 5.4.1,

|det(Q̄)| ≤ √(p² + np) · Π_{i=1}^{n} √(p + S_i).   (5.9)

Note that Σ_{i=1}^{n} S_i = tp² + (n² − t) = t(p² − 1) + n², and that

p² + np = p(p + n) = ((k − λ)/λ) · ((k − λ + λn)/λ) = k²(k − λ)/λ²,

since λ(n − 1) = k(k − 1) gives k − λ + λn = k². It follows by (5.7) that

Σ_{i=1}^{n} S_i ≤ kn(p² − 1) + n².

For each i, let S̄_i be a quantity such that

S̄_i ≥ S_i (1 ≤ i ≤ n), and Σ_{i=1}^{n} S̄_i = kn(p² − 1) + n².

Thus

Σ_{i=1}^{n} (p + S̄_i) = kn(p² − 1) + n² + np = n · k²(k − λ)/λ².

It follows by the arithmetic-geometric mean inequality that

Π_{i=1}^{n} (p + S̄_i) ≤ ( (1/n) Σ_{i=1}^{n} (p + S̄_i) )ⁿ = ( k²(k − λ)/λ² )ⁿ.   (5.10)

Combine (5.9), S̄_i ≥ S_i and (5.10) to get

|det(Q̄)| ≤ (k√(k − λ)/λ) · Π_{i=1}^{n} √(p + S̄_i) ≤ ( k√(k − λ)/λ )^{n+1}.

By (5.8), we can multiply the first row of Q̄ by 1/√p and add it to the other rows of Q̄ to get

|det(Q̄)| = p |det(Q(k/λ, 0))|.   (5.11)

Note that |det(Q(k/λ, 0))| = (k/λ)ⁿ |det(Q)|. It follows that

p (k/λ)ⁿ |det(Q)| ≤ ( k√(k − λ)/λ )^{n+1},

and therefore |det(Q)| ≤ k(k − λ)^{(n−1)/2}. ☐
Theorem 5.4.3 (Ryser, [224]) Let Q = (q_{ij}) ∈ B_n be a matrix. If |det(Q)| = k(k − λ)^{(n−1)/2}, then Q is the incidence matrix of a symmetric 2-design S_λ(2, k, n).

Proof If |det(Q)| = k(k − λ)^{(n−1)/2}, then

p |det(Q(k/λ, 0))| = ( k√(k − λ)/λ )^{n+1}.

Define Q̄ as in (5.8) and employ the notation of Theorem 5.4.2. By (5.11),

|det(Q̄)| = ( k√(k − λ)/λ )^{n+1},

and so equality must hold in (5.10), which implies

p + S_i = k²(k − λ)/λ², 1 ≤ i ≤ n.

It follows that Q̄Q̄ᵀ = (k²(k − λ)/λ²) I_{n+1}, and so

Q_1 Q_1ᵀ = (k²/λ²)(k − λ) I_n − p J_n.   (5.12)

For each i, let r_i = Σ_{j=1}^{n} q_{ij}. By (5.12),

p² r_i + (n − r_i) = (k²/λ²)(k − λ) − p,

so that

(p² − 1) r_i = k²(k − λ)/λ² − p − n = k²(k − 2λ)/λ².

Since p² − 1 = k(k − 2λ)/λ², we get r_i = k for each i with 1 ≤ i ≤ n. For i ≠ j with 1 ≤ i, j ≤ n, let f denote the dot product of the i-th row and the j-th row of Q. By (5.12),

f p² − 2(k − f)p + (n − 2k + f) = −p,

that is,

f(p² + 2p + 1) = 2kp − p + 2k − n.

It follows that f · k²/λ² = k²/λ, and so f = λ. Therefore, Q is the incidence matrix of a symmetric 2-design S_λ(2, k, n). ☐
The following theorem of Ryser follows by combining Theorems 5.4.2 and 5.4.3.

Theorem 5.4.4 (Ryser, [224]) Let n > k > λ > 0 be integers such that λ(n − 1) = k(k − 1), and let Q ∈ B_n with ‖Q‖ = t = kn. Then

|det(Q)| ≤ k(k − λ)^{(n−1)/2},

where equality holds if and only if Q is the incidence matrix of a symmetric 2-design S_λ(2, k, n).

Definition 5.4.2 Let A ∈ B_n. Define two bipartite graphs G_0(A) and G_1(A) as follows. Both G_0(A) and G_1(A) have vertex partite sets U = {u_1, u_2, ..., u_n} and V = {v_1, v_2, ..., v_n}. An edge u_i v_j ∈ E(G_0(A)) (u_i v_j ∈ E(G_1(A)), respectively) if and only if a_{ij} = 0 (a_{ij} = 1, respectively). Note that G_0(A) = G_1(J_n − A).

A matrix A is acyclic if G_1(A) is acyclic, and A is complementary acyclic if G_0(A) is acyclic. A matrix A ∈ B_n is complementary triangular if A has only 1's above its main diagonal; in this case J_n − A is a triangular matrix. For each integer n ≥ 1, define

γ_n = max{|det(A)| : A ∈ B_n and A is complementary acyclic}.

Example 5.4.2 Since an acyclic graph has at most one perfect matching, if A is acyclic, then det(A) ∈ {0, 1, −1}.

Example 5.4.3 Suppose A is complementary acyclic, and let B be the matrix obtained from A by permuting two rows of A. Then B is also complementary acyclic with det(B) = −det(A). Therefore,

γ_n = max{det(A) : A ∈ B_n and A is complementary acyclic}.
Combinatorial Analysis in Matrices
235
Brualdi and Solheid obtained the least upper bound of |det(A)| as A ranges over certain subsets of B_n with the complementary acyclic property. Their results are presented below; interested readers are referred to [39] for the proofs.

Theorem 5.4.5 (Brualdi and Solheid, [40]) Let n ≥ 3 be an integer and let A ∈ B_n be a complementary acyclic matrix such that A has a row or column of all ones. Then

|det(A)| ≤ n − 2.   (5.13)

For n ≥ 4, equality in (5.13) holds if and only if A or Aᵀ is permutation equivalent to

L_n = [ 0 ⋯ 0  1 ; J_{n−1} − I_{n−1}  1 ],   (5.14)

that is, a zero row ending in a 1, stacked over the block [J_{n−1} − I_{n−1} | 1]. For n = 3, equality in (5.13) holds if and only if A or Aᵀ is permutation equivalent to one of the matrices displayed in the text (the displays are not recoverable from the scan).
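The display (5.14) is only partly legible in the scan; assuming the block form reconstructed above (a zero row ending in 1, stacked over [J_{n−1} − I_{n−1} | 1]), the value |det(L_n)| = n − 2 can be checked numerically:

```python
import numpy as np

def L(n):
    # L_n of (5.14), as reconstructed: first row (0, ..., 0, 1),
    # remaining rows [J_{n-1} - I_{n-1} | 1]
    A = np.ones((n, n), dtype=int)
    A[0, :n - 1] = 0
    A[1:, :n - 1] -= np.eye(n - 1, dtype=int)
    return A

for n in range(3, 9):
    # cofactor expansion along the first row reduces det(L_n) to
    # +/- det(J_{n-1} - I_{n-1}) = +/-(n-2)
    assert round(abs(np.linalg.det(L(n)))) == n - 2
```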
Definition 5.4.3 For a matrix A ∈ B_n, the complementary term rank of A, denoted ρ̄(A), is the term rank of J_n − A.

Theorem 5.4.6 (Brualdi and Solheid, [40]) Let n ≥ 3 be an integer and let A ∈ B_n be a complementary acyclic matrix with ρ̄(A) = n − 1. Then

|det(A)| ≤ n − 2 if 3 ≤ n ≤ 8, and |det(A)| ≤ ⌊(n−3)/2⌋ ⌈(n−3)/2⌉ if n ≥ 8.   (5.15)

For n ≥ 4, equality holds in (5.15) if and only if A or Aᵀ is permutation equivalent to L_n as defined in (5.14) (when 4 ≤ n ≤ 8), or, when n ≥ 8, to a block matrix displayed in the text (the display is not recoverable from the scan)
where Z has at most one 0-entry.

Theorem 5.4.7 (Brualdi and Solheid, [40]) Let n ≥ 2 be an integer and let A ∈ B_n be a complementary acyclic matrix with ρ̄(A) = n. Then

|det(A)| ≤ n − 1 if n ≤ 5, and |det(A)| ≤ ⌊(n−1)/2⌋ ⌈(n−1)/2⌉ if n ≥ 5.   (5.16)

Equality holds in (5.16) if and only if A or Aᵀ is permutation equivalent to J_n − I_n, or to one of the two block matrices displayed in the text (the displays are not recoverable from the scan).

More details can be found in [39]. For most matrix classes, the determination of the maximum determinant of the matrices in a given class remains open.
5.5
Rearrangement of (0,1) Matrices
The rearrangement problem for an n-tuple was first studied by Hardy, Littlewood and Pólya [114]. Schwarz [231] extended the concept to square matrices.

Definition 5.5.1 Let (a) = (a_1, a_2, ..., a_n) be an n-tuple and let π be a permutation on the set {1, 2, ..., n}. Then (a_π) = (a_{π(1)}, a_{π(2)}, ..., a_{π(n)}) is a rearrangement of (a). A matrix A ∈ M_n can be viewed as an n²-tuple, and so we can define a rearrangement of a square matrix in a similar way.

Let π be a permutation on {1, 2, ..., n}, and let A = (a_{ij}) ∈ M_n. The matrix A_π = (a'_{ij}) is called a permutation of A if a'_{ij} = a_{π(i),π(j)} for all 1 ≤ i, j ≤ n.

Clearly a permutation or a transposition of a matrix A is a rearrangement of A. We call a rearrangement trivial if it is a permutation, a transposition, or a combination of permutations and transpositions. Two matrices A_1, A_2 ∈ M_n are essentially different if A_2 is a nontrivial rearrangement of A_1.

For each A = (a_{ij}) ∈ M_n, define ‖A‖ = Σ_{i=1}^{n} Σ_{j=1}^{n} a_{ij}. Proposition 5.5.1 below follows immediately from the definitions.

Proposition 5.5.1 For matrices A_1, A_2 ∈ M_n, each of the following holds.
(i) If A_1 is a rearrangement of A_2, then ‖A_1‖ = ‖A_2‖.
(ii) If A_1 is a trivial rearrangement of A_2, then ‖A_1²‖ = ‖A_2²‖.

Definition 5.5.2 Let

N̄ = max{‖A²‖ : A ∈ M_n} and N̲ = min{‖A²‖ : A ∈ M_n}.

Let Ū = {A ∈ M_n : ‖A²‖ = N̄} and U̲ = {A ∈ M_n : ‖A²‖ = N̲}; let C̄ denote the set of matrices A = (a_{ij}) ∈ M_n such that for each i, a_{ij} ≥ a_{ij'} whenever j < j', and such that for each j, a_{ij} ≥ a_{i'j} whenever i < i'; and let C̲ denote the set of matrices A = (a_{ij}) ∈ M_n such that for each i, a_{ij} ≥ a_{ij'} whenever j < j', and such that for each j, a_{ij} ≤ a_{i'j} whenever i < i'.
Combinatorial Analysis in Matrices
237
Theorem 5.5.1 (Schwarz, [231]) Ū ∩ C̄ ≠ ∅, and U̲ ∩ C̲ ≠ ∅.

Definition 5.5.3 For a matrix A ∈ M_n, let λ_1(A) and λ_n(A) denote the maximum and the minimum eigenvalues of A, respectively. Let

λ̄ = max{λ_1(A) : A ∈ M_n} and λ̲ = min{λ_n(A) : A ∈ M_n};

and let

B̄ = {A ∈ M_n : λ_1(A) = λ̄} and B̲ = {A ∈ M_n : λ_n(A) = λ̲}.

Theorem 5.5.2 (Schwarz, [231]) B̄ ∩ C̄ ≠ ∅, and B̲ ∩ C̲ ≠ ∅.
Definition 5.5.4 For integers n ≥ 1 and σ with 1 ≤ σ ≤ n², let U_n(σ) = {A ∈ B_n : ‖A‖ = σ}. Let

N̄_n(σ) = max{‖A²‖ : A ∈ U_n(σ)} and N̲_n(σ) = min{‖A²‖ : A ∈ U_n(σ)}.

Let Ū_n(σ) denote the set of matrices A = (a_{ij}) ∈ U_n(σ) such that for each i, a_{ij} ≥ a_{ij'} whenever j < j', and such that for each j, a_{ij} ≥ a_{i'j} whenever i < i'; and let U̲_n(σ) denote the set of matrices A = (a_{ij}) ∈ U_n(σ) such that for each i, a_{ij} ≥ a_{ij'} whenever j < j', and such that for each j, a_{ij} ≤ a_{i'j} whenever i < i'. Proposition 5.5.2 follows from Theorem 5.5.1 and the definitions.

Proposition 5.5.2 For integers n ≥ 1 and σ with 1 ≤ σ ≤ n², each of the following holds.
(i) N̄_n(σ) = max{‖A²‖ : A ∈ Ū_n(σ)}, and N̲_n(σ) = min{‖A²‖ : A ∈ U̲_n(σ)}.
(ii) Let A = (a_{ij}) ∈ U_n(σ), and let s_i = Σ_{j=1}^{n} a_{ij} and r_i = Σ_{j=1}^{n} a_{ji} denote the i-th row sum and the i-th column sum of A, respectively. Then ‖A²‖ = Σ_{i=1}^{n} r_i s_i.

Example 5.5.1 The two displayed matrices in U_3(6) are not recoverable from the scan; denote the first of them by A_1.
238
Combinatorial Analysis in Matrices
Aharoni discovered the following relationship between N̄_n(σ) and N̄_n(n² − σ), and between N̲_n(σ) and N̲_n(n² − σ).

Theorem 5.5.3 (Aharoni, [1]) Let n and σ be integers with n ≥ 1 and 1 ≤ σ ≤ n². If A ∈ U_n(σ), then
(i) ‖A²‖ = 2σn − n³ + ‖(J_n − A)²‖.
(ii) N̄_n(σ) = 2σn − n³ + N̄_n(n² − σ).
(iii) N̲_n(σ) = 2σn − n³ + N̲_n(n² − σ).

Parts (ii) and (iii) of Theorem 5.5.3 follow from Part (i) and the observation that if A ∈ U_n(σ), then J_n − A ∈ U_n(n² − σ). In [1], Aharoni constructed four types of matrices for any 1 ≤ σ ≤ n², and proved that among these four matrices there must be one, say A, such that N̄_n(σ) = ‖A²‖. Theorem 5.5.3(ii) and (iii) indicate that to study N̄_n(σ) and N̲_n(σ), it suffices to consider the case when σ ≥ n²/2. The next result, due to Katz, points out that an extremal matrix attaining N̄_n(σ) would have all its 1-entries in the upper left corner principal submatrix.

Theorem 5.5.4 (Katz, [142]) Let n, k be integers with n² ≥ k² ≥ n²/2 > 0. Then N̄_n(k²) = k³.
Corollary 5.5.4 Let n, k be integers with n² ≥ k² ≥ n²/2 > 0. Then

N̄_n(n² − k²) = k³ − 2k²n + n³.
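Katz's bound and Aharoni's identity (Theorem 5.5.3(i)) are easy to confirm by brute force for small n; here n = 4 and k = 3, so the C(16, 9) = 11440 matrices with nine ones are examined (the function name is ours):

```python
import itertools
import numpy as np

def N_upper(n, sigma):
    # \bar N_n(sigma): max ||A^2|| over (0,1) matrices with sigma ones
    best = 0
    for ones in itertools.combinations(range(n * n), sigma):
        A = np.zeros(n * n, dtype=int)
        A[list(ones)] = 1
        A = A.reshape(n, n)
        best = max(best, int((A @ A).sum()))
    return best

# Theorem 5.5.4 with n = 4, k = 3: the maximum is attained by J_3 in a
# corner, giving ||A^2|| = k^3 = 27
assert N_upper(4, 9) == 27
```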
To study N̲_n(σ), we introduce the square bipartite digraph of a matrix, which plays a useful role in the study of ‖A²‖.

Definition 5.5.5 For A = (a_{ij}) ∈ B_n, let K(A) be a directed bipartite graph with vertex partite sets (V_1, V_2), where V_1 = {u_1, u_2, ..., u_n} and V_2 = {v_1, v_2, ..., v_n} represent the row labels and the column labels of A, respectively. An arc (u_i, v_j) is in E(K) if and only if a_{ij} = 1. Let K_1 and K_2 be two copies of K(A) with vertex partite sets (V_1, V_2) and (V_1', V_2'), respectively, where V_1' = {u_1', u_2', ..., u_n'} and V_2' = {v_1', v_2', ..., v_n'}, and where (u_i', v_j') ∈ E(K_2) if and only if a_{ij} = 1. The square bipartite digraph of A, denoted SB(A), is the digraph obtained from K_1 and K_2 by identifying v_i with u_i' for each i = 1, 2, ..., n. The next proposition follows from the definitions.

Proposition 5.5.3 Let A ∈ B_n. Each of the following holds.
(i) ‖A²‖ is the total number of directed paths of length 2 from a vertex in V_1 to a vertex in V_2'.
(ii) For each v_i ∈ V_2 in SB(A), d⁻(v_i) = r_i is the i-th column sum of A, and d⁺(v_i) = s_i is the i-th row sum of A.
(iii) ‖A²‖ = Σ_{i=1}^{n} d⁻(v_i) d⁺(v_i) = Σ_{i=1}^{n} r_i s_i.
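Proposition 5.5.3(iii) in NumPy: grouping the directed 2-paths of SB(A) by their middle vertex v_k gives ‖A²‖ = Σ r_k s_k, which can be checked on any (0,1) matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = (rng.random((5, 5)) < 0.4).astype(int)

# ||A^2|| counts directed 2-paths u_i -> v_k -> v'_j in SB(A); the paths
# through middle vertex v_k number d^-(v_k) * d^+(v_k) = r_k * s_k.
r = A.sum(axis=0)      # column sums = in-degrees  d^-(v_k)
s = A.sum(axis=1)      # row sums    = out-degrees d^+(v_k)
assert int((A @ A).sum()) == int((r * s).sum())
```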
Example 5.5.2 The square bipartite digraph of the matrix A_1 in Example 5.5.1 is shown in Figure 5.5.1.

Figure 5.5.1 The graph in Example 5.5.1

Example 5.5.3 The value ‖A²‖ need not be preserved under taking rearrangements. Consider the matrices A_1 = I_3, the 3 × 3 identity matrix, and A_2, a nontrivial rearrangement of A_1 (the display of A_2 is only partly recoverable from the scan).
Then A_1 and A_2 are essentially different and ‖A_1²‖ ≠ ‖A_2²‖.

Theorem 5.5.5 (Brualdi and Solheid, [38]) If σ ≥ n² − ⌊n/2⌋⌈n/2⌉, then

N̲_n(σ) = 2σn − n³.

Moreover, for A ∈ U_n(σ), ‖A²‖ = N̲_n(σ) if and only if A is permutation similar to

[ J_k  X ; J_{l,k}  J_l ],   (5.17)

where X ∈ M_{k,l} is an arbitrary (0,1) matrix, and k ≥ 0 and l ≥ 0 are integers such that k + l = n.

Sketch of Proof Construct a square bipartite digraph D = SB(A_1) as follows: every vertex in {u_1, ..., u_l} is directed to every vertex in {v_{l+1}, ..., v_n}, where l > 0 is an integer at most n. By Proposition 5.5.3(i), ‖A_1²‖ = 0. Let A = J_n − A_1. By Theorem 5.5.3(i) and by ‖A_1²‖ = 0,

‖A²‖ = 2σn − n³ = N̲_n(σ),

where σ = ‖A‖ = ‖J_n − A_1‖ ≥ n² − ⌊n/2⌋⌈n/2⌉. By Theorem 5.5.3(i), if A ∈ U_n(σ) satisfies ‖A²‖ = 2σn − n³, then ‖(J_n − A)²‖ = 0, and so by Proposition 5.5.3(i), SB(J_n − A) must be a subgraph of a digraph constructed as above (renaming the vertices if needed). Therefore, A must be permutation similar to a matrix of the form in (5.17). ☐

Proposition 5.5.4 Let A ∈ U̲_n(σ) and let D = SB(A) with vertex set V_1 ∪ V_2 ∪ V_2', using the notation of Definition 5.5.5.
(i) If for some i < n and j > 1, (u_i, v_j) ∈ E(D), then both (u_i, v_{j−1}) ∈ E(D) and (u_{i+1}, v_j) ∈ E(D).
(ii) If σ ≥ C(n, 2) and ‖A²‖ = N̲_n(σ), then in D, i > j implies that (u_i, v_j) ∈ E(D).
(iii) If σ ≥ C(n, 2), and if A ∈ U̲_n(σ) and ‖A²‖ = N̲_n(σ), then every entry under the main diagonal of A is a 1-entry.

Proof Part (i) follows from Definition 5.5.4 and Definition 5.5.5. Part (iii) follows from Part (ii) immediately. To prove Part (ii), we argue by contradiction. Assume that there is a pair p and q such that p > q but (u_p, v_q) ∉ E(D). Since σ ≥ n(n − 1)/2 and by Proposition 5.5.4(i), there must be an i such that (u_i, v_i) ∈ E(D). Obtain a new bipartite digraph D_1 = SB(A_0) from D by deleting (u_i, v_i), (v_i, v_i') and then adding (u_p, v_q) and (v_p, v_q'). Note that

‖A²‖ − ‖A_0²‖ ≥ (d⁻(v_i) + d⁺(v_i) + 1) − (d⁻(v_q) + d⁺(v_p)),   (5.18)

where the degrees are counted in D. By Proposition 5.5.4(i) again,

d⁻(v_i) ≥ n − (i − 1), d⁺(v_i) ≥ i, d⁻(v_q) ≤ n − p, and d⁺(v_p) ≤ q − 1.   (5.19)

It follows by (5.18) and (5.19) that

‖A²‖ − ‖A_0²‖ ≥ p − q + 1 ≥ 2,

contrary to the assumption that ‖A²‖ = N̲_n(σ). ☐
Theorem 5.5.6 (Brualdi and Solheid, [38]) If σ = C(n, 2), then

N̲_n(σ) = C(n, 3).

Moreover, if A ∈ U_n(σ) and ‖A²‖ = N̲_n(σ), then A is permutation similar to L_n, the matrix in B_n each of whose 1-entries lies below the main diagonal.

Proof This follows from Proposition 5.5.4(iii). ☐

To investigate the case when C(n, 2) < σ < n² − ⌊n/2⌋⌈n/2⌉, we establish a few lemmas.
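A brute-force confirmation of Theorem 5.5.6 for n = 4, where σ = C(4, 2) = 6 and the minimum is C(4, 3) = 4, attained by the strictly lower triangular L_4 (the function name is ours):

```python
import itertools
import numpy as np

def N_lower(n, sigma):
    # \underline N_n(sigma): min ||A^2|| over (0,1) matrices with sigma ones
    best = None
    for ones in itertools.combinations(range(n * n), sigma):
        A = np.zeros(n * n, dtype=int)
        A[list(ones)] = 1
        A = A.reshape(n, n)
        v = int((A @ A).sum())
        best = v if best is None else min(best, v)
    return best

assert N_lower(4, 6) == 4     # = C(4,3), attained by L_4
```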
Lemma 5.5.1 (Liu, [175]) Let A = (a_{ij}) ∈ U_n(σ), let A(σ + e_{pq}) denote the matrix obtained from A by replacing the 0-entry a_{pq} by a 1-entry, and let s_p and r_q denote the p-th column sum and the q-th row sum of A, respectively. Then

‖A(σ + e_{pq})²‖ − ‖A²‖ = s_p + r_q if p ≠ q, and s_p + r_q + 1 if p = q.

Proof Note that SB(A(σ + e_{pq})) is obtained from SB(A) by adding the arcs (u_p, v_q) and (v_p, v_q'). If p ≠ q, the number of newly created directed paths of length 2 from V_1 to V_2' in SB(A(σ + e_{pq})) is d⁺(v_q) + d⁻(v_p) = r_q + s_p; if p = q, the additional path u_p v_p v_p' is also created. Lemma 5.5.1 thus follows from Proposition 5.5.3(i). ☐

Lemma 5.5.2 (Liu, [175]) Let L_n be the matrix in B_n each of whose 1-entries lies below the main diagonal. An (i, j)-entry of L_n is called an upper entry if i ≤ j. Let A denote the matrix obtained from L_n by changing the upper entries at (i_t, j_t), where 1 ≤ t ≤ r, from 0 to 1. If all the i_t's are distinct and all the j_t's are distinct, then

‖A²‖ = ‖L_n²‖ + Σ_{t=1}^{r} Δ(i_t, j_t),

where

Δ(i_t, j_t) = (n − 1) + j_t − i_t if i_t ≠ j_t, and n if i_t = j_t.

Proof This follows immediately from Lemma 5.5.1 and Theorem 5.5.6. ☐

With Proposition 5.5.4(iii) and Lemma 5.5.2, it is not difficult to prove the following theorem.
242
Combinatorial Analysis in Matrices
Theorem 5.5.7 (Liu, [175]) Let σ = C(n, 2) + k, where 1 ≤ k ≤ n. Then

N̲_n(σ) = C(n, 3) + kn.
Theorem 5.5.8 (Brualdi and Solheid, [38]) If σ = C(n, 2) + n = C(n+1, 2), then

N̲_n(σ) = C(n, 3) + n² = C(n+2, 3).

Moreover, if A ∈ U_n(σ) and ‖A²‖ = N̲_n(σ), then A is permutation similar to L_n^Δ, the matrix in B_n each of whose 1-entries lies on or below the main diagonal.

Proof This follows from Theorem 5.5.7 with k = n. ☐
Proof This follows from Theorem 5.5.7 with k The case when u
=(
n; 1 )
+ k with 1 $
k $ n  1 can be studied similarly. With
the same arguments as in the proofs for Lemmas 5.5.1 and 5.5.2, we can derive the lemmas below, which provide the needed arguments in the proof of Theorem 5.5.9 (Exercise 5.10). Lemma 5.5.3 If u
~(
n;
1 ) , and if A E Un(u) and
IIA2 1l = Nn(u), then every entry
on or under the main diagonal in A is a 1entry. Lemma 5.5.4 Let L~ be the matrix in Bn each of whose 1entry is on or below the main diagonal. Let A denote the matrix obtained from L~ by changing the upper entries at (ito it), where 1::; t $ r, from 0 to 1. Hall the it's are distinct and all the it'S are distinct, then
.
Theorem 5.5.9 (L1U, [175]) Let u
Nn(u)
=
(n+1) +
=(
2
n then L2J,
n+2) + k(n + 2), 3
Theorem 5.5.10 (Brualdi and Solheid, [38]) If n ≥ 2 is even and σ = C(n+1, 2) + n/2, then

N̲_n(σ) = C(n+2, 3) + (n/2)(n + 2).

Proof Apply Theorem 5.5.9 with k = n/2. ☐
Conjecture 5.5.1 (Brualdi and Solheid, [38]) Let k, l, n and q be integers such that 1 ≤ k ≤ n, n = qk + l and 0 ≤ l < k. Let

σ_{n,k} = C(q+1, 2) k² + nl,

and let A_{n,k} be the block matrix whose diagonal consists of q blocks J_k followed by one block J_l, in which every entry below the main diagonal is a 1-entry and every remaining entry is 0. Then

N̲_n(σ_{n,k}) = ‖A_{n,k}²‖.

Theorem 5.5.10 proved Conjecture 5.5.1 for the special case when k = 2 and n is even. To further approach this conjecture, we introduce some notation. Through the end of this section, let L_n^Δ(S_r) be the matrix obtained from L_n^Δ by changing the 0-entries into 1-entries at the positions (i, i+1) with i_0 ≤ i ≤ i_0 + r − 1, for some 1 ≤ i_0 ≤ n − 1 (these newly added 1-entries are called an S_r); let L_n^Δ(T_r) be the matrix obtained from L_n^Δ by changing the 0-entries into 1-entries at the positions (i, i+j) with i_0 ≤ i ≤ i_0 + r − 1 and 1 ≤ j ≤ r − (i − i_0), for some 1 ≤ i_0 ≤ n − 1 (these newly added 1-entries are called a T_r). Let Δ(S_r) = ‖(L_n^Δ(S_r))²‖ − ‖(L_n^Δ)²‖ and Δ(T_r) = ‖(L_n^Δ(T_r))²‖ − ‖(L_n^Δ)²‖. Let ‖T_r‖ = r(r+1)/2 denote the number of 1-entries of a T_r, and let ‖S_r‖ = r denote the number of 1-entries of an S_r.
Proof By Lemma 5.5.1, A(Tr)
=
=
r
r
j=l
j=l
Li + Li(j + 1)
n
n(r;1)+(r;1)+2r:1(r;1)
= (
r; n
1 ) ( + 2(r; 2)) .
244
Combinatorial Analysis in Matrices
This proves Lemma 5.5.5. Lemma 5.5.6 H IITrll
0
= ~~ 1 11Tr,ll for some t > 1, then d(Tr) > ~~ 1 d(Tr;).
Proof By assumption and by r
> r,,
=
ti==1 (r,
> (
ri;
+1
) (r + 2)
2
1 ) (rd 2).
It follows by Lemma 5.5.5 that d(Tr) > ~~ 1 d(Tr;).
0
The next two lemmas can be proved similarly.

Lemma 5.5.7 Δ(S_r) = r(n + 3) − 1.

Lemma 5.5.8 If ‖S_r‖ = Σ_{i=1}^{t} ‖S_{r_i}‖ for some t > 1, then Δ(S_r) > Σ_{i=1}^{t} Δ(S_{r_i}).
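Lemma 5.5.7 can be verified numerically by building L_n^Δ(S_r) directly (positions are 1-based as in the text; helper names are ours):

```python
import numpy as np

def norm2(A):
    # ||A^2|| for a (0,1) matrix A
    return int((A @ A).sum())

def L_delta(n):
    # L_n^Delta: every entry on or below the main diagonal equal to 1
    return np.tril(np.ones((n, n), dtype=int))

def add_S_r(n, i0, r):
    # L_n^Delta(S_r): add 1-entries at (i, i+1) for i0 <= i <= i0 + r - 1
    A = L_delta(n)
    for i in range(i0, i0 + r):
        A[i - 1, i] = 1        # convert 1-based position (i, i+1)
    return A

n = 7
base = norm2(L_delta(n))
for r in range(1, 4):
    # Lemma 5.5.7: Delta(S_r) = r(n + 3) - 1
    assert norm2(add_S_r(n, 2, r)) - base == r * (n + 3) - 1
```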
Theorem 5.5.11 Suppose that

σ = C(n+1, 2) + k, with ⌊n/2⌋ < k ≤ 2⌊(n−1)/2⌋.

Then

N̲_n(σ) = C(n+2, 3) + k(n + 3) − ⌊(n−1)/2⌋.

Proof Assume that A ∈ U_n(σ) with ‖A²‖ = N̲_n(σ). By Proposition 5.5.2, we may assume that A ∈ U̲_n(σ), and so L_n^Δ is a submatrix of A. By Lemma 5.5.8, the minimum of ‖A²‖ can be obtained by putting k − ⌊(n−1)/2⌋ copies of S_2 and 2⌊(n−1)/2⌋ − k copies of S_1 above the main diagonal of L_n^Δ. Therefore,

N̲_n(σ) = ‖(L_n^Δ)²‖ + Σ_{i=1}^{k−⌊(n−1)/2⌋} Δ(S_2) + Σ_{i=1}^{2⌊(n−1)/2⌋−k} Δ(S_1)
       = C(n+2, 3) + k(n + 3) − ⌊(n−1)/2⌋.

This completes the proof. ☐
5.6
Perfect Elimination Scheme
This section is devoted to the discussion of perfect elimination schemes of matrices; graph theory techniques can be applied in their study.

Definition 5.6.1 Let A = (a_{ij}) ∈ M_n be a nonsingular matrix. The following process converting A into I is called the Gaussian elimination process. For each t = 1, 2, ..., n,
(1) select a nonzero entry at some (i_t, j_t)-cell (called a pivot);
(2) apply row and column operations to convert this entry into 1, and to convert the other entries in Row i_t and Column j_t into zero.
The resulting matrix can then be converted to the identity matrix I by row permutations only. The sequence (i_1, j_1), (i_2, j_2), ..., (i_n, j_n) is called a pivot sequence. A perfect elimination scheme is a pivot sequence such that no zero entry of A becomes a nonzero entry during the Gaussian elimination process.

Example 5.6.1 For the 4 × 4 matrix displayed in the text (the display is not recoverable from the scan), the pivot sequence (1,1), (2,2), (3,3), (4,4) is not a perfect elimination scheme, since the 0-entry at (3,2) becomes a nonzero entry in the process. On the other hand, the pivot sequence (4,4), (3,3), (2,2), (1,1) is a perfect elimination scheme.

Example 5.6.2 There exist matrices that do not have a perfect elimination scheme; the text displays one such matrix A (the display is not recoverable from the scan).

Note that if for a given (i, j) there exist s and t such that a_{it} ≠ 0 and a_{sj} ≠ 0 but a_{st} = 0, then pivoting at (i, j) turns the 0-entry at (s, t) into a nonzero entry.
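The fill-in rule just stated can be checked mechanically on the zero pattern of A; the sketch below simulates a pivot sequence structurally (the matrix and function name are our own illustration, not the book's example):

```python
import numpy as np

def is_perfect_scheme(A, pivots):
    # Structural simulation: pivoting at (i, j) fills entry (s, t) whenever
    # a_{sj} != 0 and a_{it} != 0; a scheme is perfect iff no zero entry of
    # the pattern ever becomes nonzero.
    P = (np.array(A) != 0).astype(int)
    live_r = set(range(P.shape[0]))
    live_c = set(range(P.shape[1]))
    for (i, j) in pivots:
        rows = [s for s in live_r if P[s, j] and s != i]
        cols = [t for t in live_c if P[i, t] and t != j]
        if any(P[s, t] == 0 for s in rows for t in cols):
            return False       # fill-in: a zero entry becomes nonzero
        live_r.discard(i)
        live_c.discard(j)
    return True

# a small analogue of Example 5.6.1 (0-based indices):
A = [[1, 1, 0], [1, 1, 1], [1, 0, 1]]
assert not is_perfect_scheme(A, [(0, 0), (1, 1), (2, 2)])
assert is_perfect_scheme(A, [(2, 2), (1, 1), (0, 0)])
```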
Example 5.6.3 Matrices of the following type always have a perfect elimination scheme:

A = [a staircase-patterned matrix; the display is not recoverable from the scan],

where * denotes a nonzero entry.
Definition 5.6.2 Let G be a simple graph, and let ω(G), χ(G), α(G) and k(G) denote the clique number (maximum order of a clique), the chromatic number, the independence number (maximum cardinality of an independent set), and the clique covering number (minimum number of cliques that cover V(G)), respectively. For a subset A ⊆ V(G), G[A] denotes the subgraph of G induced by A. A graph G is perfect if G satisfies any one of the following:
(P1) ω(G[A]) = χ(G[A]), for all A ⊆ V(G).
(P2) α(G[A]) = k(G[A]), for all A ⊆ V(G).
(P3) ω(G[A]) α(G[A]) ≥ |A|, for all A ⊆ V(G).
(The equivalence of (P1), (P2) and (P3) was shown by Lovász [190] in 1972.)

Definition 5.6.3 A graph G is chordal if G does not have an induced cycle of length greater than three. Given a graph G, a vertex v ∈ V(G) is a simplicial vertex if G[N(v)] is a clique. If [v_1, v_2, ..., v_n] is an ordering of V(G) such that v_i is simplicial in G[{v_i, ..., v_n}] for each i = 1, 2, ..., n − 1, then [v_1, ..., v_n] is a perfect vertex elimination scheme of G, or just a perfect scheme.

An edge e = xy in a bipartite graph H is bisimplicial if the induced subgraph H[N(x) ∪ N(y)] is a complete bipartite graph. Given an ordering [e_1, e_2, ..., e_m] of pairwise nonadjacent edges in H, let S_i denote the set of vertices of H incident with {e_1, e_2, ..., e_i}, where 1 ≤ i ≤ m, and let S_0 = ∅. An ordering [e_1, e_2, ..., e_m] of edges in H is a partial scheme if the e_i's are pairwise nonadjacent edges in H and if for each i ≥ 1, e_i is bisimplicial in H − S_{i−1}. An ordering [e_1, e_2, ..., e_m] of edges in H is a perfect edge elimination scheme (or just a scheme) if it is a partial scheme and H − S_m is edgeless. A bipartite graph H possessing a scheme is a perfect elimination bipartite graph.

Theorem 5.6.1 (Dirac, [74]) Let G be a chordal graph. Then G has a simplicial vertex. If G is not a complete graph, then G has two nonadjacent simplicial vertices.
Sketch of Proof This is trivial if G is a complete graph. Argue by induction on |V(G)|, and assume that G has two nonadjacent vertices u and v and a minimum vertex set S separating u and v. Let G_u and G_v denote the connected components of G − S containing u and v, respectively, and let G[V(G_u) ∪ S] and G[V(G_v) ∪ S] denote the subgraphs of G induced by V(G_u) ∪ S and by V(G_v) ∪ S, respectively. By induction, either G[V(G_u) ∪ S] contains two nonadjacent simplicial vertices, whence one of these two vertices must lie in G_u, as G[V(G_u) ∪ S] is induced; or G[V(G_u) ∪ S] is complete. Apply induction to G[V(G_v) ∪ S] as well, and the theorem follows by induction. ☐
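Theorem 5.6.1 suggests a greedy chordality test: repeatedly delete a simplicial vertex; the graph is chordal exactly when this succeeds, and the deletion order is then a perfect scheme (this is the content of Theorem 5.6.2 below). A sketch with adjacency dictionaries (names are ours):

```python
from itertools import combinations

def perfect_scheme(adj):
    # Greedy simplicial elimination: returns a perfect vertex elimination
    # scheme if the graph is chordal, else None.
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while adj:
        for v in adj:
            if all(b in adj[a] for a, b in combinations(adj[v], 2)):
                break                  # v is simplicial: N(v) is a clique
        else:
            return None                # no simplicial vertex: not chordal
        order.append(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return order

# a 4-cycle has no chord, hence no simplicial vertex
c4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
assert perfect_scheme(c4) is None
# a triangle with a pendant vertex is chordal
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
assert perfect_scheme(g) is not None
```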
Theorem 5.6.2 (Fulkerson and Gross, (94]) Let G be a simple graph. The following are equivalent. (i) G is chordal. (ii) G has a perfect scheme such that any simplicial vertex of G can be the first vertex in a perfect scheme. (iii) Every minimal separation set of G induces a complete subgraph. Proof (iii)==> (i). Let u,x,v,YloY2o · · · ,yk,a be a cycleofG with length k+3 ~ 4. Note that any vertex subsetS separatingu and v must contain x,y1, • · · ,yk, and so xy; E E(G) is a chord of this cycle. (i) ==> (iii). Let S be a minimal vertex set separating u and v in G. Let Gu and Gv be the components of G \ S containing u and v, respectively. Since S is mi.nimal, each 8 E S is adjacent to some vertex in Gu and some vertex in Gv. For each pair of distinct vertices x,y e S, there exists an (x,y)path P whose internal vertices are all in Gu, and an (x,y)path Q whose internal vertices are all in Gv. It follows by (i) that xy E E(G), and so S induces a complete subgraph of G. (i) ==> (ii). Argue by induction. By Theorem 5.6.1, G has a simplicial vertex v, and so G v is also a chordal graph, which has a perfect scheme. The add v to this scheme to get a perfect scheme of G with v being the first vertex. (ii) ==> (i). Let a be a cycle of G and let v e V(a) such that in a perfect scheme of G, v is the first among all vertices in V(a). As !N(v) n V(a)! ~ 2, that v is simplicial implies that a has a chord. 0 Theorem 5.6.3 below gives an inductive structural description of chordal graphs. A proof of Theorem 5.6.3 can be found in [155]. Theorem 5.6.3 (Leukev, Rose and Tarjan, [155]) H G is a chordal graph, then there exists a sequence of chordal graphs G" 0 ~ i ~ 8, such that Go= G, G. is a complete graph and for each 1 ~ i ~ 8  1, G; is obtained by adding a new edge from Gil· Definition 5.6.4 The notions of associated digraph and the bipartite representation of a
248
Combinatorial Analysis in Matrices
(0,1) matrix will be extended. Let A = (a_{ij}) ∈ M_n. The associated digraph D(A) of A has vertex set {v_1, v_2, ..., v_n} such that (v_i, v_j) ∈ E(D(A)) if and only if both a_{ij} ≠ 0 and i ≠ j. When A is symmetric, D(A) can be viewed as a graph, and in this case we write G(A) for D(A). The bipartite representation B(A) of A has vertex partite sets X = {x_1, x_2, ..., x_n} and Y = {y_1, y_2, ..., y_n}, representing the rows and the columns of A, respectively, such that (x_i, y_j) ∈ E(B(A)) if and only if a_{ij} ≠ 0. For each i, x_i and y_i are partners corresponding to the vertex v_i in D(A). Some observations from the definitions and from the remarks in Example 5.6.2 are listed in Proposition 5.6.1.
Proposition 5.6.1 Let A = (a_{ij}) ∈ M_n.
(i) If A is symmetric, then D(A) can be viewed as a graph. An entry a_{ii} is a pivot if and only if v_i is simplicial in D(A).
(ii) Suppose that A is symmetric and each a_{ii} ≠ 0. A perfect elimination scheme of A with each a_{ii} as a pivot corresponds to a perfect vertex elimination scheme of G(A).
(iii) A bisimplicial edge of B(A) corresponds to a pivot in A, and a perfect elimination scheme of A corresponds to a perfect edge elimination scheme of B(A).
Theorem 5.6.4 (Golumbic [60] and Rose [218]) Let A = (a_{ij}) ∈ M_n be a symmetric matrix with a_{ii} ≠ 0, 1 ≤ i ≤ n. The following are equivalent.
(i) A has a perfect elimination scheme.
(ii) A has a perfect elimination scheme with each a_{ii} as a pivot.
(iii) G(A) is a chordal graph.
Proof Parts (ii) and (iii) are equivalent by Theorem 5.6.2. As (ii) trivially implies (i), it suffices to show that (i) implies (iii). By Proposition 5.6.1(iii), we may assume that B(A) has a perfect edge elimination scheme [e_1, ..., e_m]. Suppose G(A) has a chordless cycle v_{a_1} v_{a_2} ··· v_{a_p} v_{a_1} for some p ≥ 4. Since each a_{ii} ≠ 0, the induced subgraph H = B(A)[{x_{a_1}, ..., x_{a_p}, y_{a_1}, ..., y_{a_p}}] is a 3-regular graph with edge set

{(x_{a_i}, y_{a_{i+1}}) : 1 ≤ i ≤ p − 1} ∪ {(x_{a_p}, y_{a_1}), (y_{a_p}, x_{a_1})} ∪ {(y_{a_i}, x_{a_{i+1}}) : 1 ≤ i ≤ p − 1} ∪ {(x_{a_i}, y_{a_i}) : 1 ≤ i ≤ p}.

Since [e_1, ..., e_m] is a perfect edge elimination scheme, there is a smallest j such that e_j is incident with a vertex in V(H). The edge e_j cannot be in E(H), since no edge in E(H) is bisimplicial. Without loss of generality, we assume that e_j = x_{a_1} y_s for some y_s ∉ V(H). Denote by x_s the partner of y_s.
Since p ≥ 4, in H we have N(x_{a_1}) = {y_{a_1}, y_{a_2}, y_{a_p}} and N(y_{a_1}) ∩ N(y_{a_2}) ∩ N(y_{a_p}) = {x_{a_1}}. Since e_j = x_{a_1} y_s is bisimplicial, V(H) ∩ N(y_s) = {x_{a_1}}. By symmetry, V(H) ∩ N(x_s) = {y_{a_1}}. Note that x_{a_1} y_{a_1}, x_s y_s ∈ E(B(A)) but x_s y_{a_2} ∉ E(B(A)), contrary to the fact that x_{a_1} y_s is bisimplicial. This contradiction shows that no chordless cycle of length p ≥ 4 exists, and so G(A) must be chordal. □
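The zero-pattern constructions of Definition 5.6.4 are easy to mechanize. The sketch below builds the arc set of D(A) and the edge set of B(A) from a matrix given as a list of rows; the function names are ours, not from the text, and vertices are indexed from 0 rather than 1:

```python
def associated_digraph(A):
    """Arcs (v_i, v_j) of D(A): a_ij != 0 and i != j (Definition 5.6.4)."""
    n = len(A)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and A[i][j] != 0}

def bipartite_representation(A):
    """Edges (x_i, y_j) of B(A): a_ij != 0, rows versus columns of A."""
    n = len(A)
    return {(i, j) for i in range(n) for j in range(n) if A[i][j] != 0}
```

For a symmetric A the arc set of D(A) is symmetric, so D(A) can be read as the graph G(A), as the text observes.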
We conclude this section with two more results, by Golumbic and Goss [61], on the properties of perfect elimination bipartite graphs. Interested readers are referred to [61] for the proofs.
Theorem 5.6.5 (Golumbic and Goss, [61]) If H is a perfect elimination bipartite graph and if e = xy is bisimplicial in H, then H − {x, y} is also a perfect elimination bipartite graph.
Theorem 5.6.6 (Golumbic and Goss, [61]) If a bipartite graph H has no pair of disjoint edges, then it is a perfect elimination bipartite graph.
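Theorem 5.6.2 also yields a simple algorithmic test for chordality: repeatedly delete any simplicial vertex, and the graph is chordal exactly when this empties it. A minimal sketch (the adjacency-dict representation and function names are our own, not from the text):

```python
def is_simplicial(adj, v):
    """v is simplicial when its neighbourhood induces a complete subgraph."""
    nbrs = list(adj[v])
    return all(nbrs[b] in adj[nbrs[a]]
               for a in range(len(nbrs)) for b in range(a + 1, len(nbrs)))

def perfect_elimination_scheme(adj):
    """Greedily eliminate simplicial vertices; by Theorem 5.6.2 this
    succeeds exactly when the graph is chordal.
    adj: dict mapping each vertex to the set of its neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    scheme = []
    while adj:
        v = next((u for u in adj if is_simplicial(adj, u)), None)
        if v is None:
            return None                           # no simplicial vertex: not chordal
        for u in adj[v]:
            adj[u].discard(v)                     # delete v from the graph
        del adj[v]
        scheme.append(v)
    return scheme
```

Theorem 5.6.2(ii) is what makes the greedy choice safe: any simplicial vertex may be taken first, so no backtracking is needed.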
5.7
Completions of Partial Hermitian Matrices
All the matrices considered in this section will be over the complex number field. Denote by z̄ the conjugate of a complex number z, and by Ā the conjugate of a matrix A with complex entries. An n × n matrix H is Hermitian if H = H* = (H̄)^T.
An n × n matrix A is positive definite Hermitian (positive semidefinite Hermitian, respectively) if A is Hermitian and, for any n-dimensional complex vector x ≠ 0, x̄^T A x > 0 (≥ 0, respectively). While all positive definite Hermitian matrices are positive semidefinite Hermitian, the matrix J_n, when n ≥ 2, is positive semidefinite Hermitian but not positive definite Hermitian. A Hermitian matrix A = (a_{ij}) is a partial Hermitian matrix if some of the a_{ij}'s are not specified; these unspecified entries may be viewed as blank slots. Given a partial Hermitian matrix A, is it possible to fill the blank slots with complex numbers so that the resulting matrix is positive semidefinite Hermitian? If the answer is affirmative, then the resulting Hermitian matrix is a positive semidefinite completion of the partial Hermitian matrix A. Results in this section are mostly from the work of Grone, Johnson, Sá and Wolkowicz [107]. Some facts and properties of positive semidefinite Hermitian matrices are listed in the proposition below.
Proposition 5.7.1 Each of the following holds.
(i) Let A_1, A_2, ..., A_k be positive semidefinite Hermitian matrices. If c_1, c_2, ..., c_k are nonnegative real numbers, then ∑_{i=1}^k c_i A_i is also positive semidefinite Hermitian.
(ii) Let A = (a_{ij}) be an n × n Hermitian matrix. Then each a_{ii} is a real number, for i = 1, 2, ..., n.
(iii) A is positive semidefinite Hermitian if and only if every eigenvalue of A is nonnegative.
(iv) Let A be a Hermitian matrix. Then A is positive semidefinite if and only if for each i = 1, 2, ..., n, det(A_i) ≥ 0, where A_i is an i × i principal submatrix of A.
(v) Let A = (a_{ij}) be a positive semidefinite Hermitian matrix. Then

det(A) ≤ ∏_{i=1}^n a_{ii}.   (5.20)

Moreover, if a_{ii} > 0 for each i = 1, 2, ..., n, then equality holds in (5.20) if and only if A is diagonal.
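Inequality (5.20) is easy to check numerically. In the sketch below (our own example; the matrix is positive semidefinite since its eigenvalues are 2 and 2 ± √2, which are all positive), exact integer arithmetic is used throughout:

```python
def det(A):
    """Determinant by cofactor expansion along the first row (fine for small n)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

A = [[2, 1, 0], [1, 2, 1], [0, 1, 2]]   # real symmetric, positive semidefinite
lhs = det(A)                            # left side of (5.20)
rhs = A[0][0] * A[1][1] * A[2][2]       # product of the diagonal entries
```

Since A is not diagonal and has positive diagonal, the "moreover" part of (v) predicts strict inequality here, and indeed det(A) = 4 < 8.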
Definition 5.7.1 Let G be a graph with vertex set {v_1, v_2, ..., v_n}, with loops permitted. A G-partial matrix is a set of complex numbers, denoted (a_{ij})_G, where a_{ij} is defined if and only if v_i v_j ∈ E(G). (As G is undirected, a_{ij} is defined if and only if a_{ji} is defined.) A completion of (a_{ij})_G is an n × n matrix M = (m_{ij}) such that m_{ij} = a_{ij} whenever v_i v_j ∈ E(G). We say that M is a positive completion (a nonnegative completion, respectively) if and only if M is a completion of (a_{ij})_G and M is positive definite (positive semidefinite, respectively). Given a graph G with V(G) = {v_1, v_2, ..., v_n}, the G-partial matrix (a_{ij})_G is G-partial positive (G-partial nonnegative, respectively) if both of the following hold.
(5.7.1A) For all 1 ≤ i, j ≤ n for which a_{ij} is defined, a_{ij} = ā_{ji}, and
(5.7.1B) for any complete subgraph K of G, the corresponding principal submatrix (a_{ij})_{v_i, v_j ∈ V(K)} of (a_{ij})_G is positive definite (positive semidefinite, respectively).
Let G be a subgraph of J. A J-partial matrix (b_{ij})_J extends a G-partial matrix (a_{ij})_G if b_{ij} = a_{ij} for every v_i v_j ∈ E(G) ⊆ E(J). A graph G is completable (nonnegative-completable, respectively) if and only if any G-partial positive (G-partial nonnegative, respectively) matrix has a positive (nonnegative, respectively) completion. Let LP(G) denote the set of vertices of G at which G has loops. Without loss of generality, we assume that either LP(G) = ∅ or LP(G) = {v_1, v_2, ..., v_k} for some 1 ≤ k ≤ n. For notational convenience, we also use LP(G) to denote the subgraph of G induced by LP(G). Proposition 5.7.3 below indicates that the terms "nonnegative-completable" and "completable" coincide.
Proposition 5.7.2 (Grone, Johnson, Sá and Wolkowicz, [107]) G is completable (nonnegative-completable, respectively) if and only if LP(G) is completable (nonnegative-completable, respectively).
Proof It suffices to show the completable case. If G is completable, then LP(G) is also completable, since any complete subgraph must be contained in LP(G). For the converse, denote

A_G(x) = [ A_{11}     A_{12}
           A_{12}^*   xI + H ],

where A_{11} is a positive definite k × k matrix (representing a positive completion of an LP(G)-partial positive matrix) and H is an (n − k) × (n − k) Hermitian matrix. Note that A_G(x) is positive definite for sufficiently large positive x. □
Proposition 5.7.3 (Grone, Johnson, Sá and Wolkowicz, [107]) Let G be a graph with G = LP(G). Then G is completable if and only if G is nonnegative-completable.
Proof Assume first that G is completable. For a G-partial nonnegative matrix (a_{ij})_G, define, for each integer n > 0, A_n = (a_{ij})_G + (1/n)I. Then each A_n is a G-partial positive matrix, and so A_n has a positive completion M_n. Since G = LP(G), the sequence M_n is bounded and so has a convergent subsequence with limit A, which will be a nonnegative completion of (a_{ij})_G.
Assume then that G is nonnegative-completable, and let (a_{ij})_G be a G-partial positive matrix. Choose ε > 0 so that (a_{ij})_G − εI is still a G-partial positive matrix. Then (a_{ij})_G − εI has a nonnegative completion A, which yields a positive completion A + εI of (a_{ij})_G. □
Definition 5.7.2 An ordering of a graph G is a bijection α: V(G) → {1, 2, ..., n}, and for vertices u, v ∈ V(G), we say that u follows v (with respect to the ordering α) if α(v) < α(u). A graph G is a band graph if there exists an ordering α of G and an integer m with 2 ≤ m ≤ n such that

uv ∈ E(G) if and only if |α(u) − α(v)| ≤ m − 1.
Example 5.7.1 Let G denote the graph with V(G) = {v_1, v_2, ..., v_n} and E(G) = {v_i v_j : 0 < |i − j| ≤ 2}. Let α be the map defined by α(v_i) = i, and let m = 3. Then we can check that G is a band graph.
Note that every band graph is a chordal graph, but a chordal graph may not be a band graph. Let H denote the graph obtained from the 3-cycle u_1 u_2 u_3 u_1 by adding two new vertices u_4 and u_5 such that u_4 is only adjacent to u_2 and u_5 is only adjacent to u_3. Then H has only one cycle, a 3-cycle, and so it is chordal. If H were a band graph, then there would exist a bijection α and an integer 2 ≤ m ≤ 5 satisfying the definition of a band graph. Since H has a 3-cycle, m > 2. If m ≥ 4, the vertex u_i with α(u_i) = 3 would have degree at least 4, which is absurd. Now assume that m = 3. Then the vertices u_i with α(u_i) = 1 or α(u_i) = 5 will each have degree 2, whereas H has only one vertex of degree 2, absurd again.
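For small graphs the band condition of Definition 5.7.2 can simply be tested over all orderings and all admissible m. The sketch below (representation and names are ours) confirms both halves of the discussion above: a path is a band graph, while the graph H of Example 5.7.1 is not:

```python
from itertools import permutations

def respects_band(adj, order, m):
    """Check: uv is an edge iff |alpha(u) - alpha(v)| <= m - 1."""
    alpha = {v: i + 1 for i, v in enumerate(order)}
    verts = list(adj)
    return all((v in adj[u]) == (abs(alpha[u] - alpha[v]) <= m - 1)
               for i, u in enumerate(verts) for v in verts[i + 1:])

def is_band_graph(adj):
    """Brute force over all orderings and all m with 2 <= m <= n."""
    n = len(adj)
    return any(respects_band(adj, order, m)
               for m in range(2, n + 1) for order in permutations(adj))
```

The brute force is exponential in the number of vertices, so it is only a verification aid for examples like these, not a practical recognition algorithm.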
Theorem 5.7.1 (Dym and Gohberg, [78]) Every band graph is completable.
Theorem 5.7.2 (Grone, Johnson, Sá and Wolkowicz, [107]) A graph G is completable if and only if G is chordal.
With the remarks before Theorem 5.7.1, we can see that Theorem 5.7.2 generalizes Theorem 5.7.1. The proof of Theorem 5.7.2 requires several lemmas and a new notion. A cycle C in a graph G is minimal if C is an induced cycle in G. Note that any minimal cycle is chordless. For an edge e = uv ∉ E(G), G + e denotes the graph obtained from G by joining u and v by the edge e.
Lemma 5.7.1 Let G be a graph. The following are equivalent.
(i) G has no minimal cycle of length exactly 4.
(ii) For any pair of distinct nonadjacent vertices u, v ∈ V(G), the graph G + uv has a unique maximal complete subgraph H with u, v ∈ V(H).
Proof (i) ⇒ (ii). Let u, v ∈ V(G) be distinct nonadjacent vertices, and let H_1 and H_2 denote two maximal complete subgraphs in G + uv with u, v ∈ V(H_1) ∩ V(H_2) such that neither contains the other as a subgraph. Then there exist z_i ∈ V(H_i) − {u, v} with z_1 z_2 ∉ E(G + uv). But then G[{z_1, z_2, u, v}] is a minimal 4-cycle of G, a contradiction.
(ii) ⇒ (i). Suppose that G[{z_1, z_2, u, v}] is a minimal 4-cycle of G. Then uv ∉ E(G) and u ≠ v. Hence G + uv has a unique maximal complete subgraph H containing uv. Note that z_1, z_2 ∈ V(H), and so z_1 z_2 ∈ E(G), absurd. □
Lemma 5.7.2 Let G' be a subgraph of G induced by V(G'). If G is completable, then G' is also completable.
Proof Let (a'_{ij})_{G'} be a G'-partial nonnegative matrix. Define a G-partial nonnegative matrix (a_{ij})_G as follows.
a_{ij} = { a'_{ij}   if v_i v_j ∈ E(G'),
         { 0         otherwise.
Then (a_{ij})_G has a nonnegative completion M. The principal submatrix M' corresponding to the rows and columns indexed by V(G') is then a nonnegative completion of (a'_{ij})_{G'}. □
Lemma 5.7.3 If a k × k positive semidefinite matrix A = (a_{ij}) satisfies a_{ij} = 1 whenever |i − j| ≤ 1, then A = J_k.
Proof We may assume that k ≥ 4, since the cases k = 2, 3 are easy to verify. If there exists an a_{ij} ≠ 1 for some 1 ≤ i < j ≤ k, then j − i ≥ 2. Let A' denote the principal
submatrix of A corresponding to the rows and columns i, i + 1 and j. Then A' satisfies the hypothesis of the lemma, and so a_{ij} = 1, absurd. This completes the proof. □
Proof of Theorem 5.7.2 Assume that G is completable but G is not chordal. Then G has an induced cycle C of length at least 4. Without loss of generality, assume that C = v_1 v_2 ··· v_k v_1. Note that C is an induced subgraph of G. Define a C-partial matrix (a'_{ij})_C as follows:

a'_{ij} = {  1   if |i − j| ≤ 1 and {i, j} ≠ {1, k},
          { −1   if (i, j) = (1, k) or (i, j) = (k, 1).

Then (a'_{ij})_C is a C-partial nonnegative matrix (we need to check only the principal minors of order 2). By Lemma 5.7.3, (a'_{ij})_C is not completable to a positive semidefinite matrix, and so by Lemma 5.7.2, G is not completable either, a contradiction.
Conversely, assume that G is chordal but not a complete graph. Let G = G_0, G_1, ..., G_s be a sequence of chordal graphs satisfying the conditions of Theorem 5.6.3. Let A = (a_{ij})_G be a G-partial positive matrix. We shall show that there exists a G_1-partial positive matrix A_1 which extends A, and so the sufficiency of Theorem 5.7.2 will follow by induction on s. Let uv ∈ E(G_1) − E(G). By Lemma 5.7.1, there exists a unique maximal complete subgraph H of G_1 with u, v ∈ V(H). Without loss of generality, assume that V(H) = {v_1, v_2, ..., v_p} with u = v_1 and v = v_p. For any complex number z, let A_1(z) denote the G_1-partial matrix extending A with the (1, p)-entry being z and the (p, 1)-entry being z̄. Let M(z) denote the principal submatrix of A_1(z) corresponding to the vertices in V(H). Then A_1(z) is a G_1-partial positive matrix if and only if M(z) is positive definite. Thus it suffices to show that there exists a z_0 such that M(z_0) is positive definite. However, with α: V(H) → {1, 2, ..., p} defined by α(v_i) = i, the edges in E(H) − {uv} are exactly the edges v_i v_j with 1 ≤ |α(v_i) − α(v_j)| ≤ p − 2, so H − uv is a band graph, and so such a z_0 exists, by Theorem 5.7.1. □

5.8
Estimation of the Eigenvalues of a Matrix
Throughout this section, C denotes the field of complex numbers. All the matrices considered in this section will be over C. Let A = (a_{ij}) be an n × n matrix, D = diag(a_{11}, a_{22}, ..., a_{nn}) and B = A − D. For a z ∈ C, define A_z = D + zB. Then A_0 = D and A_1 = A. Note that the eigenvalues of A_0 are a_{11}, a_{22}, ..., a_{nn}. As the zeros of a polynomial vary continuously with the coefficients of the polynomial, it is natural to guess that when |z| is
small, the eigenvalues of A_z will be close to a_{11}, a_{22}, ..., or a_{nn} on the complex plane. This is confirmed by Geršgorin.
Definition 5.8.1 Let A = (a_{ij}) be an n × n matrix, and let

R_i(A) = ∑_{j=1, j≠i}^n |a_{ij}|,   1 ≤ i ≤ n.

The matrix A is diagonally dominant if

|a_{ii}| ≥ R_i(A),   1 ≤ i ≤ n.   (5.21)

If the inequalities in (5.21) are all strict, then A is strictly diagonally dominant.
Theorem 5.8.1 (Geršgorin, [98]) Let A = (a_{ij}) be an n × n matrix. Then all eigenvalues of A lie in the region G(A), the union of the closed discs

G(A) = ⋃_{i=1}^n {z ∈ C : |z − a_{ii}| ≤ R_i(A)}.   (5.22)

Moreover, if the union of k of these n discs forms a connected region on the complex plane, and if this connected region is disjoint from the other n − k discs, then this connected region contains exactly k eigenvalues of A.
Proof Let λ be an eigenvalue of A, and let x = (x_1, x_2, ..., x_n)^T be an eigenvector belonging to λ. Pick a component x_p such that |x_p| = max_{1≤i≤n} |x_i|. Since x ≠ 0, |x_p| > 0. Thus

λ x_p = ∑_{j=1}^n a_{pj} x_j.

This is equivalent to

x_p (λ − a_{pp}) = ∑_{j=1, j≠p}^n a_{pj} x_j.

It follows that

|x_p (λ − a_{pp})| = |∑_{j≠p} a_{pj} x_j| ≤ ∑_{j≠p} |a_{pj} x_j| ≤ |x_p| ∑_{j≠p} |a_{pj}| = |x_p| R_p(A).

This proves the first assertion of Theorem 5.8.1.
Let A = D + B, where D = diag(a_{11}, a_{22}, ..., a_{nn}), and let A_ε = D + εB for an ε > 0. Note that R_i(A_ε) = R_i(εB) = εR_i(B) = εR_i(A). Without loss of generality, assume that the first k discs

⋃_{i=1}^k {z ∈ C : |z − a_{ii}| ≤ R_i(A)}

form a connected region G_k(A) on the complex plane which is disjoint from the other n − k discs. Note that for any ε with 0 ≤ ε ≤ 1,

G_k(A_ε) = ⋃_{i=1}^k {z ∈ C : |z − a_{ii}| ≤ εR_i(A)} ⊆ G_k(A),

and so G_k(A_ε) is also disjoint from the other discs {z ∈ C : |z − a_{ii}| ≤ R_i(A)}, k + 1 ≤ i ≤ n. For each i with 1 ≤ i ≤ k, the continuous curve λ_i(A_ε), 0 ≤ ε ≤ 1, starts at λ_i(A_0) = a_{ii} and lies in G(A_ε) ⊆ G(A); since G_k(A) is disjoint from the other n − k discs, the curve cannot leave G_k(A). Hence G_k(A) contains at least k eigenvalues of A. By the same reasoning, ⋃_{i=k+1}^n {z ∈ C : |z − a_{ii}| ≤ R_i(A)} contains at least n − k eigenvalues of A, and so G_k(A) contains exactly k eigenvalues of A. □
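Theorem 5.8.1 is easy to check on a concrete matrix. In this sketch (our own function names), the 2 × 2 example has characteristic polynomial λ² − 9λ + 18, so its eigenvalues are 3 and 6, computed by hand; both lie in the union of the two discs:

```python
def gershgorin_discs(A):
    """(center, radius) of each closed disc in (5.22)."""
    n = len(A)
    return [(A[i][i], sum(abs(A[i][j]) for j in range(n) if j != i))
            for i in range(n)]

def in_gershgorin_region(z, A):
    """Membership of z in the union G(A) of the closed discs."""
    return any(abs(z - c) <= r for c, r in gershgorin_discs(A))

A = [[4, 1], [2, 5]]   # eigenvalues 3 and 6 (roots of x^2 - 9x + 18)
```

Here λ = 3 lies on the boundary of the first disc (|3 − 4| = 1 = R_1), while λ = 6 lies inside the second.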
Each of the closed discs in (5.22) will be called a Geršgorin disc. Applying Theorem 5.8.1, we can obtain the following theorem.
Theorem 5.8.2 Let A = (a_{ij}) be an n × n matrix such that A is strictly diagonally dominant. Then
(i) A is nonsingular.
(ii) If, in addition, every diagonal entry of A is a positive real number, then every eigenvalue of A has a positive real part.
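Theorem 5.8.2(i) can be illustrated directly: strict diagonal dominance keeps 0 outside every Geršgorin disc, so the determinant is nonzero. A sketch with exact integer arithmetic (the 3 × 3 matrix is our own example):

```python
def is_strictly_diagonally_dominant(A):
    """Strict version of (5.21): |a_ii| > R_i(A) for every row."""
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

def det(A):
    """Determinant by cofactor expansion; exact for integer entries."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

A = [[5, 1, 2], [1, 6, 3], [0, 2, 4]]   # strictly diagonally dominant
```

The determinant works out to 90, nonzero as Theorem 5.8.2(i) guarantees.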
Theorem 5.8.3 Let A = (a_{ij}) be an n × n matrix, and let λ be an eigenvalue of A lying on the boundary of G(A), with an eigenvector x = (x_1, x_2, ..., x_n)^T. Let x_p be a component of x such that |x_p| = max_{1≤i≤n} |x_i|. Each of the following holds.
(i) If for some k, |x_k| = |x_p|, then |λ − a_{kk}| = R_k(A). (In other words, λ also lies on the boundary of the kth Geršgorin disc.)
(ii) Suppose that for some k = 1, 2, ..., n, |x_k| = |x_p|. If for some j ≠ k, a_{kj} ≠ 0, then |x_j| = |x_k|.
Proof Since λ lies on the boundary of G(A), for each i = 1, 2, ..., n we have |λ − a_{ii}| ≥ R_i(A). It follows that when |x_k| = |x_p|,

|x_k| |λ − a_{kk}| = |∑_{j≠k} a_{kj} x_j| ≤ ∑_{j≠k} |a_{kj}| |x_j| ≤ |x_p| ∑_{j≠k} |a_{kj}| = |x_k| R_k(A),

and since |λ − a_{kk}| ≥ R_k(A), equalities must hold everywhere. Thus (i) holds, and

∑_{j≠k} |a_{kj}| (|x_k| − |x_j|) = 0

holds as well; by the fact that each |a_{kj}| (|x_k| − |x_j|) ≥ 0, (ii) must follow. □
Corollary 5.8.3 Let A = (a_{ij}) be an n × n matrix such that a_{ij} ≠ 0 for 1 ≤ i, j ≤ n, and let λ be an eigenvalue of A lying on the boundary of G(A), with an eigenvector x = (x_1, x_2, ..., x_n)^T. Then each of the following holds.
(i) Every Geršgorin disc contains λ on its boundary.
(ii) For 1 ≤ i, j ≤ n, |x_i| = |x_j|.
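Corollary 5.8.3 can be seen on the smallest nontrivial example: for A = [[0, 1], [1, 0]] both Geršgorin discs are the closed unit disc, and the eigenpairs (1, (1, 1)) and (−1, (1, −1)), computed by hand, lie on its boundary with equal-modulus eigenvector entries. The script below only verifies these hand computations:

```python
A = [[0, 1], [1, 0]]           # all off-diagonal entries are nonzero
R = [sum(abs(A[i][j]) for j in range(2) if j != i) for i in range(2)]
eigenpairs = [(1, (1, 1)), (-1, (1, -1))]   # A x = lam x, verified by hand

# Each eigenvalue sits on the boundary of every disc (Corollary 5.8.3(i)).
on_every_boundary = all(abs(lam - A[i][i]) == R[i]
                        for lam, _ in eigenpairs for i in range(2))
# Eigenvector entries all have the same modulus (Corollary 5.8.3(ii)).
equal_moduli = all(abs(x[0]) == abs(x[1]) for _, x in eigenpairs)
```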
Brauer considered taking two rows, instead of just one, when determining the radii of the Geršgorin discs, and successfully extended Theorem 5.8.1.
Theorem 5.8.4 (Brauer, [14]) Let A = (a_{ij}) be an n × n matrix. Then all eigenvalues of A lie in the region

⋃_{i≠j} {z ∈ C : |z − a_{ii}| |z − a_{jj}| ≤ R_i(A) R_j(A)}.   (5.23)

Proof Let λ be an eigenvalue of A with an eigenvector x = (x_1, x_2, ..., x_n)^T, and let |x_p| = max_{1≤i≤n} |x_i| > 0. Since each a_{ii} lies in the region (5.23), λ must lie there also when x has only one nonzero component. Now assume that x has at least two nonzero components, and let x_q be a component such that |x_p| ≥ |x_q| ≥ |x_i| for all i with
i ∈ {1, 2, ..., n} − {p, q}. As before, we have

|λ − a_{pp}| |x_p| = |∑_{j≠p} a_{pj} x_j| ≤ ∑_{j≠p} |a_{pj}| |x_j| ≤ |x_q| ∑_{j≠p} |a_{pj}| = |x_q| R_p,

which yields

|λ − a_{pp}| ≤ R_p |x_q| / |x_p|.   (5.24)

However, we also have

|λ − a_{qq}| |x_q| = |∑_{j≠q} a_{qj} x_j| ≤ ∑_{j≠q} |a_{qj}| |x_j| ≤ |x_p| ∑_{j≠q} |a_{qj}| = |x_p| R_q,

which yields

|λ − a_{qq}| ≤ R_q |x_p| / |x_q|.   (5.25)

Therefore the theorem follows by combining (5.24) and (5.25). □
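Theorem 5.8.4 replaces the n discs by ovals of Cassini, one per pair of rows. For the same 2 × 2 matrix used earlier, the single oval |z − 4||z − 5| ≤ 2 again contains both hand-computed eigenvalues 3 and 6, this time on its boundary. A sketch (function names are ours):

```python
def row_radii(A):
    """The deleted row sums R_i(A) of Definition 5.8.1."""
    n = len(A)
    return [sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n)]

def in_brauer_region(z, A):
    """Union of the ovals of Cassini in (5.23), over pairs i != j."""
    R, n = row_radii(A), len(A)
    return any(abs(z - A[i][i]) * abs(z - A[j][j]) <= R[i] * R[j]
               for i in range(n) for j in range(n) if i != j)

A = [[4, 1], [2, 5]]   # eigenvalues 3 and 6, computed by hand
```

Note that 0 fails the oval test here (|0 − 4||0 − 5| = 20 > 2), which already certifies that this A is nonsingular.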
Motivated by the success of Brauer, it is natural to consider more than two rows at a time. Attempts were made by Brauer and by Marcus and Minc; unfortunately, this does not lead to a successful generalization of Theorem 5.8.1 (see the comments in [23]). Brualdi ([23]) discovered a new connection between the distribution of the eigenvalues and the closed walks and connectivity in digraphs, and thereby found a new extension.
Definition 5.8.2 A digraph D is cyclically connected if for any vertex v, there exists a vertex w such that D has a directed (v, w)-walk and a directed (w, v)-walk. Thus in a cyclically connected digraph D, every vertex lies in a nontrivial directed closed walk (a trivial directed closed walk is just a loop). We define a matrix A to be weakly irreducible if D(A) is cyclically connected. For each vertex v ∈ V(D), let N^+(v) be the set of vertices u ∈ V(D) − {v} such that (v, u) ∈ E(D). Note that if D is cyclically connected, then N^+(v) ≠ ∅ for every v ∈ V(D).
Throughout this section, the collection of all nontrivial directed closed walks of D(A) is denoted by C(A). For a W = v_{i_1} v_{i_2} ··· v_{i_k} v_{i_{k+1}} ∈ C(A) (where v_{i_{k+1}} = v_{i_1}), write

∏_{v_i ∈ W} |z − a_{ii}| = ∏_{j=1}^k |z − a_{i_j i_j}|  and  ∏_{v_i ∈ W} R_i(A) = ∏_{j=1}^k R_{i_j}(A).

A preorder on a set V is a relation ⪯ satisfying
(PO1) x ⪯ x for all x ∈ V, and
(PO2) x ⪯ y and y ⪯ z imply that x ⪯ z, for all x, y, z ∈ V.
Theorem 5.8.5 (Brualdi, [23]) Let A = (a_{ij}) be an n × n matrix. If A is weakly irreducible, then every eigenvalue of A lies in the region

⋃_{W ∈ C(A)} {z ∈ C : ∏_{v_i ∈ W} |z − a_{ii}| ≤ ∏_{v_i ∈ W} R_i(A)}.   (5.26)

Proof Let λ be an eigenvalue of A. If λ = a_{ii} for some i, then clearly λ lies in the region (5.26). Thus we assume that λ ≠ a_{ii} for all i, and let x = (x_1, x_2, ..., x_n)^T be an eigenvector belonging to λ. Define a preorder on V(D(A)) as follows:

v_i ⪯ v_j if and only if |x_i| ≤ |x_j|.

Claim 1 λ lies in the region (5.26) if there exists a W ∈ C(A) with these properties:
(A) W = v_{i_1} v_{i_2} ··· v_{i_k} v_{i_{k+1}} with v_{i_{k+1}} = v_{i_1} and k ≥ 2.
(B) For each j = 1, 2, ..., k, v_m ⪯ v_{i_{j+1}} for all v_m ∈ N^+(v_{i_j}).
(C) |x_{i_j}| > 0, 1 ≤ j ≤ k.
If such a W exists, then by Ax = λx, for j = 1, 2, ..., k,

(λ − a_{i_j i_j}) x_{i_j} = ∑_{v_m ∈ N^+(v_{i_j})} a_{i_j m} x_m.

Therefore,

|λ − a_{i_j i_j}| |x_{i_j}| ≤ ∑_{v_m ∈ N^+(v_{i_j})} |a_{i_j m}| |x_m| ≤ ∑_{v_m ∈ N^+(v_{i_j})} |a_{i_j m}| |x_{i_{j+1}}| = R_{i_j}(A) |x_{i_{j+1}}|.   (5.27)

It follows that

∏_{j=1}^k |λ − a_{i_j i_j}| |x_{i_j}| ≤ ∏_{j=1}^k R_{i_j}(A) |x_{i_{j+1}}|.   (5.28)
Note that

∏_{j=1}^k |λ − a_{i_j i_j}| = ∏_{v_i ∈ W} |λ − a_{ii}|,  ∏_{j=1}^k R_{i_j}(A) = ∏_{v_i ∈ W} R_i(A),  and  ∏_{j=1}^k |x_{i_j}| = ∏_{j=1}^k |x_{i_{j+1}}|.   (5.29)

Combine (5.28) and (5.29) to get

∏_{v_i ∈ W} |λ − a_{ii}| ≤ ∏_{v_i ∈ W} R_i(A),   (5.30)

and so λ lies in the region (5.26).
Claim 2 A closed walk satisfying (A), (B) and (C) in Claim 1 exists.
Since x ≠ 0, there exists an x_i ≠ 0. Note that

(λ − a_{ii}) x_i = ∑_{j≠i} a_{ij} x_j = ∑_{v_j ∈ N^+(v_i)} a_{ij} x_j.

This, together with |x_i| > 0 and λ − a_{ii} ≠ 0, implies that among the vertices in N^+(v_i) there must be at least one v_j ∈ N^+(v_i) such that x_j ≠ 0. Let v_{i_1} = v_i, and let v_{i_2} ∈ N^+(v_{i_1}) be such that v ⪯ v_{i_2} for every v ∈ N^+(v_{i_1}); note that x_{i_2} ≠ 0. Inductively, assume that a walk v_{i_1} v_{i_2} ··· v_{i_{p−1}} v_{i_p} satisfying (B) and (C) in Claim 1 has been constructed. Since x_{i_p} ≠ 0, we can repeat the argument to find v_{i_{p+1}} ∈ N^+(v_{i_p}) such that v ⪯ v_{i_{p+1}} for every v ∈ N^+(v_{i_p}). Since D(A) has only finitely many vertices, a closed walk satisfying (A), (B) and (C) of Claim 1 must exist. This proves Claim 2, as well as the theorem. □
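The cyclic connectivity of Definition 5.8.2, which the above theorem requires, is a pure reachability question: every vertex v must have an out-neighbour u ≠ v from which v can be reached again. A sketch (our own representation: a dict of out-neighbour sets):

```python
def is_cyclically_connected(succ):
    """Every vertex must lie on a nontrivial directed closed walk
    (Definition 5.8.2).  succ: dict vertex -> set of out-neighbours."""
    def reachable(s):
        """All vertices reachable from s by a directed walk (DFS)."""
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for w in succ.get(u, ()):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    return all(any(v in reachable(u) for u in succ.get(v, ()) if u != v)
               for v in succ)
```

Applied to D(A), this decides whether a matrix A is weakly irreducible; an irreducible matrix (strongly connected D(A)) always passes, matching the remark opening the next proof.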
{z E C :
II lzaiil $ IT Ri(A)} · v;EW
(5.31)
v;EW
Proof Note that an irreducible matrix is also weakly irreducible. All the argument in the proof of the previous theorem remains valid here. We shall use the same notation as
260
Combinatorial Analysis in Matrices
in the proof of the previous theorem. Note that Claim 2 in the proof of Theorem 5.8.5 remains valid here.
= a"
for some i, then A cannot be in the boundary of Since Ro > 0, for each i. If A (5.26). Hence A=/: llih 1 ~ i ~ n. Fix a W E C(A) that satisfies (A),(B) and (C) of Claim 1 in the proof of Theorem 5.8.5. Since A lies in the boundary the region (5.26), for each Vi E V(W), we have I.X lltil ~ Rt, and so
II fz lliil ~ II Ro(A). t11EW
,(5.32)
v;EW
By (5.30), we must have equality in (5.32), and so .>.lies in the boundary of (5.31) for this
w. Note that when equality holds in (5.32), we must have, for any j = 1, 2, · · · , k, that equalities hold everywhere in (5.27). Therefore, for any closed walk in C(A) satisfying (A), (B) and (C) of Claim 1 in the proof of Theorem 5.8.5, for any v11 E V(W) and for anytimE N+(v;1 ),
(5.33)
Define K = {v; E V(D(A)) :
lzml = c; =
constant, for any Vm E N+(v;)}.
By Claim 2 in the proof for Theorem 5.8.5, K =/: 0. If we can show that K = V(D(A)), then for any closed walk W E C(A), W will satisfy (A), (B) and (C) of Claim 1 in the proof of Theorem 5.8.5, and so>.. will be on the boundary of the region (5.31) for this W. Suppose, by contradiction, then some vq E V(D(A))  K. Since D(A) is strongly connected, D(A) has a shortest directed walk from a vertex inK to Vq· Since it is shortest, the first arc of this walk is from a vertex in K to a vertex VJ not in K. Adopting the same preorder of D(A) as in the proof for Claim 2 of Theorem 5.8.5, we can similarly construct a walk by letting Vi1 = Vf, Vio is chosen from N+(v,,) so that for any v E N+(v.,), v ~ v;2 • Since D(A) is strong, N+(v,) =/: 0, for every Vi E V(D(A)). Once again, such a walk satisfies (B) and (C) in Claim 1 in the proof of Theorem 5.8.5. In each step to find the next Vis, we choose "i; ¢ K whenever possible, and if we have to choose Vi; E K, then choose "'s E K so that Vi; is in a shortest directed walk from a vertex inK to a vertex not inK. Since IV(D(A)) Kl is finite, a vertex t1 not inK will appear more than once in this walk, and so a closed walk W' E C(A) is found, satisfying (A), (B) and (C) of Claim 1 in the proof of Theorem 5.8.5, and containing v. But then, by (5.33), every vertex in W' must be K, contrary to the assumption that v jnK. Hence V(D(A)) = K. This completes the proof. D
Combinatorial Analysis in Matrices
261
Corollary 5.8.6 Let A = (a_{ij}) be an n × n matrix. Then A is nonsingular if one of the following holds.
(i) A is weakly irreducible and

∏_{v_i ∈ W} |a_{ii}| > ∏_{v_i ∈ W} R_i(A), for any W ∈ C(A).

(ii) A is irreducible and

∏_{v_i ∈ W} |a_{ii}| ≥ ∏_{v_i ∈ W} R_i(A), for any W ∈ C(A),

and strict inequality holds for at least one W ∈ C(A).
Proof In either case, by Theorem 5.8.5 or 5.8.6, the region (5.26) does not contain 0; and when A is irreducible, the boundary of the region (5.26) does not contain 0 either. □
5.9
M-matrices
In this section, we will once again restrict our discussion to matrices over the real numbers, and in particular to nonnegative matrices. Throughout this section, for an integer p > 0, denote ⟨p⟩ = {1, 2, ..., p}. For the convenience of discussion, a matrix A ∈ M_n is often written in the form

A = [ A_{11}  A_{12}  ···  A_{1q}
      A_{21}  A_{22}  ···  A_{2q}
      ...     ...          ...
      A_{p1}  A_{p2}  ···  A_{pq} ],   (5.34)

where each block A_{ij} ∈ M_{m_i, n_j} and where m_1 + m_2 + ··· + m_p = n = n_1 + n_2 + ··· + n_q. In this case, we write A = (A_{ij}), i = 1, 2, ..., p and j = 1, 2, ..., q. A vector x = (x_1^T, x_2^T, ..., x_p^T)^T is said to agree with the blocks of the matrix (5.34) if x_i is an m_i-dimensional vector, 1 ≤ i ≤ p. When x = (x_1^T, x_2^T, ..., x_p^T)^T, x_i is also called the ith component of x, for convenience.
Definition 5.9.1 Recall that if A ≥ 0 and A ∈ M_n, then A is permutation similar to its
Frobenius standard form (Theorem 2.2.1)

B = [ A_{11}      0           ···  0          0             ···  0
      0           A_{22}      ···  0          0             ···  0
      ...                          ...
      0           0           ···  A_{gg}     0             ···  0
      A_{g+1,1}   A_{g+1,2}   ···  A_{g+1,g}  A_{g+1,g+1}   ···  0
      A_{g+2,1}   A_{g+2,2}   ···  A_{g+2,g}  A_{g+2,g+1}   ···  0
      ...
      A_{k,1}     A_{k,2}     ···  A_{k,g}    A_{k,g+1}     ···  A_{kk} ].   (5.35)
By Theorem 2.1.1, each irreducible diagonal block A_{ii}, 1 ≤ i ≤ k, corresponds to a strong component of D(A). Throughout this section, let ρ_i = ρ(A_{ii}), the spectral radius of A_{ii}, for each i with 1 ≤ i ≤ k. Label the strong components of D(A) (the diagonal blocks in (5.35)) with the elements of ⟨k⟩, and define a partial order ⪯ on ⟨k⟩ as follows: for i, j ∈ ⟨k⟩, i ⪯ j if and only if in D(A) there is a directed path from a vertex in the jth strong component to a vertex in the ith strong component; and i ≺ j means i ⪯ j but i ≠ j. The partial order ⪯ yields a digraph, called the reduced graph R(A) of A, which has vertex set ⟨k⟩, where (i, j) ∈ E(R(A)) if and only if i ≺ j. Note that, by the definition of strong components, R(A) has no directed cycles. Let M = λI − A be an M-matrix. Then the ith vertex of R(M) (that is, the vertex corresponding to A_{ii} in R(A)) is a singular vertex if λ = ρ_i. The singular vertices of the reduced graph R(M) are also called the singular vertices of the matrix M. For a matrix of the form (5.34), define

γ_{ij} = { 1   if i = j or A_{ij} ≠ 0,
         { 0   otherwise.

Also, let

R_{ij} = max γ_{ih} γ_{hl} ··· γ_{qj},

where the maximum is taken over all possible sequences {i, h, ..., q, j}.
Proposition 5.9.1 With the notation above, each of the following holds.
(i) If A is in the Frobenius standard form (5.35), then with the partial order ⪯ we can equivalently write

R_{ij} = { 1   if j ⪯ i,
         { 0   otherwise.

(ii) ∑_{h=1}^p R_{ih} R_{hj} ≥ R_{ij} ≥ R_{il} R_{lj}, 1 ≤ l ≤ p.
(iii)

∑_{h=1, h≠i}^p γ_{ih} R_{hj} ≥ R_{ij} ≥ max_{h≠i} γ_{ih} R_{hj}, if i ≠ j.   (5.36)
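The quantities γ_{ij} and R_{ij} amount to a 0/1 reachability computation: R is the transitive closure of the pattern γ, which Proposition 5.9.1(i) identifies with the partial order ⪯. A sketch using Warshall's algorithm (the function name is ours):

```python
def accessibility(gamma):
    """R_ij = max over sequences {i, h, ..., q, j} of the products
    gamma_ih gamma_hl ... gamma_qj: the transitive closure of the
    0/1 block pattern, computed by Warshall's algorithm."""
    p = len(gamma)
    R = [row[:] for row in gamma]
    for h in range(p):                     # allow h as an intermediate index
        for i in range(p):
            for j in range(p):
                R[i][j] = max(R[i][j], R[i][h] * R[h][j])
    return R
```

For a block-lower-triangular pattern with gamma[1][0] = gamma[2][1] = 1, the closure also records the two-step access from block 3 to block 1.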
Example 5.9.1 Let A ∈ M_5 be a nonnegative matrix of the form (5.35) whose entries involve nonnegative real numbers a, b and c. Then R(A) is the graph in Figure 5.9.1.
Figure 5.9.1: the reduced graph R(A), with vertex set {1, 2, 3, 4, 5}; open circles mark singular vertices and filled circles mark nonsingular vertices.
Figure 5.9.1 Definition 5.9.2 A matrix B = (b1;) E M,.. is an M matrix if each of the following holds. (5.9.2A) au ~ 0, 1 S i S n.
264
Combinatorial Analysis in Matrices
(5.9.2B) bs; ~ 0, for i :f: j and 1 ~ i,j ~ n. (5.9.2C) H ). f. 0 is an eigenvalue of B, then ). has a positive real part. Proposition 5.9.2 summarizes some observations made in [229] and [216]. Proposition 5.9.2 (Schneider, [229] and Richman and Schneider, [216]) Each of the following holds. (i) A is an Mmatrix if and only if there exists a nonnegative matrix P ~ 0 a.Iid a number p ~ p(P) such that A= piP, and A is a singular Mmatrix if and only if
A=p(P)I P. (ii) H A = (A1;) is an Mmatrix in the F\obenius standard form (5.35), then the diagonal blocks A,1, 1 ~ i ~ k, are irreducible Mmatrices. (iii) The blocks below the main diagonal, A;;, 1 ~ j < i ~ k, are nonnegative. In other words, A;; ~ 0. Lemma 5.9.1 Let A form (5.35), and let x and for an h > a, let
= (A;;), i, j = 1, 2, · · · k be an M matrix in a F\obenius Standard = (xf, xi,· · · , xf)T agreeing with the blocks of A. For an a e (k)
{
x;=O x;>>O
ifHi.,=O if~.. =l.
i
= 1, 2, ..• , h 
1.
(5.37)
H {
and
Yt =0
(5.38)
Yi = E~:!A.;x;, i = 1,2,··· ,k,
then {
Yh =0
ifR,..,=O
Yh >0
if Rha
= 1.
(5.39)
Proof Since A,.;x; ~ 0, we have y,. ~ 0, and Yh j 1,2, · · • ,h 1. By (5.37), y,. = 0 if and only if
= 0 if and only if A,.;x; = 0,
=
'Yh;R;a Since 'Yhi
(5.40)
= 0 whenever h < j, we also have h1
/;
i=l
i=l,j;fi
L: 'Yh;R;a = E
AB h
= 0, j = 1,2, ·· · ,h 1.
f. a, it follows by (5.36)
'Yh;R; ... and JI.l2f'Yh;R;a '<
= li;lBf"'Yh;R;a• J'F
that (5.40) holds if and only if R~oa
= 0. 0
Theorem 5.9.1 (Schneider, [229]) Let A= (A,;) be an Mmatrix in F\obenius standard form (5.35), and let a be a singular vertex of R(A). Then there exists a vector x
=
Combinatorial Analysis in Matrices
265
(xf, xf, · · · , xi)T such that Ax = 0 and {
Xi
>> 0
Xi
= 0
if Ria= 1 (that is, a:::$ i) otherwise.
Proof Let x = (xf,xf, · · • ,xi)T be given and let y = (yf,yf, · · · ,yi)T be defined by (5.38). Then Ax= 0 if and only if
(5.41)
Aux;=Yi, i=1,2,···,k.
Now x; = 0, for all i