Suppose that I ≠ {0}, and let g(x) be a nonzero polynomial of minimal degree in I. Also let f(x) be a nonzero polynomial in I. Dividing f(x) by g(x), we get f(x) = q(x)g(x) + r'(x), where q(x) ∈ GF(q)[x] is the quotient and r'(x) ∈ GF(q)[x] is the remainder, with r'(x) = 0 or the degree of r'(x) less than the degree of g(x). Since g(x) ∈ I and q(x) ∈ GF(q)[x], q(x)g(x) ∈ I. Also, f(x) ∈ I and f(x) - q(x)g(x) = r'(x) is in I. Since g(x) is a nonzero polynomial of minimal degree in I, the degree of r'(x) cannot be less than the degree of g(x). We must have r'(x) = 0. Thus f(x) = q(x)g(x) and I is the principal ideal generated by g(x).
Error-Control Block Codes for Communications Engineers
Consider a polynomial f(x) = f_m x^m + f_{m-1} x^(m-1) + ... + f_1 x + f_0 of degree m with coefficients f_0, f_1, ..., f_m from GF(2).

Definition 2.10. A polynomial f(x) of arbitrary degree with coefficients from GF(2) is called an irreducible polynomial if f(x) is not divisible by any polynomial over GF(2) of degree greater than 0 but less than the degree of f(x).

Example 2.8 The polynomial x^4 + x + 1 over GF(2) is not divisible by any polynomial over GF(2) of degree greater than 0 but less than 4. x^4 + x + 1 is an irreducible polynomial over GF(2). The polynomial x^2 + 1 over GF(2) is divisible by the polynomial x + 1 over GF(2) of degree greater than 0 but less than 2. x^2 + 1 is not an irreducible polynomial over GF(2).

It can be shown that any irreducible polynomial over GF(2) of degree m always divides x^(2^m - 1) + 1 [5].

Definition 2.11. An irreducible polynomial p(x) = p_m x^m + p_{m-1} x^(m-1) + ... + p_1 x + p_0 of degree m with coefficients p_0, p_1, ..., p_m from GF(2) is called a primitive polynomial if the smallest positive integer n for which p(x) divides x^n + 1 is n = 2^m - 1.

Example 2.9 The irreducible polynomial x^4 + x + 1 over GF(2) divides x^15 + 1, where 15 = 2^4 - 1. x^4 + x + 1 is a primitive polynomial over GF(2). The irreducible polynomial x^4 + x^3 + x^2 + x + 1 over GF(2) divides x^15 + 1, where 15 = 2^4 - 1. It also divides x^5 + 1, where 5 ≠ 2^4 - 1. x^4 + x^3 + x^2 + x + 1 is not a primitive polynomial over GF(2). It can be seen that a primitive polynomial over GF(2) is always irreducible over GF(2), but an irreducible polynomial over GF(2) may not be a primitive polynomial over GF(2).
2.3.2 Construction of Extension Field GF(2^m) from GF(2)
The construction technique of an extension field GF(2^m) from GF(2) is based on the primitive roots of a primitive polynomial p(x) of degree m with coefficients from GF(2). Consider a set of finite elements on which a multiplication operation "·" is defined such that the set F = {0, 1, a, a^2, ..., a^j, ...}, where a^j = a · a · ... · a (j times) is called the power representation of the element a^j in the set F. If the element (symbol) a is a root of a primitive polynomial p(x) of degree m over GF(2), p(a) = 0. Since p(x) divides x^(2^m - 1) + 1, we have

a^(2^m - 1) + 1 = q(a)p(a) = q(a) · 0 = 0

Therefore, a^(2^m - 1) = 1 under modulo-2 addition and F = {0, 1, a, a^2, ..., a^(2^m - 2)}. It is left as an exercise for the reader to show that the nonzero elements of F are closed under the multiplication "·" and F is also closed under addition "+". F = {0, 1, a, a^2, ..., a^(2^m - 2)} is a finite field GF(2^m) of 2^m elements. The powers of a generate all the nonzero elements of GF(2^m) and a is a primitive element. Since the extension field GF(2^m) is constructed from GF(2), GF(2) is called the ground field of GF(2^m). The ground field is closed under modulo-2 addition and multiplication.

Let p(x) = x^m + p_{m-1} x^(m-1) + p_{m-2} x^(m-2) + ... + p_1 x + p_0 of degree m with coefficients p_0, p_1, ..., p_{m-1} from GF(2). For p(a) = 0, we have
0 = a^m + p_{m-1} a^(m-1) + p_{m-2} a^(m-2) + ... + p_1 a + p_0

and

a^m = p_{m-1} a^(m-1) + p_{m-2} a^(m-2) + ... + p_1 a + p_0
a^m is expressed as a polynomial with coefficients p_0, p_1, ..., p_{m-1} from GF(2), and p_{m-1} a^(m-1) + p_{m-2} a^(m-2) + ... + p_1 a + p_0 is called the polynomial representation of the element a^m. Other nonzero elements in the set F can be expressed as a power of the primitive element a and as a linear combination of a^0, a^1, ..., a^(m-1). The polynomial representation of the element a^i, i = 0, 1, ..., 2^m - 2, is expressed as a^i = a_{m-1} a^(m-1) + a_{m-2} a^(m-2) + ... + a_1 a + a_0 with binary coefficients. Furthermore, the coefficients of the polynomial representation of a^i can be written as a 1-by-m row vector [a_0 a_1 ... a_{m-1}]. The vector is called an m-tuple. The zero element 0 in F may be represented by the zero polynomial or the all-zero row vector.

Example 2.10 Let a be a root of a primitive polynomial p(x) = x^4 + x + 1 over GF(2). The power representation of a^4 is a · a · a · a and the polynomial representation of a^4 is a + 1. The power representation of a^5 is a · a · a · a · a and the polynomial representation of a^5 = a^4 · a = (a + 1) · a = a^2 + a.

Example 2.11 Let a be a root of a primitive polynomial p(x) = x^4 + x + 1 over GF(2). Table 2.4 shows the power, polynomial, and 4-tuple representations of GF(2^4) constructed from the primitive polynomial p(x) = x^4 + x + 1 over GF(2).
Table 2.4
Representations of Elements over GF(2^4) Generated by p(x) = x^4 + x + 1

Power            Polynomial             4-Tuple
Representation   Representation         Representation
0                0                      0 0 0 0
1                1                      1 0 0 0
a                a                      0 1 0 0
a^2              a^2                    0 0 1 0
a^3              a^3                    0 0 0 1
a^4              a + 1                  1 1 0 0
a^5              a^2 + a                0 1 1 0
a^6              a^3 + a^2              0 0 1 1
a^7              a^3 + a + 1            1 1 0 1
a^8              a^2 + 1                1 0 1 0
a^9              a^3 + a                0 1 0 1
a^10             a^2 + a + 1            1 1 1 0
a^11             a^3 + a^2 + a          0 1 1 1
a^12             a^3 + a^2 + a + 1      1 1 1 1
a^13             a^3 + a^2 + 1          1 0 1 1
a^14             a^3 + 1                1 0 0 1
Power representation is handy for multiplication, whereas the polynomial representation is very convenient for addition. It is possible to use a nonprimitive but irreducible polynomial with coefficients over GF(2) to generate the extension field GF(2^m). In this case, we have to find a nonzero nonprimitive element as a root and compute the powers of that element to generate the extension field. Thus, it is much more convenient to use primitive polynomials to generate the extension field, as we know that the primitive element is a and the powers of a generate all the nonzero elements of GF(2^m). For some values of m, a list of binary primitive polynomials and a list of GF(2^m) are shown in Appendixes A and B, respectively.
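The construction of Table 2.4 can be sketched in a few lines of Python (an illustration, not from the book): each element is held as a 4-bit mask for the 4-tuple [a_0 a_1 a_2 a_3], each new power of a is obtained by multiplying by a and substituting a^4 = a + 1, and multiplication of two nonzero elements uses the power representation.

```python
# Sketch of GF(2^4) generated by p(x) = x^4 + x + 1.
# Bit i of an element's mask is the coefficient of a^i.
PRIMITIVE = 0b10011  # mask of p(x) = x^4 + x + 1

def gf16_powers():
    """Return [a^0, a^1, ..., a^14] as 4-bit masks (the 4-tuple column)."""
    powers, elem = [], 1
    for _ in range(15):
        powers.append(elem)
        elem <<= 1               # multiply by a
        if elem & 0b10000:       # degree reached 4: substitute a^4 = a + 1
            elem ^= PRIMITIVE
    return powers

powers = gf16_powers()
assert powers[4] == 0b0011                 # a^4 = a + 1, 4-tuple [1 1 0 0]
assert sorted(powers) == list(range(1, 16))  # powers of a hit every nonzero element

def gf16_mul(x, y):
    """Multiply two nonzero elements via the power representation."""
    log = {p: i for i, p in enumerate(powers)}
    return powers[(log[x] + log[y]) % 15]
```

For instance, gf16_mul applied to the masks of a^4 and a gives the mask of a^5 = a^2 + a, matching Example 2.10.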
2.3.3 Properties of Extension Field GF(2^m)

In abstract algebra, a polynomial with coefficients from GF(2) may not have roots from GF(2) but from the extension field of GF(2).

Example 2.12 Let a be a root of a primitive polynomial p(x) = x^4 + x + 1 over GF(2). The irreducible polynomial x^4 + x^3 + 1 over GF(2) does not have roots from GF(2) but it has four roots from GF(2^4)
generated by p(x) = x^4 + x + 1. The four elements a^7, a^11, a^13, and a^14 of GF(2^4) given by Table 2.4 are the roots of x^4 + x^3 + 1.

The extension field GF(2^m) has the following properties.

Theorem 2.12. Let f(x) be an irreducible polynomial of degree m over GF(2) and β be an element in GF(2^m). If β is a root of f(x), then

β^2, β^(2^2), ..., β^(2^(m-1))

are all the roots of f(x).

Proof Let a_0, a_1, ..., a_r be elements of GF(q = p^m), where p is prime and m > 0. We begin by showing

(a_0 + a_1 + ... + a_r)^q = a_0^q + a_1^q + ... + a_r^q

The proof is by induction. Consider (a_0 + a_1)^q for r = 1:

(a_0 + a_1)^q = a_0^q + (q choose 1) a_0^(q-1) a_1 + ... + (q choose i) a_0^(q-i) a_1^i + ... + a_1^q

The binomial coefficient (q choose i) = q!/(i!(q - i)!) is a multiple of q for 0 < i < q. In GF(q), it follows that (q choose i) = 0 modulo q for 0 < i < q, and (a_0 + a_1)^q = a_0^q + a_1^q. Assume that (a_0 + a_1 + ... + a_(r-1))^q = a_0^q + a_1^q + ... + a_(r-1)^q is true. Then

(a_0 + a_1 + ... + a_r)^q = [(a_0 + a_1 + ... + a_(r-1)) + a_r]^q
                         = (a_0 + a_1 + ... + a_(r-1))^q + a_r^q
                         = a_0^q + a_1^q + ... + a_r^q

Therefore, (a_0 + a_1 + ... + a_(r-1))^q = a_0^q + a_1^q + ... + a_(r-1)^q implies (a_0 + a_1 + ... + a_r)^q = a_0^q + a_1^q + ... + a_r^q. We conclude by induction that (a_0 + a_1 + ... + a_r)^q = a_0^q + a_1^q + ... + a_r^q.

Let f(x) = f_m x^m + f_{m-1} x^(m-1) + ... + f_1 x + f_0 with coefficients in GF(p). We wish to show that f_i^(p^l) = f_i and [f(x)]^(p^l) = f(x^(p^l)) for 0 ≤ i ≤ m and l = 1, 2, ..., m, respectively. Consider

[f(x)]^p = (f_m x^m + f_{m-1} x^(m-1) + ... + f_1 x + f_0)^p
         = (f_m x^m)^p + (f_{m-1} x^(m-1))^p + ... + (f_1 x)^p + (f_0)^p
         = f_m^p (x^p)^m + f_{m-1}^p (x^p)^(m-1) + ... + f_1^p x^p + f_0^p

By Theorem 2.9, f_i^(p-1) = 1 for f_i ≠ 0 and f_i ∈ GF(p). Therefore, f_i^p = f_i for 0 ≤ i ≤ m, and

[f(x)]^p = f_m (x^p)^m + f_{m-1} (x^p)^(m-1) + ... + f_1 x^p + f_0 = f(x^p)

Since f(β) = 0, [f(β)]^p = f(β^p) = 0, and β^p is a root of f(x). By induction, it follows immediately that f_i^(p^l) = f_i and [f(x)]^(p^l) = f(x^(p^l)) for 0 ≤ i ≤ m and l = 1, 2, ..., m, respectively. Also, since f(β) = 0, [f(β)]^(p^l) = f(β^(p^l)) = 0 and β^(p^l) is a root of f(x) for l = 1, 2, ..., m. The maximum value of l is m. This is due to the fact that, by Theorem 2.9, β^(p^m - 1) = 1 for β ≠ 0 and β ∈ GF(p^m), and therefore β^(p^m) = β. Thus,
β, β^2, β^(2^2), ..., β^(2^(m-1)) are all the roots of f(x).

Definition 2.12. Let f(x) be a polynomial over GF(2). Also, let β and β^(2^l) be elements in the extension field of GF(2) for l > 0. β^(2^l) is called a conjugate of β if β and β^(2^l) are roots of f(x).

Definition 2.13. Let β be an element in an extension field GF(2^m). A polynomial φ(x) of smallest degree with coefficients in the ground field GF(2) is called a minimal polynomial of β if φ(β) = 0, i.e., β is a root of φ(x).

Theorem 2.13. A minimal polynomial φ(x) of β is irreducible.

Proof Suppose φ(x) of β is not an irreducible polynomial. Then there exist two polynomials f_1(x) and f_2(x) of degree greater than 0 and less than the degree of φ(x) such that φ(x) = f_1(x) · f_2(x) and φ(β) = f_1(β) · f_2(β) = 0. This implies either f_1(β) = 0 or f_2(β) = 0. Since the degree of φ(x) is greater than the degrees of f_1(x) and f_2(x), φ(x) cannot be a minimal polynomial of smallest degree. This contradicts the definition of a minimal polynomial of β. Therefore, a minimal polynomial of β is irreducible.

Example 2.13 Let β = a^5 and β^2 = a^10 be elements in an extension field GF(2^4) generated by a primitive polynomial p(x) = x^4 + x + 1 over GF(2). a^5 and a^10 are roots of the polynomial φ(x) = x^2 + x + 1 over GF(2). φ(x) = x^2 + x + 1 is a minimal polynomial of β. The minimal polynomial x^2 + x + 1 of β is not divisible by any polynomial over GF(2) of degree greater than 0 but less than 2. x^2 + x + 1 is an irreducible polynomial over GF(2).

Let β = a, β^2 = a^2, β^4 = a^4, and β^8 = a^8 be elements in an extension field GF(2^4) generated by a primitive polynomial p(x) = x^4 + x + 1 over GF(2). a, a^2, a^4, and a^8 are roots of the polynomial φ(x) = x^4 + x + 1 over GF(2). φ(x) = x^4 + x + 1 is a minimal polynomial of β. The minimal polynomial x^4 + x + 1 of β is not divisible by any polynomial over GF(2) of
degree greater than 0 but less than 4. x^4 + x + 1 is an irreducible polynomial over GF(2).

Theorem 2.14. Let φ(x) be the minimal polynomial of β, where β is in
GF(2^m), and let l be the smallest integer such that β^(2^l) = β. Then

φ(x) = ∏_{i=0}^{l-1} (x + β^(2^i))

Proof By Theorem 2.13, φ(x) is an irreducible polynomial. By Theorem 2.12, β and the conjugates of β are the roots of φ(x). Therefore, φ(x) = ∏_{i=0}^{l-1} (x + β^(2^i)).

Theorem 2.15. If β is a primitive element of GF(2^m), all the conjugates β^2, β^(2^2), ... of β are also primitive elements of GF(2^m).

Proof Let n be the order of the primitive element β in GF(2^m), where n = 2^m - 1 and β^n = β^(2^m - 1) = 1. Also, let n' be the order of the element β^(2^l) in GF(2^m) for l > 0. By Theorem 2.10, n' divides 2^m - 1, and (β^(2^l))^n' = β^(n' 2^l) = 1. Thus, β^(n' 2^l) = β^(2^m - 1) = 1 and n' 2^l must be a multiple of 2^m - 1. Since 2^l and 2^m - 1 are relatively prime, 2^m - 1 must divide n'. Since n' divides 2^m - 1 and 2^m - 1 divides n', we conclude that n' = 2^m - 1 and β^(2^l) is also a primitive element of GF(2^m) for l > 0.

Given p(x) of degree m, we can construct the field GF(2^m) of 2^m elements (see Example 2.11). Due to Theorem 2.14, we can use the nonzero elements of GF(2^m) to construct φ(x). Table 2.5 shows the minimal polynomials generated by the primitive polynomial p(x) = x^4 + x + 1.

Table 2.5
Minimal Polynomials of a Generated by a Primitive Polynomial p(x) = x^4 + x + 1 over GF(2)

Conjugate Roots           Order of a^i    Minimal Polynomial
0                         -               x
1 = a^0                   1               x + 1
a, a^2, a^4, a^8          15              x^4 + x + 1
a^3, a^6, a^12, a^9       5               x^4 + x^3 + x^2 + x + 1
a^5, a^10                 3               x^2 + x + 1
a^7, a^14, a^13, a^11     15              x^4 + x^3 + 1
A list of minimal polynomials of elements in GF(2^m) can be found in Appendix C.
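The conjugacy classes behind Table 2.5 can be generated mechanically. The short sketch below (an illustration, not from the book) lists the exponents i of the conjugate roots a^i by repeated doubling of the exponent modulo 2^m - 1; the size of each class is the degree of the corresponding minimal polynomial.

```python
def conjugacy_class(e, m=4):
    """Exponents of the conjugates a^e, a^(2e), a^(4e), ... in GF(2^m)."""
    n = 2**m - 1
    cls, cur = [], e
    while cur not in cls:
        cls.append(cur)
        cur = (cur * 2) % n    # squaring a^cur doubles the exponent mod n
    return cls

# These match the rows of Table 2.5:
assert conjugacy_class(1) == [1, 2, 4, 8]     # roots of x^4 + x + 1
assert conjugacy_class(3) == [3, 6, 12, 9]    # roots of x^4 + x^3 + x^2 + x + 1
assert conjugacy_class(5) == [5, 10]          # roots of x^2 + x + 1
assert conjugacy_class(7) == [7, 14, 13, 11]  # roots of x^4 + x^3 + 1
```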
2.4 Implementation of Galois Field Arithmetic

Consider the multiplication of two polynomials a(x) and b(x) with coefficients from GF(q), where

a(x) = a_s x^s + a_{s-1} x^(s-1) + ... + a_1 x + a_0    (2.1)

and

b(x) = b_r x^r + b_{r-1} x^(r-1) + ... + b_1 x + b_0    (2.2)

Let

c(x) = a(x)b(x) = a_s b_r x^(s+r) + (a_{s-1} b_r + a_s b_{r-1}) x^(s+r-1) + ... + (a_0 b_2 + a_1 b_1 + a_2 b_0) x^2 + (a_0 b_1 + a_1 b_0) x + a_0 b_0
The above operation can be realized by the circuit shown in Figure 2.2. Initially, the register holds all 0's and the coefficients of a(x) are fed into the circuit with high-order first, followed by r 0's. Table 2.6 shows the input, register contents, and the output of the multiplier circuit.
Figure 2.2 GF(q) multiplication circuit.
Table 2.6
Input, Register Contents, and Output of the GF(q) Multiplication Circuit in Figure 2.2

Time       Input      Register Contents      Output
0          a_s        0 0 ... 0              -
1          a_{s-1}    a_s 0 ... 0            a_s b_r
2          a_{s-2}    a_{s-1} a_s ... 0      a_{s-1} b_r + a_s b_{r-1}
...        ...        ...                    ...
s + r      0          ... a_1 a_0            a_0 b_1 + a_1 b_0
s + r + 1  0          ... a_0                a_0 b_0
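The arithmetic performed by the multiplier circuit is an ordinary polynomial convolution with coefficient arithmetic in GF(q). A direct (non-circuit) sketch in Python, under the assumption that q is prime so that coefficient arithmetic is simply modulo q:

```python
def poly_mul(a, b, q):
    """c(x) = a(x)b(x) over GF(q); coefficient lists run from the constant term up."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % q   # accumulate a_i b_j into x^(i+j)
    return c

# (x + 1)(x + 1) = x^2 + 2x + 1 = x^2 + 1 over GF(2)
assert poly_mul([1, 1], [1, 1], 2) == [1, 0, 1]
```

The coefficient of x^(i+j) collects every product a_i b_j, exactly the terms listed in the expansion of c(x) above.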
The product is complete after s + r shifts, with the output corresponding to the coefficients of c(x) in descending power order.

Consider the division of a(x) by b(x) with coefficients from GF(q), where a(x) and b(x) are given by (2.1) and (2.2). a(x) and b(x) are called the dividend polynomial and the divisor polynomial, respectively. Let q(x) be the quotient polynomial and r'(x) be the remainder polynomial of degree less than r. The quotient polynomial of degree s - r is
q(x) = q_{s-r} x^(s-r) + q_{s-r-1} x^(s-r-1) + ...    (2.3)

where

q_{s-r} = a_s b_r^(-1), q_{s-r-1} = (a_{s-1} - a_s b_{r-1} b_r^(-1)) b_r^(-1), ...
In the long division process, we first subtract (a_s b_r^(-1)) b(x) from a(x) to eliminate the term associated with x^s in a(x). We then repeat the process by deducing the next term in q(x). For example, q_{s-r-1} = (a_{s-1} - a_s b_{r-1} b_r^(-1)) b_r^(-1). In general, the polynomial q_i b(x) corresponding to each newly found quotient coefficient q_i is subtracted from the updated dividend to eliminate term(s) in the updated dividend. For example, (q_i = q_{s-r-1}) b(x) is subtracted from the updated dividend to eliminate the term (a_{s-1} - a_s b_{r-1} b_r^(-1)) in the updated dividend. The above operation can be realized by the circuit shown in Figure 2.3.
Figure 2.3 GF(q) division circuit.
Initially, the register holds all 0's and the coefficients of a(x) are fed into the circuit. After s + 1 shifts, the coefficients of the remainder polynomial appear in the register with the highest order to the right of the register, and the coefficients of the quotient polynomial appear at the output with the high-order first. Over GF(2), b_r^(-1) = b_r, and subtraction is the same as modulo-2 addition.
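The long-division procedure realized by the circuit of Figure 2.3 can be sketched directly (an illustration under the assumption that q is prime, so that b_r^(-1) exists modulo q): at each step the leading term of the running dividend is cancelled with a multiple of b(x), exactly as described above.

```python
def poly_divmod(a, b, q):
    """Divide a(x) by b(x) over GF(q); coefficient lists run from the constant term up."""
    a = list(a)
    inv_br = pow(b[-1], q - 2, q)            # b_r^(-1) by Fermat's little theorem
    quot = [0] * max(len(a) - len(b) + 1, 1)
    for shift in range(len(a) - len(b), -1, -1):
        coef = (a[shift + len(b) - 1] * inv_br) % q   # next quotient coefficient
        quot[shift] = coef
        for j, bj in enumerate(b):                    # subtract coef * x^shift * b(x)
            a[shift + j] = (a[shift + j] - coef * bj) % q
    return quot, a[:len(b) - 1]                       # quotient, remainder

# Divide x^3 + x + 1 by x + 1 over GF(2): quotient x^2 + x, remainder 1.
assert poly_divmod([1, 1, 0, 1], [1, 1], 2) == ([0, 1, 1], [1])
```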
2.5 Vector Spaces The concept of vector spaces is closely related to matrix theory and is important in coding theory. A concise review of the facts about vector spaces and matrices
Figure 2.4 Hardware realization of GF(2^3) circuit elements: β ↔ [b_0 b_1 b_2], r' ↔ [c_0 c_1 c_2].
that are needed elsewhere in this book are presented here and in Section 2.6, respectively.

Definition 2.14. Let F be a field of elements called scalars and V be a set of elements called vectors. The set V is called a vector space over the field F if an operation called vector addition "+" on pairs of elements from V, and an operation called scalar multiplication "·" that combines an element in F and an element in V to produce an element in V, are defined, and V satisfies the following axioms:
Table 2.7
Representations of Elements over GF(2^3) Generated by p(x) = x^3 + x + 1

Power            Polynomial          3-Tuple
Representation   Representation      Representation
0                0                   0 0 0
1                1                   1 0 0
a                a                   0 1 0
a^2              a^2                 0 0 1
a^3              a + 1               1 1 0
a^4              a^2 + a             0 1 1
a^5              a^2 + a + 1         1 1 1
a^6              a^2 + 1             1 0 1
1. V is a commutative group under vector addition.
2. Distributive law: For all vectors X, Y ∈ V and all scalars a, b ∈ F, a · (X + Y) = a · X + a · Y and (a + b) · Y = a · Y + b · Y.
3. Associative law: For all Y ∈ V and all scalars a, b ∈ F, (a · b) · Y = a · (b · Y).
The additive identity of V is denoted by 0 and the multiplicative identity of F is denoted by 1. Clearly, 1 · Y = Y for all Y ∈ V.

Definition 2.15. A subset S of a vector space V over a field F is called a subspace of V if S is also a vector space over the field F and S satisfies the following axioms:

1. For all vectors X, Y ∈ S, X + Y ∈ S.
2. For all Y ∈ S and all a ∈ F, a · Y ∈ S.

Definition 2.16 Let Y_0, Y_1, ..., Y_{k-1} be k vectors in a vector space V over a field F with elements a_0, a_1, ..., a_{k-1}. In a vector space V, the sum a_0 Y_0 + a_1 Y_1 + ... + a_{k-1} Y_{k-1} is called a linear combination of Y_0, Y_1, ..., Y_{k-1}.

Theorem 2.16 Let S be a subset of a vector space V over a field F. The set S of all linear combinations of Y_0, Y_1, ..., Y_{k-1} in V over a field F forms a subspace of V.
Proof Let W = a_0 Y_0 + a_1 Y_1 + ... + a_{k-1} Y_{k-1} and W' = b_0 Y_0 + b_1 Y_1 + ... + b_{k-1} Y_{k-1} be two elements in S, where a_i, b_i ∈ F and i = 0, 1, ..., k - 1. Then W + W' = (a_0 + b_0) Y_0 + (a_1 + b_1) Y_1 + ... + (a_{k-1} + b_{k-1}) Y_{k-1} is a linear combination of Y_0, Y_1, ..., Y_{k-1}. Since (a_i + b_i) ∈ F, (W + W') is in S. Also, for all c ∈ F, c · W = c a_0 Y_0 + c a_1 Y_1 + ... + c a_{k-1} Y_{k-1} is a linear combination of Y_0, Y_1, ..., Y_{k-1}. Since c a_i ∈ F and W ∈ S, c · W is also in S. The set of all linear combinations of k vectors in a vector space V satisfies all the axioms in Definition 2.15. Therefore, S forms a subspace of V.

Definition 2.17. A set of vectors {Y_0, Y_1, ..., Y_{k-1}} is called linearly dependent if there exist k scalars a_0, a_1, ..., a_{k-1} ∈ F, not all zero, such that

a_0 Y_0 + a_1 Y_1 + ... + a_{k-1} Y_{k-1} = 0    (2.4)

Clearly, if a_0 Y_0 + a_1 Y_1 + ... + a_{k-1} Y_{k-1} = 0, one of these vectors can be expressed as a linear combination of the others. A set of vectors that is not linearly dependent is called linearly independent. In such a set, no vector can be expressed as a linear combination of the others. A set of vectors Y_0, Y_1, ..., Y_{k-1}, not necessarily linearly independent, is said to span or generate a vector space V if every vector in V is a linear combination of the vectors in the set.

Definition 2.18. In any vector space or subspace, a set of vectors {Y_0, Y_1, ..., Y_{k-1}} in the space is called a basis of the vector space provided that the vectors are linearly independent and span the vector space. V is said to have dimension k and the vector space is called a k-dimensional vector space. In this book, we will only consider a vector space V over GF(p) or GF(q = p^m), where p is prime and m > 0. In a vector space V over GF(q), each vector in V can be expressed as a 1-by-n row vector with elements from GF(q), and the vector is called an n-tuple over GF(q).

Example 2.14 If B_0 = [1 0 0 ... 0], B_1 = [0 1 0 ... 0], ..., B_{n-1} = [0 0 0 ... 1], then B_0, B_1, ..., B_{n-1} are linearly independent. Also, every vector of length n in a vector space V over GF(2) can be expressed as a linear combination of the vectors in the set {B_0, B_1, ..., B_{n-1}}. Thus, the set is a basis of the vector space V, and V is called an n-dimensional vector space.
Consider two vectors X = [x_0 x_1 ... x_{n-1}] and Y = [y_0 y_1 ... y_{n-1}] of length n in a vector space V over a field F. The inner product of the two vectors is defined as X · Y = x_0 y_0 + x_1 y_1 + ... + x_{n-1} y_{n-1}. If the inner product of two vectors is zero, they are said to be orthogonal.

Theorem 2.17. Let S be a k-dimensional subspace of V and S_d be the set of vectors in V such that, for any W ∈ S and any Y ∈ S_d, the inner product W · Y = 0. The set S_d is itself a subspace of V.

Proof Let S_d be the set of all vectors orthogonal to S. Let any vector W ∈ S and let any vectors X, Y ∈ S_d. W · X = W · Y = 0 and W · X + W · Y = 0 = W · (X + Y). Thus, W is orthogonal to (X + Y). Therefore, (X + Y) ∈ S_d. Also, for all a ∈ F and W ∈ S, W · (aX) = a(W · X) = 0. Therefore, aX ∈ S_d. S_d satisfies all the axioms in Definition 2.15 and S_d is a subspace of V. S_d is called the null (dual) space of S. Conversely, S is also the null space of S_d. If S is a k-dimensional subspace of V of all n-tuples over GF(2), it can be shown that the dimension of the null space S_d is n - k [1]. At this point, it is worth noting that all the results can be generalized to the case of GF(q), where q is a power of a prime.
2.6 Matrices

A k-by-n matrix over GF(2) (or any other field) is a rectangular array

    G = [ G_0     ]   [ g_{0,0}    g_{0,1}    ...  g_{0,n-1}   ]
        [ G_1     ] = [ g_{1,0}    g_{1,1}    ...  g_{1,n-1}   ]    (2.5)
        [ ...     ]   [ ...                                    ]
        [ G_{k-1} ]   [ g_{k-1,0}  g_{k-1,1}  ...  g_{k-1,n-1} ]
Over the Galois field GF(2), each entry is an element from GF(2). If the k (k ≤ n) rows are linearly independent, then all the linear combinations of G_0, G_1, ..., G_{k-1} of the form

u_0 G_0 + u_1 G_1 + ... + u_{k-1} G_{k-1}

form a k-dimensional subspace of the vector space V. Since u_i = 0 or 1 over GF(2), there are 2^k distinct linear combinations of G_0, G_1, ..., G_{k-1}. The subspace consists of 2^k vectors. This subspace is called the row space of G. Let S be the row space of a k-by-n matrix G over GF(2) whose rows G_0, G_1, ..., G_{k-1} are linearly independent. Let S_d be the row space of an (n - k)-by-n matrix
    H = [ H_0       ]   [ h_{0,0}      h_{0,1}      ...  h_{0,n-1}     ]
        [ H_1       ] = [ h_{1,0}      h_{1,1}      ...  h_{1,n-1}     ]    (2.6)
        [ ...       ]   [ ...                                          ]
        [ H_{n-k-1} ]   [ h_{n-k-1,0}  h_{n-k-1,1}  ...  h_{n-k-1,n-1} ]
over GF(2) whose rows H_0, H_1, ..., H_{n-k-1} are linearly independent and which is the null space of S. Then the inner product of G_i and H_j, (G_i · H_j), must be zero and

G H^T = 0    (2.7)

where H^T is the transpose of H. More often, it is useful to transform a matrix into Standard Echelon Form (SEF) or canonical form by row/column transformations.

Example 2.15
        G_0   [ 1 0 1 1 0 0 0 ]
    G = G_1 = [ 0 1 0 1 1 0 0 ]
        G_2   [ 0 0 1 0 1 1 0 ]
        G_3   [ 0 0 0 1 0 1 1 ]

becomes

            G_0'   [ 1 0 0 0 1 0 1 ]
    G_SEF = G_1' = [ 0 1 0 0 1 1 1 ]
            G_2'   [ 0 0 1 0 1 1 0 ]
            G_3'   [ 0 0 0 1 0 1 1 ]

where G_0' := G_0 + G_2 + G_3, G_1' := G_1 + G_3, G_2' := G_2, and G_3' := G_3, with modulo-2 addition, and := is the assignment operator.
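Reduction to Standard Echelon Form is Gauss-Jordan elimination over GF(2). A sketch (not the book's procedure; it assumes the leading k columns can be pivoted by row swaps and row additions alone, as in Example 2.15):

```python
def to_sef(rows):
    """Row-reduce a k-by-n binary matrix so the left k columns form an identity."""
    rows = [list(r) for r in rows]
    k = len(rows)
    for col in range(k):
        # find a pivot row with a 1 in this column and move it into place
        pivot = next(r for r in range(col, k) if rows[r][col] == 1)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        # clear the column in every other row (modulo-2 row addition is XOR)
        for r in range(k):
            if r != col and rows[r][col] == 1:
                rows[r] = [x ^ y for x, y in zip(rows[r], rows[col])]
    return rows

G = [[1, 0, 1, 1, 0, 0, 0],
     [0, 1, 0, 1, 1, 0, 0],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]
GSEF = to_sef(G)
for i in range(4):   # identity block on the left after reduction
    assert GSEF[i][:4] == [int(i == j) for j in range(4)]
```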
References

[1] Birkhoff, G., and S. Mac Lane, A Survey of Modern Algebra, 3rd ed., New York: Macmillan, 1965.
[2] Durbin, J. R., Modern Algebra: An Introduction, New York: John Wiley, 1985.
[3] Bloch, N. J., Abstract Algebra with Applications, Englewood Cliffs, NJ: Prentice-Hall, 1987.
[4] McEliece, R. J., Finite Fields for Computer Scientists and Engineers, Boston: Kluwer Academic Publishers, 1987.
[5] Fraleigh, J. B., A First Course in Abstract Algebra, 5th ed., Reading, MA: Addison-Wesley, 1994.
3 Linear Block Codes

In this chapter, we are dealing with the theory of error-control block codes. The treatment here will concentrate on the basic principles of encoding and decoding binary linear block codes. Consider the coded digital communication system model shown in Figure 3.1. A sequence of q-ary digits called the information vector U = [u_0 u_1 ... u_{k-1}] is fed into a block encoder. The block encoder adds redundancy digits to U and produces an encoded vector V = [v_0 v_1 ... v_{n-1}] called a channel codeword. A set of q^k q-ary vectors (codewords) of length n defines a block code. In most applications, q = 2 and the block code is binary in
[Figure 3.1 depicts the system: the information vector U enters the block encoder, the channel codeword V is modulated and sent over the transmission path (analog channel), where noise is added; the received vector R leaves the discrete noisy channel and the decoder produces Û, the estimate of U.]

Figure 3.1 Model of a coded digital communication system.
nature. The modulator transforms the encoded vector into a modulated signal vector which is suitable for transmission through the analog channel. Typical channels are telephone lines, high-frequency radio links, microwave links, satellite links, semiconductor memories, and magnetic tapes. Because the channel is usually subject to noise disturbance, the channel output may differ from the channel input. At the receiving end, the demodulator performs an inverse operation and produces a received vector R = [r_0 r_1 ... r_{n-1}]. Subject to noise disturbance, the received vector R may not be the same as the encoded vector V. The channel decoder uses the redundancy in the encoded vector V to correct the errors in the received vector R and produces an estimate of U, denoted as Û = [û_0 û_1 ... û_{k-1}].
3.1 Basic Concepts and Definitions

Let us consider the following examples where k information digits are fed into a channel encoder and n encoded digits are produced by the channel encoder. This is shown in Figure 3.2.

Example 3.1 Consider the case where q = 2, k = 3, and n = 3. The codewords are {000, 001, 010, 011, 100, 101, 110, 111}. Clearly, the codewords contain no redundancy if k = n. A single error will carry one transmitted codeword into another codeword, and the error will not be detected.

Example 3.2 Consider the case where q = 2, k = 2, and n = 3. There are 2^k = 4 possible input patterns and 2^n = 8 possible output patterns. A possible encoding rule is shown in Table 3.1.
Figure 3.2 Block diagram for a block encoder: U = [u_0 u_1 ... u_{k-1}] in, V = [v_0 v_1 ... v_{n-1}] out.

Table 3.1
Mapping Rule for the Block Encoder in Example 3.2

Information Vector U    Codeword V
00                      000
01                      011
10                      101
11                      110

(distinct codewords differ in at least 2 bits)
If 1 0 1 is transmitted, an error pattern E = [0 0 1] will convert 1 0 1 to 1 0 0, the received vector R. If the channel decoder has a complete knowledge of Table 3.1 and chooses the output pattern corresponding to the minimum number of bit differences from the received vector R as the estimated codeword, the decoder cannot decide whether 0 0 0, 1 0 1, or 1 1 0 is the estimate of the transmitted codeword. This is because 0 0 0, 1 0 1, and 1 1 0 each differ in one place from the received vector R = [1 0 0]. However, an error is detected. To complete the decoding process, a codeword is chosen by a random selection process and the decoder may commit a decoding error.

Example 3.3 Consider the case where q = 2, k = 1, and n = 3. There are 2^k = 2 possible input patterns and 2^n = 8 possible output patterns. A possible encoding rule is shown in Table 3.2. If 0 0 0 is transmitted, an error pattern E = [0 0 1] will convert 0 0 0 to 0 0 1, the received vector R. If the channel decoder has a complete knowledge of Table 3.2 and chooses the output pattern corresponding to the minimum number of bit differences from the received vector R as the estimated codeword, the decoder will pick 0 0 0 as the estimate of U. This is because the received vector R = [0 0 1] is closer to the output pattern 0 0 0 than to the output pattern 1 1 1. In the presence of a single error, the error is detected and corrected.

In general, there are three problems which face the designer of error detection/correction systems:

1. To synthesize a code with the desired redundancy properties, and hence to design the encoder;
2. To find a reasonably simple decoder;
3. To make the overall coding system as efficient as possible, so that the minimum amount of redundant information is transmitted.

In the remaining part of this chapter, we shall define some useful terms and describe the encoding and decoding structures of linear block codes.

Table 3.2
Mapping Rule for the Block Encoder in Example 3.3

Information Vector U    Codeword V
0                       000
1                       111

(the two codewords differ in 3 bits)
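The minimum-distance decoding rule used in Examples 3.2 and 3.3 can be sketched in a few lines (an illustration, not the book's algorithm): return every codeword at minimum Hamming distance from the received vector, so a tie signals a detected but uncorrectable error.

```python
def nearest(codebook, r):
    """All codewords at minimum Hamming distance from the received vector r."""
    dist = lambda v: sum(a != b for a, b in zip(v, r))
    best = min(dist(v) for v in codebook)
    return [v for v in codebook if dist(v) == best]

# Example 3.2: R = 100 is equally close to three codewords, so the error is
# detected but cannot be corrected.
assert len(nearest([(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)], (1, 0, 0))) == 3
# Example 3.3: R = 001 decodes uniquely to 000.
assert nearest([(0, 0, 0), (1, 1, 1)], (0, 0, 1)) == [(0, 0, 0)]
```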
Definition 3.1. The Hamming weight of a vector is defined as the number of nonzero elements contained in the vector.

Definition 3.2. The Hamming distance d(X, Y) between two vectors X and Y of length n is the number of places in which their elements differ. For all binary or nonbinary n-tuples X, Y, and Z, the Hamming distance satisfies the triangle inequality, that is,

d(X, Y) + d(Y, Z) ≥ d(X, Z)    (3.1)

Definition 3.3. A block code of size q^k with q symbols is a set or a collection of q^k q-ary vectors (codewords) of length n.

Definition 3.4. The minimum (Hamming) distance d_min of a code is the smallest Hamming distance between distinct codewords.

Definition 3.5. A code of length n, with k information digits, is described as an (n, k) code, and if the code has a minimum Hamming distance d_min, we describe the code as an (n, k, d_min) code. The dimension of the code is k.

Definition 3.6. The rate of the code R_c is the number of information digits in each codeword divided by the length of the code:

R_c = k/n    (3.2)

Definition 3.7. A block code of length n and q^k codewords is called an (n, k) q-ary linear code if and only if its q^k codewords form a k-dimensional subspace of the vector space of all n-tuples over GF(q).
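Definitions 3.1 and 3.2 translate directly into code; a small sketch for binary or q-ary n-tuples:

```python
def hamming_weight(x):
    """Number of nonzero elements in the vector (Definition 3.1)."""
    return sum(1 for xi in x if xi != 0)

def hamming_distance(x, y):
    """Number of places in which two vectors differ (Definition 3.2)."""
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

X, Y, Z = [1, 0, 0], [0, 0, 0], [1, 1, 0]
assert hamming_weight(X) == 1
assert hamming_distance(X, Y) == 1
# Triangle inequality (3.1): d(X, Y) + d(Y, Z) >= d(X, Z)
assert hamming_distance(X, Y) + hamming_distance(Y, Z) >= hamming_distance(X, Z)
```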
3.2 Matrix Description of Linear Block Codes

From our earlier study of vector space theory and Definition 3.7, it is possible to find k linearly independent codewords G_0, G_1, ..., G_{k-1} in the q-ary code C such that

V = u_0 G_0 + u_1 G_1 + ... + u_{k-1} G_{k-1}    (3.3)

where

G_i = [g_{i,0} g_{i,1} ... g_{i,n-1}]    (3.4)

u_i and g_{i,j} are q-ary symbols for 0 ≤ i ≤ k - 1 and 0 ≤ j ≤ n - 1. V is a linear combination of the k linearly independent codewords. The k-by-n generator matrix G of the code C is
    G = [ G_0     ]                                    (3.5)
        [ G_1     ]
        [ ...     ]
        [ G_{k-1} ]

      = [ g_{0,0}    g_{0,1}    ...  g_{0,n-1}   ]     (3.6)
        [ g_{1,0}    g_{1,1}    ...  g_{1,n-1}   ]
        [ ...                                    ]
        [ g_{k-1,0}  g_{k-1,1}  ...  g_{k-1,n-1} ]

where G_i = [g_{i,0} g_{i,1} ... g_{i,n-1}] with q-ary entries for 0 ≤ i ≤ k - 1. The encoding operation, as shown in Figure 3.2, can be expressed as

V = UG    (3.7)

where

U = [u_0 u_1 ... u_{k-1}]    (3.8)
Example 3.4 Consider a (7, 4) binary linear block code with

    G = [ 1 0 0 0 1 0 1 ]
        [ 0 1 0 0 1 1 1 ]
        [ 0 0 1 0 1 1 0 ]
        [ 0 0 0 1 0 1 1 ]

The information and code vectors are shown below.

U       V
0000    0000000
0001    0001011
0010    0010110
0011    0011101
0100    0100111
0101    0101100
0110    0110001
0111    0111010
1000    1000101
1001    1001110
1010    1010011
1011    1011000
1100    1100010
1101    1101001
1110    1110100
1111    1111111
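The table of code vectors is just V = UG evaluated with modulo-2 arithmetic; a sketch using a (7, 4) generator matrix of the systematic form shown in this example:

```python
G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]

def encode(u, G):
    """V = UG over GF(2): v_j is the modulo-2 inner product of U with column j."""
    n = len(G[0])
    return [sum(u[i] * G[i][j] for i in range(len(u))) % 2 for j in range(n)]

assert encode([0, 0, 0, 1], G) == [0, 0, 0, 1, 0, 1, 1]
assert encode([1, 1, 1, 1], G) == [1, 1, 1, 1, 1, 1, 1]
```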
From our earlier study of matrix theory, we have seen that there exists an (n - k)-by-n matrix H with n - k linearly independent rows such that

G H^T = 0    (3.9)

It follows that

U G H^T = 0
V H^T = 0    (3.10)

where H^T is the transpose of the matrix H. It follows that any codeword V in C generated by G is orthogonal to every row of H. Therefore, H is the parity-check matrix of the linear code C. H can therefore be thought of as the generator of the dual code to that generated by G. The (n - k)-by-n parity-check matrix H takes the following form:

    H = [ H_0       ]                                          (3.11)
        [ H_1       ]
        [ ...       ]
        [ H_{n-k-1} ]

      = [ h_{0,0}      h_{0,1}      ...  h_{0,n-1}     ]       (3.12)
        [ h_{1,0}      h_{1,1}      ...  h_{1,n-1}     ]
        [ ...                                          ]
        [ h_{n-k-1,0}  h_{n-k-1,1}  ...  h_{n-k-1,n-1} ]

where H_i = [h_{i,0} h_{i,1} ... h_{i,n-1}] with q-ary entries for 0 ≤ i ≤ n - k - 1. Given the generator matrix G of an (n, k) linear code, we can put the generator matrix G into systematic form G_SEF by row/column transformations.
Example 3.5

        G_0   [ 1 0 1 1 0 0 0 ]
    G = G_1 = [ 0 1 0 1 1 0 0 ]
        G_2   [ 0 0 1 0 1 1 0 ]
        G_3   [ 0 0 0 1 0 1 1 ]

            G_0'   [ 1 0 0 0 1 0 1 ]
    G_SEF = G_1' = [ 0 1 0 0 1 1 1 ]
            G_2'   [ 0 0 1 0 1 1 0 ]
            G_3'   [ 0 0 0 1 0 1 1 ]
Linear Block Codes
45
where Go := Go + G 2 + G~, G( := G I + G 3 , Gi := G 2 , and Gi := G 3 . When an (n, k) code is generated by the generaror matrix GSEF, the code is called a systematic code. The format of a codeword can take the following form as shown in figure 3.3. The generaror matrix for an (n, k) systematic linear code is 1
0
0
Po.o
PO.I
PO.n-k-1
0
1
0
PI,O
Pl.I
P1.n-k-l
P/r-I,O
P/r-I, I
Pk-I,n-k-I
GSEF = 0
0
0.13)
and the parity-check matrix H SEF is -Po,o
-pl,O
-P/r-I,O
1
0
0
-PO, 1
-Pl.l
-P/r-l.l
0
1
0
-PO,n-k-l
-pl.n-k-1
-P/r-I.n-k-l
0
0
HS EF =
(3.14) For binary linear codes, -Pt.;
= Pi,j for 0
$
i $ k - 1 and 0
$
j
$ n-
k - 1.
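The orthogonality G * H^T = 0 implied by this construction can be checked numerically. A minimal sketch (mine, not from the text), assuming the parity block P of the (7, 4) code used in this chapter's examples:

```python
# Sketch: for a binary systematic code, G_SEF = [I_k | P] and
# H_SEF = [P^T | I_{n-k}] (the minus signs vanish over GF(2)).
# We verify that every row of G_SEF is orthogonal to every row of H_SEF.
P = [[1, 0, 1],   # assumed parity block of a (7, 4) code
     [1, 1, 1],
     [1, 1, 0],
     [0, 1, 1]]
k, c = len(P), len(P[0])          # c = n - k

G_SEF = [[1 if i == j else 0 for j in range(k)] + P[i] for i in range(k)]
H_SEF = [[P[i][j] for i in range(k)] + [1 if j == l else 0 for l in range(c)]
         for j in range(c)]

product_is_zero = all(
    sum(g[t] * h[t] for t in range(k + c)) % 2 == 0
    for g in G_SEF for h in H_SEF)
print(product_is_zero)
```

Any choice of P passes this check, which is exactly why equation (3.14) defines a valid parity-check matrix.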
3.3 Relationship of Minimum Distance to Error Detection and Correction

From Definition 3.4, the minimum Hamming distance of a code is the smallest Hamming distance between distinct codewords. For a linear block code, the all-zero vector is a codeword. Clearly, the minimum Hamming distance is equal to the minimum weight of its nonzero codewords, denoted as w_min(V). Given the parity-check matrix H of an (n, k) linear code, one can determine the minimum distance of the code using the following theorem.

Theorem 3.1. For an (n, k) linear block code, the minimum weight of a linear code is equal to the smallest number of columns of H that sum to zero.

Figure 3.3 Code vector generated by a systematic block code: information symbols u_0 u_1 ... u_{k-1} followed by parity-check symbols v_k v_{k+1} ... v_{n-1}.
Proof. VH^T = v_0[h_{0,0} h_{1,0} ... h_{n-k-1,0}] + v_1[h_{0,1} h_{1,1} ... h_{n-k-1,1}] + ... + v_{n-1}[h_{0,n-1} h_{1,n-1} ... h_{n-k-1,n-1}] = 0. The code symbol v_j is associated with the vector [h_{0,j} h_{1,j} ... h_{n-k-1,j}], for 0 <= j <= n - 1, and the vector [h_{0,j} h_{1,j} ... h_{n-k-1,j}] corresponds to the j-th column of H. Since the minimum Hamming distance of a linear block code is equal to the minimum weight of its nonzero codewords, the minimum weight of a linear block code is equal to the smallest number of columns of H that sum to zero.

The parameter d_min can be used to predict the error protection capability of a code. A code can correct t' errors, where t' is upper bounded by (d_min - 1)/2, i.e.,

t' = floor[(d_min - 1)/2]   (3.15)

or

t' <= (d_min - 1)/2   (3.16)

Here floor[(d_min - 1)/2] denotes the greatest integer less than or equal to (d_min - 1)/2. The error-correcting capability of a code is best understood by visualizing codewords and arbitrary words as points in geometric space. Each codeword is placed in the center of a sphere of radius t'. These spheres are all disjoint. Words that have t' or fewer errors from a codeword will lie in the respective sphere and closer to that codeword. Since the minimum separation between the centers of pairs of spheres is equal to the minimum distance of the code, t' errors can be correctly decoded as long as 2t' + 1 <= d_min. This is shown in Figure 3.4. In general, a code can correct any combination of t' errors and detect up to e errors (e >= t') if

e + t' <= d_min - 1   (3.17)

Figure 3.4 A code with minimum Hamming distance 2t' + 1.
Figure 3.5 illustrates the geometric situation. Again, each codeword is placed in the center of a sphere of radius t'. The spheres are all disjoint. Words that have t' or fewer errors from a codeword will lie inside the respective sphere, and the errors can be corrected. If the number of errors is greater than t' but less than e, words will lie outside all these spheres and the errors are detected but not corrected. Depending on the requirements of the application, a decoder can be designed to detect errors only, correct errors only, or a combination of error detection and error correction. Given the minimum Hamming distance of a code, Table 3.3 gives some possible decoding choices to detect errors and correct errors.

The parameter d_min can also be used to predict the error and erasure correction capability of a code. In general, in the presence of f erasures, the effective minimum Hamming distance of a t'-error-correction code is d_min - f and the effective error-correction capability t_e is upper bounded by (d_min - f - 1)/2, i.e.,

t_e <= (d_min - f - 1)/2   (3.18)

Figure 3.5 A code with minimum Hamming distance t' + e + 1.
Table 3.3 Some Possible Decoding Choices to Detect Errors and Correct Errors

d_min    e          t'
2        1          0
3        2, 1       0, 1
4        3, 2       0, 1
5        4, 3, 2    0, 1, 2
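The rows of Table 3.3 follow directly from equation (3.17). A small sketch (mine, not from the text) that regenerates them:

```python
# Sketch: enumerate the (e, t') pairs allowed by e + t' <= d_min - 1
# with e >= t', taking the maximal e for each correction capability t'.
def decoding_choices(d_min):
    return [(d_min - 1 - t, t) for t in range(d_min) if d_min - 1 - t >= t]

for d_min in (2, 3, 4, 5):
    print(d_min, decoding_choices(d_min))
```

For d_min = 5 this yields (4, 0), (3, 1), and (2, 2), matching the last row of the table.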
Again, a decoder can be designed to correct erasures only, errors only, or a combination of errors and erasures. Given the minimum Hamming distance of a code, Table 3.4 gives some possible decoding choices to correct erasures and errors.

Given the values of n and k, what is the minimum Hamming distance of an (n, k) linear code? The following theorem provides an upper bound to the minimum distance of an (n, k) linear code.

Theorem 3.2. (Singleton bound [1].) The minimum distance of an (n, k) linear code is

d_min <= n - k + 1   (3.19)

Proof. The maximum number of linearly independent column vectors in the parity-check matrix H is (n - k). A codeword with only one nonzero information symbol cannot have weight larger than n - k + 1. Therefore, d_min <= n - k + 1.

Other useful bounds on d_min are the Plotkin upper bound [2] and the Varshamov-Gilbert lower bound [3]. Tighter bounds on d_min also exist, and an updated table of the tightest known bounds on d_min is provided by Verhoeff [4]. Upper and lower bounds are often used to find codes by an exhaustive search of the generator matrices if necessary.
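For small codes the minimum distance, and hence the Singleton bound, can be checked by exhaustive search. A sketch (mine, not from the text), using the (7, 4) code of Example 3.1:

```python
# Sketch: brute-force d_min of a small binary linear code and check
# that it satisfies the Singleton bound d_min <= n - k + 1.
from itertools import product

G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]
n, k = 7, 4

def codewords(G):
    for u in product([0, 1], repeat=len(G)):
        yield [sum(u[i] * G[i][j] for i in range(len(G))) % 2
               for j in range(len(G[0]))]

# For a linear code, d_min equals the minimum nonzero codeword weight.
d_min = min(sum(v) for v in codewords(G) if any(v))
print(d_min, d_min <= n - k + 1)
```

The search gives d_min = 3, comfortably inside the bound n - k + 1 = 4.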
Table 3.4 Some Possible Decoding Choices to Correct Erasures and Errors

d_min    f             t_e
2        1             0
3        2, 0          0, 1
4        3, 1          0, 1
5        4, 2, 0       0, 1, 2
6        5, 3, 1       0, 1, 2
7        6, 4, 2, 0    0, 1, 2, 3

3.4 Syndrome-Former Trellis Representation of Binary Linear Block Codes

So far, we have used matrix theory to describe a block encoder. We can also describe an (n, k) linear block code with the aid of a trellis diagram. A trellis is simply a collection of nodes or states interconnected by unidirectional edges. The nodes are grouped into sets and a node indexed by a particular value d' is said to be at depth d' for d' = 0, 1, ..., n. Edges are drawn from a node at depth d' to a node at depth d' + 1. For an (n, k) linear block code, the trellis has n stages and the trellis is used to represent all the codewords of the code. Each distinct codeword corresponds to a distinct path in the trellis. We describe how to construct the trellis for an (n, k) binary linear block code.

Consider the transpose of the (n - k)-by-n parity-check matrix H of an (n, k) binary linear block code. Suppose that a code vector V is transmitted. In the presence of errors, the n-component received vector R may not be the same as the vector V and the product of R and H^T may not be the all-zero vector. The product of R and H^T is called the syndrome and H^T is called the syndrome-former matrix of the (n, k) binary linear block code. H^T can be realized as a linear circuit that consists of (n - k) modulo-2 adders. The syndrome-former circuit for the (n, k) binary linear block code is shown in Figure 3.6, where the symbol (+) denotes the exclusive-OR operation. Based on the state transitions of the circuit, a trellis can be determined. In [5], Wolf showed that the trellis has at most 2^(n-k) states and at most 2^min{k, n-k} states at any depth for all binary linear codes. Here, the trellis corresponding to the code vector V consists of n stages. In the trellis diagram, each node is labeled with the state of the syndrome-former circuit, and the state of the syndrome-former circuit is defined as the outputs of the exclusive-OR elements. Each new coded bit causes a transition to a new state and there are 2 paths leaving each state. The state of the trellis at depth d' is expressed by an (n - k)-component binary vector:

s^(d') = [s_0^(d') s_1^(d') ... s_{n-k-1}^(d')]   (3.20)

Figure 3.6 Syndrome-former circuit for an (n, k) binary linear block code.
for 0 <= d' <= n. s^(d'), d' >= 1, corresponds to the state when the code symbols up to v_{d'-1} have been input into the syndrome-former circuit. A transition then occurs when v_{d'} is input. Initially, at depth d' = 0, the input code vector V into the syndrome-former circuit is zero, and s^(0) = 0. For the initial state s^(0) = 0:

1. Input v_0 = 1 or 0.

2. Observe the state s^(1) of the circuit and draw a branch from s^(0) to s^(1).

3. For each intermediate state s^(d'): input v_{d'} = 1 or 0 without altering any previous setting of the circuit.

4. Observe the state s^(d'+1) of the circuit and draw a branch from s^(d') to s^(d'+1).

5. Iterate steps 3 and 4 until v_{n-2} has been input.

6. For the last interval (d' = n - 1) associated with state s^(n-1), input v_{n-1} = 1 or 0 without altering any previous setting of the circuit.

7. Observe the final state s^(n) of the circuit, and draw the branch from s^(n-1) to s^(n).

8. Expurgate all the paths that do not return to the final state s^(n) = 0.
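The state-update rule behind these steps can be sketched in code (a hypothetical helper of mine, not from the text): feeding bit v_{d'} adds, modulo 2, the d'-th row of H^T, i.e. the d'-th column of H, to the current state. The H below is the parity-check matrix of the (7, 4) code used in Example 3.6.

```python
# Sketch: trace the syndrome-former states s(0), s(1), ..., s(n) for one
# input vector; a valid codeword must return the circuit to all-zero.
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def state_sequence(v, H):
    c = len(H)
    s = [0] * c
    states = [tuple(s)]
    for d, bit in enumerate(v):
        if bit:  # add column d of H (row d of H^T) modulo 2
            s = [(s[i] + H[i][d]) % 2 for i in range(c)]
        states.append(tuple(s))
    return states

print(state_sequence([1, 0, 1, 1, 0, 0, 0], H)[-1])
```

The codeword 1011000 ends in the all-zero state, while non-codewords end elsewhere and would be expurgated in step 8.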
Since the trellis is constructed from the syndrome-former circuit of an (n, k) linear block code, the trellis is called the syndrome-former trellis of the code. We shall see in Section 3.7.4 that decoding using a syndrome-former trellis is particularly effective for high-rate codes.

Example 3.6 The syndrome-former circuit for a (7, 4) binary linear block code with

G = [1 0 0 0 1 0 1]
    [0 1 0 0 1 1 1]
    [0 0 1 0 1 1 0]
    [0 0 0 1 0 1 1]

and

H = [1 1 1 0 1 0 0]
    [0 1 1 1 0 1 0]
    [1 1 0 1 0 0 1]

is shown in Figure 3.7, and its syndrome-former trellis is given in Figure 3.8. In Figure 3.8, we follow the convention that a solid line denotes the coded symbol associated with the bit value 0 and a dashed line corresponds to the bit value 1. The syndrome-former trellis diagram contains all the
Figure 3.7 Syndrome-former circuit for the (7, 4) binary linear block code in Example 3.6.

Figure 3.8 Syndrome-former trellis of Figure 3.7.
information that we need to know about the code. A particular encoder output vector will trace out a unique path through the trellis. For example, if the information vector U = [1 0 1 1] is fed to the input of the (7, 4) binary linear block encoder, then the corresponding codeword is V = [1 0 1 1 0 0 0] and is traced as shown in Figure 3.8.
3.5 Examples of Binary Linear Block Codes

We have seen that the problem of finding a code with a given degree of error protection reduces to that of finding a code with a given minimum distance.
Unfortunately, there is no general rule for finding codes with a given d_min. The construction of some simple and well-known binary linear block codes is presented here.
3.5.1 Repetition Codes
Repetition is the simplest form of error protection. Each information digit may be transmitted n times. The generator and parity-check matrices of an (n, 1) binary repetition block code are

G = [1 1 ... 1]   (3.21)

and

H = [1 | 1 0 ... 0]
    [1 | 0 1 ... 0]   (3.22)
    [. |   ...    ]
    [1 | 0 0 ... 1]

respectively. It can be seen that the minimum Hamming distance of an (n, 1) repetition block code is n. In general, n transmissions of the same digit enable n - 1 errors to be detected, or floor[(n - 1)/2] errors to be corrected.
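The floor[(n - 1)/2] correction capability is simply majority voting. A minimal sketch (mine, not from the text):

```python
# Sketch: (n, 1) repetition code - encode by repeating, decode by
# majority vote, which corrects up to floor((n - 1)/2) errors.
def rep_encode(bit, n):
    return [bit] * n

def rep_decode(r):
    return 1 if sum(r) > len(r) / 2 else 0

r = rep_encode(1, 5)
r[0] ^= 1
r[3] ^= 1          # two errors: within floor((5 - 1)/2) = 2
print(rep_decode(r))
```

With n = 5, any pattern of two errors still decodes to the transmitted bit.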
3.5.2 Single-Parity-Check Codes

A single-parity-check code of block length n may be formed by taking a parity check over k = n - 1 information digits. The parity-check digit v_{n-1} may be written as

v_{n-1} = u_0 + u_1 + ... + u_{k-1}   (3.23)
where + implies modulo-2 addition. We chose an even-parity rule so that each codeword has an even number of ones. The generator and parity-check matrices of an (n, n - 1) binary single-parity-check block code are

G = [1 0 ... 0 | 1]
    [0 1 ... 0 | 1]   (3.24)
    [   ...    | .]
    [0 0 ... 1 | 1]

and

H = [1 1 ... 1]   (3.25)

respectively. Single, triple, and all odd numbers of errors in the block n may be detected. The coding rate is

R_c = (n - 1)/n   (3.26)
As n goes to infinity, the code rate goes to 1. This code was very widely used for computer punched tape.

Example 3.7 The generator and parity-check matrices of a (3, 2) binary single-parity-check code are

G = [1 0 1]
    [0 1 1]

and H = [1 1 1], respectively. If the input information sequence is U = [1 1], the encoded code sequence is V = UG = [1 1 0].
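Equation (3.23) and the odd-error detection property can be illustrated in a few lines (my sketch, not from the text):

```python
# Sketch: even-parity encoding over k = n - 1 information bits;
# a failed parity check flags any odd number of errors.
def spc_encode(u):
    return u + [sum(u) % 2]

def spc_check(r):
    return sum(r) % 2 == 0    # True: no odd-weight error detected

v = spc_encode([1, 1])
print(v, spc_check(v))
v[0] ^= 1                      # inject a single error
print(spc_check(v))
```

Encoding [1 1] reproduces the codeword [1 1 0] of Example 3.7, and a single flipped bit makes the check fail.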
3.5.3 Single-Error-Correcting Hamming Codes

R. W. Hamming found an optimum class of single-error-correcting codes in 1950 [6]. The code was used for long-distance telephony. For any integer c >= 2, the family of binary Hamming codes has the following parameters:

Block length: n = 2^c - 1
Information digits: k = 2^c - c - 1
Number of check digits: c = n - k
Minimum distance: d_min = 3
Error-correcting capability: t' = 1
To construct the parity-check matrix of an (n, k) binary Hamming code, we simply place all nonzero binary c-tuples in the columns of the c-by-n parity-check matrix in any order. For example, the parity-check and the corresponding generator matrices of a (7, 4) single-error-correcting binary Hamming code are

H = [1 1 1 0 1 0 0]
    [0 1 1 1 0 1 0]   (3.27)
    [1 1 0 1 0 0 1]

and

G = [1 0 0 0 1 0 1]
    [0 1 0 0 1 1 1]
    [0 0 1 0 1 1 0]   (3.28)
    [0 0 0 1 0 1 1]

If the input information sequence is U = [0 1 1 1], the encoded code sequence is V = UG = [0 1 1 1 0 1 0].
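Because every nonzero c-tuple appears as a column of H, the syndrome of a single error equals the column at the error position, which locates it. A sketch of the resulting decoder (my code, not from the text):

```python
# Sketch: single-error correction for the (7, 4) Hamming code above;
# a nonzero syndrome matches exactly one column of H.
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def syndrome(r, H):
    return [sum(r[j] * H[i][j] for j in range(len(r))) % 2
            for i in range(len(H))]

def correct(r, H):
    s = syndrome(r, H)
    if any(s):
        cols = [[H[i][j] for i in range(len(H))] for j in range(len(r))]
        j = cols.index(s)      # error position = matching column of H
        r = r[:]
        r[j] ^= 1
    return r

v = [0, 1, 1, 1, 0, 1, 0]      # codeword from the example above
r = v[:]
r[2] ^= 1                      # inject a single error
print(correct(r, H) == v)
```

Any single error is repaired; two or more errors exceed t' = 1 and mislead the decoder.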
3.5.4 Reed-Muller Codes

Reed-Muller (RM) codes were discovered by Reed and Muller in 1954 [7, 8]. For integers m and 0 <= r' < m, an r'-th order Reed-Muller code of block length n = 2^m is denoted by R(r', m) and has the following parameters:

Block length: n = 2^m
Information digits: k = 1 + (m choose 1) + (m choose 2) + ... + (m choose r')
Number of check digits: n - k = 1 + (m choose 1) + ... + (m choose m - r' - 1)
Minimum distance: d_min = 2^(m - r')
Table 3.5 shows the possible (n, k) Reed-Muller codes that can be generated by r' and m for r' = 0, 1, ..., 4 and m = 2, 3, ..., 5. The R(1, 5) first-order Reed-Muller code of rate 6/32 was used to send black-and-white photographs from the Mariner 9 space probe to Earth in 1972. The code has a minimum Hamming distance of 2^(m - r') = 16 and can correct 7 errors. The limited number of codewords ruled out the transmission of color pictures.

Table 3.5 Possible (n, k) Reed-Muller Codes Generated by r' and m

m     r' = 0    r' = 1    r' = 2     r' = 3     r' = 4
2     (4, 1)    (4, 3)
3     (8, 1)    (8, 4)    (8, 7)
4     (16, 1)   (16, 5)   (16, 11)   (16, 15)
5     (32, 1)   (32, 6)   (32, 16)   (32, 26)   (32, 31)

An r'-th order Reed-Muller code can be constructed as follows. The generator matrix of the r'-th order Reed-Muller code is given by

G = [M_0 ]
    [M_1 ]   (3.29)
    [ ...]
    [M_r']

where M_0 is the 1-by-2^m row matrix containing all ones; M_1 = [M_{1,0} M_{1,1} ... M_{1,n-1}] is an m-by-2^m matrix with columns M_{1,0} = [0 0 ... 0]^T, M_{1,1} = [0 0 ... 0 1]^T, M_{1,2} = [0 0 ... 1 0]^T, ..., M_{1,n-1} = [1 1 ... 1]^T; and M_l, an (m choose l)-by-2^m matrix, is constructed from M_1. Each row of M_l corresponds to a product of l rows of M_1.

Example 3.8 Let r' = 0 and m = 3. Then n = 2^m = 8 and G = [M_0] = [1 1 1 1 1 1 1 1]. This is the zero-order Reed-Muller code R(0, 3) and is the (8, 1) binary repetition code.

Example 3.9 Let r' = 1 and m = 3. Then n = 2^m = 8 and
G = [M_0]   [1 1 1 1 1 1 1 1]
    [M_1] = [0 0 0 0 1 1 1 1]
            [0 0 1 1 0 0 1 1]
            [0 1 0 1 0 1 0 1]

This is the first-order Reed-Muller code R(1, 3) and the code rate is 4/8.

Example 3.10 Let r' = 2 and m = 3. Then n = 2^m = 8 and

G = [M_0]   [1 1 1 1 1 1 1 1]
    [M_1] = [0 0 0 0 1 1 1 1]
    [M_2]   [0 0 1 1 0 0 1 1]
            [0 1 0 1 0 1 0 1]
            [0 0 0 0 0 0 1 1]
            [0 0 0 0 0 1 0 1]
            [0 0 0 1 0 0 0 1]

where row 1 of M_2 = row 1 of M_1 x row 2 of M_1, row 2 of M_2 = row 1 of M_1 x row 3 of M_1, and row 3 of M_2 = row 2 of M_1 x row 3 of M_1. This is the second-order Reed-Muller code R(2, 3) and the code rate is 7/8.
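The construction generalizes readily; the sketch below (my code, not from the text) builds the R(r', m) generator matrix from M_0, M_1, and the row products described above:

```python
# Sketch: build the Reed-Muller R(r, m) generator matrix. Row i of M_1
# holds bit (m - 1 - i) of each column index; each l-subset of M_1 rows
# contributes one product row of M_l.
from itertools import combinations

def rm_generator(r, m):
    n = 2 ** m
    M1 = [[(col >> (m - 1 - i)) & 1 for col in range(n)] for i in range(m)]
    G = [[1] * n]                               # M_0: the all-ones row
    for l in range(1, r + 1):
        for rows in combinations(range(m), l):  # l-subsets of M_1 rows
            G.append([1 if all(M1[i][c] for i in rows) else 0
                      for c in range(n)])
    return G

for row in rm_generator(2, 3):
    print("".join(map(str, row)))
```

For r' = 2 and m = 3 this prints the seven rows of the R(2, 3) generator matrix shown in Example 3.10.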
3.6 Modifications of Linear Block Codes

To suit a particular application, the parameters n and k may need modification. An (n, k) block code can be augmented, expurgated, extended, punctured, lengthened, or shortened. In all cases, the minimum Hamming distance property of the code may change after the modifications. These six basic modifications are briefly explained as follows:
1. Augmenting a code—An (n, k) code may be augmented by adding
new codewords. The process increases the number of information symbols without changing the codeword length. This corresponds to increasing the number of rows of the generator matrix. Augmentation has very little to offer in most practical applications.

2. Expurgating a code—An (n, k) code may be expurgated by discarding some of the codewords from the code. This process is the inverse of augmenting a code. It decreases the number of information symbols without changing the codeword length. This corresponds to decreasing the number of rows of the generator matrix.

3. Extending a code—An (n, k) code can be extended by annexing parity-check symbols to every codeword of the code. The additional parity-check symbols are carefully chosen to improve the minimum distance of the code. The process increases the codeword length without changing the number of information symbols. This corresponds to increasing the number of columns of the generator matrix. The most common modification is the addition of a 0 parity-check symbol to every codeword of an (n, k) block code with even weight, and a 1 parity-check symbol to every codeword with odd weight. In terms of the parity-check matrix, a column of zeros followed by a row of ones is added to the parity-check matrix of the (n, k) code. If the minimum distance, d_min, of the (n, k) code was odd, the new minimum distance is d_min + 1.
Example 3.11 Annexing a parity-check symbol to the (7, 4) binary Hamming code of

H = [1 1 1 0 1 0 0]
    [0 1 1 1 0 1 0]
    [1 1 0 1 0 0 1]

we get an (8, 4) extended binary Hamming code of

H = [1 1 1 0 1 0 0 0]
    [0 1 1 1 0 1 0 0]
    [1 1 0 1 0 0 1 0]
    [1 1 1 1 1 1 1 1]

The extended binary Hamming code is formed by adding a column of zeros followed by a row of ones to the parity-check matrix of the binary Hamming code. The binary Hamming code has a minimum Hamming distance of 3 and the extended binary Hamming code has a minimum Hamming distance of 4.

4. Puncturing a code—An (n, k) code can be punctured by deleting one or more code symbols of the (n, k) code. This process is the inverse of extending a code. It decreases the codeword length without changing the number of information symbols. This corresponds to decreasing the number of columns of the generator matrix. The minimum distance may decrease as a result of the puncturing operation.
Example 3.12 Deleting the last column from the generator matrix

G = [1 0 0 0 1 0 1 1]
    [0 1 0 0 1 1 1 1]
    [0 0 1 0 1 1 0 1]
    [0 0 0 1 0 1 1 1]

of an (8, 4) binary code, we get a (7, 4) punctured binary code of

G = [1 0 0 0 1 0 1]
    [0 1 0 0 1 1 1]
    [0 0 1 0 1 1 0]
    [0 0 0 1 0 1 1]

Both codes have the same minimum Hamming distance of 3.

5. Lengthening a code—Given an (n, k) block code, it is possible to form an (n + i, k + i) block code by adding i extra information symbols. The lengths of the information vector and the codeword are increased by the same amount. This corresponds to increasing the number of rows and columns of the generator matrix by i. In practice, lengthening a code is rarely used.

6. Shortening a code—Given an (n, k) block code, it is always possible to form an (n - i, k - i) block code by making the i leading information symbols identically 0 and omitting them from all code vectors. This is equivalent to omitting the first i rows and columns of the
generator matrix G. This process is the inverse of lengthening a code. The lengths of the information vector and the codeword are decreased by the same amount.
Example 3.13 Deleting the first row and the first column in the generator matrix

G = [1 0 0 0 1 0 1]
    [0 1 0 0 1 1 1]
    [0 0 1 0 1 1 0]
    [0 0 0 1 0 1 1]

of the (7, 4) binary Hamming code, we get a (6, 3) shortened binary code of

G = [1 0 0 1 1 1]
    [0 1 0 1 1 0]
    [0 0 1 0 1 1]

Both codes have the same minimum Hamming distance of 3. The encoder circuit for an (n, k) block code can be used to encode an (n - i, k - i) shortened block code by making the i leading information symbols identically 0 and omitting them from all code vectors before the transmission.
3.7 Decoding of Linear Block Codes

3.7.1 Standard Array Decoding

Let V_1 = 0, V_2, ..., V_{2^k} be the code vectors (codewords) of an (n, k) binary linear block code. In the presence of channel errors of 0's and 1's, the received vector can take on any of the 2^n n-component binary vectors. We shall partition the 2^n binary vectors into 2^k disjoint subsets such that the code vector V_i is in the i-th subset for 1 <= i <= 2^k. If the received vector is found in the i-th subset, the received vector is decoded into V_i. Based on the linear structure of the code, the partitioning of the 2^n binary vectors is done as follows:

1. Place all code vectors in the first row of a 2^(n-k)-by-2^k array with V_1 as the left-most element.

2. For l = 2, 3, ..., 2^(n-k), choose a distinct n-component error vector E_l of smallest possible weight from the remaining n-component binary vectors and place it as the left-most element in the l-th row of the array. Fill the remaining entries in that row by adding E_l to V_i.
The resultant array is called the standard array of a linear code. This is shown in Table 3.6. It can be seen that all elements of the i-th subset are grouped under column i. Each row in the standard array is called a coset of the code and the left-most element of each coset is called a coset leader. Any element in a coset can be used as its coset leader.

For a given error pattern E_l, 2 <= l <= 2^(n-k), E_l + V_i != E_l + V_j for i != j and 2 <= i, j <= 2^k. Hence, the n-component vectors in the same row (coset) of a standard array are different. For any two error patterns E_l and E_l', l != l' and 2 <= l, l' <= 2^(n-k), E_l + V_i != E_l' + V_j for i != j and 2 <= i, j <= 2^k, because if equality holds, E_l + E_l' = V_i + V_j, which is another codeword. This violates the construction rules. Thus, every n-component vector appears in one and only one row. Correct decoding is achieved if and only if the channel error vector is a coset leader. If the channel error vector is not a coset leader, a decoding error will result. Therefore, an (n, k) binary linear block code is capable of correcting 2^(n-k) error patterns.

The choice of coset leaders in the standard array is simple. For an (n, k) linear block code with minimum distance d_min, we simply choose a set of n-component vectors with weight <= (d_min - 1)/2 as the coset leaders. The decoding based on the standard array is the minimum distance decoding.

Example 3.14 Consider a (5, 2) single-error-correcting binary linear block code of

G = [1 0 0 1 1]
    [0 1 1 1 1]

The codewords are 00000, 01111, 10011, and 11100. The standard array for the (5, 2) single-error-correcting binary linear block code is shown in Table 3.7. In this example, d_min = 3 and t' = 1. The 8 chosen coset leaders are [0 0 0 0 0], [0 0 0 0 1], [0 0 0 1 0], [0 0 1 0 0], [0 1 0 0 0], [1 0 0 0 0], [0 0 1 0 1], and [0 0 1 1 0]. The code with standard array decoding can only correct all the single error patterns [0 0 0 0 1], [0 0 0 1 0], [0 0 1 0 0], [0 1 0 0 0], and [1 0 0 0 0]. It cannot correct the double error patterns

Table 3.6 Standard Array for an (n, k) Linear Block Code
V_1 = 0         V_2                 ...  V_i                 ...  V_{2^k}
E_2             E_2 + V_2           ...  E_2 + V_i           ...  E_2 + V_{2^k}
E_3             E_3 + V_2           ...  E_3 + V_i           ...  E_3 + V_{2^k}
...
E_l             E_l + V_2           ...  E_l + V_i           ...  E_l + V_{2^k}
...
E_{2^(n-k)}     E_{2^(n-k)} + V_2   ...  E_{2^(n-k)} + V_i   ...  E_{2^(n-k)} + V_{2^k}
Table 3.7 Standard Array for the (5, 2) Single-Error-Correcting Binary Linear Block Code

0 0 0 0 0    0 1 1 1 1    1 0 0 1 1    1 1 1 0 0
0 0 0 0 1    0 1 1 1 0    1 0 0 1 0    1 1 1 0 1
0 0 0 1 0    0 1 1 0 1    1 0 0 0 1    1 1 1 1 0
0 0 1 0 0    0 1 0 1 1    1 0 1 1 1    1 1 0 0 0
0 1 0 0 0    0 0 1 1 1    1 1 0 1 1    1 0 1 0 0
1 0 0 0 0    1 1 1 1 1    0 0 0 1 1    0 1 1 0 0
0 0 1 0 1    0 1 0 1 0    1 0 1 1 0    1 1 0 0 1
0 0 1 1 0    0 1 0 0 1    1 0 1 0 1    1 1 0 1 0
[0 0 1 0 1] and [0 0 1 1 0] because the code is a single-error-correcting code. Suppose the transmitted code vector is V = [0 1 1 1 1] and the error vector is E = [0 0 0 0 1]. The received vector is R = V + E = [0 1 1 1 0]. The coset leader [0 0 0 0 1] is taken as the estimated error vector and the decoded vector is V' = R + [0 0 0 0 1] = [0 1 1 1 1]. The single error is corrected. Next, consider the same transmitted code vector V. Suppose the error vector is E = [0 0 0 1 1]. The received vector is R = V + E = [0 1 1 0 0]. The coset leader [1 0 0 0 0] is taken as the estimated error vector and the decoded vector is V' = R + [1 0 0 0 0] = [1 1 1 0 0]. The double errors are detected but cannot be corrected; they could be corrected only if a double-error-correcting code were employed. The 2^(n-k) coset leaders are called the correctable error patterns.

Definition 3.8. For any received word R = [r_0 r_1 ... r_{n-1}], the syndrome of R is defined as S = RH^T. In the presence of errors, R = V + E, where V = [v_0 v_1 ... v_{n-1}] is the transmitted codeword and E = [e_0 e_1 ... e_{n-1}] is the error pattern.

Theorem 3.3. All the 2^k n-component vectors in the same coset have the same syndrome. The syndromes for different cosets are different.

Proof. Syndrome S = RH^T = (V + E)H^T = EH^T. For a given error vector E, all the 2^k n-component vectors in the coset share the same syndrome. For any two coset leaders E_l and E_l', the syndromes are E_l H^T and E_l' H^T, respectively. Suppose the syndromes are equal; then (E_l + E_l')H^T = 0. (E_l + E_l') is a codeword and E_l = E_l' + V_j, where V_j is a codeword and 2 <= j <= 2^k. This implies that the coset leader E_l' is in the same row as the coset leader E_l. Then E_l' cannot be used as a coset leader to construct the array. This violates the construction rules. Thus, different cosets must have different syndromes.

From our last example, we can see that the size of the array and the storage requirement become impractical for larger values of n and k. Since all the 2^k n-component vectors of a coset have the same syndrome, we can make use of this fact to simplify the standard array decoding technique. The next section describes just that.
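The standard array of Example 3.14 can be built programmatically with the greedy minimum-weight leader rule of step 2. A sketch (mine, not from the text):

```python
# Sketch: build the standard array of the (5, 2) code; coset leaders
# are chosen by smallest weight among the vectors not yet placed.
from itertools import product

def add(a, b):
    return tuple((x + y) % 2 for x, y in zip(a, b))

codewords = [(0, 0, 0, 0, 0), (0, 1, 1, 1, 1),
             (1, 0, 0, 1, 1), (1, 1, 1, 0, 0)]
remaining = set(product([0, 1], repeat=5)) - set(codewords)

array = [list(codewords)]
while remaining:
    leader = min(remaining, key=lambda v: (sum(v), v))
    row = [add(leader, c) for c in codewords]
    array.append(row)
    remaining -= set(row)

print(len(array))   # 2^(n-k) = 8 cosets
```

The loop recovers 8 cosets, with the five weight-one leaders first, then two weight-two leaders, matching Table 3.7 (ties among equal-weight candidates are broken lexicographically here, an implementation choice of this sketch).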
3.7.2 Syndrome Decoding

Suppose that a code vector V = [v_0 v_1 ... v_{n-1}] is transmitted. In the presence of noise, the received vector R = [r_0 r_1 ... r_{n-1}] may not be the same as the transmitted vector. The decoder computes the (n - k)-tuple

S = RH^T   (3.30)

where H^T is the transpose of the parity-check matrix of the (n, k) linear code and S is the syndrome of R. In the presence of errors,

R = V + E   (3.31)

where R = [r_0 r_1 ... r_{n-1}] is the received word, V = [v_0 v_1 ... v_{n-1}] is the transmitted codeword, and E = [e_0 e_1 ... e_{n-1}] is the error pattern. For an (n, k) systematic linear code with an information vector U = [u_0 u_1 ... u_{k-1}], the transmitted codeword V becomes

V = [u_0 u_1 ... u_{k-1} v_k v_{k+1} ... v_{n-1}]   (3.32)

and equation (3.30) becomes

S = R H_SEF^T   (3.33)

where

S = [s_0 s_1 ... s_{n-k-1}]   (3.34)

Based on equations (3.14) and (3.34), equation (3.33) can be written as

s_0 = r_k + (-r_0 p_{0,0} - r_1 p_{1,0} - ... - r_{k-1} p_{k-1,0})
s_1 = r_{k+1} + (-r_0 p_{0,1} - r_1 p_{1,1} - ... - r_{k-1} p_{k-1,1})   (3.35)
...
s_{n-k-1} = r_{n-1} + (-r_0 p_{0,n-k-1} - r_1 p_{1,n-k-1} - ... - r_{k-1} p_{k-1,n-k-1})

It can be seen that the first term corresponds to the received parity-check digit and the remaining terms correspond to the recalculated parity-check digits.
The syndrome is the sum of the received parity-check digits and the recomputed parity-check digits based on the received digits r_0, r_1, ..., r_{k-1}. If the syndrome vector S = 0, then it is assumed that no error has occurred. If the syndrome is nonzero, the presence of errors has been detected. Depending on the transmitted format of the codeword, the error locations can be identified accordingly. Since the syndrome vector S = RH^T, R = V + E, and VH^T = 0, it follows that

S = EH^T   (3.36)

Decoding can be accomplished by a table look-up in the following manner:

1. Compute the syndrome S = RH^T.
2. If the syndrome S is zero, we assume that R is error free.
3. If S is nonzero, we assume that R contains errors. The error pattern that corresponds to S is subtracted from the received vector for error correction.

Again, consider the (5, 2) single-error-correcting binary linear block code shown in Example 3.14. The generator matrix, as given, is

G = [1 0 0 1 1]
    [0 1 1 1 1]

and the parity-check matrix of the code is

H = [0 1 1 0 0]
    [1 1 0 1 0]
    [1 1 0 0 1]

If we choose the same set of error patterns to compute the syndromes for decoding, the syndrome decoding table is given in Table 3.8. Suppose the transmitted code vector is V = [0 1 1 1 1] and the error vector is E = [0 0 0 0 1]. The received vector is R = V + E = [0 1 1 1 0]. The syndrome vector is S = RH^T = [0 0 1] and E' = [0 0 0 0 1] is taken as the estimated error vector. The decoded vector is V' = R + E' = [0 1 1 1 1]. The single error is corrected. Next, consider the same transmitted code vector V. Suppose the error vector is E = [0 0 0 1 1]. The received vector is R = V + E = [0 1 1 0 0]. The syndrome vector is S = RH^T = [0 1 1] and E' = [1 0 0 0 0] is taken as the estimated error vector. The decoded vector is V' = R + E' = [1 1 1 0 0]. Double errors cannot be corrected because the code is a single-error-correcting code.
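The table look-up procedure is a few lines of code. A sketch (mine, not from the text) for the (5, 2) code and the correctable error patterns of Table 3.8:

```python
# Sketch: syndrome decoding for the (5, 2) code via a look-up table
# mapping each syndrome to its chosen correctable error pattern.
H = [[0, 1, 1, 0, 0],
     [1, 1, 0, 1, 0],
     [1, 1, 0, 0, 1]]

def syndrome(r):
    return tuple(sum(r[j] * row[j] for j in range(5)) % 2 for row in H)

errors = [(0, 0, 0, 0, 1), (0, 0, 0, 1, 0), (0, 0, 1, 0, 0),
          (0, 1, 0, 0, 0), (1, 0, 0, 0, 0), (0, 0, 1, 0, 1),
          (0, 0, 1, 1, 0)]
table = {syndrome(e): e for e in errors}

r = [0, 1, 1, 1, 0]                         # V = [0 1 1 1 1] plus one error
e = table.get(syndrome(r), (0,) * 5)
v_hat = [(a + b) % 2 for a, b in zip(r, e)]
print(v_hat)
```

The syndrome (0, 0, 1) selects the error pattern [0 0 0 0 1], reproducing the corrected vector [0 1 1 1 1] from the worked example.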
Table 3.8 Syndrome Decoding Table for the (5, 2) Binary Linear Block Code

S (Binary-Coded-Decimal)    E (Binary-Coded-Decimal)
0 0 1 (1)                   0 0 0 0 1 (1)
0 1 0 (2)                   0 0 0 1 0 (2)
1 0 0 (4)                   0 0 1 0 0 (4)
1 1 1 (7)                   0 1 0 0 0 (8)
0 1 1 (3)                   1 0 0 0 0 (16)
1 0 1 (5)                   0 0 1 0 1 (5)
1 1 0 (6)                   0 0 1 1 0 (6)
For the (7, 4) single-error-correcting binary Hamming code in Example 3.6, the syndrome decoding table is given in Table 3.9. All single error patterns can be corrected. The general form of encoder and decoder for an (n, k) systematic binary linear block code is shown in Figure 3.9.
Table 3.9 Syndrome Decoding Table for the (7, 4) Binary Hamming Code

S        E
0 0 1    0 0 0 0 0 0 1
0 1 0    0 0 0 0 0 1 0
1 0 0    0 0 0 0 1 0 0
0 1 1    0 0 0 1 0 0 0
1 1 0    0 0 1 0 0 0 0
1 1 1    0 1 0 0 0 0 0
1 0 1    1 0 0 0 0 0 0

Figure 3.9 (a) (n, k) systematic binary linear block encoder, and (b) syndrome decoder.

3.7.3 Maximum-Likelihood Decoding

Before we proceed to the description of maximum-likelihood decoding for (n, k) block codes, let us define a useful metric measure for decoding. A binary block code is assumed. Let V = [v_0 v_1 ... v_{n-1}] denote the possible transmitted code vector and R = [r_0 r_1 ... r_{n-1}] denote the received vector at the input of the decoder. The likelihood function associated with the code vector is defined as P(R|V) = prod_{j=0}^{n-1} P(r_j|v_j), where P(r_j|v_j) is the channel transition probability associated with the j-th code symbol. The metric M(R|V) = M_{n-1}(r_0|v_0, r_1|v_1, ..., r_{n-1}|v_{n-1}) associated with a code vector is defined as the logarithmic likelihood function log P(R|V) = sum_{j=0}^{n-1} log P(r_j|v_j), and the bit metric M(r_j|v_j) associated with the j-th code symbol is defined as log P(r_j|v_j).
There are 2^k possible channel code vectors associated with an (n, k) binary block code. In the presence of noise, the received vector R may not be the same as the transmitted code vector. From the received vector R, the decoder computes all 2^k likelihood functions prod_{j=0}^{n-1} P(r_j|v_j). The largest likelihood function from the set is chosen, and the decoder outputs U', bearing in mind that there is a one-to-one mapping between the information vector U and the code vector V. Thus, the decoder performs a maximum-likelihood search of the likelihood function prod_{j=0}^{n-1} P(r_j|v_j). Because log P(r_j|v_j) is a monotone increasing function of P(r_j|v_j), it follows that a maximum-likelihood search of the likelihood function prod_{j=0}^{n-1} P(r_j|v_j) is equivalent to a search of the maximum log-likelihood function sum_{j=0}^{n-1} log P(r_j|v_j). A metric can now be employed in the decoder. In the decoding process, the most likely code vector corresponds to the code vector with the largest metric. The decoder now computes and compares the metrics of all 2^k log-likelihood functions in the set, chooses the log-likelihood function with the largest metric from the set, and decodes. This is the maximum-likelihood decoding of binary block codes.
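The brute-force search over all 2^k codewords can be sketched directly (my code, not from the text), here with the integer-scaled hard-decision bit metrics introduced in the next subsection (-1 for agreement, -20 for disagreement):

```python
# Sketch: exhaustive maximum-likelihood decoding of the (7, 4) code;
# the decoder returns the information vector whose codeword maximizes
# the additive bit-metric sum.
from itertools import product

G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]

def encode(u):
    return tuple(sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7))

def metric(r, v):
    return sum(-1 if rj == vj else -20 for rj, vj in zip(r, v))

def ml_decode(r):
    return max((tuple(u) for u in product([0, 1], repeat=4)),
               key=lambda u: metric(r, encode(u)))

print(ml_decode((1, 0, 0, 0, 0, 0, 0)))
```

For R = [1 0 0 0 0 0 0] the all-zero codeword attains the largest metric (-26), so the decoder outputs U' = [0 0 0 0], matching Example 3.15 below.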
3.7.3.1 Hard-Decision Decoding
In hard-decision decoding, the discrete noisy channel of Figure 3.10 becomes a binary symmetric channel, and the elements of R are quantized into 0 and 1.

Figure 3.10 Model of a coded digital communication system.

With equally likely input symbols and channel transition probability p', the hard-decision bit metrics associated with the j-th code symbol are

M(r_j|v_j) = log p'          for r_j ≠ v_j    (3.37)
M(r_j|v_j) = log(1 - p')     for r_j = v_j    (3.38)

Suppose the channel transition probability p' is set to 0.1; we then have M(r_j|v_j) = -1 for r_j ≠ v_j and M(r_j|v_j) = -0.05 for r_j = v_j. With integer scaling, M(r_j|v_j) = -20 for r_j ≠ v_j and M(r_j|v_j) = -1 for r_j = v_j. This is shown in Table 3.10.

Table 3.10
Hard-Decision Metric Table for p' = 0.1

             v_j = 0    v_j = 1
r_j = 0      -1         -20
r_j = 1      -20        -1

Example 3.15 Assuming that the all-zero code vector is transmitted, the received vector is R = [1 0 0 0 0 0 0]. Given the hard-decision metric table of Table 3.10 and the code vectors of the (7, 4) binary Hamming code with generator matrix

G = [1 0 0 0 1 0 1]
    [0 1 0 0 1 1 1]
    [0 0 1 0 1 1 0]
    [0 0 0 1 0 1 1]

the computed hard-decision metrics (log-likelihood functions) are shown in Table 3.11. The decoder picks the all-zero code vector as the path with the largest metric, and outputs the all-zero information vector. In the example, the received vector R = [1 0 0 0 0 0 0] is decoded to [0 0 0 0 0 0 0], which corresponds to the estimated information vector Û = [0 0 0 0].

In practice, it is common to use Hamming distance to perform metric computations. Consider the metric log P(R|V). For a binary symmetric channel, P(r_j|v_j) = p' for r_j ≠ v_j and P(r_j|v_j) = (1 - p') for r_j = v_j. When p' < 0.5, the metric can be written as

M(R|V) = d(R, V) log p' + [n - d(R, V)] log(1 - p')    (3.39)
Table 3.11
Hard-Decision Metric Computation for p' = 0.1

Information Vector   Code Vector   M(R|V) = Σ_{j=0}^{n-1} log P(r_j|v_j)
0000                 0000000       -26
0001                 0001011       -83
0010                 0010110       -83
0011                 0011101       -102
0100                 0100111       -102
0101                 0101100       -83
0110                 0110001       -83
0111                 0111010       -102
1000                 1000101       -45
1001                 1001110       -64
1010                 1010011       -64
1011                 1011000       -45
1100                 1100010       -45
1101                 1101001       -64
1110                 1110100       -64
1111                 1111111       -121
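The metrics in Table 3.11 can be reproduced mechanically. The sketch below is illustrative (not the book's program): it regenerates all 2^4 codewords from G and applies the integer-scaled Table 3.10 metrics for p' = 0.1.

```python
from itertools import product

# Generator matrix of the (7, 4) binary Hamming code from Example 3.15.
G = [(1,0,0,0,1,0,1), (0,1,0,0,1,1,1), (0,0,1,0,1,1,0), (0,0,0,1,0,1,1)]
R = (1, 0, 0, 0, 0, 0, 0)             # received vector of Example 3.15

def metric(v):
    # Integer-scaled Table 3.10 metrics: -1 on agreement, -20 on disagreement.
    return sum(-1 if r == vj else -20 for r, vj in zip(R, v))

table = {}
for u in product([0, 1], repeat=4):
    v = tuple(sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7))
    table[u] = (v, metric(v))

best = max(table, key=lambda u: table[u][1])
print(best, table[best])  # (0, 0, 0, 0) ((0, 0, 0, 0, 0, 0, 0), -26)
```

The largest metric (-26) belongs to the all-zero codeword, matching Table 3.11.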
M(R|V) = d(R, V) log [p'/(1 - p')] + n log(1 - p')    (3.40)
where d(R, V) is the Hamming distance between vectors R and V. For a given value of n and p', the maximum-likelihood decoder becomes a hard-decision minimum-distance decoder, which minimizes the Hamming distance between vectors R and V. The hard-decision minimum-distance decoder simply determines the code vector V which is closest in Hamming distance to the received vector R with elements 0 and 1.

Example 3.16 Assuming that the all-zero code vector is transmitted, the received vector is R = [1 0 0 0 0 0 0]. Given the code vectors of the (7, 4) binary Hamming code with generator matrix

G = [1 0 0 0 1 0 1]
    [0 1 0 0 1 1 1]
    [0 0 1 0 1 1 0]
    [0 0 0 1 0 1 1]

the computed Hamming distances are shown in Table 3.12.
Table 3.12
Hamming Distance Metric Computation

Information Vector   Code Vector   d(R, V)
0000                 0000000       1
0001                 0001011       4
0010                 0010110       4
0011                 0011101       5
0100                 0100111       5
0101                 0101100       4
0110                 0110001       4
0111                 0111010       5
1000                 1000101       2
1001                 1001110       3
1010                 1010011       3
1011                 1011000       2
1100                 1100010       2
1101                 1101001       3
1110                 1110100       3
1111                 1111111       6
The decoder picks the all-zero code vector as the path with the smallest metric, and outputs the all-zero information vector. In the example, the received vector R = [1 0 0 0 0 0 0] is decoded to [0 0 0 0 0 0 0], which corresponds to the estimated information vector Û = [0 0 0 0]. A single error is corrected.
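Equation (3.40) states that maximizing the log-likelihood metric is the same as minimizing Hamming distance. A quick numerical check (illustrative only, using p' = 0.1 and the Example 3.16 data):

```python
from itertools import product
from math import log10

G = [(1,0,0,0,1,0,1), (0,1,0,0,1,1,1), (0,0,1,0,1,1,0), (0,0,0,1,0,1,1)]
code = [tuple(sum(u[i]*G[i][j] for i in range(4)) % 2 for j in range(7))
        for u in product([0, 1], repeat=4)]
R, p = (1, 0, 0, 0, 0, 0, 0), 0.1

dist = lambda v: sum(r != x for r, x in zip(R, v))                 # d(R, V)
metric = lambda v: dist(v)*log10(p) + (7 - dist(v))*log10(1 - p)   # (3.39)

# (3.40): the largest-metric codeword is also the closest in Hamming distance.
assert max(code, key=metric) == min(code, key=dist) == (0,)*7
```

The all-zero codeword wins under both rules, as in Examples 3.15 and 3.16.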
3.7.3.2 Soft-Decision Decoding

Consider the coherent BPSK demodulator of the coded digital communication system shown in Figure 3.10. If the demodulator is left unquantized, the demodulated signal is simply in analog form. In this case, we may use Euclidean distance as a metric measure for decoding. Euclidean distance is defined as the distance between two signal vectors in the N-dimensional Euclidean vector space. For BPSK signals, the vector space has a dimension of 1. Now, the optimum decoder determines the code vector V that is closest in Euclidean distance to the received vector R. That is accomplished by minimizing the Euclidean metric M(R|V) = Σ_{j=0}^{n-1} δ(r_j, v_j) over all code vectors, where δ(r_j, v_j) is the Euclidean distance between symbols r_j and v_j. From the received vector R, the decoder computes all 2^k Euclidean metrics M(R|V). The vector V associated with the smallest metric from the set is chosen, and the decoder outputs the corresponding Û.
In a practical implementation, each signal at the output of the coherent BPSK demodulator is often quantized into Q regions, where Q is the number of quantization levels and Q > 3. The demodulator is said to make soft decisions. The quantized signal, therefore, can be represented as a binary vector. In general, the binary vector is log2 Q bits long for a Q-level quantization. Figure 3.11 shows a 3-bit natural binary quantizer with uniform quantization regions. It can be seen that the two possible BPSK signals, -1.0 and +1.0, lie in the regions covered by the binary vectors [000] and [111], respectively. Also, a received signal that falls in the region covered by the binary vector [101] is quantized to the binary vector [101]. The leftmost bit is the hard-decision digit. From the binary vector, we can calculate the soft-decision distance between two quantized signals: we simply convert the binary vectors into decimal numbers and take the absolute value of their difference. For example, the soft-decision distance between the binary vectors [101] and [111] is 2. In the decoding process, we replace the Euclidean distance by the soft-decision distance as a metric measure for decoding. The decoder is called a soft-decision minimum-distance decoder.

Figure 3.11 A 3-bit natural binary quantizer.

Example 3.17 Assuming that the all-zero code vector is transmitted, the received vector is R = [4 0 0 0 4 5 0]. Given the code vectors of the (7, 4) binary Hamming code with generator matrix

G = [1 0 0 0 1 0 1]
    [0 1 0 0 1 1 1]
    [0 0 1 0 1 1 0]
    [0 0 0 1 0 1 1]

the computed eight-level soft-decision distances are shown in Table 3.13. The decoder picks the all-zero code vector as the path with the smallest metric, and outputs the all-zero information vector. In the example, the received vector R = [4 0 0 0 4 5 0] is decoded to [0 0 0 0 0 0 0], which corresponds to the estimated information vector Û = [0 0 0 0]. Triple errors are corrected
Table 3.13
Eight-Level Soft-Decision Metric Computation

Information Vector   Code Vector (Soft-Decision)   Soft-Decision Metric M(R|V)
0000                 0000000                       13
0001                 0007077                       24
0010                 0070770                       16
0011                 0077707                       33
0100                 0700777                       23
0101                 0707700                       26
0110                 0770007                       34
0111                 0777070                       31
1000                 7000707                       18
1001                 7007770                       15
1010                 7070077                       23
1011                 7077000                       26
1100                 7700070                       16
1101                 7707007                       33
1110                 7770700                       25
1111                 7777777                       36
with soft-decision decoding. In general, soft-decision decoding can correct more errors than hard-decision decoding for the same code.
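The quantizer of Figure 3.11 and the soft-decision distance can be sketched as below. The helper names are hypothetical, and the level mapping (BPSK symbol 0 → level 0, symbol 1 → level 7) follows the 3-bit natural binary quantizer described above:

```python
from itertools import product

def quantize3(x):
    # 3-bit natural binary quantizer over [-1.0, +1.0]: eight uniform
    # regions, returning the region index 0..7 (i.e., 000..111).
    level = int((x + 1.0) / 2.0 * 8)
    return min(max(level, 0), 7)

def soft_metric(R, v):
    # Soft-decision distance: code symbols 0/1 map to levels 0/7; the metric
    # is the sum of absolute level differences (as in Table 3.13).
    return sum(abs(r - 7 * vj) for r, vj in zip(R, v))

G = [(1,0,0,0,1,0,1), (0,1,0,0,1,1,1), (0,0,1,0,1,1,0), (0,0,0,1,0,1,1)]
code = {tuple(sum(u[i]*G[i][j] for i in range(4)) % 2 for j in range(7))
        for u in product([0, 1], repeat=4)}

R = (4, 0, 0, 0, 4, 5, 0)             # received levels from Example 3.17
best = min(code, key=lambda v: soft_metric(R, v))
print(best, soft_metric(R, best))     # (0, 0, 0, 0, 0, 0, 0) 13
```

The all-zero codeword has the smallest soft-decision metric (13), reproducing Example 3.17.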
3.7.4 Maximum-Likelihood Viterbi Algorithm Decoding

To perform optimum decoding, Viterbi proposed a decoding algorithm for convolutional codes in 1967 [9]. The Viterbi algorithm can achieve the maximum-likelihood decoding performance, which minimizes the probability of error in the decoding of the whole received sequence, when the information digits are statistically independent and equally likely to have any of their possible values [10, 11]. The Viterbi algorithm can also be used to decode block codes, and the algorithm is described next. In Viterbi algorithm decoding of a block code, the decoder first uses the syndrome-former trellis of the block code to estimate the code vector, followed by an inverse operation. The required digital communication system model is shown in Figure 3.12. Here, the decoder splits into two parts: a codeword estimator and an inverter. The codeword estimator applies the Viterbi algorithm to the syndrome-former trellis of the block code. From the received word, the codeword estimator determines the maximum-likelihood vector V, followed by an inverse operation that yields the decoded information vector Û.
Figure 3.12 Model of a coded digital communication system.
Before we proceed to the description of maximum-likelihood Viterbi decoding for block codes, we need to define another useful metric measure for decoding. A binary linear block code is assumed. Let v_0, v_1, ..., v_j denote the possible transmitted channel code sequence for 0 ≤ j ≤ n - 1 and P(r_j|v_j) be the channel transition probability associated with the j-th code symbol in the trellis of the code. The branch or bit metric M(r_j|v_j) associated with the j-th code symbol is defined as log P(r_j|v_j), and the path metric M_j(r_0|v_0, r_1|v_1, ..., r_j|v_j) associated with a code sequence is defined as log P(r_0|v_0) + log P(r_1|v_1) + ... + log P(r_j|v_j).

Consider an (n, k) binary linear block code with 2^min{k, n-k} syndrome-former trellis states. There are two transitions between trellis states, as shown in Figure 3.13.
Figure 3.13 Trellis diagram for an (n, k) binary linear block code (total number of states = 2^min{k, n-k}).
Assuming that the encoder is initially in state 0, there are 2^min{k, n-k} paths for the first j = min{k, n-k} branches. Let v_0, v_1, ..., v_{j-1} denote the possible channel code sequence associated with the paths of the trellis. From the received sequence r_0, r_1, ..., r_{j-1}, the decoder computes all 2^min{k, n-k} path metrics M_{j-1}(r_0|v_0, r_1|v_1, ..., r_{j-1}|v_{j-1}) associated with the 2^min{k, n-k} paths and preserves them. On receiving a new symbol r_j, the decoder uses the trellis and produces an estimated code sequence v̂_0, v̂_1, ..., v̂_j of the channel code sequence v_0, v_1, ..., v_j. The branch metrics M(r_j|v_j) of the two paths entering a state in the trellis are computed. Each of these branch metrics is added to the corresponding path metric M_{j-1}(r_0|v_0, r_1|v_1, ..., r_{j-1}|v_{j-1}). The path with the largest path metric entering each state is preserved. Thus, the decoder performs a maximum-likelihood search for the largest path metric M(r_j|v_j) + M_{j-1}(r_0|v_0, r_1|v_1, ..., r_{j-1}|v_{j-1}). The process repeats in this way as more symbols are received by the decoder. A firm decoding decision is then made by the decoder when the whole message block has been received over the channel. From the received word, the codeword estimator determines the maximum-likelihood vector V, followed by an inverse operation that yields the decoded information vector Û.
3.7.4.1 Hard-Decision Decoding
As discussed in Section 3.7.3.1, we may use Hamming distance to perform metric computations. Consider the branch metric M(r_j|v_j) = log P(r_j|v_j). For a binary symmetric channel, P(r_j|v_j) = p' for r_j ≠ v_j and P(r_j|v_j) = (1 - p') for r_j = v_j. The received symbol r_j is quantized into two levels, a "1" or a "0." When p' < 0.5, the branch metric can be written as

M(r_j|v_j) = d(r_j, v_j) log p' + [1 - d(r_j, v_j)] log(1 - p')    (3.41)

M(r_j|v_j) = d(r_j, v_j) log [p'/(1 - p')] + log(1 - p')    (3.42)

where d(r_j, v_j) is the Hamming distance between symbols r_j and v_j. For a given value of p', the maximum-likelihood decoder becomes a hard-decision minimum-distance decoder, which minimizes the Hamming distance between sequences r_0, r_1, ..., r_j and v_0, v_1, ..., v_j. This is accomplished by the Viterbi algorithm, which recursively minimizes the Hamming metric

M_j(r_0|v_0, r_1|v_1, ..., r_j|v_j) = M_{j-1}(r_0|v_0, r_1|v_1, ..., r_{j-1}|v_{j-1}) + d(r_j, v_j)    (3.43)

over all code sequences {v_0, v_1, ..., v_j}.
From the received sequence r_0, r_1, ..., r_{j-1}, the decoder uses the Hamming distance as the metric and computes all 2^min{k, n-k} path metrics M_{j-1}(r_0|v_0, r_1|v_1, ..., r_{j-1}|v_{j-1}) associated with the 2^min{k, n-k} paths and preserves them. On receiving a new symbol r_j, the decoder computes running path Hamming metrics for all paths entering a state. This is done by adding the branch Hamming distance d(r_j, v_j) entering that state node to the running path Hamming metric M_{j-1}(r_0|v_0, r_1|v_1, ..., r_{j-1}|v_{j-1}) of the associated surviving path. The decoder then selects the smallest running path Hamming metric per state node. The running path Hamming metrics are adjusted by normalizing the smallest running path Hamming metric to zero. Whenever a tie in running path Hamming metric occurs, one of the paths is chosen by a random selection process. The process repeats in this way as more symbols are received by the decoder. A firm decoding decision is produced by the decoder when the whole message block has been received over the channel. To start the hard-decision minimum-distance decoding process, the running path Hamming metric associated with the zero state of the trellis is set to zero.

Example 3.18 Consider the (7, 4) binary Hamming code with generator matrix

G = [1 0 0 0 1 0 1]
    [0 1 0 0 1 1 1]
    [0 0 1 0 1 1 0]
    [0 0 0 1 0 1 1]

The syndrome-former trellis of this code is shown in Figure 3.14(a). Again, we have assumed that the transmitted code sequence is the all-zero codeword. The received vector is [1 0 0 0 0 0 0], which has an error. The sequence of states of the decoder is shown in Figure 3.14(b). The unnormalized running path Hamming metrics of the trellis are quoted inside angle brackets (<< >>). As before, the decoder determines the surviving path per node by extending all surviving paths obtained from the preceding iteration step. The decoding decision is made when the last symbol is received and decoded.
In the example, the decoded code vector is [0 0 0 0 0 0 0], and the corresponding decoded information vector is [0 0 0 0]. A single error is corrected.
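The hard-decision Viterbi procedure can be sketched directly on the syndrome-former trellis. This is an illustrative implementation, not the book's program: states are the partial syndromes formed from the columns of the parity-check matrix H of the (7, 4) code, a path ends in the zero state if and only if it spells a codeword, and on a metric tie the first candidate is kept rather than a random one.

```python
def viterbi_syndrome_trellis(R, H):
    """Hard-decision Viterbi decoding on the syndrome-former trellis.

    States are partial syndromes; the branch metric is the Hamming
    distance d(r_j, v_j), and the smallest-metric survivor per state
    is preserved at every stage (recursion (3.43))."""
    n = len(H[0])
    cols = [tuple(row[j] for row in H) for j in range(n)]   # h_j, columns of H
    zero = tuple(0 for _ in H)
    survivors = {zero: (0, [])}                             # state -> (metric, path)
    for j in range(n):
        nxt = {}
        for state, (m, path) in survivors.items():
            for vj in (0, 1):
                s2 = tuple((a + vj * b) % 2 for a, b in zip(state, cols[j]))
                cand = (m + (R[j] != vj), path + [vj])
                if s2 not in nxt or cand[0] < nxt[s2][0]:
                    nxt[s2] = cand
        survivors = nxt
    return survivors[zero]            # best path ending in the zero state

H = [(1,1,1,0,1,0,0), (0,1,1,1,0,1,0), (1,1,0,1,0,0,1)]    # (7, 4) Hamming code
m, v = viterbi_syndrome_trellis((1, 0, 0, 0, 0, 0, 0), H)
print(m, v)  # 1 [0, 0, 0, 0, 0, 0, 0]
```

As in Example 3.18, the survivor ending in the zero state is the all-zero codeword at Hamming distance 1 from the received vector.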
3.7.4.2 Soft-Decision Decoding

When the demodulator is left unquantized, we may use Euclidean distance as a metric measure for decoding. Now, the optimum decoder determines the code sequence v_0, v_1, ..., v_j that is closest in Euclidean distance to the received sequence r_0, r_1, ..., r_j. This is accomplished by the Viterbi algorithm, which recursively minimizes the Euclidean metric

M_j(r_0|v_0, r_1|v_1, ..., r_j|v_j) = M_{j-1}(r_0|v_0, r_1|v_1, ..., r_{j-1}|v_{j-1}) + δ(r_j, v_j)    (3.44)
Figure 3.14 Hard-decision minimum-distance Viterbi decoding steps.
over all code sequences {v_0, v_1, ..., v_j}. δ(r_j, v_j) is the branch Euclidean distance between symbols r_j and v_j. From the received sequence r_0, r_1, ..., r_{j-1}, the decoder uses the Euclidean distance as the metric and computes all 2^min{k, n-k} path metrics M_{j-1}(r_0|v_0, r_1|v_1, ..., r_{j-1}|v_{j-1}) associated with the 2^min{k, n-k} paths and preserves them. On receiving a new symbol r_j, the decoder computes running path Euclidean metrics for all paths entering a state. This is done by adding the branch Euclidean distance δ(r_j, v_j) entering that state node to the running path Euclidean metric M_{j-1}(r_0|v_0, r_1|v_1, ..., r_{j-1}|v_{j-1}) of the associated
surviving path. The decoder then selects the smallest running path Euclidean metric per state node. The running path Euclidean metrics are adjusted by normalizing the smallest running path Euclidean metric to zero. Whenever a tie in running path Euclidean metric occurs, one of the paths is chosen by a random selection process. The process repeats in this way as more symbols are received by the decoder. A firm decoding decision is produced by the decoder when the whole message block has been received over the channel. To start the decoding process, the technique employed by the hard-decision minimum-distance decoder can also be used. In a practical implementation, we quantize the output of the coherent BPSK demodulator. As discussed in Section 3.7.3.2, we simply replace the Euclidean distance by the soft-decision distance as a metric measure for decoding. An example is shown below.

Example 3.19 Consider the (7, 4) binary Hamming code with generator matrix

G = [1 0 0 0 1 0 1]
    [0 1 0 0 1 1 1]
    [0 0 1 0 1 1 0]
    [0 0 0 1 0 1 1]

The syndrome-former trellis of this code is shown in Figure 3.15(a). For simplicity, we assume that the transmitted code sequence is the all-zero codeword. The received vector is [4 0 0 0 4 5 0], which has three random errors. The sequence of states of the decoder is shown in Figure 3.15(b). The unnormalized running path soft-decision metrics of the surviving paths are quoted inside angle brackets (<< >>). In the example, a tie is present at iteration step 5, and the ticked path is randomly chosen and retained. The decoding decision is made at the end of the iteration steps. In the example, the decoded code vector is [0 0 0 0 0 0 0], and the corresponding decoded information vector is [0 0 0 0]. Triple errors are corrected with soft-decision decoding.

It can be seen that the syndrome-former trellis has 2^min{k, n-k} states and has one or two paths entering or leaving a state. The decoder holds at most 2^min{k, n-k} surviving paths and performs at most two metric computations per trellis state. The number of metric computations is less than 2 · 2^min{k, n-k} per trellis stage. For an n-stage trellis, the total number of metric computations is less than 2n · 2^min{k, n-k}.
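Example 3.19 can be replayed with the same trellis recursion, swapping the branch metric for the soft-decision distance. This is an illustrative sketch (not the book's program): states are partial syndromes from the columns of H, and the 8-level mapping sends code symbol 0 to level 0 and symbol 1 to level 7.

```python
def soft_viterbi(R, H):
    # Soft-decision Viterbi on the syndrome-former trellis; branch metric
    # is the soft-decision distance |r_j - 7*v_j| (8-level quantization).
    n = len(H[0])
    cols = [tuple(row[j] for row in H) for j in range(n)]
    zero = tuple(0 for _ in H)
    surv = {zero: (0, [])}                      # state -> (metric, path)
    for j in range(n):
        nxt = {}
        for s, (m, path) in surv.items():
            for vj in (0, 1):
                s2 = tuple((a + vj * b) % 2 for a, b in zip(s, cols[j]))
                cand = (m + abs(R[j] - 7 * vj), path + [vj])
                if s2 not in nxt or cand[0] < nxt[s2][0]:
                    nxt[s2] = cand
        surv = nxt
    return surv[zero]

H = [(1,1,1,0,1,0,0), (0,1,1,1,0,1,0), (1,1,0,1,0,0,1)]   # (7, 4) Hamming code
m, v = soft_viterbi((4, 0, 0, 0, 4, 5, 0), H)
print(m, v)  # 13 [0, 0, 0, 0, 0, 0, 0]
```

The final zero-state survivor is the all-zero codeword with soft-decision metric 13, matching Table 3.13 and Example 3.19.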
Figure 3.15 Eight-level soft-decision minimum-distance Viterbi decoding steps.
Thus, a maximum-likelihood Viterbi algorithm decoder using a syndrome-former trellis is particularly more effective than an exhaustive-search maximum-likelihood decoder for high-rate codes.
3.8 Correction of Errors and Erasures The correction of errors and erasures can be accomplished by modifying a standard decoder. Error-and-erasure decoding does not provide extra coding
gain for AWGN channels, but it does provide substantial coding gain over bursty channels. Let us assume that the error-and-erasure decoder uses Hamming distance as the metric measure. Error-and-erasure decoding for (n, k) linear block codes over GF(q) can be performed as follows. Let f be the number of erasure symbols in the received vector R and S be the set of q^f n-tuples which agree with the received vector R in the n - f nonerasure positions.

1. Select a vector in the set S.
2. Determine the code vector which is closest in Hamming distance to that vector in S. Store that code vector.
3. Select another vector in the set S and repeat step 2.
4. Of the q^f code vectors obtained, select the one that differs from R in the smallest number of places outside the f erased positions.
Example 3.20 Assuming that the all-zero code vector is transmitted, the received vector is R = [? ? 0 0 1]. Here, ? denotes an erasure and the right-most received symbol is in error. Let S = {R_0, R_1, R_2, R_3}, where R_0 = [0 0 0 0 1], R_1 = [0 1 0 0 1], R_2 = [1 0 0 0 1], and R_3 = [1 1 0 0 1]. Given the code vectors of a (5, 1) binary repetition code with generator matrix G = [1 1 1 1 1], the computed Hamming distances are shown in Table 3.14. Initially, the decoder decodes R_0, R_1, R_2, and R_3 to the codewords V_0 = [0 0 0 0 0], V_1 = [0 0 0 0 0], V_2 = [0 0 0 0 0], and V_3 = [1 1 1 1 1], respectively. V_0, V_1, and V_2 differ from R in one place outside the two erased positions and V_3 differs from R in two places outside the two erased positions. Since the code vectors V_0, V_1, and V_2 all differ from R in one place outside the two erased positions, one of them is chosen by a random selection process. In the example, V_0 is chosen. The received vector R = [? ? 0 0 1] is decoded to V = [0 0 0 0 0], which corresponds to the estimated information vector Û = [0]. A single error and two erasures are corrected.

Table 3.14
Hamming Distance Metric Computation

Information Vector   Code Vector   d(R_0, V)   d(R_1, V)   d(R_2, V)   d(R_3, V)
0                    00000         1           2           2           3
1                    11111         4           3           3           2

For binary linear block codes, the above decoding procedure can be modified to:
1. Replace the erasure positions in the received vector R by zeros and determine the code vector which is closest in Hamming distance to that vector. Store the code vector as V_0.
2. Replace the erasure positions in the received vector R by ones and determine the code vector which is closest in Hamming distance to that vector. Store the code vector as V_1.
3. From the code vectors V_0 and V_1, select the one which differs from R in the smallest number of places outside the f erased positions.

Thus, we reduce the decoding procedure of binary linear block codes to two steps of erasure filling with error-only decoding. This algorithm always works when 2t_e + f < d_min. First consider the case when the f erasures are filled with zeros and t_0 ≤ f/2 errors are generated in those f erasure positions. The total number of errors is t_e + t_0. If t_e + t_0 ≤ t', then 2t_e + f ≤ 2t'. Errors are correctable when 2t' < d_min. It follows that 2t_e + f ≤ 2t' < d_min. If t_0 > f/2 when the f erasures are filled with zeros, then the total number of errors is t_e + t_0 > t'. This implies 2t_e + f > 2t' and errors are not correctable. On the other hand, if t_0 > f/2, then only (f - t_0) < f/2 errors are generated in those f erasure positions when the f erasures are filled with ones. The total number of errors is t_e + (f - t_0) < t'. We can write 2t_e + f < 2t' and the errors are correctable when 2t' < d_min. It follows that 2t_e + f < 2t' < d_min.

Next consider the case when the f erasures are filled with ones and t_1 ≤ f/2 errors are generated in those f erasure positions. The total number of errors is t_e + t_1. If t_e + t_1 ≤ t', then 2t_e + f ≤ 2t' is implied. Errors are correctable when 2t' < d_min. It follows that 2t_e + f ≤ 2t' < d_min. If t_1 > f/2 when the f erasures are filled with ones, then the total number of errors is t_e + t_1 > t'. This implies 2t_e + f > 2t' and errors are not correctable. On the other hand, if t_1 > f/2, then only (f - t_1) < f/2 errors are generated in those f erasure positions when the f erasures are filled with zeros. The total number of errors is t_e + (f - t_1) < t'. We can write 2t_e + f < 2t' and the errors are correctable when 2t' < d_min. It follows that 2t_e + f < 2t' < d_min. It can be seen that, since in every case either t_0 ≤ f/2 or t_1 ≤ f/2, the algorithm always results in correct decoding.

Consider again the (5, 1) binary repetition code shown in Example 3.20. Let us use this modified algorithm to decode the code. When the erasure positions in the received vector R are replaced by zeros, the decoder picks the all-zero vector as the decoded code vector with unity Hamming metric. When the erasure positions in the received vector R are replaced by ones, the decoder picks the all-one vector as the decoded code vector with Hamming metric of
2. From the two decoded code vectors, the all-zero code vector differs from R in one place outside the two erased positions and the all-one code vector differs from R in two places outside the two erased positions. The decoder selects the all-zero code vector as the decoded codeword. The received vector R = [? ? 0 0 1] is again decoded to V = [0 0 0 0 0], which corresponds to the estimated information vector Û = [0]. A single error and two erasures are corrected.
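The two-pass binary procedure can be sketched as follows. This is an illustrative implementation with hypothetical helper names; erasures are represented by None, and ties inside the error-only decoder are broken arbitrarily rather than randomly.

```python
def fill(R, bit):
    # Replace the erasure positions (None) in R by the given bit.
    return tuple(bit if r is None else r for r in R)

def closest(code, r):
    # Error-only minimum Hamming distance decoding.
    return min(code, key=lambda v: sum(a != b for a, b in zip(v, r)))

def errors_and_erasures_decode(R, code):
    """Two-pass erasure filling with error-only decoding (binary codes):
    decode the zero-filled and one-filled versions of R, then keep the
    candidate that disagrees with R in the fewest non-erased positions."""
    V0 = closest(code, fill(R, 0))
    V1 = closest(code, fill(R, 1))
    outside = lambda v: sum(r is not None and r != vj for r, vj in zip(R, v))
    return min((V0, V1), key=outside)

# (5, 1) repetition code of Example 3.20; None marks an erasure.
code = {(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)}
R = (None, None, 0, 0, 1)
print(errors_and_erasures_decode(R, code))  # (0, 0, 0, 0, 0)
```

For R = [? ? 0 0 1] the zero-filled pass yields the all-zero codeword and the one-filled pass the all-one codeword; the all-zero codeword disagrees with R in only one non-erased place and is selected, reproducing Example 3.20.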
3.9 Performance of Binary Block Codes

If the minimum Hamming distance of a block code is d_min, any two distinct codewords will differ in at least d_min places. No error pattern of d_min - 1 or fewer errors can change one codeword into another codeword after transmission. As a result:

1. The code can detect all error patterns of d_min - 1 or fewer errors.
2. An (n, k) binary linear code can detect 2^n - 2^k error patterns of length n.
3. There are 2^k - 1 undetectable error patterns. This is due to the fact that there are exactly 2^k - 1 nonzero error patterns that are identical to the 2^k - 1 nonzero codewords. They can change a transmitted codeword to another codeword.
4. If the code is used for error detection only on a binary symmetric channel with minimum hard-decision distance decoding, the decoder fails to detect the presence of errors whenever an error pattern changes the transmitted codeword to another codeword. The probability of undetected word error is

P_u = Σ_{i=1}^{n} A_i p'^i (1 - p')^{n-i}    (3.45)

where p' is the channel transition probability, A_i is the number of codewords of weight i in the code, and A_1 = A_2 = ... = A_{d_min - 1} = 0. (3.45) can be written as

P_u = Σ_{i=d_min}^{n} A_i p'^i (1 - p')^{n-i}    (3.46)
5. If the code is used for error correction only on a binary symmetric channel with minimum hard-decision distance decoding, the probability of decoding word error is upper bounded by

P_e ≤ Σ_{i=t+1}^{n} (n choose i) p'^i (1 - p')^{n-i}    (3.47)

where t = ⌊(d_min - 1)/2⌋ is the number of correctable errors, (n choose i) gives the number of error patterns of weight i, and p'^i (1 - p')^{n-i} is the probability of occurrence of a particular weight-i error pattern.
Example 3.21 The codewords of the (4, 1) binary repetition code are 0 0 0 0 and 1 1 1 1. The total number of undetectable error patterns is 2^k - 1 = 1 and the total number of detectable error patterns is 2^n - 2^k = 14. The probability of undetected word error is

P_u = Σ_{i=1}^{n=4} A_i p'^i (1 - p')^{n-i}
    = A_1 p' (1 - p')^3 + A_2 p'^2 (1 - p')^2 + A_3 p'^3 (1 - p') + A_4 p'^4 (1 - p')^0
    = 0 + 0 + 0 + p'^4

The probability of decoding word error is bounded by (3.47). For p' = 0.01, the decoding word error probability on a binary symmetric channel is
P_e ≤ 0.0011800899

Let the probability of selecting an incorrect codeword of weight i be P_d. For coherent BPSK signals with AWGN channels and unquantized soft-decision decoding, it can be shown [12] that
P_d = Q(√(2i(k/n) E_b/N_0))    (3.48)

where

Q(α) = (1/√(2π)) ∫_α^∞ e^{-β²/2} dβ    (3.49)

and E_b/N_0 is the ratio of average bit energy to noise power spectral density. For all incorrectly selected codewords, the decoding word error probability is upper bounded by

P_e ≤ Σ_{i=1}^{n} A_i P_d    (3.50)

and

P_e ≤ Σ_{i=1}^{n} A_i Q(√(2i(k/n) E_b/N_0))    (3.51)
3.10 Computer Simulation Results

The variation of bit error rate with the E_b/N_0 ratio of the block-coded system of Figure 3.10 with coherent BPSK signals, and hard-decision and unquantized soft-decision minimum-distance decoding for the AWGN channel, has been measured by computer simulations. Here, E_b is the average transmitted bit energy, and N_0/2 is the two-sided power spectral density of the noise. The (7, 4) binary Hamming code has been used in the tests. The parameters of the code are given in Table 3.15. Perfect timing and synchronization are assumed.

Table 3.15
Parameters of the (7, 4) Binary Hamming Code

G = [1 0 0 0 1 0 1]        H = [1 1 1 0 1 0 0]
    [0 1 0 0 1 1 1]            [0 1 1 1 0 1 0]
    [0 0 1 0 1 1 0]            [1 1 0 1 0 0 1]
    [0 0 0 1 0 1 1]

In
each test, the average transmitted bit energy was fixed, and the variance of the AWGN was adjusted for a range of average bit error rates. The simulated error performance of the (7, 4) binary Hamming code with hard-decision and unquantized soft-decision minimum-distance decoding is shown in Figure 3.16. Comparisons are made between the coded and uncoded coherent BPSK systems. For a certain range of low E_b/N_0 ratios, an uncoded system always appears to have a better tolerance to noise than the coded systems. At a bit error rate of 10^-4, the (7, 4) binary Hamming code with BPSK signals and unquantized soft-decision minimum-distance decoding gives about 1.8 dB of coding gain over the uncoded BPSK system.
(Curves: uncoded coherent BPSK; hard-decision decoding; unquantized soft-decision decoding.)
Figure 3.16 Performance of the (7, 4) binary Hamming code with hard-decision and unquantized soft-decision minimum-distance decoding in AWGN channels.
References

[1] Singleton, R. C., "Maximum Distance q-ary Codes," IEEE Trans. on Information Theory, Vol. IT-10, No. 1, January 1964, pp. 116-118.
[2] Plotkin, M., "Binary Codes with Specified Minimum Distance," IRE Trans. on Information Theory, Vol. IT-6, No. 4, September 1960, pp. 445-450.
[3] Varshamov, R. R., "Estimate of the Number of Signals in Error Correcting Codes," Doklady Akad. Nauk SSSR, Vol. 117, No. 5, September 1957, pp. 739-741 (in Russian).
[4] Verhoeff, T., "An Updated Table of Minimum-Distance Bounds for Binary Linear Codes," IEEE Trans. on Information Theory, Vol. 33, No. 5, September 1987, pp. 665-680.
[5] Wolf, J. K., "Efficient Maximum Likelihood Decoding of Linear Block Codes Using a Trellis," IEEE Trans. on Information Theory, Vol. IT-24, No. 5, September 1978, pp. 76-80.
[6] Hamming, R. W., "Error Detecting and Error Correcting Codes," Bell System Technical Journal, Vol. 29, April 1950, pp. 147-160.
[7] Muller, D. E., "Application of Boolean Algebra to Switching Circuit Design and to Error Detection," IRE Trans. on Electronic Computers, Vol. 3, No. 5, September 1954, pp. 6-12.
[8] Reed, I. S., "A Class of Multiple-Error-Correcting Codes and the Decoding Scheme," IRE Trans. on Information Theory, Vol. IT-4, No. 5, September 1954, pp. 38-49.
[9] Viterbi, A. J., "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm," IEEE Trans. on Information Theory, Vol. IT-13, No. 2, April 1967, pp. 260-269.
[10] Forney, G. D., Jr., "The Viterbi Algorithm," Proceedings of the IEEE, Vol. 61, No. 3, March 1973, pp. 268-278.
[11] Omura, J. M., "On the Viterbi Decoding Algorithm," IEEE Trans. on Information Theory, Vol. IT-15, No. 1, January 1969, pp. 177-179.
[12] Proakis, J. G., Digital Communications, 3rd ed., New York: McGraw-Hill, 1995.
4
Cyclic Codes

4.1 Introduction

Most of the known good codes belong to a class of codes called linear codes. However, the implementation complexity of decoders becomes impractical for linear codes with very large block length n. Linear codes with an extra degree of algebraic structure are most welcome, in the hope that the decoding complexity can be reduced. Cyclic codes offer such additional structure [1-3]. Cyclic codes form an important subclass of linear codes. These codes are important because their underlying Galois field description leads to encoding and decoding procedures that are computationally efficient. The treatment here will concentrate on the basic principles of binary cyclic codes and the syndrome decoding of the codes.
4.2 Polynomial Description of Cyclic Codes

Definition 4.1. An (n, k) linear code is cyclic if every cyclic shift of a codeword (codevector) is also a codeword (codevector) in the code.

A cyclic shift of a codeword of length n, represented by an n-tuple codevector V = [v_0 v_1 ... v_{n-1}], j times to the right is another n-tuple codevector V^(j) = [v_{n-j} v_{n-j+1} ... v_{n-1} v_0 v_1 ... v_{n-j-1}]. Clearly, cyclically shifting V j places to the right is the same as cyclically shifting V n - j places to the left. For convenience, we always shift V to the right.
Example 4.1

V = [v_0 v_1 v_2 v_3] = [1 0 1 1]
V^(2) = [1 1 1 0]
The codevector of a cyclic code may also be expressed in polynomial form with indeterminate x.

v(x) = v_{n-1}x^{n-1} + v_{n-2}x^{n-2} + ... + v_1 x + v_0        (4.1)

is called the code polynomial of the codevector V. For (n, k) cyclic codes, the code polynomial has a degree of n - 1 (v_{n-1} ≠ 0) or less (v_{n-1} = 0). Clearly, the code polynomial that corresponds to the codevector V^(j) is

v^(j)(x) = v_{n-j-1}x^{n-1} + v_{n-j-2}x^{n-2} + ... + v_0 x^j + v_{n-1}x^{j-1} + ... + v_{n-j+1}x + v_{n-j}        (4.2)

v(x) has the following property:
x^j v(x) = v_{n-1}x^{n+j-1} + v_{n-2}x^{n+j-2} + ... + v_{n-j+1}x^{n+1} + v_{n-j}x^n
         + v_{n-j-1}x^{n-1} + v_{n-j-2}x^{n-2} + ... + v_1 x^{j+1} + v_0 x^j        (4.3)

Adding and subtracting the terms v_{n-1}x^{j-1} + v_{n-2}x^{j-2} + ... + v_{n-j+1}x + v_{n-j}, we can write

x^j v(x) = q(x)(x^n - 1) + v^(j)(x)        (4.4)

where

q(x) = v_{n-1}x^{j-1} + v_{n-2}x^{j-2} + ... + v_{n-j+1}x + v_{n-j}        (4.5)

and v^(j)(x) is the remainder of x^j v(x)/(x^n - 1), denoted as Rem{x^j v(x)/(x^n - 1)}.
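The shift-and-remainder property in (4.4) can be checked numerically. Below is a small Python sketch (the helper names are mine, not the book's); a coefficient list stores the coefficient of x^i at index i, and arithmetic is over GF(2):

```python
def poly_mod_xn_minus_1(coeffs, n):
    """Reduce a GF(2) polynomial modulo x^n - 1 (over GF(2), x^n + 1)."""
    out = [0] * n
    for power, bit in enumerate(coeffs):
        out[power % n] ^= bit          # x^(n+i) is congruent to x^i
    return out

def cyclic_shift_right(v, j):
    """j-th right cyclic shift of the codevector v = [v_0 v_1 ... v_{n-1}]."""
    n = len(v)
    return [v[(i - j) % n] for i in range(n)]

v = [1, 0, 1, 1]                       # Example 4.1: v(x) = x^3 + x^2 + 1
xjv = [0, 0] + v                       # coefficients of x^2 v(x)
print(poly_mod_xn_minus_1(xjv, 4))     # [1, 1, 1, 0]
print(cyclic_shift_right(v, 2))        # [1, 1, 1, 0]
```

For the codevector of Example 4.1 and j = 2, both computations give [1 1 1 0], i.e., V^(2).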
Definition 4.2. A cyclic code of length n is an ideal of the polynomial ring GF(q)[x] modulo-(x^n - 1), where GF(q)[x] modulo-(x^n - 1) denotes the set of polynomials in x with coefficients from GF(q) modulo-(x^n - 1).
By Theorem 2.11, every ideal of the polynomial ring GF(q)[x] modulo-(x^n - 1) is a principal ideal. A principal ideal simply consists of all multiples of a polynomial g(x). The polynomial g(x) is called a generator polynomial of the ideal. An (n, k) cyclic code may be defined in terms of a generator polynomial g(x), where

g(x) = g_{n-k}x^{n-k} + g_{n-k-1}x^{n-k-1} + ... + g_1 x + g_0        (4.6)
and g_i ∈ GF(q) for 0 ≤ i ≤ (n - k). g(x) is unique and of minimum degree, because a new polynomial of degree less than n - k could only be constructed by subtraction (modulo-2 addition for binary codes) of g(x) of degree n - k and a polynomial of the same degree, if such a polynomial existed. All the parameters of a cyclic code can be determined from its generator polynomial g(x) and the code has the following properties:

Theorem 4.1. The generator polynomial g(x) of minimum degree n - k of an (n, k) cyclic code divides x^n - 1.

Proof. Dividing x^n - 1 by g(x), we get x^n - 1 = h(x)g(x) + b(x), where h(x) is the quotient and b(x) is the remainder. The remainder b(x) can be expressed as b(x) = [(x^n - 1) - h(x)g(x)] modulo-(x^n - 1) = -h(x)g(x) modulo-(x^n - 1). By Definition 4.2 and Theorem 2.11, h(x)g(x) is a code polynomial. Hence, the remainder b(x) is a code polynomial and has a degree less than the degree of g(x). But the generator polynomial g(x) of minimum degree is unique; the only such code polynomial of degree less than the degree of g(x) is b(x) = 0. This says x^n - 1 = h(x)g(x) and g(x) divides x^n - 1.

For large n, x^n - 1 may have many factors of degree n - k. Some of these factors (polynomials) generate good cyclic codes and some generate bad cyclic codes.
Let u(x) be the information polynomial of the information vector U = [u_0 u_1 ... u_{k-1}], where

u(x) = u_{k-1}x^{k-1} + u_{k-2}x^{k-2} + ... + u_1 x + u_0        (4.7)

and u_i ∈ GF(q) for 0 ≤ i ≤ (k - 1). The encoding operation can be expressed as
v(x) = u(x)g(x)        (4.8)
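For the binary case, the encoding operation (4.8) is just polynomial multiplication over GF(2). A minimal Python sketch (helper names are my own, not the book's):

```python
def gf2_poly_mul(a, b):
    """Multiply two GF(2) polynomials; index i holds the coefficient of x^i."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

g = [1, 1, 0, 1]            # g(x) = x^3 + x + 1, generator of a (7, 4) code
u = [0, 1, 1, 1]            # u(x) = x^3 + x^2 + x
v = gf2_poly_mul(u, g)      # v(x) = u(x)g(x) = x^6 + x^5 + x
print(v)                    # [0, 1, 0, 0, 0, 1, 1]
```

The result is a (nonsystematic) codeword of the (7, 4) binary cyclic code; systematic encoding is treated below.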
Since the generator polynomial g(x) of degree n - k for an (n, k) cyclic code divides x^n - 1, we can write

x^n - 1 = h(x)g(x)        (4.9)
where

h(x) = h_k x^k + h_{k-1}x^{k-1} + ... + h_1 x + h_0        (4.10)

and h_j ∈ GF(q) for 0 ≤ j ≤ k. h(x) is called the parity-check polynomial of an (n, k) cyclic code and can be explained as follows. Multiplying v(x) by h(x), we get

v(x)h(x) = u(x)g(x)h(x)        (4.11)
where the left-hand side of equation (4.11) is

v(x)h(x) = ... + (v_0 h_k + v_1 h_{k-1} + ... + v_{k-1}h_1 + v_k h_0)x^k
         + (v_1 h_k + v_2 h_{k-1} + ... + v_k h_1 + v_{k+1}h_0)x^{k+1}
         + ... + (v_{n-k-2}h_k + v_{n-k-1}h_{k-1} + ... + v_{n-3}h_1 + v_{n-2}h_0)x^{n-2}
         + (v_{n-k-1}h_k + v_{n-k}h_{k-1} + ... + v_{n-2}h_1 + v_{n-1}h_0)x^{n-1} + ...

and the right-hand side of equation (4.11) is

u(x)g(x)h(x) = u(x)(x^n - 1) = u(x)x^n - u(x)
             = u_{k-1}x^{n+k-1} + u_{k-2}x^{n+k-2} + ... + u_1 x^{n+1} + u_0 x^n
             + 0x^{n-1} + 0x^{n-2} + ... + 0x^{k+1} + 0x^k
             - u_{k-1}x^{k-1} - u_{k-2}x^{k-2} - ... - u_1 x - u_0

Equating the coefficients of x^{n-1}, x^{n-2}, ..., x^{k+1}, x^k on both sides of equation (4.11), we obtain the following n - k equalities:

Σ_{i=0}^{k} v_{(n+k-1)-l-i} h_i = 0        (4.12)
for l = n - 1, n - 2, ..., k + 1, k. If the coefficients v_{n-1}, v_{n-2}, ..., v_{n-k+1}, v_{n-k} in the code polynomial v(x) are taken as the information digits, the n - k equalities define the parity-check digits v_{n-k-1}, v_{n-k-2}, ..., v_1, v_0 of an (n, k) cyclic code and h(x) is therefore called the parity-check polynomial of a cyclic code.

Theorem 4.2. Let g(x) of minimum degree n - k and h(x) of degree k be the generator and parity-check polynomials of an (n, k) cyclic code C, respectively. C⊥, the dual code of C, is generated by the polynomial x^k h(x^{-1}) = h_0 x^k + h_1 x^{k-1} + ... + h_{k-1}x + h_k of degree k, where x^k h(x^{-1}) is the reciprocal polynomial of h(x).

Proof. By Theorem 4.1, the generator polynomial g(x) of minimum degree n - k for an (n, k) cyclic code divides x^n - 1, so we can write x^n - 1 = h(x)g(x). x^n - 1 = h(x)g(x) implies x^{-n} - 1 = h(x^{-1})g(x^{-1})
(1 - x^n) = x^n h(x^{-1})g(x^{-1})
-(x^n - 1) = [x^k h(x^{-1})][x^{n-k}g(x^{-1})]
Thus, x^k h(x^{-1}) divides x^n - 1 and the polynomial x^k h(x^{-1}) of degree k is the generator polynomial for the dual code of C.
There is an interesting relationship between the weight structure of a code and the weight structure of its dual code. Let a polynomial A(x) = A_n x^n + A_{n-1}x^{n-1} + ... + A_1 x + A_0 be the weight enumerator of an (n, k) linear code, where A_i denotes the number of codewords of weight i in the (n, k) linear code. Also, let B(x) = B_n x^n + B_{n-1}x^{n-1} + ... + B_1 x + B_0 be the weight enumerator of its dual code, where B_i denotes the number of codewords of weight i in the (n, n - k) dual code. The weight enumerator A(x) is related to B(x) by the MacWilliams identity [4] as

A(x) = 2^{-(n-k)} (x + 1)^n B((1 - x)/(1 + x))        (4.13)
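The identity (4.13) can be verified by brute force for the (7, 4) binary cyclic code generated by g(x) = x^3 + x + 1 and its (7, 3) dual. The sketch below (Python; all helper names are mine) expands the equivalent form A(x) = 2^{-(n-k)} Σ_i B_i (1 - x)^i (1 + x)^{n-i} with integer polynomial arithmetic and compares it with a direct enumeration of the code:

```python
from itertools import product

def gf2_poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def weight_distribution(gen, k):
    """A_0..A_n for the cyclic code {u(x)g(x) : deg u(x) < k}."""
    n = k + len(gen) - 1
    A = [0] * (n + 1)
    for u in product([0, 1], repeat=k):
        A[sum(gf2_poly_mul(list(u), gen))] += 1
    return A

def int_poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def int_poly_pow(p, e):
    out = [1]
    for _ in range(e):
        out = int_poly_mul(out, p)
    return out

n, k = 7, 4
A = weight_distribution([1, 1, 0, 1], k)         # g(x) = x^3 + x + 1
B = weight_distribution([1, 0, 1, 1, 1], n - k)  # dual generator x^4 h(x^-1)
acc = [0] * (n + 1)
for i, Bi in enumerate(B):                        # sum B_i (1-x)^i (1+x)^(n-i)
    term = int_poly_mul(int_poly_pow([1, -1], i), int_poly_pow([1, 1], n - i))
    for d in range(n + 1):
        acc[d] += Bi * term[d]
A_from_dual = [c // 2 ** (n - k) for c in acc]
print(A, A_from_dual)    # both [1, 0, 0, 7, 7, 0, 0, 1]
```

The dual here is the (7, 3) simplex code with B(x) = 1 + 7x^4; the identity recovers A(x) = 1 + 7x^3 + 7x^4 + x^7 exactly.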
Often, the weight distribution of a code is not readily determined, but the weight distribution of its dual code is known. In this case, we can determine the weight distribution of the code from its dual code, and hence the probability of undetected word error of the code.
Given g(x) of an (n, k) cyclic code, we can put the code into systematic form. Let u(x) = u_{k-1}x^{k-1} + u_{k-2}x^{k-2} + ... + u_1 x + u_0 be an information polynomial. Dividing x^{n-k}u(x) by g(x), we get x^{n-k}u(x) = a(x)g(x) + b(x),
where a(x) is the quotient and b(x) = b_{n-k-1}x^{n-k-1} + b_{n-k-2}x^{n-k-2} + ... + b_1 x + b_0 is the remainder. Write x^{n-k}u(x) - b(x) = a(x)g(x). By Definition 4.2 and Theorem 2.11, a(x)g(x) is a code polynomial. This implies x^{n-k}u(x) - b(x) is also a code polynomial. Hence, systematic encoding of cyclic codes consists of the following operations:

1. Form x^{n-k}u(x).
2. Find the remainder b(x) from x^{n-k}u(x)/g(x).
3. Form

v(x) = x^{n-k}u(x) - b(x)        (4.14)

where

b(x) = b_{n-k-1}x^{n-k-1} + b_{n-k-2}x^{n-k-2} + ... + b_1 x + b_0        (4.15)
Substituting equations (4.7) and (4.15) into equation (4.14), we get

v(x) = u_{k-1}x^{n-1} + u_{k-2}x^{n-2} + ... + u_1 x^{n-k+1} + u_0 x^{n-k}
     - b_{n-k-1}x^{n-k-1} - b_{n-k-2}x^{n-k-2} - ... - b_1 x - b_0        (4.16)

For binary cyclic codes, -b_i = b_i for 0 ≤ i ≤ n - k - 1.
Example 4.2. A (7, 4) binary cyclic code is generated by g(x) = x^3 + x + 1. Given u(x) = x^3 + x^2 + x, find v(x).

1. x^{7-4}u(x) = x^6 + x^5 + x^4.
2. By long division and modulo-2 addition, the remainder b(x) = x^2.
3. v(x) = x^{7-4}u(x) + b(x) = x^6 + x^5 + x^4 + x^2.
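The three encoding steps can be reproduced with GF(2) long division in Python (a sketch; the helper name is mine, not the book's):

```python
def gf2_divmod(num, den):
    """GF(2) polynomial long division; returns (quotient, remainder).
    Index i of each list holds the coefficient of x^i."""
    num = num[:]
    q = [0] * max(len(num) - len(den) + 1, 1)
    for shift in range(len(num) - len(den), -1, -1):
        if num[shift + len(den) - 1]:
            q[shift] = 1
            for i, d in enumerate(den):
                num[shift + i] ^= d
    return q, num[:len(den) - 1]

g = [1, 1, 0, 1]               # g(x) = x^3 + x + 1
u = [0, 1, 1, 1]               # u(x) = x^3 + x^2 + x
xu = [0, 0, 0] + u             # step 1: x^(7-4) u(x) = x^6 + x^5 + x^4
q, b = gf2_divmod(xu, g)       # step 2: remainder b(x) = x^2
v = b + u                      # step 3: v(x) = x^(7-4)u(x) + b(x) (binary case)
print(b, v)                    # [0, 0, 1] [0, 0, 1, 0, 1, 1, 1]
```

The parity digits occupy the low-order positions and the information digits the high-order positions, as in (4.16).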
The generator polynomial g(x) of degree 3 generates a (7, 4) binary cyclic code because g(x) divides x^7 + 1.
                          x^4       + x^2 + x + 1
x^3 + x + 1 ) x^7                                + 1
              x^7 + x^5 + x^4
                    x^5 + x^4                    + 1
                    x^5       + x^3 + x^2
                          x^4 + x^3 + x^2        + 1
                          x^4       + x^2 + x
                                x^3       + x    + 1
                                x^3       + x    + 1
                                                   0
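The division above can be checked programmatically; a sketch (Python, helper name my own) confirming the quotient x^4 + x^2 + x + 1 and zero remainder:

```python
def gf2_divmod(num, den):
    """GF(2) polynomial long division; index i holds the coefficient of x^i."""
    num = num[:]
    q = [0] * max(len(num) - len(den) + 1, 1)
    for shift in range(len(num) - len(den), -1, -1):
        if num[shift + len(den) - 1]:
            q[shift] = 1
            for i, d in enumerate(den):
                num[shift + i] ^= d
    return q, num[:len(den) - 1]

x7_plus_1 = [1, 0, 0, 0, 0, 0, 0, 1]              # x^7 + 1
q, r = gf2_divmod(x7_plus_1, [1, 1, 0, 1])        # divide by g(x) = x^3 + x + 1
print(q, r)   # [1, 1, 1, 0, 1] [0, 0, 0]  i.e. h(x) = x^4 + x^2 + x + 1, r = 0
```

The quotient is the parity-check polynomial h(x) of (4.9) for this code.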
4.3 Matrix Description of Cyclic Codes

A convenient way to construct the generator matrix from the generator polynomial of an (n, k) q-ary cyclic code is as follows. Let g(x) = g_{n-k}x^{n-k} + g_{n-k-1}x^{n-k-1} + ... + g_1 x + g_0 be the generator polynomial of an (n, k) cyclic code C. The code polynomials are of the form v(x) = u(x)g(x) = u_{k-1}x^{k-1}g(x) + u_{k-2}x^{k-2}g(x) + ... + u_1 x g(x) + u_0 g(x), and there are q^k code polynomials. Putting the coefficients of the code polynomials g(x), xg(x), ..., x^{k-1}g(x) in vector form, we have

    [ G_0     ]   [ g_0  g_1  g_2  ...  g_{n-k}    0        0  ...  0       ]
G = [ G_1     ] = [ 0    g_0  g_1  ...  g_{n-k-1}  g_{n-k}  0  ...  0       ]        (4.17)
    [ ...     ]   [ ...                                                     ]
    [ G_{k-1} ]   [ 0    0    ...  0    g_0        g_1      ...     g_{n-k} ]
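A sketch of the construction (4.17) in Python (my own function name): each row is the previous row of g-coefficients shifted one place to the right.

```python
def generator_matrix(g, n, k):
    """k-by-n matrix whose rows are g(x), x g(x), ..., x^(k-1) g(x)."""
    return [[0] * i + g + [0] * (n - len(g) - i) for i in range(k)]

G = generator_matrix([1, 1, 0, 1], 7, 4)   # g(x) = x^3 + x + 1
for row in G:
    print(row)
# [1, 1, 0, 1, 0, 0, 0]
# [0, 1, 1, 0, 1, 0, 0]
# [0, 0, 1, 1, 0, 1, 0]
# [0, 0, 0, 1, 1, 0, 1]
```

This is the matrix used again in Example 4.3 below.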
The set of k row vectors of the k-by-n matrix G are linearly independent vectors and all the linear combinations of G_0, G_1, ..., G_{k-1} of the form V = u_0 G_0 + u_1 G_1 + ... + u_{k-1}G_{k-1} form a k-dimensional subspace of the
vector space of all n-tuples over GF(q). By Definition 3.7, G as constructed is indeed a generator matrix of an (n, k) cyclic code C. In a similar fashion, we can form an (n - k)-by-n matrix H from the generator polynomial x^k h(x^{-1}) of code C⊥, where

    [ h_k  h_{k-1}  h_{k-2}  ...  h_0      0    ...  0   ]
H = [ 0    h_k      h_{k-1}  ...  h_1      h_0  ...  0   ]        (4.18)
    [ ...                                                ]
    [ 0    0        ...      0    h_k      h_{k-1} ... h_0 ]
H is the generator matrix of the dual code C⊥. Also, it can be seen that equation (4.12) is equivalent to VH^T = 0, where V = [v_0 v_1 ... v_{n-1}]. It follows that any codevector in C is orthogonal to every row of H. Thus, H is the parity-check matrix of the code C.
Given g(x) of an (n, k) cyclic code, we can also put the code generator matrix into systematic form G_SEF. Recall equation (4.14):
v(x) = x^{n-k}u(x) - b(x)        (4.19)
where

b(x) = Rem{x^{n-k}u(x)/g(x)}        (4.20)
which transforms the cyclic code into systematic form. Suppose we form a remainder b_i(x) from x^{n-k+i}u(x)/g(x); we obtain

x^{n-k+i}u(x) = a_i(x)g(x) + b_i(x)        (4.21)

a_i(x)g(x) is a multiple of g(x); i.e., a code polynomial corresponding to a codevector (codeword) for 0 ≤ i ≤ k - 1 and

b_i(x) = b_{i,n-k-1}x^{n-k-1} + b_{i,n-k-2}x^{n-k-2} + ... + b_{i,1}x + b_{i,0}        (4.22)
The codevector form of x^{n-k+i}u(x) - b_i(x) is

V_i = [-b_{i,0} -b_{i,1} ... -b_{i,n-k-1} u_0 u_1 ... u_i ... u_{k-1}]        (4.23)

with u_i = 1 and u_j = 0 for j ≠ i. Cyclically shifting the vector V_i k times to the right, we get V_i^(k) = [u_0 u_1 ... u_i ... u_{k-1} -b_{i,0} -b_{i,1} ... -b_{i,n-k-1}]. By placing the vector V_i^(k) as the i-th row of G_SEF, we obtain
        [ 1 0 ... 0   -b_{0,0}    -b_{0,1}    ...  -b_{0,n-k-1}   ]
G_SEF = [ 0 1 ... 0   -b_{1,0}    -b_{1,1}    ...  -b_{1,n-k-1}   ]        (4.24)
        [ ...                                                     ]
        [ 0 0 ... 1   -b_{k-1,0}  -b_{k-1,1}  ...  -b_{k-1,n-k-1} ]
which is the generator matrix of code C in systematic form. The corresponding systematic parity-check matrix H_SEF is

        [ b_{0,0}      b_{1,0}      ...  b_{k-1,0}      1 0 ... 0 ]
H_SEF = [ b_{0,1}      b_{1,1}      ...  b_{k-1,1}      0 1 ... 0 ]        (4.25)
        [ ...                                                     ]
        [ b_{0,n-k-1}  b_{1,n-k-1}  ...  b_{k-1,n-k-1}  0 0 ... 1 ]
From our earlier treatment of matrix and linear block code theory, we can also obtain the standard-echelon-form G_SEF from the generator matrix G of the code by row/column transformations. For an (n, k) cyclic code, we only perform row interchange and combination operations on the generator matrix G to obtain G_SEF. Column interchange and combination operations are not possible, as these destroy the cyclic properties of the code. H_SEF can then be found by taking the corresponding elements in G_SEF as the elements in H_SEF according to equations (4.24) and (4.25).

Example 4.3. Given the generator matrix of a (7, 4) binary cyclic code
    [ G_0 ]   [ 1 1 0 1 0 0 0 ]
G = [ G_1 ] = [ 0 1 1 0 1 0 0 ]
    [ G_2 ]   [ 0 0 1 1 0 1 0 ]
    [ G_3 ]   [ 0 0 0 1 1 0 1 ]

the standard-echelon-form of G is

        [ G_0' ]   [ 1 0 0 0 1 1 0 ]
G_SEF = [ G_1' ] = [ 0 1 0 0 0 1 1 ]
        [ G_2' ]   [ 0 0 1 0 1 1 1 ]
        [ G_3' ]   [ 0 0 0 1 1 0 1 ]

where G_0' := G_0 + G_1 + G_2, G_1' := G_1 + G_2 + G_3, G_2' := G_2 + G_3, and G_3' := G_3,
and

        [ 1 0 1 1 1 0 0 ]
H_SEF = [ 1 1 1 0 0 1 0 ]
        [ 0 1 1 1 0 0 1 ]
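The systematic matrices of Example 4.3 can also be built directly from g(x), since row i of G_SEF combines the i-th unit vector with b_i(x) = Rem{x^{n-k+i}/g(x)}. A Python sketch (my own helper names; the binary case, so -b = b):

```python
def gf2_rem(num, den):
    """Remainder of GF(2) polynomial division; index i holds the x^i coefficient."""
    num = num[:]
    for shift in range(len(num) - len(den), -1, -1):
        if num[shift + len(den) - 1]:
            for i, d in enumerate(den):
                num[shift + i] ^= d
    return num[:len(den) - 1]

n, k = 7, 4
g = [1, 1, 0, 1]                                         # g(x) = x^3 + x + 1
b = [gf2_rem([0] * (n - k + i) + [1], g) for i in range(k)]   # b_i(x)
G_sef = [[int(j == i) for j in range(k)] + b[i] for i in range(k)]
H_sef = [[b[i][j] for i in range(k)] + [int(m == j) for m in range(n - k)]
         for j in range(n - k)]                          # eq. (4.25) layout
# every row of G_SEF is orthogonal to every row of H_SEF over GF(2)
assert all(sum(gr[t] & hr[t] for t in range(n)) % 2 == 0
           for gr in G_sef for hr in H_sef)
print(G_sef[0], H_sef[0])   # [1, 0, 0, 0, 1, 1, 0] [1, 0, 1, 1, 1, 0, 0]
```

The rows agree with the G_SEF and H_SEF obtained by row operations in Example 4.3 (up to the information/parity ordering used there).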
4.4 Encoding of Cyclic Codes

Recalling (4.14), the code polynomial is

v(x) = x^{n-k}u(x) - b(x)        (4.26)
     = v_s(x) - b(x)        (4.27)
where v_s(x) = x^{n-k}u(x). Systematic encoding of an (n, k) cyclic code consists of multiplying u(x) by x^{n-k}, computing the remainder b(x) of x^{n-k}u(x)/g(x), and forming the code polynomial v(x). A linear (n - k)-stage shift register with a feedback connection circuit can simultaneously accomplish the multiplication and division tasks. Such an encoding circuit for binary cyclic codes is shown in Figure 4.1.

1. With switches SW_1 closed and SW_2 at position 1, the information digits u_{k-1}, u_{k-2}, ..., u_0 are transmitted and shifted in this order into the circuit. The circuit simultaneously performs the x^{n-k}u(x) and x^{n-k}u(x)/g(x) operations. After k shifts, the coefficients b_0, b_1, ..., b_{n-k-1} in the register form the remainder b(x).
Figure 4.1 (n, k) binary cyclic encoder.
2. With switches SW_1 open and SW_2 at position 2, the coefficients b_{n-k-1}, b_{n-k-2}, ..., b_0 are shifted out and transmitted in this order.

It can be seen that the coefficient associated with the highest-order position of the code polynomial v(x) is transmitted first.
Figure 4.2 shows a (7, 4) cyclic encoder generated by g(x) = x^3 + x + 1. For an information polynomial u(x) = x^3 + x^2 + x, the shift-register contents of the encoder are shown in Table 4.1. After 4 shifts, b(x) = x^2 and v(x) = x^6 + x^5 + x^4 + x^2. The vector form of u(x) is [0 1 1 1] and the codevector form of x^{n-k}u(x) + b(x) is [0 0 1 0 1 1 1]. The right-most digit is transmitted first.
4.5 Decoding of Cyclic Codes

4.5.1 Syndrome Decoding

We have seen that for a systematic linear block code, the syndrome is the vector sum of the received parity-check digits and the parity-check digits
Figure 4.2 (7, 4) binary cyclic encoder.
Table 4.1 Shift-Register Contents

Shift   Input   b0 := Input + b2   b1 := Input + b0 + b2   b2 := b1
0       -       0                  0                       0
1       1       1                  1                       0
2       1       1                  0                       1
3       1       0                  1                       0
4       0       0                  0                       1
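Table 4.1 can be reproduced by simulating the division register in Python (a behavioural sketch with my own names, assuming g_0 = 1; the register update mirrors the feedback taps of g(x)):

```python
def encoder_register_history(g, info_bits):
    """Track the (n-k)-stage register of the divide-by-g(x) encoder.
    info_bits enter highest-order digit first; g is [g_0, ..., g_{n-k}], g_0 = 1."""
    r = len(g) - 1
    reg = [0] * r
    history = [reg[:]]
    for bit in info_bits:
        fb = bit ^ reg[-1]                        # feedback = input + b_{n-k-1}
        reg = [fb] + [reg[i - 1] ^ (g[i] & fb) for i in range(1, r)]
        history.append(reg[:])
    return history

hist = encoder_register_history([1, 1, 0, 1], [1, 1, 1, 0])   # u_3, u_2, u_1, u_0
for row in hist:
    print(row)
# [0, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 0], [0, 0, 1]
```

The final register contents [b0 b1 b2] = [0 0 1] give b(x) = x^2, matching the table.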
recomputed from the received information digits. For an (n, k) systematic cyclic code, the syndrome vector S = [s_0 s_1 ... s_{n-k-1}] can be represented by a polynomial s(x) of degree n - k - 1 or less, and s(x) can be found by summing a received parity polynomial and a parity polynomial recomputed from the information part of the received polynomial r(x).
Let v(x) and e(x) be the code polynomial of a systematic cyclic code and an error polynomial of degree n - 1 or less, respectively. Recalling (4.27), the code polynomial can be expressed as v(x) = v_s(x) - b(x). Let an error polynomial of degree n - 1 or less be expressed as

e(x) = e_s(x) + e_c(x)        (4.28)

where

e_s(x) = e_{n-1}x^{n-1} + e_{n-2}x^{n-2} + ... + e_{n-k}x^{n-k}        (4.29)

and

e_c(x) = e_{n-k-1}x^{n-k-1} + e_{n-k-2}x^{n-k-2} + ... + e_1 x + e_0        (4.30)
In the presence of transmission errors, the received polynomial of degree n - 1 or less is given by

r(x) = v(x) + e(x)        (4.31)
     = v_s(x) - b(x) + e_s(x) + e_c(x)        (4.32)
     = v_s'(x) + b'(x)        (4.33)

where v_s'(x) = v_s(x) + e_s(x) of degree n - 1 or less and b'(x) = -b(x) + e_c(x) of degree n - k - 1 or less. Dividing r(x) by g(x), we get

r(x) = q(x)g(x) + s(x)        (4.34)

where q(x) is the quotient and s(x) is the remainder. Substituting (4.33) into (4.34) and dividing (4.34) by g(x), we obtain

[v_s'(x) + b'(x)]/g(x) = q(x) + s(x)/g(x)        (4.35)

By taking the remainders of (4.35), we have
Rem{v_s'(x)/g(x)} + Rem{b'(x)/g(x)} = Rem{s(x)/g(x)}

Since the degrees of s(x) and b'(x) are less than the degree of g(x), we can write

Rem{v_s'(x)/g(x)} + b'(x) = s(x)        (4.36)
s(x) is the sum of the received parity polynomial b'(x) and Rem{v_s'(x)/g(x)}, the parity polynomial recomputed from the information part of r(x). s(x) is called the syndrome polynomial and

s(x) = s_{n-k-1}x^{n-k-1} + s_{n-k-2}x^{n-k-2} + ... + s_1 x + s_0        (4.37)
Equating (4.31) and (4.34), we obtain

q(x)g(x) + s(x) = v(x) + e(x)        (4.38)

Since the code polynomial, v(x), is a multiple of g(x), we can write

q(x)g(x) + s(x) = a(x)g(x) + e(x)        (4.39)

and

e(x) = [q(x) - a(x)]g(x) + s(x)        (4.40)
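Equations (4.38)-(4.40) imply that the syndrome depends only on the error pattern, which can be checked numerically (a Python sketch; helper names are mine):

```python
def gf2_rem(num, den):
    """Remainder of GF(2) polynomial division; index i holds the x^i coefficient."""
    num = num[:]
    for shift in range(len(num) - len(den), -1, -1):
        if num[shift + len(den) - 1]:
            for i, d in enumerate(den):
                num[shift + i] ^= d
    return num[:len(den) - 1]

g = [1, 1, 0, 1]                     # g(x) = x^3 + x + 1
v = [0, 0, 1, 0, 1, 1, 1]            # codeword of Example 4.2
e = [0, 0, 0, 0, 0, 1, 0]            # single error at x^5
r = [vi ^ ei for vi, ei in zip(v, e)]
print(gf2_rem(r, g), gf2_rem(e, g))  # both [1, 1, 1]: the codeword drops out
```

The syndrome of r(x) equals Rem{e(x)/g(x)}, independent of which codeword was sent.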
(4.40) shows that the syndrome polynomial s(x) = Rem{e(x)/g(x)}. For cyclic codes, the syndrome polynomial s(x) has the following property.

Theorem 4.3. Let s(x) be the syndrome polynomial of a received polynomial r(x). Then s^(j)(x), the j-th cyclic shift of s(x), is the syndrome polynomial of r^(j)(x), where s^(j)(x) is the remainder of x^j s(x)/g(x) and r^(j)(x) is the j-th cyclic shift of r(x).

Proof. Replacing v(x) with r(x) and v^(j)(x) with r^(j)(x) in (4.4), we get

x^j r(x) = q(x)(x^n - 1) + r^(j)(x)        (4.41)

where

q(x) = r_{n-1}x^{j-1} + r_{n-2}x^{j-2} + ... + r_{n-j+1}x + r_{n-j}        (4.42)

Substituting (4.9) into (4.41), we get
r^(j)(x) = x^j r(x) - q(x)h(x)g(x)        (4.43)

Dividing r^(j)(x) and r(x) by g(x), we get

r^(j)(x) = a(x)g(x) + p(x)        (4.44)

and

r(x) = b(x)g(x) + s(x)        (4.45)

where a(x) and b(x) are the quotients; p(x) and s(x) are the remainders and they are the syndromes of r^(j)(x) and r(x), respectively. Substituting (4.44) and (4.45) into (4.43), we get

a(x)g(x) + p(x) = x^j[b(x)g(x) + s(x)] - q(x)h(x)g(x)        (4.46)

Rearranging (4.46), we have

x^j s(x) = [a(x) + q(x)h(x) - x^j b(x)]g(x) + p(x)        (4.47)
p(x) is the remainder of x^j s(x)/g(x). Therefore, p(x) = s^(j)(x). Since p(x) is the syndrome of r^(j)(x), s^(j)(x) is the syndrome polynomial of r^(j)(x).
It can be seen from (4.34) and (4.40) that a decoder can compute s(x) from r(x) and estimate e(x) based on s(x). Decoding of cyclic codes, therefore, consists of (1) syndrome computation, (2) error pattern detection, and (3) error correction. The error pattern detection simply maps the coefficients of the syndrome polynomial s(x) to an error pattern and can be implemented by a decoding look-up table. The error correction is usually carried out in a parallel manner. However, the cyclic structure of a cyclic code enables us to use the same error pattern detection circuitry to decode each received digit in a serial manner. A general decoder for an (n, k) binary cyclic code is shown in Figure 4.3 and explained as follows.
Let r(x) = r_{n-1}x^{n-1} + r_{n-2}x^{n-2} + ... + r_1 x + r_0 and e(x) = e_{n-1}x^{n-1} + e_{n-2}x^{n-2} + ... + e_1 x + e_0 be a received polynomial and an error polynomial, respectively. With switches SW_1 and SW_3 closed and the other switches opened, the received digits r_0, r_1, ..., r_{n-1} are shifted into the buffer register and the syndrome register from the left. After n shifts, the contents of the syndrome register form the coefficients of the syndrome polynomial s(x) = s_{n-k-1}x^{n-k-1} + s_{n-k-2}x^{n-k-2} + ... + s_1 x + s_0. As soon as the syndrome digits have been computed, switch SW_1 is opened and the other switches are closed. The error pattern detector tests whether the syndrome polynomial s(x) corresponds to a correctable error pattern that has an error in the highest-order position x^{n-1}.

Figure 4.3 Syndrome decoder for (n, k) binary cyclic code.

If s(x) of r(x) does not correspond to a correctable error pattern that has an error in the highest-order position x^{n-1}, e_{n-1} = 0 and r_{n-1} is not in error. The buffer register and the syndrome register are cyclically shifted once to the right. The contents of the buffer register form the coefficients of the polynomial r^(1)(x) = r_{n-2}x^{n-1} + r_{n-3}x^{n-2} + ... + r_1 x^2 + r_0 x + r_{n-1} and the contents of the syndrome register form the coefficients of the polynomial s^(1)(x) = s_{n-k-2}x^{n-k-1} + s_{n-k-3}x^{n-k-2} + ... + s_1 x^2 + s_0 x + s_{n-k-1}, where, by Theorem 4.3, s^(1)(x) is the syndrome polynomial of r^(1)(x). The second digit r_{n-2} of r(x) becomes the first digit of r^(1)(x). To decode r_{n-2} of r^(1)(x), the same error pattern detector is used to test whether the syndrome polynomial s^(1)(x) corresponds to a correctable error pattern that has an error in the highest-order position x^{n-1}.
If s(x) of r(x) corresponds to a correctable error pattern that has an error in the highest-order position x^{n-1}, e_{n-1} = 1 and r_{n-1} is in error. With error correction, the received polynomial r(x) becomes
r_1(x) = e_{n-1}x^{n-1} + r(x)        (4.48)
       = (r_{n-1} ⊕ e_{n-1})x^{n-1} + r_{n-2}x^{n-2} + ... + r_1 x + r_0        (4.49)
       = r'_{n-1}x^{n-1} + r_{n-2}x^{n-2} + ... + r_1 x + r_0        (4.50)
where

r'_{n-1} = r_{n-1} ⊕ e_{n-1}        (4.51)

The effect of the error digit e_{n-1} is removed from the syndrome polynomial s(x) and the syndrome polynomial becomes

s_1(x) = e_{n-1}x^{n-1} + s(x)        (4.52)

Then, the buffer register and the syndrome register are cyclically shifted once to the right. The contents of the buffer register form the coefficients of the polynomial r_1^(1)(x) = r_{n-2}x^{n-1} + r_{n-3}x^{n-2} + ... + r_1 x^2 + r_0 x + r'_{n-1}, and the contents of the syndrome register form the coefficients of the polynomial s_1^(1)(x), where

s_1^(1)(x) = Rem{x s_1(x)/g(x)} = Rem{[x^n + x s(x)]/g(x)} = 1 + Rem{x s(x)/g(x)} = 1 + s^(1)(x)        (4.53)
Therefore, the digit 1 is what enters the syndrome register while it is shifted. It can be shown that s_1^(1)(x), a cyclic shift of s_1(x) one place to the right, is the syndrome polynomial of r_1^(1)(x), where s_1^(1)(x) is the remainder of x s_1(x)/g(x) and r_1^(1)(x) is a cyclic shift of r_1(x) one place to the right.

Proof. It follows from (4.4) that x r_1(x) can be expressed as follows:

x r_1(x) = r'_{n-1}(x^n - 1) + r_1^(1)(x)        (4.54)
Substituting (4.9) into (4.54), we get

x r_1(x) = r'_{n-1}h(x)g(x) + r_1^(1)(x)        (4.55)

Dividing r_1^(1)(x) and r_1(x) by g(x), we get

r_1^(1)(x) = a(x)g(x) + p(x)        (4.56)

and

r_1(x) = b(x)g(x) + s_1(x)        (4.57)

where a(x) and b(x) are the quotients; p(x) and s_1(x) are the remainders and they are the syndromes of r_1^(1)(x) and r_1(x), respectively. Substituting (4.56) and (4.57) into (4.55), we get

a(x)g(x) + p(x) = x[b(x)g(x) + s_1(x)] - r'_{n-1}h(x)g(x)        (4.58)
Rearranging (4.58), we have

x s_1(x) = [a(x) + r'_{n-1}h(x) - x b(x)]g(x) + p(x)        (4.59)

p(x) is the remainder of x s_1(x)/g(x). Therefore, p(x) = s_1^(1)(x). Since p(x) is the syndrome of r_1^(1)(x), s_1^(1)(x) is the syndrome polynomial of r_1^(1)(x).
To decode r_{n-2} of r_1^(1)(x), the same error pattern detector tests whether the syndrome polynomial s_1^(1)(x) corresponds to a correctable error pattern that has an error in the highest-order position x^{n-1}. Other received digits are decoded by the same technique. The decoding process stops when r_0 is read out of the buffer register. If an error pattern is correctable, the syndrome polynomial is equal to 0 at the end of the decoding process. If the syndrome polynomial is not equal to 0 at the end of the decoding process, an uncorrectable error pattern has been detected. The decoder is known as a Meggitt decoder [5]. The decoder tests the syndrome polynomial only for those error patterns that have an error in the highest-order position, and errors in other positions are decoded later. The following example shows the implementation of the error detection circuit and how to use the same error pattern detection circuit to decode a (7, 4) binary cyclic code.

Example 4.4. Figure 4.4 shows a syndrome decoder for a (7, 4) binary cyclic code generated by g(x) = x^3 + x + 1. The mapping between the syndrome and correctable error patterns in vector and polynomial forms is shown in Table 4.2, where s(x) is the remainder of e(x)/g(x). Suppose that a single error occurs at location x^6. From Table 4.2, the error pattern [0 0 0 0 0 0 1] maps to the syndrome vector [1 0 1]. After the entire received polynomial r(x) has been shifted into the syndrome register, the contents of the syndrome register correspond to the syndrome polynomial s(x) = x^2 + 1. The error pattern detector produces a logic 1. This indicates that the received digit r_6 is in error. The error pattern detector can be implemented by an inverter and a single three-input AND gate. To correct r_6 of r(x), the detector output is added to the received digit r_6. Suppose that a single error occurs at location x^i, for 0 ≤ i ≤ 6. r_i is in error. After the entire received polynomial r(x) has been shifted into the syndrome register, the
Figure 4.4 Syndrome decoder for (7, 4) binary cyclic code.
Table 4.2 Mapping Between Syndrome and Correctable Errors in Vector and Polynomial Forms

Polynomial Form           Vector Form
e(x)    s(x)              S = [s0 s1 s2]    E = [e0 e1 ... e6]
x^6     x^2 + 1           1 0 1             0 0 0 0 0 0 1
x^5     x^2 + x + 1       1 1 1             0 0 0 0 0 1 0
x^4     x^2 + x           0 1 1             0 0 0 0 1 0 0
x^3     x + 1             1 1 0             0 0 0 1 0 0 0
x^2     x^2               0 0 1             0 0 1 0 0 0 0
x       x                 0 1 0             0 1 0 0 0 0 0
1       1                 1 0 0             1 0 0 0 0 0 0
coefficients of the syndrome polynomial corresponding to the correctable error polynomial e(x) = x^i appear in the syndrome register. For example, for i = 3, e(x) = x^3, s(x) = x + 1 and S = [1 1 0]. However, the contents of the syndrome register become [1 0 1] after another 6 - i shifts, and the received digit r_i is now shifted into the right-most position of the buffer register. Therefore, only the syndrome pattern [1 0 1] needs to be detected and the received digit r_i can be decoded with the same error detection circuitry.
In principle, Meggitt decoders can be used to decode any cyclic code. However, they are not very effective for correcting multiple errors because the implementation complexity of these decoders is very high. A practical variation
of Meggitt decoding, called error-trapping decoding, is presented in the next section.
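A behavioural sketch of Meggitt decoding for the (7, 4) code of Example 4.4 (Python; this models the serial test for the single syndrome pattern of an error at x^6, not the gate-level circuit, and the helper names are mine):

```python
def gf2_rem(num, den):
    """Remainder of GF(2) polynomial division; index i holds the x^i coefficient."""
    num = num[:]
    for shift in range(len(num) - len(den), -1, -1):
        if num[shift + len(den) - 1]:
            for i, d in enumerate(den):
                num[shift + i] ^= d
    return num[:len(den) - 1]

def meggitt_decode(r, g):
    """Correct one error: after j right shifts, digit r_{n-1-j} occupies the
    highest-order slot; flip it when the shifted syndrome matches that of x^(n-1)."""
    n = len(r)
    target = gf2_rem([0] * (n - 1) + [1], g)      # syndrome of e(x) = x^(n-1)
    r = r[:]
    for j in range(n):
        rj = [r[(i - j) % n] for i in range(n)]   # j-th right cyclic shift
        if gf2_rem(rj, g) == target:
            r[n - 1 - j] ^= 1
    return r

g = [1, 1, 0, 1]                     # g(x) = x^3 + x + 1
v = [0, 0, 1, 0, 1, 1, 1]            # transmitted codeword (Example 4.2)
r = v[:]; r[3] ^= 1                  # single error at x^3
print(meggitt_decode(r, g) == v)     # True
```

Only the single pattern [1 0 1] (the syndrome of x^6) is tested, exactly as in the hardware decoder; the cyclic shifts bring every digit in turn into the tested position.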
4.5.2 Error-Trapping Decoding

An error-trapping decoder is a modification of a Meggitt decoder [6]. The decoding technique is based on the fact that an error pattern which appears in the received polynomial r(x) also appears in the syndrome register, as the buffer register and the syndrome register are shifted together. When this happens, the error pattern is trapped in the syndrome register. The error pattern detector needs only to detect a correctable error pattern in the syndrome register. Error correction can then be carried out by subtracting the correctable error pattern from the received vector.
Consider an (n, k) cyclic code with generator polynomial g(x). Suppose that errors are confined to the n - k low-order positions of r(x). We have e(x) = e_{n-k-1}x^{n-k-1} + e_{n-k-2}x^{n-k-2} + ... + e_1 x + e_0. Since the syndrome s(x) of r(x) is equal to the remainder of e(x)/g(x) and since the degree of e(x) is less than n - k, we have s(x) = e(x). Therefore, the error pattern e(x) is identical to s(x) if errors are confined to the (n - k) low-order positions of r(x).
Suppose that errors are confined to the x^{(n-k)+i-1}, x^{(n-k)+i-2}, ..., x^{i+1}, x^i positions of r(x) (including the end-around case, where errors of length l are confined to j high-order positions and l - j low-order positions of a received polynomial). We have e(x) = e_{(n-k)+i-1}x^{(n-k)+i-1} + e_{(n-k)+i-2}x^{(n-k)+i-2} + ... + e_{i+1}x^{i+1} + e_i x^i. If r(x) is cyclically shifted n - i times, the errors will be confined to the n - k low-order positions of r^(n-i)(x), the (n - i)-th cyclic shift of r(x). The corresponding error pattern is e^(n-i)(x) = e_{(n-k)+i-1}x^{n-k-1} + e_{(n-k)+i-2}x^{n-k-2} + ... + e_{i+1}x + e_i. Since the syndrome s^(n-i)(x) of r^(n-i)(x) is equal to the remainder of e^(n-i)(x)/g(x) and since the degree of e^(n-i)(x) is less than n - k, we have s^(n-i)(x) = e^(n-i)(x). Therefore, the error pattern e(x) is identical to x^i s^(n-i)(x) if errors are confined to the x^{(n-k)+i-1}, x^{(n-k)+i-2}, ..., x^{i+1}, x^i positions of r(x). When the error pattern is trapped in the syndrome register, we simply compute x^i s^(n-i)(x) and subtract it from r(x) to obtain the transmitted code polynomial v(x).
To detect a correctable error pattern in the syndrome register, we can test the weight of the syndrome. Let t' be the error-correcting power of an (n, k) cyclic code. Suppose that t' or fewer errors are confined to the x^{(n-k)+i-1}, x^{(n-k)+i-2}, ..., x^{i+1}, x^i positions of r(x) (including the end-around case). We have e(x) = e_{(n-k)+i-1}x^{(n-k)+i-1} + e_{(n-k)+i-2}x^{(n-k)+i-2} + ... + e_{i+1}x^{i+1} + e_i x^i. We can write e(x) = x^i e^(n-i)(x), where e^(n-i)(x) = e_{(n-k)+i-1}x^{n-k-1} + e_{(n-k)+i-2}x^{n-k-2} + ... + e_{i+1}x + e_i. Dividing e(x) = x^i e^(n-i)(x) by the generator polynomial g(x), we get e(x) = a(x)g(x) + s(x), where a(x) is the quotient, s(x) is the syndrome of x^i e^(n-i)(x), and x^i e^(n-i)(x) - s(x) = a(x)g(x) is a code polynomial. If s(x) = x^i e^(n-i)(x), then the syndrome weight w{s(x)} ≤ t' and the code polynomial is the all-zero code polynomial. If s(x) ≠ x^i e^(n-i)(x) and w{s(x)} ≤ t', the code polynomial would have a weight less than 2t' + 1. This contradicts the minimum weight of a t'-error-correcting code. Thus, when the syndrome weight w{s(x)} ≤ t', correctable errors are trapped in the syndrome register. Error correction can then be accomplished by subtracting the contents of the syndrome register from the received erroneous digits.
A general error-trapping decoder for an (n, k) t'-error-correcting binary cyclic code is shown in Figure 4.5. The decoding operations are described as follows:

1. With switches SW_1 and SW_3 closed and the other switches opened, the received information digits r_{n-k}, r_{n-k+1}, ..., r_{n-1} are shifted into
Figure 4.5 Error-trapping decoder for (n, k) binary cyclic code.
the k-bit buffer register and the received digits '0, '1' . . . , r n-I are shifted into the syndrome register. Now, the coefficients So' SI, . . . , Sn-k-I in the syndrome register form the syndrome polynomial s(x). 2. The syndrome weight is tested. (a) If the weight is t' or less, errors are confined to the (n - k) low-order received pariry-check digits '0, '1' . . . , rn-k-I and the k high-order received information digits rn-k, 'n-k+l' . . . , 'n-I are error free. With switch SW 2 closed and switch SW4 opened, the received information digits 'n-k, 'n-k+l, . . . , 'n-I are read out of the buffer. (b) If the syndrome weight is greater than t', switch SW 3 is closed and the other switches are opened. The syndrome register is shifted once. 3. The syndrome weight is tested again. (a) If the weight is t' or less, errors are confined to received digits 'n-I, ro, '1' . . . , 'n-k-2' The left most digit in the syndrome register is Sn-k-I and is identical to the error in r n-I ; the remaining digits in the syndrome register match the errors in digits '0, rl, . . . , 'n-k-2' The output of the threshold gate triggers a clock to count from 2 and the syndrome register is then shifted once, in step with the clock, with switch SW3 opened. As soon as the clock has counted to n - k, the syndrome vector becomes [0 0 ... 0 1] and Sn-k-I matches the error in digit, n-I. The received information digits 'n-k, 'n-k+l, . . . , rn-I are read out of the buffer for error correction. Stop. (b) If the syndrome weight is greater than t', switch SW 3 is closed and the other switches are opened. The syndrome register is shifted once again.
4. Repeat step 3(b) until the syndrome weight drops to t' or below. If the syndrome weight drops to t' or below after the i-th shift, for 2 ≤ i ≤ (n - k), the clock begins to count from i + 1 and the syndrome register is shifted with switch SW3 opened. As soon as the clock has counted to n - k, the rightmost i syndrome digits s_{n-k-i}, s_{n-k-i+1}, ..., s_{n-k-1} in the syndrome register match the errors in the rightmost i received information digits r_{n-i}, r_{n-i+1}, ..., r_{n-1} in the buffer register, and the other received information digits in the buffer register are error free. Switches SW2 and SW4 are closed and the received information digits are read out of the buffer register for error correction.
5. If the syndrome weight never drops to t' or below after (n - k) shifts, switch SW2 is closed and the received information digits r_{n-k}, r_{n-k+1}, ..., r_{n-1} are read out of the buffer register one at a time. At the same time, the syndrome register is shifted with switch
SW3 closed. As soon as the syndrome weight drops to t' or below, the syndrome digits s_0, s_1, ..., s_{n-k-1} match the errors in the next n - k digits to come out of the buffer register. With switch SW3 opened and switches SW2 and SW4 closed, the corrupted received information digits are corrected by the digits coming out of the syndrome register.
As an example, the error-trapping decoding circuit for the (7, 4) single-error-correcting cyclic code in Example 4.4 is shown in Figure 4.6.
Certain cyclic codes may also be decoded by majority-logic decoding [7]. The decoding method relies on being able to find J parity-check sums orthogonal on the first digit of the cyclic codeword; i.e., the first digit appears in all J parity-check sums, and the other digits appear once and only once in each parity-check sum. Then, provided that no more than J/2 errors are in the received sequence, the first digit can be corrected by majority vote if it is in error, and subsequent digits can also be corrected by cyclically shifting the received n-tuple and repeating the majority-vote calculation. This is known as one-step majority-logic decoding, but it is restricted to a small class of cyclic codes.
So far, we have studied cyclic codes to correct random errors. Cyclic codes can also be used to detect burst errors. We define a burst error of length b' as follows:
Definition 4.3. A burst of length b' is defined as the length of an error sequence that begins with an error and ends with an error.
Cyclic codes used for error detection are called cyclic redundancy check (CRC) codes. For burst error detection, we have the following:
Theorem 4.4. An (n, k) cyclic code can detect any error burst of length n - k or less, including the end-around case.
Proof. Let b(x) = b_{n-k-1}x^{n-k-1} + b_{n-k-2}x^{n-k-2} + ... + b_1x + b_0 be a polynomial of degree n - k - 1 or less with b_0 = 1, and g(x) be the generator
Figure 4.6 Error-trapping decoder for (7, 4) binary cyclic code in Example 4.4.
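The shifting-and-weight-testing procedure above is easy to model in software. The sketch below is my own behavioral model, not the book's circuit: polynomials are stored as Python integer bitmasks (bit i holds the coefficient of x^i, an assumed convention), and the (7, 4) code with g(x) = x^3 + x + 1 is decoded by cyclically shifting the received polynomial until the syndrome weight drops to t' or below.

```python
def poly_mod(a, m):
    # remainder of a(x) divided by m(x) over GF(2); bit i holds the x^i coefficient
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def trap_decode(r, g, n, t):
    # shift r(x) cyclically until the syndrome weight is <= t (error trapped);
    # the syndrome then equals the shifted error pattern
    for i in range(n):
        ri = ((r << i) | (r >> (n - i))) & ((1 << n) - 1)   # x^i r(x) mod (x^n - 1)
        s = poly_mod(ri, g)
        if bin(s).count("1") <= t:
            # undo the cyclic shift to place the error pattern back in position
            e = ((s << (n - i)) | (s >> i)) & ((1 << n) - 1)
            return r ^ e
    return r   # no trappable error pattern found

# (7, 4) code, g(x) = x^3 + x + 1; codeword v(x) = (x + 1)g(x) = x^4 + x^3 + x^2 + 1
v = 0b0011101
r = v ^ (1 << 6)                      # single error in the highest-order digit
decoded = trap_decode(r, 0b1011, 7, 1)
```

For a single-error-correcting code, any error position is trapped after at most n shifts, mirroring steps 2-4 of the decoding procedure.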
polynomial of degree n - k. A burst error including the end-around case can be expressed as e(x) = x^j b(x) modulo (x^n - 1) for 0 ≤ j ≤ n - 1. x^n - 1 = h(x)g(x), and x^n - 1 is not divisible by x. Thus, x^j is not divisible by g(x). Since the degree of b(x) is less than the degree of g(x), b(x) is not divisible by g(x). Hence, x^j b(x) is also not divisible by g(x). The syndrome polynomial s(x) = Rem{x^j b(x)/g(x)} = Rem{e(x)/g(x)} ≠ 0. Thus, any error burst of length n - k or less is detectable.
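The theorem can be checked exhaustively for a small code. The sketch below (my own helper names; polynomials as integer bitmasks) confirms that for the (7, 4) code with g(x) = x^3 + x + 1 every burst of length n - k = 3 or less, including the end-around case, leaves a nonzero syndrome.

```python
def poly_mod(a, m):
    # remainder of a(x) / m(x) over GF(2)
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

n, g = 7, 0b1011                     # (7, 4) cyclic code, g(x) = x^3 + x + 1
bursts = (0b1, 0b11, 0b101, 0b111)   # b(x) with b_0 = 1 and degree <= n - k - 1 = 2
all_detected = True
for b in bursts:
    for j in range(n):
        # e(x) = x^j b(x) mod (x^n - 1), covering the end-around case
        e = ((b << j) | (b >> (n - j))) & ((1 << n) - 1)
        if poly_mod(e, g) == 0:
            all_detected = False
```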
4.6 Golay Codes and Shortened Cyclic Codes

Golay codes are a class of multiple-error-correcting cyclic codes. The (23, 12) binary Golay code was discovered by Golay in 1949 [8]. The code has a minimum Hamming distance of 7 and can correct 3 errors. The generator polynomial g(x) of the (23, 12) binary Golay code is given by

g(x) = x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1   (4.60)

Alternatively, the (23, 12) binary Golay code can also be generated by the generator polynomial

g(x) = x^11 + x^9 + x^7 + x^6 + x^5 + x + 1   (4.61)

Both generator polynomials are factors of x^23 + 1, where

x^23 + 1 = (x + 1)(x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1)(x^11 + x^9 + x^7 + x^6 + x^5 + x + 1)
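The factorization of x^23 + 1 can be verified with carry-less (GF(2)) polynomial multiplication. The sketch below encodes polynomials as integer bitmasks, a representation I am assuming rather than one used by the text.

```python
def poly_mul(a, b):
    # product of two GF(2) polynomials held as integer bitmasks
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    return p

g1 = sum(1 << i for i in (11, 10, 6, 5, 4, 2, 0))   # x^11+x^10+x^6+x^5+x^4+x^2+1
g2 = sum(1 << i for i in (11, 9, 7, 6, 5, 1, 0))    # x^11+x^9+x^7+x^6+x^5+x+1
product = poly_mul(poly_mul(0b11, g1), g2)          # (x + 1) * g1(x) * g2(x)
```

The two degree-11 factors are reciprocals of each other, which is why either one generates an equivalent (23, 12) code.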
Given the (23, 12) triple-error-correcting binary Golay code of minimum Hamming distance d_min = 7, it is possible to form the (24, 12) triple-error-correcting and four-error-detecting code of minimum Hamming distance d_min = 8 by adding an overall parity-check bit to the code vector of the (23, 12) binary Golay code. Such a code is called an extended binary Golay code and is not cyclic. The (24, 12) triple-error-correcting extended binary Golay code was used to send color pictures of Jupiter and Saturn from the Voyager space missions to Earth between 1979 and 1981.
Since Golay codes are a class of cyclic codes, the codes can be decoded by an error-trapping decoder. However, the error-trapping decoding arrangement in its current form cannot correct all correctable error patterns for Golay codes. A modified version of the error-trapping decoding technique was proposed by Kasami in 1964 [9]. The technique allows us to correct all correctable error patterns.
We have seen that the generator polynomial of a cyclic code divides x^n - 1. This implies that there are relatively few codes for most values of k
and n. In practice, it is natural to find codes that enjoy the mathematical structure and ease of implementation of cyclic codes. It is always possible to form an (n - i, k - i) cyclic code by making the i leading information symbols identically 0 and omitting them from all code vectors of an (n, k) cyclic code. This is equivalent to omitting the first i rows and columns of the generator matrix G. However, the code is no longer cyclic and is called a shortened cyclic code.

Example 4.5 Deleting the first row and column of a (7, 4) binary cyclic code of

G = [1 1 0 1 0 0 0
     0 1 1 0 1 0 0
     0 0 1 1 0 1 0
     0 0 0 1 1 0 1]

we get a (6, 3) shortened binary cyclic code of

G = [1 1 0 1 0 0
     0 1 1 0 1 0
     0 0 1 1 0 1]

The vectors [1 1 0 1 0 0], [0 1 1 0 1 0], and [0 0 1 1 0 1] are code vectors, but one more cyclic shift of [0 0 1 1 0 1] to the right results in [1 0 0 1 1 0], which is not a code vector. The shortened code is not cyclic. Both codes have the same minimum Hamming distance of 3.
The encoder circuit for an (n, k) cyclic code can be used to encode an (n - i, k - i) shortened cyclic code by making the i leading information symbols identically 0 and omitting them from all code vectors before transmission. The Meggitt or error-trapping decoder for an (n, k) cyclic code can also be used to decode an (n - i, k - i) shortened cyclic code by inserting i zeros into the corresponding information part of the received vector and omitting the i leading symbols from the decoded information vectors.
4.7 Computer Simulation Results

The variation of bit error rate with the E_b/N_0 ratio of a binary cyclic code with coherent BPSK signals for the AWGN channel has been measured by computer simulations. The system model is shown in Figure 4.7. Here, E_b is the average transmitted bit energy, and N_0/2 is the two-sided power spectral density of the noise. A (7, 4) binary cyclic code generated by g(x) = x^3 + x + 1 is used in the test. Perfect timing, synchronization, and error-trapping decoding are assumed. In each test, the average transmitted bit energy was fixed, and the variance of the AWGN was adjusted for a range of average bit error rates. The simulated error performance of the (7, 4) binary cyclic code with error-trapping decoding is shown in Figure 4.8. Comparisons are made between
Figure 4.7 Model of a coded digital communication system.
Figure 4.8 Performance of (7, 4) binary cyclic code with error-trapping decoding in AWGN channels.
the coded and uncoded coherent BPSK systems. For a certain range of low E_b/N_0 ratios, an uncoded system always appears to have a better tolerance to noise than the coded systems. At a bit error rate of 10^-4, the (7, 4) binary cyclic code with BPSK signals and error-trapping decoding gives 0.4 dB of coding gain over the uncoded BPSK system.
References

[1] Prange, E., "Cyclic Error-Correcting Codes in Two Symbols," Air Force Cambridge Research Center, TN-57-103, Cambridge, MA, September 1957.
[2] Prange, E., "Some Cyclic Error-Correcting Codes with Simple Decoding Algorithms," Air Force Cambridge Research Center, TN-58-156, Cambridge, MA, April 1958.
[3] MacWilliams, F. J., and N. J. A. Sloane, The Theory of Error-Correcting Codes, North-Holland, Amsterdam, 1977.
[4] MacWilliams, F. J., "A Theorem on the Distribution of Weights in a Systematic Code," Bell System Technical Journal, Vol. 42, 1963, pp. 79-94.
[5] Meggitt, J. E., "Error Correcting Codes and Their Implementation for Data Transmission Systems," IRE Trans. on Information Theory, Vol. IT-7, No. 5, October 1961, pp. 234-244.
[6] Rudolph, L. D., and M. E. Mitchell, "Implementation of Decoders for Cyclic Codes," IEEE Trans. on Information Theory, Vol. IT-10, No. 3, June 1964, pp. 259-260.
[7] Lin, S., and G. Markowsky, "On a Class of One-Step Majority-Logic Decodable Cyclic Codes," IBM Journal of Research and Development, January 1980.
[8] Golay, M. J. E., "Notes on Digital Coding," Proc. IRE, Vol. 37, June 1949, p. 657.
[9] Kasami, T., "A Decoding Procedure for Multiple-Error-Correcting Cyclic Codes," IEEE Trans. on Information Theory, Vol. IT-10, No. 2, April 1964, pp. 134-139.
5
Bose-Chaudhuri-Hocquenghem Codes

5.1 Introduction

Bose, Chaudhuri, and Hocquenghem (BCH) codes form a class of powerful random- and multiple-error-correcting cyclic codes. The codes were discovered by Bose and Chaudhuri in 1960, and by Hocquenghem in 1959 [1-3]. They are relatively easy to encode and decode using algebraic decoding [4, 5]. Algebraic decoding is possible because the considerable mathematical structure of the codes makes it possible to find algorithms for solving the syndrome equation. The codes can be described in the time domain or in terms of the Galois-field Fourier transform, i.e., in the frequency domain. The treatment here will concentrate on the basic principles of binary BCH codes and the algebraic decoding of the codes.
5.2 General Description of BCH Codes

Let α be an element of GF(q^s) and let b be an integer ≥ 1. A BCH code of length n and minimum Hamming distance ≥ 2t_d + 1 can be generated by the generator polynomial g(x) over GF(q) with α^b, α^{b+1}, ..., α^{b+2t_d-1} as the roots of g(x). Let α^i, a nonzero element in GF(q^s), be a root of the minimal polynomial φ_i(x) over GF(q), and n_i be the order of α^i, for i = b, b + 1, ..., b + 2t_d - 1. The generator polynomial of a BCH code can be expressed in the form

g(x) = LCM{φ_b(x), φ_{b+1}(x), ..., φ_{b+2t_d-1}(x)}   (5.1)

where LCM denotes the least common multiple. The length of the code is

n = LCM{n_b, n_{b+1}, ..., n_{b+2t_d-1}}   (5.2)
Example 5.1 Given the polynomials f_1(x), f_2(x), and f_3(x), where

f_1(x) = x^4 + x + 1
f_2(x) = x^4 + x + 1
f_3(x) = x^4 + x^3 + x^2 + 1

LCM{f_1(x), f_2(x), f_3(x)} = (x^4 + x + 1)(x^4 + x^3 + x^2 + 1)
Let δ_d = 2t_d + 1. δ_d and t_d are defined as the designed Hamming distance and the designed error-correcting power of a BCH code. The true minimum Hamming distance, d_min, of a BCH code may or may not be equal to δ_d and can be obtained by applying Theorem 3.1 to the parity-check matrix H of the code. In general, the minimum Hamming distance of a BCH code is always equal to or greater than δ_d. If b = 1, the code is called a narrow-sense BCH code. If n = q^s - 1, the code is called a primitive BCH code. The most important BCH codes are the binary (q = 2), narrow-sense (b = 1), primitive (n = q^s - 1) BCH codes. In this chapter, we discuss primarily the binary, narrow-sense, primitive BCH codes, but a brief discussion of nonprimitive BCH codes is also given in the next section.
5.3 Binary, Narrow-Sense, Primitive BCH Codes

For any positive integers s ≥ 3, t_d < 2^{s-1}, and n = 2^s - 1, a binary, narrow-sense, primitive BCH code has the following parameters:

Block length: n = 2^s - 1
Number of check digits: c = (n - k) ≤ s·t_d
Minimum distance: d_min ≥ 2t_d + 1

Let α be a primitive element in GF(2^s) and g(x) be a generator polynomial over GF(2) which has α, α^2, ..., α^{2t_d} as its roots. Let φ_i(x) be the minimal polynomial of α^i, for i = 1, 2, ..., 2t_d. Then

g(x) = LCM{φ_1(x), φ_2(x), ..., φ_{2t_d}(x)}   (5.3)
is the lowest-degree polynomial over GF(2) that generates a binary, narrow-sense, primitive BCH code. The degree of φ_i(x) is less than or equal to s. By Theorem 2.12, if β is an element in GF(2^m) and a root of an irreducible polynomial of degree m over GF(2), then β^2, β^{2^2}, ..., β^{2^{m-1}} are also roots of the irreducible polynomial. Therefore, each even-power element α^{2i} and its corresponding element α^i are roots of the same minimal polynomial φ_i(x) given by equation (5.3). The generator polynomial g(x) can be reduced to

g(x) = LCM{φ_1(x), φ_3(x), φ_5(x), ..., φ_{2t_d-1}(x)}   (5.4)
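The conjugate-root structure behind equation (5.4) can be reproduced by grouping each conjugacy class and multiplying out its minimal polynomial in the field. The sketch below (my own helper names) builds GF(2^3) from the primitive polynomial x^3 + x + 1 and recovers the degree-3 minimal polynomials that appear in Table 5.1 below.

```python
# GF(2^3) built from the primitive polynomial x^3 + x + 1 (alpha^3 = alpha + 1);
# elements are 3-bit integers, bit k = coefficient of x^k (assumed convention)
exp = []
x = 1
for _ in range(7):
    exp.append(x)
    x <<= 1
    if x & 0b1000:
        x ^= 0b1011
log = {v: i for i, v in enumerate(exp)}

def gf_mul(a, b):
    return exp[(log[a] + log[b]) % 7] if a and b else 0

def minimal_poly(i):
    # conjugacy class {i, 2i, 4i, ...} mod 7, then multiply out prod (x + alpha^j)
    conj, e = [], i % 7
    while e not in conj:
        conj.append(e)
        e = (2 * e) % 7
    poly = [1]                            # GF(8) coefficients, lowest degree first
    for j in conj:
        nxt = [0] * (len(poly) + 1)
        for k, c in enumerate(poly):
            nxt[k + 1] ^= c               # multiply by x
            nxt[k] ^= gf_mul(exp[j], c)   # multiply by the root alpha^j
        poly = nxt
    return poly                           # coefficients all land in GF(2)

phi1 = minimal_poly(1)   # expect x^3 + x + 1
phi3 = minimal_poly(3)   # expect x^3 + x^2 + 1
```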
and has α, α^3, α^5, ..., α^{2t_d-1} as roots. Since the degree of each minimal polynomial in equation (5.4) is less than or equal to s, the degree of g(x) is, therefore, at most equal to s·t_d.
The design of a binary, narrow-sense, primitive BCH code of length n = 2^s - 1, minimum distance ≥ 2t_d + 1, and s ≥ 3 is described in the following steps:
1. Select a primitive polynomial of degree s over GF(2) and construct GF(2^s) with α as a primitive element.
2. For some designed value of δ_d, compute t_d and find the minimal polynomials of α^i generated by the primitive polynomial, where α^i, a nonzero element in GF(2^s), is a root of the minimal polynomial φ_i(x) and i = 1, 3, 5, ..., 2t_d - 1.
3. Compute g(x) = LCM{φ_1(x), φ_3(x), φ_5(x), ..., φ_{2t_d-1}(x)} with α, α^3, α^5, ..., α^{2t_d-1} as the roots.

Example 5.2 Let α be a root of the primitive polynomial p(x) = x^3 + x + 1 over GF(2). The order of the element α in GF(2^3) is 7 and α is a primitive element in GF(2^3). The minimal polynomials are shown in Table 5.1. To generate a binary, narrow-sense, primitive BCH code of

Table 5.1
Minimal Polynomials of α^i Generated by a Primitive Polynomial p(x) = x^3 + x + 1 over GF(2)

Conjugate Roots    Order of Element    Minimal Polynomial
0                  -                   x
1 = α^0            1                   x + 1
α, α^2, α^4        7                   x^3 + x + 1
α^3, α^6, α^5      7                   x^3 + x^2 + 1
length n = 7 and designed Hamming distance of 3, both parameters b and t_d are equal to 1. The generator polynomial g(x) has α as a root and

g(x) = x^3 + x + 1
Since n = 7 and the degree of g(x) is n - k = 3, the number of information symbols k is 4. g(x) generates a (7, 4) binary, narrow-sense, primitive BCH code of designed Hamming distance 3. Since g(x) is a code polynomial of the code and has a weight of 3, the true minimum Hamming distance is exactly equal to the designed Hamming distance. The code is a single-error-correcting code.

Example 5.3 Let α be a root of the primitive polynomial p(x) = x^4 + x + 1 over GF(2). The order of the element α in GF(2^4) is 15 and α is a primitive element in GF(2^4). The minimal polynomials are shown in Table 5.2. To generate a binary, narrow-sense, primitive BCH code of length n = 15 and designed Hamming distance of 5, the parameters b and t_d are equal to 1 and 2, respectively. The generator polynomial g(x) has α and α^3 as roots and

g(x) = LCM{φ_1(x), φ_3(x)}
     = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1)
     = x^8 + x^7 + x^6 + x^4 + 1

Table 5.2
Minimal Polynomials of α^i Generated by a Primitive Polynomial p(x) = x^4 + x + 1 over GF(2)

Conjugate Roots           Order of Element    Minimal Polynomial
0                         -                   x
1 = α^0                   1                   x + 1
α, α^2, α^4, α^8          15                  x^4 + x + 1
α^3, α^6, α^12, α^9       5                   x^4 + x^3 + x^2 + x + 1
α^5, α^10                 3                   x^2 + x + 1
α^7, α^14, α^13, α^11     15                  x^4 + x^3 + 1

Since n = 15 and the degree of g(x) is n - k = 8, the number of information symbols k is 7. g(x) generates a (15, 7) binary, narrow-sense, primitive
BCH code of designed Hamming distance 5. Since g(x) is a code polynomial of the code and has a weight of 5, the true minimum Hamming distance is exactly equal to the designed Hamming distance. The code is a double-error-correcting code.
It is possible to take a nonprimitive element in GF(2^s) to generate a binary BCH code. Note that if 2^s - 1 is not prime, there are nonprimitive elements in GF(2^s). We can use this property to generate binary BCH codes. Codes generated by nonprimitive elements in GF(2^s) are called binary, nonprimitive BCH codes. Construction of binary, nonprimitive BCH codes proceeds in the same manner as for the binary, primitive BCH codes, except that we compute the codeword length n. This is illustrated by the following example, and, for simplicity, we also let b = 1.

Example 5.4 Let α be a root of the primitive polynomial p(x) = x^6 + x + 1 over GF(2) and β = α^3 be an element in GF(2^6). The order of the element β = α^3 in GF(2^6) is 21 and β = α^3 is not a primitive element in GF(2^6). The minimal polynomials are shown in Table 5.3. To generate a binary, nonprimitive BCH code of length n and designed Hamming distance of 7, the parameter t_d is equal to 3. Letting b = 1, the orders of the elements β = α^3, β^3 = α^9, and β^5 = α^15 in GF(2^6) are 21, 7, and 21,

Table 5.3
Minimal Polynomials of α^i Generated by a Primitive Polynomial p(x) = x^6 + x + 1 over GF(2)

Conjugate Roots                              Order of Element    Minimal Polynomial
0                                            -                   x
1 = α^0                                      1                   x + 1
α, α^2, α^4, α^8, α^16, α^32                 63                  x^6 + x + 1
β = α^3, α^6, α^12, α^24, α^48, α^33         21                  x^6 + x^4 + x^2 + x + 1
α^5, α^10, α^20, α^40, α^17, α^34            63                  x^6 + x^5 + x^2 + x + 1
α^7, α^14, α^28, α^56, α^49, α^35            9                   x^6 + x^3 + 1
β^3 = α^9, α^18, α^36                        7                   x^3 + x^2 + 1
α^11, α^22, α^44, α^25, α^50, α^37           63                  x^6 + x^5 + x^3 + x^2 + 1
α^13, α^26, α^52, α^41, α^19, α^38           63                  x^6 + x^4 + x^3 + x + 1
β^5 = α^15, α^30, α^60, α^57, α^51, α^39     21                  x^6 + x^5 + x^4 + x^2 + 1
α^21, α^42                                   3                   x^2 + x + 1
α^23, α^46, α^29, α^58, α^53, α^43           63                  x^6 + x^5 + x^4 + x + 1
α^27, α^54, α^45                             7                   x^3 + x + 1
α^31, α^62, α^61, α^59, α^55, α^47           63                  x^6 + x^5 + 1
respectively. The codeword length n = LCM{21, 7, 21} = 21. The generator polynomial g(x) has β, β^3, and β^5 as roots and

g(x) = LCM{φ_1(x), φ_3(x), φ_5(x)}
     = (x^6 + x^4 + x^2 + x + 1)(x^3 + x^2 + 1)(x^6 + x^5 + x^4 + x^2 + 1)
Since n = 21 and the degree of g(x) is n - k = 15, the number of information symbols k is 6. g(x) generates a (21, 6) binary, nonprimitive BCH code of designed Hamming distance 7. The true minimum Hamming distance of the code can be computed from the parity-check matrix of the code (see the next section for the construction of H), which in fact is equal to the designed Hamming distance. The code is a triple-error-correcting code.
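Both examples above can be cross-checked with bitmask arithmetic over GF(2) (a representation I am assuming; bit i holds the x^i coefficient). The sketch verifies the generator-polynomial product of Example 5.3, confirms that it divides x^15 + 1, and recomputes the nonprimitive code length n = 21 of Example 5.4 from element orders.

```python
from math import gcd

def poly_mul(a, b):
    # carry-less product of two GF(2) polynomials
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    return p

def poly_mod(a, m):
    # remainder of a(x) / m(x) over GF(2)
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

# Example 5.3: g(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1)
g15 = poly_mul(0b10011, 0b11111)
divides = poly_mod((1 << 15) | 1, g15) == 0      # g(x) must divide x^15 + 1

# Example 5.4: orders of beta = alpha^3, beta^3 = alpha^9, beta^5 = alpha^15,
# where alpha is primitive in GF(2^6), so ord(alpha^i) = 63 / gcd(63, i)
def order(i, m=63):
    return m // gcd(m, i)

orders = [order(3), order(9), order(15)]
n = 1
for o in orders:                                  # LCM of the orders
    n = n * o // gcd(n, o)
```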
5.4 Parity-Check Matrix and BCH Bound on d_min
Let α be an element of GF(q^s) and consider an (n, k) BCH code of minimum distance ≥ 2t_d + 1 generated by a generator polynomial g(x) over GF(q). Since BCH codes are a class of cyclic codes, the encoding operation can be expressed as v(x) = u(x)g(x), where

v(x) = v_{n-1}x^{n-1} + v_{n-2}x^{n-2} + ... + v_1x + v_0   (5.5)

is a code polynomial of degree n - 1 over GF(q) and

u(x) = u_{k-1}x^{k-1} + u_{k-2}x^{k-2} + ... + u_1x + u_0   (5.6)
is an information polynomial of degree k - 1 over GF(q). Clearly, the code polynomial v(x) is divisible by the generator polynomial g(x). Since g(x) has α^b, α^{b+1}, ..., α^{b+2t_d-1} as roots for an (n, k) BCH code of minimum distance ≥ 2t_d + 1, v(x) also has α^b, α^{b+1}, ..., α^{b+2t_d-1} as its roots. We have

0 = v(α^b) = v_{n-1}(α^b)^{n-1} + v_{n-2}(α^b)^{n-2} + ... + v_1α^b + v_0
0 = v(α^{b+1}) = v_{n-1}(α^{b+1})^{n-1} + v_{n-2}(α^{b+1})^{n-2} + ... + v_1α^{b+1} + v_0
...
0 = v(α^{b+2t_d-1}) = v_{n-1}(α^{b+2t_d-1})^{n-1} + v_{n-2}(α^{b+2t_d-1})^{n-2} + ... + v_1α^{b+2t_d-1} + v_0   (5.7)
In matrix form, we get

0 = VH^T   (5.8)
Hence, the parity-check matrix of an (n, k) BCH code of minimum distance ≥ 2t_d + 1 can be represented as

H = [1  α^b           (α^b)^2           ...  (α^b)^{n-1}
     1  α^{b+1}       (α^{b+1})^2       ...  (α^{b+1})^{n-1}
     ...
     1  α^{b+2t_d-1}  (α^{b+2t_d-1})^2  ...  (α^{b+2t_d-1})^{n-1}]   (5.9)
Letting b = 1, the generator polynomial g(x) and the code polynomial v(x) have α, α^3, α^5, ..., α^{2t_d-1} as roots for binary BCH codes. As a result, the matrix given by equation (5.9) can be reduced to

H = [1  α            (α)^2            ...  (α)^{n-1}
     1  α^3          (α^3)^2          ...  (α^3)^{n-1}
     1  α^5          (α^5)^2          ...  (α^5)^{n-1}
     ...
     1  α^{2t_d-1}   (α^{2t_d-1})^2   ...  (α^{2t_d-1})^{n-1}]   (5.10)
It can be seen that the size of the above parity-check matrix is t_d-by-n. If we replace each element in H by its s-tuple binary representation, we get an s·t_d-by-n binary parity-check matrix. This is very different from writing it as an (n - k)-by-n binary parity-check matrix. In fact, when the number of parity-check symbols (n - k) < s·t_d, there will exist some dependent rows in the s·t_d-by-n binary parity-check matrix. To construct the (n - k)-by-n binary parity-check matrix, we have to remove those dependent rows.

Example 5.5 Let α be a primitive element in GF(2^4). The parity-check matrix of a (15, 5) triple-error-correcting, binary, narrow-sense, primitive BCH code can be represented as
H = [1  α    α^2   α^3   α^4   α^5   α^6   α^7   α^8   α^9   α^10  α^11  α^12  α^13  α^14
     1  α^3  α^6   α^9   α^12  α^15  α^18  α^21  α^24  α^27  α^30  α^33  α^36  α^39  α^42
     1  α^5  α^10  α^15  α^20  α^25  α^30  α^35  α^40  α^45  α^50  α^55  α^60  α^65  α^70]

  = [1  α    α^2   α^3   α^4   α^5   α^6   α^7   α^8   α^9   α^10  α^11  α^12  α^13  α^14
     1  α^3  α^6   α^9   α^12  1     α^3   α^6   α^9   α^12  1     α^3   α^6   α^9   α^12
     1  α^5  α^10  1     α^5   α^10  1     α^5   α^10  1     α^5   α^10  1     α^5   α^10]

Using GF(2^4) given by Table B.3 in Appendix B to represent each element of H by its corresponding 4-tuple, we get

H = [1 0 0 0 1 0 0 1 1 0 1 0 1 1 1
     0 1 0 0 1 1 0 1 0 1 1 1 1 0 0
     0 0 1 0 0 1 1 0 1 0 1 1 1 1 0
     0 0 0 1 0 0 1 1 0 1 0 1 1 1 1
     1 0 0 0 1 1 0 0 0 1 1 0 0 0 1
     0 0 0 1 1 0 0 0 1 1 0 0 0 1 1
     0 0 1 0 1 0 0 1 0 1 0 0 1 0 1
     0 1 1 1 1 0 1 1 1 1 0 1 1 1 1
     1 0 1 1 0 1 1 0 1 1 0 1 1 0 1
     0 1 1 0 1 1 0 1 1 0 1 1 0 1 1
     0 1 1 0 1 1 0 1 1 0 1 1 0 1 1
     0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]

Row 10 is identical to row 11, and row 12 contains the all-zero row vector. We can remove rows 11 and 12. The 12-by-15 binary parity-check matrix is reduced to
H = [1 0 0 0 1 0 0 1 1 0 1 0 1 1 1
     0 1 0 0 1 1 0 1 0 1 1 1 1 0 0
     0 0 1 0 0 1 1 0 1 0 1 1 1 1 0
     0 0 0 1 0 0 1 1 0 1 0 1 1 1 1
     1 0 0 0 1 1 0 0 0 1 1 0 0 0 1
     0 0 0 1 1 0 0 0 1 1 0 0 0 1 1
     0 0 1 0 1 0 0 1 0 1 0 0 1 0 1
     0 1 1 1 1 0 1 1 1 1 0 1 1 1 1
     1 0 1 1 0 1 1 0 1 1 0 1 1 0 1
     0 1 1 0 1 1 0 1 1 0 1 1 0 1 1]
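The binary expansion in Example 5.5 can be generated programmatically. In the sketch below (the 4-tuple ordering is my assumption: bit k of an element is the coefficient of x^k), the 12-by-15 matrix is built from the symbol rows α^j, α^{3j}, α^{5j}, and exactly the dependent rows noted in the text fall out.

```python
# GF(2^4) from the primitive polynomial x^4 + x + 1
exp = []
x = 1
for _ in range(15):
    exp.append(x)
    x <<= 1
    if x & 0b10000:
        x ^= 0b10011

def tuple4(v):
    # 4-tuple of an element, coefficient of x^k at index k (assumed ordering)
    return [(v >> k) & 1 for k in range(4)]

rows = []
for m in (1, 3, 5):                     # symbol rows alpha^(m*j), j = 0..14
    cols = [tuple4(exp[(m * j) % 15]) for j in range(15)]
    for bit in range(4):
        rows.append([c[bit] for c in cols])

dup = rows[9] == rows[10]               # row 10 equals row 11 (1-indexed)
zero = rows[11] == [0] * 15             # row 12 is the all-zero row
H = rows[:10]                           # reduced 10-by-15 parity-check matrix
```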
The minimum Hamming distance of a BCH code generated by a generator polynomial g(x) over GF(q) which has 2t_d consecutive roots is lower bounded by 2t_d + 1; i.e., d_min ≥ 2t_d + 1.
Proof. Here, we have to show that an arbitrary selection of 2t_d columns of H given by equation (5.9) is linearly independent. Taking any 2t_d columns of H given by equation (5.9), we can form the 2t_d-by-2t_d square matrix

[(α^b)^{i_1}           (α^b)^{i_2}           ...  (α^b)^{i_{2t_d}}
 (α^{b+1})^{i_1}       (α^{b+1})^{i_2}       ...  (α^{b+1})^{i_{2t_d}}
 ...
 (α^{b+2t_d-1})^{i_1}  (α^{b+2t_d-1})^{i_2}  ...  (α^{b+2t_d-1})^{i_{2t_d}}]

The determinant of the square matrix is

| (α^b)^{i_1}           (α^b)^{i_2}           ...  (α^b)^{i_{2t_d}}          |
| (α^{b+1})^{i_1}       (α^{b+1})^{i_2}       ...  (α^{b+1})^{i_{2t_d}}      |
| ...                                                                        |
| (α^{b+2t_d-1})^{i_1}  (α^{b+2t_d-1})^{i_2}  ...  (α^{b+2t_d-1})^{i_{2t_d}} |

Taking out the common factor from each column of the determinant, we get

α^{b(i_1 + i_2 + ... + i_{2t_d})} | 1                   1                   ...  1                       |
                                  | α^{i_1}             α^{i_2}             ...  α^{i_{2t_d}}            |
                                  | (α^{i_1})^2         (α^{i_2})^2         ...  (α^{i_{2t_d}})^2        |
                                  | ...                                                                  |
                                  | (α^{i_1})^{2t_d-1}  (α^{i_2})^{2t_d-1}  ...  (α^{i_{2t_d}})^{2t_d-1} |   (5.11)

The determinant in equation (5.11) is known as a Vandermonde determinant, which is nonzero. All 2t_d columns of H are linearly independent, and no 2t_d or fewer columns of H are linearly dependent. By Theorem 3.1, the minimum distance of a linear code is equal to the minimum number of linearly dependent column vectors in the matrix H. Therefore, d_min ≥ 2t_d + 1.
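In characteristic 2 the Leibniz expansion of a determinant has no signs, so the Vandermonde claim is easy to spot-check numerically. The small sketch below (helper names are mine) shows that a 4-by-4 Vandermonde matrix on distinct elements of GF(2^4) has a nonzero determinant, while a repeated element forces it to zero.

```python
from itertools import permutations

# GF(2^4) log/antilog tables from x^4 + x + 1
exp = []
x = 1
for _ in range(15):
    exp.append(x)
    x <<= 1
    if x & 0b10000:
        x ^= 0b10011
log = {v: i for i, v in enumerate(exp)}

def mul(a, b):
    return exp[(log[a] + log[b]) % 15] if a and b else 0

def det(M):
    # Leibniz formula; in a field of characteristic 2 all permutation signs collapse
    d = 0
    for perm in permutations(range(len(M))):
        p = 1
        for r, c in enumerate(perm):
            p = mul(p, M[r][c])
        d ^= p
    return d

def vandermonde(idx):
    # row r holds (alpha^i)^r for each chosen exponent i
    return [[exp[(i * r) % 15] for i in idx] for r in range(len(idx))]

d_distinct = det(vandermonde([2, 5, 7, 11]))   # distinct elements: nonzero
d_repeat = det(vandermonde([2, 2, 7, 11]))     # repeated element: zero
```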
5.5 Decoding of Binary BCH Codes

Since BCH codes are a class of cyclic codes, they can be decoded by the techniques already presented in Chapter 4. Because BCH codes have a regular
structure, a great deal of research work has been devoted to computationally efficient decoding algorithms. This section describes two decoding techniques for binary BCH codes. The extension of the decoding techniques to the nonbinary case is straightforward and will be dealt with in Chapter 6 for Reed-Solomon codes.
5.5.1 Berlekamp-Massey Algorithm
To decode a BCH code in an efficient and algebraic manner, we begin with the syndrome computation from the received word. Without any loss of generality, we let b = 1 and t' = t_d. Let v(x) = v_{n-1}x^{n-1} + v_{n-2}x^{n-2} + ... + v_1x + v_0 be the code polynomial of the code vector V = [v_0 v_1 ... v_{n-1}], e(x) = e_{n-1}x^{n-1} + e_{n-2}x^{n-2} + ... + e_1x + e_0 be the error polynomial of the error vector E = [e_0 e_1 ... e_{n-1}], and r(x) = r_{n-1}x^{n-1} + r_{n-2}x^{n-2} + ... + r_1x + r_0 be the received polynomial of the received vector R = [r_0 r_1 ... r_{n-1}]. The received polynomial is

r(x) = v(x) + e(x)   (5.12)

and

RH^T = VH^T + EH^T = 0 + EH^T = EH^T   (5.13)
where
H^T = [1           1            ...  1
       (α)^1       (α^2)^1      ...  (α^{2t'})^1
       (α)^2       (α^2)^2      ...  (α^{2t'})^2
       ...
       (α)^{n-1}   (α^2)^{n-1}  ...  (α^{2t'})^{n-1}]   (5.14)

The syndrome vector S is defined as

S = RH^T   (5.15)
  = EH^T   (5.16)
  = [S_1 S_2 ... S_{2t'}]   (5.17)

where
S_i = r(α^i)   (5.18)
    = r_{n-1}(α^i)^{n-1} + r_{n-2}(α^i)^{n-2} + ... + r_1α^i + r_0

or

S_i = e(α^i)   (5.19)
    = e_{n-1}(α^i)^{n-1} + e_{n-2}(α^i)^{n-2} + ... + e_1α^i + e_0

for 1 ≤ i ≤ 2t'. Suppose that the error vector has ν nonzero digits at locations j_1, j_2, ..., j_ν, where

0 ≤ j_1 < j_2 < ... < j_ν < n   (5.20)

and ν ≤ t'. For binary codes, e_{j_1} = e_{j_2} = ... = e_{j_ν} = 1. Equation (5.19) becomes

S_1 = e_{j_ν}α^{j_ν} + e_{j_{ν-1}}α^{j_{ν-1}} + ... + e_{j_2}α^{j_2} + e_{j_1}α^{j_1}
S_2 = e_{j_ν}(α^{j_ν})^2 + e_{j_{ν-1}}(α^{j_{ν-1}})^2 + ... + e_{j_2}(α^{j_2})^2 + e_{j_1}(α^{j_1})^2
...
S_{2t'} = e_{j_ν}(α^{j_ν})^{2t'} + e_{j_{ν-1}}(α^{j_{ν-1}})^{2t'} + ... + e_{j_2}(α^{j_2})^{2t'} + e_{j_1}(α^{j_1})^{2t'}   (5.21)

where α^{j_1}, α^{j_2}, ..., α^{j_ν} are unknown, and equation (5.21) can be expressed as

S_i = Σ_{l=1}^{ν} e_{j_l}(α^{j_l})^i   (5.22)
The set of functions in equation (5.21) is known as the power-sum symmetric functions in α^{j_1}, α^{j_2}, ..., α^{j_ν}. Any algorithm for solving equation (5.21) is a decoding method, and j_1, j_2, ..., j_ν indicate the locations of the errors in e(x). Rather than solving this set of nonlinear functions directly, we shall relate the syndrome components to the coefficients of an error-locator polynomial and locate the errors that way. Define an error-locator polynomial
Λ(x) = Λ_ν x^ν + Λ_{ν-1}x^{ν-1} + ... + Λ_1 x + 1   (5.23)
     = (1 - α^{j_1}x)(1 - α^{j_2}x) ... (1 - α^{j_ν}x)   (5.24)
     = Π_{l=1}^{ν} (1 - α^{j_l}x)   (5.25)

where the roots of Λ(x) are (α^{j_l})^{-1}, the inverses of the α^{j_l}, for 1 ≤ l ≤ ν. If we can find the coefficients of Λ(x) and the roots of Λ(x), the power terms associated with the inverses of the roots (α^{j_l})^{-1} will give us the locations of the errors. The technique of finding Λ(x) and its roots is described as follows.
Suppose we multiply both sides of equation (5.23) by e_{j_l}(α^{j_l})^{ν+l'} and evaluate the error-locator polynomial Λ(x) at x = (α^{j_l})^{-1}. Then Λ(x) = 0 and equation (5.23) becomes

0 = e_{j_l}(α^{j_l})^{ν+l'}[Λ_ν(α^{j_l})^{-ν} + Λ_{ν-1}(α^{j_l})^{-ν+1} + ... + Λ_1(α^{j_l})^{-1} + 1]
0 = e_{j_l}[Λ_ν(α^{j_l})^{l'} + Λ_{ν-1}(α^{j_l})^{l'+1} + ... + Λ_1(α^{j_l})^{ν+l'-1} + (α^{j_l})^{ν+l'}]   (5.26)

Equation (5.26) holds for each value of l and l'. For each l', we sum equation (5.26) over l = 1, 2, ..., ν. Equation (5.26) becomes

0 = Λ_ν Σ_{l=1}^{ν} e_{j_l}(α^{j_l})^{l'} + Λ_{ν-1} Σ_{l=1}^{ν} e_{j_l}(α^{j_l})^{l'+1} + ... + Λ_1 Σ_{l=1}^{ν} e_{j_l}(α^{j_l})^{ν+l'-1} + Σ_{l=1}^{ν} e_{j_l}(α^{j_l})^{ν+l'}   (5.27)

Using equation (5.22), we can write equation (5.27) as

0 = Λ_ν S_{l'} + Λ_{ν-1}S_{l'+1} + ... + Λ_1 S_{ν+l'-1} + S_{ν+l'}   (5.28)

Because ν ≤ t', the above equation always specifies known syndrome components S_i, 1 ≤ i ≤ 2t', if l' is in the interval 1 ≤ l' ≤ ν. Letting j = l' + ν, we obtain

S_j + Λ_1 S_{j-1} + ... + Λ_ν S_{j-ν} = 0   (5.29)

for ν + 1 ≤ j ≤ 2ν. The functions defined by equation (5.29) are Newton's identities. Rearranging equation (5.29), we get
S_j = -Λ_1 S_{j-1} - Λ_2 S_{j-2} - ... - Λ_{ν-1}S_{j-ν+1} - Λ_ν S_{j-ν}   (5.30)
    = -Σ_{l=1}^{ν} Λ_l S_{j-l}   (5.31)

Equation (5.31) is the key to solving the decoding problem and is called the key equation. In matrix form,
| S_1  S_2      ...  S_ν      | | Λ_ν     |   | -S_{ν+1} |
| S_2  S_3      ...  S_{ν+1}  | | Λ_{ν-1} | = | -S_{ν+2} |
| ...                         | | ...     |   | ...      |
| S_ν  S_{ν+1}  ...  S_{2ν-1} | | Λ_1     |   | -S_{2ν}  |   (5.32)
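Newton's identities (5.29)-(5.31) can be checked numerically. Assuming three errors at positions 3, 5, and 12 in the (15, 5) code over GF(2^4) (the same data as Example 5.6 below; helper names are mine), the sketch computes Λ(x) directly from the error locations, computes S_1, ..., S_6 from r(x), and confirms S_j = Σ_{l} Λ_l S_{j-l} for j = 4, 5, 6 (the minus signs vanish in characteristic 2).

```python
# GF(2^4), p(x) = x^4 + x + 1
exp = []
x = 1
for _ in range(15):
    exp.append(x)
    x <<= 1
    if x & 0b10000:
        x ^= 0b10011
log = {v: i for i, v in enumerate(exp)}

def mul(a, b):
    return exp[(log[a] + log[b]) % 15] if a and b else 0

positions = (3, 5, 12)                  # assumed error locations

# Lambda(x) = prod (1 - alpha^p x); coefficients lowest degree first, Lambda_0 = 1
lam = [1]
for p in positions:
    nxt = [0] * (len(lam) + 1)
    for k, c in enumerate(lam):
        nxt[k] ^= c
        nxt[k + 1] ^= mul(exp[p], c)
    lam = nxt

# syndromes S_i = e(alpha^i) = sum over error positions of alpha^(p*i)
S = [0] * 7                              # S[1..6] used
for i in range(1, 7):
    for p in positions:
        S[i] ^= exp[(p * i) % 15]

newton_ok = all(
    S[j] == mul(lam[1], S[j - 1]) ^ mul(lam[2], S[j - 2]) ^ mul(lam[3], S[j - 3])
    for j in (4, 5, 6)
)
```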
The coefficients of Λ(x) are now related to the syndrome components, effectively by inverting the square matrix. One can see that equation (5.31) can be implemented as a linear feedback shift-register. Figure 5.1 shows the synthesis of the error-locator polynomial as a linear shift-register circuit. The problem of finding the error-locator polynomial reduces to the design of a linear feedback shift-register with the coefficients of Λ(x) as the taps that generate the given syndrome sequence.

Figure 5.1 Synthesis of an error-locator polynomial as a linear feedback shift-register circuit: S_j = -Σ_{l=1}^{ν} Λ_l S_{j-l}, j = ν+1, ν+2, ..., 2ν.

An iterative algorithm based on the synthesis of the error-locator polynomial as a linear feedback shift-register was developed by Massey in 1972 [4] for the solution of Newton's identities. Massey's approach is closely related to an algorithm developed by Berlekamp in 1965 [5]. The two decoding techniques are often referred to as one procedure, the Berlekamp-Massey iterative algorithm. We shall follow Massey's approach. The algorithm simply determines
the tap connections of a linear feedback shift-register of minimum length that generates the given syndrome sequence. It can be applied to binary or nonbinary, primitive or nonprimitive BCH codes. The flow chart of the Berlekamp-Massey iterative algorithm is shown in Figure 5.2. At the μ-th iteration step, let L_μ be the degree of the error-locator polynomial Λ^(μ)(x), d_μ be the μ-th discrepancy, T(x) be the new connection polynomial for no discrepancy, B(x) be the old connection polynomial, and
Figure 5.2 Berlekamp-Massey algorithm.
j be the location of the oldest symbol in the linear feedback shift-register at the point where the register fails. Bearing in mind that the coefficients of all the polynomials determine the taps of the linear feedback shift-register, the algorithm works as follows. Initially, the tap coefficient and the length of the feedback shift-register are set to 1 and 0, respectively. This implies that the computed error-locator polynomial Λ^(0)(x) and its length are set to 1 and 0, respectively. xΛ^(0)(x) is temporarily stored in the connection polynomial B(x). Every time a new syndrome component is taken, a discrepancy term d_μ is computed from the tap coefficients and the contents of the linear feedback shift-register. If there is no discrepancy, the tap coefficients generate the given syndrome sequence and the old connection polynomial B(x) is updated for the next iteration. If a discrepancy exists, the tap coefficients do not generate the given syndrome sequence; a new connection polynomial T(x) for no discrepancy is computed from the shift-register taps, the discrepancy d_μ, and the old connection polynomial B(x). The length of the feedback shift-register is then tested. If the length requires an extension, the old connection polynomial B(x) is normalized and updated, the shift-register taps are modified by the coefficients of T(x), and the length of T(x), the value j, and the shift-register length L_μ are updated; otherwise the shift-register taps are modified by the coefficients of T(x) and B(x) is updated for the next iteration. The algorithm stops at the end of iteration μ = 2t' and Λ^(2t')(x) is taken as the error-locator polynomial Λ(x). The degree of the error-locator polynomial found by the algorithm is always at a minimum. A detailed proof of the algorithm can be found in [6].
When the error-locator polynomial is found, we need to determine the roots of Λ(x).
A procedure called the Chien search, a trial-and-error method, can be used to determine the roots, (α^{j_i})^{-1}, of the error-locator polynomial [7]. The technique tests whether a nonzero element of GF(q^s) is a root of Λ(x) by substituting each nonzero element of GF(q^s), in turn, into equation (5.23). If Λ(x) = 0, a root is found, and the power terms associated with the inverses of the roots give us the locations of the errors. In summary, the Berlekamp-Massey decoding procedure for binary BCH codes is as follows:

1. Use the received polynomial r(x) to compute the syndrome components S₁, S₂, …, S_{2t'}.
2. Use the syndrome components S₁, S₂, …, S_{2t'} and apply the Berlekamp-Massey algorithm to determine the coefficients of the error-locator polynomial Λ(x).
3. Find the roots of Λ(x).
4. Determine the error polynomial e(x) as identified by the roots of Λ(x).
5. Add the error polynomial e(x) to the received polynomial for correction.
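The Chien search of step 3 can be sketched as follows. This is an illustrative sketch of ours over GF(2⁴) with p(x) = x⁴ + x + 1 (the field of the examples below); field elements are integers whose bits are the coefficients of 1, α, α², α³.

```python
# GF(2^4) tables for p(x) = x^4 + x + 1 (alpha = 2).
EXP = [1] * 30
for i in range(1, 30):
    v = EXP[i - 1] << 1
    if v & 0x10:
        v ^= 0x13                     # reduce modulo x^4 + x + 1
    EXP[i] = v
LOG = {EXP[i]: i for i in range(15)}

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def chien_search(Lam, n=15):
    """Exponents i for which alpha^i is a root of Lambda(x).

    Lam holds the locator coefficients, lowest degree first."""
    roots = []
    for i in range(n):
        acc, xpow = 0, 1
        for c in Lam:                 # evaluate Lambda(alpha^i) term by term
            acc ^= gmul(c, xpow)
            xpow = gmul(xpow, EXP[i])
        if acc == 0:
            roots.append(i)
    return roots

def error_locations(Lam, n=15):
    """Error positions: exponents of the inverses of the roots."""
    return sorted((n - i) % n for i in chien_search(Lam, n))
```

For the locator Λ(x) = α⁵x³ + x + 1 of Example 5.6 (coefficients [1, 1, 0, 6]), the roots are α³, α¹⁰, α¹², and the error locations are 3, 5, and 12, i.e., e(x) = x¹² + x⁵ + x³.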
Example 5.6 For t' = 3, p(x) = x⁴ + x + 1 with α as a primitive root, the generator polynomial g(x) of a (15, 5) triple-error-correcting, binary, narrow-sense, primitive BCH code is x¹⁰ + x⁸ + x⁵ + x⁴ + x² + x + 1. Assume v(x) = 0 and r(x) = x¹² + x⁵ + x³ and use the Berlekamp-Massey algorithm to compute the error-locator polynomial. From equations (5.17) and (5.18), the syndrome vector S = [S₁ S₂ … S_{2t'}] and S_i = r(α^i). Therefore,

S₁ = r(x = α) = α¹² + α⁵ + α³
   = (α³ + α² + α + 1) + (α² + α) + α³
   = 1

Similarly,

S₂ = r(α²) = 1
S₃ = r(α³) = α¹⁰
S₄ = r(α⁴) = 1
S₅ = r(α⁵) = α¹⁰
S₆ = r(α⁶) = α⁵

Using the Berlekamp-Massey iterative algorithm, we obtain Table 5.4. Assume that we have filled out the third row (μ = 2) and we know d₂ = 0, B(x) = x², Λ^(2)(x) = x + 1 = Λ₁^(2)x + 1, j = 1, and L₂ = 1. Now, μ := μ + 1 = 3, where := denotes the assignment operator. The discrepancy for the next syndrome is

d₃ = S₃ + Λ₁^(2)S₂ = α¹⁰ + 1 = α⁵
Table 5.4
Berlekamp-Massey Iterative Algorithm Decoding Steps

μ    d_μ        B(x)                    Λ^(μ)(x)           j    L_μ
0    -          x                       1                  0    0
1    S₁ = 1     x                       x + 1              1    1
2    0          x²                      x + 1              1    1
3    α⁵         α¹⁰x² + α¹⁰x            α⁵x² + x + 1       2    2
4    0          α¹⁰x³ + α¹⁰x²           α⁵x² + x + 1       2    2
5    α¹⁰        α¹⁰x³ + α⁵x² + α⁵x      α⁵x³ + x + 1       3    3
6    0          α¹⁰x⁴ + α⁵x³ + α⁵x²     α⁵x³ + x + 1       3    3
Thus, the current shift-register design does not give the next syndrome. We then compute the connection polynomial T(x) for no discrepancy: T(x) = Λ^(2)(x) + d₃B(x) = α⁵x² + x + 1. Since L₂ < (μ − j), we need to extend the shift-register length, store the old shift-register contents, and update the shift-register contents and the extended length, i.e., B(x) := α¹⁰x² + α¹⁰x, Λ^(3)(x) := T(x) = α⁵x² + x + 1, the length of T(x) is μ − j = 3 − 1 = 2, j := μ − L_{μ−1} = 3 − 1 = 2, and L₃ = 2. The iteration process stops at the end of μ = 6. From Table 5.4, the error-locator polynomial is Λ(x) = Λ^(6)(x) and Λ(x) = α⁵x³ + x + 1. To determine the roots of the error-locator polynomial in Example 5.6, we substitute all nonzero elements of GF(2⁴) into equation (5.23) and test for Λ(x) = 0. We find that the elements α³, α¹⁰, and α¹² are the roots of Λ(x). Their inverses are α¹², α⁵, and α³, which are the error-locator numbers. Therefore, e(x) = x¹² + x⁵ + x³. The decoded vector v̂(x) = r(x) + e(x) = 0. Triple errors are corrected.
5.5.2 Euclid's Algorithm

The Euclidean algorithm is an efficient recursive technique for finding the greatest common divisor gcd(r₀(x), r₁(x)) of two distinct nonzero polynomials r₀(x) and r₁(x) over GF(q). Furthermore, there exist polynomials a(x) and b(x) over GF(q) such that

a(x)r₀(x) + b(x)r₁(x) = gcd(r₀(x), r₁(x))    (5.33)

a(x) and b(x) can be computed by the extended form of the Euclidean algorithm. Given two distinct nonzero polynomials r₀(x) and r₁(x) over GF(q), the extended Euclidean algorithm works as follows. Let
deg r₀(x) ≥ deg r₁(x) ≥ 1, and set

a₀(x) = 1,  b₀(x) = 0
a₁(x) = 0,  b₁(x) = 1

where deg r₀(x) and deg r₁(x) are the degrees of r₀(x) and r₁(x), respectively. For i ≥ 2, compute the quotient polynomial q_i(x) and remainder polynomial r_i(x) by applying the long division process to r_{i−2}(x) and r_{i−1}(x):

r_{i−2}(x) = q_i(x)r_{i−1}(x) + r_i(x)

where 0 ≤ deg r_i(x) < deg r_{i−1}(x). Then compute

a_i(x) = a_{i−2}(x) − q_i(x)a_{i−1}(x)

and

b_i(x) = b_{i−2}(x) − q_i(x)b_{i−1}(x)

Iteration stops when r_i(x) = 0. At the end of the iteration, the last nonzero remainder polynomial is the greatest common divisor gcd(r₀(x), r₁(x)). In addition, we have the following properties:

1. deg a_i(x) = Σ_{j=3}^{i} deg q_j(x)
2. deg b_i(x) = Σ_{j=2}^{i} deg q_j(x)
3. deg b_i(x) = deg r₀(x) − deg r_{i−1}(x)
4. deg r_i(x) = deg r₀(x) − Σ_{j=2}^{i+1} deg q_j(x)
Example 5.7 Compute gcd(r₀(x), r₁(x)), a(x), and b(x), where r₀(x) = x⁵ + 1 and r₁(x) = x⁴ + x³ + x² + 1, using the extended Euclidean algorithm. Application of the extended Euclidean algorithm yields Table 5.5. The iteration process stops when r_i(x) = 0. At the end of the iteration, gcd(x⁵ + 1, x⁴ + x³ + x² + 1) = r₃(x) = x + 1, a(x) = a₃(x) = x² + 1, and b(x) = b₃(x) = x³ + x² + x.
Table 5.5
An Extended Euclidean Algorithm Example with r₀(x) = x⁵ + 1 and r₁(x) = x⁴ + x³ + x² + 1

i    r_i(x)                q_i(x)     a_i(x)     b_i(x)
0    x⁵ + 1                -          1          0
1    x⁴ + x³ + x² + 1      -          0          1
2    x² + x                x + 1      1          x + 1
3    x + 1                 x² + 1     x² + 1     x³ + x² + x
4    0                     x          -          -
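The recursions above can be sketched compactly for polynomials over GF(2), stored as integer bit masks (bit k holds the coefficient of x^k). This is a sketch of ours, not the book's program.

```python
def pdeg(p):
    """Degree of a GF(2) polynomial stored as a bit mask (-1 for 0)."""
    return p.bit_length() - 1

def pmul(a, b):
    """Carry-less (GF(2)) polynomial product."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pdivmod(a, b):
    """Quotient and remainder of GF(2) polynomial long division."""
    q = 0
    while a and pdeg(a) >= pdeg(b):
        shift = pdeg(a) - pdeg(b)
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def ext_euclid(r0, r1):
    """Return (g, a, b) with a*r0 + b*r1 = g = gcd(r0, r1) over GF(2)."""
    a0, b0, a1, b1 = 1, 0, 0, 1
    while r1:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        a0, a1 = a1, a0 ^ pmul(q, a1)
        b0, b1 = b1, b0 ^ pmul(q, b1)
    return r0, a0, b0
```

Running it on Example 5.7 (r₀ = x⁵ + 1 = 0b100001, r₁ = x⁴ + x³ + x² + 1 = 0b11101) reproduces Table 5.5: gcd = x + 1, a(x) = x² + 1, b(x) = x³ + x² + x.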
The extended Euclidean algorithm can be used to decode BCH codes. Define an infinite-degree syndrome polynomial as

S(x) = … + S_{2t'}x^{2t'} + S_{2t'−1}x^{2t'−1} + … + S₂x² + S₁x    (5.34)

where we only know the coefficients S₁, S₂, …, S_{2t'}. S(x) is made into an infinite-degree polynomial so that it can also be expressed as a generating function

S(x) = Σ_{i'=1}^{∞} S_{i'}x^{i'}    (5.35)

Let the error-locator polynomial

Λ(x) = Λ_ν x^ν + Λ_{ν−1}x^{ν−1} + … + Λ₁x + 1    (5.36)

Also, define an error-evaluator polynomial

Ω(x) = Λ(x)[S(x) + 1]    (5.37)

where

Ω(x) = Λ_ν S_{2t'}x^{2t'+ν} + (Λ_{ν−1}S_{2t'} + Λ_ν S_{2t'−1})x^{2t'+ν−1} + …
     + (S_{2t'} + Λ₁S_{2t'−1} + Λ₂S_{2t'−2} + … + Λ_ν S_{2t'−ν})x^{2t'} + …
     + (S_{ν+1} + Λ₁S_ν + Λ₂S_{ν−1} + … + Λ_ν S₁)x^{ν+1}
     + (S_ν + Λ₁S_{ν−1} + Λ₂S_{ν−2} + … + Λ_ν)x^ν + …
     + (S₃ + Λ₁S₂ + Λ₂S₁ + Λ₃)x³ + (S₂ + Λ₁S₁ + Λ₂)x² + (S₁ + Λ₁)x + 1    (5.38)
Assuming ν errors, the degree of Ω(x) is less than or equal to ν. We observe that the coefficients of Ω(x) from x^{ν+1} through x^{2t'} define exactly the same set of equations as (5.28). Given the 2t' known coefficients of S(x), the decoding problem becomes one of finding an error-locator polynomial Λ(x) of degree ν ≤ t' that satisfies

Ω(x) ≡ Λ(x)[S(x) + 1] modulo x^{2t'+1}    (5.39)

Thus, equation (5.39) holds the key to solving the decoding problem. Equation (5.39) is also called the key equation and can be written as

Ω(x) = ρ(x)x^{2t'+1} + Λ(x)[S(x) + 1]    (5.40)

If we let r₀(x) = x^{2t'+1} and r₁(x) = S(x) + 1, and use the extended Euclidean algorithm to compute the greatest common divisor of x^{2t'+1} and S(x) + 1, the partial results at the i-th step are

r_i(x) = a_i(x)x^{2t'+1} + b_i(x)[S(x) + 1]    (5.41)

{r_i(x), a_i(x), b_i(x)} is a set of solutions to the above equation at step i. By property 3 of the extended Euclidean algorithm, we have

deg b_{i+1}(x) + deg r_i(x) = 2t' + 1    (5.42)

and

deg b_i(x) + deg r_{i−1}(x) = 2t' + 1    (5.43)

If deg r_{i−1}(x) > t' and deg r_i(x) ≤ t', then we are assured that deg b_i(x) ≤ t' and deg b_{i+1}(x) > t', respectively. Letting Ω_i(x) = r_i(x) and Λ_i(x) = b_i(x), this implies that the partial results at the i-th step provide the only solution to the key equation. The only solution that guarantees the desired degrees of Ω(x) and Λ(x) is obtained at the first such i. Thus, we simply apply the extended Euclidean algorithm to x^{2t'+1} and S(x) + 1 until deg r_i(x) ≤ t'. It can be seen that the useful aspect of the algorithm here is not finding a greatest common divisor, but the partial results that provide the only solution to the key equation. In general, the Berlekamp-Massey algorithm is more efficient than the extended Euclidean algorithm. For error correction of binary BCH codes, the decoding steps can be summarized as follows:
1. Use the received polynomial rex) to compute the coefficients 51, 52, ... , 5 2t, of the syndrome polynomial 5(x). 2. Apply the extended Euclidean algorithm with ro(x) := x 2t ' + 1 and r1(x) := 5(x) + 1 to compute the error-locator polynomial A(x). The iteration stops when deg ri(x) ~ t', 3. Find the roots of A(x). 4. Determine the error polynomial e(x) as identified by the roots of A(x).
5. Add the error polynomial e(x) to the received polynomial for correction. Example
5.8 Consider the (15, 5) triple-error-correcting binary, narrow-
sense, primitive BCH code with the code polynomial vex) and the received polynomial rex) given in Example 5.6. From Example 5.6, we have 51 := 1, . 52 := 1,53 := a 10,54:= 1, 5 s := a 10,and 56 := a S. T he syndrome polynomial " 5() . t h e exten d ed Euc lid IS x:= a Sx 6 + a 10x S + x 4 + a 10x 3 + x 2 + x. U smg 1 ean algorithm we obtain Table 5.6. The iteration process stops when deg ri(x) ~ (t' := 3). From Table 5.6, the error-locator polynomial is A(x) := b 3 (x) := as x 3 + x + 1, same as in Example 5.6.
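As a check on Example 5.8, the key-equation solver can be sketched over GF(2⁴) with polynomials as coefficient lists, lowest degree first. This is an illustrative sketch of ours (the function names are our own), not the book's program.

```python
# GF(2^4) tables for p(x) = x^4 + x + 1 (alpha = 2).
EXP = [1] * 30
for i in range(1, 30):
    v = EXP[i - 1] << 1
    if v & 0x10:
        v ^= 0x13
    EXP[i] = v
LOG = {EXP[i]: i for i in range(15)}

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def pdeg(p):
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def pdivmod(a, b):
    """Long division of coefficient lists over GF(2^4)."""
    a = a[:]
    q = [0] * len(a)
    db = pdeg(b)
    binv = EXP[(15 - LOG[b[db]]) % 15]   # inverse of leading coefficient
    da = pdeg(a)
    while da >= db:
        c = gmul(a[da], binv)
        q[da - db] = c
        for i in range(db + 1):
            a[da - db + i] ^= gmul(c, b[i])
        da = pdeg(a)
    return q, a

def pmul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] ^= gmul(x, y)
    return r

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) ^ (b[i] if i < len(b) else 0)
            for i in range(n)]

def trim(p):
    return p[:pdeg(p) + 1] if pdeg(p) >= 0 else [0]

def euclid_key_equation(S, t):
    """Run the extended Euclidean algorithm on x^{2t+1} and S(x)+1
    until deg r_i <= t; return (Lambda, Omega) = (b_i, r_i)."""
    r0 = [0] * (2 * t + 1) + [1]         # x^{2t'+1}
    r1 = padd(S, [1])                    # S(x) + 1
    b0, b1 = [0], [1]
    while pdeg(r1) > t:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        b0, b1 = b1, padd(b0, pmul(q, b1))
    return trim(b1), trim(r1)
```

With the syndrome polynomial of Example 5.8, S(x) = α⁵x⁶ + α¹⁰x⁵ + x⁴ + α¹⁰x³ + x² + x (coefficient list [0, 1, 1, 7, 1, 7, 6]), it returns the locator [1, 1, 0, 6], i.e., α⁵x³ + x + 1.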
5.6 Correction of Errors and Erasures

The error-and-erasure decoding problem can be solved using the Berlekamp-Massey algorithm or the extended Euclidean algorithm. The technique is similar to that described in Section 3.8. For error-and-erasure correction of binary BCH codes, the decoding steps using the Berlekamp-Massey algorithm can be summarized as follows.

Table 5.6
Extended Euclidean Algorithm Decoding Steps

i    q_i(x)         r_i(x) = Ω_i(x)              a_i(x) = ρ_i(x)    b_i(x) = Λ_i(x)
0    -              x⁷                           1                  0
1    -              S(x) + 1                     0                  1
2    α¹⁰x + 1       α¹⁰x⁴ + α⁵x² + α⁵x + 1       1                  α¹⁰x + 1
3    α¹⁰x² + x      1                            α¹⁰x² + x          α⁵x³ + x + 1
1. Modify the received polynomial r(x) by substituting 0's in the erasure positions in r(x). Label the modified received polynomial r₀(x) and compute the syndrome components S₁, S₂, …, S_{2t'}.
2. Use the syndrome components S₁, S₂, …, S_{2t'} and apply the standard Berlekamp-Massey algorithm to compute the error-locator polynomial Λ(x).
3. Find the roots of Λ(x).
4. Determine the error polynomial e(x) as identified by the roots of Λ(x).
5. Determine the code polynomial v₀(x), where v₀(x) = r₀(x) + e(x).
6. Modify the received polynomial r(x) by substituting 1's in the erasure positions in r(x). Label the modified received polynomial r₁(x) and compute the syndrome components S₁, S₂, …, S_{2t'}.
7. Repeat steps 2 to 4 to determine the code polynomial v₁(x), where v₁(x) = r₁(x) + e(x).
8. From the two code polynomials, select the one that differs from r(x) in the smallest number of places outside the f erased positions.
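The selection rule of step 8 can be sketched directly. The sketch is ours; the vector layout (index k holding the coefficient of x^k, with None marking an erasure) and the function names are assumptions for illustration.

```python
def select_codeword(received, candidates):
    """Pick the candidate code vector that differs from the received
    vector in the fewest non-erased positions (step 8)."""
    def disagreements(v):
        return sum(1 for r, c in zip(received, v)
                   if r is not None and r != c)
    return min(candidates, key=disagreements)
```

In Example 5.9 below, v₀(x) = 0 disagrees with r(x) in 2 non-erased places while v₁(x) disagrees in 3, so the all-zero codeword is selected.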
Example 5.9 For t' = 3, p(x) = x⁴ + x + 1 with α as a primitive root, the generator polynomial g(x) of a (15, 5) triple-error-correcting, binary, narrow-sense, primitive BCH code is x¹⁰ + x⁸ + x⁵ + x⁴ + x² + x + 1. Assume v(x) = 0 and r(x) = x¹² + x⁵ + ?x² + ?x, where ? denotes an erasure. Use the Berlekamp-Massey algorithm to compute the error-locator polynomial and decode. Modifying r(x) by substituting 0's in the erasure positions, we get r₀(x) = x¹² + x⁵. The syndrome components are S_i = r₀(α^i). Therefore,

S₁ = α¹² + α⁵ = (α³ + α² + α + 1) + (α² + α) = α³ + 1 = α¹⁴

Similarly, S₂ = α¹³, S₃ = α¹³, S₄ = α¹¹, S₅ = α⁵, and S₆ = α¹¹.
Using the Berlekamp-Massey algorithm, we obtain Table 5.7. The iteration process stops when μ = 2t' = 6. From Table 5.7, the error-locator polynomial Λ(x) = α²x² + α¹⁴x + 1. By the trial-and-error method, α³ and α¹⁰ are roots of the error-locator polynomial Λ(x) such that Λ(x) = 0. Their inverses are α¹² and α⁵. Therefore,

e(x) = e₁₂x¹² + e₅x⁵    (5.44)

where e₅ = e₁₂ = 1. Substituting e₅ and e₁₂ into equation (5.44), we have

e(x) = x¹² + x⁵

The decoded vector v₀(x) = r₀(x) + e(x) = 0. Next, modifying r(x) by substituting 1's in the erasure positions, we get r₁(x) = x¹² + x⁵ + x² + x. The syndrome components are S_i = r₁(α^i).
Table 5.7
Berlekamp-Massey Iterative Algorithm Decoding Steps

μ    d_μ         B(x)               Λ^(μ)(x)            j    L_μ
0    -           x                  1                   0    0
1    S₁ = α¹⁴    αx                 α¹⁴x + 1            1    1
2    0           αx²                α¹⁴x + 1            1    1
3    α           α¹³x² + α¹⁴x       α²x² + α¹⁴x + 1     2    2
4    0           α¹³x³ + α¹⁴x²      α²x² + α¹⁴x + 1     2    2
5    0           α¹³x⁴ + α¹⁴x³      α²x² + α¹⁴x + 1     2    2
6    0           α¹³x⁵ + α¹⁴x⁴      α²x² + α¹⁴x + 1     2    2
Therefore,

S₁ = α¹² + α⁵ + α² + α = α¹²

Similarly, S₂ = α⁹, S₃ = α¹⁴, S₄ = α³, S₅ = α¹⁰, and S₆ = α¹³, so that

S(x) = α¹³x⁶ + α¹⁰x⁵ + α³x⁴ + α¹⁴x³ + α⁹x² + α¹²x

Using the Berlekamp-Massey algorithm, we obtain Table 5.8. The iteration process stops at the end of μ = 2t' = 6. From Table 5.8, the error-locator polynomial Λ(x) = α⁴x³ + α⁸x² + α¹²x + 1. By the trial-and-error method, α, α², and α⁸ are roots of the error-locator polynomial Λ(x) such that Λ(x) = 0. Their inverses are α¹⁴, α¹³, and α⁷. Therefore,

e(x) = e₁₄x¹⁴ + e₁₃x¹³ + e₇x⁷    (5.45)

where e₇ = e₁₃ = e₁₄ = 1. Substituting e₇, e₁₃, and e₁₄ into equation (5.45), we have

e(x) = x¹⁴ + x¹³ + x⁷

The decoded vector v₁(x) = r₁(x) + e(x) = x¹⁴ + x¹³ + x¹² + x⁷ + x⁵ + x² + x. The code polynomial v₀(x) differs from r(x) in 2 places outside the 2 erased positions and the code polynomial v₁(x) differs from r(x) in 3 places outside the 2 erased positions. v₀(x) = 0 is selected as the decoded code polynomial. Double erasures and double errors are corrected.

Table 5.8
Berlekamp-Massey Iterative Algorithm Decoding Steps
μ    d_μ         B(x)                   Λ^(μ)(x)                   j    L_μ
0    -           x                      1                          0    0
1    S₁ = α¹²    α³x                    α¹²x + 1                   1    1
2    0           α³x²                   α¹²x + 1                   1    1
3    α⁸          α⁴x² + α⁷x             α¹¹x² + α¹²x + 1           2    2
4    0           α⁴x³ + α⁷x²            α¹¹x² + α¹²x + 1           2    2
5    1           α¹¹x³ + α¹²x² + x      α⁴x³ + α⁸x² + α¹²x + 1     3    3
6    0           α¹¹x⁴ + α¹²x³ + x²     α⁴x³ + α⁸x² + α¹²x + 1     3    3
We can of course use the extended Euclidean algorithm at step 2 instead of the Berlekamp-Massey algorithm to decode BCH codes. To illustrate this, we rework Example 5.9 using the extended Euclidean algorithm.

Example 5.10 Consider the (15, 5) triple-error-correcting, binary, narrow-sense, primitive BCH code with the code polynomial v(x) and the received polynomial r(x) given in Example 5.9. Modifying r(x) by substituting 0's in the erasure positions, we get r₀(x) = x¹² + x⁵. From Example 5.9, we have S₁ = α¹⁴, S₂ = α¹³, S₃ = α¹³, S₄ = α¹¹, S₅ = α⁵, and S₆ = α¹¹. The syndrome polynomial is S(x) = α¹¹x⁶ + α⁵x⁵ + α¹¹x⁴ + α¹³x³ + α¹³x² + α¹⁴x. Using the extended Euclidean algorithm, we obtain Table 5.9. The iteration process stops when deg r_i(x) ≤ (t' = 3). From Table 5.9, the error-locator polynomial is Λ(x) = b₃(x) = αx² + α¹³x + α¹⁴ = α¹⁴(α²x² + α¹⁴x + 1) and the error-evaluator polynomial is Ω(x) = r₃(x) = αx² + α¹⁴. It can be seen that the solution Λ(x) is identical (within a constant factor) to that found in Example 5.9 using the Berlekamp-Massey algorithm. Clearly, the error locations must be identical to those found in Example 5.9 and

e(x) = e₁₂x¹² + e₅x⁵
where e₅ = e₁₂ = 1. The decoded vector v₀(x) = r₀(x) + e(x) = 0. Next, modifying r(x) by substituting 1's in the erasure positions, we get r₁(x) = x¹² + x⁵ + x² + x. From Example 5.9, we have S₁ = α¹², S₂ = α⁹, S₃ = α¹⁴, S₄ = α³, S₅ = α¹⁰, and S₆ = α¹³. The syndrome polynomial is S(x) = α¹³x⁶ + α¹⁰x⁵ + α³x⁴ + α¹⁴x³ + α⁹x² + α¹²x. Using the extended Euclidean algorithm, we obtain Table 5.10. The iteration process stops when deg r_i(x) ≤ (t' = 3). From Table 5.10, the error-locator polynomial Λ(x) = b₄(x) = α¹⁴x³ + α³x² + α⁷x + α¹⁰ =

Table 5.9
Extended Euclidean Algorithm Decoding Steps
i    q_i(x)        r_i(x) = Ω_i(x)                            a_i(x) = ρ_i(x)    b_i(x) = Λ_i(x)
0    -             x⁷                                         1                  0
1    -             S(x) + 1                                   0                  1
2    α⁴x + α¹³     α¹⁴x⁵ + α¹¹x⁴ + α⁹x³ + α⁵x² + α⁶x + α¹³    1                  α⁴x + α¹³
3    α¹²x + α⁵     αx² + α¹⁴                                  α¹²x + α⁵          αx² + α¹³x + α¹⁴
Table 5.10
Extended Euclidean Algorithm Decoding Steps

i    q_i(x)        r_i(x) = Ω_i(x)                           a_i(x) = ρ_i(x)      b_i(x) = Λ_i(x)
0    -             x⁷                                        1                    0
1    -             S(x) + 1                                  0                    1
2    α²x + α¹⁴     α⁶x⁵ + α⁵x⁴ + α⁴x³ + α⁶x² + α⁹x + α¹⁴     1                    α²x + α¹⁴
3    α⁷x + α¹²     αx⁴ + α⁵x³ + α¹²x + α¹²                   α⁷x + α¹²            α⁹x² + α⁸x + α¹²
4    α⁵x + α¹⁴     α³x² + α¹⁰                                α¹²x² + α³x + α¹²    α¹⁴x³ + α³x² + α⁷x + α¹⁰
α¹⁰(α⁴x³ + α⁸x² + α¹²x + 1) and the error-evaluator polynomial Ω(x) = r₄(x) = α³x² + α¹⁰. The solution Λ(x) is also identical (within a constant factor) to that found in Example 5.9 using the Berlekamp-Massey algorithm. Again, the error locations must be identical to those found in Example 5.9 and

e(x) = e₁₄x¹⁴ + e₁₃x¹³ + e₇x⁷

where e₇ = e₁₃ = e₁₄ = 1. The decoded vector v₁(x) = r₁(x) + e(x) = x¹⁴ + x¹³ + x¹² + x⁷ + x⁵ + x² + x. The code polynomial v₀(x) differs from r(x) in 2 places outside the 2 erased positions and the code polynomial v₁(x) differs from r(x) in 3 places outside the 2 erased positions. v₀(x) = 0 is selected as the decoded code polynomial. Double erasures and double errors are corrected.
5.7 Computer Simulation Results

The variation of bit error rate with the E_b/N₀ ratio of two BCH codes with coherent BPSK signals over the AWGN channel has been measured by computer simulation. The system model is shown in Figure 5.3. Here, E_b is the average transmitted bit energy, and N₀/2 is the two-sided power spectral density of the noise. The parameters of the codes are given in Table 5.11. Perfect timing, perfect synchronization, and Berlekamp-Massey algorithm decoding are assumed. In each test, the average transmitted bit energy was fixed, and the variance of the AWGN was adjusted for a range of average bit-error rates.
Figure 5.3 Model of a coded digital communication system (data source U → channel encoder → channel codeword v → transmission path (analog channel) with additive noise → received vector r → decoder → estimate Û → data sink).

Table 5.11
Parameters of Binary, Narrow-Sense, Primitive BCH Codes
n     k    Designed Distance 2t_d + 1    g(x)
15    7    5                             x⁸ + x⁷ + x⁶ + x⁴ + 1
15    5    7                             x¹⁰ + x⁸ + x⁵ + x⁴ + x² + x + 1
The simulated error performance of the binary BCH codes with Berlekamp-Massey algorithm decoding is shown in Figure 5.4. Comparisons are made between the coded and uncoded coherent BPSK systems. At high E_b/N₀ ratios, it can be seen that the performance of the coded BPSK systems is better than that of the uncoded BPSK system. At a bit error rate of 10⁻⁵, the (15, 7) double-error-correcting and (15, 5) triple-error-correcting binary BCH coded BPSK systems give 0.75 and 1.25 dB of coding gain over the uncoded BPSK system, respectively.
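For reference, the uncoded coherent BPSK curve in Figure 5.4 follows the standard expression P_b = Q(√(2E_b/N₀)) = ½ erfc(√(E_b/N₀)). A one-line sketch of ours (the coded curves require a full Monte-Carlo simulation and are not reproduced here):

```python
import math

def bpsk_ber(ebno_db):
    """Theoretical uncoded coherent BPSK bit error rate on AWGN:
    P_b = Q(sqrt(2 Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebno))
```

At E_b/N₀ = 0 dB this gives about 7.9 × 10⁻², and the curve falls steeply with increasing E_b/N₀, matching the uncoded reference in Figure 5.4.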
Figure 5.4 Performance of binary BCH codes with Berlekamp-Massey algorithm decoding in AWGN channels (bit error rate, 10⁰ down to 10⁻⁶, versus E_b/N₀ in dB, for uncoded coherent BPSK, the (15, 7) binary BCH code, and the (15, 5) binary BCH code).
References

[1] Hocquenghem, A., "Codes Correcteurs d'Erreurs," Chiffres, Vol. 2, 1959, pp. 147-156.
[2] Bose, R. C., and D. K. Ray-Chaudhuri, "On a Class of Error Correcting Binary Group Codes," Information and Control, Vol. 3, March 1960, pp. 68-79.
[3] Bose, R. C., and D. K. Ray-Chaudhuri, "Further Results on Error Correcting Binary Group Codes," Information and Control, Vol. 3, September 1960, pp. 279-290.
[4] Massey, J. L., "Shift Register Synthesis and BCH Decoding," IEEE Trans. on Information Theory, Vol. IT-15, No. 1, January 1969, pp. 122-127.
[5] Berlekamp, E. R., "On Decoding Binary Bose-Chaudhuri-Hocquenghem Codes," IEEE Trans. on Information Theory, Vol. IT-11, No. 4, October 1965, pp. 577-580.
[6] Blahut, R. E., Theory and Practice of Error Control Codes, Reading, MA: Addison-Wesley, 1984.
[7] Chien, R. T., "Cyclic Decoding Procedures for Bose-Chaudhuri-Hocquenghem Codes," IEEE Trans. on Information Theory, Vol. IT-10, No. 4, October 1964, pp. 357-363.
6
Reed-Solomon Codes

6.1 Introduction

Reed-Solomon (RS) codes form a subclass of the nonbinary BCH codes. Reed-Solomon codes are cyclic codes, and they were discovered by Reed and Solomon in 1960 [1]. Although they are a subclass of the nonbinary BCH codes, Reed-Solomon codes offer better error control performance and more efficient practical implementation because they have the largest minimum Hamming distance for fixed values of k and n. In this chapter, we describe the generation and decoding of Reed-Solomon codes. A thorough discussion of Reed-Solomon codes can be found in references [2-6].

6.2 Description of Reed-Solomon Codes

Let α be an element of GF(q^s) and let t_d be the designed error-correcting power of a BCH code. In Chapter 5, we have seen that for some positive integers s and b ≥ 1, a BCH code of length n and minimum Hamming distance ≥ 2t_d + 1 can be generated by the generator polynomial g(x) over GF(q) with α^b, α^{b+1}, …, α^{b+2t_d−1} as the roots of g(x). Let α^i, a nonzero element in GF(q^s), be a root of the minimal polynomial φ_i(x) over GF(q) and n_i be the order of α^i, for i = b, b + 1, …, b + 2t_d − 1. The generator polynomial of a BCH code can be expressed in the form

g(x) = LCM{φ_b(x), φ_{b+1}(x), …, φ_{b+2t_d−1}(x)}    (6.1)

where LCM denotes the least common multiple. The length of the code is

n = LCM{n_b, n_{b+1}, …, n_{b+2t_d−1}}    (6.2)

The degree of φ_i(x) is s or less, and the degree of g(x) is, therefore, at most equal to 2st_d. The codewords generated by a binary BCH code have symbols from the binary field GF(2). In a nonbinary BCH code, the codewords have symbols from GF(q). These codes are called q-ary BCH codes. The Reed-Solomon codes, a very important subclass of q-ary BCH codes, can be obtained by setting s = 1, b = 1, and q = p^m, where p is some prime. Let α be a primitive element in GF(p^m). A primitive Reed-Solomon code with symbols from GF(p^m) has the following parameters:

Block length: n = p^m − 1
Number of check digits: c = (n − k) = 2t_d
Minimum distance: d_min = 2t_d + 1
An important property of any Reed-Solomon code is that the true minimum Hamming distance is always equal to the designed distance. This is shown as follows. From the Singleton bound (Theorem 3.2), the minimum distance of any (n, k) linear code satisfies

d_min ≤ n − k + 1    (6.3)

For BCH codes, we have

d_min ≥ 2t_d + 1    (6.4)

For Reed-Solomon codes, n − k = 2t_d and

d_min ≥ n − k + 1    (6.5)

Therefore, d_min = n − k + 1. The minimum Hamming distance of a Reed-Solomon code is exactly equal to the designed distance, i.e., d_min = 2t_d + 1, and the error-correcting power t' of a Reed-Solomon code is equal to t_d. This tells us that for a fixed (n, k), no code can have a larger minimum distance than a Reed-Solomon code. A Reed-Solomon code is therefore a maximum-distance code. Let α^i, a nonzero element in GF(p^m), be a root of the minimal polynomial φ_i(x) over GF(p^m) and n_i be the order of α^i, for i = 1, 2, …, 2t'. The generator polynomial of a primitive Reed-Solomon code is

g(x) = (x − α)(x − α²) ⋯ (x − α^{2t'})    (6.6)
g(x) has α, α², …, α^{2t'} as all its roots in GF(p^m). The degree of φ_i(x) is 1, and the degree of g(x) is, therefore, at most equal to 2t'. g(x) can also be expressed in polynomial form with dummy variable x and is given by

g(x) = x^{n−k} + g_{n−k−1}x^{n−k−1} + g_{n−k−2}x^{n−k−2} + … + g₁x + g₀    (6.7)

where the g_i are elements from GF(p^m), 0 ≤ i ≤ (n − k − 1). It can be seen that the coefficients of g(x) are in the same field as the roots of g(x). Figure 6.1 shows the general encoder circuit of an (n, k) Reed-Solomon code with symbols over GF(p^m). In practice, Reed-Solomon codes with symbols over GF(q = 2^m) are of greatest interest. Now, each 2^m-ary symbol can be expressed as an m-tuple over GF(2), and the Galois field circuit elements of Figure 6.1 can be realized by binary logic elements and delay flip-flops. In what follows we consider Reed-Solomon codes with symbols from GF(2^m).

Example 6.1 Let α be a root of the primitive polynomial p(x) = x⁴ + x + 1 over GF(2). The order of the element α in GF(2⁴) is 15, and α is a primitive element in GF(2⁴) = {0, 1, α, α², …, α¹⁴}. The elements of GF(2⁴) generated by the primitive polynomial p(x) = x⁴ + x + 1 over GF(2) are shown in Table 6.1. The polynomial representation of the element α^i, i = 0, 1, …, 2^m − 2, is expressed as α^i = a_{m−1}α^{m−1} + a_{m−2}α^{m−2} + … + a₀ with binary coefficients, and the coefficients of the polynomial representation of α^i are expressed as a binary m-tuple [a₀ a₁ … a_{m−1}]. To generate a double-error-correcting, primitive Reed-
Figure 6.1 (n, k) Reed-Solomon encoder.
Table 6.1
Elements of GF(2⁴) Generated by the Primitive Polynomial p(x) = x⁴ + x + 1

Element                    4-Tuple [a₀ a₁ a₂ a₃]
0                          (0 0 0 0)
1                          (1 0 0 0)
α                          (0 1 0 0)
α²                         (0 0 1 0)
α³                         (0 0 0 1)
α⁴  = α + 1                (1 1 0 0)
α⁵  = α² + α               (0 1 1 0)
α⁶  = α³ + α²              (0 0 1 1)
α⁷  = α³ + α + 1           (1 1 0 1)
α⁸  = α² + 1               (1 0 1 0)
α⁹  = α³ + α               (0 1 0 1)
α¹⁰ = α² + α + 1           (1 1 1 0)
α¹¹ = α³ + α² + α          (0 1 1 1)
α¹² = α³ + α² + α + 1      (1 1 1 1)
α¹³ = α³ + α² + 1          (1 0 1 1)
α¹⁴ = α³ + 1               (1 0 0 1)
Solomon code of length n = 15, the parameter t' is equal to 2. The generator polynomial g(x) has α, α², α³, and α⁴ (2t' = 4) as roots and

g(x) = (x − α)(x − α²)(x − α³)(x − α⁴)
     = x⁴ + α¹³x³ + α⁶x² + α³x + α¹⁰

Since n = 15 and the degree of g(x) is n − k = 4, the number of information symbols k is 11. g(x) generates a (15, 11) double-error-correcting, primitive Reed-Solomon code. The minimum Hamming distance of the code is 5. Figure 6.2 shows the encoder circuit of the (15, 11) Reed-Solomon code.

Example 6.2 Let α be a root of the primitive polynomial p(x) = x⁴ + x + 1 over GF(2). The order of the element α in GF(2⁴) is 15, and α is a primitive element in GF(2⁴). To generate a triple-error-correcting, primitive Reed-Solomon code of length n = 15, the parameter t' is equal to 3. The generator polynomial g(x) has α, α², α³, α⁴, α⁵, and α⁶ (2t' = 6) as roots and

g(x) = (x − α)(x − α²)(x − α³)(x − α⁴)(x − α⁵)(x − α⁶)
     = x⁶ + α¹⁰x⁵ + α¹⁴x⁴ + α⁴x³ + α⁶x² + α⁹x + α⁶
Figure 6.2 (15, 11) Reed-Solomon encoder.
Since n = 15 and the degree of g(x) is n − k = 6, the number of information symbols k is 9. g(x) generates a (15, 9) triple-error-correcting, primitive Reed-Solomon code. The minimum Hamming distance of the code is 7. It is possible to take a nonprimitive element β in GF(p^m), p a prime, to generate a Reed-Solomon code. Codes generated by nonprimitive elements in GF(p^m) are called nonprimitive Reed-Solomon codes. The generator polynomial is
g(x) = (x − β)(x − β²) ⋯ (x − β^{2t'})    (6.8)

and the codeword length n is equal to the order of the field element β. In practice, nonprimitive Reed-Solomon codes are rarely used because the distance of a nonprimitive Reed-Solomon code cannot be increased, and codes of length other than p^m − 1 can also be obtained by shortening a primitive Reed-Solomon code.
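The generator-polynomial construction of (6.6) can be sketched over GF(2⁴). This is a sketch of ours (names and representation are assumptions); as a check, it reproduces the g(x) polynomials of Examples 6.1 and 6.2.

```python
# GF(2^4) tables for p(x) = x^4 + x + 1 (alpha = 2).
EXP = [1] * 30
for i in range(1, 30):
    v = EXP[i - 1] << 1
    if v & 0x10:
        v ^= 0x13                     # reduce modulo x^4 + x + 1
    EXP[i] = v
LOG = {EXP[i]: i for i in range(15)}

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def rs_generator(two_t):
    """g(x) = (x - alpha)(x - alpha^2)...(x - alpha^{2t'}),
    coefficients lowest degree first (minus equals plus in GF(2^m))."""
    g = [1]
    for i in range(1, two_t + 1):
        root = EXP[i]
        ng = [0] + g                  # multiply g(x) by x ...
        for j, c in enumerate(g):     # ... and add alpha^i * g(x)
            ng[j] ^= gmul(root, c)
        g = ng
    return g
```

With 2t' = 4 the result is x⁴ + α¹³x³ + α⁶x² + α³x + α¹⁰ (the (15, 11) code of Example 6.1); with 2t' = 6 it is x⁶ + α¹⁰x⁵ + α¹⁴x⁴ + α⁴x³ + α⁶x² + α⁹x + α⁶ (the (15, 9) code of Example 6.2).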
6.3 Decoding of Reed-Solomon Codes

6.3.1 Berlekamp-Massey Algorithm

Decoding of a primitive or nonprimitive Reed-Solomon code can be accomplished by the Berlekamp-Massey iterative algorithm with some extra calculations. Let α be an element over GF(q = p^m), p some prime. Also, let the code polynomial v(x) = v_{n−1}x^{n−1} + v_{n−2}x^{n−2} + … + v₁x + v₀, the error polynomial e(x) = e_{n−1}x^{n−1} + e_{n−2}x^{n−2} + … + e₁x + e₀, and the received polynomial r(x) = r_{n−1}x^{n−1} + r_{n−2}x^{n−2} + … + r₁x + r₀. Then
r(x) = v(x) + e(x)    (6.9)

As described in Section 5.5, the syndrome vector and the infinite-degree syndrome polynomial are defined as

S = [S₁ S₂ … S_{2t'}]    (6.10)

and

S(x) = … + S_{2t'}x^{2t'} + S_{2t'−1}x^{2t'−1} + … + S₂x² + S₁x    (6.11)

respectively, where the 2t' known syndrome components are

S_i = r(α^i)    (6.12)
    = r_{n−1}(α^i)^{n−1} + r_{n−2}(α^i)^{n−2} + … + r₁(α^i) + r₀    (6.13)
    = e(α^i)    (6.14)

or

S_i = e_{n−1}(α^i)^{n−1} + e_{n−2}(α^i)^{n−2} + … + e₁(α^i) + e₀

for 1 ≤ i ≤ 2t'. The error-locator polynomial is

Λ(x) = Λ_ν x^ν + Λ_{ν−1}x^{ν−1} + … + Λ₁x + 1    (6.15)
     = (1 − α^{j₁}x)(1 − α^{j₂}x) ⋯ (1 − α^{j_ν}x)    (6.16)

where j₁, j₂, …, j_ν indicate the locations of the errors in the error pattern

e(x) = e_{j_ν}x^{j_ν} + … + e_{j₂}x^{j₂} + e_{j₁}x^{j₁}    (6.17)

The coefficients in e(x) represent the error values and are related to the known syndrome components by the following equation:

S_i = Σ_{l=1}^{ν} e_{j_l}(α^{j_l})^i    (6.18)

for 1 ≤ i ≤ 2t'. Given the 2t' known syndrome coefficients of S(x), the decoding problem becomes one of finding an error-locator polynomial Λ(x) of degree ν ≤ t' that satisfies the following key equation
Ω(x) ≡ Λ(x)[S(x) + 1] modulo x^{2t'+1}    (6.19)
where Ω(x) is the error-evaluator polynomial of degree less than or equal to ν. The Berlekamp-Massey algorithm is used to determine the error-locator polynomial Λ(x) from the known syndrome coefficients of S(x), and the Chien search is used to find the roots of Λ(x). Once the roots of Λ(x) are found, we can determine j₁, j₂, …, j_ν to locate the errors. Up to this point, this is all we need to compute the error vector e(x) with unity coefficients for binary codes. For nonbinary codes, we need to determine the magnitudes of the errors. We can find the error values by solving (6.18). Alternatively, we can use the error-evaluator polynomial Ω(x) and the formal derivative of Λ(x) to determine the magnitudes of the errors [2, 7]. The technique is more efficient than solving (6.18) when ν becomes very large.

Definition 6.1. Let f(x) = f_m x^m + f_{m−1}x^{m−1} + … + f₁x + f₀ be a polynomial of degree m with coefficients in GF(q). The formal derivative is defined as f'(x) = m f_m x^{m−1} + (m − 1)f_{m−1}x^{m−2} + … + 3f₃x² + 2f₂x + f₁. The main properties of the formal derivative are:

1. If f²(x) divides a(x), then f(x) divides a'(x).
2. [f(x)a(x)]' = f'(x)a(x) + f(x)a'(x).
3. If f(x) ∈ GF(2^m)[x], the set of all polynomials in x with coefficients from GF(2^m), then f'(x) has no odd-powered terms.
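Property 3 follows directly when the formal derivative is written in code for GF(2^m) coefficients: the integer factor i in (f_i x^i)' means f_i added to itself i times, which vanishes for even i in characteristic 2. An illustrative sketch of ours (coefficient lists, lowest degree first; field elements as integers):

```python
def formal_derivative(f):
    """Formal derivative of a polynomial over GF(2^m).

    (f_i x^i)' = i f_i x^{i-1}; over characteristic 2 the factor i
    reduces to i mod 2, so only odd-degree terms survive."""
    return [f[i] if i % 2 == 1 else 0 for i in range(1, len(f))]
```

For the locator Λ(x) = α⁶x³ + α⁴x² + α⁷x + 1 of Example 6.3 below, this gives Λ'(x) = α⁶x² + α⁷, with no odd-powered terms, as property 3 requires.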
The error magnitudes can be computed by utilizing the error-evaluator polynomial Ω(x) and the formal derivative of the error-locator polynomial Λ(x). The following theorem states how.

Theorem 6.1 [2, 7]. The error magnitudes are given by

e_{j_i} = −α^{j_i} Ω[(α^{j_i})^{−1}] / Λ'[(α^{j_i})^{−1}]    (6.20)

for 0 ≤ j_i < n and 1 ≤ i ≤ ν.

Proof. The error-locator polynomial is

Λ(x) = Π_{l=1}^{ν} (1 − α^{j_l}x)

and the formal derivative of Λ(x) is
Λ'(x) = [Π_{l=1}^{ν} (1 − α^{j_l}x)]'

By property 2, we get

Λ'(x) = Σ_{l=1}^{ν} {−α^{j_l} Π_{l'≠l} (1 − α^{j_{l'}}x)}

Evaluating Λ'(x) at an actual error location x = (α^{j_i})^{−1}, we get

Λ'[(α^{j_i})^{−1}] = Σ_{l=1}^{ν} {−α^{j_l} Π_{l'≠l} [1 − α^{j_{l'}}(α^{j_i})^{−1}]}

If we expand the above expression, all the products in the above expression have a zero term except for the case l = i. We can write

Λ'[(α^{j_i})^{−1}] = −α^{j_i} Π_{l'≠i} [1 − α^{j_{l'}}(α^{j_i})^{−1}]

The infinite-degree syndrome polynomial is

S(x) = Σ_{i'=1}^{∞} S_{i'}x^{i'}

and

S_{i'} = r(α^{i'}) = Σ_{l=1}^{ν} e_{j_l}(α^{j_l})^{i'}

where we only know the first 2t' coefficients. Then S(x) can be reduced to a simple rational expression:

S(x) = Σ_{i'=1}^{∞} [Σ_{l=1}^{ν} e_{j_l}(α^{j_l})^{i'}] x^{i'}
     = Σ_{l=1}^{ν} e_{j_l} [Σ_{i'=1}^{∞} (α^{j_l}x)^{i'}]
     = Σ_{l=1}^{ν} e_{j_l} α^{j_l}x / (1 − α^{j_l}x)
The error-evaluator polynomial is

Ω(x) ≡ Λ(x)[S(x) + 1] modulo x^{2t'+1}
     ≡ {[Π_{l=1}^{ν} (1 − α^{j_l}x)][Σ_{l=1}^{ν} e_{j_l}α^{j_l}x / (1 − α^{j_l}x)] + Λ(x)} modulo x^{2t'+1}
     = Σ_{l=1}^{ν} [e_{j_l}α^{j_l}x Π_{l'≠l} (1 − α^{j_{l'}}x)] + Λ(x)

Evaluating Ω(x) at an actual error location x = (α^{j_i})^{−1}, we have Λ[(α^{j_i})^{−1}] = 0 and

Ω[(α^{j_i})^{−1}] = Σ_{l=1}^{ν} {e_{j_l}α^{j_l}(α^{j_i})^{−1} Π_{l'≠l} [1 − α^{j_{l'}}(α^{j_i})^{−1}]}

Again, if we expand the above expression, all the products in the above expression have a zero term except for the case l = i. We can write

Ω[(α^{j_i})^{−1}] = e_{j_i} Π_{l'≠i} [1 − α^{j_{l'}}(α^{j_i})^{−1}]

Hence,

e_{j_i} = Ω[(α^{j_i})^{−1}] / Π_{l'≠i} [1 − α^{j_{l'}}(α^{j_i})^{−1}]

and

e_{j_i} = −α^{j_i} Ω[(α^{j_i})^{−1}] / Λ'[(α^{j_i})^{−1}]
The decoding procedure for Reed-Solomon codes is as follows:

1. Compute the syndrome components S₁, S₂, …, S_{2t'} from the received polynomial r(x).
2. Use the syndrome components S₁, S₂, …, S_{2t'} and apply the Berlekamp-Massey algorithm to compute the error-locator polynomial Λ(x).
3. Compute the error-evaluator polynomial Ω(x), where Ω(x) ≡ Λ(x)[S(x) + 1] modulo x^{2t'+1}.
4. Find the roots of Λ(x) and locate the error positions.
5. Determine the magnitudes of the errors and store them in the error polynomial e(x).
6. Subtract the error polynomial e(x) from the received polynomial r(x) for correction.

Example 6.3 For t' = 3, p(x) = x⁴ + x + 1 with α as a primitive root, the generator polynomial g(x) of a (15, 9) triple-error-correcting, primitive Reed-Solomon code is x⁶ + α¹⁰x⁵ + α¹⁴x⁴ + α⁴x³ + α⁶x² + α⁹x + α⁶. Assume v(x) = 0 and r(x) = α⁴x¹² + α³x⁶ + α⁷x³ and use the Berlekamp-Massey algorithm to decode. The syndrome components are S_i = r(α^i). Therefore,

S₁ = r(x = α) = α⁴(α¹²) + α³(α⁶) + α⁷(α³)
   = α + α⁹ + α¹⁰
   = α¹²
Similarly, S2 = 1, S3 = α^14, S4 = α^10, S5 = 0, and S6 = α^12. Using the Berlekamp-Massey iterative algorithm as discussed in Section 5.5.1 of Chapter 5, the decoding steps are shown in Table 6.2. From Table 6.2, Λ(x) = Λ^(6)(x) and

    Λ(x) = α^6x^3 + α^4x^2 + α^7x + 1    (6.21)

Then

    Λ(x)[S(x) + 1] = α^3x^9 + αx^8 + x^7 + α^6x^3 + x^2 + α^2x + 1

and

    Ω(x) ≡ Λ(x)[S(x) + 1] modulo-x^(2t'+1) = α^6x^3 + x^2 + α^2x + 1

Table 6.2
Berlekamp-Massey Iterative Algorithm Decoding Steps

μ    d_μ     B(x)                         Λ^(μ)(x)                          j    L_μ
0    —       x                            1                                 0    0
1    α^12    α^3x                         α^12x + 1                         1    1
2    α^7     α^3x^2                       α^3x + 1                          1    1
3    1       α^3x^2 + x                   α^3x^2 + α^3x + 1                 2    2
4    α^7     α^3x^3 + x^2                 α^12x^2 + α^4x + 1                2    2
5    α^10    α^2x^3 + α^9x^2 + α^5x       α^13x^3 + α^3x^2 + α^4x + 1       3    3
6    α^13    α^2x^4 + α^9x^3 + α^5x^2     α^6x^3 + α^4x^2 + α^7x + 1        3    3
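The iterations in Table 6.2 can be checked programmatically. The following Python sketch is ours, not the book's (helper names such as `gf_mul` and `berlekamp_massey` are assumptions): it implements the Berlekamp-Massey update of Section 5.5.1 over GF(16) generated by p(x) = x^4 + x + 1, and recovers the error-locator polynomial of (6.21) from the six syndromes of Example 6.3.

```python
# Sketch (ours, not the book's): Berlekamp-Massey over GF(16), p(x) = x^4 + x + 1.
EXP = [0] * 15
x = 1
for i in range(15):
    EXP[i] = x
    x <<= 1
    if x & 0x10:
        x ^= 0x13                  # reduce modulo x^4 + x + 1
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def gf_inv(a):
    return EXP[(15 - LOG[a]) % 15]

def add_scaled(p, q, d):
    """p(x) + d*q(x); addition is XOR in GF(2^m)."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a ^ gf_mul(d, b) for a, b in zip(p, q)]

def berlekamp_massey(S):
    """S[i] holds S_{i+1}; returns Lambda(x) coefficients, constant term first."""
    lam, B, L = [1], [0, 1], 0     # Lambda(x) = 1, B(x) = x
    for mu in range(1, len(S) + 1):
        # discrepancy d_mu = S_mu + Lam_1 S_{mu-1} + ... + Lam_L S_{mu-L}
        d = S[mu - 1]
        for i in range(1, L + 1):
            if i < len(lam):
                d ^= gf_mul(lam[i], S[mu - 1 - i])
        if d == 0:
            B = [0] + B            # B(x) <- x B(x)
        elif 2 * L <= mu - 1:
            old = lam[:]
            lam = add_scaled(lam, B, d)
            B = [0] + [gf_mul(gf_inv(d), c) for c in old]   # B(x) <- x Lambda(x)/d
            L = mu - L
        else:
            lam = add_scaled(lam, B, d)
            B = [0] + B
    while len(lam) > 1 and lam[-1] == 0:
        lam.pop()
    return lam

# Syndromes of Example 6.3: S1..S6 = alpha^12, 1, alpha^14, alpha^10, 0, alpha^12
lam = berlekamp_massey([EXP[12], 1, EXP[14], EXP[10], 0, EXP[12]])
# lam should equal [1, alpha^7, alpha^4, alpha^6], i.e., Lambda(x) of (6.21)
```

Field elements are stored as 4-bit integers via log/antilog tables, which keeps each multiplication to one table lookup.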
By trial and error, α^3, α^9, and α^12 are found to be roots of Λ(x) such that Λ(x) = 0. Their inverses are α^12, α^6, and α^3. Therefore,

    e(x) = e_12 x^12 + e_6 x^6 + e_3 x^3    (6.22)

Use (6.20) (the minus sign may be dropped, since subtraction and addition coincide in GF(2^4)):

    e_3 = -α^3 Ω[(α^3)^(-1)] / Λ'[(α^3)^(-1)]
        = α^3 { α^6 · [(α^3)^(-1)]^3 + [(α^3)^(-1)]^2 + α^2 · (α^3)^(-1) + 1 } / { α^6 · [(α^3)^(-1)]^2 + α^7 }
        = α^3 { α^6 · α^6 + α^9 + α^2 · α^12 + 1 } / ( α^6 · α^9 + α^7 )
        = α^3 { α^12 + α^9 + α^14 + 1 } / ( 1 + α^7 )
        = α^3 · α^13 / α^9
        = α^7

Similarly, e_6 = α^3 and e_12 = α^4. Substituting e_3, e_6, and e_12 into (6.22), we have

    e(x) = α^4 x^12 + α^3 x^6 + α^7 x^3

The decoded vector v(x) = r(x) - e(x) = 0. Triple errors are corrected.
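The syndrome values used in Example 6.3 are easy to verify numerically. The sketch below is ours, not the book's (the helper names are assumptions): it builds GF(16) log/antilog tables from p(x) = x^4 + x + 1 and evaluates r(x) = α^4x^12 + α^3x^6 + α^7x^3 at α, α^2, ..., α^6 by Horner's rule.

```python
# Sketch (ours, not the book's): syndrome computation for Example 6.3 over GF(16).
EXP = [0] * 15
x = 1
for i in range(15):
    EXP[i] = x
    x <<= 1
    if x & 0x10:
        x ^= 0x13                  # p(x) = x^4 + x + 1
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def poly_eval(coeffs, pt):
    """Horner's rule; coeffs[j] is the coefficient of x^j."""
    acc = 0
    for c in reversed(coeffs):
        acc = gf_mul(acc, pt) ^ c
    return acc

# r(x) = alpha^4 x^12 + alpha^3 x^6 + alpha^7 x^3
r = [0] * 13
r[12], r[6], r[3] = EXP[4], EXP[3], EXP[7]
syndromes = [poly_eval(r, EXP[i]) for i in range(1, 7)]
# Expected from Example 6.3: S1..S6 = alpha^12, 1, alpha^14, alpha^10, 0, alpha^12
```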
6.3.2 Euclid's Algorithm

For error correction, the decoding steps using the extended Euclidean algorithm are as follows.

1. Compute the coefficients S1, S2, ..., S2t' of the syndrome polynomial S(x) from the received polynomial r(x).
2. Apply the extended Euclidean algorithm with r_0(x) = x^(2t'+1) and r_1(x) = S(x) + 1 to compute the error-locator polynomial Λ(x) and the error-evaluator polynomial Ω(x). The iteration stops when deg r_i(x) ≤ t'.
3. Find the roots of Λ(x) and locate the error positions.
4. Determine the magnitude of the errors and store in the error polynomial e(x).
5. Subtract e(x) from the received polynomial r(x) for correction.

Example 6.4 Consider the (15, 9) triple-error-correcting, primitive Reed-Solomon code with the code polynomial v(x) and the received polynomial r(x) given in Example 6.3. From Example 6.3, we have S1 = α^12, S2 = 1, S3 = α^14, S4 = α^10, S5 = 0, and S6 = α^12. The syndrome polynomial is S(x) = α^12x^6 + α^10x^4 + α^14x^3 + x^2 + α^12x. Using the extended Euclidean algorithm as discussed in Section 5.5.2 of Chapter 5 to compute the error-locator polynomial Λ(x) and the error-evaluator polynomial Ω(x), we obtain Table 6.3. The iteration process stops when deg r_i(x) ≤ (t' = 3).

Table 6.3
Extended Euclidean Algorithm Decoding Steps

i    q_i(x)          r_i(x) = Ω_i(x)                             a_i(x) = P_i(x)      b_i(x) = Λ_i(x)
0    —               x^7                                         1                    0
1    —               S(x) + 1                                    0                    1
2    α^3x            α^13x^5 + α^2x^4 + α^3x^3 + x^2 + α^3x      1                    α^3x
3    α^14x + α^3     α^8x^4 + α^6x^3 + α^13x^2 + α^4x + 1        α^14x + α^3          α^2x^2 + α^6x + 1
4    α^5x + α        α^7x^3 + αx^2 + α^3x + α                    α^4x^2 + α^2x + α    α^7x^3 + α^5x^2 + α^8x + α

From Table 6.3, the error-locator polynomial Λ(x) = b_4(x) = α^7x^3 + α^5x^2 + α^8x + α = α(α^6x^3 + α^4x^2 + α^7x + 1) and the error-evaluator polynomial Ω(x) = r_4(x) = α^7x^3 + αx^2 + α^3x + α = α(α^6x^3 + x^2 + α^2x + 1). By trial and error, α^3, α^9, and α^12 are found to be roots of Λ(x) such that Λ(x) = 0. Their inverses are α^12, α^6, and α^3. Therefore,

    e(x) = e_12 x^12 + e_6 x^6 + e_3 x^3    (6.23)
To determine the error magnitudes, we substitute the syndrome coefficients S1, S2, and S3 and the values j_1, j_2, and j_3 into (6.18). We get

    S1 = α^12 = α^3 e_3 + α^6 e_6 + α^12 e_12
    S2 = 1    = (α^3)^2 e_3 + (α^6)^2 e_6 + (α^12)^2 e_12
    S3 = α^14 = (α^3)^3 e_3 + (α^6)^3 e_6 + (α^12)^3 e_12

Hence, solving gives e_3 = α^7, e_6 = α^3, and e_12 = α^4. Substituting e_3, e_6, and e_12 into (6.23), we have

    e(x) = α^4 x^12 + α^3 x^6 + α^7 x^3

The decoded vector v(x) = r(x) - e(x) = 0. Triple errors are corrected.
6.4 Correction of Errors and Erasures 6.4.1
Berlekamp-Massey Algorithm Decoding
Error-and-erasure correction of Reed-Solomon codes using the BerlekampMassey algorithm can be accomplished as follows. Let a be an element over GF(q = pm), p is some prime. Denote the infinite-degree syndrome polynomial by 5(x)
= ... + 521'x
21'
+ 521'-1 x
21'-1
where the Lt' known syndrome coefficients
+ ... + 5 2x
2
+ 5 1x
(6.24)
Error-Control Block Codes for Communications Engineers
152
Si
= r(a i ) = r n-l ( a i)n-I
(6.25)
I) + ro
( i)n-2
+ . . . + rl (a
( i)n-2
i) + eo + . . . + el a
+ r n- 2 a
(6.26)
or
S,
= e(a =
l
(6.27)
)
i)n-I
en-l ( a
+ en-2 a
(
u: Suppose that the error vector has TJ = v + f nonzero digits at
for 1 ~ i ~ locations it,
ji- . . . ,
i TJ , then
(6.28)
where 0 ~ i, < j i . . . < i TJ < nand 2v + f 5, 2t'. The coefficients in e(x) represent the error values and are related to the known syndrome coefficients by the following equation TJ
s, = L ej,(ai/)i
(6.29)
1=1
for 1 ~ i 5, ir. Also denote the error-locator polynomial by A(x)
=
ll
AlIx + AlI_1X
ll
-
1
+ . . . + A1x +
(6.30)
Define an erasure-locator polynomial (6.31 )
where v and f are the number of errors and erasures in the received polynomial r(x), respectively. The erasure-locator polynomial is defined so that one can compute the syndrome coefficients by substituting arbitrary symbols into the erased positions. The substitution offarbitrary symbols into the erased positions can introduce up to f errors. Then the polynomial 'I'(x)
= A(x)f(x) \Tr
='YTJX =
TJ
\Tf
+'Y1/-1X
(6.32) TJ-l
\Tr
+···+'Ylx+l
(l - aJ1x)(I - a J2x) ... (l - aJqx)
(6.33) (6.34)
TJ
=
IT (l != 1
ai/x)
(6.35)
Reed-Solomon Codes
153
represents the error-and-erasure-Iocator polynomial of degree 'Tf, and the power terms associated with the inverses of the roots of'l'(x) give the locations of errors and erasures. In a similar fashion to the technique described in Section 5.5.1 of Chapter 5 for error-only decoding ofBCH codes, it is easily verified that the coefficients of 'I'(x) and the known syndrome coefficients are related by the following equation. (6.36) Because 2v +15, 2t', the above equation always specifies known syndrome coefficients 1 5, i 5, 2t', if l' is in the interval 1 5, l' 5, u, Letting j = l' + 'Tf, we obtain
s..
(6.37) for 'Tf + 1 5, j 5, 'Tf + u, Again, the functions defined by (6.37) are Newton's identities. Rearranging (6.37), we get 5 j = -'I'15j - 1 - 'l'2 5j-2 - ... - '1'7/-1 5j_7/+1 - 'I' 7/5j_7/
(6.38)
7/
= - I'I'/5j-1
(6.39)
1=1
Define
flex) = 'I'(x)[5(x)
(6.40)
+ 1]
where the error-evaluator polynomial "'() x
.1£
5 2t' +7/ = ,It '1:'7/ 2t'X
+
('It 5 ,Tr 5 ) 2t' +7/-1 + ... '1:'1]-1 Tt' + '1:'7/ 2t'-1 x
,
+ (52t' + 'l'I S2t'-1 + 'l'2 52t'-2 + ... + 'l'7/ 52t'-7/)X + 'l'1]5 1)x1]+1
+ (57/+1 + '1'1 57/ + '1'2517-1 +
+ (52 + '1'151 + 'l'2)X
2
+ ...
(6.41)
+ 'l'1])x7/ + ...
+ (51] + '1'1517-1 + '1' 2517-2 +
+ (53 + '1'\52 + '1'251 + 'l'3)X
2t'
3
+ (51 + 'l'1)x+ 1
Assuming v errors and I additional errors added in the erased positions, then the degree of flex) is less than or equal to 1]. This means that the
154
Error-Control Block Codes for Communications Engineers
coefficients of D,(x) from x 1J + I through x2t' define exactly the same set of equations as (6.36). These coefficients provide a set of 2t' - v - f linear equations in YJ unknown coefficients of 'I'(x) and can be solved if 2v + f ~ 2t'. Given the 2t' known syndrome coefficients of S(x), the decoding problem becomes one of finding an error-and-erasure-locator polynomial 'I'(x) of degree YJ thar satisfies rhe following key equation D,(x) == 'I'(x)[S(x) + 1] rnodulo-x
2t' 1 +
(6.42)
Since F(x) is a factor of'l'(x), f(x) is used to initialize the BerlekarnpMassey algorithm at step J-t = f = degf(x) together with the known syndrome coefficients to compute the error-and-erasure-locator polynomial 'I'(x). To compute the syndrome coefficients, we assign arbitrary values to the symbols at the erasure positions. In practice, zeros are often assigned to the erasure positions. This reduces the number of computations in the syndrome calculations. Since arbitrary assignment of symbols into the erased positions may result in incorrect guessing of those symbols, we need to estimate the values associated with the erasure positions. In a similar fashion to the technique described in Section 6.3.1 for erroronly decoding of Reed-Solomon codes, the error-and-erasure values may be calculated from
(6.43)
where 0 ~ ji < nand 1 ~ i ~ YJ, !l(x) is the error-evaluator polynomial, and 'I"(x) is the formal derivative of the error-and-erasure-locator polynomial 'I'(x). The error-evaluator polynomial is computed from (6.42). For error-and-erasure correction, the decoding steps can be summarized as follows. 1. Compute the erasure-locator polynomial f(x).
2. Modify the received polynomial r(x) by substituting arbitrary received symbols in the erasure positions in r(x) and compute the syndrome coefficients 5 I, 52, ... , S2t'. 3. Replace the error-locator polynomial A(x) shown in Figure 5.2 of Chapter 5 by the error-and-erasure-locator polynomial 'I'(x). 4. Set initial conditions J-t = f, B(x) = xf'(»), 'I'(,u)(x) = I'(x), j = 0, and L p = f
Reed-Solomon Codes
155
5. Use the syndrome coefficients 51' 52, ... , 52t' and apply the Berlekamp-Massey algorithm to compute the error-and-erasure-locator polynomial 'I'(x). 6. Compute the error-evaluator polynomial !1(x), where !1(x) == 'I'(x)[5(x) + 1]modulo_x 2t ' + I . 7. Find the roots of'l'(x) and locate the error-and-erasure positions. 8. Determine the magnitude of the errors and erasures associated with the error-and-erasure positions and store in e(x). 9. Subtract e(x) from the modified received polynomial for correction. 4
Example 6.5 For t' = 3, P(x) = x + x + 1 with a as a primitive root, the generator polynomial g(x) of a OS, 9) triple-error-correcting nrirnitive . 6 10 5 14 4 4 3 6 2 '9 6 Reed-Solomon code IS x + a x + a x + a x + a x + a x + a . 11 10 7 3 2 Assume v(x) = 0 and r(x) = a x + a x + x + x, where r1 = 1 and r: = 1 denotes erasures. Use the Berlekamp-Massey algorithm to compute the error-and-erasure-locator polynomial and decode. 2x) 3x2 The erasure-locator polynomial f(x) = 0 - ax)(l - a = a + a\ + 1. The syndrome vector
S
=
[51
52t']
52'"
where
Therefore, 51 = a
=a =a 3
11(a 10 6
+a
) +
10
=
2
+ a
2
+a +a
13
3
Similarly, 52 = a , 53 = a , 54 = a
5(x)
a 7(a 3 ) + a
a 9x6 + a 3x 5 + a
14 ,55
14x4
= a 3, an d
+ a
S6
= a 9.
3x3 + a 3x2 + a 13x
Using the Berlekarnp-Massy algorithm we obtain Table 6.4. The iteration process stops when J-L = Zt' = 6. From Table 6.4, the error. 4 14 3 2 14 h and-erasure-locator polynomial 'I'(x) = ax + a x + x + a x + 1. T e 4 21 3 error-evaluator polynomial !1(x) == 'I'(x) [5(x) + 1] modulo-x ' + 1 = a x + forrnal deri ',Tr a'3x 3 + a 5x 2 + a 2x + 1. The formal envanve 't' (x) = a 14x 2 + a 14.
Error-Control Block Codes for Communications Engineers
156
Table 6.4 Berlekamp-Massey Iterative Algorithm Decoding Steps W(pl(x)
B(x)
dp
JL
j
Lp
0
2 3 3 4 4
0 1 2 3 4 5 6
f(x)
xf(x) a 12
6x 3
8x 2
3x
14x+
x +a +a 1 a 3x 2 + a 5x + 1 ax 4 + a 3x 3 + a 8x 2 + a 5x + 1 ax4 + a 14x 3 + x 2 + a 14x + 1
a +a +a a 6x 4 + a 8x 3 + a 3x 2 a 8x 3 + a 10x2 + a 5x a 8x 4 + a lOx3 + a 5x 2
a9 a 10 a7
6x 2
3
1
1
2 2
. By tnal and error method, a 5,a 12,a 13,and a 14 are roots 0 f the errorand-erasure-locator polynomial 'I'(x) such that 'I'(x) = O. Their inverses are 3 a 2,an d a. T hererore, C a 10,a,
(6.44)
Use (6.43),
r
r
r
r
_a 3{a3 • [(a 3)4 l + a 3 . [(a 3)3 l + a 5 • [(a 3)2 l + a 2 • (a 3 1 + I}
r
l + a 14
2
. a
a 14 • [(a 3)2
a 3{a 3 . a- 12 + a 3 . a -9 + a 5 . a -6 + a 2 . a -3 + I} a 3
a {a
3
3
=
+ a
3
. a
. a
6
-6
+ a
5
9
14
+ a . a + a 14 9 14 a . a + a 14 14 3{a6 9 a + a + a + a + I} 8 14 a +a 3 10 a 'a
a
. a
14
12
+ l}
6
a7 Similarly, el = I, «: = I, and
e(x) = a
11 10
x
7 3
+ a x
+x
2
+ x
Reed-Solomon Codes
The decoded vector v(x) errors are corrected.
= r(x) - e(x) =
157
O. Double erasures and double
6.4.2 Euclid's Algorithm Decoding In applying the extended Euclidean algorithm for error-and-erasure correction of Reed-Solomon codes, the Forney's approach should be used [8]. Denote the infinite-degree syndrome polynomial by S(x)
= ... +
S2t'X
ir
+ S2t'-IX
2t' -I
2
+ ... + S2X
+ SIX
(6.45)
the error-locator polynomial by (6.46)
and the erasure-locator polynomial by (6.47)
where v and fare the number of errors and erasures in the received polynomial r(x), respectively. Then the polynomial 'I'(x)
= A(x)f(x)
(6.48)
represents the error-and-erasure-locator polynomial. For error-and-erasure decoding, the syndrome polynomial should be modified accordingly [8]. Let 2(x) + 1 == f(x)[S(x) + 1] modulo-x2t' + 1
(6.49)
where the modified syndrome polynomial is - ( ) =02t'X 2t' +02t'-lx 2t'-1 + . . . +02 - x 2 +Olx ox
(6.50)
Define O(x) = A(x)f(x)[S(x) + 1]
(6.51)
= A(x)[2(x)
(6.52)
Then O(x)
+
1]
158
Error-Control Block Codes for Communications Engineers
where the error-evaluator polynomial
(6.53)
Assuming 1J errors and f additional errors added in the erased positions, then the degree of O(x) is less than or equal to 1J + f This means that the coefficients of O(x) from x/l+f+1 through xU provide a set of 2t' - 1J - f linear equations in 1J unknown coefficients of A(x) and can be solved if 21J + f'5, 2t'. Given the 2t' known syndrome coefficients 51, 52, ... , 5u of 5(x) and the coefficients of nx), the decoding problem becomes one of finding an error-locator polynomial A(x) of degree 1J that satisfies the following key equation
= A(x)[2(x)
O(x)
+
1] modulo-x 2t ' + I
(6.54)
The desired solution is the polynomial A(x) which has degree for even f
t' - [
1J '5, {
t' _
~2
+
! 2
(6.55)
for oddf
and can be shown as follows. (6.54) can be written as O(x)
= p(x)x 2t'+1
+ A(x)[2(x) +
1]
(6.56)
2t
If we let ro(x) = x ' + I and rl (x) = 2(x) + 1, and use the extended ' hm to compute the greatest common diivrsor " Euc II'dean aI gont 0 f x 2t' + 1 an d 2(x) + 1, the partial results at the i-th step are (6.57) {ri(x), ai(x), bi(x)} is a set of solutions to the above equation at step i. By
property 3 of the extended Euclidean algorithm, we have
Reed-Solomon Codes
159
(6.58)
and deg bi(X) + deg ri-l (x)
=
2t' + 1
(6.59)
the case of I even. If deg ri-l (x) > t' + (112) and deg ri(x) :S t' + (112), then we are assured that deg bi(x) :S t' - (112) and deg b i+ 1(x) > t' - (112), respectively. The case for lodd follows in a similar manner. In this case, if deg ri-l (x) > t' + (1- 1)/2 and deg ri(x) :S t' + (1- 1)/2, then we are assured that deg bi(x):S t' - (I - 1)/2 and deg b i + 1(X) > t' - 'J': 1)/2, respectively. Letting ni(x) = ri(x) and Ai(x) = bi(x), this implies that the partial results at the z-th step provide the only solution to the key equation. The only solution that guarantees the desired degree of n(x) and A(x) is obtained at the first i. Thus, we simply apply the extended Euclidean algorithm to x 21' + 1 and 2(x) + 1, until Consider
t
deg ri(X) :S
{
t
l
+
I
1;-
for even I 1
+--
2
for oddf
The modified syndrome coefficients are used to compute the error-locator polynomial A(x) and the error-evaluator polynomial n(x). We can then compute the error-and-erasure polynomial 'I'(x) , where 'I'(x) = A(x)f(x). For error-and-erasure decoding, the decoding steps can be summarized as follows. I. Compute the erasure-locator polynomial r(x).
2. Modify the received polynomial r(x) by substituting arbitrary received symbols in the erasure positions in r(x) and compute the syndrome coefficients 5 I, 52, ... , S2t'.
3. Compute f(x)[S(x) + I] and the modified syndrome polynomial 21 S(x), where 2(x) == f(x)[S(x) + 1] - 1 modulo-x ' + I. 21
4. Apply the extended Euclidean algorithm with ro(x) = x ' + I and rl (x) = 2(x) + 1 to compute the error-locator polynomial A(x) and the error-evaluator polynomial n(x). The iteration stopS when
Error-Control Block Codes for Communications Engineers
160
t' + [
deg ri(X)
s
I
for even!
, ;_ 1
t +--
5. Find the roots of'l'(x) positions.
=
2
for odd!
A(x)nx) and locate the error-and-erasure
6. Determine the magnitude of the errors and erasures associated with the error-and-erasure positions and store in e(x).
7. Subtract e(x) from the modified received polynomial for correction. Example 6. 6 Consider the (15, 9) triple-error-correcting, prImitive Reed-Solomon code with the code polynomial v(x) and the received polynomial r(x) given in Example 6.5. From Example 6.5, we have I' (x) = (1 - ax)(1 - a 2x) = a 3x 2 + a 5x + 1, 13 3 51 = a , 52 = a , 14, 3, 3, 9. 53 = a 54 = a 55 = a and 56 = a The syndrome polynomial is 5 (x) = a 9x 6 + a 3x 5 + a 14x 4 + a 3x 3 + a 3x 2 + a 13x. T hus, nx)[5(x) + 1]
=
a
12 8
8 7
7 6
x + a x + a x 7x+ 3x 2 + a 1 + a
+ a
10 5
x
+ a
12 3
x
and 2(x) + 1
= f(x)[5(x)
2t
+ 1]modulo-x ' +1 7 6 10 5 12 3 3 2 7 =ax +a x +a x +ax +ax+
Using the extended Euclidean algorithm to compute the error-locator polynomial A(x) and the error-evaluator polynomial fi(x), we obtain Table
6.5. The iteration process stops when deg ri(x)
s
(t'
+ { =
4).
From Table
6.5, the error-locator polynomial A(x) = b3 (x ) = a 9x 2 + a 8x + II
13 2
12
.
all
=
a (a x + a x + 1) and the error-evaluator polynomial fi(x) = r3(x) = l 4x3 l 3x 2 l 4x4 2 2x l l(a 3x4 3x3 + a + ax + a + all = a + a + a 5x + a + a l 2x4 1). The error-and-erasure-locator polynomial 'I'(x) = A(x)nx) = a + 10 3 II 2 10 II II 4 14 3 2 14 a x + a x + a x + a = a (ax + a x + x + a x + 1). The C . ,'r'() ' rorma I deri errvatrve 't' x = a 10x 2 + a 10 = a II( a 14x 2 + a (4) . Th e soI utions fi(x), 'I'(x), and 'I"(x) are identical (within a constant factor) to those found
Reed-Solomon Codes
161
Table 6.5 Extended Euclidean Algorithm Decoding Steps qi(X)
'i(X)
=0i(X)
Bi(X)
=Pi(X)
b i(X)
=Ai(X)
--_.. _ - - - - - - - - -
0 1 2
a 8x + a
3
aX+ a
11
x7 1 SIx) + 1 0 7 a 6x5 + a 5x4 + a x 3 + a 3x2 + 1 12x a + all a
14x4
a
+ a 14x3 + ax 2 + a 13x + ax+ a
0 1 a 8x+ a 11 a 9x2 + a 8x + all
11
in Example 6.5 using the Berlekamp-Massey algorithm. Clearly, the error-anderasure locations and values must be identical to these found in Example 6.5 and (6.60)
h el = 1, e2 = 1, e3 = a 7 ,and elO = a 11. were Substituting el' e2, e3' and elO into (6.60), we have e(x) = a
The decoded vector v(x) errors are corrected.
11 10
x
= r(x)
7 3 2 + a x +x +x
- e(x)
= O. Double erasures and double
6.5 Computer Simulation Results The variation of bit error rate with the EblNo ratio of two primitive ReedSolomon codes with coherent BPSK signals and Berlekamp-Massey algorithm decoding for AWGN channels has been measured by computer simulations. The system model is shown in Figure 6.3. Here, E b is the average transmitted bit energy, and N o/2 is the two-sided power spectral density of the noise. The parameters of the codes are given in Table 6.6. Perfect timing and synchronization are assumed. In each test, the average transmitted bit energy was fixed, and the variance of the AWGN was adjusted for a range of average bit error rates. The simulated error performance of the Reed-Solomon codes with Berlekamp-Massey algorithm decoding in AWGN channels is shown in Figure
Error-Control Block Codes for Communications Engineers
162
v
r·
__.__..
_--~
Channel codeword
Transmission path (Analog channel)
Noise
1\
U I
Data sink
R
1.....- - - 1
Estimate of U
L..-_ _--J
Received vector
~
_
---..- -_.._----,
Discrete noisy channel
Figure 6.3 Model of a coded digital communication system.
Table 6.6 Parameters of the Primitive Reed-Solomon Codes (n, k)
dmin
r
g(x)
115, 11) (15,9)
5 7
2 3
x 4 + a 13x 3 + a 6x 2 + a 3x + a lO x 6 + a 10x 5 + a 14x 4 + a 4x 3 -;- a 6x 2 + a 9x+ a 6
6.4. Comparisons are made between the coded and uncoded coherent BPSK systems. At high E b/No ratio, it can be seen that the performance of the coded BPSK systems is better than the uncoded BPSK systems. At a bit error rate 4 of 10- , the (15, 11) double-error-correcting, and (15, 9) triple-error-correcting Reed-Solomon coded BPSK systems give 1.4 and 1.7 dB of coding gains over the uncoded BPSK system, respectively.
Reed-Solomon Codes 0
10
10
10 \I)
li1...
...0
10
Qj .1::
m 10 10
10
163
-1
-2
-3
-4
-5
-6
-2
0
2
4
6
8
10
12
-a- Uncoded coherent BPSK
......
-0-
(15, 11) Reed-Solomon code (15,9) Reed-Solomon code
Figure 6.4 Performance of Reed-Solomon codes with Berlekamp-Massey algorithm decoding in AWGN channels.
References [1]
Reed, I. S., and G. Solomon, "Polynomial Codes over Certain Finite Fields," SIAM Journal on Applied Mathematics, Vol. 8, 1960, pp. 300-304.
[2]
Berlekarnp, E. R., Algebraic Coding Theory, New York: McGraw-Hill, 1968.
[31
Peterson, W. W., and E. L. Weldon, jr., Error Correcting Codes, 2nd ed., Cambridge, MA: MIT Press, 1972.
[4]
MacWilliams, F. J., and N. J. A. Sloane, The Theory ofError Correcting Codes, Amsterdam: North-Holland, 1977. Blahut, R. E., Theory and Practice of Error Control Codes, Boston, MA: Addison-Wesley, 1984. Wicker, S. B., and V. K. Bhargava Eds., Reed-Solomom and Their Applications, IEEE Press. 1994.
[5] [6] [7]
Forney, G. D., Jr.,"On Decoding BCH Codes," 11:.'1:.'£ Trans. on Inftrmation Theory, Vol. IT-II, No.5, October 1965, pp. 393-413.
[8]
Forney, G. D., Concatenated Codes, Cambridge, MA: MIT Press, 1966.
This page intentionally left blank
7 Multilevel Block-Coded Modulation 7.1 Introduction We have seen that in classical digital communication systems with coding, the channel encoder and the modulator design are considered as separate entities. This is shown in Figure 7.1. The modem (modulator-demodulator) transforms the analog channel into a discrete channel and the channel codec (encoder-decoder) corrects errors that may appear on the channel. The ability to detect or correct errors is achieved by adding redundancy to the information bits. Due to channel coding, the effective information rate for a given transmission rate at the encoder output is reduced. Example 7.1 It can be seen from Figure 7.2 that the information rate with channel coding is reduced for the same transmission rate at locations 2'
ISource
u Binary information vector
V
.I Channel I I encoder I
Channel codeword
J
Modulator
•
I
Transmission Noise
path
(Analog channel) 1\
U
ISink IEstimate
I Channell
R
I decoder I Received
of U
vector
Figure 7.1 Model of a coded digital communication system.
165
•
I Demodulator Discrete nosiy channel
Error-Control Block Codes for Communications Engineers
166
1'
Isource
I 2 biVs
2'
~ I No coding 12 biVs 4'
3'
Is ource 1--""'--'1 biVs .
1
819oa' at location 3'
2biVs
Jf---------J o
Signal at locations 1" 2' and 4'
SAME transmission rate
1 'Tlma
-+---+---H~Time
0
1 second
Figure 7.2 Comparison of coded and uncoded signaling schemes.
and 4'. If we increase the information rate with coding from 1 bps to 2 bps, the transmission rate at the encoder output increases to 4 bps. In modem design, we can employ bandwidth efficient multilevel modulation to increase spectral efficiency. For 2 Y-ary modulation, y number of bits are used to specify a signal symbol which, in rum, selects a signal waveform for transmission. If the signaling interval is T seconds, the symbol rate is 1IT symbol/s and the transmission rate at the input of the modulator is yl T bps. Also, the signal bandwidth is inversely proportional to the signaling interval if the carrier signal frequency I. = 1/ T. Example 7.2 Consider Figure 7.3. For BPSK (y
=
1): symbol rate, 1/ T = 2 symbol/so information rate, yl T = 2 bps.
For 4-PSK (y = 2): symbol rate, II T = 1 symbol/so information rate, yl T = 2 bps.
It can be seen from Figure 7.3 that 4-PSK (l symbol/s) requires less bandwidth than BPSK (2 symbolls) for the same information rate of 2 bps. Suppose the following: For BPSK (y = 1): symbol rate, II T = 2 symbolls. information rate, yl T = 2 bps. For 4- PSK (y = 2): symbol rate, 1/ T = 2 symbol/so information rate, yl T = 4 bps.
Multilevel Block-Coded Modulation 1'
SAME information rate
{
2 bit/s
.1
BPSK
3' 2 bit/s
·14-PSK
I
167 2'
•
2 symbol/s 4'
I1
•
symbol/s
Signal at locations l' and 3'
BPSK signal at location 2'
4-PSK signal at location 4'
1 second Figure 7.3 Comparison of uncoded BPSK and 4-PSK modulation schemes at same information rate.
4-PSK (4 bps) now has a higher information rate than BPSK (2 bps) for the same bandwidth requirement. From our early example, we know that channel coding reduces the information rate for the same symbol rate. To compensate for the loss of information rate, two methods can be employed. In the first method, if the type of modulation is fixed and the bandwidth is expandable (i.e., to increase the symbol rate), we simply increase the information rate from the output of the source for the coded system. In the second method, if the bandwidth is fixed, we increase the modulation level of the coded system. That increases the information rate of the coded system. Example 73 From Figure 7.4, it can be seen that the information rate at location 4' with coding is reduced when compared with the uncoded case at location l ' for the same modulation and bandwidth constraints. Example 74 The information rate at location 4' is increased from 1 bps to 2 bps. From Figure 7.5, it can be seen that more bandwidth is needed with coding for the same type of modulation and the same information rate at locations l ' and 4'. Example 75 The information rate of 1 bps is maintained at location 4'.
Error-Control Block Codes for Communications Engineers
168
t : : __ ·--'--,2'
"
--.~ _~~ ~~~~~~ 2 bit/s
J
4' ~ R~~/2 , bit/s
I
.r l 2 bit/s 5'
2 bit/s
Signal at location 4'
~
3'
PI'
BPSK 2 symbol/s BPSK
S' ~
'/3' = , bit/symbol
I 4'/S'=O.5bitlsymbol I
2 symboVs
I'oj
l. 0
Time
I
Signal at locations ",2' and 5'
- I f - - - f - - - - + + Time
BPSK signal at locations 3' and S'
~4--I-H"---T--++-
o
Time
, second
Figure 7.4 Comparison of uncoded and coded BPSK modulation schemes at same symbol rate.
From Figure 7.6, it can be seen that for the same information rate at locations l' and 4' with different modulation level (BPSK and 4-PSK), the uncoded BPSK and the coded 4-PSK require no change in the bandwidth requirement. The following discussion is concerned with bandwidth-constrained channels. We have already seen that if we double the modulation level with coding, the uncoded and the coded modulation schemes have the same information rate and the bandwidth requirement remains unchanged. Bandwidth-efficient modulation with coding has been studied by many researchers [1--4), but trellis-coded-modulation (TCM) was first introduced and documented by Ungerboeck [5] in 1982. The technique combines the design of convolutional codes with modulation and the codes were optimized for the minimum free Euclidean distance [5J. Here, we shall restrict ourselves to a particular kind of bandwidth-efficient coded-modulation design, namely, multilevel block-coded modulation (MBCM). MBCM is a technique for combining block coding and modulation. The technique allows us to construct an L-level code C from L component codes C 1, Cz, ... , CL with minimum Hamming distances of d], d z, ... , d i- respectively, and d] ~ d z ~ ... ~ d i. We now ask ourselves some questions. Can we simply use well-known block codes from a textbook? Will the multilevel block-coded modulation
Multilevel Block-Coded Modulation l'
,-----------,
2'
~~.~?_:~~i~~_rl 2 bitls
~ .-I R~~/2
I
2 bitls
2 bitls 5'
~
4 bit/s
3' BPSKP 2 symbol/s
I 1'/3' = 1 bit/symbol
6' BPSK ~
I 4'/6' = 0.5 bit/symbol I
169
4 symbol/s
Signal at locations 1',2' and 4' BPSK signal at location 3'
Signal at location 5'
BPSK signal at location 6'
o 1 second
Figure 7.5 Comparison of uncoded and coded BPSK modulation schemes at same information rate.
scheme be an optimum scheme? The answer is yes provided we design the coding and the modulation as a single entity. Although the codes were optimized for the minimum Hamming distance, the mapping of the coded sequence onto the 2 L_ary modulation signal symbol sequence can give a good minimum Euclidean distance and the approach gives surprisingly good performance. Multilevel block-coded-modulation schemes are analogous to the TCM schemes in that they use the geometry of the signal constellation to achieve a high minimum Euclidean distance. In 1977, multistage decoding of block codes was first discussed by Imai and Hirakaw [6]. Further work by Cusack [7, 8] and Sayegh [9] led to the introduction of multilevel block-coded modulation for bandwidth constrained channels. Since their publications, active research works have been carried out by others in the development of I-dimensional (I-D) and 2-dimensional (2-D) combined block coding and modulation techniques as an alternative to TCM schemes when the implementation of the latter is not economically feasible at very high transmission rates [10-46]. This chapter discusses the principles of 2-D MBCM with maximum-likelihood and
Error-Control Block Codes for Communications Engineers
170 l'
,----------,
2'
3' BPSK 1 symbol/s
1'/3' = 1 bit/symbol
~ 4-PSK~ I
4'/6'= 1 bit/symbol
~:_ ~_~ ~~~~n?__~ 1 bit/s
1 bit/s
~ R~~~/21 1 bit/s
5'
P
2 bit/s
1 symbol/s
~nme
Signal at locations 1',2' and 4'
01
BPSK signal at location 3'
O~.Tlme
Signal at location 5'
~
4-PSK signal at location 6'
I·Time Time
a •
1 second
.j
Figure 7.6 Comparison of uncoded BPSK and coded 4-PSK modulation schemes at same
information and symbol rates.
sub-optimum decoding. We shall illustrate the design procedure of2-0 MBCM with M-PSK signals. The design procedure is also valid for MBCM schemes with 1-0 or other 2-D signal constellations.
7.2 Encoding and Mapping of Multilevel Block-Coded Modulation In multilevel block-coded modulation, an L-level code C is constructed ftom L component codes C], C z- ... , C L with minimum Hamming distances of d], d z, ... , dL, respectively, and d] ~ d z ~ ... ~ dL' In terms of error correcting capability, C 1 is the most powerful code. For simplicity, we assume that all component codes are binary, linear block codes. The choice of the l-th component code C, depends on the intra-subset distance of a partitioned signal set for 1 ~ I ~ L. In Figure 7.7, the encoding circuit consists of L binary encoders £], £z, ... , £L with rates R] = k] In, Rz = kzln, ... ,
Multilevel Block-Coded Modulation
171
s· Mapper
Figure 7.7 Encoding model of a multilevel block-coded modulation system.
RL = sc!». respectively. k, is the number of information symbols associated with the I-th component encoder and n is the block length of all encoders for 1 $ 1$ L. The information sequence is first partitioned into L components of length k J, k 2 , . . . , k L, and the I-th k ,-component information sequence is encoded by the Z-th encoder E, which generates a codeword of length n in
c,
We now illustrate the signal mapping procedure for a specific example, namely, the M-ary PSK signals. The signal constellation can be partitioned into two subsets and the subsets can themselves be partitioned in the same way. At each partitioning stage, the number of signal points in a set is halved and there is an increase in minimum Euclidean distance in a set. Figures 7.8 and 7.9 show the set-partitioning of 4-PSK and 8-PSK constellations, respectively. It is assumed that all the signal points are equally spaced and lie on a unit circle. The average signal symbol energy is equal to 1. Consider the 8-PSK signals shown in Figure 7.9. The Euclidean distance between two closest signals is 8\ = ""';0.586. Ifwe partition the 8-PSK signals
[00]
[10]
Figure 7.8 Set-partitioning of a 4-PSK constellation.
[01]
[11]
Error-Control Block Codes for Communications Engineers
172
[000]
[100]
[0101
[110]
(001)
[101]
[011)
[111)
Figure 7.9 Set-partitioning of an 8-PSK constellation.
in a natural way, we can see that the Euclidean distance between the two closest signal points in the subset {0, 2, 4, 6} is δ_2 = √2, whereas the smallest subset {0, 4}, formed from the subset {0, 2, 4, 6}, has the Euclidean distance δ_3 = 2. Hence, this set partitioning divides a signal set successively into smaller subsets with increasing intrasubset distance.
We now form an array of n columns and L rows from the encoded sequence, where M = 2^L. Each column of the array corresponds to a signal point in the signal constellation diagram. Each symbol in the first row corresponds to the right-most digit and each symbol in the last row corresponds to the left-most digit in the label of a signal point. The l-th row is an n-component vector corresponding to a codeword generated by an (n, k_l, d_l) component code C_l with minimum Hamming distance d_l. The total number of component codes used in an MBCM scheme is simply L. There are L·n code symbols and we send n signals in a transmitted block. If the overall coding rate is R_c, the total number of information symbols is L·n·R_c. We have

0 < k_l ≤ n    (7.1)

and

Σ_{l=1}^{L} k_l = L·n·R_c    (7.2)
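Equation (7.2) ties the component-code dimensions to the overall rate. A quick sanity check for the two schemes used later in this chapter (Examples 7.6 and 7.7), assuming the code parameters given there:

```python
def overall_rate(ks, n):
    """Overall coding rate R_c = (sum of k_l) / (L * n), from equation (7.2)."""
    L = len(ks)
    return sum(ks) / (L * n)

# Example 7.6: (4,1,4) repetition and (4,3,2) single-parity-check codes.
print(overall_rate([1, 3], 4))      # 0.5
# Example 7.7: the same two codes plus a (4,4,1) nonredundant code.
print(overall_rate([1, 3, 4], 4))   # 2/3
```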
Multilevel Block-Coded Modulation
173
In matrix notation, let the input to the component codes be represented by the vector

U = [U_1 U_2 ... U_L]    (7.3)

and the output of the component codes be represented by the vector

V″ = [V″_1 V″_2 ... V″_L]    (7.4)
where the l-th encoder input is represented by the k_l-component subvector

U_l = [u_{l,0} u_{l,1} ... u_{l,k_l-1}]    (7.5)

and the l-th encoder output is represented by the n-component subvector

V″_l = [v″_{l,0} v″_{l,1} ... v″_{l,n-1}]    (7.6)

with elements 0 and 1, for 1 ≤ l ≤ L. The l-th encoding operation can be formulated as

V″_l = U_l G″_l    (7.7)

where G″_l is the k_l-by-n generator matrix of the l-th component code, and is given by
G"I
" gl,o,o
" gl,O,1
g "I,O,n-1
" gl,I,O
" gl,I,1
" g 1,l,n-l
" g l,kl-I,O
" gUrU
" l - I, n- l g I,k
with elements 0 and 1. The I-component codevector V" can, represented as
V" = VG" where
(7.8)
=
10
turn, be
(7.9)
Error-Control Block Codes for Communications Engineers
174
Gi' 0 G"= 0
0
G2'
0 0
0
GZ
(7.10)
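As a sketch of equations (7.7) through (7.10) (the helper names are ours, not from the book), the per-level encodings V″_l = U_l G″_l can be collected into a single mod-2 product with the block-diagonal matrix G″:

```python
def gf2_matmul(u, G):
    """Row vector times matrix over GF(2)."""
    n = len(G[0])
    return [sum(u[i] * G[i][j] for i in range(len(u))) % 2 for j in range(n)]

def block_diag(blocks):
    """Block-diagonal matrix G'' of equation (7.10)."""
    rows = sum(len(B) for B in blocks)
    cols = sum(len(B[0]) for B in blocks)
    M = [[0] * cols for _ in range(rows)]
    r = c = 0
    for B in blocks:
        for i, row in enumerate(B):
            for j, v in enumerate(row):
                M[r + i][c + j] = v
        r += len(B)
        c += len(B[0])
    return M

# Component generators used in Example 7.8: repetition and single-parity-check codes.
G1 = [[1, 1, 1, 1]]
G2 = [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]]
Gpp = block_diag([G1, G2])
U = [1, 0, 1, 1]              # U = [U_1 U_2] with U_1 = [1], U_2 = [0 1 1]
print(gf2_matmul(U, Gpp))     # [1, 1, 1, 1, 0, 1, 1, 0], i.e. V'' = [V''_1 V''_2]
```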
Define a permutation matrix P, arranged as an L-by-n array of n-by-L sub-matrices,

P = [ P_{1,0}  P_{1,1}  ...  P_{1,n-1}
      P_{2,0}  P_{2,1}  ...  P_{2,n-1}
      ...
      P_{L,0}  P_{L,1}  ...  P_{L,n-1} ]    (7.11)

where the n-by-L sub-matrix P_{l,j},

P_{l,j} = [ p_{0,1}    p_{0,2}    ...  p_{0,L}
            p_{1,1}    p_{1,2}    ...  p_{1,L}
            ...
            p_{n-1,1}  p_{n-1,2}  ...  p_{n-1,L} ]    (7.12)

1 ≤ l ≤ L, 0 ≤ j ≤ n - 1, has zero entries except the element p_{j,l} = 1. The codevector, V, of the L-level code C is

V = V″P    (7.13)
Substituting equation (7.9) into equation (7.13), we get

V = UG″P    (7.14)

where the vector V is composed of the n subvectors

V = [V_0 V_1 ... V_{n-1}]    (7.15)

and the L-component subvector V_j is

V_j = [v″_{1,j} v″_{2,j} ... v″_{L,j}]    (7.16)

0 ≤ j ≤ n - 1. In terms of the channel bits,
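The permutation P of equations (7.11) through (7.13) is simply a rectangular interleaver: v″_{l,j} moves to position jL + l of V (0-based positions here; the function name is ours):

```python
def permute(vpp, L, n):
    """Apply V = V''P: component l's j-th bit moves to position j*L + l."""
    V = [0] * (L * n)
    for l in range(L):
        for j in range(n):
            V[j * L + l] = vpp[l * n + j]   # v''_{l,j} sits at index l*n + j in V''
    return V

# V'' = [V''_1 V''_2] = [1 1 1 1 | 0 1 1 0], as computed in Example 7.8.
print(permute([1, 1, 1, 1, 0, 1, 1, 0], L=2, n=4))   # [1, 0, 1, 1, 1, 1, 1, 0]
```

The output matches the codevector V = [1 0 1 1 1 1 1 0] of Example 7.8.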
V = [v″_{1,0} v″_{2,0} ... v″_{L,0} v″_{1,1} v″_{2,1} ... v″_{L,1} ... v″_{1,n-1} v″_{2,n-1} ... v″_{L,n-1}]    (7.17)

and [v″_{1,j} v″_{2,j} ... v″_{L,j}] is mapped onto a signal symbol s″_j, 0 ≤ j ≤ n - 1. After the signal mapping, we get the n-component vector
S″ = [s″_0 s″_1 ... s″_{n-1}]    (7.18)
with elements chosen from the signal set {0, 1, ..., M - 1} and M = 2^L.
Example 7.6 Choose n = 4 and the 4-PSK signal constellation shown in Figure 7.8. Let C_1 be the (4, 1, 4) binary repetition code of G″_1 = [1 1 1 1], and C_2 be the (4, 3, 2) binary single-parity-check code of

G″_2 = [ 1 0 0 1
         0 1 0 1
         0 0 1 1 ]

The resultant two-level BCM scheme has R_1 = 1/4, R_2 = 3/4, and an overall coding rate of 1/2. All the uncoded information bits are shown in Figure 7.10. The corresponding codewords and signal vectors generated by the component codes are shown in Figures 7.11 and 7.12, respectively.
Example 7.7 Choose n = 4 and the 8-PSK signal constellation shown in Figure 7.9. Let C_1 be the (4, 1, 4) binary repetition code of G″_1 = [1 1 1 1], and C_2 be the (4, 3, 2) binary single-parity-check code of
Figure 7.10 Partitioning of uncoded bits to the input of the component codes in Example 7.6.
Figure 7.11 Partitioning of codewords generated by the component codes in Example 7.6.
Figure 7.12 Partitioning of signal vectors generated by the component codes in Example 7.6.
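The whole two-level encoder of Example 7.6, from information bits to 4-PSK symbols, can be sketched as follows (function names ours; the label mapping s_j = v″_{1,j} + 2·v″_{2,j} follows the natural labeling of Figure 7.8):

```python
def encode_mbcm(us, Gs):
    """Encode each level over GF(2), then map column j to symbol sum_l v_l[j] * 2**l."""
    rows = []
    for u, G in zip(us, Gs):
        n = len(G[0])
        rows.append([sum(u[i] * G[i][j] for i in range(len(u))) % 2
                     for j in range(n)])
    n = len(rows[0])
    return [sum(rows[l][j] << l for l in range(len(rows))) for j in range(n)]

G1 = [[1, 1, 1, 1]]                                  # (4, 1, 4) repetition code
G2 = [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]]      # (4, 3, 2) single-parity-check code
print(encode_mbcm([[1], [0, 1, 1]], [G1, G2]))       # [1, 3, 3, 1]
```

The output [1, 3, 3, 1] is the 4-PSK signal vector obtained for the same information bits in Examples 7.8 and 7.9.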
G″_2 = [ 1 0 0 1
         0 1 0 1
         0 0 1 1 ]

and C_3 be the (4, 4, 1) binary nonredundant code of

G″_3 = [ 1 0 0 0
         0 1 0 0
         0 0 1 0
         0 0 0 1 ]

The resultant three-level BCM scheme has R_1 = 1/4, R_2 = 3/4, R_3 = 4/4, and an overall coding rate of 2/3. All the uncoded information bits are shown in Figure 7.13. The corresponding codewords and signal vectors generated by the component codes are shown in Figures 7.14 and 7.15, respectively.
Alternatively, the multilevel encoding process can be performed by a single encoding process. In this case, the (Σ_{l=1}^{L} k_l)-by-(L·n) generator matrix is

G = G″P    (7.19)

where G is given by equation (7.20), and the codevector is

V = UG    (7.21)

Also, the parity-check matrix associated with the generator matrix G is the (Σ_{l=1}^{L} (n - k_l))-by-(L·n) matrix H, where

H = H″P    (7.22)

and
The generator matrix G of equation (7.20) is obtained from G″ by the column permutation P: the row of G formed from the i-th row of G″_l has the entry g″_{l,i,j} in column jL + l, for 0 ≤ j ≤ n - 1, and zeros elsewhere. For example, the first k_1 rows of G are

[ g″_{1,0,0}      0  ...  0   g″_{1,0,1}      0  ...  0   ...   g″_{1,0,n-1}      0  ...  0
  ...
  g″_{1,k_1-1,0}  0  ...  0   g″_{1,k_1-1,1}  0  ...  0   ...   g″_{1,k_1-1,n-1}  0  ...  0 ]    (7.20)
Figure 7.13 Partitioning of uncoded bits to the input of the component codes in Example 7.7.
Figure 7.14 Partitioning of codewords generated by the component codes in Example 7.7.
Figure 7.15 Partitioning of signal vectors generated by the component codes in Example 7.7.
H″ = [ H″_1  0     ...  0
       0     H″_2  ...  0
       ...
       0     0     ...  H″_L ]    (7.23)

H″_l is the (n - k_l)-by-n parity-check matrix associated with the l-th component code C_l, with elements 0 and 1, for 1 ≤ l ≤ L.
Example 7.8 If L = 2, U_1 = [1], U_2 = [0 1 1], G″_1 = [1 1 1 1], and
G″_2 = [ 1 0 0 1
         0 1 0 1
         0 0 1 1 ]

then

V″_1 = U_1 G″_1 = [1 1 1 1]
V″_2 = U_2 G″_2 = [0 1 1 0]
V″ = [V″_1 V″_2] = [1 1 1 1 0 1 1 0]
P = [ 1 0 0 0 0 0 0 0
      0 0 1 0 0 0 0 0
      0 0 0 0 1 0 0 0
      0 0 0 0 0 0 1 0
      0 1 0 0 0 0 0 0
      0 0 0 1 0 0 0 0
      0 0 0 0 0 1 0 0
      0 0 0 0 0 0 0 1 ]

and

V = V″P = [1 0 1 1 1 1 1 0]

Example 7.9 If L = 2, U_1 = [1], U_2 = [0 1 1], G″_1 = [1 1 1 1],

G″_2 = [ 1 0 0 1
         0 1 0 1
         0 0 1 1 ]

and U = [U_1 U_2] = [1 0 1 1], then
G″ = [ G″_1  0
       0     G″_2 ]
   = [ 1 1 1 1 0 0 0 0
       0 0 0 0 1 0 0 1
       0 0 0 0 0 1 0 1
       0 0 0 0 0 0 1 1 ]

G = G″P = [ 1 0 1 0 1 0 1 0
            0 1 0 0 0 0 0 1
            0 0 0 1 0 0 0 1
            0 0 0 0 0 1 0 1 ]

V = UG

and
V = [v″_{1,0} v″_{2,0} v″_{1,1} v″_{2,1} v″_{1,2} v″_{2,2} v″_{1,3} v″_{2,3}]
  = [1 0 1 1 1 1 1 0]

This agrees with the result obtained using the two-stage encoding process in Example 7.8. In both cases, the 4-PSK modulated signal vector is S″ = [1 3 3 1].
Given equations (7.1) and (7.2), we want to choose {k_l} to maximize the minimum Euclidean distance between the codewords of the code. Let (n, k_l, d_l) denote code C_l. Consider two different codewords in C_l. For each one of the positions where these two codewords differ, the squared Euclidean distance between the corresponding signal points is ≥ δ²_l. Since two distinct codewords must differ in at least d_l positions, the minimum squared Euclidean distance d²_med between two codewords in C_l must be ≥ δ²_l d_l. In general, the minimum squared Euclidean distance between the codewords of the L-level code C is

d²_med ≥ min{δ²_1 d_1, δ²_2 d_2, ..., δ²_L d_L}    (7.24)

Consider the rate-1/2 two-level block-coded 4-PSK signals shown in Example 7.6 and the uncoded BPSK signals. It is assumed that all the signal points are equally spaced and lie on the unit circle. The average signal symbol energy is equal to 1. For the rate-1/2 block-coded 4-PSK signals, L = 2, k_1 + k_2 = 4, a total of 16 codewords in C, L·n = 8, δ²_1 = 2, d_1 = 4, δ²_2 = 4, d_2 = 2, and d²_med ≥ 8. The minimum squared Euclidean distance of
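The bound of equation (7.24) can be evaluated directly. A sketch checking the two values used in this chapter (the δ²_l values come from the 4-PSK and 8-PSK partitions; the function name is ours):

```python
def d2_med(deltas_sq, dmins):
    """Lower bound of equation (7.24): min over levels of delta_l^2 * d_l."""
    return min(ds * d for ds, d in zip(deltas_sq, dmins))

# Two-level 4-PSK scheme of Example 7.6.
print(d2_med([2, 4], [4, 2]))            # 8
# Three-level 8-PSK scheme of Example 7.7 (delta_1^2 = 0.586).
print(d2_med([0.586, 2, 4], [4, 2, 1]))  # 2.344
```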
the uncoded BPSK signals is 4. The information rate and the bandwidth of both signaling schemes are the same, but the coded signaling scheme requires more signal points for transmission than the corresponding uncoded signaling scheme. It can be seen that the Euclidean distance between any two nearest coded 4-PSK signals is smaller than that between the corresponding uncoded BPSK signals. The gain from coding with 4-PSK signals must, therefore, outweigh the loss of distance due to signal expansion. Otherwise, the coded 4-PSK signaling scheme will not outperform the corresponding uncoded BPSK signaling scheme.
7.3 Decoding Methods

7.3.1 Maximum-likelihood Decoding
Maximum-likelihood decoding involves finding the most likely codeword in C among the 2^{Σ_{l=1}^{L} k_l} codewords in C. This is done by computing the squared Euclidean distance between the received n-component signal vector R = [r_0 r_1 ... r_{n-1}] and all possible n-component signal vectors, bearing in mind that there is a one-to-one correspondence between each n-component signal vector and a codeword in C. The closest n-component signal vector to the received vector is taken as the decoded signal vector. The corresponding codeword in C can then be determined.
Example 7.10 Choose n = 4 and the 4-PSK signal constellation shown in Figure 7.8. Let C_1 be the (4, 1, 4) binary repetition code of G″_1 = [1 1 1 1] and C_2 be the (4, 3, 2) binary single-parity-check code of

G″_2 = [ 1 0 0 1
         0 1 0 1
         0 0 1 1 ]

All the information vectors, the codewords generated by the component codes, and the corresponding signal vectors are shown in Figures 7.10, 7.11, and 7.12, respectively. Assume that the all-zero codeword is transmitted and the 4-component received signal vector is R = [r_0 r_1 r_2 r_3] = [(0.25, -0.5) (0.25, -0.5) (0.25, -0.5) (0.5, 0.3)], where each received signal point r_j is represented by its x- and y-coordinates. The squared Euclidean distances between the vectors R and S″ are shown in Table 7.1. The squared Euclidean distance between R and the all-zero 4-component signal vector is 2.7775, which makes it the closest signal vector to R. The decoded codeword is the all-zero codeword, and the corresponding decoded information vector is the all-zero information vector. The errors are corrected.
Table 7.1
Squared Euclidean Distance Computation

Signal Vector S″    Squared Euclidean Distance d²(R, S″)
0 0 0 0             2.7775
0 0 2 2             5.7775
0 2 0 2             5.7775
0 2 2 0             4.7775
2 0 0 2             5.7775
2 0 2 0             4.7775
2 2 0 0             4.7775
2 2 2 2             7.7775
1 1 1 1             7.6775
1 1 3 3             6.8775
1 3 1 3             6.8775
1 3 3 1             3.6775
3 1 1 3             6.8775
3 1 3 1             3.6775
3 3 1 1             3.6775
3 3 3 3             2.8775
If hard-decision decoding is used, the received signal vector is quantized to [(0.0, -1.0) (0.0, -1.0) (0.0, -1.0) (1.0, 0.0)]. The squared Euclidean distance between the quantized received signal vector and the 4-component signal vector [3 3 3 3] is 2.0, which makes it the closest signal vector to the quantized received signal vector. The decoded codeword is the all-one codeword, and the corresponding decoded information vector is the all-one information vector. The errors are not corrected.
In general, maximum-likelihood decoding becomes prohibitively complex when the value of 2^{Σ_{l=1}^{L} k_l} is large.
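A brute-force maximum-likelihood decoder for the scheme of Example 7.10 can be sketched as follows (helper names ours). It regenerates the 16 signal vectors of Table 7.1 and picks the one closest to R:

```python
import itertools
import math

def psk(k, M=4):
    """4-PSK point k at angle 2*pi*k/M on the unit circle."""
    return (math.cos(2 * math.pi * k / M), math.sin(2 * math.pi * k / M))

def gf2(u, G):
    """Row vector times matrix over GF(2)."""
    return [sum(a * b for a, b in zip(u, col)) % 2 for col in zip(*G)]

G1 = [[1, 1, 1, 1]]
G2 = [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]]

# All 2^(k1+k2) = 16 signal vectors of the two-level code.
signal_vectors = []
for u1 in itertools.product([0, 1], repeat=1):
    for u2 in itertools.product([0, 1], repeat=3):
        v1, v2 = gf2(u1, G1), gf2(u2, G2)
        signal_vectors.append(tuple(a + 2 * b for a, b in zip(v1, v2)))

def ml_decode(R):
    """Return the signal vector minimizing the squared Euclidean distance to R."""
    return min(signal_vectors,
               key=lambda s: sum(math.dist(r, psk(k)) ** 2 for r, k in zip(R, s)))

R = [(0.25, -0.5), (0.25, -0.5), (0.25, -0.5), (0.5, 0.3)]
print(ml_decode(R))   # (0, 0, 0, 0), at squared distance 2.7775
```

Feeding the hard-quantized vector [(0.0, -1.0), (0.0, -1.0), (0.0, -1.0), (1.0, 0.0)] into the same decoder returns (3, 3, 3, 3), reproducing the hard-decision failure described above.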
7.3.2 Multistage Decoding [9]

Instead of performing true maximum-likelihood decoding, a multistage decoding procedure can be used. The system model is shown in Figure 7.16. The idea underlying multistage decoding is the following. The most powerful component code C_1 is decoded first by decoder D_1. The next most powerful code C_2 is then decoded by decoder D_2, assuming that C_1 was correctly decoded. In general, decoding of C_l by decoder D_l assumes correct decoding of all previous codes, for 1 ≤ l ≤ L.
Figure 7.16 Model of a multilevel block-coded modulation system.
The multistage decoding scheme partitions all the possible codewords in C into 2^{k_1} sets. All the elements in a set have the same codeword in the first row. The first decoding step involves choosing the most likely set among the 2^{k_1} sets. When the decision is made, the first row is decoded and is assumed to be correct. The elements in the chosen set are then partitioned into 2^{k_2} sets. All the elements in a set have the same codeword in the first row as well as the same codeword in the second row. The next decoding step involves choosing the most likely set among the 2^{k_2} sets from the chosen set. When the decision is made, the second row is decoded and is assumed to be correct. The above procedure is repeated L times and the received signal vector is decoded in a suboptimal way. This is shown in the following example.
Example 7.11 Choose n = 4 and the 4-PSK signal constellation shown in Figure 7.8. Let C_1 be the (4, 1, 4) binary repetition code of G″_1 = [1 1 1 1] and C_2 be the (4, 3, 2) binary single-parity-check code of

G″_2 = [ 1 0 0 1
         0 1 0 1
         0 0 1 1 ]

All the information vectors, the codewords generated by the component codes, and the corresponding signal vectors are shown in Figures 7.10, 7.11, and 7.12, respectively. Assume that the all-zero codeword is transmitted and the 4-component received signal vector is R = [r_0 r_1 r_2 r_3] = [(0.25, -0.5) (0.25, -0.5) (0.25, -0.5) (0.5, 0.3)], where each received signal point r_j is represented by its x- and y-coordinates.
1. To estimate the codeword in code C_1, the first-stage decoder starts decoding in the subsets {0, 2} and {1, 3}. The closest signal symbols in the subset {0, 2} to R are 0, 0, 0, and 0 with squared Euclidean distances 0.8125, 0.8125, 0.8125, and 0.34, respectively. The total squared Euclidean distance is 2.7775. The closest signal symbols in the subset {1, 3} to R are 3, 3, 3, and 1 with squared Euclidean distances 0.3125, 0.3125, 0.3125, and 0.74, respectively. The total squared Euclidean distance is 1.6775. The first-stage decoder picks the signal vector [3 3 3 1] as the decoded signal vector with the smallest squared Euclidean distance, and chooses the all-one codeword in C_1. The first row is decoded and is assumed to be correct. The corresponding decoded information symbol is 1.
2. To estimate the codeword in code C_2, the second-stage decoder starts decoding in the subsets {1} and {3}. The closest signal symbols in the subset {1} to R are 1, 1, 1, and 1 with squared Euclidean distances 2.3125, 2.3125, 2.3125, and 0.74, respectively. The total squared Euclidean distance is 7.6775. The closest signal symbols in the subset {3} to R are 3, 3, 3, and 3 with squared Euclidean distances 0.3125, 0.3125, 0.3125, and 1.94, respectively. The total squared Euclidean distance is 2.8775. The second-stage decoder picks the signal vector [3 3 3 3] as the decoded signal vector with the smallest squared Euclidean distance, and chooses the all-one codeword in C_2. The second row is decoded and the corresponding decoded information vector is [1 1 1].
The decoded codeword is [1 1 1 1 1 1 1 1], and the corresponding decoded information vector is [1 1 1 1]. It can be seen that the multistage decoder makes an incorrect decision at the end of the first-stage decoding. This incorrect decoding information is used by the second-stage decoder. As a result, the decoded codeword is [1 1 1 1 1 1 1 1].
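The two decoding stages above can be sketched in a few lines (helper names ours). Stage 1 picks the better repetition codeword from per-symbol subset metrics; stage 2 then searches only the single-parity-check codewords consistent with that decision:

```python
import itertools
import math

def psk(k, M=4):
    """4-PSK point k at angle 2*pi*k/M on the unit circle."""
    return (math.cos(2 * math.pi * k / M), math.sin(2 * math.pi * k / M))

def metric(R, symbols):
    """Total squared Euclidean distance of a candidate signal vector."""
    return sum(math.dist(r, psk(s)) ** 2 for r, s in zip(R, symbols))

def multistage_decode(R):
    # Stage 1: repetition code; bit b selects subset {b, b+2} at every position.
    def stage1_metric(b):
        return sum(min(math.dist(r, psk(b)) ** 2, math.dist(r, psk(b + 2)) ** 2)
                   for r in R)
    b1 = min([0, 1], key=stage1_metric)
    # Stage 2: single-parity-check code, assuming stage 1 decided correctly.
    spc = [v for v in itertools.product([0, 1], repeat=4) if sum(v) % 2 == 0]
    v2 = min(spc, key=lambda v: metric(R, [b1 + 2 * b for b in v]))
    return [b1] * 4, list(v2)

R = [(0.25, -0.5), (0.25, -0.5), (0.25, -0.5), (0.5, 0.3)]
print(multistage_decode(R))   # ([1, 1, 1, 1], [1, 1, 1, 1]): the wrong codeword
```

The decoder returns the all-one codeword, reproducing the first-stage error propagation described in the example.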
For the same received vector R, the maximum-likelihood decoder of Example 7.10 makes the correct decision and chooses the all-zero codeword as the decoded codeword. The end result is that multistage decoding is a suboptimum decoding technique.

7.3.3 Multistage Trellis Decoding
Since an (n, k, d_min) block code can be represented by an n-stage syndrome-former trellis and can be decoded by means of the Viterbi algorithm, we can implement an alternative multistage decoding scheme to decode the component codes in sequence. A Viterbi decoder is used to decode C_1, followed by one for C_2. We use the syndrome-former trellis for C_1 and a Viterbi decoder D_1 to estimate the codeword in C_1, and use the syndrome-former trellis for C_2 and a Viterbi decoder D_2 to estimate the codeword in C_2, assuming that the estimated codeword in C_1 is correct. The above procedure is repeated L times until the codewords in all the component codes are estimated.
Example 7.12 Choose n = 4 and the 4-PSK signal constellation shown in Figure 7.8. Let C_1 be the (4, 1, 4) binary repetition code of G″_1 = [1 1 1 1] and C_2 be the (4, 3, 2) binary single-parity-check code of

G″_2 = [ 1 0 0 1
         0 1 0 1
         0 0 1 1 ]

The syndrome-former trellises of the component codes
are shown in Figure 7.17. Assume that the all-zero codeword is transmitted and the 4-component received signal vector is R = [r_0 r_1 r_2 r_3] = [(1.0, -0.1) (1.0, -0.1) (0.9, -1.0) (0.9, -1.0)], where each received signal point r_j is represented by its x- and y-coordinates.
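The running path metrics quoted in the decoding steps of this example can be reproduced numerically (a sketch; it assumes 4-PSK point k at angle 2πk/4 on the unit circle):

```python
import math

def psk4(k):
    """4-PSK point k at angle 2*pi*k/4 on the unit circle."""
    return (math.cos(2 * math.pi * k / 4), math.sin(2 * math.pi * k / 4))

def path_metric(R, symbols):
    """Running squared Euclidean distance of a candidate signal path."""
    return sum(math.dist(r, psk4(s)) ** 2 for r, s in zip(R, symbols))

R = [(1.0, -0.1), (1.0, -0.1), (0.9, -1.0), (0.9, -1.0)]
print(round(path_metric(R, [0, 0, 0, 0]), 2))   # 2.04
print(round(path_metric(R, [3, 3, 3, 3]), 2))   # 5.24
print(round(path_metric(R, [0, 0, 2, 2]), 2))   # 9.24
```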
Figure 7.17 Syndrome-former trellis diagrams: (a) (4, 1, 4) binary repetition code, and (b) (4, 3, 2) binary single-parity-check code.
1. To estimate the codeword in code C_1, the syndrome-former trellis for code C_1 is used. Due to the set-partitioning of the signal constellation, it can be seen that signals associated with the trellis paths with a code symbol '0' and a code symbol '1' come from the sets {0, 2} and {1, 3}, respectively. The closest signal vector associated with the all-zero codeword path is S″ = [0 0 0 0] and the running squared Euclidean distance is 0.01 + 0.01 + 1.01 + 1.01 = 2.04. The closest signal vector associated with the all-one codeword path is S″ = [3 3 3 3] and the running squared Euclidean distance is 1.81 + 1.81 + 0.81 + 0.81 = 5.24. The first-stage decoder picks the all-zero signal vector as the path with the smallest running squared Euclidean distance, and chooses the all-zero codeword in C_1. The corresponding decoded information symbol is 0.
2. To estimate the codeword in code C_2, the syndrome-former trellis for code C_2 is used. Assuming that the estimated all-zero codeword in C_1 is correct, it can be seen that signals associated with the trellis paths with a code symbol '0' and a code symbol '1' come from the sets {0} and {2}, respectively. The two closest signal paths entering the end state of the trellis are 0-0-0-0 and 0-0-2-2. These signal paths correspond to the codeword paths 0-0-0-0 and 0-0-1-1, respectively. The running squared Euclidean distances associated with the 0-0-0-0 and the 0-0-2-2 signal paths are 0.01 + 0.01 + 1.01 + 1.01 = 2.04 and 0.01 + 0.01 + 4.61 + 4.61 = 9.24, respectively. The second-stage decoder picks the all-zero signal vector as the path with the smallest running squared Euclidean distance, and chooses the all-zero codeword in C_2. The corresponding decoded information vector is [0 0 0].
The decoded codeword is [0 0 0 0 0 0 0 0], and the corresponding decoded information vector is [0 0 0 0]. The errors are corrected.
Example 7.13 Choose n = 4 and the 8-PSK signal constellation shown in Figure 7.9.
Let C_1 be the (4, 1, 4) binary repetition code of G″_1 = [1 1 1 1], C_2 be the (4, 3, 2) binary single-parity-check code of

G″_2 = [ 1 0 0 1
         0 1 0 1
         0 0 1 1 ]

and C_3 be the (4, 4, 1) binary nonredundant code of

G″_3 = [ 1 0 0 0
         0 1 0 0
         0 0 1 0
         0 0 0 1 ]

The syndrome-former trellises of the component codes
are shown in Figure 7.18. Assume that the all-zero codeword is transmitted and the 4-component received signal vector is R = [r_0 r_1 r_2 r_3] = [(1.0, -0.1) (1.0, -0.1) (0.9, -0.4) (0.9, -0.4)], where each received signal point r_j is represented by its x- and y-coordinates.
1. To estimate the codeword in code C_1, the syndrome-former trellis for code C_1 is used. Due to the set-partitioning of the signal constellation, it can be seen that signals associated with the trellis paths with a code symbol '0' and a code symbol '1' come from the sets
Figure 7.18 Syndrome-former trellis diagrams: (a) (4, 1, 4) binary repetition code; (b) (4, 3, 2) binary single-parity-check code; and (c) (4, 4, 1) binary nonredundant code.
{0, 2, 4, 6} and {1, 3, 5, 7}, respectively. The closest signal vector associated with the all-zero codeword path is S″ = [0 0 0 0], and the running squared Euclidean distance is 0.01 + 0.01 + 0.17 + 0.17 = 0.36. The closest signal vector associated with the all-one codeword path is S″ = [7 7 7 7], and the running squared Euclidean distance is 0.4544 + 0.4544 + 0.1315 + 0.1315 = 1.1718. The first-stage decoder picks the all-zero signal vector as the path with the smallest running squared Euclidean distance and chooses the all-zero codeword in C1. The corresponding decoded information symbol is 0.

2. To estimate the codeword in code C2, the syndrome-former trellis for code C2 is used. Assuming that the estimated all-zero codeword in C1 is correct, it can be seen that signals associated with the trellis paths with a code symbol '0' and a code symbol '1' come from the sets {0, 4} and {2, 6}, respectively. The two closest signal paths entering the end state of the trellis are 0-0-0-0 and 0-0-6-6. These signal paths correspond to the codeword paths 0-0-0-0 and 0-0-1-1, respectively. The running squared Euclidean distances associated with the 0-0-0-0 and the 0-0-6-6 signal paths are 0.01 + 0.01 + 0.17 + 0.17 = 0.36 and 0.01 + 0.01 + 1.17 + 1.17 = 2.36, respectively. The second-stage decoder picks the all-zero signal vector as the path with the smallest running squared Euclidean distance and chooses the all-zero codeword in C2. The corresponding decoded information vector is [0 0 0].

3. To estimate the codeword in code C3, the syndrome-former trellis for code C3 is used. Assuming that the estimated all-zero codewords in C1 and C2 are correct, it can be seen that signals associated with the trellis paths with a code symbol '0' and a code symbol '1' come from the sets {0} and {4}, respectively. The two closest signal paths entering the end state of the trellis are 0-0-0-0 and 0-0-0-4. These signal paths correspond to the codeword paths 0-0-0-0 and 0-0-0-1, respectively. The running squared Euclidean distances associated with the 0-0-0-0 and the 0-0-0-4 signal paths are 0.01 + 0.01 + 0.17 + 0.17 = 0.36 and 0.01 + 0.01 + 0.17 + 3.77 = 3.96, respectively. The third-stage decoder picks the all-zero signal vector as the path with the smallest running squared Euclidean distance and chooses the all-zero codeword in C3. The corresponding decoded information vector is [0 0 0 0].

The decoded codeword is [0 0 0 0 0 0 0 0 0 0 0 0], and the corresponding decoded information vector is [0 0 0 0 0 0 0 0]. The errors are corrected.
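The stage-by-stage decision above can be sketched as follows. The branch metrics are the squared Euclidean distances quoted in the example, and `pick_path` is a hypothetical helper standing in for the syndrome-former trellis search, which at each stage here reduces to comparing the two surviving candidate paths:

```python
import math

# Per-stage path selection for the multistage decoding example above.

def pick_path(candidates):
    """Return (label, running metric) of the smallest-metric path."""
    label, dists = min(candidates, key=lambda c: sum(c[1]))
    return label, sum(dists)

# Stage 1: all-zero versus all-one codeword of the repetition code C1.
label, metric = pick_path([
    ("all-zero", [0.01, 0.01, 0.17, 0.17]),
    ("all-one",  [0.4544, 0.4544, 0.1315, 0.1315]),
])
assert label == "all-zero" and math.isclose(metric, 0.36)

# Stage 2: the two closest paths through the trellis of C2.
label, metric = pick_path([
    ("0-0-0-0", [0.01, 0.01, 0.17, 0.17]),
    ("0-0-6-6", [0.01, 0.01, 1.17, 1.17]),
])
assert label == "0-0-0-0" and math.isclose(metric, 0.36)
```

The same comparison applies at the third stage; each stage conditions its candidate signal sets on the codewords already decided at the earlier stages.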
Multilevel Block-Coded Modulation
7.4 Performance of Multilevel Block-Coded Modulation With Multistage Decoding

Consider an L-level code C with L component codes of length n each. If each component of a codeword in C is mapped onto a signal point in the 2^L-ary signal constellation, the effective information rate of C is defined as the average number of information bits transmitted per signal dimension and is given by

R[C] = (1/(2n)) log2 |C|   (7.25)
where |C| is the total number of codewords in C. The asymptotic coding gain (in dB) over the uncoded 2^(L−1)-ary signals is

ACG = 10 log10(R[C] d²_min / d²_uncoded)   (7.26)

where d²_uncoded is the squared Euclidean distance of the uncoded 2^(L−1)-ary signaling scheme. In Example 7.12, L = 2, k1 + k2 = 4, |C| = 16, L·n = 8, δ1² = 2, d1 = 4, δ2² = 4, d2 = 2, and d²_min = 8. Thus, R[C] = 0.5 bit per dimension and the asymptotic coding gain is 3.01 dB. In Example 7.13, L = 3, k1 + k2 + k3 = 8, |C| = 256, L·n = 12, δ1² = 0.586, d1 = 4, δ2² = 2, d2 = 2, δ3² = 4, d3 = 1, and d²_min ≈ 2.344. Thus, R[C] = 1 bit per dimension and the asymptotic coding gain is 0.689 dB. This MBCM scheme achieves very little coding gain. If we use the same kinds of component codes but increase the codeword length n to 7, the above arrangement can achieve a coding gain of 3 dB. The probability of error with multistage decoding is given by

P_e = 1 − P_c   (7.27)

where

P_c = P(1) · P(2|1) · P(3|2,1) ⋯ P(L|L−1, L−2, …, 1)   (7.28)

P_c is the probability of correctly decoding the L-level code C, P(1) is the probability of correctly decoding the code C1, P(2|1) is the probability of correctly decoding the code C2 given that the code C1 was correctly decoded, and so on.
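As a quick numerical check, (7.25)–(7.28) can be evaluated for the Example 7.13 parameters. The per-stage probabilities at the end are made-up values used only to illustrate the product form of (7.28), and d²_uncoded = 2 assumes the unit-energy 4-PSK constellation:

```python
import math

n, num_codewords = 4, 256                 # component length n, |C| = 256
R = math.log2(num_codewords) / (2 * n)    # (7.25): 1 bit per dimension

# (7.26): asymptotic gain over uncoded 4-PSK (d^2_uncoded = 2 for
# adjacent points of a unit-energy 4-PSK constellation).
acg = 10 * math.log10(R * 2.344 / 2.0)    # about 0.689 dB

stage_probs = [0.999, 0.995, 0.99]        # illustrative P(1), P(2|1), P(3|2,1)
p_c = math.prod(stage_probs)              # (7.28)
p_e = 1.0 - p_c                           # (7.27)

assert R == 1.0 and abs(acg - 0.689) < 0.001
```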
7.5 Advantages and Disadvantages of Using Multilevel Block-Coded Modulation

It can be seen from (7.25) that it is easy to transmit a noninteger number of bits per dimension with an appropriate choice of |C|. Implementation is less complex than maximum-likelihood decoding: the total decoding complexity is the sum of the decoding complexities of the individual levels, instead of the product, as in single-stage maximum-likelihood decoding. A drawback of multistage decoding is the error propagation caused by passing incorrectly decoded information from one stage to the next. Another drawback is that multistage decoding results in a larger effective path multiplicity than single-stage decoding. As a result, the overall decoding is not optimum even if the decoding at each stage is optimum. Error propagation can be made negligibly small if the first few component codes are powerful and are decoded optimally. At high signal-to-noise ratio, where the Euclidean distance is the main factor that affects the performance, we would expect this multistage decoding scheme to be only slightly inferior to true maximum-likelihood decoding.
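The complexity comparison can be made concrete with a toy calculation; the per-level costs below are invented for illustration only:

```python
import math

# Multistage decoding costs roughly the sum of the per-level trellis
# costs, while a single-stage maximum-likelihood search over the full
# code scales with their product (hypothetical cost units per level).
level_costs = [16, 8, 2]
multistage_cost = sum(level_costs)          # 26 units
single_stage_cost = math.prod(level_costs)  # 256 units
assert multistage_cost < single_stage_cost
```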
7.6 Computer Simulation Results

The variation of bit error rate with the Es/N0 ratio of two multilevel block-coded modulation schemes with coherent 4-PSK and 8-PSK signals over the AWGN channel has been measured by computer simulation. The system models are shown in Figures 7.1 and 7.16. Here, Es is the average transmitted signal symbol energy, and N0/2 is the two-sided power spectral density of the noise. Maximum-likelihood and multistage decoders have been used in the tests. In the two-level block-coded 4-PSK modulation scheme, a (4, 1, 4) binary repetition code and a (4, 3, 2) binary single-parity-check code are used as component codes. The parameters of the component codes are given in Table 7.2. In the three-level block-coded 8-PSK modulation scheme, a (7, 1, 7) binary repetition code, a (7, 6, 2) binary single-parity-check code, and a (7, 7, 1) binary nonredundant code are used as component codes. The parameters of the component codes are given in Table 7.3. In each test, the average energy per signal symbol was fixed, and the variance of the AWGN was adjusted for a range of average bit error rates. The simulated bit error performance of the two-level and three-level block-coded modulation schemes with unquantized maximum-likelihood and multistage decoding is shown in Figures 7.19 and 7.20, respectively. For a certain range of low Es/N0 ratios, an uncoded system always appears to have a better tolerance to noise than the coded system. It can be seen that an
Table 7.2
Parameters of the Two-Level BCM Component Codes, C_l, l = 1, 2

Code  (n, k_l, d_l)  Generator/parity-check matrices
C1    (4, 1, 4)      G_1 = [1 1 1 1]
C2    (4, 3, 2)      H_2 = [1 1 1 1]

Since the (4, 3, 2) single-parity-check code is the dual of the (4, 1, 4) repetition code, H_1 and G_2 may each be taken as the systematic 3 × 4 matrix [I_3 1], where 1 denotes an all-ones column.
Table 7.3
Parameters of the Three-Level BCM Component Codes, C_l, l = 1, 2, 3

Code  (n, k_l, d_l)  Generator/parity-check matrices
C1    (7, 1, 7)      G_1 = [1 1 1 1 1 1 1]; H_1 is the 6 × 7 matrix [1 I_6]
C2    (7, 6, 2)      G_2 is the 6 × 7 systematic matrix [I_6 1]; H_2 = [1 1 1 1 1 1 1]
C3    (7, 7, 1)      G_3 = I_7 (the code is nonredundant)

Here 1 denotes an all-ones column and I_m the m × m identity matrix.
Figure 7.19 Performance of uncoded coherent BPSK scheme and two-level block-coded coherent 4-PSK modulation scheme over AWGN channel (bit error rate versus Es/N0 in dB; curves: uncoded coherent BPSK, coded 4-PSK with two-stage decoding, and coded 4-PSK with maximum-likelihood decoding).
improvement of about 1.8 dB in coding gain can be achieved with both multilevel block-coded modulation schemes. Furthermore, the bit error probability degradation of the MBCM schemes with multistage decoding is very small when compared with the MBCM schemes with maximum-likelihood decoding. In low-cost applications, MBCM with multistage decoding is attractive because it is less complex than maximum-likelihood decoding.
Figure 7.20 Performance of uncoded coherent 4-PSK scheme and three-level block-coded coherent 8-PSK modulation scheme over AWGN channel (bit error rate versus Es/N0 in dB; curves: uncoded coherent 4-PSK and coded 8-PSK with three-stage decoding).
8 Applications of Block Codes

8.1 Introduction

Since Shannon's work in 1948 [1], error-control block and convolutional coding has been an active area of research and has found applications in many practical systems. Most of the early work in error-control coding was devoted to space communication systems, where power was limited and bandwidth was not a major concern. The goal was simply to reduce the power requirement and approach Shannon's channel capacity limit. Error-control coding has also found applications in the bandwidth-limited region of satellite communications and mobile communications, where the goal is to reduce power requirements and increase spectral efficiency. Recently, Reed-Solomon codes have been applied to digital audio recording systems. This chapter describes applications of error-control block coding techniques to space communications, mobile communications, and the compact disc digital audio recording system.
8.2 Applications to Space Communications

In deep-space communications, the received signal power at the earth station is usually weak. The noise is additive white Gaussian, and the errors are random in nature. A large error-correcting capability is needed. Because the bandwidth is not restricted, it is possible to build a complex channel decoder. Therefore, low-rate, powerful error-correcting codes with soft-decision decoding are often used. Code concatenation is also a possibility to improve system performance [2].
8.2.1 Voyager Missions
In 1977, the Voyager spacecraft was launched to explore the outer planets. It reached Jupiter and Saturn in 1979 and 1981, respectively, and pictures of those planets were sent back to earth stations. Two convolutional codes were designed at the Jet Propulsion Laboratory and employed for the mission: the rate-1/2 and rate-1/3, 64-state binary convolutional codes with free minimum Hamming distances of 10 and 15, respectively [3]. (The reader may wish to consult [4] for the theory of convolutional codes.) Both binary convolutional codes with coherent BPSK signals can be decoded by an eight-level soft-decision Viterbi decoder [5], capable of very high speed, up to 100 Kbps. At a probability of decoding error of 10^-5, the rate-1/2 and rate-1/3 binary convolutional codes give coding gains of 5.1 dB and 5.6 dB, respectively, when compared with the uncoded coherent BPSK signaling scheme. To improve system performance, the rate-1/2, 64-state binary convolutional code was used in concatenation with a (255, 223) Reed-Solomon code over the Galois field GF(2^8) in the mission of the Voyager to Uranus in 1986. The block diagram of the serial concatenated coding system is shown in Figure 8.1. In that communication system, the Reed-Solomon code serves as an outer code with block interleaving, and the convolutional code is used as an inner code, as studied by Forney [2]. Interleaving is a process of rearranging the symbols in a sequence in a predefined manner. Its purpose is to ensure that bursts of errors affecting the transmitted data do not appear as bursts but as randomly scattered errors in the received word. For transmission, the Reed-Solomon q-ary coded sequence is first divided into λ′ sequences of (q − 1) symbols each. The block interleaver accepts a block of λ′(q − 1) symbols and permutes the symbols. The permutation is accomplished by filling the rows of a λ′-row by (q − 1)-column array with symbols; λ′ is defined as the interleaving depth.
Figure 8.1 A serial concatenated coding system block diagram.

Symbols are entered into the
interleaver array by rows, and symbols are read out by columns. At the receiver, symbols are entered into the deinterleaver array by columns and read out by rows. Figure 8.2 shows an interleaver with λ′ rows and (q − 1) columns. A block of λ′(q − 1) symbols is input to the interleaver. It can be seen that any two adjacent symbols at the interleaver input are separated by λ′ − 1 other symbols on output. In other words, any burst of length b′ ≤ λ′ symbol errors results in isolated errors at the deinterleaver output, separated by at least (q − 1) − 1 other symbols. The total end-to-end delay is 2λ′(q − 1) symbols. An interleaving depth of 5 is employed in the Voyager missions. The system operated at about 30 Kbps. The inner convolutional code was decoded by an eight-level soft-decision Viterbi decoder, and the Reed-Solomon code was decoded by a Berlekamp-Massey hard-decision decoder. As shown in Figure 8.3, with the rate-1/2, 64-state convolutional code alone, the scheme with AWGN can achieve an error rate of 10^-5 at an Eb/N0 ratio of about 4.5 dB, and the serial concatenated coding scheme with ideal interleaving and AWGN can achieve an error rate of 10^-5 at an Eb/N0 ratio of 2.6 dB.
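The row-write/column-read rule described above can be sketched as follows. The helper names are hypothetical; the depth-5, 255-column shape matches the Voyager arrangement (q = 256):

```python
# A minimal block interleaver sketch: symbols are written into a
# depth x width array by rows and read out by columns.

def interleave(symbols, depth, width):
    assert len(symbols) == depth * width
    rows = [symbols[r * width:(r + 1) * width] for r in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(symbols, depth, width):
    assert len(symbols) == depth * width
    cols = [symbols[c * depth:(c + 1) * depth] for c in range(width)]
    return [cols[c][r] for r in range(depth) for c in range(width)]

block = list(range(5 * 255))
sent = interleave(block, 5, 255)
assert deinterleave(sent, 5, 255) == block
# Adjacent input symbols end up depth apart on output, i.e., separated
# by lambda' - 1 = 4 other symbols, so a burst of up to 5 channel errors
# is dispersed into isolated symbol errors after deinterleaving.
assert sent.index(1) - sent.index(0) == 5
```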
8.2.2 Galileo Mission

A serial concatenated coding scheme with coherent BPSK signals is employed on the Galileo spacecraft, launched in 1989. An inner rate-1/4, 16,384-state binary convolutional encoder used in concatenation with the (255, 223) Reed-Solomon code was designed at the Jet Propulsion Laboratory [6]. The rate-1/4 binary convolutional code has a minimum free Hamming distance of 35. The first and third coded bits are inverted. The inversion of bits simply ensures sufficient symbol transitions in the code sequence for improved symbol synchronization. In terms of system performance, the new serial concatenated coding scheme was designed to provide an extra 2 dB of coding gain over the corresponding serial concatenated coding scheme employed in the Voyager missions.
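The coded-bit inversion can be sketched as below; the helper name and the group representation are hypothetical, but the idea is simply that inverting two fixed positions in each rate-1/4 output group breaks up long runs of identical channel bits:

```python
# Invert the first and third bits of each rate-1/4 output group so that
# even an all-zero encoder output still produces symbol transitions.

def invert_first_and_third(group):
    assert len(group) == 4          # one rate-1/4 output group
    return [group[0] ^ 1, group[1], group[2] ^ 1, group[3]]

assert invert_first_and_third([0, 0, 0, 0]) == [1, 0, 1, 0]
```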
Figure 8.2 A λ′-by-(q − 1) block interleaver.
Figure 8.3 Performance of convolutional and serial concatenated coding systems with eight-level soft-decision Viterbi decoding on AWGN channel (bit error rate versus Eb/N0 in dB; curves: uncoded coherent BPSK; rate-1/2, 64-state convolutional code; Voyager concatenated scheme; rate-1/4, 16,384-state convolutional code; Galileo concatenated scheme; and Shannon's limit).
Figure 8.3 also shows the bit error performance of the rate-1/4 binary convolutional coding and the serial concatenated coding schemes with AWGN. The rate-1/4 binary convolutional code with eight-level soft-decision decoding can achieve an error rate of 10^-5 at an Eb/N0 ratio of about 1.75 dB, and the serial concatenated coding scheme at an Eb/N0 ratio of about 0.8 dB.
8.3 Applications to Mobile Communications

In mobile communications, the available power and bandwidth are very limited. The transmitted signal is subject to multipath fading, and the size of the mobile transceiver is also restricted. Recent design and development of mobile communication systems make extensive use of error-control coding techniques
and cellular concepts. The concept of cellular mobile radio makes maximum use of the channel bandwidth by reusing the available frequencies in different cells and is well suited to the growing demand for efficient digital mobile transmission on bandlimited channels. When the frequencies used in a cell are reused in cells that are not adjacent, interference may still occur. This is called cochannel interference. As a result, cellular radio systems suffer from significant cochannel interference as well as adjacent-channel interference and fading. Good error-correcting codes with interleaving and a simple decoder are preferred. In the full-rate GSM digital radio system, a cyclic-redundancy-check code and a binary convolutional code are used for error detection and error correction, respectively. The full-rate GSM digital radio system is described below.
8.3.1 GSM Digital Radio System
In the United Kingdom, the total access communications system (TACS) has been used for commercial mobile communications since 1985. The system is an analog system, and other incompatible analog systems are employed on the Continent. The lack of capacity in the existing analog systems and the ever-increasing number of users motivated the development of more efficient transmission systems. In 1982, the Committee of European Post and Telecomms (CEPT) formed the Groupe Speciale Mobile (GSM) Committee. Various multiple-access methods, transmission rates, coding and interleaving techniques, and modulation methods were studied, and field trials were conducted. Based on the results of the trials, the Groupe Speciale Mobile Committee drew up and agreed on standard and design specifications for a unified pan-European digital cellular scheme in 1988, known as the full-rate GSM system, which operates in the 900-MHz band and at a gross coded speed of 22.8 Kbps [7-8]. The full-rate GSM digital mobile radio system is a time-division multiple access (TDMA) scheme with eight time slots per TDMA frame. It uses digitized speech along with digital signal processing techniques to give higher spectral efficiency and higher levels of integration. To combat burst errors, error-control coding with interleaving is employed. A model of the full-rate GSM system is shown in Figure 8.4. Speech samples are analyzed and processed in 20-ms blocks by a regular pulse excited linear predictive coder (RPE-LPC) with long-term prediction [9]. The coder operates at 13 Kbps, which corresponds to 260 bits per speech frame. The performance of the coder is sensitive to errors, which degrade the speech quality; thus, error-control coding is employed to improve the system performance. Digitized binary speech symbols
Figure 8.4 Model of the full-rate GSM digital communication system.
u = [u0 u1 … u259] are first arranged in descending order of importance. They are then divided into two classes (classes 1 and 3), where class 1 contains the first 182 bits and class 3 contains the remaining 78 bits. The first 50 critical bits are encoded by a (53, 50) cyclic-redundancy-check code with generator polynomial g(x) = x^3 + x + 1. Three parity-check bits p = [p0 p1 p2] are generated and used for error detection and bad-frame indication at the receiver. The 182 class-1 bits are split into two equal parts and reordered, and the three parity-check bits are then inserted into the middle of the two reordered parts as follows:

x_{l′} = u_{2l′}       for l′ = 0, 1, …, 90
x_{184−l′} = u_{2l′+1}  for l′ = 0, 1, …, 90
x_{91+l′} = p_{l′}      for l′ = 0, 1, 2
x_{l′} = 0              for l′ = 185, 186, 187, 188
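The cyclic encoding and reordering steps can be sketched as follows. The CRC routine is a plain long-division implementation of g(x) = x³ + x + 1 and the input block is arbitrary; this follows the equations above, not the bit-exact GSM 05.03 pseudocode:

```python
def crc3(bits):
    """Parity bits of the (53, 50) CRC: remainder of u(x)*x^3 mod g(x)."""
    reg = list(bits) + [0, 0, 0]      # multiply u(x) by x^3
    for i in range(len(bits)):
        if reg[i]:                    # cancel the leading term with g(x)
            reg[i] ^= 1
            reg[i + 2] ^= 1           # x term of g(x)
            reg[i + 3] ^= 1           # constant term of g(x)
    return reg[-3:]

def reorder(u, p):
    """Map 182 class-1 bits u and parity bits p onto x[0..188]."""
    x = [0] * 189                     # x[185..188] stay 0 (tailing bits)
    for l in range(91):
        x[l] = u[2 * l]               # even-indexed bits, in order
        x[184 - l] = u[2 * l + 1]     # odd-indexed bits, reversed
    for l in range(3):
        x[91 + l] = p[l]              # parity bits in the middle
    return x

u = [i % 2 for i in range(182)]       # an arbitrary 182-bit class-1 block
p = crc3(u[:50])                      # parity on the 50 most critical bits
x = reorder(u, p)
assert x[0] == u[0] and x[184] == u[1] and x[91:94] == p
assert x[185:189] == [0, 0, 0, 0]
```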
These 189 bits (182 class-1 bits plus three parity-check bits, together with four tailing zeros) are encoded by a rate-1/2, 16-state binary convolutional code. The four tailing zeros are used to properly terminate the encoder trellis for decoding. Finally, the 78 uncoded class-3 bits are appended to the 378 coded bits to give a total of 456 bits y = [y0 y1 … y455], where [y378 y379 … y455] = [u182 u183 … u259]. The 456 bits are reordered according to Table 8.1 and block-diagonal interleaved over eight time-slots, where f′ is the bit position within the corresponding subblock. Early system performance evaluation with various interleaving and reordering techniques pointed to this particular arrangement. The cyclic block encoding, reordering, and convolutional coding processes for each speech frame are shown in Figure 8.5; the structure of the interleaver is shown in Figure 8.6. Each time-slot is built from two subblocks belonging to two successive speech frames. The numbered bits of the first four current subblocks and the numbered bits of the last four previous subblocks are arranged in such a way that the result of the block-diagonal interleaving is a redistribution of the 456 reordered bits. For example, the distribution of bits from current subblock B3 and previous subblock B′7 onto a time-slot is shown in Figure 8.7. Time-slots from the current user and seven other users are grouped together in sets of eight consecutive time-slots as one TDMA frame. These frames are then grouped to form one multiframe. The TDMA structure (not explored further here) and the grouping hierarchy of the full-rate GSM system are shown in Figure 8.8. For each user, the 456 interleaved code symbols are encrypted, differentially encoded, and sent to the modulator. Gaussian minimum-shift keying (GMSK) modulation with a modulation index of 0.3 and differentially coherent demodulation are employed [10].
GMSK is a form of continuous phase modulation (CPM), in which the phase cannot jump discontinuously as it can in M-ary PSK signals. It has a premodulation Gaussian filter to limit spurious radio emissions outside the passband. GMSK was selected over other modulation schemes as a compromise between spectral efficiency, transmitter complexity, and limited spurious emissions. The encryption unit, differential encoder, and modulator, together with the transmission path, the demodulator, and the decryption unit, form a discrete noisy channel subject to Rayleigh fading and cochannel interference. At the receiving end, the received sequence is deinterleaved and reordered, and the eight-level soft-decision Viterbi algorithm decoder operates on the deinterleaved and reordered sequence, making a firm decision at the end of each frame to recover the transmitted information sequence. The estimated information symbols are reordered, and the first 50 bits are used to compute a 3-bit syndrome vector s = [s0 s1 s2]. If the recomputed syndrome vector is zero, the decoded speech frame is declared a good frame; otherwise, a bad frame is detected. Finally,
Table 8.1
Reordering and Partitioning a 456-Bit Coded Frame into Eight Subblocks (each row lists, in order f′ = 0, 1, …, 56, the positions of the coded bits carried by that subblock)

B0: 0, 64, 128, 192, 256, 320, 384, 448, 56, 120, 184, 248, 312, 376, 440, 48, 112, 176, 240, 304, 368, 432, 40, 104, 168, 232, 296, 360, 424, 32, 96, 160, 224, 288, 352, 416, 24, 88, 152, 216, 280, 344, 408, 16, 80, 144, 208, 272, 336, 400, 8, 72, 136, 200, 264, 328, 392
B1: 57, 121, 185, 249, 313, 377, 441, 49, 113, 177, 241, 305, 369, 433, 41, 105, 169, 233, 297, 361, 425, 33, 97, 161, 225, 289, 353, 417, 25, 89, 153, 217, 281, 345, 409, 17, 81, 145, 209, 273, 337, 401, 9, 73, 137, 201, 265, 329, 393, 1, 65, 129, 193, 257, 321, 385, 449
B2: 114, 178, 242, 306, 370, 434, 42, 106, 170, 234, 298, 362, 426, 34, 98, 162, 226, 290, 354, 418, 26, 90, 154, 218, 282, 346, 410, 18, 82, 146, 210, 274, 338, 402, 10, 74, 138, 202, 266, 330, 394, 2, 66, 130, 194, 258, 322, 386, 450, 58, 122, 186, 250, 314, 378, 442, 50
B3: 171, 235, 299, 363, 427, 35, 99, 163, 227, 291, 355, 419, 27, 91, 155, 219, 283, 347, 411, 19, 83, 147, 211, 275, 339, 403, 11, 75, 139, 203, 267, 331, 395, 3, 67, 131, 195, 259, 323, 387, 451, 59, 123, 187, 251, 315, 379, 443, 51, 115, 179, 243, 307, 371, 435, 43, 107
B4: 228, 292, 356, 420, 28, 92, 156, 220, 284, 348, 412, 20, 84, 148, 212, 276, 340, 404, 12, 76, 140, 204, 268, 332, 396, 4, 68, 132, 196, 260, 324, 388, 452, 60, 124, 188, 252, 316, 380, 444, 52, 116, 180, 244, 308, 372, 436, 44, 108, 172, 236, 300, 364, 428, 36, 100, 164
B5: 285, 349, 413, 21, 85, 149, 213, 277, 341, 405, 13, 77, 141, 205, 269, 333, 397, 5, 69, 133, 197, 261, 325, 389, 453, 61, 125, 189, 253, 317, 381, 445, 53, 117, 181, 245, 309, 373, 437, 45, 109, 173, 237, 301, 365, 429, 37, 101, 165, 229, 293, 357, 421, 29, 93, 157, 221
B6: 342, 406, 14, 78, 142, 206, 270, 334, 398, 6, 70, 134, 198, 262, 326, 390, 454, 62, 126, 190, 254, 318, 382, 446, 54, 118, 182, 246, 310, 374, 438, 46, 110, 174, 238, 302, 366, 430, 38, 102, 166, 230, 294, 358, 422, 30, 94, 158, 222, 286, 350, 414, 22, 86, 150, 214, 278
B7: 399, 7, 71, 135, 199, 263, 327, 391, 455, 63, 127, 191, 255, 319, 383, 447, 55, 119, 183, 247, 311, 375, 439, 47, 111, 175, 239, 303, 367, 431, 39, 103, 167, 231, 295, 359, 423, 31, 95, 159, 223, 287, 351, 415, 23, 87, 151, 215, 279, 343, 407, 15, 79, 143, 207, 271, 335
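The entries of Table 8.1 follow a simple arithmetic rule, inferred here from the table itself (it is not stated in the text): subblock B_i carries coded bit y_k at position f′ when k = (57·i + 64·f′) mod 456.

```python
# Closed-form generator for Table 8.1 (inferred from the table entries).

def table_entry(i, f):
    return (57 * i + 64 * f) % 456

# Spot checks against the table:
assert table_entry(0, 0) == 0        # B0 begins 0, 64, 128, ...
assert table_entry(0, 1) == 64
assert table_entry(1, 0) == 57       # B1 begins 57, 121, ...
assert table_entry(7, 8) == 455      # B7 carries bit 455 at f' = 8
# Every coded bit 0..455 appears exactly once across the eight subblocks.
assert sorted(table_entry(i, f)
              for i in range(8) for f in range(57)) == list(range(456))
```

Because 57·i covers the residues modulo 8 and 64·f′ the residues modulo 57, the map is a bijection from the 8 × 57 index pairs onto the 456 bit positions.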
the four tailing bits are extracted, and the 260 estimated information bits, together with the bad-frame indication, are sent to the speech decoder for further processing. The average bit error rate performance of the full-rate GSM system has been measured by computer simulation. Six thousand speech frames were transmitted over the discrete noisy channel in each test. All tests were carried out under the conditions of frequency hopping, independent Rayleigh fading (fading between time-slots is uncorrelated), a mobile unit travelling at 50 kph, and various cochannel interference conditions. The results of the simulation tests, operating at carrier-to-cochannel interference ratios (CIR) of 10 dB, 7 dB, and 4 dB, are shown in Figure 8.9. A CIR of 10 dB corresponds to 50% cell coverage, 7 dB to 90% cell coverage (a location just inside a cell boundary), and 4 dB to a location just outside a cell boundary. Table 8.2 shows the average bit error rate within each class, operating at various CIR channel conditions. At CIR = 10 dB, the system can provide an acceptable bit error rate performance for the transmission of speech signals.
8.4 Applications to Compact Discs

In digital audio recording systems, the storage medium is subject to surface contamination. Both random and burst errors may occur. Recent design and
Figure 8.5 Channel coding and reordering of each speech frame in the full-rate GSM system. (The 260-bit speech frame u0, ..., u259 is split into 182 class-1 bits and 78 class-3 bits. The class-1 bits receive 3 cyclic check bits and 4 tailing bits, are reordered into a 189-bit sequence, and are convolutionally encoded into 378 bits y0, y1, ..., y377; together with the 78 class-3 bits these form the 456-bit coded frame.)
development of digital audio recording systems make extensive use of error-control coding and interleaving techniques. Good error-correcting codes with interleaving and a simple decoder are preferred. In this final section, we shall examine the compact disc system [11]. In a digital audio recording system, the analog audio signal is converted into a sequence of digital data and recorded on a master storage medium. Mass-produced copies are made from the master storage medium, and the copies may have defects. These defects are often caused by surface contamination from fingerprints and scratches on the storage medium. The quality of
Figure 8.6 Diagonal interleaving structure over eight time-slots for the full-rate GSM system. (The current 456-bit block is spread over eight time-slots, overlapping the previous and next 456-bit blocks.)

Figure 8.7 Distribution of bits within a typical time-slot. (One 57-bit half comes from a subblock of the current block, the other 57-bit half from a subblock of the previous block.)
Figure 8.8 Traffic channel multiframe and time-slot structure. (A 120-ms multiframe carries traffic and control frames; each 4.615-ms TDMA frame is divided into 0.577-ms time-slots, and a time-slot holds 3 tail bits, 57 coded data bits, 1 control bit, a 26-bit training sequence, 1 control bit, 57 coded data bits, 3 tail bits, and an 8.25-bit guard period.)
Figure 8.9 Performance of the full-rate GSM system. (Average bit error rate of the class 1 and class 3 bits plotted against CIR from 2 to 12 dB.)
Table 8.2
Bit-Error Rate in Each Class for the Full-Rate GSM System

CIR (dB)    Class 1 Bit Error Rate    Class 3 Bit Error Rate
10          5.38(10^-4)               4.51(10^-2)
7           5.97(10^-3)               7.81(10^-2)
4           4.32(10^-2)               0.126
the playback audio will obviously be determined by the quality of the master storage medium, the copy, and the playback device. Defects caused by surface contamination from fingerprints and scratches on the storage medium appear as burst errors at the input of the playback device. To combat burst errors, error-control coding with interleaving is often employed to improve the system performance. In the compact disc system, the recording medium is an aluminized and laminated flat circular polyvinyl chloride disc. The diameter of the disc is 12 cm. Digital audio signals are recorded on the disc along a spiral track with a track separation of 1.6 μm. The total track length is about 5 km, with a maximum playing time of 73 minutes. A model of the compact disc recording and decoding system is shown in Figure 8.10. It can be seen
Figure 8.10 Block diagram of the compact disc recording and decoding system. (Recording side: the left and right channels, sampled at 44,100 samples per channel per second, pass through the encoding system, a runlength-limited encoder, and synchronization-bit insertion to the write laser and master disc; the bit rate grows from 1.4112 Mbit/s to 1.88 Mbit/s, 1.94 Mbit/s, and finally 4.32 Mbit/s. Player side: a photo-detector reads the user disc, and the decoding system recovers the left and right channels together with the control and display information.)
that the channel can be thought of as consisting of a write laser, a master disc, a copy of the master disc, and a photo-detector. The compact disc system uses digitized audio along with digital signal processing techniques to give higher levels of integration.
8.4.1 Encoding
The left and the right channels of a stereo analog audio signal are sampled at 44.1 kHz, that is, 44,100 samples per second. Each sample is then uniformly quantized into 16 audio bits, and the net transmission rate is 44,100 x 2 x 16 = 1.4112 Mbps. A frame is then formed by concatenating 6 blocks, where each block contains 16 audio bits from the left channel and 16 audio bits from the right channel. It is known that the performance of the compact disc system is sensitive to errors which degrade the audio quality. Thus, error-control coding is employed to improve the system performance. For channel encoding purposes, the 16 bits from each channel are represented
by two GF(2^8) symbols. A frame then contains 24 symbols. These 24 uncoded symbols are encoded by a rate-3/4 Cross-Interleaved Reed-Solomon Code (CIRC) encoder. Figure 8.11 shows the CIRC encoder structure. The CIRC encoder consists of symbol delay lines, an outer encoder E2, a cross-interleaver, an inner encoder E1, and inverters. Symbols from odd- and even-number blocks are separated by subjecting all symbols from even-number blocks to a 2-symbol delay, followed by regrouping of the symbols. This arrangement makes it much easier to interpolate uncorrectable symbols after decoding. These 24 uncoded symbols are encoded by the outer encoder E2. A (28, 24) double-error-correcting shortened Reed-Solomon code of d_min = 5 is used for this. Symbols from odd- and even-number blocks are further separated by placing the parity-check symbols in the center. To combat burst errors, the (28, 24) shortened Reed-Solomon code symbols are permuted by a cross-interleaver [12, 13]. A general cross-interleaver structure is shown in Figure 8.12. Schematically, symbols are arranged in blocks of length I. Let us assume that a block corresponds to a codeword from a block code of length I. Code symbols in a codeword are shifted sequentially into a bank of I delay lines of increasing lengths. Each successive delay line has j more delay elements than the preceding one. The input and output commutators are held in synchronism with each other. When a code symbol is shifted into one of the I delay lines, the oldest code symbol in that delay line is read out. This is called an (I, J) cross-interleaver, where J = Ij. Correspondingly, a (J, I) cross-deinterleaver performs the inverse operation and is also shown in Figure 8.12. Sometimes, a cross-interleaver is called a convolutional interleaver. It has the following properties:

1. Any two symbols that are separated by fewer than J other symbols at the interleaver input are separated by at least I - 1 other symbols at the interleaver output.

2. Adjacent symbols of a block at the interleaver input are separated by J other symbols at the interleaver output.

3. Any error burst of length b' <= vJ at the deinterleaver input appears as no more than v errors per block at the deinterleaver output. These v errors in a block may appear as isolated errors or as a cluster of errors in that block. For v = 1, there is only one error per block at the deinterleaver output, and errors from adjacent blocks at the deinterleaver output are separated by at least I - 2 other symbols.

The total end-to-end delay is J(I - 1) symbols. The essence of a cross-interleaver is best illustrated by an example. Figure 8.13 shows a (4, 4) convolutional interleaver and deinterleaver circuit.
Figure 8.11 Standard CIRC encoder structure. (The 24 input symbols pass through 1-symbol delay lines, the outer encoder E2, a cross-interleaver with delays in steps of D = 4 symbols, the inner encoder E1, and inverters; symbols from odd and even blocks are separated by the delay lines.)
Figure 8.12 (I, J) convolutional interleaver and deinterleaver.

Symbols v4 and v8 are separated by 3 < J other symbols at the interleaver input. They are separated by 3 >= I - 1 other symbols at the interleaver output. Now, consider the code symbols v4, v5, v6, and v7 of a codeword at the interleaver input. Any adjacent code symbols of the codeword at the interleaver input are separated by 4 = J other symbols at the interleaver output. If a burst of errors affects the code symbols v12, v9, v6, and v3 in a codeword, the burst length is 4. There is only one error per block at the deinterleaver output, and errors from adjacent blocks at the deinterleaver output are separated by 2 >= I - 2 other symbols. The example shows that correction of consecutive errors is possible with a random-error-correcting code.
Figure 8.13 (4, 4) convolutional interleaver and deinterleaver.
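As an illustrative sketch (not a transcription of the book's hardware circuit), the cross-interleaver above can be modelled with one FIFO per delay line; the (4, 4) circuit of Figure 8.13 then reproduces the symbol separations quoted in the text:

```python
from collections import deque

class CrossInterleaver:
    """(I, J) cross-interleaver: I delay lines, line i holding i*j symbols,
    with J = I*j; a commutator feeds successive symbols to successive lines."""

    def __init__(self, num_lines, j, fill=None):
        # Line 0 has no delay; line i delays a symbol by i*j commutator cycles.
        self.lines = [deque([fill] * (i * j)) for i in range(num_lines)]

    def push(self, symbol, t):
        line = self.lines[t % len(self.lines)]
        line.append(symbol)
        return line.popleft()          # read the oldest symbol in that line

def make_deinterleaver(num_lines, j, fill=None):
    # The deinterleaver is the mirror image: line i holds (I - 1 - i)*j symbols.
    d = CrossInterleaver(num_lines, j, fill)
    d.lines = [deque([fill] * ((num_lines - 1 - i) * j))
               for i in range(num_lines)]
    return d

# (4, 4) example of Figure 8.13: I = 4 and j = 1, so J = I*j = 4.
inter = CrossInterleaver(4, 1)
out = [inter.push(v, t) for t, v in enumerate(range(20))]
# v4 and v8 enter 3 < J symbols apart and leave 3 >= I - 1 symbols apart;
# adjacent codeword symbols v4, v5, v6, v7 leave J = 4 symbols apart.
print(out.index(4), out.index(5), out.index(8))   # → 4 9 8
```

Feeding the interleaved stream through `make_deinterleaver(4, 1)` restores the input after the J(I - 1) = 12-symbol end-to-end delay noted above.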
In the compact disc system, a (28, 112) cross-interleaver and deinterleaver are employed. Interleaved symbols are further encoded by the inner encoder E1, a (32, 28) double-error-correcting shortened Reed-Solomon code of d_min = 5. A symbol delay is inserted into every other coded symbol to separate two adjacent symbol errors caused by short bursts. To prevent the occurrence of the all-zero codeword, all parity-check symbols are inverted. One more 8-bit control and display information symbol is added to each encoded frame. This results in a total of 8(32 + 1) = 264 data bits. In some compact disc players, the control and display information symbol is used to select the playing order of the recorded music pieces and to display the track and the elapsed time of the music piece selected by the user. The above CIRC format gives a maximum
fully correctable burst length of 4,000 data bits (= 2.5 mm track length) and a maximum interpolatable burst length of 12,300 data bits (= 7.7 mm track length). For high-density recording on the master disc, a runlength-limited encoder is used to ensure that there is a minimum runlength of 2 zeros and a maximum runlength of 10 zeros between successive ones. The minimum runlength limits the highest frequency component of the encoded bit stream and sets the transition density for recording, whereas the maximum runlength ensures adequate transitions for synchronization of the readout clock in the playback device. A class of runlength-limited codes called Eight-to-Fourteen Modulation (EFM) code is used for this. Each 8-bit data sequence is mapped to a 14-bit sequence. A further 3 merging bits are inserted between two adjacent 14-bit sequences to reduce the low-frequency component of the encoded bit stream. The reduction of the low-frequency component ensures stable servo control and readout from the copy disc. Finally, 27 synchronization bits are added to each frame. This results in a total of (14 + 3)(32 + 1) + 27 = 588 channel bits. The CIRC block encoding, cross-interleaving, and runlength-limited coding processes for each audio frame are shown in Figure 8.14. For each frame, the 588 channel bits are converted into a signal that drives the write laser for recording on the master disc. The write laser, the master disc together with the user disc, and the photo-detector form a discrete noisy channel. At the receiving end, the 27 synchronization bits and the 3 merging bits are removed before runlength-limited decoding. The 8-bit control and display information symbol is removed from the frame, and the 32 data symbols are fed into the CIRC decoding system.
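The frame arithmetic above is easy to check; the following short recomputation (my own sketch, using only the figures quoted in the text) walks from audio samples to channel bits:

```python
# Bit budget of one compact-disc frame, following the numbers in the text.
SAMPLE_RATE = 44_100                      # samples per channel per second
AUDIO_RATE = SAMPLE_RATE * 2 * 16         # 2 channels, 16 bits per sample
assert AUDIO_RATE == 1_411_200            # 1.4112 Mbit/s net audio rate

audio_symbols = 6 * 2 * 2                 # 6 blocks x 2 channels x 2 bytes = 24
outer = audio_symbols + 4                 # E2: (28, 24) RS adds 4 parity symbols
inner = outer + 4                         # E1: (32, 28) RS adds 4 more
frame_symbols = inner + 1                 # plus one control-and-display symbol
data_bits = 8 * frame_symbols             # 8(32 + 1) = 264 data bits

# EFM maps each 8-bit symbol to 14 bits, plus 3 merging bits, plus sync bits.
channel_bits = (14 + 3) * frame_symbols + 27
print(data_bits, channel_bits)            # → 264 588
```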
8.4.2 Decoding
The CIRC decoder operates on the 32-data-symbol sequence and makes a decision at the end of each frame to recover the transmitted information sequence. The decoding circuitry consists of symbol delay lines, inverters, an inner decoder D1, a cross-deinterleaver, and an outer decoder D2. The block diagram of the CIRC decoder is shown in Figure 8.15. However, the decoding method has not been standardized, and it is up to each compact disc player manufacturer to optimize the performance of their product. Since the inner and outer encoders have a minimum Hamming distance of 5, the inner and outer decoders can correct two errors or any combination of v errors and f erasures, where 2v + f < d_min = 5. Two common decoding strategies are briefly described here. In a simple decoding strategy, both decoders correct a single error. In most decoding strategies, the inner decoder D1 is designed to correct a single error. If more than 1 error
Figure 8.14 CIRC and runlength-limited encoding of each digital audio frame in the compact disc system. (Each uncoded frame of 6 x 32 audio bits forms 24 GF(2^8) symbols; CIRC encoding produces 33 data symbols, and runlength-limited encoding produces 33 x 17 channel bits plus 27 synchronization bits.)
occurs, the inner decoder D1 outputs 28 erasures. These erasures are cross-deinterleaved and sent to the outer decoder D2. The outer decoder D2 may be set to correct 2 errors or 4 erasures. In most compact disc players, the outer decoder is designed to decode erasures only. If more than 4 erasures occur, the outer decoder D2 simply outputs 24 erasures. In the event that an erroneous symbol has been declared as unreliable by the inner decoder D1 and cannot be corrected by the outer decoder D2, an uncorrectable symbol may be interpolated from reliable neighboring symbols. If both error correction and interpolation fail, the sounds associated with erroneous symbols can be muted. The reader may find further details in reference [14].
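The error-and-erasure condition used by both decoders can be captured in a few lines (a sketch of the decision rule only, not of any manufacturer's decoder):

```python
def correctable(num_errors, num_erasures, d_min=5):
    """A code of minimum distance d_min corrects v errors plus f erasures
    whenever 2v + f < d_min (equivalently, 2v + f <= d_min - 1)."""
    return 2 * num_errors + num_erasures < d_min

# The (28, 24) and (32, 28) shortened RS codes both have d_min = 5, so each
# decoder can trade two errors against four erasures:
assert correctable(2, 0)       # two errors, or
assert correctable(0, 4)       # four erasures, or
assert correctable(1, 2)       # one error plus two erasures
assert not correctable(0, 5)   # five erasures exceed the bound
```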
Figure 8.15 CIRC decoding system. (The 32 received symbols pass through 1- and 2-symbol delay lines and inverters to the inner decoder D1, the cross-deinterleaver, and the outer decoder D2, which recovers the left- and right-channel symbols of blocks 0 to 5.)
References

[1] Shannon, C. E., "A Mathematical Theory of Communication," Bell Syst. Tech. J., Vol. 27, No. 3, July 1948, pp. 379-423, and Vol. 27, No. 4, October 1948, pp. 623-656.

[2] Forney, G. D., Jr., Concatenated Codes, Cambridge, MA: MIT Press, 1967.

[3] Yuen, J. H., Ed., Deep Space Telecommunications Systems Engineering, New York: Plenum Press, 1983.

[4] Lee, L. H. C., Convolutional Coding: Fundamentals and Applications, Norwood, MA: Artech House, 1997.

[5] Viterbi, A. J., "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm," IEEE Trans. on Information Theory, Vol. IT-13, No. 2, April 1967, pp. 260-269.

[6] Yuen, J. H., et al., "Modulation and Coding for Satellite and Space Communications," IEEE Proc., Vol. 78, No. 7, July 1990, pp. 1250-1266.

[7] GSM Recommendation 05.03, "Channel Coding," Draft version 3.4.0, November 1988.

[8] Hodges, M. R. L., "The GSM Radio Interface," British Telecom Technol. J., Vol. 8, No. 1, January 1990, pp. 31-43.

[9] Hellwig, K., et al., "Speech Codec for the European Mobile Radio System," Proc. ICASSP-89, Glasgow, UK, 1989, pp. 1065-1069.

[10] Murota, K., and K. Hirade, "GMSK Modulation for Digital Mobile Radio," IEEE Trans. on Communications, Vol. COM-29, No. 7, July 1981, pp. 1044-1050.

[11] Carasso, M. G., J. B. H. Peek, and J. P. Sinjou, "The Compact Disc Digital Audio System," Philips Technical Review, Vol. 40, No. 6, 1982, pp. 151-156.

[12] Ramsey, J. L., "Realization of Optimum Interleavers," IEEE Trans. on Information Theory, Vol. IT-16, No. 3, May 1970, pp. 338-345.

[13] Forney, G. D., "Burst-Correcting Codes for the Classic Bursty Channel," IEEE Trans. on Communications, Vol. COM-19, No. 5, October 1971, pp. 772-781.

[14] Hoeve, H., J. Timmermans, and L. B. Vries, "Error Correction and Concealment in the Compact Disc System," Philips Technical Review, Vol. 40, No. 6, 1982, pp. 166-172.
Appendix A
Binary Primitive Polynomials

Appendix A presents a list of binary primitive polynomials p(x) = x^m + p_{m-1} x^{m-1} + p_{m-2} x^{m-2} + ... + p_1 x + p_0 of degree m, where p_j = 0 or 1, 0 <= j <= m - 1, and 2 <= m <= 7.

Table A.1 Binary Primitive Polynomials of Degree m

m = 2:  x^2 + x + 1
m = 3:  x^3 + x + 1;  x^3 + x^2 + 1
m = 4:  x^4 + x + 1;  x^4 + x^3 + 1
m = 5:  x^5 + x^2 + 1;  x^5 + x^3 + 1;  x^5 + x^3 + x^2 + x + 1;  x^5 + x^4 + x^2 + x + 1;  x^5 + x^4 + x^3 + x + 1;  x^5 + x^4 + x^3 + x^2 + 1
m = 6:  x^6 + x + 1;  x^6 + x^4 + x^3 + x + 1;  x^6 + x^5 + 1;  x^6 + x^5 + x^2 + x + 1;  x^6 + x^5 + x^3 + x^2 + 1;  x^6 + x^5 + x^4 + x + 1
m = 7:  x^7 + x + 1;  x^7 + x^3 + 1;  x^7 + x^3 + x^2 + x + 1;  x^7 + x^4 + 1;  x^7 + x^4 + x^3 + x^2 + 1;  x^7 + x^5 + x^2 + x + 1;  x^7 + x^5 + x^3 + x + 1;  x^7 + x^5 + x^4 + x^3 + 1;  x^7 + x^5 + x^4 + x^3 + x^2 + x + 1;  x^7 + x^6 + 1;  x^7 + x^6 + x^3 + x + 1;  x^7 + x^6 + x^4 + x + 1;  x^7 + x^6 + x^4 + x^2 + 1;  x^7 + x^6 + x^5 + x^2 + 1;  x^7 + x^6 + x^5 + x^3 + x^2 + x + 1;  x^7 + x^6 + x^5 + x^4 + 1;  x^7 + x^6 + x^5 + x^4 + x^2 + x + 1;  x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + 1
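Primitivity can be tested directly from Definition 2.11 (the smallest n with p(x) dividing x^n + 1 must be 2^m - 1), or equivalently by checking the multiplicative order of x modulo p(x). The following sketch (my own helper names, polynomials packed into integers with bit k standing for x^k) does the latter:

```python
def gf2_mulmod(a, b, p):
    """(a * b) mod p for GF(2) polynomials packed into integers (bit k = x^k)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    plen = p.bit_length()
    while r.bit_length() >= plen:     # reduce the product modulo p(x)
        r ^= p << (r.bit_length() - plen)
    return r

def is_primitive(p, m):
    """True iff x has multiplicative order 2^m - 1 modulo p(x); that order is
    only reachable when p(x) is irreducible, so this single test suffices."""
    n = 2**m - 1

    def x_pow(e):                     # x^e mod p by square-and-multiply
        r, base = 1, 2
        while e:
            if e & 1:
                r = gf2_mulmod(r, base, p)
            base = gf2_mulmod(base, base, p)
            e >>= 1
        return r

    if x_pow(n) != 1:
        return False
    factors, q, rest = [], 2, n       # prime factors of n = 2^m - 1
    while q * q <= rest:
        if rest % q == 0:
            factors.append(q)
            while rest % q == 0:
                rest //= q
        q += 1
    if rest > 1:
        factors.append(rest)
    return all(x_pow(n // q) != 1 for q in factors)

# x^4 + x + 1 is primitive (Example 2.9); x^4 + x^3 + x^2 + x + 1 is
# irreducible but divides x^5 + 1, so it is not primitive.
assert is_primitive(0b10011, 4)
assert not is_primitive(0b11111, 4)
```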
Appendix B
Galois Field Tables

Appendix B presents a list of GF(2^m) tables generated by a binary primitive polynomial p(x) = x^m + p_{m-1} x^{m-1} + ... + p_1 x + p_0 of degree m, where p_j = 0 or 1, 0 <= j <= m - 1, and 2 <= m <= 6. Each element in GF(2^m) is expressed as a power of a primitive element α and as a linear combination of α^0, α^1, ..., α^{m-1}. The polynomial representation of the element α^i, i = 0, 1, ..., 2^m - 2, is α^i = a_{m-1} α^{m-1} + a_{m-2} α^{m-2} + ... + a_0 with binary coefficients, and the coefficients are listed as the binary m-tuple [a_0 a_1 ... a_{m-1}].

Table B.1 Elements of GF(2^2) Generated by the Primitive Polynomial p(x) = x^2 + x + 1

0 = 0                0 0
α^0 = 1              1 0
α^1 = α              0 1
α^2 = α + 1          1 1

Table B.2 Elements of GF(2^3) Generated by the Primitive Polynomial p(x) = x^3 + x + 1

0 = 0                      0 0 0
α^0 = 1                    1 0 0
α^1 = α                    0 1 0
α^2 = α^2                  0 0 1
α^3 = α + 1                1 1 0
α^4 = α^2 + α              0 1 1
α^5 = α^2 + α + 1          1 1 1
α^6 = α^2 + 1              1 0 1

Table B.3 Elements of GF(2^4) Generated by the Primitive Polynomial p(x) = x^4 + x + 1

0 = 0                            0 0 0 0
α^0 = 1                          1 0 0 0
α^1 = α                          0 1 0 0
α^2 = α^2                        0 0 1 0
α^3 = α^3                        0 0 0 1
α^4 = α + 1                      1 1 0 0
α^5 = α^2 + α                    0 1 1 0
α^6 = α^3 + α^2                  0 0 1 1
α^7 = α^3 + α + 1                1 1 0 1
α^8 = α^2 + 1                    1 0 1 0
α^9 = α^3 + α                    0 1 0 1
α^10 = α^2 + α + 1               1 1 1 0
α^11 = α^3 + α^2 + α             0 1 1 1
α^12 = α^3 + α^2 + α + 1         1 1 1 1
α^13 = α^3 + α^2 + 1             1 0 1 1
α^14 = α^3 + 1                   1 0 0 1

Table B.4 Elements of GF(2^5) Generated by the Primitive Polynomial p(x) = x^5 + x^2 + 1

0 = 0                                  0 0 0 0 0
α^0 = 1                                1 0 0 0 0
α^1 = α                                0 1 0 0 0
α^2 = α^2                              0 0 1 0 0
α^3 = α^3                              0 0 0 1 0
α^4 = α^4                              0 0 0 0 1
α^5 = α^2 + 1                          1 0 1 0 0
α^6 = α^3 + α                          0 1 0 1 0
α^7 = α^4 + α^2                        0 0 1 0 1
α^8 = α^3 + α^2 + 1                    1 0 1 1 0
α^9 = α^4 + α^3 + α                    0 1 0 1 1
α^10 = α^4 + 1                         1 0 0 0 1
α^11 = α^2 + α + 1                     1 1 1 0 0
α^12 = α^3 + α^2 + α                   0 1 1 1 0
α^13 = α^4 + α^3 + α^2                 0 0 1 1 1
α^14 = α^4 + α^3 + α^2 + 1             1 0 1 1 1
α^15 = α^4 + α^3 + α^2 + α + 1         1 1 1 1 1
α^16 = α^4 + α^3 + α + 1               1 1 0 1 1
α^17 = α^4 + α + 1                     1 1 0 0 1
α^18 = α + 1                           1 1 0 0 0
α^19 = α^2 + α                         0 1 1 0 0
α^20 = α^3 + α^2                       0 0 1 1 0
α^21 = α^4 + α^3                       0 0 0 1 1
α^22 = α^4 + α^2 + 1                   1 0 1 0 1
α^23 = α^3 + α^2 + α + 1               1 1 1 1 0
α^24 = α^4 + α^3 + α^2 + α             0 1 1 1 1
α^25 = α^4 + α^3 + 1                   1 0 0 1 1
α^26 = α^4 + α^2 + α + 1               1 1 1 0 1
α^27 = α^3 + α + 1                     1 1 0 1 0
α^28 = α^4 + α^2 + α                   0 1 1 0 1
α^29 = α^3 + 1                         1 0 0 1 0
α^30 = α^4 + α                         0 1 0 0 1

Table B.5 Elements of GF(2^6) Generated by the Primitive Polynomial p(x) = x^6 + x + 1

0 = 0                                        0 0 0 0 0 0
α^0 = 1                                      1 0 0 0 0 0
α^1 = α                                      0 1 0 0 0 0
α^2 = α^2                                    0 0 1 0 0 0
α^3 = α^3                                    0 0 0 1 0 0
α^4 = α^4                                    0 0 0 0 1 0
α^5 = α^5                                    0 0 0 0 0 1
α^6 = α + 1                                  1 1 0 0 0 0
α^7 = α^2 + α                                0 1 1 0 0 0
α^8 = α^3 + α^2                              0 0 1 1 0 0
α^9 = α^4 + α^3                              0 0 0 1 1 0
α^10 = α^5 + α^4                             0 0 0 0 1 1
α^11 = α^5 + α + 1                           1 1 0 0 0 1
α^12 = α^2 + 1                               1 0 1 0 0 0
α^13 = α^3 + α                               0 1 0 1 0 0
α^14 = α^4 + α^2                             0 0 1 0 1 0
α^15 = α^5 + α^3                             0 0 0 1 0 1
α^16 = α^4 + α + 1                           1 1 0 0 1 0
α^17 = α^5 + α^2 + α                         0 1 1 0 0 1
α^18 = α^3 + α^2 + α + 1                     1 1 1 1 0 0
α^19 = α^4 + α^3 + α^2 + α                   0 1 1 1 1 0
α^20 = α^5 + α^4 + α^3 + α^2                 0 0 1 1 1 1
α^21 = α^5 + α^4 + α^3 + α + 1               1 1 0 1 1 1
α^22 = α^5 + α^4 + α^2 + 1                   1 0 1 0 1 1
α^23 = α^5 + α^3 + 1                         1 0 0 1 0 1
α^24 = α^4 + 1                               1 0 0 0 1 0
α^25 = α^5 + α                               0 1 0 0 0 1
α^26 = α^2 + α + 1                           1 1 1 0 0 0
α^27 = α^3 + α^2 + α                         0 1 1 1 0 0
α^28 = α^4 + α^3 + α^2                       0 0 1 1 1 0
α^29 = α^5 + α^4 + α^3                       0 0 0 1 1 1
α^30 = α^5 + α^4 + α + 1                     1 1 0 0 1 1
α^31 = α^5 + α^2 + 1                         1 0 1 0 0 1
α^32 = α^3 + 1                               1 0 0 1 0 0
α^33 = α^4 + α                               0 1 0 0 1 0
α^34 = α^5 + α^2                             0 0 1 0 0 1
α^35 = α^3 + α + 1                           1 1 0 1 0 0
α^36 = α^4 + α^2 + α                         0 1 1 0 1 0
α^37 = α^5 + α^3 + α^2                       0 0 1 1 0 1
α^38 = α^4 + α^3 + α + 1                     1 1 0 1 1 0
α^39 = α^5 + α^4 + α^2 + α                   0 1 1 0 1 1
α^40 = α^5 + α^3 + α^2 + α + 1               1 1 1 1 0 1
α^41 = α^4 + α^3 + α^2 + 1                   1 0 1 1 1 0
α^42 = α^5 + α^4 + α^3 + α                   0 1 0 1 1 1
α^43 = α^5 + α^4 + α^2 + α + 1               1 1 1 0 1 1
α^44 = α^5 + α^3 + α^2 + 1                   1 0 1 1 0 1
α^45 = α^4 + α^3 + 1                         1 0 0 1 1 0
α^46 = α^5 + α^4 + α                         0 1 0 0 1 1
α^47 = α^5 + α^2 + α + 1                     1 1 1 0 0 1
α^48 = α^3 + α^2 + 1                         1 0 1 1 0 0
α^49 = α^4 + α^3 + α                         0 1 0 1 1 0
α^50 = α^5 + α^4 + α^2                       0 0 1 0 1 1
α^51 = α^5 + α^3 + α + 1                     1 1 0 1 0 1
α^52 = α^4 + α^2 + 1                         1 0 1 0 1 0
α^53 = α^5 + α^3 + α                         0 1 0 1 0 1
α^54 = α^4 + α^2 + α + 1                     1 1 1 0 1 0
α^55 = α^5 + α^3 + α^2 + α                   0 1 1 1 0 1
α^56 = α^4 + α^3 + α^2 + α + 1               1 1 1 1 1 0
α^57 = α^5 + α^4 + α^3 + α^2 + α             0 1 1 1 1 1
α^58 = α^5 + α^4 + α^3 + α^2 + α + 1         1 1 1 1 1 1
α^59 = α^5 + α^4 + α^3 + α^2 + 1             1 0 1 1 1 1
α^60 = α^5 + α^4 + α^3 + 1                   1 0 0 1 1 1
α^61 = α^5 + α^4 + 1                         1 0 0 0 1 1
α^62 = α^5 + 1                               1 0 0 0 0 1
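These tables need not be copied by hand; a short routine (an illustrative sketch, with bit k of an integer standing for the coefficient of α^k) regenerates any of them from its primitive polynomial:

```python
def gf_table(m, p):
    """List the nonzero elements of GF(2^m) as successive powers of alpha,
    where p is the primitive polynomial with bit k standing for x^k."""
    n, elems, a = 2**m - 1, [], 1
    for _ in range(n):
        elems.append(a)
        a <<= 1                   # multiply by alpha
        if a >> m:                # degree reached m: reduce by p(alpha) = 0
            a ^= p
    return elems

# Table B.3: GF(2^4) generated by p(x) = x^4 + x + 1 (0b10011).
table = gf_table(4, 0b10011)
assert table[0] == 0b0001         # alpha^0 = 1
assert table[4] == 0b0011         # alpha^4 = alpha + 1
assert table[12] == 0b1111        # alpha^12 = alpha^3 + alpha^2 + alpha + 1
assert len(set(table)) == 15      # all 15 nonzero elements are distinct
```

Note that the integers here carry the coefficient of α^k in bit k, whereas the printed m-tuples above list a_0 first.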
Appendix C
Minimal Polynomials of Elements in GF(2^m)

Appendix C presents a list of minimal polynomials of elements in GF(2^m). The degree of the minimal polynomial φ(x) = φ_m x^m + φ_{m-1} x^{m-1} + ... + φ_1 x + φ_0 is at most m, where φ_j = 0 or 1, 0 <= j <= m, and 2 <= m <= 6. The finite fields GF(2^2), GF(2^3), GF(2^4), GF(2^5), and GF(2^6) are constructed using the primitive polynomials x^2 + x + 1, x^3 + x + 1, x^4 + x + 1, x^5 + x^2 + 1, and x^6 + x + 1, with α as a root, respectively.

Table C.1 Minimal Polynomials of Elements in GF(2^m)
(conjugate roots of φ(x) | order of element | minimal polynomial φ(x))

m = 2:
α, α^2 | 3 | x^2 + x + 1

m = 3:
α, α^2, α^4 | 7 | x^3 + x + 1
α^3, α^6, α^5 | 7 | x^3 + x^2 + 1

m = 4:
α, α^2, α^4, α^8 | 15 | x^4 + x + 1
α^3, α^6, α^12, α^9 | 5 | x^4 + x^3 + x^2 + x + 1
α^5, α^10 | 3 | x^2 + x + 1
α^7, α^14, α^13, α^11 | 15 | x^4 + x^3 + 1

m = 5:
α, α^2, α^4, α^8, α^16 | 31 | x^5 + x^2 + 1
α^3, α^6, α^12, α^24, α^17 | 31 | x^5 + x^4 + x^3 + x^2 + 1
α^5, α^10, α^20, α^9, α^18 | 31 | x^5 + x^4 + x^2 + x + 1
α^7, α^14, α^28, α^25, α^19 | 31 | x^5 + x^3 + x^2 + x + 1
α^11, α^22, α^13, α^26, α^21 | 31 | x^5 + x^4 + x^3 + x + 1
α^15, α^30, α^29, α^27, α^23 | 31 | x^5 + x^3 + 1

m = 6:
α, α^2, α^4, α^8, α^16, α^32 | 63 | x^6 + x + 1
α^3, α^6, α^12, α^24, α^48, α^33 | 21 | x^6 + x^4 + x^2 + x + 1
α^5, α^10, α^20, α^40, α^17, α^34 | 63 | x^6 + x^5 + x^2 + x + 1
α^7, α^14, α^28, α^56, α^49, α^35 | 9 | x^6 + x^3 + 1
α^9, α^18, α^36 | 7 | x^3 + x^2 + 1
α^11, α^22, α^44, α^25, α^50, α^37 | 63 | x^6 + x^5 + x^3 + x^2 + 1
α^13, α^26, α^52, α^41, α^19, α^38 | 63 | x^6 + x^4 + x^3 + x + 1
α^15, α^30, α^60, α^57, α^51, α^39 | 21 | x^6 + x^5 + x^4 + x^2 + 1
α^21, α^42 | 3 | x^2 + x + 1
α^23, α^46, α^29, α^58, α^53, α^43 | 63 | x^6 + x^5 + x^4 + x + 1
α^27, α^54, α^45 | 7 | x^3 + x + 1
α^31, α^62, α^61, α^59, α^55, α^47 | 63 | x^6 + x^5 + 1
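Each row of Table C.1 can be recomputed as the product of (x + α^j) over the conjugacy class of α^i; the GF(2^m) coefficients cancel down to 0s and 1s. The following sketch (my own helper names, elements packed into integers as in the previous example) does this for GF(2^4):

```python
def gf_mul(a, b, m, p):
    """Multiply two GF(2^m) elements (bit k = coefficient of alpha^k),
    reducing with the primitive polynomial p (which has bit m set)."""
    r = 0
    for k in range(m):
        if (b >> k) & 1:
            r ^= a << k
    for k in range(2 * m - 2, m - 1, -1):   # reduce degrees m..2m-2
        if (r >> k) & 1:
            r ^= p << (k - m)
    return r

def minimal_polynomial(i, m, p):
    """Minimal polynomial of alpha^i: the product of (x + alpha^j) over the
    conjugacy class {i, 2i, 4i, ...} mod 2^m - 1; coefficients, lowest first."""
    n = 2**m - 1
    cls, e = [], i % n
    while e not in cls:
        cls.append(e)
        e = 2 * e % n
    alpha_pow = [1]
    for _ in range(n):
        alpha_pow.append(gf_mul(alpha_pow[-1], 2, m, p))
    poly = [1]                              # the constant polynomial 1
    for j in cls:
        root = alpha_pow[j]
        new = [0] * (len(poly) + 1)
        for k, c in enumerate(poly):        # multiply poly(x) by (x + alpha^j)
            new[k + 1] ^= c
            new[k] ^= gf_mul(c, root, m, p)
        poly = new
    return poly

# Rows of Table C.1 for GF(2^4), p(x) = x^4 + x + 1 (0b10011):
assert minimal_polynomial(1, 4, 0b10011) == [1, 1, 0, 0, 1]   # x^4 + x + 1
assert minimal_polynomial(3, 4, 0b10011) == [1, 1, 1, 1, 1]
assert minimal_polynomial(5, 4, 0b10011) == [1, 1, 1]         # x^2 + x + 1
```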
Appendix D
Generator Polynomials of Binary, Narrow-Sense, Primitive BCH Codes

Appendix D presents a list of generator polynomials of binary, narrow-sense, primitive BCH codes of designed error-correcting power t_d. Table D.1 lists the codeword length n, the code dimension k, the designed Hamming distance δ_d = 2 t_d + 1, and the generator polynomial g(x) = x^{n-k} + g_{n-k-1} x^{n-k-1} + ... + g_1 x + g_0 of the codes, where g_j = 0 or 1, 0 <= j <= n - k - 1, and n = 7, 15, 31, 63, 127.

Table D.1 Generator Polynomials of Binary, Narrow-Sense, Primitive BCH Codes

n = 7:
(7, 4), δ_d = 3: g(x) = x^3 + x + 1

n = 15:
(15, 11), δ_d = 3: g(x) = x^4 + x + 1
(15, 7), δ_d = 5: g(x) = x^8 + x^7 + x^6 + x^4 + 1
(15, 5), δ_d = 7: g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1

n = 31:
(31, 26), δ_d = 3: g(x) = x^5 + x^2 + 1
(31, 21), δ_d = 5: g(x) = x^10 + x^9 + x^8 + x^6 + x^5 + x^3 + 1
(31, 16), δ_d = 7: g(x) = x^15 + x^11 + x^10 + x^9 + x^8 + x^7 + x^5 + x^3 + x^2 + x + 1

n = 63:
(63, 57), δ_d = 3: g(x) = x^6 + x + 1
(63, 51), δ_d = 5: g(x) = x^12 + x^10 + x^8 + x^5 + x^4 + x^3 + 1
(63, 45), δ_d = 7: g(x) = x^18 + x^17 + x^16 + x^15 + x^9 + x^7 + x^6 + x^3 + x^2 + x + 1

n = 127:
(127, 120), δ_d = 3: g(x) = x^7 + x^3 + 1
(127, 113), δ_d = 5: g(x) = x^14 + x^9 + x^8 + x^6 + x^5 + x^4 + x^2 + x + 1
(127, 106), δ_d = 7: g(x) = x^21 + x^18 + x^17 + x^15 + x^14 + x^12 + x^11 + x^8 + x^7 + x^6 + x^5 + x + 1
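The table entries can be cross-checked by multiplying minimal polynomials from Table C.1 over GF(2), since the narrow-sense generator is g(x) = lcm of the minimal polynomials of α, α^3, ..., α^(2t_d - 1). A small sketch for the n = 15 rows:

```python
def gf2_polymul(a, b):
    """Multiply two GF(2) polynomials packed into integers (bit k = x^k)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# (15, 7), t_d = 2: g(x) = phi_1(x) * phi_3(x), with phi_1 = x^4 + x + 1 and
# phi_3 = x^4 + x^3 + x^2 + x + 1 taken from Table C.1.
g2 = gf2_polymul(0b10011, 0b11111)
assert g2 == 0b111010001              # x^8 + x^7 + x^6 + x^4 + 1

# (15, 5), t_d = 3: multiply in phi_5(x) = x^2 + x + 1 as well.
g3 = gf2_polymul(g2, 0b111)
assert g3 == 0b10100110111            # x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
```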
About the Author

L. H. Charles Lee received a B.Sc. degree in electrical and electronic engineering from Loughborough University, England, in 1979; an M.Sc. degree from the University of Manchester Institute of Science and Technology, Manchester, England, in 1985; and the Ph.D. degree in electrical engineering from the University of Manchester, England, in 1987. From 1979 to 1984 Dr. Lee was with Marconi Communication Systems Limited, Chelmsford, England, where he was engaged in the design and development of frequency hopping radios. In 1987 and 1988, he was appointed lecturer at Hong Kong Polytechnic. From 1989 to 1991, he was a staff member of the Communications Research Group, Department of Electrical Engineering, University of Manchester, England, where he was involved in the design of channel coding and interleaving for the Pan-European GSM digital mobile radio system. Currently, Dr. Lee is with the Department of Electronics, Macquarie University, Sydney, Australia. He has also served as a consultant for various companies in Australia and Asia. Dr. Lee is a Chartered Engineer, a member of the Institution of Electrical Engineers, and a senior member of the Institute of Electrical and Electronic Engineers. He is the author of Convolutional Coding: Fundamentals and Applications (Artech House, 1997). His current research interests include error-control coding theory and digital communications.
Index Abstract algebra, 13-37 fields, 20-30 Galois field arithmetic, 30-32 groups, 13-16 matrices, 36-37 rings, 16-20 vector spaces, 32-36 Additive identity, 16, 20 Additive white Gaussian noise (AWGN) channel, 7 Berlekamp-Massey algorithm decoding in, 138, 161, 163 measurements, 81 variance, 136, 161 Amplitude-shift keying (ASK) modulation,
illustrated, 124 iterative, 126-27, 133, 134, 143,
148-49, 156 RS codes, 143-49 summary, 125-2(, See also Decoding Binary, narrow-sense, primitive BCH codes, 112-16 construction, 115 generator polynomials, 231-32 Hamming distance, 114 length, 113 parameters, 112, 137 triple-error-correcting, 132 See also Bose-Chaudhuri-Hocquenhern (BCH) codes Binary amplitude-shift keying (BASK), 6 Binary BCH codes decoding, 119-31 double-error-correcting, 137 performance, 138 triple-error-correcting, 137 See also Bose-Chaudhuri-Hocquenhem (BCH) codes Binary cyclic codes, 90 error-rrapping decoder for, 104 performance, 109 See also Cyclic codes Binary Golay codes, 107 extended, 107 triple-error-correcting, 107 See also Golay codes
3-4 Applications, 199-218 CfImpan discs, 207-18 mobile communications, 202-7 space communications, 199-202 Array codes, 3 Berlekarnp-Massey algorithm, 120-27,
143-49 in AWGN channels, 138, 161, 163 BCH codes, 120-27 decoding stcps, 131-32, 133, 134, 156 defined, 12,)-24 efficiency, 130 for error-locator polynomial computation, 126, U2, 14'i flow-chan, 124
235
236
Error-Control Block Codes for Communications Engineers
Binary Hamming codes, 56-57 bit error rate, 82 generator matrix, 66, 73 parameters, 81 performance, 82 Binary linear block codes, 43, 48-55 examples, 51-55 parity-check matrix, 49 performance, 79-81 syndrome-former circuit, 49 syndrome-former trellis representation, 48-51 trellis diagram, 71 See also Linear block codes Binary phase-shift keying (BPSK) modulation, 5 4-PSK modulation comparison, 167 bandwidth, 166, 167 coded, 162, 168, 169, 170 coherent, 162 demodulator, 75 uncoded, 162, 168, 169, 170 uncoded, performance, 194 See also Modulation Binary primitive polynomials, 221-22 Binary symmetric channel (BSC), 8-9 cross-over probability, 8 defined, 8 illustrared, 9 Binary symmetric erasure channel (BSEC), 9-10 Block codes, 1 applications, 199-218 cyclic, 85-110 examples, 3 implementation, 3 linear, 39-82 types of, 3 Block encoders block diagram, 40 mapping rule, 40, 41 redundancy digits, 39 See also Encoding Bose-Chaudhuri-Hocquenghem (BCH) codes, 3, 111-38 binary, 119-31 binary, narrow-sense, primitive, 112-16 computer simulation results, 136-38
defined, II I designed Hamming distance, 112 error-free decoding, 153 general description, 111-12 generator polynomial, 111, 139 length, 112 minimum Hamming distance, 119 narrow-sense, 112 parity-check matrix, 117 primitive, 112 Burst channel, 10-11 Burst errors, II Channel capacity, I Channel codewords, 39 Channel decoder, 3, 41 Channel encoder, 3 Channel models, 8-11 binary symmetric channel, 8-9 binary symmetric erasure channel, 9-10 burst channel, 10-1 1 discrete memoryless channel, 8 Chien search. 125 Circuits division, 32 hardware realization, 33 multiplication, 32 syndrome-former, 49-51 Co-channel interference, 203 defined, 203 rario (CIR), 207 Code concatenation, 199, 200, 202 Coded digital communication systems, I-II channel encoder/decoder, 3 data sink, 2-3 data source, 2 demodulator, 7 elements, 2-1 1 illustrated, 2 introduction, 1-2 model illustration, 39, 65, 71.109, 137, 162, 165 modulation, 3-7 modulator, 3 Code polynomials, 86, 89, 90 for codevector, 92 coefficients, 91 See also Polynomials
Index Code rare, j Codewords channel, 39 decoded, 190 incorrectly selected, 81 length, 116 minimum squared Euclidean distance, 182 partitioning, 175 transmitted, 61 Commutative groups, 14 commutative rings, 17 Compact discs, 207-18 CIRC, 212-18 convolutional inrerleaver, 212, 214, 215 decoding, 216-18 encoding, 21 1-16 performance, 21 1 recording/decoding system block diagram, 21 I recording medium, 210 See also Applications Computer simulation results BCH codes, 136-38 linear block codes, 81-82 MBCM, 192-95 RS codes, 161-63 Concatenated coding performance, 202 serial, 200, 20 I Convolutional codes, implementation, 3 rypes of, j Convolutional inrcrleaver, 212, 214, 215 Cross-Interleaved Reed-Solomon Code (CIRe),212-18 block encoding, 216 decoder, 216 decoding system, 218 encoder, 212 encoder structure, 213 run length-limited encoding and, 217 See also Compact discs Cross-interleaver, 212, 215 Cross-over probability, 8 Cyclically shifting, 85, 92, 100 Cyclic codes, 3, 85-110 binary, 90, 104
  codevector, 86
  CRC, 106
  decoding, 95-107
  defined, 85
  for detecting burst errors, 106
  encoding of, 94-95
  for error detection, 106
  Golay, 107-8
  length, 86
  matrix description, 91-94
  multiple-error-correcting, 111
  polynomial description, 85-91
  shortened, 107-8
  systematic encoding of, 90
  See also Block codes; Linear block codes
Cyclic redundancy check (CRC) codes, 106
Data sink, 2-3
Data source, 2
Decoding
  Berlekamp-Massey algorithm, 120-27, 143-49, 151-57
  binary BCH codes, 119-31
  compact discs, 216-18
  cyclic codes, 95-107
  error-and-erasure, 77, 159-60
  error-only, 154
  error-trapping, 103-7
  Euclid algorithm, 127-31, 150-51, 157-61
  linear block codes, 58-76
  maximum-likelihood, 63-70, 183-84
  maximum-likelihood Viterbi algorithm, 70-76
  MBCM, 183-90
  Meggitt, 102
  minimum distance, 59
  multistage, 184-86
  multistage trellis, 186-90
  RS codes, 143-51
  standard array, 58-61
  syndrome, 61-63, 95-103
  word error probability, 80
  See also Encoding
Demodulation, 7
Demodulators, 7
  BPSK, 75
  process, 7
  See also Modulators
Discrete memoryless channel (DMC), 8
Discrete noisy channel, 7
Division circuit, 32
Divisors of zero, 18
Eight-to-Fourteen Modulation (EFM) code, 216
Encoding
  binary, 94
  compact discs, 211-16
  cyclic codes, 94-95
  MBCM, 170-83
  Reed-Solomon, 141, 143
  systematic, 94
  See also Decoding
Erasure-locator polynomials, 152, 155
Error-and-erasure-locator polynomial, 160
Error-control codes, 11
Error-evaluator polynomials, 129, 147, 154
Error-locator polynomials, 121-22, 144, 152
  computation with Berlekamp-Massey algorithm, 126, 132, 145
  roots, 127, 133, 134
  synthesis, 123
  See also Polynomials
Errors
  burst, 11
  magnitudes, 145
  random, 11
  types of, 11
Error-trapping decoding, 103-7
  for binary cyclic code, 104
  binary cyclic code performance, 109
  defined, 103
  example, 106
  for shortened cyclic code, 108
  See also Decoding
Euclid algorithm, 127-31, 150-51
  BCH codes, 127-31
  decoding, 157-61
  decoding steps, 131, 135, 136, 150, 161
  defined, 127
  efficiency, 130
  extended, 127, 129, 135, 136, 150, 158-59, 161
  RS codes, 150-51
  table, 128-29
Euclidean distance, 68, 74
  between any two nearest coded 4-PSK, 183
  minimum squared, 182
  running squared, 190
  squared, 183, 184
Extension fields, 23, 24-30
  construction of, 24-26
  defined, 23
  properties of, 26-30
Fields, 20-30
  characteristics of, 21
  defined, 18
  extension, 23, 24-30
  finite, 21, 22
  Galois, 21, 23, 223-27
  order of, 21, 22
  See also Abstract algebra
Finite fields, 21, 22
Finite groups, 14
Galileo mission, 201-2
  bit error performance, 202
  defined, 201
  See also Space communications applications
Galois fields, 23
  arithmetic implementation, 30-32
  defined, 21
  tables, 223-27
  See also Abstract algebra; Fields
Gaussian minimum-shift keying (GMSK) modulation, 205
Gaussian random variables, 7
Generator matrix, 42, 44
  binary Hamming code, 66, 73
  dual code, 92
  from generator polynomial, 91
  MBCM, 176
  RM codes, 55
  row/column deletion, 58
  SEC Hamming codes, 54
  single-parity-check code, 52
  standard-echelon-form, 93
  systematic linear code, 45
  See also Matrices
Generator polynomials, 90
  BCH codes, 111, 139
  binary, narrow-sense, primitive BCH codes, 231-32
  defined, 87
  generator matrix from, 91
  of minimum degree, 87
  RS codes, 143
  See also Cyclic codes; Polynomials
Golay codes, 107-8
  binary, 107
  defined, 107
  extended binary, 107
Ground field, 25
Groups, 13-16
  axioms, 14
  commutative, 14
  defined, 13
  finite, 14
  order of, 14
  subgroup, 16
  See also Abstract algebra
GSM digital radio system, 203-7
  bit-error rate performance, 207, 210
  bits distribution, 209
  channel coding and reordering, 208
  diagonal interleaving structure, 209
  discrete noisy channel, 205
  full-rate, 203-7, 209, 210
  GMSK modulation, 205
  illustrated, 204
  performance, 210
  reordering and partitioning, 206-7
  simulation test results, 207
  traffic channel multiframe/timeslot structure, 209
  See also Applications
Hamming distance, 42, 67
  BCH code, 112
  binary, narrow-sense, primitive BCH codes, 114
  branch, 73
  metric computations, 66, 67-68, 77
  minimum, 46, 47, 56, 116, 119, 142
  between symbols, 72
Hamming weight, 42
Hard-decision decoding
  maximum-likelihood, 65-68, 184
  maximum-likelihood Viterbi algorithm, 72-73
  metric computation, 67
  metric table, 66
  See also Decoding; Soft-decision decoding
Integral domain, 18
Interleaving, 11
  block, 201
  convolutional, 212, 214, 215
  cross, 212, 215
  depth, 201
Irreducible polynomials, 24, 26, 29
Key equation, 123
Linear block codes, 39-82
  augmenting, 56
  binary, 43, 48-55, 77, 79-81
  computer simulation results, 81-82
  concepts/definitions, 40-42
  correction of errors and erasures, 76-79
  decoding, 58-76
  expurgating, 56
  extending, 56
  lengthening, 57
  matrix description, 42-45
  minimum distance, 45-48
  modifications, 56-58
  performance, 79-81
  puncturing, 57
  q-ary, 42
  Reed-Muller (RM) codes, 54-55
  repetition codes, 52
  shortening, 57-58
  single-error-correcting Hamming codes, 53-54
  single-parity-check codes, 52-53
  standard array, 59
  systematic, 45
  See also Block codes; Cyclic codes
Linear combination, 34
Logarithmic likelihood function, 64
Matrices, 36-37
  description of linear block codes, 42-45
  generator, 42, 44-45, 54-55, 66, 73, 91-93, 176
  k-by-n, 36, 42
  parity-check, 44, 45, 49, 53, 116-19, 176-77
  permutation, 174
  syndrome-former, 49
Maximum-likelihood decoding, 63-70
  defined, 65, 183
  hard-decision, 65-68, 184
  soft-decision, 68-70
  See also Decoding
Maximum-likelihood Viterbi algorithm decoding, 70-76
  defined, 70
  hard-decision, 72-73
  hard-decision decoding steps, 74
  soft-decision, 73-76
  soft-decision decoding steps, 76
  using, 75
  See also Decoding
Meggitt decoders, 102
Minimal polynomials, 28-29
  defined, 28
  of elements in GF(2^m), 229-30
  generated by primitive polynomial, 29, 113, 114, 115
  See also Polynomials
Minimum distance decoding, 59
Minimum Hamming distance, 46, 47, 56, 116
  of BCH codes, 119
  of RS codes, 140, 142
  See also Hamming distance
Mobile communications, 202-7
  co-channel interference, 203
  error-control coding techniques, 202-3
  GSM digital radio system, 203-7
  limitations, 202
  See also Applications
Modulation, 3-7
  ASK, 3-4
  BPSK, 5, 75, 166-70
  defined, 3
  GMSK, 205
  MBCM, 165-95
  PSK, 4-5, 166-67, 171-72, 195
  QAM, 5-6
  TCM, 168, 169
Modulators, 3
  defined, 3
  output waveform, 6
Modulo-5 addition, 15, 16
Modulo-5 multiplication, 15
Multilevel block-coded modulation (MBCM), 165-95
  2-D, 169-70
  advantages of, 192
  bit-error probability performance degradation, 194
  component codes, 170
  computer simulation results, 192-95
  decoding methods, 183-90
  defined, 168
  disadvantages of, 192
  encoding/mapping of, 170-83
  encoding model, 171
  generator matrix, 176
  introduction, 165-70
  model illustration, 165
  parameters, 193
  parity-check matrix, 176-77
  performance, 191
  signal constellation geometry, 169
  three-level, 176, 193
  total number of codes used, 172
  two-level, 175, 193
  See also Modulation
Multipath propagation effect, 11
Multiplication circuit, 30
Multiplicative identity, 21
Multistage decoding, 184-86
  decision making, 186
  defined, 185
  drawback, 192
  error probability, 191
  MBCM performance with, 191
  See also Decoding
Multistage trellis decoding, 186-90
Newton's identities, 122-23, 153
Nonprimitive Reed-Solomon codes, 143
Parity-check matrix, 44
  12-by-15 binary, 118
  BCH code, 117
  binary linear block code, 49
  MBCM, 176-77
  SEC Hamming codes, 53
  single-parity-check code, 53
  systematic linear code, 45
  of triple-error-correcting, binary, narrow-sense, primitive BCH code, 117
  See also Matrices
Parity-check polynomials, 88
Partitioning
  of codewords, 175, 179
  of signal vectors, 176, 180
  of uncoded bits, 175, 178
Performance
  binary BCH codes, 138
  binary block code, 79-81
  binary cyclic code, 109
  compact disc system, 211
  full-rate GSM system, 210
  MBCM, 191
  RS codes, 163
  uncoded coherent 4-PSK, 195
Permutation matrix, 174
Phase-shift keying (PSK) modulation, 4-5
  4-PSK, 166, 167, 171, 195
  8-PSK, 171, 172
  bandwidth, 166, 167
  BPSK comparison, 167
  set-partitioning, 171, 172
  uncoded coherent, 195
  See also Modulation
Plotkin upper bound, 48
Polynomial rings, 19-20
Polynomials, 23-24
  code, 86, 89, 90, 91
  erasure-locator, 152, 155
  error-and-erasure-locator, 160
  error-evaluator, 129, 147, 154
  error-locator, 121-22, 123, 144, 152
  generator, 87, 90
  irreducible, 24, 26, 29
  minimal, 28-29, 113, 229-30
  multiplication of, 30
  nonzero, 127
  parity-check, 88
  primitive, 24, 25, 26, 113, 221-22
  quotient, 31, 128
  received, 120
  reciprocal, 89
  remainder, 128
  representation, 25-26
  syndrome, 97, 98, 129, 150
Power representation, 24
Primitive polynomials, 24, 25, 26
  binary, 221-22
  Galois fields generated by, 223-27
  root, 113, 115
  See also Polynomials
Principal ideal, 19
Quadrature-amplitude modulation (QAM), 5-6
Quotient polynomials, 31, 128
Random errors, 11
Received polynomials, 120
Reciprocal polynomials, 89
Reed-Muller (RM) codes, 3, 54-55
  defined, 54
  first-order, 54, 55
  generator matrix, 55
  parameters, 54
  second-order, 55
  zero-order, 55
  See also Linear block codes
Reed-Solomon (RS) codes, 3, 139-63
  computer simulation results, 161-63
  correction of errors and erasures, 151-61
  Cross-Interleaved (CIRC), 212-18
  decoding, 143-51
  decoding procedure, 147-48
  defined, 139
  description, 139-43
  encoder, 141, 143
  error-and-erasure correction, 157
  error-correcting power, 140
  error-only decoding, 154
  generator polynomial, 143
  nonprimitive, 143
  obtaining, 140
  parameters, 162
  performance of, 163
  primitive, 140-41, 141-42, 162
  with symbols, 141
  triple-error-correcting primitive, 143, 148, 155
  true minimum Hamming distance, 140
Regular pulse excited linear predictive coder (RPE-LPC), 203
Remainder polynomial, 128
Repetition codes, 52
Rings, 16-20
  commutative, 17
  polynomial, 19-20
  ring-with-identity, 17
  subring, 19
  under addition operation, 16
  See also Abstract algebra
Scalar multiplication, 33
Shannon's channel capacity limit, 199
Signal symbol rate, 6
Single-error-correcting binary linear block codes, 62
Single-error-correcting Hamming codes, 3, 53-54
  generator matrix, 54
  parameters, 53
  parity-check matrix, 53
  standard array, 59-60
  See also Linear block codes
Single-parity-check codes, 52-53
  defined, 52
  generator matrix, 52
  parity-check matrix, 53
  See also Linear block codes
Soft-decision decoding
  maximum-likelihood, 68-70
  maximum-likelihood Viterbi algorithm, 73-76
  metric computation, 70
  minimum distance, 69
  See also Decoding; Hard-decision decoding
Space communications applications, 199-202
  Galileo mission, 201-2
  Voyager missions, 200-201
  See also Applications
Standard array decoding, 58-61
  defined, 59
  minimum distance, 59
  See also Decoding
Subgroups, 16
Subrings, 19
Subspace
  defined, 34
  k-dimensional, 36, 91
  row space, 36
  See also Vector spaces
Syndrome decoding, 61-63, 95-103
  binary, 95, 99, 102
  defined, 62
  table, 63
  See also Decoding
Syndrome-former circuits, 49-51
Syndrome-former matrix, 49
Syndrome-former trellis diagrams, 50-51
  binary nonredundant code, 189
  binary repetition code, 187, 189
  binary single-parity-check code, 187, 189
  See also Trellis diagrams
Syndrome polynomials, 150, 160
  coefficients, 98, 99
  defined, 97
  error digit effect, 100
  error pattern detector tests, 98
  infinite-degree, 129, 146, 151, 157
  modified, 157-58, 159
  See also Polynomials
Syndrome register, 100, 101
  correctable error pattern, 103
  polynomial coefficients, 100
Syndrome vectors, 120-21, 132, 133, 148
Syndrome weight, 105-6
Systematic code
  code vector generated by, 45
  defined, 45
  generator matrix, 45
  parity-check matrix, 45
Time-division multiple access (TDMA), 203
Tree codes, 3
Trellis-coded modulation (TCM), 168
  introduction, 168
  signal constellation geometry, 169
Trellis codes, 3
Trellis diagrams
  for binary linear block code, 71
  defined, 48
  determination, 49
  syndrome-former, 50-51, 187, 188, 189
Units, 18
Vandermonde determinant, 119
Varshamov-Gilbert, 48
Vector addition, 33
Vectors
  defined, 33
  Hamming distance, 42
  Hamming weight, 42
  information, 183
  linearly dependent, 35
  linearly independent, 35
  n-tuple, 35
  orthogonal, 36
  partitioning, 58
  syndrome, 120-21, 132, 133, 148
Vector spaces, 32-36
  k-dimension, 35
  n-dimensional, 35
  null (dual) space, 36
  over field, 33
  subspace, 34, 36
  See also Abstract algebra
Venn diagrams, 18-19
Verhoeff, 48
Viterbi algorithm, 70-76
  Euclidean metric minimization, 73
  Hamming metric minimization, 72
  maximum-likelihood decoding, 70-76
  See also Maximum-likelihood Viterbi algorithm decoding
Voyager mission, 200-201
  interleaving depth, 201
  Reed-Solomon q-ary coded sequence, 200
  system performance improvement, 200
  See also Space communications applications
Weight enumerator, 89
Zero element, 16, 20