Linear Block Codes

Given an arbitrary generator G, it is possible to put it into the form (3.4) by performing Gaussian elimination with pivoting.
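A minimal sketch of such a reduction over GF(2) is shown below. It is not the book's gaussj2.m routine; the function name gf2_systematic, the 0/1 array representation, and the assumption that row swaps alone always supply a pivot are choices made for this illustration. The target form places the identity in the last k columns, matching the systematic generator used in the example that follows.

#define K 4            /* message length  k  */
#define N 7            /* codeword length n  */

/* Reduce a K x N binary generator matrix (entries 0/1) so that its last K
 * columns form the identity, giving the systematic form G = [P | I_k] used
 * in the text.  Illustrative stand-in for a routine like gaussj2.m; assumes
 * a pivot can always be found by row swaps alone (no column permutations). */
void gf2_systematic(int G[K][N])
{
    for (int r = 0; r < K; r++) {
        int col = N - K + r;                    /* pivot column for row r   */
        int p = r;
        while (p < K && G[p][col] == 0) p++;    /* find a row with a 1      */
        if (p == K) continue;                   /* no pivot; a full routine */
                                                /* would swap columns       */
        for (int j = 0; j < N; j++) {           /* swap rows r and p        */
            int t = G[r][j]; G[r][j] = G[p][j]; G[p][j] = t;
        }
        for (int i = 0; i < K; i++)             /* clear the pivot column   */
            if (i != r && G[i][col])            /* in every other row       */
                for (int j = 0; j < N; j++)
                    G[i][j] ^= G[r][j];         /* GF(2) addition is XOR    */
    }
}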

Example 3.2 For G of (3.3), an equivalent generator in systematic form is

    G'' = [ 1 1 0 1 0 0 0
            0 1 1 0 1 0 0
            1 1 1 0 0 1 0
            1 0 1 0 0 0 1 ]                                     (3.5)

(gaussj2.m) For the Hamming code with this generator, let the message be m = [m0, m1, m2, m3] and let the corresponding codeword be c = [c0, c1, ..., c6]. Then the parity bits are obtained by

    c0 = m0 + m2 + m3
    c1 = m0 + m1 + m2
    c2 = m1 + m2 + m3

and the systematically encoded bits are c3 = m0, c4 = m1, c5 = m2, and c6 = m3.

3.2.1 Rudimentary Implementation

Implementing encoding operations for binary codes is straightforward, since the multiplication operation corresponds to the and operation and the addition operation corresponds to the exclusive-or operation. For software implementations, encoding is accomplished by straightforward matrix/vector multiplication. This can be greatly accelerated for binary codes by packing several bits into a single word (e.g., 32 bits in an unsigned int of four bytes). The multiplication is then accomplished using the bit exclusive-or operation of the language (e.g., the ^ operator of C). Addition must be accomplished by looping through the bits, or by precomputing bit sums and storing them in a table, where they can be immediately looked up.
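One way to organize such a bit-packed encoder is sketched below: the columns of G are packed into machine words, each code bit is the GF(2) inner product of the packed message with one packed column, and the bit sum is read from a precomputed parity table rather than computed by looping through the bits. The names encode_packed, init_parity_table, and the choice to pack columns rather than rows are assumptions made for this illustration.

#include <stdint.h>

#define K 4                         /* message length  */
#define N 7                         /* codeword length */

/* Parity (bit sum mod 2) of one byte, precomputed once so it can be
 * looked up immediately instead of looping through the bits. */
static uint8_t parity8[256];

static void init_parity_table(void)
{
    for (int v = 0; v < 256; v++) {
        int p = 0;
        for (int b = 0; b < 8; b++)
            p ^= (v >> b) & 1;
        parity8[v] = (uint8_t)p;
    }
}

/* Encode a packed K-bit message (bit i of m is m_i) into a packed N-bit
 * codeword.  gcol[j] holds column j of G packed into a word (bit i of
 * gcol[j] is G[i][j]).  The elementwise products come from the & operator;
 * their GF(2) sum is read from the parity table. */
uint32_t encode_packed(uint32_t m, const uint32_t gcol[N])
{
    uint32_t c = 0;
    for (int j = 0; j < N; j++) {
        uint32_t prod = m & gcol[j];            /* elementwise products     */
        uint8_t bit = parity8[prod & 0xFF];     /* K <= 8 here: one lookup  */
        c |= (uint32_t)bit << j;                /* place code bit j         */
    }
    return c;
}

Here init_parity_table() must be called once before encoding; for the (7,4) example, gcol[j] would be filled from column j of G'' in (3.5).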


3.3 The Parity Check Matrix and Dual Codes

Since a linear code C is a k-dimensional vector subspace of F_q^n, by Theorem 2.8 there must be a dual space to C of dimension n - k.

Definition 3.6 The dual space to an (n, k) code C of dimension k is the (n, n - k) dual code of C, denoted by C^⊥. A code C such that C = C^⊥ is called a self-dual code.

As a vector space, C^⊥ has a basis which we denote by {h0, h1, ..., h_{n-k-1}}. We form a matrix H using these basis vectors as rows:

    H = [ h0
          h1
          ...
          h_{n-k-1} ].

This matrix is known as the parity check matrix for the code C. The generator matrix and the parity check matrix for a code satisfy

    G H^T = 0.

The parity check matrix has the following important property:

Theorem 3.2 Let C be an (n, k) linear code over F_q, and let H be a parity check matrix for C. A vector v ∈ F_q^n is a codeword if and only if

    v H^T = 0.

That is, the codewords in C lie in the (left) nullspace of H.

(Sometimes additional linearly dependent rows are included in H, but the same result still holds.)

Proof Let c ∈ C. By the definition of the dual code, h · c = 0 for all h ∈ C^⊥. Any row vector h ∈ C^⊥ can be written as h = xH for some vector x. Since x is arbitrary, and in fact can select individual rows of H, we must have c h_i^T = 0 for i = 0, 1, ..., n - k - 1. Hence c H^T = 0.

Conversely, suppose that v H^T = 0. Then v h_i^T = 0 for i = 0, 1, ..., n - k - 1, so that v is orthogonal to the basis of the dual code, and hence orthogonal to the dual code itself. Hence, v must be in the code C. □
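In code, this membership test amounts to computing the syndrome s = v H^T and checking that every component is zero. A minimal sketch for binary codes follows; the function name is_codeword and the flat row-major 0/1 storage of H are assumptions made for this illustration.

#include <stdbool.h>

/* Return true if v H^T = 0 over GF(2).  v has length n (entries 0/1) and H
 * is a rows x n binary parity check matrix stored row-major in a flat array.
 * Each syndrome bit is the GF(2) inner product of v with one row of H. */
bool is_codeword(const int *v, const int *H, int rows, int n)
{
    for (int i = 0; i < rows; i++) {
        int s = 0;
        for (int j = 0; j < n; j++)
            s ^= v[j] & H[i * n + j];   /* multiply = AND, add = XOR */
        if (s) return false;            /* nonzero syndrome bit      */
    }
    return true;
}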

When G is in systematic form (3.4), a parity check matrix is readily determined:

    H = [ I_{n-k}  -P^T ].                                      (3.6)

(For the field F_2, -1 = 1, since 1 is its own additive inverse.) Frequently, a parity check matrix for a code is obtained by finding a generator matrix in systematic form and employing (3.6).
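Over GF(2) the minus sign in (3.6) can be ignored, so H can be filled in directly from the P block of a systematic generator G = [P | I_k] (parity columns first, as in the example above). A small sketch of this construction follows; the routine name build_H and the fixed (7,4) dimensions are assumptions made for this illustration.

#define K 4
#define N 7

/* Given a systematic binary generator G = [P | I_k] (K x N), fill in the
 * parity check matrix H = [I_{n-k} | P^T] of size (N-K) x N, as in (3.6);
 * over GF(2) the minus sign can be dropped since -1 = 1. */
void build_H(const int G[K][N], int H[N - K][N])
{
    for (int i = 0; i < N - K; i++) {
        for (int j = 0; j < N - K; j++)
            H[i][j] = (i == j);            /* identity block I_{n-k} */
        for (int j = 0; j < K; j++)
            H[i][N - K + j] = G[j][i];     /* (P^T)_{ij} = P_{ji}    */
    }
}

Applied to the systematic generator G'' of (3.5), this reproduces the parity check matrix given in (3.7).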

Example 3.3 For the systematic generator G'' of (3.5), a parity check matrix is

    H = [ 1 0 0 1 0 1 1
          0 1 0 1 1 1 0
          0 0 1 0 1 1 1 ].                                      (3.7)

It can be verified that G'' H^T = 0. Furthermore, even though G is not in systematic form, it still generates the same code, so that G H^T = 0. H is a generator for a (7, 3) code, the dual code to the (7, 4) Hamming code. □

The condition c H^T = 0 imposes linear constraints among the bits of c called the parity check equations.

Example 3.4 The parity check matrix of (3.7) gives rise to the equations

    c0 + c3 + c5 + c6 = 0
    c1 + c3 + c4 + c5 = 0
    c2 + c4 + c5 + c6 = 0

or, equivalently, some equations for the parity symbols are

    c0 = c3 + c5 + c6
    c1 = c3 + c4 + c5
    c2 = c4 + c5 + c6.
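These parity equations translate directly into an encoder for the systematic (7,4) code: place the message in c3..c6 and compute c0, c1, c2 from the sums above. A minimal sketch, with the function name hamming74_encode chosen for this illustration:

/* Encode a 4-bit message m[0..3] into a (7,4) Hamming codeword c[0..6],
 * using the systematic placement c3..c6 = m0..m3 and the parity equations
 * of Example 3.4.  All values are 0 or 1; addition over GF(2) is XOR. */
void hamming74_encode(const int m[4], int c[7])
{
    c[3] = m[0];  c[4] = m[1];  c[5] = m[2];  c[6] = m[3];
    c[0] = c[3] ^ c[5] ^ c[6];   /* c0 = c3 + c5 + c6 */
    c[1] = c[3] ^ c[4] ^ c[5];   /* c1 = c3 + c4 + c5 */
    c[2] = c[4] ^ c[5] ^ c[6];   /* c2 = c4 + c5 + c6 */
}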

A parity check matrix for a code (whether systematic or not) provides information about the minimum distance of the code.

Theorem 3.3 Let a linear block code C have a parity check matrix H. The minimum distance d_min of C is equal to the smallest positive number of columns of H which are linearly dependent. That is, all combinations of d_min - 1 columns are linearly independent, so there is some set of d_min columns which are linearly dependent.

Proof Let the columns of H be designated as h0, h1, ..., h_{n-1}. Then since c H^T = 0 for any codeword c, we have

    0 = c0 h0 + c1 h1 + ... + c_{n-1} h_{n-1}.

Let c be the codeword of smallest weight, w = wt(c) = d_min, with nonzero positions only at indices i1, i2, ..., i_w. Then

    c_{i1} h_{i1} + c_{i2} h_{i2} + ... + c_{i_w} h_{i_w} = 0.

Clearly, the columns of H corresponding to the nonzero elements of c are linearly dependent.

On the other hand, if there were a linearly dependent set of u < w columns of H, then there would be a codeword of weight u. □

Example 3.5 For the parity check matrix H of (3.7), the parity check condition is

    c H^T = [c0, c1, c2, c3, c4, c5, c6] [ 1 0 0
                                           0 1 0
                                           0 0 1
                                           1 1 0
                                           0 1 1
                                           1 1 1
                                           1 0 1 ]

          = c0[1, 0, 0] + c1[0, 1, 0] + c2[0, 0, 1] + c3[1, 1, 0] + c4[0, 1, 1] + c5[1, 1, 1] + c6[1, 0, 1].

The first, second, and fourth rows of H^T are linearly dependent, and no fewer rows of H^T are linearly dependent, so the minimum distance of this code is 3. □
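Theorem 3.3, illustrated by this example, also suggests a brute-force way to find d_min for small binary codes: since the code is linear, the minimum distance equals the minimum weight over all nonzero codewords, so one can enumerate all nonzero messages, encode each, and keep the smallest weight. The sketch below reuses the illustrative hamming74_encode routine from the earlier example.

#include <limits.h>

void hamming74_encode(const int m[4], int c[7]);   /* sketched earlier */

/* Brute-force minimum distance of the (7,4) code: the smallest Hamming
 * weight over all nonzero codewords. */
int hamming74_dmin(void)
{
    int dmin = INT_MAX;
    for (int msg = 1; msg < (1 << 4); msg++) {   /* all nonzero messages */
        int m[4], c[7], w = 0;
        for (int i = 0; i < 4; i++)
            m[i] = (msg >> i) & 1;
        hamming74_encode(m, c);
        for (int j = 0; j < 7; j++)
            w += c[j];                           /* Hamming weight */
        if (w < dmin)
            dmin = w;
    }
    return dmin;                                 /* 3 for this code */
}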

3.3.1 Some Simple Bounds on Block Codes

Theorem 3.3 leads to a relationship between d_min, n, and k.

Theorem 3.4 (The Singleton Bound) The minimum distance for an (n, k) linear code is bounded by

    d_min ≤ n - k + 1.                                          (3.8)

Note: Although this bound is proved here for linear codes, it is also true for nonlinear codes. (See [220].)

Proof An (n, k) linear code has a parity check matrix with n - k linearly independent rows. Since the row rank of a matrix is equal to its column rank, rank(H) = n - k. Any collection of n - k + 1 columns must therefore be linearly dependent. Thus by Theorem 3.3, the minimum distance cannot be larger than n - k + 1. □

A code for which d_min = n - k + 1 is called a maximum distance separable (MDS) code.

Thinking geometrically, around each code point is a cloud of points corresponding to non-codewords. (See Figure 1.17.) For a q-ary code, there are (q - 1) n vectors at a Hamming distance 1 away from a codeword, (q - 1)^2 (n choose 2) vectors at a Hamming distance 2 away from a codeword and, in general, (q - 1)^ℓ (n choose ℓ) vectors at a Hamming distance ℓ from a codeword.

Example 3.6 Let C be a code of length n = 4 over GF(3), so q = 3. Then the vectors at a Hamming distance of 1 from the [0, 0, 0, 0] codeword are

    [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
    [2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 2, 0], [0, 0, 0, 2]. □

The vectors at Hamming distance ≤ t away from a codeword form a "sphere" called the Hamming sphere of radius t. The number of vectors in a Hamming sphere up to radius t for a code of length n over an alphabet of q symbols is denoted V_q(n, t), where

    V_q(n, t) = Σ_{j=0}^{t} (n choose j) (q - 1)^j.             (3.9)
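A small helper that evaluates (3.9) directly is sketched below; it updates the binomial coefficient iteratively and returns the sphere size as an unsigned long long, so the result is exact only while it fits in 64 bits. The function name volume_q is chosen for this illustration.

/* Number of vectors within Hamming distance t of a fixed word of length n
 * over a q-ary alphabet, i.e. V_q(n,t) = sum_{j=0}^{t} C(n,j) (q-1)^j.
 * Exact as long as the result fits in an unsigned long long. */
unsigned long long volume_q(int n, int t, int q)
{
    unsigned long long sum = 0, binom = 1, powq = 1;  /* C(n,0)=1, (q-1)^0=1 */
    for (int j = 0; j <= t; j++) {
        sum += binom * powq;
        binom = binom * (n - j) / (j + 1);   /* C(n,j+1) from C(n,j), exact */
        powq *= (unsigned long long)(q - 1); /* (q-1)^(j+1)                 */
    }
    return sum;
}

For instance, volume_q(4, 1, 3) returns 9, matching the count in Example 3.6 (the codeword itself plus the eight vectors listed).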

The bounded distance decoding sphere of a codeword is the Hamming sphere of radius t = ⌊(d_min - 1)/2⌋ around the codeword. Equivalently, a code whose random error correction capability is t must have a minimum distance between codewords satisfying d_min ≥ 2t + 1.

The redundancy of a code is essentially the number of parity symbols in a codeword.

More precisely we have

    r = n - log_q M,

where M is the number of codewords. For a linear code we have M = q^k, so r = n - k.


Theorem 3.5 (The Hamming Bound) A t-random error correcting q-ary code C must have redundancy r satisfying

    r ≥ log_q V_q(n, t).
