Let Č denote the extension of the GRS code C = GRS_k(γ, v), and consider the (n − k + 1) × (n + 1) matrix

$$\check{H} = \begin{bmatrix}
w_0 & w_1 & \cdots & w_{n-1} & 0 \\
w_0\gamma_0 & w_1\gamma_1 & \cdots & w_{n-1}\gamma_{n-1} & 0 \\
w_0\gamma_0^2 & w_1\gamma_1^2 & \cdots & w_{n-1}\gamma_{n-1}^2 & 0 \\
\vdots & \vdots & & \vdots & \vdots \\
w_0\gamma_0^{n-k} & w_1\gamma_1^{n-k} & \cdots & w_{n-1}\gamma_{n-1}^{n-k} & w
\end{bmatrix}.$$

Notice that if w = 0, then Σ_{i=0}^{n−1} w_i v_i h(γ_i) = 0 for all h ∈ P_n, implying that the nonzero vector (w_0v_0, w_1v_1, . . . , w_{n−1}v_{n−1}) in F_q^n is orthogonal to all of F_q^n, which is a contradiction. So w ≠ 0.

Exercise 294 Verify that Ȟ is a parity check matrix for Č.

We now verify that Č is MDS. Consider the (n − k + 1) × (n − k + 1) submatrix M of Ȟ formed by any n − k + 1 columns of Ȟ. If the right-most column of Ȟ is not among the n − k + 1 chosen, then M = V D, where V is a Vandermonde matrix and D is a diagonal matrix. The columns of V consist of the powers 0 through n − k of n − k + 1 of the (distinct) γ_i s; the diagonal entries of D are all chosen from {w_0, . . . , w_{n−1}}. As the γ_i s are distinct and the w_i s are nonzero, the determinants of V and D are both nonzero, using Lemma 4.5.1. Therefore M is nonsingular in this case. Suppose now that the right-most column of Ȟ is among the n − k + 1 chosen. By Theorem 2.4.3, any n − k columns of H are independent (and hence so are the corresponding n − k columns of Ȟ) as C is MDS. This implies that the right-most column of Ȟ must be independent of any n − k other columns of Ȟ. So all of our chosen columns are independent.

Thus by Corollary 1.4.14, Č has minimum weight at least n − k + 2. By the Singleton Bound, the minimum weight is at most n − k + 2, implying that Č is MDS.
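To make the column-independence argument concrete, here is a small computational check; this is our sketch, not from the text, and the field F_7, the weights w_i, and the corner entry w are arbitrary choices satisfying the hypotheses (distinct γ_i, nonzero w_i and w). It verifies that every maximal square submatrix of a matrix shaped like Ȟ is nonsingular, which is exactly the parity-check-matrix characterization of an MDS code.

```python
from itertools import combinations, permutations

p = 7                            # work over the prime field F_7
n, k = 6, 3
gammas = [0, 1, 2, 3, 4, 5]      # distinct elements of F_7
w_list = [1, 2, 3, 1, 2, 3]      # nonzero column weights (arbitrary)
w = 5                            # nonzero corner entry (arbitrary)

rows = n - k + 1                 # n - k + 1 = 4 rows
H_check = [[w_list[i] * pow(g, j, p) % p for i, g in enumerate(gammas)] + [0]
           for j in range(rows)]
H_check[-1][-1] = w              # last row ends in w instead of 0

def det_mod_p(M):
    """Leibniz-formula determinant mod p (fine for a 4 x 4 matrix)."""
    m, total = len(M), 0
    for perm in permutations(range(m)):
        sign = 1
        for a in range(m):
            for b in range(a + 1, m):
                if perm[a] > perm[b]:   # count inversions for the sign
                    sign = -sign
        term = sign
        for r in range(m):
            term *= M[r][perm[r]]
        total += term
    return total % p

# Every choice of n - k + 1 columns must be linearly independent.
for cols in combinations(range(n + 1), rows):
    M = [[H_check[r][c] for c in cols] for r in range(rows)]
    assert det_mod_p(M) != 0, f"columns {cols} are dependent"
```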

In summary, this discussion and Theorem 5.3.1(i) prove the following theorem.

Theorem 5.3.4 For 1 ≤ k ≤ n ≤ q, the GRS code GRS_k(γ, v) is an MDS code, and it can be extended to an MDS code of length n + 1.

Recall that a [q − 1, k, q − k] narrow-sense RS code over F_q can be extended by adding an overall parity check; the resulting [q, k, q − k + 1] code is a GRS code, which is MDS by Theorem 5.3.2. This code itself can be extended to an MDS code by Theorem 5.3.4.

Thus a narrow-sense RS code of length q − 1 can be extended twice to obtain an MDS code of length q + 1.
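For example, taking q = 8, the [7, 3, 5] narrow-sense RS code over F_8 extends first to an [8, 3, 6] GRS code and then to a [9, 3, 7] MDS code of length q + 1 = 9.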

So, in general, GRS codes C and their extensions Č are MDS. There are MDS codes that are not equivalent to such codes. However, no MDS code with parameters other than those arising from GRS codes or their extensions is presently known [298].

5.4 Decoding BCH codes

In this section we present three algorithms for nearest neighbor decoding of BCH codes.

The first method is known as Peterson–Gorenstein–Zierler decoding. It was originally developed for binary BCH codes by Peterson [254] in 1960 and generalized shortly thereafter by Gorenstein and Zierler to nonbinary BCH codes [109]. We will describe this decoding method as a four-step procedure. The second step of this procedure is the most complicated and time consuming. The second method, known as Berlekamp–Massey decoding, presents a more efficient alternative approach to carrying out step two of the Peterson–Gorenstein–Zierler Algorithm. This decoding method was developed by Berlekamp in 1967 [18]. Massey [224] recognized that Berlekamp's method provided a way to construct the shortest linear feedback shift register capable of generating a specified sequence of digits. The third decoding algorithm, discovered by Sugiyama, Kasahara, Hirasawa, and Namekawa in 1975 [324], is also an alternative method to execute the second step of the Peterson–Gorenstein–Zierler Algorithm. Known as the Sugiyama Algorithm, it is a simple, yet powerful, application of the Euclidean Algorithm for polynomials.

In this section we also present the main ideas in a list decoding algorithm which can be applied to decoding generalized Reed–Solomon codes. This algorithm, known as the Sudan–Guruswami Algorithm, will accomplish decoding beyond the packing radius, that is, the bound obtained from the minimum distance of the code. When decoding beyond the packing radius, one must expect more than one nearest codeword to the received vector by Theorem 1.11.4. The Sudan–Guruswami Algorithm produces a complete list of all codewords within a certain distance of the received vector. While we present this algorithm applied to generalized Reed–Solomon codes, it can be used to decode BCH codes, Goppa codes, and algebraic geometry codes with some modifications.

5.4.1 The Peterson–Gorenstein–Zierler Decoding Algorithm

Let C be a BCH code over F_q of length n and designed distance δ. As the minimum distance of C is at least δ, C can correct at least t = ⌊(δ − 1)/2⌋ errors. The Peterson–Gorenstein–Zierler Decoding Algorithm will correct up to t errors. While the algorithm will apply to any BCH code, the proofs are simplified if we assume that C is narrow-sense. Therefore the defining set T of C will be assumed to contain {1, 2, . . . , δ − 1}, with α the primitive nth root of unity in the extension field F_{q^m} of F_q, where m = ord_n(q), used to determine this defining set. The algorithm requires four steps, which we describe in order and later summarize.

Suppose that y(x) is received, where we assume that y(x) differs from a codeword c(x) in at most t coordinates. Therefore y(x) = c(x) + e(x), where c(x) ∈ C and e(x) is the error vector, which has weight ν ≤ t. Suppose that the errors occur in the unknown coordinates k_1, k_2, . . . , k_ν. Therefore

$$e(x) = e_{k_1}x^{k_1} + e_{k_2}x^{k_2} + \cdots + e_{k_\nu}x^{k_\nu}. \tag{5.5}$$

Once we determine e(x), which amounts to finding the error locations k_j and the error magnitudes e_{k_j}, we can decode the received vector as c(x) = y(x) − e(x). Recall by Theorem 4.4.2 that c(x) ∈ C if and only if c(α^i) = 0 for all i ∈ T. In particular y(α^i) = c(α^i) + e(α^i) = e(α^i) for all 1 ≤ i ≤ 2t, since 2t ≤ δ − 1. For 1 ≤ i ≤ 2t we define the syndrome S_i of y(x) to be the element S_i = y(α^i) in F_{q^m}. (Exercise 295 will explore the connection between this notion of syndrome and that developed in Section 1.11.)

The first step in the algorithm is to compute the syndromes S_i = y(α^i) for 1 ≤ i ≤ 2t from the received vector. This process is aided by the following theorem, proved in Exercise 296. In the theorem we allow S_i to be defined as y(α^i) even when i > 2t; these may not be legitimate syndromes as c(α^i) may not be 0 in those cases.

Theorem 5.4.1 S_{iq} = S_i^q for all i ≥ 1.
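As a concrete illustration of step one (and of Theorem 5.4.1 with q = 2, where it reads S_{2i} = S_i^2), here is a minimal Python sketch of ours, not from the text: it builds F_16 from the primitive polynomial x^4 + x + 1 and computes the syndromes of a received word for the binary [15, 7, 5] BCH code (δ = 5, t = 2).

```python
# Arithmetic in F_16 = F_2[x]/(x^4 + x + 1) via log/antilog tables.
EXP = [0] * 30                 # EXP[i] = alpha^i, indices taken mod 15
LOG = [0] * 16                 # LOG[u] = i with alpha^i = u (u != 0)
u = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = u   # duplicate so products need no reduction
    LOG[u] = i
    u <<= 1                    # multiply by alpha = x
    if u & 0b10000:            # reduce modulo x^4 + x + 1
        u ^= 0b10011

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_pow(a, e):
    return 0 if a == 0 else EXP[(LOG[a] * e) % 15]

def syndromes(y, t=2):
    """Step one: S_i = y(alpha^i) for 1 <= i <= 2t, y = (y_0, ..., y_14)."""
    S = []
    for i in range(1, 2 * t + 1):
        s = 0
        for j, yj in enumerate(y):
            if yj:                         # binary code: y_j in {0, 1}
                s ^= gf_pow(EXP[i], j)     # add y_j * (alpha^i)^j
        S.append(s)
    return S

# Two errors (positions 3 and 10) on top of the zero codeword, which lies
# in every linear code; hence S_i = e(alpha^i) = alpha^{3i} + alpha^{10i}.
y = [0] * 15
y[3] = y[10] = 1
S = syndromes(y)
assert S[1] == gf_mul(S[0], S[0])          # Theorem 5.4.1: S_2 = S_1^2
assert S[3] == gf_mul(S[1], S[1])          # and S_4 = S_2^2
```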

Exercise 295 Let H be the t × n matrix

$$H = \begin{bmatrix}
1 & \alpha & \alpha^2 & \cdots & \alpha^{n-1} \\
1 & \alpha^2 & \alpha^4 & \cdots & \alpha^{2(n-1)} \\
\vdots & & & & \vdots \\
1 & \alpha^t & \alpha^{2t} & \cdots & \alpha^{(n-1)t}
\end{bmatrix}.$$

If y(x) = y_0 + y_1x + · · · + y_{n−1}x^{n−1}, let y = (y_0, y_1, . . . , y_{n−1}). Finally, let S = (S_1, S_2, . . . , S_t), where S_i = y(α^i).

(a) Show that H y^T = S^T.

(b) Use Theorem 4.4.3 and part (a) to explain the connection between the notion of syndrome given in this section and the notion of syndrome given in Section 1.11.

Exercise 296 Prove Theorem 5.4.1.

The syndromes lead to a system of equations involving the unknown error locations and the unknown error magnitudes. Notice that from (5.5) the syndromes satisfy

$$S_i = y(\alpha^i) = \sum_{j=1}^{\nu} e_{k_j}(\alpha^i)^{k_j} = \sum_{j=1}^{\nu} e_{k_j}(\alpha^{k_j})^i, \tag{5.6}$$

for 1 ≤ i ≤ 2t. To simplify the notation, for 1 ≤ j ≤ ν, let E_j = e_{k_j} denote the error magnitude at coordinate k_j and X_j = α^{k_j} denote the error location number corresponding to the error location k_j. By Theorem 3.3.1, if α^i = α^k for i and k between 0 and n − 1, then i = k. Thus knowing X_j uniquely determines the error location k_j. With this notation (5.6) becomes

$$S_i = \sum_{j=1}^{\nu} E_j X_j^i \quad \text{for } 1 \le i \le 2t, \tag{5.7}$$

which in turn leads to the system of equations:

$$\begin{aligned}
S_1 &= E_1X_1 + E_2X_2 + \cdots + E_\nu X_\nu, \\
S_2 &= E_1X_1^2 + E_2X_2^2 + \cdots + E_\nu X_\nu^2, \\
S_3 &= E_1X_1^3 + E_2X_2^3 + \cdots + E_\nu X_\nu^3, \\
&\;\;\vdots \\
S_{2t} &= E_1X_1^{2t} + E_2X_2^{2t} + \cdots + E_\nu X_\nu^{2t}.
\end{aligned} \tag{5.8}$$

This system is nonlinear in the X_j s with unknown coefficients E_j. The strategy is to use (5.7) to set up a linear system, involving new variables σ_1, σ_2, . . . , σ_ν, that will lead directly to the error location numbers. Once these are known, we return to the system (5.8), which is then a linear system in the E_j s, and solve for the error magnitudes.
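Once the error location numbers are in hand, the magnitude solve is routine linear algebra. Here is our sketch of this final step for ν ≤ 2 over F_16, by Cramer's rule, reusing the helpers from the step-one sketch above (the X_j themselves are computed in the sketch that follows the error locator polynomial below); for a binary code the magnitudes are necessarily 1, which the final assertion confirms.

```python
# Step four sketch (ours): solve the first nu equations of (5.8) for the
# error magnitudes E_j by Cramer's rule, for nu <= 2, over F_16.

def gf_inv(a):
    return EXP[(15 - LOG[a]) % 15]

def error_magnitudes(X, S):
    """X = error location numbers, S = syndromes; returns [E_1, ..., E_nu]."""
    if len(X) == 1:
        return [gf_mul(S[0], gf_inv(X[0]))]          # S_1 = E_1 X_1
    X1, X2 = X
    # System: E1*X1 + E2*X2 = S1, E1*X1^2 + E2*X2^2 = S2 (char 2: +/- is XOR)
    det = gf_mul(X1, gf_pow(X2, 2)) ^ gf_mul(X2, gf_pow(X1, 2))
    E1 = gf_mul(gf_mul(S[0], gf_pow(X2, 2)) ^ gf_mul(X2, S[1]), gf_inv(det))
    E2 = gf_mul(gf_mul(X1, S[1]) ^ gf_mul(S[0], gf_pow(X1, 2)), gf_inv(det))
    return [E1, E2]

# For the running example (errors at positions 3 and 10, binary code),
# X_1 = alpha^3, X_2 = alpha^10, and both magnitudes must be 1.
assert error_magnitudes([EXP[3], EXP[10]], S) == [1, 1]
```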

To this end, define the error locator polynomial to be

$$\sigma(x) = (1 - xX_1)(1 - xX_2)\cdots(1 - xX_\nu) = 1 + \sum_{i=1}^{\nu} \sigma_i x^i.$$

The roots of σ(x) are the inverses of the error location numbers, and thus σ(X_j^{−1}) = 0 for 1 ≤ j ≤ ν. The second step of the algorithm is to determine σ(x) from the syndromes; once this second step has been completed, σ(x) has been determined. However, determining σ(x) is complicated by the fact that we do not know ν, and hence we do not know the size of the system involved. We are searching for the solution which has the smallest value of ν, and this is aided by the following lemma.
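To make steps two and three concrete for the binary t = 2 example, here is our continuation of the running sketch (again reusing the F_16 helpers from the two previous sketches): it determines σ(x) by first testing the 2 × 2 syndrome matrix for nonsingularity, in the spirit of Lemma 5.4.2 below, and then locates errors by evaluating σ at every α^{−k}. A general implementation would solve the ν × ν system by Gaussian elimination over F_{q^m}, or use the Berlekamp–Massey Algorithm.

```python
# Steps two and three sketch (ours) for t = 2, reusing EXP, LOG, gf_mul,
# gf_pow and gf_inv from the sketches above.

def error_locator(S):
    """Return [sigma_1, ..., sigma_nu], trying nu = 2 first, then nu = 1."""
    S1, S2, S3, S4 = S
    det = gf_mul(S1, S3) ^ gf_mul(S2, S2)    # det of [[S1, S2], [S2, S3]]
    if det:                                  # nu = 2 (cf. Lemma 5.4.2)
        s1 = gf_mul(gf_mul(S2, S3) ^ gf_mul(S1, S4), gf_inv(det))
        s2 = gf_mul(gf_mul(S3, S3) ^ gf_mul(S2, S4), gf_inv(det))
        return [s1, s2]
    if S1:                                   # nu = 1: sigma_1 = S_2 / S_1
        return [gf_mul(S2, gf_inv(S1))]
    return []                                # nu = 0: no errors detected

def error_locations(sigma):
    """Position k is in error iff sigma(alpha^{-k}) = 0, since the roots
    of sigma(x) are the inverses X_j^{-1} of the error location numbers."""
    locs = []
    for k in range(15):
        a_inv_k = EXP[(15 - k) % 15]         # alpha^{-k}
        val = 1                              # constant term of sigma(x)
        for i, si in enumerate(sigma, start=1):
            val ^= gf_mul(si, gf_pow(a_inv_k, i))
        if val == 0:
            locs.append(k)
    return locs

sigma = error_locator(S)                     # S from the step-one sketch
assert error_locations(sigma) == [3, 10]     # recovers the error positions
```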

Lemma 5.4.2 Let μ ≤ t and let
