Décodeurs Haute Performance et Faible Complexité pour les codes LDPC Binaires et Non-Binaires


HAL Id: tel-00806192

https://tel.archives-ouvertes.fr/tel-00806192

Submitted on 27 Nov 2014


Erbao Li

To cite this version:

Erbao Li. Décodeurs Haute Performance et Faible Complexité pour les codes LDPC Binaires et Non-Binaires. Autre. Université de Cergy-Pontoise, 2012. Français. NNT: 2012CERG0591. tel-00806192.


A thesis presented at the Université de Cergy-Pontoise, École Nationale Supérieure de l'Électronique et de ses Applications, to obtain the degree of:

Docteur en Science de l'Université de Cergy-Pontoise
Specialty: Information and Communication Sciences and Technologies

By

Erbao LI

Host research team:

Équipe Traitement des Images et du Signal (ETIS) - CNRS UMR 8051

Thesis title:

Décodeurs Haute Performance et Faible Complexité pour les codes LDPC Binaires et Non-Binaires

Defended on 19 December 2012 before the examination committee:

M. Charly Poulliat (ENSEEIHT, Toulouse): Examiner
M. Christophe Jego (IMS, Bordeaux): Reviewer
M. Emmanuel Boutillon (UBS, Lorient): Reviewer
M. Valentin Savin (CEA-LETI, Grenoble): Examiner
M. David Declercq (ETIS, Cergy): Thesis supervisor

Contents

Résumé
Abstract
List of Figures
List of Tables
1 Introduction
  1.1 Digital Communication System and Error Correction Codes
    1.1.1 Digital Communication System
    1.1.2 Error Correction Codes History
  1.2 Motivation and Objectives
  1.3 Organization of the thesis
2 LDPC codes and Iterative Decoding algorithms
  2.1 LDPC codes and their graphical representation
    2.1.1 Binary and Non-Binary LDPC codes
    2.1.2 Graphical Representation and Parameters of LDPC codes
  2.2 Iterative decoders for B-LDPC codes
    2.2.1 Belief Propagation (BP) algorithm for B-LDPC codes
    2.2.2 LLR based BP and Min Sum algorithms
    2.2.3 Finite Alphabet Iterative Decoders (FAIDs)
  2.3 Iterative decoders for NB-LDPC codes
    2.3.1 Belief Propagation algorithm for NB-LDPC codes
    2.3.2 Logarithm domain algorithms: Log-BP, Min-Sum and EMS
    2.3.3 Fourier domain decoding algorithm
    2.3.4 Complexity Discussion and Simulation Results
  2.4 Conclusion
3 Diversity decoding with FAIDs: approaching ML performance
  3.1 FAID rules and their guaranteed error correction capability
    3.1.1 Discussion of FAID Rules
    3.1.2 Enumeration of Low Weight Error Patterns
    3.1.3 Error sets for the (155,64,20) Tanner code
  3.2 Decoding diversity with FAIDs
    3.2.1 Decoder diversity principle
    3.2.2 Generation of FAID diversity sets
    3.2.3 Error correction results for the (155,64,20) Tanner code
  3.3 Diversity decoding with random re-initialization and its dynamic behaviors
    3.3.1 FAID as a Discrete Dynamic System and its Dynamic Behaviors
    3.3.2 FAID decoding diversity with random re-initialization
    3.3.3 ML approaching performance
  3.4 Conclusion
4 Trellis based Extended Min Sum
  4.1 T-EMS algorithm description
    4.1.1 Modified trellis representation
    4.1.2 T-EMS configuration sets
    4.1.3 T-EMS Algorithm Description
    4.1.4 T-EMS for cluster codes
  4.2 Complexity Analysis and Simulation Results
    4.2.1 Complexity Analysis and Parameters discussion
    4.2.2 Simulation Results
  4.3 Hardware implementation of T-EMS
    4.3.1 Check node unit for T-EMS
    4.3.2 Decoder structure for T-EMS
    4.3.3 Synthesis Results
  4.4 Conclusion
5 Conclusion and Perspectives
Publications
Acknowledgements
Bibliography

Résumé

This thesis is devoted to the study of iterative decoders for binary and non-binary low density parity check (LDPC) error correcting codes. Our objective is to design low complexity, low latency decoders with good performance in the very low error rate region (the error floor).

In the first part of this thesis, we study the finite alphabet iterative decoders (FAIDs) recently proposed in the literature. Using a large number of FAID decoders, we propose a new decoding algorithm which improves the error correction capability of degree dv = 3 LDPC codes on the binary symmetric channel.

Decoder diversity makes it possible to guarantee a minimum error correction under iterative decoding, beyond the pseudo-distance of LDPC codes. We give in this thesis a detailed example of a set of FAID decoders which corrects all error events of weight less than or equal to 7 on a small LDPC code (N=155, K=64, Dmin=20). This approach corrects error events that traditional decoders (BP, min-sum) fail to correct. Finally, we interpret FAID decoders as dynamical systems and analyze their behavior on the most problematic error events. Based on the observation of periodic trajectories in these case studies, we propose an algorithm which combines decoding diversity with random jumps in the state space of the iterative decoder. We show by simulation that this technique approaches the performance of optimal maximum likelihood decoding for several codes.

In the second part of this thesis, we propose a new reduced-complexity decoding algorithm for non-binary LDPC codes, which we call Trellis-Extended Min-Sum (T-EMS). By transforming the message domain into a so-called delta domain, we can select the deviations from the most reliable configuration row by row, whereas usual decoders such as the EMS choose the deviations column by column. This row-wise selection of the deviations reduces the decoding complexity without any performance loss compared to EMS-type approaches. We also propose to add an extra column to the trellis representation of the messages, which solves the latency problem of existing decoders: the extra column makes it possible to compute all extrinsic messages in parallel with a dedicated hardware implementation. Both parallel and serial hardware architectures for the T-EMS algorithm are presented in this manuscript. The complexity analysis shows that the T-EMS approach is particularly well suited to non-binary LDPC codes over Galois fields of small and moderate order.

Abstract

This thesis is dedicated to the study of iterative decoders, for both binary and non-binary low density parity check (LDPC) codes. The objective is to design low complexity and low latency decoders which have good performance in the error floor region.

In the first part of the thesis, we study the recently introduced finite alphabet iterative decoders (FAIDs). Using the large number of available FAIDs, we propose a decoding diversity algorithm to improve the error correction capability of binary LDPC codes with variable node degree 3 over the binary symmetric channel. The decoder diversity framework makes it possible to guarantee error correction under iterative decoding beyond the pseudo-distance of the LDPC code. We give a detailed example of a set of FAIDs which corrects all error patterns of weight 7 or less on a short structured (N=155, K=64, Dmin=20) LDPC code, while traditional decoders (BP, min-sum) fail on some 5-error patterns. Then, by viewing the FAIDs as dynamical systems, we analyze the behavior of FAID decoders on chosen problematic error patterns. Based on the observation of approximately periodic trajectories for the most harmful error patterns, we propose an algorithm which combines decoding diversity with random jumps in the state space of the iterative decoder. Simulations show that this technique can approach the performance of maximum likelihood decoding for several codes.

In the second part of the thesis, we propose a new complexity-reduced decoding algorithm for non-binary LDPC codes called trellis extended min sum (T-EMS). By transforming the message domain to the so-called delta domain, we are able to choose row-wise deviations from the most reliable configuration, while usual EMS-like decoders choose the deviations column-wise. This row-wise selection of the deviations enables us to reduce the decoding complexity without any performance loss compared to EMS. We also propose to add an extra column to the trellis representation of the messages, which solves the latency issue of existing decoders: the extra column makes it possible to compute all extrinsic messages in parallel, with proper hardware implementation. Both the parallel and the serial hardware architectures for T-EMS are discussed. The complexity analysis shows that T-EMS is especially suitable for high rate non-binary LDPC codes on small and moderate fields.

List of Figures

1.1 Block diagram of a digital communication system
2.1 Tanner graph representation for B-LDPC codes
2.2 Tanner graph representation for NB-LDPC codes
2.3 Graphical representation of several trapping sets
2.4 Extrinsic message update illustration
2.5 Performance comparison of BP, MS and offset MS on a MacKay code with degree (3,6) over AWGN
2.6 Frame error rate comparison of BP and FAID under BSC
2.7 Message notations for decoding NB-LDPC codes
2.8 Trellis representation for a NB-LDPC code on GF(4) with dc = 5
2.9 FER comparison of decoding algorithms for NB-LDPC codes on GF(8) over AWGN
2.10 FER comparison of decoding algorithms for NB-LDPC codes on GF(64) over AWGN
3.1 Recursive implementation of a variable node (case dv = 6)
3.2 Example of a (5,3) trapping set corrected with FAID
3.3 Typical ways in which decoder diversity can correct all error patterns from a pre-determined set E
3.4 Number of remaining uncorrected 7-error patterns with sequential use of FAIDs in the diversity sets
3.5 FER results on the (155,64,20) Tanner code with guaranteed error correction of 7 errors
3.6 A periodic trajectory of the Bethe free entropy function (3.12) for a problematic error pattern on the (N = 155, K = 64, dmin = 20) Tanner code
3.7 Frozen trajectory of the Bethe free entropy function (3.12) for a problematic error pattern on the (N = 155, K = 64, dmin = 20) Tanner code
3.8 Oscillating trajectory of the Bethe free entropy function (3.12) for a problematic error pattern on the (N = 155, K = 64, dmin = 20) Tanner code
3.9 Trajectory of the Bethe free entropy function (3.12) for an 8-error pattern on the (N = 155, K = 64, dmin = 20) Tanner code
3.10 Distribution of periods of FAIDs for problematic error events
3.11 FAID-diversity results on the rate R = 0.4 (N = 155, K = 64, dmin = 20) Tanner code [1]
3.12 FAID-diversity results on the rate R = 0.7 (N = 530, K = 371, dmin = 14) QC-LDPC code from [2]
3.13 FAID-diversity results on the rate R = 0.5 (N = 2640, K = 1320, dmin = 40) Margulis code
4.1 Trellis for a check node in the EMS with nm = 3
4.2 Trellis on GF(4) for delta message and delta index
4.3 Offset optimization through Monte-Carlo simulation
4.4 Filling the extra column, output index 0
4.5 Filling the extra column, output index 1
4.6 Filling the extra column, output index α
4.7 Filling the extra column, output index α^2
4.8 Using the result of 0 in the extra column to fill the dc outputs
4.9 Using the result of 1 in the extra column to fill the dc outputs
4.10 Using the result of α in the extra column to fill the dc outputs
4.11 Using the result of α^2 in the extra column to fill the dc outputs
4.12 Filling the missing entries with the minimum or the second minimum of the corresponding row
4.13 Message mapping and trellis representation for a check node in cluster decoding; (a) shows the message mapping for H1,4 in (4.13), and (b) the trellis representation for the first check node in (4.13)
4.14 Comparison of the numbers of configurations (Nc[EMS], Nc[EMS-FB], Nc[T-EMS]) used for the check-node update in the different algorithms
4.15 Performance comparison of BP and T-EMS for a code on GF(4) of length 1536, (dv, dc) = (4,32)
4.16 Performance comparison for a code on GF(4) of length 3888, (dv, dc) = (3,27)
4.17 Performance comparison for a code on GF(8) of length 1536, (dv, dc) = (4,32)
4.18 Performance comparison for a code on GF(8) of length 155, (dv, dc) = (3,5)
4.19 Performance comparison for a code on GF(8) of length 155, (dv, dc) = (3,5) with different nr
4.20 Performance comparison for a code on GF(64) of length 192, (dv, dc) = (2,8)
4.21 Performance comparison of cluster decoding for cluster size (6,3) with T-EMS and BP, code on GF(64) of length 192, (dv, dc) = (2,8)
4.22 Performance comparison of prob-BP and T-EMS on a cluster code with length Ns = 1024 and dc = 8, cluster size (8,4)
4.23 CNU micro-architecture, parallel implementation
4.24 Block parallel layered decoder for T-EMS
4.25 Block serial decoder for T-EMS

List of Tables

2.1 Binary and polynomial representation of elements on GF(8)
3.1 LUT representation of Φv(−C, Mi, Mj) for a 7-level FAID
3.2 Number of Class A decoders
3.3 Trapping set spectrum and low-weight codeword spectrum of the (155,64) Tanner code
3.4 Cardinalities of the error sets considered for the (155,64) Tanner code
3.5 t-guaranteed error correction capability of different decoders on the (155,64) Tanner code
3.6 Number of FAIDs for guaranteed error correction
3.7 Statistics on the error correction of several FAIDs used in the decoder diversity sets
3.8 List of some 7-level FAIDs used in this chapter; the first nine FAIDs guarantee an error correction of t = 6 on the (155,64) Tanner code
4.1 ASIC synthesis results for codes on GF(4)
4.2 Estimated results for the CNU unit of T-EMS, (837,726) code over GF(32)


Chapter 1

Introduction

After the discovery of Turbo codes and the rediscovery of Low-density Parity-check (LDPC) codes, iterative decoding algorithms have been widely used for decoding error correction codes. In this thesis, we study two aspects of these iterative algorithms. One is a low complexity, low latency iterative decoding algorithm for non-binary LDPC (NB-LDPC) codes called the trellis based extended min sum (T-EMS) algorithm. The other relates to two decoding diversity algorithms based on the recently introduced finite alphabet iterative decoding (FAID) algorithms. The background and motivations of our work are discussed in this chapter.

1.1 Digital Communication System and Error Correction Codes

1.1.1 Digital Communication System

The block diagram of a traditional communication system is shown in Figure 1.1. Generally it contains three main parts to transmit information from source to destination: transmitter, channel and receiver. The transmitter and receiver are two entities communicating through the channel, where noise or perturbation corrupts the signal. The transmitter is composed of five parts: source, source encoding, encryption, a channel encoder and a modulator. Correspondingly, the receiver includes a demodulator, a channel decoder, decryption, source decoding and destination [3].

In the transmitter part, the natural data (images, audio, etc.) are first collected and converted to digital format in the source block; a compression process is then applied in the source encoding block to remove the signal redundancy and, as a consequence, improve the data rate. Encryption helps to communicate securely by using an algorithm (called a cipher) to make the information unreadable to anyone except those possessing special knowledge, usually referred to as a key. After that, the channel coding adds redundancy to the data. This redundancy helps make the data more robust to the errors introduced by the channel. The ratio between the information data and the transmitted data is called the coding rate R of the code, which is a parameter of the coding system. After modulation, the information is transmitted to the receiver through the channel, and the channel adds degradation to the transmitted data. The degradation includes noise modeled as additive white Gaussian noise (AWGN), inter-symbol interference (ISI) due to multi-path fading, and multi-user interference in multi-user systems.

Figure 1.1: Block diagram of a digital communication system

Correspondingly, in the receiver part, messages received from the channel are first demodulated to bit streams. The channel decoder applies an error correcting procedure to correct the errors introduced by the channel. Decryption is the reverse process of encryption, making the encrypted information readable again. The source decoder then decompresses the received information and reconstructs the transmitted data.

Channel coding is a technique used to cope with errors in data transmission over unreliable or noisy communication channels. The principle of channel coding is to add redundancy to the transmitted information and then use this redundancy to reconstruct the information after it has been distorted by the channel noise. The channel encoder encodes a message of K bits into a codeword of N bits, as shown in (1.1). The K bits are called the information bits, whereas the M = N − K redundant bits are called the parity checks. The ratio R = K/N is called the rate of the code.

$$\{0,1\}^K \longrightarrow \mathcal{C} \subseteq \{0,1\}^N \qquad (1.1)$$

The decoding algorithm aims at finding the codeword which is closest to the signal received from the demodulator. The strength of an error correcting code is measured by its minimum distance dmin, which is the smallest distance between any two elements of the codeword set C. If the received signal is at a distance greater than dmin/2 from the transmitted codeword, the codeword nearest to the received signal may not be the initially transmitted one. Therefore dmin plays an important role in the error correction capability of a code.

The decoding procedure is thus an algorithmic problem in which we search for the nearest codeword to the received signal in a multidimensional space. The optimal brute force decoder consists of an exhaustive sequential search for the closest codeword, which makes the decoding procedure very complex, i.e. of the order O(2^K). To circumvent this large complexity, various sub-optimal decoding algorithms were proposed over the past decades.
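To make the O(2^K) cost concrete, here is a minimal Python sketch of such an exhaustive search; the helper name and the toy generator matrix are our own illustration, not material from the thesis:

```python
import itertools

def brute_force_ml_decode(y, G):
    """Try all 2^K messages, encode each with the K x N generator
    matrix G over GF(2), and keep the codeword at smallest Hamming
    distance from the received word y."""
    K, N = len(G), len(G[0])
    best_cw, best_dist = None, N + 1
    for msg in itertools.product([0, 1], repeat=K):
        # encode: c = msg * G (mod 2)
        cw = [sum(msg[k] * G[k][n] for k in range(K)) % 2 for n in range(N)]
        dist = sum(c != b for c, b in zip(cw, y))  # Hamming distance to y
        if dist < best_dist:
            best_cw, best_dist = cw, dist
    return best_cw

# (3,1) repetition code: a single flipped bit is corrected.
print(brute_force_ml_decode([1, 0, 1], [[1, 1, 1]]))  # -> [1, 1, 1]
```

The loop over all 2^K messages is exactly what makes this decoder impractical beyond toy code lengths.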

1.1.2 Error Correction Codes History

In 1948, Claude E. Shannon laid the mathematical foundation of modern information theory and created the very first systematic framework for communication in his landmark paper [4]. In this paper, he presented his famous channel coding theorem, which introduced the concept of redundant channel coding as a method to achieve reliable communication on a noisy channel with known capacity. In particular, he proved that for sufficiently long codes, arbitrarily reliable communication is possible at any coding rate below the channel capacity, but that no such coding method exists when more information than the channel capacity is transmitted. Since then, the challenge of channel coding has been to design practical coding and decoding solutions that approach the channel capacity. The famous channel coding theorem can be phrased as follows:

Theorem 1.1 (Channel coding theorem): Let C represent the capacity of an arbitrary discrete memoryless channel. For any data rate R < C and any error probability p > 0, a pair of encoder and decoder can be designed to ensure that data transmission at rate R has a decoding error probability less than p.

The channel coding theorem points out that on a noisy channel, as long as the information transmission rate is smaller than C, an arbitrarily low communication error rate can be reached. But, being non-constructive, the proof does not provide a specific solution for capacity-achieving codes and practical decoding algorithms. Three basic principles were given in Shannon's proof [4]:

1. the use of randomly constructed codes;
2. infinite codeword length, N → +∞;
3. the use of the optimal maximum likelihood (ML) decoding algorithm.

Since then, developments in the construction of good codes have generally followed the basic principles prompted by Shannon. Linear block codes, the first to be put forward, are mathematically based on algebraic theories of groups, fields and rings, and on relevant geometric theories. Based on algebraic and geometric methods, researchers invented Hamming codes, BCH codes, RS codes (M-ary BCH codes, algebraic-geometric codes), the Golay code, Goppa codes (rational fraction codes), etc. Among these codes, BCH codes [5] are widely used in telecommunication systems, since their structure is simple and their encoding and decoding algorithms have tractable complexity.

Convolutional codes are another important category of channel coding methods. The introduction of registers into the encoding process increases the correlation among the bits of the codewords, which leads to larger coding gain than block codes. With the advent of various decoding methods for convolutional codes, especially the Viterbi decoding algorithm [6], convolutional codes were gradually adopted in more applications. Later, trellis coded modulation (TCM) technology further contributed to the development and application of convolutional codes.

However, traditional error correcting codes, like linear block codes and convolutional codes, are neither practical nor strong enough to approach the Shannon limit. Under such circumstances, Forney [7] proposed concatenated codes, which achieve performance close to the Shannon limit while reducing the implementation complexity through the concatenation of outer and inner codes. This was a stride towards practical good codes.

It was not until the 1990s that C. Berrou et al. used the ideas of parallel concatenation of convolutional codes, interleaving and iterative decoding [8] to put forward Turbo codes, which broke through the limitation of taking the cut-off rate as the reliable communication rate threshold. In Turbo codes, the iterative decoding algorithm exchanges information iteratively between two concatenated convolutional codes. This iterative evolution of the messages enables the performance of Turbo codes to approach the Shannon limit; in [8], the authors gave a coding scheme that comes within 0.7 dB of it.


Consequently, new challenges arose with the introduction of Turbo codes, such as the error floor, latency and complexity issues in decoding. In communication and storage applications, which require low frame error rates and good latency, the error floor is the most important among them. For modern sparse graph-based error correcting codes, there is a signal-to-noise ratio (SNR) after which the performance curve no longer falls as quickly as before; in other words, there is a region where the performance flattens. This region is called the error floor region. The region just before the sudden drop in performance is called the waterfall region.

Inspired by Turbo codes, many graphical models and iterative decoding algorithms were re-investigated and yielded many remarkable results. D.J. MacKay, M. Neal, N. Wiberg et al. [9, 10, 11] rediscovered LDPC codes, invented as early as 1963 by R.G. Gallager [12], whose performance can also approach the Shannon limit with low complexity. After the rediscovery of LDPC codes, Luby et al. [13] put forward the concept of irregular LDPC codes in 1997 and proved that they are more advantageous than regular ones. T.J. Richardson, R.L. Urbanke et al. [14, 15] summarized and developed Luby's analytical method, proposed the methodology of Density Evolution (DE), and analyzed the capacity of LDPC codes under message passing algorithms, contributing greatly to the study and application of LDPC codes. Under the guidance of these analytical methods, researchers were able to construct long irregular LDPC codes which approach the Shannon limit within 0.0045 dB [16].

At the same time, efforts were also focused on reducing the decoding complexity of binary LDPC codes. The original belief propagation (BP) decoding algorithm operates in the probability domain [11, 12], which is not suitable for hardware implementation. Researchers therefore proposed an algorithm in the logarithm domain [17] and its simplified versions, the min sum (MS) algorithm along with its offset and scaled variants [18, 19].

After all these developments, binary LDPC (B-LDPC) codes were adopted as coding schemes in many standards like DVB-S2, WiMAX, DSL, W-LAN, 802.16, etc. However, binary LDPC codes start to show their weakness when the code length is short or moderate, or when higher order modulation is used for transmission [20, 21]. In particular, the error floor problem in the high signal-to-noise ratio (SNR) region becomes very serious for applications which require a frame error rate (FER) lower than 10^-9.

The recently introduced finite alphabet iterative decoders (FAIDs) [22, 23] specifically address the error floor issue. FAIDs are very efficient in the error floor region, and surprisingly, this efficiency is achieved with very simple update rules and messages represented by only 3 quantization bits. Moreover, FAIDs do not rely on complicated post-processing techniques, which are usually proposed to lower the error floor. Based on the FAID algorithms, we will analyze the iterative decoder's behavior and try to improve the error floor performance further with FAID diversity decoding algorithms.

As the non-binary counterparts of B-LDPC codes, NB-LDPC codes were first proposed in [24]; they show better performance for codes of smaller length defined over higher field orders. More importantly, NB-LDPC codes typically have a much lower error floor than B-LDPC codes. But the high decoding complexity of the check node update and the long decoding latency are often seen as the bottleneck for the wide application of NB-LDPC codes. The probability domain BP and its fast Fourier transform (FFT) implementation for NB-LDPC codes are explained in [24], but probability domain algorithms are not suitable for practical implementation. In [25], the authors gave a Fourier domain decoding algorithm for NB-LDPC codes which interchanges the decoding complexity at the variable nodes with that at the check nodes; this algorithm, however, has no logarithm domain counterpart because of the nonlinear Fourier transform. In [26], the authors presented the logarithm BP for NB-LDPC codes and introduced the elementary step to reduce the decoding complexity. Then, in 2006, the extended min sum (EMS) decoding algorithm was introduced by David Declercq et al. [27, 28]. The EMS partly solved the check node decoding complexity and the hardware resource problem of NB-LDPC codes by using only the nm most reliable entries of each incoming message vector. But the latency of EMS is still too large because of the use of elementary steps, especially for high rate codes.

1.2 Motivation and Objectives

Although LDPC codes have now been accepted as coding schemes in many standards and show very impressive performance, the behavior of their iterative decoders has not been well studied yet, especially in the error floor region. In this thesis we focus on two kinds of decoding methods to improve the performance in the error floor region.

It is widely accepted that the degradation in performance of an iterative decoder for LDPC codes in the error floor region [20, 29, 30] is due to correlation of the messages passed between the nodes of the Tanner graph. This dependence is induced by special topological structures of the code, which include cycles, especially when they are involved in low weight error events. These special topological structures have different names under different channels and different decoding algorithms: they are called stopping sets on the binary erasure channel (BEC), trapping sets (TS) on the binary symmetric channel (BSC), and pseudo-codewords on the additive white Gaussian noise (AWGN) channel. They are the main reason why iterative decoders fail in the error floor region. There are three approaches we can follow to overcome the influence of small topologies:

1. Graph design (code construction): A good LDPC code should contain as few short cycles as possible. In code design, the most important parameter to optimize is the girth, which is the length of the smallest cycle in the graph. Large girth and few small cycles mean that there are few low weight trapping sets in the code graph. A lot of papers already focus on this topic [31, 32].

2. Better decoding algorithms: In order to cope with the problematic topological structures in the graph of the code, more sophisticated decoders need to be considered. FAIDs can solve more problematic trapping sets than traditional iterative decoders like BP and MS on the BSC [22, 33].

3. NB-LDPC codes: Simulation results show that NB-LDPC codes have a much lower error floor than B-LDPC codes, because they have sparser Tanner graphs containing fewer problematic topological structures than their binary counterparts. But the decoding algorithms for NB-LDPC codes are too complicated, which is the key bottleneck for their application.

In this thesis, we focus on the second and third points. In order to improve the error floor region performance we need to find a more sophisticated iterative decoder. Although iterative decoding (ID) algorithms are widely used in modern coding systems, their behavior is not well understood yet. A decoder can be regarded as a non-linear dynamic system (DS) [34, 35], and theories and tools applied in DS analysis, such as work on discrete chaos theory, can be used to study the behavior of ID. But two problems come in the way. First, the dimension of an ID is usually too large. Second, the behavior of a DS is very sensitive to initial conditions and data precision, so that a very small change in the messages can totally alter the trajectory of the system state. This makes it somewhat difficult to analyze traditional probability or logarithm domain iterative decoding algorithms. So in order to understand the behavior of an ID system, we need to deal with two issues: one is the system dimension, the other is the precision of the messages exchanged between nodes. For the dimension problem, it is convenient to project the system state vector (consisting of all messages in the system) onto one or two scalar values, such as an entropy [35]. To further simplify the dimension issue, we use a short structured code as our first target in this thesis.

As for the data precision problem, quantized logarithm domain algorithms [36, 37] need rather high quantization levels, like 6 or 7 bits, which makes the dynamical behavior analysis of these decoders very difficult. Based on the idea of using a look up table (LUT) to represent the update rules for check nodes in quantized Min Sum (MS), finite alphabet iterative decoding (FAID) algorithms were introduced. With these new decoding algorithms, the update rules for check nodes and variable nodes are described by discrete functions or LUTs.

Although the most popular types of iterative decoders, such as the belief propagation (BP) decoder and the min-sum decoder, are trapped by TS attractors, the FAID decoders proposed in [22, 23, 33] can be specifically designed to avoid being trapped by the dominant TS of an LDPC code, and can therefore correct error events that are not correctable by BP or min-sum. These FAID decoders have demonstrated their superiority over traditional ID in the error floor region (low FER region), at the cost of a negligible loss of performance in the waterfall region (high FER region). As mentioned above, this efficiency is achieved with very simple update rules and messages represented by only 3 quantization bits, without relying on complicated post-processing techniques [38]. This low quantization level partly solves the data precision problem in the dynamic behavior analysis of iterative decoders.

Additionally, the FAID framework allows us to define a plurality of different iterative decoders [33], with different dynamical behaviors on the received word, making it possible for a set of FAIDs to collectively correct an even larger number of dominant error patterns. The sequential or parallel use of a set of different FAIDs with the goal of increasing the error correction performance of an LDPC code is termed "FAID diversity". In [39], this concept was used to increase the guaranteed error correction of regular dv = 3 LDPC codes under iterative decoding.

In the example of the short regular (N = 155, K = 64, Dmin = 20) Tanner code, we managed to identify sets of FAID decoders with guaranteed error correction increased from t = 5 errors to t = 7 errors, which represents a gain of 3 decades in the error floor region compared to the BP decoder.

In this thesis, the approach of FAID diversity is also extended and combined with random re-initializations of the decoders' state vectors. By using random dynamical re-initializations, we make use of the inherent oscillating behavior of iterative decoders around attractors to improve convergence. By combining decoder diversity and dynamically re-initialized decoders, we are able to approach very closely the performance of MLD on several finite length regular dv = 3 LDPC codes over the BSC.

The other way to improve the performance in the error floor region is to use NB-LDPC codes. In simulations, NB-LDPC codes show a much lower error floor than their binary counterparts. The reason is that NB-LDPC codes have sparser graphs: for the same code rate and binary length, the graph corresponding to a NB-LDPC code is typically less dense than the graph corresponding to a B-LDPC code [24]. As a consequence, the graph corresponding to the NB-LDPC code has better topological properties, i.e. larger girth and fewer problematic topological structures like stopping sets, trapping sets or pseudo-codewords.

However, the advantages of using NB-LDPC codes come at the price of growing decoding complexity. For a code defined in GF(q), the BP and logarithm BP decoding algorithms have a decoding complexity of order O(q^2) [26]. Similarly, the memory required for storing messages is of order O(q). Consequently, the implementation of a NB-LDPC decoder defined over a field of order q > 64 becomes practically impossible. Although a lot of low-complexity decoders have been proposed, like min-max [26, 40] and EMS [27, 28] with the elementary step and the bubble check algorithm [41], some issues remain to be studied, especially the decoding complexity and latency problem for high rate codes. So the second main objective of this thesis is to develop an algorithm with reduced complexity and good latency for NB-LDPC codes.

1.3 Organization of the thesis

The thesis is organized as follows:

In Chapter 2, we give a basic introduction to LDPC codes and their corresponding parameters. First, the definition of LDPC codes and their graphical representation are discussed. Then we give a brief description of the traditional iterative decoding methods for both binary and non-binary LDPC codes. Through the review of these different decoding methods, the reader can gain a basic understanding of our motivations.

In Chapter 3 we propose a new decoding approach based on FAIDs [23, 33]. We especially focus on the idea of using multiple FAIDs to improve the error correction in the error floor region, an approach we call FAID-diversity. With the decoding diversity method we can approach the ML decoding bound on the BSC. Meanwhile, we also give a dynamic analysis of this phenomenon by interpreting the iterative decoder (ID) as a dynamic system (DS). By studying the evolution of the decoders' trajectories, we briefly show how the diversity decoding methods improve the iterative decoder's performance in the error floor region [39, 42, 43].

Then, in Chapter 4, we present a new complexity-reduced and latency-saving iterative decoding algorithm for NB-LDPC codes [44, 45] based on the EMS algorithm, which we call trellis-EMS (T-EMS). This new decoding algorithm reduces the size of the so-called configuration sets used in the check node update. Another feature of T-EMS is that it saves decoding latency by just adding one extra column to the message trellis. The serial and parallel hardware models are also discussed. Through resource estimation, we show that T-EMS makes the decoding complexity of NB-LDPC codes more practical, especially for codes defined on moderate fields and with high rates.

We conclude our work in Chapter 5 and give some perspectives for future work.


Chapter 2

LDPC codes and Iterative Decoding algorithms

In this chapter, the definitions, notations and graphical representation of binary and non-binary LDPC codes are introduced, followed by a brief review of the traditional iterative decoding algorithms for binary and non-binary LDPC codes. Furthermore, their advantages and disadvantages are presented through comparison. After the introduction and comparison of these decoding algorithms, the motivations of our work can be seen more clearly.

2.1 LDPC codes and their graphical representation

In this section, we define LDPC codes, introduce their graphical representation and then discuss the code parameters and characteristics described by the graph.

2.1.1 Binary and Non-Binary LDPC codes

Low density parity check codes were first introduced in 1963 by R.G. Gallager in [12]. Since their rediscovery in 1996 by D.J.C. MacKay [11], they have become among the most widely used error correcting codes. LDPC codes, together with Turbo codes, constitute what we call modern error-correcting codes. This type of coding scheme features the use of graphical representations and iterative decoding algorithms. Over the last decade, LDPC codes have been widely adopted in standards like DVB-S2 and IEEE 802.11 and 802.16.

LDPC codes belong to the class of linear block codes, which can be defined by a parity check matrix H of size M × N as shown in (2.1); an example matrix H is given in (2.2), where all elements of H take values 0 and 1. The number of columns N of H represents the code length and the number of rows M represents the number of parity check functions that the code needs to satisfy. The number of nonzero elements in each row is denoted as the check node degree dc, while the number of nonzero elements in each column is denoted as the variable node degree dv. When dv and dc are both constant, the code is said to be a regular LDPC code; otherwise the code is irregular. Generally the density of nonzero elements is very low, which is why the codes are called low density parity check codes.

$$\mathcal{C}_H = \left\{ \mathbf{c} \in \mathrm{GF}(2)^N \;\middle|\; H_{M \times N}\,\mathbf{c} \overset{\mathrm{GF}(2)}{=} \mathbf{0} \right\} \qquad (2.1)$$

$$H = \begin{pmatrix}
1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 \\
1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 1 \\
1 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1
\end{pmatrix} \qquad (2.2)$$
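As a quick illustration of the membership test in (2.1), the following NumPy sketch (ours, not code from the thesis) checks H c = 0 over GF(2) for the example matrix (2.2):

```python
import numpy as np

# Example parity-check matrix H from (2.2): M = 5 check nodes (rows),
# N = 10 variable nodes (columns).
H = np.array([[1, 1, 1, 1, 0, 1, 1, 0, 0, 0],
              [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
              [0, 1, 0, 1, 0, 1, 0, 1, 1, 1],
              [1, 0, 1, 0, 1, 0, 0, 1, 1, 1],
              [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]], dtype=np.uint8)

def is_codeword(c, H):
    """Membership test of (2.1): c is a codeword iff H c = 0 over GF(2)."""
    return not np.any((H @ c) % 2)

print(is_codeword(np.zeros(10, dtype=np.uint8), H))  # True: all-zero word
e0 = np.eye(10, dtype=np.uint8)[0]                    # weight-1 word
print(is_codeword(e0, H))                             # False: syndrome != 0
```

Summing the rows and columns of this H confirms it describes a regular code with dc = 6 and dv = 3.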

Like binary LDPC (B-LDPC) codes, non-binary LDPC (NB-LDPC) codes can also be defined by a low density parity check matrix H, but each element hij of H is now an element of the Galois field GF(q = 2^p) (q can take other values which are not powers of 2, but in this thesis we only deal with codes on extension fields of GF(2)). Thus NB-LDPC codes can be defined as:

$$\mathcal{C}_H = \left\{ \mathbf{c} \in \mathrm{GF}(q)^N \;\middle|\; H\,\mathbf{c} \overset{\mathrm{GF}(q)}{=} \mathbf{0} \right\} \qquad (2.3)$$

The addition and multiplication in (2.3) are carried out in the field GF(q). Let α denote the primitive element of GF(q); all the elements of this field are then {0, 1, α, α^2, ..., α^(q−2)}. Each Galois field can be defined by a p-th order primitive polynomial p(x), whose coefficients are taken from GF(2). Using this primitive polynomial and primitive element, we can represent all the elements of GF(q) by binary sequences of length p, called the binary map of a symbol. So the NB-LDPC codeword {c1, c2, ..., ci, ..., cn} can be represented as {b11, ..., b1p, b21, ..., b2p, ..., bi1, ..., bip, ..., bn1, ..., bnp}, where {bi1, ..., bip} is the binary map of ci.

Take GF(8) as an example. Let the primitive polynomial be 1 + x + x^3 and the primitive element be α. Accordingly, the companion matrix [46] of this polynomial is:

$$H = \begin{pmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 1 & 0
\end{pmatrix} \qquad (2.4)$$

Then the primitive element α should satisfy the equation below:

$$1 + \alpha + \alpha^3 = 0$$

All the non-zero elements of GF(8) can then be represented by α as {α^0, α^1, α^2, α^3, α^4, α^5, α^6}. Using the equation α^3 + α + 1 = 0, the polynomial representation of the non-zero elements and their corresponding binary sequence representation [47] can be expressed as in Table 2.1:

Element   Binary Rep.   Polynomial Rep.
0         000           0
α^0       100           1
α^1       010           α
α^2       001           α^2
α^3       110           1 + α
α^4       011           α + α^2
α^5       111           1 + α + α^2
α^6       101           1 + α^2

Table 2.1: Binary and polynomial representation of elements on GF(8)
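The binary maps in Table 2.1 can be generated mechanically from the reduction rule α^3 = 1 + α. Here is a small Python sketch of that computation (our illustration; the bit conventions are assumptions chosen to match the table's ordering):

```python
# GF(8) elements are stored as 3-bit integers b2 b1 b0 standing for
# b0 + b1*x + b2*x^2; the primitive polynomial is p(x) = 1 + x + x^3,
# so a degree-3 term x^3 reduces to x + 1 (bit pattern 0b011).

def gf8_mul_by_alpha(a):
    """Multiply a GF(8) element by the primitive element alpha (= x)."""
    a <<= 1
    if a & 0b1000:                  # overflow into x^3: reduce modulo p(x)
        a = (a & 0b0111) ^ 0b011
    return a

# Reproduce the 'Binary Rep.' column of Table 2.1, printed with the
# coefficient of 1 first, as in the table.
a = 1                               # alpha^0
for k in range(7):
    bits = f"{a & 1}{(a >> 1) & 1}{(a >> 2) & 1}"
    print(f"alpha^{k}: {bits}")     # 100, 010, 001, 110, 011, 111, 101
    a = gf8_mul_by_alpha(a)
```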

This binary vector representation of the non-binary symbols, also called the binary image of the non-binary symbols [46], helps us to understand the cluster decoding and encoding of NB-LDPC codes. By clustering the binary parity check matrix, we can use decoders for NB-LDPC codes to decode traditional error correction codes like BCH codes [48].

2.1.2 Graphical Representation and Parameters of LDPC codes

The parity check matrix H of an LDPC code has a one-to-one mapping with a bipartite graph, or Tanner graph [49, 31], as shown in Figure 2.1. In this graphical representation, also called a factor graph [50], there are N entries (columns) on one side, each representing one variable node (VN, black circles), and M entries (rows) on the other side, each corresponding to one check node (CN, black squares). If there is a non-zero element in the position of the i-th row and j-th column of matrix H, then the corresponding variable node and check node are connected in the graph. The number of edges connected to a node is called the degree of the node, denoted dv for a VN and dc for a CN.

The graphical representation for NB-LDPC codes, shown in Figure 2.2, is similar to Figure 2.1, except that a new type of node, called a permutation node, corresponds to each edge. The function of the permutation node is to permute the messages between variable nodes and check nodes according to the non-zero elements hij of H, as in (2.3).


Figure 2.1: Tanner graph representation for B-LDPC codes

Figure 2.2: Tanner graph representation for NB-LDPC codes
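In software, the Tanner graph is conveniently stored as two adjacency lists derived from H; the sketch below (a hypothetical helper, not from the thesis) makes the node degrees dv and dc fall out as list lengths:

```python
def tanner_graph(H):
    """Build the Tanner graph of H as adjacency lists: for each variable
    node, the check nodes it touches, and vice versa."""
    M, N = len(H), len(H[0])
    vn_adj = [[i for i in range(M) if H[i][j]] for j in range(N)]  # per VN
    cn_adj = [[j for j in range(N) if H[i][j]] for i in range(M)]  # per CN
    return vn_adj, cn_adj

H = [[1, 1, 0, 1],
     [0, 1, 1, 0],
     [1, 0, 1, 1]]
vn_adj, cn_adj = tanner_graph(H)
print([len(a) for a in vn_adj])  # variable node degrees dv(j)
print([len(a) for a in cn_adj])  # check node degrees dc(i)
```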

Following this graphical representation of LDPC codes, we now give some related definitions and notations, which will be used in the rest of the thesis.

Degree distribution: We say that a polynomial γ(x) of the form $\gamma(x) := \sum_{i \ge 2} \gamma_i x^{i-1}$ is a degree distribution if γ(x) has nonnegative coefficients and γ(1) = 1 [51, 15, 14]. The degree distributions for CNs and VNs can be denoted by the two functions below:

$$\lambda(x) := \sum_{i=2}^{d_v} \lambda_i x^{i-1}, \qquad \rho(x) := \sum_{j=2}^{d_c} \rho_j x^{j-1} \qquad (2.5)$$

where λi (ρj) is the proportion of edges emanating from variable (check) nodes of degree i (j). The parameters dv and dc are the largest degrees for VNs and CNs.

Code ensemble: A code ensemble of LDPC codes is the set of codes with the same degree distribution (λ(x), ρ(x)) and length N; we denote it as CN(λ(x), ρ(x)) [15, 14].

In CN(λ(x), ρ(x)), the number of VNs of degree i is:

$$\mathrm{nb}_{VN}(i) = N \, \frac{\lambda_i / i}{\sum_{k \ge 2} \lambda_k / k} = N \, \frac{\lambda_i / i}{\int_0^1 \lambda(x)\,dx} \qquad (2.6)$$

The number of CNs of degree j (assuming we have M check nodes) is:

$$\mathrm{nb}_{CN}(j) = M \, \frac{\rho_j / j}{\sum_{k \ge 2} \rho_k / k} = M \, \frac{\rho_j / j}{\int_0^1 \rho(x)\,dx} \qquad (2.7)$$

The total number of edges in the code graph can be calculated with equation (2.6) as:

$$\mathrm{nb}_E = \sum_{i \ge 2} \frac{N \, \lambda_i / i}{\int_0^1 \lambda(x)\,dx} \times i = \frac{N}{\int_0^1 \lambda(x)\,dx} \qquad (2.8)$$

It can also be calculated with (2.7) as $\mathrm{nb}_E = M / \int_0^1 \rho(x)\,dx$. So the code rate for CN(λ(x), ρ(x)) can be written as:

$$R(\lambda(x), \rho(x)) = \frac{N - M}{N} = 1 - \frac{\int_0^1 \rho(x)\,dx}{\int_0^1 \lambda(x)\,dx} \qquad (2.9)$$

The actual rate of a given code may be higher, since the M parity check equations might not all be independent, but we shall generally ignore this possibility [14].
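Equation (2.9) is easy to check numerically. A small Python sketch (ours, for illustration), with λ and ρ represented by dictionaries mapping a degree to its edge fraction:

```python
def integral_01(coeffs):
    """Integral over [0,1] of sum_i coeffs[i] * x^(i-1), i.e. sum_i coeffs[i]/i."""
    return sum(f / i for i, f in coeffs.items())

def design_rate(lam, rho):
    """Design rate of (2.9) from the edge-perspective degree distributions."""
    return 1.0 - integral_01(rho) / integral_01(lam)

# Regular (dv, dc) = (3, 6) ensemble: lambda(x) = x^2, rho(x) = x^5,
# giving R = 1 - (1/6)/(1/3) = 1/2 as expected.
print(design_rate({3: 1.0}, {6: 1.0}))  # 0.5
```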

Girth: The minimum cycle length gmin in the Tanner graph is defined as the girth of the code.

The girth is the parameter which influences the performance of LDPC codes the most. If there is no cycle in the graph, the iterative decoder is a maximum likelihood (ML) decoder; otherwise it is an approximate ML decoder. As long as the number of iterations is less than (gmin − 1)/2, all messages exchanged on the graph are still independent. But usually the iteration number is larger than (gmin − 1)/2, which renders iterative decoders suboptimal.
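The girth of a given H can be computed by breadth-first search from every node of the Tanner graph: the first non-tree edge met closes a cycle. A minimal sketch (ours, for illustration; cycles in a bipartite graph have even length, so the girth is at least 4):

```python
from collections import deque

def tanner_girth(H):
    """Shortest cycle length in the Tanner graph of H (inf if acyclic).
    Nodes 0..N-1 are VNs, N..N+M-1 are CNs."""
    M, N = len(H), len(H[0])
    adj = [[] for _ in range(N + M)]
    for i in range(M):
        for j in range(N):
            if H[i][j]:
                adj[j].append(N + i)
                adj[N + i].append(j)
    best = float("inf")
    for root in range(N + M):          # BFS from every node, keep the best
        dist, parent = {root: 0}, {root: -1}
        q = deque([root])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif parent[u] != w:   # non-tree edge: closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

H = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(tanner_girth(H))   # 6 for this small example
```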

Two measures are proposed to optimize the design of a code: optimization of the degree distribution and optimization of the graphical structure. Density evolution, proposed in [14, 15], and the EXIT method, proposed in [52], are usually used to design the degree distributions.

However, optimizing the graph structure is much more complicated, and one normally focuses on enlarging the girth of the code, as in quasi cyclic LDPC codes [32, 53]. A more sophisticated way is to avoid the occurrence, or to reduce the number, of trouble-making topological structures like trapping sets (TS), stopping sets or pseudo-codewords. The definition of a TS is given as:

Trapping set [29]: A trapping set is a subgraph formed from the combination of several cycles in the Tanner graph. An (a, b) TS contains a variable nodes and b check nodes of odd degree; these b check nodes have odd numbers of connections to the variable nodes inside the subgraph.

Several typical trapping sets are shown in Figure 2.3. For TSs with the same (a, b) but different structures, we denote the TS with the more detailed notation (a, b; g4^x4 ... gi^xi). The number of bits involved in such a TS is a, and there are b check nodes which have odd connections with these a variable nodes. The value gi represents a cycle of length i, and xi denotes the number of such cycles in the TS. More information about the TS notation can be found in [33].

(a) (4,4; 8^1)   (b) (5,3; 8^3)   (c) (6,4; 8^3 12^1)   (d) (6,4; 8^1 10^2)

Figure 2.3: Graphical representation of several trapping sets

These special topological structures are the main reason why iterative decoders have a high error floor. In the first part of the thesis we will present and study decoding algorithms which can partly eliminate the influence of these structures, in order to improve the error floor performance of LDPC codes on the BSC.


2.2 Iterative decoders for B-LDPC codes

Generally, message passing algorithms require the following features to be specified [54]:

1. the message alphabet M, which could be probabilities, log-likelihood ratio messages or finite alphabet messages, representing the reliability of each node;

2. the initialization of the messages: channel values mv, variable node to check node messages mvc(0) and check node to variable node messages mcv(0);

3. the update functions at the check nodes and at the variable nodes: Φv(·), Φc(·);

4. the final a posteriori probability (APP) estimation function: Ψv(·).

With the above four points defined, the decoding system is fixed. In this section and the next, we present several traditional iterative decoders for both binary and non-binary LDPC codes in terms of these four points (a generic skeleton is sketched after Figure 2.4). For the different versions of the iterative decoders, we may change the notations of the messages depending on the algorithm, but the basic steps of the algorithms are the same.

Without loss of generality, we introduce the iterative decoding algorithms for a regular (dv, dc) LDPC code with N variable nodes and M parity check nodes. The edges connected to a variable node and to a check node are labeled 1, 2, ..., dv and 1, 2, ..., dc respectively. When we calculate the output of a node, we only give the formula for the last edge (the dv-th or dc-th edge), as shown in Figure 2.4. The subscript 'cv' ('vc') means that the message is passed from check (variable) node 'c' ('v') to variable (check) node 'v' ('c'), and a message with the single subscript 'v' denotes the channel message.


Figure 2.4: Extrinsic message update illustration
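To summarize the four ingredients above, the following Python skeleton (our sketch; the function arguments are assumptions, not an interface from the thesis) shows how fixing the alphabet, the initialization, the update rules Φv and Φc and the APP rule Ψv fixes the whole decoder:

```python
def message_passing(channel_msgs, vn_adj, cn_adj, phi_v, phi_c, psi_v,
                    hard_decision, check_syndrome, max_iter=50):
    # initialize check-to-variable messages to a neutral value
    # (0.0 is neutral for LLR-style messages; prob-BP would use 1/2)
    m_cv = {(c, v): 0.0 for c, vs in enumerate(cn_adj) for v in vs}
    for _ in range(max_iter):
        # variable node update Phi_v: extrinsic, so the target edge's
        # own incoming message is excluded
        m_vc = {(v, c): phi_v(channel_msgs[v],
                              [m_cv[(c2, v)] for c2 in vn_adj[v] if c2 != c])
                for v, cs in enumerate(vn_adj) for c in cs}
        # check node update Phi_c, also extrinsic
        m_cv = {(c, v): phi_c([m_vc[(v2, c)] for v2 in cn_adj[c] if v2 != v])
                for c, vs in enumerate(cn_adj) for v in vs}
        # tentative decision from the APP estimate Psi_v
        x_hat = [hard_decision(psi_v(channel_msgs[v],
                                     [m_cv[(c, v)] for c in vn_adj[v]]))
                 for v in range(len(vn_adj))]
        if check_syndrome(x_hat):   # all parity checks satisfied: stop
            return x_hat, True
    return x_hat, False             # maximum iteration number reached
```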


2.2.1 Belief Propagation (BP) algorithm for B-LDPC codes

In this section, we give a brief introduction to the probability domain belief propagation decoding algorithm for B-LDPC codes. In this algorithm, the messages exchanged between the nodes are the probabilities of each node taking the value one or zero. The algorithm can be divided into the following four steps [55].

S1: For each variable node v (v = 1, ..., N), the initial message from the channel is given as:

$$p_v[x] = \Pr(x_v = x \mid y_v), \quad x \in \mathrm{GF}(2)$$

where xv (resp. yv) is the coded symbol (resp. channel value) corresponding to the v-th transmitted symbol, taking values in GF(2) (resp. the real domain R). The initial incoming message to a variable node is pcv(0)[x] = 1/2, x ∈ GF(2). The message domain in BP is M_BP = [0, 1].

S2: For each variable node, the extrinsic output message on the dv-th edge is calculated as:

$$p_{v d_v}^{(l)}[x] = p_v[x] \prod_{c'=1}^{d_v - 1} p_{c' v}^{(l-1)}[x], \quad x \in \mathrm{GF}(2) \qquad (2.10)$$

where l denotes the iteration number. After we obtain the messages pvc(l)[x], we normalize them so that pvc(l)[0] + pvc(l)[1] = 1. The other dv − 1 outputs can be obtained in the same way.

S3: For each check node, the update rule is:

$$p_{c d_c}^{(l)}[x] = \sum_{\bigoplus_{v'=1}^{d_c - 1} x_{v'} = x} \; \prod_{v'=1}^{d_c - 1} p_{v' c}^{(l)}[x_{v'}], \quad x, x_{v'} \in \mathrm{GF}(2) \qquad (2.11)$$

where ⊕ denotes modulo-2 addition on GF(2), and the sum in (2.11) runs over all combinations whose binary sum equals x. We then normalize $p_{c d_c}^{(l)}$ so that pcv(l)[0] + pcv(l)[1] = 1. The convolution in (2.11) can be simplified as [55]:

$$p_{cv}^{(l)}[x] = \frac{1 + (-1)^x \prod_{v'=1}^{d_c - 1} \left( p_{v' c}^{(l)}[0] - p_{v' c}^{(l)}[1] \right)}{2}, \quad x \in \mathrm{GF}(2) \qquad (2.12)$$

S4: In the tentative decision step, the final APP of each bit is calculated as:

$$q_v[x] = p_v[x] \prod_{c=1}^{d_v} p_{cv}^{(l)}[x], \quad x \in \mathrm{GF}(2) \qquad (2.13)$$

We then estimate xv = 0 if qv[0] ≥ qv[1]; otherwise we set xv = 1. If all the parity check functions are satisfied, which means Hx = 0 (x = [x1, x2, ..., xN] is the estimated message vector), we terminate the iterations; otherwise we move to S2 and set the iteration number l = l + 1.

The iterative decoding process is carried out between S2 and S4 until a codeword is found or the maximum iteration number is reached. The probability domain BP (prob-BP) includes a lot of multiplications, which makes the hardware implementation very complicated. Also, prob-BP is very sensitive to the quantization level, which means that more hardware resources are needed to store the messages. In practical applications, the logarithm domain algorithms presented in the next section are used more often.
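As a concrete illustration of S2 and S3, here is a small Python sketch of the update rules (2.10) and (2.12) for a single extrinsic output (our helper names, not code from the thesis; messages are (p[0], p[1]) pairs):

```python
from math import prod

def vn_update(p_channel, incoming):
    """Variable node rule (2.10): componentwise product, then normalize."""
    p0 = p_channel[0] * prod(m[0] for m in incoming)
    p1 = p_channel[1] * prod(m[1] for m in incoming)
    s = p0 + p1
    return (p0 / s, p1 / s)

def cn_update(incoming):
    """Check node rule in its simplified form (2.12)."""
    d = prod(m[0] - m[1] for m in incoming)
    return ((1 + d) / 2, (1 - d) / 2)

# One check node of degree 3: the extrinsic output to the third edge
# uses the other dc - 1 = 2 incoming messages.
print(cn_update([(0.9, 0.1), (0.8, 0.2)]))   # (0.74, 0.26)
print(vn_update((0.9, 0.1), [(0.74, 0.26)])) # channel combined with CN msg
```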

2.2.2 LLR based BP and Min Sum algorithms

In this section, we first introduce the logarithm likelihood ratio based BP (LLR-BP) for LDPC codes, and then discuss simplified versions of LLR-BP such as the min sum (MS) algorithm [19]. In the LLR-BP and MS algorithms, the messages updated between the nodes belong to M_LLR-BP = M_MS = (−∞, +∞). We first define the logarithm message as:

$$L_v = \log\left( p_v[0] / p_v[1] \right) \qquad (2.14)$$

Then we have pv[0] = e^Lv / (e^Lv + 1) and pv[1] = 1 / (e^Lv + 1), which yield pv[0] − pv[1] = tanh(Lv/2). If there are two binary variables u, v, the convolution (2.11) of these two variables can be written as [19]:

$$L_{u \oplus v} = 2 \tanh^{-1}\left( \tanh(L_u/2) \tanh(L_v/2) \right) \qquad (2.15)$$

where $L_{u \oplus v} = \log\left( p_{u \oplus v}[0] / p_{u \oplus v}[1] \right)$. The iterative process for LLR-BP can be described as follows.

S1: The initial message is given as:

$$L_v = \log\left( \Pr(x_v = 0 \mid y_v) / \Pr(x_v = 1 \mid y_v) \right)$$

The initial incoming message to a variable node is Lcv(0) = 0 (since we assume pcv[0] = pcv[1] = 1/2).

S2: The variable node update is:

$$L_{v d_v}^{(l)} = L_v + \sum_{c'=1}^{d_v - 1} L_{c' v}^{(l-1)} \qquad (2.16)$$
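The check node counterpart of (2.16) chains (2.15) over the dc − 1 other incoming messages; the min sum algorithm replaces this tanh rule by a sign-and-minimum approximation. A brief Python sketch of both (ours, for illustration):

```python
import math

def cn_update_llr(incoming):
    """Exact rule obtained by chaining (2.15): L = 2 atanh(prod tanh(L/2))."""
    t = math.prod(math.tanh(L / 2) for L in incoming)
    return 2 * math.atanh(t)

def cn_update_minsum(incoming):
    """Min-sum approximation: keep the product of the signs and
    approximate the magnitude by the smallest incoming magnitude."""
    sign = math.prod(1 if L >= 0 else -1 for L in incoming)
    return sign * min(abs(L) for L in incoming)

msgs = [2.3, -0.7, 1.5]
print(cn_update_llr(msgs))     # about -0.35
print(cn_update_minsum(msgs))  # -0.7 (overestimates the reliability)
```

The overestimation of the output magnitude by min-sum is what the offset and scaled variants [18, 19] mentioned in Chapter 1 are designed to correct.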
