
4.1.3 LDPC Codes Decoding

Linear block codes conventionally employ syndrome-based decoding methods, as explained above, which make use of techniques such as MAP (Maximum a Posteriori) estimation to find the best codeword for a given received vector of bits r. LDPC codes, on the other hand, employ a class of iterative decoding techniques known as 'message-passing' algorithms. The reason for their name is that at each round of the algorithm, messages are passed from variable nodes to check nodes, and from check nodes back to variable nodes.

Chapter 4: Simplistic Algorithm for Irregular LDPC Codes Optimization Based on Wave Quantification

Figure 4.3: Message Passing Phenomenon between Variable and Check Nodes

The message from a variable node to a check node is computed from the observed value of the variable node and all but one of the messages passed to it from the neighboring check nodes: the message sent from variable node v to check node c must not take into account the message sent in the previous round from that same check node c to v. The same exclusion applies to messages passed from check nodes to variable nodes. This phenomenon is pictorially represented in Figure 4.3.
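When the messages are log-likelihood values (introduced below), a variable node combines incoming values additively, so the "all but one" exclusion amounts to subtracting each neighbor's own contribution from the running total. A minimal sketch of this rule (the function name is illustrative, not from the thesis):

```python
def extrinsic_messages(incoming):
    """Outgoing message toward each neighbor k: combine every incoming
    value except the one received from k itself (extrinsic information).
    Additive combining is valid for log-likelihood messages at a
    variable node."""
    total = sum(incoming)
    return [total - x for x in incoming]
```

For example, with incoming values [1.0, 2.0, 3.0] the outgoing messages are [5.0, 4.0, 3.0]: each neighbor receives the sum of the *other* two values.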

4.1.3.1 Belief Propagation

One important subclass of message passing algorithms is the belief propagation algorithm.

This algorithm already appears in Gallager's work [47], and it is also used in the Artificial Intelligence community [90]. The messages passed along the edges in this algorithm are probabilities, or beliefs. More precisely, the message passed from a variable node v to a check node c is the probability that v has a certain value, given the observed value of that variable node v and all the values communicated to v in the prior round from check nodes incident to v other than c. Conversely, the message passed from c to v is the probability that v has a certain value, given all the messages passed to c in the previous round from variable nodes other than v.
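Since these beliefs are probabilities of a bit being 0 or 1, they are frequently manipulated as likelihood or log-likelihood ratios, which the next paragraph formalises. A minimal sketch of that conversion (function names are illustrative):

```python
import math

def likelihood(p0):
    """L(x) = Pr[x = 0] / Pr[x = 1] for a binary variable with Pr[x = 0] = p0."""
    return p0 / (1.0 - p0)

def log_likelihood(p0):
    """ln(L(x)): positive favours x = 0, negative favours x = 1."""
    return math.log(likelihood(p0))
```

For instance, on a binary symmetric channel with crossover probability 0.1, observing a 0 gives Pr[x = 0 | y] = 0.9 and hence a conditional log-likelihood of ln 9 ≈ 2.2.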

It is sometimes computationally advantageous to work with likelihoods, or even log-likelihoods, instead of probabilities. For a binary random variable x, let L(x) = Pr[x=0]/Pr[x=1] be the likelihood of x. Given another random variable y, the conditional likelihood of x, denoted L(x|y), is defined as Pr[x=0|y]/Pr[x=1|y]. Similarly, the log-likelihood of x is ln(L(x)), and the conditional log-likelihood of x given y is ln(L(x|y)). A commonly implemented form of message-passing algorithm is known as the Sum-Product Algorithm, because each message exchanged between check and variable nodes is computed as a summation of many product terms. Generally, the decoding algorithm consists of the following four steps:

1. Initialization: For all variable nodes i, initialize qij (the messages to be sent to the check nodes) based upon the channel response at each variable node.

2. Check to Variable Node Message Passing: Update the rji messages at all the check nodes, to be transmitted to the corresponding variable nodes in the form of ln(L(rji)) = ln(rji(0)/rji(1)), where rji(0) (respectively rji(1)) is the probability of check node j being satisfied given that variable node i is 0 (respectively 1).

3. Variable to Check Node Message Passing: Update the qij messages at all the variable nodes, to be transmitted to the corresponding check nodes in the form of ln(L(qij)) = ln(qij(0)/qij(1)), where qij(0) (respectively qij(1)) is the probability of variable node i having the value 0 (respectively 1), based upon the information from all the associated check nodes except check node j.

4. Decision: Update L(Qi), where Qi is the likelihood of variable node i based upon the incoming information from the channel and from all the associated check nodes.

At the end of each iteration it is checked whether the resultant codeword satisfies H·vT = 0, where v denotes the values of the variable nodes after the iteration. The decision is v = (vi) with vi = 1 if L(Qi) < 0 and vi = 0 otherwise. If v is a valid codeword satisfying H·vT = 0, the algorithm halts; otherwise, steps 2 to 4 are repeated until some maximal number of iterations is reached without a valid decoding.
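The four steps above can be sketched as a small LLR-domain sum-product decoder. The parity-check matrix below is that of a (7, 4) Hamming code, chosen purely as a compact illustration (the thesis itself targets larger LDPC matrices); the check-node update uses the standard tanh form of the sum-product rule:

```python
import math

# Parity-check matrix of a (7, 4) Hamming code: a small illustrative example.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def sum_product_decode(H, channel_llr, max_iters=50):
    """LLR-domain sum-product decoding following steps 1-4 above.

    channel_llr[i] = ln(Pr[vi = 0 | yi] / Pr[vi = 1 | yi]).
    Returns the hard-decision vector v, or None if no valid codeword
    is found within max_iters iterations.
    """
    m, n = len(H), len(H[0])
    nbrs = [[i for i in range(n) if H[j][i]] for j in range(m)]  # N(j)
    # Step 1: initialise variable-to-check messages with the channel LLRs.
    q = {(i, j): channel_llr[i] for j in range(m) for i in nbrs[j]}
    r = {}
    for _ in range(max_iters):
        # Step 2: check-to-variable messages via the tanh rule.
        for j in range(m):
            for i in nbrs[j]:
                prod = 1.0
                for i2 in nbrs[j]:
                    if i2 != i:
                        prod *= math.tanh(q[(i2, j)] / 2.0)
                r[(j, i)] = 2.0 * math.atanh(prod)
        # Step 3: variable-to-check messages, excluding check j itself.
        for j in range(m):
            for i in nbrs[j]:
                q[(i, j)] = channel_llr[i] + sum(
                    r[(j2, i)] for j2 in range(m)
                    if H[j2][i] and j2 != j)
        # Step 4: total beliefs and hard decision (vi = 1 iff L(Qi) < 0).
        Q = [channel_llr[i] + sum(r[(j, i)] for j in range(m) if H[j][i])
             for i in range(n)]
        v = [1 if Qi < 0 else 0 for Qi in Q]
        # Halt when H . v^T = 0 (mod 2).
        if all(sum(v[i] for i in nbrs[j]) % 2 == 0 for j in range(m)):
            return v
    return None
```

As a sanity check: for the all-zero codeword observed with a channel LLR of +2 per bit, but with the first bit's LLR flipped to -2 (one hard error), the decoder recovers the all-zero word within a couple of iterations.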

Another important note about belief propagation is that the algorithm itself is entirely independent of the channel used, though the messages passed during the algorithm are completely dependent on the channel. As regards the relationship between belief propagation and maximum likelihood decoding, belief propagation is in general less powerful than maximum likelihood decoding and hence converges iteratively to a sub-optimal solution that may not be the maximum likelihood solution.


There is a second distinct class of decoding algorithms that is often of interest for very high-speed applications, such as optical networking. This class is known as hard-decision decoding algorithms, as the messages exchanged between the nodes in each iteration consist of the hard values 0 and 1. They generally have even lower complexity than belief-propagation algorithms, albeit at the cost of somewhat worse performance.

A popular class of such decoders known as the Majority-Based Hard Decoders will be discussed in detail later in this chapter.
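To give a flavour of such hard-decision schemes, the following is a minimal bit-flipping sketch (a generic illustration only, not the majority-based decoder detailed later): each iteration computes the syndrome and flips the single bit involved in the largest number of unsatisfied checks.

```python
# Same illustrative (7, 4) Hamming parity-check matrix as before.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def bit_flip_decode(H, received, max_iters=20):
    """Hard-decision bit flipping: the messages are the bits 0/1 themselves."""
    m, n = len(H), len(H[0])
    v = list(received)
    for _ in range(max_iters):
        unsat = [j for j in range(m)
                 if sum(H[j][i] * v[i] for i in range(n)) % 2 == 1]
        if not unsat:
            return v  # syndrome is zero: valid codeword
        # Flip the single bit taking part in the most unsatisfied checks.
        counts = [sum(H[j][i] for j in unsat) for i in range(n)]
        v[counts.index(max(counts))] ^= 1
    return None
```

For example, receiving [0, 0, 0, 1, 0, 0, 0] (one hard error in a bit checked by all three rows) leaves all three checks unsatisfied; that bit has the highest count, is flipped, and the all-zero codeword is recovered in one iteration.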
