
In Chapter 2, we present the random IRA code ensemble in the infinite code block length limit. Section 2.1 introduces the systematic IRA encoder, Section 2.2 presents binary-input symmetric-output channels, and Section 2.3 introduces the message-passing belief propagation decoder related to the systematic IRA code ensemble. In Section 2.4, the density evolution technique is summarized and applied to the IRA code ensemble. The DE recursions are established and a local stability condition of the fixed-point corresponding to vanishing error is derived.

In Chapter 3, we optimize the repetition profiles of random IRA code ensembles for the collection of binary-input symmetric-output channels. The optimization is formalized as a linear program. In Section 3.1, we formalize the approximate DE of the IRA code ensemble in a general framework, by introducing two mappings from the set of symmetric distributions (respectively the set of real numbers) into the set of real numbers (respectively the set of symmetric distributions). Section 3.2 presents the extrinsic mutual information transfer function, which describes the evolution of mutual information on a bipartite graph under message-passing decoding. We also introduce the binary-input symmetric-output capacity functional, whose properties are summarized in Section 3.3. In Section 3.4, we propose four ensemble optimization methods, which stem from our general framework. Section 3.5 shows some properties of the DE approximation methods introduced in the previous section. In particular, we analyze the local stability conditions of the new fixed-points and derive some properties of the optimized rates of methods 2 and 4. In Section 3.6, we compare the code optimization methods by evaluating the iterative decoding thresholds of the optimized IRA codes, using the exact DE evolution, over the BIAWGNC and the BSC.

In Chapter 4, we are concerned with the performance of finite-length regular and irregular Repeat and Accumulate codes when used on the BIAWGNC. Section 4.2 describes how to construct IRA codes whose bipartite graph is free of cycles or stopping sets up to a certain length. Section 4.3 compares the theoretical girth (the size of the smallest cycle) of the bipartite graph of short-length regular RA codes to that obtained by girth-conditioning. Then, Section 4.4 shows how to determine the average random regular RA code performance under ML decoding. Section 4.5 gives simulation results on the performance of finite-length regular and irregular RA codes of rate 1/2.

Chapter 5 states the theoretical limits on the spectral efficiency achievable by the successive decoder on the power-constrained CDMA channel, in the large system limit. Section 5.1 presents the basic synchronous CDMA AWGN model where users are grouped into a finite number of classes such that users in a given class have the same rate and received SNR. Section 5.2 introduces the Gaussian multiple access channel and the successive decoder.

In Section 5.3, we state existing results on the optimum spectral efficiency of the power-constrained CDMA channel in the large system limit. Our choice for the QPSK input constellation is justified on the basis of complexity and asymptotic optimality, as shown in Section 5.4.

In Chapter 6, we tackle the problem of approaching the optimum spectral efficiency of CDMA with QPSK input modulation, capacity-approaching binary-input error-correcting codes, and a low-complexity successive decoding algorithm. In Section 6.1.1, we consider the optimization of the received power profile of the different classes. Conversely, in Section 6.1.2, we optimize the code rate profile assuming the same received SNR for all users. Section 6.2 presents numerical examples of both system design settings when the binary user codes are optimum irregular LDPC codes found in [9]. Simulation results for finite block length and finite number of users validate the large-system infinite block length assumption made in the proposed optimization methods.

In Chapter 7, we summarize the contributions of the thesis and propose some directions for future research.

Part I

Irregular Repeat Accumulate Codes


Chapter 2

Irregular Repeat Accumulate Codes and Decoding

This chapter introduces the systematic irregular repeat accumulate encoder and its related decoder: the belief propagation message-passing algorithm.

In the infinite block length limit, and for binary-input symmetric-output channels, the density evolution technique is used to analyze the decoder, leading to a two-dimensional dynamical system on the space of symmetric distributions. A local stability condition around the fixed-point of the system is derived.

2.1 Encoding of IRA Codes

Fig. 2.1 shows the block diagram of a systematic IRA encoder. A block of information bits $\mathbf{b} = (b_1, \ldots, b_k) \in \mathbb{F}_2^k$ is encoded by an (irregular) repetition code of rate $k/N$. Each bit $b_j$ is repeated $r_j$ times, where $(r_1, \ldots, r_k)$ is a sequence of integers such that $2 \le r_j \le d$ and $\sum_{j=1}^{k} r_j = N$ ($d$ is the maximum repetition factor). The block of repeated symbols is interleaved, and the resulting block $\mathbf{x}_1 = (x_{1,1}, \ldots, x_{1,N}) \in \mathbb{F}_2^N$ is encoded by an accumulator, defined by the recursion

$$x_{2,j+1} = x_{2,j} + \sum_{i=0}^{a-1} x_{1,aj+i}, \qquad j = 0, \ldots, m-1 \qquad (2.1)$$

with initial condition $x_{2,0} = 0$, where $\mathbf{x}_2 = (x_{2,1}, \ldots, x_{2,m}) \in \mathbb{F}_2^m$ is the accumulator output block corresponding to the input $\mathbf{x}_1$, $a \ge 1$ is a given integer (referred to as the grouping factor), and we assume that $m = N/a$ is an integer. Finally, the codeword corresponding to the information block $\mathbf{b}$ is given by $\mathbf{x} = (\mathbf{b}, \mathbf{x}_2)$ and the output block length is $n = k + m$.
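The three encoding steps (repetition, interleaving, accumulation per (2.1)) can be sketched in a few lines of Python. The function name `ira_encode` and its argument conventions are illustrative choices, not notation from the thesis:

```python
import random

def ira_encode(b, r, a, perm=None):
    """Systematic IRA encoding sketch: repeat, interleave, accumulate per (2.1).

    b    : list of k information bits (0/1)
    r    : repetition profile (r_1, ..., r_k) with sum(r) = N
    a    : grouping factor; m = N/a must be an integer
    perm : interleaver permutation of range(N); drawn uniformly if None
    """
    # Outer code: repeat bit b_j exactly r_j times.
    repeated = [bit for bit, rj in zip(b, r) for _ in range(rj)]
    N = len(repeated)
    assert N % a == 0, "m = N/a must be an integer"
    if perm is None:
        perm = random.sample(range(N), N)      # uniform random interleaver
    x1 = [repeated[p] for p in perm]           # interleaved block
    # Inner code: accumulator x2[j+1] = x2[j] + sum_{i=0}^{a-1} x1[a*j+i] (mod 2),
    # with initial condition x2[0] = 0.
    m = N // a
    x2, acc = [], 0
    for j in range(m):
        acc = (acc + sum(x1[a * j : a * j + a])) % 2
        x2.append(acc)
    return b + x2                              # systematic codeword x = (b, x2)
```

With the identity interleaver the accumulator output can be checked by hand; for example, `ira_encode([1], [3], 1, perm=[0, 1, 2])` accumulates the three repeats of the single bit into the parity block `[1, 0, 1]`.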

The transmission channel is memoryless, binary-input and symmetric-output, i.e., its transition probability $p_{Y|X}(y|x)$ satisfies

$$p_{Y|X}(y|0) = p_{Y|X}(-y|1) \qquad (2.2)$$

where $y \mapsto -y$ indicates a reflection of the output alphabet¹. Binary-input symmetric-output channels are presented in the next section.
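As an illustration, the symmetry condition (2.2) can be checked numerically for the BIAWGNC, here assuming the common BPSK mapping $0 \to +1$, $1 \to -1$ (the mapping and the function name `biawgn_pdf` are my own choices for this sketch):

```python
import math

def biawgn_pdf(y, x, sigma):
    """Transition density p(y|x) of the BIAWGNC with BPSK mapping 0 -> +1, 1 -> -1."""
    s = 1.0 if x == 0 else -1.0
    return math.exp(-(y - s) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

# Condition (2.2): p(y|0) = p(-y|1) for every output y.
sigma = 0.8
for y in [-2.0, -0.5, 0.0, 0.3, 1.7]:
    assert abs(biawgn_pdf(y, 0, sigma) - biawgn_pdf(-y, 1, sigma)) < 1e-12
```

The same check applies to any binary-input symmetric-output channel once the output reflection $y \mapsto -y$ is defined.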

Figure 2.1: Systematic IRA encoder. The information block $\mathbf{b}$ passes through the outer repetition code, the interleaver $\Pi$, and the inner accumulator $x_{2,j+1} = x_{2,j} + \sum_{i=0}^{a-1} x_{1,aj+i}$; both $\mathbf{b}$ and $\mathbf{x}_2$ are sent over the channel.

IRA codes are best represented by their Tanner graph [14] (see Fig. 2.2).

In general, the Tanner graph of a linear code is a bipartite graph whose node set is partitioned into two subsets: the bitnodes, corresponding to the coded symbols, and the checknodes, corresponding to the parity-check equations that codewords must satisfy. The graph has an edge between bitnode $\alpha$ and checknode $\beta$ if the symbol corresponding to $\alpha$ participates in the parity-check equation corresponding to $\beta$.

¹If the output alphabet is the real line, then $y \mapsto -y$ coincides with ordinary reflection with respect to the origin. Generalizations to other alphabets are immediate.

Since the IRA encoder is systematic (see Fig. 2.1), it is useful to further classify the bitnodes into two subclasses: the information bitnodes $\{v_j, j = 1, \ldots, k\}$, corresponding to the information bits, and the parity bitnodes $\{p_j, j = 1, \ldots, m\}$, corresponding to the symbols output by the accumulator. Those information bits that are repeated $i$ times are represented by bitnodes of degree $i$, as they participate in $i$ parity-check equations. Each checknode $\{c_j, j = 1, \ldots, m\}$ is connected to $a$ information bitnodes and to two parity bitnodes, and represents one of the equations (2.1) (for a particular $j$). The connections between checknodes and information bitnodes are determined by the interleaver and are highly randomized. On the contrary, the connections between checknodes and parity bitnodes are arranged in a regular zig-zag pattern since, according to (2.1), every pair of consecutive parity bits is involved in one parity-check equation.
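A minimal sketch of how the checknode neighborhoods arise from the interleaver and the zig-zag pattern; the helper `tanner_checknodes` and its indexing conventions are illustrative (note that the first check touches only a single parity bitnode, since $x_{2,0} = 0$ is fixed rather than a code symbol):

```python
def tanner_checknodes(r, perm, a):
    """Neighbor lists of each checknode in the IRA Tanner graph (a sketch).

    r    : repetition profile; repetition-output position t belongs to
           information bitnode owner[t]
    perm : interleaver (interleaved position -> repetition-output position)
    a    : grouping factor
    Returns one (info_neighbors, parity_neighbors) pair per checknode.
    """
    owner = [j for j, rj in enumerate(r) for _ in range(rj)]  # edge -> info bitnode
    N = len(owner)
    assert N % a == 0
    checks = []
    for j in range(N // a):
        # Checknode j sees the a interleaved symbols x1[a*j], ..., x1[a*j + a - 1].
        info = [owner[perm[a * j + i]] for i in range(a)]
        # Zig-zag: check j involves parity bits p_{j-1} and p_j; the first
        # check touches only p_0 because x_{2,0} = 0 is a fixed constant.
        parity = [j] if j == 0 else [j - 1, j]
        checks.append((info, parity))
    return checks
```

Changing `perm` rewires only the information side of the graph; the parity side always keeps the same zig-zag chain.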

A random IRA code ensemble with parameters $(\{\lambda_i\}, a)$ and information block length $k$ is formed by all graphs of the form of Fig. 2.2 with $k$ information bitnodes, grouping factor $a$ and $\lambda_i N$ edges connected to information bitnodes of degree $i$, for $i = 2, \ldots, d$. The sequence of non-negative coefficients $\{\lambda_i\}$ such that $\sum_{i=2}^{d} \lambda_i = 1$ is referred to as the degree distribution of the ensemble. The probability distribution over the code ensemble is induced by the uniform probability over all interleavers (permutations) of $N$ elements.

The information bitnode average degree is given by $\bar{d} = 1/\big(\sum_{i=2}^{d} \lambda_i / i\big)$. The number of edges connecting information bitnodes to checknodes is $N = k/\big(\sum_{i=2}^{d} \lambda_i / i\big) = k\bar{d}$. The number of parity bitnodes is $m = k/\big(a \sum_{i=2}^{d} \lambda_i / i\big) = k\bar{d}/a$. Finally, the code rate is given by

$$R = \frac{k}{n} = \frac{k}{k+m} = \frac{a}{a + \bar{d}}.$$

Since every information bit is repeated at least twice, $\bar{d} \ge 2$, so the highest rate with parameter $a$ set to 1 is 1/3. This motivates the use of $a \ge 2$ in order to get higher rates.
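The ensemble parameters above can be computed exactly with rational arithmetic. This sketch (the function name `ensemble_params` and the dictionary encoding of the profile are assumptions) reproduces the rate-1/3 limit for $a = 1$:

```python
from fractions import Fraction

def ensemble_params(lam, a, k):
    """Average degree, edge count, parity count and rate of a ({lambda_i}, a)
    IRA ensemble; lam maps degree i -> lambda_i (edge-perspective fractions)."""
    S = sum(Fraction(l) / i for i, l in lam.items())  # sum_i lambda_i / i
    dbar = 1 / S                  # average information bitnode degree
    N = k / S                     # edges from information bitnodes to checknodes
    m = k / (a * S)               # parity bitnodes
    R = Fraction(k) / (k + m)     # rate k/n = a / (a + dbar)
    return dbar, N, m, R

# All-degree-2 profile (every bit repeated twice) with a = 1: dbar = 2, R = 1/3.
dbar, N, m, R = ensemble_params({2: 1}, a=1, k=100)
assert dbar == 2 and R == Fraction(1, 3)
```

Raising the grouping factor, e.g. `ensemble_params({2: 1}, a=2, k=100)`, keeps $\bar{d} = 2$ but halves $m$, giving $R = a/(a + \bar{d}) = 1/2$.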

Figure 2.2: Tanner graph of an IRA code. Information bitnodes of degrees $r_1, \ldots, r_k$ connect through the interleaver to checknodes, each joined to $a$ information bitnodes; the parity bitnodes form the regular zig-zag chain.