
A.3 The LLR sum-product (belief propagation) decoder

In this version, the messages exchanged are log-likelihood ratios (LLR: Logarithmic Likelihood Ratio). The following quantities are used:

F_j : the channel (intrinsic) LLR of the received bit v_j, computed at initialisation;
z_ij : the LLR message sent by the variable node v_j to the check node c_i;
L_ij : the LLR message sent by the check node c_i to the variable node v_j;
z_j : the a posteriori LLR of each variable node v_j at each iteration.

The different steps then proceed as follows:

1. Initialisation: the channel statistics give p_j^0, the a priori probability that the received bit v_j is a 0, and p_j^1, the probability that it is a 1, given that the transmitted bit is u_j:

$$p_j^0 = P(v_j = 0 \mid u_j), \qquad p_j^1 = P(v_j = 1 \mid u_j)$$

Since the variables v_j and u_j are assumed to be random and equiprobable, Bayes' rule applies and, for x = 0 or 1,

$$P(v_j = x \mid u_j) = \frac{P(u_j \mid v_j = x)\,P(v_j = x)}{P(u_j)} = \frac{P(u_j \mid v_j = x)}{P(u_j \mid v_j = 0) + P(u_j \mid v_j = 1)}$$

We thus initialise

$$F_j = \ln\!\left(\frac{p_j^1}{p_j^0}\right)$$

Then, for each variable node v_j and each check node c_i, we initialise z_ij = F_j.

2. Horizontal step: computation of the messages at the check nodes and transmission to the variable nodes. The LLR sent by each check node c_i to a variable node v_j follows the tanh rule of the sum-product algorithm (written here with the signs matching the convention F_j = ln(p_j^1/p_j^0) used above):

$$L_{ij} = -2\tanh^{-1}\!\left(\prod_{j' \in N(i)\setminus j} \left(-\tanh\left(\frac{z_{ij'}}{2}\right)\right)\right)$$

where N(i) denotes the set of variable nodes connected to the check node c_i.


3. Vertical step: computation of the messages at the variable nodes and transmission to the check nodes. The LLR sent by each variable node v_j is then expressed as

$$z_{ij} = F_j + \sum_{i' \in M(j)\setminus i} L_{i'j}$$

At each variable node v_j we also compute, for every j, the a posteriori LLR

$$z_j = F_j + \sum_{i \in M(j)} L_{ij}$$

where M(j) denotes the set of check nodes connected to the variable node v_j.

4. Decision and test: from the values z_j, the decided word $\hat{v} = [\hat{v}_j]$ is determined with the decision rule $\hat{v}_j = 1$ if $z_j > 0$ and $\hat{v}_j = 0$ if $z_j < 0$.

We then test whether $\hat{v}\,H^T = 0$. If so, decoding stops; otherwise we return to step 2.
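As an illustration of steps 1 to 4, the following Matlab sketch runs the sum-product iterations on a generic parity-check matrix H, assuming the channel LLRs F_j have already been computed. The function name, interface and stopping criterion are illustrative and do not correspond to a routine of this project.

function v_hat = bp_llr_decode(H, F, max_iter)
% Illustrative sum-product (belief propagation) decoder with LLR messages.
% H        : M-by-Nv parity-check matrix (entries 0/1)
% F        : 1-by-Nv vector of channel LLRs, F_j = ln(p_j^1 / p_j^0)
% max_iter : maximum number of iterations
[M, Nv] = size(H);
Z = H .* repmat(F, M, 1);              % initialisation: z_ij = F_j
L = zeros(M, Nv);                      % check-to-variable messages L_ij
for it = 1:max_iter
    % Horizontal step (tanh rule, signs written for LLR = ln(p1/p0))
    for i = 1:M
        idx = find(H(i, :));
        for a = 1:numel(idx)
            others = idx(idx ~= idx(a));
            L(i, idx(a)) = -2 * atanh(prod(-tanh(Z(i, others) / 2)));
        end
    end
    % Vertical step: a posteriori LLRs and variable-to-check messages
    z = F + sum(L, 1);                 % z_j = F_j + sum over M(j) of L_ij
    for j = 1:Nv
        for i = find(H(:, j))'
            Z(i, j) = z(j) - L(i, j);  % exclude the message coming from c_i
        end
    end
    % Decision and parity test
    v_hat = double(z > 0);             % v_j = 1 if z_j > 0, else 0
    if all(mod(H * v_hat', 2) == 0)
        return;                        % all checks satisfied: stop
    end
end
end

With F_j = ln(p_j^1/p_j^0) as above, the sign handling in the horizontal step keeps the update consistent with the decision rule z_j > 0 implying v_j = 1.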

APPENDIX B

New distribution as a function of redundancy

To validate our implementation, we ran a simulation with the same parameters as in [7]. The results obtained by our simulation and by that document are shown below.

Figure B.1 – Bit error rate versus SNR for different values of N - source [7]


Figure B.2 – Bit error rate versus SNR for different values of N

We can therefore agree that the implementation is correct and that the various parameters have been properly defined.

APPENDIX C

Matlab codes - Transmission and Reception

C.1 Transmission

N = 1024;        % length of each frame
r = 1/2;         % code rate
% ...
%%% Splitting and encoding of each frame %%%
message = messaget((itframes-1)*N + 1 : itframes*N);
messagetr = message(1:K);
distribution = makeDistribution(id, c, delta, K);
[encoded, G] = makeLT(messagetr, distribution, K, delta, N);
encodedf = [encodedf encoded];
Gf = [Gf G];
% ...
%%%%%%%%%%%% Sending the encoded message back to OptiSystem %%

bitrate = InputPort1.BitRate;
OutputPort1 = InputPort1;
OutputPort1.Sequence = encodedf;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

C.2 Reception

%%% Main : Simulation of the LT code - Reception side
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%% Recovering the message %%%%%%%%%%%%%%%%%%%%%%

output = abs(InputPort1.Sampled.Signal + InputPort1.Noise.Signal);
decodedf = [];
% ...
%%%%%%%%%%% Decoding principle %%%%%%%%%%%%%%%%%%%%%%%%%%

for gen = 1:nframes
    %%% Splitting and decoding of each frame %%%
    outputi = output1((gen-1)*N + 1 : gen*N);
    outputi = outputi ./ max(abs(outputi));
    moy = mean(outputi);
    sigma = sqrt(mean(abs(outputi - moy).^2) / 2);
    A = max(outputi);
    Ao = 0;
    G = Gf(:, (gen-1)*N + 1 : gen*N);
    decoded = LT_BP_Decoding_llr_chi2(outputi, G, K, sigma, A, Ao, n, encoded);
    decodedf = [decodedf decoded adds];
    messagetr = messaget((gen-1)*N + 1 : gen*N);
    teb = length(find(messagetr - decoded)) / N;
    load('TEB');
    tebf = [tebf teb];
    save('TEB', 'tebf', '-append');   %% Saving the BER values for each iteration
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%% Sending the decoded bit stream back to OptiSystem %%%%%%%%%%%%%

OutputPort1 = struct('TypeSignal', 'Binary', 'Sequence', decodedf, 'BitRate', bitrate);


Project schedule

The various stages of the project, with their durations, are presented here.


ENGLISH VERSION

Implementation of Luby Transform (LT) code: case of an optical channel link


Telecommunications technologies are improving rapidly, driven by the many innovative services that are frequently deployed: cloud computing, streaming, online gaming, peer-to-peer, telemedicine. All these services increase the data flow in the access network. To address this bandwidth issue, optical fiber is used, since it provides high bandwidth and considerable immunity to interference. This is why optical fiber is preferred for most access networks.

Many researchers have also worked, and are still working, on aspects of the optical channel link such as signal processing: new modulation formats, new coding schemes, and so on.

Besides ensuring sufficient bandwidth, the reliability of the link must also be increased. For that, forward error correction (FEC) codes are usually used. The principle of FEC is to insert redundancy into the useful information so that possible errors can be detected or corrected by an Error-Correcting Code (ECC). ECCs are divided into families; one of the most recent is the family of Fountain codes. The first practical implementation of this family is the Luby Transform code proposed in [5]. Our work consisted in integrating that code into an optical communication link. To achieve this goal, the document follows this outline:

— The first part presents ECCs in general and Fountain codes in particular;

— The second part describes a generic optical transmission chain;

— The third part covers the code implementation and the results obtained, followed by some analysis.

By proceeding in this way, our objectives are to:

— give an overview of error-correcting codes and fountain codes;

— study a simple optical link;

— study and implement the LT code;

— adapt the implementation to the optical channel.


1. Error correcting codes

1.1. Overview

An error correcting code is defined as a process that introduces redundant data into the source information to improve the reliability of a transmission; it is particularly effective for one-to-many reliable transmission. The principle of Forward Error Correction is that additional data symbols are added to the original message symbols to generate symbols called "codewords", in such a way that it becomes possible to recover the original message at the data sinks even if some received symbols are corrupted during the transmission.

1.2. Codes definitions and properties

— Universal code: a code is called universal for a certain kind of channel if it can be used to transmit over it regardless of the parameters that define the channel. For example, a code is universal for the BEC if its performance does not depend on the erasure probability of the channel.

— Rateless : we say that a code is rateless when its rate is not fixed a priori.

— Maximum Distance Separable: a code is MDS if its parameters satisfy the relation $d - 1 = N - K$, where K is the number of source symbols, N the number of encoded symbols and d the minimum distance of the code.

— Capacity-achieving: capacity-achieving codes are those that can transmit near the Shannon limit.

— Maximum Likelihood decoding: once a vector is received, the decoder chooses as the transmitted vector the one that maximizes the probability of that vector being sent given the received vector. It is slow and it can make mistakes; however, it is the best possible decoder.

There is also the representation of the code, which can be polynomial or matrix-based, and the quantity that determines the overhead of the code, the rate r, given by

$$r = \frac{K}{N}, \qquad \text{overhead} = \frac{N-K}{N} = 1 - r \qquad (D.1)$$

1.3. Brief history of FEC

In this part, we present a summary of the evolution of FEC, to show where Fountain codes stand in the hierarchy. This history is presented below:


1 : Foundations of information theory (Shannon), 1948

2 : Golay codes, correcting 3 errors, 1949

3 : Hamming codes, perfect codes correcting 1 error, 1950

4 : Reed-Muller codes, proposed by Muller and Reed, 1954

5 : Convolutional codes, invented by Elias, 1955

6 : Cyclic codes, by Prange, 1957

7 : BCH codes, by Hocquenghem, Bose and Chaudhuri, 1959

8 : Reed-Solomon codes, 1960

9 : First LDPC codes, discovered by Gallager, 1962

10 : Concatenated codes, introduced by Forney, 1966

11 : Viterbi algorithm, 1967

12 : Berlekamp-Massey algorithm, 1969

13 : Coded modulation, by Ungerboeck, 1982

14 : Turbo codes, by Berrou, Glavieux and Thitimajshima, 1993

15 : New LDPC codes, by MacKay and Neal, 1996

16 : LT codes, discovered by Luby, 2002

17 : Raptor codes, by Shokrollahi, 2006

18 : Polar codes, by Arikan, 2009

19 : Spinal codes, by Perry and Balakrishnan, 2011

20 : Systematic LT codes, 2012

21 : Novel decoding algorithm, 2012

22 : Implementation of modified LT, 2013

Through this timeline, we can see that the family of fountain codes, to which LT codes belong, is recent and has been the subject of much research in the last few years. Also, some turbo codes have been integrated into optical communications; we present some of them in the following part.

1.4. ECC in optical communication

The first error-correcting codes developed in the field of optical communication belonged to the family of block codes; BCH and Reed-Solomon codes were the first ones. The performance of an ECC in optical communication is measured by the Net Coding Gain (NCG), i.e. the difference, in dB, between the signal-to-noise ratios required to obtain the same performance with and without coding. The evolution of the NCG is presented in figure 2.


Figure 2 – Evolution of Net Coding Gains [3]

1.5. LDPC codes

The LDPC code was first proposed by Gallager in the early 1960s, but did not receive proper attention until years later. An LDPC code is built from a sparse bipartite graph. The graph contains two sets of nodes, variable nodes and check nodes, and is constructed in such a way that, for each check node, the sum of the values of its incident variable nodes is equal to zero. Classical LDPC codes do not have an efficient encoding algorithm: they use Gaussian elimination (complexity O(k³)) to generate the bipartite graph and matrix-vector multiplication (complexity O(k²)) to obtain the codewords, although various methods exist to obtain an encoding algorithm that runs in linear time. The most efficient decoding algorithm for LDPC codes is the Belief Propagation (BP) algorithm: at every iteration, BP updates the probability that a variable node is zero based on the information obtained in the previous round, and the time complexity of the decoding is proportional to the number of edges in the bipartite graph. LDPC codes require only a small reception overhead at the receiver to reconstruct the message symbols; this small overhead ensures that the encoding/decoding process remains efficient when the input length grows to the order of tens of thousands. It has been proved that the performance of LDPC codes obtained from an appropriately chosen, highly irregular bipartite graph, rather than a regular one, is very close to the Shannon bounds. LDPC codes are also widely used in many applications, e.g. in DVB technology.
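As a toy illustration of the parity constraints carried by the bipartite graph (each check node forces the modulo-2 sum of its incident variable nodes to zero), the following Matlab fragment builds a small random sparse matrix H and tests a candidate word against it; the construction is purely illustrative and is not a proper LDPC design.

M = 4;  N = 8;  dc = 3;                  % check nodes, variable nodes, ones per row
H = zeros(M, N);
for i = 1:M
    H(i, randperm(N, dc)) = 1;           % connect each check node to dc variable nodes
end
c = randi([0 1], 1, N);                  % candidate binary word
syndrome = mod(H * c', 2);               % one parity value per check node
isCodeword = all(syndrome == 0);         % true iff every check is satisfied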


2. Luby Transform Code

LT codes were introduced for the first time in 2002 by Michael Luby in [5]. They are the first practical realization of the digital fountain approach, also called universal erasure codes. The resulting code is a subclass of irregular Low-Density Parity-Check codes. The main advantages of LT codes are:

1. Rateless : The number of encoding symbols that can be generated from the data is potentially limitless.

2. Universal: near optimal for every erasure channel, independently of its erasure probability, because the decoder can recover the original data from any set of a fixed number of encoded packets and the encoder can always generate more encoded symbols.

3. Low complexity: for both the encoding and decoding processes, which makes them very suitable for hardware implementations and time-constrained applications.

2.1. Encoding

Suppose the rate of the code is chosen (it can be pseudo-infinite). Then N, the number of encoded symbols generated from the K initial symbols, is determined. Encoding and decoding are more efficient for large values of K, because the relative overhead then becomes smaller. The process for generating each encoding symbol is as follows:

1. Choose the degree d of the encoding symbol randomly from a degree distribution.

2. Choose, uniformly at random, d different input symbols as neighbours of the encoding symbol.

3. The encoding symbol is the exclusive-or of the d chosen neighbours.

The encoding process defines a bipartite graph, like the one in the following figure, that connects encoding symbols with input symbols. It is a sparse graph, because the mean degree d of the output symbols is smaller than the message length K.


Figure 3 – Encoded symbols generation

Algorithm 3 – LT encoding

The encoding process is described by the following algorithm:

while continue do
    choose d according to the degree distribution
    choose d fragments uniformly at random among the K fragments, so that the chosen set is F = {f1, f2, ..., fd}
    send pj = f1 ⊗ f2 ⊗ ... ⊗ fd
    increment Ntx
end while
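A compact Matlab sketch of this encoding loop is given below; the function name and interface are illustrative assumptions and do not reproduce the makeLT routine listed in Appendix C.

function [encoded, G] = lt_encode(message, mu, Ntx)
% Illustrative LT encoder. message : 1-by-K bit vector, mu : degree
% distribution (mu(d) = probability of degree d), Ntx : symbols to generate.
K = length(message);
G = zeros(Ntx, K);                             % bipartite graph of the code
encoded = zeros(1, Ntx);
cdf = cumsum(mu);
for t = 1:Ntx
    d = find(rand * cdf(end) <= cdf, 1, 'first'); % 1. draw the degree d from mu
    nb = randperm(K, d);                          % 2. choose d distinct neighbours
    G(t, nb) = 1;
    encoded(t) = mod(sum(message(nb)), 2);        % 3. XOR of the chosen fragments
end
end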

The LT decoding process can be seen as a generalization of the classical process known as balls and bins, where n balls are randomly thrown into n bins. Encoding symbols are analogous to balls and input symbols to bins. The process succeeds if, at the end, all input symbols are covered.

One of the most important characteristics of LT codes is the degree distribution. The Robust Soliton distribution is presented below.

Robust Soliton Distribution

The goal of the degree distribution is to avoid redundant coverage of the input symbols by the encoding symbols, while at the same time ensuring that the process does not fail before all the input symbols are released, i.e. that the decoder never runs out of encoding symbols with exactly one input-symbol neighbour in the first step of the decoding process. In [6] the Robust Soliton distribution is proposed as a degree distribution. The Robust Soliton distribution μ(.) is defined as follows. Let

$$R = c\,\ln(k/\delta)\,\sqrt{k}$$

for some suitable constant c > 0, and let τ(d) be


$$\tau(d) = \begin{cases} \dfrac{R}{dk} & \text{for } d = 1, \dots, k/R - 1 \\[4pt] \dfrac{R\,\ln(R/\delta)}{k} & \text{for } d = k/R \\[4pt] 0 & \text{for } d > k/R \end{cases}$$

Add the Ideal Soliton distribution φ(.) to τ(.) and normalise, dividing by β, to obtain μ(.):

$$\beta = \sum_{d=1}^{k} \big(\varphi(d) + \tau(d)\big), \qquad \mu(d) = \frac{\varphi(d) + \tau(d)}{\beta}$$

where the Ideal Soliton distribution is defined as

$$\varphi(1) = \frac{1}{k}, \qquad \varphi(d) = \frac{1}{d(d-1)} \quad \text{for } d = 2, \dots, k.$$

This distribution implies some characteristics, such as:

1. The number of encoding symbols is $K = k + O(\sqrt{k}\,\ln^2(k/\delta))$.

2. The average degree of an encoding symbol is $D = O(\ln(k/\delta))$.

3. The decoder fails to recover the data with probability at most δ from a set of K encoding symbols.
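These definitions translate directly into a few lines of Matlab; the sketch below (with illustrative values of c and δ) is not the makeDistribution routine of Appendix C.

function mu = robust_soliton(k, c, delta)
% Robust Soliton distribution mu(d), d = 1..k, following the definitions above.
R = c * log(k / delta) * sqrt(k);
d = 1:k;
phi = [1/k, 1 ./ (d(2:end) .* (d(2:end) - 1))];   % Ideal Soliton phi(d)
tau = zeros(1, k);
kR = min(floor(k / R), k);                        % position of the spike, d = k/R
tau(1:kR-1) = R ./ ((1:kR-1) * k);                % R/(d*k) for d < k/R
tau(kR) = R * log(R / delta) / k;                 % spike at d = k/R
beta = sum(phi + tau);
mu = (phi + tau) / beta;                          % normalised distribution

For instance, mu = robust_soliton(1000, 0.03, 0.5); returns a distribution whose mass is concentrated on the low degrees, with the characteristic spike at d = k/R.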

We compared this distribution with the novel one proposed and described in [7].

The decoding is an iterative process:

1. Find encoding symbols with exactly one input-symbol neighbour;

2. Recover the input symbols associated with those encoding symbols;

3. Remove the recovered input symbols from the remaining encoding symbols in which they appear as neighbours, through an exclusive-or;

4. Repeat steps 1 to 3 until all the input symbols are recovered or no more encoding symbols can be found in step 1.

Algorithm 4 – LT decoding

while decoding is not finished and β still contains at least one packet of degree one do
    fj ← a packet of degree one taken from memory
    for all p ∈ β do
        if p contains fj then p ← p ⊗ fj end if
    end for
end while
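For the erasure case, Algorithm 4 can be sketched in Matlab as follows, using the same 0/1 matrix G as in the encoder sketch above; this is an illustration of the peeling process only, not the LT_BP_Decoding routine of Appendix C.

function message = lt_peel_decode(encoded, G)
% Illustrative peeling decoder (BEC case). encoded : 1-by-Ntx received packets,
% G : Ntx-by-K bipartite graph of the surviving packets.
[~, K] = size(G);
message = NaN(1, K);                         % NaN = input symbol not yet recovered
p = encoded;                                 % working copy of the packets
while any(isnan(message))
    r = find(sum(G, 2) == 1, 1, 'first');    % a packet of degree one
    if isempty(r), break; end                % decoding stalls: no degree-one packet
    j = find(G(r, :), 1);                    % its single neighbour f_j
    message(j) = p(r);                       % recover the input symbol
    hit = find(G(:, j));                     % every packet that contains f_j
    p(hit) = mod(p(hit) + message(j), 2);    % remove f_j by XOR
    G(hit, j) = 0;                           % update the graph
end
end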

The total number of degrees equals the number of operations needed by the decoding process. The decoder needs to know the degree and the set of neighbours of each encoding symbol. This information can be communicated in several ways; for example, both encoder and decoder can use a pseudo-random generator with the same seed. The degree distribution of d is the crucial part of the code design, because it determines the random behaviour of the LT process.
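As an example of the shared-seed approach, the receiver can rebuild the same degrees and neighbour sets by re-running the same sampling code with the same seed. The fragment below reuses the hypothetical lt_encode and robust_soliton sketches above; the seed value and sizes are arbitrary.

msg = randi([0 1], 1, 100);            % example message of K = 100 bits
mu = robust_soliton(100, 0.03, 0.5);   % degree distribution from the sketch above
Ntx = 150;                             % number of encoded symbols to send
seed = 12345;                          % agreed beforehand by both ends
rng(seed);  [enc, G_tx] = lt_encode(msg, mu, Ntx);           % encoder side
rng(seed);  [~, G_rx] = lt_encode(zeros(1, 100), mu, Ntx);   % receiver rebuilds the graph
isequal(G_tx, G_rx)                    % true: same degrees and neighbours on both sides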

We performed some comparisons between the two distributions. We compared the efficiency, i.e. the number of encoded symbols needed to recover the source information. We also compared the bit error rate while varying two parameters: the erasure probability for a BEC and the SNR for the AWGN case.

In summary, the novel distribution presents better performance than the Robust Soliton distribution. We therefore use the novel distribution for our implementation.

3. Optical Channel

In this part we present a simple optical transmission scheme. We focus on the different components of the link, in order to understand the side effects they introduce into the chain and to be able to adopt models to implement and simulate them.

3.1 Scheme

Sender: in this scheme, the sender is the combination of an equiprobable source of bits '0' and '1' and a modulator. The binary stream produced is oversampled to generate an electrical signal. This signal is converted into an optical signal by a LASER (Light Amplification by Stimulated Emission of Radiation) or a LED (Light Emitting Diode). The laser is preferred because it offers the best optical coupling efficiency. Each laser is characterized by a threshold level. In the region where the input current of the laser is greater than this threshold current, we have stimulated emission and the optical power is proportional to the current intensity. Otherwise, when the current is smaller than the threshold, there is only a weak emission called spontaneous emission.


Figure 4 – Optical power - current

This spontaneous emission generates a Relative Intensity Noise (RIN) and a phase noise. The intensity noise causes fluctuations of the optical power, while the phase noise affects the spectral line, since the laser is not perfectly monochromatic.

Optical fiber: over the fiber, several effects degrade the quality of the signal: attenuation, dispersion and non-linear effects. Attenuation corresponds to the reduction of the average power of the signal. The non-linear effects generate a self-modulation of the phase by the power. The fiber is therefore a complex propagation medium, and there is no perfect model describing all of its characteristics; with some simplifications, however, models exist that approximate its principal parameters.

Receiver: the receiver, or reception circuit, consists of a direct-detection module followed by the decision stage. Detection is performed by a photodetector such as a PIN or an APD. Every photodiode is characterized by its sensitivity, in A/W, i.e. the ratio of the output current to the incident optical power. The expression of the output signal as a function of the received optical signal therefore involves a squaring (square-law) operation.

Different channel models were studied. The first is the BEC: over this kind of channel, a transmitted symbol is either received intact, with probability 1 − δ, or erased, with probability δ, the erasure probability. The performance over this channel was studied; its limitation is that it is not realistic in a physical context. The next channel studied is the AWGN channel, the classical model for a perturbed channel: a signal transmitted over an AWGN channel is corrupted by an additive noise. The performance of the implementation was also tested over it, and the results are presented below. The last channel is the χ² model, which is the closest to an optical channel, since it accounts for the quadratic effect introduced by the photodetector at the reception. We modelled a χ² channel and performed the same tests on it. The results are analysed in terms of BER and quality factor.
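As an illustration of these two noisy models, the following Matlab fragment sends a frame through an AWGN channel and through a crude square-law (χ²-like) detector; the signalling levels, SNR and decision thresholds are illustrative assumptions, not the simulation parameters used in this work.

Nb = 1e4;
bits = randi([0 1], 1, Nb);
snr_db = 6;  sigma = sqrt(10^(-snr_db/10));     % noise std for unit signal power
% AWGN channel with bipolar mapping 0 -> -1, 1 -> +1
x = 2*bits - 1;
y_awgn = x + sigma * randn(1, Nb);
ber_awgn = mean((y_awgn > 0) ~= bits);
% Square-law detection of an on-off keyed field (chi-square-like statistics)
a = bits;                                       % field amplitude 0 or 1
y_sq = abs(a + sigma * randn(1, Nb)).^2;        % the photodetector squares its input
ber_sq = mean((y_sq > 0.25) ~= bits);           % fixed mid-amplitude threshold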

4. Implementation of Luby Transform code

The additional processing steps introduced by an ECC into a transmission are encoding and decoding: encoding is performed at the source, and decoding just before the sink.

4.1 Encoding

At this level, we chose the novel distribution to assign a degree to each encoded symbol. After that, the process continues in the same way as in a classical LT code. The degree distribution model nevertheless determines some parameters of the LT code, such as the average number of encoding symbols of degree one (R). For a Robust Soliton distribution, R depends on c and δ, so changing the values of c and δ affects R.

4.2 Decoding

We present three decoding schemes, according to the type of channel: a hard-decision decoding for the BEC, a BP decoding based on the AWGN channel model, and a χ² decoding adapted to the χ² model. The last two schemes rely on the belief propagation algorithm, in which an essential point for good decoding is the initialisation step, in particular the initialisation of the probability densities.

Table 1 – Probability densities used for the initialisation

Channel    Probability density
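For instance, with the bipolar mapping over an AWGN channel of noise variance σ², the initial LLR of each received sample reduces to the classical expression F_j = 2 y_j / σ². The fragment below, which reuses y_awgn and sigma from the sketch of part 3, is illustrative and is not the exact initialisation used in LT_BP_Decoding_llr_chi2.

% AWGN initialisation: p(y | bit = 1) and p(y | bit = 0) are Gaussians centred
% at +1 and -1 with variance sigma^2, hence
% F_j = ln( p(y|1) / p(y|0) ) = 2*y/sigma^2.
F = 2 * y_awgn / sigma^2;        % channel LLRs handed to the BP decoder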

To run and analyse our LT code implementation, we set up a simulation environment. This environment is based on co-simulation between two software packages: Matlab and OptiSystem.
