Advanced Coding Techniques and Applications to CDMA

Claude Shannon proved [1] the existence of codes that allow reliable transmission, provided that the information rate in bits per channel use is less than the channel capacity. A randomly generated code with a large block size has a high probability of being a good code, i.e., of closely approaching the Shannon limit. However, its decoding complexity is exponential in the block size, and is thus prohibitive in practice. Hence, the central challenge of coding theory is to design codes that perform as close as possible to the Shannon limit while remaining decodable with reasonable complexity. An important step in this direction was the introduction of concatenated codes by David Forney [2], which consist of a powerful outer block code and an inner convolutional code. The inner decoder makes use of the Viterbi algorithm, and is followed by an outer decoder based on hard decisions and algebraic decoding.

In 1993, Berrou, Glavieux and Thitimajshima introduced a novel coding structure, named turbo codes [3], which consists of the parallel concatenation of two convolutional codes through an interleaver. This structure admits an iterative decoding scheme, based on the recursive estimation of a posteriori probabilities (APP) using the BCJR algorithm [4] and on the exchange of extrinsic information between the constituent decoders. The performance of turbo codes approaches the Shannon capacity of the additive white Gaussian noise channel to within a fraction of a dB.

The introduction of turbo codes constitutes a major breakthrough in coding theory, as it gave rise to a large amount of work on random-like codes, leading to the introduction of “turbo-like” code families. In particular, we note the rediscovery of low-density parity-check (LDPC) codes, originally proposed in [5], the introduction of irregular LDPC codes [6, 7] and the introduction of Repeat-Accumulate (RA) codes [8]. In many relevant settings, the iterative decoding of these codes achieves performance very close to the Shannon limit. In [6, 7], irregular LDPC codes were shown to asymptotically achieve the capacity of the binary erasure channel (BEC) under iterative message-passing decoding. Although the BEC is the only channel for which such a result currently exists, irregular LDPC codes have been designed for other binary-input channels, e.g., the binary symmetric channel (BSC), the binary-input additive white Gaussian noise channel (BIAWGNC) [9], and the binary-input inter-symbol interference (ISI) channel [10, 11, 12, 13], and have been shown to achieve very good performance.

The Tanner (bipartite) graph [14] is a powerful formalism that provides a description of these turbo-like codes and of their iterative decoding. The codes are decoded iteratively by performing local computations at the nodes and passing the resulting information along the edges of the graph. The complexity of the decoder depends on the complexity of the local node computations, on the complexity of the information passed along the edges, and on the number of decoder iterations. We will mainly be interested in one type of iterative decoder, namely message-passing decoders, for which the messages represent estimates of the transmitted bits. Moreover, we concentrate on the belief propagation (BP) decoder, which assumes local independence of the incoming messages and applies the rules of probability at the computation nodes.
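As an illustration of message passing on a Tanner graph, the following sketch implements sum-product (BP) decoding in the log-likelihood ratio domain for a toy parity-check matrix over the BIAWGN channel. The matrix, noise level and stopping rule are illustrative assumptions, not a construction from this work.

```python
# Minimal sketch of belief-propagation (sum-product) decoding on a Tanner graph,
# using LLR messages.  The parity-check matrix H and channel parameters below
# are illustrative assumptions only.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],      # rows = check nodes, columns = variable nodes
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])

def bp_decode(llr_channel, H, max_iter=50):
    msg_v2c = np.where(H == 1, llr_channel, 0.0)         # variable-to-check messages (init = channel LLRs)
    x_hat = (llr_channel < 0).astype(int)
    for _ in range(max_iter):
        # Check-node update ("tanh rule"), leave-one-out via a product/division trick.
        t = np.tanh(np.clip(msg_v2c, -20, 20) / 2)
        t = np.where(H == 1, t, 1.0)
        t = np.where(np.abs(t) < 1e-12, 1e-12, t)        # guard against exact zeros
        prod = np.prod(t, axis=1, keepdims=True)
        extrinsic = np.clip(prod / t, -0.999999, 0.999999)
        msg_c2v = np.where(H == 1, 2 * np.arctanh(extrinsic), 0.0)
        # Variable-node update: channel LLR plus all incoming messages except the edge itself.
        total = llr_channel + msg_c2v.sum(axis=0)
        msg_v2c = np.where(H == 1, total - msg_c2v, 0.0)
        # Tentative hard decision and syndrome check (stopping rule).
        x_hat = (total < 0).astype(int)
        if not np.any((H @ x_hat) % 2):
            break
    return x_hat

# Usage: all-zero codeword sent as +1 (BPSK) over a BIAWGN channel with noise std sigma.
sigma = 0.8
y = 1.0 + sigma * np.random.randn(H.shape[1])
llr = 2 * y / sigma**2                                   # channel LLRs for BPSK over AWGN
print(bp_decode(llr, H))
```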

The introduction of irregular LDPC codes motivated other turbo-like coding schemes such as irregular repeat-accumulate (IRA) codes, for which the achievability of the BEC capacity has been shown [15], and irregular turbo codes [16, 17]. IRA codes are in fact a special subclass of both irregular LDPC codes and irregular turbo codes. IRA codes are an appealing choice because they have low encoding/decoding complexity, and their performance is quite competitive with that of turbo codes and LDPC codes.
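The low encoding complexity of IRA codes comes from their repeat-interleave-accumulate structure. The sketch below illustrates systematic IRA encoding on a toy block; the repetition profile, grouping factor and interleaver are illustrative assumptions, not one of the optimized ensembles discussed later in the thesis.

```python
# Minimal sketch of systematic IRA encoding: repeat, interleave, combine, accumulate.
import numpy as np

rng = np.random.default_rng(0)

def ira_encode(info_bits, repetitions, a, interleaver):
    """info_bits: binary array; repetitions[i]: repetition degree of bit i;
       a: grouping (combiner) factor; interleaver: permutation of the repeated stream."""
    repeated = np.repeat(info_bits, repetitions)          # irregular repetition
    permuted = repeated[interleaver]                      # interleaver
    groups = permuted.reshape(-1, a).sum(axis=1) % 2      # combine a bits per parity input
    parity = np.cumsum(groups) % 2                        # accumulator (two-state recursion)
    return np.concatenate([info_bits, parity])            # systematic codeword

# Usage on a toy block: 6 information bits, degrees chosen so their sum is divisible by a.
k = 6
info = rng.integers(0, 2, size=k)
reps = np.array([2, 3, 3, 2, 3, 3])                       # sum = 16, divisible by a = 4
a = 4
pi = rng.permutation(reps.sum())
codeword = ira_encode(info, reps, a, pi)
print(info, codeword[k:])                                 # parity part has length sum(reps) // a = 4
```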

The two-state recursive finite-state machine (an accumulator) is the simplest one that gives full freedom to choose any rational number between 0 and 1 as the coding rate. In this work, we restrict our study to IRA codes that use a two-state convolutional code, obeying the same simple recursion as in [15], although better codes might be obtained by including the finite-state machine as a degree of freedom in the overall ensemble optimization. First attempts to optimize irregular LDPC codes ([18] for the BEC and [19] for other channels) were based on the density evolution (DE) technique, which computes the expected performance of a random-like code ensemble in the limit of infinite block length. In order to reduce the computational burden of ensemble optimization based on DE, faster techniques have been proposed, based on the approximation of the DE by a one-dimensional dynamical system (recursion). These techniques are exact only for the BEC, for which DE is one-dimensional. The most popular techniques proposed so far are based on the Gaussian approximation (GA) of the messages exchanged in the message-passing decoder. The GA, combined with the symmetry condition on message densities, allows the Gaussian density of the messages to be described by a single parameter. The techniques differ in the parameter to be tracked and in the mapping functions defining the dynamical system that describes the evolution of probability distributions on the Tanner graph associated with the code ensemble.
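For the BEC, the DE of an irregular LDPC ensemble with edge-perspective degree distributions lambda and rho reduces to the one-dimensional recursion x_{l+1} = eps * lambda(1 - rho(1 - x_l)). The sketch below uses bisection on the erasure probability eps to locate the iterative decoding threshold of an ensemble; the (3,6)-regular ensemble used here is only an illustrative assumption.

```python
# Minimal sketch of one-dimensional density evolution on the BEC and of the
# resulting threshold search.  Degree distributions are given as coefficient
# lists in increasing powers of z.
import numpy as np

def de_threshold(lam, rho, tol=1e-6):
    """Largest erasure probability eps for which the DE recursion converges to zero."""
    def converges(eps, iters=2000):
        x = eps
        for _ in range(iters):
            x = eps * np.polyval(lam[::-1], 1 - np.polyval(rho[::-1], 1 - x))
        return x < 1e-10
    lo, hi = 0.0, 1.0
    while hi - lo > tol:                       # bisection on eps
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    return lo

# (3,6)-regular LDPC ensemble: lambda(z) = z^2, rho(z) = z^5.
lam = [0, 0, 1]
rho = [0, 0, 0, 0, 0, 1]
print(de_threshold(lam, rho))   # approximately 0.429, below the rate-1/2 BEC capacity bound of 0.5
```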

If the code block length is finite, i.e., the bipartite graph has finite size, the assumption of local independence of messages does not generally hold. Indeed, randomly constructed turbo-like codes of finite length may perform poorly under BP decoding, and the finite-length gap from channel capacity may not be as small as predicted by the infinite-length DE analysis. This gives rise to the interleaver design issue, which plays a fundamental role in the finite-length performance of codes on graphs. Interleaver design criteria are mainly based on heuristic arguments: the elimination of short cycles, in order to limit the propagation of unreliable messages; the maximization of the minimum distance, in order to eliminate the low-weight codewords responsible for poor performance at medium to high signal-to-noise ratios (SNR); and the elimination of small stopping sets [27], which are responsible for decoding failures of LDPC codes on the BEC.
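As a small illustration of the first criterion, the following sketch counts the length-4 cycles of the Tanner graph associated with a parity-check matrix: two variable nodes that share more than one check node close such a cycle. The random matrix used in the example is purely illustrative.

```python
# Minimal sketch of a short-cycle check used as a heuristic design criterion.
import numpy as np

def count_4cycles(H):
    """Number of length-4 cycles in the Tanner graph of parity-check matrix H."""
    overlap = H.astype(int).T @ H.astype(int)    # overlap[i, j] = checks shared by variables i and j
    np.fill_diagonal(overlap, 0)
    pairs = overlap * (overlap - 1) // 2         # 4-cycles closed by each ordered pair of variables
    return int(pairs.sum()) // 2                 # each unordered pair is counted twice

rng = np.random.default_rng(2)
H = (rng.random((6, 12)) < 0.3).astype(int)      # random sparse matrix, purely illustrative
print(count_4cycles(H))
```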

In the multiple access channel (MAC), several transmitters (users) share the same transmission medium (physical channel). The output of the channel is the noisy superposition of the transmitted signals. In the present work, we consider the Gaussian multiple access channel (GMAC), in which the noise is Gaussian. Multiuser information theory [28] teaches us that the capacity of a multiple access channel, i.e., the maximum total rate at which the users can transmit reliably over the channel, is generally maximized by transmitting mutually interfering signals. The main figure of merit is the spectral efficiency, defined as the total data rate per unit bandwidth (bits per second per Hertz, or bits per channel use).

In the present work, we investigate low-complexity practical coding/decoding schemes to approach the maximum spectral efficiency of the GMAC. Reaching the optimal performance requires suitable coding strategies, together with a decoding scheme that can recover the stream of transmitted bits arbitrarily reliably from the noisy superposition of the transmitted signals. This is achieved by the successive decoding technique, which decodes a given user while treating all other users as noise, subtracts the contribution of the decoded user from the received signal, and repeats this process until all users have been successfully decoded.
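The rate bookkeeping behind successive decoding can be illustrated on a scalar (real-valued) Gaussian MAC: with a hypothetical power assignment, the per-user rates obtained by decoding the users in turn, treating the not-yet-decoded users as noise, add up to the sum capacity. The powers and noise level below are illustrative assumptions.

```python
# Minimal sketch of successive decoding rates on a scalar real-valued Gaussian MAC.
import numpy as np

def successive_decoding_rates(powers, noise_var):
    """Achievable rate (bits/channel use) of each user, in decoding order."""
    rates = []
    residual = np.sum(powers)                     # interference from users not yet decoded
    for p in powers:
        residual -= p                             # this user is decoded and cancelled
        rates.append(0.5 * np.log2(1 + p / (noise_var + residual)))
    return np.array(rates)

powers = np.array([1.0, 2.0, 4.0])
noise_var = 1.0
rates = successive_decoding_rates(powers, noise_var)
sum_capacity = 0.5 * np.log2(1 + powers.sum() / noise_var)
print(rates, rates.sum(), sum_capacity)           # the per-user rates sum to the MAC sum capacity
```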

The coding strategy that we adopt in the present work is code division multiple access (CDMA), also known as spread-spectrum multiple access [29, 30]. In CDMA, each user is assigned a different spreading sequence, which is a unit-norm vector in an N-dimensional signal space. The elements of a spreading sequence are called chips. The spreading factor N is the number of chips per transmitted symbol. We assume that the spreading is random, i.e., the spreading sequences are assigned randomly and the chips are chosen equally likely and independently; a sketch of the resulting signal model is given after the list below. The use of random spreading is justified by the following facts:
• Random spreading accurately models practical CDMA systems which use long pseudo-noise sequences that span many symbol periods [30].
• The spectral efficiency averaged with respect to the choice of signatures is a lower bound on the optimum spectral efficiency achievable with an optimal choice of deterministic spreading sequences.
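As a small illustration, the following sketch generates a random binary spreading matrix with unit-norm columns and forms the noisy superposition observed at the receiver. The system dimensions, BPSK inputs and noise level are illustrative assumptions.

```python
# Minimal sketch of the synchronous CDMA signal model with random spreading:
# y = S x + n, where the columns of S are unit-norm random signature sequences
# of length N (the spreading factor).
import numpy as np

rng = np.random.default_rng(1)
K, N = 4, 8                                              # K users, spreading factor N (channel load K/N = 0.5)
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)    # random binary chips, unit-norm columns
x = rng.choice([-1.0, 1.0], size=K)                      # one BPSK symbol per user
noise_var = 0.1
y = S @ x + np.sqrt(noise_var) * rng.standard_normal(N)  # noisy superposition at the receiver

# Single-user matched-filter outputs (each user sees the others as interference).
print(S.T @ y)
```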

The spectral efficiency of synchronous CDMA systems with Gaussian noise and random spreading has been found in the large-system limit, where the number of users and the spreading factor grow to infinity while their ratio, the channel load, remains finite [31, 32]. The maximum spectral efficiency is achieved by Gaussian inputs. However, practical systems make use of discrete, small-size input alphabets. Some recent works use the asymptotic analysis of large CDMA systems with random spreading sequences and various receivers to design practical CDMA systems [33].

Table of Contents

1 Introduction
1.1 Advanced Coding Techniques
1.2 Coded CDMA with Successive Decoding
1.3 Thesis Outline
I Irregular Repeat Accumulate Codes
2 Irregular Repeat Accumulate Codes and Decoding
2.1 Encoding of IRA Codes
2.2 Binary-Input Symmetric-Output Channels
2.3 Belief Propagation Decoding of IRA Codes
2.4 Density Evolution and Stability
2.5 Conclusion
2.A Proof of Proposition 2.4
2.B Proof of Theorem 2.5
3 Design of Irregular Repeat Accumulate Code Ensembles
3.1 IRA Ensemble Optimization
3.2 EXIT Functions
3.3 Some Properties of Binary-Input Symmetric-Output Channels
3.3.1 Property 1
3.3.2 Property 2
3.3.3 Property 3
3.4 DE Approximation Methods
3.4.1 Method 1
3.4.2 Method 2
3.4.3 Methods 3 and 4
3.5 Properties of the Approximated DE
3.5.1 Stability condition
3.5.2 Fixed-Points, Coding Rate and Channel Capacity
3.6 Numerical Results
3.6.1 Design Example for Rate 1/2 Codes
3.6.2 Thresholds of IRA Ensembles
3.7 Conclusion
3.A Proof of Proposition 3.1
3.B EXIT Function with Monte Carlo
3.C Proof of Theorem 3.7
3.D Proof of Proposition 3.8
3.E Proof of Lemma 3.3
3.F Proof of Theorem 3.9
4 Conclusion
