
UDC 621.391

doi:10.31799/1684-8853-2018-4-73-85

RANDOM CODING BOUND FOR CHANNELS WITH MEMORY — DECODING FUNCTION WITH PARTIAL OVERLAPPING

Part 2. Examples and Discussion

A. N. Trofimov, PhD, Tech., Associate Professor, andrei.trofimov@vu.spb.ru
Saint-Petersburg State University of Aerospace Instrumentation, 67, B. Morskaia St., 190000, Saint-Petersburg, Russian Federation

Introduction: A suboptimal random coding exponent E_r^*(R; ψ) for a wide class of finite-state channel models using a mismatched decoding function ψ was obtained and presented in the first part of this work. We used a function ψ represented as a product of a posteriori probabilities of non-overlapped input subblocks of length 2B + 1 relative to the overlapped output subblocks of length 2W + 1. It has been shown that the computation of the function E_r^*(R; ψ) reduces to the calculation of the largest eigenvalue of a square non-negative matrix of an order depending on the values B and W. Purpose: To illustrate the approach developed in the first part of this study by applying it to various channels modelled as probabilistic finite-state machines. Results: We consider channels with state transitions not depending on the input symbol (channels with freely evolving states), and channels with deterministic state transitions, in particular, intersymbol interference channels. We present and discuss numerical results of calculating this random coding exponent in the full range of code rates for some channel models for which similar results were not obtained before. Practical computations were carried out for relatively small values of B and W. Nevertheless, even for small values of these parameters a good correspondence with some known results for optimal decoding was shown.

Keywords — Random Coding Bound Exponent, Finite-State Channel Model, Mismatched Decoding, Perron — Frobenius Theorem, Intersymbol Interference Channel.

Citation: Trofimov A. N. Random Coding Bound for Channels with Memory — Decoding Function with Partial Overlapping. Part 2. Examples and Discussion. Informatsionno-upravliaiushchie sistemy [Information and Control Systems], 2018, no. 4, pp. 73-85. doi:10.31799/1684-8853-2018-4-73-85

Introduction

This paper is the second part of a work whose first part was published earlier [1]. In the first part, a random coding bound was derived for a wide class of channels with memory, including channels for which this bound could not be obtained before. The basic idea is to apply a suboptimal decoding rule different from maximum likelihood (ML) decoding. In this part, we give numerical examples of the application of this bound and discuss them.

For coherence of exposition, we recall the definitions and the main result of the previous part of the paper. Let p_{y|x}(y|x) be the transition probability of the discrete-time channel; for a continuous-output channel it is a probability density function (p. d. f.) instead; x ∈ X^N, where X is a discrete input channel alphabet and q_x = |X| < ∞; y ∈ Y^N, where Y is the channel output alphabet, |Y| = q, and N is the length of a block code.

To indicate a segment of an arbitrary vector z we use the notation z_a^b = (z^{(max(1,a))}, z^{(max(1,a)+1)}, ..., z^{(min(b,L))}), where L is the length of the vector z. For subvectors, or segments of vectors, x and y a sans serif notation is used; the difference is marked by the use of ordinary and sans serif fonts.

The decoding rule is given as

$$\hat{\mathbf{x}} = \arg\max_{\mathbf{x}} \psi(\mathbf{y};\mathbf{x}),$$

where ψ(y; x) is a real-valued positive decoding function, and the maximization is performed over all code words.

For simplicity we assume that the code ensemble is generated using independent uniformly distributed (i. u. d.) code symbols. This assumption leads to a loss of optimality but simplifies further consideration. Using the classic approach [2] one can derive the suboptimal exponent of the random coding bound in asymptotic form

$$E_r^*(R;\psi) = \max_{0\le\rho\le1}\Bigl(\max_{\lambda>0} E_0^*(\psi,\rho,\lambda) - \rho R\Bigr),$$

where

$$E_0^*(\psi,\rho,\lambda) = (1+\rho)\log q_x - \lim_{N\to\infty}\frac{1}{N}\log\sum_{\mathbf{y}}\sum_{\mathbf{x}} p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x})\,\psi(\mathbf{y};\mathbf{x})^{-\lambda\rho}\Bigl(\sum_{\mathbf{x}'}\psi(\mathbf{y};\mathbf{x}')^{\lambda}\Bigr)^{\rho}.$$

Hereafter log(·) denotes the binary logarithm, and an asterisk in the superscript means that the code symbols are chosen as i. u. d. random variables.


Similarly one can get the random coding exponent for ML decoding for fixed code length N:

$$E_r^*(N,R) = \max_{0\le\rho\le1}\bigl(E_0^*(N,\rho) - \rho R\bigr), \quad (1)$$

where

$$E_0^*(N,\rho) = (1+\rho)\log q_x - \frac{1}{N}\log\sum_{\mathbf{y}}\Bigl(\sum_{\mathbf{x}} p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x})^{\frac{1}{1+\rho}}\Bigr)^{1+\rho}, \quad (2)$$

with the bound for maximum information rate

$$R_{\max}^*(N) = \left.\frac{\partial E_0^*(N,\rho)}{\partial\rho}\right|_{\rho=0},$$

where R_max^*(N) ≤ C, and C is the channel capacity. Evidently, the inequality E_r^*(R; ψ) ≤ E_r^*(R) is valid. The asymptotic random coding exponent for ML decoding and the code ensemble with i. u. d. symbols is E_r^*(R) = lim_{N→∞} E_r^*(N, R).

By analogy with the channel capacity C and the maximal information rate R_max^*(N), let us define the lower bound on the maximum achievable code rate for mismatched decoding C*(ψ) as

$$C^*(\psi) = \left.\frac{\partial}{\partial\rho}\max_{\lambda>0} E_0^*(\psi,\rho,\lambda)\right|_{\rho=0} \le C \quad (3)$$

and value

maxx>0 Eq(у, 1, X),

giving a bound on the cut-off rate R0; evidently, the inequalities Ro(y) < Ro< R0 are valid.

In this study we assume that the channel model is given as a probabilistic finite-state machine [2], i. e. the conditional probabilities characterizing this model are given as follows:

$$p_{\mathbf{y}|\mathbf{x}\mathbf{s}}(\mathbf{y}\,|\,\mathbf{x},\mathbf{s}) = \prod_{n=1}^{N} p_{y|xs}\bigl(y^{(n)}\,\big|\,x^{(n)}, s^{(n-1)}\bigr);$$

$$p_{\mathbf{s}|\mathbf{x}}(\mathbf{s}\,|\,\mathbf{x}) = p_s\bigl(s^{(0)}\bigr)\prod_{n=1}^{N} p_{s|xs}\bigl(s^{(n)}\,\big|\,x^{(n)}, s^{(n-1)}\bigr),$$

where s = (s^(0), s^(1), ..., s^(n), ...) is the sequence of the channel states, s^(n) ∈ S, S is the set of the channel states, and |S| < ∞; p_{y|xs}(y^(n)|x^(n), s^(n−1)) and p_{s|xs}(s^(n)|x^(n), s^(n−1)) are conditional probabilities of the channel output and of the channel state transition, respectively; p_s(·) is an unconditional (stationary) distribution on the set of the channel states. Also, we assume that the input channel symbol x^(n) and the current channel state s^(n−1) are independent. It has been shown in [1] that the probabilities p_{y|x}(y|x) can be represented in the form of a matrix product as

$$p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x}) = \mathbf{p}_s\Biggl(\prod_{n=1}^{N}\mathbf{P}\bigl(y^{(n)}\,\big|\,x^{(n)}\bigr)\Biggr)\mathbf{1}^{T}, \quad (4)$$

where

$$\mathbf{P}(y\,|\,x) = \bigl[\,p_{y|xs}(y\,|\,x,s)\,p_{s|xs}(s'\,|\,x,s)\,\bigr] \quad (5)$$

is a matrix of size |S| × |S|; p_s = [p_s(1), ..., p_s(|S|)] is the vector of the unconditional state probabilities at n = 0, and 1 = (1, ..., 1) is a vector of 1's of dimensions 1 × |S|.

Next, we specify the type of decoding function. The appropriate choice of the decoding function, allowing one to obtain a result in final form, was one of the main problems of this study. In this paper, we proposed a decoding function ψ(y; x) with partial overlap, which depends on two integer parameters W and B, W ≥ B ≥ 0. For i. u. d. segments x_{k(n)−B}^{k(n)+B} the decoding function ψ(y; x) can be written as

$$\psi(\mathbf{y};\mathbf{x}) = \prod_{n=0}^{N(B)-1} p_{\mathsf{y}|\mathsf{x}}\Bigl(\mathsf{y}_{k(n)-W}^{k(n)+W}\,\Big|\,\mathsf{x}_{k(n)-B}^{k(n)+B}\Bigr), \quad (6)$$

where p_{y|x}(·|·) is the conditional probability for segments of, in general, different lengths 2W + 1 and 2B + 1 respectively, k(n) = n(2B + 1) + 1, and N(B) is the number of input subblocks. Denote square matrices of order |S| as

$$\mathsf{P}_{\mathsf{y}|\mathsf{x}}(\mathsf{y}\,|\,\mathsf{x}) = \prod_{l=1}^{2B+1}\mathbf{P}\bigl(y^{(l)}\,\big|\,x^{(l)}\bigr), \quad \mathsf{y}\in Y^{2B+1},\ \mathsf{x}\in X^{2B+1}, \quad (7)$$

where P(y|x) is the matrix defined in (5). Let D_1(y; λ) be a scalar quantity, and D_2(y; λρ) and D(y; λ, ρ) be square matrices of order |S| defined as follows:

$$D_1(\mathsf{y};\lambda) = \sum_{\mathsf{x}\in X^{2B+1}} p_{\mathsf{y}|\mathsf{x}}(\mathsf{y}\,|\,\mathsf{x})^{\lambda};$$

$$\mathbf{D}_2(\mathsf{y};\lambda\rho) = \sum_{\mathsf{x}\in X^{2B+1}} \mathsf{P}_{\mathsf{y}|\mathsf{x}}\bigl(\mathsf{y}_{W-B+1}^{W+B+1}\,\big|\,\mathsf{x}\bigr)\,p_{\mathsf{y}|\mathsf{x}}(\mathsf{y}\,|\,\mathsf{x})^{-\lambda\rho};$$

$$\mathbf{D}(\mathsf{y};\lambda,\rho) = D_1(\mathsf{y};\lambda)^{\rho}\,\mathbf{D}_2(\mathsf{y};\lambda\rho), \quad \mathsf{y}\in Y^{2W+1}.$$

Let us also define the square matrices K_ij(λ, ρ) of order |S| as

$$\mathbf{K}_{ij}(\lambda,\rho) = \begin{cases} \mathbf{D}(\mathsf{y};\lambda,\rho), & W \ge 2B+1;\\[4pt] \displaystyle\sum_{\mathsf{y}_{2(W-B)+1}^{2B+1}} \mathbf{D}(\mathsf{y};\lambda,\rho), & W < 2B+1. \end{cases} \quad (8)$$

The correspondence of the indices i, j and the vector y in expression (8) is given as i ↔ y_1^{2(W−B)} and j ↔ y_{2B+2}^{2W+1}. Finally, we define a square block matrix of order |S|q^{2(W−B)}

$$\mathbf{K}(\lambda,\rho) = \bigl[\mathbf{K}_{ij}(\lambda,\rho)\bigr] = \begin{pmatrix} \mathbf{K}_{11}(\lambda,\rho) & \cdots & \mathbf{K}_{1\,q^{2(W-B)}}(\lambda,\rho)\\ \vdots & \ddots & \vdots\\ \mathbf{K}_{q^{2(W-B)}\,1}(\lambda,\rho) & \cdots & \mathbf{K}_{q^{2(W-B)}\,q^{2(W-B)}}(\lambda,\rho) \end{pmatrix}. \quad (9)$$

The main result obtained in the first part of this paper [1] is formulated as the following assertion.

Theorem. Let the channel be specified by conditional probabilities (4), where the matrices (5) are irreducible, and let the decoding function ψ be given by equation (6) with integer parameters W and B, where W ≥ B ≥ 0. Then the random coding exponent E_r^*(R; ψ) for the code ensemble with i. u. d. code symbols is


$$E_r^*(R;\psi) = \max_{0\le\rho\le1}\bigl(E_0^*(\psi,\rho) - \rho R\bigr),$$

where

$$E_0^*(\psi,\rho) = \max_{\lambda>0} E_0^*(\psi,\rho,\lambda) = (1+\rho)\log q_x - (2B+1)^{-1}\log\Bigl[\min_{\lambda>0} r\bigl(\mathbf{K}(\lambda,\rho)\bigr)\Bigr], \quad (10)$$

and r(K(λ, ρ)) is the maximum eigenvalue (spectral radius) of the matrix K(λ, ρ) given in equation (9).

The computational complexity of obtaining the values of the function E_r^*(R; ψ) depends on the dimensions of the table of probabilities p_{y|x}(y|x), equal to q^{2W+1} × q_x^{2B+1} [see (7)], and on the order of the square matrix K(λ, ρ), equal to |S|q^{2(W−B)}.

Determining the analytical dependence of the asymptotic random coding exponent E_r^*(R; ψ) on the values of W and B is equivalent to describing the dependence of the spectral radius of the matrix K(λ, ρ) on these parameters. This dependence cannot be expressed in exact and closed analytical form. From general considerations it follows that the greater the values of W and B, the better the approximation of the ML exponent that can in principle be achieved. Moreover, for different values of the code rate R different combinations of the values W and B may be preferable. Unfortunately, increasing the parameters W and B causes rapid growth of the computational complexity. The common approach consists in testing some combinations and selecting the one that gives acceptable results for a given code rate at reasonable computational complexity. A sketch of this computation is given below. In the next section, we present results of calculating the random coding exponent for several channel models with memory and a comparison with some known results.
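As an illustration of the theorem's computational recipe, the following sketch (our code, not the author's program) evaluates E_0^*(ψ, ρ) from (10) and E_r^*(R; ψ) by brute-force search over λ and ρ; `build_K` is an assumed user-supplied function returning the matrix K(λ, ρ) of (9) for the channel at hand.

```python
import numpy as np

def spectral_radius(M):
    # r(K(lambda, rho)): largest-modulus eigenvalue of a non-negative matrix.
    return max(abs(np.linalg.eigvals(M)))

def E0_star(build_K, qx, B, rho, lambdas=np.geomspace(0.05, 5.0, 200)):
    # Equation (10): (1 + rho) log2(qx) - log2(min_lambda r(K)) / (2B + 1).
    # Crude grid minimization over lambda > 0; a 1-D minimizer would do better.
    r_min = min(spectral_radius(build_K(lam, rho)) for lam in lambdas)
    return (1 + rho) * np.log2(qx) - np.log2(r_min) / (2 * B + 1)

def Er_star(build_K, qx, B, R, rhos=np.linspace(0.0, 1.0, 101)):
    # The theorem: Er*(R; psi) = max over 0 <= rho <= 1 of (E0* - rho * R).
    return max(E0_star(build_K, qx, B, rho) - rho * R for rho in rhos)
```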

Numerical Examples and Discussion

To illustrate the application of the suggested approach let us consider some examples. The first example is the classical Gilbert model and its generalization, the Gilbert — Elliott model. The second example presents a simple model of a fading channel with nonbinary Frequency Shift Keying (FSK). These models give examples of symmetric channels with binary and nonbinary inputs and freely evolving states. The third example is a channel model defined as a deterministic finite-state machine, and the last example is a channel with linear intersymbol interference and q-level quantized output (a deterministic finite-state machine model as well).

Example 1. Gilbert channel and Gilbert — Elliott channel. Consider the well-known Gilbert channel model with two states 1 ("good") and 2 ("bad"). In this case q_x = q = 2 and |S| = 2. Let the channel state transition probabilities be given as p_{s|s}(1|1) = 0.97 and p_{s|s}(2|2) = 0.75, and let the symbol crossover probabilities be equal to 0 and 1/2 for states 1 and 2 respectively (here we follow the example in [3]). For this model, the complexity of calculating the exponent of the random coding bound is not too large, and the function E_r^*(R; ψ) can be computed for comparatively large values of the parameters W and B. Results of the computations are shown in Fig. 1 for W = 5 and B = 0, 1, ..., 5. The values of C*(ψ), shown in Fig. 1 and further, are computed as

$$C^*(\psi) \approx \frac{\max_{\lambda>0} E_0^*(\psi,\delta,\lambda)}{\delta}, \quad \delta \ll 1, \quad (11)$$

giving an approximation to formula (3). The error exponent E_r^*(R) for ML decoding computed for this example using the Egarmin algorithm [4] is also shown in Fig. 1. The channel capacity can be found using the original Gilbert approach [3]; for this example the capacity is C = 0.758 bit/channel use. It can be seen from Fig. 1 that the curves E_r^*(R; ψ) approach the curve E_r^*(R) from below as the parameter B increases. For B = 5 the function E_r^*(R; ψ) and the value C*(ψ) give good approximations for the ML random coding exponent E_r^*(R) and the channel capacity C respectively.

■ Fig. 1. Functions E_r^*(R; ψ) and E_r^*(R), Gilbert channel, p(G|G) = 0.97, p(B|B) = 0.75, p(e|G) = 0, p(e|B) = 0.5. Legend: [W, B] = [5, 0], C*(ψ) = 0.696602; [5, 1], C*(ψ) = 0.729864; [5, 2], C*(ψ) = 0.740846; [5, 3], C*(ψ) = 0.745788; [5, 4], C*(ψ) = 0.748429; [5, 5], C*(ψ) = 0.750188; E_r^*(R), C = 0.758106

It is interesting to compare the function E_r^*(R; ψ) with the random coding exponent E_r^*(N, R) for ML decoding for some (small) values of the code length N. The function E_r^*(N, R) can be computed by formulas (1) and (2). In equation (2) the channel conditional probability p_{y|x}(y|x) is computed according to equation (4). Evidently, to compute a single value of p_{y|x}(y|x) we have to perform approximately 2N|S|² operations (multiplications and additions). To compute the value E_0^*(N, ρ) we need to calculate the values of p_{y|x}(y|x) for all y ∈ Y^N and all x ∈ X^N. Thus the total number of operations in the general case is about 2N|S|²·q^N·q_x^N. For this example |S| = 2 and q = q_x = 2; therefore, the total number of operations is about N·2^{2N+3}, which can be too large even for small N. For example, for N = 16 it is equal to 2^39 ≈ 5.5 · 10^11. But this channel can be considered as a channel with binary additive modulo-2 noise. Therefore, the channel output vector is y = x ⊕ e, where e is a binary error vector, and p_{y|x}(y|x) = p_e(y ⊕ x), where p_e(·) is a distribution on the set of error vectors. Then for the sum over x on the right-hand side of (2) we can write

$$\sum_{\mathbf{x}} p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x})^{\frac{1}{1+\rho}} = \sum_{\mathbf{x}} p_{\mathbf{e}}(\mathbf{y}\oplus\mathbf{x})^{\frac{1}{1+\rho}} = \sum_{\mathbf{x}} p_{\mathbf{e}}(\mathbf{x})^{\frac{1}{1+\rho}},$$

and hence

$$E_0^*(N,\rho) = \rho - \frac{1+\rho}{N}\log\Bigl(\sum_{\mathbf{x}} p_{\mathbf{e}}(\mathbf{x})^{\frac{1}{1+\rho}}\Bigr). \quad (12)$$

Evidently, p_e(x) = p_{y|x}(0|x), where the probabilities p_{y|x}(·|·) are given by equation (4). In this case, to compute all values p_e(x) it is required to perform about 2N|S|²·q_x^N = N·2^{N+3} operations. For example, for N = 16 the number of operations is equal to an acceptable value 2^23 ≈ 8.4 · 10^6 (a sketch of this computation is given below). Fig. 2 presents results of comparing the function E_r^*(R; ψ) with the random coding exponent E_r^*(N, R) for ML decoding for some fixed values of the code length N.
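The following sketch (ours, not the author's program) evaluates p_e(x) via the matrix product (4)–(5) for the Gilbert channel of Fig. 1 and then computes E_0^*(N, ρ) by (12) and E_r^*(N, R) by (1); the state ordering and the helper names are assumptions of the sketch.

```python
import itertools
import numpy as np

# Gilbert channel of Fig. 1: state 0 = "good" (no errors), 1 = "bad".
pGG, pBB = 0.97, 0.75
T = np.array([[pGG, 1 - pGG],          # state transition probabilities
              [1 - pBB, pBB]])
p_err = np.array([0.0, 0.5])           # crossover probability per state
p_s = np.array([1 - pBB, 1 - pGG]) / (2 - pGG - pBB)  # stationary distribution

def P(e):
    # Matrix (5) for error symbol e; the transitions do not depend on the
    # input here, so the entry (s, s') is p(e | s) * p(s' | s).
    return np.diag(p_err if e == 1 else 1 - p_err) @ T

def p_e(err_vec):
    # Probability of an error vector by the matrix product (4).
    v = p_s
    for e in err_vec:
        v = v @ P(e)
    return v.sum()

def Er_star_N(N, R, rhos=np.linspace(0.0, 1.0, 51)):
    # Equations (12) and (1); logs are base 2, enumeration costs 2^N.
    probs = np.array([p_e(e) for e in itertools.product((0, 1), repeat=N)])
    def E0(rho):
        return rho - (1 + rho) / N * np.log2((probs ** (1 / (1 + rho))).sum())
    return max(E0(rho) - rho * R for rho in rhos)

print(Er_star_N(8, 0.3))   # exponent at R = 0.3 for N = 8
```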

The asymptotic random coding exponent E_r^*(R) for ML decoding computed by the Egarmin algorithm [4] is also shown in Fig. 2. It can be seen from Fig. 2 that the functions E_r^*(N, R) approach the asymptotic function E_r^*(R) from above as N increases, and the function E_r^*(R; ψ) gives a quite good approximation of the asymptotic function E_r^*(R) from below.

Let us consider the next example, the Gilbert — Elliott model [5]. Here again q = q_x = 2 and |S| = 2. Let the channel state transition probabilities be given as p_{s|s}(1|1) = 0.99 and p_{s|s}(2|2) = 0.8, and let the symbol crossover probabilities be equal to 0.02 and 1/2 for states 1 ("good") and 2 ("bad") respectively. For this model the ML random coding exponent E_r^*(R) is unknown, but the channel capacity can be computed as shown in [6, 7]. The plots of the function E_r^*(R; ψ) are presented in Fig. 3.

It can be seen that the curves shift upwards with the increase of the parameter B. For this case we do not have a curve for the ML random coding exponent for comparison, but we can compute the capacity C for this channel using the statistical version [7] of the algorithm presented in [6]. For this example the true capacity is C ≈ 0.775 bit/channel use. For W = 4, B = 4 the value C*(ψ) = 0.765 bit/channel use, which is very close to the true capacity. A Monte-Carlo sketch of this capacity computation is given below.
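The sketch below estimates the Gilbert — Elliott capacity in the spirit of the statistical method of [6, 7]: C = 1 minus the entropy rate of the hidden-Markov error process, the rate being averaged over the causal error-probability predictions. The variable names and the estimator layout are ours, not the code of [7].

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.array([[0.99, 0.01],    # state transitions: G -> (G, B)
              [0.20, 0.80]])   #                    B -> (G, B)
eps = np.array([0.02, 0.50])   # crossover probabilities in G and B

def h2(p):                     # binary entropy, bits
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

N = 500_000
s = 0                                # true (hidden) state
b = np.array([0.20, 0.01]) / 0.21    # stationary distribution of T
acc = 0.0
for _ in range(N):
    e = rng.random() < eps[s]        # error symbol from the true state
    pred = b @ eps                   # Pr(e = 1 | past errors)
    acc += h2(pred)
    like = eps if e else 1 - eps     # Bayes update of the state belief,
    b = (b * like) / (b * like).sum() @ T  # then one-step prediction
    s = 1 if rng.random() < T[s, 1] else 0
print(1 - acc / N)                   # ~0.775 for these parameters
```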

■ Fig. 2. Functions E_r^*(N, R), E_r^*(R; ψ) and E_r^*(R), Gilbert channel, p(G|G) = 0.97, p(B|B) = 0.75, p(e|G) = 0, p(e|B) = 0.5. Legend: E_r^*(N, R): N = 8, R_0^*(N) = 0.353255, R_max^*(N) = 0.747679; N = 12, R_0^*(N) = 0.324717, R_max^*(N) = 0.751144; N = 16, R_0^*(N) = 0.309266, R_max^*(N) = 0.752877; N = 20, R_0^*(N) = 0.299838, R_max^*(N) = 0.753916; E_r^*(R; ψ), [W, B] = [5, 5], N → ∞, R_0^*(ψ) = 0.24986, C*(ψ) = 0.75019; E_r^*(R), R_0^* = 0.261971, C = 0.758106

■ Fig. 3. Functions E_r^*(R; ψ), Gilbert — Elliott channel, p(G|G) = 0.99, p(B|B) = 0.8, p(e|G) = 0.02, p(e|B) = 0.5. Legend: [W, B] = [4, 0], C*(ψ) = 0.742801; [4, 1], C*(ψ) = 0.756050; [4, 2], C*(ψ) = 0.761093; [4, 3], C*(ψ) = 0.763534; [4, 4], C*(ψ) = 0.764935; C = 0.775384

The plots of the functions E_r^*(N, R) for ML decoding for some fixed values of N, calculated using equations (12), (4) and (1), are shown in Fig. 4 for comparison with the suboptimal asymptotic exponent E_r^*(R; ψ). It can be seen that the curves E_r^*(N, R) shift downward, and the achievable code rate R_max^*(N) increases with increasing N. As N → ∞, we have R_max^*(N) → C (note that in this example C* = C), and the suboptimal random coding exponent E_r^*(R; ψ) can serve as a lower bound for the random coding exponent of the Gilbert — Elliott channel.

■ Fig. 4. Functions E_r^*(N, R) and E_r^*(R; ψ), Gilbert — Elliott channel, p(G|G) = 0.99, p(B|B) = 0.8, p(e|G) = 0.02, p(e|B) = 0.5. Legend: E_r^*(N, R): N = 8, R_0^*(N) = 0.398779, R_max^*(N) = 0.764900; N = 12, R_0^*(N) = 0.361830, R_max^*(N) = 0.766695; N = 16, R_0^*(N) = 0.337530, R_max^*(N) = 0.767595; N = 20, R_0^*(N) = 0.321142, R_max^*(N) = 0.768135; E_r^*(R; ψ), [W, B] = [4, 4], N → ∞, R_0^*(ψ) = 0.23658, C*(ψ) = 0.76494

■ Fig. 5. State transition diagram for the three-state channel (edges labeled p(G|G), p(G|M), p(M|B), p(B|B))

Example 2. Simple model of a fading channel with nonbinary FSK. Let us define the channel states as additive white Gaussian noise channels with different noise powers. Consider transmission of q_x-ary orthogonal FSK signals over this channel and optimal noncoherent reception. For this model q_x = q, and the symbol crossover probabilities are given as

$$p_{y|xs}\bigl(y^{(n)}\,\big|\,x^{(n)}, s^{(n-1)}\bigr) = \begin{cases} 1-\varepsilon\bigl(s^{(n-1)}\bigr), & y^{(n)} = x^{(n)};\\[4pt] \dfrac{\varepsilon\bigl(s^{(n-1)}\bigr)}{q-1}, & y^{(n)} \ne x^{(n)}, \end{cases}$$

where ε(s) is the symbol error probability for noncoherent reception of a q-ary FSK signal in channel state s. This probability can be found as (see, e. g., [8])

$$\varepsilon(s) = \sum_{i=1}^{q-1}\frac{(-1)^{i+1}}{i+1}\binom{q-1}{i}\exp\Bigl(-\frac{i}{i+1}\,\gamma(s)\Bigr),$$

where γ(s) is the signal-to-noise ratio (SNR) in channel state s. Let, for instance, q_x = q = 4 and S = {1, 2, 3}, i. e. the channel can be in one of three states: 1 ("good", or G), 2 ("medium", or M) and 3 ("bad", or B). Assume that the channel state transitions are given by the diagram shown in Fig. 5 with the following channel state transition probabilities: p_{s|s}(1|1) = p(G|G) = 0.99, p_{s|s}(1|2) = p(G|M) = 0.2, p_{s|s}(3|2) = p(B|M) = 0.6, p_{s|s}(3|3) = p(B|B) = 0.9. Let the state SNRs be γ(1) = 15 dB, γ(2) = 0 dB and γ(3) = −5 dB. The plots of the function E_r^*(R; ψ) are shown in Fig. 6; a short computation of ε(s) for these SNRs is sketched below.
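For reference, the finite sum above is easy to evaluate numerically. The sketch below (our code) prints the per-state symbol error probabilities for the SNRs of this example; the state order G, M, B and the function names are our choices.

```python
import math

def fsk_symbol_error(q, snr_db):
    # Symbol error probability of q-ary orthogonal FSK with optimal
    # noncoherent reception (finite-sum formula, see e.g. Proakis [8]).
    g = 10 ** (snr_db / 10)          # SNR in linear scale
    return sum((-1) ** (i + 1) / (i + 1) * math.comb(q - 1, i)
               * math.exp(-i * g / (i + 1)) for i in range(1, q))

# Crossover data for the three states G, M, B of Example 2 (q = 4)
for state, snr_db in zip("GMB", (15.0, 0.0, -5.0)):
    pe = fsk_symbol_error(4, snr_db)
    print(state, pe, pe / 3)   # eps(s) and per-wrong-symbol value eps/(q - 1)
```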

Example 3. Channel with deterministic state transitions. Let X = Y = {0, 1}, S = {1, 2}, and let the probabilities p_{s|xs}(s^(n)|x^(n), s^(n−1)) and p_{y|xs}(y^(n)|x^(n), s^(n−1)) be given in Table 1, where ε is a symbol crossover probability, 0 < ε < 1/2.

■ Table 1. Probabilities p_{s|xs}(s^(n)|x^(n), s^(n−1)) and p_{y|xs}(y^(n)|x^(n), s^(n−1))

p_{s|xs}(s^(n)|x^(n), s^(n−1)):

  s^(n) \ (x^(n), s^(n−1)) | (0,1) | (0,2) | (1,1) | (1,2)
  1                        |   1   |   1   |   0   |   0
  2                        |   0   |   0   |   1   |   1

p_{y|xs}(y^(n)|x^(n), s^(n−1)):

  y^(n) \ (x^(n), s^(n−1)) | (0,1) | (0,2) | (1,1) | (1,2)
  0                        |  1/2  | 1 − ε |   ε   |  1/2
  1                        |  1/2  |   ε   | 1 − ε |  1/2


■ Fig. 6. Functions E_r^*(R; ψ), three-state FSK4 channel, p(G|G) = 0.99, p(B|B) = 0.9, p(G|M) = 0.2, p(B|M) = 0.6, γ = [15, 0, −5] dB. Legend: [W, B] = [2, 0], C*(ψ) = 1.066815; [2, 1], C*(ψ) = 1.220886; [2, 2], C*(ψ) = 1.290990

For this model, in state 1 the symbol x = 0 flips with probability 1/2, and the symbol x = 1 flips with small probability ε. In state 2, on the contrary, the symbol x = 0 flips with small probability ε, and the symbol x = 1 with probability 1/2. In addition, the channel state becomes 1 after symbol 0 comes in, and becomes 2 after symbol 1 comes in. The functions E_r^*(R; ψ) for this model are plotted in Fig. 7 for ε = 0.01. In Fig. 7 we present examples for some good combinations of the parameters W and B for W = 0, 1, ..., 6; the best pairs for this example are [W, B] = [W, W − 1], W ≥ 1.

To compare the plots of the functions E_r^*(R; ψ) with the result for ML decoding, let us consider the function R_0^* − R. This function coincides with the random coding exponent E_r^*(R) in the interval 0 ≤ R ≤ R_cr, where R_cr is the critical rate [2], so the plot of the linear function R_0^* − R can be considered in this example as a known part of the curve of the whole random coding exponent E_r^*(R). It can be shown (see Appendix) that R_0^* for ML decoding for this example can be found as R_0^* = 2 log q_x − log r(H), where r(H) is the maximum eigenvalue of the matrix H, and

$$\mathbf{H} = \begin{pmatrix} 1 & a(\varepsilon) & a(\varepsilon) & 1\\ a(\varepsilon) & 1 & b(\varepsilon) & a(\varepsilon)\\ a(\varepsilon) & b(\varepsilon) & 1 & a(\varepsilon)\\ 1 & a(\varepsilon) & a(\varepsilon) & 1 \end{pmatrix}, \quad (13)$$

where a(ε) = √((1 − ε)/2) + √(ε/2) and b(ε) = 2√(ε(1 − ε)). It follows from Fig. 7 that the random coding exponent E_r^*(R; ψ) for W = 6, B = 5 is very close to the straight line R_0^* − R, so the function E_r^*(R; ψ) can be considered a quite good approximation for the ML random coding exponent E_r^*(R). A small numerical cross-check of R_0^* is sketched below.
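The following lines (a numerical sketch, not part of the original derivation) evaluate R_0^* = 2 log q_x − log r(H) for ε = 0.01 and reproduce the value R_0^* = 0.322380 shown in the legend of Fig. 7:

```python
import numpy as np

eps = 0.01
a = np.sqrt((1 - eps) / 2) + np.sqrt(eps / 2)   # a(eps)
b = 2 * np.sqrt(eps * (1 - eps))                # b(eps)
H = np.array([[1, a, a, 1],
              [a, 1, b, a],
              [a, b, 1, a],
              [1, a, a, 1]])
r = max(abs(np.linalg.eigvals(H)))              # spectral radius of H
R0 = 2 * np.log2(2) - np.log2(r)                # qx = 2
print(R0)                                       # ~0.32238
```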

Example 4. Intersymbol interference channel with q-level quantized output. A simple intersymbol interference channel model is defined by the vector of coefficients g = [g_0, g_1, ..., g_L]. The channel input x = (x^(1), ..., x^(N)) is a binary sequence, and the unquantized channel output is

$$y_0^{(n)} = \sum_{l=0}^{L} g_l\,(-1)^{x^{(n-l)}} + \eta^{(n)}, \quad (14)$$

where x^(l) ∈ {0, 1}, the modulation mapping 0 → +1, 1 → −1 assumed in equation (14) is written as (−1)^x, and the noise samples η^(n) are independent Gaussian random variables with zero mean and variance σ². The SNR is defined as γ = ||g||²/(2σ²). The number of channel states is equal to 2^L. The continuous channel output y_0^(n) is subjected to q-level quantization. In this example we assume that the quantization algorithm is a one-dimensional quantization maximizing the Bhattacharyya distance between the conditional distributions of the quantized values [9]. We consider the simplest case of the model known as the dicode channel, with parameters L = 1, g = [+1, −1] and |S| = 2. For this case y_0^(n) = (−1)^{x^(n)} − (−1)^{x^(n−1)} + η^(n).

■ Fig. 7. Functions E_r^*(R; ψ), channel with deterministic state transitions, ε = 0.01. Legend: [W, B] = [0, 0], C*(ψ) = 0.180885; [1, 1], C*(ψ) = 0.337793; [2, 1], C*(ψ) = 0.386812; [3, 2], C*(ψ) = 0.398002; [4, 3], C*(ψ) = 0.402759; [5, 3], C*(ψ) = 0.404569; [5, 4], C*(ψ) = 0.405397; [6, 4], C*(ψ) = 0.406806; [6, 5], C*(ψ) = 0.407075; R_0^* − R, R_0^* = 0.322380

The dicode channel with two-level quantization of the continuous channel output is equivalent to the channel with deterministic state transitions of Example 3. Plots of the functions E_r^*(R; ψ) computed according to equation (10) for γ = 4 dB, q = 8 and some combinations of the parameters B and W are shown in Fig. 8. The values of C*(ψ) computed by formula (11), and the values of R_0^* for ML decoding computed using known techniques [10-12] (see also Appendix), are also presented in Fig. 8.

■ Fig. 8. Functions E_r^*(R; ψ), dicode channel, g = [1, −1], γ = 4 dB, q = 8


As with the previous example, we present the plot of the function R_0^* − R in Fig. 8, where R_0^* is computed as shown in the Appendix. Also in Fig. 8 we indicate the value of the maximum information rate C* for the dicode channel with soft output computed in [13]. We see that the maxima of the functions E_r^*(R; ψ) for [W, B] = [3, 2] and [W, B] = [3, 1] are close to the straight line R_0^* − R and to the value of C*.

The random coding exponent E_r^*(N, R) for ML decoding computed by formulas (1) and (2) for several small values of the code length N is depicted in Fig. 9. The function E_r^*(R; ψ) and the asymptotic (N → ∞) linear function R_0^* − R for ML decoding are also presented in Fig. 9 for comparison. Clearly, as N → ∞ the function E_r^*(N, R) for 0 ≤ R ≤ R_cr tends to the line R_0^* − R. As follows from Fig. 9, the suboptimal random coding exponent E_r^*(R; ψ) is very close to the random coding exponent E_r^*(N, R) for the presented values of the code length N and is not far from the asymptotic linear function R_0^* − R. The functions E_r^*(N, R) for larger values of N are not presented due to the high complexity of their computation.

Plots of R_0^* and R_0^*(ψ) as functions of the SNR are presented in Fig. 10. The values of R_0^* are calculated by a known method [14-16]. For this case (dicode channel) the values of R_0^* can be found in closed form as

$$R_0^* = 2 - \log\Biggl(\frac{3 + e^{-2\gamma} + \sqrt{1 + 16e^{-\gamma} - 2e^{-2\gamma} + e^{-4\gamma}}}{2}\Biggr).$$
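This closed form can be cross-checked numerically. For the unquantized dicode channel, the Appendix construction of H yields (under our reading of the Gaussian model) Bhattacharyya entries a = e^{−γ/2} and b = e^{−2γ}, and 2 − log₂ r(H) reproduces the formula; both give R_0^* ≈ 0.824 at γ = 4 dB, the value visible in Fig. 10.

```python
import numpy as np

def R0_closed_form(gamma):
    # Closed-form R0* of the soft-output dicode channel (formula above).
    e1, e2, e4 = np.exp(-gamma), np.exp(-2 * gamma), np.exp(-4 * gamma)
    return 2 - np.log2((3 + e2 + np.sqrt(1 + 16 * e1 - 2 * e2 + e4)) / 2)

def R0_via_H(gamma):
    # Same quantity via the Appendix matrix H; for Gaussian outputs the
    # Bhattacharyya integrals give a = exp(-gamma/2), b = exp(-2*gamma)
    # (our assumption about the soft-channel specialization).
    a, b = np.exp(-gamma / 2), np.exp(-2 * gamma)
    H = np.array([[1, a, a, 1], [a, 1, b, a], [a, b, 1, a], [1, a, a, 1]])
    return 2 - np.log2(max(abs(np.linalg.eigvals(H))))

g = 10 ** (4 / 10)                      # gamma = 4 dB
print(R0_closed_form(g), R0_via_H(g))   # both ~0.824
```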

The values of C*(ψ) and the values of the maximum information rate for the channel with nonquantized output found in [13] by a simulation-based algorithm are shown in Fig. 11. Note that in [13] a different definition of SNR is used, namely γ = ||g||²/σ²; therefore the plot showing the data of [13] is moved to the left by 3 dB. We see in Fig. 10 and Fig. 11 that the difference is about 10 %, and it seems that increasing the number of quantization levels q decreases this difference.

Conclusion

In this work, consisting of two parts, we presented the derivation of the exponent of the random coding bound E_r^*(R; ψ) for suboptimal, or mismatched, decoding. We proposed a decoding function in the form of a product of the a posteriori probabilities of the non-overlapped input subblocks of length 2B + 1 relative to the overlapped output subblocks of length 2W + 1 [see (6)]. The computation of the values of the function E_r^*(R; ψ) reduces to the calculation of the largest eigenvalue of a square non-negative matrix K(λ, ρ) of order |S|q^{2(W−B)}, where |S| is the number of channel states and q is the cardinality of the channel output alphabet.

■ Fig. 9. Functions E_r^*(N, R) and E_r^*(R; ψ), dicode channel, g = [1, −1], γ = 4 dB, q = 8. Legend: E_r^*(N, R): N = 4, R_0^*(N) = 0.663078, R_max^*(N) = 0.810824; N = 5, R_0^*(N) = 0.685233, R_max^*(N) = 0.829165; N = 6, R_0^*(N) = 0.699908, R_max^*(N) = 0.840936; E_r^*(R; ψ), N → ∞, C*(ψ) = 0.867081; R_0^* − R, N → ∞, q = 8, R_0^* = 0.773058

■ Fig. 10. Plots of R_0^* and R_0^*(ψ) for the dicode channel vs SNR, dB. Legend: R_0^* soft channel output; R_0^*, q = 8; R_0^*, q = 4; R_0^*(ψ), [W, B] = [3, 2], q = 8; R_0^*(ψ), [W, B] = [3, 2], q = 4

6

8

.л )...... ) ........E ].............4

f r"

3'" /'"

.c /E )•' / /' f

/ / 3 '•*'* ,.•''

J/ c''

A c'

Л .....Q.....C*0), W - 3, B - 1, я - 8 ......*......C*0), W - 3, B - 2, я - 4 ' ..........C* soft channel output i- i- i-

0.1

-6

-4

-2

0

SNR, dB

Fig. 11. Plots C* and C*(y) for dicode channel

2

4

6

■ Fig. 12. General view of the functions Ẽ*(R) and E_r^*(R; ψ)


The computational complexity of obtaining the values of the function E_r^*(R; ψ) depends on the dimensions of the table of values p_{y|x}(y|x), equal to q^{2W+1} × q_x^{2B+1} [see (7)], and on the order of the matrix K(λ, ρ), equal to |S|q^{2(W−B)}. Therefore, in the examples presented in this part of the work, the practical computations were carried out for relatively small values of B, W and q. Nevertheless, even for small values of these parameters good results were obtained. The values of R_0^*(ψ) and C*(ψ) found for the suggested suboptimal decoding functions are close to the corresponding values found before for the case of ML decoding for intersymbol interference channels with soft output. A qualitative picture of the relationship between the function Ẽ*(R), presented in the introduction of the first part [1] of this work, and the function E_r^*(R; ψ) is shown in Fig. 12. The values of R_0^* and C* for a discrete-time channel with intersymbol interference can be obtained by known techniques [14-16] and [9]. As we see from Fig. 12, the curve for the function E_r^*(R; ψ) (solid line) goes higher than the known bound Ẽ*(R) for high code rates.

We see the same in the example in Fig. 8, where for the quantized channel output with q = 8 and W = 3, B = 1 we have R_0^*(ψ) = 0.710 bit/channel use and C*(ψ) = 0.871 bit/channel use; for continuous channel output R_0^* = 0.824 bit/channel use and C* = 0.920 bit/channel use, as follows from Fig. 10 and Fig. 11 for SNR = 4 dB. The curve E_r^*(R; ψ) shifts upwards and to the right with increasing values of the parameters q, W and B. This conclusion follows from the fact that the decoding function ψ approaches the ML decoding function as the parameters W and B increase. Thus, the function E_r^*(R; ψ) can be a good approximation for the true but unknown function E_r^*(R). The problem of extending the proposed approach to other channel models and the problem of finding an efficient algorithm for numerical computation for large values of B, W and q remain open for research.

Appendix

In this appendix we present a derivation of the expression for R_0^* for the channel with deterministic state transitions. This derivation is a minor modification of the known results [12, 14-16]. The general expression for R_0^* follows from equation (2) with ρ = 1 and N → ∞ and is

$$R_0^* = 2\log q_x - \lim_{N\to\infty}\frac{1}{N}\log\sum_{\mathbf{y}}\Biggl(\sum_{\mathbf{x}}\sqrt{p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x})}\Biggr)^{2}. \quad (A1)$$

Let us consider the sum over y in (A1):

$$\sum_{\mathbf{y}}\Biggl(\sum_{\mathbf{x}}\sqrt{p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x})}\Biggr)^{2} = \sum_{\mathbf{y}}\sum_{\mathbf{x}}\sum_{\mathbf{x}'}\sqrt{p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x})\,p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x}')}. \quad (A2)$$

For the finite-state channel model we have the channel conditional probability

$$p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x}) = \sum_{\mathbf{s}} p_{\mathbf{y}|\mathbf{x}\mathbf{s}}(\mathbf{y}\,|\,\mathbf{x},\mathbf{s})\,p_{\mathbf{s}|\mathbf{x}}(\mathbf{s}\,|\,\mathbf{x}),$$

where

$$p_{\mathbf{y}|\mathbf{x}\mathbf{s}}(\mathbf{y}\,|\,\mathbf{x},\mathbf{s}) = \prod_{n=1}^{N} p_{y|xs}\bigl(y^{(n)}\,\big|\,x^{(n)}, s^{(n-1)}\bigr)$$


and

$$p_{\mathbf{s}|\mathbf{x}}(\mathbf{s}\,|\,\mathbf{x}) = p_s\bigl(s^{(0)}\bigr)\prod_{n=1}^{N} p_{s|xs}\bigl(s^{(n)}\,\big|\,x^{(n)}, s^{(n-1)}\bigr).$$

Hence,

$$p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x}) = \sum_{s^{(0)}} p_s\bigl(s^{(0)}\bigr)\prod_{n=1}^{N}\sum_{s^{(n)}} p_{y|xs}\bigl(y^{(n)}\,\big|\,x^{(n)}, s^{(n-1)}\bigr)\,p_{s|xs}\bigl(s^{(n)}\,\big|\,x^{(n)}, s^{(n-1)}\bigr). \quad (A3)$$

For a channel with deterministic state transitions the pair (x^(n), s^(n−1)) uniquely defines the next channel state s^(n); therefore the sum over s^(n) in (A3) contains only one term. Then it can be written as

$$p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x}) = \sum_{s^{(0)}} p_s\bigl(s^{(0)}\bigr)\,p_{y|xs}\bigl(y^{(1)}\,\big|\,x^{(1)}, s^{(0)}\bigr)\,p_{y|xs}\bigl(y^{(2)}\,\big|\,x^{(2)}, s^{(1)}\bigr)\cdots p_{y|xs}\bigl(y^{(N)}\,\big|\,x^{(N)}, s^{(N-1)}\bigr),$$

where s^(n) = f(x^(n), s^(n−1)), n = 1, 2, ..., N, and f(·, ·) is the function defining the deterministic transition s^(n−1) → s^(n). Hence, for the expression (A2) we have

$$\sum_{\mathbf{y}}\sum_{\mathbf{x}}\sum_{\mathbf{x}'}\sqrt{p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x})\,p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x}')} \le \max_{s^{(0)},\,s'^{(0)}}\sum_{\mathbf{x}}\sum_{\mathbf{x}'}\prod_{n=1}^{N}\sum_{y}\sqrt{p_{y|xs}\bigl(y\,\big|\,x^{(n)}, s^{(n-1)}\bigr)\,p_{y|xs}\bigl(y\,\big|\,x'^{(n)}, s'^{(n-1)}\bigr)}. \quad (A4)$$

Let us introduce, for two pairs of states (s_a, s'_a) and (s_b, s'_b), the values

$$h(s_a, s'_a; s_b, s'_b) = \begin{cases} \displaystyle\sum_{y}\sqrt{p_{y|xs}(y\,|\,x, s_a)\,p_{y|xs}(y\,|\,x', s'_a)}, & \text{if } s_b = f(x, s_a) \text{ and } s'_b = f(x', s'_a);\\[6pt] 0, & \text{otherwise}, \end{cases} \quad (A5)$$

and build the matrix H of size |S|² × |S|² with the entries h(s_a, s'_a; s_b, s'_b), where the pairs (s_a, s'_a) and (s_b, s'_b) represent the first and the second index of the matrix entry respectively. Then the inequality (A4) can be written as

$$\sum_{\mathbf{y}}\sum_{\mathbf{x}}\sum_{\mathbf{x}'}\sqrt{p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x})\,p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x}')} \le \max_{i}\bigl(\mathbf{H}^{N}\mathbf{1}^{T}\bigr)_i \le \mathbf{1}\mathbf{H}^{N}\mathbf{1}^{T},$$

where 1 = (1, ..., 1) is a vector of dimension |S|². Further, we have

$$\lim_{N\to\infty}\frac{1}{N}\log\sum_{\mathbf{y}}\Biggl(\sum_{\mathbf{x}}\sqrt{p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}\,|\,\mathbf{x})}\Biggr)^{2} = \lim_{N\to\infty}\frac{1}{N}\log\bigl(\mathbf{1}\mathbf{H}^{N}\mathbf{1}^{T}\bigr) = \log r(\mathbf{H}),$$

and using the corollary of the Perron — Frobenius theorem [2, 15] we finally get from (A1) that R_0^* = 2 log q_x − log r(H), where r(H) is the maximum eigenvalue of the matrix H.

For the channel model given in Examples 3 and 4 the values of h(s_a, s'_a; s_b, s'_b) defined in (A5) are listed in Table A1 (a numerical check of this correspondence is sketched below).

■ Table A1. Values of h(s_a, s'_a; s_b, s'_b); each entry is Σ_y √(p(y|x, s_a) p(y|x', s'_a)), where p(y|x, s) stands for p_{y|xs}(y|x, s) and (x, x') are the inputs driving (s_a, s'_a) to (s_b, s'_b)

  (s_a, s'_a) \ (s_b, s'_b) | (1,1) | (1,2) | (2,1) | (2,2)
  (1,1) | Σ_y √(p(y|0,1) p(y|0,1)) | Σ_y √(p(y|0,1) p(y|1,1)) | Σ_y √(p(y|1,1) p(y|0,1)) | Σ_y √(p(y|1,1) p(y|1,1))
  (1,2) | Σ_y √(p(y|0,1) p(y|0,2)) | Σ_y √(p(y|0,1) p(y|1,2)) | Σ_y √(p(y|1,1) p(y|0,2)) | Σ_y √(p(y|1,1) p(y|1,2))
  (2,1) | Σ_y √(p(y|0,2) p(y|0,1)) | Σ_y √(p(y|0,2) p(y|1,1)) | Σ_y √(p(y|1,2) p(y|0,1)) | Σ_y √(p(y|1,2) p(y|1,1))
  (2,2) | Σ_y √(p(y|0,2) p(y|0,2)) | Σ_y √(p(y|0,2) p(y|1,2)) | Σ_y √(p(y|1,2) p(y|0,2)) | Σ_y √(p(y|1,2) p(y|1,2))

Expression (13) is obtained using the data given in Table A1.
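A small numerical check (ours, not part of the paper) that Table A1, with the output distributions of Table 1, indeed yields the matrix (13) with a(ε) and b(ε) as stated:

```python
import numpy as np

eps = 0.01
p = np.empty((2, 2, 2))          # p[y, x, s] = p_{y|xs}(y | x, s); states 1, 2 -> 0, 1
p[:, 0, 0] = [0.5, 0.5]          # x = 0, s = 1
p[:, 0, 1] = [1 - eps, eps]      # x = 0, s = 2
p[:, 1, 0] = [eps, 1 - eps]      # x = 1, s = 1
p[:, 1, 1] = [0.5, 0.5]          # x = 1, s = 2

pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]   # state pairs (1,1), (1,2), (2,1), (2,2)
H = np.zeros((4, 4))
for i, (sa, sa_) in enumerate(pairs):
    for j, (sb, sb_) in enumerate(pairs):
        # Deterministic transitions: the target state index equals the input,
        # so the pair (x, x') is determined by (s_b, s'_b).
        x, x_ = sb, sb_
        H[i, j] = np.sum(np.sqrt(p[:, x, sa] * p[:, x_, sa_]))

a = np.sqrt((1 - eps) / 2) + np.sqrt(eps / 2)
b = 2 * np.sqrt(eps * (1 - eps))
print(np.allclose(H, [[1, a, a, 1], [a, 1, b, a], [a, b, 1, a], [1, a, a, 1]]))  # True
```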


References

1. Trofimov A. N. Random Coding Bound for Channels with Memory — Decoding Function with Partial Overlapping. Part 1. Derivation of Main Expression. Informatsionno-upravlyayushhie sistemy [Information and Control Systems], 2018, no. 3, pp. 79-88. doi:10.15217/issn1684-8853.2018.3.79

2. Gallager R. Information Theory and Reliable Communication. New York, John Wiley & Sons, 1968. 588 p.

3. Gilbert E. N. Capacity of a Burst-Noise Channel. Bell System Technical Journal, 1960, vol. 39, no. 5, pp. 1253-1265.

4. Egarmin V. K. Lower and Upper Bounds on Decoding Error Probability for Discrete Channels. Problemy peredachi informatsii [Problems of Information Transmission], 1969, vol. 5, no. 1, pp. 23-39 (In Russian).

5. Elliott E. O. Estimates of Error Rates for Codes on Burst-Noise Channels. Bell System Technical Journal, 1963, vol. 42, no. 5, pp. 1977-1997.

6. Mushkin M., Bar-David I. Capacity and Coding for the Gilbert — Elliott Channels. IEEE Transactions on Information Theory, 1989, vol. 35, no. 6, pp. 1277-1290.

7. Rezaeian M. Computation of Capacity for Gilbert — Elliott Channels, using a Statistical Method. 6th Australian Communications Theory Workshop 2005, Feb. 2-4, 2005, Brisbane, Australia, pp. 56-61.

8. Proakis J. G. Digital Communications. McGraw-Hill, New York, 1995. 928 p.

9. Lin S., Costello D. Error Control Coding — Fundamentals and Applications. Prentice Hall, N. J., 2004. 624 p.

10. Raghavan S., Kaplan G. Optimum Soft Decision Demodulation for ISI Channels. IEEE Transactions on Communications, 1993, vol. 41, no. 1, pp. 83-89.

11. Shamai (Shitz) S., Raghavan S. On the Generalized Symmetric Cutoff Rate for Finite-State Channels. IEEE Transactions on Information Theory, 1995, vol. 41, no. 5, pp. 1333-1346.

12. Trofimov A., Chan Keong Sann. Complexity-Performance Trade-off for Intersymbol Interference Channels - Random Coding Analysis. IEEE Transactions on Magnetics, 2010, vol. 46, no. 4, pp. 1077-1091.

13. Arnold D. M., Loeliger H.-A., Vontobel P. O., Kavcic A., Wei Zeng. Simulation-Based Computation of Information Rates for Channels With Memory. IEEE Transactions on Information Theory, 2006, vol. 52, no. 8, pp. 3498-3508.

14. Biglieri E. The Computational Cutoff Rate of Channels Having Memory. IEEE Transactions on Information Theory, 1981, vol. 27, pp. 352-357.

15. Gantmacher F. R. Teoriia matrits [Theory of Matrices]. Moscow, Nauka Publ., 1988. 552 p. (In Russian).
