ADAPTIVE CONSTRAINED CODING

PhD Thesis by

PETER VÁMOS

2013


Contents

1 Introduction . . . 3
1.1 Spectrum Shaping Coding Algorithms . . . 4
1.2 An Overview of this Thesis . . . 6
1.3 Basic Concepts . . . 7

2 The Principle of the Coder . . . 13
2.1 Generalization of the Charge Concept . . . 13
2.2 The Coder's Structure and Performance . . . 14
2.3 Mixing the Constraints . . . 18

3 Coding with FIR Filter . . . 21
3.1 Special Properties of the Transition Matrix . . . 22
3.2 The Window-Charge Constrained Channel . . . 26
3.3 Rate, Error Propagation and Channel Capacity . . . 29
3.4 Spectral Density of the Output Signal . . . 32
3.5 Coding for Run-Length Limited Channel . . . 38

4 Coding with IIR Filter . . . 45
4.1 The α-Charge Constrained Channel . . . 45
4.2 Taking the Run-Length into Account . . . 51

5 Coding Biased Sources . . . 57
5.1 Coding Algorithm for Biased Input . . . 57
5.2 A Coding Theorem of Run-Length Limited Channels . . . 63

6 Some Other Applications . . . 67

Appendix . . . 73
Condition of Limited Error Propagation . . . 73

Bibliography . . . 77
Related Publications . . . 77
References . . . 78

Index . . . 81


Chapter 1

Introduction

Since the beginning of digital communications there has been a contradiction: while the information at the source and at the sink is handled and processed digitally, the channel itself has remained analog, with its analog properties. Among these analog properties one of the most important is the frequency response. For good communication it is therefore necessary to accommodate the digital signal's spectrum to the channel's properties. The digital signal is the convolution of the modulating data stream X and an elementary pulse h(t):

s(t) = \sum_{i=0}^{\infty} X_i h(t - iT),

so its spectrum is given as the product of the spectral density of the data stream and the elementary pulse’s spectrum:

S(f) = S_X(f) \cdot |H(f)|^2.

That is, spectral shaping can be achieved either by shaping the elementary pulse or by manipulating the spectrum of the data stream by coding. In many applications where it is impossible or difficult to shape the pulse (e.g. digital optical or magnetic recording/transmission), spectral shaping of the binary data sequence is important [1–3]. Moreover, some spectral requirements, such as DC suppression, can be satisfied only by coding [4,5].

In addition to the spectral demands, there are further requirements on the code, such as bounds on the number of consecutive identical symbols, i.e., the run-length. An upper bound on the run-length (k constraint) ensures reliable clock recovery, while a lower bound (d constraint) diminishes the intersymbol interference; both are of practical importance [6,7].


1.1 Spectrum Shaping Coding Algorithms

Most code constructions in the literature concentrate on DC suppression [Chapters 8 and 11 of 8]; one can find only a few general purpose spectrum shaping algorithms, and even fewer that are capable of controlling the spectrum and the run-length simultaneously.

To form a channel sequence complying with given constraints from the source sequence, we use a transcoder. It adds some redundancy r, since not all binary sequences obey the constraints, i.e., the constrained channel has a capacity less than 1. The coder inputs m bits and outputs n = m + r bits. The performance of the coder is characterized by the rate R = m/n: the higher the rate, the better the performance. The rate, however, is limited by the capacity of the constrained channel: R ≤ C.

Coding algorithms can be classified by how the redundancy is added. If m and n are constants, the algorithm is called fixed length; if they vary, it is referred to as variable length. For a variable length algorithm the rate is defined as the expected value of the m to n ratio: R = E(m/n).

1.1.1 Fixed length algorithms

In 1987 Marcus and Siegel published a block code algorithm which can produce spectral nulls at rational submultiples of the signalling frequency [9]. The (n, m) block code derived from the constrained channel's adjacency matrix results in a complex code construction, and consequently in a complicated coder structure even for the smoothest spectral constraints, and the algorithm is not suitable for handling run-length limiting (RLL) constraints simultaneously.

The guided scrambling algorithm developed by Fair et al. [10] achieves DC suppression by adding one or more redundant bits to the blocks of the data stream. It minimizes the accumulated charge at the output of a scrambler over the possible values of the redundant bits. Applying the weighted running digital sum (WRDS) concept introduced in Chapter 2 of my thesis, the algorithm can be used even for general purpose spectral shaping. It is also claimed that the algorithm limits the run-length; however, this is carried out only by limiting the block length, and moreover, a k constraint cannot be prescribed explicitly.

Keeping the run-length limit at the block boundaries causes further difficulties, and guided scrambling is not suitable for imposing a d constraint at all.

The algorithm proposed by Cavers and Marchetto can be taken as a special case of guided scrambling with enhanced spectral shaping properties [11]. It minimizes (or maximizes) the output of a finite impulse response (FIR) digital filter representing the spectral constraint by inverting some data blocks along the filter. The inversion is marked on flag bits added to each block. With this method, however, RLL constraints cannot be handled at all, and it uses the computationally expensive Viterbi algorithm for the encoding.

1.1.2 Variable length algorithms

The rate of a good code construction tries to approach the channel capacity. For fixed length algorithms the rate is a ratio of two integers, while the channel capacity can even be irrational. Usually a good approximation can be reached only with large m and n.

This makes the code construction complicated and the coder clumsy. The same holds for the so called synchronous variable length algorithms, where only a constant m to n ratio is required. Most true (asynchronous) variable length coding algorithms are based on the bit stuffing technique. The coder monitors the output stream and inserts an extra bit of suitable polarity whenever the constraint would be violated by the next source bit. Bit stuffing algorithms are simple: they do not require much computing and can be realized easily. Due to the variable code length, their rate can closely approach the channel capacity; however, they require buffering to keep the transmission rate constant.

The bit stuffing approach has been applied for decades to control the run-length in binary sequences. The well known HDLC (High-level Data Link Control) protocol inserts a "0" bit after each sequence of five consecutive "1" bits [12]. In 1993 Bender and Wolf suggested a bit stuffing algorithm for generating run-length limited (RLL) sequences with a spectral null at zero frequency [13]. However, their solution can scarcely be applied in practice due to its proneness to infinite error propagation, caused by the infinite memory of the decoder.

Actually, the only drawback of the bit stuffing algorithm is its proneness to error propagation. This can be kept under control by limiting the memory of the coder. Further improvement in error propagation can be reached if we diminish the error probability at the bit stuff decoder's input by applying an outer error correcting code with reverse concatenation [14,15].

In the second half of the 2000s many authors published improvements to the bit stuffing algorithm for coding (d, k) constrained channels [16,17,18]. The rates of these improved algorithms are very close to the channel capacity, and in some specific cases they even reach it. However, they all exploit the fact that the bound on the current run-length is constant, which does not hold for the charge and generalized charge constrained codes [C2] applied for spectral shaping. Moreover, they move the position of the stuffed bits, which would deteriorate the spectral shaping.


1.2 An Overview of this Thesis

In my thesis I generalize the accumulated charge concept used for generating DC suppressed code spectra and introduce a new class of constraints, the generalized charge constraint. The new constraint limits the level at the output of a digital filter, so spectral requirements can be described easily. An adaptive coder structure, a feedback controlled bit stuff encoder with a loop filter, is suggested to implement the new constraint.

This structure can perform very effective and flexible spectrum shaping, and moreover, it can also control the run-length of the output bit stream. The flexibility is due to the digital loop filter, which can be implemented with inexpensive and readily available DSP (digital signal processing) components. In contrast to fixed length algorithms, the coder controls the output sequence continuously; this makes it easy to implement both k and d constraints, allows the application of IIR (infinite impulse response) filters, and enhances the effectiveness of spectral shaping. I call the above coder structure adaptive because the bit stuff encoder adapts the source sequence with its interventions to the applied constraints, and because the coder is easily reconfigurable.

In Chapter 2 I describe the coder's structure and performance and give a brief analysis of the coder demonstrating its spectral shaping property. I supply an approximate formula for the spectrum of the output binary signal in case of independent, identically distributed (i.i.d.) input. I also show that the coder performs a sigma-delta converter-like operation. It is shown that the coder can enforce spectral and run-length constraints simultaneously, and the approximate formula for the spectrum is extended to the case when a k constraint and a spectral constraint are applied together.

In Chapter 3 I give a detailed analysis of coders with FIR (finite impulse response) loop filters. Such a coder can be described as a finite state machine (FSM) with the corresponding discrete Markov model. Using the symmetry properties of the transition matrix I supply formulas for the rate, the related constrained channel's capacity and the spectral density of the output signal. I also deal with the case when explicit run-length constraints are imposed alongside the spectral one. As an example, I present an actual coder scheme with a low-pass window filter generating DC suppressed code for AC-coupled channels.

In Chapter 4 coders with IIR loop filters are discussed. These coders have an infinite state space and the corresponding Markov chain becomes unstable. So either we use functional equations for the description, or we approximate the actual Markov process with a finite space Markov chain. I demonstrate the method of functional equations by the analysis of a coder with a 1st order low-pass IIR loop filter.


Chapter 5 deals with coding biased sources, where the probabilities of the binary source symbols are not equal. The RLL concept is generalized by introducing the wide-sense RLL channel, and a coding theorem is proved for such channels. Finally, Chapter 6 illustrates the versatility of the new coder concept by presenting a few particular spectral characteristics shaped by different loop filters for actual and proposed applications.

1.3 Basic Concepts

Here I would like to explain some important concepts used in my thesis for readers not quite familiar with the topic. Maybe it is a bit late, since I have already used some of them in this section. For the uncoded source sequence I will use the notation X, while the channel sequence will be denoted by Y. Both X and Y are assumed binary, and "+1"/"−1" are used to represent the binary symbols instead of "0"/"1" to avoid a discrete DC component in the physical signal.

1.3.1 Entropy

The entropy of an uncorrelated source which emits the symbols {x_1, x_2, ..., x_k} independently with probability p(x_i) = P(X = x_i) is defined as

H(X) = \sum_{i=1}^{k} p(x_i) \log_2 \frac{1}{p(x_i)}.

If the source is correlated, the entropy is defined as the average information density of an infinite source sequence:

H(X) = \lim_{n\to\infty} \frac{1}{n} H(X_1, X_2, \ldots, X_n).  (1.1)

Most coders can be described as Markov sources [1], i.e., the output sequence is correlated. Using the conditional entropy

H(X|Y) = \sum_i p(y_i) \sum_j p(x_j|Y = y_i) \log_2 \frac{1}{p(x_j|Y = y_i)}
       = \sum_i \sum_j p(x_j, y_i) \log_2 \frac{p(y_i)}{p(x_j, y_i)}
       = \sum_i \sum_j p(x_j, y_i) \log_2 \frac{1}{p(x_j, y_i)} - \sum_i p(y_i) \log_2 \frac{1}{p(y_i)}
       = H(X, Y) - H(Y),

the joint entropy of X and Y can be expressed as

H(X, Y) = H(Y) + H(X|Y).


Applying the above equality with X = X_k and Y = (X_1, X_2, ..., X_{k-1}) recurrently (k = n, n-1, ..., 2) to the joint entropy H(X_1, X_2, ..., X_n) in (1.1), we get

H(X_1, X_2, \ldots, X_n) = H(X_n|X_1, \ldots, X_{n-1}) + H(X_{n-1}|X_1, \ldots, X_{n-2}) + \cdots + H(X_2|X_1) + H(X_1).

For Markov sources the conditional entropies H(X_n|X_1, ..., X_{n-1}), ..., H(X_2|X_1) are equal. Applying the above result in (1.1), for the entropy of a Markov source we get

H(X) = \lim_{n\to\infty} \frac{1}{n}\left[(n-1) H(X_k|X_{k-1}) + H(X_1)\right] = H(X_k|X_{k-1}).

The conditional entropy on the right side can be expressed with the transition probabilities q_{i,j} = P(X_k = x_j | X_{k-1} = x_i) and the stationary distribution p_i = P(X = x_i) of the generating Markov chain:

H(X) = H(X_k|X_{k-1}) = \sum_i p_i \sum_j q_{i,j} \log_2 \frac{1}{q_{i,j}}.
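To make the last formula concrete, here is a small numerical sketch (my illustration, not part of the thesis): it computes the entropy rate of a stationary Markov source from its transition matrix, obtaining the stationary distribution as the left eigenvector of Q belonging to eigenvalue 1. The two-state example matrix is arbitrary.

    import numpy as np

    def markov_entropy_rate(Q):
        # Stationary distribution: left eigenvector of Q for eigenvalue 1.
        w, v = np.linalg.eig(Q.T)
        p = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        p /= p.sum()
        # H(X) = sum_i p_i sum_j q_ij log2(1/q_ij); 0*log(0) handled by masking.
        q = np.where(Q > 0, Q, 1.0)
        return float(-(p[:, None] * Q * np.log2(q)).sum())

    Q = np.array([[0.9, 0.1],       # arbitrary two-state chain
                  [0.5, 0.5]])
    print(markov_entropy_rate(Q))   # entropy in bits per symbol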

1.3.2 Channel capacity

Shannon [19] defined the capacity of the discrete noiseless channel as

C = \lim_{t\to\infty} \frac{\log_2 N(t)}{t},

where N(t) is the number of admissible signals of duration t. Let S(n) denote the number of n bit long binary sequences complying with the given channel constraint. Then, using the signalling time as time base, according to Shannon's definition the capacity of the binary constrained channel is

C = \lim_{n\to\infty} \frac{\log_2 S(n)}{n}.  (1.2)

The channel capacity is an upper bound on the information density of the channel sequence:

H(Y) ≤ C.

A =
    0 1 0 0 0 0 0 0
    0 0 1 0 0 0 0 1
    0 0 0 1 0 0 0 1
    0 0 0 0 0 0 0 1
    1 0 0 0 0 0 0 0
    1 0 0 0 1 0 0 0
    1 0 0 0 0 1 0 0
    0 0 0 0 0 0 1 0

Figure 1.1. The transition graph and adjacency matrix of the (d, k) = (1, 3) RLL channel. (The eight-state transition graph itself is not reproduced here.)


To determine S(n), let us consider the Mealy-type graph of the constrained channel in Fig. 1.1, where vertices correspond to the momentary channel state and edges correspond to the channel symbols. Such a graph is described uniquely by its adjacency matrix A. Let s_i(n) denote the number of n bit long sequences ending in state i, and s(n) = [s_1(n), s_2(n), ..., s_N(n)] the vector of these numbers. Then for s(n) we can write:

s(n) = A s(n-1) \quad and \quad S(n) = \sum_{i=1}^{N} s_i(n) = \mathbf{1} s(n),  (1.3)

where 1 denotes the all-ones vector [1, 1, ..., 1]. According to (1.3), we can obtain the S(n) numbers from s(0):

S(n) = \mathbf{1} A^n s(0).

Expressing s(0) with the eigenvectors a_i of A, we can give S(n) as a sum of geometric sequences whose quotients are the corresponding eigenvalues λ_i:

S(n) = \sum_{i=1}^{N} \lambda_i^n \, \mathbf{1} a_i = \sum_{i=1}^{N} c_i \lambda_i^n,

where the coefficients c_i = 1 a_i can be determined from the initial values of the sequence S. For large n, S(n) can be approximated by a single geometric progression with the quotient λ_max, the largest eigenvalue:

S(n) = \lambda_{\max}^n (c_{\max} + \epsilon_n),

where ε_n represents the vanishing effect of the subdominant eigenvalues. Applying this result in (1.2):

C = \lim_{n\to\infty} \frac{\log_2\left[\lambda_{\max}^n (c_{\max} + \epsilon_n)\right]}{n} = \log_2 \lambda_{\max}.
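As a numerical aside (my addition, not thesis material), the capacity of the (d, k) = (1, 3) channel of Fig. 1.1 can be checked directly: log2 of the largest eigenvalue of A and the finite-n estimate log2 S(n)/n should agree. The choice s(0) = [1, ..., 1] is an assumption that every state is an allowed start.

    import numpy as np

    # Adjacency matrix of the (d, k) = (1, 3) RLL channel from Fig. 1.1.
    A = np.array([[0, 1, 0, 0, 0, 0, 0, 0],
                  [0, 0, 1, 0, 0, 0, 0, 1],
                  [0, 0, 0, 1, 0, 0, 0, 1],
                  [0, 0, 0, 0, 0, 0, 0, 1],
                  [1, 0, 0, 0, 0, 0, 0, 0],
                  [1, 0, 0, 0, 1, 0, 0, 0],
                  [1, 0, 0, 0, 0, 1, 0, 0],
                  [0, 0, 0, 0, 0, 0, 1, 0]], dtype=float)

    lam_max = max(abs(np.linalg.eigvals(A)))
    print("C = log2(lambda_max) =", np.log2(lam_max))   # ~0.5515 bit/symbol

    # The finite-n estimate log2 S(n) / n approaches the same value.
    s = np.ones(8)                  # s(0): assume every state is an allowed start
    for _ in range(60):
        s = A @ s
    print("log2 S(60) / 60 =", np.log2(s.sum()) / 60)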

Using the variable length symbol representation [31,32], i.e., when edges may correspond to sequences of different lengths, the number of states of the RLL channel's transition graph can be greatly reduced (Fig. 1.2). The channel capacity can then be calculated as the base two logarithm of the largest root of the characteristic polynomial det[A(z) − I].

A(z) =
    0                          z^{-2} + z^{-3} + z^{-4}
    z^{-2} + z^{-3} + z^{-4}   0

Figure 1.2. Variable length transition graph and adjacency matrix of the (d, k) = (1, 3) RLL channel. (Two states; the edges between them represent the admissible runs of length 2, 3 and 4.)


1.3.3 Coder’s rate

We define the rate of a coder as the ratio of the entropies of the channel and source sequences:

R = H(Y)/H(X).  (1.4)

(Some authors define the rate as the entropy of the channel sequence.) Since the encoding is supposed to be lossless, the channel should transmit as much information as the source emits. This means that the signalling speed of the channel is 1/R times higher than that of the source. As the entropy of the channel sequence cannot exceed the channel capacity, using (1.4) we can write:

H(Y) = R H(X) ≤ C.  (1.5)

If the source is maxentropic, i.e., H(X) = 1, the entropy of the channel sequence equals the rate. For fixed length algorithms the rate is constant and does not depend on the source distribution. This means that (1.5) should be satisfied for any source entropy, so for the rate of such coders it holds that R ≤ C.

Since the channel capacity is a theoretical bound on the entropy of the channel sequence, the coding efficiency

E = H(Y)/C

is a good measure of the performance of the coder.

1.3.4 Run-length processes

A run is a substring of identical symbols. Let X be a process with a finite set of values, and let us define the transition times of the process X as

t_i = \min\{j > t_{i-1} \mid X_j \ne X_{j-1}\} \quad and \quad t_0 = 0.

Then the run-lengths are given as the differences T_i = t_i − t_{i-1}, and the process T(X) is called the run-length process associated with X.
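As a small illustration (mine, not the thesis's), the run-length process of a concrete binary sequence follows directly from the definition:

    def run_lengths(x):
        # Run-length process T(X): lengths of the maximal runs of identical symbols.
        T, run = [], 1
        for a, b in zip(x, x[1:]):
            if a == b:
                run += 1
            else:
                T.append(run)
                run = 1
        T.append(run)               # close the final run
        return T

    x = [+1, +1, -1, -1, -1, +1, -1, -1]
    print(run_lengths(x))           # [2, 3, 1, 2]

Together with the polarity of the first run, T(X) determines X completely, as the next paragraph notes.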

In the binary case, apart from the polarity, the original process can be reconstructed from the associated run-length process. Since the polarity can be given in one bit for the whole process, the entropies of a long binary process and of its associated run-length process are almost the same. It is often useful to study the run-length process rather than the original one, since the former has a state space half as large as that of the latter (Fig. 1.3).

In Section 3.1 it is proven that the maximal eigenvalues of the transition matrix derived from the run-length process and of the transition matrix of the original process are equal.


A =
    0 1 0 0
    1 0 1 0
    1 0 0 1
    1 0 0 0

Figure 1.3. The run-length process and the corresponding adjacency matrix of the (d, k) = (1, 3) RLL channel. (The four states correspond to the current run lengths 1 to 4; the graph itself is not reproduced here.)
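The statement cited above from Section 3.1 can be spot-checked numerically for this channel (a verification sketch of mine, not thesis code): the 4-state run-length matrix of Fig. 1.3 and the 8-state matrix of Fig. 1.1 share their maximal eigenvalue.

    import numpy as np

    A_rl = np.array([[0, 1, 0, 0],      # run-length adjacency matrix of Fig. 1.3
                     [1, 0, 1, 0],
                     [1, 0, 0, 1],
                     [1, 0, 0, 0]], dtype=float)
    lam = max(abs(np.linalg.eigvals(A_rl)))
    print(lam, np.log2(lam))            # ~1.4656 and ~0.5515, matching Fig. 1.1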

For reliable data recovery the run-length is limited in most channels. The upper bound, called the k constraint, is set to ensure reliable clock recovery by providing a suitable transition frequency for the synchronization [20]. It limits the number of consecutive identical symbols to at most k + 1:

T_i \le k + 1 \quad (i = 1, 2, \ldots).

The lower bound is called the d constraint. It requires at least d + 1 consecutive identical symbols:

T_i \ge d + 1 \quad (i = 1, 2, \ldots).

The d constraint diminishes the intersymbol interference by enlarging the distances between the transitions [6]. It works as if the signalling rate were lowered by a factor of d + 1, but for the transitions only, so it reduces the capacity less. The d and k constraints together are referred to as run-length limiting or RLL constraints, while the channels with such an input constraint, and the sequences complying with them, are called run-length limited.

The queer definitions of the d and k constraints have historical roots: initially the RLL sequences were formed in two steps. First only the lengths of the 'zero' runs were limited (sequence Y′), and then, by a transformation using mod 2 addition (exor), the channel sequence Y was formed as

Y_i = Y'_i \oplus Y_{i-1}.

One can see that the above transformation, called precoding [1], turns the 'zero' runs of length n of Y′ into 'zero' and 'one' runs of length n + 1 of Y alternately.
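A tiny executable sketch of the precoding step (my illustration; the 0/1 symbols and the sample input are arbitrary). Note how the 'zero' run of length 2 in Y′ becomes a run of three identical symbols in Y:

    def precode(y_prime):
        # Precoding: Y_i = Y'_i XOR Y_{i-1}, with Y_{-1} taken as 0.
        y, prev = [], 0
        for b in y_prime:
            prev ^= b
            y.append(prev)
        return y

    print(precode([1, 0, 0, 1, 0]))   # -> [1, 1, 1, 0, 0]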


Chapter 2

The Principle of the Coder

2.1 Generalization of the Charge Concept

The most common application of spectral shaping is coding for AC-coupled channels [2,21].

Those channels have a low frequency cut-off which affects the low frequency components of the code, causing a slow fluctuation of the level at the receiver. To avoid, or rather diminish, this fluctuation, the coded sequence should be poor in low frequency spectral components. This is usually carried out by charge constrained codes, where the running digital sum (RDS) of the channel sequence Y (Y_i ∈ {−1,+1}, i = 0, 1, ...; and Y_i = 0 for i < 0), i.e., the accumulated charge, is limited:

C_n = \sum_{i=0}^{\infty} Y_{n-i} \quad and \quad |C_n| \le c \quad (n = 0, 1, \ldots),  (2.1)

where c is a given constant. One can see that the RDS is generated by a low-pass filter:

C(z) = \frac{1}{1 - z^{-1}} Y(z),  (2.2)

where Y(z) is the z- or discrete Fourier transform of the sequence Y with z = exp(j 2π f / f_0), and f_0 stands for the bit frequency. By limiting the RDS, we limit the level at the output of a low-pass filter, so the low frequency components of the channel sequence, enhanced by the filter, will be suppressed. (Actually the filter in (2.2) has an infinite enhancement around zero frequency; consequently, Y must have zero spectral density at zero frequency if the RDS is limited. Pierobon [22] proved that the limited RDS is also a necessary condition.) This suggests that forming an RDS-like quantity with filters of other characteristics will also shape the spectrum, according to the applied filter. For this purpose, let us generalize the RDS concept by introducing the weighted running digital sum (WRDS):

Definition 2.1 The weighted running digital sum is the convolution of a binary sequence Y_i ∈ {−1,+1} and a sequence of given constants h_i ∈ R:

W_n = \sum_{i=0}^{\infty} h_i Y_{n-i}, \quad or \quad W(z) = H(z) Y(z).


The WRDS makes a direct connection between the binary sequence and its spectrum. By limiting the WRDS we can enforce spectral constraints on the code. Codes with limited WRDS will be referred to as generalized charge constrained or spectrum constrained codes.

There is another motivation to modify the concept of RDS. From (2.1) one can see that the RDS is generated with infinite memory, so a coder producing RDS limited code must have infinite memory too. The decoding process, however, should use only finite memory to avoid infinite error propagation, so decoding should be charge (RDS) independent. This can be achieved only if the charge state itself doesn't convey information (e.g. by the application of balanced or low-disparity blocks [8]). However, it diminishes the coding efficiency. (E.g. for low disparity blocks there are two reserved channel blocks for each source block, one with positive and another with negative disparity.) Therefore it is convenient to limit the coder's memory in Definition 2.1 in the following sense:

\sum_{i=0}^{\infty} |h_i| = c_M < \infty  (2.3)

In the Appendix it is proven that the absolute convergence of the series h_i is a necessary and sufficient condition of finite error propagation. Condition (2.3) also implies that we can realize only finite, but arbitrarily large, suppressions in the spectrum. In exchange, however, we can use the information conveyed by the charge state, so the code will have a higher efficiency.

2.2 The Coder’s Structure and Performance

For coding WRDS limited channels we will use the coder structure in Fig. 2.1, a feedback controlled bit stuff encoder. The bit stuff encoder has two states: It either transmits a bit from the input to the output or inserts a redundant bit. The bit stuffing is controlled

Figure 2.1. Feedback controlled bit stuff encoder. [Block diagram: the source bits X ∈ {−1,+1} enter the bit stuff encoder; its output Y ∈ {−1,+1} is fed to the digital loop filter H(z) producing the WRDS W; a comparator testing abs(W_n) ≥ c0 drives the stuffing with sgn(W_n).]


by the feedback loop. Whenever the level at the filter's output reaches a given threshold c0, the bit stuff encoder inserts a bit with sign opposite to the filter's output:

Y_{n+1} =
    X_{m+1},       if |W_n| < c_0;
    -sgn(W_n),     if |W_n| \ge c_0,
(2.4)

where X_i, Y_i ∈ {−1,+1} and W_n = \sum_{i=0}^{\infty} h_i Y_{n-i}. The indices of the input (X) and output (Y) sequences differ because of the previously stuffed bits: n = m + s_n, where s_n stands for the number of stuffed bits up to Y_n. For the threshold c0 it must hold that

c_0 > c_m = \min_{\epsilon_i \in \{-1,+1\}} \Big|\sum_i \epsilon_i h_i\Big| \quad and \quad c_0 \le c_M = \sum_i |h_i|.  (2.5)

For c_0 ≤ c_m the rate would be zero (no information could be transmitted), while for c_0 > c_M the rate would be 1 (no spectral shaping could be performed). The above structure actually works as a negative feedback loop with a loop filter: the coder tries to keep the level at the output of the filter H(z) low. The spectral components enhanced by the filter are dominant in the control of the bit stuff encoder, so the coder's interventions diminish their power.

The decoding can be performed with a similar, but feed-forward, structure (Fig. 2.2). Whenever the forward filter's output reaches or exceeds the threshold, the decoder removes a bit from the stream. The error propagation can be controlled by suitable filter design limiting the decoder's memory. Condition (2.3) ensures that the resulting code will be decodable in the wide sense [23], i.e., it will not induce infinite error propagation for any non-zero error probability.

Figure 2.2. The decoder's scheme. [Block diagram: the received bits Y ∈ {−1,+1} feed both the bit stuff decoder and the forward digital filter H(z) computing the WRDS W; a comparator testing abs(W_n) ≥ c0 marks the stuffed bits to be removed.]
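To make coding rule (2.4) and its feed-forward inverse concrete, here is a minimal executable sketch (my illustration, not the thesis's code; the 4-tap window filter and the threshold are arbitrary choices satisfying (2.5)). The encoder stuffs -sgn(W_n) whenever |W_n| >= c0; the decoder recomputes the same WRDS from the received stream and discards exactly those bits.

    import numpy as np

    def encode(x, h, c0):
        # Feedback controlled bit stuff encoder, coding rule (2.4).
        # x: source bits in {-1,+1}; h: FIR loop filter taps h_0..h_r; c0: threshold.
        y, src = [], iter(x)
        while True:
            w = sum(hi * yi for hi, yi in zip(h, reversed(y)))   # W_n = sum_i h_i Y_{n-i}
            if abs(w) >= c0:
                y.append(-int(np.sign(w)))      # stuff a bit opposing the filter output
            else:
                try:
                    y.append(next(src))         # transmit the next source bit
                except StopIteration:
                    return y

    def decode(y, h, c0):
        # Feed-forward decoder: the bit following every |W_n| >= c0 event is stuffed.
        x = []
        for n, bit in enumerate(y):
            w = sum(hi * yi for hi, yi in zip(h, reversed(y[:n])))
            if abs(w) < c0:
                x.append(bit)                   # data bit; otherwise drop the stuffed bit
        return x

    rng = np.random.default_rng(0)
    x = [int(b) for b in rng.choice([-1, 1], size=200)]   # i.i.d. unbiased input
    h = [1.0, 1.0, 1.0, 1.0]        # low-pass window filter (arbitrary): c_m = 0, c_M = 4
    c0 = 3.0                        # c_m < c0 <= c_M, cf. (2.5)
    y = encode(x, h, c0)
    assert decode(y, h, c0) == x    # lossless round trip
    print("rate R = m/n =", len(x) / len(y))

Since the decoder looks at most r+1 bits into the past, a single channel error can confuse it only while the erroneous bit stays inside the FIR window, in line with condition (2.3).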


Figure 2.3. The coder's nonlinear equivalent circuitry (a); and its linearization (b). [Block diagrams: in (a) a one-bit quantizer Q closes a feedback loop through the filter z^{-1} H(z)/c_0; in (b) the quantizer is replaced by an additive quantization noise source Q.]

Keeping the WRDS high by applying the coding rule

Y_{n+1} =
    X_{m+1},      if |W_n| \ge c_0;
    sgn(W_n),     if |W_n| < c_0,

instead of (2.4) can shape the spectrum as well. In this case the spectral components enhanced by the filter H(z) will be enhanced in the output signal too, so the spectral density of the output will emulate the filter's characteristics. This concept can be useful when the loop filter suitable for the desired spectrum doesn't ensure stable performance of the coder using coding rule (2.4).

To demonstrate the spectral shaping property, let us suppose that the input sequence X is a series of independent and identically distributed (i.i.d.) random variables, i.e., the input is binary white noise. Under this circumstance, as far as the statistical properties of the output signal are concerned, it does not matter whether the coder inserts a bit or overwrites the oncoming one, so the following coding rule can be set instead of (2.4):

Y_{n+1} = sgn\Big(X_{n+1} - \frac{1}{c_0} W_n\Big) = sgn\Big(X_{n+1} - \frac{1}{c_0}\sum_{i=0}^{\infty} h_i Y_{n-i}\Big).

To analyze the corresponding nonlinear system in Fig. 2.3.a, let us simulate the performance of the one-bit quantizer by adding the quantization noise Q (Fig. 2.3.b). On the basis of the figure we can write:

Y(z) = X(z) - \frac{z^{-1}}{c_0} W(z) + Q(z), \qquad W(z) = H(z) Y(z).

Expressing Y(z), for the z-transform of the output signal we get:

Y(z) = \frac{X(z) + Q(z)}{1 + \frac{z^{-1}}{c_0} H(z)}.  (2.6)


Figure 2.4. The equivalent transformation of the coder. [Block diagrams of the three equivalent layouts (a)-(c), with

F(z) = \frac{1}{1 + \frac{z^{-1}}{c_0} H(z)} \quad and \quad Q'(z) = Y(z) - X'(z) = F(z) Q(z).]

Now let us suppose that the process Q is an uncorrelated white noise with negligible cross-correlation with the input signal; this is more or less satisfied while (c_0 - c_m)/(c_M - c_m) ≪ 1, i.e., when the coder performs decisively [24]. Then the spectral density of the numerator of (2.6) is constant, so for the spectrum of the output we can write:

S_Y(f) \sim \frac{1}{\Big|1 + \frac{e^{-j 2\pi f/f_0}}{c_0} H(e^{j 2\pi f/f_0})\Big|^2}.  (2.7)

The above formula is an approximation, but one that describes well the main features of the code spectrum in most cases, and is thus a good basis for the coder's design.
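For design work the approximation (2.7) is easy to evaluate numerically. The sketch below (my illustration, reusing the arbitrary 4-tap window filter from the previous example) prints the predicted suppression at low frequencies:

    import numpy as np

    h = np.array([1.0, 1.0, 1.0, 1.0])     # FIR loop filter taps h_0..h_r (arbitrary)
    c0 = 3.0
    f = np.linspace(0.0, 0.5, 1001)        # frequency normalized to the bit rate f0

    z = np.exp(2j * np.pi * f)             # z = exp(j 2 pi f / f0)
    H = np.polyval(h[::-1], 1 / z)         # H(z) = sum_i h_i z^{-i}
    S = 1.0 / np.abs(1 + H / (c0 * z)) ** 2    # unnormalized S_Y(f), eq. (2.7)

    print("S_Y near f = 0:", S[0])         # (1 + 4/3)^-2 ~ 0.18: DC region suppressed
    print("S_Y at f0/2:", S[-1])           # ~1: high frequencies left unshaped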

From (2.6) one can see that the same filtering is applied both to the input signal X and to the quantization noise Q. Using the notations

F(z) = \frac{1}{1 + \frac{z^{-1}}{c_0} H(z)} \quad and \quad X'(z) = F(z) X(z),

as well as introducing the re-quantization noise as

Q'(z) = Y(z) - X'(z) = F(z) Q(z),

we can transform the layout in Fig. 2.3.a. The feedback quantizer on the right side of the resulting structure in Fig. 2.4.c is exactly the circuitry suggested by Spang and Schultheiss


Figure 2.5. The implementation of the run-length constraints. [Block diagram: the bit stuff encoder is driven by three parallel feedback loops on its output Y: digital filter #1, H_c(z), with comparator abs(W^(c)_n) ≥ c0 for the charge constraint; digital filter #2, H_k(z) = \sum_{i=0}^{k} z^{-i}, with comparator abs(W^(k)_n) = k + 1; and digital filter #3, H_d(z) = \sum_{i=0}^{d} z^{-i}, with comparator abs(W^(d)_n) < d + 1.]

for shaping the spectrum of the quantization noise [25], and it is widely used nowadays. It demonstrates the coder's performance well: the input binary signal is filtered according to the spectral requirements, then the resulting signal, in general with a continuous amplitude distribution, is re-quantized by a quantizer which also shapes the quantization noise spectrum in the same manner. Here we call the attention of readers familiar with DSP to the structural similarity of the layout in Fig. 2.4.c and sigma-delta modulators [26].

2.3 Mixing the Constraints

In most applications it is also important to limit the run-length [1,7]. In addition to some spectral constraint, let us now impose a (d, k) constraint as well upon the code, so that no runs shorter than d+1 bits or longer than k+1 bits are allowed. (d < k is always required.) To implement these constraints, we should insert additional feedback loops into the coder for monitoring the run-length (Fig. 2.5). The coding rule will be the following:

Y_{n+1} =
    -sgn(W_n),   if |W_n| \ge c_0;                        (charge constraint)
    -Y_n,        if |\sum_{i=0}^{k} Y_{n-i}| = k + 1;     (k constraint)
    Y_n,         if |\sum_{i=0}^{d} Y_{n-i}| < d + 1;     (d constraint)
    X_{m+1},     otherwise.                               (no stuffing)
(2.8)
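The rule can be exercised with a small sketch (my illustration, not thesis code; the taps, threshold and priority ordering are choices of mine; here the run-length tests are ranked above the charge test, in the spirit of the priority discussion below). The run-length conditions are evaluated via the length of the current run, which is equivalent to the window sums in (2.8).

    import numpy as np

    def current_run(y):
        # Length of the run of identical bits at the end of y (0 if y is empty).
        n = 0
        while n < len(y) and y[-1 - n] == y[-1]:
            n += 1
        return n

    def encode_dk(x, h, c0, d, k):
        # Bit stuff encoder enforcing charge and (d, k) constraints, cf. rule (2.8).
        y, src = [], iter(x)
        while True:
            w = sum(hi * yi for hi, yi in zip(h, reversed(y)))
            run = current_run(y)
            if y and run < d + 1:            # d constraint: run too short, extend it
                y.append(y[-1])
            elif y and run >= k + 1:         # k constraint: run at its maximum, break it
                y.append(-y[-1])
            elif abs(w) >= c0:               # charge constraint, as in (2.4)
                y.append(-int(np.sign(w)))
            else:
                try:
                    y.append(next(src))      # no stuffing: transmit the next data bit
                except StopIteration:
                    return y

    rng = np.random.default_rng(1)
    x = [int(b) for b in rng.choice([-1, 1], size=100)]
    y = encode_dk(x, h=[1.0, 1.0, 1.0, 1.0], c0=2.0, d=1, k=3)

    # Verify the (d, k) = (1, 3) constraint: every run length lies in [d+1, k+1] = [2, 4].
    runs, r = [], 1
    for a, b in zip(y, y[1:]):
        if a == b:
            r += 1
        else:
            runs.append(r)
            r = 1
    runs.append(r)
    assert all(2 <= t <= 4 for t in runs)
    print("m =", len(x), "n =", len(y), "rate ~", len(x) / len(y))

The corresponding decoder mirrors the same three tests feed-forward on the received stream and discards every bit they mark as stuffed.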


Figure 2.6. The equivalent circuit of the coder in Fig. 2.5 for estimating the output spectrum when d = 0. [Block diagram: the input X passes through a quantizer with additive noise Q, closed by two parallel feedback branches, z^{-1} H_c(z)/c_0 and the k constraint branch appearing in (2.9).]

Some charge constraints may occasionally clash with one or another run-length constraint, i.e., they force the bit stuff encoder to insert bits with different signs at the same time. This can either be prevented by imposing an auxiliary constraint upon the code, or resolved by setting priorities for the constraints. An example of the former solution is presented in Section 3.5. When priorities are set, it is usually advisable to give higher priority to the run-length constraints, since those are the more crucial ones, if they are set at all, and the occasional false interventions generally do not deteriorate the spectrum too much.

From Fig. 2.5 one can see that we use low-pass FIR loop filters for monitoring the run-length for both the d and the k constraint. Moreover, the k constraint is implemented with a coding rule complying with (2.4), which keeps the WRDS low, similarly to the one applied for the charge constraint. This implies that when only the k constraint is applied together with the charge constraint, in case of independent binary white noise input the bit stuff encoder can be replaced with summing circuits and one-bit quantizers (Fig. 2.6), similarly as in the previous section, so the output spectrum can be estimated as

S_Y(z) \sim \frac{1}{\Big|1 + \frac{z^{-1}}{c_0} H_c(z) + \frac{z^{-1}}{k+1} \cdot \frac{1 - z^{-(k+1)}}{1 - z^{-1}}\Big|^2}.  (2.9)

The above formula is less accurate than (2.7), due to the fact that for the k constraint the bit stuff threshold is set to k + 1, i.e., the condition (c_0 - c_m)/(c_M - c_m) ≪ 1 is barely satisfied and the loop works at its performance limit.


Chapter 3

Coding with FIR Filter

According to coding rule (2.4), the output is determined by the WRDS and the input. For coders applying FIR filters, the momentary WRDS W_n is given by the sequence Y_{n-r}^{n} = Y_{n-r}, Y_{n-r+1}, ..., Y_n, where r corresponds to the filter's order. Since Y is a binary sequence, the coder may have at most 2^{r+1} states, so it can be described as a finite state machine (FSM) over the state space {S} = {Y_{n-r}^{n}} with the corresponding 2^{r+1} × 2^{r+1} state transition matrix Q = ||q_{i,j}||. Let S_i denote a particular state of the coder; then the elements of Q, i.e., the transition probabilities, are given as q_{i,j} = P(Y_{n-r}^{n} = S_j | Y_{n-r-1}^{n-1} = S_i). Studying the coder we will suppose that the input process is i.i.d. with P(X = +1) = P(X = −1) = 1/2 (i.e., the input is DC-free or unbiased). The latter is not a real restriction, since any biased sequence can be transformed into an unbiased one, as presented in Chapter 5, where we will also treat the effects of biased input.

To construct the transition matrix of the coder or the adjacency matrix of the constrained channel, we should know which states are contiguous and whether a state satisfies the constraint. For this purpose we will need the following concepts:

Definition 3.1 The state S_j is a child or first descendant of S_i if S_j can be reached from S_i by one transition. In general, the k-th descendants are the states which can be reached by k transitions. The set of k-th descendants of S_i is called the k-th generation of S_i.

Definition 3.2 For convenience, we define the function W(S_i) which gives the WRDS of the state S_i:

W(S_i) = \sum_{j=0}^{r} h_j Y_{n-j},

and |W(S_i)| will be referred to as the weight of the state S_i.


The states can be classified according to their weight:

Definition 3.3 A state is called light if its weight is smaller than the threshold, |W(S)| < c_0, and heavy if |W(S)| ≥ c_0. A state is called invalid if it cannot be reached by the regular operation of the coder; invalid states can be excluded from the state space.

Coding rule (2.4) implies the following lemma:

Lemma 3.1 A light state always has two descendants, while a heavy state has only one.

According to Lemma 3.1 we can give the elements of the state transition matrix:

q_{i,j} =
    1/2,   if S_j is a child of S_i and |W(S_i)| < c_0;
    1,     if S_j is a child of S_i and |W(S_i)| \ge c_0;
    0,     otherwise.
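This construction can be carried out mechanically. The sketch below (my illustration, not the thesis's code) enumerates the 2^{r+1} states, fills Q according to the cases above, and estimates the coder's rate as the stationary probability of the light states; the reading that each light state emits one data bit per transition while heavy states emit only stuffed bits is my own, and a single recurrent class is assumed.

    import numpy as np
    from itertools import product

    def coder_fsm(h, c0):
        # States are the (r+1)-tuples (Y_{n-r}, ..., Y_n); h = (h_0, ..., h_r).
        states = list(product((-1, +1), repeat=len(h)))
        index = {s: i for i, s in enumerate(states)}
        W = {s: sum(hi * yi for hi, yi in zip(h, reversed(s))) for s in states}
        Q = np.zeros((len(states), len(states)))
        for s in states:
            if abs(W[s]) >= c0:                  # heavy: single child, stuffed bit
                Q[index[s], index[s[1:] + (-int(np.sign(W[s])),)]] = 1.0
            else:                                # light: two equiprobable children
                for b in (-1, +1):
                    Q[index[s], index[s[1:] + (b,)]] = 0.5
        return Q, states, W

    h, c0 = (1.0, 1.0, 1.0, 1.0), 3.0
    Q, states, W = coder_fsm(h, c0)
    w_, v = np.linalg.eig(Q.T)                   # stationary distribution of the chain
    pi = np.real(v[:, np.argmin(np.abs(w_ - 1.0))])
    pi /= pi.sum()
    rate = sum(p for p, s in zip(pi, states) if abs(W[s]) < c0)
    print("estimated rate R =", rate)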

3.1 Special Properties of the Transition Matrix

The number of elements of the coder's state space grows exponentially with the filter's order and can quickly become unmanageably large. In this section we discuss some special properties of the transition matrix which hold for any adaptive coder applying a FIR loop filter. These properties make it possible to greatly reduce the state space, making the analysis simpler and the computations faster, and they even enable taking the run-length constraints into account in an easy way.

With suitable ordering of the coder's states the transition matrix can be transformed into a centrosymmetric form, which allows halving the dimension of the state space:

Property 1 Let N denote the number of internal states of the coder and S̄_i the bitwise inverse of the state S_i. Then with the labelling

\bar{S}_i = S_{N+1-i}, \quad (i = 1, 2, \ldots, N)  (3.1)

the coder's transition probability matrix is centrosymmetric, i.e., q_{i,j} = q_{N+1-i,N+1-j}, and the Hermitian and unitary exchange matrix J (ones on the antidiagonal, zeros elsewhere) is an invariant transformation of Q:

J Q J^{-1} = J Q J = Q.  (3.2)


Proof: The states S_i and S̄_i have WRDS of the same magnitude but of opposite sign: W(S̄_i) = −W(S_i), so they are either both heavy or both light. If they are heavy, according to (2.4) a stuffing is applied, and since the two states have WRDS of opposite sign, the stuffed bits must be inverses of each other. So, if S_i transits into S_j, then S̄_i transits into S̄_j. If the states are light and S_i transits into S_j for a given input X, then S̄_i transits into S̄_j for the input −X. Since P(X = +1) = P(X = −1), the transition probabilities are equal:

q_{i,j} = P(Y_{n-r}^{n} = S_j | Y_{n-r-1}^{n-1} = S_i)
        = P(Y_{n-r}^{n} = S̄_j | Y_{n-r-1}^{n-1} = S̄_i)
        = P(Y_{n-r}^{n} = S_{N+1-j} | Y_{n-r-1}^{n-1} = S_{N+1-i})
        = q_{N+1-i,N+1-j}.  (3.3)

Equation (3.2) implies that the transition matrix can be given in the following form [30]:

Q = \begin{pmatrix} Q_1 & Q_2 J \\ J Q_2 & J Q_1 J \end{pmatrix}.  (3.4)

Property 2 It is a necessary and sufficient condition for centrosymmetry that the matrix has two invariant subspaces orthogonal to each other: an even one (E) consisting of even vectors v_e = [v, vJ], and an odd one (O) consisting of odd vectors v_o = [v, −vJ].

Proof: The two subspaces are orthogonal because for any v_e ∈ E and any v_o ∈ O: v_e v_o^T = 0.

The necessity can be proven with the help of decomposition (3.4):

v_e Q = [v(Q_1 + Q_2), v(Q_1 + Q_2)J] = [v Q_e, v Q_e J];  (3.5.a)
v_o Q = [v(Q_1 - Q_2), -v(Q_1 - Q_2)J] = [v Q_o, -v Q_o J],  (3.5.b)

where Q_e = Q_1 + Q_2 and Q_o = Q_1 − Q_2 are the equivalent transformations of the N/2 dimensional even and odd subspaces.

The invariance of the subspaces means that for any v_e and v_o it holds that v_e Q ∈ E and v_o Q ∈ O as well. Since v_e J = v_e and v_o J = −v_o, we can write:

v_e Q = (v_e Q)J = v_e J Q J \quad and \quad v_o Q = -(v_o Q)J = v_o J Q J.  (3.6)

Because of the orthogonality of the subspaces E and O, any vector v can be uniquely decomposed into the sum of an even and an odd vector: v = v_e + v_o. Now applying equalities (3.6) to the sum of v_e and v_o, we have

v Q = v_e Q + v_o Q = v_e J Q J + v_o J Q J = (v_e + v_o) J Q J = v J Q J.

This implies that Q = J Q J, that is, Q is centrosymmetric.


Later on we will also use the following corollaries of the above properties:

– The eigenvalues of Q_e and Q_o together make up the eigenvalues of Q, and if u is an eigenvector of Q_e with eigenvalue λ and v is an eigenvector of Q_o with eigenvalue µ, then [u, uJ] and [v, −vJ] are the eigenvectors of Q associated with the eigenvalues λ and µ, respectively.

– Since a matrix and any of its powers have the same eigenvectors, if Q is centrosymmetric then Q^{±n} is also centrosymmetric, and (Q^{±n})_e = Q_e^{±n} and (Q^{±n})_o = Q_o^{±n}.

– Transposing (3.2) and using that J^T = J, we have Q^T = J Q^T J. That is, if Q is centrosymmetric, Q^T is centrosymmetric as well. Consequently, any property holding for the left eigenvectors holds for the right ones too.

– From (3.3) one can see that, due to the symmetry, each state can be merged with its inverse. This halves the number of states, and the merged state transition probability matrix reads Q_1 + Q_2 = Q_e.

Property 3 Labelling the states according to the last output bit Y_n too, as

1 ≤ i ≤ N/2, if Y_n = −1;
N/2 < i ≤ N, if Y_n = +1;  (3.7)

the matrices Q_1 and Q_2 acquire an actual physical meaning: Q_1 stands for repeating the last bit, i.e., continuing the current run, while Q_2 stands for appending the inverse of the last bit, starting a new run with opposite sign.

Applying the above labelling, the transitions represented by Q_1^n insert n identical bits into the output stream. If there is an upper bound on the run-length, this implies that powers of Q_1 higher than the run-length limit must be identically zero, i.e., Q_1 is nilpotent.

Lemma 3.1 implies that each row of the matrix Q contains at most two nonzero elements. If there are two nonzero elements in a row, then with a labelling satisfying both (3.1) and (3.7) one falls in Q_1 and the other in Q_2, so those matrices can be handled as sparse matrices, each represented by a vector of 2^r elements. Moreover, since the invalid states are placed symmetrically (if S_i is invalid, then S̄_i is invalid as well), we can omit them without disturbing the centrosymmetry.

Commonly, applying an order r FIR loop filter, we will use the most plausible labelling satisfying both (3.1) and (3.7): the lexicographic ordering of the coder's states with highest
