
3.2 Recursive calculation of the Wiener-Hopf equation

and the data matrix X[n] can be expressed using x[n] as

X[n] = \begin{bmatrix} x[n] & x[n-1] & \dots & x[n-K+1] \\ x[n+1] & x[n] & \dots & x[n-K+2] \\ \vdots & \vdots & \ddots & \vdots \\ x[n+N-1] & x[n+N-2] & \dots & x[n+N-K] \end{bmatrix}.   (3.5)

The error signal vector ε[n] is the expected value of the difference of the desired signal vector d[n] and the output signal vector y[n],

ε[n] = E\{d[n] - y[n]\}   (3.6)

where d[n] can be expressed as

d^T[n] = \begin{bmatrix} d[n] & d[n+1] & \dots & d[n+N-1] \end{bmatrix}.   (3.7)

The adaptive algorithm should minimise the cost function C as a function of the filter coefficients, considering (3.2) and (3.6). It can be expressed as

C(h[n]) = E\{(d[n] - X[n]h[n])^T (d[n] - X[n]h[n])\} = E\{d^T[n] d[n]\} + h^T[n] R[n] h[n] - 2 h^T[n] p[n]   (3.8)

where

R[n] = E\{X^T[n] X[n]\}   (3.9)

is the auto-correlation matrix of the input signal, while

p[n] = E\{X^T[n] d[n]\}   (3.10)

is the cross-correlation vector of the input and the desired signals.

The optimal solution for the filter coefficients, known as the WH equation, can be given in the LS sense through the derivative of C(h[n]) with respect to h[n],

\frac{\partial C(h[n])}{\partial h[n]} = -2 p[n] + 2 R[n] h[n] = 0.   (3.11)

Evaluating this equation, the optimal coefficients \hat{h}_{opt}[n] are given in the form [55] as

\hat{h}_{opt}[n] = R^{-1}[n] p[n].   (3.12)
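As an illustration (not part of the source), the following minimal NumPy sketch builds X[n] according to (3.5) and evaluates (3.12); the signals and sizes are arbitrary, and the expectations in (3.9)-(3.10) are replaced by the sliding-window sample estimates used throughout this section.

import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 4                      # observation window length, filter length
x = rng.standard_normal(N + K)    # input signal (illustrative)
d = rng.standard_normal(N)        # desired signal (illustrative)

# data matrix X[n] according to (3.5); n = K-1 keeps all indices valid
n = K - 1
X = np.array([[x[n + i - j] for j in range(K)] for i in range(N)])

R = X.T @ X                       # sample auto-correlation matrix, cf. (3.9)
p = X.T @ d                       # sample cross-correlation vector, cf. (3.10)
h_opt = np.linalg.solve(R, p)     # WH solution (3.12) without forming R^{-1}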

In the following, the recursive update of the data matrix, the auto-correlation matrix, and the cross-correlation vector is derived, either using permutation matrices or using dyadic products. Finally, the RSC4BI-algorithm is introduced for the calculation of the inverse of the auto-correlation matrix R^{-1}[n+1]. The algorithm calculates R^{-1}[n+1] using the previously calculated matrices R[n], R[n+1], and R^{-1}[n]. It achieves higher efficiency by splitting the matrix into four submatrices.

For the representation of the column and row elements of the matrix X[n] from (3.5), the following notation will be applied:

X[n] = \begin{bmatrix} x_1[n] & x_2[n] & \dots & x_K[n] \end{bmatrix} = \begin{bmatrix} \bar{x}_1[n] \\ \bar{x}_2[n] \\ \vdots \\ \bar{x}_N[n] \end{bmatrix}   (3.13)

where x_i[n] denotes the i-th column and \bar{x}_i[n] the i-th row of X[n].

3.2.1 Calculation of X[n+1]

For the calculation of the matrix X[n+1] from X[n], the following permutation ρ has to be defined to change the order of the columns in the matrix

ρ = \begin{pmatrix} 1 & 2 & 3 & \dots & K \\ K & 1 & 2 & \dots & K-1 \end{pmatrix}   (3.14)

where the first row contains the original positions of the columns, while the second row describes the new order after permutation. Furthermore, the identity vector e is defined as follows:

e_i^T = \begin{bmatrix} 0 & \dots & 0 & 1 & 0 & \dots & 0 \end{bmatrix},   (3.15)

where the single non-zero element 1 stands at the i-th position.

Using (3.14)-(3.15), the permutation matrix according to ρ is given as

P_ρ = \begin{bmatrix} e_{ρ(1)} & e_{ρ(2)} & \dots & e_{ρ(K)} \end{bmatrix} = \begin{bmatrix} e_K & e_1 & \dots & e_{K-1} \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & 1 \\ 1 & 0 & 0 & \dots & 0 \end{bmatrix}   (3.16)

where ρ(i) denotes the new position of the i-th column after permutation; thus ρ(1) = K by (3.14), and after substitution, the identity vector e_{ρ(1)} = e_K is also given by (3.15) as

e_{ρ(1)}^T = e_K^T = \begin{bmatrix} 0 & 0 & \dots & 0 & 1 \end{bmatrix}.   (3.17)

For example, let us define the permutation ρ_3 for K = 3,

ρ_3 = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 1 & 2 \end{pmatrix},   (3.18)

[Figure 24: Effect of the permutation matrix P_{ρ_3}]

[Figure 25: Column-wise update of the observation matrix X[n] (diagonal pattern: removed elements, dot pattern: new elements)]

which describes the following permutation matrix (Fig. 24) as

P_{ρ_3} = \begin{bmatrix} e_{ρ_3(1)} & e_{ρ_3(2)} & e_{ρ_3(3)} \end{bmatrix} = \begin{bmatrix} e_3 & e_1 & e_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}.   (3.19)
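The construction of P_ρ in (3.16) can be sketched in a few lines of NumPy (an illustration, not from the source); for K = 3 it reproduces (3.19).

import numpy as np

def perm_matrix(K: int) -> np.ndarray:
    """Permutation matrix P_rho of (3.16): columns are e_K, e_1, ..., e_{K-1}."""
    P = np.zeros((K, K))
    P[K - 1, 0] = 1.0              # first column is e_K
    for j in range(1, K):
        P[j - 1, j] = 1.0          # (j+1)-th column is e_j
    return P                       # equivalently: np.roll(np.eye(K), 1, axis=1)

print(perm_matrix(3))              # reproduces (3.19):
# [[0. 1. 0.]
#  [0. 0. 1.]
#  [1. 0. 0.]]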

Finally, the matrix X[n+1] is generated from X[n] using a column-wise update as Fig. 25 shows. For that, the columns of X[n] have to be permuted by P_ρ. Thereafter, the result is compensated by the dyadic product of the difference of the newest and oldest time-domain column vectors, x_1[n+1] and x_K[n], and the identity vector e_1 with its first element being non-zero:

X[n+1] = X[n] P_ρ + (x_1[n+1] - x_K[n]) e_1^T.   (3.20)

Similarly to (3.20), the matrix X[n+1] can be generated from X[n] using a row-wise update as Fig. 26 shows. For that, the rows of X[n] have to be permuted by P_ρ. Thereafter, the result is compensated by the dyadic product of the identity vector e_N, with its last element being non-zero, and the difference of the newest and oldest time-domain row vectors, \bar{x}_N[n+1] and \bar{x}_1[n]:

X[n+1] = P_ρ X[n] + e_N (\bar{x}_N[n+1] - \bar{x}_1[n]).   (3.21)
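Both update forms can be checked numerically. The sketch below is an illustration (not from the source); it verifies (3.20) and (3.21) against a directly built X[n+1], and it assumes an N×N permutation matrix with the same cyclic pattern as (3.16) for the row-wise case.

import numpy as np

def data_matrix(x, n, N, K):
    """Data matrix X[n] of (3.5)."""
    return np.array([[x[n + i - j] for j in range(K)] for i in range(N)])

rng = np.random.default_rng(1)
N, K = 6, 4
x = rng.standard_normal(32)
n = K - 1                                     # smallest n with valid indices

X0 = data_matrix(x, n, N, K)                  # X[n]
X1 = data_matrix(x, n + 1, N, K)              # X[n+1], built directly

P_K = np.roll(np.eye(K), 1, axis=1)           # column permutation, cf. (3.16)
P_N = np.roll(np.eye(N), 1, axis=1)           # row permutation, same pattern

# column-wise update (3.20)
e1 = np.zeros(K); e1[0] = 1.0
col_new = x[n + 1 : n + 1 + N]                # x_1[n+1], the newest column
assert np.allclose(X0 @ P_K + np.outer(col_new - X0[:, -1], e1), X1)

# row-wise update (3.21)
eN = np.zeros(N); eN[-1] = 1.0
row_new = X1[-1, :]                           # \bar{x}_N[n+1], the newest row
assert np.allclose(P_N @ X0 + np.outer(eN, row_new - X0[0, :]), X1)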

[Figure 26: Row-wise update of the observation matrix X[n] (diagonal pattern: removed elements, dot pattern: new elements)]

[Figure 27: Update of the auto-correlation matrix R[n] using permutation (diagonal pattern: removed elements, dot pattern: new elements)]

3.2.2 Calculation of R[n+1]

The auto-correlation matrix R[n+1] can be calculated using R[n] either by permutation or by dyadic products, as presented in the following.

3.2.2.1 Using permutation matrix

Applying (3.20), the transposed version of X[n+1] can be expressed as

X^T[n+1] = \left( X[n] P_ρ + (x_1[n+1] - x_K[n]) e_1^T \right)^T = P_ρ^T X^T[n] + e_1 (x_1[n+1] - x_K[n])^T.   (3.22)

As a further step, the matrix R[n+1] can be given through (3.9), (3.20), and (3.22) as

R[n+1] = X^T[n+1] X[n+1] =
= P_ρ^T X^T[n] X[n] P_ρ +
+ P_ρ^T X^T[n] (x_1[n+1] - x_K[n]) e_1^T + e_1 (x_1[n+1] - x_K[n])^T X[n] P_ρ +
+ e_1 (x_1[n+1] - x_K[n])^T (x_1[n+1] - x_K[n]) e_1^T   (3.23)

where the first term in (3.23) can be simplified with the aid of (3.9) as

P_ρ^T X^T[n] X[n] P_ρ = P_ρ^T R[n] P_ρ,   (3.24)

which describes the shifts of the rows and the columns (Fig. 27). After that, the first row and the first column are updated by the second, third, and fourth terms of (3.23). Because of the multiplications by the identity vector, these terms are zero-valued matrices except for the first column, the first row, and the upper left element, which modify the corresponding parts of R[n+1], as can be seen in Fig. 27. As a result, the updated auto-correlation matrix is expressed from the previous results using the permutation matrix and the identity vector.
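As a quick numerical check (an illustration, not from the source), the permutation-based update (3.23)-(3.24) can be verified against a directly computed R[n+1]:

import numpy as np

rng = np.random.default_rng(2)
N, K = 6, 4
x = rng.standard_normal(32)
n = K - 1

Xn  = np.array([[x[n + i - j] for j in range(K)] for i in range(N)])
Xn1 = np.array([[x[n + 1 + i - j] for j in range(K)] for i in range(N)])

P = np.roll(np.eye(K), 1, axis=1)             # P_rho of (3.16)
e1 = np.zeros(K); e1[0] = 1.0
delta = x[n + 1 : n + 1 + N] - Xn[:, -1]      # x_1[n+1] - x_K[n]

R1 = (P.T @ (Xn.T @ Xn) @ P                   # permuted old matrix, (3.24)
      + np.outer(P.T @ (Xn.T @ delta), e1)    # second term of (3.23)
      + np.outer(e1, (delta @ Xn) @ P)        # third term of (3.23)
      + (delta @ delta) * np.outer(e1, e1))   # fourth term of (3.23)
assert np.allclose(R1, Xn1.T @ Xn1)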

3.2.2.2 Using dyadic product

The auto-correlation matrix can also be expressed by dyadic products as [73, 74]

R[n] = \sum_{i=1}^{N} \bar{x}_i^T[n] \bar{x}_i[n]   (3.25)

while the updated version of the auto-correlation matrix can also be derived in a similar manner

R[n+1] = \sum_{i=1}^{N} \bar{x}_i^T[n+1] \bar{x}_i[n+1].   (3.26)

Due to the calculations in a sliding window, it can be noted that

\bar{x}_i[n+1] = \bar{x}_{i+1}[n],   i = 1, \dots, N-1.   (3.27)

As a result, the updated auto-correlation matrix can also be expressed by subtracting the dyadic product of the oldest vector from the previous matrix and adding the dyadic product of the newest vector to it [73, 74]:

R[n+1] = R[n] - \bar{x}_1^T[n] \bar{x}_1[n] + \bar{x}_N^T[n+1] \bar{x}_N[n+1].   (3.28)
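The dyadic form (3.28) translates directly into code. The following sketch (illustrative, not from the source) checks it against a directly computed R[n+1]:

import numpy as np

rng = np.random.default_rng(3)
N, K = 6, 4
x = rng.standard_normal(32)
n = K - 1

Xn  = np.array([[x[n + i - j] for j in range(K)] for i in range(N)])
Xn1 = np.array([[x[n + 1 + i - j] for j in range(K)] for i in range(N)])

row_old = Xn[0, :]                  # \bar{x}_1[n], leaves the window
row_new = Xn1[-1, :]                # \bar{x}_N[n+1], enters the window

# rank-2 update (3.28): subtract the oldest dyad, add the newest one
R1 = Xn.T @ Xn - np.outer(row_old, row_old) + np.outer(row_new, row_new)
assert np.allclose(R1, Xn1.T @ Xn1)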

3.2.3 Calculation of p[n+1]

The cross-correlation vector p[n+1] can also be calculated using p[n] either by permutation or by dyadic products, as presented in the following.

3.2.3.1 Using permutation matrix

Similarly to the method deduced in Sec. 3.2.2, d[n+1] can be expressed using the permutation matrix and the identity vector as

d[n+1] = P_ρ d[n] - e_N (d[n] - d[n+N]).   (3.29)

The updated cross-correlation vector can be derived based on (3.10), (3.20), and (3.29) as

p[n+1] = X^T[n+1] d[n+1] =
= P_ρ^T X^T[n] P_ρ d[n] -
- P_ρ^T X^T[n] e_N (d[n] - d[n+N]) + e_1 (x_1[n+1] - x_K[n])^T P_ρ d[n] -
- e_1 (x_1[n+1] - x_K[n])^T e_N (d[n] - d[n+N]).   (3.30)

As a result, the updated cross-correlation vector is expressed with the aid of the permutation matrix and the identity vector.

3.2.3.2 Using dyadic product

Following the derivations presented in Sec. 3.2.2, the cross-correlation vector can also be expressed by a dyadic product as

p[n] = \sum_{i=1}^{N} \bar{x}_i^T[n] d[n+i-1].   (3.31)

The consecutive cross-correlation vector in the sliding window can be calculated using a dyadic product as

p[n+1] = \sum_{i=1}^{N} \bar{x}_i^T[n+1] d[n+i].   (3.32)

Based on (3.27), (3.31), and (3.32), the updated cross-correlation vector can be expressed from the previous cross-correlation vector as

p[n+1] = p[n] - \bar{x}_1^T[n] d[n] + \bar{x}_N^T[n+1] d[n+N].   (3.33)
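Analogously, the sliding-window update (3.33) of the cross-correlation vector can be checked numerically (an illustrative sketch, not from the source):

import numpy as np

rng = np.random.default_rng(4)
N, K = 6, 4
x = rng.standard_normal(32)
d = rng.standard_normal(32)          # desired signal samples d[m]
n = K - 1

Xn  = np.array([[x[n + i - j] for j in range(K)] for i in range(N)])
Xn1 = np.array([[x[n + 1 + i - j] for j in range(K)] for i in range(N)])

p = Xn.T @ d[n : n + N]              # p[n] = X^T[n] d[n], cf. (3.10)

# update (3.33): drop the oldest dyad, add the newest one
p1 = p - Xn[0, :] * d[n] + Xn1[-1, :] * d[n + N]
assert np.allclose(p1, Xn1.T @ d[n + 1 : n + 1 + N])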

3.2.4 Calculation of R^{-1}[n+1]: the RSC4BI-algorithm

In this section, an efficient algorithm for the calculation of the inverse of the updated auto-correlation matrix, R^{-1}[n+1], is presented. The refresh of R[n] is performed by the permutation method (see Sec. 3.2.2) to get R[n+1]. The following supplementary matrix notations and their splitting into four submatrices are used for the auto-correlation matrices and their inverses. These notations can be seen in Fig. 28 as well.

R[n] = A = \begin{bmatrix} a_{11} & \dots & a_{1(K-1)} & a_{1K} \\ \vdots & \ddots & \vdots & \vdots \\ a_{(K-1)1} & \dots & a_{(K-1)(K-1)} & a_{(K-1)K} \\ a_{K1} & \dots & a_{K(K-1)} & a_{KK} \end{bmatrix} = \begin{pmatrix} A_{11} & a_{12} \\ a_{21}^T & a_{22} \end{pmatrix},   (3.34)

R[n+1] = B = \begin{bmatrix} b_{11} & b_{12} & \dots & b_{1K} \\ b_{21} & b_{22} & \dots & b_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ b_{K1} & b_{K2} & \dots & b_{KK} \end{bmatrix} = \begin{pmatrix} b_{11} & b_{12}^T \\ b_{21} & B_{22} \end{pmatrix},

R^{-1}[n] = U = \begin{bmatrix} u_{11} & \dots & u_{1(K-1)} & u_{1K} \\ \vdots & \ddots & \vdots & \vdots \\ u_{(K-1)1} & \dots & u_{(K-1)(K-1)} & u_{(K-1)K} \\ u_{K1} & \dots & u_{K(K-1)} & u_{KK} \end{bmatrix} = \begin{pmatrix} U_{11} & u_{12} \\ u_{21}^T & u_{22} \end{pmatrix},   (3.35)

R^{-1}[n+1] = V = \begin{bmatrix} v_{11} & v_{12} & \dots & v_{1K} \\ v_{21} & v_{22} & \dots & v_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ v_{K1} & v_{K2} & \dots & v_{KK} \end{bmatrix} = \begin{pmatrix} v_{11} & v_{12}^T \\ v_{21} & V_{22} \end{pmatrix}.   (3.36)

The derivation of the inverse of such hypermatrices with 4 submatrices is presented in Appendix C.

Based on (3.23) and (3.24), it can be shown that A_{11} = B_{22}. Furthermore, the submatrix B_{22}^{-1} can also be expressed using (C.27) and (C.18) as

B_{22}^{-1} = A_{11}^{-1} = U_{11} - u_{12} u_{22}^{-1} u_{21}^T.   (3.37)

With the result of the previous equation, the reduced submatrix of B can also be expressed using (C.18) as

b_{11,r} = b_{11} - b_{12}^T B_{22}^{-1} b_{21}.   (3.38)

Considering that V = B^{-1}, the upper left corner element of V can also be easily expressed using (3.37), (3.38), and (C.27) as

v_{11} = b_{11,r}^{-1} = (b_{11} - b_{12}^T B_{22}^{-1} b_{21})^{-1}.   (3.39)

Similarly, the lower left submatrix of V can also be calculated using (C.28) as

v_{21} = -B_{22}^{-1} b_{21} b_{11,r}^{-1}.   (3.40)

[Figure 28: Structure of the correlation matrices R and their inverses R^{-1}]

Algorithm 1 The RSC4BI-algorithm
INPUT: B ≡ R[n+1], U ≡ R^{-1}[n]
OUTPUT: V ≡ R^{-1}[n+1]
1: function RSC4BI(B, U)
2:   calculate B_{22}^{-1} using the submatrices of U (3.37)
3:   calculate b_{11,r} using B_{22}^{-1} and B (3.38)
4:   using B_{22}^{-1} and b_{11,r}, express the submatrices of V:
5:   calculate v_{11} (3.39)
6:   calculate v_{21} and v_{12}^T (3.40), (3.41)
7:   calculate V_{22} (3.42)
8:   form V using v_{11}, v_{21}, v_{12}^T, V_{22} (3.43)
9: end function

The upper right submatrix of V can be calculated using (C.29) or based on (3.40) as

v_{12}^T = v_{21}^T = -b_{11,r}^{-1} b_{21}^T B_{22}^{-1}.   (3.41)

As a final step, the submatrix V_{22} can be expressed with the aid of (C.30) as

V_{22} = B_{22}^{-1} + B_{22}^{-1} b_{21} b_{11,r}^{-1} b_{12}^T B_{22}^{-1}.   (3.42)

Finally, the updated inverse auto-correlation matrix can be assembled and expressed based on the matrices B, U, and the calculated submatrices of V as

R^{-1}[n+1] = V = \begin{pmatrix} b_{11,r}^{-1} & -b_{11,r}^{-1} b_{21}^T B_{22}^{-1} \\ -B_{22}^{-1} b_{21} b_{11,r}^{-1} & B_{22}^{-1} + B_{22}^{-1} b_{21} b_{11,r}^{-1} b_{12}^T B_{22}^{-1} \end{pmatrix}.   (3.43)

The previously presented steps for calculating the inverse of the auto-correlation matrix, i.e., the RSC4BI-algorithm, are given in Alg. 1.
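Based on Alg. 1 and (3.37)-(3.43), a compact NumPy implementation of one RSC4BI-step can be sketched as follows. This is an illustration, not the authors' reference code: the variable names and the sliding-window test harness are assumptions, and numerical safeguards are omitted.

import numpy as np

def rsc4bi(B: np.ndarray, U: np.ndarray) -> np.ndarray:
    """One RSC4BI-step: B = R[n+1], U = R^{-1}[n]; returns V = R^{-1}[n+1]."""
    # split U as in (3.35): U11 block, u12/u21 borders, u22 corner element
    U11, u12, u21, u22 = U[:-1, :-1], U[:-1, -1], U[-1, :-1], U[-1, -1]
    # split B as in Sec. 3.2.4: b11 corner, b12/b21 borders, B22 block
    b11, b12, b21, B22 = B[0, 0], B[0, 1:], B[1:, 0], B[1:, 1:]

    B22inv = U11 - np.outer(u12, u21) / u22      # (3.37), since A11 = B22
    b11r = b11 - b12 @ B22inv @ b21              # (3.38)
    v11 = 1.0 / b11r                             # (3.39)
    v21 = -B22inv @ b21 * v11                    # (3.40); v12 = v21 by (3.41)
    V22 = B22inv + np.outer(B22inv @ b21, b12 @ B22inv) * v11   # (3.42)

    V = np.empty_like(B)                         # assemble V as in (3.43)
    V[0, 0], V[0, 1:], V[1:, 0], V[1:, 1:] = v11, v21, v21, V22
    return V

# sliding-window test: track R^{-1}[n] and compare with direct inversion
rng = np.random.default_rng(5)
N, K = 8, 4
x = rng.standard_normal(64)

def R(n):
    X = np.array([[x[n + i - j] for j in range(K)] for i in range(N)])
    return X.T @ X

Uinv = np.linalg.inv(R(K - 1))                   # initialization, cf. Sec. 3.2.5.2
for n in range(K, K + 10):
    Uinv = rsc4bi(R(n), Uinv)
    assert np.allclose(Uinv, np.linalg.inv(R(n)))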

3.2.5 Remarks on the proposed recursive calculations

3.2.5.1 Calculation of R[n+1] by modified permutation

As presented in Sec. 3.2.2, the auto-correlation matrix R[n] can be updated either by adding dyadic products or by permutation. Applying the second method (see Fig. 27), the old matrix is permuted first, then the elements in the first column and row are modified to get the updated auto-correlation matrix. Contrary to that, the order of these steps can be changed as Fig. 29 shows. This procedure allows evaluating the matrix inversion without permutation; however, the modified parts of the matrix have to be rearranged after the inversion to store the updated auto-correlation matrix.

3.2.5.2 Initialization

As presented in this section, R^{-1}[n+1] can be evaluated based on R^{-1}[n]; therefore, the initialization value of the recursion has to be defined prior to starting the algorithm. In this section, two different methods are recommended to perform the initialization.

The first method is a simple solution through inversion of the auto-correlation matrix using conventional procedures. However, this method requires all the samples to arrive in the observation window; thus, the filter starts to follow the signal only after collecting the requested number of input samples.

[Figure 29: Update of the auto-correlation matrix R[n] using modified permutation (diagonal pattern: removed elements, dot pattern: new elements)]

The second method is the expansion of the inverse matrix according to the number of input samples. This solution is performed by applying the Levinson-algorithm, which enables spreading the computational load across the incoming samples (Appendix D) [68].

3.2.5.3 Zero-valued input

During signal acquisition, it may occur that the analog-to-digital converter records zero values if the signal level is too small. In such a case, X[n] contains zero values, which may lead to the rows of X no longer being linearly independent. Since R = X^T X, the rank of the auto-correlation matrix becomes lower than the number of columns. As a result, R becomes singular, i.e., det(R) = 0; therefore, the inversion of the matrix is not possible. To avoid this issue, the incoming signal should be perturbed by additive white noise (e.g., dither). Its level has to be low enough not to notably alter the input signal, but still large enough to enable the inversion of the auto-correlation matrix.
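The rank argument can be demonstrated with a short sketch (illustrative, not from the source): a zero-valued input yields a singular R, while low-level dither restores full rank.

import numpy as np

rng = np.random.default_rng(6)
N, K = 8, 4

def Rmat(sig, n):
    X = np.array([[sig[n + i - j] for j in range(K)] for i in range(N)])
    return X.T @ X

x = np.zeros(64)                                  # ADC output stuck at zero
print(np.linalg.matrix_rank(Rmat(x, K - 1)))      # 0: R is singular

x_d = x + 1e-6 * rng.standard_normal(x.size)      # low-level additive dither
print(np.linalg.matrix_rank(Rmat(x_d, K - 1)))    # K: R is invertible again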

3.2.5.4 Related algorithms

The algorithm proposed by Kraker is based on block-wise matrix inversion; therefore, it has formulas very similar to the classic numerical methods of linear algebra. One of them is the Woodbury matrix identity [76]. It states that the inverse of a rank correction of a matrix A can be evaluated by performing the rank correction on the inverse of the original matrix:

(A + U Σ V^T)^{-1} = A^{-1} - A^{-1} U (Σ^{-1} + V^T A^{-1} U)^{-1} V^T A^{-1}   (3.44)

where the matrices A, Σ, U, and V have conformable sizes.

Considering (3.28), it can be seen that the original auto-correlation matrix is modified by the sum of two dyads, denoted ΔR. ΔR has rank 2; therefore, it is a rank-2 correction of R; thus, the Woodbury matrix identity (3.44) can be applied for the inversion after performing the following factorization:

ΔR = -\bar{x}_1^T[n] \bar{x}_1[n] + \bar{x}_N^T[n+1] \bar{x}_N[n+1] = U Σ V^T.   (3.45)

To get the matrices of this product, the singular value decomposition (SVD) is one possible solution. The reduced result of the decomposition gives U and V with size K×2, while Σ is a 2×2 diagonal matrix containing the non-zero singular values of ΔR. Given these, (3.44) reduces the inversion to the following two simple inversions: Σ^{-1} is given through the reciprocals of the singular values, and (Σ^{-1} + V^T A^{-1} U)^{-1} includes only a 2×2 matrix inversion.

The Woodbury matrix identity can reduce the sizes of the matrices that have to be inverted, but a singular value decomposition has to be performed before the inversion, having O(K^2) complexity. This performance is similar to the complexity of the RSC4BI-algorithm presented in Sec. 3.4. Nonetheless, this Woodbury-based solution contains more divisions than the RSC4BI-algorithm, and these operations may suffer from long run time or limited precision.

To perform the continuous update of the inverse of an auto-correlation matrixR, Baranoski introduced a new algorithm based on the inverse Cholesky factorization [77]. Contrary to the sliding window method of Kraker, this approach is a continuous one, where the previous samples and the new ones are weighted by weighting factors.