
Radio-Engineering Means of Transmission, Reception and Processing of Signals

https://doi.org/10.32603/1993-8985-2020-23-3-6-24 Original article

Application of the Non-Hermitian Singular Spectrum Analysis to the Exponential Retrieval Problem

Dmitry J. Nicolsky1 ✉, Gennadiy S. Tipenko2

1 Geophysical Institute, University of Alaska Fairbanks, Fairbanks, USA 2 Institute of Environmental Geoscience Russian Academy of Sciences, Moscow, Russia

✉ djnicolsky@alaska.edu

Abstract

Introduction. In practical signal processing and its many applications, researchers and engineers try to find a number of harmonics and their frequencies in a time signal contaminated by noise. In this manuscript we propose a new approach to this problem.

Aim. The main goal of this work is to embed the original time series into a set of multi-dimensional information vectors and then use shift-invariance properties of the exponentials. The information vectors are cast into a new basis where the exponentials could be separated from each other.

Materials and methods. We derive a stable technique based on the singular value decomposition (SVD) of lag-covariance and cross-covariance matrices consisting of covariance coefficients computed for index-translated copies of an original time series. For these matrices a generalized eigenvalue problem is solved.

Results. The original time series is mapped into the basis of the generalized eigenvectors and then separated into components. The phase portrait of each component is analyzed by a pattern recognition technique to distinguish between the phase portraits related to the exponentials constituting the signal and to the noise. A component related to an exponential has a regular structure: its phase portrait resembles a unitary circle/arc. Any commonly used method can then be applied to evaluate the frequency associated with the exponential.

Conclusion. The efficiency of the proposed and existing methods is compared on a set of examples, including the white Gaussian and auto-regressive model noise. One of the significant benefits of the proposed approach is a way to distinguish false and true frequency estimates by the pattern recognition. Some automation of the pattern recognition is achieved by discarding noise-related components, associated with the eigenvalues that have a modulus less than a certain threshold.

Keywords: exponential retrieval problem, matrix pencil, SVD, pattern recognition

For citation: Nicolsky D. J., Tipenko G. S. Application of the Non-Hermitian Singular Spectrum Analysis to the Exponential Retrieval Problem. Journal of the Russian Universities. Radioelectronics. 2020, vol. 23, no. 3, pp. 6-24. doi: 10.32603/1993-8985-2020-23-3-6-24

Acknowledgements. We would like to thank A. Rybkin and V. Romanovsky for all their advice, patience, critique and reassurances along the way. We also appreciate comments and suggestions by the anonymous reviewer, who greatly helped to improve the quality of this manuscript. This research was funded by ARCSS Program and by the Polar Earth Science Program, Office of Polar Programs, National Science Foundation and by the State of Alaska.

Conflict of interest. Authors declare no conflict of interest.

Submitted 05.08.2019; accepted 28.03.2020; published online 29.06.2020

Introduction. The present paper originates from a classical problem in signal processing, namely: how to find a number of exponential constituents s(k) and their frequencies {ν_j}_{j=-n}^{n} in a time series {f(k)}_{k=0}^{m}. One of the techniques is to assume that

f(k) = s(k) + ε ω(k),  s(k) = Σ_{j=-n}^{n} c_j e^{iν_j k},  k = 0, 1, ..., m,

and then to apply some least squares method. Here, the complex-valued amplitudes {c_j} and the real distinct frequencies {ν_j} are such that c_{-j} = c̄_j; ν_{-j} = -ν_j. Note that ν_0 = 0 and c_0 is a real-valued constant. The random component ω is commonly interpreted as noise; s is called the signal and ε is a real constant.

A variety of subspace methods such as the Maximum Entropy Method [1], MUltiple SIgnal Classification (MUSIC) [2], Linear Prediction Methods [3, 4], Estimation of Signal Parameter via Rotational Invariance Technique (ESPRIT) [5], Matrix Pencil (MP) [6], and Minimum-Norm Method [7] have been used to find the exponentials {ν_j} in the measured data {f(k)}. In [8], a unification of concepts of the subspace methods is presented in terms of the singular value decomposition (SVD) [9] of the d × (l + 1) trajectory matrix X:

X = (Y_0, Y_1, ..., Y_l),  Y_k = [f(k + κ_1), f(k + κ_2), ..., f(k + κ_d)]^T,   (1)

for some constant d; translations κ_i; the constant l = m - κ_d. We emphasize that in [8], κ_i = i - 1, whereas we propose to choose arbitrary translations. The new choice of translations allows us to increase the numerical rank [9] of the trajectory matrix X and to improve the accuracy of frequency evaluation.

Our method, together with ESPRIT and MP, employs shift-invariance properties of the trajectory matrix X; however, there are some differences. ESPRIT was developed to estimate the direction-of-arrival by a uniform linear array (ULA) of sensors. The data readings from the i-th sensor are associated with the i-th row of the trajectory matrix X. The data in MP are similarly arranged in the row-wise format. Consequently, in both methods, the matrix X is partitioned into two submatrices H_0 and H_1 composed of the first d - 1 and last d - 1 rows of X, respectively. Note that equal spacing between sensors in a ULA yields κ_i = i - 1 and hence the space shift-invariance property can be applied. If noise is absent and d = 2n + 2, then the space shift-invariance property lets us derive

H_1 = T H_0;  T = {e^{i(j-1)ν_k}} Λ {e^{i(j-1)ν_k}}^{-1};  Λ = diag[e^{iν_{-n}}, ..., e^{iν_n}],   (2)

where j = 1, ..., d - 1; k = -n, ..., n, and Λ is the eigenvalue matrix of T. For the non-uniform linear array (NULA) of sensors the translations {κ_i} are arbitrary and hence the space shift-invariance property of X does not directly apply [10]. However, if d = 2n + 1 then the matrix X has a time shift-invariance property:

X_1 = Ω X_0;  Ω = {e^{iκ_j ν_k}} Λ {e^{iκ_j ν_k}}^{-1};  Λ = diag[e^{iν_{-n}}, ..., e^{iν_n}],   (3)

where j = 1, ..., d; k = -n, ..., n; the matrices X_0 and X_1 are given by the first l and last l columns of the matrix X, respectively; the matrix Λ is as in (2). The matrix

V = {e^{iκ_j ν_k}}_{1≤j≤d, -n≤k≤n}

is a generalized Vandermonde matrix and we assume that it is non-singular [11, 12]. Note that each frequency ν_j is given by the argument of the corresponding eigenvalue of Ω.

In the presence of noise, (3) no longer holds. Therefore, we construct a d × d matrix Ω̂ such that Y_{k+1} - Ω̂ Y_k is minimal in some sense, where {Y_k} are given in (1). In the framework of perturbation analysis it is possible to show that if d = 2n + 1 then the frequencies {ν_j} could be approximated by the arguments of the eigenvalues of Ω̂. However, the number 2n + 1 of exponentials in the time series f is a priori unknown and needs to be found. To deal with this problem we propose a two-step approach. In the first step, we select d to be greater than the number of exponentials found either by existing methods [13-19] or by computing the rank of X_0. We stress that we do not need to estimate the number of exponentials exactly at this step, but only to ensure that d > 2n + 1. In this case some eigenvalues of Ω̂ are associated with exponentials, while others are related to the noise. Note that taking into account only the information about the eigenvalues, it is impossible to judge whether an eigenvalue of Ω̂ is associated with an exponential or with noise. Therefore, at the second step we cast the trajectory matrix X_0 into the basis of the eigenvectors of Ω̂. In the new basis, the rows of X_0 that are associated with the exponentials have a very specific structure, i.e. the phase portrait is either a circle or an arc. We hence propose to evaluate the number n by a pattern recognition technique. We also show that the frequencies {ν_j}_{j=-n}^{n} estimated using the information carried by the rows are more accurate than those estimated by the eigenvalues of Ω̂.

Before we proceed forward, we adopt notation for the inner product

(f, g)_l = (1/l) Σ_{k=0}^{l-1} f(k) ḡ(k)

and the norm ‖f‖_l = (f, f)_l^{1/2} of the time series f = {f(k)}_{k=0}^{l-1} and g = {g(k)}_{k=0}^{l-1}. The overbar stands for complex conjugation, i.e. a + ib conjugated is a - ib. If l = ∞ we define ‖f‖_∞ = lim_{l→∞} ‖f‖_l and (f, g)_∞ = lim_{l→∞} (f, g)_l. Also, we define the translation operator T:

T f = (f(1), f(2), ..., f(l - 1));  f = (f(0), f(1), ..., f(l - 2)),

where f = {f(k)}_{k=0}^{l-1} stands for a time series.

The case of a single exponential. In this section, we consider a single exponential corrupted by noise:

f(k) = s(k) + ε ω(k);  s(k) = e^{iνk},

and highlight key elements of the proposed technique. Our goal is to estimate the value of ν given the values of f(k) for k = 0, 1, ..., l.

A number of methods have been developed to estimate a single exponential in the time series, e. g. [2, 3, 5, 6, 20-29]. Some of them are based on an observation that the exponential satisfies a first order auto-regressive process

s(k + 1) = e^{iν} s(k)   (4)

and the poles of the associated filter could be used to identify the frequency ν [24, 28].

To estimate ν when the observations are corrupted by noise, it is possible to introduce some averaging by solving a first order autoregressive problem, i.e. finding the scalar value of a such that the error term e in the following relation is minimized:

f(k + 1) = a f(k) + e(k + 1);  ‖e‖_l → min,

or in the matrix form:

X_1 = a X_0 + E;  ‖E‖_l → min,

where X_0 = [f(0), f(1), ..., f(l - 1)] and X_1 = [f(1), f(2), ..., f(l)] are complex-valued 1 × l matrices. The value of a is found by solving the least squares problem and is given by λ_l such that

X_1 X_0^* = λ_l X_0 X_0^*,

where "*" stands for the matrix conjugate transpose. Consequently, the frequency v can be evaluated by

I )], where is an eigenvalue in the

* *

case of 1 x 1 matrices X^Xo and Xo Xq. After some algebra, it is possible to derive that

ν_l = ν + ε ℑ[(Tω, Ts)_l - (ω, s)_l] + ε² ℑ{e^{-iν}(Tω, ω)_l - (1/2)[(s, ω)_l + (Tω, Ts)_l]²} + O(ε³).

When information about the number of exponentials in the time series is missing, one might increase the order of the autoregressive model and find constants a_1 and a_2:

f(k + 1) = a_1 f(k) + a_2 f(k - 1) + e(k + 1);  ‖e‖_l → min,

or in the matrix form: find the matrix A such that

X_1 = A X_0 + E;  ‖E‖_l → min,   (5)

where

A = ( 0    1
      a_2  a_1 );

X_0 = [Y_0, Y_1, ..., Y_{l-1}] and X_1 = [Y_1, Y_2, ..., Y_l] are complex-valued 2 × l matrices composed of the 2 × 1 information vectors Y_k = [f(k), f(k + 1)]^T, and E stands for the 2 × l matrix associated with the noise. Note that in the case of the auto-regressive model of the first order, the information vectors Y_k are scalars equal to f(k).

Using least squares, the matrix A in (5) is found as

Ω̂ = X_1 X_0^* (X_0 X_0^*)^{-1}.

In the case of the single infinite exponential corrupted by the white noise, i.e. (ω, ω)_∞ = 1, (s, ω)_∞ = 0, and (Tω, ω)_∞ = 0, we obtain that a_1 = e^{iν}/(ε² + 2) and a_2 = e^{2iν}/(ε² + 2). Therefore the eigenvalues λ_k and eigenvectors v_k of Ω̂ are

λ_{1,2} = e^{iν} (1 ± √(9 + 4ε²)) / (2(2 + ε²));  v_{1,2} = (1, λ_{1,2})^T.

One may note that the argument of λ_1 = e^{iν}(1 - ε²/3) + O(ε⁴) could be used as an estimator of the frequency ν. The other eigenvalue λ_2 = -e^{iν}(1/2 - ε²/12) + O(ε⁴) is rotated by the angle of π with respect to the argument of e^{iν}. The arguments of the eigenvalues may thus be used to estimate the frequency ν, but λ_2 provides a false estimate. The modulus of the eigenvalues could be used to distinguish genuine and false estimates, e.g. eigenvalues with absolute values significantly less than one could be associated with damping exponentials and be discarded. At the same time, the eigenvectors v_1 and v_2 also carry information about the exponentials.

In the proposed technique we look at the dynamics of the trajectory matrix X_0 by mapping it to the basis of eigenvectors using the matrix V = [v_1, v_2]. The image of the information vector Y_k in the new basis is

Z_k = V^{-1} Y_k;  k = 0, ..., l.

In our case,

V^{-1} = (1/(2√(9 + 4ε²))) [ √(9 + 4ε²) - 1,  2(2 + ε²)e^{-iν};  √(9 + 4ε²) + 1,  -2(2 + ε²)e^{-iν} ],

and hence, expanding in powers of ε,

Z_k = (e^{ikν}, 0)^T + (ε/3)( ω(k) + 2e^{-iν}ω(k + 1),  2[ω(k) - e^{-iν}ω(k + 1)] )^T + (2ε²/9)( e^{ikν}, -e^{ikν} )^T + O(ε³);  k = 0, ..., l.

The first coordinate of Z_k, i.e. Z_{k,1} for k = 0, ..., l, rotates around the origin, with the angle between Z_{k+1,1} and Z_{k,1} approximately equal to the frequency ν. Its phase portrait {(ℜ(Z_{k,1}), ℑ(Z_{k,1}))} resembles a unit circle (or a unit arc) with the center at the origin. The second coordinate of Z_k, i.e. Z_{k,2} for k = 0, ..., l, does not have a particular well-defined behavior, i.e. its phase portrait is given by a set of points randomly scattered around the origin. This difference in the phase portraits allows one to distinguish the eigenvector-eigenvalue pair corresponding to the true frequency from its false counterpart.

We note that once the coordinate of Z_k related to the exponential signal is established (in our case the first coordinate), the problem is simplified. Any appropriate method of frequency estimation could be applied to recover a single exponential in the time series.
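To make the single-exponential construction above concrete, the following minimal sketch builds the 2 × 1 information vectors for a noisy exponential, solves the first-order regression for the 2 × 2 matrix, and inspects the rows of Z_k; the frequency, noise level and all variable names are illustrative assumptions, not values from this work.

```python
import numpy as np

# Minimal sketch of the single-exponential construction above; the frequency, noise level
# and series length are illustrative values, not taken from the paper.
rng = np.random.default_rng(0)
nu, eps, m = 0.35, 0.2, 400
k = np.arange(m)
f = np.exp(1j * nu * k) + eps * (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)

# Information vectors Y_k = [f(k), f(k+1)]^T and the shifted trajectory matrices X0, X1
X0 = np.vstack([f[:-2], f[1:-1]])
X1 = np.vstack([f[1:-1], f[2:]])
Omega_hat = X1 @ X0.conj().T @ np.linalg.inv(X0 @ X0.conj().T)   # least-squares solution of (5)

lam, V = np.linalg.eig(Omega_hat)          # eigenvalues and eigenvectors of the 2 x 2 matrix
Z = np.linalg.inv(V) @ X0                  # images Z_k of the information vectors (rows of Z)

for j in range(2):
    r = np.abs(Z[j])
    print(f"|lambda| = {abs(lam[j]):.3f}, arg(lambda) = {np.angle(lam[j]):.3f}, "
          f"relative spread of |Z_{j}| = {r.std() / r.mean():.2f}")
# The row whose modulus stays nearly constant traces a circle in the complex plane; the
# argument of the corresponding eigenvalue approximates the true frequency nu.
```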

Linear regression approach in the case of multiple exponentials. In this section, we extend our proposed approach to the time series composed of several exponentials and contaminated by noise. Now and for the rest of this article, we consider the information vectors Y_k = [f(k + κ_1), f(k + κ_2), ..., f(k + κ_d)]^T associated with arbitrary translations {κ_i}. We consequently partition Y_k into two time series S_k and W_k:

Y_k = S_k + ε W_k,

where S_k = [s(k + κ_1), s(k + κ_2), ..., s(k + κ_d)]^T, and obtain that

Y_k = V Λ^k C + ε W_k,  k = 0, ..., l,

where V = {e^{iκ_j ν_k}}_{1≤j≤d, -n≤k≤n}; Λ = diag[e^{iν_{-n}}, ..., e^{iν_n}]; C = (c_{-n}, ..., c_n)^T. Thus, the trajectory matrices X_0 and X_1 could be expressed as

X_0 = V[C, ΛC, ..., Λ^{l-1}C] + ε[W_0, W_1, ..., W_{l-1}],
X_1 = VΛ[C, ΛC, ..., Λ^{l-1}C] + ε[W_1, W_2, ..., W_l].

Note that each row of [C, ΛC, ..., Λ^{l-1}C] is associated with a unique frequency. The phase portrait of the k-th row is given by the set of points {c_k e^{ijν_k}}_{j=0}^{l-1}, representing a circle/arc with the radius |c_k| around the origin. A key idea of the proposed method is to represent X_0 in a new basis, extract its rows and estimate a frequency associated with each phase portrait.

Linear regression problem. After some algebra, we derive that Y_{k+1} and Y_k are connected by the relation

Y_{k+1} = Ω Y_k + ε(W_{k+1} - Ω W_k);  Ω = V Λ V^+,

where V^+ = (V^*V)^{-1}V^* is the Moore-Penrose inverse of V. In the case of 2n + 1 = d, the matrix V is square, and hence V^+ = V^{-1}. Furthermore, if 2n + 1 = d then, similar to (4), we obtain the relation between the signal components:

S_{k+1} = Ω S_k,  Ω = V Λ V^{-1},

and to find the frequencies {ν_k} in the noiseless time series one needs to compute Ω and then find its eigenvalues.

As in the case of the single exponential, we introduce averaging and approximate the matrix Ω by the d × d matrix Ω̂, which is the solution of the linear regression problem ‖Y_{k+1} - AY_k‖ → min. Namely, the matrix Ω̂ is called the best estimate of Ω if Ω̂ = argmin_A J(A), where

J(A) = (1/l) Σ_{k=0}^{l-1} E_k^* E_k;  E_k = Y_{k+1} - A Y_k;  k = 0, ..., l - 1.   (6)

Note that in the case of multiple exponentials we do not know the number of exponentials contributing to the time series f. For now we assume that 2n + 1 ≤ d. We discuss the selection of the translations {κ_i} and the dimension of the information vectors Y_k later in this section.

It is possible to prove (see Appendix A for the proofs) that if det Γ_0 ≠ 0, then J(A) has the minimum only at Ω̂ satisfying

Ω̂ Γ_0 = Γ_1,

and its minimal value is J_min = tr{Γ_2 - Γ_1 Γ_0^{-1} Γ_1^*}, where

Γ_0 = (1/l) X_0 X_0^*;  Γ_1 = (1/l) X_1 X_0^*;  Γ_2 = (1/l) X_1 X_1^*.

If 2n + 1 = d, then for the noiseless time series f the solution Ω̂ of the linear regression problem coincides with Ω, and for the noisy data the eigenvalues of Ω and Ω̂ are connected via

Λ̂ = Λ + Σ_{n=1}^{∞} ε^n Λ^{(n)},

where Λ^{(n)} are some diagonal matrices. Hence, we can use the eigenvalues of Ω̂ = Γ_1 Γ_0^{-1} to evaluate those corresponding to Ω and, by computing the arguments of the eigenvalues, to find the frequencies.

There are some difficulties since the number of exponentials is a priori unknown, but it could be estimated by the methods discussed in [13-19] or by computing the rank of X_0. In the case of the noiseless time series, we have rank(X_0) = 2n + 1 for values d ≥ 2n + 1. This means that after a certain increase in d the rank of X_0 stays constant, and this threshold could be used to estimate the number of exponentials. In the case of noisy data, one can instead estimate the numerical rank of Γ_0 by the SVD [8]. Under the white noise assumption and m → ∞, the singular values {λ_k} of Γ_0 according to [8, 28] are such that

λ_k = μ_k + σ²,  k = 1, ..., 2n + 1;
λ_k = σ²,        k = 2n + 2, ..., d;  Γ_0^∞ = lim_{l→∞} Γ_0,   (7)

where μ_k > 0 are the singular values associated with the signal component and σ² stands for the noise variance. The values of μ_k depend on the choice of the translations {κ_i}_{i=1}^{d} and could be close to zero if the information vectors Y_i are almost linearly dependent. By varying the translations, it is possible to make the information vectors more linearly independent, consequently increase μ_k, and thus obtain a notable distinction between the singular values associated with the signal and with the noise. Note that the singular values associated with the white noise are constant. In the case when m is not large or when the noise is not white, the smallest singular values μ_k could start to overlap with those related to the noise [30, 31] and hence the notable decrease may be missing. Nevertheless, the SVD provides a good estimate for the numerical rank of the matrix [8].
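A minimal numerical-rank sketch in the spirit of (7) is given below: it looks for the sharpest relative drop in the singular values of Γ_0. The test signal and the drop-detection rule are illustrative assumptions only.

```python
import numpy as np

# Sketch of estimating the numerical rank of Gamma_0 from its singular values, cf. (7):
# under white noise the trailing singular values level off near the noise variance, so a
# sharp relative drop marks the number of signal-related components. Illustrative only.
rng = np.random.default_rng(2)
m, d, mult = 600, 15, 2
k = np.arange(m)
f = np.cos(0.3 * k) + np.cos(0.8 * k) + 0.3 * rng.standard_normal(m)

kappa = np.arange(d) * mult
l = m - kappa[-1]
X0 = np.array([f[kappa + j] for j in range(l)]).T
Gamma0 = X0 @ X0.T / l

s = np.linalg.svd(Gamma0, compute_uv=False)    # singular values in descending order
drops = s[:-1] / s[1:]                         # relative drop between neighbouring values
rank_est = int(np.argmax(drops)) + 1           # index of the sharpest drop
print(s.round(3), rank_est)                    # rank_est is expected to be close to 4
```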

We note that if d > 2n + 1 then some eigenvalues of Ω̂ approximate the frequencies {ν_j} while the others are associated with the noise. As in the case of the single exponential, the selection of the eigenvalues associated with the exponentials is a matter of belief if no information regarding the noise structure is provided.

On the other hand, the eigenvectors of Ω̂ can bring more information to decide whether an eigenvalue-eigenvector pair is associated with an exponential or with noise. The idea of using the eigenvectors is closely related to the time series decomposition by the Singular Spectrum Analysis (SSA) [32, 33].

Time series decomposition and the principal component approach. Before introducing the principal components using the eigenvectors of Ω̂, we briefly review some key points of the Singular Spectrum Analysis (SSA). Our approach exploits ideas of SSA, yet SSA and its various modifications, such as Monte Carlo SSA [34-36], Multiscale SSA [37], Random Lag SSA [38] and Oblique SSA [39], do not compute the frequencies {ν_j} but allow representation of the data f in a new convenient way. In particular, SSA relies on the Karhunen-Loève decomposition of the correlation matrix

Γ_0 = U Σ² U^*,

and on the representation of the vectors {Y_k} in a coordinate system defined by the eigenvectors {u_k} of Γ_0, namely

P = U^* X_0;  U = [u_1, ..., u_d],

where the rows {p_k} of the matrix P can be seen as the coordinates of {Y_k} in the orthogonal basis {u_k} and are commonly called principal components. Note that the vectors {p_k} have the important property of orthogonality, PP^* = Σ². However, arbitrary exponentials do not have to be orthogonal with respect to the inner product (·,·)_l due to the finite number of sampling points k = 0, ..., l - 1. Therefore, each principal component p_k is usually a linear combination of exponentials even for the noiseless data. We would like to emphasize that even for the noiseless data there is no one-to-one correspondence between the exponentials and the principal components p_k [40]. Hence, our goal is to obtain a one-to-one correspondence between the frequencies {ν_j} and certain objects in the absence of noise as follows.

We achieve our goal by representing the information vectors Y_k in the basis of the eigenvectors of Ω̂ instead of the basis associated with Γ_0, as is done in SSA. Consequently, we call the proposed method the Non-Hermitian Singular Spectrum Analysis (NH-SSA). For the rest of this article we assume that the eigenvalues of Ω̂ are all different and the Jordan decomposition of Ω̂ holds:

Ω̂ = V̂ Λ̂ V̂^{-1}.

We define a central object in our approach, namely the image of the information vectors Y_k in the basis of V̂:

Z_k = V̂^{-1} Y_k,  k = 0, ..., l.   (8)

Using the small perturbation theory it is possible to show that, in the case of d = 2n + 1, if all eigenvalues of Ω are distinct then the eigenvector matrix V̂ is such that

V̂^{-1} = (I + εR) V^{-1},   (9)

where the matrix R(ε) is analytic with respect to ε and R(ε) = Σ_{n=0}^{∞} ε^n R^{(n+1)}, where each diagonal entry of R^{(n+1)} vanishes for any n. Recalling that

Z_k = V̂^{-1} Y_k, we have

Z_k = Λ^k C + ε[ R Λ^k C + V^{-1} W_k ],

or, in the coordinate-wise form,

Z_{k,j} = λ_j^k c_j + ε ξ_{k,j};  ξ_{k,j} = Σ_{i=1}^{2n+1} ( R_{ji} λ_i^k c_i + (V^{-1})_{ji} W_{k,i} );  j = 1, ..., 2n + 1.

Here, j stands for the row index of the vector Z_k = (Z_{k,1}, ..., Z_{k,2n+1})^T, λ_j = e^{iν_j} is the eigenvalue of Ω, and ξ_{k,j} represents the noise. For each value of j, the consecutive values of Z_{k,j}, k = 0, ..., l, rotate around the origin, each time turning by the angle ν_j (up to the level of noise O(ε)). Furthermore, if d = 2n + 1 and ε → 0 then there is a one-to-one correspondence between the rows of Z_k and the exponentials.

In many practical applications, the number 2n + 1 of exponentials in the time series f is unknown. Therefore, in order not to lose some exponentials, we need to have d > 2n + 1. This implies that we decompose the noise ω into exponentials. Therefore, some rows of Z_k are attributed to the noise and do not have a stable "rotational" pattern around the origin, as was discussed in the previous section for the case of the single exponential.

Thus, to decide whether the j-th row of Z_k is associated with the noise or with the exponentials, we apply the pattern recognition technique by visualizing the dynamics of {Z_{k,j}}_{k=0}^{l}. Namely, the sequence {Z_{k,j}}_{k=0}^{l} is thought to represent a single exponential if the phase portrait, i.e. the set of points {[ℜ(Z_{k,j}), ℑ(Z_{k,j})]}_{k=0}^{l}, lies between two concentric circles, see Fig. 1, a, and the modulus |Z_{k,j}| is bounded around a certain constant. On the other hand,


Fig. 1. Typical results for the sequence {Z_{k,j}}_{k=0}^{l} associated with an exponential: a - the phase portrait {[ℜ(Z_{k,j}), ℑ(Z_{k,j})]}_{k=0}^{l} lies between two concentric circles; dashed and dotted lines represent one and two standard deviations from the mean modulus; b - the modulus of {Z_{k,j}}_{k=0}^{l} is bounded from zero (k ∈ [0, l]); c - the phase ψ_j(k) is a linearly increasing function (k ∈ [0, l - 1]); d - mapping of {Z_{k,j}}_{k=0}^{l} back to the original space: f̂^{(1)}(k) is an exponential (k ∈ [0, m - 1])


the sequence {Z_{k,j}}_{k=0}^{l} is associated with the noise when the phase portrait {[ℜ(Z_{k,j}), ℑ(Z_{k,j})]}_{k=0}^{l} is randomly distributed around the center, as shown in Fig. 2, a, and the modulus |Z_{k,j}| varies significantly.

Furthermore, for the single exponential signal the phase ψ_j(k) is linearly increasing, as shown in Fig. 1, c, while for {Z_{k,j}}_{k=0}^{l} related to the noise the phase ψ_j(m) = Σ_{k=0}^{m} ℑ[ln(Z_{k+1,j}/Z_{k,j})] could have some irregularities, see Fig. 2, c.

Once the rows of Z_k associated with the exponentials are established, the process is greatly simplified: each {Z_{k,j}}_{k=0}^{l} represents a single exponential, and a whole variety of frequency estimation methods could be applied to recover the frequency.
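The two visual criteria just described (modulus bounded between concentric circles, phase growing nearly linearly with few wrapping events) can be approximated programmatically; the sketch below is one possible rule with illustrative thresholds that are not prescribed by the paper.

```python
import numpy as np

# Sketch of the pattern-recognition criteria applied to one row Z_j of the transformed
# trajectory matrix: (i) the modulus should have a small spread relative to its mean, and
# (ii) the accumulated phase psi_j(k) should grow nearly linearly. Thresholds are
# illustrative assumptions only.
def looks_like_exponential(Zj, max_rel_spread=0.5, max_phase_residual=0.5):
    Zj = np.asarray(Zj, dtype=complex)
    r = np.abs(Zj)
    if r.min() < 1e-12 or r.std() / r.mean() > max_rel_spread:
        return False                                   # modulus not bounded away from zero
    psi = np.cumsum(np.angle(Zj[1:] / Zj[:-1]))        # accumulated (unwrapped) phase
    k = np.arange(psi.size)
    slope, intercept = np.polyfit(k, psi, 1)           # fit a straight line to the phase
    residual = np.sqrt(np.mean((psi - (slope * k + intercept)) ** 2))
    return residual < max_phase_residual               # nearly linear phase -> exponential

# Example: a clean rotating component versus a noise-like row.
k = np.arange(200)
print(looks_like_exponential(np.exp(1j * 0.3 * k)))    # expected: True
noise = np.random.default_rng(3)
w = noise.standard_normal(200) + 1j * noise.standard_normal(200)
print(looks_like_exponential(w))                       # expected: False (noise-like row)
```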

Finally, we note that, similar to the SSA methodology, the sequences {Z_{k,j}}_{k=0}^{l} associated with either the exponentials or the noise could be grouped together and mapped back to the original space as follows. For example, mapping of {Z_{k,j}}_{k=0}^{l} associated with a single exponential back to the original space results in the trajectory matrix X̂^{(j)} = {c_j e^{i(k+κ_s)ν_j}} + O(ε),


Fig. 2. Typical results for the sequence {Z_{k,j}}_{k=0}^{l} associated with noise: a - the phase portrait {[ℜ(Z_{k,j}), ℑ(Z_{k,j})]}_{k=0}^{l} is randomly distributed around the origin; b - the modulus of {Z_{k,j}}_{k=0}^{l} is not bounded from zero (k ∈ [0, l]); c - the phase ψ_j(k) can have one or multiple "wrapping" events [41] (k ∈ [0, l - 1]); d - mapping of {Z_{k,j}}_{k=0}^{l} back to the original space: f̂^{(j)}(k) is an irregular signal (k ∈ [0, m - 1])


which is a Hankel matrix when the translations are κ_s = s - 1. We average the elements of X̂^{(j)} and thus map the sequence {Z_{k,j}}_{k=0}^{l} to the original space as f̂^{(j)}(k) = c_j e^{iν_j k} + O(ε), k = 0, ..., m - 1. Fig. 1, d and 2, d show mappings of {Z_{k,j}}_{k=0}^{l} associated with the exponential and with the noise, respectively, to the original space.

Note that ℜ f̂^{(j)}(k) associated with the exponential is a cosine with an approximately constant amplitude, while the mapping of {Z_{k,j}}_{k=0}^{l} related to the noise results in some irregular structure. Similar to SSA, the sequences {Z_{k,j}}_{k=0}^{l} associated with the noise could all be grouped together and mapped back to ω̂. We thus produce a decomposition of the original signal f = ŝ + ω̂ + O(ε), where ŝ = Σ_j f̂^{(j)}. Decomposition of the time series f into a sum of f̂^{(j)}, where each f̂^{(j)} is associated with a single exponential, depends on the noise variance, the proposed number n of exponentials and the translations {κ_i}. In the current model we use translations such that κ_i = (i - 1)m, where m is called the multiplicity. Hence, the decomposition depends on the pair of parameters (d, m). The multiplicity m can be chosen arbitrarily. However, our numerical experiments show that m should be of the same order as the expected period. In this case, the information vectors {Y_k} tend to be more linearly independent in the computational sense and the matrices Γ_0 and Γ_1 are better conditioned.
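The grouping and back-mapping step can be sketched as follows, assuming consecutive translations κ_i = i - 1 so that the per-component trajectory matrix is Hankel and can be averaged along anti-diagonals, as in SSA; the helper function, the test signal and all parameter values are illustrative.

```python
import numpy as np

# Sketch of mapping one component Z_j back to the original space, assuming consecutive
# translations kappa_i = i - 1 so that the rank-one matrix v_j Z_j is (nearly) Hankel and
# can be averaged along anti-diagonals. Illustrative only.
def component_to_series(vj, Zj, m):
    X = np.outer(vj, Zj)                       # d x l contribution of this component
    d, cols = X.shape
    series = np.zeros(m, dtype=complex)
    counts = np.zeros(m)
    for i in range(d):                         # average entries with the same index i + k
        for k in range(cols):
            series[i + k] += X[i, k]
            counts[i + k] += 1
    return series / np.maximum(counts, 1)

# Example: recover a single complex exponential from a noisy series.
rng = np.random.default_rng(5)
m, d = 300, 8
kk = np.arange(m)
f = np.exp(1j * 0.5 * kk) + 0.2 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
kappa = np.arange(d)                           # kappa_i = i - 1
l = m - d
Y0 = np.array([f[kappa + j] for j in range(l)]).T
Y1 = np.array([f[kappa + j + 1] for j in range(l)]).T
Omega_hat = (Y1 @ Y0.conj().T) @ np.linalg.inv(Y0 @ Y0.conj().T)
lam, V = np.linalg.eig(Omega_hat)
Z = np.linalg.inv(V) @ Y0                      # rows of Z in the eigenvector basis
j = int(np.argmax(np.abs(lam)))                # component with |lambda| closest to 1
f_hat = component_to_series(V[:, j], Z[j], m)
# Mean deviation from the clean exponential (expected to be small):
print(np.mean(np.abs(f_hat[:l + d - 1] - np.exp(1j * 0.5 * kk[:l + d - 1]))).round(3))
```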

Numerical realization. In this section, we provide a numerical algorithm to evaluate V̂^{-1} in order to map the time series into the principal components. Please note that since Γ_0 and Γ_1 are typically ill-conditioned matrices, the straightforward calculation of the eigenvectors V̂ of Ω̂ = Γ_1 Γ_0^{-1} could lead to computational errors [42]. We propose to maximize the "numerical" rank of Γ_0 and Γ_1 by varying m. Further, noting that the SVD is less error prone for symmetric positive-definite matrices than the eigenvalue decomposition, in order to achieve a better numerical accuracy we suggest exploiting the SVD of X_0 and X_1 as follows.

Assuming that the trajectory matrices X_0, X_1 have rank equal to 2n + 1, we denote the SVD of X_0 and of X_1 as

X_0 = U_0 D_0 Π W_0^*;  X_1 = U_1 D_1 Π W_1^*,   (10)

where the (2n + 1) × l matrix Π stands for the projector mapping of C^l onto C^{2n+1}, and the unitary matrices U_s, W_s and the non-negative diagonal matrices D_s are of sizes (2n + 1) × (2n + 1), l × l and (2n + 1) × (2n + 1), respectively, for s = 0, 1. From (10) it

follows that the equation Γ_1 = Ω̂ Γ_0 is equivalent to

Q = T R,

where Q = Π W_1^* W_0 Π^*; T = D_1^{-1} U_1^* Ω̂ U_1 D_1; R = D_1^{-1} U_1^* U_0 D_0. Let us emphasize that the matrix Q is a projection of the unitary matrix W_1^* W_0, and therefore ‖Q‖ ≤ 1. The matrix T has the same spectrum Λ̂ as Ω̂.

Let us denote by Φ the matrix consisting of the generalized eigenvectors of the pair of matrices (Q, R), namely QΦ = RΦΛ̂. Taking into account that Q = RΦΛ̂Φ^{-1} = TR and the definition of the matrix T, it is easy to check that

Ω̂ = (U_1 D_1 R Φ) Λ̂ (U_1 D_1 R Φ)^{-1}.

From the Jordan decomposition of Ω̂ it follows that the eigenvectors V̂ of Ω̂ and the generalized eigenvectors Φ are connected by

V̂ = U_1 D_1 R Φ = U_0 D_0 Φ.

Therefore, by definition (8) we obtain

Z_k = Φ^{-1} D_0^{-1} U_0^* Y_k,  k = 0, ..., l.

Finally, we combine all the steps together and provide the numerical algorithm to estimate the exponentials in the time series:

Input: the time series f(k), k = 0, 1, ..., m - 1.

1. Pick a value of d > n and consider a set of multiplicities m = 1, 2, ....
2. For a selected value of m, compute the translations κ_k = (k - 1)m, where k = 1, ..., d.
3. Form the information vectors {Y_k} and the trajectory matrices X_0 and X_1.
4. Repeat Steps 1-3, while computing the condition number cond(X_0) = ‖X_0‖‖X_0^+‖ on a grid of (d, m).
5. Select a pair (d, m) where cond(X_0) attains the minimum and compute the trajectory matrices X_0 and X_1 for the found pair of d and m.
6. Perform the SVD of the matrices: X_0 = U_0 D_0 Π W_0^*, X_1 = U_1 D_1 Π W_1^*.
7. Calculate Q = Π W_1^* W_0 Π^* and R = D_1^{-1} U_1^* U_0 D_0.
8. Evaluate the generalized eigenvalues Λ̂ = diag[λ_1, ..., λ_d] and eigenvectors Φ for the pair of matrices Q and R such that QΦ = RΦΛ̂.
9. Map the information vectors Y_k into the new basis: Z_k = Φ^{-1} D_0^{-1} U_0^* Y_k, k = 0, ..., l.
10. Discard the rows {Z_{k,j}}_{k=0}^{l} associated with eigenvalues λ_j such that |λ_j| < λ_c, where λ_c is a given threshold. These rows are related to signal components f̂^{(j)} damping with time. The value of λ_c could be chosen such that the number of generalized eigenvalues with |λ_j| ≥ λ_c is the same as the number of singular values of Γ_0 associated with the signal.
11. Apply the pattern recognition technique to the remaining {Z_{k,j}}_{k=0}^{l} and evaluate the number n of exponentials. The number n of exponentials is determined by the number of {Z_{k,j}}_{k=0}^{l} which have the phase portrait lying in a vicinity of the unit circle in the complex plane.
12. Apply one of the frequency estimation methods to recover a single exponential in each {Z_{k,j}}_{k=0}^{l} associated with the regular patterns. Here, we use ESPRIT to find a single exponential in {Z_{k,j}}_{k=0}^{l}.

Output: the list of frequencies {ν_j} associated with the {Z_{k,j}}_{k=0}^{l} showing the regular patterns.
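A compact numerical sketch of these steps is given below, under simplifying assumptions: the grid search over (d, m) in Steps 1-5 is reduced to a fixed pair, a crude modulus-spread test replaces the visual pattern recognition of Step 11, and a simple mean phase-increment estimator stands in for the ESPRIT step (Step 12). All function names, thresholds and parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import eig, svd

# Sketch of Steps 1-12 with a fixed (d, m), a crude regularity check instead of visual
# pattern recognition, and mean phase increments instead of ESPRIT. Illustrative only.
def nhssa_frequencies(f, d=18, mult=3, lam_c=0.8, rel_spread=0.5):
    f = np.asarray(f, dtype=complex)
    kappa = np.arange(d) * mult                          # Step 2: translations
    l = f.size - kappa[-1] - 1
    X0 = np.array([f[kappa + j] for j in range(l)]).T    # Step 3: trajectory matrices
    X1 = np.array([f[kappa + j + 1] for j in range(l)]).T
    U0, s0, Vh0 = svd(X0, full_matrices=False)           # Step 6: economy SVDs
    U1, s1, Vh1 = svd(X1, full_matrices=False)
    Q = Vh1 @ Vh0.conj().T                               # Step 7: Q = Pi W1^* W0 Pi^*
    R = np.diag(1 / s1) @ U1.conj().T @ U0 @ np.diag(s0)
    lam, Phi = eig(Q, R)                                  # Step 8: Q Phi = R Phi Lambda
    Z = np.linalg.solve(Phi, np.diag(1 / s0) @ U0.conj().T @ X0)   # Step 9
    freqs = []
    for j in range(d):
        if np.abs(lam[j]) < lam_c:                        # Step 10: discard damped components
            continue
        r = np.abs(Z[j])
        if r.std() / r.mean() > rel_spread:               # Step 11: crude regularity check
            continue
        freqs.append(np.mean(np.angle(Z[j, 1:] / Z[j, :-1])))      # Step 12 (phase increments)
    return np.sort(np.abs(freqs))

# Example: four cosines in white noise, as in the comparison section below.
rng = np.random.default_rng(6)
k = np.arange(300)
s = sum(np.cos(2 * np.pi * v * k) for v in (0.04, 0.06, 0.07, 0.12))
print(nhssa_frequencies(s + rng.standard_normal(300)) / (2 * np.pi))
```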

Comparison with other high resolution methods. Signal corrupted by the white noise. Let us consider the signal s consisting of four unit-amplitude cosines sampled at m = 300 points. The frequencies of these cosines are ν_1/(2π) = 0.04; ν_2/(2π) = 0.06; ν_3/(2π) = 0.07 and ν_4/(2π) = 0.12. The signal s is corrupted by the white Gaussian noise ω with the zero mean E(ω) = 0 and variance var(ω) = 1. In this case, the SNR_dB defined by 10 log_10(‖s‖²_m/‖ω‖²_m) is approximately 3.5 dB.

For a given realization of the time series, we compute the condition number cond(Γ_0) on the grid of (d, m), as shown in Fig. 3. There are several potential pairs of m and d, marked by red asterisks, where the condition number is close to its minimum. We select the value of m equal to 4, while the choice of d is less restrictive; at the same time, it is better to select d rather large in order to increase the size of Γ_0 and reduce the influence of noise on the information-carrying components.

Our experience reveals that it is important to select the value of d to be at least three to four times larger than the number of exponentials, which could be estimated, for example, by analyzing the eigenvalues of the auto-covariance matrix Γ_0, shown by blue triangles in Fig. 4. We note that a significant drop occurs at the 9th eigenvalue. Hence, n = 4 (since cosines are used to define the signal). Therefore,


Fig. 3. Dependence of cond(Γ_0) on the multiplicity m and the size of Γ_0. Cells marked by asterisks, where the condition number is minimal, are potential candidates for choosing a pair (m, d)


Fig. 4. Eigenvalues of the auto-covariance matrix Γ_0 for the signal consisting of a sum of eight exponentials and the constant. The signal is corrupted by either the white Gaussian noise or the auto-regressive noise of the first and second orders


Fig. 5. Generalized eigenvalues Λ̂ = {λ_j} for the pencil QΦ = RΦΛ̂

while applying the NHSSA method, we assume d = 18; however, any number greater than 12 will also be sufficient. We stress that the precise determination of d is not important. Additional dimensions d - n are used to decompose the noise into some exponentials, and these false exponentials are discarded later at either Step 10 or 11.

The number of these false estimates is controlled by the threshold parameter λ_c in Step 10. If the value of λ_c is reduced, then more false estimates are recovered, but they could then be discarded at Step 11. At the same time, if the value of λ_c is increased, then NHSSA may start to omit true frequencies. The value of λ_c could be chosen such that the number of generalized eigenvalues with |λ_j| ≥ λ_c is the same as the number of Γ_0 singular values associated with the signal. Fig. 5 shows the distribution of λ_j for one of the realizations of the white noise-corrupted signal, in the case of n = 4 and d = 18. Four pairs of the generalized eigenvalues have a modulus close to 1, while the rest have significantly lower magnitudes. The value of λ_c ≈ 0.8 provides a threshold to separate the true and false estimates in this case. Our experience indicates that the optimal value is λ_c ≈ 0.8; however, it might need to be decreased if the SNR is reduced or the time series length is shortened. We nevertheless propose to keep the value of λ_c at the lower end to obtain some false estimates and then discard them using the pattern recognition.

We consider 100 realizations, r, of the time series and estimate the frequencies by ESPRIT and NHSSA. To improve the accuracy of frequency estimation, the size of the auto-covariance matrix for ESPRIT is selected to be m/3 = 100. We consider two instances of ESPRIT denoted by ESPRIT(n), where n = 4 (the exact number of cosines in the time series) or n = 7 is the assumed number of cosines in the ESPRIT algorithm.

Fig. 6, a plots the recovered frequencies for each realization according to each method, i. e. frequencies recovered by NHSSA are plotted by red circles, results of the ESPRIT recovery are shown by dots and crosses. Fig. 6, b shows the probability, p, of occurrence (a number of times the frequency is recovered within the 0.005-wide intervals, which uniformly span the frequency domain, divided by the total number of realizations) for the estimated frequencies. Both instances of ESPRIT almost always


Fig. 6. Recovery of frequencies in the case of the white Gaussian noise: a - frequencies estimated by NHSSA and ESPRIT for 100 realizations of the white Gaussian noise. The number of cosines in ESPRIT is assumed to be either 4 or 7; b - the probability, p, of occurrence for the estimated frequencies

Table. Estimation of the frequencies {ν_k} by ESPRIT and NHSSA in the case of the time series s corrupted by the white Gaussian or autoregressive noise of the first and second order

ν/(2π) | E(ν̂/(2π)): NHSSA | ESPRIT(4) | ESPRIT(7) | var(ν̂/(2π)): NHSSA | ESPRIT(4) | ESPRIT(7)

White Gaussian noise, SNR_dB = 3.5 dB
0.04 | 0.040066 | 0.040008 | 0.040018 | 4.2092e-08 | 2.5401e-08 | 2.5111e-08
0.06 | 0.060517 | 0.060011 | 0.060014 | 1.6183e-06 | 3.0151e-08 | 3.0139e-08
0.07 | 0.069285 | 0.070011 | 0.070007 | 3.1890e-06 | 2.1462e-08 | 2.2494e-08
0.12 | 0.119960 | 0.119980 | 0.119970 | 4.8482e-08 | 2.9076e-08 | 3.2675e-08

AR1 noise, SNR_dB = 0.9 dB
0.04 | 0.039971 | 0.039348 | 0.039885 | 1.0164e-06 | 1.2936e-05 | 6.7429e-07
0.06 | 0.059944 | 0.059894 | 0.059777 | 4.8822e-07 | 8.3477e-06 | 4.3980e-07
0.07 | 0.069988 | 0.068586 | 0.069860 | 3.9226e-07 | 8.6346e-06 | 2.2843e-07
0.12 | 0.11999 | 0.116940 | 0.119830 | 1.2697e-07 | 0.00010443 | 8.3213e-08

AR2 noise, SNR_dB = 1.5 dB
0.04 | 0.040022 | 0.040044 | 0.040022 | 1.3134e-07 | 7.3852e-08 | 7.5397e-08
0.06 | 0.060098 | 0.060107 | 0.060024 | 2.8021e-07 | 3.1196e-07 | 1.0839e-07
0.07 | 0.069964 | 0.070132 | 0.070059 | 3.0026e-07 | 1.3255e-07 | 1.0902e-07
0.12 | 0.119770 | 0.120110 | 0.119980 | 5.2731e-06 | 3.3860e-07 | 2.2155e-07

recover the frequencies; NHSSA shows good performance for ν/(2π) = 0.04 and ν/(2π) = 0.12 and slightly underperforms for ν/(2π) = 0.06 and ν/(2π) = 0.07.

Instead of identifying both of the latter frequencies, NHSSA recovers a single frequency in the middle. We will discuss the separability of frequencies later in this section. Note that all methods recover the frequencies quite well, especially after taking into the account that NHSSA does not have any information about the number of exponentials.

We list the estimated mean and variance for each of {ν_k}_{k=1}^{4} in the Table. The variance of ESPRIT is smaller than that of NHSSA, but ESPRIT requires the number of cosines as an input variable. Additionally, ESPRIT with n = 7 provides additional estimates, shown by black crosses, that are distributed rather randomly. It is hard to distinguish these false estimates from the true frequencies if only their values are given (as a potential solution, one may apply ESPRIT with different sizes of the auto-covariance matrix to investigate whether the estimates are true or false). NHSSA also has some false estimates, i.e. red circles lying away from the genuine frequencies.

Finally, we illustrate the separability of the original time series f = s + ω into the components, namely ŝ, the recovered signal, and ω̂, the estimated noise. The recovered signal ŝ is obtained by grouping and mapping the sequences {Z_{k,j}}_{k=0}^{l} which have a circular phase portrait to the original space. The noise ω̂ is estimated by grouping and mapping the rest of {Z_{k,j}}_{k=0}^{l} to the original space as well. Fig. 7, a shows the signal s and the time series f = s + ω for one of the realizations of ω. The recovered signal ŝ is compared to the original signal s in Fig. 7, b. Note that the two time series match quite well, but ŝ still contains some noise. Fig. 7, c shows the comparison of the time series for the original ω and the estimated ω̂ noise, which also match quite well.

We emphasize that the decomposition of the time series f = ŝ + ω̂ + O(ε) is obtained directly from grouping the appropriate {Z_{k,j}}_{k=0}^{l} and mapping them to the original space. The proposed method thus allows us to de-noise the original time series. Despite some inspiring empirical observations, further investigations are required to derive sharp estimates for s - ŝ and ω - ω̂.

Signal corrupted by the autoregressive noise. To further test the proposed method, we consider a signal s consisting of the same four cosines, but now it is corrupted by the noise ω, which is generated according to an autoregressive (AR) process of either the first or second order:

ω(k) = 0.7 ω(k - 1) + ξ(k),

or

ω(k) = 0.7 ω(k - 1) - 0.4 ω(k - 2) + ξ(k),


Fig. 7. Recovery of the signal in the case of the white Gaussian noise: a - comparison of the original signal s consisting of the four cosines to the corrupted signal f = s + ω, where ω is the white Gaussian noise; b - comparison of the original signal s and the recovered one ŝ; c - comparison of the original noise ω and the recovered noise ω̂

where ξ(k) is the white Gaussian noise such that E(ξ) = 0 and var(ξ) = 1; the SNR_dB for the former and latter models is 0.6 and 1.5 dB, respectively.

Note that for the AR noise the eigenvalues of Γ_0 do not have a clear jump, as shown in Fig. 4, and the number of exponentials for ESPRIT could be estimated between j = 9 and j = 15 or higher (since each cosine is associated with two exponentials, the number of cosines could be between 4 and 7). After computing the condition number cond(X_1) on the grid of (d, m), we choose m = 3 and d = 18. As before, we consider 100 different realizations of the noise and estimate the frequencies by the same three methods. Results of the frequency recovery for the AR1 and AR2 noise models are shown in Fig. 8 and 9, respectively. Note that the AR1 noise model generates a significant number of false estimates for ESPRIT; there is also a comparable number of false estimates for NHSSA, but the latter could be discarded using the pattern recognition. For the AR2 noise model, NHSSA performs significantly better than ESPRIT(7), but is less effective than ESPRIT(4). However, ESPRIT(4) has the advantage of exploiting information regarding the correct number of exponentials in the signal. If the number of exponentials is increased, e.g. as in ESPRIT(7), false estimates occur. For the sake of completeness, we list the mean and variance for each recovered frequency in the Table.

Separability. In this section, we consider a signal composed of two cosines with close lying frequencies that are hard to detect simultaneously due to the shortness of the time series f, i. e. the sum of


Fig. 8. Recovery of frequencies in the case of the AR1 noise: a - frequencies estimated by NHSSA and ESPRIT for 100 realizations of the autoregressive noise of the first order. The number of cosines in ESPRIT is assumed either to be 4 or 7; b - the probability, p, of occurrence for the estimated frequencies


Fig. 9. Recovery of frequencies in the case of the AR2 noise: a - frequencies estimated by NHSSA and ESPRIT for 100 realizations of the autoregressive noise of the second order. The number of cosines in ESPRIT is assumed either to be 4 or 7; b - the probability, p, of

occurrence for the estimated frequencies

cosines is sampled at m = 100 points. Namely, we consider ν_1/(2π) = 0.01 and ν_2/(2π) = 0.015, c_k = c_{-k} = 0.5 for k = 1, 2, and c_0 = -1; the noise ω is assumed to be white Gaussian with the zero mean E(ω) = 0 and var(ω) = 1/16. The signal s and the time series f are shown in Fig. 10, a; the SNR is 15 dB.

Estimation of the frequencies by both ESPRIT and NHSSA is shown in Fig. 11. Note that ESPRIT(2) with the correct number of cosines fails to estimate both frequencies simultaneously, and rather estimates a single frequency somewhere in the middle between ν_1/(2π) and ν_2/(2π). NHSSA also fails to distinguish both cosines and recovers a single frequency, some sort of average of ν_1/(2π) and ν_2/(2π), as well. This frequency is related to an eigenvalue pair (the cosine consists of two exponentials) lying close to the unit circle, while the other eigenvalues have significantly lower moduli and are related to the noise. The phase portrait {[ℜ(Z_{k,j}), ℑ(Z_{k,j})]}_{k=0}^{l} of the sequence {Z_{k,j}}_{k=0}^{l} associated with the pair lying close to the unit circle is a spiral, see Fig. 12, a.

We stress that the original problem of exponential recovery was concerned with finding frequencies {ν_k : ν_k ∈ ℝ}. However, both ESPRIT [43] and NHSSA could be applied to recover damping exponentials with frequencies {ν_k : ν_k ∈ ℂ+}, where ℂ+ is the upper complex half-plane; this discussion is beyond the scope of this manuscript. Therefore, instead of recovering both exponentials with closely lying frequencies, NHSSA approximated them by a single damping cosine, as can be observed in the mapping of {Z_{k,j}}_{k=0}^{l} back to the original space, see Fig. 12, d. This information is not revealed by standard ESPRIT, since it relies only on the eigenvalues and discards the information carried by the eigenvectors. The noise ω̂ and the signal ŝ are both recovered in NHSSA by grouping the appropriate eigenvalues and mapping back to the original space, see Fig. 10, b, c. Nevertheless, further investigations are necessary to understand when the recovery of two exponentials sampled on a short interval is feasible.


Fig. 10. Recovery of the signal in the case of two cosines with closely lying frequencies: a - comparison of the original signal s consisting of two cosines with closely lying frequencies to the corrupted signal f = s + ω, where ω is the white Gaussian noise; b - comparison of the original signal s and the recovered one ŝ; c - comparison of the original noise ω and the recovered noise ω̂


Fig. 11. Recovery of frequencies in the case of two cosines: a - frequencies estimated by NHSSA and ESPRIT for 100 realizations of the white Gaussian noise. The number of cosines in ESPRIT is assumed to be either 2 or 5; b - the probability, p, of occurrence for the estimated frequencies


Conclusions. We present a new method of estimating exponentials and their frequencies in the time series. The proposed method decomposes the

time series consisting of several exponentials into components by casting the information vectors into a new basis. Each component corresponds either to


Fig. 12. Typical results for the sequence {Z_{k,j}}_{k=0}^{l} associated with an average of two exponentials with closely lying frequencies: a - the phase portrait {[ℜ(Z_{k,j}), ℑ(Z_{k,j})]}_{k=0}^{l} is a spiral; dashed and dotted lines represent one and two standard deviations from the mean modulus; b - the modulus of {Z_{k,j}}_{k=0}^{l} is generally decreasing (k ∈ [0, l]); c - the phase ψ_j(k) is a linearly increasing function (k ∈ [0, l - 1]); d - mapping of {Z_{k,j}}_{k=0}^{l} back to the original space: f̂^{(1)}(k) is a damping cosine (k ∈ [0, m - 1])

only one of the exponentials or to noise. For the information-carrying component one of many frequency estimation techniques (e. g. ESPRIT, MUSIC, ML, etc.) could be applied to recover a single exponential. The overall accuracy of the proposed method is comparable with that of the widely used ESPRIT method, if the latter is provided with the number of exponentials in the signal. Furthermore, when the model order (number of exponentials) in ESPRIT is overestimated, the proposed method can reduce the number of the false frequency estimates, as shown by numerical examples.

One of the significant benefits of the proposed approach is a way to distinguish false and true frequency estimates using the pattern recognition. The primary automation of the pattern recognition is accomplished by discarding noise-related components, associated with the eigenvalues that have a modulus less than a certain threshold λ_c. At the second stage, the phase portrait and phase dynamics of the remaining components could be visually analyzed. Components associated with the true frequencies have phase portraits resembling unit circles and phase dynamics with zero or a minimal number of phase


wrapping events. The closeness of the phase portrait to the unit circle and the mapping of the component back to the original space could provide certain levels of confidence that the component is associated with an exponential or with noise. False frequencies are associated with phase portraits that have a random structure. At the same time, further research is needed to produce a fully automatic pattern recognition algorithm.

Finally, we note that under certain conditions the proposed method could be generalized to estimate exponentials in the time series where some measurements are missing [44].

Appendix A. Derivations of the main mathematical results. Proof of (7): Note that the cost function J defined by (6) has the following representation:

J(A) = tr{ (1/l) Σ_{k=0}^{l-1} (Y_{k+1} - A Y_k)(Y_{k+1} - A Y_k)^* }.

Considering the change of variables A = Γ_1 Γ_0^{-1} + U, it is easy to see that the term corresponding to the first power of U vanishes by the definition of Γ_1 and Γ_0, and the cost function is

J(A) = (1/l) tr{ (X_1 - Γ_1 Γ_0^{-1} X_0)(X_1 - Γ_1 Γ_0^{-1} X_0)^* } + (1/l) tr{ U X_0 X_0^* U^* }.

Since the matrix Γ_0 = WΛW^* is non-negative definite, the matrix of its eigenvalues Λ = diag(λ_1, ..., λ_d) has only non-negative entries. Therefore, introducing Ũ = UW, we have

tr(U Γ_0 U^*) = tr(Ũ Λ Ũ^*) = Σ_{i=1}^{d} Σ_{j=1}^{d} λ_j |Ũ_{ij}|² ≥ 0,

and hence the unique extremum exists at U = 0, or A = Γ_1 Γ_0^{-1}.
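The statement above can also be checked numerically; the short sketch below verifies, on random complex data, that perturbing the minimizer A = Γ_1 Γ_0^{-1} does not decrease the cost (6). The data and perturbation size are illustrative.

```python
import numpy as np

# Numerical sanity check: J(A) attains its minimum at A = Gamma_1 Gamma_0^{-1}.
rng = np.random.default_rng(7)
d, l = 4, 200
Y = rng.standard_normal((d, l + 1)) + 1j * rng.standard_normal((d, l + 1))
X0, X1 = Y[:, :-1], Y[:, 1:]
Gamma0 = X0 @ X0.conj().T / l
Gamma1 = X1 @ X0.conj().T / l
A_opt = Gamma1 @ np.linalg.inv(Gamma0)

def J(A):
    E = X1 - A @ X0
    return np.real(np.trace(E @ E.conj().T)) / l

U = 1e-3 * (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
print(J(A_opt) <= J(A_opt + U))    # expected: True for any perturbation U
```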

Proof of (9): Let the Jordan decomposition of Ω̂ be

V̂^{-1} Ω̂ = Λ̂ V̂^{-1},   (11)

where V̂ and Λ̂ are the matrices consisting of the eigenvectors and eigenvalues of Ω̂, respectively. Then, after multiplying (11) by the matrix V from the right and exploiting the definition (9) of V̂, we have

V^{-1} Ω̂ V + εR V^{-1} Ω̂ V = Λ̂ + ε Λ̂ R.   (12)

Note that since the matrices Γ_0 and Γ_1 are second degree polynomials with respect to ε, the Taylor series expansion of Ω̂ = Γ_1 Γ_0^{-1} has all powers of ε:

Ω̂ = Ω + Σ_{m=1}^{∞} ε^m Ω^{(m)},

and Ω̂ is an analytic function of ε. From the theory of linear algebra it is known that the eigenvalues Λ̂ of Ω̂ are analytic functions of ε and are given by

Λ̂ = Λ + Σ_{m=1}^{∞} ε^m Λ^{(m)},

where Λ̂ and Λ are the diagonal matrices containing the eigenvalues of Ω̂ and Ω, respectively. By the same argument, it is possible to prove that V̂ and hence R are analytic matrices too. Therefore, the representation

R(ε) = Σ_{n=0}^{∞} ε^n R^{(n+1)}

holds. Substituting this formula into (12) and equating the terms in front of the same powers of ε, we derive

R^{(n)} Λ - Λ R^{(n)} = Λ^{(n)} + G^{(n)},

where

G^{(n)} = -V^{-1} Ω^{(n)} V + Σ_{k=1}^{n-1} ( Λ^{(k)} R^{(n-k)} - R^{(k)} V^{-1} Ω^{(n-k)} V ).

Therefore

Λ^{(n)} = -diag(G^{(n)}),  R^{(n)}_{ij} = G^{(n)}_{ij} / (λ_i - λ_j) for i ≠ j,  R^{(n)}_{ii} = 0.

References

1. Burg J. Maximum Entropy Spectrum Analysis. Proc. 37th Annual International Meeting of the Society of Exploration Geophysicists, Oklahoma City, Okla. Reprinted in: Modern Spectrum Analysis. Ed. by D. G. Childers. Piscataway, IEEE Press, 1978 (1967), pp. 42-48.

2. Schmidt R. Multiple Emitter Location and Signal Parameter Estimation. IEEE Transactions on Antennas and Propagation. 1986, vol. 34, no. 3, pp. 276-280. doi: 10.1109/TAP.1986.1143830


3. Tufts D., Kumaresan R. Singular Value Decomposition and Improved Frequency Estimation Using Linear Prediction. IEEE Transactions on Acoustics, Speech and Signal Processing. 1982, vol. 30, iss. 4, pp. 671-675. doi: 10.1109/TASSP.1982.1163927

4. Kay S. Modern Spectral Estimation: Theory and Application. Englewood Cliffs, NJ, Prentice-Hall, 1988, 543 p.

5. Roy R., Kailath T. ESPRIT - Estimation of Signal Parameters via Rotational Invariance Techniques. IEEE Transactions on Acoustics, Speech and Signal Processing. 1989, vol. 37, iss. 7, pp. 984-995. doi: 10.1109/29.32276

6. Hua Y., Sarkar T. Matrix Pencil Method for Estimating Parameters of Exponentially Damped/Undamped Sinusoids in Noise. IEEE Transactions on Acoustics, Speech and Signal Processing. 1990, vol. 38, iss. 5, pp. 814-824. doi: 10.1109/29.56027

7. Kumaresan R. On the Zeros of the Linear Prediction-Error Filter for Deterministic Signals. IEEE Transactions on Acoustics, Speech and Signal Processing. 1983, vol. 31, iss. 1, pp. 217-220. doi: 10.1109/TASSP.1983.1164021

8. Van Der Veen A., Deprettere E., Swindlehurst A. Subspace Based Signal Analysis Using Singular Value Decomposition. Proc. of the IEEE. 1993, vol. 81, iss. 9, pp. 1277-1308. doi: 10.1109/5.237536

9. Golub G. H., Loan C. F. V. Matrix Computations. 4th ed. Baltimore, Maryland, The John Hopkins University Press, 2013, 784 p.

10. Buhren M., Pesavento M., Bohme J. A New Approach to Array Interpolation by Generation of Artificial Shift Invariances: Interpolated ESPRIT. Proc. IEEE Int. Conf. Acoust., Speech and Signal Processing (ICASSP). 2003, vol. 5. doi: 10.1109/ICASSP.2003.1199904

11. Marchi S. D. On Computing the Factors of Generalized Vandermonde Determinants. Recent Advances in Applied and Theoretical Mathematics. 2000, pp. 140-144.

12. Heineman E. R. Generalized Vandermonde Determinants. Transactions of the American Mathematical Society. 1929, vol. 31, no. 3, pp. 464-476.

13. Akaike H. A New Look at the Statistical Model Identification. IEEE Transactions on Automatic Control. 1974, vol. 19, iss. 6, pp. 716-723. doi: 10.1109/TAC.1974.1100705

14. Schwarz G. Estimating the Dimension of a Model. Annals of Statistics. 1978, vol. 6, no. 2, pp. 461-464.

15. Rissanen J. Modeling by Shortest Data Description. Automatica. 1978, vol. 14, pp. 465-471.

16. Wax M., Kailath T. Detection of Signals by Information Theoretic Criteria. IEEE Transactions on Acoustics Speech and Signal Processing. 1985, vol. ASSP-33, no. 2, pp. 387-392. doi: 10.1109/TASSP.1985.1164557

17. Zhao L. C., Krishnaiah P. R., Bai Z. D. On Detection of the Number of Signals in Presence of White Noise. J. of Multivariate Analysis. 1986, vol. 20, iss. 1, pp. 1-25. doi: 10.1016/0047-259X(86)90017-5

18. Fuchs J. Estimating the Number of Sinusoids in Aditive White Noise. IEEE Transactions on Acoustics, Speech and Signal Processing. 1998, vol. 36, iss. 12, pp. 1846-1853. doi: 10.1109/29.9029

19. Badeau R., David B., Richard G. Selecting the Modeling Order for the ESPRIT High Resolution Method: an Alternative Approach. Proc. IEEE Int. Conf. Acoust., Speech and Signal Processing (ICASSP). Montreal, Canada, 17-21 May 2004. Piscataway, IEEE, 2004, vol. II, pp. 1025-1028. doi: 10.1109/ICASSP.2004.1326435

20. Kumaresan R., Tufts D. Estimating the Parameters of Exponentially Damped Sinusoids and Pole-Zero Modeling in Noise. IEEE Transactions on Acoustics, Speech and Signal Processing. 1982, vol. 30, iss. 6, pp. 833-840. doi: 10.1109/TASSP.1982.1163974

21. Tretter S. Estimating the Frequency of a Noisy Sinusoid by Linear Regression. IEEE Transactions on Information Theory. 1985, vol. 31, iss. 6, pp. 832-835. doi: 10.1109/TIT.1985.1057115

22. Stoica P., Moses R. L., Friedlander B., Soderstrom T. Maximum likelihood estimation of the parameters of multiple sinusoids from noisy measurements. IEEE Transactions on Acoustics, Speech and Signal Processing. 1989, vol. 37, iss. 3, pp. 378-392. doi: 10.1109/29.21705

23. Kay S. A Fast and Accurate Single Frequency Estimator. IEEE Transactions on Acoustics, Speech and Signal Processing. 1989, vol. 37, iss. 12, pp. 1987-1990. doi: 10.1109/29.45547

24. Stoica P., Moses R. Spectral Analysis of Signals. Upper Saddle River, NJ, Prentice-Hall, 2005, 452 p.

25. Cedervall M., Stoica P., Moses R. Mode-Type Algorithm for Estimating Damped, Undamped, or Explosive Modes. Circuits, Systems and Signal Processing. 1997, vol. 16, iss. 3, pp. 349-362. doi: 10.1109/ACSSC.1995.540870

26. So H., Chan K., Chan Y., Ho K. Linear Prediction Approach for Efficient Frequency Estimation of Multiple Real Sinusoids: Algorithms and Analyses. IEEE Transactions on Signal Processing. 2005, vol. 53, iss. 7, pp. 2290-2305. doi: 10.1109/TSP.2005.849154

27. Qian F., Leung S., Zhu Y., Wong W., Pao D., Lau W. Damped Sinusoidal Signals Parameter Estimation in Frequency Domain. Signal Processing. 2012, vol. 92, iss. 2, pp. 381-391. doi: 10.1016/j.sigpro.2011.08.003

28. Li T.-H. Time Series with Mixed Spectra. CRC Press, 2014, 680 pp.

29. Zhao S., Loparo K. Forward and Backward Extended Prony Method for Complex Exponential Signals with/without Additive Noise. Digital Signal Processing. 2019, vol. 86, pp. 42-54. doi: 10.1016/j.dsp.2018.12.012

30. Stewart G. W. Perturbation Theory for the Singular Value Decomposition. SVD and Signal Processing, II: Algorithms, Analysis and Applications. 1991, pp. 99-109.

31. Li F., Vaccaro R. J. Performance Degradation of DOA Estimators Due to Unknown Noise Fields. IEEE Transactions on Signal Processing. 1992, vol. 40, iss. 3, pp. 686-690. doi: 10.1109/78.120813

32. Broomhead D. S., King G. P. Extracting Qualitative Dynamics from Experimental Data. Physica D. 1986, vol. 20, iss. 2-3, pp. 217-236. doi: 10.1016/0167-2789(86)90031 -X

33. Vautard R., Ghil M. Singular-Spectrum Analysis in Nonlinear Dynamics, with Applications to Paleoclimatic Time Series. Physica D. 1989, vol. 35, iss. 3, pp. 395-424. doi: 10.1016/0167-2789(89)90077-8

34. Ghil M., Vautard R. Interdecadal Oscillations and the Warming Trend in Global Temperature Time Series. Nature. 1991, vol. 350, pp. 324-327.

35. Allen M., Read P., Smith L. Temperature Time Series. Nature. 1992, vol. 355, p. 686.

36. Allen M., Read P., Smith L. Temperature Oscillation. Nature. 1992, vol. 359, p. 679.

37. Yiou P., Sornette D., Ghil M. Data-Adaptive Wavelets and Multi-Scale SSA. Physica D. 2000, vol. 142, pp. 254-290. doi: 10.1016/S0167-2789(00)00045-2

38. Varadi F., Pap J. M., Ulrich R. K., Bertello L., Hen-ney C. J. Searching for Signal in Noise by Random-Lag Singular Spectrum Analysis. The Astronomical J. 1999, vol. 526, pp. 1052-1061.

39. Golyandina N., Shlemov A. Variations of Singular Spectrum Analysis for Separability Improvement: Nonorthogonal Decompositions of Time Series. Statistical Interface. 2014, vol. 8, iss. 3, pp. 277-294. doi: 10.4310/SII.2015.v8.n3.a3

40. Golyandina N., Nekrutkin V., Zhigljavsky A. Analysis of Time Series Structure: SSA and Related Techniques. London, Chapman and Hall, 2001, 320 p.

41. Fowler M. Phase-Based Frequency Estimation: A Review. Digital Signal Processing. 2002, vol. 12, iss. 4, pp. 590-615. doi: 10.1006/dspr.2001.0415

42. Golub G., Milanfar P., Varah J. A Stable Numerical Method for Inverting Shape from Moments. SIAM Journal on Scientific Computing. 1999, vol. 21, iss. 4, pp. 1222-1243. doi: 10.1137/s1064827597328315

43. Kundu D., Mitra A. Estimating the Parameters of Exponentially Damped/Undamped Sinusoids in Noise: A Non-Iterative Approach. Signal Processing. 1995, vol. 46, iss. 3, pp. 363-368. doi: 10.1016/0165-1684(95)00094-6

44. Avdonin S., Bulanova A., Nicolsky D. Boundary Control Approach to the Spectral Estimation Problem. The Case of Simple Poles. Sampling Theory in Signal and Image Processing. 2009, vol. 8, iss. 3, pp. 225-248.

Information about the authors

Dmitry J. Nicolsky, Interdisciplinary Ph.D. from the University of Alaska Fairbanks (2007), a Research Assistant Professor (2013). The author of more than 40 scientific publications. Area of expertise: understanding of coupled ground-atmosphere-ocean processes in the Arctic and numerical solution of partial differential equations. Address: Geophysical Institute, University of Alaska Fairbanks, Fairbanks, PO Box 757320, Fairbanks, AK 99775, USA E-mail: djnicolsky@alaska.edu https://orcid.org/0000-0001-9866-1285

Gennadiy S. Tipenko, Cand. Sci. (Phys.-Math.) (1985), Associate Professor (1994), Leading Researcher at the Institute of Environmental Geoscience, Russian Academy of Sciences. The author of more than 40 scientific publications. Area of expertise: spectral theory of differential operators; numerical modeling in geocryology problems. Address: Institute of Environmental Geoscience, Russian Academy of Sciences, 13 Ulansky Pereulok, PO Box 145, Moscow 101000, Russia E-mail: gstipenko@mail.ru https://orcid.org/0000-0003-1137-5695
