
TOMSK STATE UNIVERSITY JOURNAL OF CONTROL AND COMPUTER SCIENCE

2021, No. 54

UDC 519.2

DOI: 10.17223/19988605/54/6

K.S. Kim, V.I. Smagin

IDENTIFICATION OF DISCRETE TIME SYSTEMS WITH RANDOM JUMP PARAMETERS AND INCOMPLETE INFORMATION

This work was supported by the RFBR grant No. 19-31-90080.

The identification problem for a discrete-time system with jump parameters is considered. The proposed approach uses estimates constructed by a Kalman extrapolator together with estimates of the unknown inputs in the state model and in the observation model. An example is given to illustrate the proposed approach. Keywords: identification algorithm; Markov chain; estimates; incomplete information.

Estimation and identification problems are relevant for many types of systems, for example economic systems [1, 2], energy systems [3, 4], flight systems [5], and communication systems [6, 7]. Such problems occupy a special place in fault detection [8-10].

In [11], the problem of filtering and simultaneous diagnostics of a jump parameter for discrete systems with multiplicative perturbations was considered. In this paper, we consider the problem of simultaneous state extrapolation and identification of a jump parameter, described by a Markov chain, which enters the description of a linear stochastic system.

The solution is obtained using the separation principle, the Kalman extrapolator, and the vector of estimates of the unknown input [12-15]. The transfer matrix of the extrapolator is selected by minimizing the sum of quadratic forms of the estimation errors. The identification problem is solved under incomplete information about the observations (an unknown input is present in the observation channel model).

A numerical example of solving the problem of extrapolation and identification for a linear system with a Markov jump parameter is given.

1. Problem statement

Consider the following linear discrete-time stochastic system with a jump parameter:

$x(k+1) = A_{\gamma(k)} x(k) + B_{\gamma(k)} u(k) + q_{\gamma(k)}(k), \quad x(0) = x_0,$ (1)

where $x(k) \in R^n$ is the state of the system, $u(k) \in R^m$ is the known input, $x_0$ is a random vector, $A_{\gamma(k)}$ and $B_{\gamma(k)}$ are matrices of corresponding dimensions; γ = γ(k) is a jump parameter not available to observation (a Markov chain with r states γ_1, ..., γ_r); $q_{\gamma(k)}(k)$ are random perturbations with the characteristics

$E\{q_{\gamma(k)}(k)\} = 0, \quad E\{q_{\gamma(k)}(k)\, q_{\gamma(k)}^{\mathrm T}(j) \mid \gamma(\xi) = \gamma(k),\ k \le \xi \le j\} = Q_{\gamma(k)}\, \delta_{k,j}.$

Here E{·} denotes the mathematical expectation, T denotes matrix transposition, and δ_{k,j} is the Kronecker delta. The probability of the states of the jump process, $p_j(k) = P\{\gamma(k) = \gamma_j\}$, j = 1, ..., r, satisfies the equation

$p_j(k+1) = \sum_{i=1}^{r} p_i(k)\, p_{i,j}, \quad p_j(0) = p_{j,0}, \quad j = 1, \ldots, r,$ (2)

where $p_{i,j}$ is the probability of transition from state i to state j in one step, and $p_{j,0}$ is the initial probability of the j-th state.
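For concreteness, a trajectory of the jump parameter and the mode probabilities (2) can be simulated as in the short sketch below. It is an illustrative NumPy transcription, not code from the paper; the uniform initial distribution is an assumption, and the transition matrix is the 3-mode one used in the example of Section 6.

```python
import numpy as np

# Transition matrix [p_{i,j}] from the numerical example (Section 6).
P = np.array([[0.4, 0.2, 0.4],
              [0.3, 0.5, 0.2],
              [0.3, 0.3, 0.4]])

rng = np.random.default_rng(0)

def simulate_chain(P, p0, T):
    """Draw a sample path gamma(0..T) and propagate p_j(k) by equation (2)."""
    r = P.shape[0]
    probs = np.zeros((T + 1, r))
    probs[0] = p0
    gamma = np.zeros(T + 1, dtype=int)
    gamma[0] = rng.choice(r, p=p0)
    for k in range(T):
        probs[k + 1] = probs[k] @ P            # p_j(k+1) = sum_i p_i(k) p_{i,j}
        gamma[k + 1] = rng.choice(r, p=P[gamma[k]])
    return gamma, probs

# Assumed uniform initial distribution over the three modes.
gamma, probs = simulate_chain(P, np.array([1/3, 1/3, 1/3]), 400)
```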

The observation vector with incomplete information is

$y(k) = S_{\gamma(k)} x(k) + H_{\gamma(k)} \psi(k) + v_{\gamma(k)}(k),$ (3)

where ψ(k) is an unknown input, $H_{\gamma(k)}$ is a matrix, and $v_{\gamma(k)}(k)$ is a Gaussian random sequence independent of $q_{\gamma(k)}(k)$, $x_0$ and γ(k) with the characteristics $E\{v_{\gamma(k)}(k)\} = 0$, $E\{v_{\gamma(k)}(k)\, v_{\gamma(k)}^{\mathrm T}(j) \mid \gamma(\xi) = \gamma(k),\ k \le \xi \le j\} = V_{\gamma(k)}\, \delta_{k,j}$. The pairs of matrices $A_i$, $S_i$ (i = 1, ..., r) are detectable.

It is required to determine the estimate of the parameter γ(k) (the identification problem) and to find the corresponding optimal extrapolation estimate of the state vector x(k+1) from the observations (3) received up to time k, for the following criterion defined on the interval k ∈ [0, T]:

$J[0,T,i] = E\Big\{ \sum_{k=0}^{T} e^{\mathrm T}(k)\, e(k) \,\Big|\, \gamma(0) = \gamma_i \Big\},$ (4)

where $e(k) = x(k) - \hat{x}(k)$ is the estimation error vector.
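Before turning to the estimation problem, it may help to see how a trajectory of model (1) and observations (3) can be generated. The sketch below is illustrative only (not the authors' code); the function names are hypothetical, and the mode-dependent matrices are assumed to be passed as lists indexed by the mode number.

```python
import numpy as np

rng = np.random.default_rng(1)

def state_step(x_k, u_k, i, A, B, Q):
    """One step of model (1) in mode i: x(k+1) = A_i x(k) + B_i u(k) + q_i(k)."""
    q = rng.multivariate_normal(np.zeros(len(x_k)), Q[i])
    return A[i] @ x_k + B[i] @ u_k + q

def observation(x_k, psi_k, i, S, H, V):
    """Observation (3): y(k) = S_i x(k) + H_i psi(k) + v_i(k)."""
    v = rng.multivariate_normal(np.zeros(S[i].shape[0]), V[i])
    return S[i] @ x_k + H[i] @ psi_k + v
```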

2. Insertion of an unknown input into model (1) under identification errors of γ

If the jump parameter were observed exactly, without errors, a classical Kalman extrapolator could be used to solve the extrapolation problem. It is not difficult to verify that, in the presence of an identification error of γ(k), an additional unknown input vector appears in model (1).

If the system (1) is in the j-th state (γ = γ_j), but this state is erroneously identified as the i-th (j ≠ i), then equation (1) can be represented as a model with an unknown input:

$x(k+1) = A_i x(k) + B_i u(k) + f(k) + q_i(k), \quad x(0) = x_0,$ (5)

where the unknown input vector is determined by the formula

$f(k) = (A_j - A_i)\, x(k) + (B_j - B_i)\, u(k) + q_j(k) - q_i(k).$ (6)

Here and below we denote the matrices $A_{\gamma(k)}$, $B_{\gamma(k)}$, $S_{\gamma(k)}$, $Q_{\gamma(k)}$, $H_{\gamma(k)}$, $V_{\gamma(k)}$ for γ(k) = γ_i by $A_i$, $B_i$, $S_i$, $Q_i$, $H_i$, $V_i$, respectively (i = 1, ..., r).
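If one wants to check numerically how large the mismatch input (6) is for a given pair of modes, a direct transcription might look as follows. This is an illustrative sketch; the argument names are not from the paper, and the noise samples q_j, q_i are assumed to be supplied by the caller.

```python
import numpy as np

def mismatch_input(x_k, u_k, q_j, q_i, j, i, A, B):
    """Unknown input (6) arising when the true mode is j but mode i is assumed."""
    return (A[j] - A[i]) @ x_k + (B[j] - B[i]) @ u_k + q_j - q_i
```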

3. Extrapolator synthesis

To solve the problem of one-step extrapolation of the state vector and estimation of the unknown inputs, we use the model representation in the form (5), the information from the observations (3), and the separation principle. This means that we first construct the estimate of the vector x(k+1) under the assumption that the vector f(k) and the value of the jump parameter γ(k) are known, and then construct the estimates f̂(k) and γ̂(k) under the assumption that the state estimate x̂(k) is known. We define the vector x̂(k+1) using the Kalman extrapolator:

$\hat{x}(k+1) = A_i \hat{x}(k) + B_i u(k) + \hat{f}(k) + K_i(k)\big(y(k) - S_i \hat{x}(k) - H_i \hat{\psi}(k)\big), \quad \hat{x}(0) = \bar{x}_0,$ (7)

where $K_i(k)$ (i = 1, ..., r) are the transfer matrices of the extrapolator, which we determine from the minimum of criterion (4) for k ∈ [0, T].
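A single step of the extrapolator (7), written for a general vector-valued observation, could be sketched as below. Treat it as an illustration under the reconstruction of (7) given above (in particular, the presence of the $B_i u(k)$ term), with hypothetical argument names.

```python
import numpy as np

def extrapolator_step(x_hat, u_k, y_k, f_hat, psi_hat, K, i, A, B, S, H):
    """One step of (7): mode-i model prediction plus the gain-weighted innovation."""
    innovation = y_k - S[i] @ x_hat - H[i] @ psi_hat
    return A[i] @ x_hat + B[i] @ u_k + f_hat + K @ innovation
```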

The analytic expressions for the matrices $K_i(k)$ are given by the following theorem.

Theorem. Let there exist positive definite matrices $N_i(k)$ (i = 1, ..., r) that are a solution to the Cauchy problem:

$N_i(k+1) = \big(A_i - K_i(k) S_i\big)\Big(\sum_{j=1}^{r} p_{i,j} N_j(k)\Big)\big(A_i - K_i(k) S_i\big)^{\mathrm T} + Q_i + K_i(k)\, V_i\, K_i(k)^{\mathrm T}, \quad N_i(0) = N_0.$ (8)

Then the optimal matrices $K_i(k)$ are determined as follows:

$K_i(k) = A_i \Big(\sum_{j=1}^{r} p_{i,j} N_j(k)\Big) S_i^{\mathrm T} \Big[ S_i \Big(\sum_{j=1}^{r} p_{i,j} N_j(k)\Big) S_i^{\mathrm T} + V_i \Big]^{-1}.$ (9)

Proof. We represent criterion (4) as the sum

$J[0,T,i] = \sum_{k=0}^{T} \operatorname{tr} N_i(k),$ (10)

where tr is the trace operation and the matrices $N_i(k) = E\{e(k)\, e^{\mathrm T}(k) \mid \gamma = \gamma_i\}$ (i = 1, ..., r) are determined from equation (8).

Introduce the Lyapunov function

$W(k, N_i(k)) = \operatorname{tr} N_i(k) + \operatorname{tr} \sum_{t=k}^{T} \big[\, Q_i + K_i(t)\, V_i\, K_i(t)^{\mathrm T} + \Omega_i(t) \,\big] L_i(t),$ (11)

where $\Omega_i(t) > 0$ are some matrices.

Additionally, we assume that there exist matrices $L_i(t) > 0$ satisfying the equations

$L_i(k) = \big(A_i - K_i(k) S_i\big)^{\mathrm T} \Big(\sum_{j=1}^{r} p_{i,j} L_j(k+1)\Big)\big(A_i - K_i(k) S_i\big) + I, \quad L_i(T) = H, \quad i = 1, \ldots, r,$ (12)

where I is the identity matrix and H > 0 is some matrix.

Let us sum the finite differences of the function $W(k, N_i(k))$ over k = t, ..., T−1, taking into account formula (12):

$\sum_{k=t}^{T-1} \Delta W(k, N_i(k)) = \sum_{k=t}^{T-1} \big[ W(k+1, N_i(k+1)) - W(k, N_i(k)) \big] = \sum_{k=t}^{T-1} \operatorname{tr} \Big[ N_i(k+1) L_i(k+1) - N_i(k) L_i(k) - \big( Q_i + K_i(k)\, V_i\, K_i(k)^{\mathrm T} + \Omega_i(k) \big) L_i(k) \Big].$ (13)

On the other hand, this expression can be represented as follows:

$\sum_{k=t}^{T-1} \Delta W(k, N_i(k)) = W(t+1, N_i(t+1)) - W(t, N_i(t)) + \ldots + W(T, N_i(T)) - W(T-1, N_i(T-1)) = \operatorname{tr} N_i(T) L_i(T) - \operatorname{tr} N_i(t) L_i(t) - \operatorname{tr} \sum_{k=t}^{T-1} \big[ Q_i + K_i(k)\, V_i\, K_i(k)^{\mathrm T} + \Omega_i(k) \big] L_i(k).$ (14)

Add to formula (10) the difference of the right-hand sides of (13) and (14). Since this difference is zero, criterion (10) takes the form:

$J[0,T,i] = \sum_{k=0}^{T} \operatorname{tr} N_i(k) - \sum_{k=0}^{T} \operatorname{tr} N_i(k) L_i(k) + \sum_{k=0}^{T-1} \operatorname{tr} \Big[ \big(A_i - K_i(k) S_i\big)\Big(\sum_{j=1}^{r} p_{i,j} N_j(k)\Big)\big(A_i - K_i(k) S_i\big)^{\mathrm T} + Q_i + K_i(k)\, V_i\, K_i(k)^{\mathrm T} \Big] L_i(k+1).$ (15)

Applying the rules for differentiating the trace of a matrix function [15], we calculate the derivative

$\frac{\partial J[0,T,i]}{\partial K_i(k)} = \frac{\partial}{\partial K_i(k)} \Big\{ \sum_{k=0}^{T} \operatorname{tr} N_i(k) - \sum_{k=0}^{T} \operatorname{tr} N_i(k) L_i(k) + \sum_{k=0}^{T-1} \operatorname{tr} \Big[ \big(A_i - K_i(k) S_i\big)\Big(\sum_{j=1}^{r} p_{i,j} N_j(k)\Big)\big(A_i - K_i(k) S_i\big)^{\mathrm T} + Q_i + K_i(k)\, V_i\, K_i(k)^{\mathrm T} \Big] L_i(k+1) \Big\} = \sum_{k=0}^{T-1} 2 \Big[ -L_i(k+1)\, A_i \Big(\sum_{j=1}^{r} p_{i,j} N_j(k)\Big) S_i^{\mathrm T} + L_i(k+1)\, K_i(k)\, S_i \Big(\sum_{j=1}^{r} p_{i,j} N_j(k)\Big) S_i^{\mathrm T} + L_i(k+1)\, K_i(k)\, V_i \Big].$ (16)

Setting this derivative to zero and requiring each summand to vanish, we obtain formula (9) for determining the matrices $K_i(k)$.

Now calculate the finite difference of the Lyapunov function

AW (k, N (k)) = W (k +1, N (k +1)) - W (k, Nt (k)) =

T

= tr Nt (k +1) + tr X [Q + K, (t)VK (t)T + Q,(t)]L,. (t) -

t=k+1 T

-tr N, (k) - trX[Q + K, (t)VK, (t)T +Q, (t)]L, (t) =

t=k

iНе можете найти то, что вам нужно? Попробуйте сервис подбора литературы.

= tr N (k +1) - tr N (k) - tr[Q. + K (k)VK (k)T +Q, (k)]L, (k). (17)

Since the Lyapunov function (11) is positive and, with an appropriate choice of the matrices $\Omega_i(t) > 0$, its finite difference (17) is negative, the stability of the extrapolator (7) is guaranteed. The theorem is proved.
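As a practical note, the coupled recursions (8)-(9) can be advanced jointly for all modes. The sketch below is an illustrative NumPy transcription (not the authors' code): N is a stack of the mode-conditioned matrices $N_i(k)$, P is the transition matrix $[p_{i,j}]$, and A, S, Q, V are per-mode model matrices.

```python
import numpy as np

def gain_and_covariance_step(N, P, A, S, Q, V):
    """One forward step of (8)-(9): from N_i(k) for all modes to K_i(k) and N_i(k+1)."""
    r = N.shape[0]
    K_new = []
    N_new = np.zeros_like(N)
    for i in range(r):
        M = sum(P[i, j] * N[j] for j in range(r))                 # sum_j p_{i,j} N_j(k)
        K = A[i] @ M @ S[i].T @ np.linalg.inv(S[i] @ M @ S[i].T + V[i])   # formula (9)
        F = A[i] - K @ S[i]
        N_new[i] = F @ M @ F.T + Q[i] + K @ V[i] @ K.T            # formula (8)
        K_new.append(K)
    return K_new, N_new
```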

4. Stationary extrapolator

In this case, the optimized criterion has the form

$J[0,\infty,i] = \lim_{T \to \infty} \frac{1}{T}\, E\Big\{ \sum_{k=0}^{T} e^{\mathrm T}(k)\, e(k) \,\Big|\, \gamma(0) = \gamma_i \Big\},$ (18)

the transfer matrices $K_i$ are constant and are determined from the following matrix algebraic equations:

$N_i = \big(A_i - K_i S_i\big)\Big(\sum_{j=1}^{r} p_{i,j} N_j\Big)\big(A_i - K_i S_i\big)^{\mathrm T} + Q_i + K_i V_i K_i^{\mathrm T},$ (19)

$K_i = A_i \Big(\sum_{j=1}^{r} p_{i,j} N_j\Big) S_i^{\mathrm T} \Big[ S_i \Big(\sum_{j=1}^{r} p_{i,j} N_j\Big) S_i^{\mathrm T} + V_i \Big]^{-1}.$ (20)

Thus, the stationary extrapolator takes the form

$\hat{x}(k+1) = A_i \hat{x}(k) + B_i u(k) + \hat{f}(k) + K_i \big( y(k) - S_i \hat{x}(k) - H_i \hat{\psi}(k) \big), \quad \hat{x}(0) = \bar{x}_0.$ (21)

Note that if there exist positive definite solutions $N_i$ (i = 1, ..., r) of the matrix equations (19), then the condition $Q_i + K_i V_i K_i^{\mathrm T} > 0$ implies, by Theorem 1.6 of [17], the stability of the stationary extrapolator (21).
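In practice the constant gains can be obtained by iterating (19)-(20) to a fixed point. The following sketch assumes such an iteration converges for the given data (which the positive-definiteness condition above is meant to guarantee); the function and argument names are hypothetical.

```python
import numpy as np

def stationary_gains(P, A, S, Q, V, n, iters=500, tol=1e-10):
    """Iterate the coupled equations (19)-(20) to a fixed point for constant gains K_i."""
    r = P.shape[0]
    N = np.stack([np.eye(n) for _ in range(r)])   # initial guess for the N_i
    for _ in range(iters):
        K, N_next = [], np.zeros_like(N)
        for i in range(r):
            M = sum(P[i, j] * N[j] for j in range(r))
            Ki = A[i] @ M @ S[i].T @ np.linalg.inv(S[i] @ M @ S[i].T + V[i])   # (20)
            F = A[i] - Ki @ S[i]
            N_next[i] = F @ M @ F.T + Q[i] + Ki @ V[i] @ Ki.T                  # (19)
            K.append(Ki)
        converged = np.max(np.abs(N_next - N)) < tol
        N = N_next
        if converged:
            break
    return K, N
```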

5. Unknown input and jump parameter estimation

To estimate the unknown inputs f(k) and ψ(k) we use least-squares (LSM) estimates; in this case the estimates can be constructed by minimizing the additional criteria [12], under the assumption that the value of the jump parameter is known (γ = γ_i):

$G_i(\psi(k)) = \sum_{t=1}^{k} \Big\{ \big\| y(t) - S_i \hat{x}(t) - H_i \psi(t-1) \big\|^2_{W_1} + \big\| \psi(t-1) \big\|^2_{W_2} \Big\},$ (22)

$G_i(f(k)) = \sum_{t=1}^{k} \Big\{ \big\| y(t) - H_i \hat{\psi}(t) - S_i \big( A_i \hat{x}(t-1) + B_i u(t-1) + f(t-1) \big) \big\|^2_{\bar{W}_1} + \big\| f(t-1) \big\|^2_{\bar{W}_2} \Big\},$ (23)

where $W_1$, $W_2$, $\bar{W}_1$, $\bar{W}_2$ are positive definite weight matrices.

In (23), the estimate ψ̂(t) (t = 1, ..., k) is determined by minimizing criterion (22):

$\hat{\psi}(k) = \big( H_i^{\mathrm T} W_1 H_i + W_2 \big)^{-1} H_i^{\mathrm T} W_1 \big[ y(k) - S_i \hat{x}(k) \big].$ (24)

Minimizing (23), we obtain the estimates of the unknown input:

$\hat{f}(k) = \big( S_i^{\mathrm T} \bar{W}_1 S_i + \bar{W}_2 \big)^{-1} S_i^{\mathrm T} \bar{W}_1 \big[ y(k) - H_i \hat{\psi}(k) - S_i \big( A_i \hat{x}(k-1) + B_i u(k-1) \big) \big], \quad i = 1, \ldots, r.$ (25)

The identification algorithm for the parameter γ(k) uses a smoothed estimate of the norm of the unknown input estimate (25), constructed by the method of exponential smoothing:

$\varphi(i, k+1) = \alpha \big\| \hat{f}_i(k) \big\| + (1 - \alpha)\, \varphi(i, k), \quad i = 1, \ldots, r,$ (26)

where α is a specified smoothing factor and $\hat{f}_i(k)$ is the estimate (25) computed under the hypothesis γ(k) = γ_i. Next, the value of i for which the smoothed norm φ(i, k) is minimal is determined; this value of i gives the estimate γ̂(k) of the jump parameter.
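Putting formulas (24)-(26) together, one identification step can be sketched as follows. This is an interpretation in which ψ̂ and f̂ are computed for every candidate mode i and the smoothed norm of f̂_i drives the mode choice; the names, argument order and per-mode bookkeeping are assumptions for illustration, and all weights are assumed to be passed as matrices of conforming sizes.

```python
import numpy as np

def identify_mode(y_k, x_hat_prev, x_hat, u_prev, A, B, S, H,
                  W1, W2, W1b, W2b, phi, alpha):
    """Per-mode estimates psi_hat (24) and f_hat (25), smoothing (26), and mode choice."""
    r = len(A)
    psi_hat, f_hat = [], []
    for i in range(r):
        # (24): psi_hat_i(k) = (H_i' W1 H_i + W2)^{-1} H_i' W1 (y(k) - S_i x_hat(k))
        psi_i = np.linalg.solve(H[i].T @ W1 @ H[i] + W2,
                                H[i].T @ W1 @ (y_k - S[i] @ x_hat))
        # (25): f_hat_i(k) from the residual of the one-step mode-i prediction
        resid = y_k - H[i] @ psi_i - S[i] @ (A[i] @ x_hat_prev + B[i] @ u_prev)
        f_i = np.linalg.solve(S[i].T @ W1b @ S[i] + W2b, S[i].T @ W1b @ resid)
        # (26): exponential smoothing of the input-estimate norm
        phi[i] = alpha * np.linalg.norm(f_i) + (1 - alpha) * phi[i]
        psi_hat.append(psi_i)
        f_hat.append(f_i)
    i_star = int(np.argmin(phi))                 # identified mode: minimal smoothed norm
    return i_star, psi_hat[i_star], f_hat[i_star], phi
```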

6. Numerical simulation

Consider the problem of modeling a filter for a discrete-time stochastic system with a two-dimensional state vector and a 3-mode Markovian jump parameter γ(k) (γ_1 = 1, γ_2 = 2, γ_3 = 3) with the transition probability matrix

$[p_{i,j}] = \begin{pmatrix} 0.4 & 0.2 & 0.4 \\ 0.3 & 0.5 & 0.2 \\ 0.3 & 0.3 & 0.4 \end{pmatrix}.$

The simulation was performed on the time interval k ∈ [0, 400]. Consider system (1) with the data:

$A_1 = \begin{pmatrix} 0.65 & 0.12 \\ -0.04 & 0.52 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 0.85 & 0.1 \\ -0.05 & 0.74 \end{pmatrix}, \quad A_3 = \begin{pmatrix} 0.25 & 0.03 \\ -0.02 & 0.3 \end{pmatrix},$

$B_1 = B_2 = B_3 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad u(k) = \begin{pmatrix} 0.5 \\ 0.7 \end{pmatrix},$

$Q_1 = \begin{pmatrix} 0.01 & 0 \\ 0 & 0.02 \end{pmatrix}, \quad Q_2 = \begin{pmatrix} 0.02 & 0 \\ 0 & 0.02 \end{pmatrix}, \quad Q_3 = \begin{pmatrix} 0.005 & 0 \\ 0 & 0.01 \end{pmatrix}.$

In the simulation, the unknown input ψ(k) was set in accordance with the formula

$\psi(k) = \begin{cases} 0.2 + 0.01\cos(k)\,\xi(k), & k < 150, \\ 0, & 150 \le k < 280, \\ -0.2 + 0.01\cos(k)\,\xi(k), & k \ge 280, \end{cases}$

where ξ(k) is a random variable with E{ξ(k)} = 0, E{ξ(k)ξ(j)} = δ_{k,j}. The data describing the observation vector (3) are as follows:

$S_1 = S_2 = S_3 = \begin{pmatrix} 1 & 1 \end{pmatrix}, \quad H_1 = H_2 = H_3 = 1, \quad V_1 = V_2 = V_3 = 0.1.$

The weight matrices of criteria (22) and (23) were taken as

$W_1 = 1, \quad W_2 = 0.1, \quad \bar{W}_1 = 1, \quad \bar{W}_2 = \begin{pmatrix} 0.1 & 0 \\ 0 & 0.1 \end{pmatrix}.$
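For reference, the example data can be collected in one place as in the sketch below. The numerical values follow the (partly garbled) listing above, so the assignment of some entries, in particular u(k) and the pairing of the Q_i with the modes, should be treated as an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Example data as reconstructed above (some assignments are assumptions).
P = np.array([[0.4, 0.2, 0.4], [0.3, 0.5, 0.2], [0.3, 0.3, 0.4]])
A = [np.array([[0.65, 0.12], [-0.04, 0.52]]),
     np.array([[0.85, 0.10], [-0.05, 0.74]]),
     np.array([[0.25, 0.03], [-0.02, 0.30]])]
B = [np.eye(2)] * 3
Q = [np.diag([0.01, 0.02]), np.diag([0.02, 0.02]), np.diag([0.005, 0.01])]
S = [np.array([[1.0, 1.0]])] * 3
H = [np.array([[1.0]])] * 3
V = [np.array([[0.1]])] * 3
u = np.array([0.5, 0.7])        # constant known input (assumed assignment)

def psi(k):
    """Unknown observation input: piecewise signal with a small random modulation."""
    xi = rng.standard_normal()
    if k < 150:
        return 0.2 + 0.01 * np.cos(k) * xi
    if k < 280:
        return 0.0
    return -0.2 + 0.01 * np.cos(k) * xi
```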

The extrapolation estimates are calculated according to equations (19)-(21), in which the estimate of the unknown input is determined by formula (25). The jump parameter was estimated using the algorithm described in Section 5.

The simulation results are presented in Fig. 1 and 2. These results illustrate the quality of estimation of the jump parameter γ(k) and the vector x(k).

Fig. 1. The jump parameter γ(k) and its estimate γ̂(k)

Fig. 1 shows that identification errors occur at the moments when the value of the jump parameter changes. Using the statistical modeling method (over 100 realizations), the percentage of erroneous estimates of the parameter γ(k) is 3.28%.

Fig. 2. Smoothed values of the norms φ(i, k) (i = 1, 2, 3)

Tables 1 and 2 compare the standard errors of the estimates of the state vector x(k) and of the unknown input vector f(k) for the recurrent extrapolation algorithm that uses the estimate ψ̂(k) and for the extrapolation algorithm that does not use this estimate. Averaging was performed over 100 realizations. The standard errors of the estimates were calculated according to the formulas (l = 1, 2):

$\sigma_{x,l} = \sqrt{ \frac{\sum_{k=1}^{N} \big( x_l(k) - \hat{x}_l(k) \big)^2}{N-1} }, \qquad \sigma_{f,l} = \sqrt{ \frac{\sum_{k=1}^{N} \big( f_l(k) - \hat{f}_l(k) \big)^2}{N-1} }.$
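The per-component standard errors can be computed from one realization as in the following sketch (averaging the resulting values over the 100 realizations is then a simple mean); the function name is illustrative.

```python
import numpy as np

def standard_errors(x_true, x_hat):
    """Per-component standard errors over one realization: rows are time steps k = 1..N."""
    err = x_true - x_hat                      # shape (N, n)
    N = err.shape[0]
    return np.sqrt(np.sum(err ** 2, axis=0) / (N - 1))
```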

Table 1. Standard errors σ_{x,l} of the state vector x(k)

Component l    With estimate ψ̂(k)    Without estimate ψ̂(k)
1              0.346                  0.354
2              0.273                  0.356

Table 2. Standard errors σ_{f,l} of the input vector f(k)

Component l    With estimate ψ̂(k)    Without estimate ψ̂(k)
1              0.287                  0.329
2              0.186                  0.252

The results in Tables 1 and 2 show that constructing and using the estimate of the unknown input ψ(k) in the proposed estimation algorithm improves the accuracy of estimating the state vector x(k) and the input vector f(k), which appears when the parameter γ is identified with error.

Conclusion

The solution to the problem of synthesizing the extrapolation algorithm and identifying the state of the jump parameter included in the description of a linear discrete system has been obtained. The problem is solved by introducing an unknown input vector into the system model, which appears when the jump parameter is identified with error. The simulation results confirm the effectiveness of the proposed algorithm.

References

1. Cajueiro, D.O. (2002) Stochastic optimal control of jumping Markov parameter processes with applications to finance. Ph.D. Thesis. Instituto Tecnologico de Aeronautica - ITA, Brazil.

2. Dombrovskii, V., Obyedko, T. & Samorodova, M. (2018) Model predictive control of constrained Markovian jump nonlinear stochastic systems and portfolio optimization under market frictions. Automatica. 87. pp. 61-68. DOI: 10.1016/j.automatica.2017.09.018

3. Ugrinovskii, V.A. & Pota, H.R. (2005) Decentralized control of power systems via robust control of uncertain Markov jump parameter systems. International Journal of Control. 78. pp. 662-677. DOI: 10.1109/CDC.2004.1429255

4. Sales-Setien, E. & Penarrocha-Alos, I. (2019) Markovian jump system approach for the estimation and adaptive diagnosis of decreased power generation in wind farms. IET Control Theory and Applications. 13(18). pp. 3006-3018. DOI: 10.1049/iet-cta.2018.6199

5. Zhang, H., Gray, W.S. & Gonzalez, O.R. (2005) Performance analysis of recoverable flight control systems using hybrid dynamical models. Proc. American Control Conference 2005 (ACC). Portland, June 8-10, 2005. pp. 2787-2792.

6. Zhu, Y., Zhong, Z., Zheng, W.X. et al. (2018) HMM-based H-infinity filtering for discrete-time Markov jump LPV systems over unreliable communication channels. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 48(12). pp. 2035-2046. DOI: 10.1109/TSMC.2017.2723038

7. Wang, J., Yao, F. & Shen, H. (2014) Dissipativity-based state estimation for Markov jump discrete-time neural networks with unreliable communication links. Neurocomputing. 139(SI). pp. 107-113. DOI: 10.1016/j.neucom.2014.02.055

8. Wang, H., Wang, C., Gao, H. & Wu, L. (2006) An LMI approach to fault detection and isolation filter design for Markovian jump system with mode-dependent time-delays. Proc. of the American Control Conference. Minneapolis, USA. pp. 5686-5691. DOI: 10.1109/ACC.2006.1657631

9. Yao, X., Wu, L. & Zheng, W.X. (2011) Fault detection filter design for Markovian jump singular systems with intermittent measurements. IEEE Transactions on Signal Processing. 59(7). pp. 3099-3109. DOI: 10.1109/TSP.2011.2141666

10. Gagliardi, G., Casavola, A. & Famularo, D. (2012) A fault detection and isolation filter design method for Markov jump linear parameter-varying systems. International Journal of Adaptive Control and Signal Processing. 26(3, SI). pp. 241-257. DOI: 10.1002/acs.1261

11. Kim, K.S. & Smagin, V.I. (2020) Filtration and diagnostics in discrete stochastic systems with jump parameters and multiplicative perturbations. Vestnik Tomskogo gosudarstvennogo universiteta. Upravlenie, vychislitelnaya tekhnika i informatika - Tomsk State University Journal of Control and Computer Science. 51. pp. 79-86. DOI: 10.17223/19988605/51/9

12. Gillijns, S. & De Moor, B. (2007) Unbiased minimum-variance input and state estimation for linear discrete-time systems. Automatica. 43. pp. 111-116. DOI: 10.1016/j.automatica.2006.08.002

13. Smagin, V.I. & Smagin, S.V. (2011) Filtering for linear non-stationary discrete systems with unknown disturbances. Vestnik Tomskogo gosudarstvennogo universiteta. Upravlenie, vychislitelnaya tekhnika i informatika - Tomsk State University Journal of Control and Computer Science. 16(3). pp. 43-50.

14. Smagin, V.I. (2017) Prediction of states of discrete systems with unknown input of the model using compensation. Russian Physics Journal. 59(9). pp. 1507-1514. DOI: 10.1007/s11182-017-0937-6

15. Koshkin, G. & Smagin, V. (2016) Kalman filtering and forecasting algorithms with use of nonparametric functional estimators. Springer Proceedings in Mathematics & Statistics. 175. pp. 75-84. DOI: 10.1007/978-3-319-41582-6_6

16. Athans, M. (1968) The matrix minimum principle. Information and Control. 11. pp. 592-606.

17. Li, F., Shi, P. & Wu, L. (2016) Control and Filtering for Semi-Markovian Jump Systems. New York: Springer.

Received: September 28, 2020

Kim K.S., Smagin V.I. (2021) IDENTIFICATION OF DISCRETE TIME SYSTEMS WITH RANDOM JUMP PARAMETERS AND INCOMPLETE INFORMATION. Vestnik Tomskogo gosudarstvennogo universiteta. Upravlenie, vychislitelnaja tehnika i informatika [Tomsk State University Journal of Control and Computer Science]. 54. pp. 48-55

DOI: 10.17223/19988605/54/6


KIM Konstantin Stanislavovich (Post-graduate Student, National Research Tomsk State University, Tomsk, Russian Federation). E-mail: [email protected]

SMAGIN Valery Ivanovich (Doctor of Technical Science, Professor of the National Research Tomsk State University, Professor of the Tomsk State University of Control Systems and Radioelectronics, Tomsk, Russian Federation). E-mail: [email protected]
