
Вестник Томского государственного университета. Управление, вычислительная техника и информатика. 2011. № 3(16)

CONTROL OF DYNAMICAL SYSTEMS

UDC 519.865.5

V.V. Dombrovskii, T.U. Obyedko

MODEL PREDICTIVE CONTROL OF CONSTRAINED SYSTEMS WITH NONLINEAR STOCHASTIC PARAMETERS

In this paper we consider the model predictive control problem for discrete-time systems with nonlinear random dependent parameters for which only the first and second conditional distribution moments, the conditional autocorrelations and the mutual cross-correlations are known. The open-loop feedback control strategy is derived subject to hard constraints on the control variables. The approach is advantageous because the rich arsenal of nonlinear estimation methods, or the results of nonparametric estimation, can be used directly to describe the characteristics of the random parameter sequences.

Keywords: discrete time control systems, model predictive control (MPC), nonlinear stochastic parameters, multiplicative noise, constraints, computational methods, stochastic control.

Recently there has been a steadily growing interest in systems with stochastic parameters and/or multiplicative noise. Such systems have been gaining acceptance in many engineering and finance applications.

Optimization techniques for various control and estimation problems for such systems have been intensively studied in the literature.

In particular, linear quadratic control of systems with random independent parameters is studied in [1-3]. In [4-8] the authors consider stochastic optimal control problems for systems with dependent parameters that switch according to a Markov chain.

In the above-mentioned papers there are no constraints on the state and control variables. However, constraints arise naturally in many real-world applications.

In recent years considerable interest has been focused on model predictive control (MPC), also known as receding horizon control (RHC), as an appropriate and effective technique for solving dynamic control problems with input and state/output constraints. The basic concept of MPC is to solve an open-loop constrained optimization problem at each time instant and to implement only the first control move of the solution. The procedure is then repeated at the next time instant. Some recent works on this subject can be found in [9-20].

MPC for constrained discrete-time linear systems with random independent parameters is considered in [15, 16]. In [17, 18] MPC of linear systems with random dependent parameters under constraints is examined, where the parameter evolution is described by multidimensional stochastic difference equations. In [19] the MPC problem for discrete-time Markov jump linear systems with multiplicative noise, subject to constraints on the control variables, is solved.

In this paper we consider MPC for constrained discrete-time systems with nonlinear stochastic parameters. The main novelty of the paper is that no assumption is made about the form of the function describing the parameter evolution; only the first and second conditional distribution moments, the conditional autocorrelations and the mutual cross-correlations are assumed known. The performance criterion is a linear combination of a quadratic part and a linear part. The open-loop feedback control strategy is derived subject to hard constraints on the input variables. Computation of the predictive strategies reduces to solving a sequence of quadratic programming problems.

The approach is advantageous because the rich arsenal of nonlinear estimation methods can be applied; moreover, if no model adequately describing the underlying nonlinear structure can be found, the results of nonparametric estimation may be used directly to describe the characteristics of the nonlinear time series.

1. Problem formulation

We consider the following discrete-time system with nonlinear stochastic parameters on the probability space (Ω, F, P):

$$x(k+1) = A\,x(k) + B[\eta(k+1),\,k+1]\,u(k), \qquad (1)$$

where $x(k)$ is the $n_x$-dimensional state vector, $u(k)$ is the $n_u$-dimensional control vector; $\eta(k)$ ($k = 0, 1, 2, \ldots$) denotes a sequence of dependent $n_\eta$-dimensional random vectors; $A$ and $B[\eta(k), k]$ are matrices of appropriate dimensions, and $B[\eta(k), k]$ depends linearly on $\eta(k)$.

Let $\mathbb{F} = (F_k)_{k \ge 1}$ be the flow of sigma-algebras defined on (Ω, F, P), where $F_k$ denotes the sigma-algebra generated by $\{(x(s), \eta(s)) : s = 0, 1, 2, \ldots, k\}$, i.e. the information (measurements) available up to time $k$.

We assume that the conditional moments of the process $\eta(k)$ with respect to $F_k$ are known:

$$M\{\eta(k+i) \mid F_k\} = \bar{\eta}(k+i), \qquad (2)$$

$$M\bigl\{[\eta(k+i) - \bar{\eta}(k+i)][\eta(k+j) - \bar{\eta}(k+j)]^T \mid F_k\bigr\} = \Theta_{ij}(k), \quad k = 0, 1, 2, \ldots, \quad i, j = 1, 2, \ldots. \qquad (3)$$

In what follows we use the notation: for any matrix $\psi[\eta(k), k]$ depending on $\eta(k)$, $\bar{\psi}(k) = M\{\psi[\eta(k), k] \mid F_k\}$, without indicating the explicit dependence of matrices on $\eta(k)$. We also use the standard notation: for a square matrix $A$, $A \ge 0$ ($A > 0$, respectively) denotes that $A$ is positive semidefinite (positive definite), and $\mathrm{tr}(A)$ denotes the trace of $A$.
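To fix ideas, the following sketch (Python/NumPy) builds a toy instance of system (1) and estimates the conditional moments (2)-(3) by Monte Carlo simulation of the parameter process. The scalar parameter dynamics $\eta(k+1) = \tanh(1.5\eta(k)) + 0.2 w(k)$ and the affine form $B[\eta, k] = B_0 + \eta B_1$ are assumptions made only for this illustration; they are not part of the problem statement, which requires nothing beyond the moments (2)-(3).

```python
# Illustrative sketch (not from the paper): a toy instance of system (1) with a
# scalar nonlinear, dependent parameter process eta, and Monte Carlo estimates
# of the conditional moments (2)-(3).
import numpy as np

rng = np.random.default_rng(0)

A  = np.array([[1.0, 0.1],
               [0.0, 0.9]])                      # state matrix of (1)
B0 = np.array([[0.0], [1.0]])                    # assumed form B[eta, k] = B0 + eta*B1
B1 = np.array([[0.1], [0.5]])

def B(eta):
    return B0 + eta * B1

def eta_next(eta, w):
    # hypothetical nonlinear, dependent parameter dynamics (illustration only)
    return np.tanh(1.5 * eta) + 0.2 * w

def step(x, u, eta, w):
    eta_new = eta_next(eta, w)                   # eta(k+1)
    return A @ x + B(eta_new) @ u, eta_new       # x(k+1) from (1)

def conditional_moments(eta_k, horizon, n_mc=20000):
    """Monte Carlo estimates of (2)-(3) given eta(k), for i = 1..horizon."""
    paths = np.empty((n_mc, horizon))
    for n in range(n_mc):
        e = eta_k
        for i in range(horizon):
            e = eta_next(e, rng.standard_normal())
            paths[n, i] = e
    eta_bar = paths.mean(axis=0)                 # conditional means \bar{eta}(k+i)
    centered = paths - eta_bar
    Theta = centered.T @ centered / n_mc         # Theta_{ij}(k) for scalar eta
    return eta_bar, Theta

x, eta = np.array([1.0, 0.0]), 0.3
x, eta = step(x, np.array([0.2]), eta, rng.standard_normal())   # one step of (1)
eta_bar, Theta = conditional_moments(eta_k=eta, horizon=3)
print("conditional means:", eta_bar)
print("conditional auto/cross-covariances:\n", Theta)
```

In practice the moments (2)-(3) would come from a nonlinear or nonparametric estimator of the parameter process; the Monte Carlo routine above merely stands in for such an estimator in this toy setting.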

We impose the following inequality constraints on the control:

$$u_{\min}(k) \le S(k)\,u(k) \le u_{\max}(k), \qquad (4)$$

where $S(k)$ is a matrix of appropriate dimensions.

The cost function of the RHC is defined as a linear combination of a quadratic part and a linear part, which is to be minimized at every time $k$:

$$J(k+m/k) = \sum_{i=1}^{m} M\bigl\{x^T(k+i)R_1(k,i)x(k+i) - R_3(k,i)x(k+i) \mid F_k\bigr\} + \sum_{i=0}^{m-1} M\bigl\{u^T(k+i/k)R_2(k,i)u(k+i/k) - R_4(k,i)u(k+i/k) \mid F_k\bigr\}, \qquad (5)$$

on trajectories of system (1) over the sequence of predictive controls $u(k/k), \ldots, u(k+m-1/k)$ dependent on the state $x(k)$, under constraints (4); $R_1(k,i) \ge 0$, $R_2(k,i) > 0$, $R_3(k,i) \ge 0$, $R_4(k,i) \ge 0$ are given symmetric weight matrices of corresponding dimensions; $m$ is the prediction horizon.

Only the first control vector $u(k/k)$ is actually applied. We thereby obtain the control $u(k)$ as a function of the state $x(k)$, i.e. a feedback control. The optimization is solved again at the next time instant $k+1$ to obtain the control $u(k+1)$. The synthesis of the predictive control strategies thus reduces to a sequence of quadratic programming problems.
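The receding-horizon procedure just described can be summarised by the following minimal sketch. The helpers solve_predictive_qp, simulate_plant and observe are hypothetical placeholders: the first stands for the quadratic program derived in the Theorem of the next section, the other two for the plant (1) and for the measurements that generate $F_k$.

```python
def mpc_step(x_k, info_k, solve_predictive_qp, m, n_u):
    """Solve the m-step open-loop problem at time k and keep only u(k/k)."""
    U = solve_predictive_qp(x_k, info_k, m)   # stacked [u(k/k); ...; u(k+m-1/k)]
    return U[:n_u]                            # first move only, cf. (18)

def run_receding_horizon(x0, simulate_plant, observe, solve_predictive_qp,
                         m, n_u, n_steps):
    x, history = x0, []
    for k in range(n_steps):
        info_k = observe(k)                                   # data generating F_k
        u = mpc_step(x, info_k, solve_predictive_qp, m, n_u)  # open-loop QP at time k
        x = simulate_plant(x, u, k)                           # apply u(k), plant moves
        history.append((x, u))
    return history            # the optimization is repeated at k+1, k+2, ...
```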

2. Model predictive control strategies design

Consider the problem of minimizing the objective (5) with respect to the predictive control variables u(k+i/k), subject to constraints (4).

Theorem. The set of predictive controls $U(k) = [u^T(k/k), \ldots, u^T(k+m-1/k)]^T$ that minimizes the objective (5) subject to (4) is determined, at each time instant $k$, by solving the quadratic programming problem with the criterion

$$\tilde{J}(k+m/k) = \bigl[2x^T(k)G(k) - F(k)\bigr]U(k) + U^T(k)H(k)U(k) \qquad (6)$$

under the constraints

$$U_{\min}(k) \le \bar{S}(k)\,U(k) \le U_{\max}(k), \qquad (7)$$

where

$$\bar{S}(k) = \operatorname{diag}\bigl(S(k), \ldots, S(k+m-1)\bigr), \quad U_{\min}(k) = [u_{\min}(k), \ldots, u_{\min}(k+m-1)]^T, \quad U_{\max}(k) = [u_{\max}(k), \ldots, u_{\max}(k+m-1)]^T, \qquad (8)$$

and $H(k)$, $G(k)$, $F(k)$ are the block matrices

$$H(k) = \begin{bmatrix} H_{11}(k) & H_{12}(k) & \cdots & H_{1m}(k) \\ H_{21}(k) & H_{22}(k) & \cdots & H_{2m}(k) \\ \vdots & \vdots & \ddots & \vdots \\ H_{m1}(k) & H_{m2}(k) & \cdots & H_{mm}(k) \end{bmatrix}, \qquad (9)$$

$$G(k) = [G_1(k)\;\; G_2(k)\;\; \cdots\;\; G_m(k)], \qquad F(k) = [F_1(k)\;\; F_2(k)\;\; \cdots\;\; F_m(k)], \qquad (10)$$

where the blocks are given by the following relations:

$$H_{tt}(k) = R_2(k,t-1) + \bar{B}^T(k+t)\,Q(m-t)\,\bar{B}(k+t) + M\bigl\{\tilde{B}^T(k+t)\,Q(m-t)\,\tilde{B}(k+t) \mid F_k\bigr\}, \qquad (11)$$

$$H_{tf}(k) = \bar{B}^T(k+t)(A^T)^{f-t}Q(m-f)\bar{B}(k+f) + M\bigl\{\tilde{B}^T(k+t)(A^T)^{f-t}Q(m-f)\tilde{B}(k+f) \mid F_k\bigr\}, \quad t < f, \qquad (12)$$

$$H_{tf}(k) = H_{ft}^T(k), \quad t > f, \qquad (13)$$

$$G_t(k) = (A^t)^T Q(m-t)\,\bar{B}(k+t), \qquad (14)$$

$$F_t(k) = \sum_{j=t}^{m} R_3(k,j)A^{j-t}\bar{B}(k+t) + R_4(k,t-1), \qquad (15)$$

$$Q(t) = A^T Q(t-1)A + R_1(k,m-t), \quad t = 1, \ldots, m, \qquad (16)$$

$$Q(0) = R_1(k,m); \qquad (17)$$

here $\bar{B}(k+t) = M\{B[\eta(k+t),k+t] \mid F_k\}$ and $\tilde{B}(k+t) = B[\eta(k+t),k+t] - \bar{B}(k+t)$.

The optimal control law is given by

$$u(k) = [\,I_{n_u}\;\; 0_{n_u}\;\; \cdots\;\; 0_{n_u}\,]\,U(k), \qquad (18)$$

where $I_{n_u}$ is the $n_u \times n_u$ identity matrix and $0_{n_u}$ is the $n_u \times n_u$ zero matrix.
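The construction above is straightforward to implement. The sketch below is illustrative rather than the authors' implementation: it assumes a scalar parameter $\eta$ with the affine dependence $B[\eta, k] = B_0 + \eta B_1$ (a particular case of the linear dependence of $B$ on $\eta$ assumed in Section 1), so that the conditional expectations involving $\tilde{B}$ reduce to $\Theta$-weighted products of $B_1$; the weights $R_1, \ldots, R_4$ are supplied as user callables, and $S(k)$, $u_{\min}(k)$, $u_{\max}(k)$ are taken constant over the horizon. The QP (6)-(7) is handed to SciPy's trust-constr solver, but any quadratic programming solver could be used.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

def predictive_control(x, A, B0, B1, eta_bar, Theta,
                       R1, R2, R3, R4, S, u_min, u_max, m):
    """One MPC step: build Q(t), H(k), G(k), F(k) and solve the QP (6)-(7).

    Illustrative assumptions: scalar parameter with B[eta,k] = B0 + eta*B1;
    eta_bar[i] is the conditional mean of eta(k+i+1), Theta[i, j] the
    conditional (cross-)covariance; R1..R4 are callables returning the weights.
    """
    nx, nu = A.shape[0], B0.shape[1]

    # Q(t) recursion, equations (16)-(17)
    Q = [np.asarray(R1(m))]
    for t in range(1, m + 1):
        Q.append(A.T @ Q[t - 1] @ A + R1(m - t))

    Ap = [np.linalg.matrix_power(A, t) for t in range(m + 1)]   # powers A^t
    Bbar = [B0 + eta_bar[i] * B1 for i in range(m)]             # conditional means of B

    H = np.zeros((m * nu, m * nu))
    G = np.zeros((nx, m * nu))
    F = np.zeros(m * nu)
    for t in range(1, m + 1):
        st = slice((t - 1) * nu, t * nu)
        # diagonal block H_tt: mean part + Theta-weighted covariance part
        H[st, st] = (R2(t - 1)
                     + Bbar[t - 1].T @ Q[m - t] @ Bbar[t - 1]
                     + Theta[t - 1, t - 1] * (B1.T @ Q[m - t] @ B1))
        # blocks G_t(k) and F_t(k)
        G[:, st] = Ap[t].T @ Q[m - t] @ Bbar[t - 1]
        F[st] = sum(np.asarray(R3(j)) @ Ap[j - t] @ Bbar[t - 1]
                    for j in range(t, m + 1)) + R4(t - 1)
        for f in range(t + 1, m + 1):
            sf = slice((f - 1) * nu, f * nu)
            AT = Ap[f - t].T
            # off-diagonal block H_tf (t < f) and its transpose
            H[st, sf] = (Bbar[t - 1].T @ AT @ Q[m - f] @ Bbar[f - 1]
                         + Theta[t - 1, f - 1] * (B1.T @ AT @ Q[m - f] @ B1))
            H[sf, st] = H[st, sf].T

    # quadratic program (6) under the stacked constraints (7)
    c = 2.0 * (x @ G) - F
    S_bar = np.kron(np.eye(m), S)                  # constant S(k) assumed here
    lin = LinearConstraint(S_bar, np.tile(u_min, m), np.tile(u_max, m))
    res = minimize(lambda U: U @ H @ U + c @ U, np.zeros(m * nu),
                   jac=lambda U: 2.0 * H @ U + c,
                   hess=lambda U: 2.0 * H,
                   constraints=[lin], method="trust-constr")
    return res.x[:nu], res.x                       # u(k) by (18), full U(k)
```

Under the stated parameterization the $\Theta$-weighted terms are exactly the conditional expectations of the $\tilde{B}$-products in the blocks, since $\tilde{B} = (\eta - \bar{\eta})B_1$; for vector-valued $\eta$ these single products become double sums over the components of $\Theta_{tf}(k)$, with one matrix $B_l$ per component, and nothing else changes.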

In the case of unconstrained control the optimal control strategy for system (1) is given by equation (18), where

$$U(k) = -H^{-1}(k)\Bigl[G^T(k)\,x(k) - \tfrac{1}{2}F^T(k)\Bigr]. \qquad (19)$$

In this case the optimal value of the criterion is given by

$$J_{\mathrm{opt}}(k+m/k) = x^T(k)\bigl[Q(m) - R_1(k,0) - G(k)H^{-1}(k)G^T(k)\bigr]x(k) - L(k)x(k) - \tfrac{1}{4}F(k)H^{-1}(k)F^T(k), \qquad (20)$$

where $L(k) = \sum_{i=1}^{m} R_3(k,i)A^i$.
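For completeness, a minimal sketch of the unconstrained formula (19), reusing the names H, G, F of the previous listing: it simply solves the stationarity condition $2H(k)U + 2G^T(k)x(k) - F^T(k) = 0$ of the criterion (6).

```python
import numpy as np

def unconstrained_predictive_control(x, H, G, F, nu):
    # U(k) = -H^{-1}(k) [ G^T(k) x(k) - (1/2) F^T(k) ], equation (19)
    U = -np.linalg.solve(H, G.T @ x - 0.5 * F)
    return U[:nu]                                 # u(k) by (18)
```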

Proof. Denote

$$\begin{aligned}
J_{k+s} ={}& x^T(k+s+1)R_1(k,s+1)x(k+s+1) - R_3(k,s+1)x(k+s+1) \\
&+ u^T(k+s)R_2(k,s)u(k+s) - R_4(k,s)u(k+s) \\
&+ x^T(k+s+2)R_1(k,s+2)x(k+s+2) - R_3(k,s+2)x(k+s+2) \\
&+ u^T(k+s+1)R_2(k,s+1)u(k+s+1) - R_4(k,s+1)u(k+s+1) + \ldots \\
&+ x^T(k+m)R_1(k,m)x(k+m) - R_3(k,m)x(k+m) \\
&+ u^T(k+m-1)R_2(k,m-1)u(k+m-1) - R_4(k,m-1)u(k+m-1).
\end{aligned}$$

It is easy to see that

$$J_{k+s} = x^T(k+s+1)R_1(k,s+1)x(k+s+1) - R_3(k,s+1)x(k+s+1) + u^T(k+s)R_2(k,s)u(k+s) - R_4(k,s)u(k+s) + J_{k+s+1} \qquad (21)$$

and

$$J(k+m/k) = M\{J_k \mid F_k\}. \qquad (22)$$

Let us consider the following equation

$$J_{k+m-1} = x^T(k+m)R_1(k,m)x(k+m) - R_3(k,m)x(k+m) + u^T(k+m-1)R_2(k,m-1)u(k+m-1) - R_4(k,m-1)u(k+m-1). \qquad (23)$$

Substituting (1) for x(k+m) into (23), we obtain

$$\begin{aligned}
J_{k+m-1} ={}& x^T(k+m-1)A^T Q(0)A\,x(k+m-1) + 2x^T(k+m-1)A^T Q(0)B[\eta(k+m),k+m]\,u(k+m-1) \\
&+ u^T(k+m-1)\bigl[B^T[\eta(k+m),k+m]\,Q(0)\,B[\eta(k+m),k+m] + R_2(k,m-1)\bigr]u(k+m-1) \\
&- R_3(k,m)A\,x(k+m-1) - R_3(k,m)B[\eta(k+m),k+m]\,u(k+m-1) - R_4(k,m-1)u(k+m-1),
\end{aligned}$$

where Q(0) is defined by (17).

We assume that for some q

$$\begin{aligned}
J_{k+m-q} ={}& x^T(k+m-q)A^T Q(q-1)A\,x(k+m-q) \\
&+ 2x^T(k+m-q)\sum_{i=1}^{q}(A^T)^{q-i+1}Q(i-1)B[\eta(k+m-i+1),k+m-i+1]\,u(k+m-i) \\
&+ \sum_{i=1}^{q} u^T(k+m-i)\bigl[B^T[\eta(k+m-i+1),k+m-i+1]\,Q(i-1)\,B[\eta(k+m-i+1),k+m-i+1] + R_2(k,m-i)\bigr]u(k+m-i) \\
&+ 2\sum_{i=1}^{q-1}\sum_{j=i+1}^{q} u^T(k+m-i)B^T[\eta(k+m-i+1),k+m-i+1]\,Q(i-1)A^{j-i}B[\eta(k+m-j+1),k+m-j+1]\,u(k+m-j) \\
&- \sum_{i=1}^{q}\Bigl[\sum_{j=1}^{i} R_3(k,m-j+1)A^{i-j}B[\eta(k+m-i+1),k+m-i+1] + R_4(k,m-i)\Bigr]u(k+m-i) \\
&- \sum_{i=1}^{q} R_3(k,m-i+1)A^{q-i+1}x(k+m-q), \qquad (24)
\end{aligned}$$

where Q(i) are defined by (16)-(17).

Let us prove that (24) holds for q+1. Indeed, from (21) we have

$$\begin{aligned}
J_{k+m-(q+1)} ={}& x^T(k+m-q)R_1(k,m-q)x(k+m-q) - R_3(k,m-q)x(k+m-q) \\
&+ u^T(k+m-(q+1))R_2(k,m-(q+1))u(k+m-(q+1)) - R_4(k,m-(q+1))u(k+m-(q+1)) + J_{k+m-q}. \qquad (25)
\end{aligned}$$

Substituting (24) into (25) and using (1) for x(k+m-q), after some calculations we obtain

$$\begin{aligned}
J_{k+m-(q+1)} ={}& x^T(k+m-(q+1))A^T Q(q)A\,x(k+m-(q+1)) \\
&+ 2x^T(k+m-(q+1))\sum_{i=1}^{q+1}(A^T)^{(q+1)-i+1}Q(i-1)B[\eta(k+m-i+1),k+m-i+1]\,u(k+m-i) \\
&+ \sum_{i=1}^{q+1} u^T(k+m-i)\bigl[B^T[\eta(k+m-i+1),k+m-i+1]\,Q(i-1)\,B[\eta(k+m-i+1),k+m-i+1] + R_2(k,m-i)\bigr]u(k+m-i) \\
&+ 2\sum_{i=1}^{q}\sum_{j=i+1}^{q+1} u^T(k+m-i)B^T[\eta(k+m-i+1),k+m-i+1]\,Q(i-1)A^{j-i}B[\eta(k+m-j+1),k+m-j+1]\,u(k+m-j) \\
&- \sum_{i=1}^{q+1}\Bigl[\sum_{j=1}^{i} R_3(k,m-j+1)A^{i-j}B[\eta(k+m-i+1),k+m-i+1] + R_4(k,m-i)\Bigr]u(k+m-i) \\
&- \sum_{i=1}^{q+1} R_3(k,m-i+1)A^{(q+1)-i+1}x(k+m-(q+1)). \qquad (26)
\end{aligned}$$

Thus, by induction, (24) holds for each q = 1, 2, ..., m.

According to (22) and (24), we have

$$\begin{aligned}
J(k+m/k) ={}& x^T(k)A^T Q(m-1)A\,x(k) + 2x^T(k)\sum_{i=1}^{m}(A^i)^T Q(m-i)\bar{B}(k+i)\,u(k+i-1/k) \\
&+ \sum_{i=1}^{m} u^T(k+i-1/k)\bigl[\bar{B}^T(k+i)Q(m-i)\bar{B}(k+i) + R_2(k,i-1)\bigr]u(k+i-1/k) \\
&+ \sum_{i=1}^{m} u^T(k+i-1/k)\,M\bigl\{\tilde{B}^T(k+i)Q(m-i)\tilde{B}(k+i) \mid F_k\bigr\}\,u(k+i-1/k) \\
&+ 2\sum_{i=1}^{m-1}\sum_{j=i+1}^{m} u^T(k+i-1/k)\bar{B}^T(k+i)(A^T)^{j-i}Q(m-j)\bar{B}(k+j)\,u(k+j-1/k) \\
&+ 2\sum_{i=1}^{m-1}\sum_{j=i+1}^{m} u^T(k+i-1/k)\,M\bigl\{\tilde{B}^T(k+i)(A^T)^{j-i}Q(m-j)\tilde{B}(k+j) \mid F_k\bigr\}\,u(k+j-1/k) \\
&- \sum_{i=1}^{m}\Bigl[\sum_{j=i}^{m} R_3(k,j)A^{j-i}\bar{B}(k+i) + R_4(k,i-1)\Bigr]u(k+i-1/k) - \sum_{i=1}^{m} R_3(k,i)A^{i}x(k). \qquad (27)
\end{aligned}$$

Expression (27) can be written in matrix form as

$$J(k+m/k) = x^T(k)A^T Q(m-1)A\,x(k) - L(k)x(k) + \bigl[2x^T(k)G(k) - F(k)\bigr]U(k) + U^T(k)H(k)U(k), \qquad (28)$$

where the matrices H(k), G(k), F(k) take the forms (8)-(17) and the matrix L(k) takes the form

$$L(k) = \sum_{i=1}^{m} R_3(k,i)A^i.$$

Thus the problem of minimizing the criterion (28) subject to (4) is equivalent to the quadratic programming problem with criterion (6) subject to (7).

Obviously, the optimal control law for system (1) minimizing criterion (5) without constraints is given by equations (18) and (19). It is easy to show that in this case the optimal value of criterion (5) is given by equation (20).

Conclusion

In this paper a predictive control strategy is derived for discrete-time linear systems with nonlinear random dependent parameters for which the first and second conditional distribution moments are known. The main novelty of the paper is that no explicit form of the model describing the parameter evolution is assumed.

The obtained results can be applied to a wide class of models with nonlinear stochastic uncertainties. If no model can be found that adequately describes the underlying nonlinear structure, the results of nonparametric estimation may be used directly to describe the characteristics of the multidimensional time series [21, 22].

In addition, the proposed approach can be extended to the following cases:

- when the matrix A in (1) is time-varying;

- when the system dynamics (1) contain additive noises dependent on the parameter η;

- when the matrix A in (1) depends on a sequence of independent random parameters uncorrelated with η;

- when the constraints (4) are defined by convex functions; in this case the synthesis of predictive control strategies leads to a sequence of convex optimization problems;

- when the parameter η is unobservable; in this case the cost functional (5) should be averaged over all possible states of the process η.

REFERENCES

1. Домбровский В.В. Ляшенко Е.А. Линейно-квадратичное управление дискретными системами со случайными параметрами и мультипликативными шумами с применением к оптимизации инвестиционного портфеля // Автоматика и телемеханика. 2003. № 10. С. 50-65.

2. Dombrovskii V.V., Lyashenko E.A. Linear quadratic control of discrete systems with random parameters and multiplicative noises with application to investment portfolio optimization // Automation and Remote Control. 2003. V. 64. No. 10. P. 1558-1570.

3. Fisher S., Bhattacharya R. Linear quadratic regulation of systems with stochastic parameter uncertainties // Automatica. 2009. No. 45. P. 2831-2841.

4. Пакшин П.В. Дискретные системы со случайными параметрами и структурой. М.: Физматлит, 1994.

5. Costa O.L.V., Paulo W.L. Generalized coupled algebraic Riccati equations for discrete-time Markov jump with multiplicative noise systems // European J. Control. 2008. No. 5. P. 391-408.


6. Costa O.L.V., Okimura R.T. Discrete-time mean variance optimal control of linear systems with Markovian jumps and multiplicative noise // International J. Control. 2009. V. 82. No. 2. P. 256-267.

7. Elliott R.J., Aggoun L., Moore J.B. Hidden Markov Models: Estimation and Control. Berlin: Springer-Verlag, 1995.

8. Dragan V., Morozan T. The linear quadratic optimization problems for a class of linear stochastic systems with multiplicative white noise and Markovian jumping // IEEE Transactions on Automatic Control. 2004. V. 49. No. 5. P. 665-675.

9. Rawlings J. Tutorial: model predictive control technology // Proc. Amer. Control Conf. San Diego. California. June 1999. P. 662-676.

10. Mayne D.Q., Rawlings J.B., Rao C.V., Scokaert P.O.M. Constrained model predictive control: Stability and optimality // Automatica. 2000. V. 36. No. 6. P. 789-814.

11. Bemporad A., Borrelli F., Morari M. Model predictive control based on linear programming - the explicit solution // IEEE Trans. Automat. Control. 2002. V. 47. No. 12. P. 1974-1985.

12. Bemporad A., Morari M., Dua V., Pistikopoulos E.N. The explicit linear quadratic regulator for constrained systems // Automatica. 2002. V. 38. No. 1. P. 3-20.

13. Cuzzola A.F., Geromel J.C., Morari M. An improved approach for constrained robust model predictive control // Automatica. 2002. V. 38. No. 7. P. 1183-1189.

14. Seron M.M., De Dona J.A., Goodwin G.C. Global analytical model predictive control with input constraints // 39th IEEE Conf. Decision Control. 2000. Sydney, Australia, 12-15 December. P. 154-159.

15. Домбровский В.В., Домбровский Д.В., Ляшенко Е.А. Управление с прогнозированием системами со случайными параметрами и мультипликативными шумами и применение к оптимизации инвестиционного портфеля // Автоматика и телемеханика. 2005. № 4. С. 84-97.

16. Dombrovskii V.V., Dombrovskii D.V., Lyashenko E.A. Predictive control of random -parameters systems with multiplication noises. Application to the investment portfolio optimization // Automation and Remote Control. 2005. V. 66. No. 4. P. 583-595.

17. Домбровский В.В., Домбровский Д.В., Ляшенко Е.А. Управление с прогнозированием системами со случайными зависимыми параметрами при ограничениях и применение к оптимизации инвестиционного портфеля // Автоматика и телемеханика. 2006. № 12. С. 71-85.

18. Dombrovskii V.V., Dombrovskii D.V., Lyashenko E.A. Model predictive control of systems with random dependent parameters under constraints and its application to the investment portfolio optimization // Automation and Remote Control. 2006. V. 67. No.12. P. 1927-1939.

19. Домбровский В.В., Объедко Т.Ю. Управление с прогнозированием системами с марковскими скачками и мультипликативными шумами при ограничениях // Вестник Томского госуниверситета. Управление, вычислительная техника и информатика. 2010. № 3(12). С. 5-11.

20. Kiseleva M.Y., Smagin V.I. Model predictive control for discrete systems with state and input delays // Вестник Томского госуниверситета. Управление, вычислительная техника и информатика. 2011. № 1(14). С. 5-12.

21. Добровидов А.В., Кошкин Г.М. Непараметрическое оценивание сигналов. М.: Наука; Физматлит, 1997. 336 с.

22. Васильев В.А., Добровидов А.В., Кошкин Г.М. Непараметрическое оценивание функционалов от распределений стационарных последовательностей. М.: Наука, 2004. 508 с.

Домбровский Владимир Валентинович, Объедко Татьяна Юрьевна. Томский государственный университет. E-mail: dombrovs@ef.tsu.ru; tani4kin@mail.ru

Received 22 March 2011.
