
Series Mathematics. 2020, vol. 32, pp. 17-32.

Online access to the journal: http://mathizv.isu.ru

UDC 517.977.5, MSC 34H05

DOI https://doi.org/10.26516/1997-7670.2020.32.17

Second Order Krotov Method for Discrete-Continuous Systems

I.V. Rasina1, O. V. Danilenko2

1 Program Systems Institute of RAS, Pereslavl-Zalessky, Russian Federation,

2 Institute of Control Sciences of RAS, Moscow, Russian Federation

Abstract. In the late 1960s and early 1970s, a new class of problems appeared in the theory of optimal control. It was determined that the structure of a number of systems or processes is not homogeneous and can change over time. Therefore, new mathematical models of heterogeneous structure have been developed.

Research methods for this type of system vary widely, reflecting various scientific schools and lines of thought. One of the proposed options was to develop an approach that retains the traditional assumptions of optimal control theory. Its basis is Krotov's sufficient optimality conditions for discrete systems, formulated in terms of arbitrary sets and mappings.

One of the classes of heterogeneous systems is considered in this paper: discrete-continuous systems (DCSs). DCSs are used for the case where all the homogeneous subsystems of the lower level are not only connected by a common functional but also have their own goals.

In this paper a generalization of Krotov's sufficient optimality conditions is applied. The foundational theory is the Krotov method of global improvement, which was originally proposed for discrete processes. The advantage of the proposed method is that its conjugate system of vector-matrix equations is linear; hence, its solution always exists, which allows us to find the desired solution in the optimal control problem for DCSs.

The paper is dedicated to the blessed memory of our teachers, professors V. F. Krotov and V. I. Gurman

Keywords: discrete-continuous systems, sufficient optimality conditions, control improvement method.

1. Introduction

Scientific developments and discoveries lead to both new technologies and modifications of old ones. Traditionally, mathematical models have been used to solve optimization problems. Often, these models do not fully reflect the investigated processes and require refinement. For example, the problem of extending an old investment policy to a new period may require additional research on a separate mathematical model, not just a change in some of the parameters. In such situations, the mathematical model used becomes two-level, and, therefore, a new class of optimization problems appears.

Such systems with a heterogeneous structure are widespread in practice and have different names. These include discrete-continuous, logical-dynamic, impulse, hybrid, and a number of other systems [1; 3; 7; 11; 14]. Further examples are given in [2; 4]. Such systems continue to attract the attention of researchers in various scientific areas, which has been reflected in the subject matter of scientific conferences in recent years.

The approach proposed in [3], based on the interpretation of an abstract model of a multistep controlled process [8] as a discrete-continuous system (DCS), made it possible to construct a two-level model by decomposing an inhomogeneous system into homogeneous subsystems. And then, based on a generalization of the known optimality conditions, it was possible to construct optimization algorithms similar to those developed for homogeneous systems. Here, by homogeneous systems, we mean systems with an unchanged structure that are studied in the classical theory of optimal control. All homogeneous subsystems in such a model are connected by a common goal, the role of which is played by the functional. This does not exclude the fact that each homogeneous subsystem can have its own goal. For such a case, when intermediate criteria for homogeneous lower-level models are available, a generalization of the previously obtained sufficient optimality conditions is given in [13]. They are presented in this paper for a better understanding, and on their basis, a second-order method for control improvement is constructed. It can be considered as a development of the Krotov method of global improvement [10], proposed initially for ordinary discrete processes. The theorem on the improvability of the initial approximation is formulated and proved here.

The advantage of the proposed method is that its conjugate system of vector-matrix equations is linear; therefore, its solution always exists. Unlike the methods in [12; 15], it does not contain the matrix Riccati equation, which may have no solution and would then require the development of an additional procedure to overcome this difficulty. To demonstrate the operability of the method, an illustrative example is considered.

2. Discrete-Continuous System Model

Let us consider an abstract controlled system [8], all of whose objects are of arbitrary nature (and possibly different):

$$x(k + 1) = f(k, x(k), u(k)), \quad k \in K = \{k_I, k_I + 1, \ldots, k_F\}, \qquad (2.1)$$

where k is the number of the step (stage), x and u are the state and control variables, respectively, f is an operator, U(k, x) is a set given for each k and x, and k_I, k_F are the initial and final steps, respectively.

On some subset K′ ⊂ K, k_F ∈ K′, a continuous lower-level system operates in the role of a control component:

$$\dot{x}^c = \frac{dx^c}{dt} = f^c(z, t, x^c, u^c), \quad t \in T(z) = [t_I(z), t_F(z)], \qquad (2.2)$$
$$x^c(k, t) \in X^c(z, t) \subset R^{n(k)}, \quad u^c(k, t) \in U^c(z, t, x^c) \subset R^{p(k)}, \quad z = (k, x, u^d).$$

For the system (2.2), an intermediate goal is defined on the interval [t_I(z), t_F(z)] in the form of a functional:

$$I_k = \int_{T(z(k))} f^c_k(t, x^c(k, t), u^c(k, t))\, dt \to \inf.$$

For each k ∈ K′, the right-hand side operator of (2.1) has the form f(k, x(k), u(k)) = θ(z, γ^c), where
$$\gamma^c = (t_I, x^c_I, t_F, x^c_F) \in \Gamma^c(z), \quad \Gamma^c(z) = \{\gamma^c : t_I = \tau(z),\ x^c_I = \xi(z),\ (t_F, x^c_F) \in \Gamma^c_F(z)\}.$$
Here, z = (k, x, u^d) is a set of upper-level variables (parameters at the lower level), u^d is a control variable of arbitrary nature, and t_I = τ(z), x^c_I = ξ(z) are given functions of z.

The solution of this two-level system is the set m = (x(k), u(k)) (called a discrete-continuous process), where for k ∈ K′:
$$u(k) = (u^d(k), m^c(k)), \quad m^c(k) \in D^c(z(k)).$$
For the element m, m^c(k) is a continuous process (x^c(k, t), u^c(k, t)), t ∈ T(z(k)), and D^c(z) is the set of admissible processes m^c, complying with the differential system (2.2) with additional restrictions, for piecewise continuous u^c(k, t) and piecewise smooth x^c(k, t) (at each discrete step k). It is assumed that the functions f^c_k have all the properties required for the existence of the functionals I_k. Let us denote the set of elements m satisfying all the above conditions by D and call it the set of admissible discrete-continuous processes.

For the model (2.1), (2.2) we consider the problem of finding the minimum on D of the functional I = F(x(k_F)) for fixed initial and final steps k_I = 0, k_F = K, fixed x(k_I), and additional constraints

$$x(k) \in X(k), \quad x^c \in X^c(z, t), \qquad (2.3)$$

where X(k), X^c(z, t) are given sets.
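To make the two-level structure concrete, here is a minimal Python sketch of how a discrete-continuous process can be simulated forward in time. Everything in it (the names K, K_prime, f, xi, theta, f_c, the toy dynamics, the constant control policies, and the Euler integration on a fixed stage interval [0, 1]) is an illustrative assumption, not part of the model above; the sketch only mirrors the interplay of (2.1) and (2.2): ordinary discrete steps for k ∉ K′ and a continuous stage, started from ξ(k, x) and closed by θ, for k ∈ K′.

```python
import numpy as np

# Hypothetical toy data illustrating the model (2.1)-(2.2); not from the paper.
K = range(0, 3)          # transitions at k = 0, 1, 2; the final step is k_F = 3
K_prime = {1}            # steps on which a continuous lower-level stage operates

def f(k, x, u):          # upper-level operator for k not in K', eq. (2.1)
    return x + u

def xi(k, x):            # initial state of the continuous stage, x^c_I = xi(k, x)
    return np.array([x, 0.0])

def theta(k, x, xc_F):   # upper-level transition produced by the stage, x(k+1) = theta(...)
    return xc_F[0]

def f_c(k, x, t, xc, uc):  # lower-level right-hand side, eq. (2.2)
    return np.array([xc[1], uc])

def u_d(k, x):           # discrete control (illustrative constant policy)
    return 0.1

def u_c(k, x, t, xc):    # continuous control (illustrative constant policy)
    return -0.5

def simulate(x0, dt=1e-3):
    x = x0
    for k in K:
        if k in K_prime:
            # integrate the continuous stage on T(z) = [t_I, t_F] = [0, 1] (assumed fixed)
            xc = xi(k, x)
            for t in np.arange(0.0, 1.0, dt):
                xc = xc + dt * f_c(k, x, t, xc, u_c(k, x, t, xc))
            x = theta(k, x, xc)
        else:
            x = f(k, x, u_d(k, x))
    return x  # x(k_F); the functional is I = F(x(k_F))

print(simulate(x0=1.0))
```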

Note that the construction of a discrete top-level model that connects homogeneous continuous systems operating on different time intervals is a kind of heuristic technique and reflects the researcher's view of the problem under consideration. The model may not be the only one possible. It is the researcher who decides what information about the end of a stage is transmitted to the upper level and what control actions the upper level passes to the lower level. There are no publications on how to choose a unique top-level model.

The term DCS (or discrete-continuous process) was proposed in [3], when research on such systems was just beginning. This name is also used by other authors, for example, in the works of B. M. Miller and E. Ya. Rubinovich. The more common term is hybrid systems, especially abroad.

DCSs with intermediate criteria are characteristic of astronautics, chemical production, and economics. For example, when traveling from one planet to another, different systems of equations and different types of engines are used at different stages of the flight. For each stage, the task is to minimize fuel consumption, but a soft landing is required at the end. Other examples can be found in the works of A.S. Bortakovsky [2] and V.I. Gurman [4].

3. Optimality and Improvement Sufficient Conditions

The sufficient optimality conditions for this model were obtained in [13] and are as follows.

Theorem 1 [13]. Let there be a sequence of discrete-continuous processes {m_s} ⊂ D and functionals φ, φ^c such that:

1) μ^c(z, t) is piecewise continuous for each z;

2) R(k, x_s(k), u_s(k)) → μ(k), k ∈ K;

3) ∫_{T(z_s)} (R^c(z_s, t, x^c_s(t), u^c_s(t)) − μ^c(z_s, t)) dt → 0, k ∈ K′;

4) G^c(z_s, γ^c_s) − l^c(z_s) → 0, k ∈ K′;

5) G(x_s(k_F)) → l.

Then the sequence {m_s} is a minimizing sequence for I on D.

The basic constructions of Theorem 1, representing a generalization of the constructions of Krotov's sufficient optimality conditions for homogeneous continuous and discrete systems [9], take the form:
$$G(x) = F(x) + \varphi(k_F, x) - \varphi(k_I, x(k_I)),$$
$$R(k, x, u) = \varphi(k + 1, f(k, x, u)) - \varphi(k, x),$$
$$G^c(z, \gamma^c) = -\varphi(k + 1, \theta(z, \gamma^c)) + \varphi(k, x) + \varphi^c(z, t_F, x^c_F) - \varphi^c(z, t_I, x^c_I),$$
$$R^c(z, t, x^c, u^c) = \varphi^{cT}_{x^c} f^c(z, t, x^c, u^c) - f^c_k(z, t, x^c, u^c) + \varphi^c_t(z, t, x^c),$$
$$\mu^c(z, t) = \sup\{R^c(z, t, x^c, u^c) : x^c \in X^c(z, t),\ u^c \in U^c(z, t, x^c)\},$$
$$l^c(z) = \inf\{G^c(z, \gamma^c) : \gamma^c \in \Gamma^c(z),\ x^c \in X^c(z, t_F)\},$$
$$\mu(k) = \begin{cases} \sup\{R(k, x, u) : x \in X(k),\ u \in U(k, x)\}, & k \in K \setminus K', \\ -\inf\{l^c(z) : x \in X(k),\ u^d \in U^d(k, x)\}, & k \in K', \end{cases}$$
$$l = \inf\{G(x) : x \in \Gamma \cap X(k_F)\},$$
$$L = G(x(k_F)) - \sum_{K \setminus K' \setminus k_F} R(k, x(k), u(k)) + \sum_{K'} \Big( G^c(z(k), \gamma^c(z(k))) - \int_{T(z(k))} R^c(z(k), t, x^c(k, t), u^c(k, t))\, dt \Big),$$
where φ, φ^c are the Krotov functions for the upper and lower levels, respectively, φ^c_{x^c} is the gradient of φ^c with respect to x^c, and T denotes transposition.

We note that L = I on D. This reflects the extension principle [9] and is one of the foundations for constructing the method.

Theorem 2 [13]. For any element m ∈ D and any φ, φ^c, the estimate
$$I(m) - \inf_{D} I \le \Delta = I(m) - l$$
holds. Let there be two processes m^I ∈ D and m^{II} ∈ E and functionals φ and φ^c such that L(m^{II}) < L(m^I) = I(m^I), and m^{II} ∈ D. Then I(m^{II}) < I(m^I).

4. Krotov Method

Suppose that k_I, x_I, K, t_I(k), t_F(k) are fixed, X(k) = R^{m(k)}, X^c(k, t) = R^{n(k)}, Γ(z) = R^{2m}, Γ^c(z) = R^{2n(k)}, x^c_I(k) = ξ(k, x(k)), there are no constraints on the state variables of both levels or on the upper-level control variables, the lower-level subsystems do not depend on u^d, and the constructions of the sufficient optimality conditions used are such that all the following operations are valid.

We will also assume that solutions of the homogeneous systems (2.2) exist for each k ∈ K′. The case of non-existence requires a change of the model and is not considered here.

When constructing methods, the problem of improving an element is used, which consists essentially in constructing an operator ω : D → D such that I(ω(m)) < I(m) [5]. The improvement problem is the following: given an element m^I ∈ D, find an element m^{II} ∈ D such that I(m^{II}) < I(m^I).

We will search for an element m^{II} and the corresponding functions φ(k, x(k)), φ^c(z, t, x^c) from the fulfillment of the conditions:

$$R(k, x(k), u^I(k)) \to \min_x, \qquad (4.1)$$
$$G(x) \to \max_x, \qquad (4.2)$$
$$R^c(z, t, x^c(k, t), u^{cI}(k, t)) \to \min_{x^c}, \qquad (4.3)$$
$$R^c(z, t, x^c(k, t), u^{cI}(k, t)) \to \min_x, \qquad (4.4)$$
$$G^c(z, \gamma^c) \to \max_{x,\, x^c_F}. \qquad (4.5)$$

Let
$$\tilde{u}(k, x) = \arg\max_{u \in U(k, x)} R(k, x, u), \qquad (4.6)$$
$$\tilde{u}^c(z, t, x^c) = \arg\max_{u^c \in U^c(z, t, x^c)} R^c(z, t, x^c, u^c). \qquad (4.7)$$

Then, integrating the given discrete-continuous system from the initial conditions with the obtained control synthesis, we find the functions x^{II}(k), x^{cII}(k, t) and the control programs
$$u^{II}(k) = \tilde{u}(k, x^{II}(k)), \quad u^{cII}(k, t) = \tilde{u}^c(k, t, x^{II}(k), x^{cII}(k, t)),$$
i.e., an element m^{II} such that I(m^{II}) ≤ I(m^I). Repeating these operations iteratively, we obtain an improving sequence {m_s}. In this case the following theorem is valid.

Theorem 3. If the element m^I is not a solution to the problem, then the inequality L(m^I) > L(m^{II}) holds.

Proof. Let us show that I(m^{II}) − I(m^I) = L(m^{II}) − L(m^I) < 0, following [6]. We obtain
$$L(m^{II}, \varphi^I, \varphi^{cI}) - L(m^I, \varphi^I, \varphi^{cI}) = G(x^{II}) - G(x^I) - \sum_{K \setminus K' \setminus k_F} \big( R(k, x^{II}(k), u^{II}(k), \varphi^I) - R(k, x^I(k), u^I(k), \varphi^I) \big) +$$
$$+ \sum_{K'} \big( G^c(z^{II}(k), \varphi^I, \varphi^{cI}) - G^c(z^I(k), \varphi^I, \varphi^{cI}) \big) - \sum_{K'} \int_{T(z)} \big( R^c(z^{II}(k), t, x^{cII}(t), u^{cII}(t), \varphi^I, \varphi^{cI}) - R^c(z^I(k), t, x^{cI}(t), u^{cI}(t), \varphi^I, \varphi^{cI}) \big)\, dt =$$
$$= A_1 - A_2 + A_3 - A_4,$$
where A_1 = G(x^{II}) − G(x^I) ≤ 0 by condition (4.2),
$$A_2 = \sum_{K \setminus K' \setminus k_F} \big( R(k, x^{II}(k), u^{II}(k), \varphi^I) - R(k, x^I(k), u^I(k), \varphi^I) \big) =$$
$$= \sum_{K \setminus K' \setminus k_F} \big( R(k, x^{II}(k), u^{II}(k), \varphi^I) - R(k, x^{II}(k), u^I(k), \varphi^I) \big) + \sum_{K \setminus K' \setminus k_F} \big( R(k, x^{II}(k), u^I(k), \varphi^I) - R(k, x^I(k), u^I(k), \varphi^I) \big) \ge 0$$
according to (4.1) and (4.6). Further,
$$A_3 = \sum_{K'} \big( G^c(z^{II}(k), \varphi^I, \varphi^{cI}) - G^c(z^I(k), \varphi^I, \varphi^{cI}) \big) \le 0,$$
and
$$A_4 = \sum_{K'} \int_{T(z)} \big( R^c(z^{II}(k), t, x^{cII}(t), u^{cII}(t), \varphi^I, \varphi^{cI}) - R^c(z^I(k), t, x^{cI}(t), u^{cI}(t), \varphi^I, \varphi^{cI}) \big)\, dt =$$
$$= \sum_{K'} \int_{T(z)} \big( R^c(z^{II}(k), t, x^{cII}(t), u^{cII}(t), \varphi^I, \varphi^{cI}) - R^c(z^{II}(k), t, x^{cII}(t), u^{cI}(t), \varphi^I, \varphi^{cI}) \big)\, dt +$$
$$+ \sum_{K'} \int_{T(z)} \big( R^c(z^{II}(k), t, x^{cII}(t), u^{cI}(t), \varphi^I, \varphi^{cI}) - R^c(z^I(k), t, x^{cI}(t), u^{cI}(t), \varphi^I, \varphi^{cI}) \big)\, dt \ge 0$$
according to the conditions (4.3), (4.4), (4.5) and (4.7). Then
$$L(m^{II}) - L(m^I) = A_1 - A_2 + A_3 - A_4 < 0.$$

From Theorem 3 it follows that if the above conditions are satisfied, we can construct an improving sequence {m_s} such that I(m_{s+1}) ≤ I(m_s).

Let us consider a way of finding the functions φ, φ^c. We use the extension principle [9] and Theorem 2. Conditions (4.1)-(4.5) mean that the functional L, calculated for the controls u^I(k), u^{cI}(k, t), is investigated for a maximum. We consider the increment of the functional L = I, which we represent in the form:

$$\delta L \approx \delta G + \frac{1}{2}\delta^2 G - \sum_{K \setminus K' \setminus k_F} \Big( \delta R + \frac{1}{2}\delta^2 R \Big) + \sum_{K' \setminus k_F} \Big( \delta G^c + \frac{1}{2}\delta^2 G^c - \int_{T(z)} \Big( \delta R^c + \frac{1}{2}\delta^2 R^c \Big) dt \Big),$$
or
$$\Delta L \approx G_x^T \Delta x + \frac{1}{2}\Delta x^T G_{xx} \Delta x - \sum_{K \setminus K' \setminus k_F} \Big( R_x^T \Delta x + \frac{1}{2}\Delta x^T R_{xx} \Delta x \Big) +$$
$$+ \sum_{K' \setminus k_F} \Big( G^{cT}_{x^c_F} \Delta x^c_F + \frac{1}{2}\Delta x^{cT}_F G^c_{x^c_F x^c_F} \Delta x^c_F + G^{cT}_x \Delta x + \frac{1}{2}\Delta x^T G^c_{xx} \Delta x + \Delta x^{cT}_F G^c_{x^c_F x} \Delta x -$$
$$- \int_{T(z)} \Big( R^{cT}_{x^c} \Delta x^c + R^{cT}_x \Delta x + \frac{1}{2}\Delta x^{cT} R^c_{x^c x^c} \Delta x^c + \Delta x^T R^c_{x x^c} \Delta x^c + \frac{1}{2}\Delta x^T R^c_{xx} \Delta x \Big) dt \Big).$$
Here, the first and second derivatives of the functions R, G, R^c, G^c are calculated for u = u^I, u^c = u^{cI}, and Δx = x − x^I(k), Δx^c = x^c − x^{cI}(k, t), Δx^c_F = x^c_F − x^{cI}_F.

To fulfill the conditions (4.1)-(4.5) it suffices to set:
$$G_x = 0, \quad R_x = 0, \quad R^c_{x^c} = 0, \quad G^c_{x^c_F} = 0, \quad G^c_x = 0, \quad R^c_x = 0, \qquad (4.8)$$
$$G_{xx} = -\Lambda_1, \quad R_{xx} = \Lambda_2, \quad R^c_{x^c x^c} = \Lambda_3, \quad G^c_{x^c_F x^c_F} = -\Lambda_4, \qquad (4.9)$$
$$R^c_{x x^c} = 0, \quad G^c_{x^c_F x} = 0, \quad G^c_{xx} = -\Lambda_5, \quad R^c_{xx} = \Lambda_6, \qquad (4.10)$$
where Λ_1, Λ_2, Λ_3, Λ_4, Λ_5, Λ_6 are positive definite diagonal matrices. It is easy to see that conditions (4.8)-(4.10) are sufficient conditions for the extrema of the functions G, G^c, R, R^c. We supplement the conditions (4.8)-(4.10) with the first- and second-order conditions of level joining:

$$\frac{\partial}{\partial x}\big( \varphi(k + 1, \theta(k, x, x^c_I, x^c_F)) - \varphi^c(k, x, t_F, x^c_F) \big) = 0, \qquad (4.11)$$
$$\frac{\partial^2}{\partial x^2}\big( \varphi(k + 1, \theta(k, x, x^c_I, x^c_F)) - \varphi^c(k, x, t_F, x^c_F) \big) = \Lambda_7, \qquad (4.12)$$
where Λ_7 is a positive definite diagonal matrix.

We define the functions φ and φ^c in the following form:
$$\varphi(k, x) = \psi^T(k)\, x + \frac{1}{2}\Delta x^T \sigma(k)\, \Delta x,$$
$$\varphi^c(z, t, x^c) = \lambda^T(k, t)\, x + \psi^{cT}(k, t)\, x^c + \frac{1}{2}\Delta x^{cT} \sigma^c(k, t)\, \Delta x^c + \frac{1}{2}\Delta x^T \sigma^d(k, t)\, \Delta x + \Delta x^T \Lambda(k, t)\, \Delta x^c,$$
where ψ(k), λ(k, t), ψ^c(k, t) are vector functions of dimensions m, m, n, and σ(k), σ^c(k, t), σ^d(k, t), Λ(k, t) are matrices of dimensions m × m, n × n, m × m, m × n, respectively. Then φ_x = ψ(k), φ_{xx} = σ(k), φ^c_x = λ(k, t), φ^c_{x^c} = ψ^c(k, t), φ^c_{xx} = σ^d(k, t), φ^c_{x^c x^c} = σ^c(k, t), φ^c_{x x^c} = Λ(k, t), the derivatives being evaluated on the initial process. In addition, we introduce the functions

$$H(k, x(k), \psi(k + 1), u(k)) = \psi^T(k + 1)\, f(k, x(k), u(k)), \quad k \in K \setminus K' \setminus k_F,$$
$$H(k, x(k), \psi(k + 1), x^c_I(k), x^c_F(k)) = \psi^T(k + 1)\, \theta(k, x(k), x^c_I(k), x^c_F(k)), \quad k \in K',$$
$$H^c(k, x(k), \psi^c(k, t), x^c(k, t), u^c(k, t)) = \psi^{cT}(k, t)\, f^c(k, x(k), x^c(k, t), u^c(k, t)) - f^c_k(t, x^c(k, t), u^c(k, t)).$$

Taking into account the introduced notation, from the conditions (4.8)-(4.10) and the level-joining conditions (4.11)-(4.12) we obtain:
$$\psi(k_F) = -F_x, \quad \psi(k) = H_x, \quad k \in K \setminus K' \setminus k_F,$$
$$\psi(k) = H_x + \xi_x^T H_{x^c_I} + \lambda(k, t_F) - \lambda(k, t_I) + \xi_x^T \psi^c(k, t_I), \quad k \in K' \setminus k_F,$$
$$\dot{\lambda} = -H^c_x, \quad \lambda(k, t_F) = H_x + \xi_x^T H_{x^c_I},$$
$$\dot{\psi}^c = -H^c_{x^c}, \quad \psi^c(k, t_F) = H_{x^c_F},$$
$$\dot{\sigma}^c = -\sigma^c f^c_{x^c} - f^{cT}_{x^c} \sigma^c - H^c_{x^c x^c} + \Lambda_3, \quad \sigma^c(k, t_F) = \theta^T_{x^c_F} \sigma(k + 1)\, \theta_{x^c_F} + H_{x^c_F x^c_F} + \Lambda_4,$$
$$\dot{\sigma}^d = -\Lambda f^c_x - (\Lambda f^c_x)^T - H^c_{xx} + \Lambda_6,$$
$$\sigma^d(k, t_F) = \theta_x^T \sigma(k + 1)\, \theta_x + H_{xx} + H_{x x^c_I}\, \xi_x + \theta_x^T \sigma(k + 1)\, \theta_{x^c_I}\, \xi_x + \xi_x^T H_{x^c_I x} + \xi_{xx} H_{x^c_I},$$
$$\dot{\Lambda} = -\Lambda f^c_{x^c} - f^{cT}_x \sigma^c - H^c_{x x^c}, \quad \Lambda(k, t_F) = \theta_x^T \sigma(k + 1)\, \theta_{x^c_F} + H_{x x^c_F},$$
$$\sigma(k) = f_x^T \sigma(k + 1)\, f_x + H_{xx} + \Lambda_2, \quad k \in K \setminus K' \setminus k_F,$$
$$\sigma(k) = \theta_x^T \sigma(k + 1)\, \theta_x + H_{xx} + H_{x x^c_I}(t_I)\, \xi_x + \xi_x^T \sigma^c(k, t_I)\, \xi_x + \xi_x^T \theta^T_{x^c_I}(t_I)\, \sigma(k + 1)\, \theta_{x^c_I}(t_I)\, \xi_x + \xi_x^T H_{x^c_I x^c_I}(t_I)\, \xi_x + \xi_{xx} H_{x^c_I}(t_I) + \Lambda_5, \quad k \in K' \setminus k_F,$$
$$\sigma(k_F) = -F_{xx} + \Lambda_1.$$

5. The Algorithm of the Method

1. We specify arbitrary functions φ^I(k, x(k)) and φ^{cI}(z, t, x^c).

2. We calculate the controls ũ(k, x), ũ^c(k, t, x^c) using (4.6), (4.7).

3. We find the trajectories x^I, x^{cI} and the controls u^I(k), u^{cI}(k, t) (the element m^I) from the equations of the discrete-continuous process (2.1), (2.2), and calculate the value of the functional I^I.

4. We solve, from right to left, the obtained system of equations with respect to the vector functions ψ, ψ^c, λ and the matrices σ, σ^c, σ^d, Λ. For definiteness, we can set all the Λ_i equal to constant matrices: negative for i = 2, 3, 5 and positive for i = 1, 4. Then we define the new functions φ^{II}(k, x(k)) and φ^{cII}(z, t, x^c) and go back to step 2.

Remark 1. If the functional has not improved, then the values Λ_i, which play the role of regulators of the proximity of neighboring approximations, must be increased.

Remark 2. The system of vector-matrix equations with respect to the vector functions ψ, ψ^c, λ and the matrices σ, σ^c, σ^d, Λ is linear and therefore always has a solution.
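As a minimal sketch of the mechanism behind steps 2-3 and formula (4.6), the following Python fragment applies the global-improvement update to a purely discrete (upper-level) toy system: R is built from a given Krotov function phi, the new control is taken as the argmax of R over a finite control grid, and the system is then resimulated. The toy dynamics, the hand-picked phi, and the control grid are the sketch's own assumptions; in the method above, phi is produced by the conjugate system of Section 4 (steps 1 and 4), and the continuous stages are treated analogously via (4.7).

```python
import numpy as np

# Hypothetical scalar toy system (not from the paper): x(k+1) = f(k, x, u), I = F(x(k_F)).
k_F = 20
U_grid = np.linspace(-1.0, 1.0, 41)   # finite approximation of the control set U(k, x)

def f(k, x, u):
    return 0.9 * x + u

def F(x):
    return x ** 2

def phi(k, x):
    # A hand-picked Krotov function for this toy only; in the paper phi is built
    # from the backward linear system of Section 4 (steps 1 and 4 of the algorithm).
    return -x ** 2

def R(k, x, u):
    # R(k, x, u) = phi(k + 1, f(k, x, u)) - phi(k, x), cf. Section 3.
    return phi(k + 1, f(k, x, u)) - phi(k, x)

def u_tilde(k, x):
    # Eq. (4.6): the new control maximizes R over the admissible control set (step 2).
    return max(U_grid, key=lambda u: R(k, x, u))

def simulate(x0, policy):
    # Step 3: resimulate the system with the synthesized control and evaluate I.
    x = x0
    for k in range(k_F):
        x = f(k, x, policy(k, x))
    return F(x)

I_old = simulate(1.0, lambda k, x: 0.0)   # initial control u^I = 0
I_new = simulate(1.0, u_tilde)            # improved control u^II(k) = u_tilde(k, x^II(k))
print(I_old, I_new)                       # I decreases: about 0.0148 -> 0.0
```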

Example 1. The following 2-stage problem is considered.

1st stage, t ∈ [0, 1]:
$$\dot{x}^{c1} = (x^{c2})^2, \quad \dot{x}^{c2} = u^{c1}, \quad |u^{c1}| \le 1, \quad x^{c1}(0) = 0, \quad x^{c2}(0) = 1, \quad f_1 = x^{c1}.$$

2nd stage, t ∈ [1, 2]:
$$\dot{x}^{c1} = u^{c2} - (x^{c1})^2, \quad |u^{c2}| \le 2, \quad I = x^{c1}(2) \to \inf.$$
Let us consider this system as discrete-continuous. We obtain k = 0, 1, 2, 3. Since the role of the connecting variable in the two stages under consideration is played by x^{c1}, it is easy to write the upper-level process in terms of this variable. We establish the correspondence between the variables. At the beginning of the process, k = 0, t = 0, x(0) = x^{c1}(0) = 0. Further, x(1) = x^{c1}(0, 1), x^{c1}(1, 1) = x(1); then I = x(2). The last (third) step plays the role of a transmitter of information about the end of the whole process, so that I = x(3) = x(2). Thus, the DCS model has the form:

k = 0: $\dot{x}^{c1}(0, t) = (x^{c2}(0, t))^2$, $\dot{x}^{c2}(0, t) = u^{c1}(0, t)$, $|u^{c1}(0, t)| \le 1$, $x^{c1}(0, 0) = 0$, $x^{c2}(0, 0) = 1$, $f_1(0, t) = x^{c1}(0, t)$, $x(0) = x^{c1}(0, 0) = 0$, $t \in [0, 1]$;

k = 1: $\dot{x}^{c1}(1, t) = u^{c2}(1, t) - (x^{c1}(1, t))^2$, $|u^{c2}(1, t)| \le 2$, $x(1) = x^{c1}(0, 1)$, $t \in [1, 2]$;

k = 2: $x(2) = x^{c1}(1, 2)$; k = 3: $I = x(3) = x(2)$.

Obviously, the set K′ = {1, 2}, and the functions are θ(1) = x^{c1}(0, 1), ξ(1) = x(1), θ(2) = x^{c1}(1, 2), ξ(2) = x(2). We obtain the necessary constructions:

$$R^c(0, t, x^{c1}, x^{c2}, u^{c1}) = \varphi^c_{x^{c1}} (x^{c2})^2 + \varphi^c_{x^{c2}} u^{c1} - x^{c1} + \varphi^c_t,$$
$$R^c(1, t, x^{c1}, u^{c2}) = \varphi^c_{x^{c1}} (u^{c2} - (x^{c1})^2) + \varphi^c_t.$$
It is easy to see that
$$\tilde{u}^{c1} = \operatorname{sign} \varphi^c_{x^{c2}}, \quad \tilde{u}^{c2} = 2 \operatorname{sign} \varphi^c_{x^{c1}},$$
$$H^c(0, t, \psi^{c1}, \psi^{c2}, x^{c1}, x^{c2}, u^{c1}) = \psi^{c1} (x^{c2})^2 + \psi^{c2} u^{c1} - x^{c1},$$
$$H^c(1, t, \psi^{c1}, x^{c1}, u^{c2}) = \psi^{c1} (u^{c2} - (x^{c1})^2),$$
$$H(1, x, \psi(2)) = \psi(2)\, x^{c1}(0, 1), \quad H(2, x, \psi(3)) = \psi(3)\, x^{c1}(1, 2).$$

Since the lower-level process does not depend on x, we have λ = 0, σ^d = 0, Λ = 0. Let σ^c_{12} = σ^c_{21} = σ^c_{22} = 0 and define the functions φ and φ^c in the following form:
$$\varphi = \psi(1)\, x + \tfrac{1}{2}\sigma(1)(x - x^I)^2, \quad k = 0, \qquad \varphi = \psi(2)\, x + \tfrac{1}{2}\sigma(2)(x - x^I)^2, \quad k = 1,$$
$$\varphi^c(0, t) = \psi^{c1} x^{c1} + \psi^{c2} x^{c2} + \tfrac{1}{2}\sigma^c_{11}(x^{c1} - x^{c1 I})^2, \quad k = 0,$$
$$\varphi^c(1, t) = \psi^{c1} x^{c1} + \tfrac{1}{2}\sigma^c_{11}(x^{c1} - x^{c1 I})^2, \quad k = 1.$$

Then the equations of the method take the following form. At the first stage, for k = 0:
$$\dot{\psi}^{c1} = 1, \quad \dot{\psi}^{c2} = -2\psi^{c1} x^{c2}, \quad \psi^{c1}(1) = \psi(2), \quad \psi^{c2}(1) = 0, \quad \dot{\sigma}^c_{11} = 0,$$
$$\sigma^c_{11}(1) = \sigma(2) + \delta_4(2), \quad \psi(1) = \psi^{c1}(1, 1), \quad \sigma(1) = \delta_1(1).$$
At the second stage, for k = 1:
$$\dot{\psi}^{c1} = 2\psi^{c1} x^{c1}, \quad \psi^{c1}(2) = \psi(3), \quad \dot{\sigma}^c = 4\sigma^c x^{c1} + 2\psi^{c1},$$
$$\sigma^c(2) = \sigma(3) + \delta_4(3), \quad \psi(3) = -1, \quad \sigma(3) = \delta_1(3).$$


We set the initial approximation u^{c1}(0, t) = u^{c2}(1, t) = 0. Then x^{c1}(0, t) = t for k = 0 and x^{c1}(1, t) = 1/t for k = 1, so that I⁰ = 0.5.
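This initial approximation is easy to check numerically. The sketch below (plain Euler integration with an arbitrarily chosen step; the step size and code organization are the sketch's own assumptions) reproduces x^{c1}(0, t) = t, x^{c1}(1, t) = 1/t and I⁰ = x^{c1}(1, 2) = 0.5.

```python
import numpy as np

dt = 1e-4

# Stage k = 0 on [0, 1] with u^c1 = 0: dx^c1/dt = (x^c2)^2, dx^c2/dt = u^c1.
xc1, xc2 = 0.0, 1.0
for _ in np.arange(0.0, 1.0, dt):
    xc1 += dt * xc2 ** 2
    xc2 += dt * 0.0
x1 = xc1                      # x(1) = x^c1(0, 1) = 1

# Stage k = 1 on [1, 2] with u^c2 = 0: dx^c1/dt = u^c2 - (x^c1)^2.
xc1 = x1                      # x^c1(1, 1) = x(1)
for _ in np.arange(1.0, 2.0, dt):
    xc1 += dt * (0.0 - xc1 ** 2)

print(x1, xc1)                # approx. 1.0 and 0.5, i.e. I^0 = x(2) = 0.5
```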

The solution of the example was obtained in one iteration, and the functional changed from 0.5 to −3.44. The control variables and trajectories are shown in Fig. 1 and Fig. 2.

Figure 1. Trajectory change.


Figure 2. Control change.

At the beginning of the calculations, the parameters δ_i were assumed to be equal to zero at both stages. Then they were varied over the interval [0, 0.3], which did not affect the calculation results: with δ_i = 0.3 at both stages, the functional again changed from the value 0.5 to the same value −3.44.
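To check the reported improvement, here is a minimal sketch of one simplified iteration of the method for this example: ψ^c is integrated from right to left along the initial trajectory, the new controls are formed from u^{c1} = sign φ^c_{x^{c2}}, u^{c2} = 2 sign φ^c_{x^{c1}}, and the DCS is resimulated. The sketch uses only the linear part ψ^c of φ^c (all δ_i are set to zero and the σ-terms are dropped), treats the junction between the stages as plain continuity of ψ^{c1}, and chooses its own Euler discretization; these simplifications are the sketch's assumptions, not the full second-order scheme above. It yields a value of about −3.46, close to the −3.44 reported.

```python
import numpy as np

dt = 1e-4
N = int(1.0 / dt)

# ---- Initial approximation (u^c1 = u^c2 = 0), stored on a grid --------------
xc1_0, xc2_0 = np.zeros(N + 1), np.ones(N + 1)   # stage k = 0: x^c1(0,t) = t, x^c2(0,t) = 1
xc1_1 = np.zeros(N + 1)                          # stage k = 1: x^c1(1,t) = 1/t
xc1_1[0] = 1.0
for i in range(N):
    xc1_0[i + 1] = xc1_0[i] + dt * xc2_0[i] ** 2
    xc1_1[i + 1] = xc1_1[i] + dt * (0.0 - xc1_1[i] ** 2)

# ---- Backward sweep for psi^c along the initial trajectory ------------------
psi3 = -1.0                                 # psi(k_F) = -F_x = -1
psi1_1 = np.zeros(N + 1)                    # psi^c1 on stage k = 1
psi1_1[-1] = psi3                           # psi^c1(1, 2) = psi(3)
for i in range(N, 0, -1):
    # d(psi^c1)/dt = 2 psi^c1 x^c1 on stage k = 1
    psi1_1[i - 1] = psi1_1[i] - dt * 2.0 * psi1_1[i] * xc1_1[i]

psi1_0 = np.zeros(N + 1)                    # psi^c1 on stage k = 0
psi2_0 = np.zeros(N + 1)                    # psi^c2 on stage k = 0
psi1_0[-1] = psi1_1[0]                      # junction: assumed continuity of psi^c1 at t = 1
psi2_0[-1] = 0.0
for i in range(N, 0, -1):
    psi1_0[i - 1] = psi1_0[i] - dt * 1.0                          # d(psi^c1)/dt = 1
    psi2_0[i - 1] = psi2_0[i] - dt * (-2.0 * psi1_0[i] * xc2_0[i])  # d(psi^c2)/dt = -2 psi^c1 x^c2

# ---- New controls and forward resimulation ----------------------------------
u1 = np.sign(psi2_0)                        # u^c1 = sign(phi^c_{x^c2}), linear part only
u2 = 2.0 * np.sign(psi1_1)                  # u^c2 = 2 sign(phi^c_{x^c1}), linear part only

xc1, xc2 = 0.0, 1.0
for i in range(N):                          # stage k = 0
    xc1 += dt * xc2 ** 2
    xc2 += dt * u1[i]
for i in range(N):                          # stage k = 1
    xc1 += dt * (u2[i] - xc1 ** 2)

print(xc1)   # new value of the functional, about -3.46 (the paper reports -3.44)
```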

6. Conclusion

Thus, an analog of the well-known Krotov global improvement method is obtained: a second-order improvement method for DCSs with intermediate criteria is constructed, its algorithm is formulated, and it is tested on an illustrative example. The calculation results confirm the efficiency of the method.

References

1. Bortakovskii A.S. Sufficient Conditions of Control Optimality for Determined Logical Dynamic Systems [Dostatochnye uslovija optimal'nosti upravlenija deter-minirovannymi logiko-dinamicheskimi sistemami]. Informatika. Ser. Avtomatizacija proektirovanija, 1992, vol. 2-3, pp. 72-79. (in Russian)

2. Bortakovskii A.S. Synthesis of Optimal Control-Systems with a Change of the Models of Motion. Journal of Computer and Systems Sciences International, 2018, vol. 57, no. 4, pp. 543-560. https://doi.org/10.1134/S1064230718040056

3. Gurman V.I. Theory of Optimum Discrete Processes. Automation and Remote Control, 1973, vol. 34, no. 7, pp. 1082-1087.

4. Gurman V.I. Printsip rasshireniya v zadachakh upravleniya [The Expansion Principle in Control Problems]. Moscow, Nauka Publ., 1985, 288 p. (in Russian)

5. Gurman V.I. Abstract Problems of Optimization and Improvement. Program Systems: Theory and Applications. 2011, no. 5(9), pp. 14-20. URL: http://psta.psiras.ru/read/psta2011_5_14-20.pdf (in Russian)

6. Gurman V.I., Trushkova E.A. Approximate Methods of Control Processes Optimization. Program Systems: Theory and Applications, 2010, no. 4(4), pp. 85-104. URL: http://psta.psiras.ru/pdfs/psta2010_4_85-104.pdf (in Russian)

7. Emel'yanov S.V. (ed.) Teorija sistem s peremennoj strukturoj [Theory of Systems with Variable Structure]. Moscow, Nauka Publ., 1970, 592 p. (in Russian)

8. Krotov V.F. Sufficient Optimality Conditions for Discrete Controlled Systems [Dostatochnye uslovija optimal'nosti dlja diskretnyh upravljaemyh sistem]. Dokl. Akad. Nauk SSSR. 1967, vol. 172, no. 1, pp. 18-21. (in Russian)

9. Krotov V.F., Gurman V.I. Metody i zadachi optimal'nogo upravlenija [Methods and Problems of Optimal Control]. Moscow, Nauka Publ., 1973, 448 p. (in Russian)

10. Krotov V.F., Fel'dman I.N. Iterative Method for Solving Optimal Control Problems [Iteracionnyj metod reshenija zadach optimal'nogo upravlenija]. Izvestija AN SSSR. Tehnicheskaja kibernetika, 1983, no. 2, pp. 160-168. (in Russian)

11. Miller B.M., Rubinovich E.Ya. Optimizacija dinamicheskih sistem s impul'snymi upravlenijami [Optimization of Dynamic Systems with Impulse Controls]. Moscow, Nauka Publ., 2005, 429 p. (in Russian)

12. Rasina I.V. Ierarkhicheskie modeli upravleniya sistemami neodnorodnoy struktury [Hierarchical Models of Control of Heterogeneous Systems]. Moscow, Fizmatlit Publ., 2014, 160 p. (in Russian)

13. Rasina I.V. Discrete-Continuous Systems with Intermediate Criteria [Diskretno-nepreryvnye sistemy s promezhutochnymi kriteriyami]. Materialy XX Yubileynoy Mezhdunarodnoy konferentsii po vychislitel'noy mekhanike i sovremennym prikladnym programmnym sistemam [Proceedings of the XX Anniversary International Conference on Computational Mechanics and Modern Applied Software Systems]. Moscow, MAI Publ., 2017, pp. 699-701. (in Russian)

14. Lygeros J. Lecture Notes on Hybrid Systems. Cambridge, University of Cambridge, 2003, 84 p.

15. Rasina I., Danilenko O. Second-Order Improvement Method for Discrete-Continuous Systems with Intermediate Criteria. IFAC-PapersOnLine, 2018, vol. 51, no. 32, pp. 184-188. https://doi.org/10.1016/j.ifacol.2018.11.378

Irina Rasina, Doctor of Sciences (Physics and Mathematics), Associate Professor, Program Systems Institute of RAS, 4a, Petr Pervyi st., 152020, Pereslavl-Zalessky, Russian Federation, tel.: (48535)98028, e-mail: irinarasina@gmail.com, ORCID iD https://orcid.org/0000-0001-8939-2968.

Olga Danilenko, Candidate of Sciences (Physics and Mathematics), Institute of Control Sciences of RAS, 65, Profsoyuznaya st., 117997, Moscow, Russian Federation, tel.: (495)3349159, e-mail: olga@danilenko.org, ORCID iD https://orcid.org/0000-0001-6369-6947.

Received 27.03.2020
