
URAL MATHEMATICAL JOURNAL, Vol. 1, No. 1, 2015

LINEAR PROGRAMMING AND DYNAMICS^{1,2}

Anatoly S. Antipin

Computing Center of RAS, Moscow, Russia, asantip@yandex.ru

Elena V. Khoroshilova

CMC Faculty, Lomonosov Moscow State University, Moscow, Russia, khorelena@gmail.com

Abstract: In a Hilbert space, we consider a linear boundary value problem of optimal control based on linear dynamics and a terminal linear programming problem at the right end of the time interval. A saddle-point method for solving it is proposed, and convergence of the method is proved.

Key words: Linear programming, Optimal control, Boundary value problems, Methods for solving problems, Convergence, Stability.

Introduction

Linear programming is one of the most powerful and popular tools of mathematical modeling, covering vast areas of human activity, including economics, the environment, technology, complex networks, scientific research, and more. Linear programming exists in various forms, and a large part of the scientific literature is devoted to linear programming problems; thousands of researchers have contributed to this field. The results of Ivan Ivanovich Eremin are prominent among these studies. In his research, I.I. Eremin studied various problems of linear and convex programming, including improper and conflicting problems, problems of lexicographic and Pareto optimization, problems of disjunctive programming and pattern recognition, and more [1-5]. A distinctive merit of Prof. Eremin is that he studied all these problems in primal and dual forms simultaneously [6]. Continuing this line, we consider the linear programming problem in terms of dynamics.

In general, linear programming problems are static: they describe the state of a system at a fixed moment of time. However, real objects, immersed in some environment, vary under the influence of various external factors. For example, economic objects change with economic conditions. This means that, as time goes on, a static economic model ceases to be adequate to the changing real object, which entails a mismatch between the real object and its mathematical model. To resolve this inadequacy, it is reasonable to introduce the time factor into the mathematical model. In this paper, this is done for boundary value problems of optimal control.

1. Problem statement

Consider the simplest formulation of the linear boundary value problem of optimal control. When the controls u(·) run over the set U, the linear controlled dynamics generates the corresponding trajectories. The right ends x(t_1) = x_1 of the trajectories form the terminal set X_1 = X(t_1) ⊂ R^n called the attainability set.

^1 This work was supported by the Russian Foundation for Basic Research (project no. 15-01-06045-a) and the Program for Support of Leading Scientific Schools (project no. NSh-4640.2014.1).

^2 Published in Russian in Trudy Inst. Mat. i Mekh. UrO RAN, 2013. Vol. 19, no. 2, pp. 7-25.

In a Hilbert space, on the interval [t_0, t_1], we consider the terminal control problem with a linear programming problem as the boundary value problem (at the right end):

\frac{d}{dt}x(t) = D(t)x(t) + B(t)u(t), \quad x(t_0) = x_0, \quad x(t_1) = x_1^*,   (1.1)

x_1^* \in \mathrm{Argmin}\{\langle \varphi_1, x(t_1)\rangle \mid A_1 x(t_1) \le a_1, \ x(t_1) \in X_1 \subseteq R^n\},   (1.2)

u(\cdot) \in U = \{u(\cdot) \in L_2^r[t_0,t_1] \mid \|u(\cdot)\|_{L_2} \le \mathrm{const}\}.   (1.3)

Here D(t), B(t) are n×n and n×r matrix functions depending continuously on time; A_1 is a fixed matrix of size m×n (m > n; the constraints x ≥ 0 are included in the polyhedron); a_1, x_0 are given vectors. The controls u(·) are elements of the space L_2^r[t_0, t_1] and satisfy condition (1.3) at all points of the interval [t_0, t_1] up to a set of measure zero. The vector φ_1 ∈ R^n is fixed and determines the normal of the linear objective function.

Any pair (x(·), u(·)) ∈ L_2^n[t_0,t_1] × U satisfying identically the condition

x(t) = x(t_0) + \int_{t_0}^{t} \big(D(\tau)x(\tau) + B(\tau)u(\tau)\big)\,d\tau   (1.4)

for almost all t ∈ [t_0, t_1] is considered as a solution to the differential system (1.1)-(1.3).

Identity (1.4) defines a generalized solution of (1.1)-(1.3). As shown in [7, Book 1, p. 443], for any control u(·) ∈ U and a given x_0 there exists a unique trajectory x(·) subject to (1.1)-(1.3), and these functions satisfy the identity (1.4). In applications, the control u(·) is often a piecewise continuous function. The presence of break points in the control u(·) does not affect the values of the trajectory x(·). Moreover, the trajectory remains unchanged even if the values of u(·) are changed on a set of measure zero.
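As an illustration of identity (1.4), the following minimal sketch (with assumed, purely illustrative data; it is not part of the paper) computes the trajectory generated by a given admissible control via an explicit Euler discretization of the Cauchy problem (1.1).

```python
# A minimal sketch: trajectory x(.) generated by a control u(.) for
# dx/dt = D(t)x + B(t)u(t), x(t0) = x0, cf. identity (1.4).
# The matrices D, B, the control u and the grid size are illustrative assumptions.
import numpy as np

def trajectory(D, B, u, x0, t0, t1, N=1000):
    """Explicit Euler approximation of x(t) on a uniform grid."""
    ts = np.linspace(t0, t1, N + 1)
    h = (t1 - t0) / N
    x = np.empty((N + 1, len(x0)))
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + h * (D(ts[k]) @ x[k] + B(ts[k]) @ u(ts[k]))
    return ts, x

# Illustrative data: n = 2 states, r = 1 control, constant matrices.
D = lambda t: np.array([[0.0, 1.0], [-1.0, 0.0]])
B = lambda t: np.array([[0.0], [1.0]])
u = lambda t: np.array([np.sin(t)])              # an admissible control from U
ts, xs = trajectory(D, B, u, x0=np.array([1.0, 0.0]), t0=0.0, t1=1.0)
x1 = xs[-1]                                      # right end x(t1), an element of X1
```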

The trajectory x(·) in (1.4) is an absolutely continuous function [8]. The class of absolutely continuous functions is a linear variety which is everywhere dense in L_2^n[t_0,t_1]. In what follows we denote this class by AC^n[t_0,t_1] ⊂ L_2^n[t_0,t_1]. The Newton-Leibniz formula and the formula for integration by parts obviously hold for any pair of functions (x(·), u(·)) ∈ AC^n[t_0,t_1] × U.^3 In [7] it was proved that a solution x_1^* ∈ X_1, (x^*(·), u^*(·)) ∈ AC^n[t_0,t_1] × U of the problem always exists.

Consider how the system (1.1)-(1.3) operates. This controlled system is a linear constraint that selects a variety of function pairs (x(·), u(·)) defined on the interval [t_0, t_1]. As already noted, the right ends of the trajectories generate the set X_1. On this set, the linear function ⟨φ_1, x_1⟩ is defined, and it singles out either a unique minimum point or a closed convex set of minimum points.

Now the problem is as follows: it is necessary to choose a control u^*(·) ∈ U such that the right end of the trajectory x^*(·) coincides with a solution of the linear programming problem (1.2) formulated on the attainability set of the dynamical system (1.1)-(1.3).

The problem is treated as a dynamic system which, by means of a suitably chosen control, transfers the linear programming problem (1.2) from the initial state to the terminal state. Such a construction allows us to adapt and adjust the model of an object to the constantly changing realities of the environment in which it is immersed.

2. Classic Lagrangian

The considered problem is a linear programming problem formulated in a Hilbert space. In the theory of linear programming for finite-dimensional spaces, it is well known that the primal problem always exists together with the dual problem in the dual space. Through an appropriate analogy, one can try to obtain an explicit dual problem for the system (1.1)-(1.3).

^3 Scalar products and norms are defined as \langle x(\cdot),y(\cdot)\rangle = \int_{t_0}^{t_1}\langle x(t),y(t)\rangle\,dt, \ \|x(\cdot)\|^2 = \int_{t_0}^{t_1}|x(t)|^2\,dt, where \langle x(t),y(t)\rangle = \sum_{i=1}^{n}x_i(t)y_i(t), \ |x(t)|^2 = \sum_{i=1}^{n}x_i^2(t), \ x(t) = (x_1(t),\dots,x_n(t))^T, \ y(t) = (y_1(t),\dots,y_n(t))^T, \ t \in [t_0,t_1].

For the system (1.1)-(1.3), we introduce a linear convolution known as the Lagrangian:

L(p_1, x_1, \psi(\cdot), x(\cdot), u(\cdot)) = \langle\varphi_1, x_1\rangle + \langle p_1, A_1 x_1 - a_1\rangle + \int_{t_0}^{t_1}\Big\langle \psi(t),\, D(t)x(t) + B(t)u(t) - \frac{d}{dt}x(t)\Big\rangle dt   (2.1)

for all p_1 ∈ R^m_+, x_1 ∈ R^n, ψ(·) ∈ Ψ^n[t_0,t_1], (x(·), u(·)) ∈ AC^n[t_0,t_1] × U, where Ψ^n[t_0,t_1] is a linear variety of absolutely continuous functions from the dual space. This set is everywhere dense in L_2^n[t_0,t_1], i.e., the closure of the variety Ψ^n[t_0,t_1] in the norm of L_2^n[t_0,t_1] coincides with L_2^n[t_0,t_1].

The saddle point (p_1^*, ψ^*(t); x^*(t_1), x^*(t), u^*(t)) of the Lagrange function, consisting of the primal (x^*(t_1), x^*(t), u^*(t)) and dual (p_1^*, ψ^*(t)) components of the solution of (1.1)-(1.3), satisfies, by definition, the system of inequalities

\langle \varphi_1, x^*(t_1)\rangle + \langle p_1, A_1 x^*(t_1) - a_1\rangle + \int_{t_0}^{t_1}\Big\langle \psi(t),\, D(t)x^*(t) + B(t)u^*(t) - \frac{d}{dt}x^*(t)\Big\rangle dt

\le \langle \varphi_1, x^*(t_1)\rangle + \langle p_1^*, A_1 x^*(t_1) - a_1\rangle + \int_{t_0}^{t_1}\Big\langle \psi^*(t),\, D(t)x^*(t) + B(t)u^*(t) - \frac{d}{dt}x^*(t)\Big\rangle dt   (2.2)

\le \langle \varphi_1, x_1\rangle + \langle p_1^*, A_1 x_1 - a_1\rangle + \int_{t_0}^{t_1}\Big\langle \psi^*(t),\, D(t)x(t) + B(t)u(t) - \frac{d}{dt}x(t)\Big\rangle dt

for all p_1 ∈ R^m_+, x_1 ∈ R^n, ψ(·) ∈ Ψ^n[t_0,t_1], (x(·), u(·)) ∈ AC^n[t_0,t_1] × U, x(t_0) = x_0. Next, we use the notation x^*(t_1) = x_1^*.

So, if the original problem (1.1)-(1.3) has primal and dual solutions, then they form a saddle point of the Lagrange function. We now show that the converse is also true: the components of a saddle point of the Lagrangian (2.1) are primal and dual solutions of the original problem (1.1)-(1.3).

The left-hand inequality of (2.2) is the problem of maximizing a linear function in the variables (p_1, ψ(·)) over the whole space R^m_+ × Ψ^n[t_0,t_1]:

\langle p_1 - p_1^*, A_1 x_1^* - a_1\rangle + \int_{t_0}^{t_1}\Big\langle \psi(t) - \psi^*(t),\, D(t)x^*(t) + B(t)u^*(t) - \frac{d}{dt}x^*(t)\Big\rangle dt \le 0   (2.3)

with p_1 ∈ R^m_+, ψ(·) ∈ Ψ^n[t_0,t_1]. From (2.3) we have

\langle p_1 - p_1^*, A_1 x_1^* - a_1\rangle \le 0, \qquad D(t)x^*(t) + B(t)u^*(t) - \frac{d}{dt}x^*(t) = 0, \quad x^*(t_0) = x_0,   (2.4)

for all p_1 ∈ R^m_+. Putting first p_1 = 0 and then p_1 = 2p_1^*, we obtain

\langle p_1^*, A_1 x_1^* - a_1\rangle = 0, \quad A_1 x_1^* - a_1 \le 0, \qquad D(t)x^*(t) + B(t)u^*(t) - \frac{d}{dt}x^*(t) = 0, \quad x^*(t_0) = x_0.   (2.5)

The right-hand inequality of (2.2) is the problem of minimizing the Lagrangian in the variables (x_1, x(·), u(·)) at the fixed values p_1 = p_1^*, ψ(t) = ψ^*(t). We show that (p_1^*, ψ^*(t); x_1^*, x^*(t), u^*(t)) is a solution of (1.1)-(1.3). In view of (2.5), from the right-hand inequality of (2.2) we have

\langle \varphi_1, x_1^*\rangle \le \langle \varphi_1, x_1\rangle + \langle p_1^*, A_1 x_1 - a_1\rangle + \int_{t_0}^{t_1}\Big\langle \psi^*(t),\, D(t)x(t) + B(t)u(t) - \frac{d}{dt}x(t)\Big\rangle dt   (2.6)

for all x_1 ∈ R^n, (x(·), u(·)) ∈ AC^n[t_0,t_1] × U.

Consider inequality (2.6) under the additional scalar constraints

\langle p_1^*, A_1 x_1 - a_1\rangle \le 0, \qquad \int_{t_0}^{t_1}\Big\langle \psi^*(t),\, D(t)x(t) + B(t)u(t) - \frac{d}{dt}x(t)\Big\rangle dt = 0.

Then we get the optimization problem \langle \varphi_1, x_1^*\rangle \le \langle \varphi_1, x_1\rangle under the constraints

\langle p_1^*, A_1 x_1 - a_1\rangle \le 0, \qquad \int_{t_0}^{t_1}\Big\langle \psi^*(t),\, D(t)x(t) + B(t)u(t) - \frac{d}{dt}x(t)\Big\rangle dt = 0   (2.7)

for all x_1 ∈ R^n, (x(·), u(·)) ∈ AC^n[t_0,t_1] × U.

From (2.5) we see that the solution (x^*(t), u^*(t)) belongs to a narrower set than the one defined by the constraints of (2.7). Therefore, this point remains a minimum point on the subset of solutions of (2.5), i.e.,

\langle \varphi_1, x_1^*\rangle \le \langle \varphi_1, x_1\rangle, \quad A_1 x_1 \le a_1,   (2.8)

\frac{d}{dt}x(t) = D(t)x(t) + B(t)u(t)   (2.9)

for all x_1 ∈ R^n, (x(·), u(·)) ∈ AC^n[t_0,t_1] × U. Thus, if the Lagrangian (2.1) has a saddle point, then its primal components form a solution to the original problem of convex programming.

3. Dual Lagrangian

We show that the Lagrangian plays the role of a "bridge" allowing us to pass from the original problem (in the primal space) to the dual problem (in the dual space). Using the formulas for the transition to the adjoint linear operators ⟨ψ, Dx⟩ = ⟨D^T ψ, x⟩, ⟨ψ, Bu⟩ = ⟨B^T ψ, u⟩ and the formula for integration by parts on the interval [t_0, t_1]

\langle \psi(t_1), x(t_1)\rangle - \langle \psi(t_0), x(t_0)\rangle = \int_{t_0}^{t_1}\Big\langle \frac{d}{dt}\psi(t), x(t)\Big\rangle dt + \int_{t_0}^{t_1}\Big\langle \psi(t), \frac{d}{dt}x(t)\Big\rangle dt,   (3.1)

we write out the Lagrange function dual to (2.1) and the saddle-point system (2.2) in the dual form:

L(p_1, x_1, \psi(t), x(t), u(t)) = \langle \varphi_1 + A_1^T p_1, x_1\rangle - \langle a_1, p_1\rangle + \int_{t_0}^{t_1}\Big\langle D^T(t)\psi(t) + \frac{d}{dt}\psi(t),\, x(t)\Big\rangle dt + \int_{t_0}^{t_1}\langle B^T(t)\psi(t), u(t)\rangle dt - \langle \psi(t_1), x(t_1)\rangle + \langle \psi(t_0), x(t_0)\rangle   (3.2)

for all p_1 ∈ R^m_+, x_1 ∈ R^n, ψ(·) ∈ Ψ^n[t_0,t_1], (x(·), u(·)) ∈ AC^n[t_0,t_1] × U; here x_0 = x(t_0), ψ_0 = ψ(t_0), ψ_1 = ψ(t_1).

Both Lagrangians (primal and dual) have the same saddle point (p_1^*, ψ^*(t); x_1^*, x^*(t), u^*(t)), which satisfies the dual saddle-point system

\langle \varphi_1 + A_1^T p_1, x_1^*\rangle + \langle -a_1, p_1\rangle + \int_{t_0}^{t_1}\Big\langle D^T(t)\psi(t) + \frac{d}{dt}\psi(t),\, x^*(t)\Big\rangle dt + \int_{t_0}^{t_1}\langle B^T(t)\psi(t), u^*(t)\rangle dt - \langle \psi(t_1), x_1^*\rangle + \langle \psi(t_0), x_0\rangle

\le \langle \varphi_1 + A_1^T p_1^*, x_1^*\rangle + \langle -a_1, p_1^*\rangle + \int_{t_0}^{t_1}\Big\langle D^T(t)\psi^*(t) + \frac{d}{dt}\psi^*(t),\, x^*(t)\Big\rangle dt + \int_{t_0}^{t_1}\langle B^T(t)\psi^*(t), u^*(t)\rangle dt - \langle \psi^*(t_1), x_1^*\rangle + \langle \psi^*(t_0), x_0\rangle   (3.3)

\le \langle \varphi_1 + A_1^T p_1^*, x_1\rangle + \langle -a_1, p_1^*\rangle + \int_{t_0}^{t_1}\Big\langle D^T(t)\psi^*(t) + \frac{d}{dt}\psi^*(t),\, x(t)\Big\rangle dt + \int_{t_0}^{t_1}\langle B^T(t)\psi^*(t), u(t)\rangle dt - \langle \psi^*(t_1), x_1\rangle + \langle \psi^*(t_0), x_0\rangle

for all p_1 ∈ R^m_+, x_1 ∈ R^n, ψ(·) ∈ Ψ^n[t_0,t_1], (x(·), u(·)) ∈ AC^n[t_0,t_1] × U. From the right-hand inequality of (3.3) we have

\langle \varphi_1 + A_1^T p_1^* - \psi^*(t_1),\, x_1^* - x_1\rangle + \int_{t_0}^{t_1}\Big\langle D^T(t)\psi^*(t) + \frac{d}{dt}\psi^*(t),\, x^*(t) - x(t)\Big\rangle dt + \int_{t_0}^{t_1}\langle B^T(t)\psi^*(t),\, u^*(t) - u(t)\rangle dt \le 0

for all x_1 ∈ R^n, (x(·), u(·)) ∈ AC^n[t_0,t_1] × U. Putting u(·) = u^*(·) in the resulting inequality, we get

\langle \varphi_1 + A_1^T p_1^* - \psi^*(t_1),\, x_1^* - x_1\rangle + \int_{t_0}^{t_1}\Big\langle D^T(t)\psi^*(t) + \frac{d}{dt}\psi^*(t),\, x^*(t) - x(t)\Big\rangle dt \le 0   (3.4)

for all x_1 ∈ R^n, x(·) ∈ AC^n[t_0,t_1]. Assuming x(·) = x^*(·), we find

\int_{t_0}^{t_1}\langle B^T(t)\psi^*(t),\, u^*(t) - u(t)\rangle dt \le 0   (3.5)

for all u(·) ∈ U.

Given that (3.4) is the problem of maximizing a linear function in the variables (x_1, x(·)) ∈ R^n × AC^n[t_0,t_1], the inequalities (3.4), (3.5) can be rewritten in the form

D^T(t)\psi^*(t) + \frac{d}{dt}\psi^*(t) = 0, \qquad \varphi_1 + A_1^T p_1^* - \psi^*(t_1) = 0,

\int_{t_0}^{t_1}\langle B^T(t)\psi^*(t),\, u^*(t) - u(t)\rangle dt \le 0, \quad u(\cdot) \in U.   (3.6)

From the left-hand inequality of (3.3), together with the equations (3.6), we have

\langle \varphi_1 + A_1^T p_1 - \psi_1,\, x_1^*\rangle + \langle -a_1, p_1\rangle + \int_{t_0}^{t_1}\Big\langle D^T(t)\psi(t) + \frac{d}{dt}\psi(t),\, x^*(t)\Big\rangle dt + \langle \psi(t_0), x_0\rangle + \int_{t_0}^{t_1}\langle B^T(t)\psi(t), u^*(t)\rangle dt \le \langle -a_1, p_1^*\rangle + \int_{t_0}^{t_1}\langle B^T(t)\psi^*(t), u^*(t)\rangle dt + \langle \psi^*(t_0), x_0\rangle.

Note that ψ^*(t_0) = 0. Indeed, suppose that ψ^*(t_0) ≠ 0 and fix the values of all variables except ψ(t_0) in this inequality. Letting those components of ψ(t_0) for which the corresponding components of x(t_0) are greater than zero tend to infinity, we obtain a contradiction with the existence of a saddle point of the Lagrange function.

Given that ψ^*(t_0) = 0, we consider this inequality under the two scalar constraints

\langle \varphi_1 + A_1^T p_1 - \psi_1,\, x_1^*\rangle = 0, \qquad \int_{t_0}^{t_1}\Big\langle D^T(t)\psi(t) + \frac{d}{dt}\psi(t),\, x^*(t)\Big\rangle dt = 0.

Then we get the problem of maximizing the scalar function

\langle -a_1, p_1\rangle + \int_{t_0}^{t_1}\langle \psi(t), B(t)u^*(t)\rangle dt \le \langle -a_1, p_1^*\rangle + \int_{t_0}^{t_1}\langle \psi^*(t), B(t)u^*(t)\rangle dt

under the two scalar constraints

\langle \varphi_1 + A_1^T p_1 - \psi_1,\, x_1^*\rangle = 0, \qquad \int_{t_0}^{t_1}\Big\langle D^T(t)\psi(t) + \frac{d}{dt}\psi(t),\, x^*(t)\Big\rangle dt = 0,

whence we come to the dual problem under the vector constraints:

(p_1^*, \psi^*(t)) \in \mathrm{Argmax}\Big\{ \langle -a_1, p_1\rangle + \int_{t_0}^{t_1}\langle \psi(t), B(t)u^*(t)\rangle dt \ \Big| \   (3.7)

\varphi_1 + A_1^T p_1 - \psi_1 = 0, \quad D^T(t)\psi(t) + \frac{d}{dt}\psi(t) = 0, \quad p_1 \in R^m_+, \ \psi(\cdot) \in \Psi^n[t_0,t_1]\Big\}.   (3.8)

Thus, the system (3.7), (3.8) gives the dual problem with respect to (1.1)-(1.3). This problem can be viewed as a generalization of the dual problem of finite-dimensional linear programming.

4. Mutually dual problems

Let us write out together the pair of mutually dual problems. Primal problem:

(x_1^*, x^*(t), u^*(t)) \in \mathrm{Argmin}\Big\{\langle \varphi_1, x(t_1)\rangle \ \Big|\ A_1 x(t_1) \le a_1, \ x(t_1) \in R^n,

\frac{d}{dt}x(t) = D(t)x(t) + B(t)u(t), \quad t_0 \le t \le t_1,   (4.1)

x(t_0) = x_0, \ x(t_1) = x_1 \in X_1, \ u(\cdot) \in U\Big\}.

Dual problem:

(p_1^*, \psi^*(t)) \in \mathrm{Argmax}\Big\{\langle -a_1, p_1\rangle + \int_{t_0}^{t_1}\langle \psi(t), B(t)u^*(t)\rangle dt \ \Big|\ p_1 \ge 0, \ \psi(\cdot) \in \Psi^n[t_0,t_1],

\varphi_1 + A_1^T p_1 - \psi_1 = 0, \quad D^T(t)\psi(t) + \frac{d}{dt}\psi(t) = 0\Big\},   (4.2)

\int_{t_0}^{t_1}\langle B^T(t)\psi^*(t),\, u^*(t) - u(t)\rangle dt \le 0, \quad u(\cdot) \in U.   (4.3)

If there is no dynamics in (4.1)-(4.3), the system takes the form of a pair of primal and dual finite-dimensional linear programming problems:

x_1^* \in \mathrm{Argmin}\{\langle \varphi_1, x_1\rangle \mid A_1 x_1 \le a_1, \ x_1 \in R^n\},

p_1^* \in \mathrm{Argmax}\{\langle -a_1, p_1\rangle \mid \varphi_1 + A_1^T p_1 = 0, \ p_1 \ge 0\}.
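For illustration only, this degenerate finite-dimensional pair can be solved directly. The sketch below (with assumed data A_1, a_1, φ_1; it is not part of the paper) uses scipy.optimize.linprog for both the primal and the dual problem.

```python
# A minimal sketch (assumed data): the finite-dimensional primal/dual LP pair
# obtained from (4.1)-(4.3) when the dynamics is absent.
import numpy as np
from scipy.optimize import linprog

A1   = np.array([[1.0, 2.0], [3.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # m x n, m > n
a1   = np.array([4.0, 6.0, 0.0, 0.0])     # the last two rows encode x >= 0
phi1 = np.array([-1.0, -2.0])

# Primal:  min <phi1, x1>  s.t.  A1 x1 <= a1   (x1 is a free variable).
primal = linprog(c=phi1, A_ub=A1, b_ub=a1, bounds=(None, None))

# Dual:    max <-a1, p1>  s.t.  phi1 + A1^T p1 = 0,  p1 >= 0
#          (linprog minimizes, so we minimize <a1, p1> and flip the sign).
dual = linprog(c=a1, A_eq=A1.T, b_eq=-phi1, bounds=(0, None))

print(primal.x, primal.fun)    # optimal x1* and <phi1, x1*>
print(dual.x, -dual.fun)       # optimal p1* and <-a1, p1*>
```

At an optimum the two values coincide (strong duality), mirroring the relation between (4.1) and (4.2)-(4.3) in the dynamic case.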

Each problem in the system (4.1)-(4.3), separately or in combination, can serve as the basis for the development of methods for computing the saddle points of the Lagrangians [9-18]. Another family of methods can be obtained by combining the left-hand inequality of the saddle-point system for the primal Lagrange function with the right-hand saddle-point inequality for the dual Lagrangian. The methods constructed in this way converge monotonically in norm to saddle points of the Lagrangians. With regard to the initial boundary value problem of optimal control, this means weak convergence in controls and strong convergence in trajectories, conjugate trajectories, and terminal variables.

In this paper, we consider an iterative process for solving the boundary value differential system obtained from the saddle-point inequalities. It will be shown that this system is close to the similar differential system obtained from the Pontryagin maximum principle.

5. Primal-dual (combined) differential system

Consider the left-hand saddle-point inequality of (2.2) for the classical Lagrangian together with the right-hand saddle-point inequality of (3.3) for the dual Lagrangian. The subsystems (2.5) and (3.6) were obtained from these inequalities.

Combining them together, we write out the primal-dual differential system

\frac{d}{dt}x^*(t) = D(t)x^*(t) + B(t)u^*(t), \quad x^*(t_0) = x_0, \qquad \langle p_1 - p_1^*, A_1 x_1^* - a_1\rangle \le 0,

D^T(t)\psi^*(t) + \frac{d}{dt}\psi^*(t) = 0, \qquad \varphi_1 + A_1^T p_1^* - \psi_1^* = 0,   (5.1)

\int_{t_0}^{t_1}\langle B^T(t)\psi^*(t),\, u^*(t) - u(t)\rangle dt \le 0, \quad u(\cdot) \in U, \ p_1 \in R^m_+.

The primal-dual system (5.1) was obtained from the necessary and sufficient conditions for a saddle point of the Lagrange function. A similar system can be obtained on the basis of the Pontryagin maximum principle. Due to the linearity of the dynamics, the maximum condition for the Hamiltonian of this optimal control problem takes the form of a variational inequality. In view of the convexity of U, the Pontryagin maximum principle can be written as

\frac{d}{dt}x^*(t) = D(t)x^*(t) + B(t)u^*(t), \quad x^*(t_0) = x_0, \qquad \langle p_1 - p_1^*, A_1 x_1^* - a_1\rangle \le 0,

D^T(t)\psi^*(t) + \frac{d}{dt}\psi^*(t) = 0, \qquad \varphi_1 + A_1^T p_1^* - \psi_1^* = 0,   (5.2)

\langle B^T(t)\psi^*(t),\, u^*(t) - u(t)\rangle \le 0

for all p_1 ∈ R^m_+, u(·) ∈ U and almost all t ∈ [t_0, t_1].

The variational inequalities in the control u(·) in (5.1) and (5.2) are, in fact, different inequalities. The first describes the problem of maximizing a linear function over the given set U in a functional space. The second is actually a family of finite-dimensional variational inequalities depending on the parameter t ∈ [t_0, t_1]; each of them is a finite-dimensional problem of maximizing a linear function in the variable u.

Without a doubt, the system (5.2) is a more universal statement than (5.1), but (5.1) clearly emphasizes the saddle-point nature of the problem and allows us to build methods within Hilbert spaces. These methods converge to the solution of the problem in all its variables: controls, trajectories, conjugate trajectories, as well as primal and dual variables of the terminal problem. Similar methods based on the maximum principle are not known to the authors. The variational inequality in (5.1) is usually associated with the integral maximum principle [7, Book 2, p. 450].

Let us return to the system (5.1). Its variational inequalities can be rewritten in the equivalent form of operator equations with projection operators onto the corresponding closed convex sets. Then we arrive at the system of operator and differential equations

\frac{d}{dt}x^*(t) = D(t)x^*(t) + B(t)u^*(t), \quad x^*(t_0) = x_0,   (5.3)

p_1^* = \pi_+(p_1^* + \alpha(A_1 x_1^* - a_1)),   (5.4)

D^T(t)\psi^*(t) + \frac{d}{dt}\psi^*(t) = 0, \qquad \varphi_1 + A_1^T p_1^* - \psi_1^* = 0,   (5.5)

u^*(t) = \pi_U(u^*(t) - \alpha B^T(t)\psi^*(t)),   (5.6)

where π_+(·), π_U(·) are the projection operators onto the positive orthant R^m_+ and onto the set U of admissible controls, α > 0. Here (p_1^*, ψ^*(t); x_1^*, x^*(t), u^*(t)) is the solution of (5.3)-(5.6).
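The two projection operators in (5.4), (5.6) admit simple explicit realizations. Below is a minimal sketch (assuming, purely for illustration, that U is the L_2-ball ‖u(·)‖ ≤ ρ sampled on a uniform grid with step h; the function names are hypothetical, not from the paper).

```python
# A minimal sketch of the projections pi_+ (onto R^m_+) and pi_U
# (onto an L2-ball of radius rho, discretized with grid step h).
import numpy as np

def proj_orthant(p):
    """Projection onto the positive orthant: componentwise positive part."""
    return np.maximum(p, 0.0)

def proj_l2_ball(u, h, rho):
    """Projection onto {u : ||u||_{L2} <= rho}; u holds grid samples of the control."""
    norm = np.sqrt(h * np.sum(u * u))
    return u if norm <= rho else (rho / norm) * u
```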

6. Saddle-point method for solving the problem

Consider an iterative process constructed on the basis of (5.3)-(5.6). Suppose that at the k-th iteration the values of the dual variable p_1 = p_1^k ∈ R^m_+ and of the control u(t) = u^k(t) ∈ U are known. Then one can solve the differential equation (5.3) and find the trajectory x^k(t), compute the terminal value x^k(t_1) = x_1^k and, using p_1^k, x_1^k, carry out the step (5.4). Next, using the transversality condition, one computes the terminal value ψ_1^k = φ_1 + A_1^T p_1^k, solves the system (5.5), and finds the conjugate trajectory ψ^k(t). Finally, when ψ^k(t) and u^k(t) are known, one makes a step in the control variable according to (5.6) and obtains the next iterate u^{k+1}(t) ∈ U.

Formally, the process described above has the form

\frac{d}{dt}x^k(t) = D(t)x^k(t) + B(t)u^k(t), \quad x^k(t_0) = x_0,   (6.1)

p_1^{k+1} = \pi_+(p_1^k + \alpha(A_1 x_1^k - a_1)),   (6.2)

D^T(t)\psi^k(t) + \frac{d}{dt}\psi^k(t) = 0, \qquad \psi_1^k = \varphi_1 + A_1^T p_1^k,   (6.3)

u^{k+1}(t) = \pi_U(u^k(t) - \alpha B^T(t)\psi^k(t)), \quad k = 0, 1, 2, \dots   (6.4)

Note that in this method, each iteration is actually reduced to the solution of two systems of differential equations (6.1), (6.3).

The process (6.1)-(6.4) belongs to the class of simple iteration methods and is the simplest of the known computational processes. In the case of strictly contractive mappings such a process converges at a geometric rate. However, in our case we deal with a saddle-point problem, for which simple iteration methods do not converge. Therefore, to solve the saddle-point problem we use the saddle-point extragradient approach. Other gradient-type approaches, mainly applied to variational inequalities, were considered by many authors [19].

The extragradient method for solving the problem (5.3)-(5.6) is the controlled process (6.1)-(6.4), each iteration of which is divided into two half-steps.

The formulas of this iterative method have the form:

1) predictive half-step

\frac{d}{dt}x^k(t) = D(t)x^k(t) + B(t)u^k(t), \quad x^k(t_0) = x_0,   (6.5)

\bar{p}_1^k = \pi_+(p_1^k + \alpha(A_1 x_1^k - a_1)),   (6.6)

\frac{d}{dt}\psi^k(t) + D^T(t)\psi^k(t) = 0, \qquad \psi_1^k = \varphi_1 + A_1^T p_1^k,   (6.7)

\bar{u}^k(t) = \pi_U(u^k(t) - \alpha B^T(t)\psi^k(t));   (6.8)

2) basic half-step

\frac{d}{dt}\bar{x}^k(t) = D(t)\bar{x}^k(t) + B(t)\bar{u}^k(t), \quad \bar{x}^k(t_0) = x_0,   (6.9)

p_1^{k+1} = \pi_+(p_1^k + \alpha(A_1 \bar{x}_1^k - a_1)),   (6.10)

\frac{d}{dt}\bar{\psi}^k(t) + D^T(t)\bar{\psi}^k(t) = 0, \qquad \bar{\psi}_1^k = \varphi_1 + A_1^T \bar{p}_1^k,   (6.11)

u^{k+1}(t) = \pi_U(u^k(t) - \alpha B^T(t)\bar{\psi}^k(t)), \quad k = 0, 1, 2, \dots   (6.12)

Here, two differential equations are solved and the iterative move in controls is carried out on each half-step.

From the formulas of this process, we can see that the differential equations (6.5), (6.7) and (6.9), (6.11) are only used to compute the functions x^k(t), x̄^k(t), ψ^k(t), and ψ̄^k(t), so the process can be written in a more compact form

\bar{p}_1^k = \pi_+(p_1^k + \alpha(A_1 x_1^k - a_1)),   (6.13)

\bar{u}^k(t) = \pi_U(u^k(t) - \alpha B^T(t)\psi^k(t)),   (6.14)

p_1^{k+1} = \pi_+(p_1^k + \alpha(A_1 \bar{x}_1^k - a_1)),   (6.15)

u^{k+1}(t) = \pi_U(u^k(t) - \alpha B^T(t)\bar{\psi}^k(t)),   (6.16)

where t ∈ [t_0, t_1], and x^k(t), x̄^k(t), ψ^k(t), ψ̄^k(t) are computed from (6.5), (6.9) and (6.7), (6.11).
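For illustration only, the following sketch (not the authors' implementation) shows how one iteration of the process (6.5)-(6.16) could be organized numerically under simplifying assumptions: explicit Euler integration of the forward and backward equations, U taken as an L_2-ball of radius ρ, and controls stored as samples on the grid ts. All names are illustrative.

```python
# A minimal sketch of one extragradient iteration (6.5)-(6.16) under the
# stated assumptions; Euler integration and the L2-ball U are simplifications.
import numpy as np

def forward(D, B, u, x0, ts):
    """Euler scheme for dx/dt = D(t)x + B(t)u(t), x(t0) = x0  (eqs. (6.5), (6.9))."""
    h = ts[1] - ts[0]
    x = np.empty((len(ts), len(x0))); x[0] = x0
    for k in range(len(ts) - 1):
        x[k + 1] = x[k] + h * (D(ts[k]) @ x[k] + B(ts[k]) @ u[k])
    return x

def backward(D, psi1, ts):
    """Euler scheme for dpsi/dt + D^T(t)psi = 0, psi(t1) = psi1  (eqs. (6.7), (6.11))."""
    h = ts[1] - ts[0]
    psi = np.empty((len(ts), len(psi1))); psi[-1] = psi1
    for k in range(len(ts) - 1, 0, -1):
        psi[k - 1] = psi[k] + h * (D(ts[k]).T @ psi[k])
    return psi

def extragradient_step(p, u, D, B, A1, a1, phi1, x0, ts, alpha, rho):
    h = ts[1] - ts[0]
    def proj_U(v):                                    # projection onto the L2-ball
        nrm = np.sqrt(h * np.sum(v * v))
        return v if nrm <= rho else (rho / nrm) * v
    Bt_psi = lambda psi: np.array([B(t).T @ q for t, q in zip(ts, psi)])
    # predictive half-step (6.5)-(6.8)
    x     = forward(D, B, u, x0, ts)
    p_bar = np.maximum(p + alpha * (A1 @ x[-1] - a1), 0.0)
    psi   = backward(D, phi1 + A1.T @ p, ts)
    u_bar = proj_U(u - alpha * Bt_psi(psi))
    # basic half-step (6.9)-(6.12)
    x_bar   = forward(D, B, u_bar, x0, ts)
    p_next  = np.maximum(p + alpha * (A1 @ x_bar[-1] - a1), 0.0)
    psi_bar = backward(D, phi1 + A1.T @ p_bar, ts)
    u_next  = proj_U(u - alpha * Bt_psi(psi_bar))
    return p_next, u_next
```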

For auxiliary estimates, we represent the operator equations of (6.5)-(6.12) in the form of variational inequalities

\langle \bar{p}_1^k - p_1^k - \alpha(A_1 x_1^k - a_1),\, p_1 - \bar{p}_1^k\rangle \ge 0,   (6.17)

\langle p_1^{k+1} - p_1^k - \alpha(A_1 \bar{x}_1^k - a_1),\, p_1 - p_1^{k+1}\rangle \ge 0,   (6.18)

\int_{t_0}^{t_1}\langle \bar{u}^k(t) - u^k(t) + \alpha B^T(t)\psi^k(t),\, u(t) - \bar{u}^k(t)\rangle dt \ge 0,   (6.19)

\int_{t_0}^{t_1}\langle u^{k+1}(t) - u^k(t) + \alpha B^T(t)\bar{\psi}^k(t),\, u(t) - u^{k+1}(t)\rangle dt \ge 0   (6.20)

for all p_1 ∈ R^m_+, u(·) ∈ U.

The following estimates are obtained from (6.17)-(6.20):

|\bar{p}_1^k - p_1^{k+1}| \le \alpha|A_1(x_1^k - \bar{x}_1^k)| \le \alpha\|A_1\|\,|x_1^k - \bar{x}_1^k|,   (6.21)

\|\bar{u}^k(\cdot) - u^{k+1}(\cdot)\| \le \alpha\|B^T(\cdot)(\psi^k(\cdot) - \bar{\psi}^k(\cdot))\| \le \alpha B_{\max}\|\psi^k(\cdot) - \bar{\psi}^k(\cdot)\|,   (6.22)

where B_max = max ‖B(t)‖ over t ∈ [t_0, t_1], α > 0.

1. In the proof of convergence of the method to the solution we need two more estimates, namely, for the deviations |x^k(t) - x̄^k(t)| and |ψ^k(t) - ψ̄^k(t)|, t ∈ [t_0, t_1]. By the linearity of the equations (6.5) and (6.9), we have

\frac{d}{dt}(x^k(t) - \bar{x}^k(t)) = D(t)(x^k(t) - \bar{x}^k(t)) + B(t)(u^k(t) - \bar{u}^k(t)), \quad x^k(t_0) - \bar{x}^k(t_0) = 0.

Integrating the resulting identity from t_0 to t:

(x^k(t) - \bar{x}^k(t)) - (x^k(t_0) - \bar{x}^k(t_0)) = \int_{t_0}^{t} D(\tau)(x^k(\tau) - \bar{x}^k(\tau))\,d\tau + \int_{t_0}^{t} B(\tau)(u^k(\tau) - \bar{u}^k(\tau))\,d\tau.

From the last equation, we get the estimate

|x^k(t) - \bar{x}^k(t)| \le D_{\max}\int_{t_0}^{t}|x^k(\tau) - \bar{x}^k(\tau)|\,d\tau + B_{\max}\int_{t_0}^{t}|u^k(\tau) - \bar{u}^k(\tau)|\,d\tau,   (6.23)

where D_max = max ‖D(t)‖, t ∈ [t_0, t_1]. We apply the Gronwall lemma [7, Book 1, p. 472]: the inequality 0 ≤ ρ(t) ≤ a ∫_{t_0}^{t} ρ(τ) dτ + b, t_0 ≤ t ≤ t_1, implies the inequality ρ(t) ≤ b e^{a(t_1 - t_0)}, t_0 ≤ t ≤ t_1, where ρ(t) is continuous and a ≥ 0, b ≥ 0 are constants. Using this lemma, we obtain

|x^k(t) - \bar{x}^k(t)| \le B_{\max}e^{D_{\max}(t_1 - t_0)}\int_{t_0}^{t_1}|u^k(t) - \bar{u}^k(t)|\,dt.

Estimating the integral on the right-hand side by the Cauchy-Schwarz inequality, we have

|x^k(t) - \bar{x}^k(t)|^2 \le B_{\max}^2 e^{2D_{\max}(t_1 - t_0)}(t_1 - t_0)\|u^k(\cdot) - \bar{u}^k(\cdot)\|^2.   (6.24)

Hence, for t = t_1, we find the deviation of the terminal values of the trajectories:

|x_1^k - \bar{x}_1^k|^2 \le B_{\max}^2 e^{2D_{\max}(t_1 - t_0)}(t_1 - t_0)\|u^k(\cdot) - \bar{u}^k(\cdot)\|^2.   (6.25)

To prove that the sequence {x^k(·)} is bounded, we essentially repeat the above arguments. Let us recall the main points. Write down the difference of the two linear equations (6.5) and (5.3):

\frac{d}{dt}(x^k(t) - x^*(t)) = D(t)(x^k(t) - x^*(t)) + B(t)(u^k(t) - u^*(t)), \quad x^k(t_0) - x^*(t_0) = 0.

Passing from this difference to the analog of (6.23), we have

|x^k(t) - x^*(t)| \le D_{\max}\int_{t_0}^{t}|x^k(\tau) - x^*(\tau)|\,d\tau + B_{\max}\int_{t_0}^{t}|u^k(\tau) - u^*(\tau)|\,d\tau.

Concluding these considerations, we obtain the analog of (6.24):

|x^k(t) - x^*(t)|^2 \le B_{\max}^2 e^{2D_{\max}(t_1 - t_0)}(t_1 - t_0)\|u^k(\cdot) - u^*(\cdot)\|^2.   (6.26)

2. Finally, from the equations (6.7), (6.11) we obtain similar estimates for the conjugate trajectories |ψ^k(t) - ψ̄^k(t)|:

\frac{d}{dt}(\psi^k(t) - \bar{\psi}^k(t)) + D^T(t)(\psi^k(t) - \bar{\psi}^k(t)) = 0,   (6.27)

where ψ_1^k - ψ̄_1^k = A_1^T(p_1^k - p̄_1^k). Integrating (6.27) from t to t_1, we have

\int_{t}^{t_1}\frac{d}{d\tau}(\psi^k(\tau) - \bar{\psi}^k(\tau))\,d\tau + \int_{t}^{t_1} D^T(\tau)(\psi^k(\tau) - \bar{\psi}^k(\tau))\,d\tau = 0,

whence

\psi^k(t) - \bar{\psi}^k(t) = \int_{t}^{t_1} D^T(\tau)(\psi^k(\tau) - \bar{\psi}^k(\tau))\,d\tau + \psi_1^k - \bar{\psi}_1^k.

Consequently, the following estimate is valid:

|\psi^k(t) - \bar{\psi}^k(t)| \le \int_{t}^{t_1}|D^T(\tau)(\psi^k(\tau) - \bar{\psi}^k(\tau))|\,d\tau + |\psi_1^k - \bar{\psi}_1^k| \le D_{\max}\int_{t}^{t_1}|\psi^k(\tau) - \bar{\psi}^k(\tau)|\,d\tau + b,   (6.28)

where t ∈ [t_0, t_1], b = |ψ_1^k - ψ̄_1^k|. We apply the Gronwall lemma again [7, Book 1, p. 472]: if 0 ≤ ρ(t) ≤ a ∫_{t}^{t_1} ρ(τ) dτ + b, t_0 ≤ t ≤ t_1, then the inequality ρ(t) ≤ b e^{a(t_1 - t)} also holds; here ρ(t) is a continuous function and a ≥ 0, b ≥ 0 are constants. From this statement and (6.28) we get

|\psi^k(t) - \bar{\psi}^k(t)|^2 \le |\psi_1^k - \bar{\psi}_1^k|^2 e^{2D_{\max}(t_1 - t)}.   (6.29)

From (6.7), (6.11) for the terminal values, we have

|\psi_1^k - \bar{\psi}_1^k|^2 = |A_1^T(p_1^k - \bar{p}_1^k)|^2 \le \|A_1^T\|^2 |p_1^k - \bar{p}_1^k|^2.   (6.30)

Substituting (6.30) into (6.29), we get

|\psi^k(t) - \bar{\psi}^k(t)|^2 \le \|A_1^T\|^2 e^{2D_{\max}(t_1 - t)}|p_1^k - \bar{p}_1^k|^2.

Integrating this inequality from t_0 to t_1, we find

\|\psi^k(\cdot) - \bar{\psi}^k(\cdot)\|^2 \le \big(\|A_1^T\|^2/(2D_{\max})\big)\big(e^{2D_{\max}(t_1 - t_0)} - 1\big)|p_1^k - \bar{p}_1^k|^2.   (6.31)

Similarly, we prove the boundedness of the conjugate trajectories by estimating |ψ^k(t) - ψ^*(t)|. From (6.7) and (5.5), we have

\frac{d}{dt}(\psi^k(t) - \psi^*(t)) + D^T(t)(\psi^k(t) - \psi^*(t)) = 0.

Passing from this difference to the analogs of (6.28)-(6.31), we obtain

\|\psi^k(\cdot) - \psi^*(\cdot)\|^2 \le \big(\|A_1^T\|^2/(2D_{\max})\big)\big(e^{2D_{\max}(t_1 - t_0)} - 1\big)|p_1^k - p_1^*|^2.   (6.32)

7. Proof of method convergence

We show that the process (6.5)-(6.12) converges monotonically in norm to one of the solutions of the original problem.

Theorem 1. If the set of solutions (p_1^*, ψ^*(t); x_1^*, x^*(t), u^*(t)) of the problem (5.3)-(5.6) is not empty and the terminal problem is a linear programming problem, then the sequence (p_1^k, ψ^k(·); x_1^k, x^k(·), u^k(·)) generated by (6.5)-(6.12) with the step length chosen from the condition 0 < α < 1/(√2 K), where K = max(K_1, K_2),

K_1^2 = B_{\max}^2\|A_1^T\|^2/(2D_{\max})\big(e^{2D_{\max}(t_1 - t_0)} - 1\big), \qquad K_2^2 = \|A_1\|^2 B_{\max}^2 e^{2D_{\max}(t_1 - t_0)}(t_1 - t_0),

contains a subsequence which converges to a solution of the problem. Namely, the convergence in controls is weak, while the convergence in trajectories, conjugate trajectories, and the finite-dimensional variables of the terminal problem is strong.

In particular, for this subsequence the sequence of total deviations

\{\|u^k(\cdot) - u^*(\cdot)\|^2 + |p_1^k - p_1^*|^2\}

decreases monotonically on the Cartesian product L_2^r[t_0, t_1] × R^m.

Proof. The main effort in the proof is focused on obtaining estimates of ‖u^k(·) - u^*(·)‖² and |p_1^k - p_1^*|². For this purpose we use variational inequalities. In the proposed iterative process, one part of the formulas is written in the form of variational inequalities and the other part in the form of differential equations; therefore, to make the reasoning uniform, we also write the differential equations in the form of variational inequalities.

1. Write down the equation (6.11) in the form of a variational inequality:

\langle \varphi_1 + A_1^T\bar{p}_1^k - \bar{\psi}_1^k,\, x_1^* - \bar{x}_1^k\rangle + \int_{t_0}^{t_1}\Big\langle D^T(t)\bar{\psi}^k(t) + \frac{d}{dt}\bar{\psi}^k(t),\, x^*(t) - \bar{x}^k(t)\Big\rangle dt \ge 0.

Similarly, we proceed with the equation (5.5):

-\langle \varphi_1 + A_1^T p_1^* - \psi_1^*,\, x_1^* - \bar{x}_1^k\rangle - \int_{t_0}^{t_1}\Big\langle D^T(t)\psi^*(t) + \frac{d}{dt}\psi^*(t),\, x^*(t) - \bar{x}^k(t)\Big\rangle dt \ge 0.

Summing these inequalities, we get

\langle A_1^T\bar{p}_1^k - A_1^T p_1^* - (\bar{\psi}_1^k - \psi_1^*),\, x_1^* - \bar{x}_1^k\rangle + \int_{t_0}^{t_1}\Big\langle D^T(t)(\bar{\psi}^k(t) - \psi^*(t)) + \frac{d}{dt}(\bar{\psi}^k(t) - \psi^*(t)),\, x^*(t) - \bar{x}^k(t)\Big\rangle dt \ge 0.   (7.1)

Using the formula for integration by parts

\int_{t_0}^{t_1}\Big\langle \frac{d}{dt}(\bar{\psi}^k(t) - \psi^*(t)),\, x^*(t) - \bar{x}^k(t)\Big\rangle dt = -\int_{t_0}^{t_1}\Big\langle \bar{\psi}^k(t) - \psi^*(t),\, \frac{d}{dt}(x^*(t) - \bar{x}^k(t))\Big\rangle dt + \langle \bar{\psi}_1^k - \psi_1^*,\, x_1^* - \bar{x}_1^k\rangle

(the boundary term at t_0 vanishes since x^*(t_0) = \bar{x}^k(t_0) = x_0), we transform the differential term on the left-hand side of (7.1) (this transformation means the transition to the conjugate differential operator):

\langle A_1^T\bar{p}_1^k - A_1^T p_1^*,\, x_1^* - \bar{x}_1^k\rangle - \langle \bar{\psi}_1^k - \psi_1^*,\, x_1^* - \bar{x}_1^k\rangle + \int_{t_0}^{t_1}\Big\langle \bar{\psi}^k(t) - \psi^*(t),\, D(t)(x^*(t) - \bar{x}^k(t)) - \frac{d}{dt}(x^*(t) - \bar{x}^k(t))\Big\rangle dt + \langle \bar{\psi}_1^k - \psi_1^*,\, x_1^* - \bar{x}_1^k\rangle \ge 0.

Cancelling like terms and multiplying by α, we obtain the inequality

\alpha\langle A_1^T(\bar{p}_1^k - p_1^*),\, x_1^* - \bar{x}_1^k\rangle + \alpha\int_{t_0}^{t_1}\Big\langle \bar{\psi}^k(t) - \psi^*(t),\, D(t)(x^*(t) - \bar{x}^k(t)) - \frac{d}{dt}(x^*(t) - \bar{x}^k(t))\Big\rangle dt \ge 0.   (7.2)

2. We now obtain a similar inequality with respect to the variable p_1. To do this, put p_1 = p_1^{k+1} in (6.17):

\langle \bar{p}_1^k - p_1^k - \alpha(A_1 x_1^k - a_1),\, p_1^{k+1} - \bar{p}_1^k\rangle \ge 0.

Adding and subtracting the term \alpha A_1\bar{x}_1^k, we get

\langle \bar{p}_1^k - p_1^k,\, p_1^{k+1} - \bar{p}_1^k\rangle - \alpha\langle A_1(x_1^k - \bar{x}_1^k),\, p_1^{k+1} - \bar{p}_1^k\rangle - \alpha\langle A_1\bar{x}_1^k - a_1,\, p_1^{k+1} - \bar{p}_1^k\rangle \ge 0.

Put p_1 = p_1^* in (6.18):

\langle p_1^{k+1} - p_1^k,\, p_1^* - p_1^{k+1}\rangle - \alpha\langle A_1\bar{x}_1^k - a_1,\, p_1^* - p_1^{k+1}\rangle \ge 0.

Adding up these inequalities, we get

\langle \bar{p}_1^k - p_1^k,\, p_1^{k+1} - \bar{p}_1^k\rangle + \langle p_1^{k+1} - p_1^k,\, p_1^* - p_1^{k+1}\rangle - \alpha\langle A_1\bar{x}_1^k - a_1,\, p_1^* - \bar{p}_1^k\rangle - \alpha\langle A_1(x_1^k - \bar{x}_1^k),\, p_1^{k+1} - \bar{p}_1^k\rangle \ge 0.

Setting p_1 = \bar{p}_1^k in the second relation of the system (5.1) and multiplying by α, we have

\alpha\langle p_1^* - \bar{p}_1^k,\, A_1 x_1^* - a_1\rangle \ge 0.

Summing the last two inequalities, we get

\langle \bar{p}_1^k - p_1^k,\, p_1^{k+1} - \bar{p}_1^k\rangle + \langle p_1^{k+1} - p_1^k,\, p_1^* - p_1^{k+1}\rangle - \alpha\langle A_1(\bar{x}_1^k - x_1^*),\, p_1^* - \bar{p}_1^k\rangle + \alpha\langle A_1(\bar{x}_1^k - x_1^k),\, p_1^{k+1} - \bar{p}_1^k\rangle \ge 0.

Finally, adding the resulting inequality to (7.2), we obtain

\langle \bar{p}_1^k - p_1^k,\, p_1^{k+1} - \bar{p}_1^k\rangle + \langle p_1^{k+1} - p_1^k,\, p_1^* - p_1^{k+1}\rangle + \alpha\langle A_1(\bar{x}_1^k - x_1^k),\, p_1^{k+1} - \bar{p}_1^k\rangle + \alpha\int_{t_0}^{t_1}\Big\langle \bar{\psi}^k(t) - \psi^*(t),\, D(t)(x^*(t) - \bar{x}^k(t)) - \frac{d}{dt}(x^*(t) - \bar{x}^k(t))\Big\rangle dt \ge 0.   (7.3)

to dt

3. Consider the inequalities in controls. Put u(-) = uk+1(-) in (6.19) ftl

f l{uk(t) - uk(t) + aBT(t)yk(t),uk+1(t) - uk(t))dt > 0.

to

/„-k+\ „.k+^ „-.kt

to

Add and subtract the term tpk(t) under the sign of the scalar product:

i1 (uk(t) - uk(t),uk+1(t) - uk(t)>dt - a ^ (BT(t)(ypk(t) - yk(t)),uk+1(t) - uk(t))dt

Jto Jto

ftl

+a (BT(t)ypk(t),uk+1(t) - uk(t))dt > 0.

to

Put u = u*(-) in (6.20)

r tl

(7.4)

I (uk+1(t) - uk(t) + aBT(t)ypk(t),u*(t) - uk+1(t))dt > 0. (7.5)

to

Add up (7.4) and (7.5) then

r {uk(t) - uk(t),uk+i(t) - uk(t))dt + r {uk+i(t) - uk(t), u1 (t) - uk+i(t))dt

J to Jto

-a f \bt(t)(Pk(t-k(t)),uk+i(t)-uk(t))dt+a / \bt(t)Pk(t),u1 (t)-uk(t))dt > 0. t0t0

Substituting u(t) = uk(t) in the variational inequality of (5.1), we have

rti

(7.6)

{BT k

t0

Summarize (7.6) and (7.7)

rti rti

I i {BT(t)ip 1(t),uk(t) - u1 (t))dt > 0. (7.7)

t0

{uk(t) - uk(t),uk+i(t) - uk(t))dt + / \uk+i(t) - uk(t), u1 (t) - uk+i(t))dt t0t0

-at^ {BT(t)(Pk(t) - (t)),uk+i(t) - uk(t))dt + a ^ {Pk(t) - 01(t),B(t)(u1 (t) - uk(t)))dt > 0. t0t0

(7.8)

4. Summing (7.3) and (7.8), we obtain

\langle \bar{p}_1^k - p_1^k,\, p_1^{k+1} - \bar{p}_1^k\rangle + \langle p_1^{k+1} - p_1^k,\, p_1^* - p_1^{k+1}\rangle + \alpha\langle A_1(\bar{x}_1^k - x_1^k),\, p_1^{k+1} - \bar{p}_1^k\rangle

+ \alpha\int_{t_0}^{t_1}\Big\langle \bar{\psi}^k(t) - \psi^*(t),\, D(t)(x^*(t) - \bar{x}^k(t)) + B(t)(u^*(t) - \bar{u}^k(t)) - \frac{d}{dt}(x^*(t) - \bar{x}^k(t))\Big\rangle dt   (7.9)

+ \int_{t_0}^{t_1}\langle \bar{u}^k(t) - u^k(t),\, u^{k+1}(t) - \bar{u}^k(t)\rangle dt + \int_{t_0}^{t_1}\langle u^{k+1}(t) - u^k(t),\, u^*(t) - u^{k+1}(t)\rangle dt

- \alpha\int_{t_0}^{t_1}\langle B^T(t)(\bar{\psi}^k(t) - \psi^k(t)),\, u^{k+1}(t) - \bar{u}^k(t)\rangle dt \ge 0.

5. The estimates obtained in items 1-4 follow from the right-hand inequality of (2.2). Let us obtain a similar estimate from the left-hand inequality of the same system. Subtracting the equation (6.9) from (5.3), we get

D(t)(x^*(t) - \bar{x}^k(t)) + B(t)(u^*(t) - \bar{u}^k(t)) - \frac{d}{dt}(x^*(t) - \bar{x}^k(t)) = 0.

By the last equation, the fourth term in the inequality (7.9) vanishes, and as a result we have

\langle \bar{p}_1^k - p_1^k,\, p_1^{k+1} - \bar{p}_1^k\rangle + \langle p_1^{k+1} - p_1^k,\, p_1^* - p_1^{k+1}\rangle + \alpha\langle A_1(\bar{x}_1^k - x_1^k),\, p_1^{k+1} - \bar{p}_1^k\rangle

+ \int_{t_0}^{t_1}\langle \bar{u}^k(t) - u^k(t),\, u^{k+1}(t) - \bar{u}^k(t)\rangle dt + \int_{t_0}^{t_1}\langle u^{k+1}(t) - u^k(t),\, u^*(t) - u^{k+1}(t)\rangle dt   (7.10)

- \alpha\int_{t_0}^{t_1}\langle B^T(t)(\bar{\psi}^k(t) - \psi^k(t)),\, u^{k+1}(t) - \bar{u}^k(t)\rangle dt \ge 0.

Taking into account (6.21), (6.22), we estimate the third and last terms on the left-hand side of (7.10) and obtain

\langle \bar{p}_1^k - p_1^k,\, p_1^{k+1} - \bar{p}_1^k\rangle + \langle p_1^{k+1} - p_1^k,\, p_1^* - p_1^{k+1}\rangle + (\alpha\|A_1\|)^2|x_1^k - \bar{x}_1^k|^2

+ \int_{t_0}^{t_1}\langle \bar{u}^k(t) - u^k(t),\, u^{k+1}(t) - \bar{u}^k(t)\rangle dt + \int_{t_0}^{t_1}\langle u^{k+1}(t) - u^k(t),\, u^*(t) - u^{k+1}(t)\rangle dt   (7.11)

+ (\alpha B_{\max})^2\int_{t_0}^{t_1}|\psi^k(t) - \bar{\psi}^k(t)|^2 dt \ge 0.

6. Using the identity |y_1 - y_2|^2 = |y_1 - y_3|^2 + 2\langle y_1 - y_3, y_3 - y_2\rangle + |y_3 - y_2|^2, we can rewrite the scalar products in (7.11) in the form of sums (differences) of squares:

|p_1^k - p_1^*|^2 - |\bar{p}_1^k - p_1^k|^2 - |\bar{p}_1^k - p_1^{k+1}|^2 - |p_1^{k+1} - p_1^*|^2 + 2(\alpha\|A_1\|)^2|x_1^k - \bar{x}_1^k|^2 + \|u^k(\cdot) - u^*(\cdot)\|^2

- \|\bar{u}^k(\cdot) - u^k(\cdot)\|^2 - \|\bar{u}^k(\cdot) - u^{k+1}(\cdot)\|^2 - \|u^{k+1}(\cdot) - u^*(\cdot)\|^2 + 2(\alpha B_{\max})^2\|\bar{\psi}^k(\cdot) - \psi^k(\cdot)\|^2 \ge 0.   (7.12)

Rewrite (7.12) in the form

|p_1^{k+1} - p_1^*|^2 + |\bar{p}_1^k - p_1^k|^2 + |\bar{p}_1^k - p_1^{k+1}|^2 - 2(\alpha\|A_1\|)^2|x_1^k - \bar{x}_1^k|^2 + \|u^{k+1}(\cdot) - u^*(\cdot)\|^2 + \|\bar{u}^k(\cdot) - u^k(\cdot)\|^2 + \|\bar{u}^k(\cdot) - u^{k+1}(\cdot)\|^2 - 2(\alpha B_{\max})^2\|\bar{\psi}^k(\cdot) - \psi^k(\cdot)\|^2 \le \|u^k(\cdot) - u^*(\cdot)\|^2 + |p_1^k - p_1^*|^2.   (7.13)

Using the inequalities (6.25) and (6.31), we estimate the fourth and last terms on the left-hand side of this inequality:

2(\alpha\|A_1\|)^2|x_1^k - \bar{x}_1^k|^2 \le 2(\alpha\|A_1\|)^2 B_{\max}^2 e^{2D_{\max}(t_1 - t_0)}(t_1 - t_0)\|u^k(\cdot) - \bar{u}^k(\cdot)\|^2;

2(\alpha B_{\max})^2\|\bar{\psi}^k(\cdot) - \psi^k(\cdot)\|^2 \le 2(\alpha B_{\max})^2\|A_1^T\|^2/(2D_{\max})\big(e^{2D_{\max}(t_1 - t_0)} - 1\big)|\bar{p}_1^k - p_1^k|^2.

Substituting these estimates into (7.13), we finally get

|p_1^{k+1} - p_1^*|^2 + |\bar{p}_1^k - p_1^{k+1}|^2 + \|u^{k+1}(\cdot) - u^*(\cdot)\|^2 + \|\bar{u}^k(\cdot) - u^{k+1}(\cdot)\|^2

+ d_1|p_1^k - \bar{p}_1^k|^2 + d_2\|u^k(\cdot) - \bar{u}^k(\cdot)\|^2 \le |p_1^k - p_1^*|^2 + \|u^k(\cdot) - u^*(\cdot)\|^2,   (7.14)

where d_1 = 1 - 2\alpha^2 K_1^2, d_2 = 1 - 2\alpha^2 K_2^2,

K_1^2 = B_{\max}^2\|A_1^T\|^2/(2D_{\max})\big(e^{2D_{\max}(t_1 - t_0)} - 1\big), \qquad K_2^2 = \|A_1\|^2 B_{\max}^2 e^{2D_{\max}(t_1 - t_0)}(t_1 - t_0).

Denote K = max(K_1, K_2). Then, provided d_1 > 0, d_2 > 0, i.e., 0 < α < 1/(√2 K), from (7.14) we obtain the monotone decrease of the sequence {‖u^k(·) - u^*(·)‖² + |p_1^k - p_1^*|²} on the Cartesian product L_2^r[t_0, t_1] × R^m:

|p_1^{k+1} - p_1^*|^2 + \|u^{k+1}(\cdot) - u^*(\cdot)\|^2 \le |p_1^k - p_1^*|^2 + \|u^k(\cdot) - u^*(\cdot)\|^2.

7. Summing the inequality (7.14) from k = 0 to k = N, we get

|p_1^{N+1} - p_1^*|^2 + \|u^{N+1}(\cdot) - u^*(\cdot)\|^2 + \sum_{k=0}^{N}|\bar{p}_1^k - p_1^{k+1}|^2 + \sum_{k=0}^{N}\|\bar{u}^k(\cdot) - u^{k+1}(\cdot)\|^2

+ d_1\sum_{k=0}^{N}|p_1^k - \bar{p}_1^k|^2 + d_2\sum_{k=0}^{N}\|u^k(\cdot) - \bar{u}^k(\cdot)\|^2 \le |p_1^0 - p_1^*|^2 + \|u^0(\cdot) - u^*(\cdot)\|^2.

From this inequality, provided 0 < α < 1/(√2 K), it follows that the sequence is bounded for any N:

|p_1^{N+1} - p_1^*|^2 + \|u^{N+1}(\cdot) - u^*(\cdot)\|^2 \le |p_1^0 - p_1^*|^2 + \|u^0(\cdot) - u^*(\cdot)\|^2.   (7.15)

From this we also obtain the convergence of the series

\sum_{k=0}^{\infty}|\bar{p}_1^k - p_1^{k+1}|^2 < \infty, \quad \sum_{k=0}^{\infty}\|\bar{u}^k(\cdot) - u^{k+1}(\cdot)\|^2 < \infty, \quad \sum_{k=0}^{\infty}|p_1^k - \bar{p}_1^k|^2 < \infty, \quad \sum_{k=0}^{\infty}\|u^k(\cdot) - \bar{u}^k(\cdot)\|^2 < \infty,

and therefore the convergence to zero of the corresponding terms:

|\bar{p}_1^k - p_1^{k+1}| \to 0, \quad \|\bar{u}^k(\cdot) - u^{k+1}(\cdot)\|^2 \to 0, \quad |p_1^k - \bar{p}_1^k| \to 0, \quad \|u^k(\cdot) - \bar{u}^k(\cdot)\|^2 \to 0.   (7.16)

Hence, by the triangle inequality, we obtain |p_1^k - p_1^{k+1}| → 0, ‖u^k(·) - u^{k+1}(·)‖ → 0 as k → ∞. From (6.24), (6.25), and (6.31) it follows that

|x^k(t) - \bar{x}^k(t)| \to 0, \quad |x_1^k - \bar{x}_1^k| \to 0, \quad \|\psi^k(\cdot) - \bar{\psi}^k(\cdot)\| \to 0, \quad k \to \infty.   (7.17)

Moreover, from (7.15) we get the boundedness of the sequences

|p_1^k - p_1^*| \le \mathrm{const}, \quad \|u^k(\cdot) - u^*(\cdot)\| \le \mathrm{const},

and from (6.26) and (6.32) the boundedness of the sequences in the other variables:

|x^k(t) - x^*(t)| \le \mathrm{const}, \quad |x_1^k - x_1^*| \le \mathrm{const}, \quad \|\psi^k(\cdot) - \psi^*(\cdot)\| \le \mathrm{const}.

8. Since the sequence (p_1^k, ψ^k(·); x_1^k, x^k(·), u^k(·)) is bounded on R^m_+ × Ψ^n[t_0,t_1] × R^n × AC^n[t_0,t_1] × U, it is weakly compact [8]. The latter means that there exist a subsequence (p_1^{k_i}, ψ^{k_i}(·); x_1^{k_i}, x^{k_i}(·), u^{k_i}(·)) and a point (p_1', ψ'(·); x_1', x'(·), u'(·)) which is the weak limit of this subsequence. Weak convergence is understood in the sense of pointwise convergence of linear functionals on the Cartesian product R^m_+ × Ψ^n[t_0,t_1] × R^n × AC^n[t_0,t_1] × U for any fixed element (p_1, ψ(·); x_1, x(·), u(·)) as i → ∞. In finite-dimensional spaces weak and strong convergence coincide [8].

Now we can show that (p_1', ψ'(·); x_1', x'(·), u'(·)) is a solution of (5.3)-(5.6). To do this, first note that the pair of equations (6.8), (6.12) and the variational inequalities (6.19), (6.20) are equivalent, because the projection operator represents the problem of minimizing the quadratic function (1/2)|u(t) - (u^{k_i}(t) - α B^T(t)ψ^{k_i}(t))|² on the set U, and the variational inequalities (6.19), (6.20) are necessary and sufficient conditions for a minimum of this quadratic function.

In [7, Book 2, p. 651] it was shown that a linear operator is weakly continuous. Passing to the limit as k_i → +∞ in the system (6.5)-(6.12) (except for the equations (6.8) and (6.12)), we obtain

\frac{d}{dt}x'(t) = D(t)x'(t) + B(t)u'(t), \quad x'(t_0) = x_0,

p_1' = \pi_+(p_1' + \alpha(A_1 x_1' - a_1)),   (7.18)

D^T(t)\psi'(t) + \frac{d}{dt}\psi'(t) = 0, \qquad \varphi_1 + A_1^T p_1' - \psi_1' = 0.

Since the point (p_1', ψ'(·); x_1', x'(·), u'(·)) satisfies the system (7.18) or, what is the same, the system (5.1), it is a saddle point of the Lagrange function (2.1) (Sec. 2, "Classic Lagrangian"). As shown in Sec. 3 ("Dual Lagrangian"), this point is also a saddle point of the dual Lagrangian (3.2). Thus, the point (p_1', ψ'(·); x_1', x'(·), u'(·)) satisfies the saddle-point system (3.3). In turn, this system implies the fulfillment of conditions (3.6), i.e.,

D^T(t)\psi'(t) + \frac{d}{dt}\psi'(t) = 0, \qquad \varphi_1 + A_1^T p_1' - \psi_1' = 0,

\int_{t_0}^{t_1}\langle B^T(t)\psi'(t),\, u'(t) - u(t)\rangle dt \le 0, \quad u(\cdot) \in U.   (7.19)

Comparing (7.18), (7.19) with (5.3)-(5.6), we conclude that (p_1', ψ'(·); x_1', x'(·), u'(·)) = (p_1^*, ψ^*(·); x_1^*, x^*(·), u^*(·)). In other words, any weak limit point of the sequence generated by (6.5)-(6.12) is a solution to the original problem. This process decreases monotonically in norm in the sense of the inequality

|p_1^{k+1} - p_1^*|^2 + \int_{t_0}^{t_1}|u^{k+1}(t) - u^*(t)|^2 dt \le |p_1^k - p_1^*|^2 + \int_{t_0}^{t_1}|u^k(t) - u^*(t)|^2 dt.

Here the (k+1)-th iterate is embedded in the ball of the k-th iterate. Note that the component ∫_{t_0}^{t_1}|u^{k+1}(t) - u^*(t)|² dt, due to the weak convergence in controls, does not necessarily tend to zero as k → +∞.

Along with the overall process taking place in the functional space, there is a sub-process in the terminal space, on the attainability set. This sub-process is described by the formulas (6.6), (6.10) and takes place in a finite-dimensional Euclidean space. By the general scheme, this sub-process converges to a saddle point of the Lagrange function l(p_1, x_1) = ⟨φ_1, x_1⟩ + ⟨p_1, A_1 x_1 - a_1⟩ of the convex programming problem formulated on the attainability set. The convergence of the sub-process to a saddle point of this Lagrangian is strong due to the fact that weak and strong convergence coincide in finite-dimensional spaces. The theorem is proved.
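As a small numerical illustration (with assumed problem constants, not taken from the paper), the admissible step length of Theorem 1 can be evaluated directly from the definitions of K_1 and K_2: with K = max(K_1, K_2), the method requires 0 < α < 1/(√2 K), so that d_1 = 1 - 2α²K_1² > 0 and d_2 = 1 - 2α²K_2² > 0.

```python
# A minimal sketch: upper bound 1/(sqrt(2) K) on the step length alpha from
# Theorem 1; the constants B_max, D_max, A1_norm, t0, t1 below are illustrative.
import numpy as np

def step_bound(B_max, D_max, A1_norm, t0, t1):
    K1_sq = B_max**2 * A1_norm**2 / (2.0 * D_max) * (np.exp(2.0 * D_max * (t1 - t0)) - 1.0)
    K2_sq = A1_norm**2 * B_max**2 * np.exp(2.0 * D_max * (t1 - t0)) * (t1 - t0)
    K = np.sqrt(max(K1_sq, K2_sq))
    return 1.0 / (np.sqrt(2.0) * K)

print(step_bound(B_max=1.0, D_max=1.0, A1_norm=2.0, t0=0.0, t1=1.0))
```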

8. Conclusion

In this paper, the terminal control problem is treated as a saddle-point dynamic problem with a boundary value condition. This condition is defined implicitly as a solution to a linear programming problem. The saddle-point dynamic problem generates a system of saddle-point inequalities in a functional space. These inequalities can be seen as a strengthening of the Pontryagin maximum principle in the convex case. The saddle-point inequalities generate a differential system which is close to the similar system in the maximum principle. Based on this system, a saddle-point process was formulated, and its convergence to a saddle point of the Lagrange function was proved. Namely, weak convergence in controls, strong convergence in phase and conjugate trajectories, and strong convergence to a solution of the boundary value optimization problem on the attainability set were proved.

REFERENCES

1. Eremin I.I., Mazurov Vl.D., Astafjev N.N. Improper problems of linear and convex programming. Moscow: Nauka, 1983. 336 p. (in Russian)

2. Eremin I.I. Conflicting models of optimal planning. Moscow: Nauka, 1988. 160 p. (in Russian)

3. Eremin I.I. Duality for Pareto-successive linear optimization problems // Tr. In-ta matematiki i mekhaniki UrO RAN. 1995. Vol. 3. P. 245-261 (in Russian)

4. Eremin I.I. The theory of linear optimization. Ekaterinburg, Ekaterinburg, 1999. 312 p. (in Russian)

5. Eremin I.I., Mazurov V.D. Questions of optimization and pattern recognition. Sverdlovsk, Sredne-Ural. knizh. izd-vo, 1979. 64 p. (in Russian)

6. Eremin I.I. The theory of duality in linear optimization. Chelyabinsk: Publishing house YUUrGU, 2005. 195 p. (in Russian)

7. Vasiliev F.P. Methods of optimization: in 2 bks. Bk. 1, 2. Moscow, MTsNMO, 2011. 620 p. (in Russian)

8. Kolmogorov A.N., Fomin S.V. Elements of the theory of functions and functional analysis. Moscow: FIZMATLIT, 2009. 572 p.(in Russian)

9. Vasiliev F.P., Khoroshilova E.V., Antipin A.S. An Extragradient Method for Finding the Saddle Point in an Optimal Control Problem // Moscow University Comp. Maths. and Cybernetics. 2010. Vol. 34. No 3. P. 113-118.

10. Antipin A.S., Khoroshilova E.V. On methods of extragradient type for solving optimal control problems with linear constraints // Izvestiya IGU. Seriya: Matematika. 2010. Vol. 3. P. 2-20. (in Russian)

11. Vasiliev F.P., Khoroshilova E.V., Antipin A.S. Regularized extragradient method for finding a saddle point in optimal control problem // Proceedings of the Steklov Institute of Mathematics. 2011. Vol. 275, Suppl. 1. P. 186-196.

12. Antipin A.S. Two-person game with Nash equilibrium in optimal control problems // Optim. Lett. 2012. 6(7). P. 1349-1378.

13. Khoroshilova E.V. Extragradient method of optimal control with terminal constraints // Automation and Remote Control. 2012. Vol. 73, no. 3. P. 517-531.

14. Khoroshilova E.V. Extragradient-type method for optimal control problem with linear constraints and convex objective function // Optim. Lett., August 2013. Vol. 7, Iss. 6, P. 1193-1214.

15. Antipin A.S. Terminal Control of Boundary Models // Comput. Math. Math. Phys. 2014. V. 54, no 2. P. 257-285.

16. Antipin A.S., Khoroshilova E.V. On boundary value problem of terminal control with quadratic quality criterion // Izvestiya IGU. Seriya: Matematika. 2014. Vol. 8. P. 7-28.

17. Antipin A.S., Vasilieva O.O. Dynamic method of multipliers in terminal control // Comput. Math. Math. Phys. 2015. Vol. 55, No 5. P. 766-787.

18. Antipin A.S., Khoroshilova E.V. Optimal Control with Connected Initial and Terminal Conditions // Proceedings of the Steklov Institute of Mathematics. 2015. Vol. 289, Suppl. 1. P. 9-25.

19. Konnov I.V. Equilibrium Models and Variational Inequalities. Amsterdam, 2007. Vol. 210. 248 p. (Ser. Mathematics in Science and Engineering.)
