
URAL MATHEMATICAL JOURNAL, Vol. 9, No. 2, 2023, pp. 99-108

DOI: 10.15826/umj.2023.2.008

COMPUTING THE REACHABLE SET BOUNDARY FOR AN ABSTRACT CONTROL SYSTEM: REVISITED

Mikhail I. Gusev

Krasovskii Institute of Mathematics and Mechanics, Ural Branch of the Russian Academy of Sciences, 16 S. Kovalevskaya Str., Ekaterinburg, 620108, Russian Federation

gmi@imm.uran.ru

Abstract: A control system can be treated as a mapping that maps a control to a trajectory (output) of the system. From this point of view, the reachable set, which consists of the ends of all trajectories at a given time, can be considered the image of the set of admissible controls in the state space under a nonlinear mapping. The paper discusses some properties of such abstract reachable sets. Principal attention is paid to the description of the set boundary.

Keywords: Reachable set, Nonlinear mapping, Control system, Extremal problem, Maximum principle.

1. Introduction

The paper explores the issue of describing the boundary of the reachable set of a nonlinear control system. A reachable set consists of all state vectors that can be reached along trajectories generated by admissible controls. For a system with geometric (pointwise) constraints, it is known that a control steering the trajectory to the boundary of the reachable set satisfies Pontryagin's maximum principle [13, 16]. Many algorithms for computing reachable sets are based on solving optimal control problems and/or on the maximum principle [2, 5, 12, 14, 17]. For systems with integral constraints, some properties of reachable sets and algorithms for their construction are given in [6, 7, 15].

For integral quadratic constraints, it was shown in [8, 10] that any admissible control leading to the reachable set boundary provides a local extremum in some optimal control problem. Therefore, this control satisfies the maximum principle. This result was generalized in [11] to the case of several mixed integral constraints whose integrands depend on both the control and state variables. In [9] (see also [1]), we proposed to consider the reachability problem in terms of nonlinear mappings of Banach spaces. With this approach, the reachable set is treated as the image of the set of all admissible controls under the action of a nonlinear mapping. In the present paper, we extend the results of [9] to a broader class of abstract control systems. These systems are determined by differentiable maps of Banach spaces with different types of constraints on controls. The paper weakens the conditions of [9], which makes it possible to consider problems with constraints specified by nonsmooth functionals. The use of constructions from nonsmooth analysis allows us to treat problems with multiple constraints within a unified scheme.

2. Single-constraint control systems

Let us consider the system

$$\dot{x}(t) = f_1(t, x(t)) + f_2(t, x(t))u(t), \qquad x(t_0) = x_0, \qquad u(\cdot) \in U, \qquad (2.1)$$

on a time interval [t_0, t_1]. Here, x(t) ∈ ℝⁿ, u(t) ∈ ℝʳ, and U is a given set in the space L_p, p > 1.

The functions f_1 : ℝⁿ⁺¹ → ℝⁿ and f_2 : ℝⁿ⁺¹ → ℝⁿˣʳ are assumed to have continuous Fréchet derivatives in x and to satisfy the conditions

$$\|f_1(t, x)\| \le l_1(t)(1 + \|x\|), \qquad \|f_2(t, x)\|_{n \times r} \le l_2(t), \qquad t_0 \le t \le t_1, \; x \in \mathbb{R}^n.$$

Here, l_1(·) ∈ L_1 and l_2(·) ∈ L_2, where L_1 and L_2 denote the spaces of summable and square-summable functions, respectively.

For any u(·) ∈ L_2, there is a unique absolutely continuous solution x(t, u(·)) to system (2.1) such that x(t_0) = x_0.

A reachable set G(t_1) of system (2.1) at the time t_1 under the constraint u(·) ∈ U ⊂ L_2 is defined as follows:

$$G(t_1) = \{y \in \mathbb{R}^n : y = x(t_1, u(\cdot)),\; u(\cdot) \in U\}.$$
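As an illustration of this definition (not part of the original paper), the following sketch approximates G(t_1) for a hypothetical two-dimensional system of the form (2.1) by integrating the dynamics for a sample of admissible controls; the specific functions f1, f2, the ball-type constraint in L_2, and all parameter values are assumptions made only for the example.

```python
# A minimal sketch (assumed example, not from the paper): approximate the
# reachable set G(t1) = {x(t1, u) : u in U} of a system of the form (2.1)
# by sampling admissible controls and integrating the dynamics.
import numpy as np
from scipy.integrate import solve_ivp

t0, t1, mu = 0.0, 1.0, 1.0          # time interval and constraint level (assumed)
x0 = np.array([0.0, 0.0])           # initial state

def f1(t, x):                       # drift term f1(t, x) (assumed example)
    return np.array([x[1], -np.sin(x[0])])

def f2(t, x):                       # control matrix f2(t, x) (assumed example)
    return np.array([[0.0], [1.0]])

def endpoint(u_func):
    """x(t1, u(.)): integrate (2.1) for a given control function."""
    rhs = lambda t, x: f1(t, x) + f2(t, x) @ u_func(t)
    sol = solve_ivp(rhs, (t0, t1), x0, rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

def random_admissible_control(rng, n_knots=16):
    """Piecewise-constant control with (1/2)*integral of |u|^2 <= mu (a ball in L2)."""
    knots = np.linspace(t0, t1, n_knots + 1)
    vals = rng.normal(size=(n_knots, 1))
    norm2 = 0.5 * np.sum(vals**2) * (t1 - t0) / n_knots
    vals *= np.sqrt(mu / norm2) * rng.uniform() ** 0.5   # scale into the ball
    return lambda t: vals[min(np.searchsorted(knots, t, side='right') - 1, n_knots - 1)]

rng = np.random.default_rng(0)
cloud = np.array([endpoint(random_admissible_control(rng)) for _ in range(500)])
print("sampled reachable points, bounding box:", cloud.min(axis=0), cloud.max(axis=0))
```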

This definition of a reachable set fits into the framework of the following abstract construction. Let X and Y be real Banach spaces, and let U ⊂ X be a given set. We will call a map F : U → Y an abstract control system. Here, u ∈ U is called a control and the set U is called a constraint. The reachable set G of this system is

$$G = \{y \in Y : y = F(u),\; u \in U\}.$$

Thus, G = F(U) is the image of the set U under the mapping F. Further, we set

$$U = \{u \in X : \varphi(u) \le \mu\},$$

so U is a level set of a continuous function φ : X → ℝ; μ is a given number. In control problems for system (2.1), one can take X = L_p, p > 1, including p = ∞, as the space X and Y = ℝⁿ. The mapping F in this case is determined as

$$F(u) = F(u(\cdot)) = x(t_1, u(\cdot)). \qquad (2.2)$$

With standard requirements on system (2.1) (see, for example, [10]), F(u(·)) is a single-valued mapping having a continuous Fréchet derivative F'(u(·)) : L_2 → ℝⁿ:

$$F'(u(\cdot))\Delta u(\cdot) = \Delta x(t_1).$$

Here, Δx(t) is the solution to system (2.1) linearized around (x(t, u(·)), u(t)):

$$\Delta\dot{x}(t) = A(t)\Delta x(t) + B(t)\Delta u(t), \qquad \Delta x(t_0) = 0, \qquad (2.3)$$
$$A(t) = \frac{\partial f_1}{\partial x}(t, x(t)) + \frac{\partial}{\partial x}\big[f_2(t, x(t))u(t)\big], \qquad B(t) = f_2(t, x(t)),$$

corresponding to the control Δu(t). If system (2.3) is controllable on [t_0, t_1], then Im F'(u(·)) = ℝⁿ.
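To make the role of the linearized system concrete, here is a small sketch (an assumed example, not code from the paper) that evaluates the directional derivative F'(u(·))Δu(·) = Δx(t_1) by integrating (2.3) along a nominal trajectory and compares the result with a finite-difference approximation of F; the dynamics, the nominal control u, and the direction Δu are illustrative assumptions.

```python
# A minimal sketch (assumed example): the directional derivative
# F'(u)Δu = Δx(t1) obtained from the linearized system (2.3),
# compared with a finite-difference approximation of F.
import numpy as np
from scipy.integrate import solve_ivp

t0, t1 = 0.0, 1.0
x0 = np.array([0.0, 0.0])

f1  = lambda t, x: np.array([x[1], -np.sin(x[0])])
f2  = lambda t, x: np.array([[0.0], [1.0]])          # constant, so d[f2 u]/dx = 0
df1 = lambda t, x: np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])

u  = lambda t: np.array([np.cos(2 * np.pi * t)])     # nominal control (assumed)
du = lambda t: np.array([1.0 - t])                   # direction Δu (assumed)

def F(u_func):                                       # F(u) = x(t1, u)
    rhs = lambda t, x: f1(t, x) + f2(t, x) @ u_func(t)
    return solve_ivp(rhs, (t0, t1), x0, rtol=1e-9, atol=1e-11, dense_output=True)

nominal = F(u)

def linearized_rhs(t, dx):                           # system (2.3) along x(t, u)
    x = nominal.sol(t)
    A = df1(t, x)                                    # + d[f2(t,x)u]/dx, zero here
    B = f2(t, x)
    return A @ dx + B @ du(t)

dx1 = solve_ivp(linearized_rhs, (t0, t1), np.zeros(2), rtol=1e-9, atol=1e-11).y[:, -1]

eps = 1e-6
fd = (F(lambda t: u(t) + eps * du(t)).y[:, -1] - nominal.y[:, -1]) / eps
print("F'(u)Δu via (2.3):", dx1)
print("finite difference :", fd)
```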

Let us consider the geometric constraints on controls that are standard for control theory:

$$u(t) \in Q, \quad \text{a.e. } t \in [t_0, t_1].$$

In many cases, the set Q can be represented as

$$Q = \{v \in \mathbb{R}^r : \|Qv\| \le 1\},$$

where Q is a matrix and ‖·‖ is some norm in ℝᵐ. It is clear that we can take here X = L_∞ and

$$\varphi(u(\cdot)) = \operatorname*{ess\,sup}_{t_0 \le t \le t_1} \|Qu(t)\|.$$

Such a functional is obviously continuous in the space L_∞.

Another example of control constraints is an integral constraint. In this case, X = L_p, p > 1, and

$$\varphi(u(\cdot)) = \int_{t_0}^{t_1} \|u(t)\|^p\, dt.$$

We call the joint constraints on both control and state variables of the form

$$\varphi(u(\cdot)) := \int_{t_0}^{t_1} \big(Q(t, x(t)) + u^T(t)R(t, x(t))u(t)\big)\, dt \le \mu, \qquad u(\cdot) \in L_2,$$

the isoperimetric constraints.

Let B_X(x, r) and B_Y(y, r) be the balls of radius r centered at x ∈ X and y ∈ Y, respectively. Further analysis is based on the well-known Lyusternik theorem.

Theorem 1 [4, Theorem 2]. Let a mapping F from a Banach space X to a Banach space Y be continuously Fréchet differentiable at a point ū and such that Im F'(ū) = Y. Then there are a neighborhood V of the point ū and a number s > 0 such that, for any B_X(u, r) ⊂ V,

$$B_Y(F(u), sr) \subset F(B_X(u, r)).$$

The condition Im F'(u) = Y is called the Lyusternik (regularity) condition. If this condition is met, F is said to be regular at the point u.

Using this theorem we get the following statement.

Theorem 2. Let W be some neighborhood of the set U, let F : W → Y be a mapping continuously Fréchet differentiable at a point ū ∈ U, and let Im F'(ū) = Y. For x̄ = F(ū) ∈ ∂G, it is necessary that ū be a local extremum in the problem

$$\varphi(u) \to \min, \qquad F(u) = \bar{x}, \qquad (2.4)$$

and that φ(ū) = μ.

Proof. The proof is by contradiction. Assume that φ(ū) < μ. Since φ(u) is continuous at the point ū, there is a neighborhood V_1 of ū such that φ(u) < μ for all u ∈ V_1. Let us choose a neighborhood V and a number s whose existence follows from Theorem 1. Then, for any ball B_X(ū, r) ⊂ V ∩ V_1, we have

$$B_X(\bar{u}, r) \subset U,$$

$$B_Y(\bar{x}, sr) = B_Y(F(\bar{u}), sr) \subset F(B_X(\bar{u}, r)) \subset F(U) = G,$$

which contradicts the condition x̄ ∈ ∂G. Hence, φ(ū) = μ.

Let us again choose V and s from Theorem 1. Assume that ū is not a local minimum in (2.4). Then there is u ∈ V such that F(u) = x̄ and φ(u) < φ(ū) = μ. Let us choose r > 0 such that B_X(u, r) ⊂ V and φ < μ on B_X(u, r); this is possible since φ is continuous. Then, by Theorem 1,

$$B_Y(\bar{x}, sr) = B_Y(F(u), sr) \subset F(B_X(u, r)) \subset F(U) = G,$$

contrary to the condition x̄ ∈ ∂G. This completes the proof. □

Let us write down the necessary extremum condition for problem (2.4), assuming that φ(u) is continuously differentiable at ū. Since the constraint F(u) = x̄ is regular at the point ū, there is a Lagrange multiplier y* ∈ Y* such that

$$\varphi'(\bar{u}) + F'^*(\bar{u})y^* = 0. \qquad (2.5)$$

Here, F'*(ū) denotes the operator conjugate to the continuous linear operator F'(ū).

If φ'(ū) ≠ 0, then equality (2.5) implies that y* ≠ 0. If we divide both sides of equality (2.5) by ‖y*‖, then it takes the form

$$F'^*(\bar{u})y^* + \lambda\varphi'(\bar{u}) = 0, \qquad (2.6)$$

where ‖y*‖ = 1 and λ > 0. Since φ(ū) − μ = 0, we also have the equality

$$\lambda(\varphi(\bar{u}) - \mu) = 0. \qquad (2.7)$$

It is easy to see that relations (2.6) and (2.7) also give the necessary optimality conditions for the problem

$$\langle y^*, F(u)\rangle \to \min, \qquad \varphi(u) \le \mu, \qquad (2.8)$$

where ⟨·, ·⟩ denotes a bilinear form establishing the duality of the spaces Y and Y*. Here, equality (2.6) means that the derivative of the Lagrange function

$$L(u, \lambda) = \langle y^*, F(u)\rangle + \lambda(\varphi(u) - \mu)$$

in u is equal to zero, and equality (2.7) is a complementary slackness condition. Thus, the following statement is true.

Theorem 3. Assume that F(ū) = x̄ ∈ ∂G, ū ∈ U, F is regular at ū, φ(u) is continuously differentiable at the point ū, and φ'(ū) ≠ 0. Then there is y* ∈ Y*, ‖y*‖ = 1, such that ū satisfies the necessary extremum conditions (2.6) and (2.7) in problem (2.8).

As is easy to see, problem (2.8) can be rewritten in the equivalent form

$$\langle z^*, y\rangle \to \max, \qquad y \in G,$$

where z* = −y*. The latter is the problem of calculating the support function of G. Recall that the support function of G is defined on Y* by the equality

$$z^* \mapsto \sup_{y \in G} \langle z^*, y\rangle.$$

The point at which the supremum is reached is called a support point. Since the reachable set G in the nonlinear case is not necessarily convex, the boundary point x̄ is not necessarily a support point. However, it satisfies the same necessary optimality conditions as a support point would.
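Numerically, the support function of G can be estimated from any finite sample of reachable points (for instance, a cloud of endpoints obtained as in the sampling sketch above): for each direction z* one takes the maximum of ⟨z*, y⟩ over the sample. The snippet below is such an estimate under these assumptions; the sample used here is only a placeholder.

```python
# A minimal sketch (assumed example): estimate the support function of G,
# sup_{y in G} <z*, y>, from a finite sample of reachable points.
import numpy as np

def support_function_estimate(cloud, z):
    """cloud: (N, n) array of sampled points of G; z: direction in R^n."""
    values = cloud @ z
    k = int(np.argmax(values))
    return values[k], cloud[k]          # estimated value and the support point

# Placeholder sample (in practice, the endpoint cloud from the earlier sketch).
rng = np.random.default_rng(1)
cloud = rng.normal(size=(500, 2))

for theta in (0.0, np.pi / 3, np.pi / 2):
    z = np.array([np.cos(theta), np.sin(theta)])
    val, pt = support_function_estimate(cloud, z)
    print("z* =", np.round(z, 2), " estimate:", round(float(val), 3), " support point:", np.round(pt, 3))
```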

Next, we consider the case when φ is not continuously differentiable but is Lipschitz continuous at the point ū. For simplicity, we also assume that Y = ℝⁿ.

Denote by ∂_C f(u) the Clarke subdifferential of a function f at a point u. If f is Lipschitz continuous in some neighborhood of u, then ∂_C f(u) ≠ ∅ is a convex weakly* compact set [3]. Let L be the Lagrange function

$$L(u, \lambda, y^*) = \lambda\varphi(u) + \langle y^*, F(u) - \bar{x}\rangle,$$

where λ ≥ 0 and y* ∈ Y* = ℝⁿ are Lagrange multipliers.

Assume that ū is a local solution to problem (2.4) and φ(u) is Lipschitz continuous at the point ū. Then there exist λ ≥ 0 and y* ∈ ℝⁿ, λ + ‖y*‖ ≠ 0, such that

$$0 \in \partial_C L(\bar{u}, \lambda, y^*) = \lambda\,\partial_C\varphi(\bar{u}) + F'^*(\bar{u})y^*, \qquad (2.9)$$

where ∂_C L is taken with respect to u (see, for example, [3, Theorem 6.1.1]). Let us show that λ > 0. Indeed, if λ = 0, then ‖y*‖ ≠ 0 and F'*(ū)y* = 0. This contradicts the regularity of F at the point ū.

Without loss of generality, we set λ = 1. Suppose that 0 ∉ ∂_Cφ(ū). Then F'*(ū)y* ≠ 0 and condition (2.9) takes the form

$$-F'^*(\bar{u})y^* \in \partial_C\varphi(\bar{u}). \qquad (2.10)$$

Let us show that this inclusion is a necessary extremum condition in problem (2.8). Let

$$L(u, \alpha, \beta) = \alpha\langle y^*, F(u)\rangle + \beta(\varphi(u) - \mu)$$

be the Lagrange function for problem (2.8). If ū is a local minimum point in problem (2.8), then there are α ≥ 0 and β ≥ 0, α + β ≠ 0, such that

$$0 \in \partial_C L(\bar{u}, \alpha, \beta). \qquad (2.11)$$

Note that if 0 ∉ ∂_Cφ(ū), then α > 0 and β > 0. Indeed, if α = 0, then β > 0 and 0 ∈ ∂_Cφ(ū). If β = 0, then αF'*(ū)y* = 0 and α > 0, which is impossible due to the regularity condition. Divide both sides of inclusion (2.11) by β and take αy*/β as a new vector y*. Then inclusion (2.11) takes the form (2.10).

As a result, we get the following statement.

Theorem 4. Assume that F(ū) = x̄ ∈ ∂G, ū ∈ U, F is regular at ū, φ(u) is Lipschitz continuous at the point ū, and 0 ∉ ∂_Cφ(ū). Then there is y* ∈ Y*, ‖y*‖ = 1, such that ū satisfies the necessary extremum condition (2.10) in problem (2.8).


Remark 1. If φ(u) is convex, then ∂_Cφ(ū) = ∂φ(ū) is the subdifferential of a convex function. The condition 0 ∉ ∂_Cφ(ū) in this case is equivalent to Slater's condition: there is u such that φ(u) < φ(ū).

Remark 2. If the mapping F is defined by formula (2.2) and φ(u(·)) is a functional that is integral quadratic in u, then Theorem 2 implies the necessary extremum conditions [10] in the form of Pontryagin's maximum principle.

Note that, under integral quadratic constraints, the relations of the maximum principle follow directly from the extremum condition (2.10). Below we present the proof. Assume that X = L_2, Y = ℝⁿ, the mapping F is defined by formula (2.2), and φ(u(·)) = (1/2)⟨u(·), u(·)⟩ is an integral quadratic functional. In this case, ∂φ(ū(·)) = {φ'(ū(·))} = {ū(·)}, and the equality φ(ū(·)) = μ implies that φ'(ū(·)) ≠ 0. Therefore, (2.10) takes the following equivalent form:

$$F'^*(\bar{u})z^* = \bar{u}, \qquad z^* = -y^*, \qquad z^* \ne 0.$$

Recall that F'(ū) = F'(ū(·)) is defined by the equality F'(ū(·))Δu(·) = Δx(t_1), where Δx(t) is the solution of (2.3). Let us represent this solution in the integral form

$$\Delta x(t_1) = \int_{t_0}^{t_1} X(t_1, \tau)B(\tau)\Delta u(\tau)\, d\tau,$$

where X(t, τ) is the Cauchy matrix. For any z* ∈ ℝⁿ, we have

$$\langle z^*, F'(\bar{u}(\cdot))\Delta u(\cdot)\rangle = \langle F'^*(\bar{u}(\cdot))z^*, \Delta u(\cdot)\rangle = z^{*T}\int_{t_0}^{t_1} X(t_1, \tau)B(\tau)\Delta u(\tau)\, d\tau = \int_{t_0}^{t_1} p^T(\tau)B(\tau)\Delta u(\tau)\, d\tau,$$

where p(τ) = X^T(t_1, τ)z* satisfies the adjoint equation

$$\dot{p}(t) = -A^T(t)p(t), \qquad p(t_1) = z^*.$$

Thus, we have

$$F'^*(\bar{u}(\cdot))z^* = B^T(\cdot)p(\cdot) = \bar{u}(\cdot),$$

which implies that

$$\bar{u}(t) = B^T(t)p(t), \qquad t_0 \le t \le t_1.$$

Finally, we obtain the system of relations of the maximum principle for the boundary control ū(t) (see [10]):

$$\dot{x}(t) = f_1(t, x(t)) + f_2(t, x(t))B^T(t)p(t), \qquad x(t_0) = x_0, \qquad (2.12)$$
$$\dot{p}(t) = -A^T(t)p(t), \qquad p(\cdot) \ne 0, \qquad \bar{u}(t) = B^T(t)p(t), \qquad (2.13)$$
$$A(t) = \frac{\partial f_1}{\partial x}(t, x(t)) + \frac{\partial}{\partial x}\big[f_2(t, x(t))\bar{u}(t)\big], \qquad B(t) = f_2(t, x(t)).$$
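Relations (2.12)–(2.13) suggest a simple numerical recipe for the integral quadratic constraint (1/2)⟨u, u⟩ ≤ μ: integrate the coupled state–adjoint system forward from x(t_0) = x_0 and a trial initial adjoint p(t_0) = sℓ, and tune the scalar s so that the constraint becomes active. The sketch below (not from the paper; the dynamics, the root-finding scaling search, and its growth assumption are illustrative) produces candidate boundary points x(t_1) in this way.

```python
# A minimal sketch (assumed example): candidate boundary points of the
# reachable set under (1/2)*integral of |u|^2 <= mu, obtained by integrating
# the maximum-principle system (2.12)-(2.13) forward from p(t0) = s*l and
# choosing s so that the integral constraint is active.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

t0, t1, mu = 0.0, 1.0, 0.5
x0 = np.array([0.0, 0.0])

f1  = lambda t, x: np.array([x[1], -np.sin(x[0])])
B   = np.array([[0.0], [1.0]])                    # f2 constant => d[f2*u]/dx = 0
df1 = lambda t, x: np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])

def hamiltonian_flow(p0):
    """Integrate (2.12)-(2.13) with u(t) = B^T p(t); return x(t1) and (1/2)*int |u|^2."""
    def rhs(t, z):
        x, p = z[:2], z[2:4]
        u = B.T @ p
        dx = f1(t, x) + B @ u
        dp = -df1(t, x).T @ p                     # adjoint equation, A(t) = df1/dx here
        dj = 0.5 * float(u @ u)                   # accumulate the quadratic cost
        return np.concatenate([dx, dp, [dj]])
    z = solve_ivp(rhs, (t0, t1), np.concatenate([x0, p0, [0.0]]),
                  rtol=1e-9, atol=1e-11).y[:, -1]
    return z[:2], z[4]

def boundary_candidate(l):
    """Scale p(t0) = s*l (s from a root search) so that the constraint is active."""
    g = lambda s: hamiltonian_flow(s * l)[1] - mu
    s_hi = 1.0
    while g(s_hi) < 0:                            # expand bracket (assumes cost grows with s)
        s_hi *= 2.0
    s = brentq(g, 0.0, s_hi)
    return hamiltonian_flow(s * l)[0]

for theta in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    l = np.array([np.cos(theta), np.sin(theta)])
    print("direction", np.round(l, 2), "-> candidate x(t1):", boundary_candidate(l))
```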

Now suppose that the constraints have the form

$$\gamma(u(t)) \le \mu, \quad \text{a.e. in } [t_0, t_1],$$

where γ(u) is a convex function on ℝʳ (for example, a norm in ℝʳ). In this case, we can take X = L_∞ and

$$\varphi(u(\cdot)) = \operatorname*{ess\,sup}_{t_0 \le t \le t_1} \gamma(u(t)).$$

Such a functional is obviously convex and continuous in the space X. Assume that there is û ∈ ℝʳ such that γ(û) < μ. As before, we assume that Y = ℝⁿ. Since φ(u(·)) is convex, we can replace ∂_Cφ(ū(·)) with the subdifferential ∂φ(ū(·)) of the convex function φ.

If F(ū(·)) ∈ ∂G, then φ(ū(·)) = μ and hence 0 ∉ ∂φ(ū(·)). Thus,

$$F'^*(\bar{u}(\cdot))z^* \in \partial\varphi(\bar{u}(\cdot))$$

for some z* ∈ ℝⁿ, z* ≠ 0. Here, the point F'*(ū(·))z* belongs to the space L_∞. Similarly to the previous case, it can be proven that F'*(ū(·))z* = B^T(·)p(·), where p(·) ≠ 0 is a solution to the adjoint system.

From the properties of ∂φ(ū(·)), we get

$$\varphi(u(\cdot)) - \varphi(\bar{u}(\cdot)) \ge \langle F'^*(\bar{u}(\cdot))z^*,\, u(\cdot) - \bar{u}(\cdot)\rangle$$

for every u(·) ∈ L_∞. From this inequality, for every u(·) such that φ(u(·)) ≤ μ, we have

$$0 \ge \int_{t_0}^{t_1} p^T(\tau)B(\tau)\big(u(\tau) - \bar{u}(\tau)\big)\, d\tau. \qquad (2.14)$$

Choose a point τ ∈ (t_0, t_1), a vector v ∈ ℝʳ such that γ(v) ≤ μ, and a sufficiently small ε > 0. Let

$$u(t) = \begin{cases} \bar{u}(t), & t \notin [\tau, \tau + \varepsilon],\\ v, & t \in [\tau, \tau + \varepsilon]. \end{cases}$$

Then, (2.14) implies the inequality

$$\frac{1}{\varepsilon}\int_{\tau}^{\tau + \varepsilon} p^T(t)B(t)\bar{u}(t)\, dt \ge \frac{1}{\varepsilon}\int_{\tau}^{\tau + \varepsilon} p^T(t)B(t)v\, dt.$$

Passing here to the limit as ε → 0, we get

$$p^T(t)B(t)\bar{u}(t) \ge p^T(t)B(t)v$$

for almost every t ∈ [t_0, t_1] and every v such that γ(v) ≤ μ. So, we have

$$p^T(t)B(t)\bar{u}(t) = \max_{v:\, \gamma(v) \le \mu} p^T(t)B(t)v,$$
$$\dot{p}(t) = -A^T(t)p(t), \qquad p(\cdot) \ne 0.$$

Introducing the Hamiltonian

$$H(t, x, p, u) = p^T\big(f_1(t, x) + f_2(t, x)u\big),$$

we can write the last relations in the standard form of the maximum principle:

$$H(t, x(t), p(t), \bar{u}(t)) = \max_{v:\, \gamma(v) \le \mu} H(t, x(t), p(t), v), \quad \text{a.e. } t \in [t_0, t_1], \qquad (2.15)$$
$$\dot{p}(t) = -A^T(t)p(t) = -\frac{\partial H}{\partial x}(t, x(t), p(t), \bar{u}(t)), \qquad t \in [t_0, t_1]. \qquad (2.16)$$
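Conditions (2.15)–(2.16) can likewise be turned into a forward integration: the control is recovered at each instant by maximizing the Hamiltonian over Q. The sketch below (an assumed example, not from the paper) does this for the box constraint |u(t)| ≤ μ with a scalar control, where the pointwise maximizer is the bang-bang law u(t) = μ sign(Bᵀ(t)p(t)); the dynamics and parameters are illustrative, and the integration only yields candidate boundary points.

```python
# A minimal sketch (assumed example): candidate boundary points under the
# geometric constraint |u(t)| <= mu, found by integrating the state-adjoint
# pair (2.16) forward while taking u(t) = argmax over |v| <= mu of H(t,x,p,v),
# which for a scalar control is the bang-bang law u = mu * sign(B^T p).
import numpy as np
from scipy.integrate import solve_ivp

t0, t1, mu = 0.0, 1.0, 1.0
x0 = np.array([0.0, 0.0])

f1  = lambda t, x: np.array([x[1], -np.sin(x[0])])
B   = np.array([[0.0], [1.0]])                    # f2 constant
df1 = lambda t, x: np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])

def rhs(t, z):
    x, p = z[:2], z[2:]
    u = mu * np.sign(B.T @ p)                     # maximizes p^T(f1 + f2 v) over |v| <= mu
    dx = f1(t, x) + B @ u
    dp = -df1(t, x).T @ p                         # adjoint equation (2.16), A = df1/dx here
    return np.concatenate([dx, dp])

for theta in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    p0 = np.array([np.cos(theta), np.sin(theta)])  # p(.) != 0, defined up to scaling
    z1 = solve_ivp(rhs, (t0, t1), np.concatenate([x0, p0]),
                   rtol=1e-8, atol=1e-10, max_step=1e-2).y[:, -1]
    print("p(t0) direction", np.round(p0, 2), "-> candidate x(t1):", z1[:2])
```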

3. Multiple constraints on the control

In this section, we consider constraints specified by the inequalities

$$\varphi_i(u) \le \mu_i, \quad i = 1, \dots, k. \qquad (3.1)$$

Here, φ_i : X → ℝ are functionals and μ_i, i = 1, …, k, are given positive numbers.

One can assume without loss of generality that μ_i = 1, i = 1, …, k. Then (3.1) can be replaced by the single constraint φ(u) ≤ 1 by setting

$$\varphi(u) = m(\varphi_1(u), \dots, \varphi_k(u)), \qquad m(x) = m(x_1, \dots, x_k) = \max_{1 \le i \le k} x_i.$$

Since m(x) is a continuous function, the functional φ(u) is obviously continuous at any point at which all the functionals φ_i(u) are continuous. Therefore, for describing the reachable set boundary, we can use Theorem 2, which leads to the following statement.

Corollary 1. Let W be a neighborhood of the set U, and let F : W → Y be a mapping continuously Fréchet differentiable at the point ū ∈ U such that Im F'(ū) = Y. Assume that

$$G = \{F(u) : \varphi_i(u) \le 1,\; i = 1, \dots, k\},$$

where the φ_i(u) are continuous at the point ū. For x̄ = F(ū) ∈ ∂G, it is necessary that ū be a local extremum in the problem

$$\varphi(u) = m(\varphi_1(u), \dots, \varphi_k(u)) \to \min, \qquad F(u) = \bar{x},$$

and that φ(ū) = 1.

The derivation of extremum conditions in this problem is more complicated than before because the function m(x) is not differentiable. However, the superposition φ(u) = m(φ_1(u), …, φ_k(u)) is locally Lipschitz at the point ū if the functions φ_i(u) are. Moreover, if each of the functions φ_i(u) is either convex or continuously differentiable at the point ū, then

$$\partial_C\varphi(\bar{u}) = \operatorname{co}\bigcup_{i \in I(\bar{u})} \partial_C\varphi_i(\bar{u}), \qquad (3.2)$$

where I(ū) = {i : φ_i(ū) = φ(ū)} and co A denotes the convex hull of A [3].
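Formula (3.2) is easy to exploit numerically when the φ_i are smooth: the Clarke subdifferential of the maximum is the convex hull of the active gradients, so checking 0 ∉ ∂_Cφ(ū) reduces to a small linear feasibility problem in the weights α_i. The sketch below (an assumed finite-dimensional example, not from the paper) performs this check with scipy's linprog; the functionals, the point ū, and the tolerance are illustrative.

```python
# A minimal sketch (assumed finite-dimensional example): for smooth phi_i,
# formula (3.2) gives  dC phi(u) = co{ grad phi_i(u) : i in I(u) }.
# Testing 0 in co{g_1,...,g_m} is a linear feasibility problem in the
# weights alpha_i >= 0, sum alpha_i = 1, sum alpha_i g_i = 0.
import numpy as np
from scipy.optimize import linprog

def active_gradients(u, phis, grads, tol=1e-9):
    vals = np.array([phi(u) for phi in phis])
    I = np.where(vals >= vals.max() - tol)[0]            # active index set I(u)
    return np.array([grads[i](u) for i in I]), I

def zero_in_convex_hull(G):
    """Is 0 in co{rows of G}?  Feasibility LP over the weights alpha."""
    m, n = G.shape
    A_eq = np.vstack([G.T, np.ones((1, m))])             # sum alpha_i g_i = 0, sum alpha_i = 1
    b_eq = np.concatenate([np.zeros(n), [1.0]])
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m, method="highs")
    return res.success

# Assumed example functionals on R^2 (both equal to 1 at u = (1, 0)):
phis  = [lambda u: u[0] ** 2 + u[1] ** 2, lambda u: u[0]]
grads = [lambda u: np.array([2 * u[0], 2 * u[1]]), lambda u: np.array([1.0, 0.0])]

u_bar = np.array([1.0, 0.0])
G, I = active_gradients(u_bar, phis, grads)
print("active constraints:", I, " 0 in dC phi(u_bar)?", zero_in_convex_hull(G))
```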

Let the conditions of Corollary 1 be satisfied. Assume first that all the functionals are continuously differentiable at ū. Then ∂_Cφ_i(ū) = {φ_i'(ū)} and, taking into account (3.2), we get

$$\partial_C\varphi(\bar{u}) = \Big\{\sum_{i \in I(\bar{u})} \alpha_i\varphi_i'(\bar{u}) : \sum_{i \in I(\bar{u})} \alpha_i = 1,\; \alpha_i \ge 0\Big\}$$
$$= \Big\{\sum_{1 \le i \le k} \alpha_i\varphi_i'(\bar{u}) : \sum_{1 \le i \le k} \alpha_i = 1,\; \alpha_i \ge 0,\; \alpha_i(\varphi_i(\bar{u}) - 1) = 0,\; i = 1, \dots, k\Big\}.$$

Here, the condition 0 ∉ ∂_Cφ(ū) takes the form

$$\sum_{1 \le i \le k} \alpha_i = 1,\; \alpha_i \ge 0,\; \alpha_i(\varphi_i(\bar{u}) - 1) = 0,\; i = 1, \dots, k \;\Longrightarrow\; \sum_{1 \le i \le k} \alpha_i\varphi_i'(\bar{u}) \ne 0.$$

In particular, it is satisfied if the vectors φ_i'(ū), i ∈ I(ū), form a positively linearly independent set. If this condition is met, we can write down the necessary condition for the inclusion F(ū) ∈ ∂G as follows:

$$F'^*(\bar{u})z^* = \sum_{1 \le i \le k} \alpha_i\varphi_i'(\bar{u}), \qquad \sum_{1 \le i \le k} \alpha_i = 1,\; \alpha_i \ge 0,\; \alpha_i(\varphi_i(\bar{u}) - 1) = 0,\; i = 1, \dots, k.$$

Using the previous scheme, we can also write this condition in the form of Pontryagin's maximum principle [16] (see also [11]).

Let us next consider a system with double control constraints. We will assume that one of the constraints is specified by a convex differentiable functional φ_1(u) and the second by a convex functional φ_2(u). An example of such a problem is system (2.1) with integral quadratic and geometric constraints. If φ_2(ū) < φ_1(ū), then ∂_Cφ(ū) = {φ_1'(ū)}; if φ_1(ū) < φ_2(ū), then ∂_Cφ(ū) = ∂φ_2(ū); and, finally, if φ_1(ū) = φ_2(ū), then ∂_Cφ(ū) = co({φ_1'(ū)} ∪ ∂φ_2(ū)).

Lemma 1. Let a ∈ X, and let B ⊂ X be a convex set. Then

$$\operatorname{co}(\{a\} \cup B) = C := \bigcup_{0 \le \lambda \le 1} \big(\lambda a + (1 - \lambda)B\big).$$

Proof. Obviously, C ⊂ co({a} ∪ B). To prove the lemma, it suffices to prove the convexity of C. Let

$$c_1 = \lambda_1 a + (1 - \lambda_1)b_1, \qquad c_2 = \lambda_2 a + (1 - \lambda_2)b_2, \qquad b_1, b_2 \in B.$$

Let us choose α, β ≥ 0, α + β = 1, and show that

$$c_3 = \alpha c_1 + \beta c_2 \in \lambda_3 a + (1 - \lambda_3)B$$

for some λ_3 ∈ [0, 1]. To this end, we try to find numbers α_1, β_1 ≥ 0, α_1 + β_1 = 1, such that

$$\alpha c_1 + \beta c_2 = \alpha(\lambda_1 a + (1 - \lambda_1)b_1) + \beta(\lambda_2 a + (1 - \lambda_2)b_2) = \lambda_3 a + (1 - \lambda_3)(\alpha_1 b_1 + \beta_1 b_2).$$

Equating the coefficients of the vectors a, b_1, and b_2 on both sides of the equality, we obtain

$$\lambda_3 = \alpha\lambda_1 + \beta\lambda_2, \qquad \alpha(1 - \lambda_1) = \alpha_1(1 - \lambda_3), \qquad \beta(1 - \lambda_2) = \beta_1(1 - \lambda_3).$$

This implies the inequality 0 ≤ λ_3 ≤ 1. For 0 ≤ λ_3 < 1, we have

$$\alpha_1 = \frac{\alpha(1 - \lambda_1)}{1 - \lambda_3}, \qquad \beta_1 = \frac{\beta(1 - \lambda_2)}{1 - \lambda_3},$$

so α_1, β_1 ≥ 0 and α_1 + β_1 = 1. If λ_3 = 1, then α(1 − λ_1) = β(1 − λ_2) = 0, and in this case c_3 = αc_1 + βc_2 = (α + β)a = a. This completes the proof. □

Let us further assume that Slater's condition is satisfied: there exists û such that φ_i(û) < 1, i = 1, 2. Then the condition 0 ∉ ∂_Cφ(ū) is satisfied. Indeed, suppose on the contrary that 0 ∈ ∂_Cφ(ū). Then it follows from Lemma 1 that there is λ ∈ [0, 1] such that

$$0 \in \lambda\varphi_1'(\bar{u}) + (1 - \lambda)\partial\varphi_2(\bar{u}) = \partial\big(\lambda\varphi_1 + (1 - \lambda)\varphi_2\big)(\bar{u}).$$

For the convex function λφ_1 + (1 − λ)φ_2, the last condition is necessary and sufficient for a minimum at ū. Thus,

$$\big(\lambda\varphi_1 + (1 - \lambda)\varphi_2\big)(\bar{u}) \le \big(\lambda\varphi_1 + (1 - \lambda)\varphi_2\big)(\hat{u}),$$

which contradicts Slater's condition. Let further X = L_∞ and


$$\varphi_1(u(\cdot)) = \frac{c}{2}\langle u(\cdot), u(\cdot)\rangle = \frac{c}{2}\int_{t_0}^{t_1} u^T(t)u(t)\, dt, \qquad \varphi_2(u(\cdot)) = \operatorname*{ess\,sup}_{t_0 \le t \le t_1} \gamma(u(t)). \qquad (3.3)$$

The constant c > 0 is chosen here so that the constraints can be written in the form φ_i(u(·)) ≤ 1, i = 1, 2. Since φ_1'(ū(·)) = cū(·), the optimality condition F'*(ū(·))z* ∈ ∂_Cφ(ū(·)) takes the form

$$F'^*(\bar{u}(\cdot))z^* - \lambda c\bar{u}(\cdot) \in (1 - \lambda)\partial\varphi_2(\bar{u}(\cdot))$$

for some λ ∈ [0, 1].

For λ = 0, we get a maximum principle of the form (2.15), (2.16). For λ = 1, we get (2.12), (2.13). Finally, for 0 < λ < 1, we get

$$F'^*(\bar{u}(\cdot))w^* - \alpha c\bar{u}(\cdot) \in \partial\varphi_2(\bar{u}(\cdot)),$$

where w* = z*/(1 − λ) and α = λ/(1 − λ). Introducing the Hamiltonian

$$H(t, x, p, \alpha, u) = -\frac{\alpha c}{2}u^T u + p^T\big(f_1(t, x) + f_2(t, x)u\big),$$

we can write these relations in the form of the maximum principle:

$$H(t, x(t), p(t), \alpha, \bar{u}(t)) = \max_{v:\, \gamma(v) \le 1} H(t, x(t), p(t), \alpha, v), \quad \text{a.e. } t \in [t_0, t_1],$$
$$\dot{p}(t) = -A^T(t)p(t) = -\frac{\partial H}{\partial x}(t, x(t), p(t), \alpha, \bar{u}(t)), \qquad t \in [t_0, t_1].$$

Thus, we arrive at the following statement.

Corollary 2. Let the functionals φ_i(u(·)) : L_∞ → ℝ, i = 1, 2, be given by equalities (3.3), and let F(u(·)) = x(t_1), where x(t) is a solution to system (2.1). Let

$$G = \{F(u(\cdot)) : \varphi_i(u(\cdot)) \le 1,\; i = 1, 2\}.$$

If F(ū(·)) ∈ ∂G and system (2.1) linearized around ū(·) is controllable, then there exist a function p(·) ≠ 0 and a number α ≥ 0 such that the relations of the maximum principle are satisfied.
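In the setting of Corollary 2 with γ(v) = ‖v‖_∞, the pointwise maximization of the Hamiltonian over {γ(v) ≤ 1} is a concave quadratic problem on a box, so (for 0 < λ < 1) the maximizer is obtained componentwise by clipping Bᵀ(t)p(t)/(αc) to [−1, 1]. The sketch below (an assumed example, not from the paper) integrates the resulting state–adjoint system forward for a trial value of α and a sample of adjoint directions; the dynamics, the constants c and α, and the trial scalings are illustrative assumptions.

```python
# A minimal sketch (assumed example): forward integration of the
# maximum-principle system of Corollary 2 for the mixed constraints
# (c/2)*integral of |u|^2 <= 1 and |u(t)| <= 1 (gamma = sup norm).
# For 0 < lambda < 1 the pointwise maximizer of the concave Hamiltonian
# on the box is the clipped control u(t) = clip(B^T p / (alpha*c), -1, 1).
import numpy as np
from scipy.integrate import solve_ivp

t0, t1 = 0.0, 1.0
x0 = np.array([0.0, 0.0])
c, alpha = 2.0 / (t1 - t0), 0.7                     # illustrative constants

f1  = lambda t, x: np.array([x[1], -np.sin(x[0])])
B   = np.array([[0.0], [1.0]])
df1 = lambda t, x: np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])

def rhs(t, z):
    x, p = z[:2], z[2:]
    u = np.clip(B.T @ p / (alpha * c), -1.0, 1.0)   # argmax of -(alpha*c/2)|v|^2 + p^T f2 v on the box
    dx = f1(t, x) + B @ u
    dp = -df1(t, x).T @ p                           # adjoint equation, A = df1/dx here
    return np.concatenate([dx, dp])

for theta in np.linspace(0, 2 * np.pi, 6, endpoint=False):
    p0 = 3.0 * np.array([np.cos(theta), np.sin(theta)])   # p(.) != 0, trial scaling
    z1 = solve_ivp(rhs, (t0, t1), np.concatenate([x0, p0]),
                   rtol=1e-8, atol=1e-10).y[:, -1]
    print("p(t0) =", np.round(p0, 2), "-> candidate x(t1):", np.round(z1[:2], 4))
```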

4. Conclusion

The paper proposes a unified scheme for studying extremal properties of the reachable set boundary. Within the framework of this approach, the reachable set is treated as the image of the set of admissible controls under a nonlinear mapping of a Banach space. The proposed scheme is based on the results of nonlinear and nonsmooth analysis and is equally applicable to systems with integral and geometric control constraints, including multiple constraints.

REFERENCES

1. Ananyev B.I., Gusev M.I., Filippova T.F. Upravlenie i ocenivanie sostoyanij dinamicheskikh sistem s neopredelennost'yu [Control and Estimation of Dynamical Systems States with Uncertainty]. Novosibirsk: Izdatel'stvo SO RAN, 2018. 193 p. (in Russian)

2. Baier R., Gerdts M., Xausa I. Approximation of reachable sets using optimal control algorithms. Numer. Algebra Control Optim., 2013. Vol. 3, No. 3. P. 519-548. DOI: 10.3934/naco.2013.3.519

3. Clarke F.H. Optimization and Nonsmooth Analysis. New York: J. Wiley and Sons Inc., 1983. 308 p.

4. Dmitruk A.V., Milyutin A. A., Osmolovskii N.P. Lyusternik's theorem and the theory of extrema. Russian Math. Surveys, 1980. Vol. 35, No. 6. P. 11-51. DOI: 10.1070/RM1980v035n06ABEH001973

5. Gornov A. Yu., Finkel'shtein E. A. Algorithm for piecewise-linear approximation of the reachable set boundary. Autom. Remote Control, 2015. Vol. 76, No. 3. P. 385-393. DOI: 10.1134/S0005117915030030

6. Guseinov Kh. G. Approximation of the attainable sets of the nonlinear control systems with integral constraint on controls. Nonlinear Anal. Theory, Methods Appl., 2009. Vol. 71, No. 1-2. P. 622-645. DOI: 10.1016/j.na.2008.10.097

7. Guseinov K.G., Ozer O., Akyar E., Ushakov V.N. The approximation of reachable sets of control systems with integral constraint on controls. Nonlinear Differ. Equ. Appl., 2007. Vol. 14, No. 1-2. P. 57-73. DOI: 10.1007/s00030-006-4036-6

8. Gusev M.I. On reachability analysis of nonlinear systems with joint integral constraints. In: Lecture Notes in Comput. Sci., vol. 10665: Large-Scale Scientific Computing (LSSC 2017). Lirkov I., Margenov S. (eds.). Cham: Springer, 2018. P. 219-227. DOI: 10.1007/978-3-319-73441-5_23

9. Gusev M.I. Computing the reachable set boundary for an abstract control problem. AIP Conf. Proc., 2018. Vol. 2025, No. 1. Art. no. 040009. DOI: 10.1063/1.5064893

10. Gusev M.I., Zykov I.V. On extremal properties of the boundary points of reachable sets for control systems with integral constraints. Proc. Steklov Inst. Math., 2018. Vol. 300, Suppl. 1. P. 114-125. DOI: 10.1134/S0081543818020116

11. Gusev M. I., Zykov I. V. On the geometry of reachable sets for control systems with isoperimetric constraints. Proc. Steklov Inst. Math., 2019. Vol. 304, Suppl. 1. P. S76-S87. DOI: 10.1134/S0081543819020093

12. Kurzhanski A. B., Varaiya P. Dynamics and Control of Trajectory Tubes. Theory and Computation. Systems Control Found. Appl., vol. 85. Basel: Birkhauser, 2014. 445 p. DOI: 10.1007/978-3-319-10277-1

13. Lee E.B., Markus L. Foundations of Optimal Control Theory. New York: J. Wiley and Sons Inc., 1967. 576 p.

14. Patsko V. S., Pyatko S. G., Fedotov A. A. Three-dimensional reachability set for a nonlinear control system. J. Comput. Syst. Sci. Int., 2003. Vol. 42. No. 3. P. 320-328.

15. Polyak B. T. Convexity of the reachable set of nonlinear systems under L2 bounded controls. Dyn. Contin. Discrete Impuls. Syst. Ser. A Math. Anal., 2004. Vol. 11. P. 255-267.

16. Pontryagin L. S., Boltyanskii V. G., Gamkrelidze R. V., Mishchenko E. F. The Mathematical Theory of Optimal Processes. New York/London: J. Wiley and Sons Inc., 1962. 360 p.

17. Vdovin S. A., Taras'ev A. M., Ushakov V. N. Construction of an attainability set for the Brockett integrator. J. Appl. Math. Mech., 2004. Vol. 68, No. 5. P. 631-646. DOI: 10.1016/j.jappmathmech.2004.09.001
