
UDC 517.926

MSC 93D05, 34D08, 34A30

Lyapunov's first method: estimates of characteristic numbers of functional matrices

V. S. Ermolin, T. V. Vlasova

St. Petersburg State University, 7—9, Universitetskaya nab., St. Petersburg, 199034, Russian Federation

For citation: Ermolin V. S., Vlasova T. V. Lyapunov's first method: estimates of characteristic numbers of functional matrices. Vestnik of Saint Petersburg University. Applied Mathematics. Computer Science. Control Processes, 2019, vol. 15, iss. 4, pp. 442-456. https://doi.org/10.21638/11702/spbu10.2019.403

This paper contains the development of theoretical fundamentals of the first method of Lyapunov. We analyze the relations between the characteristic numbers of functional matrices, their rows, and their columns. We consider Lyapunov's results on evaluating and calculating the characteristic numbers of products of scalar functions and prove a theorem generalizing these results to products of matrices. This theorem states necessary and sufficient conditions for the existence of rigorous estimates for the characteristic numbers of matrix products. Also, we prove a theorem that establishes a relationship between the characteristic number of a square non-singular matrix, the characteristic number of its inverse matrix, and the characteristic number of its determinant. We reformulate the stated relations and properties of the characteristic numbers of square matrices in terms of the Lyapunov exponents. Examples of matrices illustrate the proved theorems.

Keywords: Lyapunov's first method, stability theory, characteristic numbers, the Lyapunov exponent, functional matrices.

1. Introduction. In [1] A. M. Lyapunov presented the fundamental concepts of stability theory and outlined two approaches to the problem of the stability of motion. In practice, the second method of Lyapunov is of considerable current use [2-7]. Lyapunov's first method is also widely used for the stability analysis of both linear and nonlinear systems that may be time-invariant or time-varying [8]. Lyapunov introduced the notion of the characteristic number of a function and described its basic properties. Lyapunov's idea was developed by N. G. Chetaev [9], I. G. Malkin [10], B. F. Bylov [11], B. P. Demidovich [12], V. I. Zubov [6, 7], and others. In [11, 12], the notion of the characteristic exponent is defined. In the literature it is often called the Lyapunov characteristic exponent or the Lyapunov exponent. The theory of characteristic numbers (characteristic exponents) is the basis of Lyapunov's first method. Nowadays, this theory is applied not only to the analysis of the stability of motion described by differential equations [13], but also to other problems of mathematical modeling of controlled and uncontrolled processes [14]. The theory of the Lyapunov exponents is widely used in the theory of dynamical and stochastic systems, including ergodic theory, probability theory, and functional analysis [15]. In [16, 17] one can find historical reviews of the basic mathematical results on the Lyapunov exponents.

Lyapunov also gave the definition of the characteristic number of a set of functions [1, p. 44]. In [12] the notion of the characteristic exponent of a matrix is introduced, and some of its properties are discussed. The aim of our paper is to develop the theory of the Lyapunov characteristic numbers in the part that deals with the characteristic numbers of functional vectors and matrices. Using the classic Lyapunov notion of the characteristic number of a function we generalize this concept to matrices and prove the properties of characteristic numbers of matrices and vectors similar to the properties established by Lyapunov for scalar functions. This paper continues the research previously set out in [18, 19].

The paper is organized as follows. Section 2 contains basic concepts and notations. In Section 3 we formulate and prove the main properties of the characteristic numbers of rectangular matrices. Sections 4 and 5 contain the main results of this paper. In Section 6 we formulate the main results of the paper in terms of the Lyapunov characteristic exponents. In Section 7 some concluding remarks are given.

2. Basic concepts and notations. Let us recall the notion of the characteristic number of a function introduced in [1]. Following Lyapunov, let a function f (t) be real or complex, and continuous for real t > t0.

Definition 1. A real number $\alpha$ is called the characteristic number of a continuous function $f(t)$ defined for $t > t_0$ if the following conditions hold for any arbitrarily small $\varepsilon > 0$:

(i) $\overline{\lim}_{t \to +\infty} |f(t)|\, e^{(\alpha+\varepsilon)t} = +\infty$,

(ii) $\overline{\lim}_{t \to +\infty} |f(t)|\, e^{(\alpha-\varepsilon)t} = 0$.

The characteristic number of the function $f(t)$ is the symbol $+\infty$ if $\lim_{t \to +\infty} f(t)\,e^{at} = 0$ holds for any $a$. The characteristic number of the function $f(t)$ is the symbol $-\infty$ if $\overline{\lim}_{t \to +\infty} f(t)\,e^{at} = +\infty$ holds for any $a$.

Under such conditions, any function $f(t)$ has a finite or infinite characteristic number. By $\chi[f]$ we denote the characteristic number of a function $f(t)$. The following formula is known [7, 9] for the calculation of the characteristic number of the function $f(t)$:

$$\chi[f] = -\overline{\lim_{t \to +\infty}} \frac{\ln|f(t)|}{t}.$$

This formula holds if there exists $T > 0$ such that $f(t) \neq 0$ for all $t > T$.

Many researchers use the term the Lyapunov characteristic exponent instead of the characteristic number, but it essentially coincides with the characteristic number taken with the opposite sign. We use the classical concept of the characteristic number given by Lyapunov. Following Lyapunov [1, p. 44], we formulate a definition.

Definition 2. The characteristic number of a set is the least of the characteristic numbers of the functions comprising the set.

We shall consider $m \times n$ matrices $X(t) = \{x_{ij}(t)\}$ with real or complex elements defined and continuous for $t > 0$.

According to Definition 2, the characteristic number of a matrix (or a vector) X(t) is the least of the characteristic numbers of its elements.

As in [18, 19], by $\chi[X(t)]$ we denote the characteristic number of a functional matrix $X(t)$. Throughout the paper, we use the following notations: let $x_j$ be the $j$-th column of the matrix $X(t)$; $x'_i$ be the $i$-th row of the matrix $X(t)$; let $\lambda_j = \chi[x_j]$, $\lambda'_i = \chi[x'_i]$ be the characteristic numbers of the $j$-th column and of the $i$-th row of the matrix $X(t)$ correspondingly; let $X^T(t)$ be the transposed matrix of $X(t)$; $\overline{X}(t)$ be the complex-conjugate matrix of $X(t)$; $X^*(t)$ be the Hermitian conjugate matrix of $X(t)$. Let a norm of a matrix (or a vector) $\|\cdot\|$ be the same as in [12], including the Euclidean norm. Then, in the above notations, Definition 2 for the characteristic number of the matrix $X(t)$ may be written as

$$\chi[X(t)] = \min_{\substack{i=\overline{1,m} \\ j=\overline{1,n}}} \chi[x_{ij}(t)]. \qquad (1)$$
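Although the paper itself contains no code, the two formulas above lend themselves to a quick numerical illustration. The following Python sketch is not part of the original article: the helper name chi, the grid parameters, and the sample matrix are arbitrary choices, and the finite-horizon supremum is only a rough stand-in for the upper limit.

```python
import numpy as np

def chi(f, t_min=1.0, t_max=200.0, n=20000):
    # crude finite-horizon stand-in for chi[f] = -limsup_{t->+inf} ln|f(t)|/t
    t = np.linspace(t_min, t_max, n)
    v = np.log(np.abs(f(t))) / t
    return -np.max(v[int(0.9 * n):])   # supremum over the tail of the grid

def chi_matrix(entries):
    # formula (1): the characteristic number of a matrix is the least
    # characteristic number of its elements
    return min(chi(f) for row in entries for f in row)

print(chi(lambda t: np.exp(t)))       # about -1
print(chi(lambda t: np.exp(-t)))      # about  1
print(chi(lambda t: t**2 + 1))        # about  0 (slow convergence)

X = [[lambda t: np.exp(t),       lambda t: t**2 + 1],
     [lambda t: np.ones_like(t), lambda t: np.exp(-t)]]
print(chi_matrix(X))                  # about -1 = min{-1, 0, 0, 1}
```

Because the horizon is finite, slowly varying factors such as $\ln t / t$ leave a small visible bias in the printed values.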

3. Basic properties of the characteristic numbers of rectangular matrices.

Let us formulate the basic properties of the characteristic numbers of rectangular matrices (some of the properties of the Lyapunov characteristic exponents are represented in [12]).

Property 1 (the relation between the characteristic number of a matrix and the characteristic numbers of its columns and its rows).

1). The characteristic number of a matrix equals the minimum characteristic number of its columns and the minimum characteristic number of its rows:

$$\chi[X] = \min_{j=\overline{1,n}} \{\lambda_j\} = \min_{i=\overline{1,m}} \{\lambda'_i\}. \qquad (2)$$

2). The minimum characteristic number of matrix columns is equal to the least of characteristic numbers of the matrix rows.

Proof. The proof follows from Definition 2. □

Property 2 (the relation between the characteristic number of the matrix $X(t)$ and the characteristic number of its norm $\|X\|$).

The characteristic number of a matrix is equal to the characteristic number of its norm:

$$\chi[X(t)] = \chi\bigl[\,\|X(t)\|\,\bigr]. \qquad (3)$$

P r o o f. The proof is based on the evaluation of the characteristic numbers of functions. Indeed, for all considered norms the double inequality

$$|x_{ij}(t)| \leq \|X(t)\| \leq \sum_{i=1}^{m}\sum_{j=1}^{n} |x_{ij}(t)| \qquad (4)$$

holds for all $i, j$. From the left-hand side of this inequality, by a monotonicity property, we have $\chi\bigl[\,|x_{ij}(t)|\,\bigr] \geq \chi\bigl[\,\|X(t)\|\,\bigr]$ for all $i, j$. Therefore, by Definition 2, for the least of the characteristic numbers of the set $\{x_{ij}(t)\}$ of the matrix $X(t)$ elements we can write

$$\min_{\substack{i=\overline{1,m} \\ j=\overline{1,n}}} \chi\bigl[\,x_{ij}(t)\,\bigr] = \chi[X(t)] \geq \chi\bigl[\,\|X(t)\|\,\bigr]. \qquad (5)$$

Using the properties of the characteristic number of a sum of functions and Definition 2, from the right-hand side of double inequality (4) we get

$$\chi\bigl[\,\|X(t)\|\,\bigr] \geq \min_{\substack{i=\overline{1,m} \\ j=\overline{1,n}}} \chi\bigl[\,x_{ij}(t)\,\bigr] = \chi[X(t)].$$

Combining this inequality with (5), we obtain Equation (3). □
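Property 2 can be checked numerically in the same spirit. The sketch below is an illustration only (an arbitrary test matrix and the same rough finite-horizon estimator as before, not anything from the paper): it compares the characteristic number of the Euclidean (Frobenius) norm with the elementwise minimum of formula (1).

```python
import numpy as np

def chi(f, t_min=1.0, t_max=200.0, n=20000):
    # finite-horizon stand-in for chi[f] = -limsup ln|f(t)|/t (rough sketch)
    t = np.linspace(t_min, t_max, n)
    v = np.log(np.abs(f(t))) / t
    return -np.max(v[int(0.9 * n):])

# test matrix X(t) = [[e^t, t^2 + 1], [1, e^{-t}]]
entries = [[lambda t: np.exp(t),       lambda t: t**2 + 1],
           [lambda t: np.ones_like(t), lambda t: np.exp(-t)]]

chi_elements = min(chi(f) for row in entries for f in row)              # formula (1)
norm = lambda t: np.sqrt(sum(f(t)**2 for row in entries for f in row))  # Euclidean norm
print(chi_elements, chi(norm))   # both about -1, as equality (3) states
```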

Property 3 (the relation between the characteristic numbers of the matrices $X(t)$, $X^T(t)$, $\overline{X}(t)$, $X^*(t)$).

The characteristic numbers of the matrices $X(t)$, $X^T(t)$, $\overline{X}(t)$, $X^*(t)$ are equal:

$$\chi[X(t)] = \chi[X^T(t)] = \chi[\overline{X}(t)] = \chi[X^*(t)].$$

Proof. This proposition follows from Property 2, since the norms of all the above-mentioned matrices are equal. □

Property 4 (the relation between the characteristic number of the sum of matrices and characteristic numbers of summands).

Let X(t) = Y(t) + Z(t). We state Property 4 as Theorem 1 and its Corollary.

Theorem 1.

1). If characteristic numbers of summands are different, then the characteristic number of the sum of matrices is equal to the least of the characteristic numbers of the terms.

2). If characteristic numbers of summands are equal, then the characteristic number of the sum of matrices is not less than the characteristic number of the term.

Thus, the following relations hold:

$$\begin{aligned} \chi[X(t)] &= \min\bigl\{\chi[Y(t)],\ \chi[Z(t)]\bigr\}, && \text{if } \chi[Y(t)] \neq \chi[Z(t)],\\ \chi[X(t)] &\geq \chi[Y(t)] = \chi[Z(t)], && \text{if } \chi[Y(t)] = \chi[Z(t)]. \end{aligned} \qquad (6)$$

These relations still stand if the characteristic numbers of the summands are either $+\infty$ or $-\infty$.

P r o o f. Theorem 1 can be proved by using the property of the characteristic numbers of a function sum and Equation (1). For each element $x_{ij}(t)$ of the matrix $X(t)$ we can write $x_{ij}(t) = y_{ij}(t) + z_{ij}(t)$, $i = \overline{1,m}$, $j = \overline{1,n}$, where $y_{ij}(t)$ and $z_{ij}(t)$ are the corresponding elements of the matrices $Y(t)$ and $Z(t)$. By Equation (1), for all $i = \overline{1,m}$, $j = \overline{1,n}$ we have $\chi[y_{ij}(t)] \geq \chi[Y(t)]$, $\chi[z_{ij}(t)] \geq \chi[Z(t)]$.

Case 1. Let $\chi[Y(t)] \neq \chi[Z(t)]$. Assume that $\chi[Y(t)] < \chi[Z(t)]$. Then for all $i, j$ the inequalities $\chi[z_{ij}(t)] \geq \chi[Z(t)] > \chi[Y(t)]$ hold. Let us denote

$$Y_1 = \bigl\{\, y_{ij}(t) \ \big|\ \chi[y_{ij}(t)] = \chi[Y(t)],\ i = \overline{1,m},\ j = \overline{1,n} \,\bigr\}, \qquad Y_2 = \bigl\{\, y_{ij}(t) \ \big|\ \chi[y_{ij}(t)] > \chi[Y(t)],\ i = \overline{1,m},\ j = \overline{1,n} \,\bigr\}.$$

Then, for the characteristic numbers of the elements $x_{ij}(t)$ we obviously have

$$\begin{aligned} \chi[x_{ij}(t)] &= \chi[y_{ij}(t) + z_{ij}(t)] = \chi[Y(t)] && \text{for } y_{ij}(t) \in Y_1,\\ \chi[x_{ij}(t)] &= \chi[y_{ij}(t) + z_{ij}(t)] \geq \min\bigl\{\chi[y_{ij}(t)],\ \chi[z_{ij}(t)]\bigr\} > \chi[Y(t)] && \text{for } y_{ij}(t) \in Y_2. \end{aligned} \qquad (7)$$

These relations follow from the property of the characteristic number of the function sum $y_{ij}(t) + z_{ij}(t)$. Indeed, in case $y_{ij}(t) \in Y_1$ we have $\chi[y_{ij}(t)] = \chi[Y(t)] < \chi[Z(t)] \leq \chi[z_{ij}(t)]$, and in case $y_{ij}(t) \in Y_2$ we have $\chi[y_{ij}(t)] > \chi[Y(t)]$, $\chi[z_{ij}(t)] \geq \chi[Z(t)] > \chi[Y(t)]$. Thus, using (7), we get

$$\chi[X(t)] = \min_{\substack{i=\overline{1,m} \\ j=\overline{1,n}}} \chi[x_{ij}(t)] = \chi[Y(t)],$$

which means that the minimum is attained at the elements $x_{ij}(t)$ built on the elements $y_{ij}(t) \in Y_1$. This completes the proof of Theorem 1 for Case 1, when $\chi[Y(t)] \neq \chi[Z(t)]$.

Case 2. Assume that $\chi[Y] = \chi[Z]$. Using the property of the characteristic number of the sum of functions and Definition 2, we obtain the general estimate

$$\chi[x_{ij}(t)] \geq \min\bigl\{\chi[y_{ij}(t)],\ \chi[z_{ij}(t)]\bigr\} \geq \chi[Y(t)] = \chi[Z(t)].$$

This estimate is true for all $i$ and $j$. In particular, the inequality holds for the $x_{ij}(t)$ such that $\chi[x_{ij}(t)] = \chi[X(t)]$. This ends the proof of Theorem 1. □

Remark 1. Property 4 is formulated for two summands. It can be applied to a sum of matrices with a finite number of summands. Namely, let $Y_k(t)$, $k = \overline{1,N}$, be $m \times n$ matrices and $X(t) = \sum_{k=1}^{N} Y_k(t)$. The following Corollary is true.

Corollary 1.

1). For a finite number of matrices, the characteristic number of a sum is not less than the least of the characteristic numbers of summands

$$\chi[X(t)] \geq \min_{k=\overline{1,N}} \chi[Y_k(t)].$$


2). If among the matrices $Y_k(t)$, $k = \overline{1,N}$, there is only one matrix with the minimum characteristic number, then the characteristic number of the matrix sum is equal to the least of the characteristic numbers of the summands.

Proof. Using relations (6) of Theorem 1, the proof of the Corollary is by induction on k. Moreover, the Corollary can be proved by repeating the same algorithm as in the proof of Theorem 1, using the elements of all added matrices. □

Example 1. Consider $y = (e^t,\ t^2+1)^T$, $z = (e^{\lambda t},\ t^5)^T$, $x = y + z$. Let us evaluate the characteristic number $\chi[x] = \chi[y+z]$, and then let us find its exact value. We have

$$\chi[y] = \min\bigl\{\chi[e^t],\ \chi[t^2+1]\bigr\} = -1, \qquad \chi[z] = \min\bigl\{\chi[e^{\lambda t}],\ \chi[t^5]\bigr\} = \begin{cases} -\lambda & \text{for } \lambda > 0,\\ 0 & \text{for } \lambda \leq 0. \end{cases}$$

Obviously, a) $\chi[y] < \chi[z]$ for $\lambda < 1$; b) $\chi[z] < \chi[y]$ for $\lambda > 1$; c) $\chi[z] = \chi[y] = -1$ for $\lambda = 1$. Therefore, using Theorem 1 yields the following estimates: a) $\chi[x] = \chi[y] = -1$ for $\lambda < 1$; b) $\chi[x] = \chi[z] = -\lambda$ for $\lambda > 1$; c) $\chi[x] \geq \chi[y] = \chi[z] = -1$ for $\lambda = 1$.

By direct calculation of the sum $x = y + z$ we find the exact value of $\chi[x]$ for $\lambda = 1$. We get $x = (2e^t,\ t^5 + t^2 + 1)^T$, $\chi[x] = \min\bigl\{\chi[2e^t],\ \chi[t^5+t^2+1]\bigr\}$. Finally, we obtain $\chi[x] = -1$, i.e. the characteristic number of the vector function $x$ equals the boundary value of its estimate calculated by Theorem 1.

Example 2. Consider $y = (e^t,\ t^2+1)^T$, $z = (-e^{\lambda t},\ t^5)^T$, $x = y + z$. As in Example 1, we have the same cases a), b) and c). In cases a) and b) the results of both examples are the same. By Theorem 1, given $\lambda = 1$, in case c) we obtain the estimate

$$\chi[x] \geq -1. \qquad (8)$$

By direct calculation of the sum $x = y + z$ for $\lambda = 1$ we find

$$\chi[x] = \min\bigl\{\chi[e^t - e^t],\ \chi[t^5 + t^2 + 1]\bigr\}, \quad \text{or} \quad \chi[x] = \min\bigl\{\chi[0],\ \chi[t^5 + t^2 + 1]\bigr\} = \min\{+\infty,\ 0\} = 0.$$

Finally, in case c), for $\lambda = 1$ we have $\chi[x] = 0$. Comparing this result with estimate (8) calculated by Theorem 1, we can conclude that $\chi[x] = 0 > -1$, i.e. the value $\chi[x]$ exceeds the lower bound of its estimate.
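Both examples can be reproduced with the same rough numerical estimator. This is an illustration only, not part of the paper; in Example 2 the cancelled first component is identically zero, so its characteristic number is $+\infty$ and only the polynomial component matters.

```python
import numpy as np

def chi(f, t_min=1.0, t_max=200.0, n=20000):
    # finite-horizon stand-in for chi[f] = -limsup ln|f(t)|/t (rough sketch)
    t = np.linspace(t_min, t_max, n)
    v = np.log(np.abs(f(t))) / t
    return -np.max(v[int(0.9 * n):])

# Example 1, lambda = 1: x = y + z = (2 e^t, t^5 + t^2 + 1)^T
print(min(chi(lambda t: 2 * np.exp(t)),
          chi(lambda t: t**5 + t**2 + 1)))   # about -1: the bound of Theorem 1 is attained

# Example 2, lambda = 1: the first component cancels to 0 (chi[0] = +infinity),
# so the minimum is taken at the polynomial component
print(chi(lambda t: t**5 + t**2 + 1))        # tends to 0 as the horizon grows: strictly above -1
```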

Property 5 (estimation of the characteristic number of a matrix product using the characteristic numbers of multipliers). Consider $X(t) = \prod_{k=1}^{N} Y_k(t)$, where $Y_k(t)$, $k = \overline{1,N}$,

are matrices admitting sequential multiplication. We state Property 5 as the following theorem.

Theorem 2. The characteristic number of the matrix product $X(t)$ is not less than the sum of the characteristic numbers of the matrices-multipliers $Y_k(t)$, $k = \overline{1,N}$. In other words, the following inequality holds:

$$\chi[X(t)] \geq \sum_{k=1}^{N} \chi[Y_k(t)]. \qquad (9)$$

This inequality holds for matrices-multipliers $Y_k(t)$ not only with finite characteristic number values. Estimate (9) is also valid in each of the following cases:

1) some of the matrices $Y_k(t)$, $k = \overline{1,N}$, have the characteristic number $+\infty$;

2) some of the matrices $Y_k(t)$, $k = \overline{1,N}$, have the characteristic number $-\infty$.

If among the multipliers $Y_k(t)$, $k = \overline{1,N}$, there are matrices having infinite characteristic numbers of opposite signs, then (9) cannot be applied to $\chi[X(t)]$.

Proof. This theorem can be proved by repeating the same algorithm as in the proof of Theorem 2 from [12, p. 134]. In our reasoning, we also use (3) and Lyapunov's Lemma V on the characteristic number of the product of two functions [1, p. 41]. Indeed, by (3), it follows that

$$\chi[X(t)] = \chi\bigl[\,\|X(t)\|\,\bigr] = \chi\Bigl[\,\Bigl\|\prod_{k=1}^{N} Y_k(t)\Bigr\|\,\Bigr].$$

By the property of the norm of a matrix product, $\bigl\|\prod_{k=1}^{N} Y_k(t)\bigr\| \leq \prod_{k=1}^{N} \|Y_k(t)\|$. Then, using the monotonicity property of the characteristic number of a function and Lyapunov's Lemma V, we get $\chi[X(t)] \geq \sum_{k=1}^{N} \chi\bigl[\,\|Y_k(t)\|\,\bigr]$. By (3), substituting $\chi[Y_k(t)]$ for $\chi\bigl[\,\|Y_k(t)\|\,\bigr]$, we obtain (9). This proves Theorem 2. □

Example 3. Consider a row vector $y_1'$ and a column vector $y_2$:

$$y_1'(t) = \Bigl( e^{t\cos t},\ \frac{1}{t^2+1} \Bigr), \qquad y_2(t) = \bigl( e^{\lambda t\cos t},\ t^2+1 \bigr)^T,$$

where $\lambda$ is a real parameter. Let us evaluate the characteristic number of the product $x = y_1'\, y_2$. We have

$$\chi[y_1'] = \min\Bigl\{ \chi\bigl[e^{t\cos t}\bigr],\ \chi\Bigl[\frac{1}{t^2+1}\Bigr] \Bigr\} = \min\{-1,\ 0\} = -1,$$

$$\chi[y_2] = \min\bigl\{ \chi\bigl[e^{\lambda t\cos t}\bigr],\ \chi\bigl[t^2+1\bigr] \bigr\} = \min\{-|\lambda|,\ 0\} = -|\lambda|.$$

By Property 5, we obtain the estimate

$$\chi[x] = \chi[y_1'\, y_2] \geq \chi[y_1'] + \chi[y_2] = -1 - |\lambda|. \qquad (10)$$

To find the exact value of the characteristic number of the function $x$ we multiply out the product $x = y_1'\, y_2$. We obtain $x = y_1'\, y_2 = e^{(1+\lambda)t\cos t} + 1$.

To calculate $\chi[x]$ we use the property of the characteristic number of the function sum. Clearly, the characteristic numbers of the summands are $\chi\bigl[e^{(1+\lambda)t\cos t}\bigr] = -|1+\lambda|$ and $\chi[1] = 0$. Therefore, we get $\chi[x] = \chi\bigl[e^{(1+\lambda)t\cos t} + 1\bigr] = \min\bigl\{\chi\bigl[e^{(1+\lambda)t\cos t}\bigr],\ \chi[1]\bigr\}$ if $\chi\bigl[e^{(1+\lambda)t\cos t}\bigr] \neq \chi[1]$. If $\chi\bigl[e^{(1+\lambda)t\cos t}\bigr] = \chi[1] = 0$, then $\chi[x] \geq 0$.

Since $\chi\bigl[e^{(1+\lambda)t\cos t}\bigr] = -|1+\lambda|$ for all $\lambda$ except $\lambda = -1$, we have

$$\chi[x] = \min\{-|1+\lambda|,\ 0\} = -|1+\lambda| \quad \text{for } \lambda \neq -1.$$

If $\lambda = -1$, then $e^{(1+\lambda)t\cos t} = 1$. Thus, $\chi[x] = \chi[2] = 0$.

Finally, for all $\lambda$ the characteristic number of the function $x = y_1'\, y_2$ satisfies (10): $\chi[x] = -|1+\lambda| \geq -1 - |\lambda|$. Here the equality holds for $\lambda \geq 0$. In other cases the strict inequality $\chi[x] > -1 - |\lambda|$ holds.
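For a fixed value of $\lambda$ the conclusion of Example 3 can be checked numerically as well. This is an illustration only; the choice $\lambda = -0.5$ and the estimator parameters are arbitrary assumptions, not values from the paper.

```python
import numpy as np

def chi(f, t_min=1.0, t_max=200.0, n=20000):
    # finite-horizon stand-in for chi[f] = -limsup ln|f(t)|/t (rough sketch)
    t = np.linspace(t_min, t_max, n)
    v = np.log(np.abs(f(t))) / t
    return -np.max(v[int(0.9 * n):])

lam = -0.5
x = lambda t: np.exp((1 + lam) * t * np.cos(t)) + 1   # the product y1'(t) y2(t)
print(chi(x))          # about -|1 + lam| = -0.5
print(-1 - abs(lam))   # the lower bound from estimate (10): -1.5, strictly below chi[x]
```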

Example 4. Let a column vector $x(t)$ be defined by the matrix product $x(t) = Y(t)\,z(t)$, where $Y(t) = (y_1(t), y_2(t))$ is a $2 \times 2$ matrix with the columns $y_1(t)$, $y_2(t)$:

$$y_1(t) = \Bigl( e^{t\sin t},\ \frac{1}{t^2+1} \Bigr)^T, \qquad y_2(t) = \bigl( 0,\ e^{-t\sin t} \bigr)^T.$$

The vector $z(t)$ of dimension $2 \times 1$ is defined by $z(t) = \Bigl( t^2,\ -\frac{t^2}{t^2+1}\, e^{t\sin t} \Bigr)^T$. We need to construct the estimate of $\chi[x(t)]$ and to calculate the exact value of this characteristic number. Let us find the characteristic numbers of the column vectors $y_1(t)$, $y_2(t)$, and $z(t)$. Applying formula (1) yields

$$\chi[y_1] = \min\Bigl\{ \chi\bigl[e^{t\sin t}\bigr],\ \chi\Bigl[\frac{1}{t^2+1}\Bigr] \Bigr\} = -1, \qquad \chi[y_2] = \min\bigl\{ \chi[0],\ \chi\bigl[e^{-t\sin t}\bigr] \bigr\} = -1,$$

$$\chi[z] = \min\Bigl\{ \chi\bigl[t^2\bigr],\ \chi\Bigl[\frac{t^2}{t^2+1}\, e^{t\sin t}\Bigr] \Bigr\} = -1.$$

It now follows that $\chi[Y(t)] = \min\{\chi[y_1], \chi[y_2]\} = -1$. By Property 5 we have

$$\chi[x(t)] = \chi[Y(t)\,z(t)] \geq \chi[Y(t)] + \chi[z(t)] = -1 - 1 = -2.$$

The result is $\chi[x(t)] \geq -2$.

Next, we find the exact value of the characteristic number of the vector $x(t)$. We multiply the matrix $Y(t)$ by the vector $z(t)$:

$$x(t) = Y(t)\,z(t) = \bigl( t^2 e^{t\sin t},\ 0 \bigr)^T.$$

Then, $\chi[x(t)] = \min\bigl\{\chi\bigl[t^2 e^{t\sin t}\bigr],\ \chi[0]\bigr\} = \min\{-1, +\infty\} = -1$. Finally, we obtain $\chi[x(t)] = -1 > -2$.

The examples show that the characteristic number of a product of matrices may coincide with the lower bound given by the estimate, but may also exceed it.

Remark 2. All above-mentioned properties of the characteristic numbers hold for arbitrary matrices of finite dimensions, namely, rectangular and square matrices, as well as column vectors x(t) and row vectors x'(t).

4. Properties of characteristic numbers of square matrices. In this section we present additional properties of the characteristic numbers of square matrices established in [18]. Consider a non-singular square matrix $X(t) = \{x_{ij}(t)\}$, $i = \overline{1,n}$, $j = \overline{1,n}$, defined and continuous for $t > 0$. Let $\lambda_j = \chi[x_j]$, $\lambda'_i = \chi[x'_i]$ be the characteristic numbers of the $j$-th column and the $i$-th row of the matrix $X(t)$ correspondingly; let $S = \sum_{j=1}^{n} \lambda_j$, $S' = \sum_{i=1}^{n} \lambda'_i$ be the sums of the characteristic numbers of the columns and of the rows correspondingly; let $X^{-1}(t)$ be the inverse matrix. Let $\Delta_X = \det X(t)$ be the determinant of the matrix $X(t)$; $A_{ij}(t)$ be the algebraic cofactor of the element $x_{ij}(t)$; $A'_j(t)$ be a row vector consisting of the algebraic cofactors for the $j$-th column $x_j(t)$ of the matrix $X(t)$. It is clear that $A'_j(t) = (A_{1j}, A_{2j}, \dots, A_{nj})$. Let $A_i(t)$ be a column vector consisting of the algebraic cofactors for the $i$-th row $x'_i(t)$ of the matrix $X(t)$. Obviously, $A_i(t) = (A_{i1}, A_{i2}, \dots, A_{in})^T$. Then, we can write

$$\Delta_X = \sum_{i=1}^{n} x_{ij}(t)\,A_{ij}(t) = A'_j(t)\,x_j(t) \quad \text{for all } j = \overline{1,n}, \qquad (11)$$

or

$$\Delta_X = \sum_{j=1}^{n} x_{ij}(t)\,A_{ij}(t) = x'_i(t)\,A_i(t) \quad \text{for all } i = \overline{1,n}. \qquad (12)$$

4.1. The relationship between the characteristic number of a matrix and of its determinant.

Lemma 1. The following inequalities hold:

$$\chi[\Delta_X] \geq \lambda_j + \chi[A'_j] \geq S \geq n\,\chi[X] \quad \text{for all } j = \overline{1,n}, \qquad (13)$$

$$\chi[\Delta_X] \geq \lambda'_i + \chi[A_i] \geq S' \geq n\,\chi[X] \quad \text{for all } i = \overline{1,n}. \qquad (14)$$

Inequalities (13) and (14) can be proved by applying the properties of the characteristic numbers of the function sum and the function product to the determinant of the matrix $X(t)$.

Proof. Fix any $j$ in (11) and any $i$ in (12). By Theorem 2, for $\chi[\Delta_X]$ from (11) and (12) we have

$$\chi[\Delta_X] \geq \chi[x_j(t)] + \chi[A'_j(t)] = \lambda_j + \chi[A'_j(t)], \qquad (15)$$

$$\chi[\Delta_X] \geq \chi[x'_i(t)] + \chi[A_i(t)] = \lambda'_i + \chi[A_i(t)]. \qquad (16)$$

Taking into account that all elements $A_{ij}(t)$, $i = \overline{1,n}$, of the vector $A'_j(t)$ in (15) are sums of products, we expand the cofactors $A_{ij}(t)$ in the row vector $A'_j(t)$. By the properties of the characteristic numbers of the function sum and product, we get

$$\chi[A'_j(t)] \geq \min_{i=\overline{1,n}} \chi[A_{ij}(t)] \geq \lambda_1 + \lambda_2 + \cdots + \lambda_{j-1} + \lambda_{j+1} + \cdots + \lambda_n = S - \lambda_j. \qquad (17)$$

Note that the characteristic number of each multiplier taken from the column $x_k(t)$ is not less than the characteristic number of the column $x_k(t)$ itself. Thus, in (17) we replace the characteristic number of every multiplier with the characteristic number of its column. Since there are no elements of the column $x_j(t)$ in the vector $A'_j(t)$, the characteristic number $\lambda_j$ is missing in (17). From (17), we get the two left-hand inequalities in (13). The third inequality $S \geq n\,\chi[X]$ is obvious; it follows from Property 1 of the characteristic number of a matrix (see (2)). In the same way, inequalities (14) can be proved using (16). To prove this result it suffices to show that $\chi[A_i(t)] \geq \lambda'_1 + \lambda'_2 + \cdots + \lambda'_{i-1} + \lambda'_{i+1} + \cdots + \lambda'_n = S' - \lambda'_i$. Since there are no elements of the row $x'_i(t)$ in the vector $A_i(t)$, the characteristic number $\lambda'_i$ is missing in these inequalities. From this, we get the two left-hand inequalities in (14). The last inequality $S' \geq n\,\chi[X]$ follows from Property 1 on the characteristic number of a matrix (see (2)). This proves Lemma 1. □

Corollary 2. The following inequalities hold for all $j = \overline{1,n}$ and all $i = \overline{1,n}$:

$$\chi[\Delta_X] - \lambda_j \geq \chi[A'_j] \geq S - \lambda_j, \qquad \chi[\Delta_X] - \lambda'_i \geq \chi[A_i] \geq S' - \lambda'_i.$$

The proof is trivial.
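The chain of inequalities (13) can be traced numerically on a concrete matrix. The sketch below is an illustration only, with the same rough finite-horizon estimator as before; the upper-triangular test matrix is an arbitrary choice (it reappears in Example 7).

```python
import numpy as np

def chi(f, t_min=1.0, t_max=200.0, n=20000):
    # finite-horizon stand-in for chi[f] = -limsup ln|f(t)|/t (rough sketch)
    t = np.linspace(t_min, t_max, n)
    v = np.log(np.abs(f(t))) / t
    return -np.max(v[int(0.9 * n):])

# X(t) = [[e^t, t e^t], [0, e^t]], Delta_X = e^{2t}; the zero entry has
# chi = +infinity and never attains any of the minima, so it is skipped below
x11 = lambda t: np.exp(t)
x12 = lambda t: t * np.exp(t)
x22 = lambda t: np.exp(t)

lam2    = min(chi(x12), chi(x22))              # characteristic number of column 2
S       = chi(x11) + lam2                      # lam1 + lam2, with lam1 = chi(x11)
chi_X   = min(chi(x11), chi(x12), chi(x22))
chi_A2  = chi(x11)                             # cofactor row A'_2 = (0, e^t)
chi_det = chi(lambda t: np.exp(2 * t))

# chain (13) for j = 2: chi[Delta_X] >= lam_2 + chi[A'_2] >= S >= n*chi[X]
print(chi_det, lam2 + chi_A2, S, 2 * chi_X)    # all about -2 (up to discretization bias)
```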

4.2. The relationship between the characteristic number of a matrix and of its inverse matrix.

Lemma 2. The following inequalities hold:

$$\chi[\Delta_X] + \chi\Bigl[\frac{1}{\Delta_X}\Bigr] \leq 0, \qquad (18)$$

$$(n-1)\,\chi[X] + \chi\Bigl[\frac{1}{\Delta_X}\Bigr] \leq \chi[X^{-1}] \leq \frac{1}{n-1}\Bigl(\chi[X] - \chi[\Delta_X]\Bigr). \qquad (19)$$

Proof. The proof of (18) and of the left-hand part of double inequality (19) is found in [18].

Let us show that the right-hand side of double inequality (19) holds. The application of the left-hand part of inequality (19) to the matrix $X^{-1}(t)$ yields the right-hand side of inequality (19). In fact, substituting $X^{-1}(t)$ for $X(t)$, $X(t)$ for $X^{-1}(t)$, and $\Delta_X$ for $(\Delta_X)^{-1}$ in the left-hand side of inequality (19), we obtain $\chi[X] \geq (n-1)\,\chi[X^{-1}] + \chi[\Delta_X]$. Therefore, $\chi[X^{-1}] \leq \frac{1}{n-1}\bigl(\chi[X] - \chi[\Delta_X]\bigr)$. The right-hand side of inequality (19) is proved. This completes the proof of Lemma 2. □

Corollary 3. For any non-singular $n \times n$ matrix $X(t)$ defined and continuous for $t \geq 0$ the following inequality holds:

$$\chi[X(t)] + \chi[X^{-1}(t)] \leq 0. \qquad (20)$$

Proof. Using Property 5 and Lemma 2, let us apply estimate (9) to the product of matrices X(t) and X-1(t): X(t)X-1(t) = E, where E is the unit n x n matrix. Obviously, x[ E ] = 0. From inequalities (9) and (18) it follows that the Corollary is true. □

Remark 3. Inequality (20) is the generalization to non-singular square matrices of Lyapunov's inequality $\chi[f(t)] + \chi\Bigl[\frac{1}{f(t)}\Bigr] \leq 0$, proved by him for a scalar function $f(t)$ that is never equal to zero for any $t > t_0$.
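Inequality (20) is easy to observe numerically. In the sketch below (illustration only, with an arbitrary diagonal test matrix) the sum is strictly negative, so equality in (20) need not hold.

```python
import numpy as np

def chi(f, t_min=1.0, t_max=200.0, n=20000):
    # finite-horizon stand-in for chi[f] = -limsup ln|f(t)|/t (rough sketch)
    t = np.linspace(t_min, t_max, n)
    v = np.log(np.abs(f(t))) / t
    return -np.max(v[int(0.9 * n):])

# X(t) = diag(e^t, e^{-t}),  X^{-1}(t) = diag(e^{-t}, e^t)
chi_X    = min(chi(lambda t: np.exp(t)),  chi(lambda t: np.exp(-t)))   # about -1
chi_Xinv = min(chi(lambda t: np.exp(-t)), chi(lambda t: np.exp(t)))    # about -1
print(chi_X + chi_Xinv)   # about -2 <= 0: inequality (20) holds, here strictly
```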

5. Rigorous evaluation of characteristic numbers of a matrix product. Consider a product

X(t) = L(t) Y(t), (21)

where $L(t)$ is a non-singular $n \times n$ matrix, real or complex, defined and continuous for $t \geq 0$. Let $X(t)$ and $Y(t)$ be rectangular matrices of dimension $n \times m$. By Theorem 2, we find the following estimate of the characteristic number of the matrix $X(t)$:

$$\chi[X(t)] \geq \chi[L(t)] + \chi[Y(t)]. \qquad (22)$$

The structure of (22) is similar to the Lyapunov estimate for the product of two functions, $\chi[f] \geq \chi[\varphi] + \chi[\psi]$ (see [1]), where the functions $f(t)$, $\varphi(t)$, $\psi(t)$ are such that $f(t) = \varphi(t)\,\psi(t)$. Besides, Lyapunov has shown that the equality $\chi[f] = \chi[\varphi] + \chi[\psi]$ holds if the function $\varphi(t)$ satisfies the equality $\chi[\varphi(t)] + \chi\Bigl[\frac{1}{\varphi(t)}\Bigr] = 0$. Such a relationship has not been found for matrices.

Now we introduce the following concept.

Definition 3. We shall say that the estimate of the characteristic number of the matrix product $X(t)$ is rigorous if formula (22) holds in the form of the equality

$$\chi[X(t)] = \chi[L(t)] + \chi[Y(t)].$$

Lemma 3. Let rectangular matrices $X(t)$ and $Y(t)$ be defined and continuous for $t \geq 0$; let $L(t)$ be a non-singular square matrix; and let equality (21) hold; then the following estimates are true:

$$\chi[X(t)] \geq \chi[L(t)] + \chi[Y(t)], \qquad (23)$$

$$\chi[Y(t)] \geq \chi[L^{-1}(t)] + \chi[X(t)]. \qquad (24)$$

Proof. Let equality (21) hold. Then, applying estimate (9) to this equality and to $Y(t) = L^{-1}(t)\,X(t)$, we get estimates (23), (24). Lemma 3 is proved. □

Corollary 4. Suppose the equality

X(t)= Y(t) L(t) (25)

holds, where $L(t)$ is a non-singular $n \times n$ matrix, real or complex, defined and continuous for $t \geq 0$, and $X(t)$ and $Y(t)$ are rectangular matrices of dimension $m \times n$ defined and continuous for $t \geq 0$; then estimates (23), (24) are true. The proof is trivial.

5.1. Necessary and sufficient conditions under which estimates (23), (24) are rigorous. Now we state and prove the conditions that establish exact values for the characteristic numbers of two kinds of matrix products. The following theorem holds.

Theorem 3. The estimates

$$\chi[X(t)] = \chi[L(t)] + \chi[Y(t)], \qquad (26)$$

$$\chi[Y(t)] = \chi[L^{-1}(t)] + \chi[X(t)] \qquad (27)$$

are rigorous if and only if

$$\chi[L(t)] + \chi[L^{-1}(t)] = 0. \qquad (28)$$

Proof. Necessity. Suppose (26), (27) hold. Summing the left-hand and right-hand sides separately and equating the results, we obtain $\chi[X(t)] + \chi[Y(t)] = \chi[L(t)] + \chi[L^{-1}(t)] + \chi[Y(t)] + \chi[X(t)]$. This implies (28). The necessity is proved.

Sufficiency. Assume that (28) holds. We now show that under this condition we have (26), (27). Substituting the equality $\chi[L^{-1}(t)] = -\chi[L(t)]$ into (24) and combining this with (23), we get $\chi[Y(t)] + \chi[L(t)] \geq \chi[X(t)] \geq \chi[L(t)] + \chi[Y(t)]$. From these inequalities we obtain equality (26). Putting $\chi[L(t)] = -\chi[L^{-1}(t)]$ in it, we have (27). Theorem 3 is proved. □

Let us give examples of matrices illustrating Theorem 3.

Example 5. Consider

$$L(t) = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \quad \text{for } t \geq 0.$$

Clearly, $\det L = 1$, $\det L^{-1} = 1$, $L^{-1}(t) = \begin{pmatrix} 1 & -t \\ 0 & 1 \end{pmatrix}$. By these, $\chi[L] = 0$, $\chi[L^{-1}] = 0$. Hence, (28) holds and equalities (26), (27) take the form $\chi[X] = \chi[Y]$ for products (21), (25).

Example 6. Consider

$$L(t) = \begin{pmatrix} e^t & 0 \\ \sin t & e^t \end{pmatrix}.$$

Let $l_1$ and $l_2$ be the first and the second columns of the matrix $L(t)$ respectively. Then, $\chi[l_1] = -1$, $\chi[l_2] = -1$. Consequently, $\chi[L] = -1$, $\det L = \Delta_L = e^{2t}$, $\det L^{-1} = e^{-2t}$, $\chi[\Delta_L] = -2$. The inverse matrix $L^{-1}(t)$ is

$$L^{-1}(t) = \begin{pmatrix} e^{-t} & 0 \\ -e^{-2t}\sin t & e^{-t} \end{pmatrix}.$$

Thus, $\chi[l_1] = 1$, $\chi[l_2] = 1$, $\chi[L^{-1}] = 1$, where $l_1$, $l_2$ are the first and the second columns of the matrix $L^{-1}(t)$ respectively. Therefore, $\chi[L] + \chi[L^{-1}] = 0$, i.e. equality (28) holds. This implies that for products (21), (25) equalities (26), (27) take the form $\chi[X] = -1 + \chi[Y]$, $\chi[Y] = 1 + \chi[X]$.
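A numerical check of equality (28) for this matrix, using the entries written out above, is sketched below (illustration only; zero entries are skipped because their characteristic number is $+\infty$ and never attains the minimum).

```python
import numpy as np

def chi(f, t_min=1.0, t_max=200.0, n=20000):
    # finite-horizon stand-in for chi[f] = -limsup ln|f(t)|/t (rough sketch)
    t = np.linspace(t_min, t_max, n)
    v = np.log(np.abs(f(t))) / t
    return -np.max(v[int(0.9 * n):])

chi_L    = min(chi(lambda t: np.exp(t)), chi(lambda t: np.sin(t)),
               chi(lambda t: np.exp(t)))                                    # about -1
chi_Linv = min(chi(lambda t: np.exp(-t)), chi(lambda t: -np.exp(-2*t) * np.sin(t)),
               chi(lambda t: np.exp(-t)))                                   # about  1
print(chi_L + chi_Linv)   # about 0: condition (28) of Theorem 3 holds
```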

5.2. The connection between equality (28) and the characteristic numbers of the determinant $\Delta_L(t)$ and of $(\Delta_L(t))^{-1}$. The following statements are true.

Theorem 4. The matrix $L(t)$ satisfies equality (28) if and only if one of the two following conditions holds: either

$$n\,\chi[L] = \chi[\Delta_L], \qquad (29)$$

$$\chi[\Delta_L] + \chi\Bigl[\frac{1}{\Delta_L}\Bigr] = 0, \qquad (30)$$

or

$$\chi[L] + \frac{1}{n}\,\chi\Bigl[\frac{1}{\Delta_L}\Bigr] = 0. \qquad (31)$$

To prove this theorem, the following lemma is needed.

Lemma 4. Equalities (29), (30) are equivalent to condition (31).

Corollary 5. Equality (28) holds if and only if the characteristic numbers of all columns and all rows of the matrix $L(t)$ are equal to

$$\chi[L(t)] = -\frac{1}{n}\,\chi\Bigl[\frac{1}{\Delta_L}\Bigr].$$

Lemma 4, Theorem 4 and Corollary 5 were proved by V. S. Ermolin in [18].

Example 7. Consider

$$L(t) = \begin{pmatrix} e^t & t\,e^t \\ 0 & e^t \end{pmatrix}, \quad t > 0.$$

Denote by $[L(t)]$ the matrix of the characteristic numbers of the corresponding elements of the matrix $L(t)$. We have

$$[L(t)] = \begin{pmatrix} \chi[e^t] & \chi[t e^t] \\ \chi[0] & \chi[e^t] \end{pmatrix} = \begin{pmatrix} -1 & -1 \\ +\infty & -1 \end{pmatrix} \qquad (32)$$

and the determinant $\Delta_L(t) = \det L(t) = e^{2t} \neq 0$. This means that $L(t)$ is a non-singular matrix. The determinant of the inverse matrix $L^{-1}(t)$ is $\det L^{-1}(t) = \frac{1}{\Delta_L(t)} = e^{-2t}$. The matrix $L^{-1}(t)$ is

$$L^{-1}(t) = \begin{pmatrix} e^{-t} & -t\,e^{-t} \\ 0 & e^{-t} \end{pmatrix}.$$

Let us find the corresponding matrix of the characteristic numbers:

$$[L^{-1}(t)] = \begin{pmatrix} \chi[e^{-t}] & \chi[-t e^{-t}] \\ \chi[0] & \chi[e^{-t}] \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ +\infty & 1 \end{pmatrix}. \qquad (33)$$

Using (32) and (33), we have x[L(t)] = — 1, x[L-1(t)] = 1, x[L(t)] + x[L-1(t)] = 0. Therefore, L(t) satisfies (28).

Now let us calculate the characteristic numbers of the columns and the rows of $L(t)$, as well as the characteristic numbers of $\Delta_L(t)$ and $(\Delta_L(t))^{-1}$. Clearly, $\lambda_1 = -1$, $\lambda_2 = -1$ are the characteristic numbers of the first and the second columns of $L(t)$; $\lambda'_1 = -1$, $\lambda'_2 = -1$ are the characteristic numbers of the first and the second rows of $L(t)$. Hence, the characteristic numbers of all columns and rows of $L(t)$ coincide. This means that Corollary 5 is true. Obviously, $\chi[\Delta_L(t)] = \chi[e^{2t}] = -2$ and $\chi\Bigl[\frac{1}{\Delta_L(t)}\Bigr] = \chi[e^{-2t}] = 2$. Thus, $\chi[\Delta_L(t)] + \chi\Bigl[\frac{1}{\Delta_L(t)}\Bigr] = 0$. This yields that equality (30) of Theorem 4 holds. Since $n = 2$, then

$$n\,\chi[L(t)] = -2 = \chi[\Delta_L(t)] \quad \text{and} \quad \chi[L(t)] + \frac{1}{n}\,\chi\Bigl[\frac{1}{\Delta_L(t)}\Bigr] = -1 + 1 = 0.$$

Consequently, equality (29) holds. Moreover, it is clear that (31) also holds. We see that all conditions of Theorem 4 and its Corollary 5 are satisfied.

In the same way, it is easy to verify these relations for the characteristic numbers of $L^{-1}(t)$, because $\chi[\Delta_{L^{-1}}(t)] = \chi[e^{-2t}] = 2$ and $\chi\Bigl[\frac{1}{\Delta_{L^{-1}}(t)}\Bigr] = \chi[e^{2t}] = -2$.
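The conditions of Theorem 4 can likewise be traced numerically for Example 7 (illustration only; small deviations from the exact values are discretization bias of the rough finite-horizon estimator).

```python
import numpy as np

def chi(f, t_min=1.0, t_max=200.0, n=20000):
    # finite-horizon stand-in for chi[f] = -limsup ln|f(t)|/t (rough sketch)
    t = np.linspace(t_min, t_max, n)
    v = np.log(np.abs(f(t))) / t
    return -np.max(v[int(0.9 * n):])

# L(t) = [[e^t, t e^t], [0, e^t]] from Example 7 (the zero entry, chi = +inf, is skipped)
n_dim       = 2
chi_L       = min(chi(lambda t: np.exp(t)),  chi(lambda t: t * np.exp(t)))
chi_Linv    = min(chi(lambda t: np.exp(-t)), chi(lambda t: -t * np.exp(-t)))
chi_det     = chi(lambda t: np.exp(2 * t))    # chi[Delta_L]   = -2
chi_det_inv = chi(lambda t: np.exp(-2 * t))   # chi[1/Delta_L] =  2

print(chi_L + chi_Linv)                # (28): about 0
print(n_dim * chi_L, chi_det)          # (29): both about -2
print(chi_det + chi_det_inv)           # (30): about 0
print(chi_L + chi_det_inv / n_dim)     # (31): about 0
```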

6. The main results of the paper in terms of the Lyapunov characteristic exponents. The results obtained in this paper both for rectangular matrices and nonsingular square matrices can be reformulated in terms of the Lyapunov characteristic exponents. For this we introduce the following notation.

Definition 4 (see Definition 2 and (1)). According to [12, p. 132], a number or a symbol ($\pm\infty$)

$$\chi[X(t)] = \max_{\substack{i=\overline{1,m} \\ j=\overline{1,n}}} \chi[x_{ij}(t)]$$

is called the characteristic exponent of a matrix $X(t) = \{x_{ij}(t)\}$ defined on $[t_0, +\infty)$.

Let $X(t) = \{x_{ij}(t)\}$, $i = \overline{1,n}$, $j = \overline{1,n}$, be a non-singular square matrix defined and continuous for $t > 0$. Let $\lambda_j = \chi[x_j]$, $\lambda'_i = \chi[x'_i]$ be the characteristic exponents of the $j$-th column and of the $i$-th row of the matrix $X(t)$ correspondingly. We define $S = \sum_{j=1}^{n} \lambda_j$, $S' = \sum_{i=1}^{n} \lambda'_i$. Then, in new terms, Lemma 1 takes the following form.

Lemma 5. The following inequalities hold:

$$\chi[\Delta_X] \leq \lambda_j + \chi[A'_j] \leq S \leq n\,\chi[X] \quad \text{for all } j = \overline{1,n},$$

$$\chi[\Delta_X] \leq \lambda'_i + \chi[A_i] \leq S' \leq n\,\chi[X] \quad \text{for all } i = \overline{1,n}.$$

Lemma 5 establishes the relations between the characteristic exponents of square matrices and their determinants. The proof is similar to the proof of Lemma 1.

Now we give the relations connecting the characteristic exponents of a matrix $X(t)$ and of its inverse matrix $X^{-1}(t)$ (see Lemma 2).

Lemma 6. The following inequalities hold:

$$\chi[\Delta_X] + \chi\Bigl[\frac{1}{\Delta_X}\Bigr] \geq 0,$$

$$(n-1)\,\chi[X] + \chi\Bigl[\frac{1}{\Delta_X}\Bigr] \geq \chi[X^{-1}] \geq \frac{1}{n-1}\Bigl(\chi[X] - \chi[\Delta_X]\Bigr).$$

The proof of Lemma 6 repeats the proof of Lemma 2.

Corollary 6 (see Corollary 3 and inequality (20)). For any non-singular $n \times n$ matrix $X(t)$ defined and continuous for $t \geq 0$ the following inequality holds:

$$\chi[X(t)] + \chi[X^{-1}(t)] \geq 0.$$

Assume now that (21) holds, where $L(t)$ is a non-singular $n \times n$ matrix, real or complex, defined and continuous for $t > 0$; $X(t)$ and $Y(t)$ are rectangular matrices of dimension $n \times m$. For the product of matrices we have the following inequality:

$$\chi[X] \leq \chi[L] + \chi[Y]. \qquad (34)$$

Definition 5. The estimate of the characteristic exponent of the matrix product $X(t)$ is called rigorous if (34) holds in the form of the equality

$$\chi[X(t)] = \chi[L(t)] + \chi[Y(t)].$$

Lemma 3 and Theorem 3 are reformulated as follows.

Lemma 7. Let rectangular matrices $X(t)$ and $Y(t)$ be defined and continuous for $t \geq 0$; let $L(t)$ be a non-singular square matrix; and let equality (21) hold; then the following estimates are true:

$$\chi[X(t)] \leq \chi[L(t)] + \chi[Y(t)], \qquad \chi[Y(t)] \leq \chi[L^{-1}(t)] + \chi[X(t)].$$

Theorem 5. The estimates

$$\chi[X(t)] = \chi[L(t)] + \chi[Y(t)], \qquad \chi[Y(t)] = \chi[L^{-1}(t)] + \chi[X(t)]$$

are rigorous if and only if

$$\chi[L(t)] + \chi[L^{-1}(t)] = 0. \qquad (35)$$

The proof is trivial.

Theorem 6 (see Theorem 4). The matrix $L(t)$ satisfies equality (35) if and only if one of the two following conditions holds: either

$$n\,\chi[L] = \chi[\Delta_L], \qquad (36)$$

$$\chi[\Delta_L] + \chi\Bigl[\frac{1}{\Delta_L}\Bigr] = 0, \qquad (37)$$

or

$$\chi[L] + \frac{1}{n}\,\chi\Bigl[\frac{1}{\Delta_L}\Bigr] = 0. \qquad (38)$$

The proof of Theorem 6 repeats the proof of Theorem 4.

Lemma 8 (see Lemma 4). Equalities (36), (37) are equivalent to condition (38). The proof is trivial.

Corollary 7 (see Corollary 5). Equality (35) holds if and only if characteristic exponents of all columns and rows of the matrix L(t) are equal to

$$\chi[L(t)] = -\frac{1}{n}\,\chi\Bigl[\frac{1}{\Delta_L}\Bigr].$$

The proof of Corollary 7 is similar to the proof of Corollary 5.

7. Conclusion. This paper contains the development of the theoretical foundations of Lyapunov's first method and the generalization to rectangular and square matrices of Lyapunov's results for scalar functions. The conditions establishing the relationship between the characteristic numbers of rows and columns of functional matrices are stated and proved. Moreover, the corresponding relations between the characteristic numbers of transposed and conjugated matrices are also presented. In Lemmas 1 and 2, we establish the relations between the characteristic number of a non-singular square matrix, the characteristic number of its inverse matrix, and the characteristic number of its determinant. In Lemma 3, we prove the conditions extending to non-singular square matrices Lyapunov's inequality obtained by him for the evaluation and calculation of the characteristic number of a product of scalar functions. In Theorem 3, we formulate and prove the necessary and sufficient conditions that make it possible to equate the characteristic number of a matrix product to the sum of the characteristic numbers of the matrices-multipliers. Theorem 4 is proved for matrices satisfying the hypothesis of Theorem 3. In Theorem 4, we state the necessary and sufficient conditions connecting the characteristic number of a square matrix with the characteristic number of its determinant, as well as with the matrix dimension. Corollary 5 of Theorem 4 shows additional properties of the characteristic numbers of matrix rows and columns. The presented examples of matrices illustrate the results set forth above. Furthermore, we reformulate the stated relations and properties of the characteristic numbers of square matrices in terms of the Lyapunov exponents.

The results of this paper make it possible to extend the use of the characteristic numbers and the characteristic exponents both to the evaluation of coefficient matrices of systems of differential equations and to the evaluation of the behavior of their solutions.

References

1. Lyapunov A. M. Obshaya zadacha ob ustojchivosti dvizheniya [The general problem of the stability of motion]. Moscow, Gostekhizdat Publ., 1950, 472 p. (In Russian)

2. Ekimov A.V., Balykina Yu.E., Svirkin M.V. Analysis of attainability sets of bilinear control systems. AIP Conference Proceedings (ICNAAM 2016). Rodos, American Institute of Physics Publ. LLC, 2017, vol. 1863, no. 170012.

3. Ermolin V. S. Value sets of the discrete interval length in the problem of discrete stabilization. Avtomatika, 1995, no. 3, pp. 15—21.

4. Ermolin V. S., Vlasova T. V. Identification of the domain of attraction. Proceedings of SCP 2015 Conference. St. Petersburg, IEEE Publ., 2015, pp. 9-12.

5. Zubov A. V. Stabilization of program motion and kinematic trajectories in dynamic systems in case of systems of direct and indirect control. Automation and Remote Control, 2007, vol. 68, no. 3, pp. 386-398.

6. Zubov V. I. Kolebaniya v nelinejnyh i upravlyaemyh sistemah [Fluctuations in nonlinear and control systems]. Leningrad, Sudpromgiz Publ., 1962, 632 p. (In Russian)

7. Zubov V. I. Lekcii po teorii upravleniya [Lectures on the theory of control]. 2nd ed. St. Petersburg, Lan's Publ., 2009, 496 p. (In Russian)

8. Kozlov V. V., Furta S. D. Lyapunov's first method for strongly non-linear systems. Journal of Applied Mathematics and Mechanics, 1996, vol. 60, no. 1, pp. 7-18.

9. Chetaev N. G. Ustojchivost' dvizheniya [The stability of motion]. Moscow, Gostekhizdat Publ., 1955, 176 p. (In Russian)

10. Malkin I. G. Teoriya ustojchivosti dvizheniya [Theory of stability of motion]. Moscow, Nauka Publ., 1966, 530 p. (In Russian)


11. Bylov B.F., Vinograd R. E., Grobman D.M., Nemyckij V.V. Teoriya pokazateley Lyapunova i ejo prilozheniya k voprosam ustojchivosti [The theory of Lyapunov characteristic numbers and their application to the theory of stability]. Moscow, Nauka Publ., 1966, 576 p. (In Russian)

12. Demidovich B. P. Lekcii po matematicheskoy teorii ustojchivosti [Lectures on the mathematical theory of stability]. St. Petersburg, Lan's Publ., 2008, 480 p. (In Russian)

13. Adami T. M., Best E., Zhu J. J. Stability assessment using Lyapunov's first method. Proceedings of the Annual Southeastern Symposium on System Theory. Huntsville, IEEE Publ., 2002, pp. 297-301.

14. Masarati P., Tamer A. Sensitivity of trajectory stability estimated by Lyapunov characteristic exponents. Aerospace Science and Technology, 2015, vol. 47, pp. 501-510.

15. Oseledets V. I. A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical systems. Trans. Moscow Mathematical Society Journal, 1968, vol. 19, pp. 197—231.

16. Cencini M., Ginelli F. Lyapunov analysis: from dynamical systems theory to applications. Journal of Physics A: Mathematical and Theoretical, 2013, vol. 46, no. 250301.

17. Young L.-S. Mathematical theory of Lyapunov exponents. Journal of Physics A: Mathematical and Theoretical, 2013, vol. 46, no. 254001.

18. Ermolin V. S. Invariantniye preobrazovaniya v pervom metode Lyapunova [Invariant transformations in Lyapunov's first method]. Vestnik of Saint Petersburg University. Series 10. Applied Mathematics. Computer Science. Control Processes, 2014, iss. 2, pp. 36—48. (In Russian)

19. Ermolin V. S., Vlasova T. V. A group of invariant transformations in the stability problem via Lyapunov's first method. Proceedings of ICCTPEA 2014 Conference. St. Petersburg, IEEE Publ., 2014, pp. 48-49.

Received: February 01, 2019.

Accepted: November 07, 2019.

Authors' information:

Vladislav S. Ermolin — PhD in Physics and Mathematics, Associate Professor; vse40@mail.ru

Tatyana V. Vlasova — PhD in Physics and Mathematics, Associate Professor; t.vlasova@spbu.ru
