
MATHEMATICS

UDC 517.95. BBK 22.181.62. Sh 96

Shumafov Magomet Mishaustovich

Doctor of Physics and Mathematics, Professor of Department of Mathematical Analysis and Methodology of Teaching Mathematics, Head of Department, Adyghe State University, Maikop, ph. (8772) 593905, e-mail: [email protected]

Second order stochastic differential equations: stability, dissipativity, periodicity. I. - A survey*

(Peer-reviewed)

Abstract. This paper reviews results concerning qualitative properties of second order stochastic differential equations and systems. The properties which we address are stochastic stability, stochastic dissipativity, and the existence of stationary and periodic solutions. Two types of perturbing random processes are considered: white noise and processes with bounded mathematical expectation. The work consists of two parts. In the first part we give a short overview of results on stability of solutions of second order stochastic differential equations obtained by Lyapunov function techniques. Here we also present some mathematical preliminaries from probability theory and stochastic processes. In the second part we will define the notion of stochastic integrals in the Ito and Stratonovich forms. Then we define stochastic differential equations in the Ito and in the Stratonovich sense. Sufficient conditions for stability in probability and for exponential stability in mean square of solutions of second order differential equations and systems will be given. Also, sufficient conditions for dissipativity of second order stochastic systems are rendered. Sufficient conditions for the existence of stationary and periodic solutions of the systems considered are provided. A comparison between stability conditions for stochastic equations interpreted in the Ito and in the Stratonovich sense is made. As an example, a harmonic oscillator perturbed by a random process of white noise type and by a process with bounded mathematical expectation is considered.

Keywords: stochastic differential equation, stability in probability, exponential stability in mean square, dissi-pativity, periodic solution, Ito integral, Stratonovich integral.


* This work represents the extended text of the plenary report at the Third International Scientific Conference "Autumn Mathematical Readings in Adyghea" (AMRA - 3), October 15-20, 2019, Adyghe State University, Maikop, Republic of Adyghea.


1. Introduction

Stochastic differential equations (SDEs) arise when a system described by differential equations is influenced by a random process (for instance, noise). Stochastic differential equations are used for modeling physical, technical, biological, economical and control systems in which an uncertainty is present. In general, just as for deterministic differential equations, it is not possible to give explicit expressions for the solutions of SDEs. Therefore, it is of great interest to be able to characterize, at least qualitatively, the behaviour of the solutions of SDEs. The typical questions are the following:

1) Does a solution of the SDE exist?

2) Is there a unique solution?

3) Is it defined for all times?

4) Is the solution bounded for all times in some stochastic sense?

5) Under which conditions is the solution stable in some stochastic sense?

6) If the right-hand side of the system is a stationary or periodic random process, under which additional assumptions does the system have a stationary or periodic solution?

7) If the system has a stationary or periodic solution, under which conditions will every other solution converge to it?

Note that the above problems are motivated by practical interest. They are also relevant for deterministic systems of differential equations. In that case problems 1)-7) and related ones have been thoroughly investigated by many authors; the results can be found in the monographs [1-8] and elsewhere.

In what follows we are mainly interested in the stability, dissipativity and periodicity properties. A.M. Lyapunov in his famous memoir [1] introduced a method of auxiliary functions (Lyapunov functions; the direct, or second, method) for the study of stability of deterministic systems governed by ordinary differential equations (ODEs). Lyapunov's method is of great generality and a powerful tool for qualitative analysis (not only of stability) of deterministic systems described by ODEs, since it does not require knowledge of the solutions themselves. This method reduces the problem of stability to the problem of the existence of a Lyapunov function which has an appropriate monotonic behavior along the solutions of the given equation. The drawback of the Lyapunov approach is that there is no universal method of finding a Lyapunov function, or of determining that no such function exists.

As in the deterministic case, Lyapunov theory is a powerful tool for qualitative analysis of stochastic differential equations (SDEs). Since K. Ito constructed his stochastic calculus [9], the theory of SDEs has developed very quickly. Later it was recognized that Lyapunov's method can be extended to investigate various qualitative properties (especially stability) of SDEs. The first suggestions of a Lyapunov-like theory for SDEs appeared in the papers of Bertram and Sarachik [10] and Kats and Krasovskii [11]. Afterwards this theory was developed by Kushner [12], Khasminskii [13], L. Arnold [14], Friedman [15], Mao [16] and other authors. For some classes of linear and nonlinear second order SDEs the stability conditions derived by the Lyapunov function method are given in the papers [17-28] (see also examples in [12], [13], [16]). Among the surveys on stability for SDEs by Lyapunov techniques we mention the papers [29, 30].

In the present paper we give a brief survey of results on stability, dissipativity and periodicity of second order SDEs which are derived by using Lyapunov's direct method. Some new results are presented. For the stochastic harmonic oscillator, necessary and sufficient conditions for exponential stability in mean square are given in terms of its parameters.

2. Some Mathematical Preliminaries

In this section we present some basic notions and facts from probability theory and stochastic analysis. More details can be found in any introductory text (for instance, see [16, Ch. 1; 31, Ch. 2-4; 32; 33, Ch. 2; 34; 35]).

2.1. Probability space

Probability theory deals with mathematical models of trials whose outcomes (elementary events) depend on chance. All such possible outcomes form a set $\Omega$ with typical element $\omega \in \Omega$ ($\omega$ is regarded as an elementary event). Certain subsets of $\Omega$ are interpreted as "events". In general, not every subset of $\Omega$ is an observable event. The observable events form a family $\mathcal{F}$ of subsets of $\Omega$. Such a family $\mathcal{F}$ should have (for the purposes of probability theory) certain properties. This leads to the notion of a $\sigma$-algebra.

Definition 1. If $\Omega$ is a given set, then a $\sigma$-algebra $\mathcal{F}$ on $\Omega$ is a family (collection) of subsets of $\Omega$ with the following properties:

1) $\emptyset, \Omega \in \mathcal{F}$;

2) if $A \in \mathcal{F}$, then $A^c \in \mathcal{F}$, where $A^c = \Omega \setminus A$;

3) for any finite or countable sequence of sets $A_k \in \mathcal{F}$, the union and the intersection are also in $\mathcal{F}$: if $A_1, A_2, \ldots \in \mathcal{F}$, then $\bigcup_{k=1}^{\infty} A_k, \ \bigcap_{k=1}^{\infty} A_k \in \mathcal{F}$.

The pair $(\Omega, \mathcal{F})$ is called a measurable space; the elements of $\mathcal{F}$ are called $\mathcal{F}$-measurable sets. If $\Omega = \mathbb{R}^n$ and $M$ is the family of all open sets in $\mathbb{R}^n$, then the smallest $\sigma$-algebra $\sigma(M)$ on $\mathbb{R}^n$ which contains $M$ (this $\sigma(M)$ is called the $\sigma$-algebra generated by $M$) is called the Borel $\sigma$-algebra and is denoted by $\mathcal{B} = \sigma(M)$. The elements $B$ of $\mathcal{B}$ are called the Borel sets.

Definition 2. A probability measure $P$ on a measurable space $(\Omega, \mathcal{F})$ is a function $P: \mathcal{F} \to [0, 1]$ such that

1) $P(\emptyset) = 0$, $P(\Omega) = 1$;

2) for any disjoint sequence of sets $A_1, A_2, \ldots \in \mathcal{F}$ (i.e. $A_i \cap A_j = \emptyset$ if $i \neq j$) the equality $P\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)$ holds;

3) if $A_1, A_2, \ldots \in \mathcal{F}$ ($A_i$ not necessarily disjoint), then $P\left(\bigcup_{i=1}^{\infty} A_i\right) \le \sum_{i=1}^{\infty} P(A_i)$.

Definition 3. A triple $(\Omega, \mathcal{F}, P)$ is called a probability space provided that $\Omega$ is any set, $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$, and $P$ is a probability measure on $\mathcal{F}$. It is called a complete probability space if $\mathcal{F}$ contains all subsets $G$ of $\Omega$ with $P$-outer measure zero.

Remarks. 1) A set $A \in \mathcal{F}$ is called a random event; points $\omega \in \Omega$ are elementary events or sample points;

2) $P(A)$ is the probability of the event $A \in \mathcal{F}$;

3) A property which is true except for an event of probability zero is said to hold almost surely (a.s.). In particular, if $P(A) = 1$ for $A \in \mathcal{F}$, it is said that "the event $A$ occurs with probability 1", or "almost surely".

2.2. Random variables

The mathematical model for a random quantity is a random variable. The probability space defined above is an essential mathematical construction, but it is not "directly observable". Therefore mappings $\xi$ from $\Omega$ to $\mathbb{R}^n$ are introduced; the values of $\xi$ can be observed.

Definition 4. Let $(\Omega, \mathcal{F}, P)$ be a probability space. A mapping (function) $\xi: \Omega \to \mathbb{R}^n$ is called $\mathcal{F}$-measurable if $\xi^{-1}(B) := \{\omega \in \Omega : \xi(\omega) \in B\} \in \mathcal{F}$ for all Borel sets $B \subset \mathbb{R}^n$.

Definition 5. An $\mathcal{F}$-measurable mapping (function) $\xi: \Omega \to \mathbb{R}^n$ is called a random variable.

Notation: a random variable is usually denoted by $\xi$ or $\xi(\omega)$.

Every random variable $\xi$ induces a probability measure $\mu_\xi$ on the Borel measurable space $(\mathbb{R}^n, \mathcal{B})$, defined by $\mu_\xi(B) = P(\xi^{-1}(B))$ for $B \in \mathcal{B}$. It is read as "the probability that $\xi$ is in $B$" and usually denoted by $P(\xi \in B)$. The measure $\mu_\xi$ is called the distribution of the random variable $\xi$.

Let $\xi: \Omega \to \mathbb{R}^n$ be any mapping (e.g. a random variable). Then one can check that the family $\mathcal{F}(\xi) := \{\xi^{-1}(B) : B \in \mathcal{B}\}$ of sets $\xi^{-1}(B)$, where $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}^n$, makes up a $\sigma$-algebra on $\Omega$. This $\sigma$-algebra is called the $\sigma$-algebra generated by $\xi$.

If $\xi$ is $\mathcal{F}$-measurable (e.g. $\xi$ is a random variable), then $\mathcal{F}(\xi) \subset \mathcal{F}$, i.e. $\xi$ generates a sub-$\sigma$-algebra of $\mathcal{F}$. It is clear that $\mathcal{F}(\xi)$ is the smallest sub-$\sigma$-algebra of $\mathcal{F}$ with respect to which $\xi$ is measurable.

Notice that in probabilistic terms the $\sigma$-algebra $\mathcal{F}(\xi)$ can be interpreted as "containing all relevant information" about the random variable $\xi$.

2.3. Expectation

Let $(\Omega, \mathcal{F}, P)$ be a probability space and $\xi = \xi(\omega)$, $\omega \in \Omega$, be a real-valued random variable. Assume that $\xi(\omega)$ is integrable with respect to the probability measure $P$.

Definition 6. If $\int_\Omega |\xi(\omega)|\,dP(\omega) < \infty$, then the number $E\xi := \int_\Omega \xi(\omega)\,dP(\omega)$ is called the expectation (or mean value) of $\xi$ (with respect to $P$).

If $\xi = \xi(\omega)$ is a vector-valued random variable, $\xi(\omega) = (\xi_1(\omega), \ldots, \xi_n(\omega))$, then the expectation of $\xi(\omega)$ is defined by
$$E\xi := \left(\int_\Omega \xi_1(\omega)\,dP(\omega), \ldots, \int_\Omega \xi_n(\omega)\,dP(\omega)\right) = \int_\Omega \xi(\omega)\,dP(\omega),$$
provided that $\int_\Omega |\xi_i(\omega)|\,dP(\omega) < \infty$ ($i = 1, \ldots, n$). This can be rewritten as $E\xi = (E\xi_1, \ldots, E\xi_n)$.

Since $\mu_\xi(B) = P\{\omega : \xi(\omega) \in B\}$ is a probability measure on the Borel space $(\mathbb{R}^n, \mathcal{B})$, where $\mathcal{B}$ is the Borel $\sigma$-algebra (generated by the family of all open sets in $\mathbb{R}^n$), the expectation of an $\mathbb{R}^n$-valued random variable $\xi$ can be expressed as $E\xi = \int_\Omega \xi(\omega)\,dP(\omega) = \int_{\mathbb{R}^n} x\,d\mu_\xi(x)$.

More generally, if $f: \mathbb{R}^n \to \mathbb{R}^m$ is a Borel measurable mapping, then
$$E(f(\xi)) := \int_\Omega f(\xi(\omega))\,dP(\omega) = \int_{\mathbb{R}^n} f(x)\,d\mu_\xi(x).$$
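To make the change-of-variables formula concrete, here is a minimal numerical sketch (an illustration assuming numpy is available, not part of the original survey): it approximates $E(f(\xi))$ for a standard normal $\xi$ and $f(x) = x^2$ by averaging over samples, in agreement with $\int_{\mathbb{R}} f(x)\,d\mu_\xi(x) = 1$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw N realizations of a standard normal random variable xi.
N = 1_000_000
xi = rng.standard_normal(N)

# E(f(xi)) = integral of f over the distribution mu_xi of xi;
# for f(x) = x**2 and xi ~ N(0, 1) the exact value is 1.
f = lambda x: x**2
print(np.mean(f(xi)))  # close to 1.0
```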

The number $E(|\xi|^p)$, where $p > 0$, is called the $p$th moment of $\xi$.

2.4. Variance

Let $\xi = \xi(\omega)$, $\omega \in \Omega$, be a real-valued random variable.

Definition 7. The number $V(\xi) := E(\xi - E\xi)^2 = \int_\Omega (\xi(\omega) - E\xi)^2\,dP(\omega)$ is called the variance of $\xi$ (provided the integral exists).

For a vector-valued random variable $\xi(\omega) = (\xi_1(\omega), \ldots, \xi_n(\omega))$ the variance of $\xi$ is defined by $V(\xi) := E|\xi - E\xi|^2 = \int_\Omega |\xi(\omega) - E\xi|^2\,dP(\omega)$ (here $|\cdot|$ denotes the norm of the vector $\xi(\omega) - E\xi \in \mathbb{R}^n$).

It is straightforward to show that $V(\xi) = E|\xi - E\xi|^2 = E|\xi|^2 - |E\xi|^2$.

For two real-valued random variables $\xi$ and $\eta$ the number $\mathrm{Cov}(\xi, \eta) := E[(\xi - E\xi)(\eta - E\eta)]$ is called the covariance of $\xi$ and $\eta$.

Obviously, if $\eta = \xi$, then $\mathrm{Cov}(\xi, \xi) = V(\xi)$.

If $\mathrm{Cov}(\xi, \eta) = 0$, then $\xi$ and $\eta$ are said to be uncorrelated. The real-valued random variables $\xi_1, \ldots, \xi_n$ are called uncorrelated if $\mathrm{Cov}(\xi_i, \xi_j) = 0$ for $i \neq j$ (if $i = j$, then $\mathrm{Cov}(\xi_i, \xi_i) = V(\xi_i)$, $i = 1, \ldots, n$).

If both $\xi$ and $\eta$ are vector-valued random variables, $\xi = (\xi_1(\omega), \ldots, \xi_n(\omega))$ and $\eta = (\eta_1(\omega), \ldots, \eta_n(\omega))$, then the $n \times n$ matrix
$$\mathrm{Cov}(\xi, \eta) := E[(\xi - E\xi)(\eta - E\eta)^T] = \left(E[(\xi_i - E\xi_i)(\eta_j - E\eta_j)]\right)_{i,j=1}^{n}$$
is called the covariance matrix of $\xi$ and $\eta$ (here $T$ denotes transposition).

If $\xi = \eta$, then the $n \times n$ matrix $\mathrm{Cov}(\xi, \xi)$ is symmetric and nonnegative definite; in this case its diagonal elements are just the variances $V(\xi_1), \ldots, V(\xi_n)$.
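As a numerical illustration (a sketch assuming numpy; the particular random vector is hypothetical), one can estimate $\mathrm{Cov}(\xi, \xi)$ from samples and observe that the matrix is symmetric, nonnegative definite, and carries the variances on its diagonal.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 2-dimensional random vector with correlated components:
# xi_2 = xi_1 + independent noise of standard deviation 0.5.
N = 200_000
xi1 = rng.standard_normal(N)
xi2 = xi1 + 0.5 * rng.standard_normal(N)

C = np.cov(np.vstack([xi1, xi2]))   # empirical covariance matrix
print(C)                            # approx [[1.0, 1.0], [1.0, 1.25]]
print(np.linalg.eigvalsh(C))        # both eigenvalues nonnegative
```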

2.5. Distribution function

Let $(\Omega, \mathcal{F}, P)$ be a probability space and $\xi: \Omega \to \mathbb{R}^n$ be a random variable: $\xi = \xi(\omega) = (\xi_1(\omega), \ldots, \xi_n(\omega))$. Let $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$.

Definition 8. The function $F_\xi: \mathbb{R}^n \to [0, 1]$ defined by $F_\xi(x) = P\{\omega : \xi_1(\omega) \le x_1, \ldots, \xi_n(\omega) \le x_n\}$ for all $x \in \mathbb{R}^n$ is called the distribution function of $\xi$, or the joint distribution function of the real-valued random variables $\xi_1 = \xi_1(\omega), \ldots, \xi_n = \xi_n(\omega)$.

Definition 9. If there exists a nonnegative, integrable function $p_\xi: \mathbb{R}^n \to \mathbb{R}$ such that
$$F_\xi(x) = F_\xi(x_1, \ldots, x_n) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} p_\xi(y_1, \ldots, y_n)\,dy_1 \cdots dy_n = \int_{-\infty}^{x} p_\xi(y)\,dy,$$
then $p_\xi$ is called the density function for $\xi$.

Obviously, $\int_{-\infty}^{\infty} p_\xi(y)\,dy = 1$.

One can show that $P(\xi \in B) = \int_B p_\xi(x)\,dx$ for all $B \in \mathcal{B}$ ($\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}^n$).

If a real-valued random variable $\xi = \xi(\omega)$ has density
$$p_\xi(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-m)^2}{2\sigma^2}} \quad (x \in \mathbb{R})$$
for some real numbers $m$ and $\sigma > 0$, then the random variable $\xi$ is said to have a Gaussian (or normal) distribution with mean value $m$ and variance $\sigma^2$. In this case it is said that $\xi$ is a Gaussian $N(m, \sigma^2)$ random variable. If a vector-valued random variable $\xi = (\xi_1(\omega), \ldots, \xi_n(\omega))$ has density
$$p_\xi(x) = \frac{1}{(2\pi)^{n/2}(\det C)^{1/2}}\, e^{-\frac{1}{2}(x-m)^T C^{-1}(x-m)} \quad (x \in \mathbb{R}^n)$$
for some vector $m \in \mathbb{R}^n$ and some positive definite, symmetric matrix $C$, then $\xi$ is said to have a Gaussian (or normal) distribution with mean $m$ and covariance matrix $C$. In this case it is said that $\xi$ is a Gaussian $N(m, C)$ vector-valued random variable.

It is not hard to show that $E\xi = \int_{\mathbb{R}^n} x\,d\mu_\xi(x)$ and $V(\xi) = \int_{\mathbb{R}^n} |x - E\xi|^2\,d\mu_\xi(x)$.

The latter formulas allow one to compute $E\xi$ and $V(\xi)$ in terms of integrals over $\mathbb{R}^n$. (This is an important observation, since the probability space $(\Omega, \mathcal{F}, P)$ is "unobservable": all that we "see" are the values which the random variable $\xi$ takes in $\mathbb{R}^n$.)

Using the well-known equality from calculus $\int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \sqrt{2\pi}$, it can be directly checked that if $\xi$ is an $N(m, \sigma^2)$ random variable, then
$$E\xi = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} x\, e^{-\frac{(x-m)^2}{2\sigma^2}}\,dx = m, \qquad V(\xi) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} (x - m)^2 e^{-\frac{(x-m)^2}{2\sigma^2}}\,dx = \sigma^2.$$
These equalities show that the numbers $m$ and $\sigma^2$ are indeed the mean value and the variance of $\xi$, respectively.
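The same two integrals can be checked by simulation; the following sketch (assuming numpy is available) recovers $m$ and $\sigma^2$ as the sample mean and sample variance of $N(m, \sigma^2)$ draws.

```python
import numpy as np

rng = np.random.default_rng(2)

m, sigma = 1.5, 2.0
xi = rng.normal(loc=m, scale=sigma, size=1_000_000)  # N(m, sigma^2) samples

print(xi.mean())  # approx m = 1.5
print(xi.var())   # approx sigma^2 = 4.0
```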

2.6. Independence

Let $(\Omega, \mathcal{F}, P)$ be a probability space, and let $A, B \in \mathcal{F}$ be two events with $P(B) > 0$.

Definition 10. If $P(B) > 0$, the number $P(A \mid B) := \frac{P(A \cap B)}{P(B)}$ is called the probability of $A$ given $B$ (i.e. given that $B$ occurs).

It is intuitively clear that $A$ and $B$ are independent if $P(A \mid B) = P(A)$.

Definition 11. Two events $A$ and $B$ are called independent if $P(A \cap B) = P(A)P(B)$.

Definition 12. The events $A_1, \ldots, A_n, \ldots$ are said to be independent if $P(A_{i_1} \cap \ldots \cap A_{i_k}) = P(A_{i_1}) \cdot \ldots \cdot P(A_{i_k})$ for all possible choices of indices $i_1, i_2, \ldots, i_k$ with $1 \le i_1 < i_2 < \ldots < i_k$. This definition can be extended to $\sigma$-algebras.

Definition 13. A collection $\{\mathcal{F}_i, i = 1, 2, \ldots\}$ of sub-$\sigma$-algebras ($\mathcal{F}_i \subset \mathcal{F}$, $i = 1, 2, \ldots$) is said to be independent if for every possible choice of indices $i_1, \ldots, i_k$ with $1 \le i_1 < i_2 < \ldots < i_k$ the equality $P(A_{i_1} \cap \ldots \cap A_{i_k}) = P(A_{i_1}) \cdot \ldots \cdot P(A_{i_k})$ holds for all events $A_{i_j} \in \mathcal{F}_{i_j}$.

These definitions can be transferred to random variables.

Definition 14. A collection $\{\xi_i, i = 1, 2, \ldots\}$ of random variables $\xi_i: \Omega \to \mathbb{R}^n$ is said to be independent if the $\sigma$-algebras $\sigma(\xi_i)$ ($i = 1, 2, \ldots$) generated by them are independent; equivalently, if for all integers $k \ge 2$ the equality
$$P(\xi_1 \in B_1, \ldots, \xi_k \in B_k) = P(\xi_1 \in B_1) \cdot \ldots \cdot P(\xi_k \in B_k)$$
holds for all choices of Borel sets $B_1, \ldots, B_k \subset \mathbb{R}^n$. For example, two random variables $\xi: \Omega \to \mathbb{R}^n$ and $\eta: \Omega \to \mathbb{R}^n$ are independent if and only if $P(\xi \in A, \eta \in B) = P(\xi \in A)P(\eta \in B)$ holds for all Borel sets $A, B \subset \mathbb{R}^n$.

The following proposition expresses the independence of random variables in terms of their distribution functions.

Proposition 1. The real-valued random variables $\xi_1 = \xi_1(\omega), \ldots, \xi_m = \xi_m(\omega)$ ($\xi_i: \Omega \to \mathbb{R}$, $i = 1, \ldots, m$) are independent if and only if $F_\xi(x) = F_{\xi_1}(x_1) \cdot \ldots \cdot F_{\xi_m}(x_m)$ for all $x_i \in \mathbb{R}$ ($i = 1, \ldots, m$), where $F_\xi$ is the distribution function of the vector-valued random variable $\xi = (\xi_1, \ldots, \xi_m)$, and $F_{\xi_1}, \ldots, F_{\xi_m}$ are the distribution functions of the real-valued random variables $\xi_1, \ldots, \xi_m$, respectively.

If the random variables have densities, then Proposition 1 is equivalent to the following.

Proposition 2. Suppose the real-valued random variables $\xi_1, \ldots, \xi_m$ have densities $p_{\xi_1}, \ldots, p_{\xi_m}$. Then $\xi_1, \ldots, \xi_m$ are independent if and only if $p_\xi(x) = p_{\xi_1}(x_1) \cdot \ldots \cdot p_{\xi_m}(x_m)$ for all $x_i \in \mathbb{R}$ ($i = 1, \ldots, m$), where $p_\xi$ is the density function of $\xi = (\xi_1, \ldots, \xi_m)$.

One of the most important properties of independent random variables is given by the following proposition.

Proposition 3. If the real-valued random variables $\xi_1, \ldots, \xi_n$ are independent and $E|\xi_i| < \infty$ ($i = 1, \ldots, n$), then $E|\xi_1 \times \ldots \times \xi_n| < \infty$ and $E(\xi_1 \times \ldots \times \xi_n) = E\xi_1 \times \ldots \times E\xi_n$.

Proposition 4. If the real-valued random variables $\xi_1, \ldots, \xi_n$ with $V(\xi_i) < \infty$ ($i = 1, \ldots, n$) are independent, then $V(\xi_1 + \ldots + \xi_n) = V(\xi_1) + \ldots + V(\xi_n)$.

The statement of Proposition 4 remains valid also when the hypothesis that "$\xi_1, \ldots, \xi_n$ are independent" is replaced by "$\xi_1, \ldots, \xi_n$ are uncorrelated".

If two real-valued random variables $\xi$ and $\eta$ are independent, then they are uncorrelated, since $\mathrm{Cov}(\xi, \eta) = E[(\xi - E\xi)(\eta - E\eta)] = E(\xi - E\xi) \cdot E(\eta - E\eta) = 0$.

Similarly, if the real-valued random variables $\xi_1, \ldots, \xi_n$ are independent, then they are uncorrelated, i.e. $\mathrm{Cov}(\xi_i, \xi_j) = 0$ ($i \neq j$).

The converse is true under the additional assumption that the random variables $\xi_1, \ldots, \xi_n$ are Gaussian (normal).

Proposition 5. If the real-valued random variables $\xi_1, \ldots, \xi_n$ are uncorrelated and Gaussian, then they are independent.
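The Gaussian assumption in Proposition 5 cannot be dropped. A standard counterexample, sketched below under the assumption that numpy is available: for $\xi \sim N(0, 1)$ and $\eta = \xi^2$ one has $\mathrm{Cov}(\xi, \eta) = E\xi^3 = 0$, so the pair is uncorrelated, although $\eta$ is a function of $\xi$ and hence not independent of it.

```python
import numpy as np

rng = np.random.default_rng(3)

xi = rng.standard_normal(1_000_000)
eta = xi**2                          # completely determined by xi

# Uncorrelated: Cov(xi, eta) = E xi^3 = 0 by symmetry of N(0, 1).
print(np.mean((xi - xi.mean()) * (eta - eta.mean())))  # approx 0.0

# Not independent: conditioning on |xi| > 1 forces eta > 1.
print(np.mean(eta > 1))                    # approx 0.32
print(np.mean(eta[np.abs(xi) > 1] > 1))    # exactly 1.0
```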

In the following subsection the important notion of conditional expectation is introduced.

2.7. Conditional expectation

Above in 2.6 we defined the probability of $A$ given $B$: $P(A \mid B)$. Here we give a definition of $E(\xi \mid \eta)$: the expected value of the real-valued random variable $\xi$ given another real-valued random variable $\eta$. The case of vector-valued random variables is analogous to the case considered.

First, we define $E(\xi \mid B)$, the expected value of the random variable $\xi = \xi(\omega)$ given the event $B$, with $P(B) > 0$. For this we can consider $B$ as a new probability space with the measure $\widetilde{P} = \frac{P}{P(B)}$. Then we set $E(\xi \mid B) := \frac{1}{P(B)} \int_B \xi(\omega)\,dP(\omega)$ ($E(\xi \mid B)$ is the mean value of $\xi$ over $B$).

Assume that we are given a probability space $(\Omega, \mathcal{F}, P)$ on which a simple random variable $\eta = \eta(\omega)$ is defined, that is, $\eta(\omega) = \sum_{i=1}^{m} a_i \chi_{A_i}(\omega)$, where $\chi_{A_i}$ is the indicator function of $A_i$: $\chi_{A_i}(\omega) = 1$ if $\omega \in A_i$, $\chi_{A_i}(\omega) = 0$ if $\omega \notin A_i$. Here we suppose that the real numbers $a_1, \ldots, a_m$ are distinct, the events $A_1, \ldots, A_m$ are disjoint with $P(A_i) > 0$ ($i = 1, \ldots, m$), and $\bigcup_{i=1}^{m} A_i = \Omega$.

Since we know the value $\eta(\omega)$, we also know which of the events $A_1, \ldots, A_m$ contains $\omega$. The reasonable estimate for $\xi = \xi(\omega)$ is then the mean value of $\xi$ over each $A_i$ ($i = 1, \ldots, m$):
$$E(\xi \mid \eta) := E(\xi \mid A_1) \ \text{on } A_1; \quad \ldots; \quad E(\xi \mid \eta) := E(\xi \mid A_m) \ \text{on } A_m.$$

We see that $E(\xi \mid \eta)$ is a random variable, $\mathcal{F}(\eta)$-measurable, and $\int_A E(\xi \mid \eta)(\omega)\,dP(\omega) = \int_A \xi(\omega)\,dP(\omega)$ for all $A \in \mathcal{F}(\eta)$. (Here $\mathcal{F}(\eta) = \sigma(\{A_1, \ldots, A_m\})$.)

These properties of $E(\xi \mid \eta)$ for $\eta = \sum_{i=1}^{m} a_i \chi_{A_i}$ lead to the definition in the general case. Let $\eta$ be a random variable defined over $(\Omega, \mathcal{F}, P)$.

Definition 15. A real-valued random variable $\zeta = \zeta(\omega)$, $\omega \in \Omega$, is called the conditional expectation of the random variable $\xi = \xi(\omega)$ with respect to the given random variable $\eta = \eta(\omega)$ (denoted by $E(\xi \mid \eta)$) if:

1. $\zeta(\omega)$ is $\mathcal{F}(\eta)$-measurable and integrable;

2. $\int_A \zeta(\omega)\,dP(\omega) = \int_A \xi(\omega)\,dP(\omega)$ for all $A \in \mathcal{F}(\eta)$.

(Here $\mathcal{F}(\eta)$ is the sub-$\sigma$-algebra generated by the random variable $\eta$: $\mathcal{F}(\eta) \subset \mathcal{F}$. It is assumed that $\xi(\omega)$ is integrable on $\Omega$.)

Thus, $E(\xi \mid \eta)$ is any $\mathcal{F}(\eta)$-measurable random variable such that $\int_A E(\xi \mid \eta)(\omega)\,dP(\omega) = \int_A \xi(\omega)\,dP(\omega)$ for all $A \in \mathcal{F}(\eta)$.

Since $\mathcal{F}(\eta)$ is the $\sigma$-algebra generated by the random variable $\eta$, we can set $E(\xi \mid \mathcal{F}(\eta)) := E(\xi \mid \eta)$.

This motivates the more general notion of the conditional expectation of a random variable $\xi$ under the condition $\mathcal{G}$, where $\mathcal{G} \subset \mathcal{F}$ is an arbitrary sub-$\sigma$-algebra of $\mathcal{F}$.

Let $L^1(\Omega, \mathbb{R}^1)$ be the family of $\mathbb{R}^1$-valued random variables $\xi = \xi(\omega)$ with $E|\xi| < \infty$. Let $\mathcal{G} \subset \mathcal{F}$ be a sub-$\sigma$-algebra of $\mathcal{F}$, and let $\xi \in L^1(\Omega, \mathbb{R}^1)$.

Definition 16. A random variable $\zeta = \zeta(\omega)$, $\omega \in \Omega$, is called the conditional expectation of $\xi = \xi(\omega)$ under the condition $\mathcal{G}$ (denoted by $E(\xi \mid \mathcal{G})$) if:

1. $\zeta(\omega)$ is $\mathcal{G}$-measurable and integrable;

2. $\zeta(\omega)$ has the same values as $\xi(\omega)$ on the average, i.e. $\int_A \zeta(\omega)\,dP(\omega) = \int_A \xi(\omega)\,dP(\omega)$ for all $A \in \mathcal{G}$.

If $\mathcal{G}$ is the $\sigma$-algebra generated by the random variable $\eta$, i.e. $\mathcal{G} = \mathcal{F}(\eta)$, then we recover Definition 15: $E(\xi \mid \mathcal{G}) = E(\xi \mid \eta)$.

Thus, $E(\xi \mid \mathcal{G})$ is any $\mathcal{G}$-measurable random variable such that $\int_A E(\xi \mid \mathcal{G})(\omega)\,dP(\omega) = \int_A \xi(\omega)\,dP(\omega)$ for all $A \in \mathcal{G}$.

Notice that by the Radon-Nikodym theorem (see for instance [32, p. 142]) $E(\xi \mid \mathcal{G})$ exists and is almost surely (a.s.) unique.

If $A \in \mathcal{F}$ and $\mathcal{F}_0$ is a sub-$\sigma$-algebra of $\mathcal{F}$, then we can define the conditional probability of $A$ given $\mathcal{F}_0 \subset \mathcal{F}$ as $P(A \mid \mathcal{F}_0) := E(\chi_A \mid \mathcal{F}_0)$, where $\chi_A$ is the indicator function of $A$. Note that $P(A \mid \mathcal{F}_0)$ is a random variable. Also, for $A \in \mathcal{F}$ we define the conditional probability of $A$ given $\eta = \eta(\omega)$: $P(A \mid \eta) := P(A \mid \mathcal{F}(\eta))$, where $\mathcal{F}(\eta)$ is the $\sigma$-algebra generated by $\eta = \eta(\omega)$. We can write $P(A \mid \eta) = E(\chi_A \mid \mathcal{F}(\eta)) = E(\chi_A \mid \eta)$.

For an $\mathbb{R}^n$-valued random variable $\xi(\omega) = (\xi_1(\omega), \ldots, \xi_n(\omega))$, $\xi_i \in L^1(\Omega, \mathbb{R}^1)$ ($i = 1, \ldots, n$), the conditional expectation under the condition $\mathcal{G}$ can be defined coordinate-wise: $E(\xi \mid \mathcal{G}) := (E(\xi_1 \mid \mathcal{G}), \ldots, E(\xi_n \mid \mathcal{G}))$.

From Definition 16 it follows immediately that $E(E(\xi \mid \mathcal{G})) = E\xi$. Some other properties of the conditional expectation are as follows (see the sketch after the list):

a) $\xi(\omega) \ge 0 \Rightarrow E(\xi \mid \mathcal{G})(\omega) \ge 0$ (a.s.);

b) $\xi(\omega)$ is $\mathcal{G}$-measurable $\Rightarrow E(\xi \mid \mathcal{G})(\omega) = \xi(\omega)$ (a.s.);

c) $\xi(\omega) = c$ (= const) $\Rightarrow E(\xi \mid \mathcal{G})(\omega) = c$ (a.s.);

d) $a, b \in \mathbb{R} \Rightarrow E(a\xi(\omega) + b\eta(\omega) \mid \mathcal{G}) = aE(\xi \mid \mathcal{G})(\omega) + bE(\eta \mid \mathcal{G})(\omega)$ (a.s.);

e) $\mathcal{G}_1 \subset \mathcal{G}_2 \Rightarrow E(E(\xi \mid \mathcal{G}_2) \mid \mathcal{G}_1)(\omega) = E(\xi \mid \mathcal{G}_1)(\omega)$ (a.s.);

f) $\xi(\omega)$ and $\mathcal{G}$ (i.e. $\mathcal{F}(\xi)$ and $\mathcal{G}$) are independent $\Rightarrow E(\xi \mid \mathcal{G})(\omega) = E\xi$ (a.s.).
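For a simple (finitely-valued) conditioning variable $\eta$ the construction above amounts to averaging $\xi$ over each event $\{\eta = a_i\}$. The sketch below (an illustration assuming numpy; the specific variables are hypothetical) computes $E(\xi \mid \eta)$ in this way and checks the identity $E(E(\xi \mid \eta)) = E\xi$ noted above.

```python
import numpy as np

rng = np.random.default_rng(4)

N = 1_000_000
eta = rng.integers(0, 3, size=N)      # simple random variable with values 0, 1, 2
xi = eta + rng.standard_normal(N)     # xi depends on eta plus independent noise

# E(xi | eta) is constant on each event {eta = a}: the mean of xi there.
cond_exp = np.empty(N)
for a in (0, 1, 2):
    mask = eta == a
    cond_exp[mask] = xi[mask].mean()  # approx a, since the noise is centered

print(cond_exp.mean(), xi.mean())     # E(E(xi | eta)) = E(xi): values agree
```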

2.8. Convergence

Let $(\Omega, \mathcal{F}, P)$ be a probability space. Let $\xi(\omega)$ and $\xi_k(\omega)$ ($1 \le k < \infty$) be $\mathbb{R}^n$-valued random variables defined on this probability space. Below we give four types of convergence of the sequence of random variables $\{\xi_k(\omega)\}_{k=1}^{\infty}$.

Definition 17. The sequence $\{\xi_k(\omega)\}_{k=1}^{\infty}$ is said to converge to $\xi(\omega)$ almost surely (a.s.), or with probability 1, if there exists a set $\Omega_0 \in \mathcal{F}$ with $P(\Omega_0) = 0$ such that for every $\omega \notin \Omega_0$ the sequence $\{\xi_k(\omega)\}$ converges to $\xi(\omega)$ in the usual sense in $\mathbb{R}^n$.

In this case we write $\lim_{k\to\infty} \xi_k(\omega) = \xi(\omega)$ a.s., or $P\{\lim_{k\to\infty} \xi_k(\omega) = \xi(\omega)\} = 1$.

Definition 18. The sequence $\{\xi_k(\omega)\}_{k=1}^{\infty}$ is said to converge to $\xi$ stochastically, or in probability, if for every $\varepsilon > 0$, $P\{\omega : |\xi_k(\omega) - \xi(\omega)| > \varepsilon\} \to 0$ as $k \to \infty$.

Let $L^p = L^p(\Omega, \mathbb{R}^n)$ ($p > 0$) be the family of $\mathbb{R}^n$-valued random variables $\xi$ with $E|\xi|^p < \infty$.

Definition 19. The sequence $\{\xi_k(\omega)\}_{k=1}^{\infty}$ is said to converge to $\xi$ in $p$th moment, or in $L^p$, if $\xi_k, \xi \in L^p$ and $E|\xi_k(\omega) - \xi(\omega)|^p \to 0$ as $k \to \infty$.

Definition 20. The sequence $\{\xi_k(\omega)\}_{k=1}^{\infty}$ is said to converge to $\xi(\omega)$ in distribution if for every real-valued continuous bounded function $f$ defined on $\mathbb{R}^n$ the relation $\lim_{k\to\infty} Ef(\xi_k(\omega)) = Ef(\xi(\omega))$ holds.

These convergence notions have the following relationships: (convergence in $L^p$) $\Rightarrow$ (convergence in probability) $\Rightarrow$ (convergence in distribution); (almost sure convergence) $\Rightarrow$ (convergence in probability).

2.9. Law of large numbers

Let $(\Omega, \mathcal{F}, P)$ be a probability space. The following theorem is due to Chebyshev.

Proposition 6 (Law of large numbers). Let $\{\xi_k\}$ be a sequence of independent random variables defined on $(\Omega, \mathcal{F}, P)$ such that $E\xi_k = m$ and $V(\xi_k) \le c$ for $k = 1, 2, \ldots$ ($m, c \in \mathbb{R}$). Then $\frac{\xi_1 + \ldots + \xi_n}{n} \to m$ in probability as $n \to \infty$, i.e. for any $\varepsilon > 0$
$$\lim_{n\to\infty} P\left\{\left|\frac{\xi_1 + \ldots + \xi_n}{n} - m\right| > \varepsilon\right\} = 0.$$

If $\{\xi_k\}$ is a sequence of independent, identically distributed random variables (i.e. $F_{\xi_1}(x) = F_{\xi_2}(x) = \ldots$ for all $x \in \mathbb{R}$), then in Proposition 6 the condition $V(\xi_k) \le c$ can be omitted.

Proposition 7 (Khinchin). Let $\{\xi_k\}$ be a sequence of independent, identically distributed, integrable random variables with the same expectation: $E\xi_k = m$ for $k = 1, 2, \ldots$. Then $\frac{\xi_1 + \ldots + \xi_n}{n} \to m$ in probability as $n \to \infty$.

Proposition 8 (Generalized law of large numbers). Let $\{\xi_k\}$ be a sequence of independent random variables with finite expectations $E\xi_k = m_k$ and bounded variances $V(\xi_k) \le c$ for $k = 1, 2, \ldots$. Then for any $\varepsilon > 0$
$$\lim_{n\to\infty} P\left\{\left|\frac{\xi_1 + \ldots + \xi_n}{n} - \frac{E\xi_1 + \ldots + E\xi_n}{n}\right| > \varepsilon\right\} = 0.$$

Propositions 6-8 describe the weak law of large numbers. Below, two assertions of the strong law of large numbers are presented. These are due to Kolmogorov.

Proposition 9 (Strong law of large numbers). Let $\{\xi_k\}$ be a sequence of independent random variables. Suppose that the expectations $E\xi_k = m_k$ of the random variables $\xi_k$ ($k = 1, 2, \ldots$) are finite and that the variances $V(\xi_k)$ satisfy the condition $\sum_{k=1}^{\infty} \frac{V(\xi_k)}{k^2} < \infty$. Then
$$P\left\{\lim_{n\to\infty}\left(\frac{\xi_1 + \ldots + \xi_n}{n} - \frac{m_1 + \ldots + m_n}{n}\right) = 0\right\} = 1.$$

Proposition 10. Let $\{\xi_k\}_{k=1}^{\infty}$ be a sequence of independent, identically distributed random variables with the same finite expectation $E\xi_k = m$. Then $P\left\{\lim_{n\to\infty} \frac{\xi_1 + \ldots + \xi_n}{n} = m\right\} = 1$.
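A Monte Carlo sketch of Proposition 10 (assuming numpy; the exponential distribution is an arbitrary choice with mean $m$): the running averages $(\xi_1 + \ldots + \xi_n)/n$ settle down to $m$ along a sample path.

```python
import numpy as np

rng = np.random.default_rng(5)

m = 0.7
xi = rng.exponential(scale=m, size=100_000)   # i.i.d. with E xi_k = m

running_mean = np.cumsum(xi) / np.arange(1, xi.size + 1)
print(running_mean[[99, 9_999, 99_999]])      # drifting toward m = 0.7
```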

Notice that there are also laws of large numbers for random variables depending upon time (stochastic processes); in particular, for stochastic processes stationary in the wide sense.

2.10. Stochastic processes

Here we introduce random variables depending upon time.

Definition 21. A collection $\{\xi_t(\omega)\}_{t\in T}$ of $\mathbb{R}^n$-valued random variables $\xi_t(\omega)$ defined on a probability space $(\Omega, \mathcal{F}, P)$ is called a stochastic process with parameter set $T$ and state space $\mathbb{R}^n$.

The parameter set $T$ is usually the half-line $[0, \infty)$ or an interval $[a, b]$: $T := [0, \infty)$ or $T := [a, b]$. If $T$ is the set of non-negative integers, then $\{\xi_t(\omega)\}_{t\in T}$ is a sequence of random variables, as considered above in 2.8.

Note that for each fixed $t \in T$ we have a random variable $\xi_t(\omega) \in \mathbb{R}^n$.

On the other hand, for each fixed $\omega \in \Omega$ we have the mapping (function) $t \mapsto \xi_t(\omega)$, which is called a sample path of the process.

One sometimes writes $\xi(t, \omega)$ instead of $\xi_t(\omega)$, and a stochastic process may be regarded as a function of two variables $(t, \omega)$: $(t, \omega) \in T \times \Omega \mapsto \xi(t, \omega)$. A stochastic process $\{\xi_t(\omega)\}_{t\in T}$ is often written as $\{\xi_t\}_{t\in T}$, $\xi_t(\omega)$, $\xi(t)$ or $\xi(t, \omega)$.

Let $\{\xi_i(\omega) : i \in I \subset \mathbb{N}\}$ be a collection of $\mathbb{R}^n$-valued random variables defined on the probability space $(\Omega, \mathcal{F}, P)$. Then the $\sigma$-algebra $\mathcal{F}(\xi_i, i \in I) := \sigma(\xi_i, i \in I) = \sigma\left(\bigcup_{i\in I} \mathcal{F}(\xi_i)\right)$, where $\mathcal{F}(\xi_i) = \sigma(\xi_i)$ is the $\sigma$-algebra generated by the random variable $\xi_i$, is called the $\sigma$-algebra generated by the collection $\{\xi_i(\omega) : i \in I\}$.

Now let $\{\xi_t(\omega)\}_{t\in T}$ be an $\mathbb{R}^n$-valued stochastic process on $(\Omega, \mathcal{F}, P)$.

Definition 22. The $\sigma$-algebra $\mathcal{F}_T(\xi) := \sigma(\xi_t : t \in T) = \sigma(\{\xi_t \in B\},\, t \in T,\, B \in \mathcal{B})$, where $\mathcal{B}$ is the Borel $\sigma$-algebra in $\mathbb{R}^n$, is called the $\sigma$-algebra generated by the stochastic process $\{\xi_t\}_{t\in T}$, i.e. by the collection of random variables $\xi_t(\omega)$ for $t \in T$.

One often thinks of $\mathcal{F}_{[0,t]}(\xi)$ as the history of $\{\xi_s\}_{0\le s\le t}$ up to time $t$.

Let $A_1, \ldots, A_m$ be Borel sets in $\mathbb{R}^n$ and $t_1, \ldots, t_m \in T$. Then the probabilities $P(t_1, \ldots, t_m; A_1, \ldots, A_m) := P\{\xi(t_1, \omega) \in A_1, \ldots, \xi(t_m, \omega) \in A_m\}$ are called the finite-dimensional distributions of the process $\xi(t, \omega)$. Kolmogorov showed that for any consistent family of distributions $P(t_1, \ldots, t_m; A_1, \ldots, A_m)$ there exists a stochastic process which has these distributions as its finite-dimensional ones.

Let $\{\xi_t(\omega)\}_{t\in T}$ be an $\mathbb{R}^n$-valued stochastic process defined on a probability space $(\Omega, \mathcal{F}, P)$.

Definition 23. The process $\xi(t, \omega)$ is said to be continuous (respectively right continuous, left continuous) if for almost all $\omega \in \Omega$ the function $t \mapsto \xi_t(\omega): T \to \mathbb{R}^n$ is continuous (resp. right continuous, left continuous) in $t$.

Definition 24. A family $\{\mathcal{F}_t\}_{t\in T}$ of sub-$\sigma$-algebras of $\mathcal{F}$ is called a filtration if it is increasing, i.e. $\mathcal{F}_t \subset \mathcal{F}_s \subset \mathcal{F}$ for all $t < s$, $t, s \in T$; $T := [0, \infty)$.

The stochastic process $\{\xi_t(\omega)\}_{t\in T}$ is said to be $\{\mathcal{F}_t\}$-adapted (where $\{\mathcal{F}_t\}_{t\in T}$ is a filtration) if for every $t \in T$ the random variable $\xi_t(\omega)$ is $\mathcal{F}_t$-measurable.

Definition 25. The stochastic process $\{\xi_t(\omega)\}_{t\in T}$ is said to be measurable if this process, regarded as a function $\xi: T \times \Omega \to \mathbb{R}^n$ of two variables $(t, \omega)$, is $\mathcal{B}(T) \times \mathcal{F}$-measurable, where $\mathcal{B}(T)$ is the family of all Borel subsets of $T$.

Let $\{\xi_t\}_{t\in T}$ be a stochastic process. Another stochastic process $\{\eta_t\}_{t\in T}$ is said to be a version (or a modification, or stochastically equivalent) of $\{\xi_t\}_{t\in T}$ if for all $t \in T$, $\xi_t(\omega) = \eta_t(\omega)$ almost surely (a.s.), i.e. $P\{\omega : \xi_t(\omega) = \eta_t(\omega)\} = 1$ for any $t \in T$.

Two stochastic processes $\{\xi_t\}_{t\in T}$ and $\{\eta_t\}_{t\in T}$ are said to be indistinguishable (or equivalent) if for almost all $\omega \in \Omega$, $\xi_t(\omega) = \eta_t(\omega)$ for all $t \in T$, i.e. $P\{\omega : \xi_t(\omega) = \eta_t(\omega) \text{ for all } t \in T\} = 1$.

Let $\{\xi_t\}_{t\in[0,\infty)}$ be a stochastic process.

Definition 26. The process $\{\xi_t\}_{t\ge 0}$ is said to satisfy the law of large numbers if for each $\varepsilon > 0$, $\delta > 0$ there exists a number $T > 0$ such that for all $t > T$
$$P\left\{\left|\frac{1}{t}\int_0^t \xi(s, \omega)\,ds - \frac{1}{t}\int_0^t E\xi(s, \omega)\,ds\right| > \delta\right\} < \varepsilon.$$

Definition 27. A stochastic process $\{\xi_t\}_{t\ge 0}$ is said to satisfy the strong law of large numbers if
$$P\left\{\lim_{t\to\infty}\left(\frac{1}{t}\int_0^t \xi(s, \omega)\,ds - \frac{1}{t}\int_0^t E\xi(s, \omega)\,ds\right) = 0\right\} = 1.$$

2.11. Martingales

Let $(\Omega, \mathcal{F}, P)$ be a probability space, and let a family $\{\mathcal{F}_t\}_{t\in T}$ of sub-$\sigma$-algebras ($\mathcal{F}_t \subset \mathcal{F}$), $T = [0, \infty)$, be a filtration.

Definition 28. An $\mathbb{R}^n$-valued stochastic process $\{M_t\}_{t\ge 0}$ is called a martingale with respect to the filtration $\{\mathcal{F}_t\}$ (or simply a martingale) if:

1) $M_t$ is $\mathcal{F}_t$-measurable for all $t \ge 0$ (i.e. $\{M_t\}_{t\ge 0}$ is adapted);

2) $E|M_t| < \infty$ for all $t \ge 0$ (i.e. $M_t(\omega)$ is integrable on $\Omega$);

3) $E(M_t \mid \mathcal{F}_s) = M_s$ a.s. for all $0 \le s < t < \infty$.

(Here $|\cdot|$ denotes the norm of a vector in $\mathbb{R}^n$.)

If $\mathcal{F}_s := \sigma(M_u : 0 \le u \le s)$ is the sub-$\sigma$-algebra of $\mathcal{F}$ generated by the random variables $M_u = M_u(\omega)$ for $0 \le u \le s$, then $\mathcal{F}_s$ may be interpreted as the history of the process $\{M_t\}$ for all times $0 \le u \le s$ until (and including) time $s$.

Definition 29. A real-valued $\{\mathcal{F}_t\}$-adapted and integrable stochastic process $\{M_t\}$ is called a supermartingale (with respect to $\{\mathcal{F}_t\}$) if $E(M_t \mid \mathcal{F}_s) \le M_s$ a.s. for all $0 \le s < t < \infty$.

Definition 30. A real-valued $\{\mathcal{F}_t\}$-adapted integrable process $\{M_t\}$ is called a submartingale (with respect to $\{\mathcal{F}_t\}$) if $E(M_t \mid \mathcal{F}_s) \ge M_s$ a.s. for all $0 \le s < t < \infty$.

Clearly, $\{M_t\}_{t\ge 0}$ is a submartingale if and only if $\{-M_t\}_{t\ge 0}$ is a supermartingale. Taking into account that $E(E(M_t \mid \mathcal{F}_s)) = EM_t$, it follows from Definitions 29 and 30 that $EM_t \le EM_s$ for all $0 \le s < t$ (for supermartingales) and $EM_t \ge EM_s$ for all $0 \le s < t$ (for submartingales); i.e. for a supermartingale the function $m(t) = EM_t$ is monotonically decreasing in $t$, and for a submartingale $m(t) = EM_t$ is monotonically increasing in $t$.

We give some important statements due to Doob.

Proposition 11 (Doob's submartingale inequality). Let $p > 1$ and let $\{M_t\}$ be a real-valued nonnegative submartingale such that $M_t \in L^p(\Omega, \mathbb{R})$. Let $[a, b]$ be a bounded interval, $0 \le a < b$. Then
$$E\left(\sup_{t\in[a,b]} M_t^p\right) \le \left(\frac{p}{p-1}\right)^p E M_b^p.$$

Proposition 12 (Doob's martingale inequalities). Let $\{M_t\}_{t\ge 0}$ be an $\mathbb{R}^n$-valued martingale and let $[a, b]$ be a bounded interval, $0 \le a < b$.

1) If $p \ge 1$ and $M_t \in L^p(\Omega, \mathbb{R}^n)$, then $P\left\{\omega : \sup_{t\in[a,b]} |M_t(\omega)| \ge c\right\} \le \frac{1}{c^p} E|M_b|^p$ for all $c > 0$;

2) If $p > 1$ and $M_t \in L^p(\Omega, \mathbb{R}^n)$, then $E\left(\sup_{t\in[a,b]} |M_t|^p\right) \le \left(\frac{p}{p-1}\right)^p E|M_b|^p$.

Notice that martingales are important in probability theory because they admit powerful estimates, some of which are given above in Propositions 11 and 12.
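Brownian motion (introduced below in 2.13) is a martingale, and $|B_t|$ is a nonnegative submartingale, so Proposition 11 with $p = 2$ gives $E\left(\sup_{t\le b} B_t^2\right) \le 4EB_b^2 = 4b$. Below is a simulation sketch under the assumption that numpy is available; the time grid is a discretization, so the supremum is slightly underestimated.

```python
import numpy as np

rng = np.random.default_rng(6)

paths, steps, b = 20_000, 1_000, 1.0
dt = b / steps

# Brownian paths on [0, b]: cumulative sums of N(0, dt) increments.
increments = rng.normal(0.0, np.sqrt(dt), size=(paths, steps))
B = np.cumsum(increments, axis=1)

lhs = np.mean(np.max(B**2, axis=1))   # estimate of E sup_{t <= b} B_t^2
print(lhs, "<=", 4 * b)               # the bound holds with room to spare
```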

2.12. Markov process

Here we recall the usual definition of a Markov process. Let $(\Omega, \mathcal{F}, P)$ be a probability space.

Definition 31. An $\mathbb{R}^n$-valued stochastic process $\{\xi_t\}_{t\ge 0}$ is called a Markov process if the following properties are satisfied:

1) $\{\xi_t\}$ is an $\{\mathcal{F}_t\}$-adapted process (i.e. $\xi_t$ is $\mathcal{F}_t$-measurable);

2) $P(\xi(t) \in A \mid \mathcal{F}_s) = P(\xi(t) \in A \mid \xi(s))$ a.s. for all $0 \le s < t < \infty$ and for all Borel sets $A$ in $\mathbb{R}^n$, where $\mathcal{F}_s$ is the $\sigma$-algebra generated by $\{\xi(r) : 0 \le r \le s\}$: $\mathcal{F}_s = \sigma(\xi_r, 0 \le r \le s) \subset \mathcal{F}$.

Property 2) is called the Markov property. It means that, for a Markov process, the future behavior of the process given what has happened up to time $s$ is the same as the behavior obtained when starting the process at $\xi(s)$; i.e. the future is independent of the past when the present is known.

The transition function of a Markov process $\{\xi_t\}_{t\ge 0}$ is a function $P(s, x; t, A)$ defined for $0 \le s \le t < \infty$, $x \in \mathbb{R}^n$ and $A \in \mathcal{B}$ ($\mathcal{B}$ is the Borel $\sigma$-algebra of $\mathbb{R}^n$), with the following properties:

1) $P(s, \cdot\,; t, A)$ is Borel measurable for fixed $s, t, A$;

2) $P(s, x; t, \cdot)$ is a probability measure for fixed $s, t, x$;

3) $P(s, x; s, A) = \delta_x(A)$, where $\delta_x(A) = 1$ if $x \in A$ and $\delta_x(A) = 0$ if $x \notin A$;

4) for every $0 \le s \le t < \infty$ and $A \in \mathcal{B}$,
$$P(s, \xi(s); t, A) = P(\xi(t) \in A \mid \mathcal{F}_s) \quad \text{a.s.}$$

The latter relation may be rewritten in the form $P(s, x; t, A) = P\{\xi(t) \in A \mid \xi(s) = x\}$, which gives an interpretation of $P(s, x; t, A)$: its value is the probability that the process will be in the set $A$ at time $t$ given that the process was in the state $x$ at time $s \le t$.
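For the Wiener process of the next subsection the transition function is explicit: $P(s, x; t, A) = \int_A \frac{1}{\sqrt{2\pi(t-s)}}\, e^{-(y-x)^2/(2(t-s))}\,dy$. The sketch below (assuming numpy and scipy are available) evaluates it for an interval $A = [a, b]$ and confirms the value by simulation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

s, t, x = 0.5, 2.0, 1.0      # start time, end time, starting state
a, b = 0.0, 2.0              # the set A = [a, b]

# Transition law of the Wiener process: B_t given B_s = x is N(x, t - s).
scale = np.sqrt(t - s)
p_exact = norm.cdf(b, loc=x, scale=scale) - norm.cdf(a, loc=x, scale=scale)

# Monte Carlo: start at x at time s, jump to time t, count hits of A.
y = x + scale * rng.standard_normal(1_000_000)
print(p_exact, np.mean((a <= y) & (y <= b)))   # the two values agree
```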

2.13. Brownian motion

Brownian motion is the name given to the irregular movement of pollen grains suspended in water, observed by the Scottish botanist Robert Brown in 1828. From the observations it was noted that: 1) the path of a given particle is very irregular, having a tangent at no point; and 2) the motions of two distinct particles appear to be independent. To describe the motion mathematically it is natural to use the concept of a stochastic process $B_t(\omega)$, interpreted as the position of the pollen grain $\omega$ at time $t$.

Let $(\Omega, \mathcal{F}, P)$ be a probability space with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$, i.e. an increasing family of sub-$\sigma$-algebras of $\mathcal{F}$: $\mathcal{F}_s \subset \mathcal{F}_t \subset \mathcal{F}$, $0 \le s < t$.

Definition 32. A real-valued stochastic process $\{B_t\}_{t\ge 0}$ is called a (standard) one-dimensional Brownian motion or Wiener process if

0) $\{B_t\}$ is an $\{\mathcal{F}_t\}$-adapted process (i.e. $B_t$ is $\mathcal{F}_t$-measurable);

1) $B_0 = 0$ a.s.;

2) for all times $0 \le t_0 < t_1 < t_2 < \ldots < t_k < \infty$, the random variables $B_{t_1} - B_{t_0}, B_{t_2} - B_{t_1}, \ldots, B_{t_k} - B_{t_{k-1}}$ are independent;

3) for all times $0 \le s < t < \infty$, the random variable (increment) $B_t - B_s$ is normally distributed with mean zero and variance $t - s$, i.e. $B_t - B_s$ is Gaussian $N(0, t - s)$ for all $t > s \ge 0$.

Notice in particular that $EB_t = 0$ and $V(B_t) = EB_t^2 = t$ for each time $t \ge 0$, by property 3) of Brownian motion (since $B_t = B_t - B_0$, $B_0 = 0$).

Using Kolmogorov's theorem on finite-dimensional distributions, one can prove that a process $\{B_t\}_{t\ge 0}$ with properties 1)-3) of Definition 32 exists.

Note that by property 2) of Definition 32 Brownian motion has independent increments (therefore, they are uncorrelated), and by property 3) the distribution of $B_t - B_s$ depends only on the difference $t - s$.
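Properties 1)-3) translate directly into a simulation recipe: accumulate independent $N(0, \Delta t)$ increments. A minimal sketch (assuming numpy) also verifies empirically that $EB_t = 0$ and $V(B_t) = t$.

```python
import numpy as np

rng = np.random.default_rng(8)

paths, steps, T = 50_000, 500, 1.0
dt = T / steps

dB = rng.normal(0.0, np.sqrt(dt), size=(paths, steps))  # independent increments
B = np.cumsum(dB, axis=1)                               # B_t along each path

k = steps // 2               # grid index of the time t = 0.5
print(B[:, k].mean())        # approx 0   = E B_t
print(B[:, k].var())         # approx 0.5 = t = V(B_t)
```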

Definition 33. A stochastic process $\{\xi_t\}_{t\ge 0}$ is called stationary if $\xi_t(\omega)$ has the same distribution as $\xi_{t+h}(\omega)$ for any $h > 0$.

The Brownian motion (Wiener process) has many important properties, some of which are listed below.

1. The increment $B_t - B_s$ is independent of $\mathcal{F}_s$ (i.e. of every event $A \in \mathcal{F}_s$) for $0 \le s < t < \infty$;

2. $\{B_t\}$ is continuous (or has a continuous modification), i.e. for almost all $\omega \in \Omega$ the function $t \mapsto B_t(\omega)$ (the Brownian sample path) is continuous;

3. For almost every $\omega \in \Omega$, the function $t \mapsto B_t(\omega)$ is nowhere differentiable;

4. $\{B_t\}_{t\ge 0}$ is of infinite variation on each subinterval $(a, b) \subset [0, \infty)$;

5. The Brownian motion $\{B_t\}_{t\ge 0}$ is a martingale, i.e. $E(B_t \mid \mathcal{F}_s) = B_s$ a.s. for all $0 \le s < t < \infty$;

6. The Brownian motion $\{B_t\}$ has stationary increments, i.e. the process $\{B_{t+h} - B_t\}_{h\ge 0}$ has the same distribution for all $t \in (0, \infty)$;

7. The Brownian motion $\{B_t\}$ is a Markov process.

We now define an $n$-dimensional Brownian motion (or Wiener process).

Definition 34. An $\mathbb{R}^n$-valued stochastic process $\{B_t = (B_t^1, \ldots, B_t^n)\}_{t\ge 0}$ is called an $n$-dimensional Brownian motion (Wiener process) if:

1) every real-valued process $\{B_t^k\}_{t\ge 0}$ ($k = 1, \ldots, n$) is a one-dimensional Brownian motion;

2) the processes $\{B_t^k\}_{t\ge 0}$ ($k = 1, \ldots, n$) are independent, i.e. the $\sigma$-algebras $\mathcal{F}^k := \sigma(B_t^k : 0 \le t < \infty)$ are independent.

Note that an $n$-dimensional Brownian motion is also a Markov process.

2.14. White noise

Let $(\Omega, \mathcal{F}, P)$ be a probability space. Let $\xi(t) = \xi_t(\omega)$, $t \ge 0$, be a real-valued stochastic process with independent increments. Let $\xi(t)$ satisfy the conditions
$$\xi(0) = 0, \quad E\xi(t) = 0, \quad E(\xi(t))^2 = t \quad \text{for all } t \ge 0. \tag{C}$$

For $t < 0$ we set $\xi(t) = 0$. Note that if $t \ge s \ge 0$, then, taking into account the property of independent increments, we have for the correlation function
$$R(t, s) = E[\xi(t)\xi(s)] = E[((\xi(t) - \xi(s)) + \xi(s))\,\xi(s)] = E[(\xi(t) - \xi(s))(\xi(s) - \xi(0))] + E\xi(s)^2 = s.$$
Thus, $R(t, s) = \min(t, s)$ for $t, s \in [0, \infty)$.
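For the Wiener process this computation is easy to confirm numerically; the empirical value of $E[\xi(t)\xi(s)]$ comes out as $\min(t, s)$. A sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(9)

paths, steps, T = 100_000, 200, 2.0
dt = T / steps
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, steps)), axis=1)

i, j = 49, 149                      # grid indices of t = 0.5 and s = 1.5
print(np.mean(B[:, i] * B[:, j]))   # approx min(0.5, 1.5) = 0.5
```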

• d£

Definition 35. The generalized stochastic process £ = —f, where £(t) is a rea-valuedprocess with independent increments satisfying the conditions (C), is called "white noise ".

In particular, if £(t) is Wiener process (Brownian motion), then —f is called Gaussian "white noise".

(Notice that, since Wiener process is Gaussian, — is also Gaussian by definition of derivation of generalized process £).

That the process is generalized means that it is constructed as a distribution, that is, as a stochastic continuous linear functional $\varphi \mapsto \dot{\xi}(\varphi)$ defined on the space $\mathcal{D} = \{\varphi\}$ of infinitely differentiable functions $\varphi(t)$ on $\mathbb{R} = (-\infty, \infty)$ with compact support ($\mathcal{D}$ is called the space of test functions on $\mathbb{R}$; it is endowed with a topology which defines the notion of convergence; notice that the values of the linear functional are real-valued random variables).

Denote by $m(\varphi)$ and $R(\varphi, \psi)$ ($\varphi, \psi \in \mathcal{D}$) the mean value and the correlation functional of the white noise process $\frac{d\xi}{dt}: \varphi \mapsto \frac{d\xi}{dt}(\varphi)$, $\varphi \in \mathcal{D}$ (here $\xi = \xi(t)$ is a Wiener process, so the conditions (C) hold). They are defined as follows:
$$m(\varphi) := E\dot{\xi}(\varphi), \qquad R(\varphi, \psi) := E\left[(\dot{\xi}(\varphi) - m(\varphi))(\dot{\xi}(\psi) - m(\psi))\right].$$

Since $E\xi(t) = 0$, we have
$$m(\varphi) = E\dot{\xi}(\varphi) = -E\int_{-\infty}^{\infty} \xi(t)\varphi'(t)\,dt = -\int_{-\infty}^{\infty} E\xi(t)\,\varphi'(t)\,dt = 0.$$
(Here we used the definition of the derivative of the Wiener process $\xi(t)$ regarded as a generalized process: $\dot{\xi}(\varphi) = -\xi(\varphi')$.)

It can be proved that $R(\varphi, \psi) = \int_0^{\infty}\int_0^{\infty} \min(t, s)\,\varphi'(t)\psi'(s)\,dt\,ds$. The latter can be written (symbolically) as $R(\varphi, \psi) = \int_0^{\infty}\int_0^{\infty} \delta(t - s)\varphi(t)\psi(s)\,dt\,ds$, where $\delta(t - s)$ is the Dirac delta function.

This equality can be interpreted as follows: the correlation function $R(t, s)$ of the white noise process is given by the delta function: $R(t, s) = \delta(t - s)$.

Thus, in broad terms, white noise may be defined as a real-valued stochastic process $\xi(t)$, $t \ge 0$, which possesses the following properties:

1) $\xi(t)$ is continuous in $t \ge 0$ (almost surely);

2) $\xi(t)$ is a Gaussian process;

3) the mathematical expectation (mean value) $E\xi(t) = 0$ for all $t \ge 0$;

4) the correlation function $R(t, s) = E[\xi(t)\xi(s)] = \delta(t - s)$ for all $t, s \ge 0$, i.e. $\xi(t)$ is a stationary (taking into account property 3)) process with uncorrelated values.

Moreover, one can prove that white noise $\xi(\varphi)$, $\varphi \in \mathcal{D}$, is a process with independent values, i.e. for any $\varphi_1, \varphi_2 \in \mathcal{D}$ such that $\varphi_1 \cdot \varphi_2 = 0$, the random variables $\xi(\varphi_1)$ and $\xi(\varphi_2)$ are independent.
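Although white noise exists only as a generalized process, its discrete-time surrogate $\Delta B/\Delta t$ is easy to inspect. In the sketch below (assuming numpy) the samples are Gaussian with mean zero and mutually uncorrelated, and their variance $1/\Delta t$ blows up as $\Delta t \to 0$, which is the discrete shadow of $R(t, s) = \delta(t - s)$.

```python
import numpy as np

rng = np.random.default_rng(10)

dt, n = 0.01, 100_000
dB = rng.normal(0.0, np.sqrt(dt), size=n)   # Wiener increments
w = dB / dt                                 # discretized "white noise"

print(w.mean())                             # approx 0
print(w.var() * dt)                         # approx 1, i.e. Var(w) = 1/dt
print(np.corrcoef(w[:-1], w[1:])[0, 1])     # approx 0: uncorrelated values
```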

References:

1. Lyapunov A.M. The General Problem of the Stability of Motion. Doctoral dissertation. Univ. Kharkov, 1892. 251 pp.


2. Malkin I.G. Theory of Stability of Motion. 2nd ed. Moscow: Nauka, 1966. 530 pp.

3. Chetaev N.G. Stability of Motion, 3rd ed. Moscow: Nauka, 1965. 207 pp.

4. Krasovsky N.N. Certain Problems in the Theory of Stability of Motion. Moscow: Fizmatgiz, 1959. 211 pp.

5. Pliss V.A. Non-Local Problems in the Theory of Oscillations. Moscow: Nauka, 1964. 320 pp.

6. Krasnoselsky M.A. The Operator of Translation Along the Trajectories of Differential Equations. Moscow: Nauka, 1966. 328 pp.

7. LaSalle J.P., Lefschetz S. Stability by Lyapunov's Direct Methods with Applications. New York: Academic Press, 1961. 168 pp.

8. Gelig A.Kh., Leonov G.A., Yakubovich V.A. Stability of Non-Linear Systems with a Non-Unique Equilibrium State. Moscow: Nauka, 1978. 400 pp.

9. Ito K. On stochastic differential equations // Mathematics. 1957. Vol. 1, Iss. 1. P. 78-116.


10. Bertram J.E., Sarachik P.E. Stability of circuits with randomly time-varying parameters // IRE Transactions on Circuit Theory. 1959. Vol. 6, Iss. 5. P. 260-270.

11. Kats I.Ya., Krasovsky N.N. On stability of systems with random parameters // Applied Mathematics and Mechanics. 1960. Vol. 24, No. 5. P. 809-823.

12. Kushner H.J. Stochastic Stability and Control. New York: Academic Press, 1967. 198 pp.

13. Khasminskii R.Z. Stochastic Stability of Differential Equations. 2nd ed. Heidelberg; New York: Springer, 2012. 339 pp.

14. Arnold L. Stochastic Differential Equations: Theory and Applications. New York: John Wiley and Sons, 1974. 228 pp.

15. Friedman A. Stochastic Differential Equations and Applications. Vol. I. New York: Academic Press, 1975. 248 pp.

16. Mao X. Stochastic Differential Equations and Applications. 2nd ed. Chichester: Horwood Publishing, 2007. 422 pp.

17. Kushner H.J. On the construction of stochastic Lyapunov functions // IEEE Trans. Autom. Control. 1965. Vol. 10, No. 4. P. 477-478.

18. Shumafov M.M. On the construction of Lyapunov functions for some second order nonlinear stochastic differential equations and questions of stability // Differential Equations. 1981. Vol. 17, No. 6. P. 1143-1145.

19. Shumafov M.M. On the dissipativity of random processes defined by some nonlinear second order differential equations // Differential Equations. 1993. Vol. 29, No. 1. P. 175-176.

20. Shumafov M.M. Lyapunov functions for two-dimensional linear stationary stochastic systems // Proceedings of Physical Society of Adygheya Republic. 1997. No. 2. P. 1-26. URL: http://fora.adygnet.ru

21. Shumafov M.M. On the stochastic stability of a nonlinear system perturbed by a "white" noise random process // Proceedings of Physical Society of Adygheya Republic. 1999. No. 4. P. 118-124. URL: http://fora.adygnet.ru

22. Shumafov M.M. On the stability of a second-order nonlinear stochastic system // Proceedings of Physical Society of Adygheya Republic. 2002. No. 7. P. 98-102. URL: http://fora.adygnet.ru

23. Shumafov M.M. On the dissipativeness of solutions of second order stochastic differential equations // The Bulletin of the Adyghe State University. Ser. Natural-Mathematical and Technical Sciences. 2008. Iss. 4 (32). P. 11-17. URL: http://vestnik.adygnet.ru

24. Shumafov M.M. On the stochastic stability of some two-dimensional dynamical systems // Differential Equations. 2010. Vol. 46, No. 6. P. 892-896.

25. Shumafov M.M., Tlyachev V.B. Construction of Lyapunov functions for second-order linear stationary stochastic systems // Results of Science and Technology. Ser. Modern Mathematics and Its Applications. Thematic Review. 2018. Vol. 149. P. 118-128.

26. Shumafov M.M., Tlyachev V.B. Criteria of stochastic stability of two-dimensional linear stationary systems perturbed by white noise // Dynamical Systems in Science and Technology (DSST-2018): abstracts of the International Conference, September 17-21, 2018, Alushta.

27. Shumafov M.M., Tlyachev V.B. On the stochastic stability of the second-order differential systems // Nonlocal Boundary Value Problems and Related Problems of Mathematical Biology, Informatics and Physics: proceedings of the 5th International Scientific Conference dedicated to the 80th birthday of A.M. Nakhushev, December 4-7, 2018, Nalchik. P. 243.

28. Shumafov M.M., Tlyachev V.B. On the Stability of Random Processes Defined by Second Order Differential Equations // Equations of Convolution Type in Science and Technology (ECTST-2019): Russian Scientific and Practical Conference with International Participation dedicated to the 90th birthday of Yu.I. Chersky, September 25-28, 2019, Yalta. P. 6-8.

29. Thygesen U.H. A Survey of Lyapunov Techniques for Stochastic Differential Equations // IMM Tech. Report. 1997. No. 18. URL: http://www.imm.dtu.dk

30. Visentin F. A Survey on Stability for Stochastic Differential Equations // Scientiae Mathematicae Japonicae. 2013. Vol. 76, No. 1. P. 147-152.

31. Oksendal B. Stochastic Differential Equations. Berlin: Springer, 2007. 332 pp.

32. Loeve M. Probability Theory. Foundations. Random Sequences. Princeton: Van Nostrand, 1960.

33. Evans L.C. Introduction to Stochastic Differential Equations. Version 1.2. Berkeley: University of California, 2002. 139 pp.

34. Gikhman I.I., Skorokhod A.V. Introduction to the Theory of Random Processes. Moscow: Nauka, 1965. 656 pp.

35. Rozanov Yu.A. Probability Theory, Random Processes and Mathematical Statistics. Moscow: Nauka, 1985. 320 pp.

