
MSC 60H10, 60H30, 60K30

OPTIMAL SOLUTIONS FOR INCLUSIONS OF GEOMETRIC BROWNIAN MOTION TYPE WITH MEAN DERIVATIVES

Yu.E. Gliklikh, Voronezh State University, Voronezh, Russian Federation, yeg@math.vsu.ru,

O.O. Zheltikova, Voronezh State University, Voronezh, Russian Federation, ksu_ola@mail.ru

The idea of mean derivatives of stochastic processes was suggested by E. Nelson in the 1960s. Unlike ordinary derivatives, mean derivatives are well-posed for a very broad class of stochastic processes, and equations with mean derivatives naturally arise in many mathematical models of physics (in particular, E. Nelson introduced the mean derivatives for the needs of Stochastic Mechanics, a version of quantum mechanics). Inclusions with mean derivatives are a natural generalization of those equations in the case of feedback control or of motion in complicated media. The paper is devoted to a brief introduction into the theory of equations and inclusions with mean derivatives and to the investigation of a special type of such inclusions called inclusions of geometric Brownian motion type. The existence of optimal solutions maximizing a certain cost criterion is proved.

Keywords: mean derivatives; stochastic differential inclusions; optimal solution.

Introduction

The notion of mean derivatives (forward, backward, symmetric and antisymmetric) was introduced by Edward Nelson in the 1960s in his construction of the so-called Stochastic Mechanics, a version of Quantum Mechanics ([1, 2, 3]). After that, many other applications of equations with mean derivatives to various problems of mathematical physics were found (see, e.g., [4]). Inclusions with mean derivatives are a natural generalization of those equations in the case of feedback control or of motion in complicated media.

It should be pointed out that Nelson's classical mean derivatives give information about the drift of a stochastic process. In [5], as a slight modification of some of Nelson's constructions, a new sort of mean derivative called quadratic (it is responsible for the diffusion term of a process) was introduced, so that, strictly speaking, it became possible to find processes having given mean derivatives.

The paper contains a brief introduction to the general theory of stochastic differential equations and inclusions given in terms of mean derivatives, and new applications. We investigate a special class of inclusions with mean derivatives called inclusions of geometric Brownian motion type, introduced previously in [6]. We show that, under some natural conditions, among the solutions of such an inclusion there is an optimal one maximizing (or minimizing) a certain cost criterion. For definiteness we deal with the problem of maximizing the criterion, since the minimization problem is quite analogous.

Some remarks on notation. In this paper we deal with equations and inclusions in the linear space $\mathbb{R}^n$, for which we always use the coordinate presentation of vectors and linear operators. Vectors in $\mathbb{R}^n$ are considered as columns. If $X$ is such a vector, the transposed row vector is denoted by $X^*$. Linear operators from $\mathbb{R}^n$ to $\mathbb{R}^n$ are represented as $n\times n$ matrices, and the symbol $*$ means transposition of a matrix (passage to the matrix of the conjugate operator). The space of $n\times n$ matrices is denoted by $L(\mathbb{R}^n,\mathbb{R}^n)$.

By $S(n)$ we denote the linear space of symmetric $n\times n$ matrices, which is a subspace of $L(\mathbb{R}^n,\mathbb{R}^n)$. The symbol $S_+(n)$ denotes the set of positive definite symmetric $n\times n$ matrices, which is a convex open set in $S(n)$. Its closure, i.e., the set of positive semi-definite symmetric $n\times n$ matrices, is denoted by $\bar S_+(n)$.

Everywhere below, for a set $B$ in $\mathbb{R}^n$ or in $L(\mathbb{R}^n,\mathbb{R}^n)$ we use the norm introduced by the usual formula $\|B\|=\sup_{y\in B}\|y\|$.

Everywhere we use Einstein’s summation convention with respect to a shared upper and lower index.

For the sake of simplicity we consider equations, their solutions and other objects on a finite time interval $t\in[0,T]$.

We refer the reader to [7, 4] for details about set-valued mappings; to [4, 8, 9] for details about stochastic differential equations and weak convergence of probability measures; to [10] for details about weak convergence in Hilbert spaces; and to [11] for details about conditional expectation. The research is supported in part by RFBR Grants 12-01-00183 and 13-01-00041.

1. Introduction to equations and inclusions with mean derivatives

Consider a stochastic process $\xi(t)$ in $\mathbb{R}^n$, $t\in[0,T]$, given on a certain probability space $(\Omega,\mathcal F,\mathsf P)$ and such that $\xi(t)$ is an $L_1$-random element for all $t$. It is known that such a process determines three families of $\sigma$-subalgebras of the $\sigma$-algebra $\mathcal F$:

(i) the "past" $\mathcal P^\xi_t$ generated by preimages of Borel sets from $\mathbb{R}^n$ under all mappings $\xi(s):\Omega\to\mathbb{R}^n$ for $0\le s\le t$;

(ii) the "future" $\mathcal F^\xi_t$ generated by preimages of Borel sets from $\mathbb{R}^n$ under all mappings $\xi(s):\Omega\to\mathbb{R}^n$ for $t\le s\le T$;

(iii) the "present" ("now") $\mathcal N^\xi_t$ generated by preimages of Borel sets from $\mathbb{R}^n$ under the mapping $\xi(t):\Omega\to\mathbb{R}^n$.

All the above families we suppose to be complete, i.e., containing all sets of probability zero. For the sake of convenience we denote by $E^\xi_t(\cdot)$ the conditional expectation $E(\cdot\mid\mathcal N^\xi_t)$ with respect to the "present" $\mathcal N^\xi_t$ for $\xi(t)$.

Following [1, 2, 3], introduce the following notions of forward and backward mean derivatives.

Definition 1. (i) The forward mean derivative $D\xi(t)$ of $\xi(t)$ at the time instant $t$ is the $L_1$-random element of the form
$$D\xi(t)=\lim_{\Delta t\to+0}E^\xi_t\Bigl(\frac{\xi(t+\Delta t)-\xi(t)}{\Delta t}\Bigr), \qquad (1)$$
where the limit is supposed to exist in $L_1(\Omega,\mathcal F,\mathsf P)$ and $\Delta t\to+0$ means that $\Delta t\to0$ and $\Delta t>0$.

(ii) The backward mean derivative $D_*\xi(t)$ of $\xi(t)$ at $t$ is the $L_1$-random element
$$D_*\xi(t)=\lim_{\Delta t\to+0}E^\xi_t\Bigl(\frac{\xi(t)-\xi(t-\Delta t)}{\Delta t}\Bigr), \qquad (2)$$
where (as well as in (i)) the limit is assumed to exist in $L_1(\Omega,\mathcal F,\mathsf P)$ and $\Delta t\to+0$ means that $\Delta t\to0$ and $\Delta t>0$.

Remark 1. If $\xi(t)$ is a Markov process, then evidently $E^\xi_t$ can be replaced by $E(\cdot\mid\mathcal P^\xi_t)$ in (1) and by $E(\cdot\mid\mathcal F^\xi_t)$ in (2). In Nelson's original works there were two versions of the definition of mean derivatives: as in our Definition 1, and with conditional expectations with respect to the "past" and "future" as above; they coincide for Markov processes. We shall not suppose $\xi(t)$ to be a Markov process and give the definition with conditional expectation with respect to the "present", taking into account the physical principle of locality: the derivative should be determined by the present state of the system, not by its past or future.

Following [5], we introduce the differential operator $D_2$ that differentiates an $L_1$-random process $\xi(t)$, $t\in[0,T]$, according to the rule
$$D_2\xi(t)=\lim_{\Delta t\to+0}E^\xi_t\Bigl(\frac{(\xi(t+\Delta t)-\xi(t))(\xi(t+\Delta t)-\xi(t))^*}{\Delta t}\Bigr), \qquad (3)$$
where $\xi(t+\Delta t)-\xi(t)$ is considered as a column vector (a vector in $\mathbb{R}^n$), $(\xi(t+\Delta t)-\xi(t))^*$ is a row vector (the transposed, or conjugate, vector) and the limit is supposed to exist in $L_1(\Omega,\mathcal F,\mathsf P)$. We emphasize that the matrix product of a column on the left and a row on the right is a matrix, so that $D_2\xi(t)$ is a symmetric positive semi-definite matrix function on $[0,T]\times\mathbb{R}^n$. We call $D_2$ the quadratic mean derivative.

It is shown (see, e.g., [5, 4]) that for an Itô diffusion-type process $\xi(t)=\xi_0+\int_0^t a(s)\,ds+\int_0^t A(s)\,dw(s)$ the formulae $D\xi(t)=E^\xi_t(a(t))$ and $D_2\xi(t)=E^\xi_t(A(t)A^*(t))$ hold (recall that, by the definition of a diffusion-type process, see, e.g., [9], here $w(t)$ is adapted to the "past" of $\xi(t)$; such a process is a solution of a diffusion-type equation, see [9]). If $\xi(t)$ is a diffusion process, i.e., a solution of a stochastic differential equation $\xi(t)=\xi_0+\int_0^t a(s,\xi(s))\,ds+\int_0^t A(s,\xi(s))\,dw(s)$ (a particular case of diffusion-type processes), then $D\xi(t)=a(t,\xi(t))$ and $D_2\xi(t)=A(t,\xi(t))A^*(t,\xi(t))$. Note that the quadratic derivative takes values in $\bar S_+(n)$.
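As a numerical illustration of these formulae (not part of the original exposition), the sketch below estimates $D\xi(t)$ and $D_2\xi(t)$ for a scalar diffusion by averaging one-step difference quotients over many paths started from a fixed present state $\xi(t_0)=x_0$, so that conditioning on the "present" is trivial; the coefficients and all numerical parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative scalar diffusion: d xi = a(t, xi) dt + A(t, xi) dw,
# for which D xi(t) = a(t, xi(t)) and D_2 xi(t) = A(t, xi(t))^2.
a = lambda t, x: -0.5 * x            # drift (assumed for illustration)
A = lambda t, x: 0.3 + 0.1 * x**2    # diffusion coefficient (assumed)

rng = np.random.default_rng(0)
t0, x0 = 1.0, 0.7                    # fixed "present" state: xi(t0) = x0
dt, n_paths = 1e-4, 200_000

# One Euler-Maruyama step from the fixed present state.
dw = rng.normal(0.0, np.sqrt(dt), n_paths)
increment = a(t0, x0) * dt + A(t0, x0) * dw

# Forward mean derivative: conditional mean of the difference quotient.
D_est = increment.mean() / dt
# Quadratic mean derivative: conditional mean of increment * increment^T / dt.
D2_est = (increment**2).mean() / dt

print("D  estimate:", D_est,  "  exact a(t0, x0):  ", a(t0, x0))
print("D2 estimate:", D2_est, "  exact A(t0, x0)^2:", A(t0, x0)**2)
```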

Let Borel measurable mappings $a(t,x)$ and $b(t,x)$ from $[0,T]\times\mathbb{R}^n$ to $\mathbb{R}^n$ and to $\bar S_+(n)$, respectively, be given. We call the system of the form
$$\begin{cases} D\xi(t)=a(t,\xi(t)),\\ D_2\xi(t)=b(t,\xi(t)) \end{cases} \qquad (4)$$
a first order differential equation with forward mean derivatives.

Let $\mathbf a(t,x)$ and $\mathbf b(t,x)$ be set-valued mappings from $[0,T]\times\mathbb{R}^n$ to $\mathbb{R}^n$ and to $\bar S_+(n)$, respectively. The system of the form
$$\begin{cases} D\xi(t)\in\mathbf a(t,\xi(t)),\\ D_2\xi(t)\in\mathbf b(t,\xi(t)) \end{cases} \qquad (5)$$
is called a first order differential inclusion with forward mean derivatives.

Definition 2. We say that (5) has a solution on $[0,T]$ with initial condition $\xi_0\in\mathbb{R}^n$ if there exist a probability space $(\Omega,\mathcal F,\mathsf P)$ and a process $\xi(t)$ given on $(\Omega,\mathcal F,\mathsf P)$ and taking values in $\mathbb{R}^n$ such that $\xi(0)=\xi_0$ and $\mathsf P$-a.s. for almost all $t$ relation (5) is satisfied. For equation (4) the notion of a solution is quite analogous.

Note that for simplicity here we consider only deterministic initial conditions, i.e., $\xi_0$ in Definition 2 is a point in $\mathbb{R}^n$.

Recall that for a mapping $F:X\to Y$ of a metric space $X$ to a metric space $Y$ its graph is the set of pairs $\{(x,F(x))\mid x\in X\}$ in $X\times Y$. Note that for a set-valued $F$ the value $F(x)$ is a set in $Y$.

For considering upper semicontinuous mean forward differential inclusions we need to recall the following

Definition 3. Let $X$ and $Y$ be metric spaces. For a given $\varepsilon>0$ a continuous single-valued mapping $f_\varepsilon:X\to Y$ is called an $\varepsilon$-approximation of the set-valued mapping $F:X\to Y$ if the graph of $f_\varepsilon$ belongs to the $\varepsilon$-neighbourhood of the graph of $F$.

It is known (see, e.g., [7]) that for upper semicontinuous set-valued mappings with convex closed images in normed linear spaces such $\varepsilon$-approximations exist for each $\varepsilon>0$.
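For illustration only (this example is not taken from the paper), consider the set-valued map $F(x)=\{\operatorname{sign}x\}$ for $x\ne0$ and $F(0)=[-1,1]$, which is upper semicontinuous with convex closed images; the clipped linear function below is a continuous $\varepsilon$-approximation in the sense of Definition 3, and the script checks numerically that its graph lies in the $\varepsilon$-neighbourhood of the graph of $F$.

```python
import numpy as np

def F(x):
    """Set-valued map: {sign(x)} for x != 0 and the segment [-1, 1] at x = 0."""
    return (-1.0, 1.0) if x == 0.0 else (float(np.sign(x)), float(np.sign(x)))

def f_eps(x, eps):
    """Continuous single-valued eps-approximation of F (clipped linear function)."""
    return float(np.clip(x / eps, -1.0, 1.0))

def dist_to_graph_F(x, y, grid):
    """Crude numerical distance from the point (x, y) to the graph of F."""
    best = np.inf
    for u in grid:
        lo, hi = F(u)
        v = min(max(y, lo), hi)               # point of the segment F(u) nearest to y
        best = min(best, float(np.hypot(x - u, y - v)))
    return best

eps = 0.1
grid = np.append(np.linspace(-2.0, 2.0, 2001), 0.0)   # make sure u = 0 is sampled
xs = np.linspace(-2.0, 2.0, 401)
worst = max(dist_to_graph_F(x, f_eps(x, eps), grid) for x in xs)
print("largest distance from graph(f_eps) to graph(F):", worst, "<= eps =", eps)
```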

Denote by $\Omega$ the Banach space $C^0([0,T],\mathbb{R}^n)$ of continuous curves in $\mathbb{R}^n$ given on $[0,T]$, with the usual uniform norm. Introduce in $\Omega$ the $\sigma$-algebra $\mathcal F$ generated by cylinder sets. Everywhere below we use this notation. Recall that $\mathcal F$ is the Borel $\sigma$-algebra in $\Omega$. Note that an elementary event in $\Omega$ is a curve that we denote by $x(\cdot)$. Its value at $t\in[0,T]$ is denoted by $x(t)$.

It is a well-known fact that every stochastic process $\eta(t)$ with continuous sample paths in $\mathbb{R}^n$, given on a certain probability space for $t\in[0,T]$, is a measurable mapping from that space to $(\Omega,\mathcal F)$. Thus it determines a measure $\mu_\eta$ on $(\Omega,\mathcal F)$ by the standard formula $\mu_\eta(A)=\mathsf P(\eta^{-1}(A))$ for every $A\in\mathcal F$.

There is a standard process $c(t,x(\cdot))$ in $\mathbb{R}^n$ given on $(\Omega,\mathcal F)$, the so-called "coordinate process", defined by the formula $c(t,x(\cdot))=x(t)$. The coordinate process on the probability space $(\Omega,\mathcal F,\mu_\eta)$ is the standard description of the process $\eta(t)$ on this probability space. See details, e.g., in [9, 4].

We shall look for solutions of (5) with continuous sample paths, and mainly the solution will be described as the coordinate process on $\Omega$, where the corresponding measure will be constructed.

Definition 4. A perfect solution of (5) is a stochastic process with continuous sample paths such that it is a solution in the sense of Definition 2 and the measure corresponding to it on the space of continuous curves is a weak limit of measures generated by solutions of a sequence of diffusion-type Itô equations with continuous coefficients.

Lemma 1. Let $b(t,x)$ be a jointly continuous (measurable, smooth) mapping from $[0,T]\times\mathbb{R}^n$ to $S_+(n)$. Then there exists a jointly continuous (measurable, smooth, respectively) mapping $A(t,x)$ from $[0,T]\times\mathbb{R}^n$ to $L(\mathbb{R}^n,\mathbb{R}^n)$ such that for all $(t,x)$ the equality $A(t,x)A^*(t,x)=b(t,x)$ holds.

The proof is available in [5, Lemma 2.2].
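A pointwise version of this factorization is easy to sketch numerically: for a symmetric positive definite matrix $b$ one may take $A$ to be its symmetric square root computed from the eigendecomposition. This is only one possible choice of $A$ with $AA^*=b$; the field $b$ below is an illustrative assumption, and the snippet is not the proof of the lemma.

```python
import numpy as np

def sym_sqrt(b):
    """Symmetric square root A of a symmetric positive definite matrix b,
    so that A @ A.T == b (up to round-off)."""
    vals, vecs = np.linalg.eigh(b)                 # b = V diag(vals) V^T
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

def b_field(t, x):
    """Illustrative field with values in S_+(2)."""
    return np.array([[1.0 + x[0]**2, 0.5 * x[0] * x[1]],
                     [0.5 * x[0] * x[1], 2.0 + x[1]**2]])

b = b_field(0.3, np.array([1.0, -0.5]))
A = sym_sqrt(b)
print(np.allclose(A @ A.T, b))                     # True: A A* = b
```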

Below we deal with a sequence of processes $\xi_i(t)$ (solutions of a sequence of stochastic differential equations in $\mathbb{R}^n$) such that the estimate $E\bigl(\sup_{0\le t\le T}\|\xi_i(t)\|^2\bigr)<C_2$ holds for all $i$ with the same constant $C_2>0$ (see [9, Section III.2, Lemma 1]). In the presentation via the coordinate process the latter inequality means that
$$\int_\Omega\Bigl(\sup_{0\le t\le T}\|x(t)\|^2\Bigr)\,d\mu_i<C_2 \qquad (6)$$
for all measures $\mu_i$ generated by processes $\xi_i(t)$ as above. For such processes we have to use the following technical statement.

Lemma 2. Consider a sequence of probability measures $\mu_i$ on $(\Omega,\mathcal F)$ such that (6) holds for all $i$. Let the measures $\mu_i$ weakly converge to a certain measure $\mu$ as $i\to\infty$. Introduce the measures $\nu_i$ by the relations $d\nu_i=(1+\|x(\cdot)\|_{C^0})\,d\mu_i$ and the measures $\nu^1_i$ by the relations $d\nu^1_i=(1+\|x(\cdot)\|^2_{C^0})\,d\mu_i$. Then the measures $\nu_i$ weakly converge to the measure $\nu$ defined by the relation $d\nu=(1+\|x(\cdot)\|_{C^0})\,d\mu$, and the measures $\nu^1_i$ weakly converge to the measure $\nu^1$ defined by the relation $d\nu^1=(1+\|x(\cdot)\|^2_{C^0})\,d\mu$.

Indeed, specify an arbitrary bounded continuous function $f:\Omega\to\mathbb{R}$. The assertion of Lemma 2 follows from the fact that by (6) the random variables $f(\xi_k)(1+\|\xi_k\|_{C^0})$ are uniformly integrable, as well as $f(\xi_k)(1+\|\xi_k\|^2_{C^0})$ (see, e.g., [12, Lemma 8]).

Corollary 1. Let $b:[0,T]\times\Omega\to\mathbb{R}^n$ be a continuous vector-function such that $\|b(t,x(\cdot))\|\le K(1+\|x(\cdot)\|_{C^0})$, and let an analogous $b_1$ be such that $\|b_1(t,x(\cdot))\|\le K(1+\|x(\cdot)\|^2_{C^0})$, for a certain $K>0$. Then

(i) $\lim_{k\to\infty}\int_\Omega b(t,x(\cdot))\,d\mu_k=\int_\Omega b(t,x(\cdot))\,d\mu$;

(ii) $\lim_{k\to\infty}\int_\Omega b_1(t,x(\cdot))\,d\mu_k=\int_\Omega b_1(t,x(\cdot))\,d\mu$.

2. Equations and inclusions with mean derivatives of geometric Brownian motion type

This section presents a brief description and a slight modification of the material suggested in [6]. We deal with the following generalization of the so-called geometric Brownian motion, namely with a process $S(t)$ that satisfies the system of stochastic differential equations

$$dS^\alpha(t)=S^\alpha(t)\,a^\alpha(t;S^1(t),\dots,S^n(t))\,dt+S^\alpha(t)\,A^\alpha_\beta(t;S^1(t),\dots,S^n(t))\,dw^\beta, \qquad (7)$$

where the $w^\beta$ are independent Wiener processes in $\mathbb{R}^1$ that together form a Wiener process in $\mathbb{R}^n$, $a(t,x)$ is a vector field on $\mathbb{R}^n$, $A(t,x)$ is a mapping from $[0,T]\times\mathbb{R}^n$ to the space of linear operators $L(\mathbb{R}^n,\mathbb{R}^n)$ and $(A^\alpha_\beta)$ denotes the matrix of the operator $A$ (recall that Einstein's summation convention with respect to shared upper and lower indices is in force). Note that the (standard) geometric Brownian motion satisfies (7) in the case where $a(t)$ and $A(t)$ depend only on time $t$ (i.e., do not depend on the point $x\in\mathbb{R}^n$).

Processes satisfying (7) arise in various stochastic models (e.g., in economics).

Suppose that the coordinates $S^\alpha$ of the solution of (7) are positive for all $t$. Then by the Itô formula the process $\xi(t)=\log S(t)=(\log S^1(t),\dots,\log S^n(t))$ satisfies the equation
$$d\xi^\alpha(t)=\Bigl(a^\alpha-\frac12\,A^\alpha_\beta\,\delta^{\beta\gamma}A^\alpha_\gamma\Bigr)(t,\xi(t))\,dt+A^\alpha_\beta(t,\xi(t))\,dw^\beta(t), \qquad (8)$$
since $dw^\alpha\,dw^\beta=\delta^{\alpha\beta}\,dt$ (here $\delta^{\alpha\beta}$ is Kronecker's symbol: $\delta^{\alpha\alpha}=1$ and $\delta^{\alpha\beta}=0$ for $\alpha\ne\beta$).

Analogously, from the Itô formula we derive that if a process $\xi(t)$ satisfies (8), then the process $S(t)=\exp\xi(t)=(\exp\xi^1(t),\dots,\exp\xi^n(t))$ satisfies (7). Note that in this case the coordinates $S^\alpha$ are positive.
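The passage between (7) and (8) can be illustrated by a simple Euler-Maruyama simulation (a sketch with illustrative coefficients, not code from the paper): one integrates the logarithm $\xi(t)$ by (8) and recovers $S(t)=\exp\xi(t)$, whose coordinates stay positive automatically.

```python
import numpy as np

n, T, N = 2, 1.0, 1000
dt = T / N
rng = np.random.default_rng(1)

# Illustrative coefficients of (7): a(t, S) a vector in R^n, A(t, S) an n x n matrix.
def a(t, S):
    return np.array([0.05, 0.02]) - 0.01 * S

def A(t, S):
    base = np.array([[0.20, 0.05],
                     [0.00, 0.15]])
    return base * (1.0 + 0.1 * np.tanh(S))     # column j scaled by a bounded factor of S[j]

S0 = np.array([1.0, 2.0])
xi = np.log(S0)                                 # integrate the logarithm, equation (8)
for k in range(N):
    t = k * dt
    S = np.exp(xi)                              # coefficients of (8) are evaluated at S = exp(xi)
    Amat = A(t, S)
    ito_corr = np.sum(Amat**2, axis=1)          # diag(A A*): the Ito correction in (8)
    dw = rng.normal(0.0, np.sqrt(dt), n)
    xi = xi + (a(t, S) - 0.5 * ito_corr) * dt + Amat @ dw

S_T = np.exp(xi)                                # S(T) = exp(xi(T)) solves (7) up to discretization error
print("S(T) =", S_T, " all coordinates positive:", bool(np.all(S_T > 0)))
```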

Denote by $B$ the symmetric positive semi-definite matrix $AA^*$ (where $A^*$ is the operator conjugate to $A$, as above) and by $\mathrm{diag}\,B$ the vector constructed from the diagonal elements of the matrix $B$. Note that $A^\alpha_\beta\,\delta^{\beta\gamma}A^\alpha_\gamma$ is the $\alpha$-th element $B^{\alpha\alpha}$ of $\mathrm{diag}\,B$. If a process satisfies (8), it also satisfies the following equation with mean derivatives:

$$\begin{cases} D\xi(t)=\bigl(a-\tfrac12\,\mathrm{diag}\,B\bigr)(t,\xi(t)),\\ D_2\xi(t)=B(t,\xi(t)), \end{cases} \qquad (9)$$
or, equivalently,
$$\begin{cases} D\xi(t)+\tfrac12\,\mathrm{diag}\,D_2\xi(t)=a(t,\xi(t)),\\ D_2\xi(t)=B(t,\xi(t)). \end{cases} \qquad (10)$$

Let $\xi(t)$ be a solution of equation (9) (or (10)). We call it the logarithm of the process $S(t)=\exp\xi(t)=(e^{\xi^1(t)},\dots,e^{\xi^n(t)})$.

Note that if equation (9) (or (10)) is given a priori with some $B\in\bar S_+(n)$, the process $S(t)=\exp(\xi(t))$ may not satisfy (7). Thus the models based on equations (9) or (10) cover a broader class of problems than those based on (8).

Consider set-valued mappings $\mathbf a:[0,T]\times\mathbb{R}^n\to\mathbb{R}^n$ and $\mathbf B:[0,T]\times\mathbb{R}^n\to\bar S_+(n)$ and the following inclusion with mean derivatives:
$$\begin{cases} D\xi(t)+\tfrac12\,\mathrm{diag}\,D_2\xi(t)\in\mathbf a(t,\xi(t)),\\ D_2\xi(t)\in\mathbf B(t,\xi(t)). \end{cases} \qquad (11)$$

Inclusion (11) is called an inclusion of geometric Brownian motion type. Such an inclusion can be constructed from an equation of form (10) with control in the usual way. Let the right-hand sides $a(t,x,u)$ and $B(t,x,u)$ of (10) depend on a controlling parameter $u$, and let $U(t,x)$ be the set of possible values of the controlling parameter at $(t,x)$. Then, on constructing
$$\mathbf a(t,x)=\mathrm{cl}\bigcup_{u\in U(t,x)}a(t,x,u) \quad\text{and}\quad \mathbf B(t,x)=\mathrm{cl}\bigcup_{u\in U(t,x)}B(t,x,u),$$
where $\mathrm{cl}$ denotes the convex closure, we obtain inclusion (11).
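As a small illustration of this construction (with hypothetical data, not taken from the paper), let the controlled drift be the componentwise product $a(t,x,u)=u\cdot x$ with the control $u$ running over a finite set $U$; then $\mathbf a(t,x)$ is the convex hull of finitely many vectors, computed here with scipy.

```python
import numpy as np
from scipy.spatial import ConvexHull

def a_controlled(t, x, u):
    """Hypothetical controlled drift: componentwise product u * x."""
    return u * x

# Finite control set U (illustrative) and a fixed point (t, x).
U = [np.array(u) for u in [(-1.0, 0.0), (1.0, 0.0), (0.0, -1.0), (0.0, 1.0)]]
t, x = 0.5, np.array([2.0, 1.0])

points = np.array([a_controlled(t, x, u) for u in U])   # the set {a(t, x, u) : u in U}
hull = ConvexHull(points)                               # cl of the convex hull of these points
print("vertices of the set a(t, x):")
print(points[hull.vertices])
```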


Below we describe conditions under which solutions of (11) do exist, and we prove the existence of optimal solutions maximizing a certain cost criterion.

Note that inclusion (11) has a form analogous to equation (10). An inclusion given in the form analogous to (9) would be ill-posed.

3. The main results

Theorem 1. Specify an arbitrary initial value $\xi_0\in\mathbb{R}^n$. Let $\mathbf a(t,x)$ be an upper semicontinuous set-valued mapping with closed convex images from $[0,T]\times\mathbb{R}^n$ to $\mathbb{R}^n$ and let it satisfy the estimate
$$\|\mathbf a(t,x)\|^2<K(1+\|x\|^2) \qquad (12)$$
for some $K>0$.

Let $\mathbf B(t,x)$ be an upper semicontinuous set-valued mapping with closed convex images from $[0,T]\times\mathbb{R}^n$ to $\bar S_+(n)$ such that for each $B(t,x)\in\mathbf B(t,x)$ the estimate
$$\mathrm{tr}\,B(t,x)<K(1+\|x\|) \qquad (13)$$
takes place for some $K>0$.

Then for every sequence $\varepsilon_i\to0$, $\varepsilon_i>0$, each pair of sequences $a_i(t,x)$ and $B_i(t,x)$ of $\varepsilon_i$-approximations of $\mathbf a(t,x)$ and $\mathbf B(t,x)$, respectively, generates a perfect solution of (11) with initial condition $\xi_0$.

Proof. Specify a sequence $\varepsilon_i\to0$ and sequences of $\varepsilon_i$-approximations $a_i(t,x)$ and $B_i(t,x)$ as in the hypothesis of the Theorem. Without loss of generality we may suppose that the $B_i(t,x)$ are $\frac{\varepsilon_i}{2}$-approximations of $\mathbf B(t,x)$.

As the norm in $S(n)$ we take the restriction to $S(n)$ of the Euclidean norm (i.e., the square root of the sum of squares of all elements of a matrix) in the space $L(\mathbb{R}^n,\mathbb{R}^n)$ isomorphic to $\mathbb{R}^{n^2}$. Without loss of generality we suppose that (13) is valid for this norm.

All $a_i(t,x)$ satisfy (12) with a certain constant that is bigger than $K$ (see the hypothesis). Nevertheless we keep the notation $K$ for this constant. Since $1+\|x\|^2\le(1+\|x\|)^2$, for $a_i(t,x)$ the estimate
$$\|a_i(t,x)\|<K(1+\|x\|) \qquad (14)$$
is valid as well.

The approximations $B_i(t,x)$ take values in $\bar S_+(n)$. Introduce $\tilde B_i(t,x)=B_i(t,x)+\frac{\varepsilon_i}{4}I$, where $I$ is the unit matrix. Immediately from the construction it follows that $\tilde B_i(t,x)$ for every $i$ is a continuous $\varepsilon_i$-approximation of $\mathbf B(t,x)$ and that at each $(t,x)$ it belongs to $S_+(n)$, i.e., it is strictly positive definite. Besides, the $\tilde B_i(t,x)$ satisfy (13) with a constant $K>0$ bigger than the constant from the hypothesis of the Theorem, but nevertheless we keep the notation $K$ for it.

By Lemma 1 there exist continuous fields $A_i(t,x)$ such that $\tilde B_i(t,x)=A_i(t,x)A_i^*(t,x)$.

Directly from the definition of the trace we obtain that $\mathrm{tr}\,\tilde B_i(t,x)$ is equal to the sum of squares of all elements of $A_i(t,x)$, i.e., it is the square of the Euclidean norm of $A_i(t,x)$ in $L(\mathbb{R}^n,\mathbb{R}^n)$. Hence from (13) and from the obvious inequality $(1+\|x\|)\le(1+\|x\|)^2$ it follows that all $A_i(t,x)$ satisfy
$$\|A_i(t,x)\|<K_1(1+\|x\|). \qquad (15)$$

Without loss of generality we can suppose that the above-mentioned continuous approximations $a_i$ and $A_i$ are smooth. Indeed, if a certain $a_i$ is not smooth, we can approximate it by a sequence of smooth mappings $a_{ij}$ that converges to $a_i$ as $j\to\infty$ with respect to the uniform norm on compact sets. Hence for $j$ large enough the graph of $a_{ij}$ belongs to the $\varepsilon_i$-neighbourhood of the graph of $\mathbf a$. Thus $a_{ij}$ is an $\varepsilon_i$-approximation of $\mathbf a$. Then we replace the continuous $a_i$ by this $a_{ij}$, i.e., take it as the new $a_i$. For $A_i$ the arguments are the same. Note that after this replacement estimates (14) and (15) remain true.

Consider the sequence of stochastic differential equations
$$d\xi_i(t)=\Bigl(a_i-\frac12\,\mathrm{diag}\,\tilde B_i\Bigr)(t,\xi_i(t))\,dt+A_i(t,\xi_i(t))\,dw(t), \qquad (16)$$
where $w(t)$ is a Wiener process in $\mathbb{R}^n$.

Note that every $(a_i-\frac12\,\mathrm{diag}\,\tilde B_i)(t,x)$ is smooth as the difference of smooth mappings. Consider $\|(a_i-\frac12\,\mathrm{diag}\,\tilde B_i)(t,x)\|$ and show that it satisfies an estimate of type (14) with a constant greater than $K$. Indeed,
$$\Bigl\|\Bigl(a_i-\frac12\,\mathrm{diag}\,\tilde B_i\Bigr)(t,x)\Bigr\|\le\|a_i(t,x)\|+\Bigl\|\frac12\,\mathrm{diag}\,\tilde B_i(t,x)\Bigr\|\le K(1+\|x\|)+\frac12\,\mathrm{tr}\,\tilde B_i(t,x)\le K_2(1+\|x\|). \qquad (17)$$

Thus the coefficients of equations (16) are smooth and satisfy estimates (17) and (15). So every equation of this sequence has a unique strong solution $\xi_i(t)$ well-defined on the entire interval $[0,T]$ (see [9]). In particular, this means that each process $\xi_i$ can be given on every appropriate probability space where $w(t)$ is adapted to its own "past".

Consider the measure space $(\Omega,\mathcal F)$ introduced in Section 1. Denote by $\mathcal P_t$ the $\sigma$-subalgebra of $\mathcal F$ generated by cylinder sets with bases on $[0,t]$, and by $\mathcal N_t$ the $\sigma$-algebra generated by preimages of Borel sets in $\mathbb{R}^n$ under the mapping $x(\cdot)\mapsto x(t)$.

Since all the solutions $\xi_i(t)$ are strong, they can all be defined on a single common probability space, and so they can all be considered as measurable mappings from that space to $(\Omega,\mathcal F)$ (see Section 1).

On the measure space $([0,T],\mathcal B)$, where $\mathcal B$ is the Borel $\sigma$-algebra, we denote by $\lambda$ the Lebesgue measure.

As mentioned in Section 1, every process $\xi_i(t)$ determines a measure $\mu_i$ on $(\Omega,\mathcal F)$, and on the probability space $(\Omega,\mathcal F,\mu_i)$ the coordinate process represents $\xi_i(t)$.

Since all $(a_i-\frac12\,\mathrm{diag}\,\tilde B_i)(t,x)$ satisfy (17) and all $A_i(t,x)$ satisfy (15) with the same $K$ (see above), equations (16) satisfy the hypothesis of [9, Lemma III.2.1] and the remark after it for all $i$, and so the estimate
$$E\bigl(\sup_{t\le T}\|\xi_i(t)\|^2\bigr)<C_2 \qquad (18)$$
is valid for all $\xi_i$, where $C_2$ depends only on the interval $[0,T]$ and on $K$ from (17) and (15).

Remark 2. In the proof of [9, Lemma III.2.1] estimate (18) is derived from the relation $E\bigl(\sup_{t\le T}\|\xi(t)\|^2\bigr)\le K\bigl(1+\int_0^T E\bigl(\sup_{u\le s}\|\xi(u)\|^2\bigr)\,ds\bigr)$. Since the solutions are strong, they can be given on various probability spaces, and the latter inequality is true on all such probability spaces. In particular, it is true on the probability space $(\Omega,\mathcal F,\mu_i)$ where the solution is described as the coordinate process. This means that (6) is valid for all $i$ for some $C_2$ depending only on the interval $[0,T]$ and on $K$ from (17) and (15).

In addition, by the corollary in [9, Section III.2] the set of measures $\{\mu_i\}$ is weakly compact. Thus for a given sequence of approximations $a_i$ and $A_i$, from the sequence of corresponding measures $\mu_i$ one can select a subsequence that weakly converges to a certain measure $\mu$. For simplicity of presentation we suppose that the sequence $\mu_i$ itself weakly converges to $\mu$. Denote by $\xi(t)$ the coordinate process on the probability space $(\Omega,\mathcal F,\mu)$. Note that $\mathcal P_t$ is the "past" and $\mathcal N_t$ is the "present" $\sigma$-algebra for $\xi(t)$.

Lemma 3. $\int_\Omega\bigl(\sup_{0\le t\le T}\|x(t)\|^2\bigr)\,d\mu<C_2$, where the constant $C_2>0$ depends only on the interval $[0,T]$ and on $K$ from (17) and (15).

Since the sequence of measures $\mu_i$ weakly converges to $\mu$, Lemma 3 follows directly from Remark 2 and Corollary 1 (ii).

Let us continue the proof of the Theorem. From the construction we derive that
$$\|\tilde B_i(t,x(t))\|^2=\|A_i(t,x(t))A_i^*(t,x(t))\|^2\le\|A_i(t,x(t))\|^2\,\|A_i^*(t,x(t))\|^2=\bigl(\mathrm{tr}\,\tilde B_i(t,x(t))\bigr)^2\le K^2(1+\|x(t)\|)^2\le K_2(1+\|x(t)\|^2).$$
Since $\|\tilde B_i(t,x)\|^2\le K_2(1+\|x\|^2)$, taking into account Lemma 3, we see that
$$\int_{\Omega\times[0,T]}\|\tilde B_i(t,x(t))\|^2\,d\mu\times d\lambda<K_3. \qquad (19)$$

Introduce the mappings $\bar B_i:[0,T]\times\Omega\to S_+(n)$ by the formula $\bar B_i(t,x(\cdot))=\tilde B_i(t,x(t))$. Then it follows from (19) that the set of all $\bar B_i$ is uniformly bounded in the Hilbert space $L_2([0,T]\times\Omega,S(n))$ defined with respect to the measures $\lambda$ on $[0,T]$ and $\mu$ on $\Omega$. Hence this set is weakly relatively compact in $L_2([0,T]\times\Omega,S(n))$, and so it is possible to select a subsequence that weakly converges in $L_2([0,T]\times\Omega,S(n))$ to a certain $\tilde B:[0,T]\times\Omega\to\bar S_+(n)$. For simplicity, let the sequence $\bar B_i(t,x(\cdot))$ itself converge to $\tilde B$.

Introduce also $B(t,x(\cdot))=E(\tilde B\mid\mathcal N_t)$ on the probability space $(\Omega,\mathcal F,\mu)$, $x(\cdot)\in\Omega$. From the definition of weak convergence and the presentation of a linear functional in $L_2$ it immediately follows that $\mathrm{diag}\,\bar B_i(t,x(\cdot))$ weakly converges to $\mathrm{diag}\,B(t,x(\cdot))$ in $L_2([0,T]\times\Omega,\mathbb{R}^n)$.

As $\|a_i(t,x(t))\|^2<K(1+\|x(t)\|^2)$ by (12), then, taking into account Lemma 3, we obtain that for some $K_4>0$
$$\int_{[0,T]\times\Omega}\|a_i(t,x(t))\|^2\,d\lambda\times d\mu<K_4. \qquad (20)$$

Introduce the mappings $\bar a_i:[0,T]\times\Omega\to\mathbb{R}^n$ by the formula $\bar a_i(t,x(\cdot))=a_i(t,x(t))$. Then from formula (20) it follows that the set of all $\bar a_i$ is uniformly bounded with respect to the norm in the Hilbert space $L_2([0,T]\times\Omega,\mathbb{R}^n)$ defined with respect to the measures $\lambda$ on $[0,T]$ and $\mu$ on $\Omega$. Hence the set of all $\bar a_i$ is weakly relatively compact in $L_2([0,T]\times\Omega,\mathbb{R}^n)$, and so it is possible to select a subsequence that converges weakly in $L_2([0,T]\times\Omega,\mathbb{R}^n)$ to a certain $\tilde a:[0,T]\times\Omega\to\mathbb{R}^n$. For simplicity, let $\bar a_i(t,x(\cdot))$ itself be this subsequence.

Introduce also $a(t,x(\cdot))=E(\tilde a\mid\mathcal N_t)$ on the probability space $(\Omega,\mathcal F,\mu)$, $x(\cdot)\in\Omega$. Immediately from the definition of weak convergence and from the above arguments we obtain that $(\bar a_i-\frac12\,\mathrm{diag}\,\bar B_i)(t,x(\cdot))$ weakly converges to $(a-\frac12\,\mathrm{diag}\,B)(t,x(\cdot))$ in $L_2([0,T]\times\Omega,\mathbb{R}^n)$.

By Mazur's lemma (see [13]), for the weakly convergent sequence $(\bar a_i-\frac12\,\mathrm{diag}\,\bar B_i)(t,x(\cdot))$ there exists a sequence of finite convex combinations $\hat a_k(t,x(\cdot))$ of its elements that converges in the same space strongly (in norm). The convex combinations have the form
$$\hat a_k(t,x(\cdot))=\sum_{i=j(k)}^{n(k)}\beta_i\Bigl(\bar a_i-\frac12\,\mathrm{diag}\,\bar B_i\Bigr)(t,x(\cdot)),$$
where $\beta_i>0$ for $i=j(k),\dots,n(k)$, $j(k)\to\infty$ as $k\to\infty$ and $\sum_{i=j(k)}^{n(k)}\beta_i=1$.

Remark 3. Above we have introduced $\tilde a(t,x(\cdot))$ as a weak limit of $\bar a_i(t,x(\cdot))$ in $L_2([0,T]\times\Omega,\mathbb{R}^n)$ equipped with the measures $\lambda$ on $[0,T]$ and $\mu$ on $\Omega$. By Mazur's lemma, as above, it is a strong limit of a corresponding sequence of convex combinations of the $\bar a_i$ (different from that for $(\bar a_i-\frac12\,\mathrm{diag}\,\bar B_i)$). But since the images of $\mathbf a$ are convex, those convex combinations are $\varepsilon_j$-approximations for some sequence $\varepsilon_j\to0$. Thus $\mu$-a.s. $a(t,\xi(t))$ is a selector of $\mathbf a(t,\xi(t))$, measurable with respect to $\mathcal N_t$. The same arguments show that $\mu$-a.s. $B(t,\xi(t))$ is a selector of $\mathbf B(t,\xi(t))$, measurable with respect to $\mathcal N_t$.

Note that by construction and by the properties of conditional expectation the sequence $\hat a_k(t,x(\cdot))$ converges to $(a-\frac12\,\mathrm{diag}\,B)(t,x(\cdot))$ strongly in $L_2([0,T]\times\Omega,\mathbb{R}^n)$, where $\Omega$ is equipped with the $\sigma$-algebra $\mathcal N_t$. Hence it converges also in probability (in measure $\mu$), and so it is possible to select a subsequence that converges $\mu$-a.s. In order not to change the notation, we suppose that $\hat a_k(t,x(\cdot))$ itself converges to $(a-\frac12\,\mathrm{diag}\,B)(t,x(\cdot))$ $\mu$-a.s.

Choose $\delta>0$. By Egorov's theorem (see, e.g., [13]) there exists a set $K_\delta\subset\Omega$ such that $\mu(K_\delta)>1-\delta$ and on this set the sequence $\hat a_k(t,x(\cdot))$ converges to $(a-\frac12\,\mathrm{diag}\,B)(t,x(\cdot))$ uniformly.

Let $f:\Omega\to\mathbb{R}$ be an arbitrary bounded continuous function measurable with respect to $\mathcal N_t$. Specify an arbitrary $\varepsilon>0$. From the above uniform convergence on $K_\delta$ and the boundedness of $f$ it follows that there exists $N(\varepsilon)>0$ such that for $k>N(\varepsilon)$, for all $i$ and all $t\in[0,T]$ simultaneously,
$$\Bigl\|\int_{K_\delta}f(x(\cdot))\Bigl(\hat a_k(t,x(\cdot))-\Bigl(a-\frac12\,\mathrm{diag}\,B\Bigr)(t,x(\cdot))\Bigr)\,d\mu_i\Bigr\|<\varepsilon. \qquad (21)$$

Since $f$ is bounded, there exists some $\Xi>0$ such that $|f(x(\cdot))|<\Xi$ for all $x(\cdot)\in\Omega$. Note also that $\mu(\Omega\setminus K_\delta)<\delta$. On the other hand, $\|a_i(t,\xi_i(t))\|<K(1+\|\xi_i(t)\|)$ by (14) and $E\bigl(\sup_{0\le t\le T}\|\xi_i(t)\|^2\bigr)<C_2$ by (18). Note also the relation
$$\int_{\|\xi_i\|_{C^0}>c}\|\xi_i\|_{C^0}\,d\mu_i\le\frac1c\int_{\|\xi_i\|_{C^0}>c}\|\xi_i\|^2_{C^0}\,d\mu_i$$
(see [14]).

Thus, taking into account Remark 2, we get that the norm
$$\Bigl\|\int_{\Omega\setminus K_\delta}f(x(\cdot))\Bigl(\hat a_k(t,x(\cdot))-\Bigl(a-\frac12\,\mathrm{diag}\,B\Bigr)(t,x(\cdot))\Bigr)\,d\mu_i\Bigr\|$$
is estimated in terms of $\delta$, $\Xi$, $K$ and $C_2$ by a quantity which, since $\delta$ is an arbitrary positive number, becomes smaller than any positive number as $\delta\to0$. Together with (21) this means that

$$\lim_{k\to\infty}\Bigl\|\int_\Omega f(x(\cdot))\Bigl(\hat a_k(t,x(\cdot))-\Bigl(a-\frac12\,\mathrm{diag}\,B\Bigr)(t,x(\cdot))\Bigr)\,d\mu_i\Bigr\|=0 \qquad (22)$$
for all $i$ uniformly.

Note that $(a-\frac12\,\mathrm{diag}\,B)(t,x(\cdot))$ is continuous on a set of full measure $\mu$ in $\Omega$. Indeed, it is a uniform limit of continuous functions on $K_\delta$ for every $\delta>0$ and so on every finite union of such sets. Thus it is continuous on the finite unions of the sets $K_{\delta_i}$, and evidently $\lim_{j\to\infty}\mu\bigl(\bigcup_{i=1}^{j}K_{\delta_i}\bigr)=1$ for a sequence $\delta_i\to0$.

Then, by the properties of weak convergence of measures and by (14), we can apply Corollary 1 (i) and obtain that
$$\lim_{k\to\infty}\int_\Omega f(x(\cdot))\Bigl(a-\frac12\,\mathrm{diag}\,B\Bigr)(t,x(\cdot))\,d\mu_k=\int_\Omega f(x(\cdot))\Bigl(a-\frac12\,\mathrm{diag}\,B\Bigr)(t,x(\cdot))\,d\mu$$


and that
$$\lim_{k\to\infty}\int_\Omega f(x(\cdot))\bigl(x(t+\Delta t)-x(t)\bigr)\,d\mu_k=\int_\Omega f(x(\cdot))\bigl(x(t+\Delta t)-x(t)\bigr)\,d\mu.$$

The following relations take place:
$$\begin{aligned}
&\Bigl\|\sum_{i=j(k)}^{n(k)}\beta_i\int_\Omega f(x(\cdot))\Bigl(x(t+\Delta t)-x(t)-\Bigl(\bar a_i-\frac12\,\mathrm{diag}\,\bar B_i\Bigr)(t,x(\cdot))\Bigr)\,d\mu_i\\
&\qquad-\int_\Omega f(x(\cdot))\Bigl(x(t+\Delta t)-x(t)-\Bigl(a-\frac12\,\mathrm{diag}\,B\Bigr)(t,x(\cdot))\Bigr)\,d\mu\Bigr\|\\
&\le\Bigl\|\sum_{i=j(k)}^{n(k)}\beta_i\int_\Omega f(x(\cdot))\bigl(x(t+\Delta t)-x(t)\bigr)\,d\mu_i-\int_\Omega f(x(\cdot))\bigl(x(t+\Delta t)-x(t)\bigr)\,d\mu\Bigr\|\\
&\quad+\Bigl\|\sum_{i=j(k)}^{n(k)}\beta_i\Bigl(\int_\Omega f(x(\cdot))\Bigl(\bar a_i-\frac12\,\mathrm{diag}\,\bar B_i\Bigr)(t,x(\cdot))\,d\mu_i-\int_\Omega f(x(\cdot))\Bigl(\bar a_i-\frac12\,\mathrm{diag}\,\bar B_i\Bigr)(t,x(\cdot))\,d\mu\Bigr)\Bigr\|\\
&\quad+\Bigl\|\int_\Omega f(x(\cdot))\,\hat a_k(t,x(\cdot))\,d\mu-\int_\Omega f(x(\cdot))\Bigl(a-\frac12\,\mathrm{diag}\,B\Bigr)(t,x(\cdot))\,d\mu\Bigr\|,
\end{aligned}$$
where the right-hand side of this inequality becomes less than every positive number for $k$ large enough.

Thus
$$\lim_{\Delta t\to+0}\int_\Omega f(x(\cdot))\Bigl(\frac{x(t+\Delta t)-x(t)}{\Delta t}-\Bigl(a-\frac12\,\mathrm{diag}\,B\Bigr)(t,x(\cdot))\Bigr)\,d\mu=\lim_{\Delta t\to+0}\Bigl[\sum_{i=j(k)}^{n(k)}\beta_i\int_\Omega f(x(\cdot))\Bigl(\frac{x(t+\Delta t)-x(t)}{\Delta t}-\Bigl(\bar a_i-\frac12\,\mathrm{diag}\,\bar B_i\Bigr)(t,x(\cdot))\Bigr)\,d\mu_i\Bigr]=0,$$
and so $D\xi(t)+\frac12\,\mathrm{diag}\,B(t,\xi(\cdot))=a(t,\xi(\cdot))\in\mathbf a(t,\xi(t))$ $\mu$-a.s. (see Remark 3).

Recall that we have constructed the sequence $\bar B_i(t,x(\cdot))$ that weakly converges to $\tilde B(t,x(\cdot))$, and $B(t,x(\cdot))=E(\tilde B\mid\mathcal N_t)$ is an $\mathcal N_t$-measurable selector of $\mathbf B(t,\xi(t))$ $\mu$-a.s. (see Remark 3).

Then, applying Mazur's lemma and Egorov's theorem in analogy with the above arguments, we show that for $f(\cdot)$ as above
$$\lim_{\Delta t\to+0}\int_\Omega f(x(\cdot))\Bigl[\frac{\bigl(x(t+\Delta t)-x(t)\bigr)\bigl(x(t+\Delta t)-x(t)\bigr)^*}{\Delta t}-A(t,x(t))A^*(t,x(t))\Bigr]\,d\mu=0.$$
Hence $D_2\xi(t)=A(t,\xi(t))A^*(t,\xi(t))=B(t,\xi(t))\in\mathbf B(t,\xi(t))$ $\mu$-a.s.

Remark 4. Note that all sequences of $\varepsilon$-approximations for all sequences $\varepsilon_i\to0$ used in the proof of Theorem 1 satisfy (17) and (15) with the same $K$, so that by the corollary in [9, Section III.2] the set of measures $\{\mu_i\}$ (corresponding to all sequences and all $i$) is weakly compact.

Let $f$ be a continuous bounded real-valued function on $[0,T]\times\mathbb{R}^n$. For solutions of (11) consider the cost criterion of the form
$$J(\xi(\cdot))=E\int_0^T f(t,\xi(t))\,dt. \qquad (23)$$
We are looking for solutions for which the value of the criterion is maximal.
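For a solution realized as an Itô diffusion (for instance, a solution of one of the approximating equations (16) from the proof of Theorem 1), the value of (23) can be estimated by a plain Monte Carlo average. The sketch below is illustrative only: the running cost $f$, the drift and diffusion selectors and all numerical parameters are hypothetical choices, not data from the paper.

```python
import numpy as np

def estimate_J(f, drift, sigma, xi0, T=1.0, N=500, n_paths=20_000, seed=0):
    """Monte Carlo estimate of J(xi) = E int_0^T f(t, xi(t)) dt for the scalar
    Ito process d xi = drift(t, xi) dt + sigma(t, xi) dw (Euler-Maruyama)."""
    rng = np.random.default_rng(seed)
    dt = T / N
    xi = np.full(n_paths, xi0)
    J = np.zeros(n_paths)
    for k in range(N):
        t = k * dt
        J += f(t, xi) * dt                 # left-point quadrature of the time integral
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        xi = xi + drift(t, xi) * dt + sigma(t, xi) * dw
    return J.mean()

# Hypothetical running cost and two admissible selectors of the right-hand side.
f = lambda t, x: np.exp(-x**2)             # bounded continuous cost
candidates = {
    "selector 1": (lambda t, x: -0.5 * x, lambda t, x: 0.3 + 0.0 * x),
    "selector 2": (lambda t, x: -1.5 * x, lambda t, x: 0.6 + 0.0 * x),
}
for name, (drift, sigma) in candidates.items():
    print(name, "J estimate:", estimate_J(f, drift, sigma, xi0=1.0))
```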

Theorem 2. Among the perfect solutions of (11) constructed in the proof of Theorem 1, there is a solution $\xi(t)$ on which the value of $J$ is maximal.

Proof. Since all the measures on $(\Omega,\mathcal F)$ constructed in the proof of Theorem 1 for perfect solutions of (11) are probability measures and the function $f$ in (23) is bounded, the set of values of $J$ on those solutions is bounded. If that set of values has a maximum, then the corresponding measure $\mu$ is the one we are looking for: the coordinate process on the space $(\Omega,\mathcal F,\mu)$ is an optimal solution.

Suppose that the above-mentioned set of values has no maximum; then it has a least upper bound $\Lambda$ that is a limit point of that set. Let $\mu^*_i$ be a sequence of measures such that for the corresponding solutions $\xi^*_i(t)$ the values $J(\xi^*_i(\cdot))$ converge to $\Lambda$. Every $\mu^*_i$ is a weak limit of a sequence of measures $\mu_{ij}$ corresponding to some sequence of $\varepsilon_j$-approximations as $j\to\infty$. Select from this sequence a subsequence (for simplicity we denote it by the same symbol $\mu_{ij}$) such that for the corresponding solutions $\xi_{ij}(t)$ and for all $i$ we obtain the uniform convergence of $J(\xi_{ij}(\cdot))$ to $J(\xi^*_i(\cdot))$ as $j\to\infty$. Then $J(\xi_{ii}(\cdot))\to\Lambda$ as $i\to\infty$. Since the set of all measures corresponding to all approximations is weakly compact (see above), we can select from $\mu_{ii}$ a subsequence (denote it by the same symbol $\mu_{ii}$) that weakly converges to a certain measure $\mu^*$. By the construction, for the coordinate process $\xi^*(t)$ on $(\Omega,\mathcal F,\mu^*)$ we get $J(\xi^*(\cdot))=\Lambda$, i.e., the value is maximal. Since $\mu^*$ is a limit of the $\mu_{ii}$, $\xi^*(t)$ is the perfect solution of (11) that we are looking for.

The assertion of Theorem 2 deals with the logarithms of generalized geometric Brownian motions satisfying inclusion (11). Note that it remains true also for the corresponding generalized geometric Brownian motions themselves. Indeed, for a cost criterion $\hat J$ of form (23) given on the processes $S(t)$, introduce the cost criterion $J(\xi(\cdot))=\hat J(\exp\xi(\cdot))$ (see Section 3). This criterion satisfies the hypothesis of Theorem 2, and so among the generalized geometric Brownian motions corresponding to solutions of inclusion (11) there is an optimal process maximizing $\hat J$.

References

1. Nelson E. Derivation of the Schrödinger Equation from Newtonian Mechanics. Phys. Reviews, 1966, vol. 150, pp. 1079-1085.

2. Nelson E. Dynamical Theory of Brownian Motion. Princeton, Princeton University Press, 1967. 142 p.

3. Nelson E. Quantum Fluctuations. Princeton, Princeton University Press, 1985. 147 p.

4. Gliklikh Yu.E. Global and Stochastic Analysis with Applications to Mathematical Physics. London, Springer-Verlag, 2011. 460 p.

5. Azarina S.V., Gliklikh Yu.E. Differential Inclusions with Mean Derivatives. Dynamic Systems and Applications, 2007, vol. 16, pp. 49-72.

6. Azarina S.V., Gliklikh Yu.E. Inclusions with Mean Derivatives for Processes of Geometric Brownian Motion Type and Their Applications [Vklyucheniya s proizvodnymi v srednem dlya protsessov tipa geometricheskogo brounovskogo dvizheniya i ikh prilozheniya]. Seminar po global'nomu i stokhasticheskomu analizu [Seminar on Global and Stochastic Analysis], 2009, issue 4, pp. 3-8.

7. Borisovich Yu.G., Gelman B.D., Myshkis A.D., Obukhovskii V.V. Vvedenie v teoriyu mnogoznachnykh otobrazheniy i differentsial'nykh vklyucheniy [Introduction to the Theory of Multi-Valued Mappings and Differential Inclusions]. Moscow, KomKniga, 2005. 213 p.

8. Gliklikh Yu.E. Global’nyy i stakhosticheskiy analiz v zadachakh matematicheskoy fiziki [Global and Stochastic Analysis in Problems of Mathematical Physics]. Moscow, KomKniga, 2005. 416 p.

9. Gihman I.I., Skorohod A.V. Theory of Stochastic Processes. Vol. 3. N.Y., Springer-Verlag, 1979. 496 p.

10. Kantorovich L.V., Akilov G.P. Functional analysis. Oxford, Pergamon Press, 1982. 742 p.

11. Parthasarathy K.R. Introduction to Probability and Measure. N.Y., Springer-Verlag, 1978. 343 p.

12. Gliklikh Yu.E., Obukhovskii A.V. Stochastic Differential Inclusions of Langevin Type on Riemannian Manifolds. Discussiones Mathematicae DICO, 2001, vol. 21, pp. 173-190.

13. Yosida K. Functional Analysis. Berlin, Springer-Verlag, 1965. 624 p.

14. Billingsley P. Convergence of Probability Measures. N.Y., Wiley, 1969. 351 p.


Yuri E. Gliklikh, Doctor of Physical and Mathematical Sciences, Professor, Department of Algebra and Topological Methods of Analysis, Voronezh State University (Voronezh, Russian Federation), yeg@math.vsu.ru.

Olga O. Zheltikova, Voronezh State University (Voronezh, Russian Federation), ksu_ola@mail.ru.

Received April 30, 2013.
