
On Markov-up processes and their recurrence properties

A.Yu. Veretennikov 1), M.A. Veretennikova 2)

1)Institute for Information Transmission Problems, Moscow, Russia;

e-mail: ayv@iitp.ru

2)University of Warwick, UK;

e-mail: maveretenn@gmail.com

Abstract

A simple model of the new notion of "Markov-up" processes is proposed; its positive recurrence and ergodic properties are shown under the appropriate conditions. A one-dimensional process in discrete time moves upwards as if it were Markov, and goes down in a more complicated way, remembering all its past from the moment of its "u-turn" down. Also, it is assumed that in some sense its move downwards becomes more and more probable after each step in this direction.

Keywords: Markov-up process; recurrence

MSC: 60K15

1. Introduction

The idea of integer-valued processes which behave in a markovian way on the periods of growth and in a more complicated non-markovian way on the periods of decrease was suggested by Alexander Dmitrievich Solovyev in a private communication in the late 90s [2]. In this phase of his research activity he only worked on applied projects. Hence, there is no doubt that this idea was also an applied one, most likely related to the theory of reliability, which was one of his main interests. To the best of the authors' knowledge he did not leave any published notes on this theme. Also, the authors are not aware of any other publications devoted to this idea, although certain close models do exist in the literature. In this paper a toy model of this idea is proposed.

Consider a process X_n, n ≥ 0, on Z_+ = {0, 1, . . .}, or on Z_{0,N̄} = {0, . . . , N̄} with some N̄, 0 < N̄ < ∞, possessing the following property: for any n ≥ 1 where the last jump was up (including staying), it is assumed that for some function λ(i, j), i, j ∈ Z_+,

P(X_{n+1} = j | F^X_n; X_n ≥ X_{n−1}) = λ(X_n, j),    (1)

that is, the "movement upwards remains markovian"; the "decision" to turn downwards is also markovian in the first instant. However, where the last jump was down, the next probability distribution may depend on some part of the past trajectory: namely, for some function φ(X_n, . . . , X_{ξ_n}),

P(X_{n+1} = j | F^X_n; X_n < X_{n−1}) = φ(X_n, . . . , X_{ξ_n}),    (2)

where ξ_n is the last turning time from "up" to "down" before n; it is formally defined in (3) in what follows. Hence, the "memory" of the process while moving down is limited by the last time of turning down; the latter moment may not be bounded. These assumptions reflect the property that while the process "goes up", its transition probabilities for any jump up obey the Markov property (1); as soon as it goes down, its transition probabilities "must" remember some past values of the trajectory, namely, from the last jump up moment. The case of equality X_n = X_{n−1}, that is, of staying in place, is included in the movement up; it could instead be attributed to the movement down, but apparently that would change the calculus, and we do not pursue all the possibilities here at once. Also, some more complicated rules could be introduced instead of those described above; however, our goal is just to show a simplest version of the idea of a "Markov-up" process and to discuss some recurrence and ergodic properties which this model may possess.
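
To make the dynamics (1)-(2) concrete, here is a minimal simulation sketch in Python. It is not from the paper: the kernels lambda_up and phi_down below are hypothetical illustrative choices. The only structural features taken from the model are that a step after an up or flat move depends on the current state alone, while a step during a fall may depend on the whole segment of the path since the u-turn, with the continuation of the fall becoming more likely the longer it lasts.

```python
import random

def lambda_up(x):
    # Hypothetical Markov kernel for steps after an up/flat move, cf. (1):
    # from x, move up by 1, stay, or start a fall.
    r = random.random()
    if r < 0.4:
        return x + 1
    if r < 0.7 or x == 0:      # from zero there are only jumps up or staying
        return x
    return x - 1

def phi_down(memory):
    # Hypothetical down-kernel, cf. (2): it sees the whole path since the
    # u-turn; the longer the fall, the likelier it continues (cf. A3 below).
    x, fall_length = memory[-1], len(memory) - 1
    theta = 1 - 0.5 / (1 + fall_length) ** 3      # prob. of one more step down
    if x > 0 and random.random() < theta:
        return max(x - random.choice([1, 2]), 0)  # jumps down need not be -1
    return x + 1                                  # "repair": Markov regime resumes

def simulate(x0, steps):
    # memory = (X_{xi_n}, ..., X_n) while falling, and just (X_n,) otherwise;
    # the initial state gets a fictitious non-falling past, as in the paper.
    path, memory = [x0], [x0]
    for _ in range(steps):
        y = lambda_up(path[-1]) if len(memory) == 1 else phi_down(memory)
        if y < path[-1]:
            memory = memory + [y] if len(memory) > 1 else [path[-1], y]
        else:
            memory = [y]
        path.append(y)
    return path

# Example: one trajectory of length 50 started at 5.
# print(simulate(5, 50))
```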

As a rationale, the model may be applied to the evolution of some involved multi-component device which may have several states and which "goes up" while it is working, or "goes down" if one or several of the critical components in this device break down, after which the evolution does not stop but becomes more and more chaotic, with a likely further disbalance, or, at least, with a dependence on all the states after the breakdown: the device "remembers" the event of the faults in the critical components all the time until they are repaired (in the simplest example fixing is just reloading the system), after which normal operation may resume and the behaviour becomes "markovian" again, satisfying the condition (1). There is also some evidence that certain disastrous processes related to complicated devices may expose similar features: once some critical failure occurs, the process of destruction may accelerate and be unpredictably chaotic until some rescue arrives.

Note that the probability for such a model to be in some subset of the state space could be viewed as a characteristic of the reliability of this device. Suppose that the movement "up" of the process X is treated as approaching some goal which is a high enough level above zero, and that at any moment of time the position of X above the minimal level N brings some profit, while falling down below the level N is regarded as a failure with no dividends, or even with some loss due to the expenses for repairing and the necessity to recover and to start raising up again. Then the dynamical, or instantaneous, reliability of the system may be defined as the probability r(t) := P(X_t > N). Naturally, we are interested in computing this function r(t), or, at least, its limit r(∞) := lim_{t→∞} r(t), or its stationary value if the latter exists. Indeed, traditionally all features of a model are evaluated and described in a stationary regime, and it is well known that very often in probability models such a limit coincides with the stationary value of r(t). In such a setting the property of positive recurrence may help to show that this invariant or limiting probability r(∞) exists. The next important question would be to find the rate of this convergence; it is not pursued in this paper and is left until further research and publications. Here we just recall that positive recurrence is naturally linked to the existence of a stationary regime (see Corollary 6 in what follows).
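
With a simulator at hand, the reliability function r(t) = P(X_t > N) can be estimated by plain Monte Carlo. A sketch reusing simulate from above; the level N, the horizon, and the sample size are arbitrary illustrative choices:

```python
def estimate_r(t, N, x0=5, samples=10_000):
    # Monte Carlo estimate of r(t) = P(X_t > N) for the toy model above.
    hits = sum(simulate(x0, t)[-1] > N for _ in range(samples))
    return hits / samples

# If a stationary regime exists, these values should stabilise as t grows:
# print([estimate_r(t, N=2) for t in (10, 100, 1000)])
```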

The paper [3] proposes a Markov model for the daily dynamics of the Fire Weather Index (FWI), which estimates the risk of wildfire. The authors do indicate that in fact the probability of a wildfire escaping grows as the duration of a several-day intensive fire onset increases. Statistical analysis in the paper concerns the suitable order of the Markov chain: it shows that for the data analysed a Markov chain of order 1 is mostly suitable, although sometimes order 2 is preferable. The data are limited to the province of Ontario, and the appropriate order may be different elsewhere. In our model the length of memory is not fixed, which allows greater flexibility. It also takes into account the duration of the last fire onset, which may be beneficial. For example, the paper [4] supports the idea that the total area burnt by a fire is an exponential function of time after ignition. Such amplification of chaos and further imbalance is discussed in the previous paragraph about the functionality of a multi-component device. Evidence of local memory dependence suggests that a Markov-up process could be a reasonable model for the evolution of an index which quantifies realistic damage from fire. For the process to be called Markov-up, such an index should be arranged so that the worse the prognosis of the total damage, the lower the index. In the case of a variable such as FWI, ranging from 0 (low danger) to 100 (extreme danger), we could just as well introduce the notion of a "Markov-down" process, reversing the directions of jumps with the specified transition probability characteristics. For fire damage index dynamics it would be appropriate to consider a variation of the Markov-down process in which the return to Markov behaviour happens after the index reaches a "low" danger threshold level in several sequential steps. Note that this index may also be regarded as a reliability type characteristic, where the reliability value could be defined as the probability that this index does not exceed some level. What is more, the probability of each possible value of this index could be a more accurate and informative characteristic of an "extended reliability" type. The theory in this paper concerns the simplest version of a Markov-up process.

Note that according to (2) the "transition probabilities" P(X_{n+1} = j | X_n, . . . , X_{ξ_n}) after jumps down do not depend on n, that is,

P(X_{n+1} = j | X_n, . . . , X_{ξ_n}) |_{ξ_n = n−m, X_n = a_0, . . . , X_{ξ_n} = a_m}
= P(X_{n+k+1} = j | X_{n+k}, . . . , X_{ξ_{n+k}}) |_{ξ_{n+k} = n+k−m, X_{n+k} = a_0, . . . , X_{ξ_{n+k}} = a_m},

for any m ≥ 0 and k ≥ 0 in the case of

a_0 < · · · < a_m,

where it is assumed that X_{ξ_n − 1} ≤ a_m. A similar assumption is made about the probabilities P(X_{n+1} = j | X_n) after jumps up, see (1). This corresponds to the "homogeneous" situation, in which it makes sense to pose a question about ergodic properties. For the conditional probabilities after the "jumps down" the memory could be, in principle, unlimited, in the sense that it is not described by, say, m-Markov chains (i.e., chains with the memory of length m), except for the case of a finite N̄. However, the process "does not remember anything which is older than the last turn down"; that is, there is no dependence of the future probabilities on the past earlier than the time ξ_n, for each n. The moment ξ_n itself is interpreted as the last jump up before the fault occurs; all the time before the faulty component is fixed, the device keeps a record of what has happened from that moment to the present time, and the transition probabilities depend on this memory. The first jump up after a series of jumps down signifies that the faulty component is fixed and, hence, the movement up resumes. The movement in both directions can have several options; that is, it is not assumed that any jump up is by +1 and any jump down is by −1. Naturally, from zero there are only jumps up, or the process may stay in place. The model with a finite N̄ does not differ too much from the infinite version: since we are interested in bounds which would not depend on N̄, the calculus would be very similar; the only point is that at N̄ it should be specified what kind of jumps are possible. We do not pursue this version here, assuming N̄ = ∞.

Models with more involved dependencies are possible: for example, instead of the immediate switching to "Markov" probabilities after one jump up, it could be assumed that such a switch occurs after several steps up, or after the average in time of consequent jumps up or down exceeds some level, etc. Probably, some other adjustments of the model may be performed in order to include specific features of the forest fires mentioned earlier.

We are interested in establishing ergodic properties for the model (1)-(2) under certain "recurrence" and "non-singularity" assumptions. So, recurrence is one of the key points addressed here.

There are some ideological similarities between the proposed model and renewal processes, the (more general) notion of Hawkes processes, and also semi-Markov processes. Actually, it is a special case of a semi-Markov type model, as well as a special case of a regeneration process. Moreover, as we shall see in what follows, some transformation of the model based on an enlarged state space turns out to be a particular Markov process, which is not really surprising since, as is well known, any process may be regarded as Markov after a certain change of the state space. Yet, this is not always useful. In any case, ergodic properties of the model are to be established from scratch, and markovian features will only be used in what concerns the invariant measure via the Harris–Khasminskii principle.

An extended abstract preceding this publication was presented at the ICSM5 conference in November 2020, see [5]. Because many new objects are introduced, quite a few definitions will be recalled repeatedly for the reader throughout the text. The paper consists of six sections: Introduction, The model and the assumptions, Auxiliary lemmata, Main results (theorem 5 and corollary 6), Proof of theorem 5, and Proof of corollary 6.

2. The model and the assumptions

We use the standard notation a ∧ b = min(a, b), a ∨ b = max(a, b).

Further notation: let us define for each n ≥ 0 the random variables

ξ_n := inf(k ≤ n : ΔX_i := X_{i+1} − X_i < 0, ∀ i = k, . . . , n − 1), (inf(∅) = +∞),    (3)

γ_n := sup(k ≥ n : all increments ΔX_i ≥ 0, ∀ n ≤ i ≤ k − 1) ∨ n,    (4)

η_n := sup(k ≥ n : all increments ΔX_i < 0, ∀ n ≤ i ≤ k − 1) ∨ n.    (5)

Also, let

X̂_{i,n} := X_i 1(ξ_n ∧ n ≤ i ≤ n),  F̂_n = σ(ξ_n; X̂_{i,n} : 0 ≤ i ≤ n).    (6)

Note that the family (F̂_n) is not a filtration, and this is not required. We have F̂_n ⊂ F_n and

1(ξ_n ∧ n = n) E(· | F̂_n) = 1(ξ_n ∧ n = n) E(· | X_n) a.s.

Also, note that X̂_{n,n} = X_n for any n.
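
In code, the random times just defined are simple scans of a trajectory. A sketch consistent with the conventions above (inf(∅) = +∞; a trajectory is a list path with path[i] = X_i; the start is treated as preceded by a fictitious non-falling past; on a finite sample path the forward scans are truncated at the horizon):

```python
import math

def xi(path, n):
    # xi_n: the u-turn time, inf(k <= n : X_{i+1} - X_i < 0 for i = k..n-1),
    # with inf over the empty set equal to +infinity.
    if n == 0 or path[n] >= path[n - 1]:
        return math.inf
    k = n
    while k > 0 and path[k] < path[k - 1]:
        k -= 1
    return k

def gamma(path, n):
    # gamma_n: the end of the (non-strict) run up started at n, or n itself.
    k = n
    while k + 1 < len(path) and path[k + 1] >= path[k]:
        k += 1
    return k

def eta(path, n):
    # eta_n: the end of the strict run down started at n, or n itself.
    k = n
    while k + 1 < len(path) and path[k + 1] < path[k]:
        k += 1
    return k
```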

Now let us state the assumptions, which rewrite the formulae (1) and (2) rigorously.

A1. Random memory depth: For any n,

P(X_{n+1} = j | F_n) = P(X_{n+1} = j | F̂_n) a.s.,    (7)

and the latter conditional probability does not depend on n given the past X_n, . . . , X_{ξ_n ∧ n}; this serves as the analogue of homogeneity.

The random memory depth is what clearly distinguishes the proposed model from Markov chains with a fixed memory length, also known as complex Markov chains.

A2. Irreducibility (local mixing): For any 0 ≤ x ≤ N and for the two states y = x and y = x + 1,

P(X_{n+1} = y | F̂_n, X_n = x) ≥ κ > 0.

Note that 2κ ≤ 1. Along with the recurrence conditions, the assumption A2 will guarantee the irreducibility of the process in the extended state space where the process becomes Markov, see (15) below.

A3. Recurrence-1: There exists N ≥ 0 such that

P(jump down ≡ (X_{n+1} < X_n) | F̂_n, N < X_n) ≥ θ_0 > 0;    (8)

P(X_{n+1} < X_n | F̂_n, N < X_n < X_{n−1}) ≥ θ_1 > 0, etc., and for any n ≥ m,

P(X_{n+1} < X_n | F̂_n, N < X_n < · · · < X_{n−m+1}) ≥ θ_{m−1} > 0, ∀ 1 ≤ m.    (9)

Note that θ_0 ≤ θ_1 ≤ · · · . Denote q = 1 − θ_0; q < 1. Then

P(jump up ≡ (X_{n+1} ≥ X_n) | F̂_n, N < X_n) ≤ 1 − θ_0 = q < 1.

A4. Recurrence-2: It is assumed that the following infinite product converges:

Θ_∞ := ∏_{i=0}^{∞} θ_i > 0,    (10)

and that

∑_{i≥1} i(1 − θ_i) < ∞.    (11)

Let q̂ := 1 − Θ_∞ (< 1) and q := 1 − θ_0 (< 1). The value q̂ is the upper bound for the probability that a fall down is not successful, i.e., that the "floor" [0, N] is not reached in one go. Note that

P(jump up ≡ (X_{n+1} ≥ X_n) | F̂_n, N < X_n) ≤ 1 − θ_0 = q < 1.

A5. Jump up moment bound:

M_1 := ess sup_P sup_n E((X_{n+1} − X_n)_+ | F̂_n) < ∞.    (12)

Denote

Θ_m := ∏_{i=0}^{m} θ_i (≥ Θ_∞ > 0).

Let us emphasize that the index i in θ_i is not the state where the process X is, but the value of how long the process has been falling down. The process remembers for how long it has been going down so far, and the longer it goes down, the more probable it is that it continues in this direction, at least until the process reaches [0, N]. The condition (10) may be equivalently written as

−∑_i ln θ_i < ∞.

Of course, this implies that θ_i → 1 as i → ∞, which is, clearly, a weaker condition than (10). The convergence of the sequence θ_i to 1, if it is monotonic, may be interpreted in a way that the longer the decreasing trajectory, the more faulty components in the device: each jump down makes some additional disorder in the system, which further increases the probability of continuing to fall down.

Example 1. The assumption (11) is satisfied, for example, under the condition 1 − θ_m ≤ C/m³, or, equivalently,

θ_m ≥ 1 − C/m³.

An exponential rate of the approach of the sequence θ_m to 1, accepted in some applied models of a fire evolution, could be interpreted as the inequality

θ_m ≥ 1 − exp(−λm)

with some λ > 0. The assumption (12) is valid, for example, if there exists a nonrandom constant C ≥ 0 such that with probability one X_{n+1} − X_n ≤ C < ∞.
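
Both conditions are easy to probe numerically. A small check (the constants C = 0.5 and λ = 0.5, and the index shift m + 1 to avoid m = 0, are arbitrary illustrative choices): the partial product should stabilise at a positive value for (10), and the partial sum at a finite value for (11).

```python
import math

def check_conditions(theta, terms=100_000):
    # Partial product for (10) and partial sum for (11).
    log_prod = sum(math.log(theta(m)) for m in range(terms))
    tail_sum = sum(m * (1 - theta(m)) for m in range(1, terms))
    return math.exp(log_prod), tail_sum

# Polynomial rate: theta_m = 1 - 0.5/(m+1)^3.
print(check_conditions(lambda m: 1 - 0.5 / (m + 1) ** 3))
# Exponential rate: theta_m = 1 - exp(-0.5 (m+1)).
print(check_conditions(lambda m: 1 - math.exp(-0.5 * (m + 1))))
```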

Denote

τ = τ_1 := inf(t ≥ 0 : X_t ≤ N);  σ := inf(t ≥ τ : X_{t−1} ≤ X_t = N).

The regeneration occurs not at the moment τ, but at the moment σ. However, the expectation of σ may be evaluated via E_x τ. Hence, it will be useful to introduce the following two sequences of stopping times with respect to the filtration F^X_n by induction:

T_n := inf(t > τ_n : X_t > N),  τ_{n+1} := inf(t > T_n : X_t ≤ N).
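
These times are again direct scans of a trajectory; a sketch (it assumes the simulated path is long enough to contain τ and σ, and starts the search for σ at t = 1, since σ = 0 would involve the fictitious X_{−1}):

```python
def tau(path, N):
    # tau = inf(t >= 0 : X_t <= N)
    return next(t for t, x in enumerate(path) if x <= N)

def sigma(path, N):
    # sigma = inf(t >= tau : X_{t-1} <= X_t = N)
    t0 = tau(path, N)
    return next(t for t in range(max(t0, 1), len(path))
                if path[t] == N and path[t - 1] <= path[t])
```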

The convention. With the initial position X_0 = x we assume that any artificial "admissible past" is allowed; that is, we accept that there is some fictitious past which could have preceded this state: we include in this past nothing if the artificial state X_{−1} does not exceed X_0, or otherwise we add a fictitious past trajectory from the last starting moment of the fall ξ_0: X_{ξ_0}, . . . , X_{−1}. From the assumption (A1) it follows that the process (Y_n, F^Y_n), with Y_n defined in (15) below, is Markov; of course, F^Y_n = F^X_n.

Let us recall the definitions of the Greek letters:

ξ_n := inf(k ≤ n : ΔX_i := X_{i+1} − X_i < 0, ∀ i = k, . . . , n − 1), (inf(∅) = +∞);

X̂_{i,n} := X_i 1(ξ_n ∧ n ≤ i ≤ n),  F̂_n = σ(ξ_n; X̂_{i,n} : 0 ≤ i ≤ n);

γ_n := sup(k ≥ n : all increments ΔX_i ≥ 0, ∀ n ≤ i ≤ k − 1) ∨ n;

η_n := sup(k ≥ n : all increments ΔX_i < 0, ∀ n ≤ i ≤ k − 1) ∨ n.

3. Auxiliary lemmata

Lemma 2. Under the assumption (A3), for any x > N,

E_x(γ_0 − 0) ≤ M_2 = q/(1 − q)².

Proof. Recall that the random variable γ_n was defined by the formula

γ_n := sup(k ≥ n : all increments ΔX_i ≥ 0, ∀ n ≤ i ≤ k − 1) ∨ n.

We use the notations from the proof of lemma 4 (below): for i ≥ n let

e_i = 1(X_{i+1} ≥ X_i),  ê_i = 1(X_{i+1} < X_i),  ΔX_i = X_{i+1} − X_i,  κ^i_n = ê_i ∏_{k=n}^{i−1} e_k  (assume ∏_{k=n}^{n−1} = 1).

The bounds in this lemma and in the other lemmata will not depend on the initial state x, so we drop this index in E_x and P_x in this section (but not in the proof of the main result). We have, for i ≥ n,

E_x(e_i | X_i > N) = P(X_{i+1} ≥ X_i | X_i > N) = E_x(P_x(X_{i+1} ≥ X_i | F̂_i, X_i > N) | X_i > N) ≤ 1 − θ_0 = q.

Then almost surely

γ_n − n = ∑_{k=0}^{∞} k ê_{n+k} ∏_{i=n}^{n+k−1} e_i = ∑_{k=1}^{∞} k ê_{n+k} ∏_{i=n}^{n+k−1} e_i = ∑_{k=1}^{∞} k κ^{n+k}_n.

So, we estimate,

E_x(γ_n − n) = E ∑_{k=1}^{∞} k ê_{n+k} ∏_{i=n}^{n+k−1} e_i ≤ ∑_{k=1}^{∞} k E_x ∏_{i=n}^{n+k−1} e_i ≤ ∑_{k=1}^{∞} k q^k = q ∑_{k=1}^{∞} k q^{k−1} = q/(1 − q)² =: M_2 < ∞. QED
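
A minimal numeric illustration of this bound in the borderline i.i.d. case, where every step above N is "up or flat" with probability exactly q = 1 − θ_0, independently: the run-up length is then geometric with mean q/(1 − q), safely below the bound q/(1 − q)². All numeric values here are illustrative.

```python
import random

def run_up_length(q):
    # Length of a maximal non-strict run up when each step is up/flat
    # with probability q, independently.
    k = 0
    while random.random() < q:
        k += 1
    return k

q, trials = 0.6, 100_000
mean = sum(run_up_length(q) for _ in range(trials)) / trials
print(mean, "<=", q / (1 - q) ** 2)   # about 1.5 <= 3.75
```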

Let us recall:

τ := inf(t ≥ 0 : X_t ≤ N),

η_n := sup(k ≥ n : all increments ΔX_i < 0, ∀ n ≤ i ≤ k − 1) ∨ n,

and

P_x(X_{n+1} < X_n | F̂_n, N < X_n < · · · < X_{n−m+1}) ≥ θ_{m−1} > 0, ∀ 1 ≤ m.

Lemma 3. Under the assumptions (A3)-(A4), for any x > N (n = 0),

E_x(η_n − n) 1(η_n < τ) ≤ ∑_{i≥1} i(1 − θ_i) =: M_3 < ∞.

Proof. Similarly to the calculus of the previous lemma, but with the replacement of e_i by ê_i and vice versa, we have

(η_n − n) 1(η_n < τ) ≤ ∑_{k=1}^{∞} k e_{n+k} 1(n + k − 1 < τ) ∏_{i=n}^{n+k−1} ê_i,

so,

E_x(η_n − n) 1(η_n < τ) ≤ E_x ∑_{k=1}^{∞} k e_{n+k} 1(n + k − 1 < τ) ∏_{i=n}^{n+k−1} ê_i

≤ ∑_{k=1}^{∞} k E_x 1(n + k − 1 < τ) (∏_{i=n}^{n+k−1} ê_i) E_x(e_{n+k} | ΔX_i < 0, 0 ≤ i ≤ n + k − 1)

≤ ∑_{k=1}^{∞} k E_x 1(n + k − 1 < τ) (∏_{i=n}^{n+k−1} ê_i)(1 − θ_k)    (by (A3))

≤ ∑_{k=1}^{∞} k (1 − θ_k) =: M_3 < ∞    (by (A4)). QED

Let us recall once more: γ_n := sup(k ≥ n : all increments ΔX_i ≥ 0, ∀ n ≤ i ≤ k − 1) ∨ n.

Lemma 4. Under the assumptions (A3) and (A5), the expected value of the maximal positive increment over any single period of running up (non-strictly) until the first jump down is finite:

sup_{n,x} E_x(X_{γ_n} − X_n)_+ ≤ M_4 < ∞.

Proof. First of all, it suffices to show that

sup_{n,x} E_x((X_{γ_n} − X_n)_+ | F̂_n) ≤ M_4 < ∞.

Further, we have

sup_{n,x} E_x((X_{γ_n} − X_n)_+ | F̂_n, X_n ≤ N) ≤ N + sup_{n,x} E_x((X_{γ_n} − X_n)_+ | F̂_n, X_n > N).

Hence, it suffices to show only

sup_{n,x} E_x((X_{γ_n} − X_n)_+ | F̂_n, X_n > N) ≤ M < ∞ (a.s.).

In other words, it is sufficient to establish for n = 0 that

sup_{x>N} E_x(X_{γ_0} − x)_+ ≤ M < ∞.

With the same notations e_i = 1(X_{i+1} ≥ X_i), ê_i = 1(X_{i+1} < X_i), ΔX_i = X_{i+1} − X_i, κ^i_n = ê_i ∏_{k=n}^{i−1} e_k, we have

E_x(X_{γ_n} − X_n)_+ = E_x ∑_{i=n+1}^{∞} κ^i_n (X_i − X_n) = ∑_{i=n+1}^{∞} E_x κ^i_n (X_i − X_n)

(assuming that the latter sum converges; note that all its terms are non-negative). Further,

E_x κ^i_n (X_i − X_n) = E_x(κ^i_n ∑_{j=n}^{i−1} ΔX_j) = ∑_{j=n}^{i−1} E_x κ^i_n ΔX_j.

For each single term in this sum we have (n ≤ j ≤ i − 1)

E_x κ^i_n ΔX_j = E_x (∏_{k=n}^{j} e_k) ΔX_j ê_i (∏_{k'=j+1}^{i−1} e_{k'}) = E_x E^{F_{j+1}} ((∏_{k=n}^{j} e_k) ΔX_j ê_i (∏_{k'=j+1}^{i−1} e_{k'}))

= E_x (∏_{k=n}^{j} e_k) ΔX_j E^{F_{j+1}} (ê_i ∏_{k'=j+1}^{i−1} e_{k'}) ≤ q^{i−j−1} E_x (∏_{k=n}^{j} e_k) ΔX_j    (by (A3))

= q^{i−j−1} E_x (∏_{k=n}^{j−1} e_k) E^{F_j} (e_j ΔX_j) ≤ M_1 q^{i−j−1} E_x (∏_{k=n}^{j−1} e_k) ≤ M_1 q^{i−j−1} q^{j−n} = M_1 q^{i−n−1}    (by (A5) and (A3)).

Hence,

E_x κ^i_n (X_i − X_n) = ∑_{j=n}^{i−1} E_x κ^i_n ΔX_j ≤ ∑_{j=n}^{i−1} M_1 q^{i−n−1} = (i − n) M_1 q^{i−n−1},

and so

E_x(X_{γ_n} − X_n)_+ ≤ M_1 ∑_{i=n+1}^{∞} (i − n) q^{i−n−1} =: M_4 < ∞,

as required. Lemma 4 is proved. QED

4. Main results

Theorem 5. Under the assumptions (A1)-(A5) there exists a constant C_1 > 0 such that

E_x τ ≤ x + C_1,    (13)

and there exist constants C_2, C_3 > 0 such that

E_x σ ≤ C_2 x + C_3.    (14)

Here C_1 may be expressed via the constants M_2, M_3, M_4 and q̂ from the lemmata above; in particular, it involves the term M_4 q̂/(1 − q̂) (see the proof).

Corollary 6. Under the assumptions of theorem 5, the process X_n has a stationary measure.
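
Before the formal proof, the linear-in-x bound (13) is easy to probe empirically with the toy simulator from the introduction (simulate and tau as sketched earlier; N, the sample size, and the horizon are illustrative choices, with the horizon assumed long enough for τ to occur):

```python
def mean_tau(x0, N=2, samples=2_000, horizon=10_000):
    # Empirical E_x tau for the illustrative kernels.
    return sum(tau(simulate(x0, horizon), N) for _ in range(samples)) / samples

# E_x tau should grow at most linearly in the starting point x:
# print([(x, mean_tau(x)) for x in (5, 10, 20, 40)])
```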

5. Proof of theorem 5

0. First of all, let us state the idea of the proof. We will establish the property of recurrence towards the interval [0, N] due to the recurrence assumptions; this property holds true despite the non-markovian behaviour. Further, inside [0, N], coupling holds true at each step with a positive probability bounded away from zero on the jump up (or stay); after such a coupling, the process does not remember its past given the present, until it starts falling down. Hence, de-coupling is not possible.

Formally, let us make the process (strong) Markov by extending its state space. For this aim it suffices to define

Y_n := X_n 1(X_n ≥ X_{n−1}) + (X_n, . . . , X_{ξ_n})^T 1(X_n < X_{n−1}) ≡ (X_n, . . . , X_{ξ_n ∧ n})^T    (15)

(here T stands for the transposition; recall that ξ_n < n in the case X_n < X_{n−1}; in any case, the vector Y_n is of a finite, but variable dimension, which is random).

1. Recurrence. Due to (10), from any state y > N the process attains the set [0, N] in a single monotonic fall down, with no stopovers, with a probability no less than Θ_∞. The time required for such a monotonic trajectory from y to [0, N] is no more than y − N. However, other scenarios are possible, with stopovers and temporary runs up. Hence, to evaluate the expected value of τ some calculus is needed.

Let us establish the bound (13):

E_x τ ≤ x + C.    (16)

If x ≤ N, then τ = 0 and the bound is trivial. Let x > N. Recall that, slightly abusing notation, we only write down the initial position x, while in fact there might be some non-trivial prehistory F̂_0. The process may start descending straight away, or after several steps up (or after staying at the state x for some time). In the latter case the position X_{γ_0} from which the descent starts admits the bound

E_x(X_{γ_0} − x)_+ ≤ M_4

(see lemma 4).

Case I: at t = 0 the process is falling down. Let us define stopping times

t_0 = T_0 = 0,  T_1 = η_{t_0},  t_1 = γ_{T_1},  T_2 = η_{t_1},  t_2 = γ_{T_2},  T_3 = η_{t_2}, . . .

In words, T_i is the end of the next partial fall after t_{i−1}; t_i is the end of the next run up after T_i. There are a.s. finitely many excursions down and up, and the last fall down finishes at [0, N]. Let us recall that

γ_n := sup(k ≥ n : all increments ΔX_i ≥ 0, ∀ n ≤ i ≤ k − 1) ∨ n,

and

η_n := sup(k ≥ n : all increments ΔX_i < 0, ∀ n ≤ i ≤ k − 1) ∨ n.

We have, for all x > N,

E_x(γ_0 − 0) ≤ ∑_i i q^i =: M_2,  and  E_x(η_0 − 0) ≤ M_3.

Note that T_i − t_{i−1} ≤ X_{t_{i−1}}.

Denote by A_i (i ≥ 1) the event of precisely i − 1 unsuccessful attempts to descend to the floor [0, N], after which on the i-th attempt the process does attain the floor; by B_j let us denote the j-th unsuccessful attempt to fall down until reaching the floor [0, N] (the probability that it is unsuccessful is less than q̂ < 1); B_j^c is the event that the j-th fall down is successful. Then we have τ = T_i on A_i = (∩_{1≤j≤i−1} B_j) ∩ B_i^c. The probability of A_i does not exceed q̂^{i−1}. (Recall, q̂ = 1 − Θ_∞.) So, we estimate,

E_x τ = ∑_{i≥1} E_x τ 1(A_i) = ∑_{i≥1} E_x 1(A_i) T_i = ∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) T_i

= ∑_{i≥1} E_x E^{F_{t_{i−1}}} (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) T_i = ∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) E^{F_{t_{i−1}}} 1(B_i^c) T_i

= ∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) E^{F_{t_{i−1}}} 1(B_i^c)(T_i − t_{i−1} + t_{i−1})

= ∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) t_{i−1} + ∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) E^{F_{t_{i−1}}} 1(B_i^c)(T_i − t_{i−1})

≤ ∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) t_{i−1} + ∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) X_{t_{i−1}}.

Note that B_j ∈ F_{T_j}. We are going to show that

∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) t_{i−1} ≤ C    (17)

and

∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) X_{t_{i−1}} ≤ x + C.    (18)

Step 1. We have

t_{i−1} = (t_{i−1} − T_{i−1}) + (T_{i−1} − t_{i−2}) + · · · + (T_1 − t_0) + (t_0 − T_0).

We have

E_x (∏_{1≤j≤i−1} 1(B_j))(t_{i−1} − T_{i−1}) = E_x E^{F_{T_{i−1}}} (∏_{1≤j≤i−1} 1(B_j))(t_{i−1} − T_{i−1})

= E_x (∏_{1≤j≤i−1} 1(B_j)) E^{F_{T_{i−1}}}(t_{i−1} − T_{i−1}) ≤ M_2 E_x ∏_{1≤j≤i−1} 1(B_j) ≤ M_2 q̂^{i−1}    (by lemma 2);

also,

E_x (∏_{1≤j≤i−1} 1(B_j))(T_{i−1} − t_{i−2}) = E_x E^{F_{t_{i−2}}} (∏_{1≤j≤i−1} 1(B_j))(T_{i−1} − t_{i−2})

= E_x (∏_{1≤j≤i−2} 1(B_j)) E^{F_{t_{i−2}}} 1(B_{i−1})(T_{i−1} − t_{i−2}) ≤ M_3 E_x ∏_{1≤j≤i−2} 1(B_j) ≤ M_3 q̂^{i−2}    (by lemma 3);

further,

E_x (∏_{1≤j≤i−1} 1(B_j))(t_{i−2} − T_{i−2}) = E_x (∏_{1≤j≤i−2} 1(B_j))(t_{i−2} − T_{i−2}) E^{F_{t_{i−2}}} 1(B_{i−1})

≤ q̂ E_x (∏_{1≤j≤i−2} 1(B_j))(t_{i−2} − T_{i−2}) ≤ q̂ M_2 q̂^{i−2} = M_2 q̂^{i−1},

and

E_x (∏_{1≤j≤i−1} 1(B_j))(T_{i−2} − t_{i−3}) = E_x E^{F_{T_{i−2}}} (∏_{1≤j≤i−1} 1(B_j))(T_{i−2} − t_{i−3})

= E_x (∏_{1≤j≤i−2} 1(B_j))(T_{i−2} − t_{i−3}) E^{F_{T_{i−2}}} 1(B_{i−1})

≤ q̂ E_x (∏_{1≤j≤i−2} 1(B_j))(T_{i−2} − t_{i−3}) ≤ q̂ M_3 q̂^{i−3} = M_3 q̂^{i−2},

etc. By induction we obtain

E_x (∏_{1≤j≤i−1} 1(B_j)) t_{i−1} ≤ i M_2 q̂^{i−1} + (i − 1) M_3 q̂^{i−2}.

Hence, the first desired inequality (17) is true:

∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) t_{i−1} ≤ M_2 ∑_{i≥1} i q̂^{i−1} + M_3 ∑_{i≥2} (i − 1) q̂^{i−2} =: C < ∞.

Step 2. Note that X_{t_j} ≥ X_{T_j}, so that X_{t_j} − X_{t_{j−1}} ≤ X_{t_j} − X_{T_j}. Also, in the case under consideration X_{t_0} = x. Hence, we have

X_{t_{i−1}} = (X_{t_{i−1}} − X_{t_{i−2}}) + · · · + (X_{t_1} − X_{t_0}) + (X_{t_0} − x) + x.

So,

E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) X_{t_{i−1}} = E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) (x + (X_{t_0} − x) + ∑_{k=1}^{i−1} (X_{t_k} − X_{t_{k−1}}))

≤ E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) (x + ∑_{k=1}^{i−1} (X_{t_k} − X_{T_k}))

= x E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) + E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) ∑_{k=1}^{i−1} (X_{t_k} − X_{T_k})

≤ x E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) + E_x (∏_{1≤j≤i−1} 1(B_j)) ∑_{k=1}^{i−1} (X_{t_k} − X_{T_k}).

For any 1 ≤ k ≤ i − 1 we estimate

E_x (∏_{1≤j≤i−1} 1(B_j))(X_{t_k} − X_{T_k}) = E_x E^{F_{t_k}} (∏_{1≤j≤i−1} 1(B_j))(X_{t_k} − X_{T_k})

= E_x (∏_{1≤j≤k} 1(B_j))(X_{t_k} − X_{T_k}) E^{F_{t_k}} ∏_{k+1≤j≤i−1} 1(B_j)

≤ q̂^{i−k−1} E_x (∏_{1≤j≤k} 1(B_j)) E^{F_{T_k}}(X_{t_k} − X_{T_k}) ≤ M_4 q̂^{i−k−1} E_x ∏_{1≤j≤k} 1(B_j) ≤ M_4 q̂^{i−k−1+k} = M_4 q̂^{i−1}    (by lemma 4).

Therefore, since 1 = ∑_{i≥1} (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) a.s., we get

∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) X_{t_{i−1}} ≤ x E_x ∑_{i≥1} (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) + M_4 ∑_{i≥1} i q̂^{i−1} ≤ x + M_4/(1 − q̂)².

This shows (18), as required.

Case II: at t = 0 the process is going up. Let us define stopping times

T_0 = 0,  t_0 = γ_0,  T_1 = η_{t_0},  t_1 = γ_{T_1},  T_2 = η_{t_1}, . . .

(T_i is the end of the next partial fall after t_{i−1}; t_i is the end of the next run up after T_i. There are a.s. finitely many excursions down and up, and the last fall down finishes at [0, N].) We have

t_{i−1} = (t_{i−1} − T_{i−1}) + (T_{i−1} − t_{i−2}) + · · · + (T_1 − t_0) + (t_0 − T_0).

So, we estimate, exactly as in Case I,

E_x τ = ∑_{i≥1} E_x τ 1(A_i) = ∑_{i≥1} E_x 1(A_i) T_i = ∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) T_i

≤ ∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) t_{i−1} + ∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) X_{t_{i−1}}.

Note that B_j ∈ F_{T_j}. We are going to show that

∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) t_{i−1} ≤ C    (19)

and

∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) X_{t_{i−1}} ≤ x + C.    (20)

Step 3. We have, exactly as in Step 1,

E_x (∏_{1≤j≤i−1} 1(B_j))(t_{i−1} − T_{i−1}) ≤ M_2 q̂^{i−1}    (by lemma 2),

E_x (∏_{1≤j≤i−1} 1(B_j))(T_{i−1} − t_{i−2}) ≤ M_3 q̂^{i−2}    (by lemma 3),

E_x (∏_{1≤j≤i−1} 1(B_j))(t_{i−2} − T_{i−2}) ≤ M_2 q̂^{i−1},  E_x (∏_{1≤j≤i−1} 1(B_j))(T_{i−2} − t_{i−3}) ≤ M_3 q̂^{i−2},

etc. By induction we obtain

E_x (∏_{1≤j≤i−1} 1(B_j)) t_{i−1} ≤ i M_2 q̂^{i−1} + (i − 1) M_3 q̂^{i−2}.

Hence, the first desired inequality (19) is true:

∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) t_{i−1} ≤ M_2 ∑_{i≥1} i q̂^{i−1} + M_3 ∑_{i≥2} (i − 1) q̂^{i−2} =: C < ∞.

Step 4. Note that X_{T_0} = x and t_0 = γ_0, so that the initial run up is also controlled by lemma 4, and

X_{t_{i−1}} ≤ x + ∑_{j=0}^{i−1} (X_{t_j} − X_{T_j}).

So, we have

E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) X_{t_{i−1}} ≤ E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) (x + ∑_{j=0}^{i−1} (X_{t_j} − X_{T_j}))

= x E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) + E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) ∑_{j=0}^{i−1} (X_{t_j} − X_{T_j})

≤ x E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) + E_x (∏_{1≤j≤i−1} 1(B_j)) ∑_{k=0}^{i−1} (X_{t_k} − X_{T_k}).

For any 0 ≤ k ≤ i − 1 we estimate, exactly as in Step 2,

E_x (∏_{1≤j≤i−1} 1(B_j))(X_{t_k} − X_{T_k}) = E_x (∏_{1≤j≤k} 1(B_j))(X_{t_k} − X_{T_k}) E^{F_{t_k}} ∏_{k+1≤j≤i−1} 1(B_j)

≤ q̂^{i−k−1} E_x (∏_{1≤j≤k} 1(B_j)) E^{F_{T_k}}(X_{t_k} − X_{T_k}) ≤ M_4 q̂^{i−k−1} q̂^{k} = M_4 q̂^{i−1}    (by lemma 4).

Therefore, since 1 = ∑_{i≥1} (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) a.s., we get

∑_{i≥1} E_x (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) X_{t_{i−1}} ≤ x E_x ∑_{i≥1} (∏_{1≤j≤i−1} 1(B_j)) 1(B_i^c) + M_4 ∑_{i≥1} i q̂^{i−1} ≤ x + M_4/(1 − q̂)².

This shows (20), as required. In both cases I and II, the bound (13) is proved.

Step 5. Let us establish the bound (14). Recall the notation introduced earlier after the assumptions:

τ = τ_1 := inf(t ≥ 0 : X_t ≤ N);  σ = σ_1 := inf(t ≥ τ : X_{t−1} ≤ X_t = N),

and

T_n := inf(t > τ_n : X_t > N),  τ_{n+1} := inf(t > T_n : X_t ≤ N),  n ≥ 1,  with T_0 := 0.

Also, let σ_{n+1} := inf(t > σ_n : X_{t−1} ≤ X_t = N). Due to the assumption (A5) we have E_x X_{T_n} ≤ C, n ≥ 1; also, E_x X_{T_0} = x. Therefore, by virtue of the bound (13),

E_x(τ_{n+1} − T_n) = E_x E_x(τ_{n+1} − T_n | F̂_{T_n}) ≤ E_x X_{T_n} + C ≤ C,  n ≥ 1,

and E_x(τ_1 − T_0) ≤ x + C for n = 0. Also, due to the assumptions there exists p ∈ (0, 1) such that

P_x(σ > T_n) ≤ p^n  ⟺  P_x(σ ≤ T_n) ≥ 1 − p^n,  n ≥ 1.

Also,

E_x(T_n − τ_n) ≤ C,  n ≥ 1.

Thus, also

E(T_{n+1} − T_n) = E(T_{n+1} − τ_{n+1} + τ_{n+1} − T_n) ≤ C.

Moreover,

E(T_{n+1} − T_n | F^X_{T_n}) ≤ C.

It follows by induction that

E T_n ≤ C n + x.

So, we estimate (here and below the value of a constant C may change from line to line):

E_x σ = ∑_{n≥0} E_x σ 1(T_n < σ ≤ T_{n+1}) ≤ ∑_{n≥0} E_x T_{n+1} 1(T_n < σ ≤ T_{n+1})

= ∑_{n≥0} E_x E_x(T_{n+1} 1(T_n < σ ≤ T_{n+1}) | F^X_{T_n})

= ∑_{n≥0} E_x E_x((T_n + T_{n+1} − T_n) 1(T_n < σ ≤ T_{n+1}) | F^X_{T_n})

≤ ∑_{n≥0} E_x E_x((T_n + T_{n+1} − T_n) 1(T_n < σ) | F^X_{T_n})

= ∑_{n≥0} E_x T_n 1(T_n < σ) + ∑_{n≥0} E_x 1(T_n < σ) E_x(T_{n+1} − T_n | F^X_{T_n}),

where E_x(T_{n+1} − T_n | F^X_{T_n}) ≤ C + x 1(n = 0).

Further, with any integer M > 0, denoting E_x T_n 1(T_n < σ) =: d_n, we have (note that d_0 = 0)

∑_{n=0}^{M} d_n = ∑_{n=0}^{M} E_x T_n 1(T_n < σ) = d_0 + ∑_{n=1}^{M} E_x(T_{n−1} + T_n − T_{n−1}) 1(T_n < σ) 1(T_{n−1} < σ)

= d_0 + ∑_{n=1}^{M} E_x T_{n−1} 1(T_n < σ) 1(T_{n−1} < σ) + ∑_{n=1}^{M} E_x(T_n − T_{n−1}) 1(T_n < σ) 1(T_{n−1} < σ)

= d_0 + ∑_{n=1}^{M} E_x T_{n−1} 1(T_{n−1} < σ) E_x(1(T_n < σ) | F^X_{T_{n−1}}) + ∑_{n=1}^{M} E_x 1(T_{n−1} < σ) E_x((T_n − T_{n−1}) 1(T_n < σ) | F^X_{T_{n−1}}),

where E_x(1(T_n < σ) | F^X_{T_{n−1}}) ≤ p. Hence,

∑_{n=0}^{M} d_n ≤ d_0 + p ∑_{n=1}^{M} d_{n−1} + ∑_{n=1}^{M} E_x 1(T_{n−1} < σ) [E_x((T_n − τ_n) 1(T_n < σ) | F^X_{T_{n−1}}) + E_x((τ_n − T_{n−1}) 1(T_n < σ) | F^X_{T_{n−1}})],

where E_x((τ_n − T_{n−1}) 1(T_n < σ) | F^X_{T_{n−1}}) ≤ C p + x 1(n = 1). We have

∑_{n=1}^{M} E_x 1(T_{n−1} < σ) E_x((τ_n − T_{n−1}) 1(T_n < σ) | F^X_{T_{n−1}}) ≤ C + x + C p ∑_{n=0}^{M−2} p^n ≤ C + x.

Further,

∑_{n=1}^{M} E_x 1(T_{n−1} < σ) E_x((T_n − τ_n) 1(T_n < σ) | F^X_{T_{n−1}})

= ∑_{n=1}^{M} E_x 1(T_{n−1} < σ) E_x[E_x{(T_n − τ_n) 1(T_n < σ) | F^X_{τ_n}} | F^X_{T_{n−1}}]

≤ ∑_{n=1}^{M} E_x 1(T_{n−1} < σ) E_x[E_x{(T_n − τ_n) | F^X_{τ_n}} | F^X_{T_{n−1}}] ≤ C ∑_{n=1}^{M} E_x 1(T_{n−1} < σ) ≤ C ∑_{n≥1} p^{n−1} ≤ C,

where E_x{(T_n − τ_n) | F^X_{τ_n}} ≤ C was used. Thus,

∑_{n=0}^{M} d_n ≤ p (C + ∑_{n=0}^{M−1} d_n) + C + x,

which by the monotone convergence theorem implies that

∑_{n=0}^{∞} d_n ≤ C(1 + x),

and

E_x σ ≤ ∑_{n≥0} d_n + C + x ≤ C_2 x + C_3.

The bound (14) is justified, and the proof of the theorem is completed. QED

6. Proof of Corollary 6

The existence of an invariant measure for the process Y follows from the Harris–Khasminskii principle via the formula

μ_Y(A) := c E_{N−} ∑_{n=1}^{σ} 1(Y_n ∈ A),

where c is the normalising constant, A is any measurable set in the state space of the process Y, and by E_{N−} we understand the initial condition X_0 = N with any preceding fictitious state X_{−1} ≤ N. By the assumptions, the distribution of X_1 only depends on X_0 given this condition. So, this state, with the convention about the preceding state in [0, N], is, indeed, a regeneration point.

To apply this to the process X, let us take any bounded measurable function f(y) (y = (y^1, . . .)) which only depends on the first variable y^1 = x:

∫ f(y) μ_Y(dy) := c E_{N−} ∑_{n=1}^{σ} f(Y_n).

The latter expression in the right hand side determines an invariant measure for the process X with the notation g(y^1) := f(y):

∫ g(y^1) μ_Y(dy) := c E_{N−} ∑_{n=1}^{σ} g(Y^1_n),

where X_n = Y^1_n. So, the invariant measure for X reads

μ_X(A^1) := c E_{N−} ∑_{n=1}^{σ} 1(Y^1_n ∈ A^1).

The corollary is proved. QED
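
The occupation-measure formula above translates into a direct simulation recipe for the invariant law of X: start each cycle at the regeneration state X_0 = N with a fictitious non-falling past (exactly the convention of E_{N−}), run until σ, accumulate the visit counts over n = 1, . . . , σ, and normalise. A sketch with the illustrative kernels, reusing simulate and sigma from the earlier sketches (the horizon is assumed long enough for σ to occur in each cycle):

```python
from collections import Counter

def invariant_estimate(N=2, cycles=5_000, horizon=10_000):
    # Monte Carlo version of mu_X(A) = c E_{N-} sum_{n=1}^{sigma} 1(X_n in A).
    counts, total = Counter(), 0
    for _ in range(cycles):
        path = simulate(N, horizon)   # X_0 = N with a fictitious past going up
        s = sigma(path, N)            # one regeneration cycle
        for x in path[1:s + 1]:       # visits at n = 1, ..., sigma
            counts[x] += 1
        total += s
    return {x: c / total for x, c in sorted(counts.items())}

# print(invariant_estimate())
```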

Acknowledgements

For the first author, the part consisting of theorem 5 and lemma 4 was supported by the Russian Foundation for Basic Research grant 20-01-00575a.

References

[1] K.B. Athreya and P. Ney, A new approach to the limit theory of recurrent Markov chains, Trans. Amer. Math. Soc. 245, pp. 493-501, 1978.

[2] A.D. Solovyev, private communication, 1999.

[3] D. L. Martell, A Markov Chain Model of Day to Day Changes in the Canadian Forest Fire Weather Index. International Journal of Wildland Fire, 9(4), pp. 265-273, 1999.

[4] G. Ramachandran, Exponential Model of Fire Growth. In Fire Safety Science: Proceedings of The First International Symposium, Editors Cecile E. Grant and Patrick J. Pagni, Hemisphere Publishing Corporation, Washington, 1986, pp. 657-666.

[5] A. Veretennikov, M. Veretennikova, On the notion of Markov-up processes. In: D.V. Kozyrev (Ed.), The 5th International Conference on Stochastic Methods (ICSM5), 23-27 November 2020, Moscow, Russia. M.: RUDN, 2020, pp. 219-223; http://www.mathnet.ru/php/presentation.phtml?option_lang=eng&presentid=29154.
