
On recurrence and availability factor for single-server system with general arrivals

A. Yu. Veretennikov

University of Leeds, UK; National Research University Higher School of Economics, and Institute for Information Transmission Problems, Moscow, Russia.

email: a.veretennikov@leeds.ac.uk

Abstract

Recurrence and ergodic properties are established for a single-server queueing system with variable intensities of arrivals and service. Convergence to stationarity is also interpreted in terms of reliability theory.

Keywords: single-server system, arrivals, recurrence

1 Introduction

In recent decades, queueing systems of the M/G/1/∞ (or M/G/1) type, or of the more general G/G/1 type (cf. [10]) - one of the most important classes of queueing systems - have attracted much attention; see [1], [2], [4], [5], [6], [9], [12], [13], [15], and references therein. In this paper a single-server system similar to [18, 19] is considered, in which the intensities of new arrivals as well as of their service may depend on the "whole state" of the system, where the whole state includes the number of customers in the system - waiting and at service - the elapsed time of the current service (that is, the time from the beginning of this service), as well as the elapsed time from the last arrival. In queueing-theory notation, the system under consideration may be denoted as G/G/1/∞ with restrictions. Batch arrivals are not allowed. The model is not GI/GI/1/∞ (here "I" stands for independence, as usual) because, generally speaking, the periods between two consecutive hits of the idle state may be dependent, as well as for other reasons. (This is a slight abuse of notation because here "idle" is more than one state.) The generalisation in comparison with the standard GI/GI/1/∞ model and with the models studied in [18, 19] consists in the dependence of the intensities on the time from the last arrival, due to which the moment of hitting the idle state cannot be considered as a regeneration. The restrictions mentioned above relate to the existence of intensities and to certain assumptions on them; see the details at the beginning of the next section. By the m-availability factor of the system we understand the probability of m customers in total at the server and in the queue. The problem addressed in the paper is how to estimate the rate of convergence of the characteristics of the system, including the m-availability factors, to their stationary values.

The work was prepared within the framework of a subsidy granted to the HSE by the Government of the Russian Federation for the implementation of the Global Competitiveness Program, and was supported by the RFBR grant 14-01-00319-a.

The elapsed service time of the customer currently at service is assumed to be known at any moment, but the remaining service time is not; the same is true for the elapsed time from the last arrival and the remaining time until the next arrival. For definiteness, the service discipline is FIFO (first in, first out), although other disciplines may also be considered.

The paper consists of Section 1 (this Introduction), the setting and main result in Section 2, auxiliary lemmata in Section 3, and an outline of the proof of the main result in Section 4. In Theorem 1 we deliberately do not show exact inequalities between the characteristics in the assumptions, which would imply a greater or smaller constant in the main assertion (7), because of the various possible dependences between them; however, some idea of what precisely may be assumed can easily be worked out from the calculus in the proof.

2 The setting and main result

2.1 Defining the process

Let us present the class of models under investigation in this paper. Here the state space is a union of subspaces,

$\mathcal{X} = \{(0,y):\ y \ge 0\} \cup \bigcup_{n=1}^{\infty}\{(n,x,y):\ x,y \ge 0\}.$

Functions of class $C^1(\mathcal{X})$ are understood as functions with classical continuous derivatives with respect to the variables $x$ and $y$. Functions with compact support on $\mathcal{X}$ are understood as functions vanishing outside some domain bounded in this metric; for example, $C_0^1(\mathcal{X})$ stands for the class of functions with compact support and one continuous derivative. There is a generalised Poisson arrival flow with intensity $\lambda(X)$, where $X = (n,x,y)$ for any $n \ge 1$, and $X = (0,y)$ for $n = 0$. Slightly abusing notation, it is convenient to write $X = (n,x,y)$ for $n = 0$ as well, assuming that in this case $x = 0$; after this identification, the "corrected" state space becomes

$\mathcal{X} = \{(0,0,y):\ y \ge 0\} \cup \bigcup_{n=1}^{\infty}\{(n,x,y):\ x,y \ge 0\}.$

If $n > 0$, then the server is serving one customer while all the others are waiting in a queue. When the current service ends, a new service of the next customer from the queue starts immediately; recall that for definiteness the service schedule is assumed to be FIFO, although actually the result does not depend on it. If $n = 0$, then the server remains idle until the next customer arrival; the intensity of such an arrival at a state $(0,y) = (0,0,y)$ may be variable, depending on the value $y$, which stands for the elapsed time from the last end of service. Here $n$ denotes the total number of customers in the system, $x$ stands for the elapsed time of the current service (except for $n = 0$, which was explained earlier), and $y$ is the elapsed time from the last arrival. Usually, in the literature the intensity of arrivals - if not state-independent - may depend on $n$ (and in some cases on $y$), while the intensity of service may depend on $n$ (and in some cases also on $x$); however, we allow a more general dependence on all these variables together. Denote by $n_t = n(X_t)$ the number of customers corresponding to the state $X_t$, by $x_t = x(X_t)$ the second component of the process $(X_t)$, and by $y_t = y(X_t)$ the third component of the process $(X_t)$. For any $X = (n,x,y)$, the intensity of service $h(X) = h(n,x,y)$ is defined; it is also convenient to assume that $h(X) = 0$ for $n(X) = 0$. Both intensities $\lambda$ and $h$ are understood in the following way, which is a definition: on any nonrandom interval of time $[t, t+\Delta)$, the conditional probability given $X_t$ that the current service will not be finished and there will be no new arrivals reads

$\exp\Big(-\int_0^{\Delta}(\lambda+h)(n_t,\,x_t+s,\,y_t+s)\,ds\Big);$  (1)

the conditional probability of exactly one arrival and no other events on this interval given Xt equals

$\int_0^{\Delta}\exp\Big(-\int_0^{v}(\lambda+h)(n_t,\,x_t+s,\,y_t+s)\,ds\Big)\,\lambda(n_t,\,x_t+v,\,y_t+v)$

$\times\ \exp\Big(-\int_0^{\Delta-v}(\lambda+h)(n_t+1,\,x_t+v+s',\,s')\,ds'\Big)\,dv;$  (2)

the conditional probability of exactly one service completion and no other events on this interval given $X_t$ (of course, assuming $n_t > 0$; otherwise no service is in progress and the probability in question equals zero) is

$\int_0^{\Delta}\exp\Big(-\int_0^{v}(\lambda+h)(n_t,\,x_t+s,\,y_t+s)\,ds\Big)\,h(n_t,\,x_t+v,\,y_t+v)$

$\times\ \exp\Big(-\int_0^{\Delta-v}(\lambda+h)(n_t-1,\,s',\,y_t+v+s')\,ds'\Big)\,dv;$  (3)

and so on; i.e., by induction, the conditional probability of any finite number of events on this interval may be written as some multivariate integral, while the probability of infinitely many events on it equals zero. In particular, the (conditional, given $X_t$) density of the moment of a new arrival or of the end of the current service after $t$, evaluated at $t+z$, $z > 0$, equals

$\big(\lambda(n_t,\,x_t+z,\,y_t+z) + h(n_t,\,x_t+z,\,y_t+z)\big)\,\exp\Big(-\int_0^{z}(\lambda+h)(n_t,\,x_t+s,\,y_t+s)\,ds\Big).$

This standard construction does not require any regularity of either intensity function, and it may even allow some unbounded intensities; however, we do not touch on this issue here, and in the sequel both functions $\lambda$ and $h$ are assumed to be bounded and, of course, Borel measurable. In this case, for $\Delta > 0$ small enough, the expression in (1) may be rewritten as

$1 - \int_0^{\Delta}(\lambda+h)(n_t,\,x_t+s,\,y_t+s)\,ds + O(\Delta^2), \qquad \Delta \downarrow 0,$  (4)

and this is what is "usually" replaced by

$1 - \big(\lambda(X_t)+h(X_t)\big)\Delta + O(\Delta^2).$

However, in our situation the latter replacement may be incorrect because of possible discontinuities of the functions $\lambda$ and $h$. We emphasize that from time $t$ and until the next jump, the evolution of the process $X$ is deterministic, which makes the process piecewise-linear Markov; see, e.g., [10].
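To make the construction above concrete, here is a minimal simulation sketch; it is an illustration under our own assumptions and is not part of the paper. The process is generated by thinning a dominating Poisson clock whose rate BOUND dominates $\lambda + h$; the particular functions lam and h below are hypothetical bounded intensities chosen only for the example.

```python
import random

# A minimal sketch (illustrative assumptions, not from the paper): simulate the
# piecewise-deterministic process X_t = (n, x, y). Between jumps, y always grows
# at unit speed and x grows only while the server is busy (n > 0); jumps are
# produced by thinning a dominating Poisson clock of rate BOUND >= sup(lam + h).

def lam(n, x, y):                      # hypothetical arrival intensity lambda(X)
    return 0.8 / (1.0 + 0.2 * y) + 0.2

def h(n, x, y):                        # hypothetical service intensity, 0 when idle
    return 0.0 if n == 0 else 2.0 / (1.0 + x) + 0.5

BOUND = 4.0                            # dominates lam + h over the whole state space

def simulate(T, state=(0, 0.0, 0.0), rng=None):
    """Return the list of (t, n, x, y) recorded at candidate event times up to T."""
    rng = rng or random.Random(0)
    t, (n, x, y) = 0.0, state
    path = [(t, n, x, y)]
    while t < T:
        dt = rng.expovariate(BOUND)    # next candidate event of the dominating clock
        t += dt
        if n > 0:
            x += dt                    # drift of the elapsed service time
        y += dt                        # drift of the time since the last arrival
        u = rng.random() * BOUND
        if u < lam(n, x, y):           # accepted arrival: jump to X^+ = (n + 1, x, 0)
            n, y = n + 1, 0.0
        elif u < lam(n, x, y) + h(n, x, y):   # accepted end of service: jump to X^-
            n, x = n - 1, 0.0
        # otherwise: a "phantom" event of the dominating clock, the state is unchanged
        path.append((t, n, x, y))
    return path

if __name__ == "__main__":
    print(simulate(10.0)[-1])
```

The thinning step relies only on the boundedness of the intensities assumed above; no continuity of lam or h is needed, in line with the remark on discontinuous intensities.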

2.2 Main result

Let

$\Lambda := \sup_{n,x,y:\ n>0}\lambda(n,x,y) < \infty.$

For establishing a convergence rate to the stationary regime, we assume (cf. [18, 19])

$\inf_{n>0,\,y} h(n,x,y) \ge \frac{C_0}{1+x}, \qquad x > 0.$  (5)

We also assume a new condition related to the function $\lambda_0(t) := \lambda(0,0,t)$:

$0 < \inf_{t\ge 0}\lambda_0(t) \le \sup_{t\ge 0}\lambda_0(t) < \infty.$  (6)

Recall that the process has no explosion with probability one due to the boundedness of both intensities, i.e., the trajectory may have only finitely many jumps on any finite interval of time.

Theorem 1 Let the functions $\lambda$ and $h$ be Borel measurable and bounded, and let the assumptions (5) and (6) be satisfied. Then, if $C_0$ is large enough, there exists a unique stationary measure $\mu$. Moreover, for any $k > 0$ and any $m > k$, if $C_0$ is large enough, then there exists $C > 0$ such that for any $t > 0$,

$\big\|\mu_t^{n,x,y} - \mu\big\|_{TV} \le C\,\frac{(1+n+x+y)^m}{(1+t)^{k+1}},$  (7)

where $\mu_t^{n,x,y}$ is the marginal distribution of the process $(X_t,\ t > 0)$ with the initial data $X = (n,x,y) \in \mathcal{X}$. The constant $C$ in (7) admits an effective bound.

In particular, this inequality holds true for the reliability characteristics introduced earlier. For any $m \ge 0$ denote

$p_{\le m}(t) := P_X\big(n(X_t) \le m\big).$

Then the following corollary holds true.

Corollary 1 Under the assumptions of Theorem 1, the probabilities $p_{\le m}(t)$ converge to their limits $p_{\le m}(\infty)$ as $t \to \infty$, and for any $k > 0$ and any $m > k$, if $C_0$ is large enough, then there exists $C > 0$ - the same as in (7) - such that the following estimate is valid:

$\big|p_{\le m}(t) - p_{\le m}(\infty)\big| \le C\,\frac{(1+n+x+y)^m}{(1+t)^{k+1}},$

where $n, x, y$ are the components of the initial state $X = (n,x,y) \in \mathcal{X}$.
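As an illustration only (our own construction, not the paper's), the availability probabilities $p_{\le m}(t)$ can be estimated by Monte Carlo with the hypothetical simulator sketched in Section 2.1; the convergence asserted in Corollary 1 then shows up as stabilisation of such estimates for large $t$.

```python
import random

# Monte Carlo estimate of p_{<= m}(t) = P(n(X_t) <= m); an illustration only,
# relying on the hypothetical `simulate` sketch from Section 2.1.
def p_le_m(t, m, runs=10_000):
    hits = 0
    for i in range(runs):
        path = simulate(t, rng=random.Random(i))
        # n(X_t) equals the n recorded at the last event time not exceeding t
        n_t = next(n for (s, n, x, y) in reversed(path) if s <= t)
        hits += (n_t <= m)
    return hits / runs

# Example: estimates of p_{<=2}(t) should stabilise as t grows (cf. Corollary 1).
# print(p_le_m(5.0, 2), p_le_m(50.0, 2))
```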

Remark 1 It is plausible that under the same set of conditions (5)-(6) the bound in (7) may be improved so that the right-hand side does not depend on $y$. Moreover, we emphasize that, given all the other constants, the value $C$ in (7) may be made "computable", with a rather involved but explicit dependence on those constants. It is likely that the condition (6) may be replaced by a weaker one,

$\frac{C_0'}{1+t} \le \lambda_0(t) \le \sup_{t\ge 0}\lambda_0(t) < \infty,$  (8)

along with the assumption that $C_0'$ is large enough; also, it is tempting to replace the condition (5) by a weaker one, with some dependence of the bound in the right-hand side on the variable $y$ (under which new condition the improvement mentioned as a hypothesis in the first sentence of this remark becomes unlikely). However, all these issues require a bit more accuracy in the calculus, and we do not pursue these goals here, leaving them for further investigations.

The idea of the proof is based on constructing appropriate Lyapunov functions and on finding a new regeneration state instead of the "compromised" idle state of the system. The Lyapunov functions guarantee that the distributions of the (independent) periods between regenerations admit certain polynomial moments, which implies the desired statement. However, first of all we need some auxiliary results on the strong Markov property - which is essential in this approach - and on Dynkin's formula.

3 Lemmata

Recall [8] that the generator of a Markov process $(X_t,\ t > 0)$ is an operator $\mathcal{G}$ such that, for a sufficiently large class of functions $f$,

$\lim_{t \downarrow 0}\,\Big\| \frac{E_X f(X_t) - f(X)}{t} - \mathcal{G}f(X) \Big\| = 0$  (9)

in the norm of the state space of the process; the notion of generator does depend on this norm. An operator $\mathcal{G}$ is called a mild generalised generator (another name is extended generator) if (9) is replaced by its corollary (10) below, called Dynkin's formula, or Dynkin's identity [8, Ch. 1, 3],

$E_X f(X_t) - f(X) = E_X \int_0^t \mathcal{G}f(X_s)\,ds,$  (10)

also for a wide enough class of functions $f$. We will also use the non-homogeneous counterpart of Dynkin's formula,

$E_X \varphi(t, X_t) - \varphi(0, X) = E_X \int_0^t \Big(\mathcal{G}\varphi(s, X_s) + \frac{\partial \varphi}{\partial s}(s, X_s)\Big)\,ds,$  (11)

for appropriate functions $\varphi(t,X)$ of two variables. Both (10) and (11) play a very important role in the analysis of Markov models, and under our assumptions they may be justified similarly to [19]. Here $X$ is a (non-random) initial value of the process. Both formulae (10)-(11) hold true for a large class of functions $f, \varphi$, with $\mathcal{G}$ given by the standard expression

$\mathcal{G}f(X) = \frac{\partial}{\partial x}f(X)\,\mathbf{1}(n(X) > 0) + \frac{\partial}{\partial y}f(X) + \lambda(X)\big(f(X^+) - f(X)\big) + h(X)\big(f(X^-) - f(X)\big),$

where for any $X = (n,x,y)$,

$X^+ := (n+1,\,x,\,0), \qquad X^- := \big((n-1)\vee 0,\,0,\,y\big)$

(here $a \vee b = \max(a,b)$). Under our minimal assumptions on the regularity of the intensities this may be justified similarly to [19].
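As a plain transliteration of the displayed expression for $\mathcal{G}$ (a sketch under the assumption that lam and h are the hypothetical intensities from the earlier snippet; the finite-difference step eps is ours), the extended generator can be evaluated numerically as follows.

```python
# Numerical evaluation of G f(X) following the displayed formula: drift in x
# (only while busy) and in y, plus the jump terms towards X^+ and X^-.
def generator(f, n, x, y, eps=1e-6):
    drift = 0.0
    if n > 0:
        drift += (f(n, x + eps, y) - f(n, x, y)) / eps           # (d/dx) f, indicator n > 0
    drift += (f(n, x, y + eps) - f(n, x, y)) / eps               # (d/dy) f
    up = lam(n, x, y) * (f(n + 1, x, 0.0) - f(n, x, y))          # jump to X^+ = (n+1, x, 0)
    down = h(n, x, y) * (f(max(n - 1, 0), 0.0, y) - f(n, x, y))  # jump to X^- = ((n-1) v 0, 0, y)
    return drift + up + down

# Example: apply G to the Lyapunov function L_m(X) = (n + 1 + x + y)^m used below.
# print(generator(lambda n, x, y: (n + 1 + x + y) ** 2, 3, 0.5, 0.2))
```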

Lemma 1 If the functions $\lambda$ and $h$ are Borel measurable and bounded, then the formulae (10) and (11) hold true for any $t > 0$ for every $f \in C_0^1(\mathcal{X})$ and $\varphi \in C_0^1([0,\infty)\times\mathcal{X})$, respectively. Moreover, the process $(X_t,\ t > 0)$ is strong Markov with respect to the filtration $(\mathcal{F}^X_t,\ t > 0)$.

Further, let

$L_m(X) = (n+1+x+y)^m, \qquad L_{k,m}(t,X) = (1+t)^k L_m(X).$  (12)

The extensions of Dynkin's formulae for some unbounded functions hold true: we will need them for the Lyapunov functions in (12).

Corollary 2 Under the assumptions of Lemma 1,

$L_m(X_t) - L_m(X) = \int_0^t \Big[\lambda(X_s)\big(L_m(X_s^+) - L_m(X_s)\big) + h(X_s)\big(L_m(X_s^-) - L_m(X_s)\big) + \mathbf{1}(n(X_s) > 0)\,\frac{\partial}{\partial x}L_m(X_s) + \frac{\partial}{\partial y}L_m(X_s)\Big]\,ds + M_t,$  (13)

with some martingale $M_t$, and also

$L_{k,m}(t, X_t) - L_{k,m}(0, X) = \int_0^t \Big[\lambda(X_s)\big(L_{k,m}(s, X_s^+) - L_{k,m}(s, X_s)\big) + h(X_s)\big(L_{k,m}(s, X_s^-) - L_{k,m}(s, X_s)\big) + \Big(\mathbf{1}(n(X_s) > 0)\,\frac{\partial}{\partial x} + \frac{\partial}{\partial y} + \frac{\partial}{\partial s}\Big)L_{k,m}(s, X_s)\Big]\,ds + \tilde M_t,$  (14)

with some martingale $\tilde M_t$.

For a martingale approach to queueing models see, for example, [14]. The proof of Lemma 1 is based on the next three Lemmata. The first of them is a rigorous statement of the well-known folklore property that the probability of "one event" on a small nonrandom interval of length $\Delta$ is of the order $O(\Delta)$, while the probability of "two or more events" on the same interval is of the order $O(\Delta^2)$. Of course, in queueing theory this is common knowledge; moreover, the claims (15)-(18) follow immediately from the definition of the process given earlier in (1)-(3). Yet, for discontinuous intensities these properties have to be, at least, explicitly stated.

Lemma 2 Under the assumptions of the Theorem 1, for any t > 0,

$P_{X_t}\big(\text{no jumps on } (t, t+\Delta]\big) = \exp\Big(-\int_0^{\Delta}(\lambda+h)(n_t,\,x_t+s,\,y_t+s)\,ds\Big)\ \ \big(= 1 + O(\Delta)\big),$  (15)

$P_{X_t}\big(\text{at least one jump on } (t, t+\Delta]\big) = O(\Delta),$  (16)

$P_{X_t}\big(\text{exactly one jump up and none down on } (t, t+\Delta]\big) = \int_0^{\Delta}\lambda(n_t,\,x_t+s,\,y_t+s)\,ds + O(\Delta^2),$  (17)

$P_{X_t}\big(\text{exactly one jump down and none up on } (t, t+\Delta]\big) = \int_0^{\Delta} h(n_t,\,x_t+s,\,y_t+s)\,ds + O(\Delta^2),$  (18)

and

$P_{X_t}\big(\text{at least two jumps on } (t, t+\Delta]\big) = O(\Delta^2).$  (19)

In all the cases above, $O(\Delta)$ and $O(\Delta^2)$ are uniform with respect to $X_t$ and depend only on the norm $\sup_X\big(\lambda(X)+h(X)\big)$; that is, there exist $C > 0$ and $\Delta_0 > 0$ such that for any $X$ and any $\Delta \le \Delta_0$,

$\Delta^{-1}\,P_X\big(\text{at least one jump on } (0,\Delta]\big) + \Delta^{-2}\,P_X\big(\text{at least two jumps on } (0,\Delta]\big)$

$+\ \Delta^{-2}\,\Big|P_{X_t}\big(\text{one jump up and none down on } (t, t+\Delta]\big) - \int_0^{\Delta}\lambda(n_t,\,x_t+s,\,y_t+s)\,ds\Big|$  (20)

$+\ \Delta^{-2}\,\Big|P_{X_t}\big(\text{one jump down and none up on } (t, t+\Delta]\big) - \int_0^{\Delta} h(n_t,\,x_t+s,\,y_t+s)\,ds\Big|\ \le\ C < \infty.$
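The $O(\Delta)$ and $O(\Delta^2)$ scalings in Lemma 2 can also be checked empirically; the following sketch (an illustration under the same hypothetical intensities as before, not part of the proof) counts accepted jumps of the simulated process on a short interval.

```python
import random

# Illustration only: empirical check of the O(Delta) / O(Delta^2) scaling in
# Lemma 2, counting real jumps of the hypothetical simulator on (0, Delta].
def jump_counts(delta, runs=100_000, state=(1, 0.3, 0.1)):
    at_least_one = at_least_two = 0
    for i in range(runs):
        path = simulate(delta, state=state, rng=random.Random(i))
        # a real jump is an entry whose n differs from the previous one, within (0, delta]
        jumps = sum(1 for a, b in zip(path, path[1:]) if a[1] != b[1] and b[0] <= delta)
        at_least_one += (jumps >= 1)
        at_least_two += (jumps >= 2)
    return at_least_one / runs, at_least_two / runs

# Halving Delta should roughly halve the first estimate and quarter the second one.
# print(jump_counts(0.1), jump_counts(0.05))
```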

The next two Lemmata are needed for the justification that the process with discontinuous intensities is, indeed, strong Markov.

Lemma 3 Under the assumptions of Theorem 1, the semigroup $T_t f(X) = E_X f(X_t)$ is continuous in $t$.

Lemma 4 Under the assumptions of Theorem 1, the process $(X_t,\ t > 0)$ is Feller, that is, $T_t f(\cdot) \in C_b(\mathcal{X})$ for any $f \in C_b(\mathcal{X})$.

The proofs of all Lemmata 2-4 may be performed similarly to [19] where no regularity of the intensities was used, although the dependences were a bit less general. Further, according to [8, Theorem 3.3.10], any Feller process satisfying the claim of the Lemma 3 with right continuous trajectories is strong Markov, which guarantees the last assertion of the Lemma 1.

4 Outline of the proof of Theorem 1

1. The idea of the proof is to identify a regeneration state and to establish polynomial bounds for its hitting time. For the latter we will use Lyapunov functions. The proof of convergence in total variation with a rate of convergence basically repeats the calculus in [18] for the Lyapunov functions $L_m(X)$ and $L_{k,m}(t,X)$ from (12), with some changes, and relies on Dynkin's formulae (10) and (11) due to Corollary 2. Without big changes, this calculus provides a polynomial moment bound

$E_X \tau_0^{k} \le C\,L_m(X) \le C\,(n+1+x+y)^m,$  (21)

for certain values of $k$ related to the exact value of the constant $C_0$, and for the hitting time

$\tau_0 := \inf\big(t \ge 0:\ X_t = (0,0,*)\big).$

However, it is not the set of idle states $\{(0,0,*)\}$ (i.e., with an arbitrary nonnegative third component) that will serve as a regeneration set; it is just an auxiliary one. Namely, once the process attains the set $\{(0,0,*)\}$, it may then be successfully coupled with another (stationary) version of the same process at their joint jump $\{n=0\} \to \{n=1\}$. This is because, in particular, immediately after such a jump the state of each process reads $(1,0,0)$. Clearly, this state $(1,0,0)$ may be considered as a regeneration state, and this despite the fact that the process spends zero time in it. The novelty in the calculus in comparison with [18] is that we have to tackle a wider class of intensities, which may all be variable (as well as discontinuous) rather than constant, including $\lambda_0$. However - besides a new regeneration state instead of the usual "zero" (idle) state - this affects the calculus only a little, once it is established that (10) and (11) hold true, because the major part of this calculus involves only time values $t \le \tau_0$. There is also some change in the coupling procedure, though, because at the state $(1,0,0)$ the process can only spend zero time, which means that the process "cannot wait" at this state.


In turn, the inequality (21) provides a bound on the rate of convergence; for the justification of this rate there are various approaches, such as versions of the coupling method as well as renewal theory. Convergence of the probabilities in the definition of the m-availability factors is a special case of the more general convergence in total variation. Although the changes in comparison with [18] are, in fact, minor, it would not be entirely fair to say that a simple reference to this earlier paper may replace a full proof. So, as suggested by the Editors, for the completeness of this paper, and for the convenience of the reader, we outline the details of the proof of the Theorem here.

2. Let us inspect the properties of the functions $L_m(X)$ and $L_{k,m}(t,X)$. Assume that $m \ge 1$ and that the value $C_0$ in the condition (5) is large enough; namely, $C_0$ satisfies

$C_0 > \Lambda\,2^{2(m+k)-1} + 2^{m+k}.$  (22)
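For orientation, here is a hypothetical numerical instance of (22) (our own example, not from the paper): with $m = 2$, $k = 1$ and $\Lambda = 1$, condition (22) reads

$C_0 > \Lambda\,2^{2(m+k)-1} + 2^{m+k} = 2^{5} + 2^{3} = 40,$

so any $C_0 > 40$ suffices for these parameter values.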

Recall that $\tau_0 := \inf\big(t \ge 0:\ X_t = (0,0,*)\big)$. Let $X_0 = X = (n,x,y)$. Note that it suffices to establish the estimate (21) for initial states with $n + x > 0$. The reason is that in the case of $X_0 = (0,0,y)$, by virtue of the condition (6) and irrespective of the value of $y$, the time for the process to hit the state $(1,0,0)$ does satisfy this estimate - and even a better, exponential bound holds true for this particular hitting time and initial state under our conditions - so that in this case we can start the estimate for $\tau_0$, so to say, from the state $(1,0,0)$. Hence, in the sequel we may and will assume $n_0 > 0$ without loss of generality.

3. Repeating the main steps of the calculus from [18] in our more involved but computationally very similar situation, we obtain for $X_t$ (note that $X_t = (0,0,*)$, i.e. $n_t = 0$, is not excluded):

$dL_m(X_t) = \lambda(X_t)\big((n_t+2+x_t+0)^m - (n_t+1+x_t+y_t)^m\big)\,dt$

$\quad +\, h(X_t)\big((n_t+y_t)^m - (n_t+1+x_t+y_t)^m\big)\,dt$

$\quad +\, \big((n_t+1+x_t+y_t+dt)^m - (n_t+1+x_t+y_t)^m\big) + dM_t$

$= (I_1 - I_2 + I_3)\,dt + dM_t,$

where

$I_1\,dt := \lambda(X_t)\big((n_t+2+x_t+0)^m - (n_t+1+x_t+y_t)^m\big)\,dt,$

$I_2\,dt := h(X_t)\big((n_t+1+x_t+y_t)^m - (n_t+y_t)^m\big)\,dt,$

$I_3\,dt := \big((n_t+1+x_t+y_t+dt)^m - (n_t+1+x_t+y_t)^m\big) = m\,(n_t+1+x_t+y_t)^{m-1}\,dt,$

and $M_t$ is a local martingale (see, e.g., [14]). The following bound will be established:

$(I_1 - I_2 + I_3) \le -\,C\,L_{m-1}(X_t), \qquad t < \tau_0,$

with some $C > 0$. The main purpose - besides the convenience of the reader - of the calculus which follows is, indeed, to make sure that no new difficulties arise due to the more involved model in comparison with [18]; in particular, one piece of news is that, unlike in the paper [18], hitting a state with $n = 0$ may now not be enough, and because of this even the definition of the hitting time $\tau_0$ here is different.

Later on, for the function $L_{m,k}(t,X) = (1+t)^k L_m(X)$ with $k > 0$, under the appropriate condition on $C_0$ (see (22)) we will show that

$E_X L_{m,k}(\tau_0, X_{\tau_0}) \le L_{m'}(X),$

or, equivalently,

$c\,E_X \tau_0^{k+1} \le L_{m'}(X),$

with some $m' > m$ and $c > 0$, which suffices for the desired result. For any $m \ge 1$ and $a \ge 1$ we have

$(a+1)^m - a^m = m\int_0^1 (a+s)^{m-1}\,ds \le m\,2^{m-1}a^{m-1}.$

It follows that

$I_1 \le m\,2^{m-1}\Lambda\,(n_t+1+x_t+y_t)^{m-1} = m\,2^{m-1}\Lambda\,L_{m-1}(X_t).$

Further,

$I_2 \ge \frac{C_0}{1+x_t}\,\big((n_t+1+x_t+y_t)^m - (n_t+y_t)^m\big) = \frac{C_0}{1+x_t}\int_0^1 m\,\big(n_t+y_t+s(1+x_t)\big)^{m-1}(1+x_t)\,ds$

$\ge C_0\,m\int_{1/2}^1 \big(n_t+y_t+s(1+x_t)\big)^{m-1}\,ds \ge C_0\,m\,2^{-m}\,(n_t+1+x_t+y_t)^{m-1} = C_0\,m\,2^{-m} L_{m-1}(X_t),$

since for $s \ge 1/2$ one has $n_t+y_t+s(1+x_t) \ge \tfrac{1}{2}(n_t+1+x_t+y_t)$.

Finally,

$I_3 = m\,L_{m-1}(X_t).$

So, we get

$I_1 - I_2 + I_3 \le m\,2^{m-1}\Lambda\,L_{m-1}(X_t) - C_0\,m\,2^{-m}L_{m-1}(X_t) + m\,L_{m-1}(X_t).$

Here, if $C_0$ is large enough, namely, if

$m\,2^{-m}C_0 > m\,2^{m-1}\Lambda + m \quad\Longleftrightarrow\quad C_0 > 2^{2m-1}\Lambda + 2^{m}$  (23)

(clearly, (23) is weaker than (22)), then the sum $I_1 - I_2 + I_3$ is strictly negative:

$I_1 - I_2 + I_3 \le -\,m\big(2^{-m}C_0 - 2^{m-1}\Lambda - 1\big)\,L_{m-1}(X_t) < 0.$

Note that (22) in its full generality will be used in the sequel. Now, by virtue of Fatou's Lemma - if necessary, with an appropriate localizing sequence - we obtain

$E_X L_m(X_{t\wedge\tau_0}) + \big(m\,2^{-m}C_0 - m\,2^{m-1}\Lambda - m\big)\,E_X\int_0^{t\wedge\tau_0} L_{m-1}(X_s)\,ds \le L_m(X),$

and also

$E_X L_m(X_{\tau_0}) + \big(m\,2^{-m}C_0 - m\,2^{m-1}\Lambda - m\big)\,E_X\int_0^{\tau_0} L_{m-1}(X_s)\,ds \le L_m(X).$  (24)

From (24) it follows, in particular, that $E_X \tau_0 < \infty$ (since $L_{m-1} \ge 1$), from which it may be concluded, due to the Harris-Khasminskii principle, that there exists a stationary measure (see [11]); in our case it is clearly unique (e.g., because of the convergence to any stationary measure, which follows from this proof); moreover,

$\int_{\mathcal{X}} L_{m-1}(X)\,\mu(dX) < \infty.$  (25)

Also, by Hölder's inequality, for each $t > 0$,

$E_X L_{m'}(X_{t\wedge\tau_0}) \le L_{m'}(X), \qquad \forall\ m' \le m.$  (26)

4. Note that the bound (26) has been established under the condition (23). Similarly, if it were known that $C_0$ satisfies

$C_0 > 2^{2(m+\ell)-1}\Lambda + 2^{m+\ell},$  (27)

then we would be able to conclude that also

$E_X L_{m'}(X_{t\wedge\tau_0}) \le L_{m'}(X), \qquad \forall\ m' \le m+\ell,$  (28)

for each $m' \le m+\ell$. In turn, if we need (27) and (28) for some $\ell$ greater than $k$ - it may be taken arbitrarily close to $k$ - then (22) suffices for this.

5. Now let us inspect the function $L_{m,k}(t,X) = (1+t)^k L_m(X)$ with $k > 0$ under the assumption (22). Similarly to the computation above, we have

$dL_{m,k}(t, X_t) = (1+t)^k\,[I_1 - I_2 + I_3]\,dt + d\tilde M_t + k(1+t)^{k-1} L_m(X_t)\,dt$

$\le -(1+t)^k\,m\big(2^{-m}C_0 - 2^{m-1}\Lambda - 1\big) L_{m-1}(X_t)\,dt + k(1+t)^{k-1} L_m(X_t)\,dt + d\tilde M_t,$

with some new local martingale $\tilde M_t$. The second term $I_4 := k(1+t)^{k-1} L_m(X_t)$ may be split into two parts, $I_4 = I_5 + I_6$, where

$I_5 := k(1+t)^{k-1} L_m(X_t)\,\mathbf{1}\big(k(1+t)^{k-1} L_m(X_t) \le \varepsilon(1+t)^k L_{m-1}(X_t)\big),$

$I_6 := k(1+t)^{k-1} L_m(X_t)\,\mathbf{1}\big(k(1+t)^{k-1} L_m(X_t) > \varepsilon(1+t)^k L_{m-1}(X_t)\big),$

where $\mathbf{1}(A)$ stands for the indicator of the event $A$. The term $I_5$ is clearly dominated by the main negative expression $-I_2$ in the sum $I_1 - I_2 + I_3$, if we put $\varepsilon < m\big(2^{-m}C_0 - 2^{m-1}\Lambda - 1\big)$. Let us now estimate the term $I_6$. For any $\ell > 0$ and $\varepsilon > 0$,

$I_6 \le I_4\,\Big(\frac{k\,L_m(X_t)}{\varepsilon(1+t)\,L_{m-1}(X_t)}\Big)^{\ell} = I_4\,\frac{k^{\ell}}{(\varepsilon(1+t))^{\ell}}\,L_{\ell}(X_t)$

(on the event in the indicator of $I_6$ the ratio in the brackets exceeds one, so raising it to the power $\ell$ only increases the bound). So, $I_6$ does not exceed the value

$\frac{k^{\ell+1}}{\varepsilon^{\ell}}\,(1+t)^{k-1-\ell}\,L_{m+\ell}(X_t).$

Let $\ell = k + \delta$ and assume that this value of $\ell$ satisfies the condition (27). Recall that this is always possible if $C_0$ satisfies (22) and $\delta > 0$ is small enough. Then, due to (28) and, if necessary, by using a new auxiliary localizing sequence of stopping times together with Fatou's Lemma, we get

$E_X L_{m,k}(t\wedge\tau_0,\,X_{t\wedge\tau_0}) + \big(m(2^{-m}C_0 - 2^{m-1}\Lambda - 1) - \varepsilon\big)\,E_X\int_0^{t\wedge\tau_0}(1+s)^k L_{m-1}(X_s)\,ds$

$\le L_m(X) + C'\,E_X\int_0^{\infty}\mathbf{1}(s < t\wedge\tau_0)\,(1+s)^{k-1-\ell} L_{m+\ell}(X_s)\,ds$

$\le L_m(X) + C'\int_0^{\infty}(1+s)^{k-1-\ell}\,E_X L_{m+\ell}(X_{s\wedge t\wedge\tau_0})\,ds$

$\le L_m(X) + C''\,L_{m+\ell}(X) \le C'''\,L_{m+\ell}(X).$

Again, by virtue of Fatou's Lemma this implies

$E_X L_{m,k}(\tau_0, X_{\tau_0}) + C'\,E_X\int_0^{\tau_0}(1+s)^k L_{m-1}(X_s)\,ds \le C'''\,L_{m+\ell}(X).$

Since $L_{m-1}(X_s) \ge 1$ for $s < \tau_0$ and $\int_0^{\tau_0}(1+s)^k\,ds \ge \tau_0^{k+1}/(k+1)$, we obtain

$E_X \tau_0^{k+1} \le C\,L_{m+\ell}(X),$  (29)

with some new constant $C > 0$, which also admits an effective bound, similarly to all the earlier constants.

6. The estimate (29) - along with the remark made earlier about an exponential moment of the time to hit the state $(1,0,0)$ starting from a state $(0,0,*)$ - suffices for the desired inequality, and there are various ways to show this, including the coupling method (cf., e.g., [15, 17]) and renewal theory (see, e.g., [3]). Hence, for many readers a recommendation would be to stop reading here. However, for the convenience of the wider audience (as well as simply for the sake of completeness) we will now briefly recall the scheme of the coupling method mentioned earlier and show how the proof may be completed without any big theory, "by hand". Let $(X_t)$ and $(\tilde X_t)$ be two independent copies of our Markov process, where the first process starts at $X_0 = X$ while the second one has a stationary initial distribution, whose existence was established earlier. (At this point uniqueness has not been proved yet, so we take any stationary distribution if there is more than one.) Denote

$\hat\tau_0 := \inf\big(t \ge 0:\ X_t = (0,0,*)\ \&\ \tilde X_t = (0,0,*)\big)$

(the third components may be equal or different). Quite similarly to (29), the inequality

$E_X \hat\tau_0^{\,k+1} \le C\,L_{m+\ell}(X),$  (30)

can be established, see, e.g., [18], with a new constant $C$, which also admits an effective bound. The proof follows from integration and from the fact that $\int L_{m+\ell}\,d\mu < \infty$ - which integral also admits an effective bound - due to (25), the latter being guaranteed by the choice of a large enough value of $C_0$, see (22).

7. Finally, by the coupling inequality ("c.i."; see, for example, [16]), for any measurable set $A$,

$\big|(\mu_t^X - \mu)(A)\big| \le \big|E_X\big(\mathbf{1}(X_t \in A) - \mathbf{1}(\tilde X_t \in A)\big)\,\mathbf{1}(t \ge \hat\tau_0)\big| + \big|E_X\big(\mathbf{1}(X_t \in A) - \mathbf{1}(\tilde X_t \in A)\big)\,\mathbf{1}(t < \hat\tau_0)\big|$

$\le E_X \mathbf{1}(t < \hat\tau_0) = P_X(t < \hat\tau_0) \le \frac{E_X(1+\hat\tau_0)^{k+1}}{(1+t)^{k+1}} \le \frac{C\,L_{m+\ell}(X)}{(1+t)^{k+1}}$

(here it is used that after $\hat\tau_0$ the two processes may be successfully coupled at their joint jump to the state $(1,0,0)$, as explained in Step 1, so that the first term vanishes).

Note that, in particular, the uniqueness of the stationary distribution follows from this convergence. Finally, by the definition of the total variation distance, $\|\mu_t^X - \mu\|_{TV} := 2\sup_A(\mu_t^X - \mu)(A)$, and hence the obtained inequality (30) provides the claim of the Theorem.

References

[1] Asmussen, S., Applied Probability and Queues, 2nd edition, Springer, Berlin et al. (2003).

[2] Bambos, N., Walrand, J., On stability of state-dependent queues and acyclic queueing networks, Adv. Appl. Probab. 21(3) (1989), 681-701.

[3] Borovkov, A. A., Asymptotic Methods in Queueing Theory, Chichester, NY: J. Wiley, 1984.

[4] Borovkov, A. A., Boxma, O. J., Palmowski, Z., On the Integral of the Workload Process of the Single Server Queue, Journal of Applied Probability, 40(1) (2003), 200-225.

[5] Bramson, M., Stability of Queueing Networks, École d'Été de Probabilités de Saint-Flour XXXVI-2006, Lecture Notes in Math., Vol. 1950 (2008).

[6] Brémaud, P., Lasgouttes, J.-M., Stationary IPA estimates for nonsmooth G/G/1/∞ functionals via Palm inversion and level-crossing analysis, Discrete Event Dynamic Systems, 3(4) (1993), 347-374.

[7] Brémaud, P., Lasgouttes, J.-M., Stationary IPA estimates for nonsmooth G/G/1/∞ functionals via Palm inversion and level-crossing analysis. A 2012 preprint version of the paper [6], http://arxiv.org/pdf/1207.3241v1.pdf

[8] Dynkin, E. B., Markov Processes, V. I, Springer-Verlag, Berlin-Göttingen-Heidelberg (1965).

[9] Fakinos, D., The Single-Server Queue with Service Depending on Queue Size and with the Preemptive-Resume Last-Come-First-Served Queue Discipline, Journal of Applied Probability, 24(3) (1987), 758-767.

[10] Gnedenko, B. V., Kovalenko, I. N., Introduction to Queueing Theory, 2nd ed., rev. and suppl., Boston, MA et al.: Birkhäuser (1991).

[11] Hasminskii, R. Z., Stochastic Stability of Differential Equations, Dordrecht, The Netherlands: Sijthoff & Noordhoff (1980).

[12] Kim, B., Kim, J., A note on the subexponential asymptotics of the stationary distribution of M/G/1 type Markov chains, European Journal of Operational Research, 220(1) (2012), 132-134.

[13] Kimura, T., Masuyama, H., Takahashi, Y., Subexponential Asymptotics of the Stationary Distributions of GI/G/1-Type Markov Chains, arXiv:1410.5554v3 (June 2016).

[14] Liptser, R. Sh., Shiryaev, A. N., Stochastic calculus on filtered probability spaces, in: S. V. Anulova, A. Yu. Veretennikov, N. V. Krylov, et al., Stochastic Calculus, Itogi Nauki i Tekhniki, Modern Problems of Fundamental Math. Directions, Moscow, VINITI (1989), 114-159 (in Russian); Engl. transl.: Probability Theory III, Stochastic Calculus, Yu. V. Prokhorov and A. N. Shiryaev, Eds., Springer (1998), 111-157.

[15] Thorisson, H., The queue GI/G/1: finite moments of the cycle variables and uniform rates of convergence, Stoch. Proc. Appl. 19(1) (1985), 85-99.


[16] Thorisson, H., Coupling, Stationarity, and Regeneration, New York, NY: Springer, 2000.

[17] Veretennikov, A. Yu., On polynomial mixing and convergence rate for stochastic difference and differential equations, Theory Probab. Appl. 45(1), 160-163 (2001).

[18] Veretennikov, A. Yu., On the rate of convergence to the stationary distribution in the single-server queuing system, Autom. Remote Control 74(10), 1620-1629 (2013).

[19] Veretennikov, A.Yu., Zverkina, G.A., Simple Proof of Dynkin's formula for Single-Server Systems and Polynomial Convergence Rates, Markov Processes Relat. Fields, 20, 479-504 (2014).
