
Rachinskaya M., Fedotkin M. RT&A, No 1 (48), Volume 13, March 2018

Research of a Multidimensional Markov Chain as a Model for the Class of Queueing Systems Controlled by a Threshold Priority Algorithm

Maria Rachinskaya, Mikhail Fedotkin

Lobachevsky State University of Nizhni Novgorod
rachinskaya.maria@gmail.com, fma5@rambler.ru

Abstract

A class of controlled queueing systems with several heterogeneous conflicting input flows is investigated. A model of such systems is a time-homogeneous multidimensional Markov chain with a countable state space. Classification of the chain states is made: a closed set of recurrent aperiodic states and a set of transient states are determined. An ergodic theorem for the Markov chain is formulated and proved.

Keywords: controlled queueing system, threshold priority, multidimensional Markov chain, recurrent state, stationary distribution

1 Introduction

Nowadays, there is a large number of works dealing with problems of controlled queueing systems [1, 2]. Many such studies have high applied value, since they concern real biological, logistical, engineering and technical objects (e.g. [3, 4, 5]). Various quality characteristics and performance scores of such systems are investigated. One of the important goals of these works is system optimization. In order to synthesize an optimal system, it is usually necessary to study its asymptotic behaviour [4, 6, 7, 8, 9]. In particular, a number of works are devoted to obtaining limit theorems and searching for conditions under which a stationary mode exists. Such investigations are mainly based on mathematical and simulation modeling. If a reasonably simple and reliable mathematical model is constructed, it becomes possible to study the limiting dynamics and to determine whether a stable stationary mode exists.

The work [10] studies a system with several stochastically independent conflicting input flows of customers (demands). In that work a specific model of the input flows constructed in [11] is considered. The system carries not only service functionality for the customers but also control functions for the flows. It is supposed that the flows are controlled by a cyclic algorithm. As a rule, such a control algorithm is applied if the input flows are regarded as homogeneous, which means no preference is given to any of the flows. A case of heterogeneous input flows is considered in [12]. Such heterogeneity may imply, for example, a different probabilistic structure of the flows, substantially different arrival intensities, different priorities of the flows, etc. It is usually assumed in such a case that a complicated adaptive feedback control algorithm is used. The present work is a continuation and expansion of [12], and it mainly focuses on the limiting behaviour of the system.

2 System description

A system that controls m ≥ 2 independent conflicting flows Π_1, Π_2, ..., Π_m and serves their customers is studied. It is assumed that the input flows are formed under the influence of similar external environments. This means they can be approximated by non-ordinary Poisson flows. For example, it is shown in [11] that a non-ordinary Poisson flow can be an adequate model for a traffic flow under certain external conditions. If the weather or the roadbed is quite bad, the heterogeneity of the vehicles becomes apparent: slow vehicles become an obstacle for the fast ones. That is why vehicles start to gather into groups, or traffic batches. There is a certain dependency between the vehicles inside a batch, while different batches can be considered independent. Similarly, it is supposed in this work that each input flow can be approximated by a non-ordinary Poisson flow Π_j (henceforth, j ∈ J = {1, 2, ..., m}) with the following parameters: λ_j > 0 is the arrival intensity of the batches (groups of customers); p_j, q_j and s_j are the probabilities of a batch consisting of one, two and three customers correspondingly (p_j + q_j + s_j = 1). The expressions for the one-dimensional distributions of a non-ordinary Poisson flow of this kind are derived in [11]. The probability φ_j(n; t) that n ∈ X = {0, 1, ...} customers of the flow Π_j arrive to the system during the interval [0, t), t > 0, is given by the formula

φ_j(n; t) = e^(−λ_j t) Σ_{u=0}^{⌊n/2⌋} Σ_{v=0}^{⌊(n−2u)/3⌋} (λ_j t)^{n−u−2v} p_j^{n−2u−3v} q_j^u s_j^v / (u! v! (n−2u−3v)!).
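As a quick plausibility check, the formula above can be evaluated numerically and compared with a direct simulation of the flow. The following Python sketch uses illustrative parameter values (λ_j = 1.2, p_j = 0.5, q_j = 0.3, s_j = 0.2) that are not taken from the paper; u and v count the batches of two and of three customers.

```python
import math, random

def phi(n, t, lam, p, q, s):
    """phi_j(n; t): probability that exactly n customers of a non-ordinary
    Poisson flow (batch intensity lam, batch-size probabilities p, q, s)
    arrive during [0, t)."""
    total = 0.0
    for u in range(n // 2 + 1):
        for v in range((n - 2 * u) // 3 + 1):
            a = n - 2 * u - 3 * v              # batches of a single customer
            k = a + u + v                      # total number of batches
            total += ((lam * t) ** k * p ** a * q ** u * s ** v
                      / (math.factorial(u) * math.factorial(v) * math.factorial(a)))
    return math.exp(-lam * t) * total

def sample_arrivals(t, lam, p, q, s, rng):
    """Draw one value of the flow: a Poisson number of batches of size 1-3."""
    u0, k, pk = rng.random(), 0, math.exp(-lam * t)
    acc = pk
    while u0 > acc:                            # inverse-transform Poisson sampling
        k += 1
        pk *= lam * t / k
        acc += pk
    return sum(rng.choices((1, 2, 3), weights=(p, q, s))[0] for _ in range(k))

rng = random.Random(1)
lam, t, p, q, s = 1.2, 1.0, 0.5, 0.3, 0.2
# the one-dimensional distribution is proper ...
assert abs(sum(phi(n, t, lam, p, q, s) for n in range(120)) - 1.0) < 1e-9
# ... and agrees with a Monte Carlo estimate of P(two customers in [0, t))
freq = sum(sample_arrivals(t, lam, p, q, s, rng) == 2 for _ in range(100_000)) / 100_000
assert abs(freq - phi(2, t, lam, p, q, s)) < 0.01
```

The check relies on the fact that summing the formula over all compositions of n into batches of sizes one, two and three reproduces a product of three independent Poisson factors, which normalizes to one.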

Though the flows have the same probabilistic structure, they differ in priority and arrival intensity. The flows Π_1, Π_2, ..., Π_{m−1} are the low-intensity flows; the flow Π_m has the highest intensity. At the same time, the customers of the flow Π_1 have the highest priority. The considered queueing system is a lossless system. The customers of the flow Π_j that arrive to the system and cannot be served at once are forced to wait in the queue O_j. The service device, which also performs control functions for the flows, can be in one of the states of the set Γ = {Γ^(1), Γ^(2), ..., Γ^(2m+1)}. The service device stays in each state Γ^(k), k ∈ M = {1, 2, ..., 2m+1}, during a time period of duration T_k. The state Γ^(2j−1), where j ∈ {1, 2, ..., m−1}, is reserved for servicing the flow Π_j. The service intensity in this case is μ_j > 0. Since the flow Π_m has the highest intensity, there are two service states for this flow: Γ^(2m−1) and Γ^(2m). The service intensity is the same for both of these states and equals μ_m > 0. It is also assumed that T_{2m} < T_{2m−1}. The input flows are conflicting, which means no two of them can be served simultaneously. Moreover, for the sake of safety, it is recommended to have certain adjusting states between the service states of different flows. Therefore, the intermediate readjusting state Γ^(2j), j ∈ {1, 2, ..., m−1}, is allocated for safe switching between service of the flows Π_j and Π_{j+1}. The readjusting state after the flow Π_m is Γ^(2m+1). The variables l_j = ⌊μ_j T_{2j−1}⌋, j ∈ J, and the variable l'_m = ⌊μ_m T_{2m}⌋ characterize the capacity of the service device in the corresponding state. It is supposed that the system always functions in the emergency mode [10]. This means there are no unmotivated downtimes. Each time the service device is in a service state for a certain flow Π_j, as many customers waiting in the queue O_j as possible are served.

At the same time, the number of served customers cannot exceed the service capacity in this state. When the service period for a certain flow ends, either the current state switches to the next one according to a certain control algorithm s(Γ), or the decision to prolong service is made. The control algorithm s(Γ) is described below. The served customers of the flow Π_j compose the output flow Π'_j. A general scheme of the considered class of systems is presented in Figure 1.

Figure 1: General scheme of the considered class of systems

Let τ_i, i ∈ I = {0, 1, ...}, denote the moments at which the decisions about state switching or prolongation are made. Such moments are random variables, since the initial state of the service device is unknown, an initial distribution of the state at the moment t = 0 may be set and, generally speaking, the durations T_1, T_2, ..., T_{2m+1} are different. The time axis 0t is divided by these moments into the intervals Δ_{−1} = [0, τ_0), Δ_i = [τ_i, τ_{i+1}), i ∈ I. The following random variables and elements characterize the system in the interval Δ_i for i ∈ I: 1) Γ_i ∈ Γ is the state of the service device; 2) η_{j,i} ∈ X is the number of demands of the flow Π_j that arrive to the system; 3) ξ_{j,i} is the maximum number of demands of the flow Π_j that can be served; 4) ξ̄_{j,i} is the number of customers of the flow Π_j that are actually served during this interval. Here for any j ∈ {1, 2, ..., m−1} we have ξ_{j,i} ∈ B_j = {0, l_j} and ξ̄_{j,i} ∈ Y_j = {0, 1, ..., l_j}, while ξ_{m,i} ∈ B_m = {0, l'_m, l_m} and ξ̄_{m,i} ∈ Y_m = {0, 1, ..., l_m}. Apart from this, let the variable κ_{j,i} ∈ X count the random number of customers waiting in the queue O_j at the moment τ_i. For each flow Π_j it is also necessary to consider the random variable ξ̄_{j,i−1} ∈ {0, 1, ...}, the number of demands of the flow Π_j actually served during the interval Δ_{i−1}. Now the control algorithm is to be introduced. The decision about the next state of the service device is made according to the following rule:

Γ_{i+1} = u(Γ_i, κ_{1,i}, η_{1,i}), where the control function u is given point-wise:

u(Γ^(k), x_1, n_1) =
  Γ^(k+1),   k ∈ M\{2m−2, 2m, 2m+1};
  Γ^(2m−1), k = 2m−2, x_1 + n_1 < h_1;
  Γ^(2m),    k = 2m−2, x_1 + n_1 ≥ h_1;
  Γ^(2m),    k = 2m, x_1 + n_1 < h_1;
  Γ^(2m+1), k = 2m, x_1 + n_1 ≥ h_1;
  Γ^(1),      k = 2m + 1.     (1)

Such an algorithm has several peculiarities. Firstly, it implements feedback on the number of waiting customers in the queue of the high-priority flow. Secondly, the service device may prolong service of the flow with high intensity. Thirdly, the described algorithm is an anticipatory algorithm, since it takes into account the number η_{1,i} of customers that are to arrive to the system during the succeeding time period. It should be noted that the conflict of interests between the high-intensity and the high-priority flows is resolved with the help of a threshold priority variable h_1 ∈ {0, 1, ...}. The service device state is switched from service of the high-intensity flow to service of the high-priority flow only if the number of waiting customers in the high-priority queue reaches the threshold value. A graph of the described control algorithm is shown in Figure 2.
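The switching rule (1) is straightforward to express in code. The sketch below indexes the device states by k ∈ {1, ..., 2m+1} and treats m and h_1 as plain parameters; it is only an illustration of the threshold logic, not code from the paper.

```python
def u(k, x1, n1, m, h1):
    """Control function (1): index of the next service-device state, given
    the current index k in {1, ..., 2m+1}, the high-priority queue length
    x1, its increment n1 and the threshold h1."""
    if k == 2 * m - 2:                     # last readjustment before flow m
        return 2 * m - 1 if x1 + n1 < h1 else 2 * m
    if k == 2 * m:                         # prolongation or switch-away
        return 2 * m if x1 + n1 < h1 else 2 * m + 1
    if k == 2 * m + 1:
        return 1
    return k + 1                           # every other state: cyclic switch

# m = 2 flows, threshold h1 = 3
assert u(2, 1, 0, 2, 3) == 3    # short queue: long service state Gamma^(2m-1)
assert u(2, 2, 1, 2, 3) == 4    # threshold reached: short service state Gamma^(2m)
assert u(4, 0, 0, 2, 3) == 4    # service of the intense flow is prolonged
assert u(4, 3, 0, 2, 3) == 5    # priority queue full enough: readjust
assert u(5, 0, 0, 2, 3) == 1    # and back to the high-priority flow
```

Note how the anticipatory character of the algorithm appears directly: the decision depends on x1 + n1, i.e. on the arrivals of the succeeding period as well as on the current queue.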

Figure 2: Graph of the control algorithm s(Γ)

The variables η_{j,i} and ξ_{j,i} are defined by their conditional distributions

P(η_{j,i} = n | Γ_i = Γ^(k)) = φ_j(n; T_k),  P(ξ_{j,i} = b | Γ_i = Γ^(k)) = β_j(b; Γ^(k)),

where the function β_j: B_j × Γ → {0, 1} is given point-wise:

β_j(b; Γ^(k)) =
  1, b = 0, k ∈ M\{2j−1}, j ∈ J\{m};
  1, b = 0, k ∈ M\{2m−1, 2m}, j = m;
  1, b = l_j, k = 2j−1, j ∈ J;
  1, b = l'_m, k = 2m, j = m;
  0, otherwise.     (2)

Moreover, these variables are conditionally independent.
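The degenerate distribution β_j can likewise be written down directly. The function below is a sketch in which the capacities are passed as parameters (for j = m the caller passes l_m as lj, and lpm stands for l'_m).

```python
def beta(b, k, j, m, lj, lpm):
    """beta_j(b; Gamma^(k)) from (2): the degenerate distribution of the
    service capacity xi_{j,i}. lj is l_j (for j = m pass l_m); lpm is l'_m."""
    if j != m:
        served = lj if k == 2 * j - 1 else 0
    else:
        served = lj if k == 2 * m - 1 else (lpm if k == 2 * m else 0)
    return 1 if b == served else 0

# m = 2, l_1 = 3, l_m = 5, l'_m = 2
assert beta(3, 1, 1, 2, 3, 2) == 1   # flow 1 served with capacity l_1 in Gamma^(1)
assert beta(0, 2, 1, 2, 3, 2) == 1   # no service of flow 1 in the readjusting state
assert beta(5, 3, 2, 2, 5, 2) == 1   # flow m served with capacity l_m in Gamma^(2m-1)
assert beta(2, 4, 2, 2, 5, 2) == 1   # and with capacity l'_m in Gamma^(2m)
assert beta(5, 4, 2, 2, 5, 2) == 0
```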

3 Problem statement


It is proposed in [12] to consider the random vector

χ_i = (Γ_i, κ_{1,i}, κ_{m,i}, ξ̄_{1,i−1}, ξ̄_{m,i−1}) ∈ Γ × X × X × Y_1 × Y_m     (3)

as the system state at the moment τ_i, i ∈ I. This approach allows one to study the system dynamics from the point of view of the two flows Π_1 and Π_m. The following recurrent relations are given in [12]:

Γ_{i+1} = u(Γ_i, κ_{1,i}, η_{1,i}),
κ_{1,i+1} = max{0, κ_{1,i} + η_{1,i} − ξ_{1,i}},  κ_{m,i+1} = max{0, κ_{m,i} + η_{m,i} − ξ_{m,i}},
ξ̄_{1,i} = min{κ_{1,i} + η_{1,i}, ξ_{1,i}},  ξ̄_{m,i} = min{κ_{m,i} + η_{m,i}, ξ_{m,i}}.

They describe the changes of the system state for any i ∈ I. The work [12] contains a proof of the following theorem.
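Under the assumption that the arrivals η and the capacities ξ for the current interval are already given, the recurrent relations above translate into a one-step update. The Python sketch below uses illustrative parameter names; the capacities b1, bm would in practice be determined by the current state according to (2).

```python
def u(k, x1, n1, m, h1):
    # control function (1)
    if k == 2 * m - 2:
        return 2 * m - 1 if x1 + n1 < h1 else 2 * m
    if k == 2 * m:
        return 2 * m if x1 + n1 < h1 else 2 * m + 1
    if k == 2 * m + 1:
        return 1
    return k + 1

def step(state, n1, nm, b1, bm, m, h1):
    """One application of the recurrent relations: state = (k, x1, xm, y1, ym);
    n1, nm are the arrivals eta_{1,i}, eta_{m,i} during the interval and
    b1, bm the capacities xi_{1,i}, xi_{m,i} determined according to (2)."""
    k, x1, xm, _, _ = state
    return (u(k, x1, n1, m, h1),
            max(0, x1 + n1 - b1),      # new queue length of the priority flow
            max(0, xm + nm - bm),      # new queue length of the intense flow
            min(x1 + n1, b1),          # customers of flow 1 actually served
            min(xm + nm, bm))          # customers of flow m actually served

# m = 2, h1 = 2: in the service state Gamma^(1) with capacity b1 = 3, a queue
# of 4 plus 1 arrival yields 3 served customers and a remaining queue of 2
assert step((1, 4, 2, 0, 0), 1, 0, 3, 0, 2, 2) == (2, 2, 2, 3, 0)
```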

Theorem 1. The vector sequence

{(Γ_i, κ_{1,i}, κ_{m,i}, ξ̄_{1,i−1}, ξ̄_{m,i−1}); i ∈ I}     (4)

with an initial distribution of the vector (Γ_0, κ_{1,0}, κ_{m,0}, ξ̄_{1,−1}, ξ̄_{m,−1}) is a multidimensional time-homogeneous controlled Markov chain.

The purpose of this work is to study the state space of the Markov chain (4) and to investigate its limiting behaviour.


4 Markov chain state space classification

Let Q_i(Γ^(k), x_1, x_m, y_1, y_m) for any Γ^(k) ∈ Γ, x_1, x_m ∈ X, y_1 ∈ Y_1, y_m ∈ Y_m be the following probabilities:

Q_i(Γ^(k), x_1, x_m, y_1, y_m) = P(Γ_i = Γ^(k), κ_{1,i} = x_1, κ_{m,i} = x_m, ξ̄_{1,i−1} = y_1, ξ̄_{m,i−1} = y_m).

The proof of Theorem 1 in [12] contains the foundation for the following relation:

Q_{i+1}(Γ^(k), x_1, x_m, y_1, y_m) = Σ_{r=1}^{2m+1} Σ_{v_1=0}^{∞} Σ_{v_m=0}^{∞} Σ_{w_1=0}^{l_1} Σ_{w_m=0}^{l_m} Q_i(Γ^(r), v_1, v_m, w_1, w_m) ×
× Σ_{n_1=0}^{∞} Σ_{b_1∈B_1} Σ_{n_m=0}^{∞} Σ_{b_m∈B_m} φ_1(n_1; T_r) β_1(b_1; Γ^(r)) φ_m(n_m; T_r) β_m(b_m; Γ^(r)) ×
× P(u(Γ^(r), v_1, n_1) = Γ^(k), max{0, v_1 + n_1 − b_1} = x_1, max{0, v_m + n_m − b_m} = x_m, min{v_1 + n_1, b_1} = y_1, min{v_m + n_m, b_m} = y_m).

Note that, taking into account (1) and (2), this relation is transformed into the following special-case relations:

Q_{i+1}(Γ^(1), x_1, x_m, y_1, y_m) = Σ_{v_1=0}^{x_1} φ_1(x_1 − v_1; T_{2m+1}) Σ_{v_m=0}^{x_m} φ_m(x_m − v_m; T_{2m+1}) ×
× Σ_{w_1=0}^{l_1} Σ_{w_m=0}^{l_m} Q_i(Γ^(2m+1), v_1, v_m, w_1, w_m) P(y_1 = 0, y_m = 0);     (5)

Q_{i+1}(Γ^(2), x_1, x_m, y_1, y_m) = Σ_{v_1=0}^{y_1} φ_1(y_1 − v_1; T_1) Σ_{v_m=0}^{x_m} φ_m(x_m − v_m; T_1) ×
× Σ_{w_1=0}^{l_1} Σ_{w_m=0}^{l_m} Q_i(Γ^(1), v_1, v_m, w_1, w_m) P(x_1 = 0, y_1 < l_1, y_m = 0) +
+ Σ_{v_1=0}^{x_1+l_1} φ_1(x_1 + l_1 − v_1; T_1) Σ_{v_m=0}^{x_m} φ_m(x_m − v_m; T_1) ×
× Σ_{w_1=0}^{l_1} Σ_{w_m=0}^{l_m} Q_i(Γ^(1), v_1, v_m, w_1, w_m) P(y_1 = l_1, y_m = 0);     (6)

Q_{i+1}(Γ^(k), x_1, x_m, y_1, y_m) = Σ_{v_1=0}^{x_1} φ_1(x_1 − v_1; T_{k−1}) Σ_{v_m=0}^{x_m} φ_m(x_m − v_m; T_{k−1}) ×
× Σ_{w_1=0}^{l_1} Σ_{w_m=0}^{l_m} Q_i(Γ^(k−1), v_1, v_m, w_1, w_m) P(y_1 = 0, y_m = 0),  k ∈ {3, 4, ..., 2m−2};     (7)

Q_{i+1}(Γ^(2m−1), x_1, x_m, y_1, y_m) = Σ_{v_1=0}^{x_1} φ_1(x_1 − v_1; T_{2m−2}) Σ_{v_m=0}^{x_m} φ_m(x_m − v_m; T_{2m−2}) ×
× Σ_{w_1=0}^{l_1} Σ_{w_m=0}^{l_m} Q_i(Γ^(2m−2), v_1, v_m, w_1, w_m) P(x_1 < h_1, y_1 = 0, y_m = 0);     (8)

Q_{i+1}(Γ^(2m), x_1, x_m, y_1, y_m) = Σ_{v_1=0}^{x_1} φ_1(x_1 − v_1; T_{2m−2}) Σ_{v_m=0}^{x_m} φ_m(x_m − v_m; T_{2m−2}) ×
× Σ_{w_1=0}^{l_1} Σ_{w_m=0}^{l_m} Q_i(Γ^(2m−2), v_1, v_m, w_1, w_m) P(x_1 ≥ h_1, y_1 = 0, y_m = 0) +
+ Σ_{v_1=0}^{x_1} φ_1(x_1 − v_1; T_{2m−1}) Σ_{c_m=0}^{∞} Σ_{v_m=0}^{c_m} φ_m(c_m − v_m; T_{2m−1}) ×
× Σ_{w_1=0}^{l_1} Σ_{w_m=0}^{l_m} Q_i(Γ^(2m−1), v_1, v_m, w_1, w_m) ×
× P(max{0, c_m − l_m} = x_m, y_1 = 0, min{c_m, l_m} = y_m) +
+ Σ_{v_1=0}^{x_1} φ_1(x_1 − v_1; T_{2m}) Σ_{c_m=0}^{∞} Σ_{v_m=0}^{c_m} φ_m(c_m − v_m; T_{2m}) Σ_{w_1=0}^{l_1} Σ_{w_m=0}^{l_m} Q_i(Γ^(2m), v_1, v_m, w_1, w_m) ×
× P(x_1 < h_1, max{0, c_m − l'_m} = x_m, y_1 = 0, min{c_m, l'_m} = y_m);     (9)

Q_{i+1}(Γ^(2m+1), x_1, x_m, y_1, y_m) = Σ_{v_1=0}^{x_1} φ_1(x_1 − v_1; T_{2m}) ×
× Σ_{c_m=0}^{∞} Σ_{v_m=0}^{c_m} φ_m(c_m − v_m; T_{2m}) Σ_{w_1=0}^{l_1} Σ_{w_m=0}^{l_m} Q_i(Γ^(2m), v_1, v_m, w_1, w_m) ×
× P(x_1 ≥ h_1, max{0, c_m − l'_m} = x_m, y_1 = 0, min{c_m, l'_m} = y_m).     (10)

Theorem 2. The state space S = Γ × X × X × Y_1 × Y_m of the Markov chain (4) consists of the set D of transient states and the minimal closed set E of recurrent aperiodic states:

D = {(Γ^(1), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ {0, 1, ..., h_1 − 1}} ∪
∪ {(Γ^(k), x_1, x_m, y_1, y_m) ∈ S: k ∈ M\{2}, y_1 ∈ Y_1\{0}} ∪
∪ {(Γ^(k), x_1, x_m, y_1, y_m) ∈ S: k ∈ M\{2m, 2m+1}, y_m ∈ Y_m\{0}} ∪
∪ {(Γ^(2), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ X\{0}, y_1 ∈ Y_1\{l_1}} ∪
∪ {(Γ^(2), x_1, x_m, y_1, y_m) ∈ S: y_1 ∈ {0, 1, ..., h_1 − 1}} ∪
∪ {(Γ^(2m−1), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ {h_1, h_1 + 1, ...}} ∪
∪ {(Γ^(2m), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ {0, 1, ..., h_1 − 1}, x_m ∈ X\{0}, y_m ∈ Y_m\{l'_m, l_m}} ∪
∪ {(Γ^(2m), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ {h_1, h_1 + 1, ...}, x_m ∈ X\{0}, y_m ∈ Y_m\{0, l_m}} ∪
∪ {(Γ^(2m+1), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ {0, 1, ..., h_1 − 1}} ∪
∪ {(Γ^(2m+1), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ {h_1, h_1 + 1, ...}, x_m ∈ X\{0}, y_m ∈ {0, 1, ..., l'_m − 1}} ∪
∪ {(Γ^(2m+1), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ {h_1, h_1 + 1, ...}, y_m ∈ {l'_m + 1, l'_m + 2, ..., l_m}};

E(Γ^(1)) = {(Γ^(1), x_1, x_m, 0, 0): x_1 ∈ {h_1, h_1 + 1, ...}, x_m ∈ X};
E(Γ^(2)) = {(Γ^(2), x_1, x_m, l_1, 0): x_1 ∈ X, x_m ∈ X} ∪
∪ {(Γ^(2), 0, x_m, y_1, 0): x_m ∈ X, y_1 ∈ {h_1, h_1 + 1, ..., l_1 − 1}};
E(Γ^(k)) = {(Γ^(k), x_1, x_m, 0, 0): x_1 ∈ X, x_m ∈ X},  k ∈ {3, 4, ..., 2m−2};
E(Γ^(2m−1)) = {(Γ^(2m−1), x_1, x_m, 0, 0): x_1 ∈ {0, 1, ..., h_1 − 1}, x_m ∈ X};
E(Γ^(2m)) = {(Γ^(2m), x_1, x_m, 0, 0): x_1 ∈ {h_1, h_1 + 1, ...}, x_m ∈ X\{0}} ∪
∪ {(Γ^(2m), x_1, x_m, 0, l_m): x_1 ∈ X, x_m ∈ X\{0}} ∪
∪ {(Γ^(2m), x_1, 0, 0, y_m): x_1 ∈ X, y_m ∈ Y_m} ∪
∪ {(Γ^(2m), x_1, x_m, 0, l'_m): x_1 ∈ {0, 1, ..., h_1 − 1}, x_m ∈ X\{0}};
E(Γ^(2m+1)) = {(Γ^(2m+1), x_1, x_m, 0, l'_m): x_1 ∈ {h_1, h_1 + 1, ...}, x_m ∈ X\{0}} ∪
∪ {(Γ^(2m+1), x_1, 0, 0, y_m): x_1 ∈ {h_1, h_1 + 1, ...}, y_m ∈ {0, 1, ..., l'_m}};

E = ∪_{k=1}^{2m+1} E(Γ^(k)).

Proof. First of all, it is necessary to determine the states of the Markov chain such that the probabilities of being in them are equal to zero at any moment starting from τ_1. Based on relation (5) and the control algorithm function (1), it follows that the Markov chain can move with positive probability to a state of the form (Γ^(1), x_1, x_m, y_1, y_m) ∈ S only from a state of the form (Γ^(2m+1), v_1, v_m, w_1, w_m) ∈ S. The probability P(y_1 = 0, y_m = 0) on the right side of equation (5) equals zero if at least one of the equalities y_1 = 0 and y_m = 0 does not hold. That is why the probability of being in any state from the set

D(Γ^(1)) = {(Γ^(1), x_1, x_m, y_1, y_m) ∈ S: y_1 ∈ Y_1\{0}} ∪ {(Γ^(1), x_1, x_m, y_1, y_m) ∈ S: y_m ∈ Y_m\{0}}

at the moment τ_i equals zero for any i ∈ I\{0}. According to relation (6), if the Markov chain initially starts from any state of the set D(Γ^(1)), it moves with positive probability to a state of the form (Γ^(2), 0, x_m, y_1, 0) ∈ S. Following the algorithm s(Γ), the chain then reaches with positive probability a state of the form (Γ^(2m+1), x_1, x_m, y_1, y_m) ∈ S, leaving which the Markov chain has zero probability of moving to any state of the set D(Γ^(1)). Therefore, the states of the set D(Γ^(1)) are transient by definition.

Similarly, based on the recurrent relations (6)–(10), it can be derived that the probabilities of being in any state from the sets

D(Γ^(2)) = {(Γ^(2), x_1, x_m, y_1, y_m) ∈ S: y_m ∈ Y_m\{0}} ∪
∪ {(Γ^(2), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ X\{0}, y_1 ∈ Y_1\{l_1}},
D(Γ^(k)) = {(Γ^(k), x_1, x_m, y_1, y_m) ∈ S: y_1 ∈ Y_1\{0}} ∪
∪ {(Γ^(k), x_1, x_m, y_1, y_m) ∈ S: y_m ∈ Y_m\{0}},  k ∈ {3, 4, ..., 2m−2},
D(Γ^(2m−1)) = {(Γ^(2m−1), x_1, x_m, y_1, y_m) ∈ S: y_1 ∈ Y_1\{0}} ∪
∪ {(Γ^(2m−1), x_1, x_m, y_1, y_m) ∈ S: y_m ∈ Y_m\{0}} ∪
∪ {(Γ^(2m−1), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ {h_1, h_1 + 1, ...}},
D(Γ^(2m)) = {(Γ^(2m), x_1, x_m, y_1, y_m) ∈ S: y_1 ∈ Y_1\{0}} ∪
∪ {(Γ^(2m), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ {0, 1, ..., h_1 − 1}, x_m ∈ X\{0}, y_m ∈ Y_m\{l'_m, l_m}} ∪
∪ {(Γ^(2m), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ {h_1, h_1 + 1, ...}, x_m ∈ X\{0}, y_m ∈ Y_m\{0, l_m}},
D(Γ^(2m+1)) = {(Γ^(2m+1), x_1, x_m, y_1, y_m) ∈ S: y_1 ∈ Y_1\{0}} ∪
∪ {(Γ^(2m+1), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ {0, 1, ..., h_1 − 1}} ∪
∪ {(Γ^(2m+1), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ {h_1, h_1 + 1, ...}, y_m ∈ {l'_m + 1, l'_m + 2, ..., l_m}} ∪
∪ {(Γ^(2m+1), x_1, x_m, y_1, y_m) ∈ S: x_1 ∈ {h_1, h_1 + 1, ...}, x_m ∈ X\{0}, y_m ∈ {0, 1, ..., l'_m − 1}}

at the moment τ_i for i ∈ I\{0} equal zero, which means all of the states mentioned above are transient.


Secondly, consider one more subset of the state space S. According to (5), the following equation takes place for any x_1, x_m ∈ X:

Q_{i+1}(Γ^(1), x_1, x_m, 0, 0) = Σ_{v_1=0}^{x_1} φ_1(x_1 − v_1; T_{2m+1}) Σ_{v_m=0}^{x_m} φ_m(x_m − v_m; T_{2m+1}) Σ_{w_1=0}^{l_1} Σ_{w_m=0}^{l_m} Q_i(Γ^(2m+1), v_1, v_m, w_1, w_m).

This means a state of the form (Γ^(1), x_1, x_m, 0, 0), x_1 ∈ {0, 1, ..., h_1 − 1}, x_m ∈ X, is achievable only from the set D(Γ^(2m+1)), namely from the states (Γ^(2m+1), v_1, v_m, w_1, w_m) ∈ S with v_1 ∈ {0, 1, ..., h_1 − 1}. Therefore, at any moment τ_i, starting from i = 2, the probability of being in any state of the set

D*(Γ^(1)) = {(Γ^(1), x_1, x_m, 0, 0) ∈ S: x_1 ∈ {0, 1, ..., h_1 − 1}}

is equal to zero, and D*(Γ^(1)) also contains only transient states. In its turn, according to (6), the Markov chain can move with positive probability to the states of the set

D*(Γ^(2)) = {(Γ^(2), x_1, x_m, y_1, y_m) ∈ S: y_1 ∈ {0, 1, ..., h_1 − 1}}

only from the states of the set D*(Γ^(1)). This means that at any moment τ_i for i ∈ I\{0, 1, 2} the Markov chain (4) has zero probability of being in the states of the set D*(Γ^(2)). Therefore, D*(Γ^(2)) is a set of transient states as well. Note that

D = (∪_{k∈M} D(Γ^(k))) ∪ D*(Γ^(1)) ∪ D*(Γ^(2)).

The set D is an open set which contains only transient states of the chain (4).

It can be easily verified that E = S\D. Let us show that all states of the set E communicate with each other. At first, consider the state (Γ^(2m−2), 0, 0, 0, 0) ∈ E. For any (Γ^(k), x_1, x_m, y_1, y_m) ∈ E let us demonstrate that it is possible to get to this state from (Γ^(2m−2), 0, 0, 0, 0) and to get back with positive probability in a finite number of steps. Such transitions between states will be further illustrated with the help of arrows directed towards the final state. If a one-step transition is considered, the arrow is also marked with the probability of this transition (given in square brackets).

1. For any x_1, x_m ∈ X the transition (Γ^(2m−2), x_1, x_m, 0, 0) → (Γ^(2m−2), 0, 0, 0, 0) may be performed as follows.

1.1. In case 0 ≤ x_1 < h_1:

(Γ^(2m−2), x_1, x_m, 0, 0) →[φ_1(h_1−x_1; T_{2m−2}) φ_m(0; T_{2m−2})] (Γ^(2m), h_1, x_m, 0, 0) →[φ_1(0; T_{2m}) φ_m(0; T_{2m})]
→ (Γ^(2m+1), h_1, max{0, x_m − l'_m}, 0, min{x_m, l'_m}) →[φ_1(0; T_{2m+1}) φ_m(0; T_{2m+1})]
→ (Γ^(1), h_1, max{0, x_m − l'_m}, 0, 0) →[φ_1(0; T_1) φ_m(0; T_1)]
→ (Γ^(2), 0, max{0, x_m − l'_m}, h_1, 0) →[φ_1(0; T_2) φ_m(0; T_2)] (Γ^(3), 0, max{0, x_m − l'_m}, 0, 0) →
→ ⋯ →[φ_1(0; T_{2m−3}) φ_m(0; T_{2m−3})] (Γ^(2m−2), 0, max{0, x_m − l'_m}, 0, 0).

Such a procedure should be repeated ⌊x_m / l'_m⌋ + 1 times until the Markov chain gets to the state (Γ^(2m−2), 0, 0, 0, 0).

1.2. In case x_1 ≥ h_1:

(Γ^(2m−2), x_1, x_m, 0, 0) →[φ_1(0; T_{2m−2}) φ_m(0; T_{2m−2})] (Γ^(2m), x_1, x_m, 0, 0) →[φ_1(0; T_{2m}) φ_m(0; T_{2m})]
→ (Γ^(2m+1), x_1, max{0, x_m − l'_m}, 0, min{x_m, l'_m}) →[φ_1(0; T_{2m+1}) φ_m(0; T_{2m+1})]
→ (Γ^(1), x_1, max{0, x_m − l'_m}, 0, 0) →[φ_1(0; T_1) φ_m(0; T_1)]
→ (Γ^(2), max{0, x_1 − l_1}, max{0, x_m − l'_m}, min{x_1, l_1}, 0) →[φ_1(0; T_2) φ_m(0; T_2)]
→ (Γ^(3), max{0, x_1 − l_1}, max{0, x_m − l'_m}, 0, 0) → ⋯ →[φ_1(0; T_{2m−3}) φ_m(0; T_{2m−3})]
→ (Γ^(2m−2), max{0, x_1 − l_1}, max{0, x_m − l'_m}, 0, 0).

Such a combination of transitions is repeated ⌊x_1 / l_1⌋ times. If any of the inequalities

max{0, x_1 − ⌊x_1/l_1⌋ · l_1} > 0,  max{0, x_m − ⌊x_1/l_1⌋ · l'_m} > 0

takes place, then proceed with 1.1 after performing such a combination.

2. Consider the states (Γ^(2m−1), x_1, x_m, 0, 0) ∈ E(Γ^(2m−1)) for x_1 ∈ {0, 1, ..., h_1 − 1}. The transition

(Γ^(2m−2), 0, 0, 0, 0) →[φ_1(x_1; T_{2m−2}) φ_m(x_m; T_{2m−2})] (Γ^(2m−1), x_1, x_m, 0, 0)

is possible. The backward transition may be as follows:

(Γ^(2m−1), x_1, x_m, 0, 0) →[φ_1(h_1−x_1; T_{2m−1}) φ_m(0; T_{2m−1})] (Γ^(2m), h_1, max{0, x_m − l_m}, 0, min{x_m, l_m}) →[φ_1(0; T_{2m}) φ_m(0; T_{2m})]
→ (Γ^(2m+1), h_1, max{0, x_m − l_m − l'_m}, 0, min{max{0, x_m − l_m}, l'_m}) →[φ_1(0; T_{2m+1}) φ_m(0; T_{2m+1})]
→ (Γ^(1), h_1, max{0, x_m − l_m − l'_m}, 0, 0) →[φ_1(0; T_1) φ_m(0; T_1)]
→ (Γ^(2), 0, max{0, x_m − l_m − l'_m}, h_1, 0) →[φ_1(0; T_2) φ_m(0; T_2)]
→ (Γ^(3), 0, max{0, x_m − l_m − l'_m}, 0, 0) → ⋯ →[φ_1(0; T_{2m−3}) φ_m(0; T_{2m−3})]
→ (Γ^(2m−2), 0, max{0, x_m − l_m − l'_m}, 0, 0).

If max{0, x_m − l_m − l'_m} ≠ 0, go to step 1.1 after completing such a procedure.

3. Consider the states (Γ^(k), x_1, x_m, y_1, y_m) ∈ E(Γ^(2m)).

3.1. Let k = 2m, x_1 ∈ {h_1, h_1 + 1, ...}, x_m ∈ X, y_1 = y_m = 0. Then the transition

(Γ^(2m−2), 0, 0, 0, 0) →[φ_1(x_1; T_{2m−2}) φ_m(x_m; T_{2m−2})] (Γ^(2m), x_1, x_m, 0, 0)

takes place. The backward transition starts with

(Γ^(2m), x_1, x_m, 0, 0) →[φ_1(0; T_{2m}) φ_m(0; T_{2m})] (Γ^(2m+1), x_1, max{0, x_m − l'_m}, 0, min{x_m, l'_m}) →[φ_1(0; T_{2m+1}) φ_m(0; T_{2m+1})]
→ (Γ^(1), x_1, max{0, x_m − l'_m}, 0, 0) →[φ_1(0; T_1) φ_m(0; T_1)]
→ (Γ^(2), max{0, x_1 − l_1}, max{0, x_m − l'_m}, min{x_1, l_1}, 0) →[φ_1(0; T_2) φ_m(0; T_2)]
→ (Γ^(3), max{0, x_1 − l_1}, max{0, x_m − l'_m}, 0, 0) → ⋯ →[φ_1(0; T_{2m−3}) φ_m(0; T_{2m−3})]
→ (Γ^(2m−2), max{0, x_1 − l_1}, max{0, x_m − l'_m}, 0, 0).

If max{0, x_m − l'_m} ≠ 0 or max{0, x_1 − l_1} ≠ 0, proceed with step 1.

3.2. Let k = 2m, x_1 ∈ X, x_m ∈ X\{0}, y_1 = 0, y_m = l_m. Then it is possible to perform the following transition:

(Γ^(2m−2), 0, 0, 0, 0) →[φ_1(0; T_{2m−2}) φ_m(x_m + l_m; T_{2m−2})] (Γ^(2m−1), 0, x_m + l_m, 0, 0) →[φ_1(x_1; T_{2m−1}) φ_m(0; T_{2m−1})] (Γ^(2m), x_1, x_m, 0, l_m).

The backward transition starts with

(Γ^(2m), x_1, x_m, 0, l_m) → ⋯ → (Γ^(2m), max{x_1, h_1}, x_m, 0, 0),

and continues with the transitions of case 3.1.

3.3. In case k = 2m, x_1 ∈ X, x_m = 0, y_1 = 0, y_m ∈ Y_m the forward transition can be performed as follows:

(Γ^(2m−2), 0, 0, 0, 0) →[φ_1(0; T_{2m−2}) φ_m(y_m; T_{2m−2})] (Γ^(2m−1), 0, y_m, 0, 0) →[φ_1(x_1; T_{2m−1}) φ_m(0; T_{2m−1})] (Γ^(2m), x_1, 0, 0, y_m).

In its turn, the backward transition is

(Γ^(2m), x_1, 0, 0, y_m) → ⋯ → (Γ^(2m), max{x_1, h_1}, 0, 0, 0),

after which one proceeds with the transitions of case 3.1.

3.4. Let k = 2m, x_1 ∈ {0, 1, ..., h_1 − 1}, x_m ∈ X\{0}, y_1 = 0, y_m = l'_m. There is a positive probability that the transition

(Γ^(2m−2), 0, 0, 0, 0) →[φ_1(0; T_{2m−2}) φ_m(0; T_{2m−2})] (Γ^(2m−1), 0, 0, 0, 0) →[φ_1(x_1; T_{2m−1}) φ_m(0; T_{2m−1})]
→ (Γ^(2m), x_1, 0, 0, 0) →[φ_1(0; T_{2m}) φ_m(x_m + l'_m; T_{2m})] (Γ^(2m), x_1, x_m, 0, l'_m)

takes place. The backward transition starts with the transition

(Γ^(2m), x_1, x_m, 0, l'_m) →[φ_1(0; T_{2m}) φ_m(0; T_{2m})] (Γ^(2m), x_1, max{0, x_m − l'_m}, 0, min{x_m, l'_m}),

which is repeated ⌊x_m / l'_m⌋ + 1 times, until the state (Γ^(2m), x_1, 0, 0, 0) is reached. After that, the transition continues with

(Γ^(2m), x_1, 0, 0, 0) → ⋯ → (Γ^(2m), max{x_1, h_1}, 0, 0, 0),

and then proceeds with case 3.1.

4. Consider now the states (Γ^(k), x_1, x_m, y_1, y_m) ∈ E(Γ^(2m+1)).

4.1. Let k = 2m+1, x_1 ∈ {h_1, h_1 + 1, ...}, x_m ∈ X\{0}, y_1 = 0, y_m = l'_m. In this case it is possible to perform the transition

(Γ^(2m−2), 0, 0, 0, 0) →[φ_1(x_1; T_{2m−2}) φ_m(x_m + l'_m; T_{2m−2})] (Γ^(2m), x_1, x_m + l'_m, 0, 0) →[φ_1(0; T_{2m}) φ_m(0; T_{2m})] (Γ^(2m+1), x_1, x_m, 0, l'_m)

and the backward transition

(Γ^(2m+1), x_1, x_m, 0, l'_m) →[φ_1(0; T_{2m+1}) φ_m(0; T_{2m+1})] (Γ^(1), x_1, x_m, 0, 0) →[φ_1(0; T_1) φ_m(0; T_1)]
→ (Γ^(2), max{0, x_1 − l_1}, x_m, min{x_1, l_1}, 0) →[φ_1(0; T_2) φ_m(0; T_2)]
→ (Γ^(3), max{0, x_1 − l_1}, x_m, 0, 0) → ⋯ →[φ_1(0; T_{2m−3}) φ_m(0; T_{2m−3})]
→ (Γ^(2m−2), max{0, x_1 − l_1}, x_m, 0, 0),

which is continued with case 1.

4.2. Let now k = 2m+1, x_1 ∈ {h_1, h_1 + 1, ...}, x_m = 0, and also y_1 = 0, y_m ∈ {0, 1, ..., l'_m}. The forward transition

(Γ^(2m−2), 0, 0, 0, 0) →[φ_1(x_1; T_{2m−2}) φ_m(y_m; T_{2m−2})] (Γ^(2m), x_1, y_m, 0, 0) →[φ_1(0; T_{2m}) φ_m(0; T_{2m})] (Γ^(2m+1), x_1, 0, 0, y_m)

is possible. The backward transition may, for example, be as follows:

(Γ^(2m+1), x_1, 0, 0, y_m) →[φ_1(0; T_{2m+1}) φ_m(0; T_{2m+1})] (Γ^(1), x_1, 0, 0, 0) →[φ_1(0; T_1) φ_m(0; T_1)]
→ (Γ^(2), max{0, x_1 − l_1}, 0, min{x_1, l_1}, 0) →[φ_1(0; T_2) φ_m(0; T_2)]
→ (Γ^(3), max{0, x_1 − l_1}, 0, 0, 0) → ⋯ →[φ_1(0; T_{2m−3}) φ_m(0; T_{2m−3})]
→ (Γ^(2m−2), max{0, x_1 − l_1}, 0, 0, 0).

In case max{0, x_1 − l_1} ≠ 0 proceed with transition 1.

5. For any state (Γ^(k), x_1, x_m, y_1, y_m) from the set E(Γ^(1)) the equalities k = 1, x_1 ∈ {h_1, h_1 + 1, ...}, x_m ∈ X, y_1 = 0, y_m = 0 take place. Therefore, the transition

(Γ^(2m−2), 0, 0, 0, 0) →[φ_1(x_1; T_{2m−2}) φ_m(x_m + l'_m; T_{2m−2})] (Γ^(2m), x_1, x_m + l'_m, 0, 0) →[φ_1(0; T_{2m}) φ_m(0; T_{2m})]
→ (Γ^(2m+1), x_1, x_m, 0, l'_m) →[φ_1(0; T_{2m+1}) φ_m(0; T_{2m+1})] (Γ^(1), x_1, x_m, 0, 0)

is possible. The backward transition starts with

(Γ^(1), x_1, x_m, 0, 0) →[φ_1(0; T_1) φ_m(0; T_1)] (Γ^(2), max{0, x_1 − l_1}, x_m, min{x_1, l_1}, 0) →[φ_1(0; T_2) φ_m(0; T_2)]
→ (Γ^(3), max{0, x_1 − l_1}, x_m, 0, 0) → ⋯ →[φ_1(0; T_{2m−3}) φ_m(0; T_{2m−3})]
→ (Γ^(2m−2), max{0, x_1 − l_1}, x_m, 0, 0).

Then, it continues with case 1 if x_m ≠ 0 or max{0, x_1 − l_1} ≠ 0.

6. Consider a state of the form (Γ^(k), x_1, x_m, y_1, y_m) ∈ E(Γ^(2)).

6.1. If k = 2, x_1, x_m ∈ X, y_1 = l_1, y_m = 0, the forward transition may be as follows:

(Γ^(2m−2), 0, 0, 0, 0) →[φ_1(h_1; T_{2m−2}) φ_m(x_m + l'_m; T_{2m−2})] (Γ^(2m), h_1, x_m + l'_m, 0, 0) →[φ_1(0; T_{2m}) φ_m(0; T_{2m})]
→ (Γ^(2m+1), h_1, x_m, 0, l'_m) →[φ_1(x_1 + l_1 − h_1; T_{2m+1}) φ_m(0; T_{2m+1})]
→ (Γ^(1), x_1 + l_1, x_m, 0, 0) →[φ_1(0; T_1) φ_m(0; T_1)] (Γ^(2), x_1, x_m, l_1, 0).

The backward transition contains

(Γ^(2), x_1, x_m, l_1, 0) →[φ_1(0; T_2) φ_m(0; T_2)] (Γ^(3), x_1, x_m, 0, 0) → ⋯ →[φ_1(0; T_{2m−3}) φ_m(0; T_{2m−3})] (Γ^(2m−2), x_1, x_m, 0, 0).

In case x_1 ≠ 0 or x_m ≠ 0, it is then necessary to go on with case 1.

6.2. Consider the case k = 2, x_1 = 0, x_m ∈ X, y_1 ∈ {h_1, h_1 + 1, ..., l_1 − 1}, y_m = 0. The feasible forward transition is

(Γ^(2m−2), 0, 0, 0, 0) →[φ_1(h_1; T_{2m−2}) φ_m(x_m + l'_m; T_{2m−2})] (Γ^(2m), h_1, x_m + l'_m, 0, 0) →[φ_1(0; T_{2m}) φ_m(0; T_{2m})]
→ (Γ^(2m+1), h_1, x_m, 0, l'_m) →[φ_1(y_1 − h_1; T_{2m+1}) φ_m(0; T_{2m+1})]
→ (Γ^(1), y_1, x_m, 0, 0) →[φ_1(0; T_1) φ_m(0; T_1)] (Γ^(2), 0, x_m, y_1, 0).

And a probable backward transition has the following form:

(Γ^(2), 0, x_m, y_1, 0) →[φ_1(0; T_2) φ_m(0; T_2)] (Γ^(3), 0, x_m, 0, 0) → ⋯ →[φ_1(0; T_{2m−3}) φ_m(0; T_{2m−3})] (Γ^(2m−2), 0, x_m, 0, 0).

If x_m ≠ 0, proceed with case 1.

7. Finally, let (Γ^(k), x_1, x_m, y_1, y_m) ∈ E(Γ^(r)), r ∈ {3, 4, ..., 2m−2}. Then k ∈ {3, 4, ..., 2m−2}, x_1, x_m ∈ X, y_1 = y_m = 0, and the following transitions take place:

(Γ^(2m−2), 0, 0, 0, 0) →[φ_1(h_1; T_{2m−2}) φ_m(x_m + l'_m; T_{2m−2})] (Γ^(2m), h_1, x_m + l'_m, 0, 0) →[φ_1(0; T_{2m}) φ_m(0; T_{2m})]
→ (Γ^(2m+1), h_1, x_m, 0, l'_m) →[φ_1(x_1 + l_1 − h_1; T_{2m+1}) φ_m(0; T_{2m+1})]
→ (Γ^(1), x_1 + l_1, x_m, 0, 0) →[φ_1(0; T_1) φ_m(0; T_1)] (Γ^(2), x_1, x_m, l_1, 0) →[φ_1(0; T_2) φ_m(0; T_2)] → ⋯ → (Γ^(k), x_1, x_m, 0, 0)

and

(Γ^(k), x_1, x_m, 0, 0) →[φ_1(0; T_k) φ_m(0; T_k)] ⋯ →[φ_1(0; T_{2m−3}) φ_m(0; T_{2m−3})] (Γ^(2m−2), x_1, x_m, 0, 0).

Note that if x_1 ≠ 0 or x_m ≠ 0, it is necessary to continue with case 1.

Now it is seen that every two states of the set E communicate with each other, at least via the state (Γ^(2m−2), 0, 0, 0, 0). Therefore, the set E is an indecomposable class of recurrent communicating states, i.e. a minimal closed set (see [13]). Moreover, this class contains the state (Γ^(2m), 0, 0, 0, l'_m), for which the loop transition

(Γ^(2m), 0, 0, 0, l'_m) →[φ_1(0; T_{2m}) φ_m(l'_m; T_{2m})] (Γ^(2m), 0, 0, 0, l'_m)

is possible. Thus, this state has period 1, which means the class E is a class of aperiodic states (see [14]).
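The classification of Theorem 2 can be condensed into a membership predicate for the minimal closed set E. The following Python sketch (with l'_m written as lpm) is only an executable restatement of the sets E(Γ^(k)) listed above, with illustrative parameter values in the examples.

```python
def is_recurrent(state, m, h1, l1, lm, lpm):
    """True iff state = (k, x1, xm, y1, ym) belongs to the minimal closed
    set E of Theorem 2; lpm stands for l'_m."""
    k, x1, xm, y1, ym = state
    if k == 1:                                   # E(Gamma^(1))
        return x1 >= h1 and y1 == 0 and ym == 0
    if k == 2:                                   # E(Gamma^(2))
        return ym == 0 and (y1 == l1 or (x1 == 0 and h1 <= y1 < l1))
    if 3 <= k <= 2 * m - 2:                      # intermediate states
        return y1 == 0 and ym == 0
    if k == 2 * m - 1:                           # E(Gamma^(2m-1))
        return x1 < h1 and y1 == 0 and ym == 0
    if k == 2 * m:                               # E(Gamma^(2m)), four subsets
        return y1 == 0 and ((x1 >= h1 and xm != 0 and ym == 0) or
                            (xm != 0 and ym == lm) or xm == 0 or
                            (x1 < h1 and xm != 0 and ym == lpm))
    if k == 2 * m + 1:                           # E(Gamma^(2m+1))
        return y1 == 0 and x1 >= h1 and ((xm != 0 and ym == lpm) or
                                         (xm == 0 and ym <= lpm))
    return False

# examples with m = 3, h1 = 2, l1 = 4, l_m = 5, l'_m = 2
assert is_recurrent((4, 9, 9, 0, 0), 3, 2, 4, 5, 2)       # middle state, y = (0, 0)
assert is_recurrent((5, 1, 7, 0, 0), 3, 2, 4, 5, 2)       # Gamma^(2m-1) with x1 < h1
assert not is_recurrent((5, 2, 7, 0, 0), 3, 2, 4, 5, 2)   # x1 >= h1 is transient there
assert not is_recurrent((1, 1, 0, 0, 0), 3, 2, 4, 5, 2)   # the set D*(Gamma^(1))
```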

5 Ergodic theorem

Theorem 3. For any initial distribution

{Q_0(Γ^(k), x_1, x_m, y_1, y_m): (Γ^(k), x_1, x_m, y_1, y_m) ∈ S}

of the multidimensional Markov chain (4) two limiting options are possible: either 1) for any (Γ^(k), x_1, x_m, y_1, y_m) ∈ S the limiting equality

lim_{i→∞} Q_i(Γ^(k), x_1, x_m, y_1, y_m) = 0

takes place and there is no stationary distribution, or 2) the limits

lim_{i→∞} Q_i(Γ^(k), x_1, x_m, y_1, y_m) = Q(Γ^(k), x_1, x_m, y_1, y_m)

exist, where

Q(Γ^(k), x_1, x_m, y_1, y_m) > 0 for (Γ^(k), x_1, x_m, y_1, y_m) ∈ E,
Q(Γ^(k), x_1, x_m, y_1, y_m) = 0 for (Γ^(k), x_1, x_m, y_1, y_m) ∈ D,

the equality

Σ_{(Γ^(k), x_1, x_m, y_1, y_m) ∈ S} Q(Γ^(k), x_1, x_m, y_1, y_m) = 1

takes place, and there is one and only one stationary distribution.

Proof. Since the set D is countable, a situation might occur in which the Markov chain with an initial distribution concentrated on the set of transient states walks in this set indefinitely. Let us demonstrate that such behaviour is not possible for the Markov chain (4). According to notation (3) and relations (5)–(10), the probability

P_E(Γ^(k), x_1, x_m, y_1, y_m) = P(χ_1 ∈ E | χ_0 = (Γ^(k), x_1, x_m, y_1, y_m))

that the Markov chain (4) moves from (Γ^(k), x_1, x_m, y_1, y_m) ∈ D to some state of the set E in one step is positive. Moreover, for any state (Γ^(k), x_1, x_m, y_1, y_m) ∈ D the estimate

P_E(Γ^(k), x_1, x_m, y_1, y_m) ≥ min{φ_1(h_1; T_1) φ_m(0; T_1), φ_1(h_1; T_{2m−1}) φ_m(0; T_{2m−1})} > 0     (11)

takes place. Let P_{D→E}(Γ^(k), x_1, x_m, y_1, y_m) be the probability that the chain (4) ever comes to the class E starting from a transient state (Γ^(k), x_1, x_m, y_1, y_m) ∈ D, i.e.

P_{D→E}(Γ^(k), x_1, x_m, y_1, y_m) = Σ_{n=1}^{∞} P(χ_n ∈ E, χ_i ∈ D, i = 0, 1, ..., n−1 | χ_0 = (Γ^(k), x_1, x_m, y_1, y_m)).

Then, according to [15], these probabilities satisfy the system of linear equations

P_{D→E}(Γ^(k), x_1, x_m, y_1, y_m) = Σ_{(Γ^(r), v_1, v_m, w_1, w_m) ∈ D} P_{D→E}(Γ^(r), v_1, v_m, w_1, w_m) ×
× P(χ_1 = (Γ^(r), v_1, v_m, w_1, w_m) | χ_0 = (Γ^(k), x_1, x_m, y_1, y_m)) +     (12)
+ P_E(Γ^(k), x_1, x_m, y_1, y_m),  (Γ^(k), x_1, x_m, y_1, y_m) ∈ D.

Inequality (11) for any (Γ^(k), x_1, x_m, y_1, y_m) ∈ D allows one to prove the estimate

Σ_{(Γ^(r), v_1, v_m, w_1, w_m) ∈ D} P(χ_1 = (Γ^(r), v_1, v_m, w_1, w_m) | χ_0 = (Γ^(k), x_1, x_m, y_1, y_m)) =
= 1 − P_E(Γ^(k), x_1, x_m, y_1, y_m) ≤
≤ 1 − min{φ_1(h_1; T_1) φ_m(0; T_1), φ_1(h_1; T_{2m−1}) φ_m(0; T_{2m−1})} < 1.

Thus, the system (12) is a completely regular system. Then, according to [16], this system has a unique bounded solution. It can be easily verified that this solution is P_{D→E}(Γ^(k), x_1, x_m, y_1, y_m) = 1 for any (Γ^(k), x_1, x_m, y_1, y_m) ∈ D. Therefore, the Markov chain leaves the set D of transient states with probability one.

If the initial distribution is concentrated on the closed set E of recurrent states, the Markov chain (4) becomes an irreducible aperiodic Markov chain. In this case the statement of the theorem follows from the ergodic theorem in [15].

Note that the reasoning above demonstrates a general method for proving similar statements for systems with the scheme of Figure 1. Such a method is especially useful when it is difficult to determine, based on the recurrent relations, the finite number of steps that a Markov chain needs in order to leave the set of transient states. However, it was proved in Theorem 2 that in the case of the algorithm s(Γ) with the graph of Figure 2 it is enough to make three steps in order to leave the set D. In other words, for any initial distribution,

Q_i(Γ^(k), x_1, x_m, y_1, y_m) = 0 for any (Γ^(k), x_1, x_m, y_1, y_m) ∈ D and i ∈ I\{0, 1, 2}.
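The three-step bound can be checked exhaustively for small parameter values: starting from any state (the ξ̄-components do not influence the dynamics), every trajectory of three steps built from a grid of arrival counts must end inside E. The sketch below fixes m = 2, h_1 = 2, l_1 = 4, l_m = 5, l'_m = 2; these values are illustrative and chosen only so that h_1 < l_1.

```python
from itertools import product

M, H1, L1, LM, LPM = 2, 2, 4, 5, 2        # illustrative: m, h1, l1, l_m, l'_m

def u(k, x1, n1):
    # control function (1)
    if k == 2 * M - 2:
        return 2 * M - 1 if x1 + n1 < H1 else 2 * M
    if k == 2 * M:
        return 2 * M if x1 + n1 < H1 else 2 * M + 1
    if k == 2 * M + 1:
        return 1
    return k + 1

def step(state, n1, nm):
    # one step of the recurrent relations; capacities follow (2)
    k, x1, xm, _, _ = state
    b1 = L1 if k == 1 else 0
    bm = LM if k == 2 * M - 1 else (LPM if k == 2 * M else 0)
    return (u(k, x1, n1), max(0, x1 + n1 - b1), max(0, xm + nm - bm),
            min(x1 + n1, b1), min(xm + nm, bm))

def in_E(state):
    # membership in the minimal closed set E of Theorem 2
    k, x1, xm, y1, ym = state
    if k == 1:
        return x1 >= H1 and y1 == 0 and ym == 0
    if k == 2:
        return ym == 0 and (y1 == L1 or (x1 == 0 and H1 <= y1 < L1))
    if k == 2 * M - 1:
        return x1 < H1 and y1 == 0 and ym == 0
    if k == 2 * M:
        return y1 == 0 and ((x1 >= H1 and xm != 0 and ym == 0) or
                            (xm != 0 and ym == LM) or xm == 0 or
                            (x1 < H1 and xm != 0 and ym == LPM))
    if k == 2 * M + 1:
        return y1 == 0 and x1 >= H1 and ((xm != 0 and ym == LPM) or
                                         (xm == 0 and ym <= LPM))
    return y1 == 0 and ym == 0            # readjusting states 3, ..., 2m-2

# after three steps every trajectory lies inside E, whatever the start
for k, x1, xm in product(range(1, 2 * M + 2), range(6), range(6)):
    level = {(k, x1, xm, 0, 0)}
    for _ in range(3):
        level = {step(s, n1, nm) for s in level
                 for n1, nm in product(range(3), repeat=2)}
    assert all(in_E(s) for s in level)
```

The check enumerates arrival counts 0–2 per flow per step, which is only a partial grid; since every arrival count has positive probability, any three-step path landing in D would already falsify the claim on this grid.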

6 Conclusion

A multidimensional Markov chain which serves as a model of controlled queueing systems is researched. The structure of the Markov chain state space is investigated. It is proved that for any initial distribution the Markov chain leaves the set of transient states in a finite number of steps. Thereby, it is further recommended to choose the initial distribution only in the set of recurrent states. An ergodic theorem is formulated and proved. The results form the basis for further investigations concerning the conditions of stationary mode existence and the synthesis of optimal control.

References

[1] Kitaev, M. Yu. and Rykov, V. V. Controlled queueing systems. CRC Press, 1995.

[2] Rykov, V. and Efrosinin, D. Optimal control of queueing systems with heterogeneous servers. Queueing Systems, Springer, V. 46, No 3-4, 2004. P. 389-407.

[3] Vishnevsky, V. and Rykov, V. Automobile system based on the method for stochastic networks with dependent service times. Trends in mathematics, V. 2, 2015. P. 741-750.

[4] Afanasyeva, L. and Bulinskaya, E. Asymptotic Analysis of Traffic Lights Performance Under Heavy Traffic Assumption. Methodology and Computing in Applied Probability, V. 15, No 4, 2013. P. 935-950.

[5] Rykov, V. and Efrosinin, D. On Optimal Control of Systems on Their Life Time. Recent Advances in System Reliability, Springer Series in Reliability Engineering, V. 51, 2012. P. 307-319.

[6] Afanasyeva, L. and Bashtova, E. and Bulinskaya, E. Limit Theorems for Semi-Markov Queues and Their Applications. Communications in Statistics - Simulation and Computation, V. 41, No 6, 2012. P. 688-709.

[7] Petrova, O. V. and Ushakov, V. G. Asymptotic analysis of the Er(t)IGI1 queue. Informatika i ee primeneniya, V. 3, No 4, 2009. P. 35-40. (in Russian)

[8] Ushakov, A. V. and Ushakov, V. G. Limiting expectation time distribution for a critical load in a system with relative priority. Moscow University Computational Mathematics and Cybernetics, V. 37, No 1, 2013. P. 42-48.

[9] Afanasyeva, L. G. and Bashtova, E. E. Coupling method for asymptotic analysis of queues with regenerative input and unreliable server. Queueing systems, V. 76, No 2, 2014. P. 125-147.

[10] Fedotkin, M. A. and Rachinskaya, M. A. Computer simulation for a process of cyclic control of conflicting non-ordinary Poisson flows. Bulletin of the Volga State Academy of Water Transport, No 47, 2016. P. 43-51. (in Russian)

[11] Fedotkin, M. and Rachinskaya, M. Parameters Estimator of the Probabilistic Model of Moving Batches Traffic Flow. Distributed Computer and Communication Networks. Ser. Communications in Computer and Information Science, V. 279, 2014. P. 154-168.

[12] Fedotkin, M. A. and Rachinskaya, M. A. Model for the system of control of flows and service of the requests in case the flows have different intensity and priority. Bulletin of the Volga State Academy of Water Transport, No 48, 2016. P. 62-69. (in Russian)

[13] Chung, K. L. Markov chains with stationary transition probabilities. Springer-Verlag, 1960.

[14] Shiryaev, A. N. Probability. Springer Science+Business Media New York, 1996.

[15] Feller, W. An introduction to probability theory and its applications. V. I. John Wiley & Sons, Inc., New York, 1966.

[16] Kantorovich, L. V. and Krylov, V.I. Approximate methods of higher analysis. Fizmatgiz, 1962. (in Russian)
