Metastability Of Large Networks With Mobile Servers

F. Baccelli 1, A. Rybko 2, Senya Shlosman 2,3,4, A. Vladimirov 2

1 UT Austin, Department of Mathematics, USA
2 Institute for Information Transmission Problems, RAS, Moscow
3 Aix Marseille Université, Université de Toulon
4 Skolkovo Institute of Science and Technology, Moscow
vladim@iitp.ru

Abstract

We study symmetric queueing networks with moving servers and FIFO service discipline. The mean-field limit dynamics demonstrates unexpected behavior which we attribute to the metastability phenomenon. Large enough finite symmetric networks on regular graphs such as cycles are proved to be transient for arbitrarily small inflow rates. However, the limiting non-linear Markov process possesses at least two stationary solutions. The proof of transience is based on a martingale technique.¹

Keywords: ad hoc network, transience, metastability, mean field

I Introduction

In this paper we consider networks with moving servers. The setting is the following: the network lives on a finite or countable graph G = (V, E), at every node v ∈ V of which one server s is located at any time. For every server there are two incoming flows of customers: the exogenous customers, who come from the outside, and the transit customers, who come from other servers. Every customer c entering the network (through some initial server s(c)) is assigned a destination D(c) ∈ V according to some randomized rule. If a customer c is served by a server located at v ∈ V, then it jumps to a server at a node v' ∈ V such that dist(v', D(c)) = dist(v, D(c)) − 1, thereby coming closer to its destination. If there are several such v', one is chosen uniformly. There the customer c waits in the FIFO queue until its service starts. If a customer c completes its service at the server located at v, and it so happens that dist(v, D(c)) is 1 or 0, the customer is declared to have reached its destination and leaves the network.

The important feature of our model is that the servers of our network are themselves moving over the graph G. Namely, we suppose that any two servers s, s' located at adjacent nodes of G exchange their positions when the alarm clock associated with the edge rings. The time intervals between the rings of each alarm clock are i.i.d. exponential with rate β. When this happens, each of the two servers takes all the customers, waiting in its buffer or being served, to the new location. In particular, it can happen that after such a swap, the distance between the location of the customer c and its destination D(c) increases (at most by one). We assume that the service times of all customers at all servers are i.i.d. exponential with rate 1.

The motivation for this model comes from opportunistic multihop routing in mobile ad hoc wireless networks, see [5, 9, 7, 1, 3, 4]. Within this context, the servers represent mobile wireless devices. Each device moves randomly on the graph G which represents the phase space of device locations. The random swaps represent the random mobile motions on this phase space.

¹ The authors gratefully acknowledge the support of grants 16-29-09497, 14-01-00379, 14-01-00319, 13-01-12410 by the Russian Foundation for Sciences.

Each node v ∈ G of the phase space generates exogenous traffic (packetized information) with rate λ_v, corresponding to the exogenous customers alluded to above. Each such packet has some destination, which is some node of G. In opportunistic routing, each wireless device adopts the following greedy routing policy: any given packet scheduled for wireless transmission is sent to the neighboring node which is the closest to the packet destination. The neighbor condition represents in a simple way the wireless constraints. It implies a multihop route in general. This routing policy is the most natural one to use in view of the lack of knowledge of future random swaps.

In this paper we restrict consideration to cyclic graphs C_K = ℤ/Kℤ and their mean-field versions, see below. Our main results, however, can be easily extended to much wider classes of networks.

The interest in mean-field versions is both mathematical and practical. The mathematical interest of the mean-field version of a network is well documented. There are also practical motivations for analyzing such networks: their properties are crucial for understanding the long-time behavior of finite-size networks.

The results we obtain look somewhat surprising. First of all, we find that for finite graphs the network is transient once the diameter of the graph is large enough. For example, consider the network on the graph C_K with Poisson inflows with rate λ > 0 at all nodes, exponential service times with rate 1, FIFO discipline and node swap rate β > 0. Then for all K > K(λ, β) the queues at all servers tend to infinity as time grows. In words, this means that the network is unstable for any λ, however small it is, once the network is large enough.

The same picture takes place for mean-field graphs with N finite. They consist of N "parallel" copies of C_K, such that two nodes in different copies are adjacent if and only if the projections of these nodes to a single copy of C_K are adjacent. However, the limiting picture, for N = ∞, is different: the corresponding NLM process on C_K has stationary distributions, provided 0 < λ < λ_cr(K, β), with λ_cr(K, β) < ∞ for all K < ∞. Moreover, for all λ < λ_cr there are at least two different stationary distributions, see Section IV for more details. We present results of numerical modeling that suggest the existence of three equilibria in some cases.

On the other hand, the general convergence result of [2] claims the convergence, as N → ∞, of the networks on the N-fold mean-field graphs to the limiting one, which seems to contradict the statements above. The explanation of this 'contradiction' is that the convergence in [2] holds only on finite time intervals [0, T].

That is, for any T there exists a value N = N(T) such that the network on the N-fold mean-field graph is close to the limiting network for all t ∈ [0, T], provided N > N(T). Putting it differently, the network behaves like the limiting one, and might even look like a stationary process, for quite a long time, depending on N, but eventually it departs from such a regime and gets into the divergent one. Clearly, the picture we have is an instance of metastable behavior. We believe that more can be said about the metastable phase of our networks, including the formation of critical regions of servers with oversized queues, in the spirit of statistical mechanics, see e.g. [8], but we will not elaborate here on that topic.

II Finite networks

2.1 The C_K network

The only case of a finite network we study here is the cyclic graph C_K = ℤ/Kℤ. As was mentioned, our main results proved for this graph are easily extendable to much wider classes of networks. We use the notation C_K = (V_K, E), where V_K = {1, ..., K} and E = {(1,2), ..., (K − 1, K), (K, 1)}. For simplicity we take K to be odd.

We study a continuous-time Markov process on a countable state space Q, related to the graph C_K. Namely,

Q = {q_v : v ∈ V_K} = W^{V_K}, where W is the set of all finite words in the alphabet V_K, including the empty word ∅.

The queue q_v ∈ W at a server located at v ∈ V_K consists of a finite (possibly zero) number of customers, which are ordered by their arrival times (FIFO service discipline) and are marked by their destinations, which are vertices of the graph C_K. Since the destination of a customer is its only relevant feature, in our notation we will sometimes identify customers with their destinations.

2.1.1 Dynamics

Let us introduce the continuous-time Markov process M ≡ M(t) with the state space Q. Let h_v be the length of the queue q_v at node v. We have q_v = (q_1^v, ..., q_{h_v}^v) if h_v > 0 and q_v = ∅ if h_v = 0.

The following events may happen in the process M.

An arrival event at node v changes the queue at this node. If the newly arrived customer has for its destination the node w, then the queue changes from q_v to q_v ⊕ w, that is, to (q_1^v, ..., q_{h_v}^v, w) if h_v > 0, or from ∅ to (w) if h_v = 0.

In this paper we consider the situation where each exogenous customer acquires its destination at the moment of first arrival to the system, in a translation-invariant manner: the probability to get destination w while arriving to our network at the node v depends only on w − v mod K. The case w = v is not excluded. We thus have the rates λ_{v,w}, v, w ∈ C_K, and the jump from q_v to q_v ⊕ w, corresponding to the arrival at v of an exogenous customer with final destination w, happens with the rate λ_{v,w}. We introduce the rate λ of exogenous customers as

λ = Σ_w λ_{v,w}   (1)

(according to our definitions it does not depend on v).

Each node is equipped with an independent Poisson clock with parameter 1 (the service rate). As it rings, the service of the customer q_1^v is over, provided h_v > 0; nothing happens if h_v = 0. In the former case the queue at node v changes from q_v to

q_v^- = (q_2^v, ..., q_{h_v}^v)

(we also define ∅^- = ∅) and immediately one of two things happens: either the customer q_1^v leaves the network, or it jumps to one of the two neighboring queues, q_{v±1}. The customer q_1^v leaves the network only if its current position, v, is at distance ≤ 1 from its destination, i.e. iff q_1^v = v − 1, v, or v + 1. (This is just one of many possible choices, made for simplicity.) Otherwise it jumps to the neighboring vertex w = v ± 1 which is the closest to its destination, i.e. to the one which satisfies dist(w, q_1^v) = dist(v, q_1^v) − 1 (there is a unique such w ∈ V_K since we assume K to be odd; the case of even K requires small changes).

The last type of event is the swap of two neighboring servers. Namely, there is an independent Poisson clock at each edge uv ∈ E of C_K, with rate β > 0. As it rings, the queues at the vertices u and v swap their positions, that is,

q_v(t^+) = q_u(t),   q_u(t^+) = q_v(t).
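The dynamics just described can be summarized in a short event-driven (Gillespie-type) simulation. The sketch below is our illustration, not the authors' code; the parameter values, the helper names, and the uniform choice of exogenous destinations (one admissible translation-invariant rule) are our own assumptions.

import random
from collections import deque

def cyc_dist(u, v, K):
    """Distance on the cycle C_K (nodes 0, ..., K-1)."""
    d = abs(u - v) % K
    return min(d, K - d)

def next_hop(v, dest, K):
    """The unique neighbor of v one step closer to dest (K odd, cyc_dist(v, dest) > 1)."""
    for w in ((v + 1) % K, (v - 1) % K):
        if cyc_dist(w, dest, K) == cyc_dist(v, dest, K) - 1:
            return w
    raise ValueError("called with cyc_dist(v, dest) <= 1")

def simulate(K=31, lam=0.2, beta=1.0, horizon=5_000.0, seed=1):
    """Queues are indexed by node; each customer is stored as its destination."""
    rng = random.Random(seed)
    queues = [deque() for _ in range(K)]
    t = 0.0
    while t < horizon:
        busy = sum(1 for q in queues if q)
        total = K * lam + busy + K * beta        # arrivals + services + edge swaps
        t += rng.expovariate(total)
        u = rng.random() * total
        if u < K * lam:                          # exogenous arrival at a uniform node
            v = rng.randrange(K)
            queues[v].append(rng.randrange(K))   # uniform destination (w = v allowed)
        elif u < K * lam + busy:                 # service completion at a uniform busy node
            v = rng.choice([i for i in range(K) if queues[i]])
            dest = queues[v].popleft()
            if cyc_dist(v, dest, K) > 1:         # otherwise the customer leaves the network
                queues[next_hop(v, dest, K)].append(dest)
        else:                                    # swap of the two queues along a uniform edge
            v = rng.randrange(K)
            w = (v + 1) % K
            queues[v], queues[w] = queues[w], queues[v]
    return [len(q) for q in queues]

if __name__ == "__main__":
    print(simulate())

Theorem 3 below asserts that once K exceeds some K*(λ, β) (of order 3/λ in the proof), the process is transient, so in such runs the reported queue lengths drift upward as the horizon grows.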

2.1.2 Submartingales

Here we introduce some martingale technique that will be used for the proof of transience of M for K large enough. To begin with, we label the K servers by the index k = 1, ..., K; this labelling will not change during the evolution. Together with the original continuous-time Markov process M(t) we will consider the embedded discrete-time process M(n), which is the value of M(t) immediately after the n-th event. The state of the process M consists of the states of all K servers and all their locations.

The general theorem below will be applied to the quantities X_n^k, which are, roughly speaking, the lengths of the queues at the servers k, k = 1, ..., K, of the process M(An). The integer parameter A = A(K, λ, β) will be chosen large enough, so that, in particular, after time A the locations of the servers are well mixed on the graph C_K, and the joint distribution of their locations on C_K is close to the uniform one. Moreover, we want the expectations of all the differences X_{n+1}^k − X_n^k to be uniformly positive.

We start with the following theorem.

Theorem 1 Let T = {T_n, n = 0, 1, ...} be a filtration and let X^k, k = 1, ..., K, be a finite family of non-negative integer-valued submartingales adapted to T, such that for all k = 1, ..., K and all n = 0, 1, ..., the following assumptions hold:

(1) For some p > 0 the inequality

E_{T_n}(X_{n+1}^k − X_n^k) ≥ p   (2)

holds whenever X_n^k > 0.

(2) The increments are bounded by a constant R:

|X_{n+1}^k − X_n^k| ≤ R   a.s.   (3)

Then there exists an initial state (X_0^1, ..., X_0^K) such that, with positive probability, X_n^k → +∞ as n → +∞ for all k = 1, ..., K.

In order to prove the theorem we begin with an auxiliary lemma.

Lemma 2 Let Y^k = {Y_n^k : n = 0, 1, ...}, k = 1, ..., K, be a finite family of submartingales adapted to the same filtration T and such that Y_n^k ∈ [0, 1] for all k, n. Suppose also that for any ε > 0 there exists a δ > 0 such that

E(Y_{n+1}^k − Y_n^k) ≥ δ   once   0 < Y_n^k < 1 − ε,

for all k and n. Suppose that the initial vector Y_0 ∈ Δ = [0, 1]^K is deterministic and satisfies the condition

Σ_{k=1}^K Y_0^k > K − 1.   (4)

Then, with positive probability, Y_n^k → 1 as n → ∞, for all k = 1, ..., K.

Proof. Since all submartingales Y^k are bounded, the limit lim_{n→∞} Y_n^k exists almost surely for all k, see the Martingale Convergence Theorem in [6]. With probability 1 the value of this limit vector is either the 'maximal' vertex (1, ..., 1) of the cube Δ or a point a on the 'lower boundary' B of Δ: B = {a : min_{k=1,...,K} a_k = 0}. Indeed, for all other vectors v ∈ Δ we have

E(Y_{n+1}^k − Y_n^k) > 0   if   Y_n^k = v_k, k = 1, ..., K.

Note that

Σ_{k=1}^K b_k ≤ K − 1   (5)

for any vector b ∈ B. By the submartingale property, we conclude that

E(Σ_{k=1}^K Y_n^k) ≥ Σ_{k=1}^K Y_0^k > K − 1   (6)

for all n = 1, 2, .... Inequalities (4)–(6) rule out the option that the limit of Y_n belongs to B with probability 1.
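Spelled out, this last step is a bounded-convergence argument (our rendering of the contradiction):

\[
\Pr\Big(\lim_{n\to\infty} Y_n \in B\Big)=1
\;\Longrightarrow\;
K-1 \;\overset{(5)}{\geq}\; \mathbb{E}\Big[\sum_{k=1}^{K}\lim_{n\to\infty} Y_n^{k}\Big]
 \;=\;\lim_{n\to\infty}\mathbb{E}\Big[\sum_{k=1}^{K} Y_n^{k}\Big]
\;\overset{(6)}{\geq}\;\sum_{k=1}^{K} Y_0^{k}\;\overset{(4)}{>}\;K-1,
\]

a contradiction (the exchange of limit and expectation is justified by bounded convergence, since 0 ≤ Y_n^k ≤ 1); hence with positive probability the limit is the vertex (1, ..., 1).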

Now, in order to derive Theorem 1 from Lemma 2, we make the following change of variables for the submartingales X_n^k. For a positive parameter a < 1 we define an 'irregular lattice' {h_i} ⊂ ℝ_+ by

h_0 = 0,   h_{i+1} = h_i + a^i,   i = 0, 1, ....

We get lim_{i→∞} h_i = H = (1 − a)^{-1} < ∞. Now, for each k = 1, ..., K, we define the process Y^k on the same filtration T by the relation

Y_n^k = h_{X_n^k}.

The processes Y^k take values in the 'lattice' {h_i}, k = 1, ..., K. They are still submartingales if 1 − a is small enough. Indeed, for such a the local structure of the lattice in an R-neighborhood of a given point is modified only slightly. Since |X_{n+1}^k − X_n^k| ≤ R and E(X_{n+1}^k − X_n^k) ≥ p > 0, we conclude that the submartingale property is preserved.
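For the record, the lattice has the closed form (our computation):

\[
h_i=\sum_{j=0}^{i-1}a^{\,j}=\frac{1-a^{\,i}}{1-a},
\qquad
H=\lim_{i\to\infty}h_i=\frac{1}{1-a},
\qquad
\frac{h_{i+1}-h_i}{h_{j+1}-h_j}=a^{\,i-j},
\]

so for levels i, j within distance R of each other the spacings differ by a factor between a^R and a^{-R}, which tends to 1 as a → 1; this is the 'only slightly modified local structure' invoked above.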

Then the hypothesis of Lemma 2 holds (up to the constant factor H), and Theorem 1 is proved.

2.1.3 Transience

Let us return to the process M(t). Suppose that the parameters λ > 0 and β > 0 are fixed. We remind the reader that our service rate is set to 1.

Theorem 3 For each λ > 0 and β > 0, there exists K* ∈ ℤ_+ such that for any K > K* the process M is transient.

Proof. First of all, we construct a discrete-time Markov chain V on the state space Q. To define it, we start with the embedded Markov chain M(n), defined earlier, and then pass to the chain M(An), with the integer A to be specified later. To get the chain V ≡ (V_n), we modify the chain M(An) as follows: if for some n at least one of the K queues has length at most A, we add to all such queues extra customers, to make these queues of length exactly A, and then stop the process forever. Otherwise we make no changes. The resulting Markov chain is denoted by V.

We start the process V at some configuration Q_0 with all queues longer than A.

We now prove the following statement: if A is large enough, the queue-length process of V at any given server is a submartingale satisfying the conditions of Theorem 1, with respect to the filtration defined by our discrete-time Markov chain M(n) (the individual queue-length processes are clearly adapted to this filtration). This completes the proof, by Theorem 1. We need the following lemmas.

Lemma 4 (1) Let us consider the following function π(t) of the process M. At each t > 0, π(t) is the current permutation of the indices of the K servers with respect to the indices of the K nodes. Then the evolution of π(t) is a continuous-time Markov process, independent of the service and arrival processes, and, as t → ∞, the distribution of π(t) converges to the uniform one on the set S_K of all permutations.

(2) Let us fix an index i and denote by v(i, t) the position of the server i at time t. Then the distribution of v(i, t) converges to the uniform distribution on {1, ..., K} as t → ∞.

Proof. Let us introduce a graph structure on the permutation group S_K. Namely, we consider all the transpositions τ ∈ S_K corresponding to the exchanges of pairs of neighboring servers, and we call two permutations π', π'' connected by an edge iff π' = π''τ for some such τ.

The resulting graph on S_K is connected, because G is connected. The process of migration of servers is, obviously, a random walk on this graph, that is, a reversible process. Hence, as t → ∞, the distribution of permutations converges to the uniform one, uniformly over all initial states. The assertion of the lemma clearly follows.

Lemma 5 For any initial state Q_0, the probability that a customer with position H > 0 in the queue leaves the network after being served tends to 3/K as H → ∞, uniformly in Q_0.

Proof. As the waiting time of the customer tends to infinity when H → ∞, the distribution of its server on V_K tends to the uniform one on C_K (see Lemma 4). In order for the customer c to exit the network, its last server has to be located at that moment at one of the three nodes D(c) + 1, D(c), or D(c) − 1. The lemma follows.

Now we see that for all the customers in the initial queues whose positions are at least H, the mean chance of exit approaches 3/K as H → ∞, and the rate of this approach does not depend on the particularities of the initial state Q_0, but only on H.

The next remark is that if a customer is served and then jumps to a different server, then the index j of that server is distributed almost uniformly over the remaining K — 1 indices. This fact follows from Lemma 5. Again, the rate of convergence is independent of Q0 because the servers swap positions independently of anything else. So we have established a lemma, analogous to Lemma 5:

Lemma 6 The probability that a customer at position H on server i jumps to server j tends to 1/(K − 1) as H → ∞, uniformly in i, j, and in the initial state Q_0 ∈ Q.

We need a third, combinatorial lemma; we start with some definitions, and then formulate and prove it. Let {u, v} ⊂ C_K be an ordered pair of elements. We define the map T from the set of all such pairs into the union V_K ∪ {*} by

T{u, v} = w,  where w is defined by |u − w| = 1, |v − w| = |u − v| − 1, provided |u − v| > 1,
T{u, v} = *   otherwise

(here |u − v| stands for dist(u, v) on C_K).

For K odd the map T is well-defined. In case T{u, v} = w we say that a customer transits through w (on its way from u to v).

Let D : V_K → V_K be an arbitrary map. We want to compute the quantity

V_K = (1/K!) Σ_{π ∈ S_K} Σ_{i ∈ V_K} 1{T{π(i), D(i)} = π(j)},   (7)

where S_K is the symmetric group, π runs over all permutations from S_K, while i and j are taken from some fixed labelling of the elements of V_K. Thus V_K is the probability of transit through the node π(j) in the ensemble defined by the uniform distribution on S_K. Of course, it does not depend on j. (Note that we consider the action of S_K on pairs {u, v} given by π{u, v} = {πu, v}.)

Lemma 7  V_K = (K − 3)/K.

Proof of the Lemma. Let 1, 2, ..., K be the fixed labelling; without loss of generality we can take j = 1. Instead of performing the summation in (7) over the whole group S_K, we partition S_K into (K − 2)! subsets A_π, and perform the summation over each A_π separately. If the result does not depend on π, we are done. Here π ∈ S_K, and, needless to say, for π, π' different we have either A_π = A_{π'} or A_π ∩ A_{π'} = ∅.

Let us describe the elements of the partition {A_π}. So let π be given, and let the string i_1, i_2, i_3, ..., i_l, i_{l+1}, ..., i_K be the result of applying the permutation π to the string 1, 2, ..., K. Then we include into A_π the permutation π itself, and also the K − 1 other permutations corresponding to the cyclic shifts of this string, i.e. the strings i_K, i_1, i_2, ..., i_{K−1}, then i_{K−1}, i_K, i_1, ..., i_{K−2}, and so on. We call these transformations 'cyclic moves'. Now, with each of the K permutations already listed, we include into A_π also the K − 2 other permutations in which the element in the first position stays in its place, while the remaining K − 1 elements are permuted cyclically. We call these transformations 'restricted cyclic moves'. The main property of the classes thus defined is the following: let a ≠ b ∈ {1, 2, ..., K} be two arbitrary indices, and let l ∈ {2, ..., K} be an arbitrary index different from 1. Then in every class A_π there exists exactly one permutation π' for which i_1 = a and i_l = b.

Given π, take a customer l ≠ 1 (= j) and its destination D(l). If we already know the position i_1 of customer 1 on the circle C_K, then in the class A_π there are exactly K − 1 elements with this value of i_1, each of which corresponds to a different position of the server l on C_K. If it so happens that i_1 = D(l), then for no position of the server l does the transit from l through i_1 happen. The same also holds if i_1 = D(l) + 1 mod K or i_1 = D(l) − 1 mod K. For all other K − 3 values of i_1 the transit from l through i_1 happens for precisely one position of l (among the K − 1 possibilities). In total, within A_π we have (K − 1)(K − 3) transit events. Since |A_π| = K(K − 1), the lemma follows.
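Lemma 7 is also easy to check by brute force for small odd K. The snippet below is our verification sketch (with nodes labelled 0, ..., K − 1, j fixed to 0, and an arbitrary randomly chosen destination map D, which the lemma allows); it evaluates the sum (7) directly and compares it with (K − 3)/K.

from itertools import permutations
from math import factorial
import random

def cyc_dist(u, v, K):
    d = abs(u - v) % K
    return min(d, K - d)

def transit(u, v, K):
    """T{u, v}: the neighbor of u one step closer to v, or None (i.e. '*') if dist(u, v) <= 1."""
    if cyc_dist(u, v, K) <= 1:
        return None
    for w in ((u + 1) % K, (u - 1) % K):
        if cyc_dist(w, v, K) == cyc_dist(u, v, K) - 1:
            return w

def transit_probability(K, D, j=0):
    """The quantity (7): the normalized number of pairs (pi, i) for which the
    customer sitting at pi(i) with destination D(i) transits through pi(j)."""
    count = 0
    for pi in permutations(range(K)):
        for i in range(K):
            if transit(pi[i], D[i], K) == pi[j]:
                count += 1
    return count / factorial(K)

if __name__ == "__main__":
    K = 7
    D = [random.randrange(K) for _ in range(K)]   # the lemma holds for an arbitrary map D
    print(transit_probability(K, D), (K - 3) / K)  # both print 4/7 = 0.5714...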

End of the proof of the theorem. Now we define the submartingales X_n^k and show that they satisfy all the properties of Theorem 1. We define X_n^k to be the length of the queue of the k-th server in the process V_n, from which the constant A is subtracted. Clearly, X_n^k ≥ 0. We now show that if K and A are both suitably large, then properties (1) and (2) of Theorem 1 hold.

Relation (3) is evidently satisfied with R = A. Let us check (2). Let us start the process M at a configuration where all the queue lengths are of the form X_0^k + A with X_0^k ≥ 0, k = 1, ..., K. We want to show that after time A we have E(X_1^k − X_0^k) ≥ p for some p > 0. Let H = H(K) be the time after which the distribution of the K servers is almost uniform on C_K, see Lemma 4. Before this moment we do not know much about our network, so we bound the queue lengths roughly, by M_H^k ≥ M_0^k − H. After the time H the probability that a customer leaving a server exits the network is almost 3/K, and the probabilities that it jumps to the left or to the right are both close to (K − 3)/(2K). More precisely, by Lemma 7, the rate of arrival to every server after time H is almost λ + (K − 3)/K, which is higher than the exit rate, 1, provided K is large enough (namely, K > K* = 3/λ). Hence the expected queue lengths in the process M grow linearly in time, at least after time H, which implies the existence of A > 0 such that E(M_A^k) ≥ M_0^k + p. So, Theorem 1 applies.
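The threshold quoted in the last step is just a rearrangement of the rate comparison (spelled out for convenience):

\[
\lambda+\frac{K-3}{K}>1
\;\Longleftrightarrow\;
\lambda>\frac{3}{K}
\;\Longleftrightarrow\;
K>\frac{3}{\lambda}=K^{*}.
\]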

III Infinite networks

3.1 NLMP on ℤ¹

In this section we consider the limit of the network (ℤ¹)^N as N → ∞, i.e. the NLMP on ℤ¹. The limit of the networks (C_K)^N can be studied in the same way. This NLMP is described in detail in [2], and we use the notations therein. Here we are interested in its stationary distributions.

The NLMP is the evolution of the measure ⊗_v μ_v on the states (queues q_v) of the (jumping) servers at the nodes v ∈ ℤ¹, given by the equations

d/dt μ_v(q_v, t) = 𝒜 + ℬ + 𝒞 + 𝒟 + ℰ   (8)

with

𝒜 = − ∂μ_v(q_v, t) / ∂r_{1*(q_v)}   (9)

being the derivative along the direction r(q_v) (in our case of the exponential service time with rate 1 we have, of course, ∂μ_v(q_v, t)/∂r_{1*(q_v)} = μ_v(q_v, t));

ℬ = δ(0, τ(e(q_v))) μ_v(q_v \ e(q_v), t) [σ_tr(q_v \ e(q_v), q_v) + σ_e(q_v \ e(q_v), q_v)],   (10)

where q_v is created from q_v \ e(q_v) by the arrival of e(q_v) from v', and the factor δ(0, τ(e(q_v))) takes into account the fact that if the last customer e(q_v) has already received some amount of service, then it cannot arrive from the outside;

𝒞 = − μ_v(q_v, t) Σ_{q_v'} [σ_tr(q_v, q_v') + σ_e(q_v, q_v')],   (11)

which corresponds to changes in the queue q_v due to customers arriving from other servers and from the outside (in the notation of (1), σ_e(q_v, q_v ⊕ w) = λ_{v,w});

𝒟 = ∫_{q_v' : q_v' \ C(q_v') = q_v} dμ_v(q_v', t) σ_f(q_v', q_v' \ C(q_v')) − μ_v(q_v, t) σ_f(q_v, q_v \ C(q_v)),   (12)

where the first term describes the situation where the queue q_v arises after a customer was served in a queue q_v' (longer by one customer), with q_v' \ C(q_v') = q_v, while the second term describes the completion of service of a customer in q_v;

ℰ = Σ_{v' n.n. v} β_{vv'} [μ_{v'}(q_v, t) − μ_v(q_v, t)],   (13)

where the β's are the rates of exchange of the servers.

For the convenience of the reader we write the equations (8)–(13) together once more:

d/dt μ_v(q_v, t) = − μ_v(q_v, t)
  + δ(0, τ(e(q_v))) μ_v(q_v \ e(q_v), t) [σ_tr(q_v \ e(q_v), q_v) + σ_e(q_v \ e(q_v), q_v)]
  − μ_v(q_v, t) Σ_{q_v'} [σ_tr(q_v, q_v') + σ_e(q_v, q_v')]
  + ∫_{q_v' : q_v' \ C(q_v') = q_v} dμ_v(q_v', t) σ_f(q_v', q_v' \ C(q_v'))       (14)
  − μ_v(q_v, t) σ_f(q_v, q_v \ C(q_v))
  + Σ_{v' n.n. v} β_{vv'} [μ_{v'}(q_v, t) − μ_v(q_v, t)].

We are looking for the fixed points μ of the evolution (14). Then the corresponding atomic measures on measures will be stationary measures of our NLMP. Note that the dynamical system (14) might have other stationary measures (on measures) than those corresponding to the fixed points. We will simplify our setting. Namely, we make the following changes:

1. for the graph G we take the lattice ℤ¹;

2. all the customers have the same class;

3. the service time distribution η is exponential, with mean value 1;

4. the service discipline considered is FIFO;

5. the exogenous customer c arriving at the node v has for its destination the same node v, i.e. D(c) = v; the inflow rates at all the nodes are constant, equal to λ;

6. any two servers at neighboring nodes v, v' of ℤ¹ can exchange their positions, with the same rate β = β_{vv'}.

The queue q_v can in this setting be identified with the sequence of destinations of its customers. The equation for the fixed point then becomes:

0 = μ_v(q_v \ e(q_v)) [σ_tr(q_v \ e(q_v), q_v) + σ_e(q_v \ e(q_v), q_v)]
  − μ_v(q_v) [Σ_{q_v'} σ_tr(q_v, q_v') + λ + 1_{q_v ≠ ∅}]
  + Σ_{q_v' : q_v' \ C(q_v') = q_v} μ_v(q_v')
  + β Σ_{v' = v ± 1} [μ_{v'}(q_v) − μ_v(q_v)].

We are interested in translation-invariant solutions. In that case the queue q_v can be identified with the sequence of (signed) distances between the node v and the destinations of its customers, so it becomes a finite integer sequence (n_1, ..., n_l), n_i ∈ ℤ, where l ≥ 0 is the length of the queue q_v. The rate of arrival of a transit customer, σ_tr(q_v \ e(q_v), q_v) = σ_tr([n_1, ..., n_{l−1}], [n_1, ..., n_{l−1}, n_l]), is then a function of the single integer n_l, and so we adopt the notation

Λ_{n_l} = σ_tr([n_1, ..., n_{l−1}], [n_1, ..., n_{l−1}, n_l]).

According to our definitions, we thus have

Λ_k = Λ_k(μ) =
  μ(n_1 = k + 1)   if k > 0,
  μ(n_1 = k − 1)   if k < 0,        (15)
  0                if k = 0,

where μ(n_1 = m) stands for the μ-probability that the queue is non-empty and its first customer (the one in service) is of type m.

In what follows we look only for states μ which have symmetric rates Λ_k:

Λ_k = Λ_{−k}.   (16)

The probability μ_v(q_v) then turns into μ(n_1, ..., n_l); note, however, that for v' = v ± 1 we need to interpret μ_{v'}(q_v) as μ(n_1 ± 1, ..., n_l ± 1). The equation now becomes:

μ(n_1, ..., n_{l−1}) (Λ_{n_l} + λ 1_{n_l = 0})
  − μ(n_1, ..., n_l) (Σ_k Λ_k + λ)
  + Σ_k μ(k, n_1, ..., n_l) − μ(n_1, ..., n_l) 1_{l ≠ 0}       (17)
  + β [μ(n_1 + 1, ..., n_l + 1) + μ(n_1 − 1, ..., n_l − 1) − 2 μ(n_1, ..., n_l)] = 0.

As we will see later, the equations (15)–(17) can have several solutions, one solution or no solution, depending on the value of the parameter λ. If μ is a solution of the equations (15)–(17) for some λ, then we denote by

ν(μ) = Σ_k Λ_k

the rate of the transit customers to every node in the state μ, and by η(μ) the rate of the total flow to every node in the state μ:

η(μ) = ν(μ) + λ.

Theorem 8 For every positive η < 1 there exists a unique value λ(η) of the exogenous flow rate λ and a state μ_η on the set of queues, satisfying the equations (15)–(17) with λ = λ(η), such that

η(μ_η) = η.

Proof. Consider the process which is described just by the relation (17), with arbitrary parameters Λ_k, k = 0, ±1, ..., and λ. This is an ordinary queueing system with a single server and with infinitely many types of customers. The customer of type k arrives with rate Λ_k (and with rate Λ_0 + λ for k = 0). Consider the random variable ξ, which is the total time a customer spends in such a server in the stationary state. It has an exponential distribution, which depends only on η = Σ_k Λ_k + λ (and not on the type of the customer); namely, E(ξ) = (1 − η)^{-1}.
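The exponential form of ξ is the classical M/M/1 FIFO fact. For completeness, here is one standard way to see it (our rendering, using PASTA and the geometric stationary queue-length distribution of an M/M/1 queue with input rate η and service rate 1):

\[
\Pr(N=n)=(1-\eta)\,\eta^{\,n},\quad n\ge 0,
\qquad
\mathbb{E}\big[e^{-s\xi}\big]
=\sum_{n\ge 0}(1-\eta)\,\eta^{\,n}\Big(\frac{1}{1+s}\Big)^{\!n+1}
=\frac{1-\eta}{1-\eta+s},
\]

i.e. the sojourn time ξ, being the sum of the N + 1 independent Exp(1) service requirements seen by an arriving customer, is exponential with rate 1 − η, and E(ξ) = (1 − η)^{-1}.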

Suppose a customer of type k arrives at such a server. When it leaves the server, its type has changed to k + τ, where τ is a random (integer-valued) variable. That change happens due to the β-terms in (17). By symmetry, E(τ) = 0. The distribution of τ is the following. Consider a random walker W(t), living on ℤ, which starts at 0, i.e. W(0) = 0, and which makes ±1 jumps with rates β. Then τ = W(ξ).

We are now going to present a choice of the rates Λ_k and λ such that the equations (15) are satisfied as well. Our choice of the rates Λ_k, λ is related to the stationary distribution of a certain ergodic Markov process on ℤ, which we describe now. Define the matrix of transition probabilities P_1 = (π_{st}) by π_{st} = Pr(τ = s − t). Of course, this Markov chain on ℤ is not positive recurrent, since its mean drift is zero. Let P_2 be a second Markov chain, with transition probabilities

(P_2)_{st} =
  1   for t > 0, s = t + 1,
  1   for t = 0, s = 1, 0, −1,
  1   for t < 0, s = t − 1,
  0   in the other cases

(here s is the state before the transition and t the state after it).

I.e., P_2 is a non-random map of ℤ into itself. Consider the composition Markov chain, with transition matrix Q given by the product Q = P_1 P_2. This chain is, in contrast, positive recurrent (it has a positive drift towards the origin), and it has a stationary distribution q = (q_k, k ∈ ℤ). We take

Λ_k = η q_k,  k ≠ 0;    λ = η q_0.   (18)

The relations (15) are satisfied since the process Q describes the evolution of the type of the customer in the stationary state of the process (15)–(17).

We now state some properties of the function λ(η) as the parameter η varies in (0, 1).

Proposition 9 There is a λ_+ > 0 such that, for any positive λ < λ_+, there are at least two different values η = η_−(λ) and η = η_+(λ) satisfying the relation λ(η) = λ, and such that η_−(λ) → 0 and η_+(λ) → 1 as λ → 0.

Proof. Clearly, λ(η) → 0 as η → 0. We want to argue that λ(η) → 0 also when η → 1. Indeed, in this regime every customer spends more and more time waiting in the queue, so for every k the probability Pr(ξ < k) → 0 as η → 1. Therefore the distribution of the random variable τ becomes more and more spread out: for every k, Pr(|τ| < k) → 0 as η → 1. Therefore the same property holds for the stationary distribution q, and the claim follows from the relations (18). In particular, it means that the following equation in η:

λ(η) = a > 0

has at least two solutions for small a: the corresponding η is either small or close to 1. Indeed, this follows from the continuity of λ(η).

IV Some examples

Let us look at the function λ(η) for some finite cyclic graphs C_K with different parameters, that is, with different randomized rules for the destination assignment. As we will see, there are cases with one, two, and three equilibrium solutions.
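Curves of this kind can be reproduced numerically from the construction in the proof of Theorem 8 (stated there for the ℤ¹ model): truncate ℤ to a finite window, build P_1 from the law of τ = W(ξ) with ξ ~ Exp(1 − η), compose it with the deterministic map P_2, and read off λ(η) = η q_0 from the stationary vector q of Q = P_1 P_2. The sketch below is our illustration of that recipe, not the code used for the figures; the truncation size M and the swap rate beta are arbitrary choices.

import numpy as np

def lambda_of_eta(eta, beta=1.0, M=150):
    """lambda(eta) = eta * q_0 for the fixed-point construction of Theorem 8,
    with types truncated to {-M, ..., M} (type k is stored at index k + M)."""
    n = 2 * M + 1
    # Generator of the continuous-time +-1 random walk W, rate beta in each direction.
    G = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            G[i, i - 1] = beta
        if i < n - 1:
            G[i, i + 1] = beta
        G[i, i] = -G[i].sum()
    # P1: law of the displacement tau = W(xi), xi ~ Exp(r) with r = 1 - eta,
    # namely P1 = r (r I - G)^{-1}; boundary effects are negligible for large M.
    r = 1.0 - eta
    P1 = r * np.linalg.inv(r * np.eye(n) - G)
    # P2: the deterministic move of the type one step towards the origin
    # (types -1, 0, 1 go to 0: the customer exits and is replaced by a fresh
    #  exogenous customer of type 0).
    P2 = np.zeros((n, n))
    for i in range(n):
        k = i - M
        k_next = 0 if abs(k) <= 1 else k - (1 if k > 0 else -1)
        P2[i, k_next + M] = 1.0
    Q = P1 @ P2
    # Stationary distribution q of Q: left eigenvector for eigenvalue 1.
    vals, vecs = np.linalg.eig(Q.T)
    q = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    q = np.abs(q) / np.abs(q).sum()
    return eta * q[M]

if __name__ == "__main__":
    for eta in (0.05, 0.2, 0.4, 0.6, 0.8, 0.95):
        print(f"eta = {eta:.2f}   lambda(eta) ~ {lambda_of_eta(eta):.4f}")

Scanning η over (0, 1) gives a curve that vanishes at both ends of the interval, in agreement with Proposition 9. (The figures themselves concern finite graphs C_K with various destination rules, so they come from an analogous but different computation.)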

[Figures omitted. Panel titles: "Typical case: two solutions"; "Single solution"; "Three solutions"; "Three solutions: closer look".]

References

[1] Emmanuel Baccelli and Charles E. Perkins. Multi-hop Ad Hoc Wireless Communication. Internet-Draft draft-baccelli-manet-multihop-communication-04, Internet Engineering Task Force, September 2014. Work in Progress.

[2] F. Baccelli, A. Rybko, and S. Shlosman. Queuing Networks with Varying Topology - A Mean-Field Approach. ArXiv e-prints, November 2013.

[3] François Baccelli and Bartlomiej Blaszczyszyn. Stochastic geometry and wireless networks, volume 1: Theory. Foundations and Trends in Networking, 3(3-4):249-449, 2009.

[4] François Baccelli and Bartlomiej Blaszczyszyn. Stochastic geometry and wireless networks, volume 2: Applications. Foundations and Trends in Networking, 4(1-2):1-312, 2009.

[5] Josh Broch, David A. Maltz, David B. Johnson, Yih-Chun Hu, and Jorjeta Jetcheva. A performance comparison of multi-hop wireless ad hoc network routing protocols. In Proceedings of ACM/IEEE MobiCom, pages 85-97, 1998.

[6] R. Durrett. Probability: Theory and Examples. Cambridge series on statistical and probabilistic mathematics. Cambridge University Press, 2010.

[7] C.S.R. Murthy and B.S. Manoj. Ad Hoc wireless networks: architectures and protocols. Prentice Hall communications engineering and emerging technologies series. Prentice Hall PTR, 2004.

[8] Roberto H. Schonmann and Senya B. Shlosman. Wulff droplets and the metastable relaxation of kinetic Ising models. Communications in Mathematical Physics, 194(2):389-462, 1998.

[9] C.K. Toh. Ad Hoc Mobile Wireless Networks: Protocols and Systems. Prentice Hall PTR, 2002.
