
Multistage Biddings with Risky Assets: the Case of Countable Set of Possible Liquidation Values¹

Victor Domansky², Victoria Kreps³

St. Petersburg Institute for Economics and Mathematics,
Russian Academy of Sciences,
1, Tchaikovskogo st., St. Petersburg, 191187, Russia
² e-mail: [email protected]
³ e-mail: [email protected]

Abstract. This paper is concerned with multistage bidding models introduced by De Meyer and Moussa Saley (2002) to analyze the evolution of the price system at finance markets with asymmetric information.

We consider zero-sum repeated games with incomplete information that model biddings with countable sets of possible prices and admissible bids, unlike the above-mentioned paper, where two values of the price are possible and arbitrary bids are allowed.

It is shown that, if the liquidation price of a share has a finite dispersion, then the sequence of values of n-step games is bounded and converges to the value of the game with infinite number of steps. We construct explicitly the optimal strategies for this game.

The optimal strategy of Player 1 (the insider) generates a symmetric random walk of posterior mathematical expectations of the liquidation price with absorption. The expected duration of this random walk is equal to the initial dispersion of the liquidation price. The guaranteed total gain of Player 1 (the value of the game) is equal to this expected duration multiplied by the fixed gain per step.

Keywords: multistage biddings, asymmetric information, repeated games, optimal strategy.

Introduction

The Wiener process and its discrete analogues, random walks, are often used to model the evolution of price systems at finance markets. The random fluctuations of prices are usually motivated by the effect of multiple exogenous factors subjected to accidental variations.

1 This study was supported by the grant 07-06-00174a of the Russian Foundation for Basic Research, which is gratefully acknowledged.

A different strategic motivation for these phenomena is proposed in the work of De Meyer and Saley (2002). The authors assert that the Brownian component in the evolution of prices on the stock market may originate from asymmetric information of stockbrokers on events determining market prices. “Insiders” are not interested in immediate revelation of their private information. This forces them to randomize their actions and results in the appearance of the oscillatory component in price evolution.

De Meyer and Saley demonstrate this idea with the help of a simplified model of multistage biddings between two agents for risky assets (shares). The liquidation value of one share depends on a random "state of nature". Before the biddings start, a chance move determines the "state of nature" and, therefore, the liquidation value of one share once and for all. Player 1 is informed on the "state of nature", Player 2 is not. Both players know the probabilities of the chance move. Player 2 knows that Player 1 is an insider.

At each subsequent step t = 1, 2,...,n both players simultaneously propose their prices for one share. The maximal bid wins and one share is transacted at this price. If the bids are equal, no transaction occurs. Each player aims to maximize the value of his final portfolio (money plus liquidation value of obtained shares).

In this model the uninformed Player 2 should use the history of the informed Player 1's moves to update his beliefs about the state of nature. In fact, at each step Player 2 may use the Bayes rule to re-estimate the posterior probabilities of the chance move outcome or, at least, the posterior mathematical expectation of the liquidation value of a share. Player 1 can control these posterior probabilities.

Thus, Player 1 faces a problem of how best to use his private information without revealing it to Player 2. Using a myopic policy - bid the high price if the liquidation value is high, the low price if this value is low - is not optimal for Player 1, as it fully reveals the state of nature to Player 2. On the other hand, a strategy that does not depend on the state of nature reveals no information to Player 2, but does not allow Player 1 to take any advantage of his superior knowledge. Thus, Player 1 must maintain a delicate balance between taking advantage of his private information and concealing it from Player 2.

De Meyer and Saley consider the model where the liquidation price of a share takes only two values and players may make arbitrary bids. They reduce this model to a zero-sum repeated game with lack of information on one side, as introduced by Aumann and Maschler (1995), but with continual action sets. De Meyer and Saley show that these n-stage games have values (i.e. the guaranteed gains of Player 1 are equal to the guaranteed losses of Player 2). They find these values and the optimal strategies of the players. As n tends to infinity, the values grow without bound at the rate √n. It is shown that a Brownian motion appears in the asymptotics of the transaction prices generated by these strategies.

It is more natural to assume that players may assign only discrete bids proportional to a minimal currency unit. In our previous papers [Domansky, 2007], [Domansky and Kreps, 2007] we investigate the model with two possible values of the liquidation price and discrete admissible bids. We show that, unlike the model [De Meyer and Saley, 2002], as n tends to infinity, the sequence of guaranteed gains of the insider is bounded from above and converges. This makes it reasonable to consider biddings with an infinite number of steps. We construct the optimal strategies for the corresponding infinite games. We write out explicitly the random process formed by the prices of transactions at sequential steps. The transaction prices perform a symmetric random walk over the admissible bids between the two possible values of the liquidation price, with absorbing extreme points. The absorption of transaction prices means that Player 2 learns the true value of the share.

Here we consider the model where any non-negative integer bids are admissible. The liquidation price of a share C_p may take any non-negative integer value k = 0, 1, 2, ... according to a probability distribution p = (p_0, p_1, p_2, ...). This n-stage model is described by a zero-sum repeated game G_n(p) with incomplete information of Player 2 and with countable state and action spaces. The games considered in [Domansky, 2007], [Domansky and Kreps, 2007] represent a particular case of these games, corresponding to probability distributions with two-point supports.

We show that if the random variable C_p determining the liquidation price of a share has a finite mathematical expectation E[C_p], then the values V_n(p) of the n-stage games G_n(p) exist (i.e. the guaranteed gain of Player 1 is equal to the guaranteed loss of Player 2). If the dispersion D[C_p] is infinite, then, as n tends to infinity, the sequence V_n(p) diverges.

On the contrary, if the dispersion D[C_p] is finite, then, as n tends to infinity, the sequence of values V_n(p) of the games G_n(p) is bounded from above and converges. The limit H(p) is a continuous, concave, piecewise linear function with a countable number of domains of linearity. The sets Θ(k), k = 1, 2, ..., of distributions p with integer mathematical expectation E[C_p] = k form its domains of nondifferentiability. If E[C_p] is an integer, then H(p) = D[C_p]/2. If E[C_p] = k + a, where k is an integer and a ∈ [0, 1], then H(p) = (D[C_p] − a(1 − a))/2.
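To make the limit formula concrete, here is a minimal numerical sketch (the helper names `moments` and `H_limit` are ours, not from the paper) that computes H(p) for a distribution given as a value-to-probability map:

```python
def moments(p):
    # First moment and dispersion (variance) of a distribution
    # given as a dict {value: probability}; probabilities sum to 1.
    e1 = sum(s * q for s, q in p.items())
    e2 = sum(s * s * q for s, q in p.items())
    return e1, e2 - e1 * e1

def H_limit(p):
    # H(p) = (D[C_p] - a(1 - a)) / 2, where a is the fractional
    # part of E[C_p] (zero when the expectation is an integer).
    e, d = moments(p)
    a = e - int(e)  # fractional part; e >= 0 here
    return (d - a * (1 - a)) / 2
```

For the uniform distribution on {0, 1, 2, 3} this gives E[C_p] = 1.5, D[C_p] = 1.25 and H(p) = (1.25 − 0.25)/2 = 0.5.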

As the sequence V_n(p) is bounded from above, it is reasonable to consider the games G_∞(p) with an infinite number of steps. We show that the value V_∞(p) is equal to H(p). We construct explicitly the optimal strategies for these games.

For the case p ∈ Θ(k) of integer mathematical expectation of the liquidation value of a share, the insider's optimal strategy is to generate a symmetric random walk of posterior mathematical expectations over the domains Θ(l). The expected duration of this random walk is equal to the dispersion of the liquidation price of a share. The value of the infinite game is equal to the expected duration of this random walk multiplied by the constant one-step gain 1/2 of the informed Player 1.

Let p ∈ Θ(k). If the random variable C_p takes the value k, then the "approximate" information of Player 2 turns out to be exact and, in fact, the information advantage of Player 1 disappears. Hence, the gain of Player 1 is equal to zero, and he can stop the game without any loss for himself. Otherwise, the first optimal move of Player 1 makes use of the actions k − 1 and k with equal total probabilities and with posteriors p⁻ ∈ Θ(k − 1) and p⁺ ∈ Θ(k + 1). For both posteriors the probability of the value k is equal to zero: p⁻_k = p⁺_k = 0.

1. Repeated games with one-sided information modelling the multistage biddings

We consider the repeated games G_n(p) with incomplete information on one side (Aumann and Maschler, 1995) modelling the biddings described in the introduction. Two players with opposite interests have money and single-type shares. The liquidation price of a share may take any non-negative integer value s ∈ S = Z_+ = {0, 1, 2, ...}.

At stage 0 a chance move determines the liquidation value of a share for the whole period of biddings n according to the probability distribution p = (p_0, p_1, p_2, ...) over S known to both Players. Player 1 is informed about the result s of the chance move, Player 2 is not. Player 2 knows that Player 1 is an insider. At each subsequent stage t = 1, ..., n both Players simultaneously propose their prices for one share, i_t ∈ I = Z_+ for Player 1 and j_t ∈ J = Z_+ for Player 2. The pair (i_t, j_t) is announced to both Players before proceeding to the next stage. The maximal bid wins, and one share is transacted at this price. Therefore, if i_t > j_t, Player 1 gets one share from Player 2 and Player 2 receives the sum of money i_t from Player 1. If i_t < j_t, Player 2 gets one share from Player 1 and Player 1 receives the sum j_t from Player 2. If i_t = j_t, then no transaction occurs. Each player aims to maximize the value of his final portfolio (money plus liquidation value of obtained shares).

This n-stage model is described by a zero-sum repeated game G_n(p) with incomplete information of Player 2, with countable state space S = Z_+ and countable action spaces I = Z_+ and J = Z_+. One-step gains of Player 1 are given by the matrices A^s = [a^s(i, j)]_{i∈I, j∈J}, s ∈ S, with

a^s(i, j) =
  j − s, for i < j;
  0,     for i = j;
  s − i, for i > j.

At the end of the game Player 2 pays to Player 1 the sum

Σ_{t=1}^{n} a^s(i_t, j_t).

This description is common knowledge to both Players.

At step t it is enough for both Players to take into account the sequence (i_1, ..., i_{t−1}) of Player 1's previous actions only. Thus, a strategy σ for Player 1, informed on the state, is a sequence of moves

σ = (σ_1, ..., σ_t, ...),

where the move σ_t = (σ_t(s))_{s∈S} and σ_t(s): I^{t−1} → Δ(I) is the probability distribution used by Player 1 to select his action at stage t, given the state s and the previous observations. Here Δ(·) is the set of probability distributions over (·).

A strategy τ for the uninformed Player 2 is a sequence of moves

τ = (τ_1, ..., τ_t, ...),

where τ_t: I^{t−1} → Δ(J).

Observe that here we define infinite strategies, fitting for games of arbitrary duration. A pair of strategies (σ, τ) induces a probability distribution π_(σ,τ) over (I × J)^∞. The payoff function of the game G_n(p) is

K_n(p, σ, τ) = Σ_{s∈S} p_s h^s_n(σ, τ),

where

h^s_n(σ, τ) = E_(σ,τ)[Σ_{t=1}^{n} a^s(i_t, j_t)]

is the s-component of the n-step vector payoff h_n(σ, τ) for the pair of strategies (σ, τ). Here the expectation is taken with respect to the probability distribution π_(σ,τ).

For the initial probability p the strategy σ ensures the n-step payoff

w_n(p, σ) = inf_τ K_n(p, σ, τ).

The strategy τ ensures the n-step vector payoff h_n(τ) with the components

h^s_n(τ) = sup_{σ(s)} h^s_n(σ(s), τ).

Now we describe the recursive structure of G_{n+1}(p). A strategy σ may be regarded as a pair (σ_1, (σ(i))_{i∈I}), where σ_1(i|s) is a probability on I depending on s, and σ(i) is a strategy depending on the first action i_1 = i.

Analogously, a strategy τ may be regarded as a pair (τ_1, (τ(i))_{i∈I}), where τ_1 is a probability on J.

A pair (p, σ_1) induces the probability distribution π over S × I, π(s, i) = p_s σ_1(i|s).

Let

q ∈ Δ(I),  q_i = Σ_{s∈S} p_s σ_1(i|s),

be the marginal distribution of π on I (the total probabilities of actions), and let

p(i) ∈ Δ(S),  p_s(i) = p_s σ_1(i|s)/q_i,

be the conditional probability on S given i_1 = i (a posterior probability).

Conversely, any set of total probabilities of actions q ∈ Δ(I) and posterior probabilities (p(i) ∈ Δ(S))_{i∈I} satisfying the equality

Σ_{i∈I} q_i p(i) = p

defines a certain random move of Player 1 for the current probability p. The posterior probabilities contain all the information about the previous history of the game that is essential for Player 1. Thus, to define a strategy of Player 1 it is sufficient to define his random move for any current posterior probability.

The following recursive representation for the payoff function corresponds to the recursive representation of strategies:

K_{n+1}(p, σ, τ) = K_1(p, σ_1, τ_1) + Σ_{i∈I} q_i K_n(p(i), σ(i), τ(i)).

Let, for all i ∈ I, the strategy σ(i) ensure the payoff w_n(p(i), σ(i)) in the game G_n(p(i)). Then the strategy σ = (σ_1, (σ(i))_{i∈I}) ensures the payoff

w_{n+1}(p, σ) = min_{j∈J} Σ_{i∈I} [Σ_{s∈S} p_s σ_1(i|s) a^s(i, j) + q_i w_n(p(i), σ(i))].   (1)

Let, for all i ∈ I, the strategy τ(i) ensure the vector payoff h_n(τ(i)). Then the strategy τ = (τ_1, (τ(i))_{i∈I}) ensures the vector payoff h_{n+1}(τ) with the components

h^s_{n+1}(τ) = max_{i∈I} Σ_{j∈J} τ_1(j) (a^s(i, j) + h^s_n(τ(i)))  for all s ∈ S.   (2)

The game G_n(p) has a value V_n(p) if

inf_τ sup_σ K_n(p, σ, τ) = sup_σ inf_τ K_n(p, σ, τ) = V_n(p).

Players have optimal strategies σ* and τ* if

V_n(p) = inf_τ K_n(p, σ*, τ) = sup_σ K_n(p, σ, τ*),

or, in the notation introduced above,

V_n(p) = w_n(p, σ*) = Σ_{s∈S} p_s h^s_n(τ*).

For probability distributions p with finite supports, the games Gn(p), as games with finite state and action spaces, have values Vn (p). The functions Vn are continuous and concave in p. Both players have optimal strategies a* and t*.

Consider the set M_1 of probability distributions p with finite first moment m_1[p] = Σ_{s=0}^{∞} p_s · s < ∞. For p ∈ M_1 the random variable C_p, determining the liquidation price of a share, has a finite mathematical expectation E[C_p] = m_1[p]. The set M_1 is a convex subset of the Banach space L_1({s}) of sequences l = (l_s) with the norm

||l||_1 = Σ_{s=0}^{∞} |l_s| · s.

Let p_1, p_2 ∈ M_1. Then, for "reasonable" strategies σ and τ,

|K_n(p_1, σ, τ) − K_n(p_2, σ, τ)| ≤ n ||p_1 − p_2||_1.

Therefore, the payoff of the game G_n(p) with p ∈ M_1 can be approximated by the payoffs of games G_n(p^k) with probability distributions p^k having finite supports. The next theorem follows immediately from this fact.

Theorem 1. If p e M1 then games Gn(p) have values Vn(p). The values Vn(p) are positive and do not decrease as the number of steps n increases.

Remark 1. If the random variable C_p does not belong to L_2, then, as n tends to infinity, the sequence V_n(p) diverges.


2. Upper bound for values Vn(p)

Here we consider the set M_2 of probability distributions p with finite second moment

m_2[p] = Σ_{s=0}^{∞} p_s · s² < ∞.

For p e M2, the random variable Cp, determining the liquidation price of a share, belongs to L2 and has a finite dispersion D[Cp] = m2[p] — (m1[p])2.

The set M_2 is a closed convex subset of the Banach space L_1({s²}) of mappings l: Z_+ → R with the norm

||l|| = Σ_{s=0}^{∞} |l_s| · s².

The main result of this section is that for p ∈ M_2, as n → ∞, the sequence V_n(p) of values remains bounded.

To prove this we define recursively a set of infinite "reasonable" strategies τ^m, m = 0, 1, ..., of Player 2, suitable for the games G_n(p) with arbitrary n.

Definition 1. The first move τ^m_1 is the action m ∈ J. The moves τ^m_t for t > 1 depend on the last observed pair of actions (i_{t−1}, j_{t−1}) only:

τ^m_t =
  j_{t−1} − 1, for i_{t−1} < j_{t−1};
  j_{t−1},     for i_{t−1} = j_{t−1};
  j_{t−1} + 1, for i_{t−1} > j_{t−1}.

Remark 2. The definition of the strategies τ^m involves the previous actions of both players. In fact, these strategies can be implemented on the basis of Player 1's previous actions only.

Proposition 1. The strategies τ^m ensure the non-negative vector payoffs h_n(τ^m) with the components

h^s_n(τ^m) = Σ_{l=0}^{n−1} (m − s − l)^+,   (3)

for s ≤ m, and

h^s_n(τ^m) = Σ_{l=0}^{n−1} (s − m − 1 − l)^+,   (4)

for s > m, where (a)^+ := max{0, a}.

Proof. The proof is by induction on the number of steps n.

n = 1. For s < m, Player 1's best reply is any action k < m, and

h^s_1(τ^m) = max_i a^s(i, m) = a^s(k, m) = m − s.

For s = m, Player 1's best reply is any action k ≤ m, and

h^m_1(τ^m) = max_i a^m(i, m) = a^m(m, m) = 0.

For s = m + 1, Player 1's best replies are the actions m and m + 1, and

h^{m+1}_1(τ^m) = max_i a^{m+1}(i, m) = a^{m+1}(m, m) = a^{m+1}(m+1, m) = 0.

For s > m + 1, Player 1's best reply is the action m + 1, and

h^s_1(τ^m) = max_i a^s(i, m) = a^s(m+1, m) = s − m − 1.

Therefore,

h_1(τ^m) = (m, m − 1, ..., 1, 0, 0, 1, 2, ...).

This proves Proposition 1 for n = 1.

n → n + 1. Assume that the vector payoffs h_n(τ^m) are given by (3) and (4). According to (2) we have

h^s_{n+1}(τ^m) = max_i of
  a^s(i, m) + h^s_n(τ^{m−1}), for i < m;
  a^s(i, m) + h^s_n(τ^m),     for i = m;
  a^s(i, m) + h^s_n(τ^{m+1}), for i > m.

For s < m, the first move of Player 1's best reply is any action i < m. It results in

h^s_{n+1}(τ^m) = a^s(i, m) + h^s_n(τ^{m−1}) = (m − s) + Σ_{l=0}^{n−1} (m − s − 1 − l)^+ = Σ_{l=0}^{n} (m − s − l)^+.

For s = m, the first move of Player 1's best reply is any action i < m, or i = m. It results in

h^m_{n+1}(τ^m) = a^m(i, m) + h^m_n(τ^{m−1}) = a^m(m, m) + h^m_n(τ^m) = 0.

For s = m + 1, the first moves of Player 1's best replies are the actions m and m + 1. It results in

h^{m+1}_{n+1}(τ^m) = a^{m+1}(m, m) + h^{m+1}_n(τ^m) = a^{m+1}(m+1, m) + h^{m+1}_n(τ^{m+1}) = 0.

For s > m + 1, the first move of Player 1's best reply is the action m + 1. It results in

h^s_{n+1}(τ^m) = a^s(m+1, m) + h^s_n(τ^{m+1}) = (s − m − 1) + Σ_{l=0}^{n−1} (s − m − 2 − l)^+ = Σ_{l=0}^{n} (s − m − 1 − l)^+.

This proves Proposition 1 for n + 1.
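Proposition 1 lends itself to a mechanical check: the sketch below computes Player 1's best-reply value against τ^m by dynamic programming (the one-step gains of Section 1 combined with the move rule of Definition 1) and compares it with the closed-form components. Function names are ours; the bid range is truncated at max(s, m) + 1, which is harmless because higher bids only lose money.

```python
from functools import lru_cache

def a(s, i, j):
    # One-step gain of Player 1 when the liquidation value is s,
    # Player 1 bids i and Player 2 bids j.
    if i < j:
        return j - s
    if i == j:
        return 0
    return s - i

@lru_cache(maxsize=None)
def best_reply(n, s, m):
    # Best-reply value of Player 1 over n steps against tau^m,
    # given the liquidation value s; tau^m moves to m-1, m or m+1
    # depending on the sign of i - m.
    if n == 0:
        return 0
    candidates = []
    for i in range(max(s, m) + 2):
        m_next = m - 1 if i < m else (m if i == m else m + 1)
        candidates.append(a(s, i, m) + best_reply(n - 1, s, m_next))
    return max(candidates)

def closed_form(n, s, m):
    # Components h^s_n(tau^m) from Proposition 1.
    if s <= m:
        return sum(max(m - s - l, 0) for l in range(n))
    return sum(max(s - m - 1 - l, 0) for l in range(n))
```

Running both over a small grid of n, s, and m reproduces the closed-form payoffs exactly.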

Theorem 2. For p ∈ M_2 the values V_n(p) are bounded from above by a continuous, concave, and piecewise linear function H(p) over M_2. Its domains of linearity are

L(k) = {p : E[p] ∈ [k, k + 1]},  k = 0, 1, ....

Its domains of nondifferentiability are Θ(k) = {p : E[p] = k}. The equality

H(p) = D[p]/2 − δ(p)(1 − δ(p))/2

holds, where δ(p) = E[p] − ent[E[p]] and ent[x], x ∈ R, is the integer part of x.

Proof. It is easy to see that

lim_{n→∞} h^s_n(τ^m) = h^s_∞(τ^m) = (s − m)(s − m − 1)/2.

Thus, the following upper bound for V_n(p), not depending on n, holds:

V_n(p) ≤ min_m Σ_{s=0}^{∞} p_s (s − m)(s − m − 1)/2,  m = 0, 1, ....   (5)

Observe that for E[p] ∈ [k, k + 1] the minimum in formula (5) is attained on the k-th vector payoff. Consequently, for E[p] = k + a,

V_n(p) ≤ Σ_{s=0}^{∞} p_s (s − k)(s − k − 1)/2 = [Σ_{s=0}^{∞} p_s s² − (2k + 1) Σ_{s=0}^{∞} p_s s + (k² + k)]/2 =

= [Σ_{s=0}^{∞} p_s s² − (k + a)² − a + a²]/2 = [D[p] − a(1 − a)]/2 = H(p).

In particular, for p ∈ Θ(k) (E[p] = k),

V_n(p) ≤ Σ_{s=0}^{∞} p_s (s − k)(s − k − 1)/2 = Σ_{s=0}^{∞} p_s (s − k)(s − k + 1)/2 = D[p]/2.   (6)

Corollary 1. The strategies τ^m, m = 0, 1, ..., guarantee the same upper bound H(p) for the upper value of the infinite game G_∞(p).

Further we give another representation of the function H(p) over Θ(r). The set Θ(r) is a closed convex subset of the Banach space L_1({s²}). The extreme points of this set are the distributions p^r(k, l) ∈ Θ(r) with two-point supports {r − l, r + k}:

p^r_{r−l}(k, l) = k/(k + l),  p^r_{r+k}(k, l) = l/(k + l),   (7)

k = 0, 1, 2, ..., l = 0, 1, ..., r, k + l > 0. Note that p^r(0, l) = p^r(k, 0) = e^r, where e^r is the degenerate distribution with one-point support, e^r_r = 1.

Any p ∈ Θ(r) has the following unique representation as a convex combination of the extreme points (7) of this set:

p = p_r · e^r + Σ_{k=1}^{∞} Σ_{l=1}^{r} a_{kl}(p) · p^r(k, l),   (8)

with the coefficients

a_{kl}(p) = (k + l) p_{r−l} p_{r+k} / Σ_{t=1}^{r} t p_{r−t}.   (9)

Consequently, the continuous linear function H over Θ(r), equal to zero at e^r, has the following unique representation as a convex combination of its values at the extreme points, H(p^r(k, l)) = kl/2, corresponding to the decomposition (8):

H(p) = Σ_{k=1}^{∞} Σ_{l=1}^{r} a_{kl}(p) · k · l/2   (10)

with the coefficients a_{kl}(p) given by (9).
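The extreme-point decomposition above can be verified directly for finitely supported distributions. The following sketch (function names ours) computes the coefficients a_kl and checks that the convex combination reproduces p:

```python
def extreme_point(r, k, l):
    # p^r(k, l): the two-point distribution on {r - l, r + k} with mean r.
    return {r - l: k / (k + l), r + k: l / (k + l)}

def coefficients(p, r):
    # Coefficients a_kl = (k + l) p_{r-l} p_{r+k} / sum_t t p_{r-t}
    # for a finitely supported p with E[p] = r.
    z = sum(t * p.get(r - t, 0.0) for t in range(1, r + 1))
    return {(k, l): (k + l) * p.get(r - l, 0.0) * p.get(r + k, 0.0) / z
            for k in range(1, max(p) - r + 1)
            for l in range(1, r + 1)}

def recombine(p, r):
    # p_r * e^r + sum_kl a_kl * p^r(k, l); should reproduce p.
    q = {r: p.get(r, 0.0)}
    for (k, l), akl in coefficients(p, r).items():
        for s, w in extreme_point(r, k, l).items():
            q[s] = q.get(s, 0.0) + akl * w
    return q
```

For p = {0: 0.3, 1: 0.1, 2: 0.1, 3: 0.3, 4: 0.2}, which has mean 2, the recombination returns the original distribution.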

4. Asymptotics of values Vn(p)

In this section we show that, for p ∈ M_2, as n tends to infinity, the sequence of values V_n(p) of the games G_n(p) converges to H(p).

To prove this, we construct lower bounds for the values V_n(p) of the games G_n(p). These lower bounds have the same structure as the upper bounds of Theorem 2. For any p ∈ M_2 we define a strategy of Player 1 ensuring these lower bounds.

Definition 2. Here we define a sequence of continuous, concave, and piecewise linear functions B_n over M_2. Their domains of linearity are L(r), r = 0, 1, ..., and their domains of nondifferentiability are Θ(r).

For the extreme points p^r(k, l) of the set Θ(r) the values B_n(p^r(k, l)) are given by the recurrent equalities

B_n(p^r(k, l)) = [1 + B_{n−1}(p^{r+1}(k − 1, l + 1)) + B_{n−1}(p^{r−1}(k + 1, l − 1))]/2,   (11)

with the boundary conditions B_{n−1}(p^{r+k}(0, l + k)) = B_{n−1}(p^{r−l}(k + l, 0)) = 0 and the initial condition B_0(p^r(k, l)) = 0.

For the interior points p ∈ Θ(r) the values B_n(p) are convex combinations of the values at the extreme points with the coefficients a_{kl}(p) given by (9).

For the interior points p ∈ L(r), the values B_n(p) are convex combinations of the values at boundary points p^r ∈ Θ(r) and p^{r+1} ∈ Θ(r + 1) such that p = a p^{r+1} + (1 − a) p^r.

Definition 3. For any p ∈ M_2, we define the strategy σ(p) of Player 1.

Let p ∈ Θ(r). If the random variable C_p takes the value r, then the strategy σ(p) stops the game. Otherwise, the first move of the strategy σ(p) makes use of two actions, r − 1 and r. These actions occur with total probabilities q_{r−1} = q_r = 1/2. For the action r − 1 the posterior probability distribution is

p(r − 1) = p⁻ ∈ Θ(r − 1),

where

p⁻_s =
  p_s · Σ_{j=0}^{r−1} (r − j − 1) p_j / ((1 − p_r) Σ_{j=0}^{r} (r − j) p_j), for s > r;
  0, for s = r;
  p_s · Σ_{j=r+1}^{∞} (j − r + 1) p_j / ((1 − p_r) Σ_{j=0}^{r} (r − j) p_j), for s < r.   (12)

For the action r the posterior probability distribution is

p(r) = p⁺ ∈ Θ(r + 1),

where

p⁺_s =
  p_s · Σ_{j=0}^{r−1} (r − j + 1) p_j / ((1 − p_r) Σ_{j=0}^{r} (r − j) p_j), for s > r;
  0, for s = r;
  p_s · Σ_{j=r+1}^{∞} (j − r − 1) p_j / ((1 − p_r) Σ_{j=0}^{r} (r − j) p_j), for s < r.   (13)

For the interior points p ∈ L(r) with E[p] = r + a, the first moves of the strategies σ(p) are convex combinations of the first moves at the boundary points p^r ∈ Θ(r) and p^{r+1} ∈ Θ(r + 1) such that p = a p^{r+1} + (1 − a) p^r.

Remark 3. It follows from Theorem 2 that for p ∈ Θ(k), if the random variable C_p takes the value k, then the gain of Player 1 is equal to zero, and he can stop the game without any loss for himself.


Proposition 2. For the game G_n(p) the strategy σ(p) ensures the payoff

w_n(p, σ(p)) = B_n(p).

Proof.

It is sufficient to prove the Proposition for the games G_n(p^r(k, l)) corresponding to the extreme points p^r(k, l) of the sets Θ(r), r = 1, 2, .... The proof is by induction on n.

n = 1. The best answer of Player 2 to the first move of the strategy σ(p^r(k, l)) is any action j < r. The resulting immediate gain of Player 1 is equal to 1/2. Thus, the strategy σ(p^r(k, l)) ensures the payoff B_1(p^r(k, l)) = 1/2 in the one-step game G_1(p^r(k, l)).

n → n + 1. Assume that the strategies σ(p^r(k, l)) ensure the payoffs B_n(p^r(k, l)) in the games G_n(p^r(k, l)).

The first move of the strategy σ(p^r(k, l)) yields an immediate gain equal to 1/2. Its posterior probability distributions are p^{r−1}(k + 1, l − 1) and p^{r+1}(k − 1, l + 1), and both of them occur with probabilities 1/2.

According to the induction assumption and formulas (1), (6), the resulting total gain of Player 1 is equal to

[1 + B_n(p^{r−1}(k + 1, l − 1)) + B_n(p^{r+1}(k − 1, l + 1))]/2 = B_{n+1}(p).

Thus, the strategy σ(p) ensures the payoff B_{n+1}(p) in the games G_{n+1}(p) with p = p^r(k, l). It is easy to extend this result to all p ∈ M_2.

Theorem 3. For p ∈ M_2 the following equality holds:

lim_{n→∞} V_n(p) = H(p).

Proof.

According to Theorem 2 and Proposition 2 the following inequalities hold:

B_n(p) ≤ V_n(p) ≤ H(p),  for all p ∈ M_2.

The functions B_n and H are continuous, concave, and piecewise linear with the same domains of linearity L(r), r = 0, 1, .... Such functions are completely determined by their values at the domains of nondifferentiability Θ(r), r = 1, 2, ....

Because of the continuity and concavity of the functions B_n and H, to prove that the sequence B_n converges to H as n tends to infinity, it is enough to show this for p ∈ Θ(r), r = 1, 2, ....

The increasing sequence of continuous linear functions B_n over Θ(r) is bounded from above by the continuous linear function H. Consequently, it has a continuous linear limit function B_∞. To prove Theorem 3 for p ∈ Θ(r) it is enough to show that

lim_{n→∞} B_n(p^r(k, l)) = B_∞(p^r(k, l)) = H(p^r(k, l)) = k · l/2,  for all k, l.

It follows from (11) that the limits B_∞(p^r(k, l)) should satisfy the equality

B_∞(p^r(k, l)) = [1 + B_∞(p^{r+1}(k − 1, l + 1)) + B_∞(p^{r−1}(k + 1, l − 1))]/2   (14)

with the boundary conditions B_∞(p^{r+k}(0, l + k)) = B_∞(p^{r−l}(k + l, 0)) = 0.

Solving the system of k + l − 1 linear equations (14) connecting the k + l − 1 values B_∞(p^{r+m}(k − m, l + m)), m = −l + 1, −l + 2, ..., k − 1, for distributions with the same two-point support {r − l, r + k}, we obtain

B_∞(p^r(k, l)) = k · l/2 = H(p^r(k, l)).

According to (10) this proves Theorem 3 for p ∈ Θ(r), r = 0, 1, .... Because of the continuity and concavity of the functions V_n it is true for all p ∈ M_2.

Corollary 2. It follows from the proof that the strategy σ(p) ensures the payoff H(p) in the infinite game G_∞(p). The strategy σ(p) is not optimal in any finite game G_n(p) with n < ∞.

5. Solutions for the games G_∞(p) and random walks

For p ∈ M_2, as the values V_n(p) are bounded from above, the consideration of games G_∞(p) with an infinite number of steps becomes reasonable.

We restrict the set of Player 1's admissible strategies in these games to the set Σ⁺ of strategies employing only moves that ensure him a non-negative one-step gain against any action of Player 2. Consequently, the payoff functions K_∞(p, σ, τ) of the games G_∞(p) become well defined (possibly infinite) in all cases.

We show that the infinite game G_∞(p) has a value, and this value is equal to H(p).

The existence of values for these games does not follow from general considerations and has to be proved. We prove it by providing the optimal strategies explicitly.

Theorem 4. For p ∈ M_2 the game G_∞(p) has a value V_∞(p) = H(p). Both Players have optimal strategies. The optimal strategy of Player 1 is the strategy σ(p) given by Definition 3.

For p ∈ L(r), r = 0, 1, ..., the optimal strategy of Player 2 is the strategy τ^r given by Definition 1. For p ∈ Θ(r), r = 1, 2, ..., any convex combination of the strategies τ^{r−1} and τ^r is optimal.

Proof.

According to Corollary 2, the strategy σ(p) ∈ Σ⁺ ensures the payoff H(p) in the game G_∞(p). Thus, for any p ∈ M_2,

sup_{σ∈Σ⁺} inf_τ K_∞(p, σ, τ) ≥ H(p),   (15)

and the function H is a lower bound for the lower value of the game G_∞(p).

On the other hand, according to Corollary 1, the strategies τ^r, r = 0, 1, ..., ensure the payoff H(p) in the infinite game G_∞(p). Thus, for any p ∈ M_2,

inf_τ sup_{σ∈Σ⁺} K_∞(p, σ, τ) ≤ H(p),   (16)

and the function H is an upper bound for the upper value of the game G_∞(p).

As the lower value is always less than or equal to the upper value, it follows from (15) and (16) that

sup_{σ∈Σ⁺} inf_τ K_∞(p, σ, τ) = inf_τ sup_{σ∈Σ⁺} K_∞(p, σ, τ) = H(p) = V_∞(p).

The strategies σ(p) ∈ Σ⁺ and τ^r, r = 0, 1, ..., ensure the value H(p) = V_∞(p) in the infinite game G_∞(p).

6. The probabilistic interpretation of the results

For the initial probability distribution p ∈ Θ(r), r = 1, 2, ..., the random sequence of posterior probability distributions generated by the optimal strategy σ(p) of Player 1 is a symmetric random walk (p_t)_{t=1}^{∞} over the domains Θ(l). The probabilities of jumps to each of the adjacent domains Θ(l − 1) and Θ(l + 1) are equal to (1 − p_l)/2, and the probability of absorption is equal to p_l. This is a Markov chain with the state space ∪_{l=0}^{∞} Θ(l) and with the transition probabilities, for p ∈ Θ(l),

Pr(p, e^l) = p_l;  Pr(p, p⁻) = Pr(p, p⁺) = (1 − p_l)/2,

where p⁻ and p⁺ are given by (12) and (13).

The next arising posterior probability distributions p⁻ and p⁺ have p_l = 0 and, thus, for any subsequent visit to the domain Θ(l), the probability of absorption is equal to zero.

For the random walk (p_t)_{t=1}^{∞} with the initial probability distribution p ∈ Θ(r), let θ(p) be the random Markov time of absorption, i.e.

θ(p) = min{t : p_t = e^l} − 1.

The Markov time θ(p) of absorption of the posterior probabilities represents the time of revelation of the "true" value of the share by Player 2 and, generally speaking, the time of termination of the bidding.

Proposition 3. For the random walk (p_t)_{t=1}^{∞} with the initial probability distribution p ∈ Θ(r), the expected duration E[θ(p)] of this random walk is equal to the dispersion D[p] of the liquidation price of a share.

Proof.

For the random walk (p_t)_{t=1}^{∞} with the initial probability distribution p ∈ Θ(r), the transition probabilities are linear functions over Θ(r). Consequently, the expected duration E[θ(p)] of this random walk is a linear function over Θ(r) as well.

The continuous linear function E[θ(p)] over Θ(r), equal to zero at e^r, has the following unique representation as a convex combination of its values at the extreme points, E[θ(p^r(k, l))]:

E[θ(p)] = Σ_{k=1}^{∞} Σ_{l=1}^{r} a_{kl}(p) · E[θ(p^r(k, l))],

with the coefficients a_{kl}(p) given by (9).

It is well known that

E[θ(p^r(k, l))] = k · l = D[p^r(k, l)].

As the dispersion D[p] is a linear function over Θ(r), we obtain the assertion of Proposition 3.
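The "well known" identity in the last step can be checked directly: x(m − x) solves the system E(x) = 1 + (E(x − 1) + E(x + 1))/2 with E(0) = E(m) = 0, i.e. it is the expected absorption time of a simple symmetric random walk on {0, ..., m} started at x; for p^r(k, l) one takes x = k and m = k + l, giving E[θ] = kl. A minimal sketch (helper name ours):

```python
def absorption_time(x, m):
    # Expected absorption time of the simple symmetric random walk
    # on {0, ..., m} started at x.
    return x * (m - x)

# Check the defining equations E(x) = 1 + (E(x-1) + E(x+1)) / 2:
for m in range(2, 30):
    for x in range(1, m):
        expected = 1 + (absorption_time(x - 1, m) + absorption_time(x + 1, m)) / 2
        assert absorption_time(x, m) == expected
```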

Remark 4. The result of Theorem 4 turns out to be rather intuitive. The value of the infinite game is equal to the expected duration of the random walk of posterior probability distributions, multiplied by the constant one-step gain 1/2 of the informed Player 1.

7. Conclusion

The obtained results on the biddings with countable sets of possible prices and admissible bids demonstrate that the Brownian component in the evolution of prices on the stock market may originate from the asymmetric information of stockbrokers about events determining market prices.

Acknowledgments

The authors express their gratitude to B. De Meyer for useful discussion on the subject.

References

De Meyer, B., Saley, H. 2002. On the Strategic Origin of Brownian Motion in Finance. Int. J. of Game Theory, 31: 285–319.

Aumann, R., Maschler, M. 1995. Repeated Games with Incomplete Information. The MIT Press: Cambridge, Massachusetts – London, England.

Domansky, V. 2007. Repeated games with asymmetric information and random price fluctuations at finance markets. Int. J. of Game Theory, 36(2): 241–257.

Domansky, V., Kreps, V. 2007. Moment of revealing of insider information at biddings with asymmetric information of agents. Proc. of Appl. and Indust. Math., 14(3): 399–416 (in Russian).
