Construction of different types of dynamics in an evolutionary model of trades in the stock market

Gubar Elena

St. Petersburg State University,
Faculty of Applied Mathematics and Control Processes,
Universitetskiy pr. 35, St. Petersburg, 198504, Russia
fax: +7 (812) 428 71 59
http://www.apmath.spbu.ru
alyona-kor@yahoo.com

Abstract. The main purpose of this work is to study the behavior of stock market agents using methods of evolutionary game theory and to construct evolutionary dynamics for the long-run period. For this model we consider and compare several types of dynamics. The following dynamics are constructed for the model: the evolutionary (replicator) dynamics, the overlapping generations (OLG) dynamics, and the continuous dynamics of imitation behavior. As a result we compare the solutions produced by all of these dynamics.

Keywords: evolutionary game, ESS strategy, stock market, cheap-talk game, replicator dynamics, discrete dynamics, imitation models, imitation dynamics.

In this work we construct an evolutionary model of the behavior of stock market agents in the case of a takeover. Consider a stock market with a set of agents, and suppose that each agent holds blocks of shares of different companies. Assume that market agents have three types of behavior. The first type is to hold the block of shares and receive the profit from it. The second type is to gain control of a company: in this case the agent buys blocks of shares and accumulates control, but the main assumption is that no agent can buy control of a company independently; he can buy control only if he cooperates with another agent who holds a block of shares of the same company. The third type of behavior is to detect the intentions of the opponent. In other words, if the first agent meets an agent who wants to hold his block of shares, then he holds too; if the first agent meets an agent who wants to buy control of the company, then the agents cooperate and buy control.

Suppose that stock market agents are randomly matched, and that their interaction can be described by a symmetric two-player game. In this game the first strategy of an agent is to hold the blocks of shares, and the second strategy is to buy control. Players receive the following payoffs: if both players hold their blocks of shares, then each gets a small payoff equal to 2. If one player wants to buy control and the other does not, then the first agent gets 0, because he spends money but does not obtain control, and the second player gets 3, because he earns the profit from his block of shares. If both players want to buy control and cooperate, then they buy it and each gets payoff 4.

The main purpose of this work is to study the behavior of stock market agents using methods of evolutionary game theory and to construct evolutionary dynamics for the long-run period. For this model we consider and compare several types of dynamics. The following dynamics are constructed for the model: the evolutionary (replicator) dynamics, the OLG dynamics, and the continuous imitation dynamics. As a result we compare the solutions produced by all of these dynamics.

1. Base game

In this work the base model of trades is described by a symmetric two-player game with the payoff matrix:

        H       B
H     (2,2)   (3,0)
B     (0,3)   (4,4)

Denote the players' strategies in this base game by H and B; strategy H corresponds to the first type of agent behavior (holding the block of shares), and strategy B describes the second type of behavior (buying control). We will call this symmetric game the base game.

Obviously the base game has three Nash equilibria: two equilibria in pure strategies and one mixed-strategy equilibrium.
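The mixed equilibrium is easily computed from the indifference condition. If the opponent plays H with probability $q$, then
$$u(H, (q, 1-q)) = 2q + 3(1 - q) = 3 - q, \qquad u(B, (q, 1-q)) = 4(1 - q),$$
and $3 - q = 4 - 4q$ gives $q = 1/3$; hence the symmetric mixed equilibrium is $x = (1/3, 2/3)$.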

In the base game we can verify which strategies are evolutionarily stable. To check evolutionary stability we use the evolutionary stability criterion; first we recall the basic definition and the criterion:

Definition 1. $x \in \Delta$ is an evolutionarily stable strategy (ESS) if for every strategy $y \neq x$ there exists $\bar{\varepsilon}_y \in (0,1)$ such that the inequality
$$u[x, \varepsilon y + (1 - \varepsilon)x] > u[y, \varepsilon y + (1 - \varepsilon)x]$$
holds for all $\varepsilon \in (0, \bar{\varepsilon}_y)$.

Evolutionary stability criterion (Maynard Smith and Price, 1973):
$$u(y, x) \le u(x, x) \quad \text{for all } y,$$
$$\text{if } u(y, x) = u(x, x), \text{ then } u(y, y) < u(x, y), \quad y \neq x.$$

This property has the following interpretation: evolutionary stability of a strategy x means that x earns at least as high a payoff against itself as any other strategy y does, and, against every alternative best reply y, it earns a strictly higher payoff than y earns against itself.

Denote by $\Delta^{ESS}$ the set of evolutionarily stable strategies. In our case this set contains two strategies, $\Delta^{ESS} = \{H, B\}$.
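The criterion above is also easy to check numerically. The following short Python sketch tests it for the pure strategies of the base game against a small illustrative set of candidate deviations (the helper names, tolerances and the candidate set are our own choices; the actual criterion quantifies over all mixed strategies $y$):

```python
import numpy as np

# Payoff matrix of the base game (row player's payoffs); strategies ordered H, B.
A = np.array([[2.0, 3.0],
              [0.0, 4.0]])

def u(y, x):
    """Expected payoff u(y, x) of mixed strategy y against mixed strategy x."""
    return y @ A @ x

def satisfies_ess_criterion(x, candidates):
    """Check the Maynard Smith / Price conditions for x against the given candidates."""
    for y in candidates:
        if np.allclose(y, x):
            continue
        if u(y, x) > u(x, x) + 1e-12:          # first condition violated
            return False
        if np.isclose(u(y, x), u(x, x)) and u(y, y) >= u(x, y) - 1e-12:
            return False                        # alternative best reply not strictly beaten
    return True

H, B = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mixed = np.array([1 / 3, 2 / 3])                # the mixed Nash equilibrium of the base game
for name, s in [("H", H), ("B", B), ("mixed", mixed)]:
    print(name, satisfies_ess_criterion(s, [H, B, mixed]))
# Prints True for H and B and False for the mixed equilibrium,
# in agreement with Delta^ESS = {H, B}.
```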

2. Extended game

Consider an extension of the base game. In the extended game we add a third strategy E to the base game. This strategy describes a new type of behavior of the market agents, and we will call players using strategy E rational players. Rationality of a player means that an agent who uses strategy E can recognize the actions of his opponents. If a rational player meets an opponent who uses strategy B, then the rational player also uses strategy B, and the rational player uses strategy H if his opponent uses strategy H. When a rational player meets another rational player, both players play Nash equilibrium strategies.

In the base game we have two strict Nash equilibria; hence in the extended game we consider two cases, which describe two variants of the market agents' preferences. Below we present the payoff matrices for both cases. The payoff matrix for the first case (case A) of the extended game is:

        H       B       E
H     (2,2)   (3,0)   (2,2)
B     (0,3)   (4,4)   (4,4)
E     (2,2)   (4,4)   (4,4)

We assume that in the first case both rational players use strategy B, and therefore the payoff in the strategy profile (E, E) is 4. In this case we have five Nash equilibrium profiles, and the set of Nash equilibrium strategies is $\Delta^{NE} = \{H, B, E\}$. However, in this case there are no evolutionarily stable strategies, but there are two neutrally stable strategies, B and E; denote by $\Delta^{NSS}$ the set of neutrally stable strategies. Recall the definition of neutral stability.

Definition 2. $x \in \Delta$ is a neutrally stable strategy (NSS) if for every strategy $y \in \Delta$ there exists $\bar{\varepsilon}_y \in (0,1)$ such that the inequality
$$u[x, \varepsilon y + (1 - \varepsilon)x] \ge u[y, \varepsilon y + (1 - \varepsilon)x]$$
holds for all $\varepsilon \in (0, \bar{\varepsilon}_y)$.

To verify this kind of stability we can use the following criterion:

Definition 3. Neutral stability criterion (Maynard Smith, 1982):
$$u(y, x) \le u(x, x) \quad \text{for all } y,$$
$$\text{if } u(y, x) = u(x, x), \text{ then } u(y, y) \le u(x, y), \quad y \neq x.$$

Neutral stability means that a player who uses a neutrally stable strategy x earns at least as high a payoff against x as any other strategy does, and, against every alternative best reply y, the strategy x does at least as well as y does against itself.

We obtain that in the first case of the extended game the strategies B and E are neutrally stable, $\Delta^{NSS} = \{B, E\}$.
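This can be checked directly from the criterion. Against E the alternative best reply is B, since $u(B, E) = u(E, E) = 4$; then $u(E, B) = 4 \ge 4 = u(B, B)$, so E is neutrally stable, while the strict inequality required for an ESS fails. Symmetrically, E is the alternative best reply to B, and $u(B, E) = 4 \ge 4 = u(E, E)$, so B is neutrally stable but not evolutionarily stable.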

Consider the second case of the extended game. In this case both rational players use strategy H, and therefore the payoff in the strategy profile (E, E) is 2. The payoff matrix for the second case (case B) of the extended game is:

        H       B       E
H     (2,2)   (3,0)   (2,2)
B     (0,3)   (4,4)   (4,4)
E     (2,2)   (4,4)   (2,2)

For this payoff matrix we have the following set of Nash equilibrium strategies: $\Delta^{NE} = \{H, B, E, x = (1/2, 0, 1/2)\}$.

One can check that in this case of the extended game strategy E is no longer neutrally stable, while strategy B is evolutionarily stable, so $\Delta^{ESS} = \{B\}$.
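Both claims follow from the criteria above. Against the mixed strategy $x = (1/2, 0, 1/2)$ every pure strategy earns the same payoff,
$$u(H, x) = u(B, x) = u(E, x) = 2,$$
so $x$ is a symmetric Nash equilibrium strategy. For strategy B the alternative best reply is E, since $u(E, B) = u(B, B) = 4$, and the strict inequality $u(B, E) = 4 > 2 = u(E, E)$ holds, so B is an ESS. For strategy E we have $u(B, E) = 4 > 2 = u(E, E)$, so the first condition of the neutral stability criterion fails and E is not neutrally stable.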

Comparing both cases of the extended game, we see that the structure of Nash equilibria and the sets of ESS and NSS strategies differ from those of the base game.

3. Replicator dynamics

Consider the long-run period and describe the behavior of the market agents using the tools of evolutionary game theory. Suppose that we have a large but finite group of agents on the stock market, and assume that each agent in the group holds some blocks of shares of different companies, which are distributed among the agents. In this large group the market agents are randomly matched, and their interaction can be described by the extended game.

In the continuous case we construct the replicator dynamics for the agents. First we give some additional definitions.

Let each player in this group of agents be programmed to play only one pure strategy during the whole time period; the mixed strategy in this case is described in the following way.

Denote by $x(t) = (x_H(t), x_B(t), x_E(t))$ the mixed strategy of the large group of market agents. Each component of this mixed strategy is the share of agents using the corresponding pure strategy: $x_H(t)$ is the group share programmed to pure strategy H, $x_B(t)$ is the group share programmed to pure strategy B, and $x_E(t)$ is the group share programmed to pure strategy E.

Denote by $u(x, x) = \sum_{i=1}^{k} x_i u(e^i, x)$ the average payoff of the large group of agents, where $e^i$ is the $i$-th pure strategy. The main equation of the replicator dynamics was proposed by Taylor and Jonker, 1978:
$$\dot{x}_i = [u(e^i, x) - u(x, x)]\, x_i, \quad i = 1, \ldots, n. \qquad (1)$$
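Once a payoff matrix is fixed, equation (1) can be integrated numerically. Below is a minimal Python sketch (using scipy.integrate.solve_ivp; the payoff matrix is that of the first case of the extended game, and the initial state is chosen purely for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Payoff matrix of the first case (case A) of the extended game; rows/columns H, B, E.
A = np.array([[2.0, 3.0, 2.0],
              [0.0, 4.0, 4.0],
              [2.0, 4.0, 4.0]])

def replicator(t, x):
    """Right-hand side of (1): dx_i/dt = [u(e_i, x) - u(x, x)] * x_i."""
    fitness = A @ x        # u(e_i, x) for every pure strategy i
    average = x @ fitness  # u(x, x)
    return (fitness - average) * x

x0 = np.array([0.31, 0.58, 0.11])       # illustrative initial group state
sol = solve_ivp(replicator, (0.0, 40.0), x0)
print(sol.y[:, -1])                     # long-run composition of the group
```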

Recall two propositions which describe the main stability properties of the replicator dynamics.

Proposition 1 (Bomze, Weibull, 1994). $x \in \Delta^{NSS}$ is Lyapunov stable in the replicator dynamics (1).

Proposition 2 (Taylor, Jonker, 1978). $x \in \Delta^{ESS}$ is asymptotically stable in the replicator dynamics (1).

We now construct the replicator dynamics for both cases of the extended game.

For the first case of the extended game we obtain the following system of differential equations:

$$\begin{aligned}
\dot{x}_H &= -x_H \bigl( 2x_H^2 + x_H(3x_B + 2(2x_E - 1)) + 2(2x_B^2 + 4x_B x_E + x_E(2x_E - 1)) \bigr), \\
\dot{x}_B &= -x_B \bigl( 2x_H^2 + x_H(3x_B + 4x_E) + 4(x_B + x_E)(x_B + x_E - 1) \bigr), \qquad (2)\\
\dot{x}_E &= -x_E \bigl( 2x_H^2 + x_H(3x_B + 2(2x_E - 1)) + 4(x_B + x_E)(x_B + x_E - 1) \bigr).
\end{aligned}$$

Using the set of initial states $x_H = 0.01, 0.06, \ldots, 0.96$, $x_B = 0.98, 0.93, \ldots, 0.03$, $x_E = 0.1$, we found the numerical solution of the system. The solution trajectories are presented in Figure 1.

In the first case of the extended game the share $x_H$ decreases over time, while the shares $x_B$ and $x_E$ increase. Hence in this case it is better for players to cooperate and to buy large blocks of shares or control of the target company.

By Proposition 1 we can say that the states $x_E$ and $x_B$ are Lyapunov stable, because both strategies are neutrally stable. For the second case of the extended game we obtain the following system of differential equations:

$$\begin{aligned}
\dot{x}_H &= \bigl( 2x_H x_B + (x_H - 1)x_B - 2x_B(1 - x_H - x_B) \bigr) x_H, \\
\dot{x}_B &= \bigl( (-2 + 2x_B)x_H + x_H x_B + (2 - 2x_B)(1 - x_H - x_B) \bigr) x_B, \qquad (3)\\
\dot{x}_E &= \bigl( 3x_H x_B - 2x_B(1 - x_H - x_B) \bigr)(1 - x_H - x_B).
\end{aligned}$$

Figure 1. Replicator dynamics for the first case of the extended game

Using the initial states $x_H = 0.01, 0.02, \ldots, 0.98$, $x_B = 0.98, 0.97, \ldots, 0.01$, $x_E = 0.1$, we find the solution of system (3). The solution trajectories of system (3) are presented in Figure 2.

Figure 2. Replicator dynamics for the second case of the extended game

For the replicator dynamics described by equations (3) we have the boundary initial states $x_H(0) = 0.37$, $x_B(0) = 0.63$, $x_E(0) = 0.01$; $x_H(0) = 0.35$, $x_B(0) = 0.65$, $x_E(0) = 0.01$; $x_H(0) = 0.36$, $x_B(0) = 0.64$, $x_E(0) = 0.01$.

From the initial states $x_H = 0.01, \ldots, 0.36$ all trajectories converge to the vertex $x_B$, and from the initial states $x_H = 0.37, \ldots, 0.98$ the solution trajectories converge to the state $x_E$. However, in the long-run period in the second case of the extended game strategy H vanishes over time (by the proposition of Samuelson, 1993) and the players' share $x_B$ increases.

By Proposition 2 we obtain that $x_B$ is asymptotically stable, because strategy B is an ESS in the extended game.

Analyzing both cases, we can say that in the long-run period it is better for the market agents to use strategy B, which requires an agent to cooperate with his opponents. As a result, agents can buy large blocks of shares or control of the company. This type of behavior is indeed riskier; however, strategy B survives during the whole time period.

4. Overlapping generations dynamics

Consider a special case and suppose that evolutionary selection is modeled in discrete time, with each period $t = 0, 1, 2, \ldots$ representing a generation; moreover, we assume that generations may overlap in time and that agents appear on and disappear from the market $r \ge 1$ times per time unit. The time unit can be one week, one month or one year, and we assume that in each time period the interaction involves the share $\tau = 1/r$, $\tau \in (0, 1]$, of the total group. The interaction between the agents occurs at random, with equal probability for all individuals in the total group of agents. Here $x(t) = (x_B(t), x_E(t), x_H(t))$ is the group state at moment $t$, and the values $x_B(t), x_E(t), x_H(t)$ are the shares of agents who use the corresponding types of behavior B, E, H. As in the previous section, assume that each agent is programmed to a pure strategy and can be replaced by $u(e^i, x(t)) + \beta > 0$ agents at that moment, where $u(e^i, x(t))$ is the payoff of a player with pure strategy $i$ in state $x(t)$. The parameter $\beta$ represents the lifetime of an agent on the market.

The replicator dynamics for this case is described by the expression:

$$x_i(t+1) = \frac{1 - \tau + \tau(\beta + u[e^i, x(t)])}{1 - \tau + \tau(\beta + u[x(t), x(t)])}\, x_i(t), \quad i = H, B, E, \qquad (4)$$
$$x(t) \in \Delta, \quad \beta > 0, \quad t = 0, 1, 2, \ldots$$

The variable $\tau \in (0, 1]$ is the length of the time interval between two successive changes of the group.
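A minimal numerical sketch of how the discrete map (4) can be iterated is given below (Python; the payoff matrix is that of the first case of the extended game, the initial state is illustrative, and the parameter values match those used in the computations below):

```python
import numpy as np

# Payoff matrix of the first case (case A) of the extended game; rows/columns H, B, E.
A = np.array([[2.0, 3.0, 2.0],
              [0.0, 4.0, 4.0],
              [2.0, 4.0, 4.0]])

def olg_step(x, beta, tau):
    """One step of the overlapping generations dynamics (4)."""
    fitness = A @ x                       # u(e_i, x(t))
    average = x @ fitness                 # u(x(t), x(t))
    numerator = 1.0 - tau + tau * (beta + fitness)
    denominator = 1.0 - tau + tau * (beta + average)
    return numerator / denominator * x    # the map keeps x on the simplex

x = np.array([0.30, 0.65, 0.05])          # illustrative initial group state
for _ in range(200):
    x = olg_step(x, beta=0.2, tau=0.8)
print(x)
```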

We construct the overlapping generations dynamics for the first case of the extended game.

$$\begin{aligned}
x_H(t+1) &= \frac{1 - \tau + \tau(\beta + x_B + 2)}{1 - \tau + \tau(\beta + (2 - 2x_B)x_H + (4 - x_H)x_B + (4 - 2x_H)x_E)}\, x_H, \\
x_B(t+1) &= \frac{1 - \tau + \tau(\beta + 4 - 4x_H)}{1 - \tau + \tau(\beta + (2 - 2x_B)x_H + (4 - x_H)x_B + (4 - 2x_H)x_E)}\, x_B, \qquad (5)\\
x_E(t+1) &= \frac{1 - \tau + \tau(\beta + 4 - 2x_H)}{1 - \tau + \tau(\beta + (2 - 2x_B)x_H + (4 - x_H)x_B + (4 - 2x_H)x_E)}\, x_E.
\end{aligned}$$

For the initial states $x_H = 0.05, 0.1, \ldots, 0.95$, $x_B = 0.95, 0.9, \ldots, 0.05$, $x_E = 0.5$ and the parameter values $\beta = 0.2$, $\tau = 0.8$, $t = 0, 1, 2, \ldots$, the trajectories of system (5) are presented in Figure 3.

Figure 3. Overlapping generations dynamics for the first case of the extended game

Consider the second case of the extended game and construct the overlapping generations dynamics.

$$\begin{aligned}
x_H(t+1) &= \frac{1 - \tau + \tau(\beta + x_B + 2)}{1 - \tau + \tau(\beta + (2 - 2x_B)x_H + (4 - x_H)x_B + (2x_B + 2)(1 - x_H - x_B))}\, x_H, \\
x_B(t+1) &= \frac{1 - \tau + \tau(\beta + 4 - 4x_H)}{1 - \tau + \tau(\beta + (2 - 2x_B)x_H + (4 - x_H)x_B + (2x_B + 2)(1 - x_H - x_B))}\, x_B, \qquad (6)\\
x_E(t+1) &= \frac{1 - \tau + \tau(\beta + 2x_B + 2)}{1 - \tau + \tau(\beta + (2 - 2x_B)x_H + (4 - x_H)x_B + (2x_B + 2)(1 - x_H - x_B))}\, x_E.
\end{aligned}$$

For the initial states $x_H = 0.05, 0.1, \ldots, 0.95$, $x_B = 0.5$, $x_E = 0.95, 0.9, \ldots, 0.05$ and $x_H = 0.05, 0.1, \ldots, 0.95$, $x_B = 0.95, 0.9, \ldots, 0.05$, $x_E = 0.5$, with $\beta = 0.2$, $\tau = 0.8$, $t = 0, 1, 2, \ldots$, the solution trajectories of system (6) are presented in Figure 4.

Figure 4. Overlapping generations dynamics for the second case of the extended game

We obtain that for the first case of the extended game the solution trajectories from all initial states converge to the boundary face spanned by the vertices $x_B$ and $x_E$. For the second case of the extended game all solution trajectories converge to the vertex $x_B$, and the share $x_H$ vanishes in the long-run period.

5. Replication by Imitation

As in the previous sections, consider a large but finite group of stock market agents. Each agent uses one of the three described strategies H, B or E, corresponding to the three types of agent behavior. The agents are matched at random in the total group, and we assume that agents live infinitely and interact forever. During a meeting an agent can review his own strategy and his opponent's strategy, and hence he can change his own strategy to the sampled one.

There are two basic elements in this model. The first element is the time rate at which agents in the group review their strategy choice. The second element is the probability with which agents change their strategies. Both elements depend on the current group state and on the performance of the agent's pure strategy.

Let $K = \{H, B, E\}$ be the set of pure strategies, and denote by $r_i(x)$ the average review rate of an agent who uses pure strategy $i$ in the group state $x = (x_H, x_B, x_E)$. A player with pure strategy $i$ will be called an $i$-strategist. The value $p_j^i(x)$ is the probability with which an $i$-strategist switches to pure strategy $j$, $i, j \in K$. Here $p^i(x) = (p_H^i(x), p_B^i(x), p_E^i(x))$, $i = H, B, E$, is the resulting probability distribution over the set of pure strategies. This assumption can be interpreted as follows: if one agent exits the market, he is replaced by another one. In the general case the imitation dynamics is described by the formula:

$$\dot{x}_i = \sum_{j \in K} x_j r_j(x) p_i^j(x) - r_i(x)\, x_i, \quad i = H, B, E. \qquad (7)$$
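A schematic implementation of the right-hand side of (7) is sketched below; the review-rate function r and the switching-probability function p are placeholders that a concrete imitation model has to supply:

```python
import numpy as np

def general_imitation_rhs(x, r, p):
    """
    Right-hand side of the general imitation dynamics (7).
    x : group state (x_H, x_B, x_E), shape (3,)
    r : callable, r(x) -> review rates r_i(x), shape (3,)
    p : callable, p(x) -> matrix P with P[j, i] = p_i^j(x), each row summing to one
    """
    rates = r(x)
    P = p(x)
    inflow = (x * rates) @ P      # sum over j of x_j * r_j(x) * p_i^j(x)
    outflow = rates * x           # r_i(x) * x_i
    return inflow - outflow
```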

In this paper we use a special case of (7): the imitation dynamics of successful agents.

6. Imitation of Successful Agents

Suppose that each agent samples another stock market agent from the total group with equal probability for all agents and observes the average payoff to his own and to the sampled agent's strategy. When both players reveal their strategies, the player who uses strategy $i$ gets payoff $u(e^i, x) + \varepsilon$ and the player who uses strategy $j$ gets $u(e^j, x) + \varepsilon'$, where $\varepsilon, \varepsilon'$ are random variables with a continuous probability distribution function $\phi$. The random variables $\varepsilon$ and $\varepsilon'$ can be interpreted as individual differences in preferences between agents on the market. As the distribution function we use the uniform distribution and consider the particular case $\phi(z) = \alpha + \beta z$, $\alpha, \beta \in \mathbb{R}$, $\beta > 0$.

The players compare their payoffs: if the payoff of the sampled agent is better than that of the reviewing agent, the reviewing agent switches to the strategy of the sampled agent. In other words, if the inequality $u(e^j, x) + \varepsilon' > u(e^i, x) + \varepsilon$ holds for a player with pure strategy $i$, then he switches to strategy $j$.

In the general case the following formula describes the imitation dynamics of successful agents:
$$\dot{x}_i = x_i \sum_{j \in K} x_j \bigl( \phi[u(e^i - e^j, x)] - \phi[u(e^j - e^i, x)] \bigr), \quad i = H, B, E. \qquad (8)$$
Using the fact that $\phi(z) = \alpha + \beta z$, we transform (8):
$$\dot{x}_i = 2\beta\, [u(e^i, x) - u(x, x)]\, x_i. \qquad (9)$$
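The reduction from (8) to (9) is easy to confirm numerically. The following Python sketch compares both right-hand sides for the payoff matrix of the second case of the extended game (the values of alpha, beta and the test state are illustrative):

```python
import numpy as np

# Payoff matrix of the second case (case B) of the extended game; rows/columns H, B, E.
A = np.array([[2.0, 3.0, 2.0],
              [0.0, 4.0, 4.0],
              [2.0, 4.0, 2.0]])

alpha, beta = 0.5, 0.8                 # illustrative parameters of phi(z) = alpha + beta*z

def phi(z):
    return alpha + beta * z

def imitation_rhs(x):
    """Right-hand side of (8): x_i * sum_j x_j (phi[u_i - u_j] - phi[u_j - u_i])."""
    u = A @ x
    diff = u[:, None] - u[None, :]     # u(e_i, x) - u(e_j, x) for all pairs (i, j)
    return x * ((phi(diff) - phi(-diff)) @ x)

def reduced_rhs(x):
    """Right-hand side of (9): 2*beta*[u(e_i, x) - u(x, x)]*x_i."""
    u = A @ x
    return 2.0 * beta * (u - x @ u) * x

x = np.array([0.2, 0.5, 0.3])
print(np.allclose(imitation_rhs(x), reduced_rhs(x)))   # True: (8) coincides with (9)
```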

Using the payoff matrix of the first case of the extended game, we obtain the system of differential equations:

$$\begin{aligned}
\dot{x}_H &= 2\beta \bigl( -x_H ( 2x_H^2 + x_H(3x_B + 2(2x_E - 1)) + 2(2x_B^2 + 4x_B x_E + x_E(2x_E - 1)) ) \bigr), \\
\dot{x}_B &= 2\beta \bigl( -x_B ( 2x_H^2 + x_H(3x_B + 4x_E) + 4(x_B + x_E)(x_B + x_E - 1) ) \bigr), \qquad (10)\\
\dot{x}_E &= 2\beta \bigl( -x_E ( 2x_H^2 + x_H(3x_B + 2(2x_E - 1)) + 4(x_B + x_E)(x_B + x_E - 1) ) \bigr).
\end{aligned}$$

Solution trajectories of system (10) for the initial states $x_H = 0.01, 0.02, \ldots, 0.98$, $x_B = 0.98, 0.97, \ldots, 0.01$, $x_E = 0.1$ and $\beta = 0.8$ are presented in Figure 5.

Write the system of differential equations for the second case of the extended game:

$$\begin{aligned}
\dot{x}_H &= 2\beta \bigl( ( 2x_H x_B + (x_H - 1)x_B - 2x_B(1 - x_H - x_B) ) x_H \bigr), \\
\dot{x}_B &= 2\beta \bigl( ( (-2 + 2x_B)x_H + x_H x_B + (2 - 2x_B)(1 - x_H - x_B) ) x_B \bigr), \qquad (11)\\
\dot{x}_E &= 2\beta \bigl( ( 3x_H x_B - 2x_B(1 - x_H - x_B) )(1 - x_H - x_B) \bigr).
\end{aligned}$$

Solution trajectories of system (11) for the set of initial states $x_H = 0.01, 0.02, \ldots, 0.98$, $x_B = 0.98, 0.97, \ldots, 0.01$, $x_E = 0.1$ and $\beta = 0.8$ are presented in Figure 6.

Figure 5. Imitation dynamics for the first case of the extended game

Figure 6. Imitation dynamics for the second case of the extended game

We find that the solution trajectories under the imitation dynamics of successful agents are close to those of the replicator dynamics: strategies E and B survive in the long-run period for the first case of the game, and strategy B survives in the long-run period for the second case. The share of market agents using strategy H vanishes from the market in both cases.

References

Weibull, J. (1995). Evolutionary Game Theory. Cambridge, MA: The M.I.T. Press.

Subramanian, N. (2005). An evolutionary model of speculative attacks. Journal of Evolutionary Economics, 15, 225-250. Springer-Verlag.

Gintis, H. (2000). Game theory evolving. Princeton University Press.

Aumann, R. J. and S. Hart (2003). Long Cheap Talk. Econometrica, Vol. 71, No. 6, 1619-1660.

Battalio, R., Samuelson, L. and J. van Huyck (2001). Optimization Incentives and Coordination Failure in Laboratory Stag Hunt Games. Econometrica, Vol. 69, No. 3, 749-764.

Cressman, R. (2003). Evolutionary dynamics and extensive form games. MIT Press.
