
Contributions to Game Theory and Management, XII, 140-150

Mechanisms of Struggle with Corruption in Dynamic Social and Private Interests Coordination Engine Models*

Olga I. Gorbaneva, Anatoly B. Usov and Gennady A. Ougolnitsky

Southern Federal University, I.I. Vorovich Institute of Mathematics, Mechanics and Computer Sciences,

Milchakova St. 8a, Rostov-on-Don, 344090, Russia. E-mail: gorbaneva@mail.ru, usov@math.rsu.ru, gaugolnickiy@sfedu.ru

Abstract A consideration of economic corruption is introduced into the dynamic social and private interests coordination engine (SPICE) models related to resource allocation. The investigation is based on the hierarchical game-theoretic approach in principal-agents systems. From the point of view of the agents, a differential game in normal form arises which results in a Nash equilibrium. The addition of a principal forms an inverse differential Stackelberg game in open-loop strategies. The related optimal control problems are solved by the Pontryagin maximum principle together with a method of qualitatively representative scenarios of simulation modeling. Algorithms for building the Nash and Stackelberg equilibria are proposed; numerical examples are described and analyzed.

Keywords: social and private interests coordination engine (SPICE) models, inverse differential Stackelberg games, corruption, principal-agents systems

1. Introduction

A large stream of literature is concerned with mathematical modeling of corruption. In most of the papers, static models of struggle with corruption are built and investigated (Blackburn et al., 2006; Blackburn and Powell, 2011; Levin and Satarov, 2013; Antonenko et al., 2013). Dynamic models are studied for special forms of the model functions or under strong restrictions on the feasible strategies of the agents (Kolokoltsov and Malafeev, 2017; Grass et al., 2008).

In this paper the authors' approach of sustainable management in active systems is used (Gorbaneva et al., 2016; Ugol'nitskii and Usov, 2014; Ougolnitsky and Usov, 2015; Ougolnitsky and Usov, 2014; Ougolnitsky and Usov, 2019; Ugol'nitskii and Usov, 2013; Ougolnitsky and Usov, 2018). In relation to corruption, an extended active system is considered which consists of a principal, one or several supervisors, several agents, and a controlled dynamic system. The agents propose bribes to the supervisor in exchange for some privileges or resources. The principal is not corrupted, while the supervisor(s) and the agents maximize their payoffs under conditions of possible corruption. The relations between the principal and the supervisor(s), as well as between the supervisor(s) and the agents, are modeled by inverse differential Stackelberg games (Basar and Olsder, 2016; Dockner et al., 2000).

The paper develops the previous works (Ougolnitsky and Usov, 2015; Ougolnitsky and Usov, 2014; Ougolnitsky and Usov, 2019; Ugol'nitskii and Usov, 2013; Ougolnitsky and Usov, 2018). In (Ugol'nitskii and Usov, 2014; Ougolnitsky and Usov, 2015) the methods of struggle with administrative corruption in models of optimal harvesting are considered. In (Ougolnitsky and Usov, 2014) the methods of struggle with economic corruption are proposed for surface water quality control systems, and in (Ougolnitsky and Usov, 2019) for general SPICE-models.

* This work was supported by the Russian Foundation for Basic Research under grant No. 18-01-00053.

In this paper the dynamic SPICE-models of struggle with corruption in resource allocation are built and analyzed. The respective inverse differential Stackelberg games in open-loop strategies are solved by the Pontryagin maximum principle (Basar and Olsder, 2016) together with the method of qualitatively representative scenarios of simulation modeling (Ougolnitsky and Usov, 2018).

2. The problem setup

Consider a three-level extended active system which consists of a principal, a supervisor, several agents, and a controlled dynamical system. The supervisor allocates a (financial) resource between the agents, who divide it between their private activities and the production of a common social good. The supervisor can increase the allocated resource in exchange for a bribe from the agents. The principal controls the supervisor and punishes her if a bribe is detected.

The following information structure is assumed. The principal chooses his strategy first and reports it to the supervisor and the agents. Then the supervisor chooses her control and reports it to the agents. The agents exert influence on the controlled dynamical system (the production of a social good). Given the supervisor's strategy, they play a differential game in normal form which results in a Nash equilibrium in open-loop strategies. This equilibrium is treated as the best response of the agents to the supervisor's strategy. The principal has no payoff functional of his own and only implicitly controls the supervisor. The supervisor and the agents maximize payoff functionals of the following form:

- for the supervisor

J_0(\{r_i(\cdot)\}_{i=1}^N, \{b_i(\cdot)\}_{i=1}^N, x(\cdot)) = \int_0^T e^{-\rho t}\Big\{ s_0(t)C(x(t)) + [1 - z - Mz]\sum_{i=1}^N b_i(t) r_i(t) \Big\}\,dt \to \max \quad (1)

- for the agents (i = 1, 2, \ldots, N)

J_i(r_i(\cdot), b_i(\cdot), u_i(\cdot), x(\cdot)) = \int_0^T e^{-\rho t}\big[p_i(r_i(t) - b_i(t) - u_i(t)) + s_i(t)C(x(t))\big]\,dt \to \max \quad (2)

Here s_0(t) + \sum_{i=1}^N s_i(t) = 1.

It is assumed that the supervisor and the agents receive their shares of the produced social good (s_0(t) and s_i(t), i = 1, 2, \ldots, N, respectively) at each moment of time.

Denote by N the number of agents; T - the period of time (the length of the game); ρ - the discount factor; z - the probability for the supervisor to be caught taking a bribe; M - the penalty coefficient; x(t) - the amount of the social good; C(x) - the income function from the social good; s_0(t), s_i(t) (i = 1, 2, \ldots, N) - the shares of the supervisor and the agents in C(x); p_i - the agents' functions of revenue from their private activities; r_i(t) - the amount of resource allocated by the supervisor (her control) to the i-th agent with consideration of a bribe; b_i(t) - the bribe ("kickback") of the i-th agent (his control) to the supervisor; u_i(t) - the part of the received resource that the i-th agent uses in the production of the social good. The payoff functionals (1), (2) are considered with the following constraints on the controls:

- for the supervisor

r_i(t) \ge 0; \quad \sum_{i=1}^N r_i(t) = R(t); \quad 0 \le t \le T; \quad (3)

- for the agents

0 \le b_i(t) + u_i(t) \le r_i(t); \quad i = 1, 2, \ldots, N; \quad 0 \le t \le T. \quad (4)

Here R(t) is the total amount of the (financial) resource allocated by the principal to the supervisor.

The system dynamics equation (for the social good) has the form

\dot{x} = f\Big(\sum_{i=1}^N u_i(t)\Big) - \mu x(t); \quad x(0) = x_0. \quad (5)

Here f is the production function for the social good; μ - the deterioration rate; x_0 - the initial amount of the social good.

Assume that C, f, p_i (i = 1, 2, \ldots, N) are concave monotone increasing functions with C(0) = f(0) = p_i(0) = 0. For definiteness, consider the case of power functions:

C(x) = kx^\beta; \quad f(y) = Ay^\gamma; \quad p_i(y) = y^{\alpha_i}; \quad k, A, \gamma, \alpha_i = \text{const}; \quad i = 1, 2, \ldots, N. \quad (6)

The values R(t), z, M, and s_0 are given by the principal. Thus, the dynamic SPICE-model of resource allocation (1)-(6) is considered.
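To make the setup concrete, the following minimal sketch (in Python; the function and variable names are ours, not the authors') integrates the dynamics (5) with the power production function f from (6) by an explicit Euler finite-difference scheme, the same type of scheme used for the simulations in Section 5; the default parameter values mirror Example 1 below.

import numpy as np

def simulate_social_good(u, T=1095.0, dt=1.0, x0=100.0, A=0.05, gamma=0.5, mu=0.01):
    # Integrate dx/dt = A*(sum_i u_i(t))^gamma - mu*x(t), x(0) = x0,
    # by the explicit Euler scheme; u maps time t to the vector (u_1(t), ..., u_N(t)).
    steps = int(T / dt)
    x = np.empty(steps + 1)
    x[0] = x0
    for n in range(steps):
        total_u = np.sum(u(n * dt))      # total resource invested in the social good
        x[n + 1] = x[n] + dt * (A * total_u**gamma - mu * x[n])
    return x

# Example: three agents each investing u_i = 10 c.u. at every moment of time
x = simulate_social_good(lambda t: np.array([10.0, 10.0, 10.0]))
print(x[-1])                             # terminal amount x(T) of the social good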

3. The Nash equilibrium

From the point of view of the agents the functions ri(t) (supervisor's controls) in the model (1) - (6) are given. Consider two natural forms of the functions:

r_i(t) = \frac{R\,b_i(t)}{\sum_{j=1}^N b_j(t)}; \quad i = 1, 2, \ldots, N \quad (7)

and

r_i(t) = r_{0i} + \Big(R - \sum_{k=1}^N r_{0k}\Big)\frac{b_i(t)}{\sum_{j=1}^N b_j(t)}; \quad i = 1, 2, \ldots, N. \quad (8)

The values r_{0i} (i = 1, 2, \ldots, N) determine the amounts of resource allocated to the agents in the absence of corruption. The case (7) corresponds to extortion: an agent who gives no bribe receives no resource at all. The case (8) describes capture: a fixed allocated amount of the resource may be increased in exchange for a bribe.
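The two allocation mechanisms are straightforward to express in code; in the sketch below (Python, names ours) the behaviour at a zero total bribe is our assumption: extortion then allocates nothing, while capture falls back to the guaranteed amounts r_{0i}.

import numpy as np

def allocate_extortion(b, R=100.0):
    # Mechanism (7): r_i = R * b_i / sum_j b_j; no bribe means no resource.
    total = np.sum(b)
    return np.zeros_like(b) if total == 0 else R * b / total

def allocate_capture(b, r0, R=100.0):
    # Mechanism (8): guaranteed amount r_0i plus the surplus R - sum_k r_0k
    # distributed proportionally to the bribes.
    total = np.sum(b)
    return r0.copy() if total == 0 else r0 + (R - np.sum(r0)) * b / total

b = np.array([10.0, 20.0, 10.0])
r0 = np.array([10.0, 10.0, 10.0])
print(allocate_extortion(b))      # [25. 50. 25.]
print(allocate_capture(b, r0))    # [27.5 45.  27.5]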

Given the supervisor's controls, the agents play a differential game of N persons in normal form which results in Nash equilibria in open-loop strategies. Respectively, for each agent an optimal control problem is solved based on the Pontryagin maximum principle (Basar and Olsder, 2016). The Hamilton function for the i-th agent takes the form

H_i(x(t), b_i(t), u_i(t), \lambda_i(t)) = e^{-\rho t}\big((r_i(t) - b_i - u_i)^{\alpha_i} + k s_i x^{\beta}\big) + \lambda_i\Big(A\Big(\sum_{j=1}^N u_j\Big)^{\gamma} - \mu x\Big), \quad (9)

where the functions r_i(t) are defined by the formulas (7) or (8).

Replacing the problem (2), (4)-(6) s. t. (7) or (8) by the equivalent problem of maximization of the Hamilton function (9) with the known constraints on the controls (Basar and Olsder, 2016) results in the system of equations (i = 1, 2, \ldots, N): - in the case of (7)

\frac{\partial H_i}{\partial b_i} = e^{-\rho t}\alpha_i\Big(\frac{R b_i}{\sum_{j=1}^N b_j} - b_i - u_i\Big)^{\alpha_i - 1}\Big(\frac{R\sum_{j \ne i} b_j}{(\sum_{j=1}^N b_j)^2} - 1\Big) = 0; \quad (10)

\frac{\partial H_i}{\partial u_i} = -e^{-\rho t}\alpha_i\Big(\frac{R b_i}{\sum_{j=1}^N b_j} - b_i - u_i\Big)^{\alpha_i - 1} + \lambda_i A\gamma\Big(\sum_{j=1}^N u_j\Big)^{\gamma - 1} = 0; \quad (11)

\frac{dx}{dt} = \frac{\partial H_i}{\partial \lambda_i} = -\mu x + A\Big(\sum_{j=1}^N u_j\Big)^{\gamma}; \quad x(0) = x_0; \quad (12)

\frac{d\lambda_i}{dt} = -\frac{\partial H_i}{\partial x} = \mu\lambda_i - e^{-\rho t} s_i \beta k x^{\beta - 1}; \quad \lambda_i(T) = 0; \quad (13)

in the case of (8) the equations (10), (11) take the form

\frac{\partial H_i}{\partial b_i} = e^{-\rho t}\alpha_i\Big(r_{0i} + \Big(R - \sum_{j=1}^N r_{0j}\Big)\frac{b_i}{\sum_{j=1}^N b_j} - b_i - u_i\Big)^{\alpha_i - 1}\Big(\frac{(R - \sum_{j=1}^N r_{0j})\sum_{j \ne i} b_j}{(\sum_{j=1}^N b_j)^2} - 1\Big) = 0; \quad (14)

\frac{\partial H_i}{\partial u_i} = -e^{-\rho t}\alpha_i\Big(r_{0i} + \Big(R - \sum_{j=1}^N r_{0j}\Big)\frac{b_i}{\sum_{j=1}^N b_j} - b_i - u_i\Big)^{\alpha_i - 1} + \lambda_i A\gamma\Big(\sum_{j=1}^N u_j\Big)^{\gamma - 1} = 0. \quad (15)

In (10)-(15), λ_i(t) (i = 1, 2, \ldots, N) are the conjugate functions. It is required to find the set of functions \{(b_i, u_i, \lambda_i)_{i=1}^N, x\} satisfying the system of equations (10)-(13) (i = 1, 2, \ldots, N) and maximizing the Hamilton function (9).

From (10), (11) or (14), (15) the optimal controls of agents that form the Nash equilibrium are determined by one of the formulas:

u_i^{NE} = 0; \quad b_i^{NE} = 0; \quad i = 1, 2, \ldots, N \quad (16)

u_i^{NE} = 0; \quad \sum_{j=1}^N b_j^{NE} = R; \quad i = 1, 2, \ldots, N \quad (17)

or as a solution of the system of equations:

- in the case (10), (11):

R\sum_{j \ne i} b_j = \Big(\sum_{j=1}^N b_j\Big)^2, \qquad -e^{-\rho t}\alpha_i\Big(\frac{R b_i}{\sum_{j=1}^N b_j} - b_i - u_i\Big)^{\alpha_i - 1} + \lambda_i A\gamma\Big(\sum_{j=1}^N u_j\Big)^{\gamma - 1} = 0; \quad (18)

- in the case (14), (15):

\Big(R - \sum_{j=1}^N r_{0j}\Big)\sum_{j \ne i} b_j = \Big(\sum_{j=1}^N b_j\Big)^2, \qquad -e^{-\rho t}\alpha_i\Big(r_{0i} + \Big(R - \sum_{j=1}^N r_{0j}\Big)\frac{b_i}{\sum_{j=1}^N b_j} - b_i - u_i\Big)^{\alpha_i - 1} + \lambda_i A\gamma\Big(\sum_{j=1}^N u_j\Big)^{\gamma - 1} = 0. \quad (19)

The systems of equations (18) or (19) are considered together with (12), (13) and form systems of (3N+1) non-linear equations with (3N+1) unknowns, which are solved by the Newton method. Thus, the Nash equilibria in the game of agents (2), (4), (5) s. t. (6) are either expressed by the formulas (16), (17) or are the solutions of the systems of equations (18), (19).
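For the symmetric case the first equation of (18) can be checked directly: with identical agents b_i = b it becomes R(N-1)b = (Nb)^2, so b = R(N-1)/N^2. The sketch below (Python, ours; scipy's hybrid Newton-type solver stands in for the Newton method mentioned above) verifies this root numerically.

import numpy as np
from scipy.optimize import fsolve

N, R = 3, 100.0

def stationarity(b):
    # First equation of (18) for each agent: R * sum_{j != i} b_j - (sum_j b_j)^2 = 0.
    S = np.sum(b)
    return np.array([R * (S - b[i]) - S**2 for i in range(N)])

b_sym = R * (N - 1) / N**2                          # closed-form symmetric root
b_num = fsolve(stationarity, x0=np.full(N, 10.0))   # Newton-type iterations
print(b_sym, b_num)                                 # both give b_i ≈ 22.22 for N = 3, R = 100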

Example 1. In the case (d. - days; c.u. - conventional financial units) N = 3; ρ = 0.01; T = 1095 d.; s_i = 0.2; α_i = 0.5; r_{0i} = 10 c.u.; i = 1, 2, 3; x_0 = 100 c.u.; R = 100 c.u.; k = 6; β = 0.6; γ = 0.5; A = 0.05; μ = 0.01 d.^{-1}, the Nash equilibrium and the payoffs are determined by the formulas:

- in the case of capture: b_i^{NE}(t) = 0 c.u.; u_i^{NE}(t) = 10 c.u.; J_i = 14560 c.u.; i = 1, 2, 3;

- in the case of extortion: b_i^{NE}(t) = u_i^{NE}(t) = 0 c.u. (i = 1, 2, 3) and J_i = 12400 c.u.; i = 1, 2, 3.

The amount of the social good decreases in time both in the case of extortion (x(T) = 44 c.u.) and in the case of capture (x(T) = 76 c.u.).

Example 2. In the case of the input data from Example 1 with the agents' shares in the social good decreased tenfold (s_i = 0.02; i = 1, 2, 3), the optimal strategies of all agents change, and their payoffs decrease and coincide for capture and extortion:

u_i^{NE}(t) = 0 c.u. (i = 1, 2, 3); b_1^{NE}(t) = R; b_2^{NE}(t) = b_3^{NE}(t) = 0 c.u.; J_1 = 16230 c.u.; J_2 = J_3 = 1240 c.u.

The final value of the social good decreases (x(T) = 27 c.u.) in comparison with Example 1.

Example 3. In the case of the input data from Example 1 with the coefficient of the social good income function decreased to k = 0.5, the optimal strategies of the agents for capture and extortion coincide again. The payoffs of all agents decrease, and the final amount of the social good increases (x(T) = 73 c.u.): b_i^{NE}(t) = u_i^{NE}(t) = R/3 and J_i = 11194 c.u.; i = 1, 2, 3.

Example 4. In the case of the input data from Example 1 and α_1 = 2; α_2 = 3; α_3 = 1.5 we receive:

- for the capture: u_1^{NE}(t) = 22 c.u.; b_1^{NE}(t) = 17 c.u.; u_2^{NE}(t) = 32 c.u.; b_2^{NE}(t) = 23 c.u.; u_3^{NE}(t) = 14 c.u.; b_3^{NE}(t) = 10 c.u.; J_1 = 25346 c.u.; J_2 = 34769 c.u.; J_3 = 19171 c.u.;

- for the extortion: u_1^{NE}(t) = 12 c.u.; b_1^{NE}(t) = 22 c.u.; u_2^{NE}(t) = 17 c.u.; b_2^{NE}(t) = 28 c.u.; u_3^{NE}(t) = 8 c.u.; b_3^{NE}(t) = 13 c.u.; J_1 = 21621 c.u.; J_2 = 32540 c.u.; J_3 = 17323 c.u.

The final value of the social good increases in comparison with Example 1: in the case of extortion x(T) = 78 c.u., in the case of capture x(T) = 92 c.u.

4. The Stackelberg equilibrium

From the point of view of the supervisor, the model (1)-(6) represents an inverse differential Stackelberg game with N agents in open-loop strategies. Now the allocation functions r_i(t), which depend on the agents' bribes, are determined by the solution of the game. The following algorithm of building the Stackelberg equilibrium is proposed.

1. The supervisor's strategy of punishment of the agents r^P(t), applied when they refuse to cooperate with her, is built as follows:

r^P(t) = \{r_i^P(t)\}_{i=1}^N: \quad r_i^P(t) = \arg\min_{r_i \ge 0;\ \sum_{j=1}^N r_j = R} J_i(b_i^{NE}(t), u_i^{NE}(t), r_i(t), x(t)).

Here r_i^P(t) = 0. Assume that \{b_i^{NE}(t), u_i^{NE}(t)\}_{i=1}^N is a Nash equilibrium in the game of agents when r_i^P(t) = 0, i = 1, 2, \ldots, N; then b_i^{NE}(t) = 0, u_i^{NE}(t) = 0, i = 1, 2, \ldots, N.

Then the guaranteed payoffs of the agents are equal to (i = 1, 2, \ldots, N):

L_i = \min_{r_i \ge 0;\ \sum_{j=1}^N r_j = R}\ \max_{0 \le u_i + b_i \le r_i} J_i(b_i^{NE}(t), u_i^{NE}(t), r_i^P(t), x(t)) = \int_0^T x_0^{\beta} s_i k e^{-(\rho + \beta\mu)t}\,dt.
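The integral admits a short worked evaluation under the reconstruction above: the punishment strategy r_i^P = 0 forces b_i = u_i = 0 by (4), so f(0) = 0 and (5) gives x(t) = x_0 e^{-\mu t}; assuming the shares s_i constant in time, we obtain in closed form

L_i = \int_0^T e^{-\rho t} s_i k \big(x_0 e^{-\mu t}\big)^{\beta}\,dt = \frac{s_i k x_0^{\beta}}{\rho + \beta\mu}\Big(1 - e^{-(\rho + \beta\mu)T}\Big).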

2. The optimal control problem (1), (3)-(6) is solved with the conditions

L_i = \int_0^T x_0^{\beta} s_i k e^{-(\rho + \beta\mu)t}\,dt \le J_i(b_i(t), u_i(t), r_i(t), x(t)); \quad i = 1, 2, \ldots, N,

which make the strategy of reward more profitable for the agents than the strategy of punishment.

The maximum in (1) is sought simultaneously over 3N functions (r_i(t), u_i(t), b_i(t)); i = 1, \ldots, N. Denote the solution of the respective optimal control problem by \{r_i^R(t), b_i^R(t), u_i^R(t)\}_{i=1}^N, where r_i^R(t) is the strategy of reward of the i-th agent when he chooses b_i^R(t) and u_i^R(t).

3. The supervisor reports to each agent the control feedback strategy:

r_i(t) = r_i^R(t) if b_i(t) = b_i^R(t) and u_i(t) = u_i^R(t) for all t \in [0; T]; otherwise r_i(t) = r_i^P(t).

Then the Stackelberg equilibrium takes the form \{r_i^R(t), b_i^R(t), u_i^R(t)\}_{i=1}^N.

Step 2 of the algorithm is implemented numerically by means of discretization (Ugol'nitskii and Usov, 2013) and the method of qualitatively representative scenarios (Ougolnitsky and Usov, 2018).

It is supposed that the supervisor and the agents cannot change their strategies at an arbitrary moment of time due to a natural inertia. The strategies remain constant during some sufficiently long periods of time, so that

A(t) = \begin{cases} a_1, & 0 \le t < t_1, \\ a_2, & t_1 \le t < t_2, \\ \ldots \\ a_K, & t_{K-1} \le t \le T, \end{cases} \quad (20)

where the function A(t) represents the strategies of the agents (b_i(t), u_i(t), i = 1, 2, \ldots, N) and the supervisor (r_i(t); i = 1, 2, \ldots, N); a_j = const; t_j = j\Delta t; \Delta t = T/K; j = 1, 2, \ldots, K, and K stands for the number of the intervals of constancy of the strategies.
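A piecewise-constant strategy of the form (20) is easy to represent programmatically; the minimal sketch below (Python, names ours) returns a_j on the j-th interval of constancy.

import numpy as np

def piecewise_constant(a, T):
    # Strategy A(t) of (20): the value a[j] on [t_j, t_{j+1}), t_j = j * T / K.
    a = np.asarray(a, dtype=float)
    K = len(a)
    def A(t):
        j = min(int(t * K / T), K - 1)   # index of the interval of constancy
        return a[j]
    return A

A = piecewise_constant([10.0, 20.0, 5.0], T=1095.0)   # K = 3 intervals
print(A(0.0), A(500.0), A(1095.0))                    # 10.0 20.0 5.0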

bT>" the functional (1) transforms into a payoff function

Jo({rj}Nj(Ki), {bij j,*(•)) = £ fh e-pisc(t)k(t)x^(t)) dt +

j=i ^ j-l

Y^ XI bije-pt[1 - z - Mz]dt ^ max (21)

j=i V¿=1 ^j-1 J

- the functional (2) transforms into the payoff functions (i = 1, 2, \ldots, N):

J_i(\{r_{ij}\}_{j=1}^K, \{b_{ij}\}_{j=1}^K, \{u_{ij}\}_{j=1}^K, x(\cdot)) = \frac{1}{\rho}\sum_{j=1}^K (r_{ij} - b_{ij} - u_{ij})^{\alpha_i}\big(e^{-\rho t_{j-1}} - e^{-\rho t_j}\big) + \sum_{j=1}^K \int_{t_{j-1}}^{t_j} e^{-\rho t} s_i(t) k x^{\beta}(t)\,dt \to \max \quad (22)

- the constraints (3) take the form

r_{ij} \ge 0; \quad \sum_{i=1}^N r_{ij} = R; \quad j = 1, 2, \ldots, K; \quad i = 1, 2, \ldots, N; \quad (23)

- the constraints (4) (i = 1, 2, \ldots, N) take the form

0 \le b_{ij} + u_{ij} \le r_{ij}; \quad j = 1, 2, \ldots, K. \quad (24)

The equation of system dynamics (5) does not change.

Thus, the problem (1)-(6) takes the form (5), (6), (21)-(24). In more detail, the supervisor's optimal control problem (1), (3)-(6) that is solved at step 2 of the algorithm is reduced to the problem of maximization of the objective function (21) over 3KN variables \{r_{ij}\}_{i,j=1}^{N,K}, \{b_{ij}\}_{i,j=1}^{N,K}, \{u_{ij}\}_{i,j=1}^{N,K} with the constraints (23), (24) and

L_i = \int_0^T x_0^{\beta} s_i k e^{-(\rho + \beta\mu)t}\,dt \le \frac{1}{\rho}\sum_{j=1}^K (r_{ij} - b_{ij} - u_{ij})^{\alpha_i}\big(e^{-\rho t_{j-1}} - e^{-\rho t_j}\big) + \sum_{j=1}^K \int_{t_{j-1}}^{t_j} e^{-\rho t} s_i(t) k x^{\beta}(t)\,dt; \quad i = 1, 2, \ldots, N. \quad (25)

This problem is not tractable analytically for functions of general type, so the method of qualitatively representative scenarios (QRS) of simulation modeling is used (Ougolnitsky and Usov, 2018). The idea of the QRS method is that in the majority of applied dynamical models of complex real-world social-economic systems (in differential games as well), it is possible to choose a very small number of scenarios that give a satisfactorily complete picture of the qualitatively different ways of the system development.

In the considered case the QRS set is the Cartesian product of N \times K sets:

QRS = QRS_{11} \times QRS_{12} \times \cdots \times QRS_{1K} \times QRS_{21} \times \cdots \times QRS_{NK}. \quad (26)

The set QRS_{ij} is a set of qualitatively representative scenarios for the j-th moment of time of the supervisor in relation to the i-th agent and of the i-th agent itself. This set contains the triples of strategies (r_{ij}^{QRS}, b_{ij}^{QRS}, u_{ij}^{QRS}).

Assume that all sets QRS_{ij} consist of 15 elements:

QRS_{ij} = \{(r_{ij}^{QRS}, b_{ij}^{QRS}, u_{ij}^{QRS})\}: \quad r_{ij}^{QRS} \in \Big\{0;\ \frac{1}{2}\Big(R - \sum_{k=1}^{i-1} r_{kj}\Big);\ R - \sum_{k=1}^{i-1} r_{kj}\Big\}, \quad (27)

with the values b_{ij}^{QRS}, u_{ij}^{QRS} chosen similarly subject to (24); j = 1, 2, \ldots, K; i = 1, 2, \ldots, N. Then the cardinality of the QRS set is equal to m = \prod_{i=1}^N \prod_{j=1}^K |QRS_{ij}| = 15^{KN}. After identification of the model the QRS set is checked for sufficiency and redundancy (Ougolnitsky and Usov, 2018), and it is extended or reduced as necessary.

Thus, the numerical algorithm of solution of the problem from the step 2 of the above algorithm consists in the following actions.

1. All model functions and parameters N, K, R, T, x_0, k, ρ, M, z, β, γ, μ, A, s_0, α_i, s_i (i = 1, 2, \ldots, N) are given.

2. The initial QRS set (26), (27) is checked for sufficiency and redundancy (Ougolnitsky and Usov, 2018) and is extended or reduced as necessary.

3. The next (l-th) strategy \{r_{ij}^{(l)}, b_{ij}^{(l)}, u_{ij}^{(l)}\}_{i,j=1}^{N,K} from the QRS set is fixed; initially l = 1.

4. The fixed l-th strategy from the QRS set is substituted into (5). The values of the variable x(t) are calculated numerically (for example, by an explicit method of finite differences). Then the value of the supervisor's objective function (21) is calculated. With consideration of (25), the greater value of the function (21) and the respective set of maximizing controls are saved.

5. If not all qualitatively representative strategies have been scanned, then go to the next strategy (l := l + 1) and return to step 3 of the algorithm.

6. Scanning all qualitatively representative strategies (initially there are 15^{NK} of them) determines the strategy maximizing the objective function (21) s. t. (23)-(25), i.e.

\{r_{ij}^{QRS}, b_{ij}^{QRS}, u_{ij}^{QRS}\}_{i,j=1}^{N,K} = \arg\max J_0.

The found maximizing strategy is the strategy of reward; a sketch of this scan is given below.
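Steps 3-6 amount to a brute-force scan of the QRS set, as in the Python skeleton below (ours). The evaluators payoff_J0 and payoff_Ji are hypothetical placeholders assumed to integrate (5) by finite differences and return the values of (21) and (22) for a fixed scenario; they are not the authors' API.

import numpy as np

def supervisor_scan(scenarios, payoff_J0, payoff_Ji, guaranteed_L):
    # Scan all QRS scenarios; keep the one maximizing (21) among those
    # satisfying the participation conditions (25).
    best, best_value = None, -np.inf
    for scenario in scenarios:                              # step 3: fix the l-th strategy
        J0 = payoff_J0(scenario)                            # step 4: evaluate (21)
        Ji = payoff_Ji(scenario)                            # agents' payoffs (22)
        if all(J >= L for J, L in zip(Ji, guaranteed_L)):   # check (25)
            if J0 > best_value:                             # save the greater value
                best, best_value = scenario, J0
    return best, best_value                                 # step 6: the strategy of reward

# Toy usage with dummy payoff evaluators for two scenarios of one agent:
scen = [((50.0, 50.0, 0.0),), ((50.0, 0.0, 50.0),)]
print(supervisor_scan(scen, payoff_J0=lambda s: s[0][1],
                      payoff_Ji=lambda s: [s[0][2]], guaranteed_L=[0.0]))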

5. The numerical results

The simulations for the considered game were conducted on a computer with an A10-series microprocessor, Intel Pentium G4620, with 4 Gb of operative memory, in the C# object-oriented programming language, according to the described algorithm. The mean time of the simulations for the input data below was equal to 3 years. The main part of the time was consumed by the numerical solution of the equation (5) by the explicit method of finite differences for each qualitatively representative strategy.

Example 5. In the case of N = 2; ρ = 0.01; T = 1095 d.; s_i = 0.2; α_i = 0.5; i = 1, 2; s_0 = 0.6; k = 6; R = x_0 = 100 c.u.; β = 0.6; γ = 0.5; A = 0.05; M = 1; z = 0.05; μ = 0.01 d.^{-1} we receive: L_1 = L_2 = 12400 c.u.; r_i(T/10) = r_i(T/2) = r_i(T) = 50 c.u.; b_i^{NE}(T/10) = b_i^{NE}(T/2) = b_i^{NE}(T) = 50 c.u.; u_i^{NE}(T/10) = u_i^{NE}(T/2) = u_i^{NE}(T) = 0; i = 1, 2; J_0 = 24877 c.u.; J_1 = J_2 = 15238 c.u.

The value of the social good increases in time (x^{NE}(T) = 136 c.u.). In the considered information structure of an inverse Stackelberg game, the supervisor compels the agents to pay bribes to her and to spend all their resources on the production of the social good. Thus, the corruption has the type of extortion; it is profitable to the supervisor and not profitable to the agents.

Example 6. In the case of the input data from Example 5 and growth of the agents' gains from their private activities (α_i = 3; i = 1, 2), the strategies of the agents and their payoffs do not change. It would be more profitable for the agents to use their resources for private purposes, but the supervisor compels them not to change their strategies in comparison with Example 5.


Example 7. In the case of the input data from Example 5 with the agents' shares in the social good decreased tenfold (s_i = 0.02; i = 1, 2), the optimal strategies of the agents again do not change. The supervisor's payoff is the same, while the agents' payoffs decrease abruptly: J_1 = J_2 = 1523 c.u.

Example 8. In the case of the input data from Example 5 with the probability of catching the bribe-taker increased up to 30% (z = 0.3) and a modest penalty (M = 1), the corruption persists but the supervisor's payoff decreases: J_0 = 18238 c.u.

Example 9. A stronger counteraction of the corruption (increasing the probability for the bribe-taker to be caught together with the penalty coefficient M) leads to the elimination of corruption. Now bribe-taking is not profitable for the supervisor. The agents assign all their resources to the production of the social good. The agents' optimal strategies and their payoffs in the case z = 0.3 are given by the formulas: r_i(T/10) = r_i(T/2) = r_i(T) = 50 c.u.; b_i^{NE}(T/10) = b_i^{NE}(T/2) = b_i^{NE}(T) = 0 c.u.; u_i^{NE}(T/10) = u_i^{NE}(T/2) = u_i^{NE}(T) = 50 c.u.; i = 1, 2; J_0 = 21324 c.u.; J_1 = J_2 = 20659 c.u.

6. Conclusion

The information structure of the inverse differential Stackelberg game gives priority to the supervisor, while the interests of the agents are in fact not considered. In this case, for a successful struggle with corruption the principal should provide economic conditions which make bribe-taking unprofitable for the supervisor. The numerical examples show that a certain increase of the probability for the bribe-taker to be caught, together with the respective penalty, is sufficient for this purpose (Examples 8, 9). If the corruption is absent, then the supervisor's payoff decreases (by approximately 30% for the used input data), and the agents' payoffs increase.

References


Antonenko, A. V., G. A. Ugol'nitskii and A. B. Usov (2013). Static models of struggle with corruption in hierarchical management systems. Journal of Computer and Systems Sciences International, 52(4), 664-675.

Basar, T. and G. J. Olsder (1999). Dynamic Noncooperative Game Theory. SIAM: Philadelphia.

Blackburn, K., N. Bose and M. E. Haque (2006). The incidence and persistence of corruption in economic development. Journal of Economic Dynamics and Control, 30, 2447-2467.

Blackburn, K. and J. Powell (2011). Corruption, inflation and growth. Economics Letters, 113, 225-227.

Dockner, E., S. Jorgensen, N. V. Long and G. Sorger (2000). Differential Games in Economics and Management Science. Cambridge University Press.

Gorbaneva, O. I., G. A. Ougolnitsky and A. B. Usov (2016). Modeling of Corruption in Hierarchical Organizations. N.Y.: Nova Science Publishers.

Grass, D., J. Caulkins, G. Feichtinger et al. (2008). Optimal Control of Nonlinear Processes: With Applications in Drugs, Corruption, and Terror. Springer-Verlag: Berlin-Heidelberg.

Kolokoltsov, V. N. and O. A. Malafeev (2017). Mean-field-game model of corruption. Dynamic Games and Applications, 7, 34-47.

Levin, M. E. and G. A. Satarov (2013). Russian Corruption. In: The Oxford Handbook of the Russian Economy. N.Y.: Oxford University Press, 286-309.

Ougolnitsky, G. A. and A. B. Usov (2014). Modeling of corruption in three-level control systems. Control Sciences, 1, 53-62 (in Russian).

Ougolnitsky, G. A., and A. B. Usov (2015). Dynamic Hierarchical Two-Player Games in Open-Loop Strategies and Their Applications. Automation and Remote Control, 76(11), 2056-2069.

Ougolnitsky, G. A. and A. B. Usov (2018). Computer Simulations as a Solution Method for Differential Games. In: Computer Simulations: Advances in Research and Applications. Eds. M. D. Pfeffer and E. Bachmaier. N.Y.: Nova Science Publishers, 63-106.

Ougolnitsky, G. A. and A. B. Usov (2019). Dynamic models of concordance of private and social interests with economic corruption. Journal of Computer and Systems Sciences International (to appear).

Ugol'nitskii, G. A. and A. B. Usov (2013). A study of differential models for hierarchical control systems via their discretization. Automation and Remote Control, 74(2), 252-263.

Ugol'nitskii, G. A., and A. B. Usov (2014). Dynamic models of struggle against corruption in hierarchical management systems of exploitation of biological resources. Journal of Computer and Systems Sciences International, 53(6), 939-947.
