For constructing the Markov chain, we used a modified system of project management, which is presented in the standard on project management and defines the scheme of interaction between project participants. A method is developed for transforming this scheme into a homogeneous Markov chain with discrete states and time. It is shown that the iterative solution of the system of equations of the Markov model makes it possible to build a "trajectory" of development of virtual or real project systems.
Keywords: project participants, communications, Markov chain, discrete states, project trajectory
UDC 005.8
doi: 10.15587/1729-4061.2017.97883
REPRESENTATION OF PROJECT SYSTEMS USING THE MARKOV CHAIN
V. Gogunskii, Doctor of Technical Sciences, Professor*
E-mail: [email protected]
O. Kolesnikov, PhD, Associate Professor*
E-mail: [email protected]
G. Oborska, PhD
Advertising agency "Formula uspecha", Velyka Arnautska str., 76-a, Odessa, Ukraine, 65045
E-mail: [email protected]
A. Moskaliuk, PhD*
E-mail: [email protected]
K. Kolesnikova, Doctor of Technical Sciences, Associate Professor**
E-mail: [email protected]
S. Harelik, Senior Lecturer
Interbranch college for advanced training and retraining of personnel management and personnel development, Belarusian National Technical University, Minina str., 4, Minsk, Republic of Belarus, 220014
E-mail: [email protected]
D. Lukianov, PhD, Associate Professor
Department of General and Clinical Psychology, Belarusian State University, Nezavisimosti ave., 4, Minsk, Belarus, 220030
E-mail: [email protected]
*Department of Systems Management Life Safety***
**Department of Information technology in mechanical engineering***
***Odessa National Polytechnic University, Shevchenko ave., 1, Odessa, Ukraine, 65044
1. Introduction
Scientific research into the field of project management is focused on studying the phenomena and essence, relations and patterns in the management of projects/programs/ portfolios (PPP) over their life cycles. Such a project activity is carried out within the social or organizational-technical systems with the attributes of uniqueness. Objective constraints in project systems are related to planning the resources, establishing the duration of projects and to the requirements for a specified level of quality in the outcomes of project activity [1].
Useful results and value in projects are achieved through the creation of products, which is inextricably linked to the practice of project implementation and leads to the formation of rational models, techniques, methods and mechanisms for project management [2].
The relevance of research is predetermined by two components. First, communication processes typically represent a purposeful informational influence on the state of project systems. That is why the transformation of projects towards proactive management through the use of models that reflect essential attributes of the examined system is relevant. Second, resolving the contradictions between the
needs of practice for information support of projects and the lack of acceptable models is possible by employing the Markov chains.
2. Literature review and problem statement
Existing problems in the management of PPP are usually solved by using the best practice examples. In this case, in order to improve the social or organizational-technical systems, already-known solutions are proposed [3]. However, copying even the accepted methods often creates a "competency trap", which discourages making changes to those systems that still serve their purpose [4]. In other words, new solutions are discarded in favor of conventional methods, which turns project search into operational activity [5]. That is why, in order to develop the management of organizations and enterprises, it is necessary to generalize the accumulated knowledge and devise theoretical provisions for project management [6]. Special attention should be paid to applying the methods of mathematical modeling of project management processes in interaction with the surrounding environment to determine trajectories in the development of PPP [7]. Studying the properties of project systems with the help of their models will make it possible to escape the "competency trap", which is a necessary condition for constructing a successful trajectory for the execution of PPP [8].
A problem of project management that remains unsolved is the lack of standard models to represent the organizational-technical systems. At the same time, however, there are graphic structures of interaction between project participants, for example, in standard [13]. These structures are similar to the directed graphs in the Markov chains. That is why it is proposed to confirm the hypothesis on the ability to represent project systems by using a Markov chain.
3. The aim and tasks of the study

The aim of the present study is the generalization and development of applied aspects of using the Markov chains for mapping and modeling of weakly structured project management systems.

To achieve the set aim, the following tasks were formulated:
- to develop a method for the transformation of a general structure of the organizational-technical system of project management into a Markov chain;
- to devise a method for the iterative solution of a system of equations that describes the Markov model;
- to examine practical aspects of project implementation by using the developed Markov model, in particular to study the influence of the competence level in a project team on the effectiveness of projects.

4. Method for the transformation of a general structure of project management into a Markov chain

A set of factors in the weakly structured project systems creates a complicated "spider web" of relations between states that vary over time depending on the system structure and the factors of internal and external environment [9]. The development of projects in such a system can often be represented only in the form of qualitative models [10]. However, the use of Markov chains makes it possible to pass over to quantitative assessments of the progress and outcomes of projects [11]. When modeling the complex systems of project management, of key importance is the representation of the structure of interaction between project processes by using a directed weighted graph, in which [12]:
- vertices match base factors (states) of the project;
- direct links between states represent causal chains, along which the impact of a certain factor on other factors is exerted.

We shall accept as the base structure of project states (Fig. 1) the scheme of interaction of project participants, which is shown in standard [13].
Fig. 1. Scheme of interaction between project participants [13]: A, B, ... G — state identifiers
A Markov chain is a stochastic process that satisfies the Markov property and takes a finite or countable number of values (states) [14]. There are Markov chains with discrete and continuous time; we consider the discrete case in the present study. The scheme of interaction between project participants, presented in standard [13] (Fig. 1), can be transformed into a Markov chain (Fig. 2).
We shall denote by Si {i=1, 2, ..., 7} the possible states of the system that exist in a project: S1=A; S2=B; S3=C; S4=D; S5=E; S6=F; S7=G (Fig. 1, 2). A sequence of discrete random variables {Sk} is called a Markov chain with discrete time if

$$P(S_{k+1}=i_{k+1}\mid S_k=i_k,\ S_{k-1}=i_{k-1},\ \ldots,\ S_0=i_0)=P(S_{k+1}=i_{k+1}\mid S_k=i_k).$$

The next state of the Markov chain thus depends only on the current state and is independent of all the previous states. The region of values of the random variables {Sk} is the space of states of the chain, and the number k is the number of the step.

Fig. 2. Marked graph of the Markov chain states: S1=A — Customer; S2=B — project curator; S3=C — project manager; S4=D — base plan; S5=E — project team; S6=F — project; S7=G — project product

Vertices of the transition graph correspond to the Markov chain states, while a directed edge runs from vertex i {i=1, 2, ..., m} to vertex j {j=1, 2, ..., m} only in the case when the probability of transition πij between the corresponding states i→j is not equal to zero. These transition probabilities are indicated at the corresponding edges of the marked graph (Fig. 2). The topology of the directed graph can be represented using the adjacency matrix:
$$\|c_{ij}\| =
\begin{pmatrix}
c_{1.1} & 0 & 0 & c_{1.4} & 0 & c_{1.6} & 0\\
0 & c_{2.2} & c_{2.3} & 0 & 0 & c_{2.6} & 0\\
c_{3.1} & c_{3.2} & c_{3.3} & c_{3.4} & c_{3.5} & c_{3.6} & 0\\
0 & 0 & 0 & c_{4.4} & c_{4.5} & 0 & 0\\
0 & 0 & c_{5.3} & 0 & c_{5.5} & c_{5.6} & c_{5.7}\\
0 & 0 & 0 & 0 & 0 & c_{6.6} & c_{6.7}\\
c_{7.1} & 0 & 0 & 0 & 0 & 0 & c_{7.7}
\end{pmatrix} =
\begin{pmatrix}
1 & 0 & 0 & 1 & 0 & 1 & 0\\
0 & 1 & 1 & 0 & 0 & 1 & 0\\
1 & 1 & 1 & 1 & 1 & 1 & 0\\
0 & 0 & 0 & 1 & 1 & 0 & 0\\
0 & 0 & 1 & 0 & 1 & 1 & 1\\
0 & 0 & 0 & 0 & 0 & 1 & 1\\
1 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.$$
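For readers who wish to experiment with this structure, a minimal sketch in Python with NumPy (an illustrative tool choice, not one used in the paper) encodes the 0/1 adjacency matrix above with the state labels of Fig. 2 and lists the directed edges of the marked graph:

```python
import numpy as np

# States of the project system (Fig. 2)
STATES = ["A: Customer", "B: project curator", "C: project manager",
          "D: base plan", "E: project team", "F: project", "G: project product"]

# Adjacency matrix ||c_ij||: c_ij = 1 if a direct transition i -> j exists
C = np.array([
    [1, 0, 0, 1, 0, 1, 0],   # S1 = A
    [0, 1, 1, 0, 0, 1, 0],   # S2 = B
    [1, 1, 1, 1, 1, 1, 0],   # S3 = C
    [0, 0, 0, 1, 1, 0, 0],   # S4 = D
    [0, 0, 1, 0, 1, 1, 1],   # S5 = E
    [0, 0, 0, 0, 0, 1, 1],   # S6 = F
    [1, 0, 0, 0, 0, 0, 1],   # S7 = G
])

# List the directed edges of the marked graph
for i, j in zip(*np.nonzero(C)):
    print(f"{STATES[i]} -> {STATES[j]}")
```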
Each element cij of the adjacency matrix that is different from zero and equal to 1 indicates a direct connection between states i→j. The values of elements on the main diagonal cii=1 point to the existence of a transition loop, when the system remains in the same state.
As is known, all possible transitions from some state into other states constitute a complete group of events - one of the transitions must be implemented [10]. This allows us to introduce a norm for each row of matrix ||cij||, replacing the values cij=1 with transition probabilities πij>0 that satisfy the condition, valid for a complete group of events:
$$\sum_{j=1}^{m}\pi_{ij}=1,\qquad i=1,2,\ldots,m,$$
where m=7 is the number of possible states of the system.
The matrix of transition probabilities will then be written as follows:

$$\|\pi_{ij}\| =
\begin{pmatrix}
\pi_{1.1} & 0 & 0 & \pi_{1.4} & 0 & \pi_{1.6} & 0\\
0 & \pi_{2.2} & \pi_{2.3} & 0 & 0 & \pi_{2.6} & 0\\
\pi_{3.1} & \pi_{3.2} & \pi_{3.3} & \pi_{3.4} & \pi_{3.5} & \pi_{3.6} & 0\\
0 & 0 & 0 & \pi_{4.4} & \pi_{4.5} & 0 & 0\\
0 & 0 & \pi_{5.3} & 0 & \pi_{5.5} & \pi_{5.6} & \pi_{5.7}\\
0 & 0 & 0 & 0 & 0 & \pi_{6.6} & \pi_{6.7}\\
\pi_{7.1} & 0 & 0 & 0 & 0 & 0 & \pi_{7.7}
\end{pmatrix}.$$
Elements of this stochastic matrix are the transition probabilities between states i→j in one step; in this case, πij ≥ 0 for all i, j.
The sum of probabilities of all states pi(k) at each step k equals unity:

$$\sum_{i=1}^{m}p_i(k)=1,$$

where pi(k) is the probability of the i-th state at step k.
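The row normalization can be sketched as follows; the uniform weighting of admissible transitions is purely an assumption for illustration, since the actual values of πij are determined for a specific project (see matrix (7) below):

```python
import numpy as np

# Adjacency matrix ||c_ij|| of the marked graph (Fig. 2)
C = np.array([
    [1, 0, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 1, 0, 0],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 0, 0, 1, 1],
    [1, 0, 0, 0, 0, 0, 1],
], dtype=float)

# Illustrative assumption: spread each row's probability uniformly over
# its admissible transitions, so that sum_j pi_ij = 1 for every state i.
P = C / C.sum(axis=1, keepdims=True)

assert np.allclose(P.sum(axis=1), 1.0)   # complete group of events per row
print(np.round(P, 3))
```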
5. Solving a system of equations in the Markov chain
In the Markov chain, with a change in time (step k), the distribution of state probabilities {p1(k), p2(k), ..., pm(k)} changes. In this case, computing the distribution of probabilities at each subsequent (k+1)-th step is performed using the known formula of total probability [7]:
$$\begin{pmatrix} p_1(k+1)\\ p_2(k+1)\\ p_3(k+1)\\ p_4(k+1)\\ p_5(k+1)\\ p_6(k+1)\\ p_7(k+1) \end{pmatrix}^{T} =
\begin{pmatrix} p_1(k)\\ p_2(k)\\ p_3(k)\\ p_4(k)\\ p_5(k)\\ p_6(k)\\ p_7(k) \end{pmatrix}^{T} \cdot
\begin{pmatrix}
\pi_{1.1} & 0 & 0 & \pi_{1.4} & 0 & \pi_{1.6} & 0\\
0 & \pi_{2.2} & \pi_{2.3} & 0 & 0 & \pi_{2.6} & 0\\
\pi_{3.1} & \pi_{3.2} & \pi_{3.3} & \pi_{3.4} & \pi_{3.5} & \pi_{3.6} & 0\\
0 & 0 & 0 & \pi_{4.4} & \pi_{4.5} & 0 & 0\\
0 & 0 & \pi_{5.3} & 0 & \pi_{5.5} & \pi_{5.6} & \pi_{5.7}\\
0 & 0 & 0 & 0 & 0 & \pi_{6.6} & \pi_{6.7}\\
\pi_{7.1} & 0 & 0 & 0 & 0 & 0 & \pi_{7.7}
\end{pmatrix}. \tag{1}$$
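Formula (1) amounts to one vector-matrix product per step. A minimal sketch, assuming for illustration an initial distribution p(0) concentrated in the Customer state S1 and a placeholder stochastic matrix (the paper specifies neither p(0) nor, at this point, numerical values of πij):

```python
import numpy as np

def next_distribution(p_k: np.ndarray, P: np.ndarray) -> np.ndarray:
    """One application of formula (1): p(k+1) = p(k)^T * ||pi_ij||."""
    return p_k @ P

# Illustrative data: all probability mass initially in S1 (Customer)
p0 = np.array([1.0, 0, 0, 0, 0, 0, 0])
P = np.full((7, 7), 1.0 / 7.0)           # placeholder row-stochastic matrix

p1 = next_distribution(p0, P)
assert np.isclose(p1.sum(), 1.0)         # the distribution stays normalized
print(np.round(p1, 3))
```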
Therefore, if a matrix of transition probabilities ||πij|| is assigned and the initial distribution of state probabilities {p1(k), p2(k), ..., pm(k)} at step k is known, then the new distribution of state probabilities ||pi(k+1); i=1, 2, ..., m|| can be found from (1); applying (1) repeatedly yields ||pi(k+2); i=1, 2, ..., 7|| and so on. In most of the publications that deal with the use of Markov chains, researchers stop at this point because the algorithm for practical calculation has been obtained [8]. However, the solution presented can be transformed into a somewhat different form. For this purpose, we shall employ the induction method in the analysis of expressions to compute the distribution of state probabilities in the 1st and 2nd steps.
At the 1st step:

$$\|p_i(1)\|^{T}=\|p_i(0)\|^{T}\cdot\|\pi_{ij}\|. \tag{2}$$

At the 2nd step:

$$\|p_i(2)\|^{T}=\|p_i(1)\|^{T}\cdot\|\pi_{ij}\|. \tag{3}$$

After substituting (2) into (3), we shall obtain:

$$\|p_i(2)\|^{T}=\|p_i(0)\|^{T}\cdot\|\pi_{ij}\|\cdot\|\pi_{ij}\| \tag{4}$$

or

$$\|p_i(2)\|^{T}=\|p_i(0)\|^{T}\cdot\|\pi_{ij}\|^{2}. \tag{5}$$

That is why it is possible to write for any step k:

$$\|p_i(k)\|^{T}=\|p_i(0)\|^{T}\cdot\|\pi_{ij}\|^{k}, \tag{6}$$

where ||pi(k); i=1, 2, ..., 7|| is the distribution of state probabilities at step k; πij are the elements of the transition probabilities matrix; T is the index of transposition.

It follows from (6) that the distribution of state probabilities {p1(k), p2(k), ..., pm(k)} at step k depends only on the initial probability distribution at k=0 and on the elements πij of the transition probabilities matrix raised to the k-th power, ||πij||^k. Thus, the Markov chain is assigned when these parameters of the system are defined.

Distribution of state probabilities {p1(k), p2(k), ..., pm(k)} in the homogeneous Markov chain with discrete time characterizes the phenomenological mapping of the system - the way in which the object manifests itself.

Depending on the structure and values of transition probabilities ||πij||, the Markov chains can possess the following properties: irreversibility, reversibility, ergodicity, absorption [15].

In some cases, despite the randomness of the process, there is a possibility to control to a certain extent the distribution laws or parameters of transition probabilities [15]. It is obvious that when using the controlled Markov chains, a decision-making process becomes particularly efficient.
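Relation (6) can be checked numerically: iterating formula (1) k times gives the same distribution as multiplying p(0) by the k-th power of the matrix. A short sketch under the same illustrative assumptions as above (initial state S1, row-normalized adjacency matrix standing in for project-specific πij):

```python
import numpy as np

# Row-normalized adjacency matrix of Fig. 2 as a stand-in for ||pi_ij||
C = np.array([
    [1, 0, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 1, 0, 0],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 0, 0, 1, 1],
    [1, 0, 0, 0, 0, 0, 1],
], dtype=float)
P = C / C.sum(axis=1, keepdims=True)

p0 = np.array([1.0, 0, 0, 0, 0, 0, 0])        # illustrative p(0): start in S1

k = 20
p_iter = p0.copy()
for _ in range(k):                            # repeated application of (1)
    p_iter = p_iter @ P

p_power = p0 @ np.linalg.matrix_power(P, k)   # direct use of (6)
assert np.allclose(p_iter, p_power)
```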
6. Examining the impact of competence level in a project team on the trajectory of projects
As is known, a model is a virtual or real object, which can replace the original when exploring its properties. Let us replace a project system with its representation - the Markov model developed. We shall examine on this model an impact of the competence level in a project team on the project efficiency [16].
Results of change in the system state probabilities by steps for the base variant of the set of transition probabilities are shown in Fig. 3 under conditions: π5.3=0.5; π5.5=0.33; π5.6=0.15; π5.7=0.02.
Fig. 3. Change in the probabilities of states of the system for
the base data set: pi(k) — probabilities of states: p1(k) — Customer; p2(k) — project curator; p3(k) — project manager; p4(k) — base plan; p5(k) — team; p6(k) — project; p7(k) — project product; k — project steps
Since we consider here a discrete variant of the Markov chain, the calculated data are represented discretely, by steps, by the coordinates of the corresponding markers (Fig. 3, 4). In order to visualize the results, these markers are conditionally connected by a solid line.
Matrix of transition probabilities of the base variant of a project (Fig. 3):

$$\|\pi_{ij}\| =
\begin{pmatrix}
0.3 & 0 & 0 & 0.5 & 0 & 0.2 & 0\\
0 & 0.6 & 0.1 & 0 & 0 & 0.3 & 0\\
0.04 & 0.4 & 0.76 & 0.1 & 0.04 & 0.2 & 0\\
0 & 0 & 0 & 0.3 & 0.7 & 0 & 0\\
0 & 0 & 0.5 & 0 & 0.33 & 0.15 & 0.02\\
0 & 0 & 0 & 0 & 0 & 0.87 & 0.13\\
0.25 & 0 & 0 & 0 & 0 & 0 & 0.75
\end{pmatrix}. \tag{7}$$
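A sketch of the trajectory computation for this base variant with the values transcribed from matrix (7); the choice of p(0) and the defensive renormalization of rows (to absorb rounding in the printed entries) are illustrative assumptions, so the resulting probabilities should be read as approximate:

```python
import numpy as np

# Transition probabilities of the base variant, transcribed from matrix (7)
P_base = np.array([
    [0.30, 0,    0,    0.50, 0,    0.20, 0   ],
    [0,    0.60, 0.10, 0,    0,    0.30, 0   ],
    [0.04, 0.40, 0.76, 0.10, 0.04, 0.20, 0   ],
    [0,    0,    0,    0.30, 0.70, 0,    0   ],
    [0,    0,    0.50, 0,    0.33, 0.15, 0.02],
    [0,    0,    0,    0,    0,    0.87, 0.13],
    [0.25, 0,    0,    0,    0,    0,    0.75],
])
# Guard against rounding in the printed entries: force each row to sum to 1
P_base = P_base / P_base.sum(axis=1, keepdims=True)

p = np.array([1.0, 0, 0, 0, 0, 0, 0])    # assumed p(0): the project starts with the Customer, S1
for k in range(1, 21):                   # the quasi-stationary regime is reported at k = 20
    p = p @ P_base

labels = ["Customer", "curator", "manager", "base plan", "team", "project", "product"]
for name, prob in zip(labels, np.round(p, 2)):
    print(f"p({name}) at k = 20 ~ {prob}")
```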
Fig. 4. Change in the system state probabilities for the changed data set: pi(k) — probabilities of states: p1(k) — Customer; p2(k) — project curator; p3(k) — project manager; p4(k) — base plan; p5(k) — team; p6(k) — project; p7(k) — project product; k — project steps
Matrix of transition probabilities for the modified variant of the project (Fig. 4):
$$\|\pi_{ij}\| =
\begin{pmatrix}
0.3 & 0 & 0 & 0.5 & 0 & 0.2 & 0\\
0 & 0.6 & 0.1 & 0 & 0 & 0.3 & 0\\
0.04 & 0.4 & 0.76 & 0.1 & 0.04 & 0.2 & 0\\
0 & 0 & 0 & 0.3 & 0.7 & 0 & 0\\
0 & 0 & 0.1 & 0 & 0.33 & 0.20 & 0.02\\
0 & 0 & 0 & 0 & 0 & 0.87 & 0.13\\
0.25 & 0 & 0 & 0 & 0 & 0 & 0.75
\end{pmatrix}.$$
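Only the row for the project team state S5 differs from the base variant. A compact sketch of how the two variants can be compared under the same illustrative assumptions (initial state S1, rows renormalized to sum to unity):

```python
import numpy as np

def distribution_at(P, k=20):
    """Distribution of state probabilities after k steps, formula (6)."""
    P = P / P.sum(axis=1, keepdims=True)      # renormalize the transcribed rows
    p0 = np.array([1.0, 0, 0, 0, 0, 0, 0])    # assumed initial state S1 (Customer)
    return p0 @ np.linalg.matrix_power(P, k)

row_S5_base     = [0, 0, 0.5, 0, 0.33, 0.15, 0.02]   # base variant (Fig. 3)
row_S5_modified = [0, 0, 0.1, 0, 0.33, 0.20, 0.02]   # modified variant (Fig. 4)

P = np.array([
    [0.30, 0,    0,    0.50, 0,    0.20, 0   ],
    [0,    0.60, 0.10, 0,    0,    0.30, 0   ],
    [0.04, 0.40, 0.76, 0.10, 0.04, 0.20, 0   ],
    [0,    0,    0,    0.30, 0.70, 0,    0   ],
    row_S5_base,
    [0,    0,    0,    0,    0,    0.87, 0.13],
    [0.25, 0,    0,    0,    0,    0,    0.75],
])

print("base    :", np.round(distribution_at(P), 2))
P[4] = row_S5_modified                    # swap in the changed behaviour of the team state
print("modified:", np.round(distribution_at(P), 2))
```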
The base project in a quasi-stationary state at step k=20 is characterized by the following distribution of state probabilities: p1(20)=0.07; p2(20)=0.03; p3(20)=0.23; p4(20)=0.08; p5(20)=0.10; p6(20)=0.32; p7(20)=0.17. This means that at step 20, 32 % of the time resource is allocated to the execution of the project, the project manager utilizes 23 % of the same resource, while only 10 % of the total resource remains for the project team. The results obtained reveal that in the execution of the project there is a certain contradiction between the team and its leader, who apparently tries to fulfill all the work in the project by himself and does not trust his team.
To eliminate this phenomenon, it is necessary to change the team's work parameters, which should affect the values of the respective transition probabilities for the project manager and the team members [17]. Results shown in Fig. 4, obtained for the new initial conditions, demonstrate that changing only the terms of interaction within the project team is enough to make the progress and results of the project differ from the base variant.
Under the same conditions, in a quasi-stationary state at step k=20, the new system is characterized by the following distribution of state probabilities: p1(20)=0.08; p2(20)=0.01; p3(20)=0.07; p4(20)=0.07; p5(20)=0.16; p6(20)=0.40; p7(20)=0.22. This means that at step 20, 40 % of the time resource is allocated to the implementation of the project, the project manager utilizes only 7 % of the same resource, and the project team increases its share to 16 %. The results obtained indicate that the characteristics of work of the project team considerably affect the course of the project, which allowed us to eliminate the contradiction between the project team and its manager identified in the base project.
7. Discussion of results on the development of applied aspects of the implementation of Markov chains in project management
Generalization and development of the applied aspects of implementing the Markov chains to represent the systems of project management expands the possibilities to proactively manage projects.
We created a unified Markov model for the projects, which makes it possible to represent the probabilities of states of project participants by a complete group of incompatible events, one of which is realized. The benefits of applying the Markov chains to project management are hampered by the need to "adjust" the model for a particular project system by determining experimentally the elements in the matrix of transition probabilities.
By using the devised Markov model, it is possible to assess the impact of most characteristics of the system on the course of the project. However, the main conclusion that we can draw judging by results of the conducted research is that a weakly structured system, which includes the project itself, its environment and the team, defines the outcome of the project. This is the confirmation of the S. D. Bushuyev law [6]. In other words, a change in the project state probabilities fully reflects the progress and efficiency of the project [18].
Mathematical description of the unified project model by the Markov chains makes it possible to model parameters of the quantitative objectives of projects, in particular, the changes in the system state probabilities depending on the number of steps in the implementation of projects. Applying the Markov model makes it possible to identify the required number of project steps in order to accomplish the goals of projects and establish existing contradictions and conflicts in project teams. The model developed might also be used for modeling the programs and portfolios of projects.
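As an example of identifying the required number of project steps, the sketch below finds the first step at which the probability of the project product state S7 exceeds a chosen level; the 0.15 threshold and the use of the base-variant matrix (7) are illustrative assumptions:

```python
import numpy as np

def steps_to_reach(P, target, threshold, p0, k_max=200):
    """Smallest step k at which the probability of state `target` reaches `threshold`."""
    p = p0.copy()
    for k in range(1, k_max + 1):
        p = p @ P                      # one step of formula (1)
        if p[target] >= threshold:
            return k
    return None                        # level not reached within k_max steps

# Base-variant matrix (7), rows renormalized to absorb rounding in the printed values
P = np.array([
    [0.30, 0,    0,    0.50, 0,    0.20, 0   ],
    [0,    0.60, 0.10, 0,    0,    0.30, 0   ],
    [0.04, 0.40, 0.76, 0.10, 0.04, 0.20, 0   ],
    [0,    0,    0,    0.30, 0.70, 0,    0   ],
    [0,    0,    0.50, 0,    0.33, 0.15, 0.02],
    [0,    0,    0,    0,    0,    0.87, 0.13],
    [0.25, 0,    0,    0,    0,    0,    0.75],
])
P = P / P.sum(axis=1, keepdims=True)
p0 = np.array([1.0, 0, 0, 0, 0, 0, 0])   # assumed initial state: S1 (Customer)

# S7 (index 6) is the "project product" state; 0.15 is an assumed target level
print(steps_to_reach(P, target=6, threshold=0.15, p0=p0))
```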
Further research should be directed towards the development of theoretical methods for determining the elements in transition probabilities matrix, which would allow scientific substantiation in determining the trajectory of development of virtual project systems that are only planned for practical implementation.
8. Conclusions
1. We proposed a method for the transformation of a typical scheme in project management into a Markov chain. As a typical scheme, which defines the topology of interaction between project participants, we employed its representation in the standard of project management [13]. A method for the transformation of this scheme into a homogeneous Markov chain with discrete states and time is developed.
2. Based on the iterative solution of a system of equations of the Markov chain, we proved that project development is carried out in steps. In this case, a trajectory of project development by steps can be determined not only for actual projects, but also for virtual project systems.
3. Evaluation of project effectiveness in the coordinates of the system state probabilities by steps demonstrated a significant impact of the examined factors on the project trajectory. In this case, we investigated only variation in the terms of interaction within a project team. The base project in a quasi-stationary state at step k=20 demonstrated that 32 % of the time resource is allocated to perform the work of the project, the project manager spends 23 % of the time, and only 10 % remains for the project team. The remaining time is utilized in other states. The results revealed a certain contradiction between the base project team and its manager. We examined the effect on the project of a change in the competence level of the project team and showed that the new values of transition probabilities for state S5 provide for project improvement. The implementation of the project's work is given 40 % of the time resource, the project manager utilizes only 7 % of the resource, and the project team increases its share to 16 %. These data indicate that the characteristics of the project team essentially affect the course of the project, which allowed us to eliminate the contradiction between the project team and its manager detected in the base project.
References
1. A guide to the project management body of knowledge. PMBOK® guide [Text]. - Fifth edition. - USA: Project Management Institute, 2013. - 589 p.
2. Turner, J. P. Manual on project-oriented management [Text] / J. P. Turner. - Moscow: Publishing Grebennikov House, 2007. - 552 p.
3. Bushuyev, S. Proactive Program Management for Development National Finance System in Turbulence Environment [Text] / S. Bushuyev, R. Jaroshenko // Procedia - Social and Behavioral Sciences. - 2013. - Vol. 74. - P. 61-70. doi: 10.1016/j.sbspro.2013.03.044
4. Van der Hoorn, B. Playing projects: Identifying flow in the 'lived experience' [Text] / B. van der Hoorn // International Journal of Project Management. - 2015. - Vol. 33, Issue 5. - P. 1008-1021. doi: 10.1016/j.ijproman.2015.01.009
5. Stanovskii, O. L. Dynamic models in the method of project management [Text] / O. L. Stanovskii, K. V. Kolesnikova, O. Yu. Lebedeva, H. Ismail // Eastern-European Journal of Enterprise Technologies. - 2015. - Vol. 6, Issue 3 (78). - P. 46-52. doi: 10.15587/1729-4061.2015.55665
6. Kolesnikov, O. Development of the model of interaction among the project, team of project and project environment in project system [Text] / O. Kolesnikov, V. Gogunskii, K. Kolesnikova, D. Lukianov, T. Olekh // Eastern-European Journal of Enterprise Technologies. - 2016. - Vol. 5, Issue 9 (83). - P. 20-26. doi: 10.15587/1729-4061.2016.80769
7. Gogunskii, V. Developing a system for the initiation of projects using a Markov chain [Text] / V. Gogunskii, A. Bochkovsky, A. Moskaliuk, O. Kolesnikov, S. Babiuk // Eastern-European Journal of Enterprise Technologies. - 2017. - Vol. 1, Issue 3 (85). - P. 25-32. doi: 10.15587/1729-4061.2017.90971
8. Gogunskii, V. «Lifelong learning» is a new paradigm of personnel training in enterprises [Text] / V. Gogunskii, A. Kolesnikov, K. Kolesnikova, D. Lukianov // Eastern-European Journal of Enterprise Technologies. - 2016. - Vol. 4, Issue 2 (82). - P. 4-10. doi: 10.15587/1729-4061.2016.74905
9. Bushuyev, S. D. Convergence of knowledge in project management [Text] / S. D. Bushuyev, D. A. Bushuyev, V. B. Rogozina, O. V. Mikhieieva // 2015 IEEE 8th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS). - 2015. doi: 10.1109/idaacs.2015.7341355
10. Vaysman, V. A. Design Markov model states of system of design driven organization [Text] / V. A. Vaysman // Bulletin of Sumy State University. Series Engineering. - 2011. - Issue 3. - P. 13-18.
11. Hunter, J. J. The computation of key properties of Markov chains via perturbations [Text] / J. J. Hunter // Linear Algebra and its Applications. - 2016. - Vol. 511. - P. 176-202. doi: 10.1016/j.laa.2016.09.004
12. Oganov, A. V. Using the discrete states of the model to determine the workload of the portfolio manager [Text] / A. V. Oganov // Technology audit and production reserves. - 2015. - Vol. 3, Issue 2 (23). - P. 51-57. doi: 10.15587/2312-8372.2015.45014
13. GOST R 54869-2011. Project Management. Project management requirements [Text]. - Moscow: Standartinform, 2011. - 12 p.
14. Milios, D. Markov Chain Simulation with Fewer Random Samples [Text] / D. Milios, S. Gilmore// Electronic Notes in Theoretical Computer Science. - 2013. - Vol. 296. - P. 183-197. doi: 10.1016/j.entcs.2013.07.012
15. Amparore, E. G. Backward Solution of Markov Chains and Markov Regenerative Processes: Formalization and Applications [Text] / E. G. Amparore, S. Donatelli // Electronic Notes in Theoretical Computer Science. - 2013. - Vol. 296. - P. 7-26. doi: 10.1016/j.entcs.2013.07.002
16. Sherstyuk, O. The research on role differentiation as a method of forming the project team [Text] / O. Sherstyuk, T. Olekh, K. Kolesnikova // Eastern-European Journal of Enterprise Technologies. - 2016. - Vol. 2, Issue 3 (80). - P. 63-68. doi: 10.15587/1729-4061.2016.65681
17. Sherstyuk, O. I. Role paradigm of the formation of the project team [Text] / O. I. Sherstyuk, A. V. Oganov // Management of development of complex systems. - 2014. - Issue 20. - P. 97-101. - Available at: http://urss.knuba.edu.ua/files/zbirnyk-20/20.pdf
18. ISO 21500: 2012. Guidance on project management [Text]. - ISO PC 236. - 2012. - No. 113. - 51 p.