
UDC 519.234

Vestnik SibGAU Vol. 17, No. 1, P. 97-102

CONSIDERATION OF OPTIMAL CONTROL OF STRICTLY HIERARCHICAL MANPOWER SYSTEM

A. Škraba1, V. V. Stanovov2, A. Žnidaršič1, Č. Rozman3, D. Kofjač1

1University of Maribor, Cybernetics & Decision Support Systems Laboratory, Faculty of Organizational Sciences, Kidriceva cesta 55a, SI-4000 Kranj, Slovenia
2Reshetnev Siberian State Aerospace University, 31, Krasnoyarsky Rabochy Av., Krasnoyarsk, 660037, Russian Federation
3University of Maribor, Faculty of Agriculture and Life Sciences, Pivola 10, SI-2311 Hoce, Slovenia
E-mail: [email protected]

The paper describes the problem of finding an optimal control strategy for a manpower control system. The equilibrium condition for strict hierarchical manpower system control is stated, which enables the development of an optimal strategy algorithm for a one-state example. Based on the equilibrium condition, a novel approach to the determination of optimal control in such a system is described. An optimal tracking algorithm is described by example and implemented in Mathematica™. The tracking algorithm finds the optimal values of the transition coefficients so that the system achieves the desired value in one step. For the case when the desired value is not achievable in one step due to the boundary conditions, two additional algorithms are considered, which bring the state values to the desired ones in several steps. Two variants of the algorithm are considered, for when the desired value is lower or greater than the initial value.

Keywords: manpower system, equilibrium condition.


Introduction. Planning the human resource management process affects the whole organizational structure. The common way of addressing such problems is using modeling and simulation methods, which have shown promising results. In our work the System Dynamics methodology [1; 2] was used. In [3] a resource assignment language has been proposed, providing automatic answers to the problem of resource management in a given time period. In [4] the problem of workforce scheduling for retail stores, taking employees' preferences into consideration, was addressed, and a mixed integer programming model was successfully used to solve it. Stochastic modeling has also been applied to determine the most appropriate promotion time in [5]. This research also considered the survival rates in different classes, varying the class sizes and taking the time until the next promotion as the main goal. However, the main obstacles which appear during the restructuring process are organizational barriers, which were considered in [6]. That research shows that technological factors have less importance than organizational ones.

In our previous research in the field of hierarchical manpower system control [7; 8] the system was defined as x(k + 1) = Ax(k) + Bu(k) [7], which is convenient for simulation examination [9]. The state vector x represents the number of men in a particular rank, whereas the matrix of coefficients A combines the promotion factors r and the wastage factors f. Recruitment, as the input to the system, is represented by the Bu(k) term in the discrete state space.
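As an aside, the discrete state-space form can be sketched in Python. The three-rank system and its coefficient values below are hypothetical, chosen only to show how A and B are structured; they are not taken from the paper:

```python
import numpy as np

# Hypothetical 3-rank strict hierarchy: each rank loses a fraction r (promotion)
# and f (wastage); promotions from rank i feed rank i+1; recruitment enters rank 1.
r = np.array([0.10, 0.08, 0.00])   # promotion fractions (top rank: none)
f = np.array([0.05, 0.05, 0.10])   # wastage/retirement fractions

# A combines retention on the diagonal and promotion on the sub-diagonal.
A = np.diag(1.0 - r - f) + np.diag(r[:-1], k=-1)
B = np.array([1.0, 0.0, 0.0])      # recruitment enters the first rank only

x = np.array([100.0, 50.0, 20.0])  # initial headcount per rank
for k in range(5):                 # simulate 5 steps with constant recruitment
    x = A @ x + B * 15.0           # x(k+1) = A x(k) + B u(k)
print(x)
```

With these numbers the first rank happens to start at its fixed point (0.85 · 100 + 15 = 100), so it stays constant while the lower ranks drift toward theirs.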

Approaches with evolutionary and biologically inspired algorithms [10-13] have been promising; however, understanding the procedure of how to determine the optimal solution [14] might contribute to the development of more efficient heuristic algorithms. When considering such a system in terms of optimality, the "curse of complexity" soon arises. By careful examination of the optimal solution determination algorithm, several propositions can be included in the optimal trajectory heuristic search.

Methodology consideration. A strict hierarchical manpower system represents a delay chain where elements depend on previous states. The structure of the system is best represented graphically, as in fig. 1. The system in our case consists of eight state elements which are interconnected with flows (rates). The first rate element on the left side of fig. 1 represents recruitment and is the only input to the system. From each state, fluctuations are possible, i. e. a person may leave the system (wastage). Promotions between the ranks are represented by flows. At the last element on the right side of fig. 1, retirement is represented by the last flow element. Each transition is determined by a parameter: a parameter of fluctuation, of promotion or of retirement.

For the system in fig. 1 the equilibrium condition depends on the lower and upper parameter boundaries and can be stated as:

r_nLB + f_nLB ≤ R0 / x*_n ≤ r_nUB + f_nUB, (1)

where r_nLB is the lower boundary for the coefficient of promotion for rank n; f_nLB is the lower boundary for the coefficient of fluctuation for rank n; R0 is recruitment; x*_n is the desired value of state n; r_nUB is the upper boundary for the coefficient of promotion for rank n; f_nUB is the upper boundary for the coefficient of fluctuation for rank n.

Eq. (1) is a mandatory condition if one would like to achieve equilibrium in all states, i. e. that the desired values for all states are achieved and the structure is constant, x(k + 1) = x(k).

The recruitment R0 should therefore be bounded by the interval:

x*_n (r_nLB + f_nLB) ≤ R0 ≤ x*_n (r_nUB + f_nUB), (2)

or stated differently:

[R0_LB, R0_UB] ∈ [x*_n (r_nLB + f_nLB), x*_n (r_nUB + f_nUB)]. (3)

The equilibrium conditions determine whether the system can be put into a stable condition at the end of the transition. If insufficient recruitment is available and the boundary conditions are too stiff, the desired values in particular states cannot be achieved. The interval stated by Eq. (3) should be considered during policy design.
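Conditions (2) and (3) lend themselves to a direct feasibility check. A minimal Python sketch, using the boundary values of the one-state example given later (x* = 4, r in [0, 0.5], f in [0.05, 0.15], R0 in [1, 10]):

```python
def recruitment_interval(x_star, r_lb, r_ub, f_lb, f_ub):
    """Interval of recruitment R0 that holds state n at x_star (Eq. 2)."""
    return x_star * (r_lb + f_lb), x_star * (r_ub + f_ub)

def feasible(R0_lb, R0_ub, x_star, r_lb, r_ub, f_lb, f_ub):
    """True if the admissible recruitment range overlaps the required interval."""
    lo, hi = recruitment_interval(x_star, r_lb, r_ub, f_lb, f_ub)
    return R0_lb <= hi and lo <= R0_ub

# Boundaries of the one-state example: r in [0, 0.5], f in [0.05, 0.15],
# recruitment in [1, 10]; desired state value x* = 4.
print(recruitment_interval(4, 0.0, 0.5, 0.05, 0.15))  # (0.2, 2.6)
print(feasible(1, 10, 4, 0.0, 0.5, 0.05, 0.15))       # True
```

Here the required interval [0.2, 2.6] overlaps the admissible recruitment range [1, 10], so an equilibrium at x* = 4 is reachable; tightening R0_LB above 2.6 would make it infeasible.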

Development of Algorithm and Results. When developing the optimal solution algorithm, the backward computation approach of Bellman dynamic programming was applied. The algorithm was developed on a tracking example for one state element. In our case, the initial value of the state element was set to 4, then peaks at 14 and settles back again at an approximate value of 4. If the system is to behave optimally, the difference between the desired and actual values should always be 0. By that consideration, we compute the solutions for the transition coefficient r from the final time back to the initial time 0. In determining the optimal strategy we consider that the recruitment should be kept as low as possible. The backward computation is performed by the following equation:

x*(k + 1) = x*(k) + R0(k) − r1(k) x*(k) − f1(k) x*(k).


Fig. 1. Example of strict hierarchical system of eight ranks

In the next frame the developed algorithm is shown, which provides an optimal solution. There are several different optimal solutions, since the criterion is zero deviation between the desired and actual trajectory:

Lz={4,11,13,14,13,11,10,9,8,6,4};  (* vector of desired values *)
R0LB={1,1,1,1,1,1,1,1,1,1,1};  (* lower boundary recruitment vector *)
R0UB={10,10,10,10,10,10,10,10,10,10,10};  (* upper boundary recruitment vector *)
r1LB={0,0,0,0,0,0,0,0,0,0,0};  (* lower boundary transition coefficient vector *)
r1UB={0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5};  (* upper boundary transition coefficient vector *)
f1LB={0.05,0.05,0.05,0.05,0.05,0.05,0.05,0.05,0.05,0.05,0.05};  (* lower boundary fluctuation vector *)
f1UB={0.15,0.15,0.15,0.15,0.15,0.15,0.15,0.15,0.15,0.15,0.15};  (* upper boundary fluctuation vector *)
t={0,1,2,3,4,5,6,7,8,9,10};  (* time vector *)

Do[
  a=Lz[[ii+1]];  (* in the optimum, state values equal desired values *)
  b=Lz[[ii]];  (* we take two adjacent points *)
  R=R0LB[[ii]];  (* recruitment is set to the lowest possible value, i.e. LB *)
  f1=f1UB[[ii]];  (* fluctuation is set to the maximum possible value, i.e. UB *)
  H=Solve[a==b+R-r1*b-f1*b,{r1}];  (* solve for the transition coefficient using two adjacent time points *)
  xx=H[[All,1,2]];  (* get the values *)
  While[xx[[1]]<0,  (* while r1 is negative, increase recruitment by 1 until it becomes non-negative *)
    R++;  (* increase recruitment by 1 *)
    H=Solve[a==b+R-r1*b-f1*b,{r1}];  (* solve the system again *)
    xx=H[[All,1,2]]  (* extract the value *)
  ];
  AppendTo[R0,R];  (* gather the recruitment results *)
  AppendTo[S,xx],  (* gather the transition coefficient results *)
  {ii,10,1,-1}  (* perform backward computation *)
]
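For readers less familiar with Mathematica, the same backward pass can be re-expressed in Python. This is an illustrative sketch, not the authors' code: the closed-form solution r1 = (b + R − f1·b − a)/b replaces the Solve call:

```python
Lz   = [4, 11, 13, 14, 13, 11, 10, 9, 8, 6, 4]   # desired values
R0LB = [1] * 11                                  # lower boundary, recruitment
f1UB = [0.15] * 11                               # upper boundary, fluctuation

R0, r1s = [], []
for ii in range(9, -1, -1):                      # backward in time
    a, b = Lz[ii + 1], Lz[ii]                    # two adjacent desired points
    R, f1 = R0LB[ii], f1UB[ii]                   # R lowest, f1 highest
    # a == b + R - r1*b - f1*b  =>  r1 = (b + R - f1*b - a) / b
    r1 = (b + R - f1 * b - a) / b
    while r1 < 0:                                # negative r1: raise recruitment by 1
        R += 1
        r1 = (b + R - f1 * b - a) / b
    R0.append(R)
    r1s.append(r1)
print(list(zip(R0, r1s)))
```

On the last backward step (from 4 up to 11) the minimal recruitment of 1 is not enough, so the loop raises it to 8 before a non-negative r1 is found; all resulting coefficients stay within the boundaries of the example.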

However, if one considers that the input to the system, i. e. recruitment, should be kept as low as possible, promotions should be kept as low as possible too, and fluctuations should be kept as high as possible, the unique optimal solution is determined by the algorithm in tracking mode. This means that the trajectory changes in time.

Fig. 2 shows the response of the system when the optimal algorithm is applied as described. The rectangles represent the desired values of the state and the diamonds represent the actual values. In our case, the difference for all time points was 0.

Fig. 2. Tracking the desired value - optimal solution example, error equals zero (square - desired, diamond - response)

In the tracking case, the backward computation yields the optimal solution. However, if the system state is too far from the desired value, the system cannot be in the optimal state, i. e. the deviation is not 0. For the case where the desired value is higher than the current state, an algorithm was developed which is printed in the next frame. If the needed value of recruitment is higher than the upper boundary value for recruitment, we set the recruitment value to the upper boundary and proceed to the next step. Notice that in this case the solution goes forward in time:

Do[
  a=Lz[[ii+1]];  (* next value as the desired value *)
  b=L[[ii]];  (* current value from the state vector L *)
  r=r1LB[[ii]];  (* transitions are set to the lower boundary, i.e. LB *)
  f1=f1LB[[ii]];  (* fluctuation is set to the lowest possible value, i.e. LB *)
  CC=a-b+r*b+f1*b;  (* calculation of the needed recruitment *)
  If[CC>R0UB[[ii]],
    (* if the needed recruitment is higher than R0UB[[ii]], we set R0 to the upper boundary UB *)
    L[[ii+1]]=L[[ii]]+1*(R0UB[[ii]]-r*b-f1*b);  (* calculation of the next state *)
    AppendTo[R0,R0UB[[ii]]];  (* add to the recruitment result vector *)
    AppendTo[rr,r];  (* add the transition coefficient r *)
    AppendTo[ff,f1],  (* add the fluctuation coefficient f *)
    (* otherwise, the calculated CC is used for R0, i.e. the input recruitment *)
    L[[ii+1]]=L[[ii]]+1*(CC-r*b-f1*b);
    AppendTo[R0,CC];  (* collect recruitment data *)
    AppendTo[rr,r];  (* collect transition coefficient data *)
    AppendTo[ff,f1]  (* collect fluctuation coefficient data *)
  ],
  {ii,1,10,1}  (* compute forward from time 1 onward *)
]
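The same clamping logic can be sketched in Python. The numbers are illustrative (a constant desired value of 14 approached from an initial state of 4), not the exact data behind fig. 3: when the needed recruitment CC exceeds the upper boundary, it saturates at R0UB:

```python
x_star = 14.0                 # desired value (above the initial state)
L = [4.0]                     # state trajectory, starts below the goal
R0_ub = 10.0                  # upper boundary for recruitment
r_lb, f_lb = 0.0, 0.05        # lower boundaries: minimal promotion and wastage

R0 = []
for _ in range(10):           # forward in time
    b = L[-1]
    CC = x_star - b + r_lb * b + f_lb * b   # recruitment needed to hit x_star
    R = min(CC, R0_ub)                      # cannot exceed the upper boundary
    L.append(b + R - r_lb * b - f_lb * b)   # state update
    R0.append(R)
print(L)
```

In the first step CC = 10.2 exceeds R0UB = 10, so the state only reaches 13.8; in the second step the goal of 14 is reached and held.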

Fig. 3 shows the response of the system when the described optimal algorithm is applied. The rectangles represent the desired values of the state and the diamonds represent the actual values. In our case, the difference for all time points is not 0. In the first part of the response, the difference between the actual and desired state is significant. This is due to the boundaries of the parameters, which do not allow the goal to be approached faster.

Fig. 3. Tracking the desired value which is higher than initial state (square - desired, diamond - response)

The algorithm for the case where the desired value is below the initial state value is shown in the next frame. In the loop, we calculate the needed value of recruitment R, and if the calculated recruitment is lower than the boundary value for recruitment in a particular step, we set the recruitment equal to LB (i. e. we cannot go lower than the lower boundary of the parameter):

Do[
  a=Lz[[ii+1]];  (* the value in the next time step equals the desired value *)
  b=L[[ii]];  (* current value taken from the state vector *)
  r=r1UB[[ii]];  (* transition coefficient is set to the upper boundary, i.e. UB *)
  f1=f1UB[[ii]];  (* fluctuation coefficient is set to the highest possible value, i.e. UB *)
  CC=a-b+r*b+f1*b;  (* calculate the needed value of recruitment R0 *)
  If[CC<R0LB[[ii]],
    (* if the needed recruitment is lower than LB, we set R0 equal to LB *)
    L[[ii+1]]=L[[ii]]+1*(R0LB[[ii]]-r*b-f1*b);  (* calculate the next state value *)
    AppendTo[R0,R0LB[[ii]]];  (* collect the results for recruitment *)
    AppendTo[rr,r];  (* collect the results for transition coefficients *)
    AppendTo[ff,f1],  (* collect the results for fluctuation coefficients *)
    (* otherwise, we use the previously computed value CC for R0 (recruitment) *)
    L[[ii+1]]=L[[ii]]+1*(CC-r*b-f1*b);
    AppendTo[R0,CC];  (* collect recruitment data *)
    AppendTo[rr,r];  (* collect transition coefficient data *)
    AppendTo[ff,f1]  (* collect fluctuation coefficient data *)
  ],
  {ii,1,10,1}  (* compute forward from time 1 onward *)
]
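The mirror case can likewise be sketched in Python (illustrative numbers: a constant desired value of 4 from an initial state of 14, not the exact data behind fig. 4): when the computed recruitment CC falls below the lower boundary, the algorithm saturates at R0LB, so the goal is reached over several steps instead of one:

```python
x_star = 4.0                  # desired value (below the initial state)
L = [14.0]                    # state trajectory, starts above the goal
R0_lb = 1.0                   # lower boundary for recruitment
r_ub, f_ub = 0.5, 0.15        # upper boundaries: maximal promotion and wastage

R0 = []
for _ in range(10):           # forward in time
    b = L[-1]
    CC = x_star - b + r_ub * b + f_ub * b   # recruitment needed to hit x_star
    R = max(CC, R0_lb)                      # cannot go below the lower boundary
    L.append(b + R - r_ub * b - f_ub * b)   # state update
    R0.append(R)
print(L)
```

In the first step CC = −0.9 is infeasible, so recruitment is clamped to 1 and the state only drops to 5.9; the goal of 4 is reached in the second step and held.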

Fig. 4 shows the response of the system when the above optimal algorithm is applied. The rectangles represent the desired values of the state and the diamonds represent the actual values. In our case, the difference for all time points is not 0. In the first part of the response, the difference between the actual and desired state is also significant. This is, again, due to the boundaries of the parameters, which do not allow the goal to be approached faster.

Fig. 4. Tracking the desired value which is lower than initial state (square - desired, diamond - response)

If there are two ranks, the system becomes more complex, and another algorithm for finding the optimal strategy should be developed. The problem here is that for two states there can be 9 different variants of initial situations, i. e. for each of the states there are 3 possibilities: the desired value is greater than, equal to, or less than the initial value, resulting in 9 different variants. For a larger number of states the number of variants increases exponentially.

The most important feature of a system with more than one state is that the states become dependent on each other, i. e. to optimize the value in state 2 we must change the output from state 1, which also changes the value in state 1.
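This coupling can be illustrated with a minimal two-rank sketch (hypothetical coefficients, not from the paper): the promotion outflow r1·x1 of rank 1 is exactly the inflow of rank 2, so the two states cannot be steered independently:

```python
def step(x1, x2, R0, r1, f1, r2, f2):
    """One discrete step of a two-rank chain: promotions from rank 1 feed rank 2."""
    x1_next = x1 + R0 - r1 * x1 - f1 * x1
    x2_next = x2 + r1 * x1 - r2 * x2 - f2 * x2   # inflow is rank 1's promotion outflow
    return x1_next, x2_next

# Raising r1 to push rank 2 toward its goal simultaneously drains rank 1:
print(step(10.0, 5.0, 2.0, 0.1, 0.05, 0.1, 0.05))  # rank 1 grows, rank 2 grows slowly
print(step(10.0, 5.0, 2.0, 0.3, 0.05, 0.1, 0.05))  # rank 2 grows faster, rank 1 shrinks
```

With r1 = 0.1 the next state is about (10.5, 5.25); with r1 = 0.3 it is about (8.5, 7.25), showing why the one-state algorithms above do not transfer directly.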

Conclusion. Knowing the system's optimal solution [15] is important for the development of heuristic algorithms and might contribute to more efficient algorithm design. In our case, we have successfully developed an optimal control algorithm for the one-state-element case. The approach of backward computation was appropriate for the optimal strategy determination. In the case of desired values which are not achievable in one step, or which fall outside the system's reach due to the parameter limitations, the development of the algorithm is more challenging. In further development, a combinatorial approach should be considered in connection with the interval limitations of the parameters.

Acknowledgements. This work was financed by the Slovenian Research Agency ARRS, bilateral project No. BI-RU/14-15-047 "Manpower control strategy determination with self-adapted evolutionary and biologically inspired algorithms".


References

1. Skraba A., Kljajic M., Kljajic M. B. The role of information feedback in the management group decision-making process applying system dynamics models. Group Decision and Negotiation, 2007, No. 16(1), P. 77-95, DOI: 10.1007/s10726-006-9035-9.

2. Borstnar M. K., Kljajic M., Skraba A., Kofjac D., Rajkovic V. The relevance of facilitation in group decision making supported by a simulation model. System Dynamics Review, 2011, No. 27(3), P. 270-293, DOI: 10.1002/sdr.460.


3. Cabanillas C., Resinas M., del-Río-Ortega A., Cortés A. R. Specification and automated design-time analysis of the business process human resource perspective. Information Systems, 2015, No. 52, P. 55-82, DOI: 10.1016/j.is.2015.03.002.

4. Lin D., Yue T., Ganggang N., Yongqing X., Xin S., Changrui R., Zongying Z. Scheduling Workforce for Retail Stores with Employee Preferences. In 2015 IEEE

International Conference on Service Operations And Logistics, And Informatics (SOLI), (15-17 November 2015). 2015, P. 37-42. Piscataway, NJ, USA: IEEE.

5. Gupta A., Ghosal A. A manpower planning model based on length of service under varying class sizes. OPSEARCH, 2014, No. 51(4), P. 615-623, DOI: 10.1007/s12597-013-0162-1.

6. Babaei M., Zahra G., Soudabeh A. Challenges of Enterprise Resource Planning implementation in Iran large organizations. Information Systems, 2015, No. 54, P. 15-27, DOI: 10.1016/j.is.2015.05.003.

7. Skraba A., Kljajic M., Papler P., Kofjac D., Obed M. Determination of recruitment and transition strategies. Kybernetes. 2011, Vol. 40, No. 9/10, P. 1503-1522.

8. Skraba A., Kofjac D., Znidarsic A., Rozman C., Maletic M. Application of finite automata with genetic algorithms in JavaScript for determination of manpower system control. In: 3rd International Workshop on Mathematical Models and their Applications, November 19-21, 2014, Krasnoyarsk, IWMMA'2014.

9. Kljajic M., Bernik I., Skraba A. Simulation Approach to Decision Assessment in Enterprises. Simulation (Simulation Councils Inc.), 2000, P. 199-210.

10. Semenkin E., Semenkina M. Stochastic Models and Optimization Algorithms for Decision Support in Spacecraft Control Systems Preliminary Design. Informatics in Control, Automation and Robotics, Lecture Notes in Electrical Engineering, 2014, Vol. 283, P. 51-65.

11. Akhmedova S., Semenkin E. Co-operation of biology related algorithms, 2013 IEEE Congress on Evolutionary Computation. 2013, P. 2207-2214.

12. Skraba A., Kofjac D., Znidarsic A., Maletic M., Rozman C., Semenkin E. S., Semenkina M. E., Stano-vov V. V. Application of Self-Configuring genetic algorithm for human resource management. Journal of Siberian Federal University - Mathematics and Physics. 2015, No. 8(1), P. 94-103.

13. Skraba A., Kofjac D., Znidarsic A., Rozman C., Maletic M. Application of finite automata with genetic algorithms in JavaScript for determination of manpower system control. Vestnik SibGAU. 2015, Vol. 16, No. 1, P. 153-158 (In Russ.).

14. Mehlman A. An approach to optimal recruitment and transition strategies for manpower systems using dynamic programming, Journal of Operational Research Society, 1980, Vol. 31, No. 11, P. 1009-1015.

15. Reeves G. R., Reid R. C. A military reserve manpower planning model. Computers & Operations Research, 1999, Vol. 26, P. 1231-1242.

© Skraba A., Stanovov V. V., Znidarsic A., Rozman C., Kofjac D., 2016
