
hand, the increase in the amortization savings leads to an increase in cumulative expenses Z, which reduces the balance profit Wb, the cumulative profit tax N3 and, finally, the net profit Wr, thus reducing the efficiency of the project.

On the other hand, the amortization directly influences the parameter NPV towards its increase. Thus, a change in the size of the amortization Am influences the NPV in three directions at the same time. Therefore, when the non-linear method is used, the efficiency of the project will increase, as the negative influence of amortization on expenses will be compensated, first, by the positive influence of the reduction of the wealth tax and, secondly, by the direct influence of amortization on NPV.

Due to the fact that an increase in the size of the amortization Am will lead to an increase in the general expenses Z, it will be reflected in the price of production Pk towards its increase, which, in turn, will influence the demand again. Therefore, how much the parameter NPV will finally change after a change in the amortization charge method will depend on the elasticity of demand for the particular production and on the competitive position of the enterprise in the market.

Taking the considered factors into account during the preparation of the project and during its realization makes the management of its efficiency possible. It is also necessary to consider the interaction of the factors and the change of production demand.

In conclusion, we can say that the offered classification of investment project modeling methods and the comparative analysis carried out allow choosing a toolkit which corresponds to the purposes of the user. The investment project efficiency factors, allocated on the basis of the net profit calculation algorithm, make it possible to manage the project efficiency at the planning stage and during its realization.


© Gorbunov M. A., 2009

A. Yu. Vorozheikin, T. N. Gonchar, I. A. Panfilov, E. A. Sopov, S. A. Sopov Siberian State Aerospace University named after academician M. F. Reshetnev, Russia, Krasnoyarsk

A MODIFIED PROBABILISTIC GENETIC ALGORITHM FOR THE SOLUTION OF COMPLEX CONSTRAINED OPTIMIZATION PROBLEMS*

A new algorithm for the solution of complex constrained optimization problems, based on the probabilistic genetic algorithm with optimal solution prediction, is proposed. The results of an efficiency investigation in comparison with the standard genetic algorithm are presented.

Keywords: probabilistic genetic algorithm, constrained optimization.

The necessity to develop complex system models appears in different fields of science and technology such as mathematics, economics, medicine, spacecraft control and so on. In the process of modeling many optimization problems emerge which are multiextremal and multiobjective and have implicitly formalized functions, a complex feasible region structure, many types of variables, etc. Such problems cannot be solved with classical optimization procedures, thus more effective and universal methods such as genetic algorithms (GAs) have to be designed and implemented. GAs have proved their efficiency in solving many complex optimization problems [1; 2].

The GA efficiency depends on fine tuning and control of its parameters. If an untrained user sets arbitrary parameter values, the GA efficiency may vary from very low to very high. The recent trends in the field of GAs are adaptive GAs based on complex hybrid structures and efficient GAs with a reduced parameter set.

A known approach to GA parameter set reduction is probabilistic genetic algorithms (pGAs) [3; 4]. The essential difference between a pGA and the standard GA is that the pGA has no crossover operator and new solutions are generated according to statistical information about the search space. Thus, by collecting and processing this kind of information, pGAs can adapt to the problems they solve.

pGAs have demonstrated their efficiency on many complex optimization problems, and their further investigation and improvement are promising. Topical problems are the analysis of pGA features on a wide range of optimization problems and parameter set reduction without efficiency loss. This article is devoted to the investigation of pGA efficiency for complex constrained optimization problems.

* This work is supported by the State Program “Scientific and teaching staff of innovative Russia” (project NK-136P/3) and by a Grant of the President of Russia for young PhD holders for 2009-2010 (MK-2160.2009.9).

Probabilistic genetic algorithm for non-constrained optimization problems. During its run a GA collects and processes some statistical information about the search space, but this statistics is not present in an explicit form. The pGA uses the following form of statistics representation, the probability vector of the current population:

P^k = (p_1^k, ..., p_n^k), p_i^k = P(x_i^k = 1), i = 1, ..., n, where p_i^k is the probability of a unit value of the i-th bit of a solution X and k is the iteration number.

The general scheme of pGA is:

1. Random generation of the initial population.

2. Selection of r individuals (called parents) on the basis of their fitness. Evaluate the probability vector as:

P = (p_1, ..., p_n), p_j = P(x_j = 1) = (1/r)·Σ_{i=1}^{r} x_j^i, j = 1, ..., n,

here n is the chromosome length and x_j^i is the j-th gene of the i-th parent individual.

3. Form a new population (called offspring) according to the distribution P.

4. Apply mutation operator to the offspring.

5. Form a new population from parents and offspring.

6. Repeat steps 2-5 (a minimal code sketch of this scheme is given below).
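The following minimal Python sketch illustrates the scheme above. It assumes a binary representation, truncation selection of the r best individuals, an elitist merge of parents and offspring and a maximized fitness function; the function name, default parameter values and these design choices are illustrative, not taken from the paper.

import numpy as np

# A minimal sketch of the pGA scheme for binary strings (steps 1-6 above).
def pga(fitness, n, pop_size=50, r=25, mutation_rate=0.01, iterations=100, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    # 1. Random generation of the initial population.
    population = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(iterations):
        # 2. Select r parents on the basis of their fitness (truncation selection,
        #    assuming maximization) and evaluate p_j = (1/r) * sum_i x_j^i.
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[-r:]]
        p = parents.mean(axis=0)
        # 3. Form the offspring population according to the distribution P.
        offspring = (rng.random((pop_size, n)) < p).astype(int)
        # 4. Apply the mutation operator to the offspring (bit-flip mutation).
        flip = rng.random((pop_size, n)) < mutation_rate
        offspring = np.where(flip, 1 - offspring, offspring)
        # 5. Form the new population from parents and offspring (elitist merge).
        merged = np.vstack([parents, offspring])
        merged_scores = np.array([fitness(ind) for ind in merged])
        population = merged[np.argsort(merged_scores)[-pop_size:]]
    best = max(population, key=fitness)
    return best, fitness(best)

For example, pga(lambda x: int(x.sum()), n=20) maximizes the number of unit bits (the OneMax problem).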

As it was previously mentioned, during the pGA run the algorithm collects statistics about the distribution of null and unit values in the population. The experimental results show that the components of the probability vector converge to the corresponding values of the optimal solution vector, as shown in figure 1.

Fig. 1. The change of the values of the j-th component of the probability vector

As shown in figure 1, the given j-th component of P converges to one. It means that the value of the j-th gene of the optimal solution is most probably equal to one (for binary representation). One can use this feature to predict the optimal solution.

The following prediction algorithm was proposed in [3; 4]:

1. Choose a certain scheme of the pGA for the given problem, set the iteration number i = 1, ..., I and the number of independent algorithm runs k = 1, ..., K.

2. Collect the statistics (p_j^i)_k, j = 1, ..., n. Average p_j^i over the k runs. Determine the tendency of the p_j change.

3. Set x_j^opt = 1 if Σ_{i=1}^{I} (p_j^i − 0.5) > 0, else x_j^opt = 0, where p_j^i is the value averaged over the K runs.

The main idea is that the more often the probability value is greater than 0.5, the higher the probability of a unit value in the optimal solution.

In practical problems there may be situations when the pGA has not collected enough information at the beginning and the j-th gene value is equal to one (or zero) for almost every solution. At the final stage the pGA can find a much better solution with the inverted value of the j-th gene, which means that the probability vector values will change their convergence direction (fig. 2). But the above-mentioned prediction algorithm will give us the initial value, because the j-th value of the probability vector was greater than 0.5 (or less than 0.5 for zero values) for a long time.

Thus, one can use the following modification of the prediction algorithm (a code sketch is given after the list):

1. Set the prediction step K.

2. Every K iterations use the collected statistics P^i, i = 1, ..., N_K, N_K = t·K, t ∈ {1, 2, ...}, to evaluate the probability vector change: ΔP^i = P^i − P^{i−1}.

3. Set the weights for every iteration according to its number: ω_i = 2i / (N_K·(N_K + 1)), i = 1, ..., N_K.

4. Evaluate the weighted change of the probability vector as:

ΔP^w = (Δp_j^w) = Σ_{i=1}^{N_K} ω_i·ΔP^i.

5. Set the predicted optimal solution X^opt = (x_j^opt), where x_j^opt = 1 if Δp_j^w > 0, and x_j^opt = 0 otherwise.

6. Add the predicted solution to the current population and continue the pGA run.
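A minimal Python sketch of steps 2-5 of this modified prediction, assuming that the probability vectors P^0, P^1, ..., P^{N_K} of the last N_K + 1 iterations are stored as rows of an array; the function name and argument layout are illustrative:

import numpy as np

# Predict the optimal solution from the weighted change of the probability vector.
# `history` is an array of shape (N_K + 1, n) holding P^0 ... P^{N_K}.
def predict_optimum(history):
    history = np.asarray(history, dtype=float)
    n_k = history.shape[0] - 1
    dp = np.diff(history, axis=0)                        # step 2: dP^i = P^i - P^{i-1}
    w = 2.0 * np.arange(1, n_k + 1) / (n_k * (n_k + 1))  # step 3: growing weights, sum to 1
    dp_weighted = (w[:, None] * dp).sum(axis=0)          # step 4: weighted change
    return (dp_weighted > 0).astype(int)                 # step 5: 1 where the change is positive

The returned binary vector is the predicted optimum X^opt of step 5; adding it to the current population implements step 6.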

Fig. 2. The situation when the prediction can be wrong

The main idea of the given algorithm is that the probability values at the later iterations have greater weights, as the algorithm has collected more information about the search space. The weights are chosen such that ω_{i+1} > ω_i and Σ_{i=1}^{N_K} ω_i = 1.

Genetic algorithms for constrained optimization problems. In general, GAs and pGAs select an individual in accordance with its fitness value, but there is no control of the optimization constraints. There are many possible methods to solve this problem.

Let the following constrained optimization problem be solved:

f(x) → extr,

g_j(x) ≤ 0, j = 1, ..., r,
h_j(x) = 0, j = r + 1, ..., m.

In general, the fitness of an individual x is evaluated as:

fitness(x) = f(x) + δ·λ(t)·Σ_{j=1}^{m} f_j^β(x),

where t is the iteration number; δ = 1 for a minimization problem and δ = −1 for a maximization problem; f_j(x) is the penalty value for violation of the j-th constraint; β is a real number.

The penalty functions f_j(x) are evaluated as:

f_j(x) = max{0, g_j(x)}, j = 1, ..., r,
f_j(x) = |h_j(x)|, j = r + 1, ..., m.

The following penalty methods are known: the “death” penalty, the static penalty, the dynamic penalty, the adaptive penalty and hybrid methods of the “cure” of individuals.

Having analyzed every penalty method, the authors limited the further investigation to the dynamic and adaptive penalty methods, as the other methods have a number of disadvantages.

In particular, the “death” penalty eliminates every unfeasible solution even if it carries important information for new feasible solutions. The static penalty contains a large set of parameters that should be well tuned: a non-optimal set of parameters can lead to unfeasible solutions. The “cure” method involves local optimization procedures on every iteration of the GA, thus such methods use much more computational resources.

The dynamic penalty. The method uses the previously mentioned penalty functions and defines λ(t) in the following way:

λ(t) = (C·t)^α.

The fitness of an individual x at the t-th iteration is evaluated as:

fitness(x) = f(x) + δ·(C·t)^α·Σ_{j=1}^{m} f_j^β(x).

The values of C, α and β are set according to the particular problem. The recommended values are C = 0.5, α = β = 2 (obtained experimentally).
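A minimal Python sketch of the dynamic-penalty fitness under the definitions above, assuming a minimization problem (δ = 1) and that the inequality constraints g_j(x) ≤ 0 and equality constraints h_j(x) = 0 are passed as lists of callables; the function and argument names are illustrative:

# Dynamic-penalty fitness: f(x) + delta * (C*t)^alpha * sum_j f_j(x)^beta,
# with f_j(x) = max(0, g_j(x)) for inequality and |h_j(x)| for equality constraints.
def dynamic_penalty_fitness(f, g, h, x, t, C=0.5, alpha=2.0, beta=2.0, delta=1.0):
    penalties = [max(0.0, gj(x)) for gj in g] + [abs(hj(x)) for hj in h]
    return f(x) + delta * (C * t) ** alpha * sum(p ** beta for p in penalties)

For example, dynamic_penalty_fitness(f=lambda x: x[0] ** 2 + x[1] ** 2, g=[lambda x: 1.0 - x[0]], h=[], x=[0.5, 0.0], t=10) penalizes the point for violating the constraint 1 − x_1 ≤ 0, and the penalty term grows with the iteration number t.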

The adaptive penalty uses the same penalty functions, but λ(t) is evaluated as:

λ(t + 1) = β1·λ(t), if b_i ∈ D for all t − k + 1 ≤ i ≤ t,
λ(t + 1) = β2·λ(t), if b_i ∉ D for all t − k + 1 ≤ i ≤ t,
λ(t + 1) = λ(t), otherwise,

where b_i is the best solution in the i-th population, D is the feasible region, β1 ∈ (0, 1), β2 > 1 and β1·β2 ≠ 1. The penalty decreases at the (t + 1)-th step if the best individual was feasible during the last k iterations. Otherwise, if it was unfeasible during the last k iterations, the penalty increases.

The method uses three parameters: β1, β2 and k. The adaptive penalty method uses both kinds of information: whether the current best solution is unfeasible and whether the previous best solutions were unfeasible [5].
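A minimal Python sketch of this adaptive update of λ(t), assuming that a list of booleans records whether the best individual of each of the last k populations was feasible; the function name and the values β1 = 0.5, β2 = 1.5 are illustrative and chosen only to satisfy β1 ∈ (0, 1), β2 > 1 and β1·β2 ≠ 1:

# Adaptive update of the penalty coefficient lambda(t), as described above.
# `feasible_history` holds the feasibility flags of the best individuals of
# the last k populations; beta1 and beta2 are illustrative placeholder values.
def update_penalty(lam, feasible_history, beta1=0.5, beta2=1.5):
    if all(feasible_history):         # best individual feasible for the last k iterations
        return beta1 * lam            # decrease the penalty
    if not any(feasible_history):     # best individual unfeasible for the last k iterations
        return beta2 * lam            # increase the penalty
    return lam                        # otherwise keep the penalty unchanged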

Probabilistic genetic algorithms for constrained optimization problems. The general scheme of the pGA with a penalty method remains the same; the main difference is in the fitness function definition. Thus, one can extend the optimal solution prediction method to constrained optimization problems. It is appropriate to use the modified prediction procedure, as the objective function surface with a penalty can have a lot of local optima and the basic prediction algorithm can lead to a local solution instead of the global one.

GA and pGA computational efficiency investigation for constrained optimization problems. We compare the efficiency of the algorithms on a set of single-objective constrained optimization test problems. The objective functions and constraints are linear and non-linear functions of several variables. A part of the test problems set is presented in table 1 [6].

We investigate “the best-efficiency” and “the worst-efficiency” parameter sets for both algorithms to determine how the parameters influence the efficiency over a wide range. Better results for “the worst-efficiency” parameter set mean better effectiveness for arbitrary parameters chosen by an untrained user.

As the GA and pGA are stochastic procedures, we average the characteristics of the algorithms with every unique parameter set over 100 independent runs.

To estimate the algorithms’ efficiency we use the following criteria:


- the rate of runs (%) in which the exact optimal solution was found;

- the average iteration number (N) at which the exact optimal solution was found for the first time.

At the first stage we define the constraint handling method that gives the best efficiency on the given test problems set. We have found that the standard GA with “the best-efficiency” parameters shows the best results with the dynamic penalty for the whole test problems set. The standard GA with “the worst-efficiency” parameters shows the best results with the dynamic penalty only in 60 % of cases (problems). On average, the dynamic penalty is more effective than the adaptive penalty in 60 % of cases.

The pGA, both with “the worst-efficiency” and “the best-efficiency” parameters, shows the best results with the dynamic penalty for the whole test problems set.

Thus, we have determined that the dynamic penalty is more effective than the adaptive penalty for both the GA and the pGA.

At the second stage we compare the efficiency of the standard GA and the pGA with the dynamic penalty. For “the best-efficiency” parameter set the standard GA is more effective than the pGA in 67 % of cases. But in the cases when the pGA yields to the GA, their efficiency differs insignificantly. For “the worst-efficiency” and “the average-efficiency” parameter sets the pGA is more effective than the GA in 100 % and 67 % of cases respectively. The computational results of the comparison are given in table 2.


At the next stage we compare the efficiency of the standard GA and the pGA with optimal solution prediction. For “the best-efficiency” parameter set the pGA with optimal solution prediction is more effective than the standard GA in 60 % of cases; for “the worst-efficiency” set the pGA is more effective in 67 % of cases. Moreover, the pGA with optimal solution prediction finds the optimal solution at a much earlier iteration. The comparison of the computational results is given in table 3.

The results of the investigation show that the pGA is preferable to the standard GA, because it is more effective on average and it has a smaller number of parameters. The pGA with optimal solution prediction makes it possible to find the optimal solution at much earlier iterations.

Bibliography

1. Holland, J. H. Adaptation in Natural and Artificial Systems / J. H. Holland. Ann Arbor, MI : University of Michigan Press, 1975.

2. Goldberg, D. E. Genetic Algorithms in Search, Optimization, and Machine Learning / D. E. Goldberg. Reading, MA : Addison-Wesley, 1989.

3. Sopov, E. A. The probabilistic genetic algorithm and its investigation / E. A. Sopov // VII Korolev Readings. Vol. 5. Samara : Publishing House of the Samara Scientific Center of the Russian Academy of Sciences, 2003. P. 38-39. (in Russian).

4. Sopov, E. A. On the probabilistic genetic algorithm / E. A. Sopov // Modern Techniques and Technologies. In 2 vols. Vol. 2. Tomsk : Tomsk Polytechnic University Publishing House, 2004. P. 197-199. (in Russian).

5. Michalewicz, Z. Genetic algorithms, numerical optimization and constraints / Z. Michalewicz // Proc. of the Sixth Intern. Conf. on Genetic Algorithms and their Applications. Pittsburgh, PA, 1995.

6. Whitley, D. Building Better Test Functions / D. Whitley // Proc. of the Sixth Intern. Conf. on Genetic Algorithms and their Applications. Pittsburgh, PA, 1995.

Table 1

The test problems for constrained optimization

The problem statement: z = x^2 + y^2 → max; y ≤ 7 + sin(2·x); y ≥ 1 − sin(2·x); x ∈ [0, 4].
The exact solution: x = 4, y = 7.989358247, z* = 79.82984520.

The problem statement: z = 5·x + 0.5·y → max; y ≤ −2·x + 5; y ≥ x − 1.5; y ≤ 2·x + 1; x ≥ 0; y ≥ 0.
The exact solution: x = 13/6 = 2.16666, y = 2/3 = 0.66666, z* = 67/6 = 11.16666667.

The problem statement: z(x, y) = 2000·x + 2400·y → max; x ≥ 0; y ≥ 0; x/120 + y/110 ≤ 1; 4·x + y ≤ 320; x + y ≤ 110; x/340 + y/120 ≤ 1; x + 2·y ≤ 160; x + 4·y ≤ 280.
The exact solution: x = 50, y = 55.

The problem statement (Ackley function): z = 20 + e − 20·exp(−0.2·sqrt((1/N)·Σ_{i=1}^{N} x_i^2)) − exp((1/N)·Σ_{i=1}^{N} cos(2π·x_i)) → min, N = 4; 2·x_1 − 3·x_2 + 4·x_3 ≤ 10; 4·x_2 − 5·x_3 + x_4 ≤ 1; 10·x_1 + 7.5·x_3 − 8.4·x_4 ≤ 3.5; 21.7·x_2 − 36.4·x_4 ≤ 16.2.
The exact solution: x_i = 0, i = 1, ..., N, z_opt = 0.

The problem statement (Rastrigin-type function): z = Σ_{i=1}^{N} (0.1·x_i^2 − 4·cos(0.8·x_i) + 4) → min, N = 2; x_1^2 + 9·x_2^2 ≤ 36; 9·x_1^2 + x_2^2 ≤ 36.
The exact solution: x_i = 0, i = 1, ..., N, z_opt = 0.

Table 2

Efficiency comparison of the GA and the pGA with the dynamic penalty for constrained optimization problems

The problem The best-efficiency parameters The worst-efficiency parameters The average-efficiency parameters

GA pGA GA pGA GA pGA

% N % N % N % N % N % N

Linear problem 1 76 31.05 64 32.81 12 16.83 14 14.14 41.78 27.67 40.22 26.2

Linear problem 2 100 472.04 100 446.9 0 0 0 0 61.19 357.38 61.56 375.58

Non-linear problem 1 58 32.28 66 32.39 8 25 24 18.25 34.22 28.95 41.33 26.49

Non-linear problem 2 100 21.22 98 15.02 44 9.93 68 9.12 76.07 14.49 83.56 12.24

Non-linear problem 3 20 18.8 68 29.94 0 0 22 13.64 7.41 19.45 40.89 24.68

Non-linear problem 4 94 35.77 92 36.09 52 43.42 48 42.96 72.81 33.07 72.44 31.86

Rastrigin function 100 54.4 100 33.4 5 10 100 76.92 56.85 58.32 100 48.22

Ackley (and) 100 2.76 100 4.51 95 61.21 100 69.71 99.63 23.02 100 32.92

Ackley (or) 100 1.94 100 3.79 100 63.04 100 42.24 100 12.57 100 13.41

The number of wins 7 6 6 3 2 2 7 6 3 4 7 5

The rate of wins 53.85 66.67 77.78 75 70 55.56

The number of double wins 4 2 0 5 1 3

The rate of double wins 66.67 100 75

Table 3

Efficiency comparison of the GA and the pGA with optimal solution prediction for constrained optimization problems

The problem The best-efficiency parameters The worst-efficiency parameters The average-efficiency parameters

GA pGA (prediction) GA pGA (prediction) GA pGA (prediction)

% N % N % N % N % N % N

Linear problem 1 76 31.05 48 34.29 12 16.83 22 24.64 41.78 27.67 39.33 32.8

Linear problem 2 100 472.04 100 438.4 0 0 2 8 61.19 357.38 60 350.82

Non-linear problem 1 58 32.28 46 23.96 8 25 0 0 34.22 28.95 21.78 19.49

Non-linear problem 2 100 21.22 66 18.12 44 9.93 28 14.21 76.07 14.49 45.11 16.91

Non-linear problem 3 20 18.8 34 29.47 0 0 0 0 7.41 19.45 17.33 21.45

Non-linear problem 4 94 35.77 100 38.92 52 43.42 60 41.77 72.81 33.07 84.22 30.84

Rastrigin function 100 54.4 100 32.36 5 10 98 55.22 56.85 58.32 99.78 47.82

Ackley (and) 100 2.76 100 3.06 95 61.21 100 50.57 99.63 23.02 100 29.74

Ackley (or) 100 1.94 100 3.47 100 63.04 100 56.65 100 12.57 100 15.58

The number of wins 7 4 5 6 3 4 6 4 5 5 5 4

The rate of wins 58.33 60 50 66.67 50 50 55.56 50

The number of double wins 2 3 2 4 1 2

The rate of double wins 60 66.67 66.67

© Vorozheikin A. Yu., Gonchar T. N., Panfilov I. A., Sopov E. A., Sopov S. A., 2009
