
TOMSK STATE UNIVERSITY JOURNAL

2020 Control, Computer Engineering and Informatics No. 50

УДК 517.977

DOI: 10.17223/19988605/50/6

A.I. Rouban, A.S. Mikhalev

THE GLOBAL OPTIMIZATION METHOD WITH SELECTIVE AVERAGING OF THE DISCRETE DECISION VARIABLES

In this paper, a functional of selective averaging of discrete decision variables is proposed. A positive selectivity coefficient is entered into the positive decreasing kernel of the functional; with the growth of the selectivity coefficient the mean gives (in the limit) the optimal values of the discrete decision variables in the global optimization problem. Based on an estimate of the selective averaging functional, a basic global optimization algorithm is synthesized on a set of discrete variables with ordered possible values under inequality constraints. The basis is the computational scheme for optimization over continuous variables and its transformation for optimization over discrete variables. A test example shows the high convergence rate and noise stability of the basic algorithm. Simulations have shown that the estimate of the probability of making a true decision reaches unity.

Keywords: global optimization; discrete variable; selective averaging of decision variables; multiextreme function; constraints of inequality type.

The problem of searching for a global extremum of objective functions on an admissible set of decision variables (continuous, discrete, or continuous-discrete) belongs to a very complex class [1-27]. The specificity of global optimization is caused by the multiextremal character of the objective functions, their possibly discontinuous nature, the very different sensitivity of the objective functions to the decision variables, the discrete nature of some or all of the variables, the presence of noise, and the presence of a considerable number of decision variables and of constraints of both inequality and equality type. When the objective functions cannot be calculated, they are measured at points of the admissible set of decision variables.

The majority of algorithms are oriented only to the case of continuous decision variables and to elementary constraints of inequality type. A comparative analysis of the basic algorithm of the method with selective averaging of continuous decision variables [6, 7, 10, 14, 24] against existing heuristic algorithms showed its advantages in rate of convergence, noise stability, and total computational complexity. In [27] a variant of the basic algorithm for solving the global optimization problem on a set of continuous-discrete variables is proposed. It is rational to extend this effective method of global optimization to the set of discrete variables.

In this paper, for the solution of global optimization problems on a set of discrete variables, an approach is proposed based on the selective averaging of decision variables with adaptive reorganization of the admissible set of trial movements.

The functional of selective averaging of discrete decision variables is constructed, with a selectivity coefficient entered into the kernel of the functional. With an increase in the selectivity coefficient of the kernel, the functional becomes able to distinguish the position of the global minimum. It is shown that when the selectivity coefficient of the kernel tends to infinity, the averaging gives the optimal values of the discrete decision variables.

The computing scheme of the basic algorithm of global optimization with continuous variables is transformed into a similar scheme with discrete variables. The basic algorithm of global optimization on a set of discrete variables with ordered possible values in the presence of inequality constraints is synthesized.

Trial and working steps are separated in time. Before the performance of each working step, a series of calculations of the minimized function at the sampling points is carried out. Based on this information, at a fixed selectivity coefficient of the kernel, the selective averaging of the decision variables is executed numerically. Each discrete variable is put in correspondence with a continuous auxiliary non-dimensional variable which contains the numbers of the possible values of the discrete variable and the identical adjoining subintervals covering these numbers. Due to this one-to-one correspondence, the transition at the sampling points from continuous variables to discrete possible values is carried out. The same (but reverse) transition occurs for the received averaged values of the decision variables.

Also, adaptive reorganization of the sizes of the set of possible trial movements is carried out, and the functions of the inequality constraints are taken into account.

A numerical example shows the high convergence rate and noise stability of the basic algorithm. It also provides an estimate of the probability of obtaining the true solution that is close to one.

1. Statement of the problem

The problem of searching for a global minimum of an objective function f(y) on a set of discrete variables with ordered possible values in the presence of inequality constraints is solved:

f(y) = globmin, φ_j(y) ≤ 0, j = 1, ..., m, (1)

where y = (y_1, ..., y_h) is the vector of h discrete variables. Each discrete variable y_t has r_t possible ordered values y_{t,1}, ..., y_{t,r_t}.

The inequality constraints select (narrow) the admissible set of possible values on which the search for the global minimum is carried out. It is required to define the position y_min of the global minimum of the objective function f(y) on the limited set of change of its variables.

The function f(y) is multiextremal and can be distorted by noise. The constraint functions can be non-convex. The search for the extremum is carried out only on the basis of measurements or calculations of the specified functions f(y), φ_j(y), j = 1, ..., m, at the selected sampling points, which satisfy the inequality constraints: φ_j(y) ≤ 0, j = 1, ..., m.

We assume that the global minimum of the function f(y) on the admissible set of points with discrete values of the variables is unique.

2. The selective averaging of discrete decision variables

The selective averaging of continuous decision variables [6, 7, 10, 14, 24] is a mathematical expectation with a special probability density function of these variables. A probability density with a kernel decreasing as its normalized argument grows from 0 to 1 allows one, with the growth of the selectivity coefficient of the kernel, to approach the specified average value of the decision variables, i.e. the true position of the global minimum. This theoretical result made it possible to construct the structure of a basic numerical algorithm, successfully applied in the presence of inequality constraints. By expanding the possibilities of this algorithm, algorithms of single-objective global optimization under constraints of inequality and equality types were synthesized. Algorithms for the solution of other extremal problems were also obtained: multi-objective and minimax global optimization, and search for the main minima of multiextremal functions [14].

Let us transfer the idea of selective averaging [14] to the solution of the global optimization problem (1) on the set of possible values of the discrete decision variables which satisfy the inequality constraints of (1).

At first we consider the one-dimensional version of the optimization problem (1). The discrete variable y has r possible ordered values (y_1, ..., y_r).

We introduce the notation: f_min, f_max are the smallest and largest values of the minimized function on the admissible set of possible values; y_min is the admissible value of the discrete variable at which the function f(y) reaches its minimum value: f(y_min) = f_min.

The averaged value (mathematical expectation) of the decision variable is equal to

[y]_s = Σ_{μ=1}^{r} p̄_s(g_μ) y_μ , (2)

where p̄_s(g_μ) = p_s(g_μ) / Σ_{k=1}^{r} p_s(g_k) is the normalized (to 1) positive decreasing kernel (an analog of the probability of a random event): Σ_{μ=1}^{r} p̄_s(g_μ) = 1; the kernels p_s(g_μ) and p̄_s(g_μ) lie in the interval [0; 1]; s is the selectivity coefficient of the kernel (1 ≤ s), and in practice it is enough to take it from the integer sequence 1, 2, 3, ...; g_μ = (f(y_μ) − f_min)/(f_max − f_min) are the arguments of the kernels, which also lie in the interval [0; 1]: 0 ≤ g_μ ≤ 1, μ = 1, ..., r.

At the optimal point f(y_min) = f_min and g_min = 0; at the point of maximum value f(y_max) = f_max and g_max = 1. The presence of f_min and f_max in the kernel argument covers the whole range of change of the optimized function and converts it to a non-dimensional variable. Because of this, the subsequent numerical algorithms become more universal: they are independent of the units of measure of the minimized function, while the rate of convergence of the algorithms and the accuracy of tracking the extremum position increase. The absence of information about f_min and f_max is compensated by their estimates, calculated at each working step from measurements (calculations) of the optimized function at the trial points.

Let us dwell on the form of the kernel p_s(g), since the normalized kernel p̄_s(g) repeats its form. It is convenient to represent p_s(g) as a rather simple decreasing kernel raised to the power s: p_s(g) = [p(g)]^s. Possible degree kernels p(g): linear p(g) = 1 − g, parabolic p(g) = 1 − g^2, cubic p(g) = 1 − g^3, etc. Examples of kernels of other types are the exponential kernel p(g) = e^(−g) and the hyperbolic kernel p(g) = g^(−1). Further we consider only degree kernels.
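For illustration (a sketch with our own function names, not code from the paper), the degree kernels are just a decreasing base kernel raised to the power s:

```python
def linear(g):     # p(g) = 1 - g
    return 1.0 - g

def parabolic(g):  # p(g) = 1 - g**2
    return 1.0 - g ** 2

def cubic(g):      # p(g) = 1 - g**3
    return 1.0 - g ** 3

def degree_kernel(p, g, s):
    """p_s(g) = [p(g)]**s: raising the base kernel to the selectivity coefficient s."""
    return p(g) ** s

# With the growth of s the kernel concentrates near g = 0:
print(degree_kernel(linear, 0.5, 1))      # 0.5
print(degree_kernel(linear, 0.5, 10))     # 0.0009765625
print(degree_kernel(parabolic, 0.0, 50))  # 1.0 at the minimum point
```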

With the growth of the selectivity coefficient s, all components of the discrete function p̄_s(g_l), l = 1, ..., r, approach zero, except p_s(g_min = 0) = 1 and p_s(g_max = 1) = 0. The normalized discrete function then tends to the Kronecker delta function [28], equal to 1 at the point g_min = 0 and to 0 at all other points. As a result, on the right-hand side of the averaging procedure (2) the Kronecker delta "pricks out" the optimal value of the decision variable, i.e. [y]_s → y_min as s → ∞.

Fig. 1. The values of the linear kernel p_s(g) and the normalized kernel p̄_s(g) for three possible values of a discrete variable and selectivity coefficient s = 1

Fig. 1 shows the simplest case of the linear kernel p_s(g) and its normalized analog p̄_s(g) when the discrete variable has three possible values: y_min, y_2, y_max. Suppose at the point y_2 the value of g equals 0.5. With the growth of the selectivity coefficient s, the kernel p_s(0.5) = 1/2^s and the normalized kernel p̄_s(0.5) = (1/2^s)/(1 + 1/2^s) = 1/(1 + 2^s) tend to zero. Accordingly, p̄_s(0) = 1/(1 + 1/2^s) tends to 1 and the function p̄_s(g) tends to the Kronecker delta with the special point g_min = 0. When the number of possible values of the discrete variable y increases, all the noted limit regularities are preserved.
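The three-point example can be verified numerically. In the sketch below (our illustration; the kernel arguments (0, 0.5, 1) match Fig. 1), the selectively averaged value [y]_s approaches y_min as s grows:

```python
def selective_mean(ys, fs, s):
    """[y]_s = sum over mu of normalized p_s(g_mu) * y_mu, linear kernel p(g) = 1 - g."""
    f_min, f_max = min(fs), max(fs)
    g = [(f - f_min) / (f_max - f_min) for f in fs]  # kernel arguments in [0, 1]
    p = [(1.0 - gi) ** s for gi in g]                # p_s(g) = (1 - g)**s
    total = sum(p)
    return sum(pi / total * yi for pi, yi in zip(p, ys))

ys = [1.0, 2.0, 3.0]    # possible values; y_min = 1 here
fs = [0.0, 5.0, 10.0]   # so g = (0, 0.5, 1) as in Fig. 1
for s in (1, 10, 100):
    print(selective_mean(ys, fs, s))  # tends to 1.0 as s grows
```

At s = 1 the mean is pulled toward the middle value; at s = 100 the normalized kernel is practically the Kronecker delta and the mean coincides with y_min.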

When the number of discrete variables increases, y = (y_1, ..., y_h) (see (1)), all the noted properties also hold. Each discrete variable y_t has r_t possible ordered values y_{t,1}, ..., y_{t,r_t}. All admissible points have h coordinates and satisfy the inequality constraints. One of these points, y_min = (y_{1,min}, ..., y_{h,min}), corresponds to the global minimum: f(y_min) = f_min and g_min = 0.

For any single-valued function ψ(y) the selective averaging has the form:

[ψ(y)]_s = Σ_{μ=1}^{R} p̄_s(g_μ) ψ(y_μ).

Here y_μ is an admissible point with one of the combinations of possible values of its coordinates, μ is the number of this point, y_μ = (y_{1,μ}, ..., y_{h,μ}), ψ(y_μ) = ψ(y_{1,μ}, ..., y_{h,μ}); R is the number of admissible points (combinations of possible values of the discrete variables which satisfy the inequality constraints). The limit result

[ψ(y)]_s → ψ(y_min) = ψ(y_{1,min}, ..., y_{h,min}) as s → ∞

is the same as for the one-dimensional case.

Taking the h functions ψ_t(y) = y_t, t = 1, ..., h, we obtain the limit values for all coordinates:

[y_t]_s → y_{t,min}, t = 1, ..., h,

where [y_t]_s = Σ_{μ=1}^{R} p̄_s(g_μ) y_{t,μ} and y_{t,μ} is the coordinate with number t of the point with number μ. This is coordinate-wise averaging. In vector form:

[y]_s → y_min = (y_{1,min}, ..., y_{h,min}) as s → ∞,

where [y]_s = Σ_{μ=1}^{R} p̄_s(g_μ) y_μ and y_μ is the admissible point with number μ at one of the combinations of possible values of its variables.

When searching for a maximum of the function f, the arguments of the kernels are calculated differently: g_μ = (f_max − f(y_μ))/(f_max − f_min), and the point y_max corresponds to the global maximum value: f(y_max) = f_max and g_max = 0.

3. Basic algorithm of optimization

The method is based on the separation in time of trial and working steps, on the uniform distribution of sampling points over the admissible set of possible values of the discrete decision variables, and on the numerical selective averaging (calculation of the mathematical expectation) of the decision variables from the calculated (or measured) values of the optimized function at the sampling points. Adaptive reorganization of the sizes of the set of sampling points is also carried out at each working step.

The basic algorithm of global optimization based on selective averaging of continuous decision variables admits implementation on sets of discrete and continuous-discrete variables. In [27] this algorithm was generalized to the case of continuous and discrete variables with ordered possible values. Based on the scheme presented in [27], in this work an algorithm is constructed for the case of discrete variables only.

The solution of the optimization problem with discrete variables is based on the transition from each discrete variable y_t to a corresponding auxiliary continuous variable x_t. From the possible values y_{t,1}, ..., y_{t,r_t} of a discrete variable y_t, a transition to their numbers is carried out. Calculations at each working step are conducted for the numbers x_t of the possible values of the discrete variables y_t. The averaged values of the variables (estimates of the mathematical expectation) and the (also averaged) sizes of the variation sets of the variables are calculated.

To receive n sampling points y^(i), i = 1, ..., n, points are consecutively generated by a uniform distribution on a "rectangular" set, and only those which satisfy the inequality constraints are kept. After the l-th working step the generation of sampling points is carried out in the same way:

x_t^(i) = x̄_t^l + Δx_t^l u_t^(i), u_t^(i) ∈ [−1; 1], t = 1, ..., h, i = 1, ..., n, (3)

where u_t^(i) are elements of a pseudo-random sequence of a uniformly distributed continuous random value. This uniform distribution law gives identical probabilities of occurrence of the possible values (and their numbers) of all discrete variables.

At the received sampling points the value of the minimized function f^(i) = f(y^(i)), i = 1, ..., n, is calculated. Further, the position of the minimum is refined.

The new values x̄_t^{l+1} of the auxiliary variables and the sizes Δx_t^{l+1} of the "rectangular" set of trial movements are calculated by the following formulas:

x̄_t^{l+1} = Σ_{i=1}^{n} x_t^(i) p̄_{s,min}^(i), t = 1, ..., h, (4)

p̄_{s,min}^(i) = p_s(g_min^(i)) / Σ_{j=1}^{n} p_s(g_min^(j)), g_min^(i) = (f^(i) − f_min)/(f_max − f_min),

Δx_t^{l+1} = γ [ Σ_{i=1}^{n} |x_t^(i) − x̄_t^{l+1}|^q p̄_{s,min}^(i) ]^{1/q}, t = 1, ..., h,

l = 0, 1, 2, ...; 0 < γ, q ∈ {1, 2, ...}, 0 < s. Here f_max = max{f^(i), i = 1, ..., n}, f_min = min{f^(i), i = 1, ..., n}; p_s(·) is a positive kernel and s is its selectivity coefficient. The positive kernels are normalized to 1 over the system of n sampling points: Σ_{i=1}^{n} p̄_{s,min}^(i) = 1. The argument of the kernel is a non-dimensional variable which always lies in the interval [0; 1]. The kernels p_s(·) monotonically decrease as the argument increases.

At the l-th step the value x̄_t^{l+1} calculated by formula (4) falls into the unit interval covering some number of the corresponding value of a discrete variable; in this way the possible values of all discrete variables are obtained. The variation interval 2Δx_t^{l+1} of the continuous auxiliary variable, calculated by formula (4) at the previous working step, selects the numbers of possible values of the discrete variable; the intervals of unit length covering these numbers form the new interval of change of the auxiliary variable. For every component x_t the initial values are x̄_t^0 = (r_t + 1)/2, Δx_t^0 = r_t/2.

When approaching the minimum, the region of trial movements is reduced, and thus the position of the extremum is tracked more exactly. The criterion for stopping the search process is the condition of reduction (at some l) of the size of the region of variation of the variables to a given value δ:

max{ Δx_t^l / Δx_t^0, t = 1, ..., h } < δ. (5)

The corrected value x̄_t^l in formula (4) is not used directly, but Δx_t^l gives the variation interval 2Δx_t^l of the auxiliary coordinate x_t (at the subsequent formation of sampling points) and is used in the stop condition (5) of the search process.
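Putting formulas (3)-(5) together, a minimal sketch of the basic algorithm might look as follows. This is our reconstruction under stated assumptions: a parabolic degree kernel, rejection sampling for the inequality constraints, and rounding as the transition between auxiliary and discrete values; all names are ours, not the paper's.

```python
import random

def selective_search(f, constraints, values, s=300, gamma=1.0, q=2,
                     n=500, delta=0.01, max_iter=100, rng=random):
    """Sketch of the basic algorithm (3)-(5) for global minimization over
    discrete variables; values[t] lists the ordered possible values of y_t."""
    h = len(values)
    r = [len(v) for v in values]
    xc = [(rt + 1) / 2 for rt in r]       # initial centers x_t^0
    dx = [rt / 2 for rt in r]             # initial half-widths Δx_t^0
    dx0 = dx[:]

    def to_point(xs):
        # one-to-one transition: auxiliary value -> number -> discrete value
        nums = [min(max(int(xs[t] + 0.5), 1), r[t]) for t in range(h)]
        return nums, [values[t][nums[t] - 1] for t in range(h)]

    for _ in range(max_iter):
        pts, fvals = [], []
        while len(pts) < n:               # trial step, formula (3)
            xs = [xc[t] + dx[t] * rng.uniform(-1.0, 1.0) for t in range(h)]
            nums, y = to_point(xs)
            if all(phi(y) <= 0 for phi in constraints):  # keep admissible only
                pts.append([float(k) for k in nums])
                fvals.append(f(y))
        f_lo, f_hi = min(fvals), max(fvals)
        spread = (f_hi - f_lo) or 1.0     # guard for a flat sample
        p = [(1.0 - ((fv - f_lo) / spread) ** 2) ** s for fv in fvals]
        tot = sum(p)
        pbar = [pi / tot for pi in p]     # normalized parabolic kernel
        # working step: selective averaging, formulas (4)
        xc = [sum(pbar[i] * pts[i][t] for i in range(n)) for t in range(h)]
        dx = [gamma * sum(pbar[i] * abs(pts[i][t] - xc[t]) ** q
                          for i in range(n)) ** (1.0 / q) for t in range(h)]
        if max(dx[t] / dx0[t] for t in range(h)) < delta:  # stop rule (5)
            break
    return tuple(to_point(xc)[1])
```

On small separable test problems this sketch contracts the variation region onto the minimum in a few working steps, which is consistent with the qualitative behaviour reported in the example section.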

4. Numerical example

Let us consider a test function with 16 minima, constructed by means of the operation "min" over 16 single-extremum potential functions:

f(y1, y2) = min{ z_i(y1, y2), i = 1, ..., 16 };

z1(y1, y2) = 2|y1 − 9|^2 + 2|y2 − 9|^2;
z2(y1, y2) = 4|y1 − 9|^1.5 + 4|y2 + 1|^1.8 + 7;
z3(y1, y2) = 4|y1 − 6|^0.8 + 4|y2 − 5|^1.6 + 4;
z4(y1, y2) = 3|y1|^1.1 + 3|y2 − 2|^1.8 + 16;
z5(y1, y2) = 6|y1 + 4| + 6|y2 − 7| + 5;
z6(y1, y2) = 4|y1 + 8|^1.5 + 4|y2 − 13|^1.6 + 10;
z7(y1, y2) = 2|y1 − 3|^1.5 + 2|y2 − 11|^1.5 + 9;
z8(y1, y2) = 4|y1 − 11|^0.8 + 4|y2 − 2|^0.9 + 8.5;
z9(y1, y2) = 4|y1 + 8|^0.8 + 4|y2 + 1|^0.8 + 14;
z10(y1, y2) = 3|y1 − 13|^1.8 + 3|y2 − 12|^1.6 + 13;
z11(y1, y2) = 3|y1 + 13|^1.3 + 3|y2 + 4|^1.3 + 12;
z12(y1, y2) = 5|y1 − 6|^0.8 + 5|y2 + 1|^0.6 + 15;
z13(y1, y2) = 5|y1 + 13|^1.6 + 5|y2 − 9|^1.9 + 8;
z14(y1, y2) = 6|y1 − 9|^0.6 + 6|y2 + 8|^0.6 + 18;
z15(y1, y2) = 5|y1 − 3|^1.1 + 5|y2 + 4|^1.3 + 6;
z16(y1, y2) = 5|y1 − 3|^1.6 + 5|y2 + 13|^1.6 + 10.5.

The admissible region has the form of a square defined by two inequality constraints:

φ1(y1) = |y1| − 15 ≤ 0; φ2(y2) = |y2| − 15 ≤ 0.

The global minimum thus corresponds to the point y* = (9; 9) and has f(y*) = 0. We enter additional constraints, one of which cuts off this minimum:


φ3(y) = y1 + y2 − 12 ≤ 0; φ4(y) = −y1 − y2 − 10 ≤ 0. The conditional global extremum is at the point y* = (6; 5), f(y*) = 4.
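A direct transcription into Python (our illustration; each z_i has the form a·|y1 − b|^p + a·|y2 − c|^q + d) lets the two stated minima be checked numerically:

```python
# (a, b, p, c, q, d) for z_i = a*|y1 - b|**p + a*|y2 - c|**q + d
Z = [
    (2, 9, 2.0, 9, 2.0, 0.0),   (4, 9, 1.5, -1, 1.8, 7.0),
    (4, 6, 0.8, 5, 1.6, 4.0),   (3, 0, 1.1, 2, 1.8, 16.0),
    (6, -4, 1.0, 7, 1.0, 5.0),  (4, -8, 1.5, 13, 1.6, 10.0),
    (2, 3, 1.5, 11, 1.5, 9.0),  (4, 11, 0.8, 2, 0.9, 8.5),
    (4, -8, 0.8, -1, 0.8, 14.0), (3, 13, 1.8, 12, 1.6, 13.0),
    (3, -13, 1.3, -4, 1.3, 12.0), (5, 6, 0.8, -1, 0.6, 15.0),
    (5, -13, 1.6, 9, 1.9, 8.0), (6, 9, 0.6, -8, 0.6, 18.0),
    (5, 3, 1.1, -4, 1.3, 6.0),  (5, 3, 1.6, -13, 1.6, 10.5),
]

def f(y1, y2):
    """Multiextremal test function: min over the 16 potential functions."""
    return min(a * abs(y1 - b) ** p + a * abs(y2 - c) ** q + d
               for a, b, p, c, q, d in Z)

def admissible(y1, y2):
    """All four inequality constraints phi_1..phi_4."""
    return (abs(y1) - 15 <= 0 and abs(y2) - 15 <= 0
            and y1 + y2 - 12 <= 0 and -y1 - y2 - 10 <= 0)

print(f(9, 9))                     # 0.0: unconstrained global minimum
print(admissible(9, 9))            # False: cut off by phi_3
print(f(6, 5), admissible(6, 5))   # 4.0 True: conditional global minimum
```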

Fig. 2. Multiextremal function f(y): on the left, a perspective view of the function; on the right, the positions of the minima, the admissible search region, and the movement trajectories in a neighborhood of the global minimum for three starting points (−14, −14), (−14, −4), (13, 12)

Fig. 2 shows the perspective view of the minimized function, the positions of the minima of the given function, and the constraints which select the admissible region. Both figures are constructed under the assumption that the discrete variables are continuous, with the possible values of the discrete variables then marked. In fact we have 9 cross sections of the presented function at the specified discrete values of the first variable y1 and 11 cross sections for the second variable y2. Each point in Fig. 2 on the right corresponds to the position of a minimum of the given objective function (small points are local minima, the large point is the global minimum); the intersections of the dotted lines are the admissible points y_μ with one of the combinations of possible values of their coordinates.


Fig. 3. On the left, the dependence of the estimate of the probability of hitting the neighborhood of the true decision on the number of sampling points; on the right, the change of the variables over iterations at 100% noise

Let us show the operability of the proposed algorithm. A complex characteristic of the process of searching for the global minimum is the estimate P_correct of the probability of hitting of the obtained decision ŷ in the neighborhood of the true decision y*: |ŷ_t − y*_t| = 0, t = 1, ..., h, i.e. the calculated values of the discrete variables coincide exactly with the true values at the point of the minimum.

Let us consider the dependence of obtaining the true decision on the number n of sampling points. For this purpose, M realizations of the minimum search process were carried out. The estimate of the probability P_correct equals the relative frequency of hitting of the obtained decisions ŷ^j, j = 1, ..., M, in the neighborhood of the true decision.

The parameters of the minimization algorithm: (ȳ^0, x̄^0) = (−13, −4; 1, 3), Δx^0 = (8.5; 8.5), γ = 1 (equal to 2 for the case with 100% noise), q = 2; the kernel over the minimized function is parabolic with selectivity coefficient s = 300 (equal to 1000 for the case with 100% noise). Research parameter: number of realizations M = 101.

Fig. 3 (left) presents the dependence of P_correct on the number of sampling points. Fig. 3 (right) gives a typical realization of the step-by-step reorganization of the discrete variables during the search for the global minimum with n = 500 sampling points (3-5 iterations are required in the case without noise, 7-11 iterations in the case with 100% noise).

Conclusion

The developed method of selective averaging of decision variables, and accordingly the basic numerical algorithm of global optimization, has a rather simple structure. Due to the selective averaging of decision variables, the basic algorithm has high noise stability. The use of non-dimensional (relative) values in the kernel argument of the basic algorithm essentially increases the rate of convergence and reduces the number of adjustable parameters.

The dimension of the optimization problem does not change under the chosen approach. The complexity of the calculations increases only slightly in comparison with the problem with continuous-discrete variables, and all main properties of the algorithm are preserved.

The algorithm is based on executing operations of the same type, which can be executed in parallel on multiprocessor computing systems. This feature is important in the presence of a large number of decision variables and constraints.

REFERENCES

1. Schmit, L.A. & Farshi, B. (1974) Some approximation concepts for structural synthesis. AIAA J. 12(5). pp. 692-699.

2. Gupta, O.K. & Ravindran, A. (1983) Nonlinear integer programming and discrete optimization. Journal of Mechanisms, Transmissions, and Automation in Design. 105(2). pp. 160-164.

3. Olsen, G. & Vanderplaats, G.N. (1989) A method for nonlinear optimization with discrete variables. AIAA J. 27(11). pp. 1584-1589.

4. Rajeev, S. & Krishnamoorthy, C.S. (1992) Discrete optimization of structures using genetic algorithms. Journal of Structural Engineering. 118(5). pp. 1233-1250. DOI: 10.1061/(ASCE)0733-9445(1992)118:5(1233)

5. Cohn, M.Z. & Dinovitzer, A.S. (1994) Application of structural optimization. Journal of Structural Engineering. 120(2). pp. 617-650. DOI: 10.1061/(ASCE)0733-9445(1994)120:2(617)

6. Ruban, A.I. (1994) The nonparametric search global optimization method. Cybernetics and Higher Educational Institution. vol. 28 (Intellectual information technology). Tomsk: Tomsk Polytechnic University. pp. 107-114.

7. Ruban, A.I. (1995) The nonparametric search optimization method. Russian Physics Journal. 38(9). pp. 65-73.

8. Salajegheh, E. (1996) Discrete variable optimization of plate structures using dual methods. Computers & Structures. 58(6). pp. 1131-1138.

9. Gutkowski, W. (1997) Discrete structural optimization: design problems and exact solution methods. Discrete Structural Optimization. Vienna: Springer. pp. 1-53.

10. Ruban, A.I. (1997) Global extremum of continuous functions. Computer Science and Control Systems. 2. pp. 3-11.

11. Beckers, M. (2000) Dual methods for discrete structural optimization problems. International Journal for Numerical Methods in Engineering. 48. pp. 1761-1784. DOI: 10.1002/1097-0207(20000830)48:12<1761::AID-NME963>3.0.CO;2-R

12. Pezeshk, S., Camp, C. & Chen, D. (2000) Design of nonlinear framed structures using genetic optimization. Journal of Structural Engineering. 126(3). pp. 382-388. DOI: 10.1061/(ASCE)0733-9445(2000)126:3(382)

13. Salajegheh, J. (2001) Continuous and discrete optimization of space structures using approximation concepts. Ph.D. Thesis. University of Kerman. Iran. (In Persian).

14. Ruban, A.I. (2004) Global'naya optimizatsiya metodom usredneniya koordinat [Global optimization by a method of averaging of coordinates]. Krasnoyarsk: State Technical University.

15. Camp, C.V., Bichon, B.J. & Stovall, S.P. (2005) Design of steel frames using ant colony optimization. Journal of Structural Engineering. 131(3). pp. 369-379. DOI: 10.1061/(ASCE)0733-9445(2005)131:3(369)

16. Kaveh, A. & Talatahari, S. (2008) A hybrid particle swarm and ant colony optimization for design of truss structures. Asian Journal of Civil Engineering. 9(4). pp. 329-348.

17. Floudas, C.A. & Pardalos, P.M. (2009) Encyclopedia of Optimization. 2nd ed. Boston, MA: Springer.

18. Rao, S.S. (2009) Engineering Optimization: Theory and Practice. 4th ed. John Wiley & Sons.

19. Spillers, W.R. & MacBain, K.M. (2009) Structural Optimization. Boston, MA: Springer.

20. Kaveh, A. & Talatahari, S. (2010) An improved ant colony optimization for the design of planar steel frames. Engineering Structures. 32(3). pp. 864-873. DOI: 10.1016/j.engstruct.2009.12.012

21. Kripakaran, P., Brian, H. & Abhinav, G. (2010) A genetic algorithm for design of moment-resisting steel frames. Structural Multidisciplinary Optimization. 32(3). pp. 559-574. DOI: 10.1007/s00158-011-0654-7

22. Gandomi, A.H., Yang, X.S. & Alavi, A.H. (2011) Mixed variable structural optimization using firefly algorithm. Computers & Structures. 89(23). pp. 2325-2336. DOI: 10.1016/j.compstruc.2011.08.002

23. Luh, G.C. & Lin, C.Y. (2011) Optimal design of truss-structures using particle swarm optimization. Computers & Structures. 89(23-24). pp. 2221-2232. DOI: 10.1016/j.compstruc.2011.08.013

24. Rouban, A.I. (2013) Global optimization method based on the selective averaging coordinates with restrictions. Vestnik Tomskogo gosudarstvennogo universiteta. Upravlenie vychislitelnaja tehnika i informatika - Tomsk State University Journal of Control and Computer Science. 22. pp. 114-123. DOI: 10.17212/1814-1196-2017-3-126-141

25. Baghlani, A. & Makiabadi, M. & Sarcheshmehpour, M. (2014) Discrete optimum design of truss structures by an improved firefly algorithm. Advances in Structural Engineering. 17(10). pp. 1517-1530. DOI: 10.1260/1369-4332.17.10.1517

26. Carbas, S. (2016) Design optimization of steel frames using an enhanced firefly algorithm. Engineering Optimization. pp. 1-19. DOI: 10.1080/0305215X.2016.1145217

27. Rouban, A.I. & Mikhalev, A.S. (2017) Global optimization with selective averaging of mixed variables: continuous and discrete with the ordered possible values. Nauchnyy vestnik Novosibirskogo gosudarstvennogo tekhnicheskogno universiteta - Scientific Bulletin of NSTU. 3(68). pp. 126-141. DOI: 10.17212/1814-1196-2017-3-126-141

28. Korn, G.A. & Korn, T. (1973) Mathematical Handbook. New York, San Francisco, Toronto, London, Sydney: [s.n.].

Received: July 8, 2019

Rouban A.I., Mikhalev A.S. (2020) THE GLOBAL OPTIMIZATION METHOD WITH SELECTIVE AVERAGING OF THE DISCRETE DECISION VARIABLES. Vestnik Tomskogo gosudarstvennogo universiteta. Upravlenie, vychislitelnaja tehnika i informatika [Tomsk State University Journal of Control and Computer Science]. 50. pp. 47-55.

DOI: 10.17223/19988605/50/6


ROUBAN Anatoly Ivanovich (Doctor of Technical Sciences, Professor of the Computer Science Department, Institute of Space and Information Technologies, Siberian Federal University, Krasnoyarsk, Russian Federation). E-mail: ai-rouban@mail.ru

MIKHALEV Anton Sergeevich (Senior Lecturer of the Computer Science Department, Institute of Space and Information Technologies, Siberian Federal University, Krasnoyarsk, Russian Federation). E-mail: asmikhalev@yandex.ru
