
UDC 519.7

DOI: 10.18698/1812-3368-2019-2-17-31

NEW ADAPTIVE MULTI-MEMETIC GLOBAL OPTIMIZATION ALGORITHM

A.P. Karpenko M.K. Sakharov

[email protected] [email protected]

Bauman Moscow State Technical University, Moscow, Russian Federation

Abstract

This paper deals with the Simple MEC (SMEC) algorithm, which belongs to the class of MEC algorithms. The algorithm was selected for investigation for the following reasons: nowadays this algorithm and its modifications are successfully used for solving various optimization problems; the algorithm is highly suitable for parallel computations, especially for loosely coupled systems; and the algorithm is not sufficiently studied, as there are relatively few modifications of SMEC (while, for instance, tens of modifications are known for particle swarm optimization). The authors propose an adaptive multi-memetic modification of the SMEC algorithm, which includes a landscape analysis stage for composing a set of basic adaptation strategies; a software implementation of the algorithm is also presented. A performance investigation was carried out with the use of multi-dimensional benchmark functions of different classes. It was demonstrated that the multi-population concept, along with the incorporated landscape analysis procedure, allows a rough static adaptation of the algorithm to the objective function at the very beginning of the evolution process at the cost of small computational expenses. The use of memes, in turn, helps the algorithm to correct possible errors of static adaptation during the evolution due to a closer investigation of search sub-domains.

Keywords

Multi-memetic algorithm, landscape analysis, mind evolutionary computation, global optimization

Received 25.12.2017 © Author(s), 2019

This work was supported by the Russian Foundation for Basic Research (RFBR project no. 16-07-00287)

Introduction. When solving real-world global optimization problems, one often faces a high-dimensional objective function with a non-trivial landscape that is computationally expensive to evaluate. To cope with such problems, many population-based algorithms have been proposed [1, 2]. One of the main advantages of this class of algorithms, apart from their simplicity of implementation and diversity, is a high probability of localizing so-called sub-optimal solutions, i.e., solutions that are close to the global optimum. In real-world optimization problems, such solutions are often sufficient.

In the meantime, it was demonstrated [2-4] that a single method is often not enough to obtain a high-quality solution; the method must be hybridized with other optimization techniques. One of the promising approaches in this field is so-called memetic algorithms (MA). These are population-based meta-heuristic optimization algorithms based on neo-Darwinian evolution and the concept of a meme proposed by R. Dawkins in 1976 [5]. In the context of MA, a meme can be considered as any local optimization method applied to a current solution during the evolution process. Memetic algorithms thus combine a population-based global optimization technique with local search procedures.

Recent results of the works [4, 6, 7] demonstrate that if a memetic algorithm receives no prior knowledge of the problem at hand, it can produce a solution no better than, and sometimes even worse than, the one obtained using an ordinary population-based algorithm. In addition, there are relatively few theoretical papers that suggest any particular MA configuration for black-box optimization problems. As a result, many scientists tend to utilize adaptive algorithms, which are capable of selecting the most suitable local optimization techniques for a particular search sub-domain during the evolution process.

Nowadays, many scientists actively work on an alternative approach, namely, exploratory landscape analysis (LA) of an objective function [8, 9]. Instead of dynamically adapting an algorithm during the optimization process, it is proposed to extract some information on the objective function's landscape and topology at the cost of additional evaluations (1-10 % of the total computational budget). Landscape analysis methods identify either search sub-domains with rugged or smooth topology or sub-domains where the values of the objective function are almost identical. In the works [10-12] several universal LA methods were proposed, including Cell Mapping and Information Content.

A class of Mind Evolutionary Computation (MEC) algorithms is considered in this work [13-15]. These algorithms belong to a family of methods inspired by human society and simulate some aspects of human behavior. An individual $s$ is considered as an intelligent agent operating in a group $S$ of analogous individuals. During the evolution process, each individual is affected by the other individuals within its group. This simulates the following logic: in order to achieve a high position within its group, an individual has to learn from the most successful individuals in this group, and the groups themselves should follow the same principle to survive the intergroup competition.

In this work, the Simple MEC algorithm is considered. It belongs to a class of MEC algorithms and was selected for investigation due to the following reasons: nowadays this algorithm and its modifications are successfully used for solving various optimization problems [6, 15]; the algorithm is highly suitable for parallel computations, especially for loosely coupled systems [7]; the algorithm is not sufficiently studied — there are relatively few modifications of SMEC (while, for instance, tens of various modifications are known for particle swarm optimization [2]).

We propose an adaptive multi-memetic modification of the SMEC algorithm, which includes a landscape analysis stage for composing a set of basic adaptation strategies; a software implementation of the algorithm is also presented. A performance investigation was carried out with the use of multi-dimensional benchmark functions of different classes.

Problem statement and SMEC algorithm. A global deterministic unconstrained minimization problem is considered in this work

$$\min_{X \in \mathbb{R}^n} \Phi(X) = \Phi(X^*) = \Phi^*, \quad (1)$$

where $\Phi(X)$ is the scalar objective function; $\Phi(X^*) = \Phi^*$ is its required minimal value; $X = (x_1, x_2, \ldots, x_n)$ is the $n$-dimensional vector of variables; $\mathbb{R}^n$ is the $n$-dimensional arithmetic space. The domain $D_0$ is defined as follows:

$$D_0 = \{X \mid x_i^{\min} \le x_i \le x_i^{\max},\ i \in [1:n]\} \quad (2)$$

and is used for generating the initial population of solutions.

A population in the SMEC algorithm consists of leading groups $S^b = (S_1^b, S_2^b, \ldots, S_{|S^b|}^b)$ and lagging groups $S^w = (S_1^w, S_2^w, \ldots, S_{|S^w|}^w)$; the number of individuals within each group is set to be the same and equals $|S|$. The SMEC algorithm is based on the following procedures: initialization, similar-taxis and dissimilation.

The initialization stage creates the groups $S^b$, $S^w$ and places them in the search domain. We illustrate the initialization stage by the example of a group $S_i$.

1. Generate a random vector $X_{i,1}$ whose components are distributed uniformly within the corresponding search sub-domain. Identify this vector with the individual $s_{i,1}$ of the group $S_i$.

2. Determine the initial coordinates of the rest of the individuals in the group using the formula

$$X_{i,j} = X_{i,1} + N_n(\sigma), \quad j \in [2:|S|], \quad (3)$$

where $N_n(\sigma)$ is an $n$-dimensional vector of independent random real numbers distributed normally with mathematical expectation 0 and standard deviation $\sigma$.
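Steps 1 and 2 of the initialization stage can be sketched as follows. This is an illustrative Python fragment, not the authors' implementation; the names `init_group`, `leader` and `followers` are ours:

```python
import numpy as np

def init_group(lo, hi, group_size, sigma, rng):
    """Group initialization sketched from steps 1-2: a uniformly random
    leader X_{i,1} plus normally scattered followers, formula (3)."""
    leader = rng.uniform(lo, hi)                      # step 1: X_{i,1}
    followers = leader + rng.normal(0.0, sigma,
                                    size=(group_size - 1, len(lo)))
    return np.vstack([leader, followers])             # step 2: X_{i,j}

rng = np.random.default_rng(0)
lo, hi = np.full(2, -10.0), np.full(2, 10.0)
group = init_group(lo, hi, group_size=5, sigma=0.1, rng=rng)
print(group.shape)  # (5, 2): the leader plus four followers
```

With a small $\sigma$ the followers stay tightly clustered around the leader, which is what gives each group its local-search character.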

The similar-taxis stage implements a local search inside every group $S^b$, $S^w$ and can be described as follows.

1. Determine the current best individual $s_{i,j_b}$, $j_b \in [1:|S|]$, of the group $S_i$.

2. Determine new coordinates of the rest of the individuals $s_{i,j}$, $j \in [1:|S|]$, $j \ne j_b$, in this group using formula (3).

3. Calculate the objective function's values for all individuals in the group: $\Phi_{i,j} = \Phi(X_{i,j})$, $j \in [1:|S|]$. Here the vector $X_{i,j}$ corresponds to the individual $s_{i,j}$.

4. Determine the new winner of the group $s_{i,k_b}$, $k_b \in [1:|S|]$, as the individual with the lowest value of the objective function $\Phi$.

The dissimilation stage implements a global search between all groups and uses the following steps.

1. Determine the best individuals of all groups $S^b$, $S^w$.

2. Compare their scores and rank them. If the score of some lagging group $S_j^w$ is better than the score of a leading group $S_i^b$, then the former becomes a leading group and the latter becomes a lagging one. If the score of a lagging group $S_j^w$ remains worse than the scores of all leading groups for $\theta$ consecutive iterations, then this group is removed from the population.

3. Replace each removed group with a new one using the initialization procedure.

The similar-taxis and dissimilation stages are repeated iteratively while the best obtained value of the objective function $\Phi(X)$ keeps changing. When the best obtained value stops changing, the winner of the best group is selected as the solution to the optimization problem (1).
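The loop described above can be sketched in Python. This is a minimal reading of the procedure, not the authors' Wolfram Mathematica implementation; all function and parameter names are illustrative, and a fixed iteration count stands in for the stagnation-based stopping rule:

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

def smec_sketch(phi, n, n_groups=10, group_size=10, sigma=0.5,
                theta=20, iters=200, seed=0):
    """Minimal SMEC-style loop: similar-taxis resamples each group
    around its winner (formula (3)); dissimilation re-initializes a
    group that has held the worst score for theta iterations."""
    rng = np.random.default_rng(seed)
    groups = [rng.uniform(-10, 10, size=(group_size, n))
              for _ in range(n_groups)]
    streak = [0] * n_groups          # how long each group has been worst
    for _ in range(iters):
        # similar-taxis: keep the winner, scatter the rest around it
        for g in range(n_groups):
            vals = [phi(x) for x in groups[g]]
            winner = groups[g][int(np.argmin(vals))]
            groups[g] = np.vstack([
                winner,
                winner + rng.normal(0, sigma, size=(group_size - 1, n))])
        # dissimilation: re-initialize the persistently worst group
        scores = [min(phi(x) for x in g) for g in groups]
        w = int(np.argmax(scores))
        streak = [streak[g] + 1 if g == w else 0 for g in range(n_groups)]
        if streak[w] >= theta:
            groups[w] = rng.uniform(-10, 10, size=(group_size, n))
            streak[w] = 0
    return min(min(phi(x) for x in g) for g in groups)

best = smec_sketch(sphere, n=2)
print(best)
```

Because the winner is carried over unchanged, the best value per group never worsens, which mirrors the elitism implicit in the similar-taxis stage.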

Modified Multi-Population SMEC. An extensive performance investigation of the SMEC algorithm was carried out by the authors in [16] to determine the influence of the free parameters' values on the efficiency of the algorithm. The following parameters were considered: the standard deviation $\sigma$ used when generating new individuals in groups; the removal frequency of lagging groups; and the ratio between the numbers of leading and lagging groups in the population.

The results of that work demonstrated a strong dependency of the algorithm's efficiency on the values of those parameters and also revealed that their optimal values differ for various objective functions $\Phi$.

These conclusions allowed formulating a multi-population modification of the SMEC algorithm. Instead of a single population, a set of sub-populations $K = (K_1, K_2, \ldots, K_{|K|})$ is considered.

Evolution inside each sub-population is governed by the individual values of the free parameters described above. Those values can be set based on the distinct features of the objective function's topology if they are known a priori or determined with a use of some heuristics [16].

In the modified algorithm, the required optimal value $\Phi^*$ of the objective function is determined as the minimum of the values obtained by every sub-population independently:

$$\Phi^* = \min_l \Phi_l^*, \quad l \in [1:|K|]. \quad (4)$$

The modified initialization stage of sub-population $K_l$ is executed using an individual value $\sigma_l$. As a result, each sub-population has its own level of search intensification and diversification, which provides a balance for the whole multi-population. The parameter $\sigma_l$ is also used at the similar-taxis stage within every sub-population $K_l$.

The ratio between the numbers of leading and lagging groups in every sub-population is determined by an individual value $\lambda_l$. Experimental results show [16] that a small number of lagging groups adversely affects the diversification properties and can be useful only when the computational process approaches stagnation.

The dissimilation stage in the modified algorithm is executed with the use of an individual value $\theta_l$. Based on the experiments [16], it was determined that the higher the removal frequency (i.e., the lower the value of $\theta_l$), the lower the diversification properties of the algorithm, because lagging groups do not have enough time to explore their search sub-domains.

Landscape analysis procedure. In this work, we present a new method for landscape analysis of the objective function for the case when no initial information is available, without any limitation on the problem's dimension. The method assigns a function to one of six categories; each category corresponds to a static adaptation strategy utilized at the further stages and to a certain type of objective function topology. This classification was proposed based on the experimental studies [16], as the new algorithm is designed particularly for loosely coupled parallel systems. It allows considering different parallelization procedures in terms of load balancing, which will be studied in future works. The proposed LA procedure can be described as follows.

1. Generate $N$ quasi-random $n$-dimensional vectors within domain $D_0$. Here $N$ is the total number of all groups in a multi-population (a free parameter of the algorithm). In this work, the LPτ sequence was used to generate quasi-random numbers, since it provides a high-quality coverage of a domain [17].

2. For every $X_r$, $r \in [1:N]$, calculate the corresponding value of the objective function $\Phi_r$, and sort the vectors in ascending order of the values $\Phi_r$, $r \in [1:N]$.

3. Equally divide the set of vectors $(X_1, X_2, \ldots, X_N)$ into $|K|$ groups in accordance with the given number of sub-populations $|K|$ (another free parameter).

4. For every group $K_l$, $l \in [1:|K|]$, calculate the value of its diameter $d_l$, the maximum Euclidean distance between any two individuals within this group [18].

5. Build a linear approximation of the dependency of the diameter $d$ on the group number $l$ using the least squares method [18].

6. Calculate an estimate of the size of domain $D_0$ using the formula

$$d_D = \sqrt{n\,(x^{\max} - x^{\min})^2}. \quad (5)$$

7. Put the objective function $\Phi$ into one of the six categories (Table 1) based on the calculated parameters.

Table 1

Classification of objective functions based on the LA results

| | $d(l)$ increases | $d(l)$ neither increases nor decreases | $d(l)$ decreases |
|---|---|---|---|
| $d_D/d_1 > 2$ | Nested sub-domains with the dense first domain (category I) | Non-intersected domains of the same size (category III) | Distributed domains with potential minima (category V) |
| $d_D/d_1 < 2$ | Nested sub-domains with the sparse first domain (category II) | Intersected domains of the same size (category IV) | Highly distributed domains with potential minima (category VI) |

There are three possible cases for the approximated dependency $d(l)$: $d$ can be an increasing function of $l$; $d$ can decrease as $l$ grows; or $d(l)$ can be neither decreasing nor increasing. Within the scope of this work, it is assumed that the latter scenario takes place when the slope angle of the approximated line is less than ±5°.

The ratio between $d_D$ and $d_1$ helps to estimate the density of the first group $K_1$ with respect to the original domain $D_0$. In other words, it shows whether the vectors $X$ with the least values of the function $\Phi$ are sparsely or densely distributed. We consider two possible cases:

$$\frac{d_D}{d_1} > 2, \quad \frac{d_D}{d_1} < 2.$$

Here the threshold value 2 was obtained from empirical studies [18].
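The whole LA procedure, steps 1-6 plus the classification, can be sketched as below. Sobol points serve as a stand-in for the LPτ sequence (Sobol sequences are a form of LPτ sequence), and all names and the choice of six groups are illustrative:

```python
import numpy as np
from scipy.stats import qmc
from scipy.spatial.distance import pdist

def classify_landscape(phi, lo, hi, n_points=256, n_groups=6, seed=0):
    """Sketch of the LA procedure: quasi-random points sorted by
    objective value, split into groups; group diameters are fitted
    by least squares and the function falls into category I-VI."""
    sampler = qmc.Sobol(d=len(lo), scramble=True, seed=seed)
    X = qmc.scale(sampler.random(n_points), lo, hi)          # step 1
    X = X[np.argsort([phi(x) for x in X])]                   # step 2
    chunks = np.array_split(X, n_groups)                     # step 3
    d = np.array([pdist(c).max() for c in chunks])           # step 4
    slope = np.polyfit(np.arange(1, n_groups + 1), d, 1)[0]  # step 5
    d_D = np.sqrt(np.sum((hi - lo) ** 2))                    # step 6, (5)
    dense = d_D / d[0] > 2                                   # density rule
    flat = abs(np.degrees(np.arctan(slope))) < 5             # +-5 deg rule
    trend = "flat" if flat else ("up" if slope > 0 else "down")
    return {("up", True): "I",   ("up", False): "II",
            ("flat", True): "III", ("flat", False): "IV",
            ("down", True): "V", ("down", False): "VI"}[(trend, dense)]

sphere = lambda x: float(np.sum(x * x))
lo, hi = np.full(2, -10.0), np.full(2, 10.0)
category = classify_landscape(sphere, lo, hi)
print(category)
```

For the sphere function the lowest-valued points cluster tightly around the origin while later groups spread out, so the diameters grow with the group number and the first group is dense relative to $D_0$, matching category I (nested sub-domains with a dense first domain).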

Each of the six categories represents a certain topology of the objective function $\Phi$ and, subsequently, the rules for determining numerical values of the SMEC algorithm's free parameters specified in the previous sections.

For objective functions that belong to categories I and II, there is a high probability that the required global minimum is located within the domain defined by group $K_1$. In this case, the values of the parameters $\sigma$, $\lambda$ and $\theta$ are selected to provide a high level of intensification for the first groups and to increase the diversification properties of groups with high index numbers.

For objective functions that belong to categories III and IV, the values of the parameters $\sigma$, $\lambda$ and $\theta$ are selected randomly from given intervals.

Finally, objective functions that belong to categories V and VI are characterized by a large search sub-domain that can include the desired minimum. In this case, the values of the parameters are selected to increase the diversification properties of the first groups.

Figure 1 displays a few examples of the proposed landscape analysis procedure for various two-dimensional benchmark functions, including the traditional functions of Rastrigin, Griewank and Styblinski — Tang [19] as well as the composition functions from CEC 2014 [20].

Multi-memetic modification. Memetic algorithms have proved successful for solving optimization problems in various fields. However, just as for any other heuristic algorithm, adjustment of the free parameters is required for their efficient performance. For instance, it is crucial to choose the most suitable meme for the problem at hand; it was demonstrated in Ref. [4] that this choice greatly affects an MA's efficiency.

Fig. 1. Results of the landscape analysis procedure for several benchmark functions: a) Zakharov function (category I); b) Griewank function (category II); c) Styblinski–Tang function (category IV); d) composition function 2 from CEC'14 (category II); e) composition function 4 from CEC'14 (category I); f) composition function 5 from CEC'14 (category IV)

Nowadays, there are many papers which propose different schemes for hybridization of meta-heuristic methods with local search procedures [4, 21, 22]. Often these algorithms utilize complicated heuristic local search procedures designed specifically for certain problems. Despite their high efficiency, their applications are limited, because in many real-world problems scientists do not have any prior information on which meme they should use for a particular problem. Multi-memetic algorithms were proposed to overcome these difficulties [23].

A distinct feature of such algorithms is the use of several memes during the evolution. In this class of algorithms, the decision on which meme to use for one or another individual in a population is usually made dynamically. Such a class of MA provides a competition between different specialized local search methods. As a result, the algorithm preserves high efficiency despite lacking any initial information about the problem under investigation.

When designing a multi-memetic algorithm, one should carefully select a practical strategy for applying one meme or another from a set of available memes $M = (m_j,\ j \in [1:|M|])$. The choice can be made based on the characteristics of the memes and/or of the search sub-domains [24, 25].

In this work, three ($|M| = 3$) local search methods were utilized: the Nelder–Mead method [26], the Solis–Wets method [27], and the Monte Carlo method [1]. Only zero-order methods were used, in order to deal with problems where the objective function's derivatives are not available explicitly and their approximation is computationally expensive. While the Nelder–Mead method is purely local, the Solis–Wets and Monte Carlo methods can solve both local and global optimization tasks depending on their parameters. As a hyper-heuristic for selecting a meme, the following rule was utilized.

1. Within sub-population $K_l$, select the best individual $s_{i,k_b}$ in every group.

2. Launch all available memes from the current position of this individual; the number of iterations for each meme is limited by $P$.

3. Select the most efficient meme for every group based on the obtained values of $\Phi(X)$.

4. Use the best meme $m_b$ to refine all individuals' positions in the group at the similar-taxis stage.

To save computational budget, memes are utilized with an individual frequency $\theta_l/2$ for each group.
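The four-step hyper-heuristic can be sketched as follows. The meme implementations are simplified stand-ins: scipy's Nelder–Mead is used directly, while the Solis–Wets and Monte Carlo variants are minimal sketches of the respective ideas; all names and budgets are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def monte_carlo(phi, x0, budget, rng, radius=1.0):
    """Pure random sampling around the current best point."""
    best_x, best_v = x0, phi(x0)
    for _ in range(budget):
        cand = best_x + rng.uniform(-radius, radius, size=len(x0))
        v = phi(cand)
        if v < best_v:
            best_x, best_v = cand, v
    return best_x, best_v

def solis_wets(phi, x0, budget, rng, rho=0.5):
    """Very simplified Solis-Wets step: Gaussian move, then the
    opposite direction on failure.  A sketch of the idea only."""
    best_x, best_v = x0, phi(x0)
    for _ in range(budget):
        step = rng.normal(0, rho, size=len(x0))
        for cand in (best_x + step, best_x - step):
            v = phi(cand)
            if v < best_v:
                best_x, best_v = cand, v
                break
    return best_x, best_v

def nelder_mead(phi, x0, budget, rng):
    res = minimize(phi, x0, method="Nelder-Mead",
                   options={"maxfev": budget})
    return res.x, res.fun

def best_meme(phi, x0, budget=50, seed=0):
    """Steps 2-3 of the hyper-heuristic: launch every meme from the
    group's best individual, each limited to `budget` evaluations,
    and keep the most successful one."""
    rng = np.random.default_rng(seed)
    memes = {"nelder-mead": nelder_mead, "solis-wets": solis_wets,
             "monte-carlo": monte_carlo}
    runs = {name: m(phi, x0, budget, rng) for name, m in memes.items()}
    return min(runs.items(), key=lambda kv: kv[1][1])

sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
name, (x, v) = best_meme(sphere, np.array([3.0, -2.0]))
print(name, v)
```

The winning meme (step 4) would then be reused for the whole group during the next similar-taxis stage, so the competition cost is amortized over the group.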

The algorithm proposed in this paper, which combines the multi-population concept with the multi-memetic approach, was named Multi-Memetic Modified MEC (M3MEC). The multi-population concept, along with the incorporated landscape analysis procedure, allows a rough static adaptation of the algorithm to the objective function at the very beginning of the evolution process at the cost of small computational expenses. The use of memes, in turn, helps the algorithm to correct possible errors of static adaptation during the evolution due to a closer investigation of search sub-domains.

Performance investigation. The M3MEC algorithm was implemented by the authors in Wolfram Mathematica. The software implementation has a modular structure, which helps to modify the algorithms easily and extend them with additional assisting methods.

A study was carried out in this work to compare the efficiency of the proposed algorithm with the incorporated landscape analysis procedure against SMEC with the optimal values of the free parameters from [16]. All numerical experiments were carried out using the multi-start method with 50 launches. The best obtained value of the objective function $\Phi^*$ and its average value $\bar{\Phi}$ over all launches were utilized as the performance indices for comparing the two algorithms and their software implementations, along with the average number of iterations.

Benchmark functions. Multi-dimensional benchmark optimization functions are considered in this paper [19]. The original domain for generating the initial population is

$$D_0 = \{X \mid -10 \le x_i \le 10,\ i \in [1:n]\}. \quad (6)$$
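For reference, two of the benchmark functions appearing in Table 2 can be written out explicitly; both attain their global minimum $\Phi^* = 0$ at the origin, which lies inside the domain (6):

```python
import numpy as np

# Two of the benchmark functions from Table 2.  The sphere function is
# unimodal; the Rastrigin function is highly multimodal, which is what
# makes it hard for purely local search.

def sphere(x):
    return float(np.sum(np.asarray(x) ** 2))

def rastrigin(x):
    x = np.asarray(x)
    return float(10 * len(x) + np.sum(x * x - 10 * np.cos(2 * np.pi * x)))

print(sphere(np.zeros(8)), rastrigin(np.zeros(8)))  # both 0.0 at the optimum
```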


During the experiments, the following values of the free parameters were used for the SMEC algorithm: standard deviation $\sigma = 0.1$; total number of groups $\gamma = 100$; number of individuals in each group $|S| = 50$; ratio between the numbers of leading and lagging groups $\lambda = 1$; removal frequency for lagging groups $\theta = 20$. In order to provide approximately the same level of computational expenses per iteration, the following settings were used for M3MEC: number of sub-populations $|K| = 5$; number of groups in every sub-population $\gamma_l = 16$; number of individuals in each group within a sub-population $|S_l| = 30$.

The number of stagnation iterations $\lambda_{stop} = 50$ was used as the termination criterion for the algorithms. The tolerance used for identifying stagnation was $\varepsilon = 10^{-5}$.
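This stagnation-based termination criterion can be sketched as a small helper (an illustrative implementation; the class and method names are ours):

```python
class StagnationStop:
    """Stop after lambda_stop consecutive iterations in which the best
    objective value improves by no more than eps."""
    def __init__(self, lambda_stop=50, eps=1e-5):
        self.lambda_stop, self.eps = lambda_stop, eps
        self.best, self.streak = float("inf"), 0

    def update(self, value):
        """Feed the current best objective value; True means stop."""
        if self.best - value > self.eps:      # real improvement
            self.best, self.streak = value, 0
        else:                                 # stagnating iteration
            self.streak += 1
        return self.streak >= self.lambda_stop

# Usage with a short stagnation window for demonstration:
stop = StagnationStop(lambda_stop=3, eps=1e-5)
history = [1.0, 0.5, 0.5, 0.5, 0.5]
flags = [stop.update(v) for v in history]
print(flags)  # [False, False, False, False, True]
```

Such a rule is what the remark below about "more advanced termination criteria" refers to improving, e.g., by making the window adaptive per sub-population.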

Experimental results. The obtained results (Table 2) demonstrate the superiority of the proposed algorithm over the Simple Mind Evolutionary Computation algorithm.

Table 2

Numerical experiment results

| Function | SMEC, n = 8 | SMEC, n = 16 | M3MEC, n = 8 | M3MEC, n = 16 |
|---|---|---|---|---|
| Ackley | Φ̄ = 3.9E+0; Φ* = 3.3E-1 | Φ̄ = 4.3E+0; Φ* = 8.7E-1 | Φ̄ = 1.01E-3; Φ* = 2.34E-6 | Φ̄ = 8.31E-2; Φ* = 2.44E-5 |
| Dixon | Φ̄ = 6.7E-1; Φ* = 3.0E-1 | Φ̄ = 2.3E+0; Φ* = 1.9E+0 | Φ̄ = 0.47E+0; Φ* = 2.09E-6 | Φ̄ = 1.12E-1; Φ* = 3.16E-4 |
| Griewank | Φ̄ = 3.7E-2; Φ* = 2.9E-2 | Φ̄ = 4.2E-2; Φ* = 2.1E-2 | Φ̄ = 6.95E-2; Φ* = 1.97E-2 | Φ̄ = 6.47E-2; Φ* = 1.85E-5 |
| Levy | Φ̄ = 2.6E+0; Φ* = 0.9E+0 | Φ̄ = 1.1E+1; Φ* = 3.9E+0 | Φ̄ = 6.28E-5; Φ* = 1.47E-7 | Φ̄ = 4.19E-1; Φ* = 2.92E-6 |
| Powell | Φ̄ = 1.2E-1; Φ* = 1.1E-1 | Φ̄ = 2.8E+0; Φ* = 1.5E+0 | Φ̄ = 9.76E-4; Φ* = 3.92E-5 | Φ̄ = 1.03E-1; Φ* = 4.49E-4 |
| Rastrigin | Φ̄ = 6.3E+1; Φ* = 4.0E+1 | Φ̄ = 2.4E+2; Φ* = 1.4E+2 | Φ̄ = 1.29E+0; Φ* = 1.33E-2 | Φ̄ = 5.66E+1; Φ* = 9.97E+0 |
| Rosenbrock | Φ̄ = 6.5E+0; Φ* = 2.4E+0 | Φ̄ = 3.2E+1; Φ* = 2.5E+1 | Φ̄ = 5.17E-1; Φ* = 1.04E-3 | Φ̄ = 2.21E+1; Φ* = 2.64E-1 |
| Sphere | Φ̄ = 2.4E-2; Φ* = 2.1E-2 | Φ̄ = 1.9E-1; Φ* = 1.8E-1 | Φ̄ = 1.71E-5; Φ* = 2.94E-7 | Φ̄ = 6.97E-3; Φ* = 2.18E-6 |
| Sum of Squares | Φ̄ = 8.0E-2; Φ* = 7.3E-2 | Φ̄ = 1.6E+0; Φ* = 9.4E-1 | Φ̄ = 3.99E-5; Φ* = 2.34E-7 | Φ̄ = 1.08E-2; Φ* = 1.51E-6 |
| Zakharov | Φ̄ = 4.6E-2; Φ* = 3.9E-2 | Φ̄ = 3.9E-1; Φ* = 3.1E-1 | Φ̄ = 4.68E-4; Φ* = 9.03E-7 | Φ̄ = 6.91E-1; Φ* = 2.72E-4 |

For the majority of the benchmark functions, the results obtained with M3MEC are better than those obtained with SMEC by several orders of magnitude, both for the average values Φ̄ and the best found values Φ*. While the high accuracy of Φ* is due to the memes [25], the decrease in the average values Φ̄ is conditioned by the LA procedure.

On the other hand, the M3MEC algorithm requires more iterations than the SMEC algorithm (Fig. 2). However, in terms of objective function evaluations this advantage of SMEC is not obvious. It was discovered that sub-populations made of individuals with high values of Φ(X) often require more iterations to reach stagnation. This sometimes makes a significant number of the last iterations unnecessary, which can be overcome with the use of more advanced termination criteria.

Conclusion. This paper presents a new two-stage adaptive multi-memetic algorithm with an incorporated landscape analysis procedure. The algorithm is capable of adapting to various objective functions using both static and dynamic adaptation. Static adaptation was implemented with the use of landscape analysis, while dynamic adaptation was made possible by utilizing several memes.

A comparative study of the proposed method and the traditional SMEC algorithm was carried out. The obtained results demonstrate the superiority of the proposed technique over the Simple Mind Evolutionary Computation algorithm with tuned values of the free parameters. M3MEC provides a high-quality solution for multi-dimensional optimization problems but requires more

Fig. 2. Iteration number for different benchmark functions: a) dimension n = 8; b) dimension n = 16

evaluations of the objective function. This drawback, however, can be overcome with the use of more advanced termination criteria.

Further research will be devoted to the investigation of different strategies for selecting memes and parallelization schemes for M3MEC.

REFERENCES

[1] Weise T. Global optimization algorithms — theory and application. University of Kassel, 2008.

[2] Karpenko A.P. Sovremennye algoritmy poiskovoy optimizatsii. Algoritmy, vdokh-novlennye prirodoy [Modern algorithms of search optimization. Nature-inspired algorithms]. Moscow, BMSTU Publ., 2014.

[3] Krasnogor N. Studies on the theory and design space of memetic algorithms. PhD Thesis. University of the West of England, 2002.

[4] Neri F., Cotta C., Moscato P. Handbook of memetic algorithms. Springer, 2011.

[5] Dawkins R. The Selfish Gene. Oxford Univ. Press, 1976.

[6] Sakharov M.K., Karpenko A.P. Multi-memetic mind evolutionary computation algorithm for loosely coupled systems of desktop computers. Nauka i obrazovanie: nauchnoe izdanie MGTU im. N.E. Baumana [Science and Education: Scientific Publication], 2015, no. 10 (in Russ.). DOI: 10.7463/1015.0814435

[7] Sakharov M., Karpenko A. New parallel multi-memetic MEC-based algorithm for loosely coupled systems. Proc. VII Int. Conf. Optimization Methods and Application "Optimization and applications" OPTIMA-2016, 2016, pp. 124-126.

[8] Mersmann O., Bischl B., Trautmann H., et al. Exploratory landscape analysis. GECCO'11, ACM, 2011, pp. 829-836. DOI: 10.1145/2001576.2001690

[9] Vassilev V.K., Fogarty T.C., Miller J.F. Smoothness, ruggedness and neutrality of fitness landscapes: from theory to application. In: Ghosh A., Tsutsui S. (eds). Advances in Evolutionary Computing. Natural Computing Series. Berlin, Heidelberg, Springer, 2003, pp. 3-44. DOI: https://doi.org/10.1007/978-3-642-18965-4_1

[10] Bischl B., Mersmann O., Trautmann H., et al. Algorithm selection based on exploratory landscape analysis and cost-sensitive learning. GECCO'14, ACM, 2012, pp. 313-320.

[11] Kerschke P., Preuss M., Hernández C., et al. Cell mapping techniques for exploratory landscape analysis. In: Tantar A.-A., et al. (eds). EVOLVE — A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation V. Advances in Intelligent Systems and Computing, vol. 288. Cham, Springer, 2014, pp. 115-131.

[12] Muñoz M.A., Kirley M., Halgamuge S.K. Exploratory landscape analysis of continuous space optimization problems using information content. IEEE Trans. Evol. Comput., 2015, vol. 19, iss. 1, pp. 74-87. DOI: 10.1109/TEVC.2014.2302006

[13] Sun Ch., Sun Y., Wang W. A survey of MEC: 1998-2001. IEEE SMC 2002 Int. Conf., 2002, p. 9. DOI: 10.1109/ICSMC.2002.1175629

[14] Jie J., Zeng J., Ren Y. Improved mind evolutionary computation for optimizations. Proc. 5th World Congress Intell. Contr. Automat., 2004, pp. 2200-2204.

DOI: 10.1109/WCICA.2004.1341978

[15] Jie J., Zeng J., Han C. An extended mind evolutionary computation model for optimizations. Appl. Math. Comput., 2007, vol. 185, iss. 2, pp. 1038-1049.

DOI: 10.1016/j.amc.2006.07.037

[16] Sakharov M., Karpenko A. Performance investigation of mind evolutionary computation algorithm and some of its modifications. In: Abraham A., Kovalev S., Tarassov V., Snášel V. (eds). Proceedings of the First International Scientific Conference "Intelligent Information Technologies for Industry" (IITI'16). Advances in Intelligent Systems and Computing, vol. 450. Cham, Springer, 2016, pp. 475-486. DOI: https://doi.org/10.1007/978-3-319-33609-1_43

[17] Sobol' I.M. On the distribution of points in a cube and the approximate evaluation of integrals. USSR Comput. Maths. Phys., 1967, vol. 7, iss. 4, pp. 86-112.

DOI: 10.1016/0041-5553(67)90144-9

[18] Sakharov M., Karpenko A. A new way of decomposing search domain in a global optimization problem. In: Abraham A., Kovalev S., Tarassov V., Snasel V., Vasileva M., Sukhanov A. (eds). Proceedings of the Second International Scientific Conference "Intelligent Information Technologies for Industry" (IITI'17). Advances in Intelligent Systems and Computing, vol. 679. Cham, Springer, 2017, pp. 398-407.

DOI: https://doi.org/10.1007/978-3-319-68321-8_41

[19] Floudas A.A., Pardalos P.M., Adjiman C., et al. Handbook of test problems in local and global optimization. Boston, MA, Springer, 1999.

[20] Liang J.J., Qu B.Y., Suganthan P.N. Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Technical Report. Zhengzhou Univ., Nanyang Technological Univ., 2013.

[21] Nguyen Q.H., Ong Y.S., Krasnogor N. A study on the design issues of memetic algorithm. IEEE Congress on Evolutionary Computation, 2007, pp. 2390-2397.

DOI: 10.1109/CEC.2007.4424770

[22] Hart W., Krasnogor N., Smith J.E. Memetic evolutionary algorithms. In: Hart W.E., Smith J.E., Krasnogor N. (eds). Recent Advances in Memetic Algorithms. Studies in Fuzziness and Soft Computing, vol. 166. Berlin, Heidelberg, Springer, 2005, pp. 3-27.

DOI: https://doi.org/10.1007/3-540-32363-5_1

[23] Krasnogor N., Blackburne B., Burke E.K., et al. Multimeme algorithms for protein structure prediction. In: Guervos J.J.M., Adamidis P., Beyer H.G., Schwefel H.P., Fernan-dez-Villacanas J.L. (eds). Parallel Problem Solving from Nature — PPSN VII. PPSN 2002. Lecture Notes in Computer Science, vol. 2439. Berlin, Heidelberg, Springer, pp. 769-778. DOI: https://doi.org/10.1007/3-540-45712-7_74

[24] Ong Y.S., Lim M.H., Zhu N., et al. Classification of adaptive memetic algorithms: a comparative study. IEEE Trans. Syst., Man, Cybern. {B}, 2006, pp. 141-152.

DOI: 10.1109/TSMCB.2005.856143

[25] Karpenko A.P., Sakharov M.K. Multi-memes global optimization based on the algorithm of mind evolutionary computation. Informatsionnye tekhnologii [Information Technologies], 2014, no. 7, pp. 23-30 (in Russ.).

[26] Nelder J.A., Mead R. A simplex method for function minimization. Comput. J., 1965, vol. 7, iss. 4, pp. 308-313. DOI: 10.1093/comjnl/7.4.308

[27] Solis F.J., Wets R. J.-B. Minimization by random search techniques. Math. Oper. Res., 1981, vol. 6, no. 1, pp. 19-30. DOI: 10.1287/moor.6.1.19

Karpenko A.P. — Dr. Sc. (Phys.-Math.), Professor, Head of Department of Computer-Aided Design, Bauman Moscow State Technical University (2-ya Baumanskaya ul. 5, str. 1, Moscow, 105005 Russian Federation).

Sakharov M.K. — Assist. Professor, Department of Computer-Aided Design, Bauman Moscow State Technical University (2-ya Baumanskaya ul. 5, str. 1, Moscow, 105005 Russian Federation).

Please cite this article as:

Karpenko A.P., Sakharov M.K. New adaptive multi-memetic global optimization algorithm. Herald of the Bauman Moscow State Technical University, Series Natural Sciences, 2019, no. 2, pp. 17-31. DOI: 10.18698/1812-3368-2019-2-17-31
