
UDC 519.234

Vestnik SibGAU Vol. 17, No. 1, P. 67-72

ENSEMBLES OF NEURAL NETWORKS

WITH APPLICATION OF MULTI-OBJECTIVE SELF-CONFIGURABLE GENETIC PROGRAMMING

E. D. Loseva*, L. V. Lipinsky

Reshetnev Siberian State Aerospace University
31, Krasnoyarsky Rabochy Av., Krasnoyarsk, 660037, Russian Federation
E-mail: [email protected]

In this article an integrated approach for the automatic formation of ensembles of neural networks is proposed. The application of multi-criteria self-configurable genetic programming is described. To each newly generated network the most efficient ("best") networks are added, which were estimated by two criteria at the first stage of the algorithm. Thus a population of neural network ensembles is created. The selection condition for newly generated networks is a third criterion: the effectiveness of the decision of the ensemble that includes the network. After the evolutionary procedure finishes, the final ensemble is formed from the networks selected by the third criterion. The article also proposes an approach, Scheme ED1, for forming the ensemble decision from the decisions of the member networks. The proposed method was tested on forecasting problems with different numbers of input and output signals (neurons) in the ANN. The results confirm its high efficiency.

Keywords: forecasting problems, ensembles of artificial neural networks, self-configurable multi-criteria genetic programming.


Introduction. At present, data analysis systems based on Intelligent Information Technologies (IIT) are becoming more popular in many sectors of human activity. Therefore, the development of methods for the automatic design and adaptation of IIT to specific tasks has become more urgent. Such methods could eliminate the expensive manual design of IIT and reduce the time required for the development of intelligent systems. One of the most promising and popular technologies is artificial neural networks (ANN) (fig. 1). The range of tasks solved by artificial neural networks is wide [1; 2].

One of them is the forecasting task. The relevance of forecasting problems in the modern world is quite high: the quality of their solutions greatly influences decisions in various fields (economy, industry, etc.). One approach to improving the efficiency of systems based on neural networks is the use of neural network ensembles (ENN) [3; 4]. Using an ENN is a good way to counteract network overfitting and to improve the ability to generalize. An important question is the design of an ENN that is optimal by several efficiency criteria: minimal architecture, high forecasting precision and optimal network parameters [5-7]. To satisfy these requirements, multi-criteria [8] self-configuring genetic programming was used in this paper. Genetic programming (GP) [9] has been successfully used in solving many real-world tasks. Genetic programming searches for a solution in the space of trees. The self-configuring technique finds the "best" combination of evolutionary operators automatically, which reduces the computational resources and expertise required of end users. The stages of the proposed approach (the SelfCGP algorithm) are described below.

Formation of an ANN by genetic programming. To apply the GP technique, the ANN must be encoded in the form of a tree [10; 11]. The tree is a directed graph consisting of inner nodes and end vertices (leaves). Each leaf holds one element of the terminal set T = {IN1, IN2, IN3, ..., INK - the input neurons; F1, F2, ..., FN - activation functions (neurons)}, and each inner node holds one element of the functional set F = {+, <}. The signs from the set F indicate what must be performed during the formation of the ANN: "+" means placing all the operand neurons in one layer, and "<" means stacking all the operand layers in one ANN [12; 13]. The numbers of input and output neurons of the ANN are determined by the task. An example of encoding an ANN in the form of a tree is shown in fig. 2.
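To make the encoding concrete, below is a minimal illustrative sketch in C# (not the authors' implementation). Leaves hold terminals from T, inner nodes hold "+" or "<", and the tree is interpreted into a layer layout. Treating "+" and "<" as binary operators, and merging the adjacent layers of the two operands of "+", are our assumptions.

```csharp
// Illustrative sketch: a GP tree whose leaves are input neurons or
// activation functions and whose inner nodes are the operators "+" and "<".
using System;
using System.Collections.Generic;

class GpNode
{
    public string Symbol;          // "IN1", "F3", "+", "<"
    public GpNode Left, Right;     // inner nodes are assumed binary here

    public GpNode(string symbol, GpNode left = null, GpNode right = null)
    { Symbol = symbol; Left = left; Right = right; }

    // Interpret the tree: "+" puts its operands into the same layer,
    // "<" places the left operand's layers before the right operand's.
    public List<List<string>> ToLayers()
    {
        if (Left == null) return new List<List<string>> { new List<string> { Symbol } };
        var l = Left.ToLayers(); var r = Right.ToLayers();
        if (Symbol == "+")                              // merge into one layer
        {
            l[l.Count - 1].AddRange(r[0]);
            l.AddRange(r.GetRange(1, r.Count - 1));
            return l;
        }
        l.AddRange(r);                                  // "<": stack layers
        return l;
    }
}

class Demo
{
    static void Main()
    {
        // (IN1 + IN2) < ((F1 + F2) < F3) -> layers [IN1 IN2], [F1 F2], [F3]
        var tree = new GpNode("<",
            new GpNode("+", new GpNode("IN1"), new GpNode("IN2")),
            new GpNode("<",
                new GpNode("+", new GpNode("F1"), new GpNode("F2")),
                new GpNode("F3")));
        foreach (var layer in tree.ToLayers())
            Console.WriteLine(string.Join(" ", layer));
    }
}
```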

The first stage of the SelfCGP algorithm: automated ANN design. The automated design of an ANN using the SelfCGP algorithm works as follows:

Step 1. Generate a population of individuals - trees. Each individual is an ANN.

Step 2. An important step is the optimization of the neural network weighting coefficients. In this research the backpropagation method was used. The criterion for stopping the optimization process is the minimization of the prediction error.
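A minimal sketch of this training loop follows, assuming a hypothetical INetwork interface (BackpropagationStep and PredictionError are placeholder names, not the authors' API): gradient passes repeat until the prediction error stops decreasing or the epoch budget runs out.

```csharp
// Sketch of the Step 2 weight-tuning loop under the assumed interface.
interface INetwork
{
    void BackpropagationStep(double[][] x, double[][] yRef);  // one gradient pass
    double PredictionError(double[][] x, double[][] yRef);    // e.g. formula (2)
}

static class WeightTuning
{
    public static double Train(INetwork net, double[][] x, double[][] yRef,
                               int maxEpochs = 1000, double tol = 1e-6)
    {
        double prev = double.MaxValue;
        for (int epoch = 0; epoch < maxEpochs; epoch++)
        {
            net.BackpropagationStep(x, yRef);
            double err = net.PredictionError(x, yRef);
            if (prev - err < tol) return err;   // error stopped improving: stop
            prev = err;
        }
        return prev;
    }
}
```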

Step 3. Set equal probabilities for all configuration options of each type of operator (except the selection operator). The initial probabilities for all operators, except crossover, are defined as follows [14]:

$$P_i = \frac{1}{z_i}, \quad \forall i = \overline{1, 3}, \qquad (1)$$

where $z_i$ is the number of operator variants of the $i$-th type and $P_i$ is the probability of using an operator of the $i$-th type.
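A one-function sketch of formula (1): all z_i variants of an operator type start with equal probability.

```csharp
// Sketch of formula (1): equal initial probabilities for the z_i variants.
static double[] InitialProbabilities(int zi)
{
    var p = new double[zi];
    for (int j = 0; j < zi; j++)
        p[j] = 1.0 / zi;          // P_i = 1 / z_i
    return p;
}
```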

Step 4. Choose the operators for recombination (one-point, two-point) and mutation (strong, weak). The selection operator is elitist.

Step 5. Select parents for recombination.

Step 6. Estimate each individual in the population according to two criteria:

1. The first criterion is the forecasting error. The minimum error value is obtained by minimizing the deviation between the reference and the calculated (current) output values of the ANN. The evaluation is carried out according to the fitness function calculated by formula (3):

$$E = \frac{\sum_{i=1}^{N} \left| y_i^* - y_i \right|}{N} \to \min, \qquad (2)$$

$$Fit_1 = \frac{1}{1 + E} \to \max, \qquad (3)$$

where N is the number of output values; y* are the reference values; y are the current output values of the ANN or of the ensemble.
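The two formulas translate directly into code; a sketch follows (ForecastError and Fit1 are illustrative names).

```csharp
// Sketch of formulas (2) and (3).
static double ForecastError(double[] yRef, double[] y)          // formula (2)
{
    double sum = 0.0;
    for (int i = 0; i < yRef.Length; i++)
        sum += System.Math.Abs(yRef[i] - y[i]);
    return sum / yRef.Length;                                    // E -> min
}

static double Fit1(double[] yRef, double[] y)                    // formula (3)
{
    return 1.0 / (1.0 + ForecastError(yRef, y));                 // Fit1 -> max
}
```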


Fig. 1. The structure of an ANN

Fig. 2. Encoding an ANN into the tree structure

2. The second criterion is the complexity of the ANN structure. The evaluation is carried out according to the fitness function (4):

$$Fit_2 = n \cdot N_1 + \sum_{i=1}^{L-1} N_i \cdot N_{i+1} + N_L \cdot l, \qquad (4)$$

where n is the number of input neurons feeding the first layer; N_i is the number of neurons in the i-th hidden layer; i is the hidden-layer index; L is the number of hidden layers in the neural network; l is the number of neurons in the last layer.
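Under the reconstruction of formula (4) above (a connection count), the complexity criterion can be sketched as follows; the hidden array holds N_1, ..., N_L.

```csharp
// Sketch of formula (4) as reconstructed above: the connection count
// n*N_1 + sum_i N_i*N_{i+1} + N_L*l as the structural-complexity fitness.
static double Fit2(int n, int[] hidden, int l)   // hidden = {N_1, ..., N_L}
{
    double fit = n * hidden[0];                  // inputs -> first hidden layer
    for (int i = 0; i + 1 < hidden.Length; i++)
        fit += hidden[i] * hidden[i + 1];        // hidden -> hidden
    fit += hidden[hidden.Length - 1] * l;        // last hidden -> output layer
    return fit;
}
```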

The selection of trees with the best values by the two criteria is realized according to the following scheme. The fitness values calculated by the first criterion are sorted from minimum to maximum and examined pairwise, and the difference between the two values of each pair is computed. One individual is then selected from the pair by checking the following condition (an illustrative sketch is given after the list):

- if the difference between the two values exceeds the threshold, Δn > 50 %, the individual with the minimum error value is chosen;

- if Δn ≤ 50 %, one individual of the pair is chosen by the second fitness function, i.e. the one with the minimum complexity value.
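A sketch of this pairwise rule follows; the Individual record and the relative form of the 50 % difference Δn are our assumptions, since the paper does not specify how the percentage is computed.

```csharp
// Sketch of the pairwise two-criteria selection rule described above.
record Individual(double Error, double Complexity);

static Individual SelectFromPair(Individual a, Individual b)
{
    // relative difference of the first-criterion values, in percent (assumed)
    double diff = System.Math.Abs(a.Error - b.Error)
                  / System.Math.Max(a.Error, b.Error) * 100.0;
    if (diff > 50.0)
        return a.Error < b.Error ? a : b;              // clearly more accurate
    return a.Complexity < b.Complexity ? a : b;        // otherwise: simpler one
}
```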

Step 7. Recombination of the selected individuals (parents).

Step 8. Mutation of a descendant.

Step 9. Evaluation of a new descendant.

Step 10. Update the probabilities of the operator variants using the average fitness of the offspring obtained by each variant. First, the probability values are checked against the conditions

$$P_l^{new} = P_l^{old}, \quad \text{if } P_l^{old} < P_t + \frac{1}{z_i \cdot N}, \qquad (5)$$

and

$$P_l^{new} > P_t, \quad \text{if } P_l^{old} > P_t + \frac{1}{z_i \cdot N}, \qquad (6)$$

for $\forall k = \overline{1, N}$, $l = \overline{1, 3}$. The probabilities then change as follows:

$$P_l^{new} = P_l^{old}, \quad P_l^{new} = P_t \quad \text{or} \quad P_l^{new} = P_l^{old} - \frac{1}{z_i \cdot N}, \qquad (7)$$

where $P_t = \frac{1}{10 \cdot z_i}$, $z_i$ is the number of operator variants of the $i$-th type, $k = \overline{1, N}$, and $N$ is the number of generations.

At each iteration the operator variant with the highest average fitness over all the individuals it produced is chosen; this average is calculated by formula (8):

$$\overline{Fit}_i^k = \frac{\sum_{n=1}^{I} \left( Fit\_1_n^k + Fit\_2_n^k \right)}{I}, \qquad (8)$$

where $k = \overline{1, N}$, $N$ is the number of generations; $Fit\_1_n^k$ and $Fit\_2_n^k$ are the fitness values of the $n$-th individual in the $k$-th generation; $I$ is the number of offspring produced by an operator of the $i$-th type in the $k$-th generation.
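A sketch of this adaptation step under the reconstruction of formulas (5)-(8) above: every variant except the winner loses 1/(z_i·N) of probability mass, never dropping below the threshold P_t, and the winner collects the released mass. The exact redistribution rule is our assumption.

```csharp
// Sketch of the probability adaptation (5)-(8) as reconstructed above.
static void UpdateProbabilities(double[] p, double[] avgOffspringFit, int nGenerations)
{
    int z = p.Length;
    double pt = 1.0 / (10.0 * z);                      // minimal threshold P_t
    double step = 1.0 / (z * nGenerations);            // 1 / (z_i * N)
    int winner = 0;
    for (int j = 1; j < z; j++)                        // arg max of formula (8)
        if (avgOffspringFit[j] > avgOffspringFit[winner]) winner = j;
    double released = 0.0;
    for (int j = 0; j < z; j++)
    {
        if (j == winner) continue;
        double cut = System.Math.Min(step, p[j] - pt); // never cross threshold
        if (cut > 0) { p[j] -= cut; released += cut; }
    }
    p[winner] += released;                             // probabilities still sum to 1
}
```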

Step 11. If the algorithm has reached the predetermined accuracy or exhausted its computational resources, go to step 12; otherwise go to step 2.

Step 12. Select the "best" individual (ANN). To obtain K "best" ANNs, the described procedure (steps 1-12) is carried out K times.

The second stage of the SelfCGP algorithm: formation of ensembles of ANNs. In the second stage of the SelfCGP algorithm the selection of effective ANNs into the final ensemble (SelfCGP + ENN) is realized. The third criterion is the error of the ensemble decision. The ensemble decision is created by Scheme ED1 (Scheme for creating an Ensemble Decision 1). The second part of the algorithm works as follows:

Step 1. Generate a new population of individuals. Each individual is an ensemble. An ensemble is formed from one randomly generated individual and the "best" individuals. The number of additional "best" individuals in an ensemble may differ; each of them is found by the first stage of the algorithm (steps 1-12).

Step 2. Optimize the neural network weighting coefficients. In this research the backpropagation method was used. The criterion for stopping the parameter-selection process is again the minimization of the prediction error.

Step 3. Set equal probabilities for all configuration options of each type of operator (except the selection operator). The initial probabilities for all operators, except crossover, are defined by formula (1).

Step 4. Choose the operators for recombination (one-point, two-point) and mutation (strong, weak). The selection operator is elitist.

Step 5. Estimate the new population by the third criterion. The third criterion is the precision of the forecast of the ensemble (individual), in which the solutions of its individual members are taken into account. The fitness function is calculated by formula (3). Scheme ED1 for creating an ensemble decision is described below (an illustrative sketch follows the three steps):

1. Apply the training inputs to each model in the ensemble. Calculate the deviations between the received and reference outputs and search for the minimum value among the outputs. When the minimum is found, record the number of the model in the ensemble for which it was received.

2. Apply the test inputs to each model in the ensemble to determine the output values. Calculate the deviations between the received and reference outputs and search for the minimum value among them. When the minimum is found, record the number of the point (position) where it was received.

3. Create a new database. This database consists of the test values (results) of each model in the ensemble, but it contains only those values selected using the recorded information: the model number determines whose value is taken, and the point number determines where that value is placed. The model number is found in step 1, the point number in step 2.
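Under our reading of the three steps, Scheme ED1 can be sketched as follows; the IModel interface and the pairing of training and test points by index are our assumptions.

```csharp
// Sketch of Scheme ED1: for every point, the member whose training prediction
// deviates least from the reference is identified, and its test prediction
// is placed into the new database (the ensemble decision).
interface IModel { double[] Predict(double[] inputs); }

static double[][] EnsembleDecisionED1(System.Collections.Generic.IList<IModel> models,
    double[][] xTrain, double[][] yTrainRef, double[][] xTest)
{
    var decision = new double[xTest.Length][];
    for (int t = 0; t < xTest.Length; t++)
    {
        int best = 0; double bestDev = double.MaxValue;
        for (int m = 0; m < models.Count; m++)
        {
            double dev = Deviation(models[m].Predict(xTrain[t]), yTrainRef[t]);
            if (dev < bestDev) { bestDev = dev; best = m; }   // step 1: model number
        }
        decision[t] = models[best].Predict(xTest[t]);         // step 3: placement
    }
    return decision;
}

static double Deviation(double[] y, double[] yRef)
{
    double s = 0.0;
    for (int i = 0; i < y.Length; i++) s += System.Math.Abs(y[i] - yRef[i]);
    return s / y.Length;
}
```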

Step 6. Recombination of two individuals (parents).

Step 7. Mutation of a descendant.

Step 8. Evaluation of a new descendant.

Step 9. Update the probabilities of the operators using the average fitness of the offspring obtained by each operator. Check the probability values by formulas (5) and (6). The probabilities then change, and the new probabilities are calculated by formula (7). The operator with the highest fitness value over all individuals at each iteration is chosen by formula (8).


Step 10. If the algorithm has reached the predetermined accuracy or exhausted its computational resources, go to step 11; otherwise go to step 2.

Step 11. Select the most effective K individuals for the final ensemble.

Description of the databases. The proposed multi-criteria genetic programming with the self-configuring procedure for the formation of ANN ensembles (SelfCGP + ENN) was tested on two databases. For the first task, the data set contains indicators of a turbine's state (tab. 1). The first 11 measurements in the data set are process parameters that are expected to be related to vibration signals. The next 12 measurements are vibration signals measured in different parts of the turbine. The database contains 1000 records.

For the second task, the database of electricity consumption of the Siberian Federal District for the years 2012-2014 was used, with the following indicators: month, day and hour of the day, payment for electricity consumption, etc. The features available in the database include the electricity consumption of the region, of the district and of the city. The database contains 8000 records [15].

Results of the study. The initial settings for the optimization procedure by the evolutionary algorithm are as follows: the number of neurons is 8; the maximum number of layers is 8. Each database was divided into two parts, training and test values, in the proportion 80 % / 20 % respectively. Each reported result is the minimum error value over 20 runs for each type of initial settings.

The error of ensemble forecasting is converted into a percentage ratio by formula (9):

$$Error = \frac{E}{\left( y^{\max} - y^{\min} \right)} \cdot 100\ \%, \qquad (9)$$

where $E$ is the value calculated by formula (2), and $\left( y^{\max} - y^{\min} \right)$ is the difference between the maximum and minimum output values of the ensemble decision.
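A one-function sketch of formula (9):

```csharp
// Sketch of formula (9): the error E from formula (2), expressed as a
// percentage of the output range of the ensemble decision.
static double ErrorPercent(double e, double[] ensembleOutputs)
{
    double min = ensembleOutputs[0], max = ensembleOutputs[0];
    foreach (double v in ensembleOutputs)
    {
        if (v < min) min = v;
        if (v > max) max = v;
    }
    return e / (max - min) * 100.0;
}
```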

The final ensemble consists of the three "best" neural network models. The forecasting precision (error) results for both tasks, with different numbers of "best" ANNs and individuals, are presented in tab. 2 and 3 respectively.

Table 1

Data of the turbine state*

pp_{1,1} ... pp_{1,11}          vp_{1,1} ... vp_{1,12}
...                             ...
pp_{1000,1} ... pp_{1000,11}    vp_{1000,1} ... vp_{1000,12}

*pp_{i,j} is the j-th process parameter in the i-th data record (i = 1, ..., 1000; j = 1, ..., 11); vp_{i,j} is the j-th vibration signal in the i-th data record (i = 1, ..., 1000; j = 1, ..., 12).

Table 2

Minimum error values for the first task, calculated with different numbers of additional "best" ANN-individuals at the second stage (step 1) of the SelfCGP + ENN algorithm

Number of additional "best"   Error, % by output neuron of the ANN
ANN-individuals               1     2     3     4     5     6     7     8     9     10    11     12
1                             1.5   5.4   9.2   6.2   10    1.2   1.4   2.2   3.3   3.2   2.9    2.4
2                             2.6   5.7   8.3   6.7   9.7   1.4   1.1   2.5   2.6   2.6   2.39   2.6
3                             1.8   5.4   8.4   5.8   11    0.9   2.2   2.2   2.1   3.7   3.2    3.1
4                             2.4   5.4   8.1   6.5   10    1     1.1   2.8   2.7   3.1   3.83   2.9
5                             2.9   6     8.1   6.6   10    1.4   1.2   2.6   3.7   2.9   3      2.6

Table 3

Minimum error values for the second task, calculated with different numbers of additional "best" ANN-individuals at the second stage (step 1) of the SelfCGP + ENN algorithm and with different numbers of individuals

Amount of individuals                                        80      160     240

Number of additional ANN-individuals: 1
Average complexity of ANN models     Number of layers        5       6       8
in the ensemble                      Number of neurons       7       7       5
Error, %                                                     2.19    2.41    3.58

Number of additional ANN-individuals: 2
Average complexity of ANN models     Number of layers        6       6       7
in the ensemble                      Number of neurons       6       5       6
Error, %                                                     3.46    3.7     2.13

Number of additional ANN-individuals: 3
Average complexity of ANN models     Number of layers        4       5       5
in the ensemble                      Number of neurons       7       8       6
Error, %                                                     2.44    2.1     2.8

Number of additional ANN-individuals: 4
Average complexity of ANN models     Number of layers        6       5       7
in the ensemble                      Number of neurons       6       5       4
Error, %                                                     2.53    1.88    2.68

Number of additional ANN-individuals: 5
Average complexity of ANN models     Number of layers        6       7       7
in the ensemble                      Number of neurons       7       5       5
Error, %                                                     4.04    1.95    3.24

The algorithm was implemented as a Visual Studio C# program and tested on a laptop with 1 terabyte of memory and a four-core Intel Core i5-2410 processor (2.10 GHz).

Conclusion. The results of the statistical studies show that the proposed comprehensive approach to the design of neural network ensembles demonstrates high efficiency on all the test tasks used. It can be concluded that the proposed method is no less effective than the widely used method based on the genetic algorithm. The advantage of the proposed method is the reduced cost of obtaining a solution owing to the automatic choice of evolutionary operators, as well as the discovery of compact neural network structures with high precision: the average modeling error is in the range of 1-6 % for the first task with 12 outputs and 2-5 % for the second task with one output. According to the results, the greatest deviation for the first task is at the fifth output (tab. 2), which means that the result directly depends on the level of variation in the input data sets; for the fifth output the variation is sufficiently high. Testing also showed that increasing the numbers of inputs and outputs decreases the speed of data processing, since more parameters must be optimized and the number of iterations grows. A positive effect, however, is the reduction in the complexity of the neural network models: on average, the number of neurons is reduced by 30 % and the number of layers by 20 % compared with the initial settings, while the precision for the two tasks remains satisfactory, at about 6 % on average. The proposed approach can effectively and automatically generate ensembles of neural networks and therefore may be used to improve the efficiency of solving complex applied tasks.

Acknowledgment. The research was performed with the financial support of the Ministry of Education and Science of the Russian Federation within the federal R&D programme (project RFMEFI57414X0037).


References

1. Anderson D., McNeill G. Artificial neural networks technology. DACS report, 1992, P. 1-34.

2. Angeline P. J. Adaptive and self-adaptive evolutionary computations. In: Palaniswami M., Attikiouzel Y. (Eds.) Computational Intelligence: A Dynamic Systems Perspective. IEEE Press, 1995, P. 152-163.

3. Yu J. J. Q., Lam A. Y. S., Li V. O. K. Evolutionary Artificial Neural Network Based on Chemical Reaction Optimization. IEEE Congress on Evolutionary Computation (CEC'2011), 2011, P. 12-16.

4. Lam A. Y. S., Li V. O. K., Yu J. J. Q. Real-coded chemical reaction optimization. IEEE Transactions on Evolutionary Computation, 2012, Vol. 16, Iss. 3, P. 339-353.

5. Yu J. J. Q., Li V. O. K. A social spider algorithm for global optimization. Applied Soft Computing, 2015, Vol. 30, P. 614-627.

6. Holland J. H. Adaptation in Natural and Artificial Systems. University of Michigan Press, 1975, P. 18-25.

7. Izeboudjen N., Larbes C., Farah A. A new classification approach for neural networks hardware: from standards chips to embedded systems on chip. Artificial Intelligence Review, 2014. Vol. 41, Iss. 4, P. 491-534.

8. Ashish G., Satchidanada D. Evolutionary Algorithm for Multi-Criterion Optimization: A Survey. International Journal of Computing & Information Science, 2004, Vol. 2, No. 1, P. 43-45.

9. Koza J. R. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, 1992, P. 109-120.

10. Huang J.-J., Tzeng G.-H., Ong Ch.-Sh. Two-stage genetic programming (2SGP) for the credit scoring model. Applied Mathematics and Computation, 2006, P. 1039-1053.

11. O'Neill M., Vanneschi L., Gustafson S., Banzhaf W. Open issues in genetic programming. Genetic Programming and Evolvable Machines, 2010, P. 339-363.

12. Semenkin E. S., Lipinsky L. V. [Application of the genetic programming algorithm in problems of design automation of intelligent information technologies]. Vestnik SibGAU. 2006, No. 3 (10), P. 22-26 (In Russ.).

13. Loseva E. D. [Ensembles of neural network models using multi-criteria self-configuring genetic programming]. Materialy XI Mezhdunar. nauch. konf. "Aktual'nye problemy aviatsii i kosmonavtiki" [Proceedings of the XI Intern. Scientific Conf. "Actual Problems of Aviation and Cosmonautics"]. Krasnoyarsk, 2015, P. 340-343 (In Russ.).

14. Land M. W. S. Evolutionary Algorithms with Local Search for Combinatorial Optimization. PhD thesis, 1998, P. 259-315.

15. Asuncion A., Newman D. UCI machine learning repository. University of California, Irvine, School of Information and Computer Sciences, 2007. Available at: http://www.ics.uci.edu/~mlearn/MLRepository.html (accessed 10.11.2015).


© Loseva E. D., Lipinsky L. V., 2016
