
UDC 519.87

Vestnik SibGAU Vol. 16, No. 1, P. 22-27

SVM-BASED CLASSIFIER ENSEMBLES DESIGN WITH CO-OPERATIVE BIOLOGY INSPIRED ALGORITHM

Sh. A. Akhmedova*, E. S. Semenkin

Siberian State Aerospace University named after academician M. F. Reshetnev
31, Krasnoyarsky Rabochy Av., Krasnoyarsk, 660014, Russian Federation
E-mail: [email protected]

The meta-heuristic called Co-Operation of Biology Related Algorithms (COBRA) is used for the automated design of support vector machine (SVM) based classifier ensembles. Two non-standard schemes, based on the use of the locally most effective ensemble member's output, are used to infer the ensemble decision. The usefulness of the approach is demonstrated on four benchmark classification problems: two bank scoring problems (Australian and German) and two medical diagnostic problems (Breast Cancer Wisconsin and Pima Indians Diabetes). Numerical experiments showed that classifier ensembles designed by COBRA exhibit high performance and reliability in separating instances from different categories. Ensembles of SVM-based classifiers implemented in this way outperform many alternative methods on the mentioned benchmark classification problems.

Keywords: support vector machines, ensembles, biology inspired algorithms, classification, optimization.


Introduction. Classification problems are problems of identifying to which of a set of categories a new instance belongs [1]; they have many different applications, such as computer vision, speech recognition, document classification, credit scoring and biological classification. Currently, various algorithms for solving these problems are being developed; among the most sought-after tools are artificial neural networks [2], fuzzy logic [3], evolutionary algorithms [4] and other technologies. In this study the method called Support Vector Machines (SVM) [5] is considered.

SVM-based classifier design is equivalent to solving a constrained real-parameter optimization problem. Therefore a new collective nature-inspired meta-heuristic, Co-Operation of Biology Related Algorithms (COBRA) [6], namely its modification for constrained problems, COBRA-c [7], was used.

The efficient and successful operation of SVM-based classifiers has been established in various papers, for example [8]. However, constantly increasing computing power and technology make it possible to use more complex intelligent architectures that take advantage of more than one intelligent technique in a collaborative way. One such form of hybridization is the ensemble approach.

Simple averaging, weighted averaging, majority voting and ranking are the methods usually applied to calculate the ensemble output. However, in [9] a new scheme, based on the use of the locally most effective ensemble member's output, was proposed. This scheme was originally developed for approximation problems; in this study it was modified for solving classification problems in two ways.

The rest of the paper is organized as follows. Section 2 briefly describes the COBRA method and its modification COBRA-c. The SVM-based classifiers generated by the meta-heuristic COBRA-c and the schemes used to infer the ensemble decision are presented in Section 3. In Section 4 the developed approach is applied to four classification problems: bank scoring and medical diagnostic problems. In the Conclusion the results and directions for further research are discussed.

Co-Operation of Biology Related Algorithms. A new collective meta-heuristic called Co-Operation of Biology Related Algorithms (COBRA) [6] was developed on the basis of five well-known and similar nature-inspired algorithms: Particle Swarm Optimization (PSO) [10], Wolf Pack Search (WPS) [11], the Firefly Algorithm (FFA) [12], the Cuckoo Search Algorithm (CSA) [13] and the Bat Algorithm (BA) [14]. Each of these algorithms was originally developed for solving real-parameter unconstrained optimization problems and imitates a natural process or the behavior of an animal group. For example, the Bat Algorithm is based on the echolocation behavior of bats; the Cuckoo Search Algorithm was inspired by the obligate brood parasitism of some cuckoo species, which lay their eggs in the nests of host birds of other species; the Firefly Algorithm was inspired by the flashing behavior of fireflies.

A precondition for the new algorithm was the fact that one cannot say in advance which approach is the most appropriate for a given function and a given dimension (number of variables). An investigation into the effectiveness of these optimization methods established that the best results were obtained by different methods for different problems and different dimensions; in some cases the best algorithm differs even for the same test problem if the dimension varies. Each strategy has its advantages and disadvantages.

The new meta-heuristic approach combines the major advantages of the algorithms listed above. Its basic idea consists of generating five populations (one population for each algorithm) which are then executed in parallel cooperating with each other (the so-called island model).

The algorithm proposed in [6] is a self-tuning meta-heuristic, so there is no necessity to choose the population size for each algorithm. The number of individuals in the population of each component algorithm can increase or decrease depending on whether the fitness value improves at the current stage or not. If the fitness value does not improve during a given number of generations, then the size of all populations increases; vice versa, if the fitness value constantly improves, then the size of all populations decreases. Also, each population can "grow" by accepting individuals removed from other populations. A population "grows" only if its average fitness is better than the average fitness of all other populations. Thereby the "winner component algorithm" can be determined at each iteration/generation. This kind of competition allows the biggest resource (population size) to be presented to the algorithm that is the most appropriate in the current generation.

Likewise, all populations communicate with each other by exchanging individuals: a part of the worst individuals of each population is replaced by the best individuals of the other populations. This brings up-to-date information on the best achievements to all component algorithms and prevents their premature convergence to their own local optima, which improves the group performance of all component algorithms.
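The resizing and migration logic can be illustrated with a short sketch. This is a minimal Python illustration, assuming each component algorithm exposes its population as a list of (solution, fitness) pairs with larger fitness being better; the step sizes, the stagnation limit and the way new individuals are generated are illustrative assumptions, not parameters from the original papers.

```python
import random

GROWTH_STEP = 5        # individuals added or removed at once (assumed value)
STAGNATION_LIMIT = 10  # generations without improvement before growing (assumed)

def adapt_population_sizes(populations, improved, stagnation_counter):
    """Shrink all populations while fitness improves, grow them on stagnation."""
    if improved:
        for pop in populations:
            # Shrink: keep the best individuals, drop the tail.
            pop.sort(key=lambda ind: ind[1], reverse=True)
            del pop[max(1, len(pop) - GROWTH_STEP):]
        return 0
    stagnation_counter += 1
    if stagnation_counter >= STAGNATION_LIMIT:
        for pop in populations:
            # Grow: new individuals are cloned at random here, as a placeholder
            # for whatever generation mechanism the component algorithm uses.
            pop.extend(random.choices(pop, k=GROWTH_STEP))
        return 0
    return stagnation_counter

def migrate(populations, n_migrants=3):
    """Replace each population's worst individuals with the best of the others."""
    for i, pop in enumerate(populations):
        others = [ind for j, p in enumerate(populations) if j != i for ind in p]
        best = sorted(others, key=lambda ind: ind[1], reverse=True)[:n_migrants]
        pop.sort(key=lambda ind: ind[1])  # worst individuals first
        pop[:len(best)] = best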

The performance of the COBRA algorithm was evaluated on the set of 28 benchmark problems from the CEC'2013 competition [15]. Experiments showed that COBRA works successfully and is reliable on this benchmark and demonstrates competitive behavior. Results also showed that COBRA outperforms its component algorithms when the dimension grows and more complicated problems are solved [6].

As has already been mentioned, the COBRA approach was developed for solving unconstrained optimization problems, but in the real world there are usually various constraints which should not be violated. Therefore COBRA-c, a modification of COBRA that can be used for solving constrained real-parameter optimization problems, was proposed in [7].

The COBRA method was modified by applying three constraint handling techniques: dynamic penalties [4], Deb's rule [16] and the technique described in [17]. Specifically, the method proposed in [17] was applied to the PSO component of COBRA, while the other components were modified by implementing Deb's rule followed by calculating function values using dynamic penalties; a sketch of Deb's rule is given below. The performance of the algorithm proposed in [7] was evaluated on the set of 18 scalable benchmark functions provided for the CEC 2010 competition and special session on single-objective constrained real-parameter optimization [18]. The meta-heuristic COBRA-c was compared with the algorithms that participated in the CEC 2010 competition. It was established that the COBRA-c approach is superior to 3-4 of the 14 winner methods from this competition. Besides, COBRA-c also outperforms all its component algorithms.
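For clarity, the following is a minimal sketch (not the authors' code) of Deb's rule from [16] for a minimization problem with inequality constraints of the form g(x) <= 0; the helper names are illustrative.

```python
def total_violation(x, constraints):
    """Total violation of constraints given as callables g with g(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def deb_better(x1, x2, objective, constraints):
    """True if x1 is preferred over x2 under Deb's feasibility rules."""
    v1, v2 = total_violation(x1, constraints), total_violation(x2, constraints)
    if v1 == 0 and v2 == 0:      # both feasible: the better objective wins
        return objective(x1) < objective(x2)
    if (v1 == 0) != (v2 == 0):   # exactly one feasible: the feasible one wins
        return v1 == 0
    return v1 < v2               # both infeasible: the smaller violation wins
```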

SVM-based classifiers. Support vector machines are linear classification mechanisms, which represent examples from a training set as points in space mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible [5]. New examples (from a test set) are then mapped into the same space and predicted to belong to a category based on which side of the gap they fall. So, SVM-based classifiers linearly divide examples from different classes.

For example, a training set X = {(x1, y1), ..., (xl, yl)}, with xi = (xi1, ..., xim) and yi = -1 or yi = +1, i. e. l examples with m real attributes, is given. The aim is to learn a hyper-plane from this dataset:

< w, x > + b = 0, (1)


where < ·, · > denotes the dot product. This hyper-plane separates examples labeled as -1 from those labeled as +1. Using it, a new instance x is classified by applying the following classifier:

f(x) =  1, if < w, x > + b ≥ 1,
f(x) = -1, if < w, x > + b ≤ -1.    (2)

SVM is based on the maximization of the distance between the discriminating hyper-plane and the closest examples. This maximization reduces the so-called structural risk, which is related to the quality of the decision function. The most discriminating hyper-plane can be computed by solving the following constrained optimization problem:

|| w ||² → min, (3)

yi (< w, xi > + b) ≥ 1, i = 1, ..., l. (4)

Here < w, xi > is the usual scalar product of vectors, and || w ||² = < w, w >.

Thus, for solving the constrained optimization problem described above, i. e. for finding the vector w and the shift parameter b, the co-operative biology inspired meta-heuristic COBRA-c was used.
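How problem (1)-(4) can be handed to a population-based optimizer is shown in the sketch below. It is an illustrative Python formulation under assumed conventions (X is a NumPy matrix of training examples, y a vector of ±1 labels, and the decision vector theta packs w and b together), not the authors' implementation; classify follows the sign of < w, x > + b, a common simplification of (2) for points that fall inside the margin.

```python
import numpy as np

def unpack(theta):
    """Split the decision vector into the weight vector w and the shift b."""
    return theta[:-1], theta[-1]

def objective(theta):
    """|| w ||² -> min, eq. (3)."""
    w, _ = unpack(theta)
    return float(np.dot(w, w))

def constraint_violations(theta, X, y):
    """Violations of yi(< w, xi > + b) >= 1, eq. (4); zeros mean feasible."""
    w, b = unpack(theta)
    margins = y * (X @ w + b)
    return np.maximum(0.0, 1.0 - margins)

def classify(theta, x):
    """Decision rule based on eq. (2): the side of hyper-plane (1)."""
    w, b = unpack(theta)
    return 1 if np.dot(w, x) + b >= 0 else -1
```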

Ensembles design. Let the maximum number of SVM-based classifiers in one ensemble be equal to M. For the training and testing of the SVM-based classifiers and the examination of the whole ensemble, the given dataset was divided in the following way: 60 % of instances from the dataset were randomly chosen as a training sample for each classifier, 20 % of instances were chosen as a testing sample for each classifier and the remaining 20 % of instances (hereinafter referred to as new instances) were used to investigate the effectiveness of the ensemble. First, for each of the M SVM-based classifiers the vector w and shift parameter b were found by using the COBRA-c algorithm and the 60 % of instances from the dataset. After that, the classification error of each classifier was evaluated on its testing sample. Thus, the ensemble of M SVM-based classifiers was designed by using 80 % of the given data.
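One possible reading of this splitting procedure is sketched below in Python. Here train_svm_with_cobra_c is a hypothetical helper standing in for the COBRA-c-based training from the previous section, and the exact way the 60/20 resampling is repeated per member is an assumption; X and y are taken to be NumPy arrays.

```python
import numpy as np

def build_ensemble(X, y, M=10, seed=0):
    """Train M SVM-based members on random 60/20 splits of the 80 % "old" data."""
    rng = np.random.default_rng(seed)
    n = len(X)
    perm = rng.permutation(n)
    old, new = perm[:int(0.8 * n)], perm[int(0.8 * n):]  # 80 % "old", 20 % "new"
    members = []
    for _ in range(M):
        shuffled = rng.permutation(old)
        cut = int(0.6 * n)                # 60 % of the whole dataset for training
        train, test = shuffled[:cut], shuffled[cut:]
        theta = train_svm_with_cobra_c(X[train], y[train])  # hypothetical helper
        error = np.mean([classify(theta, x) != t
                         for x, t in zip(X[test], y[test])])
        members.append({"theta": theta, "error": error})
    return members, old, new
```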

Then two similar non-standard schemes, based on the use of the output of the locally most effective ensemble member, were used to infer the ensemble decision. Each instance x from the dataset was represented as a vector x = (x1, ..., xn), where n is the number of attributes. The training and testing samples (80 % of the instances from a given dataset) are hereinafter referred to as the "old" sample. For each new instance y, the "closest" instance z from the "old" sample was found; closeness was evaluated by calculating the Euclidean distance between the vector representations of the instances. The ensemble members which classified the closest instance correctly were determined. The difference between the two schemes can be described as follows:

1. First scheme. If only one ensemble member classified the closest instance z correctly, then this member is used for the new instance y. In all remaining cases (more than one member classified the closest instance correctly, or none of the ensemble members did) the ensemble member with the smallest classification error is chosen for the new instance y.

2. Second scheme. If only one ensemble member classified the closest instance z correctly, then this member is used for the new instance y. If more than one member classified the closest instance correctly, then the "confidence estimation" criterion is used; its value is calculated for each classifier in the following way:

ce(z) = < w, z > + b. (5)

The classifier with the best criterion value according to the class of instance z (the smallest value if z is labeled as -1 or the highest value if it is labeled as +1) is chosen for the new instance y. If none of the ensemble members classified the closest instance z correctly, then, again, the ensemble member with the smallest classification error is chosen for the new instance y. A sketch of this scheme in code is given after the list.
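Continuing the previous sketches (with the same assumed member structure and classify helper), the second scheme might look as follows; the first scheme differs only in replacing the confidence-estimation branch with the smallest-test-error fallback.

```python
import numpy as np

def predict_scheme2(members, X_old, y_old, x_new):
    """Classify x_new with the ensemble member chosen by the second scheme."""
    # 1. Find the "old" instance closest to x_new (Euclidean distance).
    j = int(np.argmin(np.linalg.norm(X_old - x_new, axis=1)))
    z, z_label = X_old[j], y_old[j]
    # 2. Determine which members classify z correctly.
    correct = [m for m in members if classify(m["theta"], z) == z_label]
    if len(correct) == 1:
        chosen = correct[0]
    elif len(correct) > 1:
        # Confidence estimation ce(z) = < w, z > + b, eq. (5): the member whose
        # output lies farthest on the correct side of the hyper-plane wins.
        def ce(m):
            w, b = m["theta"][:-1], m["theta"][-1]
            return float(np.dot(w, z) + b)
        chosen = max(correct, key=ce) if z_label == 1 else min(correct, key=ce)
    else:
        # No member classified z correctly: take the smallest test error.
        chosen = min(members, key=lambda m: m["error"])
    return classify(chosen["theta"], x_new)
```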

It should be noted that in this study the maximum number of SVM-based classifiers in one ensemble was set equal to 10, i. e. M = 10.

Experimental results. In order to load the developed optimization technique with really hard tasks, four benchmark classification problems were chosen: bank scoring in Australia, bank scoring in Germany, Breast Cancer Wisconsin and Pima Indians Diabetes [19]. This choice was conditioned by the fact that these problems have been solved by other researchers many times with different methods, so there are many results obtained by alternative approaches that can be used for comparison.

Firstly, two bank scoring problems were solved with SVM-based classifier ensembles: bank scoring in Australia and bank scoring in Germany [19]. For the Australian bank scoring problem there are 14 attributes (6 numerical and 8 categorical), 2 classes, 307 examples of creditworthy customers and 383 examples of non-creditworthy customers. For the German bank scoring problem there are 20 attributes (13 qualitative and 7 numerical), 2 classes, 700 records of creditworthy customers and 300 records of non-creditworthy customers. Both datasets were taken from [19].

Alternative algorithms for comparison, as well as the method of performance estimation, are taken from [20]. The results obtained are presented in tab. 1, which shows the portion of correctly classified instances from the testing sets (%). The following abbreviations are used in tab. 1: SVME_01 is the SVM-based classifier ensemble with the first scheme, SVME_02 is the SVM-based classifier ensemble with the second scheme and SVM+COBRA is a single SVM-based classifier generated by the algorithm COBRA-c.

Table 1

Performance comparison of classifiers for bank scoring problems

Classifiers    Scoring in Australia   Scoring in Germany
2SGP           90.27                  80.15
C4.5           89.86                  77.73
Fuzzy          89.10                  79.40
GP             88.89                  78.34
CART           87.44                  75.65
LR             86.96                  78.37
CCEL           86.60                  74.60
RSM            85.20                  67.70
Bagging        84.70                  68.40
Bayesian       84.70                  67.90
Boosting       76.00                  70.00
k-NN           71.50                  71.51
SVM+COBRA      90.22                  79.60
SVME_01        90.29                  79.63
SVME_02        90.25                  79.63

So, for the Australian bank scoring problem the results obtained are better than those of the alternative classifiers from tab. 1, and for the German bank scoring problem the results obtained are the second best. The results in tab. 1 are averaged over 20 algorithm executions. The standard deviation for the Australian bank scoring problem was 2.19 % for the first scheme and 1.03 % for the second scheme; for the German bank scoring problem it was 2.03 % for the first scheme and 1.35 % for the second scheme. Ensembles with the first scheme and ensembles with the second scheme demonstrated the same mean result for the German bank scoring problem. Also, the ensembles showed better results than the single SVM-based classifier.

It should be noted that despite the fact that the maximum number of classifiers in one ensemble was equal to 10, usually only 2-5 SVM-based classifiers were obtained for each ensemble. Also, for SVM-based classifier design, i. e. for solving the constrained optimization problem, the maximum number of function evaluations was set equal to 10000.

After that, two medical diagnostic problems were solved with SVM-based classifier ensembles: Breast Cancer Wisconsin Diagnostic and Pima Indians Diabetes [19]. For Breast Cancer Wisconsin there are 10 attributes (the patient's ID, which was not used in the calculations, and 9 categorical attributes with values from 1 to 10), 2 classes, 458 records of patients with benign tumors and 241 records of patients with malignant tumors. For Pima Indians Diabetes there are 8 attributes (all numeric-valued), 2 classes, 500 patients who tested negative for diabetes and 268 patients who tested positive. The benchmark data for these problems were also taken from [19].

The results obtained are presented in tab. 2 and tab. 3, where the portion of correctly classified instances from the testing sets is shown. Tab. 2 and tab. 3 also contain the results of other researchers who used other approaches, as found in the scientific literature [21; 22].

Table 2

Performance comparison of classifiers for the breast cancer problem

Author, year                      Method          Accuracy, %
This study (2014)                 SVME_01         98.57
This study (2014)                 SVME_02         98.32
Authors' results (2013)           SVM+COBRA       97.64
Quinlan (1996)                    C4.5            94.74
Hamilton et al. (1996)            RAIC            95.00
Ster, Dobnikar (1996)             LDA             96.80
Nauck and Kruse (1999)            NEFCLASS        95.06
Pena-Reyes, Sipper (1999)         Fuzzy-GA1       97.36
Setiono (2000)                    Neuro-rule 2a   98.10
Albrecht et al. (2002)            LSA machine     98.80
Abonyi, Szeifert (2003)           SFC             95.57
Polat, Günes (2007)               LS-SVM          98.53
Guijarro-Berdiñas et al. (2007)   LLS             96.00
Karabatak, Cevdet-Ince (2009)     AR + NN         97.40
Peng et al. (2009)                CFW             99.50

Table 3

Performance comparison of classifiers for the Pima Indians diabetes problem

Author, year                      Method                 Accuracy, %
This study (2014)                 SVME_01                80.26
This study (2014)                 SVME_02                80.10
Authors' results (2013)           SVM+COBRA              79.98
H. Temurtas et al. (2009)         MLNN with LM (10xFC)   79.62
H. Temurtas et al. (2009)         PNN (10xFC)            78.05
H. Temurtas et al. (2009)         MLNN with LM           82.37
H. Temurtas et al. (2009)         PNN                    78.13
M. R. Bozkurt et al. (2012)       PNN                    72.00
M. R. Bozkurt et al. (2012)       LVQ                    73.60
M. R. Bozkurt et al. (2012)       FFN                    68.80
M. R. Bozkurt et al. (2012)       CFN                    68.00
M. R. Bozkurt et al. (2012)       DTDN                   76.00
M. R. Bozkurt et al. (2012)       TDN                    66.80
M. R. Bozkurt et al. (2012)       Gini                   65.97
M. R. Bozkurt et al. (2012)       AIS                    68.80
S. M. Kamruzzaman et al. (2005)   FCNN with PA           77.34
K. Kayaer, T. Yildirim (2003)     GRNN                   80.21
K. Kayaer, T. Yildirim (2003)     MLNN with LM           77.08
L. Meng et al. (2005)             AIRS                   67.40

So, for the Pima Indians diabetes problem the results obtained are the second best. The results in tab. 2 and tab. 3 are averaged over 20 algorithm executions. The standard deviation for the breast cancer problem was 0.84 % for the first scheme and 0.85 % for the second scheme; for the Pima Indians diabetes problem it was 2.68 % for the first scheme and 1.73 % for the second scheme. Again, the ensembles showed better results than the single SVM-based classifier.

As for the bank scoring problems, despite the fact that the maximum number of classifiers in one ensemble was equal to 10, usually only 2-5 SVM-based classifiers were obtained for each ensemble. Also, for SVM-based classifier design, i. e. for solving the constrained optimization problem, the maximum number of function evaluations was set equal to 1000.

Conclusion. In this paper a new meta-heuristic for solving unconstrained optimization problems, called Co-Operation of Biology Related Algorithms (COBRA), and its modification for solving constrained optimization problems, called COBRA-c, were described. Experiments showed that COBRA and COBRA-c work successfully and reliably on different benchmark problems and demonstrate competitive behavior.

The described optimization method COBRA-c was then used for the design of SVM-based classifier ensembles, namely for the adjustment of the SVM-based classifiers. This approach was applied to four real-world classification problems (two bank scoring problems and two medical diagnostic problems), the solving of which is equivalent to solving hard optimization problems whose objective functions have many variables and are given in the form of a computational program. The suggested algorithm successfully solved all of these problems, designing ensembles with competitive performance, which allows the study results to be considered as confirmation of the reliability, workability and usefulness of the algorithm for solving real-world optimization problems.

Directions for future research are heterogeneous: improvement of the cooperation and competition scheme within the approach, and the development of ensembles whose members are not only SVM-based classifiers but also, for example, neural networks.

Acknowledgment. This work was supported by the Ministry of Education and Science of the Russian Federation, Project 140/14.


References

1. Dietterich T. G. Machine learning research: Four current directions. AI Mag. 1997, no. 18, p. 97-136.

2. Rojas R. Neural networks: a systematic introduction. Springer-Verlag, Berlin. 1996, 502 p.

3. Yager R. R., Filev D. P. Essentials of fuzzy modeling and control. Wiley, New York. 1994, 408 p.

4. Eiben A. E., Smith J. E. Introduction to evolutionary computing. Springer, Berlin. 2003.

5. Vapnik V., Chervonenkis A. Teoriya raspoznavaniya obrazov [Theory of Pattern Recognition]. Moscow, Nauka Publ., 1974, 415 p.

6. Akhmedova Sh., Semenkin E. Co-Operation of Biology Related Algorithms. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC'2013). 2013, p. 2207-2214.

7. Akhmedova Sh., Semenkin E. [New optimization metaheuristic based on co-operation of biology related algorithms]. Vestnik SibGAU. 2013, No. 4 (50), p. 92-99 (In Russ.).

8. Joachims T. Text categorization with Support Vector Machines: Learning with many relevant features. In Proceedings of the 10th European Conference on Machine Learning (ECML'1998). 1998, p. 137-142.

9. Popov E. A., Semenkina M. E., Lipinskiy L. V. [Evolutionary algorithm for automatic generation of neural network based noise suppression systems]. Vestnik SibGAU. 2012, No. 4 (44), p. 79-82 (In Russ.).

10. Kennedy J., Eberhart R. Particle Swarm Optimization. In Proceedings of International Conference on Neural networks IV. 1995, p. 1942-1948.

11. Chenguang Yang, Xuyan Tu and Jie Chen. Algorithm of Marriage in Honey Bees Optimization Based on the Wolf Pack Search. In Proceedings of International Conference on Intelligent Pervasive Computing (IPC2007). 2007, p. 462-467.

12. Yang X. S. Firefly algorithms for multimodal optimization. In Proceedings of 5th Symposium on Stochastic Algorithms, Foundations and Applications (SAGA 2009). 2009, p. 169-178.

13. Yang X. S., Deb S. Cuckoo Search via Lévy flights. In Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC 2009). 2009, p. 210-214.

14. Yang X. S. A new metaheuristic bat-inspired algorithm. Nature Inspired Cooperative Strategies for Optimization. Studies in Computational Intelligence. 2010, vol. 284, p. 65-74.

15. Liang J. J., Qu B. Y., Suganthan P. N., Hernandez-Diaz A. G. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization. Technical Report, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, and Nanyang Technological University, Singapore. 2012.

16. Deb K. An efficient constraint handling method for genetic algorithms. Computer methods in applied mechanics and engineering. 2000, vol. 186 (2-4), p. 311-338.

17. Liang J. J., Shang Z., Li Z. Coevolutionary Comprehensive Learning Particle Swarm Optimizer. In Proceedings of Congress on Evolutionary Computation (CEC'2010). 2010, p. 1505-1512.

18. Mallipeddi R., Suganthan P. N. Problem Definitions and Evaluation Criteria for the CEC 2010 Competition on Constrained Real-Parameter Optimization. Technical report. Nanyang Technological University. Singapore. 2009.

19. Frank A., Asuncion A. UCI Machine Learning Repository. Irvine, CA: University of California, School of Information and Computer Science. 2010. Available at: http://archive.ics.uci.edu/ml.

20. Huang J.-J., Tzeng G.-H., Ong Ch.-Sh. Two-stage genetic programming (2SGP) for the credit scoring model. Applied Mathematics and Computation. 2006, vol. 174, p. 1039-1053.

21. Marcano-Cedeno A., Quintanilla-Dominguez J., Andina D. WBCD breast cancer database classification applying artificial metaplasticity neural network. Expert Systems with Applications: An International Journal. 2011, vol. 38, iss. 8, p. 9573-9579.

22. Temurtas H., Yumusak N., Temurtas F. A comparative study on diabetes disease diagnosis using neural networks. Expert Systems with Applications. 2009, vol. 36, no. 4, p. 8610-8615.


© Akhmedova Sh. A., Semenkin E. S., 2015
