The effectiveness of the Particle Swarm Optimization algorithm was investigated by solving test constrained and unconstrained multi-criterion problems. Before each run, the maximum number of particles that could be stored in the archive of non-dominated solutions was set; this archive size differed from problem to problem. Constrained optimization problems were handled with the dynamic penalty method. When solutions were found, the archive was filled only partially. The studies showed that increasing the number of criteria increases the algorithm's effectiveness. For example, the archive of non-dominated solutions was filled on average to 20-30 % when constrained problems were solved with the standard PSO as well as with the binary PSO. The only advantage of the standard PSO was the time spent on a single program run; otherwise the results of the algorithms did not differ significantly. The numbers of particles and generations were about the same as for the unconstrained problems. Solving one constrained problem, whose particular feature was that no point of the Pareto set lay in the feasible region, required a notable increase in the population size; in the end, the points obtained lay on the part of the feasible region boundary closest to the Pareto set. The results obtained with the standard and the binary PSO were almost the same.
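As a hedged illustration of the two mechanisms mentioned above (the bounded archive of non-dominated solutions and the dynamic penalty method for constraints), the following Python sketch shows one possible realization; it is not the author's implementation, and the archive limit, the truncation rule and the penalty schedule are illustrative assumptions.

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization)."""
    return np.all(f_a <= f_b) and np.any(f_a < f_b)

def update_archive(archive, candidate, max_size=100):
    """Insert a candidate (position, objectives) into a bounded archive of
    non-dominated solutions; max_size is an illustrative limit."""
    pos, obj = candidate
    if any(dominates(a_obj, obj) for _, a_obj in archive):
        return archive                                   # candidate is dominated, discard it
    archive = [(p, o) for p, o in archive if not dominates(obj, o)]
    archive.append((pos, obj))
    if len(archive) > max_size:                          # crude truncation when the archive overflows
        archive.pop(np.random.randint(len(archive)))
    return archive

def dynamic_penalty(objectives, violations, generation, c=0.5, alpha=2.0):
    """Dynamic penalty: the weight grows with the generation number, so
    infeasible points are tolerated early and punished later."""
    penalty = (c * generation) ** alpha * np.sum(np.maximum(violations, 0.0) ** 2)
    return objectives + penalty
```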
After all these investigations, two real-world problems were solved: the formation of an optimal investment portfolio for an enterprise and the formation of an optimal loan portfolio for a bank. The first problem was solved both in a single-criterion and in a multi-criterion formulation.
Sh. A. Akhmedova
DEVELOPMENT AND INVESTIGATION OF THE EFFECTIVENESS OF THE PARTICLE SWARM OPTIMIZATION ALGORITHM
The effectiveness of the particle swarm optimization (PSO) algorithm with real-valued and with binary particles was investigated on constrained and unconstrained single- and multi-criterion optimization problems. Parallel modifications of both algorithms were also developed, and two real-world practical problems were solved.
Keywords: standard and binary PSO, parallelism, multi-criterion optimization.
© Akhmedova Sh. A., 2012
UDC 519.8
E. A. Popov, M. E. Semenkina, L. V. Lipinskiy
EVOLUTIONARY ALGORITHM FOR AUTOMATIC GENERATION OF NEURAL NETWORK BASED NOISE SUPPRESSION SYSTEMS*
We propose using neural network technology to suppress noise in information signals. The neural networks are automatically generated and adjusted with an evolutionary algorithm. It is shown that the evolutionary algorithm provides a reliable noise suppression system.
Keywords: genetic algorithm, genetic programming algorithm, neural network, noise suppression system.
In the modern world, there are many sources and receivers of information signals, such as wired and wireless Internet, various access points, a huge range of radio waves, mobile sources, etc. All these sources are sensitive to various kinds of noise and disturbances, which are associated with the mutual influence of signals and with external factors that contribute to transmission line mismatch, resonance phenomena, etc. [1].
The theoretical foundation of noise filters is spectral analysis; their algorithmic basis is the fast Fourier transform. Applying spectral analysis and classical filters requires careful adjustment of a set of parameters, which makes it very difficult to automate the design of noise reduction systems. This creates the need for new approaches. One such approach is the use of intelligent information technologies, which have been developing intensively over the last twenty years [2].
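For context, a classical spectral filter of the kind referred to here can be sketched as follows (Python/NumPy). The thresholding rule and the `keep_fraction` parameter are illustrative assumptions, and this parameter is exactly the sort of setting whose careful manual tuning motivates the search for new approaches.

```python
import numpy as np

def fft_threshold_filter(x, keep_fraction=0.01):
    """Classical spectral denoising: transform to the frequency domain,
    zero out weak spectral components, transform back.
    keep_fraction is an illustrative tuning parameter."""
    spectrum = np.fft.rfft(x)
    magnitude = np.abs(spectrum)
    threshold = keep_fraction * magnitude.max()   # manual tuning point
    spectrum[magnitude < threshold] = 0.0
    return np.fft.irfft(spectrum, n=len(x))
```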
*The study was supported by the Ministry of Education and Science of the Russian Federation, projects № 16.740.11.0742, 14.740.12.1341 and 11.519.11.4002.
To date, there exist several methods for noise reduction: shielding, grounding, signal filtering, noise reduction with adaptive filters, wavelet analysis of the signal, etc. All of them have limitations and certain disadvantages (the requirement of a priori information about the signal or the noise, a complex technological structure, expensive equipment, complex mathematical tools, highly qualified developers, etc.).
Noise reduction systems should therefore be created in such a way that these restrictions are relaxed and the disadvantages eliminated: they should not require changes in the environment, a priori information about the signal or the noise, expensive equipment, or highly qualified developers, and they should be designable in an automated mode.
Evolutionary methods (EM) are able to search in complex spaces where a solution is a hierarchical structure or a combinatorial circuit. They do not use a priori information about the optimized function, which significantly expands their field of application. Neural network models are another well-known data mining method. Artificial neural networks (ANN) are able to process large data sets, are resistant to noise, and adapt to changing problem conditions. In this paper, we propose the use of an evolutionary algorithm for automatically generating a neural network based noise suppression system.
The evolutionary algorithm for automatically generating a neural network based noise suppression system. The process of implementing an ANN model and preparing it for work consists of two main steps: selecting the neural network structure (including adjusting the activation function threshold values) and tuning the weights of the connections between neurons. Moreover, a neural network model can be adapted by adjusting the weights when new data are received or the problem conditions change.
Researchers seek to implement neural networks with minimal architecture: in this case the generalizing properties of the network are better, the result obtained is more predictable, and less time is required for signal processing. We propose to use an evolutionary algorithm to generate such neural networks, in which the structure of the neural network is configured by a genetic programming algorithm (GP), while the weights and the activation function thresholds are adjusted by a genetic algorithm (GA) followed by a hill climbing method [3].
Neural network structure configuring. To generate the neural network structure with the genetic programming algorithm (GP), the terminal and functional sets must be defined. Neurons, or blocks of neurons interconnected in a certain way, can be chosen as the terminal set for the problem of generating the neural network structure. The operators that combine these neurons and their blocks into a network are then included in the functional set. The chosen encoding method must satisfy two conditions: closure and sufficiency.
The closure condition requires that admissible solutions be obtained for any combination of functional and terminal elements. Two operators can be included in the functional set to satisfy this requirement: an operator that places terminal elements into a single layer and an operator that links layers.
The sufficiency condition requires that the terminal and functional elements be sufficient for the task. To satisfy this condition, a number of different activation functions and their combinations are included in the terminal set.
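A minimal sketch of such an encoding, under illustrative assumptions, is given below (Python; the node names `layer` and `link` and the particular activation functions are not the authors' exact representation). Any tree built from these two operators decodes into an admissible network, which is what the closure condition demands, while the varied terminals provide sufficiency.

```python
import random

# Terminal set: single neurons with different activation functions (sufficiency).
TERMINALS = ["sigmoid", "tanh", "linear"]
# Functional set: only two operators, so every tree decodes to a valid network (closure).
FUNCTIONS = ["layer",   # put the argument sub-networks side by side in one layer
             "link"]    # feed the first sub-network's outputs into the second

def random_tree(depth):
    """Grow a random GP tree describing a network structure (full growth method)."""
    if depth == 0:
        return random.choice(TERMINALS)
    op = random.choice(FUNCTIONS)
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def count_neurons(tree):
    """Decoding step shown here only to illustrate that any tree is interpretable."""
    if isinstance(tree, str):
        return 1
    _, left, right = tree
    return count_neurons(left) + count_neurons(right)

structure = random_tree(depth=3)
print(structure, "->", count_neurons(structure), "neurons")
```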
Weights and activation function threshold values optimization. In this paper, we use a genetic algorithm followed by a local search method to optimize the weights. Studies show that in the first iterations the GA concentrates the individuals in the attraction basins of local extremum points. The local search is then more easily performed with the conjugate gradient algorithm, which is comparable in effectiveness to second-order methods while using only first-order derivatives. Numerical calculation of the derivatives extends this coefficient optimization method to neural networks of arbitrary structure.
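The hybrid tuning scheme can be sketched as follows (Python/NumPy). This is an illustrative outline, not the authors' code: plain gradient descent with numerically estimated derivatives stands in for the conjugate gradient step, and the population size, mutation scale and learning rate are assumed values.

```python
import numpy as np

def numerical_gradient(loss, w, eps=1e-6):
    """Finite-difference gradient, so no analytic derivatives of the network are needed."""
    grad = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w); d[i] = eps
        grad[i] = (loss(w + d) - loss(w - d)) / (2 * eps)
    return grad

def hybrid_weight_tuning(loss, n_weights, pop_size=30, generations=50,
                         local_steps=100, lr=0.05):
    """GA global search followed by gradient-based local refinement (illustrative)."""
    pop = np.random.uniform(-1, 1, size=(pop_size, n_weights))
    for _ in range(generations):
        fitness = np.array([loss(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[:pop_size // 2]]            # truncation selection
        children = parents + np.random.normal(0, 0.1, parents.shape)  # Gaussian mutation
        pop = np.vstack([parents, children])
    best = pop[np.argmin([loss(ind) for ind in pop])]
    for _ in range(local_steps):                                      # local descent stage
        best = best - lr * numerical_gradient(loss, best)
    return best
```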
Results of the study. A complete investigation of all types of noise filtering problem formulations is not possible within one paper; therefore the studies were carried out under the following restrictions:
- the test signals are periodic harmonic signals;
- the test noise is a constant broadband noise (white noise);
- the signal spectral analysis method is taken as a basis for the implemented model.
A preliminary examination of existing intelligent information technologies established that the most appropriate technology for the initial research is artificial neural networks, because of their ability to be trained automatically to solve the problem and to adapt to changes in external influences.
The MATLAB® Neural Network Toolbox™ software environment was used to pre-adapt the neural network technology to the noise suppression problem.
The input data for training the neural network is the spectrum of a noisy periodic test signal. The test signal is a sine wave with a frequency of 100 Hz and an amplitude of 1; the noise is constant broadband (white) noise with an average power of 4.
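The training data described above can be reproduced in outline with the following sketch (Python/NumPy); the sampling rate and signal length are illustrative assumptions not stated in the paper.

```python
import numpy as np

fs = 8000                                          # sampling rate, Hz (illustrative assumption)
t = np.arange(0, 1.0, 1.0 / fs)

signal = 1.0 * np.sin(2 * np.pi * 100 * t)         # 100 Hz sine with amplitude 1
noise = np.sqrt(4.0) * np.random.randn(len(t))     # white noise with average power 4
noisy = signal + noise

# Input for the network: magnitude spectrum of the noisy test signal.
spectrum = np.abs(np.fft.rfft(noisy))
```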
Four neural network structures available in MATLAB were chosen for a comparative analysis based on the specific problem being solved. The following network structures were chosen: a cascade forward network, an Elman network, a feedforward network with error back propagation, and an autoregressive dynamically trained neural network.
The number of hidden layers, the number of neurons in a hidden layer, the training function and the neuron activation function were varied for each structure. The comparison results are presented in Table 1, and the most effective structures were selected on the basis of this analysis (marked with * in Table 1). The Elman network and the feedforward network with error back propagation can be regarded as the best-performing options. At the same time, the latter network requires only half the training time and has a simpler structure, which is significant for practical implementation.
Table 1. Neural networks performance comparison

Network | Hidden layers | Neurons | Avg. training time, s | Avg. variance | Avg. signal/noise ratio, dB | False alarms
Cascade forward network | 3 | 10 | 1 | 0.0226 | 15.4 | 107
Cascade forward network | 2 | 10 | 1 | 0.0231 | 15.4 | 109
Cascade forward network | 1 | 10 | 1 | 0.0235 | 15.2 | 116
Cascade forward network | 3 | 15 | 1 | 0.0214 | 15.4 | 89
Elman network | 3 | 10 | 13 | 0.0263 | 16.0 | 112
Elman network | 2 | 10 | 4 | 0.0223 | 14.8 | 92
Elman network | 1 | 10 | 2 | 0.0191 | 15.6 | 51
Elman network * | 1 | 10 | 3 | 0.0179 | 16.3 | 53
Feedforward network with error back propagation | 3 | 10 | 1 | 0.5007 | -0.02 | 1000
Feedforward network with error back propagation | 2 | 10 | 1 | 0.0221 | 15.4 | 91
Feedforward network with error back propagation | 1 | 10 | 1 | 0.0226 | 15.4 | 94
Feedforward network with error back propagation * | 1 | 5 | 1 | 0.0176 | 16.3 | 55
Autoregressive dynamically trained neural network | 3 | 5 | – | 0.1821 | 9.2 | 451
Autoregressive dynamically trained neural network | 2 | 5 | 1 | 0.2285 | 9.3 | 449
Autoregressive dynamically trained neural network | 1 | 5 | 1 | 0.0555 | 9.9 | 446
Autoregressive dynamically trained neural network | 1 | 8 | 1 | 0.0193 | 15.9 | 62
Fig. 1. The typical network structure
Therefore, we can conclude that in our case the feedforward network with error back propagation should be considered the best neural network for solving the noise suppression problem. The best network has the following characteristics: 1 hidden layer, 5 neurons in the layer, a bipolar sigmoid activation function, an average training time of one second, a training error of 0.01, an average signal/noise ratio of 9.2 dB before training and 16.3 dB after training, an average variance of the processed signal of 0.0179, and 55 false alarms.
The following settings were chosen for the program system [4] that generates neural networks of arbitrary structure using the genetic programming algorithm (a minimal configuration sketch in code follows the list):
1. The run length was 4 generations with 20 individuals.
2. Tournament selection with a tournament size of three was used.
3. The initial depth of the trees was equal to three; the trees were grown with the full growth method.
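Under the assumptions of the earlier structure-encoding sketch, these settings correspond to a run of the following shape (Python); the fitness function is a placeholder, since in the real system the decoded network would be trained and scored on the noisy signal.

```python
import random

POP_SIZE, GENERATIONS, TOURNAMENT, INIT_DEPTH = 20, 4, 3, 3
TERMINALS, FUNCTIONS = ["sigmoid", "tanh", "linear"], ["layer", "link"]

def random_tree(depth):
    """Full growth method: every branch reaches the maximum depth."""
    if depth == 0:
        return random.choice(TERMINALS)
    return (random.choice(FUNCTIONS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree):
    """Placeholder fitness: in the real system the decoded network would be
    trained and tested; here smaller trees are simply preferred."""
    return 1 if isinstance(tree, str) else 1 + evaluate(tree[1]) + evaluate(tree[2])

def tournament_select(population, fitness):
    """Tournament selection with three participants."""
    contenders = random.sample(range(len(population)), TOURNAMENT)
    return population[min(contenders, key=lambda i: fitness[i])]

population = [random_tree(INIT_DEPTH) for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    fitness = [evaluate(tree) for tree in population]
    population = [tournament_select(population, fitness) for _ in range(POP_SIZE)]
    # crossover and mutation of the selected trees would follow here
```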
The experiments established that an efficient neural network that successfully solves the noise suppression problem can be obtained in every run. The typical obtained structure is shown in Fig. 1.
This neural network has the following characteristics: the average signal/noise ratio is 9.2 dB before processing and 19.8 dB after processing, the average variance of the processed signal with respect to the reference is 0.0163, and the number of false alarms (signal/noise ratio below 10 dB) reached 20. It can be concluded that the automatically generated network is the most effective, since it has a very simple structure and gives the best signal processing results.
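For reference, the quality measures quoted above (signal/noise ratio in dB, variance with respect to the reference signal, and a false alarm counted when the ratio stays below 10 dB) can be computed as in the sketch below (Python/NumPy); how the authors averaged these values over runs is not specified, so the per-run form is an assumption.

```python
import numpy as np

def snr_db(reference, processed):
    """Signal/noise ratio of the processed signal with respect to the clean reference."""
    noise = processed - reference
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

def variance_vs_reference(reference, processed):
    """Average squared deviation of the processed signal from the reference."""
    return np.mean((processed - reference) ** 2)

def is_false_alarm(reference, processed, threshold_db=10.0):
    """A run is counted as a false alarm if the output SNR stays below 10 dB."""
    return snr_db(reference, processed) < threshold_db
```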
Conclusions
In this paper we have proposed an approach to solving the noise suppression problem based on ANNs, presented a neural network structure automatically generated with the help of the genetic programming algorithm, conducted a statistical analysis of the results, and substantiated the possibility of practical application of the neural network based noise suppression method in digital communication systems.
References
1. Sklar B. Digital Communications. The theoretical basis and practical application. Moscow : Dialectics, Williams, 2004.
2. Rutkovskaya D., Pilinsky M., Rutkowski L. Neural networks, genetic algorithms and fuzzy systems : translated from Polish. Moscow : Goryachaya liniya - Telecom, 2004.
3. Semenkina M. E., Semenkin E. S. The algorithm of genetic programming with the generalized operator of multiple recombination // Computer training programs and innovation. 2009. № 2. P. 20.
4. Lipinski L. V., Semenkin E. S. The system for generating evolutionary neural network models for complex systems // Computer training programs and innovation. 2007. № 7. P. 15.
E. A. Popov, M. E. Semenkina, L. V. Lipinskiy
EVOLUTIONARY ALGORITHM FOR AUTOMATIC GENERATION OF NEURAL NETWORK BASED NOISE SUPPRESSION SYSTEMS
It is proposed to use neural networks as noise suppression systems for information signals. The neural networks are created and adjusted automatically by means of evolutionary algorithms. It is shown that neural networks are a reliable means of noise suppression.
Keywords: genetic algorithm, genetic programming algorithm, neural networks, noise suppression systems.
© Popov E. A., Semenkina M. E., Lipinskiy L. V., 2012
UDC 005; 519.7; 303.732
I. S. Ryzhikov
ABOUT MULTIAGENT SYSTEM APPLICATIONS FOR SPEECH RECOGNITION PROBLEM*
In this paper we suggest two different multi-agent systems for the speech recognition problem. Multi-agent systems (MAS) are becoming very popular because of their flexibility and applicability to complex problems. Such a system is based on the functioning of different agents that form the system and interact with each other. The main benefit of the multi-agent approach is that every agent can be described as a simple subsystem, and the whole initial task can be solved through automatic and autonomous agent actions, interactions and decision making. Thus, the main problem can be reduced to tuning the behavior rule base.
Keywords: multi-agent systems, speech recognition, intelligent agents.
Due to the increasing complexity of modern tasks, it is now common to choose and modify one of the many cybernetic techniques for classification, modeling and control. Since the problems arise in new applied fields with uncommon properties, modifying the methods for every distinct task, or even searching for a new way to solve the problem, becomes the researcher's main concern. Speech recognition itself touches upon classification, optimization, modeling and many other problems; this means that success can only be achieved with a complex recognition system that deals with all the properties of every subtask and of the main problem. There are several ways to define the speech recognition problem, and different paradigms and theories already exist. Because of the complexity of the speech recognition problem and its dependence on the particular language it is designed for, there is still no complete solution to the general problem. Although there are plenty of techniques for every particular task related to speech recognition, the speech recognition system as a whole is often unable to achieve the desired goals. Such a complex system consists of different parts, and each of them requires a great amount of computational resources to solve its own task with a given accuracy. If the required accuracy is not achieved at the current step, the error grows with every following step and the output drifts far from what it should be. The dependence of the processing quality of every element in the system on the output of the previous one requires a lot of resources for every current task and special modifications of every distinct technique for each arising task and each distinct property of the problem.
There is, however, another way to solve the complex problem: to create interaction between different elements with different goals and to make their real-time communication possible. A system based on the interaction of different components with different goals can, in some cases, be a multi-agent system.
That is why a MAS can satisfy the needs of the complex system: its agents can be intelligent, they can communicate, and their common goal is to find a solution to the recognition task. The benefit of using a MAS consists of three aspects. Firstly, the interacting intelligent agents can be simple, much simpler than the task they are to deal with; it is precisely because of the interaction that a group of simple agents can automatically solve complex tasks. Secondly, if over time new and better techniques appear or the problem definition changes, we do not need to rebuild the system; we only need to change the related agent or build new agents for the new goals. Thirdly, no matter which recognition approach we use, there will always be tasks common to every approach: noise suppression, waveform representation or modeling, classification, etc. In this paper we suggest using a MAS as a decision support system.
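As an illustration of this decomposition, the sketch below (Python) shows a minimal skeleton in which independent agents exchange messages over a shared bus; the agent roles follow the common tasks listed above (noise suppression, waveform representation, classification), while the message format, the scheduling and the class names are illustrative assumptions rather than the proposed system.

```python
from collections import deque

class Agent:
    """A simple agent: reacts to messages addressed to its role and may post new ones."""
    def __init__(self, role):
        self.role = role

    def handle(self, message, bus):
        raise NotImplementedError

class NoiseSuppressionAgent(Agent):
    def handle(self, message, bus):
        cleaned = message["data"]               # placeholder for an actual filtering step
        bus.post({"to": "features", "data": cleaned})

class FeatureAgent(Agent):
    def handle(self, message, bus):
        features = message["data"]              # placeholder for waveform representation/modeling
        bus.post({"to": "classifier", "data": features})

class ClassifierAgent(Agent):
    def handle(self, message, bus):
        print("recognized:", message["data"])   # placeholder for the recognition decision

class MessageBus:
    """Shared queue through which the agents interact."""
    def __init__(self, agents):
        self.queue, self.agents = deque(), {a.role: a for a in agents}
    def post(self, message):
        self.queue.append(message)
    def run(self):
        while self.queue:
            msg = self.queue.popleft()
            self.agents[msg["to"]].handle(msg, self)

bus = MessageBus([NoiseSuppressionAgent("denoise"),
                  FeatureAgent("features"),
                  ClassifierAgent("classifier")])
bus.post({"to": "denoise", "data": "raw waveform"})
bus.run()
```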
The use of MAS for the speech recognition problem was described in general terms in [1] and [2], and for some specific tasks in [3].
*The study was supported by the Ministry of Education and Science of the Russian Federation, projects № 16.740.11.0742, 14.740.12.1341 and 11.519.11.4002.