DEVELOPMENT OF A METHOD FOR TRAINING ARTIFICIAL NEURAL NETWORKS FOR INTELLIGENT DECISION SUPPORT SYSTEMS


A method of training artificial neural networks for intelligent decision support systems has been developed. The method trains not only the synaptic weights of the artificial neural network, but also the type and parameters of the membership function, as well as the architecture and parameters of an individual network node. If the specified quality of functioning of the artificial neural network cannot be ensured by training its parameters, the architecture of the artificial neural network is trained. The choice of the architecture and of the type and parameters of the membership function takes into account the computing resources of the facility and the type and amount of information arriving at the input of the artificial neural network. The method makes it possible to train an individual network node and to combine network nodes. The development of the proposed method is motivated by the need to train artificial neural networks for intelligent decision support systems in order to process a larger amount of information while keeping the decisions made unambiguous. The proposed training method provides, on average, 10-18 % higher training efficiency of artificial neural networks and does not accumulate errors during training. The method will make it possible to train artificial neural networks; to determine effective measures for improving the efficiency of their functioning; and to improve the efficiency of their functioning by training the parameters and architecture of artificial neural networks. The method will reduce the use of computing resources of decision support systems; help develop measures aimed at improving the efficiency of training artificial neural networks; and increase the speed of information processing in artificial neural networks.

Keywords: artificial neural networks, information processing, intelligent decision support systems

UDC 004.032.26

DOI: 10.15587/1729-4061.2020.203301

DEVELOPMENT OF A METHOD FOR TRAINING ARTIFICIAL NEURAL NETWORKS FOR INTELLIGENT DECISION SUPPORT SYSTEMS

V. Dudnyk, PhD, Department of Fire Training*

Yu. Sinenko, Associate Professor, Department of Fire Training*

M. Matsyk, Associate Professor, Department of Driving Combat Vehicles and Cars*

Ye. Demchenko, PhD, Head of Research Department, Research Department of Scientific and Methodological Support for the Development and Implementation of Programs for the Development of Weapons and Military Equipment and the State Defense Order**

R. Zhyvotovskyi, PhD, Senior Researcher, Head of Research Department, Research Department of the Development of Anti-Aircraft Missile Systems and Complexes**

Iu. Repilo, Doctor of Military Sciences, Professor, Department of Missile Troops and Artillery***

O. Zabolotnyi, PhD, Associate Professor, Leading Researcher, Center of Military Strategic Studies***

O. Symonenko, Senior Lecturer, Department of Automated Control Systems, Military Institute of Telecommunications and Informatization named after Heroes of Kruty, Moskovska str., 45/1, Kyiv, Ukraine, 01011

P. Pozdniakov, PhD, Head of Department, Department of Tactic and General Military Sciences, Institute of Naval Forces, National University «Odessa Maritime Academy», Hradonachalnytska str., 20, Odessa, Ukraine, 65029

A. Shyshatskyi, PhD, Senior Researcher, Research Department of Electronic Warfare Development**

E-mail: [email protected]

*Hetman Petro Sahaidachnyi National Army Academy, Heroiv Maidanu str., 32, Lviv, Ukraine, 79026

**Central Scientifically-Research Institute of Arming and Military Equipment of the Armed Forces of Ukraine, Povitroflotskyi ave., 28, Kyiv, Ukraine, 03168

***Ivan Chernyakhovsky National Defense University of Ukraine, Povitroflotskyi ave., 28, Kyiv, Ukraine, 03049

Received date 30.03.2020

Accepted date 18.05.2020

Published date 30.06.2020

Copyright © 2020, V. Dudnyk, Yu. Sinenko, M. Matsyk, Ye. Demchenko, R. Zhyvotovskyi, Iu. Repilo, O. Zabolotnyi, O. Symonenko, P. Pozdniakov, A. Shyshatskyi

This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0)

1. Introduction

Decision support systems (DSS) are actively used in all spheres of human life. They are especially common in the

processing of large data sets, providing information support for decision-making by decision-makers.

Currently, the basis of existing DSS are methods of artificial intelligence [1-10].

The creation of intelligent DSS has become a natural continuation of the widespread use of DSS of the classical type. Intelligent DSS provide information support for all production processes and services of enterprises (organizations, institutions). Intelligent DSS are used for design, manufacture and sale of products, financial and economic analysis, planning, personnel management, marketing, support for the creation (operation, repair) of products and long-term planning. Also, these intelligent DSS have been widely used to solve specific tasks of military purpose, namely [1, 2]:

- planning the deployment, operation of communication systems and data transmission;

- automation of troops and weapons control;

- collection, processing and generalization of intelligence information on the state of intelligence objects;

- forecasting the electronic situation in communication channels, etc.

The main tool for solving computational and other problems in modern intelligent DSS is evolving artificial neural networks (ANN).

The prospects for the use of evolving ANNs are due to the fact that the capabilities of ANNs, which do not have the possibility of evolution, do not meet the requirements for the efficiency of data processing and their learning capabilities.

Evolving ANNs have both universal approximating properties and fuzzy inference capabilities, which are the reasons why they are widely used to solve various problems of data mining, identification, emulation, forecasting, intelligent control, etc. ANNs provide stable operation in conditions of nonlinearity, uncertainty, stochasticity and chaos, various perturbations and disturbances.

Despite their successful use to solve a wide range of data mining problems, these systems have a number of disadvantages associated with their use.

The most significant shortcomings are the following:

- complexity of choosing the system architecture. As a rule, the model based on the principles of computational intelligence has a fixed architecture. In the context of ANN, this means that the neural network has a fixed number of neurons and connections. Therefore, adapting the system to new data coming in for processing that is different from previous data may be problematic;

- training in batch mode and training over several epochs require significant time resources. Such systems are not adapted to situations where new data arrive for processing at a high rate;

- many of the existing computational intelligence systems can neither determine the evolving rules by which the system develops nor present the results of their work in terms of natural language.

Thus, the task of developing new ANN training methods, which will solve these difficulties, is urgent.

2. Literature review and problem statement

In [3], the properties of ANNs used in predicting the concentration of air pollutants are analyzed. It is emphasized that ANNs have a low convergence rate and tend to get stuck in local minima. The use of an extreme learning machine for the ANN is proposed, which provides high generalization performance at extremely high learning speed. The disadvantages of this approach include the accumulation of ANN errors during the calculations and the inability to choose the parameters and type of the membership function.

The work [5] presents an operational approach to spatial analysis in the marine industry to quantify and reflect related ecosystem services. This approach covers the three-dimensionality of the marine environment, considering separately all marine areas (sea surface, water column and seabed). In fact, the method builds 3-dimensional sea models by estimating and mapping the associated marine domains through the adoption of representative indicators. The disadvantages of this method include the impossibility of flexible adjustment (adaptation) to evaluate models while adding (excluding) indicators and changing their parameters (compatibility and significance of indicators).

The work [6] presents a model of machine learning for automatic identification of requests and provision of information support services exchanged between members of the Internet community. This model is designed to process a large number of messages from social network users. The disadvantages of this model are the lack of mechanisms for assessing the adequacy of decisions and high computational complexity.

In [7], the use of ANN for the detection of heart rhythm abnormalities and other heart diseases is presented. The backpropagation algorithm is used as a method of ANN teaching. The disadvantage of this approach is its limitation to training only synaptic weights without training the type and parameters of the membership function.

In the work [8] the use of ANN to detect the avalanche is presented. The backpropagation algorithm is used as a method of ANN training. The disadvantage of this approach is its limitation to training only synaptic weights without training the type and parameters of the membership function.

In [9], the use of ANN for anomaly detection in home authorization systems is presented. The winner-takes-all algorithm is used as a method of training Kohonen's ANN. The disadvantages of this approach are the accumulation of errors in the learning process, the limitation to learning only synaptic weights without learning the type and parameters of the membership function, as well as the need to store previously calculated data.

In [10], the use of ANN to identify problems in detecting abnormalities in the human encephalogram is presented. The method of fine-tuning of the ANN parameters is used as a method of ANN training. The disadvantages of this approach are the accumulation of errors in the learning process, the limitation to learning only synaptic weights without learning the type and parameters of the membership function.

In [12], the use of machine learning methods, namely ANN and genetic algorithms is presented. A genetic algorithm is used as a method of ANN training. The disadvantage of this approach is its limitation to training only synaptic weights without training the type and parameters of the membership function.

In [13], the use of machine learning methods is presented, namely ANN and differential search method. During the research, a hybrid method of ANN training was developed, which is based on the use of the algorithm of error backpropagation and differential search. The disadvantage of this approach is its limitation to training only synaptic weights without training the type and parameters of the membership function.

In [14], methods for ANN learning using a combined approximation of the response surface are developed, providing the smallest learning and forecasting errors. The disadvantages of this method are the accumulation of errors during training and the inability to change the architecture of the ANN during training.

The work [15] shows the use of ANN to assess the efficiency of the unit, using the previous time series of its performance. SBM (Stochastic Block Model) and DEA (Data Envelopment Analysis) models are used for ANN training. The disadvantages of this approach are the limited choice of network architecture and training only synaptic weights.

The work [16] shows the use of ANN for the evaluation of geomechanical properties. The backpropagation algorithm is used as a method of ANN training. Improving the characteristics of the backpropagation algorithm is achieved by increasing the training sample. The disadvantage of this approach is its limitation to training only synaptic weights without training the type and parameters of the membership function.

The work [17] shows the use of ANN for estimating traffic intensity. The backpropagation algorithm is used as a method of ANN training. Improving the performance of the backpropagation algorithm is achieved by using skip connections between layers, so that each layer learns only the residual function relative to the results of the previous layer. The disadvantage of this approach is its limitation to training only synaptic weights without training the type and parameters of the membership function.

These methods [1-17] are usually focused on learning synaptic weights or membership functions. However, they cannot be directly used to analyze and predict the state of the electronic environment in special-purpose communication systems.

The use of the known [1-17] algorithms (methods, techniques) of training artificial neural networks for forecasting the electronic environment does not meet the existing and future requirements for them, namely:

- increasing the amount of input information that artificial neural networks can process;

- increasing the reliability of decision-making by intelligent decision support systems in the analysis of the electronic environment;

- increasing the speed of adaptation of the architecture and parameters of artificial neural networks in accordance with the tasks that arise;

- ensuring the predictability of the learning process of artificial neural networks in the analysis of the electronic environment;

- ensuring the calculation of large data sets within one epoch without saving previous calculations;

- providing training for individual elements of the architecture of artificial neural networks;

- combining the architectures of artificial neural networks (individual nodes).

3. The aim and objectives of the study

The aim of the study is to develop a method of training artificial neural networks for intelligent decision support systems that solve problems of analysis and forecasting of the electronic environment, which makes it possible to process more information while keeping decisions unambiguous.

To achieve this aim, the following tasks were set:

- to determine the possibility of training the type and parameters of the membership function, as well as the architecture and parameters of a single network node in addition to training the synaptic weights of the artificial neural network;

- to conduct approbation of the proposed method.

4. Determination of the possibility of training the type and parameters of the membership function as well as the architecture and parameters of a particular network node

To solve the problems of analysis and forecasting of the electronic environment of special-purpose networks, the authors propose to use an evolving ANN cascade.

The architecture of the evolving ANN cascade [16-18] is presented in Fig. 1.

The zero layer of the system receives the (n × 1)-dimensional vector of input signals x(k) = (x1(k), x2(k), ..., xn(k))^T, which is then transmitted to the first hidden layer containing nodes-neurons, each of which has two inputs.

Output signals y_s^[1], s = 1, 2, ..., 0.5n(n - 1) = C_n^2, are formed at the outputs of the nodes of the first hidden layer. These signals then enter the selection block SB, which sorts the nodes of the first hidden layer according to the accepted criterion (usually the value of the mean square error σ_s^2[1]) so that σ_1^2[1] ≤ σ_2^2[1] ≤ ... The two best outputs of the selection block, y_1^[1]* and y_2^[1]*, are fed to the input of a single node-neuron of the second layer, at the output of which the output signal y^[2]* is formed.

This output signal together with the output signal of the selection unit get to the input of the node-neuron of the next layer. The process of building cascades continues until the required accuracy of information processing is achieved.

As nodes of the considered evolving ANN cascade, two-input neuro-fuzzy systems that were considered earlier [1], and also two-input neuro-fuzzy nodes which architecture will be considered further can be used.
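The cascade-growing scheme of Fig. 1 can be sketched in a few lines of Python. This is an illustrative reading, not the authors' implementation: each two-input node is assumed to be a quadratic polynomial fitted by least squares, and the selection block keeps the two best nodes ranked by mean squared error.

```python
import numpy as np
from itertools import combinations

def fit_two_input_node(u1, u2, y):
    """Fit a two-input node (assumed quadratic polynomial) by least squares.
    Returns the node weights and its mean squared error."""
    Phi = np.column_stack([np.ones_like(u1), u1, u2, u1 * u2, u1**2, u2**2])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    mse = float(np.mean((Phi @ w - y) ** 2))
    return w, mse

def node_output(w, u1, u2):
    Phi = np.column_stack([np.ones_like(u1), u1, u2, u1 * u2, u1**2, u2**2])
    return Phi @ w

def grow_cascade(X, y, max_cascades=5, tol=1e-4):
    """Selection block SB: fit one node per input pair, rank nodes by MSE,
    then stack cascade nodes on the two best outputs until accuracy
    stops improving (the stopping rule here is an assumption)."""
    scored = []
    for i, j in combinations(range(X.shape[1]), 2):
        w, mse = fit_two_input_node(X[:, i], X[:, j], y)
        scored.append((mse, w, i, j))
    scored.sort(key=lambda t: t[0])              # SB sorts nodes by MSE
    (best_mse, w1, i1, j1), (_, w2, i2, j2) = scored[0], scored[1]
    out = node_output(w1, X[:, i1], X[:, j1])    # best first-layer output
    y2 = node_output(w2, X[:, i2], X[:, j2])     # second-best output
    for _ in range(max_cascades):
        w, mse = fit_two_input_node(out, y2, y)  # next cascade node
        if best_mse - mse < tol:                 # no real gain: stop growing
            break
        out, best_mse = node_output(w, out, y2), mse
    return out, best_mse
```

Because each cascade node takes the previous cascade output as one of its two inputs, the error is non-increasing as cascades are added, which mirrors the growth-until-accuracy behavior described above.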

The problem of ANN learning is clearly complicated if the data coming to the input of the neural network are non-stationary and nonlinear, contain quasi-periodic, stochastic and chaotic components.

Fig. 1. Architecture of evolving ANN cascade [16—18]


In these conditions, nonlinear models based on the mathematical apparatus of computational intelligence [2-4] and, first of all, neuro-fuzzy systems have proved to be the best. Their advantages are high approximating and extrapolating properties, learning ability, and transparency and interpretability of the results. NARX models should also be mentioned here; they have the form:

ŷ(k) = f(y(k - 1), y(k - 2), ..., y(k - ny), x(k - 1), ..., x(k - nx)), (1)

where ŷ(k) is the estimate (forecast) of the sequence at the moment of discrete time k = 1, 2, ...; f(·) is some nonlinear transformation implemented by the neuro-fuzzy system; x(k) is the exogenous factor that determines the behavior of y(k).

It can be seen that the description (1) corresponds to the ANARX models (Additive Nonlinear AutoRegressive with eXogenous inputs) and WANARX models (Weighted Additive Nonlinear AutoRegressive with eXogenous inputs). Such models are well studied and there are many architectures and algorithms for their training, but it is assumed that the order of the model is somehow predetermined.

In the case of structural nonstationarity of the studied series (in this case it is the analysis of the electronic situation), these orders are a priori unknown and must be adjusted in the learning process.

Given the above, the classic procedure for learning networks is the adjustment of synaptic weights, without taking into account other network learning opportunities, such as the type of architecture of individual network nodes and network composition (node combinations).

Fig. 2 shows the proposed method of training of the artificial neural network.

(The flowchart comprises: input of source data; determination of the composition of the ANN and the type of the model nx, ny; adjustment of the architecture of a single node; correction of the neuron weights; adjustment of the type and parameters of the MF; and the subsequent capability checks.)

Fig. 2. Algorithm of functioning and training of the evolving ANN

Improvement of this learning algorithm consists in adding the following procedures to the known methods of training artificial neural networks: determining the composition of the ANN and the order of the model nx, ny; adjusting the architecture of a single node; checking the capabilities of the architecture of a single ANN node and of the ANN as a whole.

Additional training of artificial neural networks, not taken into account in [1-17], is performed for:

- the architectures of artificial neural networks depending on the amount of source information (number of layers, number of hidden layers, number of connections between neurons in the layer and between layers);

- the architecture and parameters of a separate node of artificial neural networks;

- the possibility of combining nodes of an artificial neural network.


Step 1. The initial step consists in entering the initial data.

Step 2. Determination of the composition of the ANN (number of nodes) and the type of the model nx, ny.

Step 3. Correction of the architecture of a single ANN node.

Step 4. Correction of the weights of a single node neuron.

Step 5. Correction of the type and parameters of the membership function (MF). It should be noted that the implementation of steps 4 and 5 can be performed both sequentially and in parallel depending on the software implementation.

Steps 6-7. Check of the capabilities of the architecture of a single node.

Steps 8-9. Check of the architecture's ability to process the amount of information that is coming to its input.

Let us consider the proposed method in detail.

Step 1. Entering the initial data.

At this stage, the initial parameters of the network are entered: the number of layers, the number of nodes, the number of connections between them, the initial values of the membership function.

Step 2. Determination of the composition of the ANN (number of nodes) and the type of the model nx, ny.

ANARX-model looks as follows [7, 17]:

y(k) = f1(y(k - 1), x(k - 1)) + f2(y(k - 2), x(k - 2)) + ... + fn(y(k - n), x(k - n)) = Σ(l=1..n) fl(y(k - l), x(k - l)), (2)

where n = max{ny, nx}; the initial task of the synthesis of the prediction system is thus decomposed into many local problems of parametric identification of node models with two inputs y(k - l), x(k - l), l = 1, 2, ..., n.

Fig. 3 shows the architecture of the ANARX-system, formed by two lines of pure delay elements z^(-1) (z^(-1) y(k) = y(k - 1)) and n parallel-connected nodes N^[l].

Training of these nodes is carried out independently of each other, and the introduction of new nodes or the exclusion of redundant ones does not affect all other neurons, so the evolution of such a system is realized by elementary manipulation of the number of nodes.
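The additivity of eq. (2) can be illustrated with a short sketch. This is a hypothetical illustration: the node functions f_l below are invented stand-ins for trained neuro-fuzzy nodes, chosen only to show that each node sees just its own lag pair and can be added or removed independently.

```python
import numpy as np

def anarx_predict(nodes, y_hist, x_hist):
    """ANARX model of eq. (2): y_hat(k) = sum_l f_l(y(k-l), x(k-l)).
    nodes  : list of callables f_l(y_lag, x_lag), l = 1..n
    y_hist : [y(k-1), y(k-2), ..., y(k-n)]
    x_hist : [x(k-1), x(k-2), ..., x(k-n)]"""
    return sum(f(yl, xl) for f, yl, xl in zip(nodes, y_hist, x_hist))

# Because the model is additive, dropping or adding a node changes only
# its own term; the remaining nodes need no retraining.
nodes = [
    lambda y, x: 0.5 * y + 0.1 * x,   # f_1 (illustrative)
    lambda y, x: 0.2 * y * x,         # f_2 (illustrative)
]
y_hat = anarx_predict(nodes, y_hist=[1.0, 2.0], x_hist=[0.5, 0.25])
# = 0.55 + 0.10 = 0.65
```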

As the nodes of the ANARX-system under consideration, the two-input neuro-fuzzy systems discussed earlier, as well as the two-input neo-fuzzy nodes, are used.

Step 3. Correction of the architecture of a single ANN node.

When the speed of data processing and simplicity of numerical implementation of the computational intelligence system come to the fore, instead of neuro-fuzzy nodes of the ANARX model, it is advisable to use neo-fuzzy neurons, which belong to nonlinear learning systems.

The architecture of the neo-fuzzy neuron as a node of the ANARX-system is shown in Fig. 4. The advantages of the neo-fuzzy neuron include high learning speed, computational simplicity, good approximating properties, and the ability to find a global minimum of the learning criterion.

The constituent elements of the neo-fuzzy neuron are the nonlinear synapses NSy, NSx, which implement the rules of zero-order Takagi-Sugeno fuzzy inference; however, as is easy to notice, the neo-fuzzy neuron is structurally much simpler than the neuro-fuzzy node shown in Fig. 3.

When the input of such a node receives signals y (k -1), x(k -1), the output value is formed:

ŷ(k) = φy(k) + φx(k) = Σ(i=1..h) wiy μiy(y(k - 1)) + Σ(i=1..h) wix μix(x(k - 1)), (3)

where wiy, wix are the synaptic weights of the neo-fuzzy neuron and μiy(·), μix(·) are its membership functions.

And at the output of the ANARX-model as a whole:

ŷ(k) = Σ(l=1..n) (Σ(i=1..h) w^l_iy μ^l_iy(y(k - l)) + Σ(i=1..h) w^l_ix μ^l_ix(x(k - l))), (4)

Since the neo-fuzzy neuron is itself an additive model, the ANARX model built on neo-fuzzy neurons is doubly additive.
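A minimal sketch of the neo-fuzzy neuron of eq. (3), assuming triangular membership functions on a fixed grid of centers (the grids and weights below are illustrative, not taken from the paper):

```python
import numpy as np

def triangular_mfs(u, centers):
    """Triangular membership functions on a grid of centers; inside the
    grid they satisfy the unity partition: sum_i mu_i(u) = 1."""
    mu = np.zeros(len(centers))
    u = float(np.clip(u, centers[0], centers[-1]))
    i = int(np.searchsorted(centers, u))
    if i == 0:                      # u sits exactly on the first center
        mu[0] = 1.0
    else:                           # u lies in [centers[i-1], centers[i]]
        mu[i - 1] = (centers[i] - u) / (centers[i] - centers[i - 1])
        mu[i] = 1.0 - mu[i - 1]
    return mu

def neo_fuzzy_neuron(y_in, x_in, wy, wx, cy, cx):
    """Eq. (3): the outputs of the two nonlinear synapses NSy, NSx are
    weighted sums of membership degrees, and the node sums them."""
    return wy @ triangular_mfs(y_in, cy) + wx @ triangular_mfs(x_in, cx)
```

Only the two membership functions adjacent to the input value are nonzero, which is what makes the neo-fuzzy neuron computationally cheap.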

Step 4. Correction of the weights of a single node neuron.

Correction of neuronal weights in the ANN node is based on the known scientific approaches, described for example in [18, 19].

Fig. 3. Architecture of the ANARX-system [16-18]

Fig. 4. Neo-fuzzy node of the ANARX system [16-18]

Step 5. Correction of the type and parameters of the membership function (MF).

As membership functions in the neo-fuzzy neuron, triangular functions satisfying the unity partition conditions are usually used.

It should be mentioned that B-splines are a kind of generalized membership functions: for example, at q=2 we obtain traditional triangular membership functions, at q=4 we obtain cubic splines, etc.

Introducing further the vector notation, the membership functions are required to satisfy the unity partition conditions:

Σ(i=1..h) μiy(y(k - l)) = 1, Σ(i=1..h) μix(x(k - l)) = 1, (5)

and the vector of synaptic weights of a node is w^l = (w^l_1y, ..., w^l_hy, w^l_1x, ..., w^l_hx)^T, which simplifies the design of the node, eliminating the normalization layer. As membership functions of the neo-fuzzy neuron, it was proposed to use B-splines, which provide a higher quality of approximation and also meet the unity partition conditions. In this case, for the B-spline of the q-th order, the following can be written:

μ^q_iy(y(k - l)) = 1 if ciy ≤ y(k - l) < c(i+1)y, and 0 otherwise, for q = 1;

μ^q_iy(y(k - l)) = ((y(k - l) - ciy)/(c(i+q-1)y - ciy)) μ^(q-1)_iy(y(k - l)) + ((c(i+q)y - y(k - l))/(c(i+q)y - c(i+1)y)) μ^(q-1)_(i+1)y(y(k - l)), for q > 1, i = 1, ..., h - q, (6)

and similarly:

μ^q_ix(x(k - l)) = 1 if cix ≤ x(k - l) < c(i+1)x, and 0 otherwise, for q = 1;

μ^q_ix(x(k - l)) = ((x(k - l) - cix)/(c(i+q-1)x - cix)) μ^(q-1)_ix(x(k - l)) + ((c(i+q)x - x(k - l))/(c(i+q)x - c(i+1)x)) μ^(q-1)_(i+1)x(x(k - l)), for q > 1, i = 1, ..., h - q, (7)

where cy = (c1y, c2y, ..., chy)^T and cx = (c1x, c2x, ..., chx)^T are the vectors of the membership function centers (knots).

Introducing the vector of membership functions

φ^l(k) = (μ1y(y(k - l)), ..., μhy(y(k - l)), μ1x(x(k - l)), ..., μhx(x(k - l)))^T,

the node output (3) can be rewritten in vector form:

ŷ^l(k) = w^lT φ^l(k), (8)

and the weights can be trained using the optimal gradient one-step Kaczmarz-Widrow-Hoff algorithm:

w^l(k) = w^l(k - 1) + ((y(k) - w^lT(k - 1) φ^l(k)) / ||φ^l(k)||²) φ^l(k), (9)

or its exponentially weighted modification:

w^l(k) = w^l(k - 1) + r^(-1)(k) (y(k) - w^lT(k - 1) φ^l(k)) φ^l(k),

r(k) = α r(k - 1) + ||φ^l(k)||², 0 ≤ α ≤ 1, (10)

which has both filtering and tracking properties. It can also be seen that for α = 0, (10) completely coincides with the optimal Kaczmarz-Widrow-Hoff algorithm (9).
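The B-spline recursion of eqs. (6), (7) and the Kaczmarz-Widrow-Hoff update of eq. (9) can be sketched as follows. This is illustrative only: the knot grid is invented, and the handling of the grid edges is an assumption of the sketch.

```python
import numpy as np

def bspline_mf(u, c, i, q):
    """B-spline membership function of order q over the knot grid c
    (Cox-de Boor recursion matching eqs. (6), (7)): q = 1 gives
    indicator functions, q = 2 the usual triangular MFs."""
    if q == 1:
        return 1.0 if c[i] <= u < c[i + 1] else 0.0
    left = right = 0.0
    if c[i + q - 1] != c[i]:
        left = (u - c[i]) / (c[i + q - 1] - c[i]) * bspline_mf(u, c, i, q - 1)
    if c[i + q] != c[i + 1]:
        right = (c[i + q] - u) / (c[i + q] - c[i + 1]) * bspline_mf(u, c, i + 1, q - 1)
    return left + right

def kaczmarz_step(w, phi, y):
    """One-step Kaczmarz-Widrow-Hoff update of eq. (9): after the step,
    the node output w @ phi reproduces the target y exactly."""
    return w + (y - w @ phi) / (phi @ phi) * phi
```

For any point inside the grid, the B-splines of a given order sum to one, which is the unity partition condition (5).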

Steps 6-7. Check of the capabilities of a single node architecture.

At this stage, the ability of the architecture of the ANN node with certain parameters to perform a computational task is tested. The ability to perform a computational task is determined by comparing the computational capabilities of the architecture and parameters of the ANN node with the required computing resources of the ANN node.

In the case of mismatch of computing resources of the ANN node, there is a change of parameters of the ANN node, and in case of impossibility to increase computing resources of the node - there is a change of architecture and parameters of the ANN node.

Steps 8-9. Check of the architecture's ability to process the amount of information coming to its input.

Since each node N^[l] of the ANARX model is configured independently of the others and is essentially a separate neuro-fuzzy system, the idea of combining an ensemble of neural networks can be used to improve the quality of the obtained predictions. This approach naturally leads to the architecture of the weighted ANARX (WANARX) neuro-fuzzy system shown in Fig. 5.

The output signal of this system can be written as:

ŷ(k) = Σ(l=1..n) cl ŷ^l(k) = c^T ŷ(k), (11)

where ŷ(k) = (ŷ^1(k), ŷ^2(k), ..., ŷ^n(k))^T, and c = (c1, c2, ..., cn)^T is the vector of weight coefficients that are adjusted and determine the proximity of the signals ŷ^l(k) to the process y(k) being predicted (processed); c meets the unbiasedness condition:

Σ(l=1..n) cl = c^T In = 1, (12)

where In is the (n × 1) vector formed by units.

To find the vector c in batch mode, the method of undetermined Lagrange multipliers can be used. To do this, the sequence of errors

v(k) = y(k) - ŷ(k) = y(k) - c^T ŷ(k) = c^T In y(k) - c^T ŷ(k) = c^T (In y(k) - ŷ(k)) = c^T V(k) (13)

and the Lagrange function

L(c, λ) = Σk (c^T V(k))² + λ(c^T In - 1) = c^T R c + λ(c^T In - 1) (14)

are used, where λ is the undetermined Lagrange multiplier and R = Σk V(k) V^T(k) is the correlation matrix of errors. The system of Karush-Kuhn-Tucker equations is then solved:

∇c L(c, λ) = 2Rc + λIn = 0,

∂L(c, λ)/∂λ = c^T In - 1 = 0. (15)

The solution of the system (15) leads to:

c = R^(-1) In (In^T R^(-1) In)^(-1),

λ = -2 (In^T R^(-1) In)^(-1), (16)

while the Lagrangian (14) at the saddle point takes the value:

L*(c, λ) = (In^T R^(-1) In)^(-1). (17)

The implementation of the algorithm (16) may encounter significant difficulties in processing information with a high degree of correlation of the signals ŷ^l(k). This leads to poor conditioning of the matrix R, which must be inverted at each instant of real time k.

To avoid this, we write the Lagrange function (14) as:

L(c, λ) = (y(k) - c^T ŷ(k))² + λ(c^T In - 1), (18)

and use a gradient algorithm for finding its saddle point based on the Arrow-Hurwicz procedure [17, 18]:

c(k) = c(k - 1) - ηc(k) ∇c L(c, λ),

λ(k) = λ(k - 1) + ηλ(k) ∂L(c, λ)/∂λ, (19)

or

c(k) = c(k - 1) + ηc(k) (2(y(k) - c^T(k - 1) ŷ(k)) ŷ(k) - λ(k - 1) In) = c(k - 1) + ηc(k) (2v(k) ŷ(k) - λ(k - 1) In),

λ(k) = λ(k - 1) + ηλ(k) (c^T(k) In - 1), (20)

where ηc(k), ηλ(k) are the learning step parameters.

The Arrow-Hurwicz procedure converges to the saddle point under fairly general assumptions about the values of ηc(k), ηλ(k), but these parameters can be optimized to speed up the learning process.
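Both ways of finding the combination weights c, the batch Lagrange solution and the online saddle-point search, can be illustrated numerically. The error matrix layout (one row of model errors per model) and the fixed dual step eta_lam are assumptions of this sketch, not the authors' code.

```python
import numpy as np

def ensemble_weights(V):
    """Batch solution of eq. (16): c = R^-1 1 / (1^T R^-1 1), where
    R = sum_k V(k) V(k)^T and row l of V holds the errors of model l."""
    R = V @ V.T
    ones = np.ones(V.shape[0])
    r = np.linalg.solve(R, ones)
    return r / (ones @ r)            # weights sum to 1 (unbiasedness)

def arrow_hurwitz_step(c, lam, y_k, yhat_k, eta_lam=0.01):
    """One online Arrow-Hurwicz iteration of eq. (20) with the optimal
    primal step of eq. (24); lam is the Lagrange multiplier."""
    v = y_k - c @ yhat_k                          # combination error
    denom = 2.0 * v * (yhat_k @ yhat_k) - lam * yhat_k.sum()
    if abs(denom) > 1e-12:                        # optimal step exists
        c = c + v * (2.0 * v * yhat_k - lam * np.ones_like(c)) / denom
    lam = lam + eta_lam * (c.sum() - 1.0)         # dual (multiplier) update
    return c, lam
```

Models with smaller errors receive larger weights from the batch solution, while the online step avoids inverting the possibly ill-conditioned matrix R.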

Fig. 5. Architecture of the WANARX-system [16-18]

To do this, we multiply the first relation (20) on the left by ŷ^T(k):

ŷ^T(k) c(k) = ŷ^T(k) c(k - 1) + ηc(k) (2v(k) ||ŷ(k)||² - λ(k - 1) ŷ^T(k) In), (21)

and use an additional function that characterizes the convergence of the criterion:

(y(k) - ŷ^T(k) c(k))² = v²(k) - 2ηc(k) v(k) (2v(k) ||ŷ(k)||² - λ(k - 1) ŷ^T(k) In) + ηc²(k) (2v(k) ||ŷ(k)||² - λ(k - 1) ŷ^T(k) In)². (22)

The solution of the equation

d(y(k) - ŷ^T(k) c(k))² / dηc(k) = -2v(k) (2v(k) ||ŷ(k)||² - λ(k - 1) ŷ^T(k) In) + 2ηc(k) (2v(k) ||ŷ(k)||² - λ(k - 1) ŷ^T(k) In)² = 0 (23)

allows us to get the optimal value of the learning step ηc(k) in the form:

ηc(k) = v(k) / (2v(k) ||ŷ(k)||² - λ(k - 1) ŷ^T(k) In). (24)

substituting it in (20), it is finally possible to write down:

v(k)(2v(k)|y(k)) -X(k- 1)yTIn

c (k ) = c (k -1)

2v (k)|| y (k))2-X(k - 1)y TIn

X(k)=X(k- 1) + nx(k)(cT(k)In -1).

- TRC 274 H/V/UHF Jammer (20-3000), which simulated the operation of the electronic warfare system (transmitter power is 20 W, the frequency band that can be suppressed is 10 MHz, the type of interference is the noise interference with frequency manipulation;strategy of the REB complex is dynamic);

- MikroTik NetMetal 5 broadband radio access stations with the following parameters (128 positional quadrature amplitude manipulation; radiation bandwidth is 40 MHz, radiation power is 1 W; radiation frequency is 2.4 GHz).

To predict the state of the electronic environment, the training of the artificial neural network lasted for three sessions of 17 hours each (full measurement cycle of Agilent OmniBER 718 for broadband facilities).

After that, the forecast of the state of the electronic situation was carried out. Approbation was performed under equal conditions for each of the systems.

The square root of the RMSE standard error was used as a prediction quality criterion.

A multilayer perceptron (MLP), a radial basis function network (RBFN), and an adaptive neuro-fuzzy inference system (ANFIS) were used to compare prediction quality.

The results of forecasting the state of the electronic environment for different systems are presented in Table 1.

Table 1

Forecasting results for different systems

(24)

(25)

Name of the system Number of adjustable parameters RMSE (training) RMSE (test) Time, sec

MLP 53 0.1158 0.1507 0.1181

RBFN 23 0.1166 0.2255 0.1181

ANFIS 81 0.0659 0.1965 0.1181

Evolving cascade system with neo-fuzzy nodes 21 0.0584 0.1181 0.1181

It is easy to notice that the procedure (25) coincides with the Kachmazh-Widrow-Hoff algorithm (9).

Based on equations (11)-(25), the ability of the network architecture and parameters to process information with a given degree of reliability of the obtained results is checked. On the basis of the specified comparison, at the first stage the decision concerning the adjustment of ANN parameters, and in case of adjustment impossibility of the ANN architecture is made.

5. Approbation of the method of training artificial neural networks for intelligent decision support systems

Simulation of the proposed method was performed in the Matlab 2019 software environment.

To demonstrate the effectiveness of the proposed weighted ANARX system, a forecast of the electronic environment of special radio systems was made.

To evaluate the effectiveness of the proposed method, modeling was performed using the following components:

- personal computer with special software and Matlab 2019;

- Agilent OmniBER 718 digital flow analyzer with software and a set of connecting cables for measuring the parameters;

Since the multilayer perceptron cannot operate in near-real-time mode, two variants of this system were selected for comparison.

The first multilayer perceptron was trained for one epoch. The second multilayer perceptron was trained for five epochs; in this case, the number of adjustable parameters was approximately equal to that of the proposed systems. In both cases, the multilayer perceptrons had 4 inputs and 7 nodes in the hidden layer. The number of adjustable parameters was 43.
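The stated parameter count can be verified directly: a perceptron with 4 inputs, 7 hidden nodes, and one output has 4·7 + 7 = 35 hidden-layer parameters (weights plus biases) and 7 + 1 = 8 output-layer parameters. A minimal check:

```python
def mlp_param_count(n_inputs, n_hidden, n_outputs=1):
    """Adjustable parameters (weights + biases) of a one-hidden-layer MLP."""
    hidden = n_inputs * n_hidden + n_hidden    # hidden-layer weights + biases
    output = n_hidden * n_outputs + n_outputs  # output-layer weights + biases
    return hidden + output

print(mlp_param_count(4, 7))  # → 43, the count stated in the text
```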

For the second multilayer perceptron, the operating time was almost twice as long, but the prediction quality was also almost twice as good. Two radial basis function networks were also selected.

The number of parameters of the first radial basis function network was almost equal to the number of parameters of the proposed systems. The architecture of the second radial basis function network was chosen with regard to the quality of its operation. In the first case, the radial basis function network had 3 inputs and 7 kernel functions; in the second case, it also had 3 inputs, but 12 kernel functions.
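For reference, such a radial basis function network (3 inputs, 7 kernel functions) outputs a weighted sum of Gaussian kernels. A minimal sketch, with illustrative centers, width, and weights (none of these values come from the experiment):

```python
import numpy as np

def rbfn_predict(x, centers, width, weights, bias=0.0):
    """Forward pass of a Gaussian RBF network: y = w . phi(x) + b."""
    # One Gaussian kernel per center; width is a shared spread parameter.
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * width ** 2))
    return float(weights @ phi + bias)

rng = np.random.default_rng(1)
centers = rng.normal(size=(7, 3))   # 7 kernel functions over 3 inputs
weights = rng.normal(size=7)
y = rbfn_predict(np.zeros(3), centers, width=1.0, weights=weights)
```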

The ANFIS system showed one of the best prediction results in this experiment. It had 4 inputs and 55 nodes and was trained for five epochs. For training, the parameter a was set equal to 0.62. The system contained 37 adjustable parameters. B-splines with q = 2 (triangular membership functions) were used as membership functions. The forecast quality of this system was quite high, and the training time was the shortest.
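The triangular membership functions mentioned above can be sketched as follows (the breakpoints a, b, c in the example call are illustrative):

```python
def triangular_mf(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0          # outside the support
    if x <= b:
        return (x - a) / (b - a)  # rising edge
    return (c - x) / (c - b)      # falling edge

print(triangular_mf(0.5, 0.0, 0.5, 1.0))  # → 1.0 at the peak
```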

Table 2

Comparison of the forecasting results

Systems                                 Number of adjustable parameters   RMSE (training)   RMSE (test)   Time, sec

MLP (version 1)                         44                                0.0610            0.0710        0.4263

MLP (version 2)                         44                                0.0246            0.0391        0.9319

RBFN (version 1)                        37                                0.0661            0.0842        0.6562

RBFN (version 2)                        62                                0.0473            0.0604        1.1250

ANFIS                                   82                                0.0247            0.0396        0.7131

ANARX with neuro-fuzzy nodes            39                                0.0923            0.0952        0.4310

Weighted ANARX with neuro-fuzzy nodes   38                                0.0437            0.0553        0.3760

The study of the developed method showed that this training method provides on average 10-18 % higher learning efficiency of artificial neural networks and does not accumulate errors during training (Tables 1, 2). This is evident from the data-processing efficiency shown in the last columns of Tables 1 and 2.
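The 10-18 % figure can be cross-checked against the processing times in Table 2: relative to the fastest competing system (MLP, version 1), the proposed weighted ANARX system is about 12 % faster, which falls within the stated range. A worked check (values taken from Table 2):

```python
# Processing times from Table 2 (seconds).
t_mlp1 = 0.4263            # fastest competing system (MLP, version 1)
t_weighted_anarx = 0.3760  # proposed weighted ANARX system

gain = (t_mlp1 - t_weighted_anarx) / t_mlp1
print(f"{gain:.1%}")  # → 11.8%
```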

6. Discussion of the results of development of training method of artificial neural networks for intelligent decision support systems

The main advantages of the proposed training method are:

- absence of accumulation of learning errors during the training of artificial neural networks, achieved by adjusting the parameters and architecture of the artificial neural network; this follows from expressions (1)-(3), (9);

- unambiguity of the obtained results - expressions (3), (4);

- wide scope of use (decision support systems);

- simplicity of mathematical calculations;

- possibility of adapting the system during operation and its individual elements - expression (9);

- possibility of synthesizing the optimal structure of the decision-support system - expressions (11)-(25).

As for limitations, the method is adapted for the analysis and forecasting of the electronic environment under uncertainty and high dynamics. Nevertheless, the proposed method can successfully solve data analysis and forecasting problems if it is appropriately adapted to a particular type of decision support system.

The disadvantages of the proposed method include:

- loss of informativeness in the assessment (forecasting) due to the construction of the membership function. This loss can be reduced by choosing the type of membership function and its parameters during the practical implementation of the proposed method in decision support systems; the choice of the type of membership function depends on the computing resources of the particular electronic computing device;

- lower accuracy of assessment for a single state-estimation parameter;

- loss of accuracy of results during the reconstruction of the architecture of the artificial neural network.

This method will allow:

- to train artificial neural networks;

- to identify effective measures to improve the efficiency of artificial neural networks;

- to increase the efficiency of artificial neural networks by training the parameters and network architecture;

- to reduce the use of computing resources of decision support systems;

- to develop measures aimed at improving the learning efficiency of artificial neural networks;

- to increase the efficiency of information processing in artificial neural networks.

This research is a further development of research conducted by the authors. It is aimed at the development of theoretical foundations for improving the efficiency of artificial intelligence systems published earlier [1, 2, 29-32].

Areas of further research should be aimed at reducing the computational costs in the processing of various types of data in special-purpose systems.

7. Conclusions

1. A method of training artificial neural networks for intelligent decision support systems has been developed.

Improvement of the efficiency of information processing, reduction of the estimation and forecasting error are achieved by:

- learning not only the synaptic weights of the artificial neural network, but also the type and parameters of the membership function;

- learning the architecture of artificial neural networks;

- the possibility of combining elements of an artificial neural network, which will allow the adaptation of the architecture and parameters of the network to solve a specific type of problems;

- learning opportunities for individual elements of the artificial neural network;

- calculation of data within one epoch, without the need to store previous calculations; this reduces information processing time by eliminating repeated access to the database;

- absence of accumulation of learning errors of artificial neural networks as a result of processing information arriving at the input of artificial neural networks.

2. The proposed method was approbated on the example of forecasting the state of the electronic environment. The example showed a 10-18 % increase in the information-processing efficiency of artificial neural networks due to the use of additional training procedures.

Acknowledgments

The author's team is grateful for the help to:

- doctor of technical sciences, professor Oleksiy Viktorovych Kuvshinov - deputy head of the educational and scientific institute of the National defense university of Ukraine named after Ivan Chernyakhovsky;

- doctor of technical sciences, senior researcher Yuriy Vladimirovich Zhuravskiy - leading researcher of the research center of the Zhytomyr military institute named after S. P. Korolev;

- doctor of technical sciences, senior researcher Oleg Yaroslavovich Sova - head of the department of «Automated control systems» of the military institute of telecommunications and informatization named after the Heroes of Krut;

- candidate of technical sciences, associate professor Oleksandr Mykolayovych Bashkirov - leading researcher at the Central research institute of armament and military equipment of the Armed Forces of Ukraine.

References

1. Kalantaievska, S., Pievtsov, H., Kuvshynov, O., Shyshatskyi, A., Yarosh, S., Gatsenko, S. et. al. (2018). Method of integral estimation of channel state in the multiantenna radio communication systems. Eastern-European Journal of Enterprise Technologies, 5 (9 (95)), 60-76. doi: https://doi.org/10.15587/1729-4061.2018.144085

2. Kuchuk, N., Mohammed, A. S., Shyshatskyi, A., Nalapko, O. (2019). The method of improving the efficiency of routes selection in networks of connection with the possibility of self-organization. International Journal of Advanced Trends in Computer Science and Engineering, 8 (1.2), 1-6. doi: https://doi.org/10.30534/ijatcse/2019/0181.22019

3. Zhang, J., Ding, W. (2017). Prediction of Air Pollutants Concentration Based on an Extreme Learning Machine: The Case of Hong Kong. International Journal of Environmental Research and Public Health, 14 (2), 114. doi: https://doi.org/10.3390/ijerph14020114

4. Katranzhy, L., Podskrebko, O., Krasko, V. (2018). Modelling the dynamics of the adequacy of bank's regulatory capital. Baltic Journal of Economic Studies, 4 (1), 188-194. doi: https://doi.org/10.30525/2256-0742/2018-4-1-188-194

5. Manea, E., Di Carlo, D., Depellegrin, D., Agardy, T., Gissi, E. (2019). Multidimensional assessment of supporting ecosystem services for marine spatial planning of the Adriatic Sea. Ecological Indicators, 101, 821-837. doi: https://doi.org/10.1016/j.ecolind.2018.12.017

6. Çavdar, A. B., Ferhatosmanoglu, N. (2018). Airline customer lifetime value estimation using data analytics supported by social network information. Journal of Air Transport Management, 67, 19-33. doi: https://doi.org/10.1016/j.jairtraman.2017.10.007

7. Kachayeva, G. I., Mustafayev, A. G. (2018). The use of neural networks for the automatic analysis of electrocardiograms in diagnosis of cardiovascular diseases. Herald of Dagestan State Technical University. Technical Sciences, 45 (2), 114-124. doi: https://doi.org/10.21822/2073-6185-2018-45-2-114-124

8. Zhdanov, V. V. (2016). Experimental method to predict avalanches based on neural networks. Ice and Snow, 56 (4), 502-510. doi: https://doi.org/10.15356/2076-6734-2016-4-502-510

9. Kanev, A., Nasteka, A., Bessonova, C., Nevmerzhitsky, D., Silaev, A., Efremov, A., Nikiforova, K. (2017). Anomaly detection in wireless sensor network of the «smart home» system. 2017 20th Conference of Open Innovations Association (FRUCT), 118-124. doi: https://doi.org/10.23919/fruct.2017.8071301

10. Sreeshakthy, M., Preethi, J. (2016). Classification of human emotion from deap EEG signal using hybrid improved neural networks with Cuckoo search. BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 6 (3-4), 60-73. Available at: https://www.slideshare.net/bpatrut/classification-of-human-emotion-from-deap-eeg-signal-using-hybrid-improved-neural-networks-with-cuckoo-search

11. Chica, J., Zaputt, S., Encalada, J., Salamea, C., Montalvo, M. (2019). Objective assessment of skin repigmentation using a multilayer perceptron. Journal of Medical Signals & Sensors, 9 (2), 88. doi: https://doi.org/10.4103/jmss.jmss_52_18

12. Massel, L. V., Gerget, O. M., Massel, A. G., Mamedov, T. G. (2019). The Use of Machine Learning in Situational Management in Relation to the Tasks of the Power Industry. EPJ Web of Conferences, 217, 01010. doi: https://doi.org/10.1051/epjconf/201921701010

13. Abaci, K., Yamacli, V. (2019). Hybrid Artificial Neural Network by Using Differential Search Algorithm for Solving Power Flow Problem. Advances in Electrical and Computer Engineering, 19 (4), 57-64. doi: https://doi.org/10.4316/aece.2019.04007

14. Mishchuk, O. S., Vitynskyi, P. B. (2018). Neural Network with Combined Approximation of the Surface of the Response. Research Bulletin of the National Technical University of Ukraine «Kyiv Politechnic Institute», 2, 18-24. doi: https://doi.org/10.20535/1810-0546.2018.2.129022

15. Kazemi, M., Faezirad, M. (2018). Efficiency estimation using nonlinear influences of time lags in DEA Using Artificial Neural Networks. Industrial Management Journal, 10 (1), 17-34. doi: https://doi.org/10.22059/imj.2018.129192.1006898

16. Parapuram, G., Mokhtari, M., Ben Hmida, J. (2018). An Artificially Intelligent Technique to Generate Synthetic Geomechanical Well Logs for the Bakken Formation. Energies, 11 (3), 680. doi: https://doi.org/10.3390/en11030680

17. Prokoptsev, N. G., Alekseenko, A. E., Kholodov, Y. A. (2018). Traffic flow speed prediction on transportation graph with convolutional neural networks. Computer Research and Modeling, 10 (3), 359-367. doi: https://doi.org/10.20537/2076-7633-2018-10-3-359-367

18. Bodyanskiy, Y., Pliss, I., Vynokurova, O. (2013). Flexible Neo-fuzzy Neuron and Neuro-fuzzy Network for Monitoring Time Series Properties. Information Technology and Management Science, 16 (1), 47-52. doi: https://doi.org/10.2478/itms-2013-0007

19. Bodyanskiy, Ye., Pliss, I., Vynokurova, O. (2013). Flexible wavelet-neuro-fuzzy neuron in dynamic data mining tasks. Oil and Gas Power Engineering, 2 (20), 158-162. Available at: http://nbuv.gov.ua/UJRN/Nge_2013_2_18

20. Haykin, S. (1998). Neural Networks: A Comprehensive Foundation. Prentice Hall, 842.

21. Nelles, O. (2001). Nonlinear System Identification. Springer. doi: https://doi.org/10.1007/978-3-662-04323-3

22. Wang, L.-X., Mendel, J. M. (1992). Fuzzy basis functions, universal approximation, and orthogonal least-squares learning. IEEE Transactions on Neural Networks, 3 (5), 807-814. doi: https://doi.org/10.1109/72.159070

23. Kohonen, T. (1995). Self-Organizing Maps. Springer. doi: https://doi.org/10.1007/978-3-642-97610-0

24. Kasabov, N. (2003). Evolving Connectionist Systems. Springer. doi: https://doi.org/10.1007/978-1-4471-3740-5

25. Sugeno, M., Kang, G. T. (1988). Structure identification of fuzzy model. Fuzzy Sets and Systems, 28 (1), 15-33. doi: https://doi.org/10.1016/0165-0114(88)90113-3

26. Ljung, L. (1999). System Identification. Theory for the User. PTR Prentice Hall, Upper Saddle River, 609. Available at: https://www.twirpx.com/file/277211/

27. Otto, P., Bodyanskiy, Y., Kolodyazhniy, V. (2003). A new learning algorithm for a forecasting neuro-fuzzy network. Integrated Computer-Aided Engineering, 10 (4), 399-409. doi: https://doi.org/10.3233/ica-2003-10409

28. Narendra, K. S., Parthasarathy, K. (1990). Identification and control of dynamical systems using neural networks. IEEE Transactions on Neural Networks, 1 (1), 4-27. doi: https://doi.org/10.1109/72.80202

29. Petruk, S., Zhyvotovskyi, R., Shyshatskyi, A. (2018). Mathematical Model of MIMO. 2018 International Scientific-Practical Conference Problems of Infocommunications. Science and Technology (PIC S&T), 7-11. doi: https://doi.org/10.1109/infocommst.2018.8632163

30. Zhyvotovskyi, R., Shyshatskyi, A., Petruk, S. (2017). Structural-semantic model of communication channel. 2017 4th International Scientific-Practical Conference Problems of Infocommunications. Science and Technology (PIC S&T), 524-529. doi: https://doi.org/10.1109/infocommst.2017.8246454

31. Alieinykov, I., Thamer, K. A., Zhuravskyi, Y., Sova, O., Smirnova, N., Zhyvotovskyi, R. et. al. (2019). Development of a method of fuzzy evaluation of information and analytical support of strategic management. Eastern-European Journal of Enterprise Technologies, 6 (2 (102)), 16-27. doi: https://doi.org/10.15587/1729-4061.2019.184394

32. Koshlan, A., Salnikova, O., Chekhovska, M., Zhyvotovskyi, R., Prokopenko, Y., Hurskyi, T. et. al. (2019). Development of an algorithm for complex processing of geospatial data in the special-purpose geoinformation system in conditions of diversity and uncertainty of data. Eastern-European Journal of Enterprise Technologies, 5 (9 (101)), 35-45. doi: https://doi.org/10.15587/1729-4061.2019.180197
