Scientific article: 'Recurrent approximation as the tool for expansion of functions and modes of operation of neural network'


CC BY
Keywords

recurrent network, modes of operation, productive rules, analytical training of neuron, approximation error evaluation, coordination control, RANN

Abstract of the scientific article on computer and information sciences. Author: A. Trunov

The role of a recurrent artificial neural network (RANN) in solving problems of coordination control is considered. A RANN structure for information processing is formed on the basis of indicator vectors and recurrent approximation. It is demonstrated that zero correction, calibration, measurement, and determination of the approximation error make it possible to solve minimization problems, expand the functional capabilities, and implement new modes of operation of the network. Algorithms for the analytical training of a neuron with several inputs and an example of the formation of productive rules in solving a minimization problem are presented.


Recurrent approximation as the tool for expansion of functions and modes of operation of neural network

The paper considers the role of a recurrent artificial neural network (RANN) in the solution of specific problems of coordination control, the relevance of which is predetermined by the development of modern automated systems. We synthesized the RANN information processing structure, formed on the basis of the indicator vectors and recurrent approximation of a continuous function. New modes of its work and expanded functionality were examined. It was demonstrated that it is capable of implementing the modes of zero correction and calibration, preparing information on the error of approximation, solving the problem of minimization, and acting as a module of a decision making support system. We proposed a generalized algorithm for the analytical determination of the coefficients of synaptic weights and the evaluation of their error. It is shown that the application of the indicator vectors makes these algorithms practically independent of the selection of the initial approximation of the coefficients of synaptic weights, while the network acquires a mechanism of readjustment during optimal control. For its implementation, depending on the changes that occur to the object, their readjustment is conducted in accordance with the obtained analytical criteria of evaluation of the error of the coefficients of synaptic weights. The synthesized structure is able to realize algorithms that provide a necessary set of operating modes and the formation of productive or controlling rules based on the analysis of the behavior of the set of the indicator vectors. Its structure forms the information support of the conditional part of the rules "condition-action" and implements the effective part in the algorithms of coordination control.
It is also capable of implementing simple algorithms for finding roots and control that minimizes or maximizes a continuous function or the Lagrange function under conditions of existence of inequality constraints for a nonlinear object. The application of the obtained results is also useful for solving various separate problems: formation of productive rules for solving the problems of finding a simple root of a monotonic function, finding a non-simple root of a monotonic function, finding a root of an oscillating function, selecting a controlling influence, and the problem of the synthesis of a controlling influence. The obtained results continue and complement the practical implementation of the idea of recurrent approximation for solving the tasks of modeling and design.



UDC 517.962.27:004.8.032.26:519.876.2

DOI: 10.15587/1729-4061.2016.81298

RECURRENT APPROXIMATION AS THE TOOL FOR EXPANSION OF FUNCTIONS AND MODES OF OPERATION OF NEURAL NETWORK

A. Trunov

PhD, Associate Professor, First Vice-Rector Department of automation and computer-integrated technologies Petro Mohyla Black Sea State University 68 Marines str., 10, Mykolaiv, Ukraine, 54000 E-mail: trunovalexandr@gmail.com

1. Introduction

Experience in the automation of complex processes increasingly demonstrates the inability of classic methods of the theory of automatic control to effectively resolve the problems of automation of production processes and of management of socio-economic projects [1-3]. Coordination, or coordination management [1], as the main principle of functioning in a general control system, plays the role of a subsystem of process stabilization relative to a predetermined strategy that is represented by a trajectory in the space of states [4]. When defining the role of coordination in the process of control, it should be noted that for any intellectual, industrial, or social human activity, a necessary characteristic is the procedure of decision making [1, 2, 5, 6]. The latter is presented in one of the forms: values of parameters, functions of controlling influences, formation of productive rules, or linguistic terms from a given set [7-10]. For the formation of theoretical principles of the theory of coordination of complex systems, it is necessary to further develop formal methods, mechanisms, and tools of coordination that ensure coherence of the functions of the components of the system and synchronization of objectives as a single entity [2]. At the same time, at present there are successfully operating systems of expert functional diagnosis of complex objects, for example [7, 9]. There are attempts at constructing models of dynamic processes based on networks of different types [2, 3] built on the methods of conventional and fuzzy logic

[4, 8-14]. Modeling of systems is conducted, including non-stationary objects [3, 8-10] and the GPSS environments [3]. They examine and analyze the processes of formation of databases [9] and knowledge bases [12-14], the formation of productive rules and conclusions based on the Sugeno-Mamdani algorithms, and the use of fuzzy neural networks for the identification and control of weakly formalized objects, using the Sugeno-Mamdani-Kang network [8]. However, as is generally known, one of the tools for coordination, which in its initial essence provides coherence of the functions of the components of a system and synchronization of objectives as a single entity, is a neural network. It is a means that implements the principle of causation. Its properties especially expand owing to the introduction of recurrent approximation [15], the indicator vector [16], and the development of algorithms for analytical learning [17]. Such recurrent artificial neural networks (RANN) [18] acquire attractive properties and capacities that are not yet fully explored; that is why the implementation of modes of their operation is relevant, including for the new direction of providing coordination control [1, 2] on the basis of the indicator vector [16].

2. Literature review and problem statement

The artificial neural network is today one of the most powerful tools for solving problems of information stream processing [1, 19]. As demonstrated in papers [11, 18-20],

the application of a recurrent network for initial data processing creates preconditions for and realizes the advantages of a comprehensive approach to the processes of collection, initial processing, and accumulation of data, the search for mathematical models, and the formation of knowledge bases. Development of the structure of such networks and the search for and application of convolutions lead to the creation of hybrid and recurrent networks [23]. Their training for interpreting the motions of the highest order, which are transformed into the motion of the drive and implement the required trajectory, is based on variants of the theory of optimal control. The application of this approach to biological problems leads to analogies with the chains of the cerebral or spinal cord [23]. However, the implementation of these innovative approaches is based on stepping reverse transitions, which in turn require measuring the signal of sensory feedback and intelligent analysis. Most of the implementations of various stages of the process of intellectual activity are carried out in such networks [2] with the help, first of all, of measuring and then of signal processing. Lately, the principles of comparative analysis have been applied more and more to the solution of problems of intellectual activity [5]. However, the need to provide unsupervised learning and compression of information by an autoencoder, modification of the loss functions, and invariance is partially solved in article [24] by changing the structure and forming parallel links. As demonstrated in papers [8, 10], further development of the methods of application of comparative approaches to the analysis leads to the necessity of observing qualitative attributes of the behavior of a physical magnitude by the change in discrete magnitudes. The introduction of indicators [16, 18], which are formed by the comparative rules, demonstrates that they can be used for the formation of productive rules.
Development of the systems for initial processing of data using RANN [16, 18], which have recently been implemented in automated systems [7-9], in rapidly readjusted industries [12-14], and in surveillance systems [14], necessitates further development of rapid methods [19-22] of instantaneous learning and qualitative analysis, and of the methods of formation of results and conclusions based on them [7, 8, 10, 19-22]. As demonstrated in articles [15-17], new types of representation of a continuous signal with the simultaneous application of the indicator vectors [16] and recurrent approximation [15] open new approaches to diagnosing [15, 16] and to the creation of RANN.

The main unsolved problems are the decomposition of a dynamic signal and its representation through a new tool, the indicator vector, using a recurrent network based on short-long term memory. Another unsolved problem is the formation of a structure for the implementation of various modes of RANN operation to solve practical problems of data processing, forming a model, and creating productive rules.

3. Aim and tasks of the study

The aim of this work is the formation of the structure of a recurrent network for information processing with expanded functionality and additional modes of operation, based on the indicator vector and recurrent approximation of a continuous function.

To accomplish the set goal, the following tasks are formulated:

- to synthesize the structure of a recurrent network capable of performing calibration, preparing information on the error of approximation, and solving the problem of minimization;

- to build an algorithm for the analytical training of a neuron with multiple inputs and to demonstrate its practical effectiveness;

- to synthesize algorithms that provide a necessary set of operating modes and the production of controlling rules.

4. Setting and solving the problem of formation of the structure of recurrent network of data processing based on the indicator - vectors and recurrent approximation of continuous function

4. 1. Setting the problem of decomposition of vector function into a Taylor series using the indicator vectors

We will introduce, on an unbounded ordered set of values of Y, the rule of trigger activation that is defined by two standards Y1, Y3 and the levels of permissible deviations ε1 and ε2:

D3(Y) = { −1, if Y < Y1 + ε1;
           0, if Y ∈ [Y1 + ε1, Y3 − ε2];
           1, if Y > Y3 − ε2.   (1)
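As a minimal illustration (not from the paper's materials), the trigger-activation rule (1) can be sketched as a small function; the parameter names and default thresholds are assumptions of this sketch:

```python
# Sketch of the trigger-activation comparator D3 defined in (1).
# y1, y3 are the two standards; eps1, eps2 the permissible deviations
# (parameter names are illustrative, not taken from the paper's code).

def d3(y, y1=0.0, y3=0.0, eps1=1e-9, eps2=1e-9):
    """Map a scalar y to -1, 0 or 1 according to rule (1)."""
    if y < y1 + eps1:
        return -1
    if y > y3 - eps2:
        return 1
    return 0    # y lies in the band [y1 + eps1, y3 - eps2]
```

With Y1 = Y3 = 0 and ε1, ε2 tending to zero, the comparator degenerates into essentially the sign function, which is the configuration used in the decomposition that follows.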

Note that we have introduced the operator D3, which acts on a scalar function; its image is a scalar mapped onto the metric space for any magnitudes ε1 and ε2, which during configuration may also tend to zero. Under these notes and conditions, the decomposition component of any vector function of a vector argument will take the form:

Li(Xp + ΔXp) = |Li(Xp)| D3(Li(Xp)) +

+ Σ (j = 1..N) |∂Li(Xp)/∂xj| D3(∂Li(Xp)/∂xj) Δxj +

+ (1/2) Σ (k = 1..N) Σ (j = 1..N) |∂²Li(Xp)/∂xk ∂xj| D3(∂²Li(Xp)/∂xk ∂xj) Δxk Δxj.   (2)

This decomposition is realized under the condition of existence of the Frechet derivatives from the first to the third order, and its accuracy is determined, according to the mean value theorem, by the maximum value of the modulus of the third-order derivative [18].
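For the scalar case, the structure of decomposition (2) can be checked numerically: each Taylor term is split into its modulus multiplied by the comparator image, so with the standards at zero the sign is recovered and the sum reproduces the ordinary second-order approximation. A minimal sketch (the symmetric zero-threshold configuration of (1) is assumed; names are illustrative):

```python
import math

def d3(y, eps=1e-12):
    """Comparator (1) in the symmetric configuration Y1 = Y3 = 0."""
    if y < -eps:
        return -1
    if y > eps:
        return 1
    return 0

def decompose(l, dl, d2l, x, dx):
    """Scalar analogue of (2): a sum of |term| * D3(term) contributions."""
    return (abs(l(x)) * d3(l(x))
            + abs(dl(x)) * d3(dl(x)) * dx
            + 0.5 * abs(d2l(x)) * d3(d2l(x)) * dx * dx)
```

Since |a| * D3(a) = a in the zero-threshold configuration, `decompose` coincides with the second-order Taylor polynomial, which is exactly the point of the decomposition: the indicators carry the signs while the moduli carry the magnitudes.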

4. 2. Application of short-long term memory to designing a recurrent network

Under these conditions, the application of a recurrent network with memory and structural elements that define the components of the vector-indicator also requires an effective tool for the analytical determination of the roots. Fig. 1 demonstrates a fragment of such a RANN, which generalizes the idea of a new decomposition of the continuous function L(Xn+1) in a Taylor series and its practical application for the realization of the process of parallel and recurrent signal processing. Thus, if the output from neuron 3 defines the reference behavior of the system for an arbitrary vector of strategies X, and the outputs from neurons 4 and 17 do so for Xn and Xn+1, then the deviation:

ΔL(X) = L(X) − L(Xn+1),   (3)

defines the strategy of change in controlling influences in accordance with the values of the components of the vector-indicator, which is obtained after processing the component vector of deviation (3) by comparator (1). In addition, after adding to the output of neuron 7 the product of the output of neuron 11 multiplied by the output of neuron 16, and adding the product of the transposed vector of the output of neuron 16 by the output of neuron 15, multiplied by half the vector of the output of neuron 16, we obtain the approximated value L(Xn+1) at the point Xn+1:

L(Xn+1) = |L(Xn)| V1 + |L'(Xn)| V2 ΔXn + ΔXn^T |L''(Xn)| V3 ΔXn / 2.   (4)

The difference of the argument signals forms the argument increment ΔXn. Applying the comparator that implements the predicate in the form (1) to the output of neuron 16, we obtain from the output of neuron 18 the value of the additional component of the indicator vector:

V4 = D3(ΔXn).   (5)


4. 3. Practical determination of convergence of algorithm for the analytical training of neurons

Analytical training and retraining of neurons was proposed and implemented in article [17] for a conventional neural network. It should be noted that the recurrent solutions proposed in [17, 18], under conditions of minimizing the sum of squares of errors, as a fragment of RANN come down to solving a system of nonlinear equations. Let us confine ourselves to the case of finding the coefficients of synaptic weights ω_kj for the j-th neuron at the k-th input and for the magnitude of input X_j of the i-th standard with a total number of standards I_j. Under these notations, we will write down the algorithm's key system:


Bk = 0;  k = 1, ..., Kj;  j = 1, ..., Kj,   (6)

where it is denoted


Fig. 1. Fragment of RANN with short-long term memory

Bk = Σ (i = 1..M) [ Yi − (1 + e^(−Si))^(−1) ] e^(−Si) (1 + e^(−Si))^(−2) x_{k−1,i};

if (k = 1) then (x_{k−1,i}) = 1; if (j = 1) then (x_{j−1,i}) = 1,

where Si is the weighted sum of the inputs of the i-th standard.

In turn, this standard element (Fig. 1) also gives a possibility to estimate the error of approximation by comparing the magnitude L(Xn+1) calculated by (4) with the output of neuron 17. The latter, in turn, opens up possibilities to improve the model. This is especially relevant for real systems, in which L(Xn+1) are oscillating nonsmooth functions, since in such cases there arises a necessity of multipoint approximations. However, as demonstrated in paper [18], the number of points of information delay exceeds by one the order of the highest derivative, so its magnitude is limited. It was also demonstrated there that, for examining the properties of the object, an important condition is awareness of the direction of the change in the argument. Applying the comparator that implements the predicate in the form (1) to the argument growth rate, we introduce an analysis tool that complements the set of tools of the indicator vector, (2) and (4), which was applied to the decomposition of the function [16, 17]. Implementation of the algorithm of the increment in the recurrent network is simplified [18]. Thus, by simultaneously feeding the signal of the argument vector from the network's input and from the inner layer of neurons 1-2, which are always displaced by one step, to neuron 16, we form the argument increment ΔXn.

Using decomposition of system (6) by the recurrent approximation method [15] and confining ourselves to increments of the first order only, we write down the system for determining the synaptic weights ω_kj for the neuron with the number of inputs Kj:

Bk − Σ (j = 1..Kj) Δω_{j−1,n} A_kj = 0;  k = 1, ..., Kj,   (7)

where it is denoted

A_kj = ∂Bk/∂ω_{j−1} = Σ (i = 1..M) Σ (r = 0..3) (−1)^r a_{kjr,i} (1 + e^(−Si))^(−(r+1)) e^(−Si),

where the coefficients a_{kjr,i} are obtained by direct differentiation of Bk with respect to ω_{j−1}.

Numeric results, with a detailed analysis of the influence on oscillations and the speed of convergence of both the magnitude and the sign of the initial approximation ω_kj, are represented in paper [18] on the example of a neuron with one input and one output. However, to ensure analytical training directly in the process of control, it is necessary to determine the moment of termination of the iterative process of approximation.
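The scheme (6)-(7), with a termination criterion in the spirit of (8) below, can be sketched numerically for a single-input neuron with sigmoid activation. This is an illustrative reconstruction, not the paper's code: the Jacobian A_kj is approximated here by finite differences, and the data, tolerances, and starting point are assumptions:

```python
import math

def f(s):
    """Sigmoid activation of the neuron."""
    return 1.0 / (1.0 + math.exp(-s))

def residuals(w, data):
    """B_k of system (6): stationarity of the sum of squared errors over
    the standards, with x_{0,i} = 1 playing the role of the bias input."""
    b = [0.0, 0.0]
    for x, y in data:
        s = w[0] + w[1] * x
        e = (f(s) - y) * f(s) * (1.0 - f(s))   # (f - Y) * f'(S)
        b[0] += e
        b[1] += e * x
    return b

def train(data, w=(0.0, 0.0), tol=1e-10, steps=100):
    """Solve B_k = 0 by the first-order recurrent approximation (7);
    iterations stop when the increment norm ||dw|| is small."""
    w, h = list(w), 1e-6
    for _ in range(steps):
        b = residuals(w, data)
        a = [[0.0, 0.0], [0.0, 0.0]]           # A_kj by finite differences
        for j in range(2):
            wp = list(w)
            wp[j] += h
            bp = residuals(wp, data)
            for k in range(2):
                a[k][j] = (bp[k] - b[k]) / h
        det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
        dw0 = (-b[0] * a[1][1] + b[1] * a[0][1]) / det
        dw1 = (-b[1] * a[0][0] + b[0] * a[1][0]) / det
        w[0] += dw0
        w[1] += dw1
        if dw0 * dw0 + dw1 * dw1 < tol * tol:  # termination by ||dw||
            break
    return w
```

On standards generated by a true pair of weights, the iteration recovers them; the sensitivity to the initial approximation, which Table 1 studies for the paper's example, can be monitored through the magnitudes of B_k and of the increments.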

The availability and quality of such a criterion determine the efficiency of training. Let us derive it as an evaluation of the error of the coefficients of synaptic weights. Suppose that the functions Bk are square-integrable:

‖Bk‖ = ( ∫ (Bk)² dx )^(1/2).

We obtain an error estimate by applying the norming operator to equations (6) and (7); the evaluations from below and from above are then determined analytically. The latter gives the assessment of the magnitude that determines the limit of termination of iterations under the condition of an assigned permissible error of the activation function for the standards:

‖Bk‖ / (Kj max_j ‖A_kj‖) ≤ ‖Δω_{j−1,n}‖ ≤ ‖Bk‖ / (min_j ‖A_kj‖),  k = 1, ..., Kj.   (8)

Table 1

Effect of initial approximation and parameters of the system on the evaluation of error of the coefficients of synaptic weights for the fifth iteration

No. | ω00  | ω10  | B1^(5)      | B2^(5)      | σ        | ‖Δω0^(5)‖ | ‖Δω1^(5)‖
1   | -0.5 | 5    | -6.502E-6   | 1.740E-6    | 0.007545 | 6.8E-5    | 0.01206
2   | 0.5  | 5    | 0.000941554 | 1.43196E-5  | 0.007577 | 0.008065  | 0.080015
3   | -0.1 | 0.1  | -0.09746387 | 0.017202286 | 0.070873 | 0.53247   | 0.707926
4   | 1    | 1    | 0.001963412 | 0.018706151 | 0.04438  | 0.09275   | 1.002492
5   | 0.1  | -0.1 | 0.00658054  | 0.010502513 | 0.032329 | 0.03513   | 0.940764

Despite their simplicity, the selection of the set of standards for training is limited by the existence of asymptotic points, both for the first condition and for the second. The latter manifests itself especially when the number of members of the decomposition of the operators is reduced to two. Taking account of the indicator vectors improves the process of convergence but increases the total volume of calculations. The availability of the evaluation in the analytical form (8) allows automating the learning process if the assessment of the permissible error of description of the activation function for a given set of standards is assigned.

Designing the systems of initial data processing leads to the need for further development of high-speed methods of qualitative analysis and of methods of forming, on the basis of their results, conclusions about the nature of their properties [18]. Representation of the decomposition of a dynamic signal through the new tool of indicators (1), using a recurrent network based on short-long term memory, significantly simplifies the process of decomposition due to the parallelization of analog processing or calculations. However, the introduction of the analysis of the changes in the values of these indicators, from point to point, and the formation of productive rules for different types of problems create a set of new tools for cybernetic methods of control. It is obvious that the ease and the capability of such modules to be embedded into other systems and networks will become the main advantages. Nevertheless, it should be noted that the method is devoid of versatility: its complexity is not justified for linear problems, and thus its advantages are manifested when applied to the analysis of nonlinear, oscillating processes or ones with explicitly expressed hysteresis. A comprehensive study of the fundamental properties of the decomposition and of the peculiarities of behavior of both the indicators and their changes would apparently open new properties and possibilities of implementation of both comparative approaches and the new instrument, the indicator vector, and of the recurrent neural networks that are built on recurrent approximation.

The data of the analysis of the effect of the initial approximation and the system parameters on the evaluation of error of the coefficients of synaptic weights for the fifth iteration are presented in Table 1. According to the results of calculations, the solution of the system is the coefficients of synaptic weights ω0 = -0.40674, ω1 = 8.925389. Thus, for five arbitrarily selected sets of initial approximations ω00, ω10, we simultaneously calculated the left parts of system (6), B1^(5), B2^(5), and calculated the mean squared deviation of the error of the activation function σ. As demonstrated by the analysis of the data from Table 1, the initial approximations slightly affect the number of iterations to the assigned mean squared error. In addition, an increase in the number of approximations to eight provides accuracy of the calculations, that is, the relative error is less than a tenth of a percent.

5. Discussion of results of simulation of analytical training of RANN

In the course of analytical training we used algorithms by which the training is based on the requirement of zero magnitude of error and on minimizing the sum of squares of the errors.

5. 1. Modes of RANN operation and algorithm of recurrent formation of the model

Calibration mode. To ensure the calibration process in the automatic mode of any sensor or measuring device, it is necessary to measure at one point of time three magnitudes: Xn, the vector of the independent magnitude of the argument; the initial magnitude of the sensor signal L(Xn); and the reference signal Ls(Xn), or, in the automated regime, to read out the indications of a reference device and enter them into the system. Under these conditions, it is also advisable to register the errors of the reference indications and of the argument. A fragment of the corresponding recurrent network is displayed in Fig. 2. The calibrating multiplier is determined during calibration and stored in a database as a function of the independent magnitude of the argument Xn:

k(Xn) = Ls(Xn)/L (Xn).

During calibration, the range of calibration is also defined and saved. In most cases, the calibration mode is provided by the formation of the physical value Xn, which in turn depends on time. In other words, to implement the idea of calibration it is necessary to additionally control time. Under the condition of a stable time step, we select a set of rules "condition-action." The conditional part of each rule specifies the condition under which it applies. The resulting part defines the action that will be realized under the condition of fulfillment of the first part. By choosing the stable time step of operation Δ, we will have the condition of transition from the waiting phase [t = t(n − 1)] to the operation phase [t = t(n − 1 + Δ)]. The action of the regulator when the operation phase begins is to change the signal:

Xn+1 = Xn − S(n) a1 + S(n−1) a2.

Measurement mode and creation of database of properties of the object.

The main task of this mode is to obtain full information about the behavior of the object. A fragment of the recurrent network that implements the measurement mode is presented in Fig. 3. Let us consider the process of measurement. The input of neuron 3 is fed with: Xn, the vector of the independent value of the argument, and k(Xn), the calibration multiplier retrieved from the database for the given value Xn:

L (Xn) = k (Xn)Lm (Xn),

where Lm(Xn) is the signal, which is directly obtained from the output of neuron 3 after correction of zero if the signal from the sensor arrives directly to the input of neuron 3. By analogy, simultaneously with neuron 3, the same signals are sent to the inputs of neurons 4, 8, 12. Under these conditions, the data from the input of neuron 1 and the outputs from neurons 4, 8, 12, 16 are submitted to the controller's input. The latter converts them into the database standard and resends to record and store them in it.
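The calibration and measurement modes described above can be sketched as two small routines; the sensor model, the reference, and the table keyed by the argument value are illustrative assumptions of this sketch:

```python
def calibrate(xs, sensor, reference):
    """Calibration mode: store k(Xn) = Ls(Xn) / L(Xn) over the calibration range."""
    return {x: reference(x) / sensor(x) for x in xs}

def measure(x, raw, table):
    """Measurement mode: corrected value L(Xn) = k(Xn) * Lm(Xn)."""
    return table[x] * raw
```

For a sensor with, say, a 5 % gain error, the stored multiplier restores the reference value at the calibrated points; in the network of Fig. 3 this multiplication is performed at neuron 3.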


In addition, at this stage it is expedient, starting with the fourth point, to calculate the third derivative and enter the data about it into the database.


Fig. 2. Fragment of realization of RANN calibration mode

The mode of formation of the model. The main activity of this stage is to find the modules and the indicator vectors, to multiply and add the products afterwards, and to form the difference between the magnitude of this sum and the measured magnitude.

Fig. 3. Fragment of realization of RANN measurement mode

The mode of formation of productive rules and knowledge base. At this stage, data from the database are entered to the network and the behavior of both the physical magnitude that describes the object and its derivatives and errors is examined. Due to the complex of such actions, the productive rules and quantitative criteria are formed that allow selecting the model on the given interval of change in the vector of input.

The mode of formation of evaluation of adequacy by the set of criteria. When implementing this mode, a sequence of values of the vector of state is generated; a signal is simultaneously sent to the input of the object and of the network (to neuron 1). The output signal of the object is sent to neurons 3, 4, 8, 12. Comparison of the magnitudes of the output signals of neurons 3 and 21 makes it possible to obtain the magnitude of the error. Upon applying the magnitude of the signal of the output of neuron 16 and the magnitudes of the two derivatives at the outputs of neurons 8 and 12, we calculate the derivative of the third order. Thus, using the criteria of adequacy and setting the list of sorted values of Xn, the vector of the independent magnitude of the argument, from its range of change, it is possible to calculate absolute and relative errors of both the physical magnitude and its derivatives.
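The evaluation-of-adequacy mode reduces to tabulating errors over a sorted list of argument values; a minimal sketch with an illustrative object and network model (names and the guard against division by zero are assumptions):

```python
def adequacy_errors(xs, obj, net, tiny=1e-15):
    """For each argument value, the absolute and relative errors between
    the object's output and the network's output."""
    rows = []
    for x in sorted(xs):
        a, b = obj(x), net(x)
        abs_err = abs(a - b)
        rows.append((x, abs_err, abs_err / max(abs(a), tiny)))
    return rows
```

The same tabulation applies unchanged to the derivatives read from neurons 8 and 12.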

5. 2. Modes of finding roots and formation of productive rules

It is advisable to note such a possibility of using the new tools of the indicator vector and the properties of a recurrent neural network built on recurrent approximation as the formation of productive rules for solving the problems: finding a simple root of a monotonic function; finding a non-simple root of a monotonic function; finding a root of an oscillating function; selecting a controlling influence as a problem on the synthesis of controlling influence. Let us consider in stages the peculiarities of solving each of these problems.

Finding a simple root of monotonic function. Assume that the first approximation is taken arbitrarily, and for it the values of the signals are formed at the outputs of neurons 6, 10, 14, 19. Then the rule that determines the second and subsequent approximations takes the form:

Xn+1 = Xn − V1 V2 |L(Xn)| / ( Σ (k = 1..h) (1/k!) (∂^k L(Xn)/∂X^k) ΔX^(k−1) ).   (9)

Rule (9) is limited by information about the first point. However, if one considers that the built network has a shift by k+1 points, then for this type of cases, limited by the first derivative, the process of finding a root is accelerated. This rule is valid even when the first derivative approaches zero (its value is less than the accuracy of calculations), since under these conditions V2 = 0. Such a rule simplifies the calculation of an approximation but leads to an increase in the number of approximations. Another variant of building a productive rule is the variant (K = 1), in which the values of the indicator vector are saved and the derivatives of the second or higher orders are used. A rule is built based on the behavior of their increments:

Xn+1 = Xn − |L(Xn)| (V1 + ΔV1)(V2 + ΔV2) / ( Σ (k = 1..h) (1/k!) (∂^k L(Xn)/∂X^k) ΔX^(k−1) ).

X3 = X2 - 2L'(X2) /L"(X2).

One can see from the latter that, if one follows the usual pattern, because of the properties of the function and the selection of the starting point of approximation, one of the approximations falls into an interval on which cycling is observed. When following the algorithm 1-2-3, the process of cycling can be predicted and excluded. For improvement and generalization of the above-described rule, we will introduce a controlling function W, the magnitude of which will be associated with three components of the indicator vector. Let us represent the decomposition at the last point, which starts the process of cycling:

L(X2) + L'(X2) Δ + L''(X2) Δ²/2 = L(X3) W,   (10)

(10)

where

L(X3) W = L(X2), if {ΔV1 = 0 and |ΔV2| ≥ 1 and V3 ≠ 0};

Only under the conditions ΔV1 = 0, when the increment of the second component processed by comparator (1) satisfies |ΔV2| ≥ 1, and when the third component is not equal to zero, V3 ≠ 0, is equation (10) fulfilled, and the next approximation will take the form:

Xn+1 = Xn−1 − |L(Xn−1)| V1,n−1 V2,n−1 / |L'(Xn−1)|.

This rule is implemented even in the case when the point of approximation reached the point of root. In this case, the value of magnitude Xn simply ceases to grow.

Finding a non-simple root of monotonic function. Suppose that at the point of the root (the value of the function is zero) the first derivative is also zero, that is, the point of the root is also a point of contact, that is, a point of extremum. Assume that the first approximation is taken arbitrarily. For it, at the outputs of neurons 6, 10, 14, 19, the values of signals are formed; then by rule (9) we determine the second and other approximations.
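Rule (9), truncated to the first derivative, can be sketched as a Newton-type step gated by the indicator components: V1 switches the step off when the root is reached, and V2 does so when the derivative falls below the accuracy of calculations. Thresholds and the example function are illustrative assumptions:

```python
import math

def find_root(l, dl, x, eps=1e-12, steps=200):
    """Newton-type iteration gated by indicator components, cf. rule (9)."""
    for _ in range(steps):
        v1 = 1 if abs(l(x)) > eps else 0    # 0: root reached
        v2 = 1 if abs(dl(x)) > eps else 0   # 0: derivative below accuracy
        if v1 == 0 or v2 == 0:
            break                            # rule (9) yields a zero step
        x = x - l(x) / dl(x)
    return x
```

For L = cos, starting from X1 = 1, the iteration converges to the root at pi/2 and then stops because V1 becomes zero.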

Finding a root of oscillating function. As demonstrated in [16], for nonsmooth oscillating functions, in the case of selection of such an initial value X1 that for any of the approximation schemes the next value corresponds to X2, for which the indicator vector is equal to (1, 0, 1) or (1, 1, 0), the process of finding the next approximation must be terminated. Let us put the following order of actions in accordance with this case:

1. Adopt: L(X3) = L(X2);

2. Find from the decomposition

L(X3) = L(X2) + L'(X2) Δ + L''(X2) Δ²/2

the increment

Δ = −2 L'(X2) / L''(X2);

3. Verify: if

∀X ∈ [X2, X3]: L(X) > L(X2) = L(X3) and L''(X2) ≠ 0,

then the new approximation, regardless of the peculiarities of the behavior of the function on the interval, is found by the expression:

Xn+1 = Xn − (|L(Xn)| / |L'(Xn)|)·V1,n·ΔV1,n − (2|L'(Xn)| / |L''(Xn)|)·V2,n·ΔV2,n.   (11)

Thus, the process of finding the root is simplified and reduces to simple calculations that bypass the interval of oscillation. This is achieved by introducing the indicator vector and the recurrent network after forming the appropriate set of production rules. The construction of additional logical rules for information processing eliminates cycling and improves the root-search algorithm even for a function with complex nonmonotonic behavior - an oscillating function.
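The termination rule for oscillating functions can be sketched as a cycle guard around a plain Newton iteration. This is a simplification: the (1, 0, 1)/(1, 1, 0) indicator test is replaced here by a direct check that X(n+1) returns to X(n-1), which is the symptom the indicator test detects:

```python
def newton_with_cycle_guard(L, dL, x, steps=100, tol=1e-10):
    """Newton iteration that terminates when two-point cycling is
    detected, as the rule for oscillating functions prescribes;
    the caller then switches to the escape steps 1-3."""
    prev = None
    for _ in range(steps):
        if abs(L(x)) < tol:
            return x, "root"
        nxt = x - L(x) / dL(x)
        if prev is not None and abs(nxt - prev) < tol:
            return x, "cycle"  # cycling detected: terminate this scheme
        prev, x = x, nxt
    return x, "no convergence"

# A classic cycling example: x^3 - 2x + 2 makes Newton oscillate 0 <-> 1.
L = lambda x: x ** 3 - 2.0 * x + 2.0
dL = lambda x: 3.0 * x ** 2 - 2.0
x_cycle, status = newton_with_cycle_guard(L, dL, 0.0)
```

From the starting point 0 the guard reports a cycle after two steps; from a starting point such as −2 the same routine converges to the real root near −1.769 and reports "root".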

Mode of selection of controlling influence as the problem of synthesis of controlling influence.

Assume that the problem is to minimize an objective function in the presence of inequality constraints for a nonlinear object. Suppose also that the object is being observed and the observer determines:

- the vector of deviation of the objective function, which describes the functioning of the process, from the reference value prescribed by the setpoint;

- the indicator vectors of deviation vector.

Depending on the type of behavior of the deviation vector, the problem comes down to the problem of finding a root. Thus, if one consistently sets the values of the Lagrange function (recorded in the database when training the system), and calculates the indicators V1, V2, V3, the indicator of argument change V4, and the indicator of change in the Lagrange function

V5 = sgn Δ[L(X)] = sgn [L(Xn) − L(Xn+1)],

then the conditions for forming productive rules are in place. Let us write down one of the possible productive rules.

If V4 > 0 and V1 > 0 and V2 < 0, then Un+1 = Un − V5·ΔUn.

According to this rule, the control that provides minimization of the Lagrange function implements the resulting part of the rule "condition - action" that contains indicator V5.
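The conditional and effective parts of this rule can be sketched directly (the function name is hypothetical; the indicators are passed in as already-computed sign values):

```python
def productive_rule(V1, V2, V4, V5, U, dU):
    """Productive rule 'condition -> action': if V4 > 0, V1 > 0 and
    V2 < 0, step the control U against the sign V5 of the change of
    the Lagrange function.  If the condition part fails, the control
    is left unchanged."""
    if V4 > 0 and V1 > 0 and V2 < 0:
        return U - V5 * dU
    return U
```

For the maximization problem, returning U + V5 * dU would implement the opposite effective part, i.e. the sign in front of V5 flips from minus to plus.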

One can see from the latter that if the derivative of the Lagrange function changes its sign, then indicator V5 changes it similarly while V4 is preserved, and the rule determining the effective part works correctly. When solving the problem of maximization, the sign in front of V5 in the last expression of the resulting part of the rule changes from minus to plus.

Thus, the difference between the current value of any function, or of the Lagrange function, and the set of its values from a database allows creating, by analogy, productive rules of the effective part for the function of controlling influence; these make it possible to find the root of the function or to keep its set value within the permissible deviation. An analysis of the root values - the magnitudes of controlling influence obtained as solutions of the root problem - enables finding their interval. From the latter, by the choice of the operator or by the rule "condition-action", the coordinating controlling influence is formed and the amplification factors are determined. The choice of algorithms and the formation of laws for adapting the magnitudes of the amplification factors make this task relevant for the practical application of recurrent networks in coordinating control. If the Lagrange function is formed with regard to the constraints, the system's capabilities extend to control problems with inequality constraints.

6. Discussion of results of research into possibilities of formation of new features and modes of operation of RANN

The examined fragments of RANN, owing to the proposed extended functions and operating modes, may serve as standard elements of a coordinating control system or a decision making support system. The examples built on recurrent approximation demonstrate new possibilities of using the tools of the indicator vector and RANN. Their use is also helpful in solving separate applied problems: formation of productive rules for finding a simple root of a monotonic function, finding a non-simple root of a monotonic function, finding a root of an oscillating function, and selecting a controlling influence in the problem of synthesis of controlling influence. The obtained results continue and complement the practical implementation of the idea of recurrent approximation for solving problems of modeling and design [15]. The latter, based on the assumption of continuity and Fréchet differentiability, has limited application. The search for ways of removing or circumventing these constraints directs further research toward direct and inverse transitions from metric spaces to other spaces. The introduction of a set of linguistic variables with a uniform algorithm for constructing the metric on a limited number of standards opens such ways, but the problems of discontinuity, uniqueness, and existence currently give rise to new obstacles.

7. Conclusions

1. The structure of recurrent networks for information processing, formed on the basis of indicator vectors and recurrent approximation of a continuous function, provides new modes of operation and extends the functionality. It is capable of realizing calibration, preparing information on the approximation error, solving the minimization problem, and acting as a module of a decision making support system. Its structure forms the information support of the conditional part of the rule "condition-action" and implements the effective part in the algorithms of coordination control.

2. The application of the indicator vectors renders the algorithms of analytical training practically independent of the choice of initial approximation of the synaptic weight coefficients, while the network acquires a mechanism of readjustment during optimal control, depending on the changes that occur to the object.

3. The synthesized structure is capable of implementing algorithms that provide the necessary set of operating modes and production of controlling rules based on the analysis of behavior of the set of indicator vectors. It is also able to realize simple algorithms for finding roots and a control that minimizes or maximizes a continuous function or the Lagrange function under inequality constraints for a nonlinear object, in accordance with the analytical criteria for evaluating the error of the synaptic weight coefficients.

References

1. Petrov, E. Gh. Koordynacyonnoe upravlenye (menedzhment) processamy realyzacyy reshenyj [Text] / E. Gh. Petrov // Problems of Information Technology. - 2014. - Vol. 02, Issue 016. - P. 6-11.

2. Khodakov, V. E. O razvyty osnov teoryy koordynacyy slozhnykh system [Text] / V. E. Khodakov // Problems of Information Technology. - 2014. - Vol. 02, Issue 016. - P. 12-22.

3. Fisun, M. T. Modeljuvannja dynamichnykh procesiv vitrovoji elektrychnoji stanciji u seredovyshhi gpss [Text] / M. T. Fisun // Problems of Information Technology. - 2015. - Vol. 01, Issue 017. - P. 145-149.

4. Bodyanskiy, Ye. Adaptive prediction of quasiharmonic sequences using feedforward network [Text] / Ye. Bodyanskiy, O. Chaplanov, S. Popov // Proc. Int. Conf. Artificial Neural Networks and Neural Information Processing ICANN, 2003. - P. 378-381.

5. Kryuchkovskiy, V. V. Development of methodology for identification models of intellectual activity [Text] / V. V. Kryuchkovskiy, K. E. Petrov // Problems of information technology. - 2011. - Vol. 9. - P. 26-33.

6. Khodakov, V. E. Kharakternye osobennosti odnogo klassa sotsial'no-ekonomicheskikh sistem [Text] / V. E. Khodakov, A. K. Vezums-kiy // Problems of Information Technology. - 2013. - Vol. 2 (014). - P. 10-14.

7. Kovalenko, Y. Y. Sravnyteljnyj analyz metodov klasyfykacyy v avtomatyzovanykh systemakh tekhnychekoj dyaghnostyky [Text] / Y. Y. Kovalenko // Problems of Information Technology. - 2015. - Vol. 01, Issue 017. - P. 37-41.

8. Kravecj, I. O. Vykorystannja nechitkykh nejronnykh merezh dlja identyfikaciji ta keruvannja slaboformalizovannymy ob'jektamy [Text] / I. O. Kravecj // Problems of Information Technology. - 2015. - Vol. 01, Issue 017. - P. 150-154.

9. Kryvulja, Gh. V. Ekspertnaja systema funkcionaljnogho diaghnostyrovanyja tekhnycheskykh obektov s nejronechetkoj bazoj danykh [Text] / Gh. V. Kryvulja // Problems of Information Technology. - 2015. - Vol. 01, Issue 017. - P. 29-36.

10. Ghavrylenko, V. O. Zastosuvannja metodiv nechitkoji loghiky dlja kontrolju stanu zernovogho nasypu v zernoskhovyshhakh [Text] / V. O. Ghavrylenko // Problems of Information Technology. - 2015. - Vol. 01, Issue 017. - P. 77-82.

11. Dzjuba, D. A. Prymenenye metoda kontrolyruemogho vozmushhenyja dlja modyfykacyy nejrokontrolerov vrealjnom vremeny [Text] / D. A. Dzjuba // Matematychny mashyny y systemy. - 2011. - Vol. 1. - P. 20-28.

12. Kondratenko, Y. P. Correction of the Knowledge Database of Fuzzy Decision Support System with Variable Structure of the Input Data. Modeling and Simulation [Text] / Y. P. Kondratenko, Ie. V. Sidenko; A. M. Gil-Lafuente, V. Krasnoproshin (Eds.) // Proc. of the Int. Conference MS'12, 2012. - P. 56-61.

13. Kondratenko, Y. P. Distributed computer system for monitoring and control of thermoacoustic processes [Text] / Y. P. Kondratenko, V. V. Korobko, O. V. Korobko // 2013 IEEE 7th International Conference on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS), 2013. - P. 249-253. doi: 10.1109/idaacs.2013.6662682

14. Kondratenko, Y. Slip Displacement Sensors for Intelligent Robots: Solutions and Models [Text] / Y. Kondratenko, L. Klymenko, V. Kondratenko, G. Kondratenko, E. Shvets // 2013 IEEE 7th International Conference on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS), 2013. - P. 861-866. doi: 10.1109/idaacs.2013.6663050

15. Trunov, A. N. Recurrence approximation in problems of modeling and design [Text]: monografy / A. N. Trunov. - Mykolayiv: Petro Mohyla BSSU, 2011. - 272 p.

16. Trunov, A. N. Intellectualization of the models' transformation process to the recurrent sequence [Text] / A. N. Trunov // European Applied Sciences. - 2013. - Vol. 9, Issue 1. - P. 123-130.

17. Trunov, A. Application of the recurrent approximation method to synthesis of neuron net for determination the hydrodynamic characteristics of underwater vehicles [Text] / A. Trunov // Problem of Information Technology. - 2014. - Vol. 02 (016). - P. 39-47.

18. Trunov, A. Vector indicator as a tool of recurrent artificial neuron net for processing data [Text] / A. Trunov // EUREKA: Physics and Engineering. - 2016. - Vol. 4 (5). - P. 55-60. doi: 10.21303/2461-4262.2016.000129

19. Khajkyn, S. Nejronnye sety: polnyj kurs. 2nd edition [Text] / S. Khajkyn. - Moscow: Izd. Dom «Vyljjams», 2006. - 1104 p.

20. Chebotarev, P. Yu. Coordination in multiagent systems and Laplacian spectra of digraphs [Text] / P. Yu. Chebotarev, R. P. Agaev // Automation and remote. - 2009. - Vol. 70, Issue 3. - P. 469-483. doi: 10.1134/s0005117909030126

21. Usjkov, A. A. Yntellektualjnye tekhnologhyy upravlenyja: yskusstvennye nejronnye sety y nechetkaja loghyka [Text] / A. A. Usjkov, A. V. Kuzjmyn. - Moscow: Ghorjachaja lynyja - Telekom, 2004. - 143 p.

22. Rutkovskaja, D. Nejronnye sety, ghenetycheskye alghorytmy y nechetkye systemy [Text] / D. Rutkovskaja, M. Pylynjskyj, L. Rut-kovskyj. - Moscow: Ghorjachaja lynyja - Telekom, 2004. - 452 p.

23. Huh, D. Real-Time Motor Control using Recurrent Neural Networks [Text] / D. Huh, E. Todorov // 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning. - 2009. doi: 10.1109/adprl.2009.4927524

24. Ioffe, S. Batch normalization: Accelerating deep network training by reducing internal covariate shift [Electronic resource] / S. Ioffe, C. Szegedy. - Available at: https://arxiv.org/abs/1502.03167v3
