ISSN 1607-3274. Радіоелектроніка, інформатика, управління. 2013. № 2

UDC 519.7:004.8

Shafronenko A.1, Pliss I.2, Bodyanskiy Ye.3

1Ph. D. student in Artificial Intelligence, Kharkiv National University of Radio Electronics, Ukraine, E-mail:

[email protected]

2Ph.D. (Candidate of Technical Science), Leading Researcher, Kharkiv National University of Radio Electronics, Ukraine

3Professor, Dr.-Ing. habil., Kharkiv National University of Radio Electronics, Ukraine

THE EVOLVING ADAPTIVE NEURAL NETWORK FOR DATA PROCESSING WITH MISSING OBSERVATIONS

The problem of synthesizing computational intelligence systems in on-line mode, capable of processing stochastic signals with missing observations in the data, is considered. An adaptive approach based on the use of orthogonal polynomials is developed.

Keywords: neural network, orthogonal polynomials, Chebyshev polynomials, incomplete data with missing observations.

INTRODUCTION

Artificial neural networks, neuro-fuzzy systems, and hybrid computational intelligence systems are currently widespread for solving problems of data processing, Data Mining, prediction, identification and control of nonlinear stochastic and chaotic objects and systems [1-5].

The most attractive properties of these systems are their universal approximation capabilities and learning ability, usually understood as the possibility of tuning the parameters by optimizing some quality criterion (objective function, learning criterion). In a wider sense, it is also possible to configure the system architecture. Currently there exist a number of approaches, but the most widely used is the so-called constructive approach, in which the computational intelligence system, starting with the simplest architecture, gradually increases its complexity while tuning its parameters to achieve the desired quality of the solution. This approach has formed a new direction in computational intelligence known as evolving systems [6, 7]. At the same time, most of these systems process information in batch mode, which makes them difficult to use in cases where the data to be processed arrive in on-line mode in the form of time series.

In many practical applications involving the processing of sequences of real data, there exist situations when some observations of the controlled time series are, for whatever reason, missing (lost). It is understood that, for the normal operation of a neural network or a hybrid system, these missing data must somehow be restored. The problem of restoring missing values has received sufficient attention [8-10], with neural networks being among the most effective tools [11-14]. However, the known approaches to the restoration of missing values in time series are effective only when the whole data set is given a priori, the amount of missing values is known, and the time series has a fixed number of observations. Naturally, in problems where data are fed for processing in real time, the number of missing values is unknown beforehand and the sequence is nonstationary, so the known approaches are ineffective.

© Shafronenko A., Pliss I., Bodyanskiy Ye., 2013

So, the problem of synthesizing computational intelligence systems operating in on-line mode and capable of processing stochastic signals with missing observations in the data is interesting and useful.

SEQUENTIAL RESTORATION OF MISSING OBSERVATIONS IN THE TIME SERIES

The proposed approach is based on the use of classical orthogonal polynomials [15] and, first of all, Chebyshev polynomials (T-systems) [16, 17], which have a number of useful properties in terms of the approximation problem with a quadratic criterion [16].

Generally, Chebyshev polynomials can be compactly represented using trigonometric functions
$$T_l(x) = \cos(l \arccos x), \quad x \in [-1, 1], \quad l = 0, 1, 2, \dots, \qquad (1)$$
or using the recurrent relation
$$T_l(x) = 2 x T_{l-1}(x) - T_{l-2}(x). \qquad (2)$$

In some situations it is easier to solve the problem on the interval $x \in [0, 1]$, for which the biased Chebyshev polynomials can be used:
$$T_l^B(x) = T_l(2x - 1), \qquad (3)$$
$$T_l^B(x) = (4x - 2)\,T_{l-1}^B(x) - T_{l-2}^B(x). \qquad (4)$$
It is also easy to write the biased polynomials for an arbitrary interval $x \in [x_{\min}, x_{\max}]$.
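For illustration, relations (1)-(4) can be evaluated, for example, as follows (a minimal Python/NumPy sketch; the function names are illustrative and not part of the original text):

```python
import numpy as np

def chebyshev_T(l, x):
    """Chebyshev polynomial T_l(x) on [-1, 1] computed by the recurrence (2)."""
    x = np.asarray(x, dtype=float)
    T_prev, T_curr = np.ones_like(x), x        # T_0(x) = 1, T_1(x) = x
    if l == 0:
        return T_prev
    for _ in range(2, l + 1):
        T_prev, T_curr = T_curr, 2.0 * x * T_curr - T_prev
    return T_curr

def chebyshev_T_biased(l, x, x_min=0.0, x_max=1.0):
    """Biased polynomial on an arbitrary interval [x_min, x_max], cf. (3)."""
    z = 2.0 * (np.asarray(x, dtype=float) - x_min) / (x_max - x_min) - 1.0
    return chebyshev_T(l, z)

# quick check of the recurrence (2) against the closed form (1)
x = np.linspace(-1.0, 1.0, 7)
assert np.allclose(chebyshev_T(3, x), np.cos(3 * np.arccos(x)))
```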

To approximate an arbitrary function of a scalar argument $y = f(x)$ given on a set of nodes $x_1, x_2, \dots, x_N$; $y_1, y_2, \dots, y_N$, let us transform the original data using the simple relations
$$\tilde x_k = 2\,\frac{x_k - x_{\min}}{x_{\max} - x_{\min}} - 1, \qquad \tilde y_k = 2\,\frac{y_k - y_{\min}}{y_{\max} - y_{\min}} - 1, \qquad k = 1, 2, \dots, N$$
for the polynomials (1), (2), and
$$\tilde x_k = \frac{x_k - x_{\min}}{x_{\max} - x_{\min}}, \qquad \tilde y_k = \frac{y_k - y_{\min}}{y_{\max} - y_{\min}}, \qquad k = 1, 2, \dots, N$$
for (3), (4), and write the approximating T-system as
$$\tilde T_h(\tilde x) = \sum_{l=0}^{h} w_l T_l(\tilde x),$$

whose parameters $w_l$ are determined, under the orthogonality conditions, by the simple relation
$$w_l = \frac{\sum\limits_{k=1}^{N} \tilde y_k\,T_l(\tilde x_k)}{\sum\limits_{k=1}^{N} T_l^2(\tilde x_k)}.$$

It is easy to see that these expressions are standard least squares estimates. The values $T_l(\tilde x_k)$ are calculated using the recurrent relations
$$T_0(\tilde x_k) = 1, \qquad T_1(\tilde x_k) = \tilde x_k - b_1, \qquad T_l(\tilde x_k) = (\tilde x_k - b_l)\,T_{l-1}(\tilde x_k) - a_l\,T_{l-2}(\tilde x_k),$$
$$a_l = \frac{\sum\limits_{k=1}^{N} \tilde x_k\,T_{l-1}(\tilde x_k)\,T_{l-2}(\tilde x_k)}{\sum\limits_{k=1}^{N} T_{l-2}^2(\tilde x_k)}, \qquad b_l = \frac{\sum\limits_{k=1}^{N} \tilde x_k\,T_{l-1}^2(\tilde x_k)}{\sum\limits_{k=1}^{N} T_{l-1}^2(\tilde x_k)},$$

and the achievable accuracy of the approximation is estimated on the basis of the variance
$$\sigma_l^2 = \sigma_{l-1}^2 - w_l^2 \sum_{k=1}^{N} T_l^2(\tilde x_k), \qquad \sigma_0^2 = \sum_{k=1}^{N} \tilde y_k^2 - \frac{1}{N}\Bigl(\sum_{k=1}^{N} \tilde y_k\Bigr)^2.$$

Next, consider the time series $\tilde x_k$, $k = 1, 2, \dots, N$, and introduce the approximating T-system
$$\tilde T_h^{x}(k) = \sum_{l=0}^{h} w_l\,T_{lN}(k), \qquad (5)$$
but in this case we assume that at some moments of the discrete time $k$ the measurements either have not been made or have been lost.

Let us introduce two subsets: $X^P = \{\tilde x_k, k\}$, which contains $N_P < N$ available observations, and $X^G = \{k\}$, which contains the moments with missing observations.

Then the coefficients of the approximating polynomial can be calculated using the following relations:
$$w_l = \frac{\sum\limits_{\substack{k=1 \\ k \in X^P}}^{N} \tilde x_k\,T_{lN}(k)}{\sum\limits_{\substack{k=1 \\ k \in X^P}}^{N} T_{lN}^2(k)}, \qquad w_0 = \frac{1}{N_P}\sum\limits_{\substack{k=1 \\ k \in X^P}}^{N} \tilde x_k,$$
where the index $N$ in $T_{lN}$ means that the corresponding polynomials are orthogonal on the interval $k \in [1, N]$.

After this it is easy to restore the missing observations in the form
$$\hat x_k = \sum_{l=0}^{h} w_l\,T_{lN}(k) \quad \forall k \in X^G.$$
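The restoration of missing observations described above can be illustrated by the following sketch (Python/NumPy; here the orthogonal system $T_{lN}$ is built numerically by QR orthonormalization rather than by the explicit recurrences, missing values are assumed to be marked as NaN, and all names are illustrative):

```python
import numpy as np

def discrete_orthobasis(N, h):
    """Polynomials of degree 0..h, orthonormal over the grid k = 1..N
    (a numerical stand-in for the system T_lN, obtained via QR)."""
    k = np.arange(1, N + 1, dtype=float)
    V = np.vander(2.0 * (k - 1.0) / (N - 1.0) - 1.0, h + 1, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q                                   # shape (N, h + 1)

def restore_missing(x, h=4):
    """x: 1-D series with np.nan at the missing moments; returns the restored series."""
    x = np.asarray(x, dtype=float)
    T = discrete_orthobasis(len(x), h)
    avail = ~np.isnan(x)                       # the subset X^P
    Tp = T[avail]
    # w_l = sum_{k in X^P} x_k T_lN(k) / sum_{k in X^P} T_lN^2(k)
    w = (Tp * x[avail, None]).sum(axis=0) / (Tp ** 2).sum(axis=0)
    x_hat = x.copy()
    x_hat[~avail] = T[~avail] @ w              # restore only the gaps X^G
    return x_hat
```

Note that the ratio formula for the weights is evaluated over the available subset only, exactly as in the relations above.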

Combining the available ($\tilde x_k$, $k \in X^P$) and restored ($\hat x_k$, $k \in X^G$) values of the time series, it is possible to form a sequence containing all $N$ values, denoted $\hat x_1, \hat x_2, \dots, \hat x_N$. Next, let us introduce the parameter vector $w = (w_0, w_1, \dots, w_h)^T$ and the T-system of orthogonal polynomials $T_N(k) = (T_{0N}(k), T_{1N}(k), \dots, T_{hN}(k))^T$ and rewrite (5) in vector form
$$\tilde T_h^{x}(k) = w^T T_N(k),$$

where the vector $w$ is calculated by the standard method of least squares
$$w(N) = \Biggl(\,\sum_{\substack{k=1 \\ k \in X^P}}^{N} T_N(k)\,T_N^T(k)\Biggr)^{-1} \sum_{\substack{k=1 \\ k \in X^P}}^{N} T_N(k)\,\tilde x_k = P(N)\sum_{\substack{k=1 \\ k \in X^P}}^{N} T_N(k)\,\tilde x_k. \qquad (6)$$

After that we can finally write
$$\hat x_k = w^T(N)\,T_N(k) \quad \forall k \in X^G, \qquad \hat x_k = \tilde x_k \quad \forall k \in X^P. \qquad (7)$$


Using relations (6) and (7), the processing is realized in batch mode for a fixed number of points $N$. If the data are fed for processing sequentially, on-line data processing has to be organized. In [17] it was proposed to use for this purpose the recurrent least squares method in the form

$$w(N) = w(N-1) + \frac{P(N-1)\bigl(\tilde x_N - w^T(N-1)\,T_N(N)\bigr)}{1 + T_N^T(N)\,P(N-1)\,T_N(N)}\,T_N(N),$$
$$P(N) = P(N-1) - \frac{P(N-1)\,T_N(N)\,T_N^T(N)\,P(N-1)}{1 + T_N^T(N)\,P(N-1)\,T_N(N)}, \qquad (8)$$

but it should be noted that with the arrival of a new $(N+1)$-th observation $x_{N+1}$ the structure of the approximating polynomials essentially changes, so that $T_N(k) \ne T_{N+1}(k)$. It is clear that this fact significantly complicates the realization of on-line processing. To keep the structure of the approximating polynomials fixed, we can organize data processing on a sliding window of length $s$: at each moment $k$ we use the computer time $k = 1, 2, \dots, s$, connected with the real time so that the calculations involve only the moments $k-s+1, k-s+2, \dots, k$:

$$T_{0s}(k) = \frac{1}{\sqrt{s}}, \qquad T_{1s}(k) = \sqrt{\frac{3}{s(s^2 - 1)}}\,(2k - s - 1),$$
$$T_{ls}(k) = (a_l k + b_l)\,T_{l-1,s}(k) - c_l\,T_{l-2,s}(k), \qquad l = 2, 3, \dots, h,$$
$$a_l = \frac{2}{l}\sqrt{\frac{d_l(d_l - 2)}{g_l}}, \qquad b_l = -\frac{s+1}{l}\sqrt{\frac{d_l(d_l - 2)}{g_l}} = -\frac{s+1}{2}\,a_l, \qquad (9)$$
$$c_l = \frac{l-1}{l}\sqrt{\frac{d_l(d_l + g_l - 2)}{(d_l - 4)\,g_l}}, \qquad d_l = 2l + 1, \qquad g_l = s^2 - l^2.$$
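Such a fixed window basis can also be obtained purely numerically; the sketch below (Python/NumPy; names are illustrative) builds an orthonormal polynomial basis on a window of length $s$ by QR orthonormalization, which spans the same degree-graded subspace as the recurrences above:

```python
import numpy as np

def window_orthobasis(s, h):
    """Orthonormal polynomials T_ls(k), l = 0..h, over the window k = 1..s."""
    k = np.arange(1, s + 1, dtype=float)
    V = np.vander(k, h + 1, increasing=True)   # columns 1, k, ..., k^h
    Q, R = np.linalg.qr(V)
    Q = Q * np.sign(np.diag(R))                # fix the signs of the leading coefficients
    return Q                                   # shape (s, h + 1), Q.T @ Q = I

T = window_orthobasis(10, 3)                   # s = 10, polynomials up to degree h = 3
assert np.allclose(T.T @ T, np.eye(4))
```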

Then the estimate of the type (6) with the fixed structure of the polynomials $T_s(k)$ can be written as

$$w_s(N) = \Biggl(\,\sum_{\substack{k=N-s+1 \\ k \in X^P}}^{N} T_s(k)\,T_s^T(k)\Biggr)^{-1} \sum_{\substack{k=N-s+1 \\ k \in X^P}}^{N} T_s(k)\,\tilde x_k = P_s(N)\sum_{\substack{k=N-s+1 \\ k \in X^P}}^{N} T_s(k)\,\tilde x_k \qquad (10)$$

and the estimates of the missing observations
$$\hat x_k = \tilde x_k = w_s^T(N)\,T_s(k) \quad \forall k \in X^G.$$

The implementation of the described approach is very convenient within the apparatus of artificial neural networks using the ortho-synapse [18], shown in Fig. 1 and trained by the algorithm (10). The ortho-synapse coincides in structure with the nonlinear synapse of the neo-fuzzy neuron [19, 20], but instead of membership functions it contains the orthogonal activation functions $T_{ls}$, which makes the learning process easier and quicker. It is also important that, due to the use of a sliding window, these activation functions do not change during the learning process. The size of the sliding window is selected from empirical considerations as $s > h + 1$; thus, if the approximated sequence is nonstationary, this size should not significantly exceed the number of estimated parameters.
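A minimal sketch of the forward computation of such an ortho-synapse (Python; Chebyshev activations of the type (1)-(2) are assumed, and the class name is illustrative):

```python
import numpy as np

class OrthoSynapse:
    """h + 1 orthogonal (Chebyshev) activation functions of one scalar input,
    combined by the tunable weights w_l; the input is assumed scaled to [-1, 1]."""
    def __init__(self, h):
        self.h = h
        self.w = np.zeros(h + 1)

    def activations(self, x):
        """Vector (T_0(x), ..., T_h(x)) computed by the recurrence (2)."""
        T = np.empty(self.h + 1)
        T[0] = 1.0
        if self.h >= 1:
            T[1] = x
        for l in range(2, self.h + 1):
            T[l] = 2.0 * x * T[l - 1] - T[l - 2]
        return T

    def output(self, x):
        return self.w @ self.activations(x)
```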

ARCHITECTURE OF ORTHOGONAL NEURAL NETWORK

The use of orthogonal polynomials as activation functions has led to the creation of a whole group of orthogonal neural networks [21-29] possessing good approximating properties and fast learning of the synaptic weights. In [30-33] growing orthogonal networks based on ortho-synapses and ortho-neurons [18] have been proposed. These networks are characterized by the simplicity of learning of both the synaptic weights and the architecture.

Fig. 2 shows the architecture of the orthogonal neural network for data processing with lost observations, which implements the nonlinear mapping
$$\hat y_k = \sum_{i=1}^{n}\sum_{l=0}^{h} w_{li}\,T_{li}(\hat x_{ki}), \qquad (11)$$

where $k = 1, 2, \dots$ is the current discrete time, $w_{li}$ are the tuned synaptic weights, and $T_{li}(\cdot)$ is the $l$-th orthogonal activation function of the type (1) or (3) for the input signal $\hat x_{ki}$, $i = 1, 2, \dots, n$.

The series of observations $x_{ki}$, containing an a priori unknown number of missing values, is fed to the zero (receptive) layer of the network. Note also that the external training signal $y_k$ can contain gaps as well.


Fig. 1. Ortho-synapse


Fig. 2. The orthogonal neural network for data processing with the lost observations

The first hidden layer of the network is formed by $n$ ortho-synapses $O$-$S_{x_i}$ and serves to restore the missing observations.

The same function for the reference training signal $y_k$ is performed by the ortho-synapse $O$-$S_y$. As a result, the time series $\hat x_{ki}$, $\hat y_k^*$ are formed in on-line mode. The output layer is formed by an ortho-neuron that coincides in architecture with the neo-fuzzy neuron, but contains orthogonal activation functions instead of the usual membership functions. The error

$$e_k = \hat y_k^* - \hat y_k$$

is used by the learning algorithm to tune both the weights and the architecture.

The present architecture contains $2n + 1$ ortho-synapses and $(h + 1)(2n + 1)$ synaptic weights to be estimated; at the same time, it is very important that the output signal $\hat y_k$ depends linearly on these weights.

SYNAPTIC ADAPTATION

Let us introduce the standard learning criterion
$$E_N = \sum_{k=1}^{N} e_k^2 = \sum_{k=1}^{N}\Bigl(\hat y_k^* - \sum_{i=1}^{n}\sum_{l=0}^{h} w_{li}\,T_{li}(\hat x_{ki})\Bigr)^2,$$
the $((h+1)n \times 1)$-vectors of activation functions
$$T(\hat x_k) = \bigl(T_{01}(\hat x_{k1}), T_{11}(\hat x_{k1}), \dots, T_{h1}(\hat x_{k1}), T_{02}(\hat x_{k2}), \dots, T_{h2}(\hat x_{k2}), \dots, T_{hn}(\hat x_{kn})\bigr)^T = \bigl(T_1^T(\hat x_{k1}), T_2^T(\hat x_{k2}), \dots, T_n^T(\hat x_{kn})\bigr)^T$$
and synaptic weights
$$w = (w_{01}, w_{11}, \dots, w_{h1}, w_{02}, \dots, w_{h2}, \dots, w_{hn})^T = (w_1^T, w_2^T, \dots, w_n^T)^T,$$
and rewrite the network output (11) in the compact form
$$\hat y_k = w^T T(\hat x_k) = \sum_{i=1}^{n} w_i^T T_i(\hat x_{ki}). \qquad (12)$$
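For illustration, the compact form (12) can be computed as follows (a minimal Python/NumPy sketch; the names are illustrative and the restored inputs are assumed to be already scaled to $[-1, 1]$):

```python
import numpy as np

def cheb_vector(x, h):
    """(T_0(x), ..., T_h(x)) for a scalar x, computed by the recurrence (2)."""
    T = np.empty(h + 1)
    T[0] = 1.0
    if h >= 1:
        T[1] = x
    for l in range(2, h + 1):
        T[l] = 2.0 * x * T[l - 1] - T[l - 2]
    return T

def full_activation_vector(x_hat, h):
    """Stacked ((h + 1) n x 1)-vector T(x_hat) for an n-dimensional input."""
    return np.concatenate([cheb_vector(xi, h) for xi in x_hat])

# output (12): y_hat = w^T T(x_hat)
n, h = 3, 4
rng = np.random.default_rng(0)
w = rng.normal(size=(h + 1) * n)               # synaptic weight vector
x_hat = rng.uniform(-1.0, 1.0, size=n)         # restored inputs
y_hat = w @ full_activation_vector(x_hat, h)
```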

To tune the vector of synaptic weights $w$ in sequential mode it is possible to use the recurrent least squares method (8); however, the processing of non-stationary signals requires adaptive procedures with tracking properties, such as the least squares method on a sliding window, which in this situation can be written as

$$\tilde w(N) = w(N-1) + \frac{P(N-1)\bigl(\hat y_N^* - w^T(N-1)\,T(\hat x_N)\bigr)}{1 + T^T(\hat x_N)\,P(N-1)\,T(\hat x_N)}\,T(\hat x_N),$$
$$\tilde P(N) = P(N-1) - \frac{P(N-1)\,T(\hat x_N)\,T^T(\hat x_N)\,P(N-1)}{1 + T^T(\hat x_N)\,P(N-1)\,T(\hat x_N)}, \qquad (13)$$
$$P(N) = \tilde P(N) + \frac{\tilde P(N)\,T(\hat x_{N-s})\,T^T(\hat x_{N-s})\,\tilde P(N)}{1 - T^T(\hat x_{N-s})\,\tilde P(N)\,T(\hat x_{N-s})},$$
$$w(N) = \tilde w(N) - \frac{\tilde P(N)\bigl(\hat y_{N-s}^* - \tilde w^T(N)\,T(\hat x_{N-s})\bigr)}{1 - T^T(\hat x_{N-s})\,\tilde P(N)\,T(\hat x_{N-s})}\,T(\hat x_{N-s}),$$

or

$$w(N) = \Biggl(\sum_{k=N-s+1}^{N} T(\hat x_k)\,T^T(\hat x_k)\Biggr)^{-1} \sum_{k=N-s+1}^{N} T(\hat x_k)\,\hat y_k^* = P_s(N)\sum_{k=N-s+1}^{N} T(\hat x_k)\,\hat y_k^*, \qquad (14)$$

thus, due to the diagonality of the matrix $P_s(N)$, the calculation of estimate (14) does not cause difficulties even for large $n$ and $h$.
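The sliding-window tuning of the type (13) can be sketched as follows (Python/NumPy; class and variable names are illustrative assumptions): at each step the newest sample is first included and then the oldest one is excluded.

```python
import numpy as np

class SlidingWindowRLS:
    """Least squares on a sliding window of length s, cf. (13)."""
    def __init__(self, dim, s, delta=1e3):
        self.s = s
        self.w = np.zeros(dim)
        self.P = delta * np.eye(dim)           # large initial "covariance"
        self.window = []                       # stored (T(x_k), y_k) pairs

    def _include(self, t, y):                  # add the newest observation
        Pt = self.P @ t
        denom = 1.0 + t @ Pt
        self.w = self.w + Pt * (y - self.w @ t) / denom
        self.P = self.P - np.outer(Pt, Pt) / denom

    def _exclude(self, t, y):                  # discard the oldest observation
        Pt = self.P @ t
        denom = 1.0 - t @ Pt
        self.w = self.w - Pt * (y - self.w @ t) / denom
        self.P = self.P + np.outer(Pt, Pt) / denom

    def update(self, t, y):
        """t: activation vector T(x_hat_k); y: restored reference signal."""
        self._include(t, y)
        self.window.append((t, y))
        if len(self.window) > self.s:
            self._exclude(*self.window.pop(0))
        return self.w
```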

ARCHITECTURAL ADAPTATION

The number of activation functions $h + 1$ in each ortho-synapse $O$-$S_i$ of the output layer is chosen rather arbitrarily, so if it is found that the synthesized neural network does not provide the required quality of information processing, the number of these functions can be increased (or decreased, if necessary) in on-line mode directly during the learning process. Thus the network acquires evolving properties by adapting its structure, while, due to the orthogonality of the activation functions, the corresponding correction of the synaptic weights is performed in a simple way.

Suppose that at the instant $N = N^*$ it was decided that an activation function $T_{h+1,i}$ has to be added to each ortho-synapse $O$-$S_i$, so that $n$ additional synaptic weights $w_{h+1,i}$ should be introduced into the network.

The estimate w(N) (14) should be corrected in this situation as follows:


$$w^{+}(N^*) = \begin{pmatrix} \sum\limits_{k=N^*-s+1}^{N^*} T(\hat x_k)\,T^T(\hat x_k) & R_{l,h+1}(\hat x_k) \\ R_{l,h+1}^T(\hat x_k) & R_{h+1,h+1}(\hat x_k) \end{pmatrix}^{-1} \begin{pmatrix} \sum\limits_{k=N^*-s+1}^{N^*} T(\hat x_k)\,\hat y_k^* \\ r_{h+1}(\hat x_k) \end{pmatrix}, \qquad (15)$$
where $w^{+}(N^*)$ is the corrected estimate, the $((h+1)n \times n)$-matrix $R_{l,h+1}(\hat x_k)$ is formed by the elements $\sum_{k=N^*-s+1}^{N^*} T_{li}(\hat x_{ki})\,T_{h+1,j}(\hat x_{kj})$, $l = 0, 1, \dots, h$; $i = 1, 2, \dots, n$; $j = 1, 2, \dots, n$; the $(n \times n)$-matrix $R_{h+1,h+1}(\hat x_k)$ is formed by the elements $\sum_{k=N^*-s+1}^{N^*} T_{h+1,i}(\hat x_{ki})\,T_{h+1,j}(\hat x_{kj})$; and the $(n \times 1)$-column $r_{h+1}(\hat x_k)$ is formed by the elements $\sum_{k=N^*-s+1}^{N^*} T_{h+1,i}(\hat x_{ki})\,\hat y_k^*$.

Applying the Frobenius formula, estimate (15) can be rewritten as

$$w^{+}(N^*) = \begin{pmatrix} P_s(N^*) + P_s(N^*)\,R_{l,h+1}\,H^{-1}\,R_{l,h+1}^T\,P_s(N^*) & -P_s(N^*)\,R_{l,h+1}\,H^{-1} \\ -H^{-1}\,R_{l,h+1}^T\,P_s(N^*) & H^{-1} \end{pmatrix} \begin{pmatrix} \sum\limits_{k=N^*-s+1}^{N^*} T(\hat x_k)\,\hat y_k^* \\ r_{h+1} \end{pmatrix} =$$
$$= \begin{pmatrix} w(N^*) + P_s(N^*)\,R_{l,h+1}\,H^{-1}\,R_{l,h+1}^T\,w(N^*) - P_s(N^*)\,R_{l,h+1}\,H^{-1}\,r_{h+1} \\ -H^{-1}\,R_{l,h+1}^T\,w(N^*) + H^{-1}\,r_{h+1} \end{pmatrix},$$
where
$$H = H(\hat x_k) = R_{h+1,h+1}(\hat x_k) - R_{l,h+1}^T(\hat x_k)\,P_s(N^*)\,R_{l,h+1}(\hat x_k)$$
(the arguments $\hat x_k$ of $R_{l,h+1}$, $R_{h+1,h+1}$ and $r_{h+1}$ are omitted for brevity).

It is obvious that architectural adaptation is more complicated than synaptic adaptation. Nevertheless, the possibility of implementing it in on-line mode, provided by the use of orthogonal activation functions, allows the required approximating properties to be achieved during the learning process.
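The idea behind (15) can be sketched as follows (Python/NumPy; function and argument names are illustrative): when additional activation functions are appended, the already available matrix $P_s$ is reused through the Schur complement (the Frobenius formula), so the old part of the solution is only corrected rather than recomputed from scratch.

```python
import numpy as np

def grow_ls_solution(Ps, b, R_cross, R_new, r_new):
    """Extend a least squares solution when p new basis functions are appended.

    Ps      : inverse of the old Gram matrix, shape (m, m)
    b       : old right-hand side, sum of T(x_k) * y_k, shape (m,)
    R_cross : cross Gram block between old and new activations, shape (m, p)
    R_new   : Gram block of the new activations, shape (p, p)
    r_new   : right-hand side for the new activations, shape (p,)
    Returns the extended weight vector of length m + p.
    """
    w_old = Ps @ b
    H = R_new - R_cross.T @ Ps @ R_cross                      # Schur complement
    w_tail = np.linalg.solve(H, r_new - R_cross.T @ w_old)    # new weights
    w_head = w_old - Ps @ R_cross @ w_tail                    # corrected old weights
    return np.concatenate([w_head, w_tail])
```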

CONCLUSION

An evolving neural network that, owing to the use of orthogonal activation functions, tunes both its synaptic weights and its structure during the learning process is proposed. Another important feature of the proposed network is the possibility of processing, in on-line mode, information corrupted by missing values in the data. The neural network under consideration is characterized by high speed and can process distorted nonlinear nonstationary stochastic and chaotic signals in real time.


The paper was received by the editorial office on 16.10.2013.


REFERENCES

1. Jang J.-S., Sun C.-T., Mizutani E. Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Upper Saddle River: Prentice Hall, 1997, 640 p.

2. Haykin S. Neural Networks. A Comprehensive Foundation. Upper Saddle River, Prentice Hall, 1999, 842 p.

3. Nelles O. Nonlinear System Identification, Berlin, Springer, 2001, 585 p.

4. Rutkowska D. Neuro-Fuzzy Architectures and Hybrid Learning. Berlin, Springer, 2002, 288 p.

5. Rutkowski L. Computational Intelligence. Methods and Techniques. Berlin-Heidelberg, Springer-Verlag, 2008, 514 p.

6. Kasabov N. Evolving Connectionist Systems. London, Springer-Verlag, 2003, 307 p.

7. Lughofer E. Evolving Fuzzy Systems - Methodologies, Advanced Concepts and Applications. Berlin-Heidelberg, Springer-Verlag, 2011, 454 p.

8. Zagoruyko N. G. Empiricheskiye predskazaniya. Novosibirsk, Nauka, 1979, 120 p.

9. Han J., Kamber M. Data Mining, Concepts and Techniques. Amsterdam, Morgan Kaufman Publ., 2006, 743 p.

10. Gorban A., Kegl B., Wunsch B., Zinovyev A. (Eds.) Principal Manifolds for Data Visualization and Dimension Reduction, LNCS, Vol. 58, Berlin-Heidelberg, New York, Springer, 2007, 330 p.

11. Bishop C. M. Neural Networks for Pattern Recognition. Oxford, Clarendon Press, 1995, 482 p.

12. Gorban A. N., Rossiev A. A., Wunsch II D. C. Neural network modeling of data with gaps, Radio Electronics, Computer Science, Control, 2000, No. 1 (3), pp. 47-55.

13. Tkacz M. Artificial neural networks in incomplete data sets processing, In: Eds. Klopotek M. A., Wierzchon S. T., Trojanowski K. Intelligent Information Processing and Web Mining. Berlin-Heidelberg, Springer-Verlag, 2005, pp. 577-583.

14. Marwala T. Computational Intelligence for Missing Data Imputation, Estimation, and Management: Knowledge Optimization Techniques. Hershey-New York, Information Science Reference, 2009, 303 p.

15. Suetin P. K. Klassicheskiye ortogonal'nye mnogochleny. Moscow, Nauka, 1976, 328 p.

16. Karlin S., Studden W. J. Tchebycheff Systems with Applications in Analysis and Statistics. N.Y.-London-Sydney, Interscience Publishers, 1966, 586 p.

17. Semesenko M. P. Metody obrabotki i analiza izmereniy v nauchnykh issledovaniyakh, Kiyev-Donetsk, Vishcha shkola, 1983, 240 p.

18. Bodyanskiy Ye. V., Viktorov Ye. A., Slipchenko A. N. Ortosinaps, ortoneyrony i neyroprediktor na ikh osnove, Sistemi obrobki informatsii, 2007, Vip. 4 (62), pp. 139-143.

19. Uchino E., Yamakawa T. Soft computing based signal prediction, restoration and filtering. Ed. Da Ruan «Intelligent Hybrid Systems: Fuzzy Logic, Neural Networks and Genetic

Algorithms», Boston, Kluwer Academic Publisher, 1997, pp. 331-349.

20. Miki T., Yamakowa T. Analog implementation of neo-fuzzy neuron and its on-board learning, Ed. N. E. Mastorakis «Computational Intelligence and Applications». Piraeus, WSES Press, 1999, pp. 144-149.

21. Yang S.-S., Tseng C.-S. An orthonormal neural network for function approximation, IEEE Trans. on Syst., Man, and Cybern, 1996, No. 10, pp. 779-784.

22. Lee T. T., Jeng J. T. The Chebychev polynomial-based unified model neural networks for functional approximation, IEEE Trans. on Syst., Man, and Cybern, 1998, vol. 28, No. 12, pp. 925-935.

23. Andras P. Orthogonal RBF neural network approximation, Neural Processing Letters, 1999, Vol. 9, No. 2, pp. 141-151.

24. Chien F. Sh., Tseng Ch.-Ch., Chen Ch.-S. Properties and performance of orthogonal neural network in function approximation, Int. J. Intelligent Systems, 2001, Vol. 16, pp. 1377-1392.

25. Patra J. C., Kot A. C. Nonlinear dynamic system identification using Chebyshev functional link artificial neural networks, IEEE Trans. on Syst., Man, and Cybern, 2002, vol. 32, No. 4. pp. 505-511.

26. Bodyanskiy Ye., Kolodyazhniy V., Slipchenko O. Artificial neural network with orthogonal activation functions for dynamic system identification, Eds. by O. Sawodny, P. Scharff «Sinergies between Information Processing and Automation», Aachen, Shaker-Verlag, 2004, pp. 122-127.

27. Stasiak B., Yatsymirskyy M. Fast orthogonal neural network, Artificial Intelligence and Soft Computing, 2006, Vol. 4029, pp. 142-149.

28. Rodriguez N., Cubillos C. Orthogonal neural network based prediction for OFDM systems, Proc. Electronics, Robust, and Automotive Mechanics Conf., 2007, pp. 225-228.

29. Hongwei W., Shuanghe Yu. Tracking control of robot manipulators based on orthogonal neural network, Int. J. Modeling, Identification, and Control, 2010, No. 11 (1-2), pp. 130-136.

30. Bodyanskiy Ye., Kolodyazhniy V., Slipchenko O. Structural and synaptic adaptation in the artificial neural networks with orthogonal activation functions, Sci. Proc. of Riga Technical University. Comp. Sci., Inf. Technology and Management Sci, 2004, No. 20, pp. 69-76.

31. Bodyanskiy Ye., Slipchenko O. Ontogenic neural networks using orthogonal activation functions, Prace naukowe Akademii Ekonomiczney we Wroclawiu, 2006, No. 21, pp. 13-20.

32. Bodyanskiy Ye., Pliss I., Slipchenko O. Growing neural networks using nonconventional activation functions, Int. J. Information Theories & Applications, 2007, Vol. 14, No. 3, p. 275.

33. Bodyanskiy Ye., Dolotov A., Pliss I., Victorov Ye. The cascade orthogonal neural network, In «Advanced Research in Artificial Intelligence», Vol. 2, Sofia, FOI ITHEA, 2008, pp. 13-20.
