
SECTION 2. MODELS, SYSTEMS, MECHANISMS IN ENGINEERING

UDC 004.032.26

ORGANIZATION OF SOLVING FOR PARTIAL DIFFERENTIAL EQUATIONS BY CELLULAR NEURAL NETWORKS

V. I. Gorbachenko, S. N. Katkov


Abstract. Cellular neural networks with continuous and discrete representation of time are considered as a promising model of massively parallel processing for the solution of partial differential equations.

Key words: cellular neural networks, massively-parallel processing, partial differential equations.


A relevant direction in parallelizing computations is massively parallel processing. Fine-grain parallelism, or massively parallel processing (MPP), is characterized by a very large number of simple, identical operations executed simultaneously, with exchanges of intermediate results at each step of the computation.

Several computing paradigms oriented toward fine-grain parallelism are known [1], among them cellular automata (CA) [2], parallel substitution algorithms [1, 3], and cellular neural networks (CNN) [4-7].

For the solution of partial differential equations (PDEs), the use of CNNs appears most promising. Neural networks are attractive mainly because of constraints on the weight, dimensions, and cost of computing systems [8], rather than because of uncertainty in problem statements, as many believe. Problems with uncertain statements can be solved on single- and multiprocessor computing systems with von Neumann processor architecture.

For fine-grain parallelism, a popular idea is to use discrete models of physical phenomena instead of PDEs [1, 6]. Such models are not discrete approximations of differential equations: in discrete models, the dynamics of the state transitions of a physical process are given by discrete transition rules that reflect the laws of motion and conservation at the micro level. However, discrete models have been developed only for a restricted class of physical phenomena, and establishing the correspondence between a physical process and a discrete model is often difficult [1].

While acknowledging the prospects of the transition from PDEs to discrete models of physical phenomena, we consider massively parallel algorithms for solving finite-difference analogues of PDEs on CNNs, since PDEs still dominate in the description of physical phenomena. We consider the direct solution of PDEs on a CNN, rather than reading off the output vector of a neural network trained on examples of solutions to a set of problems [9]. The output vector of a trained network may be useful in some areas, for example in the control of distributed-parameter systems, but it has limited applicability, since even to train the network one must first solve a set of example problems, and it is impossible to train a network in advance for all possible problems.

Among cellular neural networks one can distinguish CNNs with a continuous representation of time, implementable by analogue hardware, and CNNs with a discrete representation of time, implementable by digital hardware (implementation of such networks by analogue means is also possible). First we consider CNNs with continuous time [4-7]. A CNN is a system of simple processors (cells) arranged at the nodes of a two- or three-dimensional grid. Each cell of a cellular neural network is connected only to adjacent cells, according to a template. The template of a cell is the set of cells to which the given cell is connected by weighted (synaptic) links; the template also includes external inputs with their control coefficients (weights) and a bias. A template is conveniently represented graphically as a matrix whose entries record the connection weights. Consider a two-dimensional cellular neural network with M rows and N columns, and denote the cell in row i and column j by C(i, j). For a linear template, the state equation of cell C(i, j) has the form [4]

$$C\frac{du_{xij}(t)}{dt} = -\frac{1}{R_x}u_{xij}(t) + \sum_{C(k,l)\in N(i,j)} A(i,j;k,l)\,u_{ykl}(t) + \sum_{C(k,l)\in N(i,j)} B(i,j;k,l)\,u_{ukl}(t) + I_{ij},\quad 1\le i\le M,\ 1\le j\le N, \qquad (1)$$

where $u_{xij}(t)$ is the state voltage of cell $C(i,j)$; $C$ is the cell capacitance; $R_x$ is the input resistance of the cell; $u_{ykl}(t)$ is the output voltage of cell $C(k,l)$ belonging to the set $N(i,j)$ of cells in the template of cell $C(i,j)$; $A(i,j;k,l)$ are conductances proportional to the synaptic weights; $u_{ukl}(t)$ are the input voltages; $B(i,j;k,l)$ are the control coefficients; $I_{ij}$ is the bias.

The output equation has the form

$$u_{yij} = f(u_{xij}), \qquad (2)$$

where in general a sigmoid output function $f(x)$ is used, with the properties $|f(x)| \le Q$ and $df(x)/dx \ge 0$, where $Q$ is a constant. Usually the output equation is taken to be linear, $u_{yij} = a\,u_{xij}$, or piecewise linear.

The following assumptions about the network parameters are made:

$$A(i,j;k,l) = A(k,l;i,j),\quad 1\le i,k\le M;\ 1\le j,l\le N;\qquad C > 0,\ R_x > 0.$$

In the absence of external inputs $u_{ukl}(t)$, equation (1) practically coincides with the equation describing a neuron of a Hopfield network [10]. The essential difference is only that in a Hopfield network each neuron is connected to every other neuron.
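A minimal numerical sketch may clarify the cell dynamics (1)-(2). The grid size, template, bias, and time step below are illustrative assumptions, not values from the text; the piecewise-linear output $f(x) = (|x+1| - |x-1|)/2$ is the one commonly used in [4].

```python
import numpy as np

# Hypothetical sketch of the Chua-Yang cell dynamics (1)-(2) on a small
# 2-D grid with a 3x3 feedback template; all parameter values are
# illustrative choices, with external inputs u_ukl set to zero.
M, N = 8, 8
C, Rx = 1.0, 1.0
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 0.0]])          # feedback template A(i, j; k, l)
I = -1.0                                  # bias I_ij
f = lambda x: 0.5 * (np.abs(x + 1) - np.abs(x - 1))   # piecewise-linear output

rng = np.random.default_rng(1)
ux = rng.uniform(-0.1, 0.1, (M, N))       # state voltages near zero
dt = 0.05
for _ in range(400):
    uy = f(ux)
    p = np.pad(uy, 1)                     # cells outside the grid output zero
    feedback = sum(A[r, c] * p[r:r + M, c:c + N]
                   for r in range(3) for c in range(3))
    ux = ux + dt / C * (-ux / Rx + feedback + I)      # Euler step of (1)

print(f(ux).min(), f(ux).max())           # with this negative bias, all outputs saturate at -1
```

With the self-feedback weight exceeding $1/R_x$, the cells settle into saturated states, which is the standard behaviour of this class of networks.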

The CNN cells can be implemented with operational amplifiers that include an input capacitance [4]. When solving PDEs on a CNN, one can take $u_{ukl}(t) = 0$ for all $k$ and $l$. If the grid nodes at which the CNN cells are located are enumerated, the dynamics of the CNN can be written in matrix form as a system of ordinary differential equations (ODEs):

$$C\frac{dU}{dt} = -DU + Tf(U) + I, \qquad (3)$$

where $U$ is the state vector of the network; $D = \mathrm{diag}(g_{11}, g_{22}, g_{33}, \ldots, g_{nn})$ is a diagonal matrix; $T$ is the sparse matrix of connections between outputs and inputs of neurons ($T_{ii} = 0$, $T_{ij} = g_{ij}$); $g_{ij}$ is the conductance linking the output of neuron $j$ to the input of neuron $i$; $g_{ii} = \rho_i^{-1} + \sum_j g_{ij}$, where $\rho_i$ is the input resistance of amplifier $i$ (for modern amplifiers one may take $\rho_i \to \infty$, so $g_{ii} = \sum_j g_{ij}$); $f(U)$ is the vector activation function of the neurons; $I$ is the bias vector of the neurons; $C$ is the diagonal matrix of input capacitances of the neurons.

For a linear activation function $f(U) = U$, system (3) takes the form

$$C\frac{dU}{dt} = GU + I, \qquad (4)$$

where $G = -D + T$.

Usually (see, for example, [4, 6, 7, 11, 12]) CNNs described by systems of form (4) are used to solve the systems of ODEs that approximate a partial differential equation by the method of lines. The system of ODEs is treated as a set of equations describing the state of a CNN with the appropriate templates. The transient process in the network must then coincide, up to scale factors, with the transient of the physical problem. For example, for the Fourier (heat-conduction) equation

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = b\,\frac{\partial u}{\partial t}, \qquad (5)$$

one can use the method-of-lines approximation [13]: a grid replaces space while time remains continuous. Equation (5) is then replaced by the system of ordinary differential equations

$$-bh^2\,\frac{du_{i,j}}{dt} = 4u_{i,j} - u_{i+1,j} - u_{i-1,j} - u_{i,j+1} - u_{i,j-1},$$

which in matrix form reads

$$L\frac{dX}{dt} = AX. \qquad (6)$$

This system can be modeled on a CNN with continuous-time representation described by the system of ordinary differential equations

$$C\frac{dU}{dt} = GU, \qquad (7)$$

where $U = m_u X$, $G = m_g A$, $C = m_c L$. The similarity condition

$$\frac{m_c}{m_t m_g} = 1$$

must be satisfied, where $m_t = \tau_p/\tau_M$; $\tau_p$ is the duration of the physical process; $\tau_M$ is the duration of the solution on the model.

The template of the cellular neural network coincides with the difference stencil of the differential equation. For example, using the matrix representation, the template of a CNN described by system (7), for an equal step $h$ along the $x$ and $y$ axes, can be written as

$$G(i,j;k,l) = \frac{m_g}{h^2}\begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$

Note that when solving systems (6) with symmetric positive definite matrices $A$ possessing diagonal dominance, the CNN can be built from passive elements (capacitors and resistors). Thus the well-known RC-network models [14] can be regarded as a particular case of CNNs with continuous-time representation.
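The continuous-time behaviour described by (5)-(7) can be sketched numerically. The grid size, the coefficient $b$, and the Euler step below are illustrative assumptions; an explicit Euler integration stands in for the analogue transient of the network.

```python
import numpy as np

# Hypothetical sketch: method-of-lines integration of the Fourier (heat)
# equation (5) on an n x n interior grid, as a continuous-time CNN would
# evolve it. Grid size, b, and time step are illustrative choices.
n, b, h = 10, 1.0, 1.0 / 11
dt = 0.2 * b * h**2           # explicit Euler step, safely below the stability limit

def laplacian_5pt(u):
    """Apply the 5-point stencil (zero Dirichlet boundary) to an n x n field."""
    p = np.pad(u, 1)          # embedding in zeros imposes the boundary condition
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u) / h**2

u = np.random.rand(n, n)      # arbitrary initial state of the cell grid
for _ in range(2000):
    u = u + dt / b * laplacian_5pt(u)   # b du/dt = u_xx + u_yy

print(np.abs(u).max())        # the state decays toward the zero steady state
```

The per-cell update touches only the four neighbours of each node, which is exactly the locality that the CNN template expresses in hardware.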

Difference analogues of stationary partial differential equations, and difference analogues of non-stationary equations obtained by the «discrete space - discrete time» approximation, can be solved on a CNN with continuous-time representation using the settling method [15, 16]: after the transient is complete, the state vector of the network must equal the solution of the system of difference equations. For the class of problems considered, we write the system of difference equations in matrix form

$$AX = F, \qquad (8)$$

where the matrix $A$ is positive definite for many classes of PDEs. When approximating non-stationary PDEs, system (8) must be solved at each time step.

For symmetric positive definite matrices $A$ in (8), we take the activation function linear, $f(U) = -U$, and set the parameters of network (3) as follows: $D_{ii} = m_g|a_{ii}|$, $T_{ij} = m_g|a_{ij}|$, $T_{ii} = 0$, $I_i = m_I F_i$, where $m_g$ is the conductance scale and $m_I$ the current scale; the similarity indicator $m_g m_u / m_I = 1$ must hold, with $m_u$ the voltage scale. Then the state equation of the network in matrix form is

$$C\frac{dU}{dt} = -GU + I = m_I R, \qquad (9)$$

where $G = m_g A$, $I = m_I F$, $U = m_u X$, and $R = F - AX$ is the residual of system (8). The vector $U$ is a function of time.

Using the Lyapunov method, one can show that from any initial state of the network the stationary state corresponding to the solution of system (8) is reached.
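The settling process of equation (9) can be sketched as follows. For illustration all scale factors are taken equal to one ($m_g = m_I = m_u = 1$, $C = E$), and a one-dimensional stencil matrix stands in for a PDE discretization; these are assumptions, not choices from the text.

```python
import numpy as np

# Hypothetical sketch of the settling method: integrate the network
# equation (9), C dU/dt = -G U + I, until the transient dies out; the
# stationary state then solves A X = F.
rng = np.random.default_rng(0)
n = 20
# 1-D Laplacian stencil: symmetric positive definite, diagonally dominant
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
F = rng.standard_normal(n)

U = np.zeros(n)               # arbitrary initial state of the network
dt = 0.4                      # below 2 / lambda_max(A); lambda_max(A) < 4
for _ in range(5000):
    U = U + dt * (F - A @ U)  # Euler step of dU/dt = -AU + F; R = F - AU is the residual

print(np.linalg.norm(F - A @ U))   # residual of A X = F after settling
```

The driving term at each step is exactly the residual $R = F - AX$, so the network "settles" precisely when the difference system (8) is solved.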

In [15], a method of training a CNN using a bias vector is proposed.

Let us now consider CNNs with discrete-time representation. Applying an implicit time discretization to equation (9), we obtain a system of difference equations describing a CNN with discrete-time representation:

$$C\,\frac{U^{(k+1)} - U^{(k)}}{\tau} = -GU^{(k+1)} + I, \qquad (10)$$

where $\tau$ is the time step and $U^{(k)}$ is the vector of neuron states at the instant $t_k = t_{k-1} + \tau$.

Transforming (10), we obtain

$$G_1 U^{(k+1)} = I_1^{(k)}, \qquad (11)$$

where $G_1 = G + G_t$; $G_t = \mathrm{diag}\{g_{ti}\}$; $g_{ti} = m_g c_i/\tau$; $m_g$ is the conductance scale; $I_1^{(k)} = I + G_t U^{(k)}$. For the Fourier equation (5) we have

$$g_{ij} = m_g a_{ij}, \qquad g_{ti} = m_g\,\frac{bh^2}{\tau}, \qquad I_i^{(k)} = m_g\,\frac{bh^2}{\tau}\,u_i^{(k)}.$$

For an elliptic equation $G_t = 0$.

The neural network described by system (11) can, provided the matrix $A$ of system (8) is positive definite and diagonally dominant, be implemented as an analogue passive modeling network. In contrast to the CNNs considered earlier, such a network uses the vector of external inputs $U^{(k)}$ and the control template $g_{ti}$.

Applying an explicit time discretization to equation (9), we obtain a system of difference equations describing a CNN with discrete-time representation:

$$C\,\frac{U^{(k+1)} - U^{(k)}}{\tau} = -GU^{(k)} + I. \qquad (12)$$

Transforming (12), we get

$$U^{(k+1)} = U^{(k)} - WGU^{(k)} + WI = U^{(k)} + \left(I_1 - G_1 U^{(k)}\right) = U^{(k)} + R_1^{(k)} = MU^{(k)} + I_1, \qquad (13)$$

where $W = \tau C^{-1}$; $G_1 = WG$; $I_1 = WI$; $R_1^{(k)} = I_1 - G_1 U^{(k)}$; $M = E - G_1$; $E$ is the identity matrix.

The network described by (13) can be used to implement explicit approximations of PDEs. For example, consider the equation

$$c\,\frac{\partial u}{\partial t} = \frac{\partial}{\partial x}\!\left(a_1\frac{\partial u}{\partial x}\right) + \frac{\partial}{\partial y}\!\left(a_2\frac{\partial u}{\partial y}\right). \qquad (14)$$

Writing the result of approximating equation (14) in matrix form, we obtain

$$U^{(k+1)} = U^{(k)} + R_2^{(k)}, \qquad (15)$$

where $R_2^{(k)} = \dfrac{\tau}{c}\,AU^{(k)}$; $\tau$ is the time-discretization step; $k$ is the number of the time step; $U^{(k)}$ is the solution vector at the $k$-th time step; $A$ is the matrix formed by the well-known difference-approximation formulas.

The system of difference equations (15) can be realized on a digital CNN with discrete-time representation described by system (13) if one takes $I_1 = 0$ and $G_1 U^{(k)} = -R_2^{(k)}$. The network implementing the explicit approximation according to expression (15) contains two layers. The first layer computes the vector $R_2^{(k)}$; its template corresponds to the difference stencil. The second layer is formed by two-input neurons that sum the components of the vectors $U^{(k)}$ and $R_2^{(k)}$. In this example the simplest explicit approximation scheme is used, whose stability is determined by the value of the time-discretization step $\tau$.

Expression (13) can also be regarded as a record of an iterative algorithm for solving the system of difference equations (8). In particular, expression (13) describes Richardson's method [17] if one takes $I_1 = F$, $G_1 = A$. In [18], neural implementations of the most widespread iterative algorithms are proposed, and the stability of digital CNNs implementing iterative algorithms is demonstrated using the Lyapunov method. It is shown there that the study of the stability of a CNN can be reduced to the study of the convergence of the corresponding iterative algorithm. In particular, the stability of the network described by expression (13) is determined by the spectral radius of the matrix $M$.
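The link between network stability and the spectral radius of $M$ can be checked directly. In the sketch below the time step $\tau$ is absorbed into $G_1$ and $I_1$ ($G_1 = \tau A$, $I_1 = \tau F$), an illustrative scaling chosen to keep the spectral radius of $M$ below one; the stencil matrix and sizes are likewise assumptions.

```python
import numpy as np

# Hypothetical sketch: expression (13), U^{(k+1)} = M U^{(k)} + I1 with
# M = E - G1, as Richardson's iteration for A X = F.
# Convergence holds exactly when the spectral radius of M is below one.
n = 12
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D stencil matrix
F = np.ones(n)
tau = 0.45                      # must satisfy tau < 2 / lambda_max(A)

M = np.eye(n) - tau * A
print(max(abs(np.linalg.eigvals(M))))   # spectral radius, below one for this tau

U = np.zeros(n)
for _ in range(4000):
    U = M @ U + tau * F         # one step of the digital CNN / Richardson iteration

print(np.linalg.norm(A @ U - F))        # the iterates converge to the solution of A X = F
```

Choosing $\tau$ above $2/\lambda_{\max}(A)$ pushes the spectral radius of $M$ past one and the same loop diverges, which mirrors the stability criterion stated for the network.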

References

1. Bandman, O. L. Fine-grain parallelism in computational mathematics / O. L. Bandman // Programming. - Nauka, 2001. - № 4. - P. 5-20 (in Russian).

2. Wurtz, D. Introduction to cellular automata computing, neural network computing and transputer based special purpose computers / D. Wurtz, G. Hartung // Helvetica Physica Acta. - 1989. - Vol. 62, № 5. - P. 461-488.

3. Achasova, S. M. Parallel substitution algorithm. Theory and application / S. M. Achasova, O. L. Bandman, V. P. Markova, S. V. Piskunov. - World Scientific, Singapore, 1994.

4. Chua, L. O. Cellular neural networks: Theory / L. O. Chua, L. Yang // IEEE Transactions on Circuits and Systems. - 1988. - Vol. 35, № 10. - P. 1257-1272.

5. Chua, L. O. Cellular neural networks: Applications / L. O. Chua, L. Yang // IEEE Transactions on Circuits and Systems. - 1988. - Vol. 35, № 10. - P. 1273-1290.

6. Chua, L. O. CNN: a paradigm for complexity / L. O. Chua. - World Scientific, Singapore, 1998.

7. Manganaro, G. Cellular neural networks. Chaos, complexity and VLSI processing / G. Manganaro, P. Arena, L. Fortuna. - Springer, Berlin, 1999.

8. Galushkin, A. I. Neurocomputers, book 3 / A. I. Galushkin. - IPRJ, Moscow, 2000 (in Russian).

9. Puffer, F. Learning algorithm for cellular neural networks (CNN) solving nonlinear partial differential equations / F. Puffer, R. Tetzlaff, D. Wolf // Proc. ISSSE'95. - San Francisco, 1995. - P. 501-504.

10. Haykin, S. Neural networks: a comprehensive foundation / S. Haykin. - Prentice Hall, New Jersey, 1999.

11. Roska, T. Solving partial differential equations by CNN / T. Roska, D. Wolf, T. Kozek, R. Tetzlaff, L. Chua // Proc. 11th European Conf. on Circuit Theory and Design. - Davos, 1993. - P. 1477-1482.

12. Roska, T. Simulating nonlinear waves and partial differential equations via CNN - Part I: Basic techniques / T. Roska, L. O. Chua, D. Wolf, T. Kozek, R. Tetzlaff, F. Puffer // IEEE Transactions on Circuits and Systems - I: Fundamental Theory and Applications. - 1995. - Vol. 42, № 10. - P. 807-815.

13. Ortega, J. M. An introduction to numerical methods for differential equations / J. M. Ortega, W. G. Poole. - Nauka, Moscow, 1986 (in Russian).

14. Tetelbaum, I. M. Models of direct analogy / I. M. Tetelbaum, J. I. Tetelbaum. - Nauka, Moscow, 1979 (in Russian).

15. Gorbachenko, V. I. Methods for solving partial differential equations / V. I. Gorbachenko // Neurocomputers: Design and Applications. - Begell House Inc., New York, 2000. - Vol. 1, Issue 2. - P. 16-29.

16. Gorbachenko, V. I. Solving of partial differential equations by using cellular neural networks / V. I. Gorbachenko // Neural Information Processing (ICONIP 2001 Proceedings): 8th International Conference on Neural Information Processing, November 14-18, 2001. - Shanghai, China, 2001. - Vol. 2. - P. 616-618.

17. Hageman, L. A. Applied iterative methods / L. A. Hageman, D. M. Young. - Academic Press, 1981.

18. Gorbachenko, V. I. Solution of systems of difference equations on digital neural networks / V. I. Gorbachenko // Neurocomputers: Design and Applications. - IPRJ, Moscow, 2001. - № 3. - P. 38-49 (in Russian).

Gorbachenko Vladimir Ivanovich
doctor of technical sciences, professor, head of the department of computer technologies, Penza State University
E-mail: gorvi@mail.ru

Katkov Sergej Nikolaevich
senior lecturer, Penza State University
E-mail: senika2012@yandex.ru


UDC 004.032.26
Gorbachenko, V. I. Organization of solving for partial differential equations by cellular neural networks / V. I. Gorbachenko, S. N. Katkov // Models, Systems, Networks in Economics, Engineering, Nature and Society. - 2014. - № 3 (11). - P. 105-112.
