
UDK 004.93:007.52

THE ASSOCIATIVE 2D-MEMORIES BASED ON MATRIX-TENSOR EQUIVALENTAL MODELS

Vladimir Krasilenko, Alexander Nikolsky, Sergei Pavlov


New principles of development of associative memory (AM) and neural networks (NN) for 2D images based on matrix-tensor equivalental models (MTEMs) are considered. The given models use adaptive-equivalental weighting of 2D images in order to increase memory capacity while storing highly correlated 2D images. The architectures of NN and AM are also suggested. The basic devices for implementations on the basis of the matrix-tensor equivalental model are matrix-tensor equivalentors (MTEs). The MTEMs for two-level and multilevel 2D images are shown. The results of modeling the recognition processes in such AM are given on the example of 2D-image recognition with the number of neurons from 10 to 20 thousand. Successful recognition of correlated images is achieved.

Key words: optical neural networks, associative memory, matrix-tensor equivalentor, equivalental model, matrix multilevel logic.

1. INTRODUCTION

The main prospective tendency in the development of information-computing systems and computer technologies is making them intelligent and similar to human thinking and perception [1]. Manifestations of intelligence - associative recall from memory by fragments (features), classification of patterns, their recognition, training, adaptation to the situation (processed data, necessary precision, etc.), auto-regulation and auto-control - appear in highly developed systems (biological and technical) spontaneously. As models of intellectual systems in the fields of neurophysiology, cognitive psychology and artificial intelligence (AI), the following models of auto-processing nets are mainly used: connectionist models, neural net models, models with parallel distributed processing, and models with activity distribution [2]. Though there are differences and approaches peculiar to the different fields of knowledge where such models are used, basic research reveals a number of important general principles besides spontaneous intellectual qualities. The principle of parallel information processing on all levels in intellectual systems and the concept of active (not passive) memory (the functions of storage and associative processing are distributed among the elements which form the system) belong to such features. The model of a neural net, or the neural model, is a connectionist model which imitates the biophysical processes of information processing in the nervous system as an auto-processing system. The latter shows global system behavior (considered "intelligent") caused by simultaneous local interactions of its numerous elements [2]. The great interest in neural models forces a revaluation of the fundamental theses in many fields of knowledge, including computer technology. Neural models, concepts and paradigms, with their new principles and wide parallelism of processing, are almost the only way forward in the creation of new nontraditional computer technology and active dynamic systems [3]. In science, neural models, as models of brain activity and cognitive processes, will most probably yield prospective results in neurophysiology, pathophysiology, neurology and psychiatry, and lead to the appearance of systems with increased computing resources and intelligent qualities for solving problems of medical informatics and diagnostics.

Many failures on the way to artificial intelligence in recent years occurred because, firstly, the chosen computing techniques were not adequate to the important and complicated problems being solved and, secondly, simple and imperfect neural models and nets were applied. Today, the active development of mathematical logic, especially matrix (multivalued, fuzzy, neural) logic [4-8], the accumulation of data about the continual (analog) and clearly nonlinear functions of neurons [9-11], the elaboration of neural net theory, neurobiology and neurocybernetics, with adequate algebro-logical instruments for mathematical description and modeling [12-15], and the development of optical technologies create conditions for building technical systems adequate in resources and architecture to almost any problem of artificial intelligence. The integration of the data of neurophysiology, neuroanatomy, neurogenetics, neurochemistry and neurolinguistics, i.e. neuroscience, will become the basis of further development of neural intelligence and neural computers. Generalization of the data of neuroscience makes it possible to mark out a number of basic neurointellectual structures, among which the associative memory of 2D images takes an important place. It makes it possible to form a hypothesis about an object from a set of features or fragments of an image and to extract an integrated pattern from memory. The nervous system has a faculty for self-organization. Speaking about the active character of recognition processes in associative memory, and about the processes of training and synthesis of internal images, what is meant here is the possibility to choose, to discard (decrease) or, on the contrary, to add (increase) separate features and fragments, and to change their weights while forming adequate internal images.

In many neural models (with the exception of [15]) used for the optical realization of different associative devices and neural nets, only the carrier sets $C_b = \{-1, 1\}^N$ and $C_u = \{0, 1\}^N$ are used, in bipolar (b) and unipolar (u) coding respectively [1, 3]. Questions connected with increasing the capacity of neural nets and associative memory (AM), especially when storing large and highly correlated images, have long been addressed by many authors [16-18]. Methods and realizations of effective recognition of highly correlated memory vectors, based on weighting the input images [17, 18] and the weights of interconnections [15, 16], were proposed in [16-18]. But these propositions concerned only the coding of binary images and one-dimensional images.

The models proposed in [13-15] and called equivalental are more general and well suited for the representation of bipolar and unipolar signals, including multilevel ones. The connections, especially the inhibitory ones, are described more naturally there. In such models the basic operation is the standard equivalency of vectors, and the models are suitable for different methods of weighting. Considering their prospects, in this work we will show how to build associative (auto-associative and hetero-associative) memory for correlated 2D images, including multilevel (gray-scale) images, on the basis of matrix-tensor models.

2. CONCEPTUAL BACKGROUND AND THEORY

2.1. Basic neurological operations of normalized equivalence

For the mathematical description of neural net associative memory (NNAM), algebro-logical operations will be considered (equivalental algebra [13-15]), combining linear algebra and neural bio-logic (NBL) [4, 6]. Neural bio-logic is an integration (gnoseologically developed and specified) of known logics: multivalued, hybrid, continuous, controlled continuous [12], fuzzy, etc. The integrated operations in fuzzy logic are the operation of fuzzy negation, t-norms and s-norms, and they are related by duality according to the general form of De Morgan's principle. Examples of t-norms are: the logical product (min), the algebraic product ($a \cdot b$), the bounded product, etc. Examples of s-norms are: the logical sum (max), the algebraic sum ($a + b - a \cdot b$), the bounded sum ($1 \wedge (a + b)$), the contrast sum, etc. [20]

The basic operations of NBL, used in the equivalental models of NNAM [13-15], are the binary operations of equivalence and nonequivalence, which have a few variants. These equivalence operations on the carrier set $\Lambda_C = [0; 1]$ are shown in fig. 1 (a, b, c) respectively for:

$$eq_1 = a \sim_1 b = \max\{\min(a, b), \min(\bar{a}, \bar{b})\}; \quad eq_2 = a \sim_2 b = a \cdot b + \bar{a} \cdot \bar{b}; \quad eq_3 = a \sim_3 b = \frac{1}{1 + |a - b|}, \qquad (1)$$

where $\bar{a} = 1 - a$, $\bar{b} = 1 - b$.

Figure 1 - Operations of equivalence: a) $eq_1$; b) $eq_2$; c) $eq_3$ (surfaces over $a, b \in [0, 1]$)

Their negations are the first, second and third nonequivalence, respectively. In the general case, for scalar variables $a, b \in \Lambda_C = [A, B]$ (a continuous line segment), the signals themselves, their functions and the segment $\Lambda_C$ can be brought to the segments $[-D, D]$ (or $[-1, 1]$) in bipolar coding and $[0, D]$ (or $[0, 1]$) in unipolar coding. Further, the carrier set $\Lambda_C = [0, 1]$ and its variables will be considered. Besides, for easier transformations we can limit ourselves at first to the equivalence (nonequivalence) operation of the second type, namely $(\sim)$ and $(\nsim)$ respectively.
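For concreteness, the three variants in (1) can be written down directly. The following minimal NumPy sketch (the names eq1, eq2, eq3 and the use of Python are ours, not part of the original implementations) is reused by the later sketches in this article:

```python
import numpy as np

def eq1(a, b):
    # First-type (max-min) equivalence from (1).
    return np.maximum(np.minimum(a, b), np.minimum(1 - a, 1 - b))

def eq2(a, b):
    # Second-type (algebraic) equivalence: a*b + (1-a)*(1-b).
    return a * b + (1 - a) * (1 - b)

def eq3(a, b):
    # Third-type equivalence: 1 / (1 + |a - b|).
    return 1.0 / (1.0 + np.abs(a - b))
```

The corresponding nonequivalences are simply the negations, e.g. 1 - eq2(a, b).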

Extending this basic operation $(\sim)$ to the vector and matrix case of variables, and performing a corresponding normalization to account for all the components of a vector or matrix, we obtain the normalized equivalence $(\sim_n)$ of two sets of variables $A = [a_{ij}]_{m \times n}$ and $B = [b_{ij}]_{m \times n}$:

$$A \sim_n B = \frac{1}{m \cdot n} \sum_{i = 1}^{m} \sum_{j = 1}^{n} (a_{ij} \sim b_{ij}),$$

and the normalized nonequivalence:

$$A \nsim_n B = \frac{1}{m \cdot n} \sum_{i = 1}^{m} \sum_{j = 1}^{n} (a_{ij} \nsim b_{ij})$$

as the integrated negation of the normalized equivalence.
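A sketch of these normalized measures, reusing eq2 from the fragment above (again, the function names are ours):

```python
def norm_eq(A, B):
    # Normalized equivalence: the mean of element-wise (second-type) equivalences.
    return float(np.mean(eq2(A, B)))

def norm_noneq(A, B):
    # Integrated negation of norm_eq; for binary A and B this is
    # the normalized Hamming distance between the two images.
    return 1.0 - norm_eq(A, B)
```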

If the excitation matrix (input 2D image) is $S_{inp}$ and the weight matrix of the connections of the k,l-th neuron-equivalent with the input image is $T_{k,l}$, then precisely this neuron-equivalent performs the operation $(\sim_n)$, because the signal $P_{kl}$ on its output will be:

$$P_{kl} = S_{inp} \sim_n T_{k,l}, \qquad P_{kl} \in \Lambda_C = [0, 1]. \qquad (2)$$

The operation $(\nsim_n)$ will be performed by the dual neuron-nonequivalent with the signal $\bar{P}_{kl} \in \Lambda_C$ on its output:

$$\bar{P}_{kl} = 1 - P_{kl} = S_{inp} \nsim_n T_{k,l}. \qquad (3)$$

Let us note that, using the properties of equivalence operations [15], the signals (direct and dual) on the output of an integrated (dual) neuron-equivalent (neuron-nonequivalent) can also be represented in the following way:

$$P_{kl} = \bar{S}_{inp} \sim_n \bar{T}_{k,l} = \bar{S}_{inp} \nsim_n T_{k,l} = S_{inp} \nsim_n \bar{T}_{k,l}, \quad \bar{P}_{kl} = \bar{S}_{inp} \sim_n T_{k,l} = S_{inp} \sim_n \bar{T}_{k,l} = \bar{S}_{inp} \nsim_n \bar{T}_{k,l}. \qquad (4)$$

Therefore, if such integrated neurons have dual inputs and dual outputs, then each of the latter can be defined in four ways, considering (1)-(4). This redundancy, perhaps, provides a corresponding vitality of neural nets at the expense of such repeated backup.

Normalized equivalence $(\sim_n)$ and nonequivalence $(\nsim_n)$ are more general, new complementary metrics in the matrix space $R^{m \times n}$. In particular, $(\nsim_n)$ is a normalized metric distance $d_1(A, B)/(m \times n)$, and for $A, B \in \{0, 1\}^N$ it turns into the normalized Hamming distance $d_H(A, B)/N$. The variants of the equivalence and nonequivalence operations depend on the different types of t-norms and s-norms used in them and on the integrated operations of intersection and union in fuzzy logic. Depending on the type, variants of equivalental algebra (EA) [15] arise as a new algebro-logical instrument for the creation of an equivalental theory of NNAM on the basis of matrix NBL.

2.2. Nonlinear transformations

The basic operation of NBL with a variable $a_{i,j}$ from $A = [a_{i,j}]_{m \times n}$, ranging over the continuous normalized set $\Lambda_C = [0, 1]^{m \times n}$, can be an operation of integrated nonlinear transformation $\rho(a, \alpha)$ with coefficient $\alpha$:

$$\rho'(a, \alpha) = a^{\alpha}, \quad \rho''(a, \alpha) = (\bar{a})^{\alpha} = (1 - a)^{\alpha}, \quad \rho'''(a, \alpha) = \overline{\rho'(a, \alpha)} = 1 - a^{\alpha}, \quad \rho''''(a, \alpha) = \overline{\rho''(a, \alpha)} = 1 - (1 - a)^{\alpha}, \qquad a \in [0, 1], \ \alpha = 0, 1, 2, \ldots$$

One should note that for $\alpha = 1$ the operation $\rho'''$ is in essence the negation operation of continuous and fuzzy logic, $\rho''$ is also a negation operation, and $\rho'''' = \rho'$. Let us introduce two more iterative equivalental operations of nonlinear transformation, defined in the following way:

$$\gamma(a, \alpha) = 1 \sim \underbrace{\max(a, \bar{a}) \sim \ldots \sim \max(a, \bar{a})}_{\alpha \text{ times}}, \quad \bar{\gamma}(a, \alpha) = 0 \nsim \underbrace{\min(a, \bar{a}) \nsim \ldots \nsim \min(a, \bar{a})}_{\alpha \text{ times}}. \qquad (5)$$

From (5) it can be seen that the second operation is the negation of the first one and vice versa. Besides, for $a \ge 0.5$ the first $(\gamma_>)$ and second $(\bar{\gamma}_>)$ functions become:

$$\gamma_>(a, \alpha) = 1 \sim \underbrace{a \sim \ldots \sim a}_{\alpha \text{ times}}, \quad \bar{\gamma}_>(a, \alpha) = 0 \sim \underbrace{a \sim \ldots \sim a}_{\alpha \text{ times}} = 0 \nsim \underbrace{\bar{a} \nsim \ldots \nsim \bar{a}}_{\alpha \text{ times}}, \qquad (6)$$

and for $a < 0.5$ these functions $\gamma_<$ and $\bar{\gamma}_<$ become:

$$\gamma_<(a, \alpha) = 1 \sim \underbrace{\bar{a} \sim \ldots \sim \bar{a}}_{\alpha \text{ times}}, \quad \bar{\gamma}_<(a, \alpha) = 0 \sim \underbrace{\bar{a} \sim \ldots \sim \bar{a}}_{\alpha \text{ times}} = 0 \nsim \underbrace{a \nsim \ldots \nsim a}_{\alpha \text{ times}}. \qquad (7)$$

Hence it is obvious that for all $a \in [0, 1]$, $\gamma(a, \alpha) \ge 0.5$ and $\bar{\gamma}(a, \alpha) \le 0.5$. By analogy, let us determine these functions of the variable $\bar{a}$:

$$\gamma(\bar{a}, \alpha) = \gamma(a, \alpha), \quad \bar{\gamma}(\bar{a}, \alpha) = \bar{\gamma}(a, \alpha). \qquad (8)$$

From (8) it follows that these functions are symmetric with respect to the variables $a$ and $\bar{a}$.

Weighting (equivalentally) these functions with the variable $a$, we obtain a nonlinear iterative transformation which reduces the ratio between the signals $a$ and $\bar{a}$. We call it the competitive nonlinear transformation and define it, taking into account the properties of the operations:

$$a^{\alpha}_{kn} = a \sim \gamma(a, \alpha) = a \nsim \bar{\gamma}(a, \alpha) = \bar{a} \sim \bar{\gamma}(a, \alpha) = \bar{a} \nsim \gamma(a, \alpha),$$
$$\bar{a}^{\alpha}_{kn} = \bar{a} \sim \gamma(a, \alpha) = \bar{a} \nsim \bar{\gamma}(a, \alpha) = a \sim \bar{\gamma}(a, \alpha) = a \nsim \gamma(a, \alpha). \qquad (9)$$

Expression (9) can be written in a more convenient way, taking into consideration (6) and (7):

$$a^{\alpha}_{kn} = \begin{cases} \gamma(a, \alpha + 1), & \text{for } a \ge 0.5, \\ \bar{\gamma}(a, \alpha + 1), & \text{for } a < 0.5, \end{cases} \qquad \bar{a}^{\alpha}_{kn} = \begin{cases} \bar{\gamma}(a, \alpha + 1), & \text{for } a \ge 0.5, \\ \gamma(a, \alpha + 1), & \text{for } a < 0.5. \end{cases}$$

Or in another way:

$$a^{\alpha}_{kn} = \begin{cases} 1 \sim \underbrace{a \sim \ldots \sim a}_{(\alpha + 1) \text{ times}}, & \text{for } a \ge 0.5, \\ 0 \sim \underbrace{\bar{a} \sim \ldots \sim \bar{a}}_{(\alpha + 1) \text{ times}}, & \text{for } a < 0.5, \end{cases} \qquad \bar{a}^{\alpha}_{kn} = \begin{cases} 0 \sim \underbrace{a \sim \ldots \sim a}_{(\alpha + 1) \text{ times}}, & \text{for } a \ge 0.5, \\ 1 \sim \underbrace{\bar{a} \sim \ldots \sim \bar{a}}_{(\alpha + 1) \text{ times}}, & \text{for } a < 0.5. \end{cases} \qquad (10)$$

It can be seen from (10) that, at the competitive nonlinear transformation, any variable $a$ or $\bar{a}$ in the range $\ge 0.5$ is equivalentally summed with itself $\alpha$ times, while a variable in the range $< 0.5$ is nonequivalentally summed with itself $\alpha$ times. At the same time, the starting indicators for the $(\alpha + 1)$-fold equivalence (nonequivalence) in the corresponding ranges are "1" and "0" for the ranges $\ge 0.5$ and $< 0.5$ in the case of $a^{\alpha}_{kn}$, and "0" and "1" for the same ranges in the case of its negation $\bar{a}^{\alpha}_{kn}$.

Hence we see that different nonlinear transformations can also be expressed by operations of equivalence (nonequivalence), or by t-norms in the case of the $\rho(a, \alpha)$ transformations.
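Under the same assumptions as before (second-type equivalence, our function names), the iterated weighting (5) and the competitive transformation (9)-(10) can be sketched as:

```python
def gamma(a, alpha):
    # Iterated equivalental weighting (5): 1 ~ max(a, 1-a) ~ ... ~ max(a, 1-a),
    # applied alpha times; the result never drops below 0.5.
    a = np.asarray(a, dtype=float)
    y = np.ones_like(a)
    m = np.maximum(a, 1.0 - a)
    for _ in range(alpha):
        y = eq2(y, m)
    return y

def competitive(a, alpha):
    # Competitive transformation (9): values keep their side of 0.5, but the
    # closer a value is to 0.5, the faster it is pulled toward 0.5.
    return eq2(a, gamma(a, alpha))
```

For example, competitive(np.array([0.9, 0.6]), 2) gives approximately [0.756, 0.504]: the weaker of two competing equivalence measures is suppressed toward the indifferent level 0.5 much faster than the stronger one, which is what is needed when the measures $P^q$ of section 2.3 compete.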

2.3. Matrix-tensor equivalental models (MTEM) of NNAM

For simplicity, let us first consider NNAM for the associative recognition and reading of two-level (binary) 2D images. The input, output and q-th trained images are, respectively, $S_{inp} = [S^{inp}_{ij}] \in \{0, 1\}^{N = m \times n}$, $S_{out} = [S^{out}_{ij}] \in \{0, 1\}^N$ and $S^q = [S^q_{ij}] \in \{0, 1\}^N$, where $q \in \{1, \ldots, Q\}$. Then the tensor of weights of the interconnections from the input neurons to the output ones is $T \in [0, 1]^{N \times N}$, where for simple NNAM

$$T_{i, j, \bar{i}, \bar{j}} = \frac{1}{Q} \sum_{q = 1}^{Q} \left( S^q_{i, j} \sim S^q_{\bar{i}, \bar{j}} \right),$$

with $1 \le i \le m$, $1 \le j \le n$, $1 \le \bar{i} \le m$, $1 \le \bar{j} \le n$. Considering the definition of normalized equivalency $(\sim_n)$, the components of the interconnection tensor can be assigned as:

$$T_{i, j, \bar{i}, \bar{j}} = \vec{S}_{i, j} \sim_n \vec{S}_{\bar{i}, \bar{j}}, \qquad (11)$$

where $\vec{S}_{i, j} = (S^1_{i, j}, \ldots, S^q_{i, j}, \ldots, S^Q_{i, j})^t$, $\vec{S}_{\bar{i}, \bar{j}} = (S^1_{\bar{i}, \bar{j}}, \ldots, S^Q_{\bar{i}, \bar{j}})^t$, and $T_{i, j, \bar{i}, \bar{j}} \in \{0, 1/Q, \ldots, (Q - 1)/Q, 1\}$.

To determine the new state of the k,l-th neuron $S^{k,l}_{out}$ at the moment (t+1), it is necessary to consider the contributions of all the weights, which means calculating the normalized equivalence between $S_{inp}(t)$ and the matrix (the k,l-th tensor plane) $T_{k,l} = [T_{i,j,k,l}]_{m \times n}$, and taking its threshold function (the activation function) [15]:

$$\varphi[x] = \begin{cases} 1, & \text{if } x \ge x_0, \\ 0, & \text{if } x < x_0. \end{cases} \qquad (12)$$

The expression for net recalculation is:

$$S^{k,l}_{out}(t + 1) = \varphi\left[ S_{inp}(t) \sim_n T_{k,l} \right],$$

or, considering duality:


$$\bar{S}^{k,l}_{out}(t + 1) = \bar{\varphi}\left[ S_{inp}(t) \sim_n T_{k,l} \right] \quad \text{and} \quad S^{k,l}_{out}(t + 1) = \varphi\left[ \bar{S}_{inp}(t) \nsim_n T_{k,l} \right] = \varphi\left[ \bar{S}_{inp}(t) \sim_n \bar{T}_{k,l} \right]. \qquad (13)$$

Considering (11), the basic recalculation expression can be transformed:

$$S^{k,l}_{out}(t + 1) = \varphi\left[ \frac{1}{Q} \sum_{q = 1}^{Q} \left( S^q_{k,l} \sim P^q \right) \right] = \varphi\left[ \vec{S}_{k,l} \sim_n \vec{P} \right],$$

where $P^q = S_{inp}(t) \sim_n S^q$ and $\vec{P} = (P^1, \ldots, P^Q)^t$. Combining all the k,l-th outputs into the matrix $S_{out}(t + 1)$, one can write the expression for the recalculation of the equivalental model of simple NNAM, extending the threshold scalar operator $\varphi(x)$ to the matrix pel-by-pel operator $\Phi([x])$:

$$S_{out}(t + 1) = \Phi\left[ [net_{k,l}] \right] = \Phi\left[ \left[ \vec{S}_{k,l} \sim_n \vec{P} \right] \right]_{m \times n}. \qquad (14)$$
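A compact recall sketch of the simple model (14); the patterns are stored directly, so the weight tensor (11) never has to be formed explicitly (the function names are ours, and norm_eq/eq2 come from the sketches in section 2.1):

```python
def recall_step(S_inp, patterns, x0=0.5):
    # patterns: (Q, m, n) array of the trained binary images S^q.
    P = np.array([norm_eq(S_inp, Sq) for Sq in patterns])   # P^q, Q scalars
    net = np.mean(eq2(patterns, P[:, None, None]), axis=0)  # (1/Q) sum, eq. (14)
    return (net >= x0).astype(float)                        # activation (12)

def recall(S_inp, patterns, iters=5):
    # Iterative recalculation; 1-5 steps suffice in the experiments of section 4.
    S = S_inp
    for _ in range(iters):
        S = recall_step(S, patterns)
    return S
```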

2.4. MTEM of adaptive-equivalental NNAM

To improve the properties of the models, namely of MTEM of NNAM during the recognition of highly correlated patterns, we introduce two weighting coefficients, $\alpha_{k,l}$ (in essence a matrix) and $\gamma(P^q, p)$ (a vector), and modify the training so that the weights of the interconnections are calculated in the following way:

$$T_{i, j, k, l} = \frac{1}{Q} \sum_{q = 1}^{Q} \left( S^q_{i, j} \sim S^q_{k, l} \sim F^q_p\left( \alpha_{k, l}, \gamma(P^q, p) \right) \right), \qquad (15)$$

where $F^q_p$ is an equivalental operation with the variables $\alpha_{k,l}$ and $\gamma(P^q)$ of the p-th degree, from the coefficients

$$\alpha_{k, l} = \frac{1}{Q} \sum_{q = 1}^{Q} S^q_{k, l} \sim 0.5, \qquad P^q = S_{inp}(t) \sim_n S^q.$$

One coefficient takes into consideration the "equivalence" (similarity) of the input image with each of the stored pattern images; the other accounts for the "equivalence" of the k,l-th pixels of the pattern images.

The process of recalculation of MTEM of NNAM with such dual adaptive-equivalental weighting comes down to matrix-tensor procedures with the operation of "equivalence":

$$S_{out}(t + 1) = \Phi\left[ \left[ S^*_{inp}(t) \sim_n T_{k, l} \right] \right]_{m \times n},$$

where $S^*_{inp} = S_{inp} \sim \alpha_p$ and $F^q_p = (P^q)^p_{kn} \sim (\bar{P}^q)^p_{kn}$ (other types of equivalence operations are possible here). The expression for the recalculation of the state of a single k,l-th neuron (not the whole matrix) in MTEM of NNAM with adaptive-equivalental weighting has a more evident form:


$$S^{k,l}_{out}(t + 1) = \varphi\left[ \frac{1}{Q} \sum_{q = 1}^{Q} \left( S^q_{k, l} \sim \left( \alpha_p \sim S_{inp} \sim_n S^q \right)^p_{kn} \right) \right]. \qquad (16)$$
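The adaptive step (16) can be sketched in the same style. Since the variant of the equivalence operation entering the coefficients is not fixed above (and the second type would be degenerate here, because eq2(x, 0.5) = 0.5 identically), the sketch below makes two labeled assumptions: the third-type equivalence is taken for alpha, and the competitive transform of section 2.2 plays the role of the p-th degree sharpening:

```python
def adaptive_recall_step(S_inp, patterns, p=3, x0=0.5):
    # Assumption: alpha_kl = eq3(mean_q S^q_kl, 0.5); pixels that vary over the
    # stored patterns (mean near 0.5) keep full weight, while pixels constant
    # over all patterns are pulled toward the indifferent level 0.5.
    alpha = eq3(np.mean(patterns, axis=0), 0.5)
    S_star = eq2(S_inp, alpha)                    # weighted input S*_inp
    P = np.array([norm_eq(S_star, Sq) for Sq in patterns])
    P = competitive(P, p)                         # p-th degree sharpening of P^q
    net = np.mean(eq2(patterns, P[:, None, None]), axis=0)
    return (net >= x0).astype(float)
```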

2.5. MTEM of NNAM with weighting for multilevel images

There are basically two types of still images: two-level and multilevel. A two-level image is often called a "black-and-white" image, whereas a multilevel image is usually called a gray-level image. If we represent each pixel of a multilevel image by means of a k-bit binary code word, then we can decompose this image into k images, each having only two levels. Each two-level, or 1-bit, image is referred to as a bit plane. Using different codes for the transformation of a multilevel image into digit-bit planes, we can thus complete an analog-digital transformation of images in any needed code: unit position code, unit normal code at morphological threshold decomposition, binary code at irredundant coding, alternating code, Fibonacci codes and others. The devices which implement such a transformation in parallel for all pixels, called analog-digit image converters, are considered in [21, 22].

For our MTEM of NNAM we used mainly two types of coding in the AD-transformations of multilevel images. In the first case we used morphological threshold decomposition with a programmed number of levels or bit planes and low (high) threshold levels. Here the whole set of digit levels 0-255 of every gray-level image (or of each of the three main colors R, G, B) was transformed into a programmed number (as a rule, 8 in our experiments) of ordered bit planes. That led to an actual compression of information: a multilevel image having 256 levels was transformed into a multilevel image with a smaller number of levels. But the levels of the latter remained the same, and that is why the total dynamic range did not change. Applying the operation of pel-by-pel logic sum to the corresponding ordered bit planes, we form (out of the 8 planes) a result converted bit plane (a two-level image), RCBP. Thus any input image and all trained multilevel images are represented by their RCBPs, which are then used as the input image and the trained images, correspondingly, in the NNAM for two-level images.

In the second case we used an 8-digit ADC of picture type (virtual) for the transformation of multilevel images; for that purpose a corresponding subprogram was written. In hardware realization and in this program we use a new algorithm of ADC and a new mathematical model on the basis of neural logic, but we will not describe them here in detail (we will do that in another work). One should note that all 8 bit planes (of the input, standard and output gray-level images) are processed in parallel (or consecutively, virtually) and used during recognition with the help of 8 independent (possibly dependent) NNAMs for two-level, two-gradation images; a code sketch is given below.

Hence, MTEM of NNAM with AE weighting for multilevel images in the first case of ADC does not differ from the one described in section 2.4, since the RCBP, similar in shape but specially prepared, is used instead of an ordinary two-level 2D image. For the second case the model consists of several two-level models, i.e. it is a superposition, a combination of the digit-by-digit bit-plane models.
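A sketch of the second coding variant (binary code, independent planes); bit_planes and recall_multilevel are our names, and recall is the simple-model routine from section 2.3:

```python
def bit_planes(img, k=8):
    # AD transformation of picture type: k ordered binary bit planes
    # of an 8-bit gray-level image.
    img = img.astype(np.uint8)
    return np.array([(img >> i) & 1 for i in range(k)], dtype=float)

def recall_multilevel(img, pattern_imgs, k=8):
    # Every bit plane is recognized by its own independent two-level NNAM;
    # the k output planes are then reassembled (DA conversion of PT).
    in_planes = bit_planes(img, k)
    pat_planes = np.array([bit_planes(p, k) for p in pattern_imgs])  # (Q, k, m, n)
    out = [recall(in_planes[i], pat_planes[:, i]) for i in range(k)]
    return sum(plane.astype(np.uint8) << i for i, plane in enumerate(out))
```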

3. SYSTEM DESIGN AND PROPOSED IMPLEMENTATIONS

3.1. Systems of NNAM

Fig. 2 shows the structure scheme of a neural net associative memory for multilevel 2D images with the first variant of transformation of the input and trained multi-gradation (colour) 2D images into a result bit plane. An analog-digit converter (ADC) of picture type (PT) and digit-analog converters (DAC) of picture type perform the necessary transformations of multilevel images into a set of bit planes and vice versa. Schemes of coding and decoding of picture type (C of PT and DC of PT) complete the transformation of the set of bit planes $S^0_{inp} \ldots S^7_{inp}$ into one result two-level image $RCBP_{inp}$, and the inversion of the output $RCBP_{out}$ into a set of bit planes $S^0_{out} \ldots S^7_{out}$, respectively. The trained multilevel images $S^q$, being input into the memory with the help of the ADC of PT and the C of PT, are transformed into $RCBP^q$. That is why only two-level (binary) images are stored in the ordinary addressed memory of page type.

For the second variant of coding, according to section 2.5, the structure scheme differs from the one in fig. 2 only in that it has no coder and decoder, and the number of NNAM blocks and of blocks of the memory of trained images (MTI) equals the number of bit planes. Every i-th bit plane $S^i_{inp}$ of the input image, together with the planes $S^i_1 \ldots S^i_q \ldots S^i_Q$ of the trained images, is processed by the i-th $NNAM_i$, which forms the i-th bit plane $S^i_{out}$ of the output image on its output. All of $NNAM_0 \ldots NNAM_7$ can work independently or jointly. In the latter case the internal result measure of equivalence can be a certain complicated function (of the vector normalized equivalence type) of the particular i-th measures of equivalence of the i-th bit planes.

Taking into consideration the limitations connected with the size of the article, let us concentrate on the realization of the basic associative memory for two-level 2D images.

3.2. Implementations

As can be seen from section 2, for the simultaneous parallel calculation of all components of the $\vec{P}$ vector (or the $\vec{P}_{\alpha}$ vector), vector-matrix and matrix-tensor procedures with the operation of equivalence are necessary, forming result sets of values proportional to equivalence. Besides, for the simultaneous calculation of all coefficients $\alpha_{i,j}$ and all functions $F^q_p$, $(P^q)_{kn}$, $\alpha_p$, matrixes of (2D-array) elements are required for the component-by-component calculation of equivalence operations, nonlinear transformations, threshold processing, summation and so on. That is why the proposed MTEMs of NNAM are most easily mapped onto modern and progressive matrix architectures, multifunctional elements of matrix logic [4, 7, 13, 21] and processors of picture type [23-25]. One should mention that the normalization may be omitted; in that case it is necessary to change the threshold of the activation function.

The device which performs the $(\sim_n)$ operation over the S matrix and the T tensor, namely $S \sim_n T$, will be called a matrix-tensor equivalentor (MTE). According to the models elaborated in section 2 (MTEMs of NNAM), just these MTEs are the basic structure elements for them. If the second-type $(\sim)$ equivalency is used, then the MTE can be built on ordinary digital matrix-tensor multipliers, including optical ones, since:

$$S \sim_n T = S \times T + \bar{S} \times \bar{T}. \qquad (17)$$
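In other words, one direct and one contrast-inverted multiplication per stored image are enough. A sketch of such an MTE computing all Q equivalences at once (our names, with NumPy standing in for the optical multipliers):

```python
def mte(S, patterns):
    # S: (m, n) input image; patterns: (Q, m, n) stored images (the tensor T).
    Q, m, n = patterns.shape
    direct = patterns.reshape(Q, -1) @ S.reshape(-1)                  # S x T
    inverted = (1.0 - patterns).reshape(Q, -1) @ (1.0 - S).reshape(-1)
    return (direct + inverted) / (m * n)   # Q normalized equivalences, eq. (17)
```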

Fig. 3 shows the architecture of NNAM on the basis of two matrix-tensor equivalentors, MTE1 and MTE2. The first nonlinear converter NC1 of matrix type is used for the competitive nonlinear transformation (expression (9), section 2.2), and the second one, NC2 of PT, is used for threshold processing according to the activation function. The input commutator-multiplexer M and the output commutator-demultiplexer DM serve for input, output and forming the feedback loop of the iterative recalculations.

Figure 2 - Schematic block diagram of associative memory for multilevel 2D-images (ADC of PT, coder of PT, memory of teaching images MTI, MTEM NNAM for two-level 2D-images, decoder of PT, DAC of PT)

Figure 3 - Block diagram of NNAM (MTE1, MTE2, nonlinear converters NC1 and NC2, multiplexer M, demultiplexer DM, feedback loop)

Figure 4 - Optical system for the MTE (a) and inputting of images into the SLMs (b): SLM1-SLM4, beam splitter BS, half mirror HM, lens array LA, photodetector array PDA, controlling computer

One possible variant of the optical realization of the MTE is shown in fig. 4. The input image $S_{inp}$ is recorded from the managing computer into the first spatial light modulator (SLM1) Q times, i.e. replicated; this multiplication is carried out by the computer during recording into SLM1. The contrast-inverted input image $\bar{S}_{inp}$ is likewise recorded into SLM3 Q times. On the second modulator, SLM2, all Q trained images ($S^1, \ldots, S^Q$) are recorded, and on the fourth, SLM4, all Q trained contrast-inverted images ($\bar{S}^1, \ldots, \bar{S}^Q$). The lens array LA serves for spatial integration within each q-zone; thus, on the input of every q-th photodetector of the array PDA, the signal will be proportional to the normalized equivalence of the input image with the trained image $S^q$. The nonlinear transformations of the blocks NC1 and NC2 shown in the circuit of fig. 3 are carried out over the PDA output signals in the computer. Instead of a PDA it is possible to use a commercial liquid crystal television (LCTV) with 500x500 pixels and 30 ms refresh rate [26]. In this case it is possible to work with 2D images of dimension 32x32 pixels and 10x10 = 100 teaching 2D images, since every LCTV must record all Q images of dimension mxn. This puts the restriction m x n <= (500 x 500)/Q at a given Q, or the restriction Q <= (500 x 500)/(m x n) for a given image format mxn.

Let us estimate the productivity of such NNAM on the basis of the optical realization of the MTE. The recording into all SLMs (LCTVs) (fig. 4) and the updating of data in each iteration are performed simultaneously. The nonlinear processing simulating the work of NC1 and NC2 (fig. 3) and the recording of the processed data into the SLMs can be combined. Let us consider, therefore, that the system executes one iterative recalculation of the network in 50 ms. At a dimension of the input image matrix of 32x32 pixels, i.e. about 10^3 elements of a vector (image), the matrix of weight coefficients has about 10^6 components, or connections. It means that in 50 ms the system has calculated, as a matter of fact, 10^6 connections. Thus the ratings show that the productivity of an NNAM realized on the simplest traditional circuits (fig. 4) and on the basis of slowly working LCTVs reaches 2*10^7 connections/s. There is a quite real opportunity to increase the capacity of SLMs up to 5000x5000 pixel resolution (90 line pairs/mm) with 50 ms response time [27]. In this case, at a 2D-image dimension of 100x100 pixels (and Q = 1000), 10^8 connections are already calculated in 50 ms, and the ratings show a productivity of the NNAM at a level of about 2*10^12 connections/s. The common time of reading from the associative memory does not exceed (2-3)*T_iter, i.e. about 100-150 ms. The usage of other, higher-speed optoelectronic matrixes [13-15], calculating the necessary operation of equivalence and the matrix-tensor procedures, will allow a significant reduction of the access time of the NNAM. In this work we won't concentrate on the realizations in detail, as our purpose is to show the new, most general principles of their realization.
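The first of these throughput figures is a back-of-the-envelope estimate and is easy to re-check:

```python
N = 32 * 32                   # neurons for a 32x32 input image
connections = N ** 2          # ~1e6 components of the weight tensor
t_iter = 50e-3                # one iterative recalculation, in seconds
print(connections / t_iter)   # ~2.1e7 connections per second
```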

4. RESULTS OF MODELING

For modeling, we developed a program realization of the MTEM models; the algorithm is realized in a program product running on a low-class PC. The technical characteristics of this product are the following:

- any quantity of segments (elements, neurons) of a processed image, with dimensions X x Y of up to 30000 neurons;
- time of recognition of an image: from a fraction of a second to a second;
- number of iterations (steps) necessary for recognition of an image: 1 to 5;
- the training period depends only on the time of the data base insertion and is significantly shorter than that of other well-known neural net paradigms;
- the ratio between the quantity of sample vectors and the number of neurons in the network (the network capacity) is 2-2.5 times higher (about 30%) than in Hopfield networks (14-15%);
- the level of distortion of an image which still permits its authentic recognition is up to 30%;
- the number of gradations of the processed images is 256 in each color, as well as black-and-white images;
- the quantity of digit layers in the process of image coding is 8 bit layers;
- a possibility of space-invariant recognition of images.

The mathematical models used are equivalental models; the metric system for the comparison of images is non-Euclidean. A possibility of an adaptive regulation of the network can be foreseen, for renewing and extending the data base and for optimizing the criterion and the metric parameters. The results of modeling are shown in figs. 5-8.

5. CONCLUSION

The suggested matrix-tensor equivalental models (MTEMs) for the construction, on their basis, of neural net associative memory for two-level and multilevel 2D images, including the models with dual weighting, are of great importance, since they provide the opportunity to implement a whole class of competitive NNAM systems with increased capacity, productivity and speed. The results of modeling and experiment prove the validity of the theoretical work. Since the suggested MTEMs are realized on the basis of matrix-tensor (vector-matrix) procedures, multipliers and equivalentors, it is possible to conclude that the construction of complete digital optical NN associative memory and optical pattern recognition systems is both necessary and possible.

Figure 5 - Output of the neurons of the open layer $P_{\alpha}$ for different p: a) p = 1; b) p = 3

Figure 6 - Two-level 2D-images recognition results

Figure 7 - Multilevel 2D-images recognition results on the basis of RCBP

Figure 8 - Multilevel 2D-images invariant recognition results

6. REFERENCES

1. N.M. Amosov et al. Neurostructures and Intelligent Robots. Naukova Dumka, Kiev, 1991.
2. D.A. Redgia, G.G. Satton. Autoprocessing Nets and Their Significance for Biomedical Researches. TIIER, Vol. 76, №6, pp. 46-59, 1988.
3. J.A. Freeman, D.M. Skapura. Neural Networks: Algorithms, Applications and Programming Techniques. Addison-Wesley, 1992.
4. V.G. Krasilenko, O.K. Kolesnitsky, A.K. Boguhvalsky. "Creation Opportunities of Optoelectronic Continuous Logic Neural Elements, Which are Universal Circuitry Macrobasis of Optical Neural Networks". Proc. SPIE, Vol. 2647, pp. 208-217, 1995.
5. V.I. Levin. "Continuous Logic, Its Generalization and Application". Automatica and Telemechanica, №8, pp. 3-22, 1990.
6. V.G. Krasilenko et al. "Lines of Optoelectronic Neural Elements with Optical Inputs/Outputs Based on BISPIN-devices for Optical Neural Networks". Proc. SPIE, Vol. 2647, pp. 264-272, 1995.
7. V.G. Krasilenko, A.T. Magas. "Fundamentals of Design of Multifunctional Devices of Matrix Multiciphered Logic with Fast Programmed Adjusting". Measuring and Computer Technique in Technological Processes, №4, pp. 113-121, 1999.
8. A.A.S. Awwal, K.M. Iftekharuddin. "Computer Arithmetic for Optical Computing. Special Section". Optical Engineering, Vol. 38, №3, 1999.
9. E.N. Sokolov, G.G. Vaytkyavichus. Neurointelligence: from Neuron to Neurocomputer. Nauka, Moscow, 238 p., 1989.
10. N.V. Pozin. Modeling of Neural Structures. Nauka, Moscow, 264 p., 1970.
11. Yu.G. Antomonov. Principles of Neurodynamics. Naukova Dumka, Kiev, 1974.
12. L.N. Volgin. "Complementary Algebra and Relative Models of Neural Structures with Coding of Channel Numbers". Electrical Modeling, Vol. 16, №3, pp. 15-25, 1994.
13. V.G. Krasilenko, A.K. Bogakhvalskiy, A.T. Magas. "Equivalental Models of Neural Networks and Their Effective Optoelectronic Implementations Based on Matrix Multivalued Elements". Proc. SPIE, Vol. 3055, pp. 127-136, 1996.
14. V.G. Krasilenko et al. "Applications of Nonlinear Correlation Functions and Equivalence Models in Advanced Neuronets". Proc. SPIE, Vol. 3317, pp. 211-222, 1997.
15. V.G. Krasilenko et al. "Continuous Logic Equivalental Models of Hamming Network Architectures with Adaptive-Correlated Weighting". Proc. SPIE, Vol. 3402, pp. 398-408, 1997.
16. Guo Doughui, Chen Zhenxiang, Lui Ruitaugh, Wu Boxi. "A Weighted Bipolar Neural Network with High Capacity of Stable Storage". Proc. SPIE, Vol. 2321, pp. 680-682.
17. B. Kiselyov, N. Kulakov, A. Mikaelian, V. Shkitin. "Optical Associative Memory for High-order Correlation Patterns". Optical Engineering, Vol. 31, №4, pp. 764-767.
18. A.L. Mikaelian. "Holographic Memory: Problems and Prospective Applications". Proc. SPIE, Vol. 3402, pp. 2-11, 1997.
19. A.L. Mikaelian (Editor). Optical Memory and Neural Networks. Proc. SPIE, Vol. 3402, 1997.
20. V.G. Krasilenko et al. "Gnoseological Approach to Search of Most General Functional Model of Neuron". Collected research works №7 (2000) of the 7th STC "Measuring and Computer Technique in Technological Processes", Khmelnitsky, MCTTP, pp. 23-27, 2000.
21. O.K. Kolesnitskiy, V.G. Krasilenko. "Analog-Digit Transformers of Picture Type for Digit Optoelectronic Processors (Review)". Autometry, №2, pp. 16-29, 1992.
22. O.K. Kolesnitskiy, V.G. Krasilenko. "Analog-to-Digital Image Converters for Parallel Digital Optoelectronic Processors". Pattern Recognition and Image Analysis, Vol. 2, №1, pp. 227-233, 1992.
23. V.G. Krasilenko et al. "Digital Optoelectronic Processor of Multilevel Images". Electronnoe Modelirovanie, Vol. 15, №3, pp. 13-18, 1993.
24. Patent №1781679 (SU). Device for Multiplication of Square Matrix of Picture-Image. V.G. Krasilenko et al. Publ. in BI №46, 1992.
25. A. Huang. "About Architecture of Optical Digit Computing Machine". TIIER, Vol. 72, №7, pp. 34-41, 1984.
