

УДК 681.32:007.52

PARALLEL IMPLEMENTATION OF NEURAL NETWORK ALGORITHM USING PARALLEL VIRTUAL MACHINE (PVM) SOFTWARE

Jamil Ahmad

To increase the operating speed of neural networks, a technology of parallel implementation of neural networks is proposed. An example of solving a character recognition problem with a neural network simulated using the Parallel Virtual Machine software package is considered.

Although many implementations of artificial neural networks (ANNs) are available on sequential machines, most of these implementations require an immoderate amount of time to train or run the ANNs, especially when the ANN models are large. The problem can be traced to the computational power of the machines, and one possible approach to solving it is a parallel implementation of ANNs. Researchers have therefore adopted a number of strategies since the rebirth of ANNs in 1986 to implement ANN models in parallel environments, but very few of these strategies use a software platform for the implementation. This paper presents a novel technique for implementing an ANN for the recognition of characters using the Parallel Virtual Machine (PVM) software package. PVM permits a heterogeneous collection of computers hooked together by a network to be used as a single large parallel computer, so large computational problems can be solved more cost-effectively by using the aggregate power and memory of many computers. The nodes (neurons) of the ANN model are distributed over the participating computers in the parallel environment so that the necessary calculations are carried out in parallel. The weights are adjusted in the same way whenever there is a discrepancy between the computed and target outputs. Simulation shows that the parallel implementation of the ANN produces better results than the sequential implementation.

1. INTRODUCTION

The growing importance of artificial neural networks is now widely recognized, and it is critical that these models run fast and generate results in real time. As discussed above,


a number of implementations of neural networks are available in sequential form, but most of them suffer from slow speed. One approach to speeding up ANNs is to implement them on parallel machines [1]. In the recent past many attempts have been made to implement ANN models in parallel environments in order to take full advantage of their structure. There are many ways in which neural networks can be organized to operate in parallel; some of them are discussed in [2]-[7]. In one particular arrangement, an ANN in a parallel environment may take the form of several different networks, each operating in logical or real parallelism on the same set of data. Each network is trained, in logical or real parallelism, to make different distinctions. Sometimes, when a set of subtle or complex distinctions is desired, the best solution is to break the problem up into a number of subtasks, each solved by a separate network. A system of networks can then be created, and the results of the different networks can be fused or correlated to obtain the desired result, i.e., multiple networks operate in parallel to solve a single problem [2].

More interesting system design issues arise when we seek to develop systems that can make subtle and complex distinctions from a large body of incoming data [8]. Sometimes a complex distinction may require that several different types of features be extracted from the data, and we may not know in advance which features will be needed. As the network size grows, the representation of feature information can become more distributed, or needed generalizations may fail to be made. In such circumstances a parallel implementation of ANN models provides a better solution than a sequential one. All of the attempts that have been made to implement ANN models in parallel environments can be divided into two categories: software and hardware [9], [2]. However, parallel hardware is used far more often than software. One major problem with hardware implementations of ANNs is portability: the life of a particular parallel machine is generally only a few years, and a user who developed code for one machine a couple of years ago often has to rewrite it for another machine if the original is no longer available. In the right kind of simulation environment, users should be able to easily update their implementations for a new machine through the environment [1].

In this paper we propose a novel method of implementing an ANN for character recognition on a parallel software platform. The PVM software is used to implement a basic perceptron model that recognizes English characters presented to the network in many different fonts. Since PVM runs on virtually any kind of machine, an implementation in this environment does not suffer from portability or flexibility problems (see Section 2 for more discussion of PVM).

2. SELECTION OF APPROPRIATE ALGORITHM FOR PVM SOFTWARE

After a thorough investigation we found that not all neural network models can easily be implemented in a parallel environment, especially those models in which the calculations in the neurons are interdependent.

The perceptron model was selected for implementation with the PVM software for three reasons; however, other models, such as Hopfield and Kohonen networks, can also be implemented with PVM. Interested readers are referred to [1] for further discussion of this issue.

Firstly, the perceptron had perhaps the most far-reaching impact of any of the early neural nets. Secondly, in the perceptron algorithm the calculations in each neuron of the network do not depend on the calculations in other neurons; thus, distributing the neurons over different participating computers is straightforward. Thirdly, under suitable assumptions, its iterative learning procedure can be proved to converge to the correct weights, i.e., the weights that allow the net to produce the correct output value for each of the training input patterns; one of the necessary assumptions is that such weights exist. Details on the perceptron model can be found in [10]-[11]. The algorithm used during the simulation is shown in Figure 1.

Step 1: Initialization
    1.1. Initialize all variables
    1.2. Set weights and bias
    1.3. Set the learning rate α (0 < α ≤ 1)
Step 2: While the stopping condition is false, do Steps 3-7
Step 3: For each training pair s : t, do Steps 4-6
Step 4: Set the activations of the input units: xi = si
Step 5: Compute the response of the output unit:
    y_in = b + Σ xi wi
    y = 1 if y_in > θ;  y = 0 if -θ ≤ y_in ≤ θ;  y = -1 if y_in < -θ
Step 6: Update the weights and bias if an error occurred for this pattern:
    if y ≠ t:
        wi(new) = wi(old) + α t xi
        b(new) = b(old) + α t
    else:
        wi(new) = wi(old)
        b(new) = b(old)
Step 7: Test the stopping condition: if no weights changed in Step 3, stop; otherwise continue

Figure 1 - Algorithm used in this research
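For concreteness, the single-output training step of Figure 1 can be expressed directly in C. The listing below is an illustrative sketch only; the array sizes, the function names and the use of a single output unit are assumptions made for this example, as the paper itself does not give source code.

#define N_INPUTS 63                 /* 7 x 9 bipolar pixels per pattern */

/* Step 5 of Figure 1: thresholded response of one output unit. */
static int perceptron_output(const double w[], double b,
                             const int x[], double theta)
{
    double y_in = b;                            /* y_in = b + sum(xi * wi) */
    for (int i = 0; i < N_INPUTS; i++)
        y_in += x[i] * w[i];
    if (y_in >  theta) return  1;
    if (y_in < -theta) return -1;
    return 0;
}

/* Steps 3-6 of Figure 1 for one pass over the training set; returns the
   number of weight updates, so training stops (Step 7) when it returns 0. */
static int train_epoch(double w[], double *b, int n_patterns,
                       const int x[][N_INPUTS], const int t[],
                       double alpha, double theta)
{
    int updates = 0;
    for (int p = 0; p < n_patterns; p++) {
        int y = perceptron_output(w, *b, x[p], theta);
        if (y != t[p]) {                        /* Step 6: update on error only */
            for (int i = 0; i < N_INPUTS; i++)
                w[i] += alpha * t[p] * x[p][i];
            *b += alpha * t[p];
            updates++;
        }
    }
    return updates;
}

In the parallel version described in Section 3, each slave would apply the same update, but only for the output neurons it has been assigned.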

2.1 What is PVM software?

We have used the PVM software package, which permits a heterogeneous collection of computers hooked together by a network to be used as a single large parallel computer. Thus, large computational problems can be solved more cost-effectively by using the aggregate power and memory of many computers. The software is very portable and can be compiled on machines ranging from laptops to CRAY supercomputers [12].

PVM enables users to exploit their existing computer hardware to solve much larger problems at minimal additional cost. Hundreds of sites around the world are using PVM to solve important scientific, industrial, and medical problems. In addition, PVM is used as an educational tool to teach parallel programming. With tens of thousands of users, PVM has become the de facto standard for distributed computing worldwide.
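As a minimal illustration of the PVM programming model, the following sketch (using the standard PVM 3 C interface from pvm3.h, linked with -lpvm3) enrols the calling process in the virtual machine, reports how many hosts the machine spans, and leaves. It assumes PVM 3 is installed and its pvmd daemons are already running on the participating hosts.

#include <stdio.h>
#include <pvm3.h>

int main(void)
{
    int mytid = pvm_mytid();               /* enrol this process in PVM */
    if (mytid < 0) {
        fprintf(stderr, "could not enrol in the PVM environment\n");
        return 1;
    }

    int nhost, narch;
    struct pvmhostinfo *hosts;
    pvm_config(&nhost, &narch, &hosts);    /* query the virtual machine */
    printf("task t%x: virtual machine has %d host(s)\n", mytid, nhost);

    pvm_exit();                            /* leave the virtual machine */
    return 0;
}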

3. SIMULATION AND RESULTS

3.1. Input processing and presentation to the ANN model

A large number of input patterns covering 7 characters (A, B, C, D, E, J, K) were taken to train the network. Each pattern is made up of 63 pixels (7 columns and 9 rows), where the pixels are represented by the binary values 1 and 0. However, the inputs and target outputs are presented to the neural network in bipolar form (+1 and -1), since the perceptron gives better results with bipolar data.
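The conversion from the stored 0/1 pixel grids to the bipolar form fed to the network is straightforward; the sketch below shows one possible encoding. The 7 x 9 layout follows the paper, while the function and array names are illustrative assumptions.

#define ROWS 9
#define COLS 7
#define N_INPUTS (ROWS * COLS)      /* 63 pixels per character pattern */

/* Map a binary pixel grid (1 = ink, 0 = background) onto the bipolar
   input vector expected by the perceptron: 1 -> +1, 0 -> -1. */
static void to_bipolar(const unsigned char pixels[ROWS][COLS], int x[N_INPUTS])
{
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            x[r * COLS + c] = pixels[r][c] ? 1 : -1;
}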

3.2. Creating Parallel Environment

A PVM implementation usually consists of two main routines, or procedures, known as the master and the slave. The master routine is responsible for initializing the data, receiving the input, distributing the data over the different participating computers, and making the final decision. All of the common and sequential calculations are carried out by the master routine in the PVM environment. The slave routine, on the other hand, is mainly used for calculations and tasks that can be carried out in parallel; ideally there is more than one slave routine, so that parallelism is actually achieved. Execution starts in the master routine, which first enrols itself into the Parallel Virtual Machine environment. After successfully enrolling itself in the PVM environment, the master distributes the data over all slaves. The slave processes are spawned on the different participating host computers available in the PVM environment; the process is shown diagrammatically in Figure 2.

Figure 2 - Operating and data flow of the system

Training is mostly done by the slave routines: the input patterns received by the master are distributed over the slaves for parallel calculation. The nodes (neurons) of the network are assigned and distributed over the different slaves, see Figure 3. If there are n slaves, n-1 slaves receive an equal share of the neural network nodes and the last slave handles the remaining nodes. Each slave then calculates the weights and biases associated with the nodes of the neural network it is handling. In fact, a slave simply runs in an infinite loop: it first looks for a special control message, and if it is an exit message the slave leaves the loop and exits the PVM environment. Otherwise it receives the data, processes it, calculates the weights and biases from the data, sends the result back to the master, and waits for the next instruction.

Figure 3 - Structure of the neural network in the PVM environment: the master sends data to and receives data from all slaves, and the slaves are responsible for the calculation at each neuron of the model
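A condensed sketch of this master/slave structure, written against the PVM 3 C API, is given below. It is an outline only: the message tags, the helper names, the block-wise assignment of output neurons and the choice of two slaves are assumptions made for illustration, not details taken from the paper. In a real PVM setup the master and the slave would be two separate executables (for example ann_master and ann_slave), each with its own main; they are shown here as two functions for compactness.

#include <pvm3.h>

#define N_INPUTS  63        /* bipolar pixels per pattern             */
#define N_OUTPUTS 7         /* one output neuron per character class  */
#define NSLAVES   2         /* best configuration observed in Table 1 */

enum { TAG_WORK = 1, TAG_RESULT = 2, TAG_EXIT = 3 };  /* hypothetical tags */

/* Master: enrol, spawn slaves, scatter work, gather results, shut down. */
void master(int x[N_INPUTS], int t[N_OUTPUTS], double alpha)
{
    int slave[NSLAVES];
    pvm_mytid();                                       /* enrol in PVM      */
    pvm_spawn("ann_slave", NULL, PvmTaskDefault, "", NSLAVES, slave);

    int per = N_OUTPUTS / NSLAVES;                     /* neurons per slave */
    for (int s = 0; s < NSLAVES; s++) {
        int first = s * per;
        int count = (s == NSLAVES - 1) ? N_OUTPUTS - first : per;
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&count, 1, 1);
        pvm_pkint(x, N_INPUTS, 1);                     /* input pattern     */
        pvm_pkint(&t[first], count, 1);                /* targets for block */
        pvm_pkdouble(&alpha, 1, 1);
        pvm_send(slave[s], TAG_WORK);
    }

    for (int s = 0; s < NSLAVES; s++) {                /* gather results    */
        int errors;
        pvm_recv(slave[s], TAG_RESULT);
        pvm_upkint(&errors, 1, 1);
        /* ... unpack and store this slave's updated weights and biases,
               then decide whether another training epoch is required ...  */
    }

    for (int s = 0; s < NSLAVES; s++) {                /* control message   */
        pvm_initsend(PvmDataDefault);
        pvm_send(slave[s], TAG_EXIT);
    }
    pvm_exit();
}

/* Slave: infinite loop that leaves PVM only when the exit message arrives. */
void slave_loop(void)
{
    pvm_mytid();                                       /* enrol in PVM      */
    int master_tid = pvm_parent();

    for (;;) {
        int bufid = pvm_recv(master_tid, -1);          /* any tag           */
        int nbytes, tag, src;
        pvm_bufinfo(bufid, &nbytes, &tag, &src);
        if (tag == TAG_EXIT)                           /* control input     */
            break;

        int count, x[N_INPUTS], t[N_OUTPUTS];
        double alpha;
        pvm_upkint(&count, 1, 1);
        pvm_upkint(x, N_INPUTS, 1);
        pvm_upkint(t, count, 1);
        pvm_upkdouble(&alpha, 1, 1);

        int errors = 0;
        /* ... perceptron update (Figure 1, Steps 5-6) for the `count`
               output neurons assigned to this slave, with the weights and
               biases kept locally; count the errors ...                    */

        pvm_initsend(PvmDataDefault);
        pvm_pkint(&errors, 1, 1);
        /* ... pack the updated weights and biases here ...                 */
        pvm_send(master_tid, TAG_RESULT);
    }
    pvm_exit();                                        /* leave PVM         */
}

The key design point, as described above, is that only the pattern and target data travel over the network, while each slave keeps the weights and biases of its own neurons locally between messages.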

3.3. Analysis of Results

Figure 4 shows the time taken by the sequential and the parallel implementations of the neural network training for pattern recognition; the same information is also given in Table 1.


Figure 4 - Time taken by the parallel model with different numbers of slaves and by the sequential neural network model

Table 1 - Time taken by the parallel model with different numbers of slaves and by the sequential implementation of the neural network model

Number of slaves    Time (msec)
       1                2840
       2                1730
       3                2025
       4                3012
       5                3637
       6                4143
       7                5409
Sequential implementation: 2675 msec

In the graph, the sequential time is represented by a line which, for clarity, is drawn across all slave counts. As can be seen, using one slave takes even more time than the sequential implementation; theoretically the two should be equal, but network overhead, spawning the slave, and initializing the PVM environment all add time.

The best parallel execution is achieved when two slaves are used, because the total time taken by the model in this case is much lower than in the sequential case and in the other parallel configurations (more than two slaves), see Figure 4. The main reason is that the model needs less time to distribute data to the slaves and collect it back from them. The results further indicate that a parallel model with three slaves also performs better than the sequential implementation. As the number of slaves increases further, more time is needed: when the data is divided into more pieces, each slave receives a smaller portion and performs fewer calculations, so the communication overhead outweighs the useful work.
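One way to make this trade-off explicit (an illustrative cost model assumed here, not one given in the paper) is to write the parallel training time as

$$T_{par}(n) \approx T_{spawn}(n) + T_{comm}(n) + \frac{T_{comp}}{n},$$

where n is the number of slaves: the spawning and communication terms grow with n, while only the computation term shrinks, so the total time has a minimum at a small number of slaves, which in this experiment is n = 2.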

4. CONCLUSION

The simulation results have shown that a PVM-based implementation of an ANN can solve complex problems in less time than a sequential implementation. A PVM-based implementation of an ANN model provides a good balance of speed and flexibility. The number of slaves plays an important role because of the extra time required to manage them in a parallel environment (network overhead, etc.), as discussed earlier. Since there is no specific formula for calculating the optimum number of slaves for a given problem, experimental techniques can be used to determine it. However, the slave population depends on the number of objects to be recognized by the neural network; it could be at least 25% to 50% of the number of objects to be recognized. Implementations of back-propagation and Kohonen nets with the PVM software are the main considerations for future work.

REFERENCES

[1] Misra, M., "Parallel environments for implementing neural networks," Neural Computing Surveys, Vol. 1, pp. 48-60, 1997.

[2] Skrzypek, J., editor. Neural Network Simulation Environments. Kluwer Academic Publishers, 1993.

[3] Weigang, L. and Da Silva, N. C., "A study of parallel neural networks," IJCNN'99. International Joint Conference on Neural Networks. Proceedings., Vol. 2, pp. 1113-16, IEEE Service Center, 1999.

[4] Hammerstrom, D., "A highly parallel digital architecture for neural network emulation," In: Delgado-Frias, J. G. and Moore, W. R. (eds.), VLSI for Artificial Intelligence and Neural Networks, chapter 5.1, pages 357-366. Plenum Press, New York, 1991.

[5] Gevins, A. S. and Morgan, N. H., "Application of neural network (NN) Signal processing in brain research," IEEE Trans. Acoustics, Speech, and Signal Processing, 36, 1152-1166.

[6] Hering, D., Khosla, P. and Kumar, B. V. K. V., "The use of modular neural networks in tactile sensing," Proc. Second Int'l Joint Conference on Neural Networks (Washington, D. C., Jan. 15-19, 1990), II-355-358, 1990.

[7] Rossen, M. L. and Anderson, J. A., "Representational issues in a neural network model of syllable recognition," Proc. First Int'l Joint Conference on Neural Networks (Washington, D. C., June 18-22, 1989), I-19-26, 1989.


[8] Casselman, F. and Acres, J. D., "DASA/LARS, a large diagnostic system using neural network," Proc. Second Int'l Joint Conf. On Neural Networks (Washington, D. C. Jan. 15-19, 1990), II-539-542, 1990.

[9] Nordstrom, T. and Svensson, B., "Using and designing massively parallel computers for artificial neural networks," Journal of Parallel and Distributed Computing, vol. 14, no. 3, pp. 260-285, 1992.

[10] Jain, A. K., Mao, J. and Mohiuddin, K. M., "Artificial Neural Networks: A tutorial," IEEE Computer, pp. 31-44, 1996.

[11] Rumelhart, D. E., Hinton, G. E. and Williams, R. J., "Learning Internal Representations by Error Propagation," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1: Foundations, 1986.

[12] Information on PVM is available at: http://www.epm.ornl.gov/pvm/pvm_home.html

