THE USE OF DISTRIBUTED COMPUTING IN MACHINE LEARNING
DOI 10.24411/2072-8735-2018-10090
Almagul B. Kondybayeva,
National Research Technological University "MISiS", Moscow, Russia, [email protected]
Anastasia P. Ositis,
International Telecommunication Academy, Moscow, Russia, [email protected]
Evgeniy A. Kalashnikov,
National Research Technological University "MISiS", Moscow, Russia, [email protected]
Keywords: distributed computing, supercomputer, multithreading, machine learning, parallel algorithms
At present, parallel computers and supercomputers have become very common, because it is economically far more profitable to build many cores with a lower frequency than a single core with a high frequency. This fact gave rise to a new direction: parallel computing. Parallel computing and supercomputers are used in areas such as data mining, graphics, medical diagnostics, and physical and financial modeling. All these tasks share one common feature, a huge amount of processed data, which very often allows the processing of this data to be parallelized. When developing parallel algorithms for problems in computational mathematics, the principal point is to analyze the effectiveness of using parallelism, which usually consists in evaluating the speedup of the computation process. Such speedup estimates can be formed with respect to a chosen computational algorithm (estimating the parallelization efficiency of a particular algorithm). Another important approach is to construct estimates of the maximum possible speedup of the process of solving a particular type of problem (evaluating the effectiveness of a parallel method for solving a problem) [1]. Distributed computing, owing to the widespread penetration of global networks and the consolidation of business, is becoming more and more popular. The reduction in the cost of network infrastructure, together with the increase in its performance, provides a powerful incentive to increase the number of distributed systems designed to unite geographically separated branches or customers scattered around the world. The purpose of this work is to study the capabilities of Windows Azure with a view to their further application in writing a system that trains a neural network in a distributed manner [2].
Let us highlight the main points of our work:
- consider how distributed computing operates;
- choose a problem for the study;
- implement this task.
There are many criteria for choosing a platform, from the speed of computing to the support of well-known technologies. The most common are performance, security, scalability and price. To implement this task, Microsoft Azure was chosen [1-5].
Information about authors:
Kondybayeva Almagul Baurzhanovna, Master's Degree, Department of Automated Control Systems, National Research Technological University "MISiS", Moscow, Russia
Ositis Anastasia Petrovna, Professor, President of the International Telecommunication Academy; "Honored Communications Worker of the Russian Federation", "Honorary Radio Operator", "Master of Communication"; International Telecommunication Academy, Moscow, Russia
Kalashnikov Evgeniy Alexandrovich, Candidate of Technical Sciences, Professor, Department of Automated Control Systems, National Research Technological University "MISiS", Moscow, Russia
For citation:
Kondybayeva A.B., Ositis A.P., Kalashnikov E.A. (2018). The use of distributed computing in machine learning. T-Comm, vol. 12, no.5, pр. 77-81.
Introduction
Microsoft Windows Azure is a cloud platform from Microsoft. The Windows Azure platform provides the ability to develop and run applications and store data on servers located in distributed data centers.
Microsoft Azure fully implements two cloud models: Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). The operability of the Windows Azure platform is ensured by Microsoft's network of global data centers.
The main features of this model:
• payment only for consumed resources;
• a shared, multithreaded computation structure;
• abstraction from the infrastructure.
At the heart of Microsoft Azure is the launch of a virtual machine for each instance of the application.
The developer determines the necessary amount of data storage and the required processing power (the number of virtual machines), after which the platform provides the appropriate resources.
When the initial resource requirements change, in response to a new customer request the platform allocates additional data center resources to the application or releases unused ones.
Microsoft Azure as PaaS provides not only all the basic functions of the operating system, but also additional ones: allocation of resources on demand for unlimited scalability, automatic synchronous replication of data for improved fault tolerance, handling of infrastructure failures for permanent availability, and much more.
Deployment
Microsoft Azure also implements a different type of service: infrastructure as a service. The infrastructure provision model (hardware resources) makes it possible to lease resources such as servers, storage devices and network equipment. The entire infrastructure is managed by the supplier; the consumer manages only the operating system and the installed applications. Such services are paid for according to actual use and allow the volume of infrastructure to be increased or decreased through a special portal provided by the supplier. In this service model, virtually any application installed on standard OS images can be launched.
Azure consists of three components:
"Compute" — is a component that implements calculations on the Windows Azure platform.
"Storage" - the storage component provides a scalable storage. The repository does not have the ability to use the relational model and is an alternative, "cloudy" version of SQL Server.
"Fabric" - Windows Azure Fabric is designed as the "controller" and the core of the platform, performing real-time monitoring, fault tolerance, capacity allocation, server deployment, virtual machines and applications, load balancing and equipment management.
Availability of Cray supercomputer capacities
Microsoft Corporation and the supercomputer developer Cray have agreed to provide the computing power of Cray XC and Cray CS supercomputers in the Microsoft Azure cloud.
Under the terms of the agreement, Cray supercomputers are connected directly to Azure and integrated with Azure virtual machines and Azure Data Lake storage, as well as with Microsoft's artificial intelligence and machine learning technologies.
In general, the proposal will allow customers with a limited budget to obtain the necessary capacities for high-performance computing and other resource-intensive tasks (AI applications, simulation and modeling, complex analytics).
The Cray XC and CS series supercomputers are based on Intel processors and Nvidia graphics systems; some of the machines are equipped with field-programmable gate arrays (FPGA). The Cray Aries network interconnect is also used. The performance of a single cabinet exceeds one petaflops.
An artificial neural network (ANN) is a mathematical model, as well as its software or hardware implementation, built on the principle of the organization and functioning of biological neural networks, the nerve cell networks of a living organism. This concept arose in the study of the processes occurring in the brain and in attempts to simulate these processes. The first such attempt was the neural networks of W. McCulloch and W. Pitts. After learning algorithms were developed, the resulting models began to be used for practical purposes: in forecasting problems, for pattern recognition, in control problems, etc. [1].
An ANN is a system of connected and interacting simple processors (artificial neurons). Such processors are usually quite simple (especially in comparison with the processors used in personal computers). Each processor of such a network deals only with the signals it periodically receives and the signals it periodically sends to other processors. Nevertheless, when connected into a sufficiently large network with controlled interaction, such locally simple processors together are able to perform rather complex tasks.
Stages of solving problems:
• data collection for training;
• data preparation and normalization;
• selection of the network topology;
• experimental selection of network characteristics;
• experimental selection of training parameters;
• the training itself;
• verification of the adequacy of training.
Choosing the data for training the network and processing it is the most difficult step in solving the problem. A set of training data must meet several criteria:
• Representativeness - the data should illustrate the true state of affairs in the subject area;
• Consistency - inconsistent data in the training sample will lead to poor quality of network learning.
The source data are converted to the form in which they can be fed to the network inputs. Each entry in a data file is called a learning pair or training vector. The training vector contains one value for each network input and, depending on the type of training (with or without a teacher), one value for each output of the network. Training the network on a "raw" set, as a rule, does not give qualitative results. There are a number of ways to improve the network's "perception" of the data.
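As a concrete illustration, the sketch below assembles such learning pairs from a raw data file for a supervised task with one input and one output. It is a minimal sketch: the file name samples.csv and its two-column comma-separated layout are assumptions made for this example, and the jagged double[][] arrays follow the layout that supervised teachers in .NET neural libraries typically expect.

using System;
using System.IO;
using System.Linq;

class TrainingData
{
    static void Main()
    {
        // Each record becomes one learning pair (training vector):
        // one value per network input, one value per desired output.
        string[] lines = File.ReadAllLines("samples.csv"); // hypothetical file

        double[][] input = new double[lines.Length][];
        double[][] output = new double[lines.Length][];

        for (int i = 0; i < lines.Length; i++)
        {
            double[] fields = lines[i].Split(',').Select(double.Parse).ToArray();
            input[i] = new[] { fields[0] };  // network input
            output[i] = new[] { fields[1] }; // desired output (the "teacher" value)
        }

        Console.WriteLine("Prepared {0} learning pairs", input.Length);
    }
}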
Normalization is performed when data of different dimensions are fed to different inputs. For example, the first input of the network receives values from zero to one, and the second from one hundred to one thousand. In the absence of normalization, the values at the second input will always have a much greater effect on the output of the network than the values at the first input. Normalization brings the dimensions of all input and output data together.
Quantization is performed over continuous quantities for which a finite set of discrete values is allocated. For example, quantization is used to specify the frequencies of audio signals in speech recognition. Filtering is performed for "noisy" data.
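Both transformations are easy to state in code. The sketch below assumes plain min-max scaling to [0, 1] for normalization and uniform quantization to a fixed number of levels; the helper names are our own illustration, not part of the Neuro library.

using System;
using System.Linq;

static class Preprocessing
{
    // Min-max normalization: rescales one input dimension to [0, 1] so
    // that no input dominates the network output by sheer magnitude.
    public static double[] Normalize(double[] values)
    {
        double min = values.Min(), max = values.Max();
        if (max == min)
            return values.Select(_ => 0.0).ToArray(); // degenerate column
        return values.Select(v => (v - min) / (max - min)).ToArray();
    }

    // Uniform quantization: maps a continuous value in [0, 1] onto one
    // of `levels` discrete values (e.g. frequency bins in speech tasks).
    public static double Quantize(double value, int levels)
    {
        return Math.Round(value * (levels - 1)) / (levels - 1);
    }
}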
In addition, the very representation of both input and output data plays an important role. Suppose the network learns to recognize letters in images and has a single numerical output, the letter's number in the alphabet. In this case, the network will get a false idea that letters with numbers 1 and 2 are more similar than letters with numbers 1 and 3, which, in general, is incorrect. To avoid such a situation, a network topology with a large number of outputs is used, where each output has its own meaning. The more outputs the network has, the greater the distance between classes and the harder it is to confuse them [5-10].
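A short sketch of this one-output-per-class encoding; the helper names are hypothetical:

using System;

static class OneHot
{
    // Class k is encoded as a vector that is 1.0 at position k and 0.0
    // elsewhere, so all classes are equidistant from each other.
    public static double[] Encode(int classIndex, int classCount)
    {
        double[] output = new double[classCount];
        output[classIndex] = 1.0;
        return output;
    }

    // The predicted class is the index of the largest network output.
    public static int Decode(double[] networkOutput)
    {
        int best = 0;
        for (int i = 1; i < networkOutput.Length; i++)
            if (networkOutput[i] > networkOutput[best])
                best = i;
        return best;
    }
}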
To implement the neural network, we use the freely distributed C# library Neuro.
The library contains six main entities:
Neuron - the basic abstract class for all neurons, encapsulating such common entities as the neuron's weight, output value and input value. Other neuron classes are inherited from the base class in order to extend it with additional properties and specialize it.
Layer - represents a collection of neurons. This is a basic abstract class that encapsulates the common functionality of all layers of neurons.
Network - represents a neural network, i.e. a collection of layers of neurons. This is a basic abstract class that provides the general functionality of a typical neural network. To implement a specific neural network architecture, one inherits from this class and extends it with the specific functionality of that architecture.
IActivationFunction - the interface of activation functions. Activation functions are used in activation neurons, the type of neuron where the weighted sum of the inputs is calculated, that value is passed to the activation function, and the function's value becomes the output value of the neuron.
IUnsupervisedLearning - the interface for unsupervised learning algorithms: learning algorithms where the system is given input samples only at the learning stage, without the desired outputs. The task of the system is to organize itself so as to find relationships and similarities between the data samples.
ISupervisedLearning - the interface for supervised learning algorithms: learning algorithms where the system is given input samples together with the desired output values at the learning stage. The task of the system is to generalize the training data and learn to provide the correct output value when only the input value is presented to it.
The activation network is a neural network where each neuron computes its output as the value of the activation function whose argument is the weighted sum of its inputs combined with the threshold value. The network can consist of one layer or several layers. Trained by a supervised learning algorithm, the network allows solving such tasks as approximation, prediction, classification and recognition.
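To show how these entities fit together, the sketch below builds and trains a small activation network for approximation. It assumes the Neuro library exposes the AForge.NET-style API that matches the class names above (ActivationNetwork, SigmoidFunction, BackPropagationLearning); the toy data set approximating y = x^2 is invented for the example.

using System;
using AForge.Neuro;           // Network, ActivationNetwork, SigmoidFunction
using AForge.Neuro.Learning;  // BackPropagationLearning (ISupervisedLearning)

class ApproximationDemo
{
    static void Main()
    {
        // Activation network: 1 input, 10 hidden activation neurons,
        // 1 output; every neuron uses the sigmoid activation function.
        ActivationNetwork network =
            new ActivationNetwork(new SigmoidFunction(2.0), 1, 10, 1);

        // Supervised teacher implementing error back propagation.
        BackPropagationLearning teacher = new BackPropagationLearning(network);
        teacher.LearningRate = 0.1;

        // Toy training set: approximate y = x^2 on [0, 1].
        double[][] input = { new[] { 0.0 }, new[] { 0.5 }, new[] { 1.0 } };
        double[][] output = { new[] { 0.0 }, new[] { 0.25 }, new[] { 1.0 } };

        for (int epoch = 0; epoch < 1000; epoch++)
            teacher.RunEpoch(input, output); // returns the summary squared error

        Console.WriteLine(network.Compute(new[] { 0.5 })[0]);
    }
}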
Fig. 1. The Neuro library class structure: Network, Layer, Neuron; ActivationNetwork, ActivationLayer, ActivationNeuron; DistanceNetwork, DistanceLayer, DistanceNeuron; IActivationFunction, SigmoidFunction; ISupervisedLearning, IUnsupervisedLearning; PerceptronLearning, BackPropagationLearning
The distance network is a neural network where each neuron computes its output as the distance between its weight values and the input values. The network consists of one layer and can serve as the basis for such networks as the Kohonen self-organizing map, the elastic network and the Hamming network.
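A corresponding sketch for the distance network, again under the assumption of an AForge.NET-style API (DistanceNetwork trained by the unsupervised SOMLearning teacher); the two-dimensional samples are invented for the example.

using AForge.Neuro;           // DistanceNetwork
using AForge.Neuro.Learning;  // SOMLearning (IUnsupervisedLearning)

class KohonenDemo
{
    static void Main()
    {
        // Distance network: 2 inputs, 100 neurons forming a 10x10 map.
        DistanceNetwork network = new DistanceNetwork(2, 10 * 10);
        SOMLearning teacher = new SOMLearning(network, 10, 10);

        // Unsupervised learning: input samples only, no desired outputs.
        double[][] samples =
        {
            new[] { 0.1, 0.2 }, new[] { 0.8, 0.9 }, new[] { 0.5, 0.4 }
        };

        for (int epoch = 0; epoch < 100; epoch++)
            teacher.RunEpoch(samples);
    }
}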
To write the application, the back propagation algorithm was used; the problem is approximation [1]. Since the learning process of a neural network cannot be divided into several parallel processes, the idea was to train the network on the same input data but with different parameters simultaneously. Using the Neuro library, three programs were written, each of which trains the neural network by the back propagation method but with different weights (Figs. 2, 3).
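The idea of training on the same data with different parameters simultaneously can be sketched with standard .NET tasks. The article varies a "weight" parameter between the runs shown in Figs. 2 and 3; as one plausible reading, the sketch below varies the teacher's learning rate across three otherwise identical training jobs. The network and teacher repeat the earlier assumptions about the library API.

using System;
using System.Threading.Tasks;
using AForge.Neuro;
using AForge.Neuro.Learning;

class ParallelTraining
{
    // Train one network with its own learning rate; returns the final error.
    static double Train(double rate, double[][] input, double[][] output)
    {
        var network = new ActivationNetwork(new SigmoidFunction(2.0), 1, 10, 1);
        var teacher = new BackPropagationLearning(network) { LearningRate = rate };

        double error = 0.0;
        for (int epoch = 0; epoch < 1000; epoch++)
            error = teacher.RunEpoch(input, output);
        return error;
    }

    static void Main()
    {
        double[][] input = { new[] { 0.0 }, new[] { 0.5 }, new[] { 1.0 } };
        double[][] output = { new[] { 0.0 }, new[] { 0.25 }, new[] { 1.0 } };

        // One independent training task per parameter set, run concurrently.
        double[] rates = { 0.1, 0.2, 0.3 };
        Task<double>[] runs = Array.ConvertAll(rates,
            r => Task.Run(() => Train(r, input, output)));
        Task.WaitAll(runs);

        for (int i = 0; i < rates.Length; i++)
            Console.WriteLine("rate {0}: final error {1}", rates[i], runs[i].Result);
    }
}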
Fig. 2. Approximation with a weight of 0.1

Fig. 3. Approximation with a weight of 0.2
Conclusion
In studying the task of developing a distributed learning system for a neural network, the capabilities of the Microsoft Azure cloud platform were explored: the creation and deployment of cloud services and databases and the publication of sites; as well as the Visual Studio 2013 Ultimate programming environment: creating web applications and connecting cloud services.
Acknowledgment
This research was supported in part by the International Telecommunication Academy and the Department of Automated Control Systems of the National University of Science and Technology "MISiS". We thank our colleagues from the Department of Automated Control Systems of the National University of Science and Technology "MISiS" who provided insight and expertise that greatly assisted the research, although they may not agree with all of the interpretations and conclusions of this paper.
We thank Olga Doroshkevich, Director of Development at LLC "Media Publishers", for assistance with publishing, and the International Telecommunication Academy for comments that greatly improved the manuscript.
References

1. Nemnugin S.A., Stesik O.L. (2002). Parallel programming for multiprocessor computing systems. St. Petersburg: Peterburg, 370 p.
2. Voevodin V.V., Voevodin Vl.V. (2002). Parallel computing. St. Petersburg: BHV, 200 p.
3. Gregory R. Andrews (2003). Fundamentals of multithreaded, parallel and distributed programming. Williams publishing house, 512 p.
4. Antonov A.S. (2004). Parallel programming using MPI technology. Moscow: Moscow University Press, 300 p.
5. Kalitkin N.N. (1978). Numerical methods. Moscow: Nauka, 512 p.
6. Nemnyugin S., Stesik O. (2004). Contemporary Fortran. Self-teacher. St. Petersburg: BHV, 481 p.
7. Bukatov A.A., Datsyuk V.N., Zhegulo A.I. (2003). Programming of multiprocessor computer systems. Rostov-on-Don: Publishing House LLC "TsVVR", 208 p.
8. Korneev V.D. (2003). Parallel programming in MPI. Izhevsk: Publishing house "Regular and Chaotic Dynamics", 303 p.
9. Levin M.P. (2008). Parallel programming using OpenMP. Moscow: BINOM. Laboratory of Knowledge, 200 p.
10. Bogachev K.Yu. (2003). Fundamentals of parallel programming. Moscow: Publishing house "Binom. Laboratory of Knowledge", 342 p.
11. Voevodin V.V. (1987). Parallel structures of algorithms and programs. Moscow: OVM USSR Academy of Sciences, 148 p.