Scientific article: PRACTICAL APPLICATION OF ARTIFICIAL NEURAL NETWORK IN MANAGEMENT (specialty: Computer and Information Sciences)


PRACTICAL APPLICATION OF ARTIFICIAL NEURAL NETWORK IN MANAGEMENT

1 Kadirova Sh.A., 2 Matyakubova P.M., 3 Boboev G.G., 4 Makhmudzhonov M.M.

1 Tashkent State Technical University, Professor of the Department of Metrology, Technical Regulation, Standardization and Certification
2 Tashkent State Technical University, Head of the Department of Metrology, Technical Regulation, Standardization and Certification
3 Tashkent State Technical University, Associate Professor of the Department of Metrology, Technical Regulation, Standardization and Certification
4 Tashkent State Technical University, Doctoral Student of the Department of Metrology, Technical Regulation, Standardization and Certification

https://doi.org/10.5281/zenodo.10396077

Abstract. The work discusses the practical application of artificial neural networks for solving complex, poorly formalizable problems, such as pattern recognition, image processing and information processing, as well as for controlling complex nonlinear dynamic objects. Artificial neural networks are based on features of living neural networks, which allow them to solve a variety of problems and to indicate the confidence level of each solution in a specific and logical way. When a neural network is emulated in software, all mathematical operations are carried out by the program, and training proceeds in repeated passes over a set of examples, called learning cycles.

Keywords: artificial intelligence, artificial neuron, neural network, biological neuron, weight coefficient, excitation functions, expert system, weight module, characteristic function, training sample, learning cycle.

Recently, all over the world there has been a sharp increase in the volume of scientific research in the field of the theory of artificial neural networks, neurocomputers and neuroinformatics. This is due, first of all, to the capabilities that artificial neural networks provide for solving complex, often unformalizable, applied problems.

Artificial neural networks, and the specialized computing devices built on their basis - neurocomputers - are constructed and function on the same principles as biological neural networks. Like their biological counterparts, artificial neural networks are homogeneous structures consisting of a large number of simple computing elements - neurons - working in parallel. Thanks to this fundamentally different method of information processing, neural network algorithms achieve a much higher speed of operation than conventional algorithms. Each constituent element of a neural network - a neuron - carries out a nonlinear transformation, so the network as a whole is a nonlinear system, which is especially important when neural networks are used to solve complex applied problems with nonlinear characteristics.

Artificial neural networks are widely used in many areas of human activity. They are actively used to solve complex, often unformalized, applied problems, such as pattern recognition, image processing, signal processing, information processing, etc., and are also widely used to solve various problems related to the control of dynamic systems. Being by their nature nonlinear adaptive systems, neural networks are successfully used to control complex, essentially nonlinear or unformalized dynamic objects, where traditional control algorithms are ineffective. The degree of use of neural networks in control problems over the past few years has reached such a scale that we can already talk about the emergence of a new field of control theory - neurocontrol. The main task of the science called "neurocontrol" is to analyze the capabilities and methods of using artificial neural networks to control complex dynamic objects.

Interest in neurointelligence arose in the early stages of the development of computer technology. It is based on the neural organization of artificial systems, which has biological prerequisites. The ability of biological systems to learn, self-organize and adapt gives them a great advantage over modern computing systems. The advantage of computer systems is the high speed of information processing and the ability to draw on the large amount of knowledge accumulated by humanity in a given area.

The development of artificial intelligent systems that combine the advantages of biological beings and modern computing technology creates potential prerequisites for the transition to a qualitatively new stage of evolution in computing technology. A neural network is a computational or logical circuit built from homogeneous processing elements, which are simplified functional models of neurons.

Artificial neural networks are based on the following features of living neural networks, which allow them to solve poorly formalizable problems:

- a simple processing element - a neuron;

- a very large number of neurons are involved in information processing;

- one neuron is connected to a large number of neurons (global connections);

- between neurons the weights of connections change;

- parallel processing of information.

Biological neuron. The prototype for the artificial neuron was the biological neuron of the brain. A biological neuron consists of a cell body; a set of processes - dendrites - through which input signals enter the neuron; and a single process - the axon - along which the neuron's signal is transmitted to other cells. The point of connection between a dendrite and an axon is called a synapse. The functioning of a neuron can be represented as follows:

- the neuron receives a set (vector) of input signals from the dendrites;

- the neuron evaluates the total value of the input signals. However, the neuron does not simply sum the values of the input signals; it calculates the scalar (dot) product of the vector of input signals and the vector of weighting coefficients;

- the neuron generates an output signal, the intensity of which depends on the value of the calculated scalar product. If it does not exceed a given threshold, then the output signal is not generated - the neuron "does not fire";

- the output signal arrives at the axon and is transmitted to the dendrites of other neurons.
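The steps above can be sketched in a few lines of Python; the input values, weights and threshold below are illustrative assumptions, not taken from the article:

```python
# Minimal sketch of the neuron model described above: the neuron
# computes the scalar (dot) product of the input-signal vector and the
# weight vector, and generates an output only if the sum exceeds a
# given threshold ("fires").

def neuron_output(inputs, weights, threshold=0.5):
    # Scalar product of input signals and weighting coefficients.
    s = sum(x * w for x, w in zip(inputs, weights))
    # Below the threshold the neuron "does not fire": no output signal.
    return s if s > threshold else 0.0

print(neuron_output([1.0, 0.5], [0.8, 0.4]))  # 0.8*1.0 + 0.4*0.5 = 1.0, above threshold
print(neuron_output([0.1, 0.2], [0.8, 0.4]))  # 0.16, below threshold
```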

Artificial neuron. The behavior of an artificial neural network depends both on the values of the weight parameters and on the neuron excitation function. There are three main types of excitation functions: threshold, linear and sigmoid. For threshold elements, the output is set at one of two levels depending on whether the total signal at the neuron input is greater or less than a certain threshold value. For linear elements, the output activity is proportional to the total weighted input of the neuron. For sigmoid elements, the output changes continuously, but not linearly, as the input changes.
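The three function types can be sketched as follows; the exact formulas (unit step, identity, logistic sigmoid) are common choices assumed here for illustration, not specified in the article:

```python
import math

def threshold_fn(s, theta=0.0):
    # Threshold element: output at one of two levels, depending on
    # whether the total input exceeds the threshold theta.
    return 1.0 if s > theta else 0.0

def linear_fn(s, k=1.0):
    # Linear element: output proportional to the total weighted input.
    return k * s

def sigmoid_fn(s):
    # Sigmoid element: output changes continuously but nonlinearly.
    return 1.0 / (1.0 + math.exp(-s))

for s in (-2.0, 0.0, 2.0):
    print(threshold_fn(s), linear_fn(s), round(sigmoid_fn(s), 3))
```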

A neural network is a collection of a large number of simple elements - neurons - whose topology of connections depends on the type of network. To solve a specific problem, one must choose how the neurons are to be connected to each other and select the values of the weight parameters on these connections accordingly. Through the established connections, one element can influence another.

An artificial neural network in most cases reacts appropriately to the external environment. Such networks are able to indicate the confidence level of each decision - the network "knows what it does not know" - and can transfer such a case to an expert system for resolution. Decisions made at this higher level may be specific and logical, but they require the collection of additional facts to reach a final conclusion. The combination of the two systems is more powerful than either system alone.

The basis for the operation of self-learning neuroprograms is a neural network: a collection of simple elements - neurons - connected to each other in a certain way. Neurons and interneuron connections are either specified programmatically on a regular computer or implemented in special microcircuits (neurochips).

The structure of relationships between neurons in a neurocomputer or neuroprogram is similar to that in biological objects. An artificial neuron communicates with other neurons through synapses that transmit signals from other neurons to this one. In addition, a neuron can be connected to itself. Several neurons connected to each other form a neural network.

Like its biological analogue, a neural network must have channels of communication with the outside world. Some channels deliver information to the neural network; others carry information from the network to the outside world. Some neurons may not communicate with the outside world at all, interacting only with input, output and other such neurons ("hidden neurons").

As the number of neurons in the network grows, the number of possible ways of connecting them increases enormously.

The most common architecture is layered, in which neurons are arranged in "layers." The axons of every neuron of one layer are directed to the neurons of the next layer. Thus, the neurons of the first layer are input neurons, and the neurons of the last layer are output neurons. Fig. 1 shows a diagram of a three-layer network.

Fig. 1. Three-layer network with 6 neurons: layer 1 (N1, N2), layer 2 (N3, N4), layer 3 (N5, N6).
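A forward pass through a layered network like that of Fig. 1 can be sketched as follows; the weight values are arbitrary illustrative assumptions:

```python
# Layered architecture: each neuron's axon feeds every neuron of the
# next layer, so one layer is a matrix of synaptic weights applied to
# the previous layer's outputs.

def layer(inputs, weights):
    # Each row of `weights` holds one neuron's coefficients for all inputs.
    return [sum(x * w for x, w in zip(inputs, row)) for row in weights]

w12 = [[0.5, -0.2], [0.3, 0.8]]   # layer 1 -> layer 2 (N1, N2 -> N3, N4)
w23 = [[1.0, 0.1], [-0.4, 0.6]]   # layer 2 -> layer 3 (N3, N4 -> N5, N6)

hidden = layer([1.0, 0.0], w12)   # signals at N3, N4
output = layer(hidden, w23)       # signals at N5, N6
print(hidden, output)
```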

Another type of architecture is the fully connected one, in which every neuron is connected to every neuron, including itself. A diagram of the simplest neural network of 3 neurons is shown in Fig. 2.

Fig. 2. Diagram of the simplest neural network of 3 neurons.

The network has 13 synapses, 4 of which serve to communicate with the outside world, and the rest connect neurons to each other.
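This count can be checked with a short sketch: for n fully connected neurons (each connected to every neuron, including itself) there are n*n internal synapses, plus the synapses linking the network to the outside world:

```python
# Synapse count for a fully connected network: n*n internal
# connections (every neuron to every neuron, including itself) plus
# the external input/output synapses.

def total_synapses(n_neurons, n_external):
    return n_neurons * n_neurons + n_external

print(total_synapses(3, 4))  # 9 internal + 4 external = 13, as in Fig. 2
```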

For convenience of depiction, each neuron is drawn with several axons directed to other neurons rather than one; this is equivalent to a single axon connected to several dendrites through synapses.

Layered networks are special cases of fully connected ones.

To build expert systems, it is preferable to choose fully connected neural networks. Firstly, with the same number of neurons, a fully connected network has a larger number of interneuron connections, which increases the information capacity of the network. Secondly, the fully connected architecture is universal: it does not require experimenting with a different connection diagram for each task. Thirdly, it is simple to implement in software without compromising the quality of learning. Fig. 3 shows a neuron with a group of synapses connecting it either to other neurons or to the outside world.

Fig. 3. Neuron diagram.

A neuron consists of two functional blocks: an input adder (Σ) and the neuron proper, or converter (Pr). The neuron functions as follows. Through the input synapses (there are 3 in the figure), signals from other neurons and/or from the outside world arrive at the neuron. Each synapse has a parameter called the synapse weight, which is a number. The signal passing through a synapse is multiplied by the weight of that synapse. Depending on the weight, the signal can be amplified in amplitude (weight modulus greater than 1) or attenuated (weight modulus less than 1). Signals from all synapses leading to a given neuron are received by the adder. The adder sums all the signals and passes a single number - the resulting sum - to the neuron proper (the converter). The magnitude of this number depends on the magnitudes of the original signals and on the weights of the synapses. The neuron that receives this number transforms it according to its function, obtaining another number, and sends it along the "axon" to all other neurons through synapses.

Subsequent neurons perform the same operations with the received signals only with the difference that, firstly, the weights of their synapses may be different, and secondly, other neurons may have a different type of transformation function. In these neural networks, all neurons have the same function. This function is called characteristic and has the following form:

f(X) = X / (C + |X|)

where X is the signal coming from the adder; C is a constant called the characteristic of a neuron. The experimentally obtained optimal range of characteristics for solving the vast majority of problems is from 0.1 to 0.8. The graph of the characteristic function is presented in Figure 4. The graph of the function is smooth, continuous over the entire range of variables X, the range of values is always limited.


Fig.4. Characteristic function graph.
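The smoothness and boundedness of the characteristic function f(X) = X / (C + |X|) are easy to verify numerically; the value C = 0.5 below is chosen from the optimal range mentioned above:

```python
# Characteristic function of a neuron: f(X) = X / (C + |X|).
# It is continuous for all X and bounded, |f(X)| < 1 for any X when C > 0.

def characteristic(x, c=0.5):
    return x / (c + abs(x))

for x in (-10.0, -1.0, 0.0, 1.0, 10.0):
    print(x, round(characteristic(x), 3))
```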

In the case of neural network emulation on a regular computer, all mathematical operations are performed by the program. The neural network represents an array of synaptic weights. This array can be located either on the computer disk in the form of a file of a certain format, or in the computer's RAM (when the neural network is functioning).

When a new neural network is created, space is allocated in computer memory for the array of synaptic weights, called a map. This array is filled with random numbers from a certain range. Therefore, each created network is unique: networks with the same parameters, trained on the same tasks, nevertheless behave differently. This concerns training time, quality of training, and confidence in answers during testing.

A neural network that receives a certain signal as an input is capable of, after passing it through the neurons, producing a specific response at the output, depending on the weighting coefficients of all neurons and on the signal itself. It is obvious that when carrying out such procedures on a newly initialized network, we will receive signals at the output that are devoid of any meaning (the weighting coefficients are random). In order for the network to produce the required result, it needs to be trained.

To train a neural network, you need a training sample (problem book) consisting of examples. Each example is of the same type with an individual choice of conditions (input parameters) and a pre-known answer. For example, in one example, the examination data of one patient could be used as input parameters, then the answer could be a diagnosis. Several examples with different answers form a problem book, which is located in a database, each entry of which is an example.

The general scheme for training a neural network is:

1. The current example is taken from the training sample, and its input parameters are fed to the input synapses of the trained neural network;

2. The neural network produces a given number of operating cycles. The vector of input signals is distributed along the connections between neurons (direct functioning);

3. The signals produced by the neurons, which are considered output, are measured;

4. The output signals are interpreted and a score is calculated that characterizes the difference between the response issued by the network and the required response. The score is calculated using the appropriate score function. The lower the score, the better the example is recognized, the closer the answer given by the network is to the required one. A score of zero means that the required correspondence between the calculated and known answers has been achieved;

5. If the example score is zero, nothing is done. Otherwise, based on the assessment, correction coefficients are calculated for each synaptic weight of the connection matrix, after which the synaptic weights are adjusted (inverse functioning). Correction of synapse weights is what learning consists of;

6. The transition is made to the next example of the problem book and the above operations are repeated. Going through all examples of the training set, from the first to the last, is considered one training cycle. During a cycle, each example receives its own score. In addition, the total score over all examples of the training set is calculated. If after some number of cycles the total score reaches zero, training is considered complete; otherwise the cycles are repeated.
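The training scheme above can be sketched for a single neuron; the delta-rule weight correction, learning rate, initial weights and stopping tolerance are illustrative assumptions, not the article's exact algorithm:

```python
# Sketch of the training cycle: feed each example forward, score the
# difference between the produced and required answers, correct the
# synapse weights, and repeat cycles until the total score is near zero.

def f(x, c=0.5):
    # Characteristic function of the neuron, f(X) = X / (C + |X|).
    return x / (c + abs(x))

def train(examples, n_inputs, rate=0.5, cycles=100, tol=1e-3):
    weights = [0.1] * n_inputs               # fixed "map" for the sketch
    for cycle in range(cycles):              # one pass over all examples = one cycle
        total_score = 0.0
        for inputs, target in examples:      # steps 1-3: feed example, run forward
            out = f(sum(x * w for x, w in zip(inputs, weights)))
            err = target - out
            total_score += err * err         # step 4: score the example
            for i, x in enumerate(inputs):   # step 5: correct synapse weights
                weights[i] += rate * err * x
        if total_score < tol:                # step 6: stop when total score ~ 0
            return weights, cycle + 1
    return weights, cycles

examples = [([1.0, 0.0], 0.6), ([0.0, 1.0], -0.3)]
weights, cycles_used = train(examples, n_inputs=2)
print(weights, cycles_used)
```

With these orthogonal inputs each weight is adjusted independently, so the total score shrinks geometrically from cycle to cycle until the stopping condition is met.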

Thus, it should be noted that the number of training cycles for complete training depends on the size of the training sample, the number of input parameters, the type of task, the type and parameters of the neural network, and even on the weights of synapses when initializing the network.

