ANALYSIS OF CLASSIFICATION PROBLEM AND ITS ALGORITHMS IN MACHINE LEARNING

1Ismailov O.M., 2Eshmuradov D.E., 3Temirova Kh.F., 4Tulaganova F.K.

1ISFT, DSc, Professor
2TATU, PhD, Associate Professor
3,4TATU, Doctoral Student

https://doi.org/10.5281/zenodo.13925973

Abstract. Machine learning technologies are improving the efficiency of diagnosing X-ray images, creating new opportunities in medicine. With their help, it is possible to detect diseases early, increase the accuracy of diagnosis and make the process of treating patients more effective.

Keywords: deep learning, computed tomography, color images, medical use in modern medicine.

Introduction. A classification problem in machine learning is the task of assigning a given input to one of a set of predefined classes. Classification algorithms are designed to deal with different data types and levels of complexity.

The classification problem consists of the following elements:

Input Data: Consists of feature vectors, each vector representing an object or event.

Output: Each input vector is assigned to the appropriate class.

Dataset: Labeled data used for training and testing.

Classification Algorithms:

Logistic Regression: a simple and easy-to-understand model for binary classification that separates classes with linear boundaries.

K-Nearest Neighbors (KNN): classifies based on the nearest neighbors. Simple and powerful, but slow for large datasets.

Support Vector Machines (SVM): separate classes using linear and non-linear decision boundaries, trying to find the optimal boundary with the largest margin.

Naive Bayes Classifier: based on Bayes' theorem and assumes independence between features. Fast and efficient.

Random Forest: ensemble classification using multiple decision trees. It reduces overfitting and has high accuracy.

Neural Networks: suited to large and complex data; able to learn complex patterns with a multilayer model.

Gradient Boosting Machines (GBM): an ensemble method that reduces errors step by step during learning. Popular implementations include XGBoost and LightGBM.

The selection of a classification algorithm should consider the following:

Volume and complexity of data - efficiency is important for large volumes of data.

Accuracy requirements - some applications, such as medical diagnostics, require high accuracy.

Computational resources - some algorithms require a lot of resources and run slowly.

Classification algorithms are widely used in many fields, including text classification, medical diagnostics, customer segmentation, and more.
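To illustrate how several of the listed algorithms can be tried on the same task, the sketch below fits a few of them on a synthetic dataset and compares test accuracy. This is a minimal illustration assuming scikit-learn is available; the dataset, parameters, and model choices are for demonstration only and are not part of the original article.

```python
# Minimal sketch: comparing several of the listed classifiers on synthetic data.
# Assumes scikit-learn is installed; the dataset and settings are illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Synthetic binary classification problem (a stand-in for real medical or text data).
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "K-Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)            # training phase
    acc = model.score(X_test, y_test)      # accuracy on held-out data
    print(f"{name}: test accuracy = {acc:.3f}")
```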

Basics of Logistic Regression. The linear model creates a linear combination of the input features:

$z = w_1 x_1 + w_2 x_2 + \dots + w_n x_n + b$ (1)

Here $w_i$ are the weights, $x_i$ are the features, and $b$ is the bias (offset).

The sigmoid function transforms the linear combination into a probability:

$\sigma(z) = \frac{1}{1 + e^{-z}}$ (2)

The result is between 0 and 1, which represents the class probability.

Classification Rule. If the probability is higher than 0.5, it is assigned to the positive class, otherwise it is assigned to the negative class.

Loss Function. The log-loss (binary cross-entropy) function is used:

$L(y, \hat{y}) = -\frac{1}{N}\sum_{i=1}^{N}\left( y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i) \right)$ (3)

Here $y_i$ is the actual value and $\hat{y}_i$ is the predicted probability.
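As a minimal illustration of formulas (1)-(3), the sketch below computes the linear combination, the sigmoid probability, the 0.5-threshold classification rule, and the log-loss for a few samples. NumPy is assumed, and the weights and data are invented purely for demonstration.

```python
import numpy as np

def sigmoid(z):
    """Formula (2): maps the linear combination z to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(X, w, b):
    """Formula (1) followed by (2): z = w.x + b, then sigma(z)."""
    return sigmoid(X @ w + b)

def log_loss(y, y_hat, eps=1e-12):
    """Formula (3): average binary cross-entropy."""
    y_hat = np.clip(y_hat, eps, 1 - eps)   # avoid log(0)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# Illustrative data: 4 samples, 2 features, hand-picked weights.
X = np.array([[0.5, 1.2], [1.0, 0.3], [2.0, 2.5], [0.1, 0.4]])
y = np.array([0, 0, 1, 0])
w = np.array([0.8, 1.1])
b = -2.0

p = predict_proba(X, w, b)
labels = (p > 0.5).astype(int)             # classification rule: threshold at 0.5
print("probabilities:", p)
print("predicted classes:", labels)
print("log-loss:", log_loss(y, p))
```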

Advantages:

Simplicity - easy to implement and quick to operate;

Understandability - easy to interpret results;

Effective for linearly separable data;

Limitations:

Linear boundaries - the model only works well for linearly separable data;

Sensitivity to outliers - outliers can affect the model.

Logistic regression is used in many fields, including marketing (predicting customer behavior) and medical diagnostics.

K-Nearest Neighbors (KNN). The K-Nearest Neighbors (KNN) technique is a well-known supervised machine learning algorithm that can be used for classification and regression tasks. It makes predictions for data points based on their similarity to known examples.

The "K" in K-NN stands for the number of nearest neighbors to consider in the prediction. The method works by comparing a new, unlabeled data point with labeled data points in the training data set. Finds K nearest neighbors using a distance metric such as Euclidean distance or Manhattan distance, which measures the similarity of data points.

For classification problems, K-NN assigns a class label to a new data point based on a majority vote of its K nearest neighbors. For example, if most of the K nearest neighbors are from class A, the algorithm predicts that the new data point is also from class A.

In regression problems, K-NN predicts the numerical value of the target variable for a new data point by taking the average or weighted average of the target values of K nearest neighbors.

The choice of K is a crucial parameter in K-NN. A small value of K (e.g. K=1) can lead to overfitting where the algorithm becomes sensitive to noisy data points. On the other hand, a large value of K may lead to poor performance, where the algorithm may ignore local patterns in the data. Thus, it is very important to choose an appropriate value of K to achieve optimal performance.

K-Nearest Neighbors (KNN) works based on the following steps.

Step 1: Choose the number K of neighbors.

Step 2: Calculate the Euclidean distance from the new data point to the points in the training set.

Step 3: Determine the K nearest neighbors based on the Euclidean distance.

Step 4: Count the number of data points in each category among these K neighbors.

Step 5: Assign the new data point to the category with the highest number of neighbors.

Step 6: The model is ready.

Suppose we have a new data point that needs to be assigned to the appropriate category, as shown in the figure below:

Figure 1. New data point and assigning it to the desired category

First, we choose the number of neighbors: k = 5. Next, we calculate the Euclidean distance between the data points. The Euclidean distance is the straight-line distance between two points and is calculated as shown below:

Figure 2. Euclidean distance

By calculating the Euclidean distances, we obtain the nearest neighbors: three nearest neighbors from class A and two from class B, as shown in the figure below.

Figure 3. Nearest neighbors: three nearest neighbors of class A and two nearest neighbors of class B

Since three of the five nearest neighbors belong to class A, the new data point is assigned to class A.
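The worked example above can be reproduced with a short from-scratch sketch. The labeled points below are invented for illustration, and NumPy is assumed; this is not the authors' implementation.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=5):
    """Steps 1-5: compute Euclidean distances, take the k nearest labels, majority vote."""
    distances = np.linalg.norm(X_train - x_new, axis=1)   # Euclidean distance to every point
    nearest_idx = np.argsort(distances)[:k]               # indices of the k nearest neighbors
    nearest_labels = y_train[nearest_idx]
    return Counter(nearest_labels).most_common(1)[0][0]   # most frequent class wins

# Invented 2-D points: four labeled "A" and four labeled "B".
X_train = np.array([[1.0, 2.0], [2.0, 2.5], [2.5, 3.5], [0.5, 1.0],    # class A
                    [4.0, 3.5], [3.5, 4.5], [6.0, 6.0], [7.0, 7.0]])   # class B
y_train = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

x_new = np.array([3.0, 3.0])   # the new, unlabeled data point

# Among the 5 nearest neighbors, 3 belong to class A and 2 to class B, so the vote is A.
print(knn_predict(X_train, y_train, x_new, k=5))   # prints: A
```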

Euclidean distance: Euclidean distance is a measure of the straight-line distance between two points in Euclidean space. In mathematics, it is calculated using the Pythagorean theorem.

Let's consider two points in Euclidean space expressed as $P = (x_1, y_1, z_1, \dots, n_1)$ and $Q = (x_2, y_2, z_2, \dots, n_2)$. Here, $(x_1, y_1, z_1, \dots, n_1)$ and $(x_2, y_2, z_2, \dots, n_2)$ are the coordinates of the two points in n-dimensional space.

The Euclidean distance between these two points, denoted by $d(P, Q)$, is calculated as follows:

$d(P, Q) = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2 + \dots + (n_2 - n_1)^2}$ (4)

Euclidean distance is the square root of the sum of the squared differences between the corresponding coordinates of two points.

For example, the Euclidean distance between the points $P(x_1, y_1)$ and $Q(x_2, y_2)$ in two-dimensional space (n = 2) can be calculated as follows:

$d(P, Q) = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$ (5)

This formula represents the length of a straight line segment connecting two points. Euclidean distance is commonly used as a distance metric in various applications, including machine learning algorithms such as KNN, where it measures the similarity or dissimilarity between data points based on their feature values.
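As a quick numerical check of formulas (4) and (5), the snippet below computes the distance between two illustrative 2-D points both directly from the formula and with NumPy's norm function; the coordinates are invented.

```python
import numpy as np

# Two illustrative points in 2-D space, as in formula (5).
P = np.array([1.0, 2.0])
Q = np.array([4.0, 6.0])

d_manual = np.sqrt((Q[0] - P[0]) ** 2 + (Q[1] - P[1]) ** 2)   # formula (5) written out
d_numpy = np.linalg.norm(Q - P)                               # same result via NumPy

print(d_manual, d_numpy)   # both print 5.0
```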

Advantages of K-NN:

Simplicity: K-NN is a simple and straightforward algorithm. It makes no assumptions about the underlying data distribution, making it a non-parametric algorithm.

No training phase: K-NN is a lazy learning algorithm, meaning it does not build an explicit model during the training phase. It memorizes the training data and performs calculations at prediction time. This can be useful if the training data is updated frequently.

Versatility: K-NN can be applied to both classification and regression tasks. It can solve multi-class classification problems and adapt to different types of data.

Outlier robust: K-NN is robust against noisy data because it considers the local neighborhood of data points for prediction.

Interpretable results: K-NN provides transparency in the decision-making process. Prediction is based on nearest neighbors, allowing users to interpret and understand the reasoning behind classification or regression results.

The K-Nearest Neighbors (K-NN) algorithm is a popular and versatile machine learning algorithm used for classification and regression tasks. It works on the concept of finding K nearest neighbors of a given data point and making predictions or decisions based on their properties.

Neural networks in machine learning refer to a set of algorithms designed to help machines recognize patterns without being explicitly programmed. They consist of a group of interconnected nodes. These nodes represent the neurons of the biological brain.

A basic neural network consists of:

• Input layer

• Hidden layer

• Output layer

Figure 4. Basic neural network structure

Neural networks in machine learning use mathematical or computer models to process information.

These neural networks are typically nonlinear, which allows them to model complex relationships between input and output data and to find patterns in data sets.
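A minimal sketch of such a network is shown below: a single forward pass through an input layer, one nonlinear hidden layer, and an output layer. NumPy is assumed, and the layer sizes and randomly initialized weights are purely illustrative.

```python
import numpy as np

def sigmoid(z):
    """Nonlinear activation applied in the hidden and output layers."""
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative network: 3 inputs -> 4 hidden units -> 2 outputs; weights are random.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)   # hidden layer -> output layer

def forward(x):
    hidden = sigmoid(x @ W1 + b1)   # nonlinear hidden layer
    output = sigmoid(hidden @ W2 + b2)   # output layer (e.g. class scores)
    return output

x = np.array([0.5, -1.2, 0.3])   # one input feature vector
print(forward(x))
```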

Applications of neural networks in machine learning generally fall into one of these three broad categories:

• Classification, including pattern and sequence recognition

• Functional approximation and regression analysis

• Data processing, including data clustering and filtering

Using neural networks for machine learning has some advantages.

• They store information throughout the network, which means the neural network can continue to function even if some information is lost from one part of the neural network.

• When neural networks are trained on quality data sets, they save costs and time because they take less time to analyze the data and provide results. They are less prone to errors, especially when trained with high-quality data.

• Neural networks can provide high-quality and accurate results when properly trained.

Today, there are several types of neural networks. These neural networks are classified based on their density, layers, structure, data flow, depth, and activation filters, among other properties. We focus on three types of neural networks.

• Convolutional Neural Network (CNN).

• Recurrent Neural Network (RNN).

• Deep Neural Network (DNN).

A convolutional neural network (CNN) is a deep learning algorithm specifically designed for image data processing. Convolutional neural networks are used in image recognition and processing.

The neural networks in a CNN are arranged in a way similar to the visual cortex of the human brain, the part of the brain responsible for processing visual stimuli.

A convolutional neural network consists of:

• Convolutional layer;

• Pooling layer;

• Fully connected input layer;

• Fully connected layer;

• Fully connected output layer.

Convolutional neural networks are distinguished by their ability to learn by themselves. The algorithm works by assigning learnable weights and biases to parts of an image so that it can be distinguished from other images, while preserving the important features needed for better prediction.
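A minimal sketch of this layer stack, assuming Keras (TensorFlow) is available, might look like the following; the 28x28 grayscale input size, 10 output classes, and layer widths are illustrative and not taken from the article.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative CNN for 28x28 grayscale images and 10 classes; sizes are examples only.
model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # convolutional layer: learns local filters
    layers.MaxPooling2D((2, 2)),                    # pooling layer: downsamples feature maps
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                               # feeds the fully connected input layer
    layers.Dense(64, activation="relu"),            # fully connected layer
    layers.Dense(10, activation="softmax"),         # fully connected output layer
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```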

A recurrent neural network (RNN) is an artificial neural network that uses sequential or time-series data to solve speech recognition and language translation problems. RNNs have been used in:

• Language translation

• Natural language processing

• Speech recognition

• Image captioning

In recurrent neural networks, connections between nodes form directed or undirected graphs of temporal sequences. These neurons have internal memory, which makes RNNs the best neural networks for machine learning problems involving sequential data.

As with other neural networks, an RNN consists of input, hidden, and output layers. The input layer receives and processes the data before sending it to the hidden layers. The hidden layer extracts any useful information from the data before passing it to the output layer. In RNN models, each input is combined with information from previous steps, which is cycled through the layers of the RNN and stored in its memory.
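A minimal sketch of such a model, assuming Keras (TensorFlow) and an invented sequence-classification task (50 time steps with 16 features each, two output classes), might look like this:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative RNN for sequences of 50 time steps with 16 features each;
# the task and sizes are invented for demonstration.
model = tf.keras.Sequential([
    layers.Input(shape=(50, 16)),
    layers.SimpleRNN(32),                   # recurrent hidden layer with internal memory
    layers.Dense(2, activation="softmax"),  # output layer, e.g. a two-class prediction
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```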

Advantages of Recurrent Neural Networks.

• Recurrent neural networks can remember previous information, allowing them to accurately predict the next sequence

• They can be used with convolutional layers to extend the effective pixel neighborhood

• Larger input size does not increase model size

A deep neural network (DNN) is an artificial neural network consisting of several layers between input and output layers. These layers can be recurrent neural network layers or convolutional layers, making DNN a more sophisticated machine learning algorithm. DNNs have the ability to recognize and analyze sound, creative thinking, and voice commands.

A DNN is a type of machine learning algorithm that learns from many samples over repeated iterations. When you feed data to a computer, a DNN sorts the data based on its elements, such as loudness. The data is passed through successive layers until the type of sound in the data can be clearly identified. The model then receives feedback on the correct answer, which reinforces its learning process.
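A minimal sketch of such a deep stack of layers, assuming Keras (TensorFlow) and an invented sound-classification task with 40 extracted audio features and five categories, might look like this:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative deep network: several dense layers between input and output,
# e.g. for classifying sounds from 40 extracted audio features; all sizes are invented.
model = tf.keras.Sequential([
    layers.Input(shape=(40,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(5, activation="softmax"),   # e.g. five sound categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```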

Deep neural networks also have their own advantages of use:

• DNNs are capable of learning non-linear mappings between inputs and outputs and the underlying structure of input data vectors.

• DNNs are capable of self-learning

• They are expandable

Neural networks form the basis of machine learning applications designed to solve real-world problems. There are many types of neural networks to choose from depending on the application. Neural Networks are used in many fields, including image recognition, natural language processing, and voice command recognition.

Conclusion

Machine learning technologies improve the effectiveness of X-ray image diagnosis in several ways; the main factors behind this improvement are outlined below. X-ray analysis with machine learning speeds up the process and reduces errors. Algorithms study a large number of images and provide high accuracy in disease detection.

Disease detection. Early diagnosis. With the help of machine learning, the possibility of detecting diseases in the early stages increases, which facilitates the treatment process.

Detection of various diseases. Algorithms help to detect dental caries, periodontal diseases, and other problems.

Data analysis. Working with big data: Ability to quickly and efficiently analyze large amounts of X-ray images. Contextual learning: Algorithms learn to analyze new images in context based on their experience.

Reduction of Errors. Reduction of Human Factors: Human factor errors in diagnosis are reduced, which leads to more reliable results.

Learning process. The system continuously improves itself through machine learning. The system helps to quickly extract the most important information, which allows doctors to make quick decisions.

Patient Management. Personalized treatment: machine learning can be used to develop treatment plans tailored to a patient's condition. Machine learning technologies are improving the efficiency of diagnosing X-ray images, creating new opportunities in medicine. With their help, it is possible to detect diseases early, increase the accuracy of diagnosis, and make the process of treating patients more effective.

