
Analysis of Medical Images Classification Methods: The Case of Neutrophil Nuclei

Anna V. Neupokoeva1*, Semen A. Naydenov2, and Elena V. Shevchenko2

1 Samara State Medical University, 89 Chapaevskaya str., Samara 443099, Russian Federation

2 Irkutsk State Medical University, 1 Krasnogo Vosstaniya str., Irkutsk 664003, Russian Federation

*e-mail: annett 2005@inbox.ru

Abstract. A comparative analysis of methods for the processing and classification of medical images is exemplified by digital images of neutrophil nuclei. Special consideration was given to three methods: 1) measuring the fractal dimension of neutrophil nuclei to determine their functional state; 2) selecting the neutrophil nucleus contours and calculating their characteristics; 3) using a neural network to classify neutrophils according to the maturity degree. When using a neural network trained on the initial data, the classification accuracy was 60%, whereas after expanding the dataset by modifying the original images, the accuracy reached 85%, although the result was not stable. The combination of contour selection of the target objects in the image, calculation of the numerical characteristics of these contours, and classification using machine learning methods achieves an accuracy of 72-73%, whereas the accuracy deviation does not exceed 5.6%. © 2023 Journal of Biomedical Photonics & Engineering.

Keywords: fractals; deep learning; neural networks; image processing; neutrophils.

Paper #8811 received 5 Mar 2023; revised manuscript received 27 May 2023; accepted for publication 10 Jun 2023; published online 9 Jul 2023. doi: 10.18287/JBPE23.09.030302.

1 Introduction

These days, there are two clearly pronounced and interrelated trends in medicine. On the one hand, development of diagnostic technology and increased number of available diagnostic studies lead to a rise in the amount of information on each patient's health. On the other hand, the workload for doctors increases, especially during mass morbidity, which drives the need for developing systems for automatic processing of diagnostic information. One of the most common areas in medicine is processing and classification of the images obtained from ultrasound examinations, computed tomography, X-rays, blood smears, or images of histological sections.

Current clinical practice makes wide use of the White Blood Cell Differential Count - the percentage ratio of the various leukocyte types, particularly the different neutrophil forms, counted in a stained blood smear under a microscope.

Neutrophil granulocytes (neutrophil leukocytes, neutrophils) are the most numerous group of leukocytes. The reference values of their blood level vary from 48 to 78% of the total leukocyte number. The nucleus of a mature segmented neutrophil contains 3-5 segments connected by thin filaments. In blood, there are neutrophils of three maturity types: metamyelocytes (juvenile neutrophils), band neutrophils, and segmented neutrophils. Metamyelocytes and band neutrophils qualify as young cells. Under normal conditions, the metamyelocyte level does not exceed 0.5%; these cells are characterized by a kidney-bean-shaped nucleus. Band neutrophils under normal conditions make up 1-6% of the total number of leukocytes and have an unsegmented nucleus, often of a horseshoe or S shape. An increased percentage of metamyelocytes and band neutrophils may indicate an acute pyoinflammatory process or acute massive blood loss. This is caused by an accelerated hemopoiesis process and by replacement of the lost cells with younger forms [1, 2].

Microscopic examination of blood smears in an ordinary hospital is very time-consuming due to a large number of patients. Various methods of image processing automation have been actively developed in recent years. Conventionally, they can be divided into two types using different approaches. The first type involves the extraction of certain numerical characteristics from the source image: the distribution of target objects by brightness, color, size, area, contour length, surface roughness, etc. Classification is then performed on the obtained numerical data using various methods [3-6]: the k-nearest neighbors algorithm, the naive Bayes classifier, the random forest method, and the support vector machine.

This paper was presented on the Biophotonics Workshop of the XX All-Russian Youth Conference-Contest on Optics and Laser Physics, Dedicated to the 100th Anniversary of N.G. Basov, Samara, Russia, November 8-12, 2022.

Another approach involves submitting images as input data and using convolutional neural networks (CNN) for classification. Over the past few years, there has been an active growth in the use of neural networks for medical image recognition, for example, for screening diabetic retinopathy, classifying skin diseases and detecting metastases in lymph nodes [7-11]. There have also appeared studies in which both approaches are used for comparison and selecting the best method, or combined together to increase efficiency [12-14].

At the same time, most papers focus on X-ray or computed tomography images, since these modalities provide the large amounts of labeled data necessary for training a neural network [8, 10-12].

The papers devoted to processing blood images [15-20] solve the problems of erythrocyte classification in anemia [3, 4] or recognition and classification of leukocyte types in diagnosing leukemia [15-21]. When neural networks are used, both publicly available image databases [16, 18] and original images [17, 19, 20] serve for their training. It should be noted, however, that training a neural network on a set from one source and then presenting a set from another source for testing can significantly reduce classification accuracy [18].

Another problem is excessive complexity of standard neural network architectures when using small datasets, as well as the need to modify the architecture and to configure network parameters taking into account specific classification tasks [15, 16].

Refs. [15-21] address the task of leukocyte classification via solving two problems. The first is assigning the leukocyte in the image to a certain class and then counting the number of cells for each class (i.e. automating the quantitative processing of blood smears). The second is identifying the altered (atypical) forms of leukocytes and calculating the ratio between the normal and atypical forms in a blood smear to detect leukemia and specify its stage.

However, if the blood test is not related to detecting oncology, then the task of classifying one leukocyte type, neutrophils, by maturity degree is more routine and more widely demanded.

Therefore, the purpose of this work is to compare different methods of classifying neutrophil images and to choose the optimal approach in terms of accuracy, the need for pre-processing of images and the degree of pre-processing automation.

2 Materials and Methods

The materials for this study were fixed and routinely stained blood smears from patients of the Surgery Department of the Irkutsk Research Center of Surgery and Traumatology (Russia). Blood was collected before and after surgery. The blood smears were stained with Romanowsky-Giemsa solution for 20 min.

The prepared smears were viewed and microphotographed with the Altami-163t microscope equipped with a Levenhuk C-Series digital camera with a resolution of 2048 x 1546 px. The cells were viewed at 1000x magnification. The total set comprised 146 neutrophil images: 30 images of metamyelocytes, 57 images of band neutrophils, and 59 images of segmented neutrophils (Fig. 1).

All images were divided into three classes, with the neutrophils classified according to the morphological characteristics of granulocytic lineage cells given below [2]:

1) metamyelocytes (Fig. 1(a)), with a cell size from 10 to 16 μm and a bean- or kidney-shaped nucleus of a lilac shade. The cytoplasm is pale pink, grayish, or light blue, with neutrophilic granularity evenly distributed throughout;

2) band neutrophils (Fig. 1(b)), with a cell size from 9 to 12 μm. The nucleus is of a curved band shape and of purple color; the chromatin has a crude structure. The cytoplasm is pink with a purple shade, with neutrophilic granularity often unevenly spread over it;

3) segmented neutrophils (Fig. 1(c)), with a cell size from 9 to 12 μm. The nucleus is of irregular shape, polymorphic, segmented (with 2 to 5 lobes), and bright purple; between the lobes there are thin connecting filaments. The cytoplasm is pink or pink-purple with neutrophilic granularity.

2.1 Calculation of Fractal Dimension

The fractal dimension was calculated with the covering method by a specially developed algorithm described in Ref. [22].
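The covering algorithm of Ref. [22] is not reproduced in this paper; the closely related box-counting estimate can be sketched with NumPy. This is a minimal illustration, not the authors' implementation: the grid of box sizes and the synthetic test image are our own choices.

```python
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting:
    count the boxes of side s that contain at least one foreground pixel,
    then fit the slope of log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        h, w = binary.shape
        trimmed = binary[:h - h % s, :w - w % s]   # tile evenly into s x s boxes
        boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                trimmed.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a solid square is a 2D object, so D should come out as 2
img = np.zeros((64, 64), dtype=bool)
img[8:56, 8:56] = True
d = box_counting_dimension(img, sizes=(2, 4, 8))
print(round(d, 2))   # -> 2.0
```

For real nucleus contours, the estimate is sensitive to the chosen range of box sizes, which is one reason the dependence on magnification discussed later matters.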

2.2 Selection of Contours and Calculation of Their Parameters

To highlight the contours in the photo, two functions were written in Python using the OpenCV library.

The image of a blood smear containing a neutrophil was read in grayscale. The image was then blurred with a median filter, which replaces the central pixel with the median of all pixels in the kernel area. This type of blur is most effective at removing salt-and-pepper noise. The size of the original image was 1000 x 1000 px, and the selected size of the blur kernel was 25 px. The next step was to binarize the image: all pixels below a certain threshold were changed to white and those above the threshold were changed to black. Binarization isolates the image of the neutrophil nucleus, which looks much darker than the other cells on the stained smear, such as erythrocytes (Fig. 1).
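These two preprocessing steps can be sketched as follows. The paper used the OpenCV library; the NumPy-only functions and the threshold value below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def median_blur(gray, k=3):
    """Median filter: replace each pixel with the median of its k x k
    neighborhood (edges padded by reflection), as cv2.medianBlur does."""
    pad = k // 2
    padded = np.pad(gray, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return np.median(windows, axis=(2, 3)).astype(gray.dtype)

def binarize_inverse(gray, threshold=110):
    """Inverse binarization: pixels darker than the (assumed) threshold,
    i.e. the stained nucleus, become foreground (255); the rest become 0."""
    return np.where(gray < threshold, 255, 0).astype(np.uint8)

# Toy 'smear': a dark nucleus blob on a bright background plus one noise pixel
img = np.full((9, 9), 200, dtype=np.uint8)
img[3:6, 3:6] = 40      # dark nucleus region
img[0, 0] = 255         # isolated salt-noise pixel, removed by the blur
mask = binarize_inverse(median_blur(img, k=3))
print(int(mask[4, 4]), int(mask[0, 0]))   # -> 255 0
```

The blur suppresses the isolated bright pixel before thresholding, so only the nucleus region survives in the binary mask.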

Fig. 1 Three degrees of neutrophil maturity: (a) metamyelocyte, (b) band neutrophil, and (c) segmented neutrophil.


Fig. 2 Selection of a simple nucleus with one contour on the example of a band neutrophil (a, b, c, d) and a composite nucleus with several contours on the example of a segmented neutrophil (e, f, g, h): (a), (e) the original image, (b), (f) the image after blurring, (c), (g) the binary image, (d), (h) the selection of the core contour (it can be single (d) or composite (h)).

In the binary image, the contours were selected and sorted by area from the largest to the smallest. Then the lengths of the two largest contours were calculated and compared. If the difference between the lengths exceeded 600 units, only the largest contour was taken into account for calculation, while the others were discarded. If the difference was less than 600 units, the total length of the contours was taken. After that, the comparison procedure was repeated for the second and third contours, then for the third and fourth, etc. The program also counted how many contours were ultimately taken into account for each image.

The algorithm of summing the lengths of several contours was deliberately used to account for all segments of the nucleus of segmented neutrophils. For band neutrophils and metamyelocytes, this algorithm was automatically interrupted at the first step, since the image contained no other objects comparable in size to the neutrophil nucleus (Fig. 2).
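The sequential comparison rule can be sketched in plain Python. This is a simplification: the contour lengths are assumed to arrive already sorted in descending order (the paper sorts contours by area), and the 600-unit threshold is the one quoted above.

```python
def merge_contour_lengths(lengths, max_gap=600):
    """Sum the lengths of contours that plausibly belong to one nucleus:
    walk down the length-sorted list and stop at the first pair whose
    lengths differ by more than max_gap units (600 in the paper)."""
    lengths = sorted(lengths, reverse=True)
    total, used = lengths[0], 1
    for prev, cur in zip(lengths, lengths[1:]):
        if prev - cur > max_gap:
            break               # the remaining contours are noise; discard
        total += cur
        used += 1
    return total, used

# Segmented neutrophil: several comparable segments are summed together
print(merge_contour_lengths([900, 700, 650, 20]))   # -> (2250, 3)
# Band neutrophil: one dominant contour; the small speck is discarded
print(merge_contour_lengths([1300, 150]))           # -> (1300, 1)
```

The second return value is the contour count N used as a classification feature below.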

As a result of applying the written function to all images of the same class, 6 lists were formed in accordance with the number of calculated parameters:

1) contour length (d);

2) contour area (S);

3) the number of contours taken into account for calculations (N);

4) the ratio of the total contour length to the number of contours taken into account (the perimeter of a single contour, d/N);

5) the contour form-factor that takes into account the degree of deviation of the contour from the spherical shape and was calculated by the following formula:

form-factor = (S / d²) × 100, (1)

6) contour compactness calculated by the formula:

compactness = d² / S. (2)
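As a quick sanity check of the two shape descriptors in the reading form-factor = (S/d²)·100 and compactness = d²/S (our reconstruction of the garbled formulas in the source), a circle of any radius gives the extreme values 100/(4π) ≈ 7.96 and 4π ≈ 12.57:

```python
import math

def form_factor(d, S):
    # Eq. (1): degree of deviation of the contour from a circular shape
    return S / d**2 * 100

def compactness(d, S):
    # Eq. (2): contour length squared over contour area
    return d**2 / S

# For a circle of radius r: d = 2*pi*r and S = pi*r**2, so both values are
# independent of r; convoluted nuclei score lower (form-factor) or higher
# (compactness) than this circular reference.
r = 10.0
d, S = 2 * math.pi * r, math.pi * r**2
print(round(form_factor(d, S), 2), round(compactness(d, S), 2))   # -> 7.96 12.57
```

A segmented nucleus with a long, convoluted total contour therefore separates from the rounder metamyelocyte nucleus on these two axes.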

Further classification was carried out using several combinations of parameters. The most effective pairs were "contour form-factor - contour area" and "contour form-factor - the ratio of the total contour length to the number of contours taken into account in the calculation". The classification methods (the k-nearest neighbors method, the naive Bayes method, and the logistic regression method) were applied by means of the standard algorithms of the Sklearn library [23-25]. The entire dataset was shuffled randomly, with 80% of the data used to build a classifier and 20% used to test its efficiency. Each method was applied 10 times, and the dataset was reshuffled after each application. Then the average accuracy (as a percentage) and the standard deviation of the accuracy were calculated.
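The evaluation protocol (ten random 80/20 splits, mean accuracy and its deviation) can be illustrated without Sklearn. The minimal k-nearest-neighbors classifier below is a NumPy stand-in for the library call, and the synthetic three-class dataset is purely illustrative.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Minimal k-nearest-neighbors classifier (Euclidean distance,
    majority vote) standing in for sklearn's KNeighborsClassifier."""
    dists = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]
    return np.array([np.bincount(y_train[row]).argmax() for row in nearest])

def repeated_holdout(X, y, runs=10, test_frac=0.2, seed=0):
    """The evaluation protocol described above: shuffle the whole dataset,
    hold out 20% for testing, repeat 10 times, report mean and deviation."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(runs):
        idx = rng.permutation(len(y))
        cut = int(len(y) * (1 - test_frac))
        tr, te = idx[:cut], idx[cut:]
        pred = knn_predict(X[tr], y[tr], X[te])
        accs.append((pred == y[te]).mean())
    return 100 * np.mean(accs), 100 * np.std(accs)

# Synthetic, well-separated three-class data (illustrative only)
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(c, 0.3, size=(40, 2)) for c in (0.0, 2.0, 4.0)])
y = np.repeat(np.arange(3), 40)
mean_acc, std_acc = repeated_holdout(X, y)
print(f"accuracy {mean_acc:.1f} +/- {std_acc:.1f} %")
```

Reporting the spread across reshuffled splits is what allows the paper to compare not only the accuracy but also the stability of each method.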

2.3 Using a Neural Network for Classification

To classify neutrophils according to the degree of maturity, a convolutional neural network (CNN) was used.

Any neural network consists of an input layer, one or more hidden layers, and an output layer. Convolutional neural networks have one or more blocks of convolutional layers designed to highlight characteristic image features by performing a convolution operation over the entire image [26], followed as a rule by dimension reduction (pooling). Convolutional layers are followed by fully connected layers, in which each neuron has a weighted connection with each neuron in the previous layer. The best-known networks that have performed well in image recognition and classification, such as AlexNet, GoogLeNet, VGG, and ResNet, have a large number of convolutional layers (5 for AlexNet and 16 for VGG19), as they are designed to recognize a large number of classes (e.g., 1000 classes in the ImageNet Large-Scale Visual Recognition Challenge). At the same time, it is well known [11, 26] that with small datasets and a small number of classes, a large number of convolutional layers leads to overfitting, decreased classification accuracy, and increased training time.

Since the neural network architecture affects the image classification capabilities and computational complexity of this process, the network configuration should be optimized. The optimization task consists of several steps including selecting the activation function, determining the number of hidden layers and the number of neurons in each layer. The number of neural blocks (or filters) in the input and output layers depends on the input data of this model and the number of classes at the output. There are no specific recommendations for determining and selecting the optimal number of hidden layers and the number of neurons in them, so in practice these parameters are often chosen by trial and error [11, 17, 18, 27].

To determine the optimal neural network architecture, networks with different numbers of neurons and hidden layers were tested experimentally. In our case, the CNN input layer should consist of three channels (one for each component of the RGB image), and the output layer consists of three neurons, according to the number of classes that we define. Variants with one, two, and three blocks of convolutional layers were tested. Each block consisted of two consecutive convolutional layers with a 3 x 3 convolution kernel, and the number of filters in each subsequent block was two times greater than in the previous one. The next layers were max pooling (which reduces dimension) and data normalization (BatchNormalization). To avoid CNN overfitting, a dropout of 20% of neurons was applied. The activation functions were ReLU.

For classifying neutrophil images without preprocessing, the best result was shown by a neural network whose architecture is presented in Fig. 3. The CNN was created by means of the Keras library for the Python programming language.

An image converted to 256 x 256 px format was fed to the network input. The neural network included three blocks of convolutional layers (each block was followed by a dimension reduction operation) and three fully connected layers containing 64, 32, and 3 neurons, respectively (Fig. 3).
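The described architecture can be sketched in Keras roughly as follows. This is an assumption-laden reconstruction, not the authors' code: the starting filter count base_filters, the optimizer, and the loss are not stated in the paper, and the layer ordering simply follows the description above.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(256, 256, 3), n_classes=3, base_filters=16):
    """Three blocks of paired 3x3 convolutions (filter count doubling per
    block), each followed by max pooling and batch normalization, then
    20% dropout and 64/32/3 fully connected layers, as described above."""
    model = keras.Sequential([keras.Input(shape=input_shape)])
    filters = base_filters
    for _ in range(3):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D())
        model.add(layers.BatchNormalization())
        filters *= 2
    model.add(layers.Flatten())
    model.add(layers.Dropout(0.2))   # drop 20% of neurons against overfitting
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(32, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
print(model.output_shape)   # (None, 3)
```

The three-neuron softmax output matches the three maturity classes defined in Section 2.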

Fig. 3 Neural network architecture.


Fig. 4 Fractal dimension of neutrophils.

The whole set of images was divided into a part intended for training a neural network (80% or 117 images) and a part used for validation of learning outcomes (20% or 29 images). To assess the quality of the neural network classification, the standard parameters were used: accuracy and loss function. The training was carried out for 100 epochs.

3 Results

The calculation of the fractal dimension shows the absence of intersections between the three neutrophil classes defined by their morphological features (Fig. 4): for the band neutrophils, the fractal dimension of the nucleus ranges from 1.06 to 1.08; for the segmented neutrophils, from 1.11 to 1.14; and for the metamyelocytes, from 1.01 to 1.03. The confidence interval was calculated using the standard method for small samples, so an equal number of images of each class (30 per class) was selected from the entire image set [22].

The obtained data were processed by calculating Student's t-coefficient for independent quantities. A comparison of the critical and empirical t-criteria showed the statistical significance of the differences in the fractal dimension between the nuclei of the metamyelocytes and those of the segmented and band neutrophils [22].

Calculating the lengths, areas, and other parameters of the neutrophil nucleus contours described above is adequate for separating the metamyelocytes (contour length 872 ± 234, contour area (60.8 ± 13.2) × 10³) from the two other classes (contour lengths 1356 ± 432 and 1470 ± 284, contour areas (74.4 ± 12.8) × 10³ and (62.6 ± 10.6) × 10³). At the same time, it is impossible to separate the segmented and the band neutrophils by the length or the area of the nucleus contour, since these parameters overlap for the two classes. However, when the number of contours taken into account is considered, the metamyelocytes and band neutrophils contain one contour in all images, whereas the segmented neutrophils contain from 1 to 4 (1.8 contours on average).


Fig. 5 The position of the objects as points on the coordinate plane: (a) "contour form-factor - the ratio of the total contour length to the number of contours taken into account in the calculation", (b) "contour form-factor - contour area". The metamyelocytes and the band and segmented neutrophils are marked with dots of different colors.

Therefore, two combinations of parameters were experimentally selected for automatic classification: "contour form-factor - contour area" and "contour form-factor - the ratio of the total length of the contour to the number of contours taken into account in the calculation". Fig. 5 shows the position of all the studied objects in the form of points on the coordinate plane with the objects of different classes indicated by different colors.

The accuracy of classification by various methods based on the numerical contour data is presented in Table 1. The highest accuracy was shown by the Bayes method (73%) whereas the logistic regression method has a slightly lower accuracy (72%) but a more stable classification result.

The results of the neural networks applied to original image classification turned out to be unstable: the classification accuracy on the training data reached 93%, but the validation accuracy did not exceed 60% (Fig. 6(a)). The low classification accuracy can be attributed to a relatively small amount of training data.

Table 1 Classification accuracy (%) of neutrophil nuclei by various methods based on numerical contour parameters.

Contour parameters | k-nearest neighbors method | logistic regression method | Bayes method
"contour form-factor - the ratio of the total contour length to the number of contours taken into account in the calculation" | 69.1 ± 5.6 | 72.4 ± 5.0 | 73.6 ± 5.6
"contour form-factor - contour area" | 52.4 ± 8.8 | 65.8 ± 5.6 | 71.9 ± 7.7

Fig. 6 Classification accuracy and loss function: (a) for source images, (b) for binary images. The blue dotted line is training, the solid orange line is validation.

In this study, we generated additional data by rotating images by a random angle of up to 270 degrees, shifting them vertically and horizontally by 10% of the original size, and reflecting them horizontally. The amount of data for segmented and band neutrophils was increased 10 times, while the increase for metamyelocytes was 20 times, since the number of their images was initially smaller. Thus, the expanded dataset consisted of 1510 images (520 for the segmented neutrophils, 500 for the band neutrophils, and 490 for the metamyelocytes). 1208 images were used to train the neural network, and 302 images were used for validation.
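The augmentation scheme can be sketched with NumPy alone. Random-angle rotation, which the paper also used, is omitted here since it would require an extra dependency such as scipy.ndimage.rotate; the flip probability and the wrap-around shift are our simplifications.

```python
import numpy as np

def augment(image, n_copies, rng):
    """Produce n_copies modified versions of one image via random
    horizontal reflection and vertical/horizontal shifts of up to 10%
    of the image size. np.roll wraps shifted pixels around the edge
    (a simplification of a true shift with background fill)."""
    h, w = image.shape[:2]
    out = []
    for _ in range(n_copies):
        img = image[:, ::-1] if rng.random() < 0.5 else image
        dy = int(rng.integers(-h // 10, h // 10 + 1))
        dx = int(rng.integers(-w // 10, w // 10 + 1))
        out.append(np.roll(img, (dy, dx), axis=(0, 1)))
    return out

rng = np.random.default_rng(0)
originals = [np.full((32, 32), i, dtype=np.uint8) for i in range(30)]
# 20x expansion, as applied to the under-represented metamyelocyte class
expanded = [aug for img in originals for aug in augment(img, 20, rng)]
print(len(expanded))   # -> 600
```

When the classes are expanded by different factors, as in the paper, the resulting dataset ends up approximately balanced.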

Neural network training on the expanded dataset took 100 epochs; the accuracy on the training sample saturated by epoch 70 at 98%, although the classification accuracy during validation remained unstable and averaged about 85% (Fig. 7).

The relatively low accuracy of the neural-network classification of neutrophil images is explained by the small number of images in the training set. At the same time, non-binary classification of objects often demonstrates poor accuracy even with a training sample of several thousand images. Thus, an attempt to divide fundus images automatically into 5 classes - the absence of diabetic retinopathy and four stages of its presence - demonstrated only a 57% classification accuracy even for a set of more than 12,000 images [29]. The same paper reports that the transition to binary classification increased the accuracy up to 80%.

Fig. 7 Classification accuracy and loss function for classification by an extended set of non-processed images. The blue dotted line is training, the solid orange line is validation.

The use of neural networks to classify binary images showed a classification accuracy of 75-85% on the training data, while the validation accuracy did not exceed 65%; however, the classification results are more stable than for the original images (Fig. 6(b)).

To increase classification accuracy, a larger dataset is needed. One of the ways to expand the training dataset is to add images obtained by rotating/shifting/compressing the original photos [20, 28].

Ref. [30], on the contrary, states that 3000 images are sufficient not only for a stable classification result but also for obtaining an accuracy of 95-98% when dividing images into three classes. Both papers [29, 30] used open data from the Kaggle database. At the same time, in Refs. [31, 32], the original image sets required preprocessing of the images or inclusion of additional parameters to improve classification accuracy.

Refs. [15, 16, 18] describe the classification of leukocytes by five types of neural networks with much more complex architectures than the one we propose. Their high classification accuracy (97-98%) was achieved by training the network on a large dataset and then using the already trained network to classify leukocytes (the so-called "transfer learning" approach).

Ref. [33] used a set of images comparable to ours, with the goal of segmenting the cells in the image and dividing them into three classes. The classification accuracy depended on the cell type and reached 56-61% for erythrocytes, 67-91% for lymphocytes, and 28-56% for platelets.

It should be noted that the classification accuracy we obtained with a neural network - 60% for the initial image set and 85% for the extended set - is in good agreement with the literature data on similar problems [18, 19, 33], but the stability of the results is insufficient for practical use.

Therefore, in the case of a small initial image set, a more stable result is obtained by preprocessing images, which makes it possible to identify the numerical characteristics of the contours, and then to use them for classification. A similar approach was used in Ref. [17] to solve the classification problem, and in Ref. [34] for segmentation of cell images.

The algorithm of contour selection and calculation of their numerical parameters proposed in this paper has proven its validity and feasibility as one of the stages in automating the calculation of fractal dimension. It should be noted that the algorithm identified the nucleus contour incorrectly in only 16% of cases. As a rule, incorrect operation of the algorithm is attributed to the single binarization threshold used for all images: in some cases, red blood cells are stained as darkly as the neutrophil nucleus, which leads to erroneously accounting for an additional contour; in the opposite situation, the nucleus is stained weakly, and the selected contour is unnecessarily "cut off".

The presented results demonstrate that fractal analysis provides the best accuracy in classifying neutrophils by maturity degree, because this method most directly takes into account the shape of the neutrophil nucleus rather than its size. A disadvantage of the method at the moment is incomplete automation at the contour selection stage, which should be eliminated in the future. Moreover, the dimension calculation was carried out using images taken with the same magnification and having the same resolution; the question of the stability of the results under other image acquisition parameters remains open. For example, Ref. [30] uses an algorithm similar to the one described in this paper - image binarization, contour selection, and fractal dimension calculation - and reveals that the results of fractal analysis depend on the degree of magnification of histological images.

The most stable classification result is observed for a combination of several numerical contour parameters with subsequent classification by machine learning methods.

4 Conclusion

This study has shown the possibility of applying two different approaches to image analysis: classifying images directly by means of a neural network, and extracting numerical features from the images so that classification can be carried out by machine learning methods.

Training a neural network on a limited amount of data has proven to be possible, but the classification accuracy of this technique is quite low. When the dataset is expanded by modifying the original images, the classification accuracy increases to an average of 85%, whereas the main disadvantage of this method remains the instability of the results.

Selecting the contours of the target objects in the image and calculating their numerical characteristics allow a classification accuracy of 72-73% to be achieved, while the accuracy deviation does not exceed 5.6%.

5 Disclosures

The authors have no relevant financial interest in this article and no conflict of interest to disclose.

References

1. J. Actor, Elsevier's Integrated Review Immunology and Microbiology, 2nd ed., Elsevier (2012). ISBN: 9780323074476.

2. D. Zucker-Franklin, M. F. Greaves, C. E. Grossi, and A. M. Marmont, "Neutrophils," in Atlas of Blood Cells: Function and Pathology, Lea and Febiger, Philadelphia 1(2) (1988).

3. T. Go, H. Byeon, and S. J. Lee, "Label-free sensor for automatic identification of erythrocytes using digital in-line holographic microscopy and machine learning," Biosensors and Bioelectronics 103, 12-18 (2018).

4. Y. G. Kim, Y. Jo, Y. Cho, H. S. Min, and Park, "Learning-based screening of hematologic disorders using quantitative phase imaging of individual red blood cells," Biosensors and Bioelectronics 123, 69-76 (2019).

5. V. K. Ilyin, Z. O. Solovieva, M. A. Skedina, N. V. Verdenskaya, K. V. Volkova, and I. A. Ivanova, "Choice of an optimal set of signs and evaluation of the quality of microbial objects recognition by their images," Aerospace and Environmental Medicine 52(3), 73-79 (2018). [in Russian]

6. P. Wang, J. Wang, L. Wang, M. Yin, Y. Li, and J. Wu, "Classification of pathogenic bacteria using near-infrared diffuse reflectance spectroscopy," Journal of Applied Spectroscopy 85(6), 1029-1036 (2018).

7. V. Gulshan, L. Peng, M. Coram, M. C. Stumpe, D. Wu, A. Narayanaswamy, S. Venugopalan, K. Widner, T. Madams, J. Cuadros, R. Kim, R. Raman, P. C. Nelson, J. L. Mega, and D. R. Webster, "Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs," Journal of the American Medical Association 316(22), 2402-2410 (2016).

8. R. Yamashita, M. Nishio, R. K. G. Do, and K. Togashi, "Convolutional neural networks: an overview and application in radiology," Insights into Imaging 9, 611-629 (2018).

9. V. O. Vinokurov, Y. Khristoforova, O. Myakinin, I. Bratchenko, A. Moryatov, A. Machikhin, and V. Zakharov, "Neural network classifier of hyperspectral images of skin pathologies," Computer Optics 45(6), 879-886 (2021). [in Russian]

10. X. Yi, E. Walia, and P. Babyn, "Generative adversarial network in medical imaging: A review," Medical Image Analysis 58, 101552 (2019).

11. G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, Jeroen A.W.M. van der Laak, Bram van Ginneken, and C. I. Sánchez, "A survey on deep learning in medical image analysis," Medical Image Analysis 42, 60-88 (2017).

12. Yu. D. Agafonova, A. V. Gaidel, E. N. Surovtsev, and A. V. Kapishnikov, "Meningioma detection in MR images using convolutional neural network and computer vision methods," Journal of Biomedical Photonics and Engineering 6(3), 030301 (2020).

13. G. C. Mallika, A. Alsadoon, D. T. H. Pham, S. Abdullah, H. T. Mai, P. W. C. Prasad, and T. Q. V. Nguen, "A Novel Intelligent System for Detection of Type 2 Diabetes with Modified Loss Function and Regularization," Proceedings of the Institute for System Programming of the Russian Academy of Sciences, 33(2), 93-114 (2021). [In Russian]

14. A. E. Sulavko, P. S. Lozhnikov, A. G. Choban, D. G. Stadnikov, A. A. Nigrey, and D. P. Inivatov, "Evaluation of EEG identification potential using statistical approach and convolutional neural networks," Information and Control Systems (6), 37-49 (2020).

15. A. Meenakshi, J. A. Ruth, V. R. Kanagavalli, and R. Uma, "Automatic classification of white blood cells using deep features based convolutional neural network," Multimedia Tools and Applications 81(21), 30121-30142 (2022).

16. H. Kutlu, E. Avci, and F. Özyurt, "White blood cells detection and classification based on regional convolutional neural networks," Medical Hypotheses 135, 109472 (2020).

17. A. Bodzas, P. Kodytek, and J. Zidek, "Automated Detection of Acute Lymphoblastic Leukemia From Microscopic Images Based on Human Visual Perception," Frontiers in Bioengineering and Biotechnology 8, 1005 (2020).

18. L. Vogado, R. Veras, K. Aires, F. Araújo, R. Silva, M. Ponti, and J. M. R. Tavares, "Diagnosis of leukaemia in blood slides based on a fine-tuned and highly generalisable deep learning model," Sensors 21(9), 2989 (2021).

19. C. Marzahl, M. Aubreville, and A. Maier, "Classification of leukemic B-Lymphoblast cells from blood smear microscopic images with an attention-based deep learning method and advanced augmentation techniques," in ISBI 2019 C-NMC Challenge: Classification in Cancer Cell Imaging, Lecture Notes in Bioengineering, A. Gupta, R. Gupta (Eds.), Springer, Singapore, 13-22 (2019).

20. M. S. Jarjees, S. S. M. Sheet, and B. T. Ahmed, "Leukocytes identification using augmentation and transfer learning based convolution neural network," Telkomnika (Telecommunication Computing Electronics and Control) 20(2), 314-320 (2022).

21. S. M. Abas, A. M. Abdulazeez, and D. Q. Zeebaree, "A YOLO and convolutional neural network for the detection and classification of leukocytes in leukemia," Indonesian Journal of Electrical Engineering and Computer Science 25(1), 200-213 (2022).

22. S. A. Naydenov, P. A. Naydenov, and E. V. Shevchenko, "Practical implementation analysis of the fractal dimension calculating algorithm for the medical images by the example of neutrophil nuclei," Journal of Biomedical Photonics and Engineering 6(1), 010304 (2020).

23. H. Brink, J. W. Richards, and M. Fetherolf, Real-World Machine Learning, Manning Publication, Shelter Island, New York (2016). ISBN 9781617291920.

24. C. M. Bishop, Pattern recognition and machine learning, Springer, New York (2006). ISBN: 9780387310732.

25. A. Smola and S. V. N. Vishwanathan, Introduction to Machine Learning, Cambridge University Press, Cambridge, United Kingdom (2008).

26. M. Nielsen, Neural Networks and Deep Learning, Determination Press, San Francisco, CA, USA (2015).

27. T. Falk, D. Mai, R. Bensch, Ö. Çiçek, A. Abdulkadir, Y. Marrakchi, A. Böhm, J. Deubner, Z. Jäckel, K. Seiwald, A. Dovzhenko, O. Tietz, C. D. Bosco, S. Walsh, D. Saltukoglu, T. L. Tay, M. Prinz, K. Palme, M. Simons, I. Diester, T. Brox, and O. Ronneberger, "U-Net: deep learning for cell counting, detection, and morphometry," Nature Methods 16, 67-70 (2019).

28. T. A. Sumi, M. S. Hossain, and K. Andersson, "Automated Acute Lymphocytic Leukemia (ALL) Detection Using Microscopic Images: An Efficient CAD Approach," Chapter 5 in Proceedings of Trends in Electronics and Health Informatics: TEHI 2021, Lecture Notes in Networks and Systems 376, M. S. Kaiser, A. Bandyopadhyay, K. Ray, R. Singh, V. Nagar (Eds.), 363-376 (2022).

29. T. H. Mamedov, D. V. Dzjuba, and A. N. Narkevich, "Application of convolutional neural networks for recognition of diabetic retinopathy," Siberian Medical Review 1, 83-87 (2022). [in Russian]

30. A. M. Ignatova, M. A. Zemlyanova, M. S. Stepankov, and Y. V. Kol'dibekova, "Using multifractal analysis to assess the morphology of lung tissues with and without pathology," Fundamental and Applied Aspects of Public Health Risk Analysis: Materials of the All-Russian Scientific and Practical Internet Conference of Young Scientists and Specialists of Rospotrebnadzor with International Participation, 10-14 October 2022, Perm, Russia, 222-229 (2022). [in Russian]

31. V. K. Belyakov, E. P. Sukhenko, A. V. Zakharov, P. P. Koltsov, N. V. Kotovich, A. A. Kravchenko, A. S. Kutsaev, A. S. Osipov, and A. B. Kuznetsov, "On one method of blood cell classification and its software implementation," Software & Systems 4(108), 46-56 (2014). [in Russian]

32. S. N. Rjabceva, V. A. Kovalev, V. D. Malyshev, I. A. Siamionik, M. A. Derevyanko, R. A. Moskalenko, A. S. Dovbysh, T. R. Savchenko, and A. N. Romaniuk, "Development of an algorithm for searching for tumor areas based on the processing of full-slide histological images of breast cancer," Doklady BGUIR 18(8), 21-28 (2020).

33. J. Pfeil, A. Nechyporenko, M. Frohme, F. T. Hufert, and K. Schulze, "Examination of blood samples using deep learning and mobile microscopy," BMC Bioinformatics 23, 65 (2022).

34. A. Sharma and B. Buksh, "Intellectual acute lymphoblastic leukemia (ALL) detection model for diagnosis of blood cancer from microscopic images using hybrid convolutional neural network," International Journal of Engineering and Advanced Technology 8(6), 2972-2981 (2019).