
Tomsk State University Journal of Control and Computer Science. 2022. No. 58

INFORMATICS AND PROGRAMMING

Original article

doi: 10.17223/19988605/58/9

Automated detection of COVID-19 coronavirus infection based on analysis of chest X-ray images by deep learning methods

Evgenii Yu. Shchetinin1, Leonid A. Sevastyanov2

1 Financial University under the Government of Russian Federation, Moscow, Russian Federation, riviera-molto@mail.ru 2 Peoples Friendship University of Russia, Moscow, Russian Federation, sevast@sci.pfu.edu.ru

Abstract. Early detection of COVID-19 infected patients is essential to ensure adequate treatment and reduce the load on the healthcare systems. One of effective methods for detecting COVID-19 is deep learning models of chest X-ray images. They can detect the changes caused by COVID-19 even in asymptomatic patients, so they have great potential as auxiliary systems for diagnostics or screening tools.

This paper proposed a methodology consisting of the stage of pre-processing of X-ray images, augmentation and classification using deep convolutional neural networksXception, InceptionResNetV2, MobileNetV2, DenseNet121, ResNet50 and VGG16, previously trained on thelmageNet dataset. Next, they fine-tuned and trained on prepared data set of chest X-rays images. The results of computer experiments showed that theVGG16 model with fine tuning of the parameters demonstrated the best performance in the classification of COVID-19 with accuracy 99,09%, recall=98,318%, precision=99,08% and f1_score=98,78. This signifies the performance of proposed fine-tuned deep learning models for COVID-19 detection on chest X-ray images.

Keywords: COVID-19; chest X-rays; deep learning; convolutional neural networks

Acknowledgments:

This paper has been supported by the RUDN University Strategic Academic Leadership Program.

For citation: Shchetinin, Eu.Yu., Sevastyanov, L.A. (2022) Automated detection of COVID-19 coronavirus infection based on analysis of chest X-ray images by deep learning methods. Vestnik Tomskogo gosudarstvennogo universiteta. Upravlenie, vychislitelnaja tehnika i informatika - Tomsk State University Journal of Control and Computer Science. 58. pp. 97-105. doi: 10.17223/19988605/58/9

Original article (Russian-language version)
UDC 519.7
doi: 10.17223/19988605/58/9

A computer system for COVID-19 detection from chest X-ray images by deep learning methods

Evgenii Yu. Shchetinin1, Leonid A. Sevastyanov2

1 Financial University under the Government of the Russian Federation, Moscow, Russia, riviera-molto@mail.ru
2 Peoples Friendship University of Russia, Moscow, Russia, sevast@sci.pfu.edu.ru

Abstract. Chest X-ray imaging is one of the effective methods for detecting COVID-19 coronavirus infection. The paper proposes a methodology for computer analysis of chest X-ray images using the deep convolutional neural networks Xception, MobileNetV2, DenseNet121, ResNet50, InceptionResNetV2 and VGG16, pre-trained on the ImageNet dataset. Computer experiments showed that the VGG16 model has the best performance in COVID-19 classification, with accuracy of 99.09% and recall of 98.318%.

Keywords: COVID-19; chest X-ray images; deep learning; convolutional neural networks

Acknowledgments: This work was supported by the RUDN University Strategic Academic Leadership Program.

For citation: Shchetinin, E.Yu., Sevastyanov, L.A. (2022) A computer system for COVID-19 detection from chest X-ray images by deep learning methods. Tomsk State University Journal of Control and Computer Science. 58. pp. 97-105. doi: 10.17223/19988605/58/9

© E.Yu. Shchetinin, L.A. Sevastyanov, 2022

The coronavirus disease 2019 (COVID-19) pandemic, caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), continues to have a devastating global impact with far-reaching social and economic consequences. The pandemic has placed a huge burden on healthcare systems around the world, which struggle to provide care and treatment for patients due to limited numbers of healthcare workers and clinical resources such as ventilators, oxygen, personal protective equipment and other medical supplies [1, 2]. Because of the exponential increase in the number of COVID-19 cases, there is high demand for quick and accurate testing. The standard test is reverse transcription polymerase chain reaction (RT-PCR); other methods include antigen testing. In addition, patients experiencing symptoms of coronavirus disease are, as a rule, asked to undergo a chest CT or X-ray scan.

Chest X-ray has several important advantages as a screening tool for COVID-19 diagnostics. First, chest X-ray equipment is among the most affordable medical imaging techniques in healthcare settings. Second, it is relatively quick to decontaminate compared to other medical imaging equipment, and the widespread availability of portable systems allows screening to be performed in isolation rooms, which greatly reduces the likelihood of infection transmission. Third, chest X-ray is commonly used to assess respiratory complaints, which are among the key symptoms of COVID-19, and can therefore be used in parallel with other tests. Finally, a chest X-ray can be used to assess the severity of disease in a patient who has tested positive for COVID-19, which cannot be done with PCR tests [3].

Despite the many benefits of chest X-ray for COVID-19 screening, one challenge is the limited number of expert radiologists needed to interpret the data, visualize it for screening, and assess disease severity. Thus, the development and implementation of automated clinical decision support systems that assist radiologists in accelerating the interpretation of imaging data can significantly support the clinical management of COVID-19, improve medical care for large numbers of patients and help manage the course of the pandemic [4]. Artificial intelligence and deep learning are currently among the most advanced predictive methods in almost all areas. Computer systems based on artificial intelligence are making serious advances in healthcare, and their use can significantly reduce the time needed to identify patients infected with the COVID-19 virus [5, 6].

The main goal of this work is to develop effective deep learning models for detecting and classifying COVID-19 and pneumonia based on the analysis of chest X-rays. We developed six models based on the pre-trained deep neural networks VGG16 [7], DenseNet121, ResNet50 [8], Xception [9], InceptionResNetV2 and MobileNetV2. They were then trained and tested on a data set of chest X-rays assembled from several public data sets. The results showed that the models proposed in the paper achieved accuracy of 99.09%, recall of 98.318%, precision of 99.08% and f1-score of 98.78%, which exceeds or is comparable to the indicators in similar studies.

The main contribution of this paper to research on COVID-19 detection using computer systems is the constructed deep learning models, together with quantitative estimates of their performance for three classes of images from a purpose-built set of chest X-ray images.

1. Development of computer models for automatic detection of COVID-19 cases based on chest X-ray images analysis

Automated diagnosis of pneumonia, including pneumonia associated with COVID-19 infection, from chest X-ray images is a computer vision problem that has been posed as an image classification problem. Extensive research is currently underway to determine an accurate and reliable deep learning (DL) model for the detection and classification of COVID-19 disease. Researchers classify chest X-rays and CT images of patients using various deep learning models. As a rule, such studies solve the binary classification problem for the classes {COVID-19, No_COVID-19} and obtain rather high values of the COVID-19 detection accuracy metrics. The authors of [10] proposed several convolutional neural network (CNN) models used as binary classifiers and reached 99% accuracy. The authors of [11] classified normal and COVID-19 X-ray images using the deep CNN pre-trained models ResNet50, ResNet18, ResNet101, VGG19 and VGG16; they obtained 92.6% accuracy for the ResNet50 model. Ozturk et al. [12] proposed the DarkNet model, which achieves 98.08% accuracy for binary classes and 87% accuracy for the multi-class case. In [13], the authors proposed a CNN model to classify the Normal, Pneumonia and COVID-19 classes with an accuracy of 92.4%. Similarly, other researchers have also put effort into detecting COVID-19 cases from chest X-ray images using various deep models [14-19].

However, in our opinion, the task of detecting pneumonia and distinguishing it from COVID-19 is also significant in clinical practice: the two diseases have similar symptoms at the time of detection but require different treatment, so they have to be separated at the same stage at which COVID-19 is detected. In our work, we developed deep learning models aimed at classifying three classes of chest X-rays: those containing signs of COVID-19, those with pneumonia, and those of the lungs of healthy people. For this, the ResNet50, MobileNetV2, VGG16, Xception, InceptionResNetV2 and DenseNet121 base deep neural networks were used, pre-trained on the ImageNet image set [20] and then tuned on the set of studied X-ray images. This approach is standard practice in training deep learning models, as it speeds up and simplifies the development and training of models on new data.

First, we create a sequential model in the Keras deep learning library [21]. To build a classification model for the X-ray images, we imported the weights of the base models pre-trained on ImageNet and froze them by setting the trainable attribute of each layer to False, so that the parameters of the convolutional layers are preserved during training. The input sizes of pre-trained models can vary, so the input_shape parameter of the input layer must be set to match the network input, (224, 224, 3) in our case. Next, a Flatten layer and two Dense layers with 'relu' and 'softmax' activation functions, respectively, separated by Dropout(0.2) and BatchNormalization layers, were attached to the frozen layers of the model; this helps to avoid overfitting. In addition, the output layer of the pre-trained models has to be replaced, since it is configured to classify 1000 image classes, while we have three classes ('COVID-19' = 0, 'Normal' = 1, 'Pneumonia' = 2). The model constructed in this way was trained with the 'sparse_categorical_crossentropy' loss function and the accuracy, recall, f1-score and AUC metrics for 20 epochs.
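A minimal sketch of this construction in Keras/TensorFlow is given below. VGG16 is used as an example backbone, and the width of the hidden Dense layer (512) is our assumption, since the text does not specify it; this is an illustration of the described architecture, not the authors' original code.

```python
# Sketch of the frozen-backbone classifier described above (Keras / TensorFlow 2.x).
# VGG16 is an example backbone; Dense(512) is an illustrative choice.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

IMG_SIZE = 224
NUM_CLASSES = 3                         # 'COVID-19' = 0, 'Normal' = 1, 'Pneumonia' = 2

base = VGG16(weights="imagenet", include_top=False,
             input_shape=(IMG_SIZE, IMG_SIZE, 3))
base.trainable = False                  # freeze all convolutional layers

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.BatchNormalization(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_set, validation_data=test_set, epochs=20)
```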

Then, in order to improve the performance metrics, the models were fine-tuned as follows. First, the upper part of the convolutional blocks of the model is unfrozen, i.e., the trainable attribute of these layers is set to True so that their parameters are updated during retraining. For example, the VGG16 model contains 13 convolutional layers, of which we unfroze the last three. Then we attached GlobalAveragePooling2D, BatchNormalization and Dropout(0.2) layers to the model. Finally, the layers Dense(units = 512, activation = 'relu'), BatchNormalization, Dropout(0.5) and Dense(3, activation = 'softmax') were attached. The model was compiled with the 'categorical_crossentropy' loss function and the Adam optimizer (lr = 0.0001). Next, the model was retrained and its performance metrics were calculated on the X-ray images from the train_set. Fine tuning was carried out in the same way for the rest of the models.
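A sketch of this fine-tuning stage is shown below, again assuming the VGG16 backbone. The set of unfrozen layers, the head layers and the learning rate follow the description above; everything else (and the choice of which exact layers count as "the last three") is an illustrative assumption.

```python
# Sketch of the fine-tuning stage: the last three convolutional layers of VGG16
# (block5_conv1..3) are unfrozen, a new classification head is attached, and the
# model is recompiled with a small learning rate.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = True
for layer in base.layers:
    # only the last three convolutional layers remain trainable
    layer.trainable = layer.name in ("block5_conv1", "block5_conv2", "block5_conv3")

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.BatchNormalization(),
    layers.Dropout(0.2),
    layers.Dense(512, activation="relu"),
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),
])

# 'categorical_crossentropy' (as in the text) expects one-hot labels;
# with integer labels use 'sparse_categorical_crossentropy' instead.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```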

2. Description of the data set of chest X-rays

In our investigations we used chest X-ray images of COVID-19 patients, healthy patients, patients with lung opacities, and patients with viral pneumonia, collected from publicly available data [22, 23]. We combined all X-ray images into three classes: COVID-19 (3865 images), Normal (9850 images) and Pneumonia (1440 images), 15155 images in total. In addition, the images were resized from 1024x1024 to 224x224 pixels to reduce the computational cost of training the models. The data set was then divided into a training set (train_set) of 12124 images, on which the models were trained, and a test set (test_set) of 3031 images. Examples of X-ray images are shown in Fig. 1; a data-loading sketch consistent with this description is given after Fig. 1.


Fig. 1. Chest X-ray images from classes: (a) Pneumonia (b) COVID-19
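The sketch below illustrates how such a train/test split with 224x224 resizing can be built in Keras. The directory layout (class subfolders named COVID-19, Normal, Pneumonia), the batch size and the random seed are our assumptions; the public sources [22, 23] ship their own folder structures. Only the target image size and the roughly 80/20 split follow the text.

```python
# Illustrative sketch: building train_set / test_set with an ~80/20 split and
# 224x224 resizing. The folder layout and batch size are assumptions.
import tensorflow as tf

DATA_DIR = "chest_xray_dataset"   # hypothetical root with COVID-19 / Normal / Pneumonia subfolders
IMG_SIZE = (224, 224)

train_set = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="int")

test_set = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="int")

print(train_set.class_names)      # e.g. ['COVID-19', 'Normal', 'Pneumonia']
```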

3. Investigation of the performance of deep models in the problem of classification of chest X-rays

On the set of X-ray images described above, computer experiments were carried out to classify them into three classes; the performance of the deep learning models VGG16, ResNet50, DenseNet121, Xception, InceptionResNetV2 and MobileNetV2 was evaluated, and their accuracy, precision, recall and f1-score metrics were calculated. They are shown in Table 1. As follows from the results presented there, VGG16 and Xception turned out to be the best of all the proposed models. This choice was based on the analysis of the accuracy and recall values for the COVID-19 and Pneumonia classes, i.e., the priority in choosing the best model is its sensitivity and accuracy in detecting patients with COVID-19 and pneumonia. The recall metric indicates how well the model detects what it is supposed to detect: for our classes, it shows what percentage of patients who actually have COVID-19 or pneumonia are detected by the model as such.

recall = TP / (TP + FN),    (1)

where TP is the number of true positive cases for a given class and FN is the number of false negative cases for that class (in particular, for COVID-19 it is the number of patients with COVID-19 predicted by the model as not COVID-infected). Specificity is the proportion of true negatives to the sum of true negatives and false positives; it is calculated as

specificity = TN / (TN + FP),    (2)

where TN is the number of true negative cases for a given class (in particular, for COVID-19, the patients with pneumonia or healthy lungs correctly predicted as not having COVID-19) and FP is the number of false positive cases (in particular, the X-rays from other classes falsely predicted by the model as showing signs of COVID-19). Precision is the proportion of true positive predictions to the total number of positive predictions, and is calculated as

precision = TP / (TP + FP).    (3)

The accuracy is calculated as follows:

accuracy = (TP + TN) / (TP + TN + FP + FN).    (4)

The f1-score is the harmonic mean of precision and recall:

f1-score = 2 · precision · recall / (precision + recall).    (5)
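For reference, metrics (1)-(5) can be computed directly from the true and predicted labels, for example with scikit-learn. The sketch below is illustrative only: the toy label arrays and the use of scikit-learn are our assumptions, not part of the original pipeline.

```python
# Sketch of computing metrics (1)-(5) with scikit-learn for three classes
# (0 = COVID-19, 1 = Normal, 2 = Pneumonia). Toy labels are used for illustration.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support)

y_true = np.array([0, 0, 1, 1, 2, 2, 0, 1])   # toy ground-truth labels
y_pred = np.array([0, 0, 1, 2, 2, 2, 0, 1])   # toy predictions

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=[0, 1, 2], zero_division=0)
print("per-class precision:", precision)
print("per-class recall:   ", recall)
print("per-class f1-score: ", f1)
print("overall accuracy:   ", accuracy_score(y_true, y_pred))

# Per-class specificity, Eq. (2), derived from the confusion matrix.
cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
for k in range(3):
    tp = cm[k, k]
    fn = cm[k].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(f"class {k}: specificity = {tn / (tn + fp):.3f}")
```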

To evaluate the performance of a machine learning model, it is customary to use the confusion matrix. For the Xception and VGG16 models, the confusion matrices are presented in Fig. 2 and Fig. 3. The label 'Actual' denotes the true class of the X-ray image (Normal, Pneumonia, COVID-19); the label 'Predicted' denotes the class predicted for the image. The diagonal cells (blue) represent true positive (TP, %) and true negative (TN, %) values; the percentages in these cells are the accuracy values for the corresponding class. The yellow cells of the matrix contain the percentages of images evaluated by the model as false positives (vertical elements, FP, %) and false negatives (horizontal elements, FN, %). A higher recall value means a higher true positive rate and a lower false negative rate, while a higher precision value means a higher true positive rate and a lower false positive rate. Therefore, both false positive and false negative values should be as low as possible. Since we are interested first of all in the true number of detected COVID-19 and pneumonia patients, we rely primarily on the recall and accuracy metrics when selecting the best model; a sketch of how such a percentage matrix can be produced is given after Fig. 3.

98.21  0.20   0.00
1.53   99.59  1.40
0.26   0.20   98.60

Fig. 2. The confusion matrix for the Xception model (values in %)

99.09  0.66   0.00
0.78   99.14  0.70
0.13   0.20   99.30

Fig. 3. The confusion matrix for the VGG16 model (values in %)
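A percentage confusion matrix of this kind can be produced, for instance, with scikit-learn's ConfusionMatrixDisplay. The sketch below is a hedged illustration: the toy labels and the row-wise ('true') normalisation are assumptions about how such figures can be generated, not the authors' plotting code.

```python
# Illustrative sketch: plotting a normalized (percentage) confusion matrix
# similar to Figs. 2-3. Toy labels and 'true'-row normalization are assumptions.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])   # toy ground-truth labels
y_pred = np.array([0, 0, 1, 1, 1, 1, 2, 2, 0])   # toy predictions

ConfusionMatrixDisplay.from_predictions(
    y_true, y_pred,
    display_labels=["COVID-19", "Normal", "Pneumonia"],
    normalize="true",          # each row sums to 1 (share of the actual class)
    values_format=".2%",
    cmap="Blues")
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.show()
```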

From the Table we can conclude that the VGG16 model outperforms the Xception model in classifying images from the COVID-19, Pneumonia and Normal classes. The classification accuracy for the COVID-19 class is 99.09% for the VGG16 model versus 98.21% for the Xception model, and the overall (macro) accuracy of VGG16 is 99.142%, while for Xception it equals 99.062%. The precision and recall metrics of the Xception model are lower due to the greater error in classifying healthy patients as COVID-19-positive (1.53% for Xception versus 0.78% for VGG16) and as patients with pneumonia (1.41% for Xception versus 0.7% for VGG16). It is also worth noting the 100% accuracy of both models in distinguishing the COVID-19 class from the Pneumonia class. All this allowed us to establish VGG16 as the best model for classifying chest X-rays and detecting COVID-19 and pneumonia from them.

Table 1. Performance metrics of deep learning models for the COVID-19 class

Deep model         | class accuracy, % | precision, % | recall, % | f1-score, % | model accuracy (macro), %
DenseNet121        | 95.93             | 95.373       | 97.542    | 96.729      | 97.691
ResNet50           | 95.77             | 95.771       | 99.612    | 97.654      | 98.482
MobileNetV2        | 90.39             | 90.387       | 99.741    | 94.834      | 96.866
VGG16              | 99.09             | 99.081       | 98.318    | 98.78       | 99.142
Xception           | 98.21             | 98.212       | 99.483    | 98.843      | 99.062
InceptionResNetV2  | 96.13             | 96.135       | 99.741    | 97.905      | 98.251

Using the VGG16 model, predictions of the classes of the X-ray images from the test_set were also obtained. The result is shown in Fig. 4, where the label 'Actual' denotes the true class of the X-ray image (Normal, Pneumonia, COVID-19) and the label on each image denotes its predicted class. In our prediction, 0.78% of healthy patients (class Normal) and 0.13% of patients with pneumonia (class Pneumonia) were mistakenly classified as patients with COVID-19; 0.7% of healthy patients were mistakenly recognized as patients with pneumonia, 0.66% of healthy patients were mistakenly recognized as patients with COVID-19, and 0.2% of images from the Normal class were recognized as images from the Pneumonia class.

Fig. 4. Results of prediction of the class of chest X-ray images from the test_set (each tile is labelled with its actual class)


Fig. 5. The heatmaps of predicted chest X-ray images from the test_set (each tile is labelled with its actual and predicted class)

Additionally, gradient-weighted class activation mapping (Grad-CAM) [24] was used to represent the decision area as a heatmap. Figure 5 illustrates the heatmaps for several test cases, confirming that the proposed method extracted the correct features for distinguishing the COVID-19, Pneumonia and Normal classes and that the model concentrates mostly on the lung area. Radiologists might use these heatmaps to evaluate the chest area more accurately.
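The Grad-CAM computation can be sketched in Keras as follows. This is a generic implementation of the recipe in [24], not the authors' code; it assumes a functionally connected classifier in which the last convolutional layer is reachable by name (e.g. 'block5_conv3' for a VGG16 backbone). For a Sequential model with a nested backbone, the classifier should be rebuilt with the functional API so that the convolutional layers belong to the same graph.

```python
# Generic Grad-CAM sketch for a Keras classifier; assumes the last convolutional
# layer can be retrieved by name (e.g. 'block5_conv3' for a VGG16 backbone).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name="block5_conv3", class_index=None):
    """Return a heatmap in [0, 1] for one preprocessed image of shape (H, W, 3)."""
    grad_model = tf.keras.models.Model(
        inputs=model.inputs,
        outputs=[model.get_layer(last_conv_layer_name).output, model.output])

    img_batch = tf.expand_dims(tf.cast(image, tf.float32), axis=0)
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_batch)
        if class_index is None:
            class_index = tf.argmax(preds[0])        # explain the predicted class
        class_score = preds[:, class_index]

    grads = tape.gradient(class_score, conv_out)      # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))   # channel-wise importance
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    cam = tf.nn.relu(cam)                             # keep positive evidence only
    cam = cam / (tf.reduce_max(cam) + tf.keras.backend.epsilon())
    return cam.numpy()                                # upscale and overlay on the X-ray for display

# Hypothetical usage: heatmap = grad_cam(vgg16_classifier, x_ray_image)
```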

4. Results discussion

In this paper we proposed and investigated several deep models for the automated detection of COVID-19 and pneumonia clinical cases based on the analysis of chest X-ray images. The deep convolutional neural network models Xception, MobileNetV2, DenseNet121, ResNet50, InceptionResNetV2 and VGG16 were developed, pre-trained on ImageNet and then fine-tuned on a set of chest X-rays [22, 23]. As noted above, many earlier studies restricted themselves to classifying X-ray images into two classes, COVID-19 and Not COVID-19 [10, 11, 15, 17, 18]. Despite the high performance of these models, they do not consider the class of images containing features of pneumonia: such images are either excluded from the calculations or merged into other image classes. Although at an early stage the clinical symptoms of pneumonia and COVID-19 are very similar, the course of these diseases and the methods of their treatment are different, so it is fundamentally important to differentiate them already at the early stage of detection. In our study, the problem of classification into three classes was solved: COVID-19, Normal and Pneumonia.

The effectiveness of the studied models was evaluated using the accuracy, precision, recall, f1-score and AUC metrics. Based on their analysis, the VGG16 model was found to be the best of all the models considered in our paper. It classifies X-ray images from the COVID-19 class with an accuracy of 99.09%, which means that the model performs very well on images belonging to this class. In addition, the model rarely misclassifies COVID-positive patients as healthy or as having pneumonia; this is confirmed by the high recall value of 98.318%. Moreover, the fine-tuned VGG16 model classifies X-ray images from the Pneumonia class with an accuracy of 99.14% and from the Normal class with an accuracy of 99.30%. The model assigns healthy patients to the Pneumonia class with an error of 0.7% and patients with pneumonia to the COVID-19 class with an error of 0.13%, which confirms its high performance in pneumonia detection.

We also performed fine-tuning of the deep classification models to improve their performance. Similar studies have also used a variety of deep learning models pre-trained on large datasets, as a rule on ImageNet [20]. In [11], the ResNet50 model resulted in an average accuracy of 92.6%, whilst end-to-end training of the developed CNN model produced an average accuracy of 91.6%. In [12], accuracy of 89.33% and recall of 88.17% were obtained. Muhammad Talha Nafees and coauthors [13] developed a CNN model trained on three classes with an average accuracy of 92.3%. H. Nasiri and S. Hasani [14] employed DenseNet169 to extract features from X-ray images and used XGBoost for classification, obtaining 98.24% and 89.70% accuracy in binary and three-class classification, respectively. In [15], the authors applied a transfer learning method to COVID-19 image recognition and achieved accuracy of 92.32%, precision of 95.69% and recall of 95.62% with a pre-trained ResNet50 model. In [16], binary classifications over four classes (COVID-19, normal (healthy), viral pneumonia and bacterial pneumonia) were implemented using 5-fold cross-validation; the results show that the pre-trained ResNet50 model provides the highest classification accuracy of 99.7% among the five models used. Hamid Nasiri and Seyyed Ali Alavi [17] proposed a deep model with ANOVA feature selection that achieved accuracy of 92% and recall of 88.46%. The authors of [18, 19] deployed several deep learning architectures, such as ResNet, Inception, GoogLeNet etc., as binary classifiers for the detection of COVID-19; the best model was ResNet50 with 98% accuracy.

Conclusion

The proposed models significantly exceed or are comparable to the results in the papers mentioned above, which confirms the effectiveness of the proposed approach of pre-processing the X-ray images and then fine-tuning deep learning models to solve the multiclass classification task with class imbalance. For future research, a number of shortcomings need to be addressed. In particular, a more detailed analysis requires a larger volume of patient data, especially data associated with COVID-19. In addition, effective deep learning models such as VGG16 are pre-trained on ImageNet images, which are not medical, so synthetic data generation methods appear to be the most promising direction for this task.

In future work, we intend to develop a mobile application for wearable devices and mobile X-ray units, with the aim of detecting COVID-19 and pneumonia at the early stages of the disease. We also plan to extend our work to the segmentation of COVID-19 chest X-rays and CT scans to give radiologists more information.

References

1. World Health Organization. (n.d.) [Online] Available from: https://www.who.int/emergencies/diseases/novel-coronavirus-2019 (Accessed: 25th October 2021).

2. Sohrabi, C. et al. (2020) World Health Organization declares global emergency: A review of the 2019 novel coronavirus (COVID-19). International Journal of Surgery. 76. pp. 71-76. DOI: 10.1016/j.ijsu.2020.02.034

3. Cleverley, J., Piper, J. & Jones, M.M. (2020) The role of chest radiography in confirming covid-19 pneumonia. BMJ. 370. DOI: 10.1136/bmj.m2426

4. Kim, M., Yan, C., Yang, D. & Wu, G. (2020) Deep learning in biomedical image analysis. Biomedical Information Technology. pp. 239-263.

5. Mei, X. & Lee, H.C. (2020) Artificial intelligence-enabled rapid diagnosis of patients with COVID-19. Nat. Med. 26. pp. 1224-1228. DOI: 10.1038/s41591-020-0931-3

6. Kong, W. & Agarwal, P.P. (2020) Chest imaging appearance of COVID-19 infection. Radiol. Cardiothorac. Imaging. 2(1). p. e200028.

7. Simonyan, K. & Zisserman, A. (2014) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 [cs.CV].

8. He, K., Zhang, X., Ren, S. & Sun, J. (2016) Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 770-778. DOI: 10.1109/CVPR.2016.90

9. Szegedy, C. et al. (2015) Rethinking the Inception Architecture for Computer Vision. arXiv:1512.00567 [cs.CV].

10. Chowdhury, M.E., Rahman, T., Khandakar, A. et al. (2020) Can AI help in screening viral and covid-19 pneumonia? IEEE Access. 8. pp. 132665-132676.

11. Ismael, A.M. & Sengur, A. (2021) Deep learning approaches for COVID-19 detection based on chest x-ray images. Expert Systems with Applications. 164. p. 114054.

12. Ozturk, T., Talo, M., Yildirim, E.A., Baloglu, U.B., Yildirim, O. & Acharya, U.R. (2020) Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 121. p. 103792.

13. Nafees, M.T., Rizwan, M., Khan, M.I. & Farhan, M. (2021) A Novel Convolutional Neural Network for COVID-19 detection and classification using Chest X-Ray images. medRxiv preprint. DOI: 10.1101/2021.08.11.21261946

14. Nasiri, H. & Hasani, S. (2021) Automated detection of COVID-19 cases from chest X-ray images using deep neural network and XGBoost. arXiv:2109.02428.

15. Katsamenis, I., Protopapadakis, E. & Voulodimos, A. (2020) Transfer learning for COVID-19 pneumonia detection and classification in chest X-ray images. PCI 2020: 24th Pan-Hellenic Conference on Informatics. medRxiv preprint. DOI: 10.1101/2020.12.14.20248158

16. Narin, A., Kaya, C. & Pamuk, Z. (2021) Automatic Detection of Coronavirus Disease (COVID-19) Using X-ray images and deep convolutional neural networks. Pattern Anal. Appl. pp. 1-14. DOI: 10.1007/s10044-021-00984-y

17. Nasiri, H. & Alavi, S.A. (2021) Novel framework based on deep learning and ANOVA feature selection method for diagnosis of COVID-19 cases from chest X-ray images. medRxiv preprint. DOI: 10.1101/2021.10.10.21264809

18. Shenoy, V. & Malik, S. (2021) CovXR: Automated Detection of COVID-19 Pneumonia in Chest X-Rays through Machine Learning. arXiv:2110.06398v1 [eess.IV].

19. Ilyas, M., Rehman, H. & Nait-Ali, A. (2020) Detection of Covid-19 from chest X-ray images using artificial intelligence: an early review. arXiv:2004.05436v1 [eess.IV].

20. Deng, J., Dong, W., Socher, R., Li, L., Kai, L. & Fei-Fei, L. (2009) Imagenet: A large-scale hierarchical image database. IEEE Conference on Computer Vision and Pattern Recognition. pp. 248-255.

21. Chollet, F. (2017) Deep Learning with Python. Manning.

22. Patel, P. (2020) Chest X-ray (COVID-19 & Pneumonia). [Online] Available from: https://www.kaggle.com/prashant268/chest-xray-covid19-pneumonia (Accessed: 25th October 2021).

23. Rahman, T. et al. (2021) Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Computers in Biology and Medicine. 132. p. 104319. DOI: 10.1016/j.compbiomed.2021.104319

24. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D. & Batra, D. (2017) Grad-cam: Visual explanations from deep networks via gradient-based localization. Proceedings of the IEEE international conference on computer vision. pp. 618-626.

Information about the authors:

Shchetinin Evgenii Yurievich - Doctor of Physical and Mathematical Sciences, Professor of the Department of Mathematics, Financial University under the Government of the Russian Federation (Moscow, Russian Federation). E-mail: riviera-molto@mail.ru

Sevastianov Leonid Antonovich - Doctor of Physical and Mathematical Sciences, Professor of the Department of Applied Informatics and Probability Theory, Peoples Friendship University of Russia (Moscow, Russian Federation). E-mail: sevast@sci.pfu.edu.ru

Contribution of the authors: the authors contributed equally to this article. The authors declare no conflicts of interests.


Received 21.10.2021; accepted for publication 28.02.2022
