
Medical Images Segmentation Operations

S.A. Musatian <sabrina.musatian@yandex.ru> A.V. Lomakin <alexander.lomakin@protonmail.com> S.Yu. Sartasov <Stanislav.Sartasov@spbu.ru> L.K. Popyvanov <lev.popyvanov@gmail.com> I.B. Monakhov <i.monakhov1994@gmail.com> A.S. Chizhova <Angelina.Chizhova@lanit-tercom.com> Saint Petersburg State University, 7/9, University Embankment, Saint Petersburg, 199034

Abstract. Extracting valuable medical information from head MRI and CT series is one of the most important and challenging tasks in medical image analysis. Due to the lack of automation, many of these tasks require meticulous preprocessing by medical experts. Some of them admit semi-automatic solutions, but those still depend on the expert's competence. The main goal of our research project is to create an instrument that maximizes the degree of series processing automation. Our project consists of two parts: a set of algorithms for medical image processing and tools for interpreting their results. In this paper we present an overview of the best existing approaches in this field, as well as a description of our own algorithms, based on convolutional neural networks, developed for related tissue segmentation problems such as eye bony orbit and brain tumor segmentation. We investigate the performance of different neural network models for both tasks, as well as neural ensembles applied to brain tumor segmentation. We also introduce our software, "MISO Tool", created specifically for this type of problem. It supports tissue segmentation using pre-trained neural networks, DICOM pixel data manipulation, and 3D reconstruction of segmented areas.

Keywords: deep neural networks; convolutional neural networks; brain tumors; bony orbit; medical images; segmentation

DOI: 10.15514/ISPRAS-2018-30(4)-12

For citation: Musatian S.A., Lomakin A.V., Sartasov S. Yu., Popyvanov L.K., Monakhov I.B., Chizhova A.S. Medical Images Segmentation Operations. Trudy ISP RAN/Proc. ISP RAS, vol. 30, issue 4, 2018. pp. 183-194. DOI: 10.15514/ISPRAS-2018-30(4)-12

1. Introduction

Modern radiological diagnostics is still developing, and different organs require entirely different settings and methods: X-ray, MRI, CT and ultrasound are supplemented with invasive contrast methods. Only the doctor can see everything necessary for a correct diagnosis and subsequent treatment. However, at the heart of all these methods lie common tasks: the most accurate visualization of the selected zone and obtaining as much data as possible from the results of the examination. In 3D methods (CT and MRI) these tasks are essentially the same, despite differences in both physical principles and additional settings. Since the goal of our work is to create a tool that visualizes isolated structures as accurately as possible from raw data obtained by MRI and CT procedures, this complex work can be decomposed into separate logical components. To isolate complex structures, we formulated the problem of segmenting tumor processes in MRI images: MRI better visualizes soft tissue and allows one to run various sequences, change the basic settings of the method over a wide range, and use contrast agents. To determine volumes and edges of structures, we singled out the problem of determining the volume of bony orbits on CT: in this modality bone structures have high contrast, the distance between slices is very small, and the method itself is widespread and fast, which makes it possible to study a large volume of data.

From the point of view of medical informatics, these problems are not completely dissimilar and can be solved in a unified manner. Moreover, creating a single instrument that solves all of these challenging tasks autonomously will not only save doctors' time but also reduce the number of errors. To the best of our knowledge, no instrument for automatic segmentation of different body tissues has been introduced so far. We came to the conclusion that while segmentation tasks on different body parts may seem different, they can all be derived from a core solution based on deep neural networks.

In this work, we explored state-of-the-art solutions based on deep neural networks for brain tumor segmentation and created an ensemble to see whether their performance could be improved and applied not only to the brain segmentation task but also to complicated head bony structures in general. We use the results of this research as the first step towards a convenient and powerful instrument for all medical specialties.

2. Overview

Interest in medical image segmentation has grown over the last decade, and many different approaches have been explored. However, only a few studies have evolved into complete, useful tools for medicine. Commonly used software that allows semi-automatic segmentation includes Brainlab IPlan (commercial) and ITK-SNAP (open source). The main feature of IPlan, already used in several studies [1, 2], is atlas-based segmentation. An atlas is a set of shape variations of the ROIs (regions of interest) described and sketched out by experts; due to the complexity of human body structure, the accuracy of a delineated atlas poses many problems. ITK-SNAP performs segmentation via the active contour evolution method: a smooth blow-out of pre-placed bubbles into the desired region of interest [3]. Although many tasks have been solved by these instruments, specialists constantly face problems still awaiting improvement, and segmentation remains manual or semi-automatic.

For the brain tumor segmentation problem, many different approaches have been explored and evaluated. These algorithms fall mainly into two classes: methods that require training on a dataset in advance and those that do not. Early works in this area treated brain tumor segmentation as an anomaly detection problem on the image; [4] and [5] are representative. The main advantage of these works is that the presented solutions do not need to be trained beforehand; however, that makes it harder to improve detection quality, especially on smaller tumors. Another class of approaches is based on supervised learning methods such as random forests [6] or support vector machines [7]. These models can learn a powerful set of features and work quite well in the most common cases, but due to the high variability of brain tumors it is hard to select the correct feature set and build a good model. As a result, recent approaches to segmentation turn to deep neural networks: a powerful instrument capable of extracting new features during training, and hence able to outperform the pre-defined feature sets of classical supervised learning methods. The results of these algorithms can also be used for different kinds of medical images. We are developing our own tool, Medical Images Segmentation Operations (MISO), which uses neural networks as a back-end for solving various segmentation tasks in medicine. In the next sections we separately review the application of neural networks to brain tumor and bony orbit segmentation as they were trained and used in MISO.

3. Brain Tumor Segmentation

For that task we chose to review two CNNs (convolutional neural networks) with different architectures which have proven to be among the best in this field: DeepMedic [8], an 11-layer deep, multi-scale 3D CNN with a fully connected conditional random field, and WNet [9], a fully convolutional neural network with anisotropic and dilated convolutions.

3.1 Data

For the experiments we used the BraTS 2017 dataset [10, 11], which includes images from 285 patients with glioblastoma (GBM) and lower-grade glioma (LGG). For acquiring these data, each patient (fig. 1) was scanned with native T1, post-contrast T1-weighted (T1Gd), T2-weighted (T2), and T2 Fluid-Attenuated Inversion Recovery (FLAIR) sequences. Ground-truth segmentation was provided for all patients.



Fig. 1. Original data from the BraTS 2017 dataset: a) T1Gd; b) T1; c) FLAIR; d) T2; e) ground truth

3.2 Implementation Details

For WNet we used the configuration described in the original paper and the BraTS 2017 dataset for training. For DeepMedic we trained two versions of the network on different inputs and made some changes to its original architecture. The first version was trained only on T1 and T2 images. The reason for that change is that these are the most common MRI sequences, so a network trained only on this data will be available to more hospitals in the future. Also, instead of the PReLU non-linearity used in the original model, we use SELU [12], which improves performance and reduces training time. For the second version of DeepMedic we also used SELU, but this network was trained on T1 images only; we wanted to explore how the network copes with a single source. For all three networks we separated the initial dataset into three chunks: training (about 80%), validation (10%) and test (10%). The performance of these networks on test data is shown in Table 1. In the cited studies, the authors aimed not only to detect the tumor but also to segment it into three categories: whole tumor, tumor core and enhancing tumor core. In our work, however, we are only interested in whole tumor detection.
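For reference, SELU is a fixed-scale exponential linear unit; a minimal NumPy sketch of the activation, using the standard constants from [12]:

```python
import numpy as np

# SELU constants from Klambauer et al. (self-normalizing neural networks)
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    """Scaled exponential linear unit applied element-wise."""
    x = np.asarray(x, dtype=float)
    # positive inputs are scaled linearly; negative inputs saturate at -SCALE*ALPHA
    return SCALE * np.where(x > 0, x, ALPHA * np.expm1(x))
```

Unlike PReLU, SELU has no trainable slope parameter; its fixed constants push layer activations towards zero mean and unit variance, which is what shortens training.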

Table 1. Individual performance of observed CNNs

Network | Dice coefficient
WNet | 0.9148
DeepMedic (inputs: T1+T2) | 0.8317
DeepMedic (inputs: T1) | 0.6725

3.3 Detecting the Percentage of False Negative Segments

The original works analyse the quality of CNN performance using the Dice and Hausdorff measurements, which are good for segmentation problems in general but hide necessary details about misclassifications. For that reason, we examined the outputs of the considered networks to determine the proportions of false positive versus false negative voxels. Our main goal was to examine whether these methods are more prone to predicting false positives than false negatives.

Since the decisive opinion during diagnosis and treatment always rests with the doctor, our main goal is to indicate where there may be pathological tissue and draw the surgeon's attention to that area. Our system aims to find all suspicious areas and send them to a medical specialist for re-evaluation. Hence, the first quality of this system to be optimized is not the false positive rate but the false negative rate, because a tumor region that goes unnoticed may not receive essential medical care and can become a source of further proliferation of tumor cells. The results of this experiment are shown in Table 2.
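The quantities reported in Tables 1 and 2 can be computed directly from binary masks; a minimal NumPy sketch (function names are ours):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between two binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def error_ratios(pred, gt):
    """False positive and false negative counts, each normalized
    by the ground-truth tumor size, as in Table 2."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    fp = np.logical_and(pred, ~gt).sum()   # predicted tumor, actually healthy
    fn = np.logical_and(~pred, gt).sum()   # predicted healthy, actually tumor
    return fp / gt.sum(), fn / gt.sum()
```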

Table 2. False positives versus false negatives in the final segmentation, normalized by ground-truth size

Network | mean(false positive / ground truth) | mean(false negative / ground truth)
WNet | 0.0863 | 0.0830
DeepMedic (inputs: T1+T2) | 0.2330 | 0.1170
DeepMedic (inputs: T1) | 0.4690 | 0.2455

3.4 Neural Network Ensembles

We wanted to determine whether the overall performance of these three networks can be improved when they are used together, so we formed a neural network ensemble [13] out of them. We implemented the following voting scheme: for each voxel, we obtain each network's individual prediction from its pre-trained model, and we classify the voxel as tumor if and only if the majority of networks classify it as tumor; otherwise it is considered healthy tissue. The results of this experiment are shown in Table 3.
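The voting scheme described above reduces to a per-voxel majority count; a minimal NumPy sketch (the mask stacking layout is our assumption):

```python
import numpy as np

def majority_vote(masks):
    """Combine binary segmentation masks: a voxel is tumor iff a strict
    majority of the networks mark it as tumor."""
    stack = np.stack([np.asarray(m, bool) for m in masks])
    votes = stack.sum(axis=0)          # how many networks said "tumor"
    return votes * 2 > len(masks)      # strict majority
```

Note that with only two networks a strict majority requires both to agree, so a two-network ensemble behaves like the intersection of its members' predictions.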

Table 3. Performance of neural network ensembles formed by combining the networks in different ways

CNN 1 | CNN 2 | CNN 3 | Dice coefficient
WNet | DeepMedic (inputs: T1+T2) | - | 0.8861
DeepMedic (inputs: T1+T2) | DeepMedic (inputs: T1) | - | 0.7657
DeepMedic (inputs: T1) | WNet | - | 0.7941
DeepMedic (inputs: T1+T2) | DeepMedic (inputs: T1) | WNet | 0.8823

4. Bony Orbit Segmentation

4.1 Methods

Our approach consists of two steps. First, image classification is performed, dividing the initial dataset into two groups: «contains orbit» and «does not contain orbit». The next step is to segment the orbit in the images marked by the classifier at the previous stage. In this paper the first step is described in detail, whereas the second step is introduced only briefly, as it is the subject of further research.

4.2 Data Collection

Raw CT scans were provided by the Faculty of Medicine of Saint Petersburg State University. Using a Toshiba scanner and helical image acquisition, 5 series were made and anonymized. The initial image dimensions were 512×512, with a short (2-byte) integer representing radiation intensity under the Grayscale Standard Display Function. Orbits occupy less than a quarter of the image, so we reduced the original size from 512×512 to 256×256 to decrease computational complexity (fig. 2 b). Slices with orbits were labeled, and some of them were manually segmented by an expert (fig. 2 c). The total amount of data: 601 sinus and 80 head CT images were marked as «contains orbit» and 1414 as «does not contain orbit»; 150 images were segmented.


Fig. 2. Data for bony orbit segmentation: a) initial image; b) cropped image; c) expert segmentation; d) extracted mask (label for the cropped image)

4.3 Model Choosing

To achieve the best classification performance of the first CNN, important parameters such as the number of layers and the convolutional kernel size had to be chosen, so several kernel sizes and layer counts were evaluated for classification accuracy. The quantitative assessments are shown in Table 4. As a result, the model used for training consisted of eight layers: four convolutional and four fully connected. The output of the last fully connected layer is fed to a sigmoid function, the standard neural network classification layer [14]. The initial images were cropped and compressed to reduce training time. Hence, the network accepts grayscale images of dimension 128×128 as input; the first layer filters the input with 32 kernels of size 5×5.

As can be seen from the experiments, the rectified linear unit (ReLU) [15] non-linearity applied to the outputs of all convolutional layers gives the best result compared with other activation functions. The (n+1)-th convolutional layer takes the output of the n-th layer, processed by the ReLU non-linearity and a max pooling layer respectively, and processes it with F(n+1) filters. Filter configurations are shown in Table 4. All fully connected layers have an equal number of neurons, i.e. 256. For the second CNN the U-Net architecture [16] was chosen, as it has already proven its suitability for segmentation in general; several layer sequences were evaluated to find the best-fitting model. To reduce bias and increase generality, two dropout layers with a dropout rate of 0.2 were added.
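To make the layer arithmetic concrete, a small sketch of how the feature-map side length shrinks through the convolutional stages; we assume "same"-padded convolutions and 2×2 max pooling after each of the four convolutional layers (a common configuration, not stated explicitly in the text):

```python
def spatial_dims(input_size=128, n_stages=4, pool=2):
    """Trace the feature-map side length through n conv+pool stages.
    'Same'-padded convolutions keep the size; each 2x2 pooling halves it."""
    sizes = [input_size]
    for _ in range(n_stages):
        sizes.append(sizes[-1] // pool)
    return sizes

# 128 -> 64 -> 32 -> 16 -> 8 before flattening into the fully connected layers
```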

Table 4. Quantitative assessments of different CNN configurations

Neurons in each FCL* | 1st CVL* kernel | Filter model | val. acc.*
3200 | 11 | 32-64-128-128 | 0.725
256 | 11 | 32-64-128-128 | 0.9964
3200 | 7 | 32-64-128-128 | 0.7821
512 | 7 | 32-64-128-128 | 0.9782
512 | 7 | 64-64-128-256 | 0.9295
512 | 11 | 32-64-128-128 | 0.9964
256 | 7 | 32-64-128-128 | 0.8214

*FCL - fully connected layer, CVL - convolutional layer, val. acc. - accuracy on the validation dataset

4.4 Training Details

The classification CNN was implemented, trained and evaluated in Python 3.6 on an NVIDIA GTX 740M GPU with CUDA Toolkit 9.0 and cuDNN 7.0.5. Keras 2.1.* (the version was continuously updated during development) was chosen as the neural network framework, running on top of TensorFlow 1.5.*. We trained and evaluated CNNs on a range of different filter models (the number of filters in each convolutional layer), kernel sizes and neuron counts in the fully connected layers. Experiments with a dropout layer [17] were also performed.

4.5 Output Image Visualization

After segmentation has been performed, the series of marked images is converted to a voxel grid using the initial DICOM metadata, and a 3D model is then built with the marching cubes algorithm by means of the MISO Tool and the Visualization Toolkit library. The result is presented in fig. 3.
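The mask-to-voxel-grid step also yields a straightforward volume estimate; a minimal sketch assuming uniform pixel spacing and slice thickness taken from the DICOM headers (function and parameter names are ours):

```python
import numpy as np

def masks_to_volume(mask_slices, pixel_spacing, slice_thickness):
    """Stack per-slice binary masks into a voxel grid and estimate the
    segmented volume from the DICOM geometry (spacing in mm, volume in mm^3)."""
    grid = np.stack([np.asarray(m, bool) for m in mask_slices])
    voxel_volume = pixel_spacing[0] * pixel_spacing[1] * slice_thickness
    return grid, grid.sum() * voxel_volume
```

The boolean grid is what a marching-cubes implementation (e.g. the one in VTK) consumes to produce the rendered surface shown in fig. 3.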


Fig. 3. Rendered bony eye orbit using marching cubes algorithm

4.6 Experimental Results

4.6.1 Image Cropping

Since the main purpose of our work is to create an instrument that can be run on our servers by multiple clients, computational complexity must be decreased as much as possible in order to deliver the best performance to the customers and reduce waiting time. To achieve that goal, we performed experiments with cropped and resized images. When the image was reduced below 128×128, we were unable to achieve the required accuracy. The best result under the condition "accuracy > 0.95" was achieved by cutting a 256×256 region out of the image and subsequently compressing it to 128×128. Because head positions in CT scans are highly similar, it was not necessary to move the cropping window.
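The crop-then-compress preprocessing can be sketched in a few lines of NumPy; we assume a centered crop and simple block averaging for the downsampling (the text specifies neither, so both are illustrative choices):

```python
import numpy as np

def crop_and_downsample(img, crop=256, out=128):
    """Cut a centered crop x crop window from img, then shrink it to
    out x out by averaging non-overlapping blocks."""
    h, w = img.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    window = img[top:top + crop, left:left + crop].astype(float)
    f = crop // out  # block size, e.g. 2 for 256 -> 128
    return window.reshape(out, f, out, f).mean(axis=(1, 3))
```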

Fig. 4. Different cropping window positions and sizes were examined

4.6.2 Performance

For the first CNN we evaluated kernels from 3 to 11 pixels, different model configurations, activation functions and epoch counts to determine which of these properties help the CNN reach the highest level of performance. Data was split between training and validation in a 4:1 proportion. Our model performs best after 115 training epochs, reaching 99% validation accuracy, and then stabilizes. Dropout layers with a dropout rate below 0.4 do not impact the accuracy significantly, while rates above 0.4 drop the accuracy to ~85%, so it was decided to exclude dropout layers from the final model. It is worth noting that models with 512 neurons in each fully connected layer showed approximately the same result as the model with 256 neurons but took up to 1.4 times more computation time, so 256 was chosen as less resource-consuming.
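For reference, the dropout experiments above correspond to the standard inverted-dropout formulation [17]; a minimal NumPy sketch of the training-time operation:

```python
import numpy as np

def inverted_dropout(x, rate, rng):
    """Zero each activation with probability `rate` and rescale the survivors
    by 1/(1-rate) so the expected activation is unchanged at test time."""
    keep = rng.random(x.shape) >= rate
    return x * keep / (1.0 - rate)
```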

5. Conclusion

In this paper, the first step towards a medical segmentation system was introduced. Based on existing CNN solutions, we demonstrated that they can be readily adapted for segmentation tasks on different medical images. This work has also shown that the resulting segmentations can be used for building 3D models and estimating volumes. Based on the obtained results, the target tool was developed in C# 7.0 on .NET 4.7. As development is still at a very early stage, service hosting has not been implemented yet, although it is considered the main option for further development; for now the MISO (Medical Images Segmentation Operations) tool has been prototyped as a classic desktop application with CNN result visualization capabilities (fig. 5).

Fig. 5. MISO tool interface

References

[1]. Wagner M.E., Gellrich N.C., Friese K.I. et al. Model-based segmentation in orbital volume measurement with cone beam computed tomography and evaluation against current concepts. International Journal of Computer Assisted Radiology and Surgery, vol. 11, issue 1, 2016, pp. 1-9

[2]. Jean-François D., Andreas B. Atlas-based automatic segmentation of head and neck organs at risk and nodal target volumes: a clinical validation. Radiation Oncology, 2013, 8:154

[3]. Yushkevich P.A., Piven J., Hazlett H.C. et al. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. NeuroImage, vol. 31, issue 3, 2006, pp. 1116-1128

[4]. Doyle, S., Vasseur, F., Dojat, M., Forbes, F. Fully Automatic Brain Tumor Segmentation from Multiple MR Sequences using Hidden Markov Fields and Variational EM. In Procs. of the NCI-MICCAI BRATS, 2013, pp. 18-22

[5]. Cardoso, M.J., Sudre, C.H., Modat, M., Ourselin, S. Template-based multimodal joint generative model of brain data. Lecture Notes in Computer Science, vol. 9123, 2015, pp. 17-29

[6]. H. N. Bharath, S. Colleman, D. M. Sima, S. Van Huffel. Tumor Segmentation from Multimodal MRI Using Random Forest with Superpixel and Tensor Based Feature Extraction. Lecture Notes in Computer Science, vol. 10670, 2018, pp. 463-473.

[7]. Chi-Hoon Lee, Mark Schmidt, Albert Murtha, Aalo Bistritz, Jörg Sander, Russell Greiner. Segmenting brain tumors with conditional random fields and support vector machines. Lecture Notes in Computer Science, vol. 3765, 2005, pp. 469-478

[8]. Kamnitsas K., Ledig C., Newcombe V.F.J., Simpson J.P., Kane A.D., Menon D.K., Rueckert D., Glocker B. Efficient multi-scale 3DCNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis, vol. 36, 2017, pp. 61-78.

[9]. G. Wang, W. Li, S. Ourselin, T. Vercauteren. Automatic brain tumor segmentation using cascaded anisotropic convolutional neural networks. Lecture Notes in Computer Science, vol. 10670, 2018, pp. 178-190

[10]. Menze B.H., Jakab A., Bauer S. et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging, vol. 34, issue 10, 2015, pp. 1993-2024

[11]. Spyridon Bakas, Hamed Akbari, Aristeidis Sotiras et al. Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Scientific Data, vol. 4, 2017, Article number: 170117


[12]. Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter. Self-Normalizing Neural Networks. Advances in Neural Information Processing Systems, vol. 30, 2017

[13]. L.K. Hansen and P Salamon. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, issue 10, 1990, pp. 993- 1001

[14]. J. van Doorn. Analysis of deep convolutional neural network architectures. Available at: https://pdfs.semanticscholar.org/6831/bb247c853b433d7b2b9d47780dc8d84e4762.pdf, accessed: 13.06.2018

[15]. Hahnloser R.H., Sarpeshkar R., Mahowald M.A., Douglas R.J., Seung H.S. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, vol. 405, 2000, pp. 947-951

[16]. O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. Lecture Notes in Computer Science, vol. 9351, 2015, pp. 234-241

[17]. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, vol. 15, issue 1, 2014, pp. 1929-1958
