© Group of authors, 2024
UDC 617.51-007.2-053.1
DOI - https://doi.org/10.14300/mnnc.2024.19034
ISSN - 2073-8137
USING DEEP CONVOLUTIONAL NEURAL NETWORKS FOR THREE-DIMENSIONAL CEPHALOMETRIC ANALYSIS
A. A. Muraev 1, N. Yu. Oborotistov 2, M. E. Mokrenko 1, T. V. Shiryaeva 2, O. A. Aleshina 3, M. V. Ershov 4, P. N. Emel'yanov 4, L. R. Agarleva 4, A. A. Dolgalev 5, M. E. Zorych 6
1 The Peoples' Friendship University of Russia (RUDN University), Moscow, Russian Federation
2 Russian University of Medicine, Moscow, Russian Federation
3 Dental clinic «NizhStomPlus», Nizhniy Novgorod, Russian Federation
4 UBIC Technologies, Moscow, Russian Federation
5 Stavropol State Medical University, Russian Federation
6 Dental clinic «Ecomedservice», Minsk, Republic of Belarus
The study developed a new convolutional neural network (CNN) model for recognizing and placing cephalometric points on cone-beam computed tomography (CBCT) slices for subsequent 3D cephalometric analysis, and evaluated its accuracy. DICOM files of 192 cone beam tomograms were used. Each set of files was imported into the ViSurgery software (Skolkovo, Russia). Three-dimensional models of each patient's soft tissues, bones, and teeth were generated, and 26 points were placed on the facial surface (soft tissue points), 38 on the skull surface (bone points), and 10 dental cephalometric points per model. The positions of the points were corrected on planar CT slices in three planes. This study demonstrated the high efficiency of the image segmentation approach for training a CNN to identify cephalometric points on CBCT. The proposed method, integrated into specialized software, has high potential for reducing the labor intensity of the workflow.
Keywords: cone-beam computed tomography, cephalometric point landmarking, three-dimensional cephalometrics, convolutional neural networks, automatic identification, computer-assisted diagnostics
For citation: Muraev A. A., Oborotistov N. Yu., Mokrenko M. E., Shiryaeva T. V., Aleshina O. A., Ershov M. V., Emel'yanov P. N., Agarleva L. R., Dolgalev A. A., Zorych M. E. Using deep convolutional neural networks for three-dimensional cephalometric analysis. Medical News of North Caucasus. 2024;19(2):146-151. DOI - https://doi.org/10.14300/mnnc.2024.19034
BCE - binary cross-entropy
CBCT - cone beam computed tomography
CNN - convolutional neural network
CT - computed tomography
DICOM - Digital Imaging and Communications in Medicine
MAE - mean absolute error
SD - standard deviation
VAE - variational autoencoder
XML - extensible markup language
Cephalometric analysis of the soft tissues and bones of the skull is widely used in orthodontics and maxillofacial surgery for the diagnosis and treatment planning of dentoalveolar pathology. The cephalometric analysis of lateral teleradiographs, most often used in clinical practice, may differ significantly from the results of 3D cephalometry in the same patient. Cone beam computed tomography (CBCT) has been used in clinical practice for over 20 years. A study [1] comparing measurements against those made directly on dry skulls showed that cephalometric measurements from computed tomography (CT) data are 4-5 times more accurate than measurements obtained from lateral teleradiography. For asymmetric skull deformities, this discrepancy is even more pronounced [2]. Three-dimensional cephalometry is an excellent alternative to the classical method, especially for diagnosing complex and asymmetric pathology [3-5].
Due to the accurate visualization of soft tissue and bone structures, CBCT and high-performance software can improve the quality of cephalometric analysis and allow us to investigate the correlation between changes in the skull bones and soft tissues [6, 7].
At the same time, 3D analysis is challenging to apply and requires highly skilled clinicians. Finding and marking cephalometric landmarks on DICOM slices is time-consuming, and the amount of information obtained in 3D analysis is at least twice that of teleradiograph analysis [1, 3].
Convolutional neural networks (CNN) have become a versatile technology in computer vision. Compared to other solutions, they offer greater accuracy and stability; however, CNNs require extensive training samples. Our previous studies showed that a CNN training algorithm can achieve good results in identifying cephalometric landmarks on teleradiographs in frontal and lateral projections [8, 9].
The study aimed to develop a new CNN model for recognizing and placing cephalometric landmarks on CBCT for 3D cephalometric analysis, and to evaluate its accuracy.
Material and Methods. Data selection. DICOM files of 192 cone beam tomograms were used in the study. All data were taken from the archives of the Department of Oral and Maxillofacial Surgery and Surgical Dentistry of the Peoples' Friendship University of Russia and the Department of Orthodontics of the Moscow Medical and Dental University. All images were obtained using a Planmeca ProMax 3D Max CBCT unit (Planmeca, Finland) with a 23×26 cm field of view. This format allowed us to obtain complete skull models, including soft tissues from the submental region to the skull vault.
Each set of DICOM files was imported into the ViSurgery software (Skolkovo, Russian Federation) and automatically anonymized. For anonymization, we used the ViSurgery function that removes unique identifiers in the DICOM file and replaces the patient's name with the «no name» flag. Three-dimensional models of the patient's soft tissues, skull bones, and jaws were created from the data. Then 74 cephalometric points were placed on the surface of each model, and their positions were corrected on axial, sagittal, and frontal slices. Graduate students initially determined the position of each point, and verification was performed sequentially by two practitioners: first a surgeon, then an orthodontist.
After the markup of the 3D facial model, the coordinates of the points were exported as an XML file. Each cephalometric point corresponds to the coordinates of a single voxel in the series of DICOM images. The DICOM and XML data were used jointly to train the CNN. The training dataset consisted of 174 pairs of DICOM and XML files; 18 projects were used for validation.
After training, the neural network was deployed on a server, and a function was created to send DICOM data to the server. The CNN processed the data and returned the coordinates of the cephalometric points to the ViSurgery software. In this way, we obtained the data needed to evaluate the neural network's performance.
The trained model was tested on the remaining 18 test projects, for which we compared the coordinates of the 74 key points and the derived cephalometric measurements between the human and CNN annotations.
We selected clinically critical cephalometric points on soft tissues, bones, and teeth [10] that are used in orthodontic treatment planning and orthognathic surgery to predict facial changes.
Data preprocessing. Each DICOM set formed a 768×768×576 voxel tensor, with each voxel storing a Hounsfield-scale density. First, the volume was resampled to 224×224×224 voxels using 3rd-order spline interpolation, which reduces memory utilization during training without significantly degrading image quality. The voxel densities, clipped to the range from -1000 to 5000, were then linearly scaled to the range 0 to 1. These tensors served as input to the neural network.
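For illustration, a minimal sketch of this preprocessing step in Python follows; the function name and the use of scipy.ndimage.zoom are assumptions, not the authors' published code:

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_volume(volume: np.ndarray) -> np.ndarray:
    """Resample a 768x768x576 HU volume to 224x224x224 and scale to [0, 1].

    A sketch of the preprocessing described above; not the authors' code.
    """
    # 3rd-order spline interpolation down to the target resolution
    factors = [224 / s for s in volume.shape]
    resized = zoom(volume, factors, order=3)
    # Clip densities to the stated -1000..5000 range, then scale linearly
    clipped = np.clip(resized, -1000.0, 5000.0)
    return (clipped + 1000.0) / 6000.0
```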
Recognition of the key coordinates was treated as a segmentation task. For this purpose, the coordinates of the 26 soft tissue, 38 bone, and 10 dental key points were transformed into images with 26, 38, and 10 channels, respectively. All voxels in each channel were set to 0 except the single voxel at the keypoint coordinates, which was set to 1. Each channel of this one-hot image was then converted into a heat map by applying a Gaussian filter centered at the key point. These images, in which each channel represents the heat map of one key point, served as targets during neural network training.
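A sketch of this target construction might look as follows; the Gaussian width sigma is an assumption, as the paper does not report it:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_heatmap_target(points, shape=(224, 224, 224), sigma=2.0):
    """Build a K-channel heat-map target from K integer voxel coordinates.

    Sketch only: one one-hot voxel per channel, smoothed by a Gaussian.
    """
    target = np.zeros((len(points), *shape), dtype=np.float32)
    for ch, (x, y, z) in enumerate(points):
        target[ch, x, y, z] = 1.0                        # one-hot voxel
        target[ch] = gaussian_filter(target[ch], sigma)  # Gaussian heat map
        target[ch] /= target[ch].max()                   # peak normalized to 1
    return target
```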
Neural network architecture. We used the 3D U-net from the MONAI framework (https://github.com/Project-MONAI) for segmentation. At each step of the contracting path, two convolution layers with ReLU activations are followed by a downsampling layer; in this way, the spatial size of the feature maps is reduced while the number of channels is increased. At each step of the expanding path, a transposed convolution layer is applied to the current feature map, which is then merged with the corresponding cropped feature map from the contracting path, and convolution layers with ReLU are applied to the merged result. In this way, the spatial size is increased and the number of channels is reduced.
The MONAI and PyTorch Lightning deep learning frameworks were used to train the neural network. The architecture of the neural network is shown in Figure 1.
The primary metric for assessing the quality of the neural network model was the mean absolute error (MAE), which expresses the model's error in millimeters:

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|y_i - \hat{y}_i\right|$$
Fig. 1. The architecture of the U-net CNN
Convolutional Neural Network Training. The hyperparameters for the U-net architecture were defined as follows. The 3D convolution layers used 16, 32, 64, 128, and 256 channels, doubling the channel count at each level. The number of input channels was 1, and the number of output channels was 26, 38, or 10, depending on the point group (Fig. 2).
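Under these stated hyperparameters, instantiating the network with MONAI could look like the sketch below; the stride values are an assumption, since the paper does not report them:

```python
from monai.networks.nets import UNet

# 3D U-Net with the hyperparameters stated above; out_channels=38 is the
# bone-point model (use 26 for soft-tissue points or 10 for dental points).
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=38,
    channels=(16, 32, 64, 128, 256),  # channels double at each level
    strides=(2, 2, 2, 2),             # assumed downsampling per level
)
```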
Since the raw network outputs are unbounded values, they must be converted into probabilities of belonging to a key point. Therefore, a sigmoid was applied to the output after the forward pass to obtain a heat map. This heat map ŷ is compared with the target heat map y using a loss function combining the Dice loss and binary cross-entropy (BCE):
$$\mathrm{DiceBCELoss}(y,\hat{y}) = \mathrm{Dice}(y,\hat{y}) + \mathrm{BCE}(y,\hat{y});$$

$$\mathrm{Dice}(y,\hat{y}) = \frac{2\,|y \cap \hat{y}|}{|y| + |\hat{y}|};$$

$$\mathrm{BCE}(y,\hat{y}) = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log \hat{y}_i + (1-y_i)\log(1-\hat{y}_i)\right].$$
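A possible PyTorch implementation of this combined loss is sketched below. Note that, as is common in DiceBCE implementations, the Dice term is taken as 1 minus the Dice coefficient so that minimizing the loss maximizes overlap; the printed formula shows the coefficient itself:

```python
import torch
import torch.nn as nn

class DiceBCELoss(nn.Module):
    """Dice + binary cross-entropy loss on sigmoid-activated heat maps.

    A sketch, not the authors' code; `smooth` avoids division by zero.
    """
    def __init__(self, smooth: float = 1e-6):
        super().__init__()
        self.smooth = smooth
        self.bce = nn.BCELoss()  # expects predictions already in [0, 1]

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        p, t = pred.reshape(-1), target.reshape(-1)
        intersection = (p * t).sum()
        dice = 1.0 - (2.0 * intersection + self.smooth) / (p.sum() + t.sum() + self.smooth)
        return dice + self.bce(p, t)
```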
The loss function was optimized using the Adam algorithm with a learning rate of 0.0001. The model was trained on a Tesla V100 GPU for 184 epochs, at which point the MAE had not decreased for 20 epochs. The batch size was set to 1 due to the large image size. Training took 24 hours.
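A minimal sketch of the corresponding training step, assuming a hypothetical train_loader yielding (volume, target) pairs and the model and DiceBCELoss from the sketches above:

```python
import torch

loss_fn = DiceBCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)

for volume, target in train_loader:       # batch size 1, as stated
    pred = torch.sigmoid(model(volume))   # heat-map probabilities
    loss = loss_fn(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```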
To compute the MAE for the key points, the output heat map had to be converted into coordinates in millimeters. To do this, the arg max function was applied to each channel of the output heat map, and the coordinates of the maximum value in each channel were taken as the key point predicted by the neural network.
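This conversion can be sketched as follows, assuming an isotropic voxel spacing voxel_size_mm after resampling:

```python
import numpy as np

def heatmap_to_points(heatmap: np.ndarray, voxel_size_mm: float) -> np.ndarray:
    """Convert a K-channel heat map into K (x, y, z) coordinates in mm."""
    coords = np.empty((heatmap.shape[0], 3))
    for ch in range(heatmap.shape[0]):
        # arg max over the flattened channel, unraveled back to 3D indices
        coords[ch] = np.unravel_index(np.argmax(heatmap[ch]), heatmap[ch].shape)
    return coords * voxel_size_mm
```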
Statistics. The accuracy of CNN landmark localization on the validation dataset was assessed using the MAE. The coordinates predicted by the neural network and the original point positions determined by humans were compared on the 18 test projects, for which median measurement errors and detection rates (the percentage of the total number of points for which the error was less than a given threshold) were calculated.
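For illustration, median errors and detection rates can be computed as sketched below, under the assumption that per-point errors are Euclidean distances in millimeters:

```python
import numpy as np

def point_errors_mm(pred_mm: np.ndarray, true_mm: np.ndarray) -> np.ndarray:
    """Euclidean localization error per landmark, in millimeters."""
    return np.linalg.norm(pred_mm - true_mm, axis=-1)

def detection_rate(errors_mm: np.ndarray, threshold_mm: float) -> float:
    """Percentage of landmarks localized with error below the threshold."""
    return float((errors_mm < threshold_mm).mean() * 100.0)

# e.g. the median error and <2/<3/<4 mm rates reported in the Table:
# median = np.median(errors); rates = [detection_rate(errors, t) for t in (2, 3, 4)]
```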
Results and Discussion. To visualize the results of the CNN, the coordinates of the predicted points were loaded into ViSurgery and fitted to the 3D facial models, as shown in Figure 3. The MAE was 2.78 mm, 3.03 mm, and 4.30 mm for soft tissue, bone, and dental points, respectively. Median errors and detection rates for all points are summarized in the Table.
Three-dimensional cephalometric analysis based on CBCT data remains a difficult and time-consuming diagnostic method, yet demand for it is increasing. Increased hardware performance and the development of 2D and 3D graphics software have opened new possibilities for processing large datasets, mainly CT scans. This has contributed to the development of various automated algorithms and systems for 3D cephalometric analysis.
Kang et al. created a system consisting of a convolutional neural network and an algorithm for resampling image data. The mean errors in placing cephalometric landmarks were 3.26 mm, 3.18 mm, and 4.81 mm (along the three axes) and 7.61 mm in 3D. The training sample included 27 CBCT scans. The authors overcame the disadvantages of the limited dataset using two strategies. First, a mathematical function was used to characterize the location of landmarks in terms of a smooth transition to adjacent coordinates; this prevented overfitting and ensured high learning efficiency despite the small initial sample. Second, the computational resource constraint was addressed by reducing the size of the image data through resampling, which lowered computational requirements while improving the predictability of training. Although the 3D CNN-based annotation system has not reached the level of accuracy required for clinical applications, it can nevertheless serve to approximate landmark positions, thereby reducing the time needed for image annotation [11, 12].
Yun et al. developed a multi-stage deep learning platform for automatic 3D cephalogram labeling. The proposed method initially detects several landmarks (7 out of 93) that can be reliably and accurately marked on the skull image.
Fig. 2. Learning curves of the Neural Network: A - skin points, B - bone points, C - tooth points
Fig. 3. Asymmetric deformity landmarking. The cephalometric points placed by humans are painted red; the points determined by the CNN are painted green: A - skin points, B - bone points, C - tooth points (labels were removed so as not to overlap the image)
The placement of these seven landmarks allows determination of the mid-sagittal plane, on which the next step of the neural network establishes eight more cephalometric points. The remaining 78 (93-15) landmarks are set based on the known coordinates of these 15 points and a variational autoencoder (VAE) representation of the morphological similarity/dissimilarity of the normalized skull (a low-dimensional representation of the full set of cephalometric landmarks obtained using the VAE and annotator). The VAE implementation makes it possible to learn three-dimensional morphological features from two-dimensional images and to study the similarity/dissimilarity representation of the combined vectors of cephalometric landmarks. Using a small training sample, the proposed method provides an average error of 3.63 mm for 93 cephalometric landmarks. The novelty is in using a VAE to learn 3D objects from 2D images by representing high-dimensional 3D landmark vectors with hidden variables of much lower dimensionality. This low-dimensional latent representation is achieved by applying a normalized skull and fixed reference Cartesian coordinates. Experiments confirmed the performance of the proposed method even with limited training data [13].
The peculiarity of our approach was that we focused on creating a tool as close as possible to clinical implementation. For this purpose, we developed the ViSurgery program, in which CBCT data are used to create 3D models of soft and hard tissues, set the positions of cephalometric points on these models, correct their positions on 2D slices, and perform 3D cephalometric analysis. The entire training sample was prepared in the ViSurgery software. After training, the CNN was hosted on a server, and a function was created to send DICOM data to the server, where the data are processed and the coordinates of the cephalometric points are sent back to ViSurgery. Our segmentation-based method showed higher accuracy for 74 landmarks (MAE 2.78 mm for soft tissue points, 3.03 mm for bone points, and 4.30 mm for dental points) than the previous deep learning method [11]: in the model described by Kang et al., the average 3D error was 7.61 mm for only 12 cephalometric landmarks. Our method also performed on par with that of Yun et al., who reported a mean error of 3.63 mm for 93 cephalometric landmarks [13].
Table
Median errors and detection frequencies of cephalometric points
Cephalometric point | Median error (mm) | Detection rate (%): <2 mm, <3 mm, <4 mm
Points on soft tissues
Alare L 2.58 38.89 55.56 77.78
Alare R 2.09 50 66.67 77.78
Bridge Of Nose 2.5 50 50 72.22
Sellion 1.79 55.56 88.89 94.44
cer 3.64 22.22 38.89 61.11
gl 3.19 22.22 44.44 55.56
gn 2.68 44.44 61.11 72.22
go L 5.04 5.56 27.78 38.89
go R 4.78 5.56 16.67 33.33
ll 1.22 66.67 88.89 94.44
me C 3.43 16.67 44.44 55.56
me L 4.11 16.67 22.22 44.44
me R 4.37 16.67 22.22 50
or L 4.34 16.67 27.78 38.89
or R 3.51 16.67 38.89 55.56
pg 2.23 50 66.67 77.78
po L 3.16 38.89 50 61.11
po R 3.13 16.67 33.33 77.78
prn 1.71 55.56 94.44 100
sm (STBpoint) 1.92 50 72.22 83.33
sn 1.5 72.22 94.44 100
t L 1.97 50 83.33 94.44
t R 2.23 44.44 88.89 94.44
ul 1.12 83.33 94.44 100
zy L 4.53 22.22 38.89 44.44
zy R 7.59 11.11 11.11 27.78
Points on bones
A 2.14 33.33 83.33 94.44
ANS 1.92 50 83.33 88.89
ArL 9.91 0 0 0
ArR 6.81 0 0 5.56
B 2.02 44.44 83.33 100
Ba 2.56 27.78 83.33 94.44
CoL 2.22 44.44 66.67 77.78
CoR 1.55 61.11 72.22 77.78
Gn 1.5 77.78 88.89 94.44
GoL 4.42 16.67 33.33 33.33
GoR 3.61 27.78 38.89 50
JL 5.4 0 5.56 22.22
JR 5.62 0 5.56 11.11
MeC 1.53 66.67 88.89 94.44
MeL 3.44 33.33 38.89 66.67
MeR 3.36 16.67 38.89 61.11
Mid Ramus L 2.33 33.33 55.56 72.22
Mid Ramus R 2.22 44.44 72.22 77.78
N 2.41 44.44 66.67 83.33
OrL 2.58 33.33 55.56 61.11
OrR 2.8 27.78 55.56 61.11
PNS 2.46 27.78 66.67 100
Pg 1.7 83.33 100 100
PoL 5.05 5.56 11.11 22.22
PoR 4.98 11.11 22.22 44.44
PtL 6.9 0 22.22 22.22
PtR 7.05 5.56 11.11 11.11
R2L 4.62 11.11 27.78 44.44
R2R 2.56 27.78 61.11 77.78
R4L 3.09 22.22 50 66.67
R4R 2.02 50 61.11 72.22
Ramus Point L 4.39 5.56 16.67 44.44
Ramus Point R 3.12 27.78 44.44 61.11
S 2.06 50 77.78 94.44
Sigmoid Notch L 1.97 55.56 94.44 94.44
Sigmoid Notch R 1.69 66.67 83.33 100
ZyL 1.75 66.67 83.33 88.89
ZyR 2.21 44.44 72.22 94.44
Points on teeth
L1 tip L 1.86 61.11 83.33 94.44
L1 tip R 1.83 55.56 72.22 94.44
L6 L 2.95 16.67 61.11 77.78
L6 R 2.39 27.78 61.11 66.67
LI 1.87 50 83.33 88.89
U1 tip L 2.27 38.89 72.22 100
U1 tip R 2.63 33.33 55.56 83.33
U6 L 2.47 38.89 61.11 77.78
U6 R 2.1 38.89 61.11 88.89
UI 2.48 33.33 66.67 88.89
Conclusion. The coordinate recognition results for some landmarks were unsatisfactory. As a rule, poor recognition quality was associated with low bone density in the corresponding area, where it is difficult for both the clinician and the computer to determine the position of the point. The accuracy of dental points is particularly important, because even a tiny error in determining the coordinates of a point on the dentition can lead to a significant deviation in the results of cephalometric measurements. Thus, the method needs to be improved to increase the accuracy of tooth landmark localization. One of the solutions
may be the preliminary segmentation of the tooth row with subsequent placement of cephalometric points on the boundaries of the segments, which will be tested in further studies.
Thus, the study demonstrates the high efficiency of a segmentation approach for training a CNN to identify cephalometric points on anatomical 3D models based on CBCT. The proposed method, integrated into specialized software, has high potential for reducing workflow labor intensity.
Disclosures: The authors declare no conflict of interest.
References
1. Adams G. L., Gansky S. A., Miller A. J., Harrell W. E. Jr., Hatcher D. C. Comparison between traditional 2-dimensional cephalometry and a 3-dimensional approach on human dry skulls. Am. J. Orthod. Dentofacial Orthop. 2004;126(4):397-409. https://doi.org/10.1016/j.ajodo.2004.03.023
2. Kragskov J., Bosch C., Gyldensted C., Sindet-Pedersen S. Comparison of the reliability of craniofacial anatomic landmarks based on cephalometric radiographs and three-dimensional CT scans. Cleft Palate Craniofac. J. 1997;34(2):111-116. https://doi.org/10.1597/1545-1569_1997_034_0111_cotroc_2.3.co_2
3. Vlijmen O. J., Maal T., Berge S. J., Bronkhorst E. M., Katsaros C., Kuijpers-Jagtman A. M. A comparison between 2D and 3D cephalometry on CBCT scans of human skulls. Int. J. Oral Maxillofac. Surg. 2010;39(2):156-160. https://doi.org/10.1016/j.ijom.2009.11.017
4. Nalpaci R., Ozturk F., Sokucu O. A comparison of two-dimensional radiography and three-dimensional computed tomography in angular cephalometric measurements. Dentomaxillofac. Radiol. 2010;39(2):100-106. https://doi.org/10.1259/dmfr/82724776
5. Jodeh D. S., Kuykendall L. V., Ford J. M., Ruso S., Decker S. J. [et al.] Adding Depth to Cephalometric Analysis: Comparing Two- and Three-Dimensional Angular Cephalometric Measurements. J. Craniofac. Surg. 2019;30(5):1568-1571.
https://doi.org/10.1097/SCS.0000000000005555
6. Cintra O., Grybauskas S., Vogel C. J., Latkauskiene D., Gama N. A. Jr. Digital platform for planning facial asymmetry orthodontic-surgical treatment preparation. Dental Press J. Orthod. 2018;23(3):80-93. https://doi.org/10.1590/2177-6709.23.3.080-093.sar
7. Wang R. H., Ho C. T., Lin H. H., Lo L. J. Three-dimensional cephalometry for orthognathic planning: Normative data and analyses. J. Formos. Med. Assoc. 2020;119(1 Pt 2):191-203. https://doi.org/10.1016/j.jfma.2019.04.001
8. Muraev A. A., Tsai P., Kibardin I., Oborotistov N., Shirayeva T. Frontal cephalometric landmarking: humans vs artificial neural networks. Int. J. Comput. Dent. 2020;23(2):139-148.
9. Muraev A. A., Kibardin I. A., Oborotistov N. Yu., Ivanov S. S., Persin L. S. Use of neural network algorithms for the automated placement of cephalometric points on Lateral Ceph. REJR. 2018;8(4):16-22.
10. Kazimierczak N., Kazimierczak W., Serafin Z., Nowicki P., Nozewski J., Janiszewska-Olszowska J. AI in Orthodontics: Revolutionizing Diagnostics and Treatment Planning - A Comprehensive Review. Journal of Clinical Medicine. 2024;13(2):344.
11. Kang S. H., Jeon K., Kim H. J., Seo J. K., Lee S. H. Automatic three-dimensional cephalometric annotation system using three-dimensional convolutional neural networks: a developmental trial. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization. 2020;8(2):210-218.
https://doi.org/10.1080/21681163.2019.1674696
12. Kang S. H., Jeon K., Kang S. H., Lee S. H. 3D cephalometric landmark detection by multiple stage deep reinforcement learning. Sci. Rep. 2021;11(1):17509. https://doi.org/10.1038/s41598-021-97116-7
13. Yun H. S., Jang T. J., Lee S. M., Lee S. H., Seo J. K. Learning-based local-to-global landmark annotation for automatic 3D cephalometry. Phys. Med. Biol. 2020;65(8):085018.
https://doi.org/10.1088/1361-6560/ab7a71
Received 06.12.2023
About authors:
Muraev Alexandr Alexandrovich, MD, PhD, Professor of the Department of Oral and Maxillofacial Surgery; tel.: +79037110246; e-mail: [email protected]; https://orcid.org/0000-0003-3982-5512
Oborotistov Nikolay Yurievich, PhD, Associate Professor of the Department of Orthodontics, Head of the Department Clinic of Orthodontics; tel.: +79852113533; e-mail: [email protected]; https://orcid.org/0000-0002-8523-6076
Mokrenko Mark Evgenievich, postgraduate student; tel.: +79501010300; e-mail: [email protected]; https://orcid.org/0000-0002-9421-600X
Shiryaeva Tatyana Vyacheslavovna, postgraduate student; tel.: +79060988905; e-mail: [email protected]; https://orcid.org/0000-0001-6554-9449
Aleshina Olga Aleksandrovna, PhD, Assistant Professor, Chief Physician of the Dental Clinic; tel.: +79108753399; e-mail: [email protected]; https://orcid.org/0000-0002-7990-6459
Ershov Mikhail Vladimirovich, Chief Data Scientist; tel.: +79266051684; e-mail: [email protected]; https://orcid.org/0000-0002-7445-529X
Emel'yanov Petr Nikolaevich, R&D Director; tel.: +79683819616; e-mail: [email protected]; https://orcid.org/0000-0001-7232-0482
Agarleva Luisa Robertovna, Python Analyst Developer; tel.: +79196893252; e-mail: [email protected]; https://orcid.org/0000-0003-3116-9936
Dolgalev Aleksandr Aleksandrovich, MD, Associate Professor, Professor of the Department of Dentistry of General Practice and Pediatric Dentistry; tel.: +79624404861; e-mail: [email protected]; https://orcid.org/0000-0002-6352-6750
Zorych Maryanna Yevgeniyevna, PhD, Associate Professor; tel.: +375296517331; e-mail: [email protected]; https://orcid.org/0009-0007-0074-6389