
IMAGE PROCESSING, PATTERN RECOGNITION

A novel switching bilateral filtering algorithm for depth map

A.N. Ruchay 1,2, K.A. Dorofeev 2, V.V. Kalschikov 2
1 Federal Research Centre of Biological Systems and Agro-technologies of the Russian Academy of Sciences, Orenburg, Russia,
2 Department of Mathematics, Chelyabinsk State University, Chelyabinsk, Russia

Abstract

In this paper, we propose a novel switching bilateral filter for depth maps from an RGB-D sensor. The switching method works as follows: the bilateral filter is applied not at all pixels of the depth map, but only at those where noise and holes are likely, that is, at object boundaries and sharp depth changes. With the help of computer simulation we show that the proposed algorithm can process a depth map effectively and quickly. The presented results show an improvement in the accuracy of 3D object reconstruction using the proposed depth filtering. The performance of the proposed algorithm is compared, in terms of the accuracy of 3D object reconstruction and speed, with that of common successful depth filtering algorithms.

Keywords: depth map, switching filtering, 3D reconstruction.

Citation: Ruchay AN, Dorofeev KA, Kalschikov VV. A novel switching bilateral filtering algorithm for depth map. Computer Optics 2019; 43(6): 1001-1007. DOI: 10.18287/2412-6179-2019-43-6-1001-1007.

Acknowledgments: This work was financially supported by the Russian Science Foundation (project #17-76-20045).

Introduction

3D object reconstruction is a popular task in object recognition, object tracking, object retrieval, scene understanding, human-computer interaction, virtual maintenance, navigation, engineering, and visualization [1, 2, 3, 4].

In this paper, we are interested in filtering a depth map from an RGB-D sensor to improve its quality [5]. A depth map consists of piecewise smooth regions bounded by sharp object boundaries; therefore, the depth value varies discontinuously, and a small error around an object boundary may lead to significant artifacts and misrepresentations. Besides, the depth map is noisy because of infrared light reflections, and missing pixels without any depth value appear as black holes in depth maps. To reduce noise and fill small holes, median and binomial filters are used [6, 7]. The noise and holes affect the accuracy of 3D object reconstruction; therefore, denoising and hole-filling algorithms are used in 3D reconstruction systems [8, 9, 7, 10, 11]. Traditional 3D depth denoising methods focus on fusing multiple consecutive noisy depth frames to obtain higher quality: a method based on the correlation between aligned color and depth frames provided by such sensors [12, 13]; spatial-temporal denoising approaches [14, 15]; a deep-learning based approach which makes use of aligned gray images to denoise depth data [16]. Enhancing the quality of the depth map obtained from a single depth frame is an increasingly popular research task: wavelet denoising [17]; total variation regularization [18]; median filtering based on an adaptive weighted Gaussian [19]; the bilateral filter [20]; the non-local means method [21].

In recent years, the following algorithms have been proposed: an effective divide-and-conquer method for handling disocclusion of the synthesized image [22]; a depth filtering scheme based on exploiting temporal and color information [18]; a nonlinear down/upsampling filtering and a depth reconstruction multilateral filtering using spatial resolution, boundary similarity, and coding artifact features [23]; a 3D collaborative filtering in the graph Fourier transform domain [24]; a weighted mode filter and joint bilateral filter, where the joint bilateral kernel provides an optimal solution with the help of the joint histogram [25]; an adaptive method to denoise depth using Differential Histogram of Normal Vectors features along with a linear SVM [26]; a three-phase depth map correction, including eliminating anomalies, segmentation, amendment and, finally, inter-frame and intra-frame filtering [27]; a method based on a combination of Gaussian kernel filtering and anisotropic filtering [28].

Bilateral filtering is a technique to smooth images while preserving edges [29]. The basic idea of the bilateral filter is that for one pixel to influence another, it should not only occupy a nearby location but also have a similar value. The bilateral filter may not be the most advanced denoising technique, but its strength lies in its simplicity and flexibility. The following modifications of the bilateral filter have been proposed: the Adaptive Bilateral Filter (ABF) [26], the Fast Bilateral Filter (FBF) [30], the Joint Bilateral Filter (JBF) [31], and Joint Bilateral Upsampling (JBU) [20].

In the paper [5], we tested and compared state-of-the-art depth filtering methods with respect to the reconstruction accuracy using real data; the presented results showed an improvement in the accuracy of 3D object reconstruction using depth filtering from an RGB-D sensor. In this article, we propose a novel switching bilateral filter (SBF) for denoising a depth map. We apply the bilateral filter not at all pixels of the depth map, but only at those where noise and holes are possible, that is, at the boundaries and sharp changes. For this, we find areas with sharp changes and boundaries in the RGB image, and then apply the bilateral filter only to these areas of the depth map.

We consider depth denoising algorithms for 3D object reconstruction [32, 33, 34]; therefore, we use the raw depth map as noisy data and evaluate the performance of the denoising methods by the enhancement achieved in the accuracy of 3D object reconstruction. In contrast, a common approach to noise reduction is to treat the raw depth map as the ground truth, add artificial noise such as additive or impulse noise, and then propose a method to remove it [26]. Although this common approach allows quantitative comparison, the proposed methods then reduce only the artificial noise and not the original noise contained in the raw depth. Therefore, our main goal is to evaluate the denoising methods by the reconstruction accuracy, which depends on the quality of the captured raw depth map. As the evaluation metric, we use the root mean square error (RMSE) of measurements in the iterative closest point (ICP) algorithm.

The performance of the proposed algorithm is compared, in terms of the accuracy of 3D object reconstruction and speed, with the following depth denoising algorithms: ABF [26], FBF [30], JBF [31], JBU [20], Noise-aware Filter (NF) [35], Weighted Mode Filter (WMF) [36], Anisotropic Diffusion (AD) [37], Markov Random Field (MRF) [38], Markov Random Field with Second Order Smoothness (MRFS) [39], Markov Random Field with Kernel Data Term (MRFK) [39], Markov Random Field with Tensor (MRFT) [39], Layered Bilateral Filter (LBF) [40], Kinect depth normalization (KDN) [41], Roifill filter (RF) [42], Median Filter (MF), Bilateral Filter (BF), and Okada filter (OF) [43].

The paper is organized as follows. In Section 1, we describe the proposed depth denoising algorithm based on the switching bilateral filter. Computer simulation results are provided in Section 2. Finally, we summarize our conclusions.

1. Proposed algorithm

In this section, we describe the proposed depth denoising algorithm based on switching bilateral filter.

First, we describe the original bilateral filter. We denote the depth map by the image D and the graylevel image converted from the RGB image by I, and use the notation D_p for the image value at pixel position p. Pixel size is assumed to be 1. F[D] designates the output of a filter F applied to the image D. We consider the set S of all possible image locations, which we call the spatial domain; for instance, \sum_{q \in S} denotes a sum over all image pixels indexed by q. We use |\cdot| for the absolute value and \|\cdot\| for the Euclidean distance.

The bilateral filter is defined by

  BF[D]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\|p - q\|) \, G_{\sigma_r}(|D_p - D_q|) \, D_q,   (1)

where the normalization factor W_p ensures that the pixel weights sum to 1:

  W_p = \sum_{q \in S} G_{\sigma_s}(\|p - q\|) \, G_{\sigma_r}(|D_p - D_q|).   (2)

Here \sigma_s is the spatial parameter and \sigma_r is the range parameter of the 2D Gaussian kernel

  G_\sigma(x) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{x^2}{2\sigma^2}\right).   (3)

Equation (1) is a normalized weighted average, where G_{\sigma_s} is a spatial Gaussian weighting that decreases the influence of distant pixels, and G_{\sigma_r} is a range Gaussian weighting that decreases the influence of pixels q whose depth values differ from D_p.

The joint bilateral filter is defined by

  JBF[D, I]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\|p - q\|) \, G_{\sigma_r}(|I_p - I_q|) \, D_q   (4)

with

  W_p = \sum_{q \in S} G_{\sigma_s}(\|p - q\|) \, G_{\sigma_r}(|I_p - I_q|).   (5)
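A sketch of Eqs. (4)-(5) under the same illustrative assumptions as above; the only change with respect to the bilateral filter is that the range weights are computed on the graylevel guidance image I (the constant factor of Eq. (3) cancels in the normalized average and is therefore omitted).

import numpy as np

def joint_bilateral_filter(depth, gray, sigma_s=3.0, sigma_r=10.0, radius=5):
    # Brute-force evaluation of Eqs. (4)-(5): spatial weights as in the
    # bilateral filter, range weights computed on the guidance image I.
    d = depth.astype(np.float64)
    g = gray.astype(np.float64)
    out = d.copy()
    h, w = d.shape
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma_s ** 2))
            rng = np.exp(-((g[y0:y1, x0:x1] - g[y, x]) ** 2) / (2.0 * sigma_r ** 2))
            weights = spatial * rng
            out[y, x] = np.sum(weights * d[y0:y1, x0:x1]) / np.sum(weights)
    return out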

In the case of impulse noise, the bilateral filter may need to mollify the input image before use [30]. This practice is commonplace in robust statistics: a very robust estimator such as the median filter is applied first to obtain a suitable initial estimate, and then a more precise estimator (the bilateral filter) is applied to find the final result. The range Gaussian weights are computed on a median-filtered version of the image. Let M denote median filtering; then the modified bilateral filter (MBF) is defined by

  MBF[D]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\|p - q\|) \, G_{\sigma_r}(|M[D]_p - M[D]_q|) \, D_q   (6)

with

  W_p = \sum_{q \in S} G_{\sigma_s}(\|p - q\|) \, G_{\sigma_r}(|M[D]_p - M[D]_q|).   (7)

The proposed switching bilateral filter (SBF) is defined by

  SBF[D, I]_{p \in R} = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\|p - q\|) \, G_{\sigma_r}(|D_p - D_q|) \, D_q   (8)

with

  W_p = \sum_{q \in S} G_{\sigma_s}(\|p - q\|) \, G_{\sigma_r}(|D_p - D_q|),   (9)

where R is the set of image locations at the boundaries and edges of the graylevel image I. Fig. 1 shows an RGB image from the RGB-D dataset [44] and the edges found in the graylevel image by the Canny filter.
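A minimal sketch of the switching idea behind Eqs. (8)-(9), assuming OpenCV's Canny detector is used to find the edge region R; the dilation of the edge mask, the thresholds, and all other parameter values are our illustrative assumptions rather than the settings used in the experiments.

import cv2
import numpy as np

def switching_bilateral_filter(depth, gray, sigma_s=3.0, sigma_r=10.0,
                               radius=5, canny_lo=50, canny_hi=150):
    # The bilateral filter is evaluated only at pixels of the edge region R,
    # found in the graylevel image; all other pixels are left unchanged.
    d = depth.astype(np.float64)
    out = d.copy()
    edges = cv2.Canny(gray, canny_lo, canny_hi)  # gray is assumed to be 8-bit
    region = cv2.dilate(edges, np.ones((3, 3), np.uint8)) > 0  # R, slightly widened
    h, w = d.shape
    ys, xs = np.nonzero(region)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch = d[y0:y1, x0:x1]
        yy, xx = np.mgrid[y0:y1, x0:x1]
        spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma_s ** 2))
        rng = np.exp(-((patch - d[y, x]) ** 2) / (2.0 * sigma_r ** 2))
        weights = spatial * rng
        out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out

Restricting the loop to the pixels of R is what distinguishes the SBF from the plain bilateral filter and is consistent with its lower running time in Table 1.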

We also propose a modification of the switching bilateral filter (MSBF) with median filtering, defined as

  MSBF[D, I]_{p \in R} = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\|p - q\|) \, G_{\sigma_r}(|M[D]_p - M[D]_q|) \, M[D]_q   (10)

with

  W_p = \sum_{q \in S} G_{\sigma_s}(\|p - q\|) \, G_{\sigma_r}(|M[D]_p - M[D]_q|).   (11)
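A sketch of Eq. (10) as reconstructed above, assuming SciPy's median filter for M[D]; only the computation at a single pixel of the edge region R is shown, and the helper name and parameters are illustrative.

import numpy as np
from scipy.ndimage import median_filter

def msbf_at_pixel(depth, med, y, x, sigma_s=3.0, sigma_r=10.0, radius=5):
    # One output value of the MSBF, Eq. (10): both the range weights and the
    # averaged values are taken from the median-filtered depth med = M[D].
    h, w = depth.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    patch = med[y0:y1, x0:x1]
    yy, xx = np.mgrid[y0:y1, x0:x1]
    spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma_s ** 2))
    rng = np.exp(-((patch - med[y, x]) ** 2) / (2.0 * sigma_r ** 2))
    weights = spatial * rng
    return np.sum(weights * patch) / np.sum(weights)

# Hypothetical usage: med = median_filter(depth.astype(np.float64), size=3),
# with msbf_at_pixel(depth, med, y, x) called only for (y, x) in the edge region R.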

Extensive experiments revealed that good denoising results cannot be achieved with the following filters: ABF, FBF, WMF, AD, MRFT, LBF, KDN, RF, and OF. The main reason is that the point cloud remains uncorrected after filtering; therefore, we do not use these filters in our further experiments and comparisons.

A common algorithm for computing the RMSE via the ICP algorithm between two closest point clouds consists of the following steps (a simplified sketch of this pipeline is given after the list):

1. Register the RGB and depth data.

2. Apply a depth denoising algorithm: JBF, JBU, BF, SBF, MSBF, NF, MRF, MRFS, MRFK, MF, or MBF.

3. Build point clouds from the denoised depth data.

4. Detect and match keypoints in PC_i and PC_{i-1} with the SIFT keypoint detection algorithm [45].

5. Remove outliers with RANSAC correspondence rejectors [45].

6. Compute the transformation matrix and RMSE with ICP using the associated 3D points of the inliers.
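A heavily simplified sketch of steps 3-6, assuming the Open3D library and plain point-to-point ICP (without the SIFT keypoints and RANSAC rejection of the pipeline above); the camera intrinsics and the millimeter depth scale are illustrative assumptions.

import numpy as np
import open3d as o3d

def icp_rmse(depth_a, depth_b, intrinsic, max_dist=0.02):
    # RMSE of point-to-point ICP between point clouds built from two
    # consecutive (denoised) depth maps; a simplified stand-in for steps 3-6.
    def to_cloud(depth):
        img = o3d.geometry.Image(np.ascontiguousarray(depth).astype(np.float32))
        return o3d.geometry.PointCloud.create_from_depth_image(
            img, intrinsic, depth_scale=1000.0, depth_trunc=3.0)
    pc_a, pc_b = to_cloud(depth_a), to_cloud(depth_b)
    result = o3d.pipelines.registration.registration_icp(
        pc_a, pc_b, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.inlier_rmse

# Hypothetical usage with Kinect-like intrinsics:
# intr = o3d.camera.PinholeCameraIntrinsic(640, 480, 525.0, 525.0, 319.5, 239.5)
# rmse = icp_rmse(denoised_depth_1, denoised_depth_2, intr)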

2. Computer simulation

In this section, computer simulation results of the accuracy of 3D object reconstruction based on the proposed depth denoising algorithm using real data are presented and discussed.

As previously stated, we evaluate the performance of the proposed denoising filter against other state-of-the-art filters based on the enhancement of reconstruction accuracy achieved by each filter. We report experimental results evaluating the performance of the ICP algorithm for 3D object reconstruction. The evaluation metric is the root mean square error (RMSE) of measurements. We use the RGB-D dataset [44].

In our experiments, we select 11 different depth denoising algorithms which are widely cited and used for comparison: JBF, JBU, BF, SBF, MSBF, NF, MRF, MRFS, MRFK, MF, and MBF. The experiments are carried out on a PC with an Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz and 8 GB of memory.

To evaluate the performance of 3D object reconstruction based on the proposed depth denoising algorithm in our experiments, we carried out point cloud fusion and 3D reconstruction of a lion from the dataset [44]. Fig. 2 shows the RGB images and depth maps of the lion taken with a step of 1.

Corresponding RMSE values calculated for each pair with a step of 1 in the ICP algorithm with JBF, JBU, BF, SBF, MSBF, NF, MRF, MRFS, MRFK, MF, MBF depth denoising algorithms are shown in Table 1.

The quality of depth denoising can also be evaluated visually by looking at the restored point cloud. Figs. 3 and 4 show the depth maps and the 3D point clouds of the lion after denoising with the JBF, JBU, BF, SBF, MSBF, NF, MRF, MRFS, MRFK, MF, and MBF filters. The proposed MSBF yields the best result in terms of RMSE, speed, and visual evaluation among all depth denoising algorithms.

Conclusion

In this paper, we presented a novel switching bilateral filter (MSBF) for depth maps based on the bilateral filter and the median filter. The switching method applies the filter not at all pixels of the depth map, but only at the edges. We evaluated the performance of the ICP algorithm with the proposed depth denoising algorithm for 3D object reconstruction using real data. The performance of the proposed algorithm was also compared, in terms of 3D object reconstruction accuracy and speed, with that of common successful depth filtering algorithms. The experiments showed that the proposed MSBF filter yields the best result in terms of RMSE, speed, and visual evaluation among all depth denoising algorithms.


Fig. 1. The RGB image and the edges found in the graylevel image

References

[1] Gonzalez-Fraga JA, Kober V, Diaz-Ramirez VH, Gutierrez E, Alvarez-Xochihua O. Accurate generation of the 3D map of environment with a RGB-D camera. Proc SPIE 2017; 10396: 103962A. DOI: 10.1117/12.2273074.

[2] Echeagaray-Patrón BA, Kober VI, Karnaukhov VN, Kuznetsov VV. A method of face recognition using 3D facial surfaces. Journal of Communications Technology and Electronics 2017; 62: 648-652. DOI: 10.1134/s1064226917060067.

[3] Ruchay A, Kober V, Yavtushenko E. Fast perceptual image hash based on cascade algorithm. Proc SPIE 2017; 10396: 1039625. DOI: 10.1117/12.2272716.

[4] Ruchay A, Dorofeev K, Kober A. Accurate reconstruction of the 3D indoor environment map with a RGB-D camera based on multiple ICP. Proceedings of the International Conference Information Technology and Nanotechnology. Session Image Processing and Earth Remote Sensing 2018; 2210: 300-308. DOI: 10.18287/1613-0073-2018-2210-300-308.

[5] Ruchay A, Dorofeev K, Kober A, Kolpakov V, Kalschikov V. Accuracy analysis of 3D object shape recovery using depth filtering algorithms. Proc SPIE 2018; 10752: 1075221. DOI: 10.1117/12.2319907.

Fig. 2. The RGB images and depth maps of a lion taken by a Kinect sensor with a step of 1

Fig. 3. The restored depth maps of a lion without filtering and after denoising with the JBF, JBU, BF, SBF, MSBF, NF, MRF, MRFS, MRFK, MF, and MBF filters (from left to right, from top to bottom)

Table 1. Results of measurements using a common ICP algorithm with the JBF, JBU, BF, SBF, MSBF, NF, MRF, MRFS, MRFK, MF, and MBF depth denoising algorithms (DDA) for each pair of closest point clouds with numbers 1-2, 2-3, 3-4, 4-5, 5-6. The table presents the RMSE and the average processing time in seconds (Time)

DDA      1-2       2-3       3-4       4-5       5-6       Time
Without  6.64E-04  6.31E-04  5.35E-04  6.01E-04  7.14E-04  0.000
MSBF     4.65E-04  5.12E-04  4.21E-04  4.42E-04  4.06E-04  0.617
SBF      4.66E-04  5.24E-04  4.22E-04  4.39E-04  4.07E-04  0.599
MBF      4.69E-04  5.17E-04  4.24E-04  4.35E-04  4.16E-04  0.640
BF       4.87E-04  5.12E-04  4.31E-04  4.42E-04  4.17E-04  1.573
MF       5.18E-04  5.52E-04  4.81E-04  5.37E-04  8.92E-04  0.008
MRFK     2.80E-03  2.49E-03  7.85E-03  1.22E-03  1.68E-03  1.671
MRFS     8.06E-04  7.32E-04  6.81E-04  6.51E-04  6.95E-04  3.415
MRF      2.77E-03  2.61E-03  7.87E-03  1.15E-03  1.66E-03  1.684
NF       1.31E-03  1.71E-03  1.67E-03  1.84E-03  1.26E-03  6.261
JBU      1.33E-03  1.64E-03  1.58E-03  1.85E-03  1.33E-03  4.942
JBF      1.15E-03  1.14E-03  1.06E-03  9.67E-04  7.95E-04  3.430


Fig. 4. The restored point clouds of a lion without filtering and after denoising with the JBF, JBU, BF, SBF, MSBF, NF, MRF, MRFS, MRFK, MF, and MBF filters (from left to right, from top to bottom)

[6] Ruchay A, Kober V. Impulsive noise removal from color images with morphological filtering. In Book: van der Aalst WMP, et al, eds. Analysis of images, social networks and texts. Cham: Springer International Publishing; 2018: 280-291. DOI: 10.1007/978-3-319-73013-4_26.

[7] Ruchay A, Kober A, Kolpakov V, Makovetskaya T. Removal of impulsive noise from color images with cascade switching algorithm. Proc SPIE 2018; 10752: 1075224. DOI: 10.1117/12.2319914.

[8] Tihonkih D, Makovetskii A, Voronin A. A modified iterative closest point algorithm for noisy data. Proc SPIE 2017; 10396: 103962W. DOI: 10.1117/12.2274139.

[9] Makovetskii A, Voronin S, Kober V. An efficient algorithm for total variation denoising. In Book: Ignatov DI, et al, eds. Analysis of images, social networks and texts. Cham: Springer International Publishing; 2017: 326-337. DOI: 10.1007/978-3-319-52920-2_30.

[10] Voronin S, Makovetskii A, Voronin A, Diaz-Escobar J. A regularization algorithm for registration of deformable surfaces. Proc SPIE 2018; 10752: 107522S. DOI: 10.1117/12.2321521.

[11] Makovetskii A, Voronin S, Kober V. An efficient algorithm of 3D total variation regularization. Proc SPIE 2018; 10752: 107522V. DOI: 10.1117/12.2321646.

[12] Liu W, Chen X, Yang J, Wu Q. Robust color guided depth map restoration. IEEE Trans Image Process 2017; 26: 315-327. DOI: 10.1109/tip.2016.2612826.

[13] Milani S, Calvagno G. Correction and interpolation of depth maps from structured light infrared sensors. Signal Processing: Image Communication 2016; 41: 28-39. DOI: 10.1016/j.image.2015.11.008.

[14] Fu J, Wang S, Lu Y, Li S, Zeng W. Kinect-like depth denoising. 2012 IEEE International Symposium on Circuits and Systems 2012: 512-515. DOI: 10.1109/iscas.2012.6272078.

[15] Lin BS, Chou WR, Yu C, Cheng PH, Tseng PJ, Chen SJ. An effective spatial-temporal denoising approach for depth images. 2015 IEEE International Conference on Digital Signal Processing (DSP) 2015: 647-651. DOI: 10.1109/icdsp.2015.7251954.

[16] Zhang X, Wu R. Fast depth image denoising and enhancement using a deep convolutional network. 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2016: 2499-2503. DOI: 10.1109/icassp.2016.7472127.

[17] Moser B, Bauer F, Elbau P, Heise B, Schoner H. Denoising techniques for raw 3D data of TOF cameras based on clustering and wavelets. Proc SPIE 2008; 6805: 68050E. DOI: 10.1117/12.765541.

[18] Bhattacharya S, Venkatesh KS, Gupta S. Depth filtering using total variation based video decomposition. 2015 Third International Conference on Image Information Processing (ICIIP) 2015: 23-26. DOI: 10.1109/iciip.2015.7414733.

[19] Frank M, Plaue M, Hamprecht FA. Denoising of continuous-wave time-of-flight depth images using confidence measures. Optical Engineering 2009; 48: 077003. DOI: 10.1117/1.3159869.

[20] Kopf J, Cohen MF, Lischinski D, Uyttendaele M. Joint bilateral upsampling. ACM Trans Graph 2007; 26: 96. DOI: 10.1145/1276377.1276497.

[21] Georgiev M, Gotchev A, Hannuksela M. Real-time denoising of ToF measurements by spatio-temporal nonlocal mean filtering. 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW) 2013: 1-6. DOI: 10.1109/icmew.2013.6618384.

[22] Lei J, Zhang C, Wu M, You L, Fan K, Hou C. A divide-and-conquer hole-filling method for handling disocclusion in single-view rendering. Multimedia Tools and Applications 2017; 76: 7661-7676. DOI: 10.1007/s11042-016-3413-3.

[23] Zhang Q, Chen M, Zhu H, Wang X, Gan Y. An efficient depth map filtering based on spatial and texture features for 3D video coding. Neurocomputing 2016; 188: 82-89. DOI: 10.1016/j.neucom.2014.11.103.

[24] Chen R, Liu X, Zhai D, Zhao D. Depth image denoising via collaborative graph Fourier transform. In Book: Zhai G, Zhou J, Yang X, eds. Digital TV and wireless multimedia communication. Singapore: Springer; 2018: 128-137. DOI: 10.1007/978-981-10-8108-8_12.

[25] Fu M, Zhou W. Depth map super-resolution via extended weighted mode filtering. 2016 Visual Communications and Image Processing (VCIP) 2016: 1-4. DOI: 10.1109/vcip.2016.7805430.

[26] Boubou S, Narikiyo T, Kawanishi M. Adaptive filter for denoising 3D data captured by depth sensors. 2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) 2017: 1-4. DOI: 10.1109/3dtv.2017.8280401.

[27] Pourazad MT, Zhou D, Lee K, Karimifard S, Ganelin I, Nasiopoulos P. Improving depth map compression using a 3-phase depth map correction approach. 2015 IEEE International Conference on Multimedia Expo Workshops (ICMEW) 2015: 1-6. DOI: 10.1109/icmew.2015.7169790.

[28] Liu S, Chen C, Kehtarnavaz N. A computationally efficient denoising and hole-filling method for depth image enhancement. Proc SPIE 2016; 9897: 98970V. DOI: 10.1117/12.2230495.

[29] Paris S, Kornprobst P, Tumblin J. Bilateral filtering. Hanover, MA: Now Publishers Inc; 2009.

[30] Durand F, Dorsey J. Fast bilateral filtering for the display of high-dynamic-range images. ACM Trans Graph 2002; 21: 257-266. DOI: 10.1145/566570.566574.

[31] Petschnigg G, Agrawala M, Hoppe H, Szeliski R, Cohen M, Toyama K. Digital photography with flash and no-flash image pairs. ACM Trans Graph 2004; 23: 664-672. DOI: 10.1145/1015706.1015777.

[32] Ruchay A, Dorofeev K, Kober A. 3D object reconstruction using multiple Kinect sensors and initial estimation of sensor parameters. Proc SPIE 2018; 10752: 1075222. DOI: 10.1117/12.2319911.

[33] Ruchay A, Dorofeev K, Kober A. An efficient detection of local features in depth maps. Proc SPIE 2018; 10752: 1075223. DOI: 10.1117/12.2319913.

[34] Ruchay AN, Dorofeev KA, Kolpakov VI. Fusion of information from multiple Kinect sensors for 3D object reconstruction. Computer Optics 2018; 42(5): 898-903. DOI: 10.18287/2412-6179-2018-42-5-898-903.

[35] Chan D, Buisman H, Theobalt C, Thrun S. A noise-aware filter for real-time depth upsampling. ECCV Workshop on Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications 2008: M2SFA2.

[36] Min D, Lu J, Do MN. Depth video enhancement based on weighted mode filtering. IEEE Trans Image Process 2012; 21: 1176-1190. DOI: 10.1109/tip.2011.2163164.

[37] Liu J, Gong X. Guided depth enhancement via anisotropic diffusion. In Book: Huet B, Ngo C-W, Tang J, Zhou Z-H, Hauptmann AG, eds. Advances in Multimedia Information Processing - PCM 2013: 408-417. DOI: 10.1007/978-3-319-03731-8_38.

[38] Diebel J, Thrun S. An application of Markov random fields to range sensing. Proc 18th Int Conf Neural Inform Process Systems 2005: 291-298.

[39] Harrison A, Newman P. Image and sparse laser fusion for dense scene reconstruction. In Book: Howard A, Iagnemma K, Kelly A, eds. Field and service robotics. Berlin, Heidelberg: Springer; 2010: 219-228. DOI: 10.1007/978-3-642-13408-1_20.

[40] Yang Q, Yang R, Davis J, Nister D. Spatial-depth super resolution for range images. 2007 IEEE Conference on Computer Vision and Pattern Recognition 2007: 1-8. DOI: 10.1109/cvpr.2007.383211.

[41] Newcombe RA, Izadi S, Hilliges O, Kim D, Davison AJ, Kohli P, et al. KinectFusion: Real-time dense surface mapping and tracking. IEEE ISMAR 2011: 127-136. DOI: 10.1109/ismar.2011.6162880.

[42] Fuhrmann S, Goesele M. Fusion of depth maps with multiple scales. ACM Trans Graph 2011; 30: 148:1-148:8. DOI: 10.1145/2024156.2024182.

[43] Okada M, Ishikawa T, Ikegaya Y. A computationally efficient filter for reducing shot noise in low S/N data. PLoS ONE 2016; 11: e0157595. DOI: 10.1371/journal.pone.0157595.

[44] Lee K-R, Nguyen TQ. Realistic surface geometry reconstruction using a hand-held RGB-D camera. Mach Vis Appl 2016; 27: 377-385. DOI: 10.1007/s00138-016-0747-9.

[45] Rusu RB, Cousins S. 3D is here: Point Cloud Library (PCL). 2011 IEEE Int Conf Robot Automat 2011: 1-4. DOI: 10.1109/icra.2011.5980567.

Authors' information

Alexey N. Ruchay (b. 1986) graduated from Chelyabinsk State University in 2008 and holds a PhD degree. Currently he works as a leading researcher at the Federal Research Centre of Biological Systems and Agro-technologies of the Russian Academy of Sciences and as an associate professor at Chelyabinsk State University. Research interests include machine vision, signal and image processing, and biometrics. E-mail: [email protected].

Konstantin A. Dorofeev (b. 1989) graduated from Chelyabinsk State University in 2011. Engineer-researcher at Chelyabinsk State University. Research interests: machine vision, signal and image processing. E-mail: [email protected].

Vsevolod V. Kalschikov (b. 1998) is a student at Chelyabinsk State University. Research interests: machine vision, signal and image processing. E-mail: [email protected].

Code of State Categories Scientific and Technical Information (in Russian - GRNTI): 28.23.15.
Received June 04, 2019. The final version - September 9, 2019.
