
Method for removing haze from images, captured under a wide range of lighting conditions

A.I. Filin1, A. V. Kopylov1, I.A. Gracheva1 1 Laboratory of Cognitive Technologies and Simulation Systems, Tula State University, 300012, Tula, Russia, Lenin ave. 92

Abstract

The presence of haze in images degrades the quality of perception and automatic analysis of scenes. One of the most popular haze removal methods is the dark channel prior, which is based on the Koschmieder atmospheric scattering model. However, its underlying assumptions are not met at nighttime, when localized light sources make a significant, if not the main, contribution to the illumination. We propose here to use the degree to which an image element belongs to a localized light source, determined by a one-class classifier, as a value characterizing the confidence in the corresponding element of the estimated transmission map during its rectification based on the gamma-normal model. This makes it possible to increase the accuracy of dehazing for images captured in low-light or nighttime conditions.

Keywords: haze removal, image restoration, dark channel prior, transmission map, point-light source, low-light conditions.

Citation: Filin AI, Kopylov AV, Gracheva IA. Method for removing haze from images, captured under a wide range of lighting conditions. Computer Optics 2024; 48(1): 102-108. DOI: 10.18287/2412-6179-CO-1361.

Introduction

Surveillance systems have become increasingly popular in recent years. Fog, haze, dust and other particulate matter in the atmosphere are a typical environment for outdoor systems. Fine scattering particles significantly degrade the visibility of images captured in such conditions, which hampers the perception of scene objects by a person and increases the error rate of automatic image analysis and enhanced vision systems.

Since the degree of absorption and scattering of atmospheric light depends on the distance from the observer to the object, depth information is essential for haze removal. At the same time, haze removal approaches based on a known depth map are unsuitable for dehazing images in real-world applications due to a serious limitation: the depth information must be provided by a user or obtained using supplemental equipment. Therefore, we consider here single-image haze removal methods.

The method for removing haze from images based on the concept of a dark channel, proposed by He et al. [1], currently holds a leading position among methods for processing images taken in hazy conditions or in the presence of fine particles in the atmosphere. It is based on the observation that, in local areas of an image that do not contain haze, at least one channel of the RGB color space contains pixels of low intensity.

Later, Berman et al. [2] suggested that the colors of a haze-free image can be expressed by several distinct colors that form dense clusters in the RGB color space and are non-locally distributed throughout the image. In hazy images, due to differences in atmospheric scattering, each color cluster forms a haze line; the method employs these lines to reconstruct the haze-free image. Zhu et al. [3] proposed a color attenuation prior for determining scene depth using a linear dehazing model.

Most haze removal methods use the Koschmieder atmospheric scattering model [4]. However, its underlying assumptions are not met at nighttime, since a significant, if not the main, contribution to the illumination is made by localized light sources.

Haze removal methods based on deep learning [5 - 7] also suffer from the mentioned disadvantages. The difficulty of obtaining pairs of hazy and haze-free images taken simultaneously has led to the vast majority of such pairs in the public datasets [8, 9] used for training being synthesized with the same optical model [4]. Therefore, the effect of haze on illumination from localized sources, typical of images obtained in low-light conditions, is most often not taken into account.

We propose here to use the degree to which an image element belongs to a localized light source, determined by a one-class classifier, as a value characterizing the confidence in the corresponding element of the estimated raw transmission map during its rectification using the gamma-normal model, which makes it possible to increase the accuracy of dehazing for images captured in low-light or nighttime conditions. In addition, we noticed that images containing large areas capable of reflecting extensive light close in spectral composition to the light of localized sources lead to erroneous airlight estimation. In this work, we decided not to consider image pixels belonging to point light sources with a high score (the accepted threshold is 0.7) when assessing atmospheric illumination. As a result, we improve the quality of the proposed dehazing method for images obtained with sufficient illumination.

The experimental results demonstrate improvements in the haze removal quality of the proposed method in comparison with the previously proposed method [10], as well as with the other methods involved in the experiments. In addition, the proposed method has the lowest computational complexity among the compared methods.

1. Related work

Our recently proposed image haze removal method [10] uses a one-class classifier based on support vector data description (SVDD) [11] to estimate the probabilities that image elements belong to localized light sources. In addition, a structure-transferring filtering method based on the probabilistic gamma-normal model [12] was applied to refine the rough transmission map derived from the dark channel. This method is considered here as the baseline method.

The general structure of the baseline method consists of the following major stages [10]: transmission map evaluation, localized light sources estimation, airlight estimation, transmission map refinement, restoration model evaluation, haze removal based on the restoration model.

1.1. Optical model

The Koschmieder model of atmospheric scattering [4] is widely used in haze removal methods. It describes how the image I_c(s) at the point of observation is formed as a result of refraction, reflection, and mixing of incident and reflected light through interaction with particles suspended in the atmosphere:

I_c(s) = J_c(s) T(s) + A_c (1 − T(s)),   (1)

where S = {s = (s_1, s_2): s_1 = 1, …, N_1, s_2 = 1, …, N_2} is the discrete grid of image pixels, I_c(s) is the hazy image intensity at position s ∈ S in color channel c ∈ {r, g, b} of the RGB color space, J_c(s) is the haze-free image intensity at the same position and color channel, A_c is the airlight in channel c, and T(s) is the medium transmission map.

The model assumes that particles are evenly distributed in space, so their number between the object and the observation point increases evenly as the distance from the object to the observation point increases. Another assumption is the independence of the scattering coefficient from the wavelength:

T(s) = e^(−β d(s)),   (2)

where d(s) is the scene depth at position s and β is the scattering coefficient.
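The forward model (1)-(2) can be sketched in a few lines of numpy. This is a minimal illustration, not code from the paper; the function name synthesize_haze and the default β are our own choices:

```python
import numpy as np

def synthesize_haze(J, depth, A, beta=1.0):
    """Apply the Koschmieder model (1)-(2): I = J*T + A*(1 - T),
    with T = exp(-beta * d).

    J     -- H x W x 3 haze-free image, floats in [0, 1]
    depth -- H x W scene depth map d(s)
    A     -- length-3 airlight vector A_c
    """
    T = np.exp(-beta * depth)      # transmission map, eq. (2)
    T3 = T[..., None]              # broadcast over the color channels
    return J * T3 + np.asarray(A) * (1.0 - T3)
```

At zero depth the transmission is 1 and the image is unchanged; as depth grows, the pixel color converges to the airlight, which matches the intuition behind (1).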

1.2. Dark channel prior

In the baseline method, we employ Dark Channel Prior [1] for estimating atmospheric light and the medium transmission map. The dark channel is defined as follows:

D(I_c(s)) = min_{y ∈ Ω(s)} ( min_{c ∈ {r, g, b}} I_c(y) ),   (3)

where Ω(s) is a local patch (L × W pixels) centered at s ∈ S.

Taking the local patch Ω(s) as approximately uniform and computing the dark channel on both sides of equation (1) after normalization by A_c, it becomes:

D(I_c(s)/A_c) = D(J_c(s)/A_c) T(s) + (1 − T(s)).   (4)

The intensity of the dark channel of a haze-free image tends to zero, while the intensity of atmospheric light tends to its maximum, so the medium transmission map becomes:

T(s) = 1 − D(I_c(s)/A_c).   (5)

Thus, expression (5) allows us to evaluate the transmission map from a known hazy image by dividing it into small patches where the above assumptions are valid and applying a minimum filter to them.
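Equations (3) and (5) can be sketched as follows. This is a minimal numpy illustration under our own naming (dark_channel, estimate_transmission); the patch-offset loop is a simple stand-in for the minimum filter and is not meant to be the paper's optimized implementation:

```python
import numpy as np

def dark_channel(I, patch=15):
    """Eq. (3): per-pixel minimum over the color channels, then the
    minimum over a patch x patch neighborhood (a minimum filter)."""
    r = patch // 2
    m = I.min(axis=2)                          # min over c in {r, g, b}
    padded = np.pad(m, r, mode='edge')         # replicate borders
    out = np.full_like(m, np.inf)
    for dy in range(patch):                    # slide the window by offsets
        for dx in range(patch):
            out = np.minimum(out, padded[dy:dy + m.shape[0],
                                         dx:dx + m.shape[1]])
    return out

def estimate_transmission(I, A, patch=15):
    """Eq. (5): T = 1 - D(I / A), the raw transmission map."""
    return 1.0 - dark_channel(I / np.asarray(A), patch)
```

On a haze-free image with one dark channel everywhere the estimate gives T ≈ 1, and on an image saturated to the airlight it gives T ≈ 0, as the prior predicts.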

1.3. The airlight estimation in the presence of localized light sources with one-class classifier

The estimation of the airlight vector plays an important role in the Koschmieder atmospheric scattering model (1) and strongly influences the result of haze removal. We generally follow the method of He et al. [1], but the haze formation model (1) takes into account a single global illumination source. The problem is that at night the brightest areas of the image mainly correspond to localized light sources rather than to the sky region. In previous work [10], we proposed to use the estimates of pixel membership in a point light source made by a one-class classifier, applying these estimates as a mask to the dark channel before estimating the atmospheric light and the transmission map. The dark channel pixels thus decrease their values according to the estimated membership of the corresponding pixels in point light sources, which reduces the chance that these pixels will be used in the assessment of atmospheric illumination, but at the same time distorts the transmission map.

Coordination of the local solutions of the one-class classifier was carried out simultaneously with the dark channel rectification within a single procedure, using the gamma-normal model.

1.4. Transmission map rectification based on gamma-normal model

Equations (3)-(5) allow us to obtain a rough approximation of the real transmission map by finding the minimum within a local sliding patch. The structure-transferring filter based on the probabilistic gamma-normal model was introduced in [12] to refine the transmission map and thus suppress halo artifacts in the final dehazed image.

The main idea is to consider the resulting transmission map as a field of continuous random variables representing the medium transmission at each point, whose distribution is itself considered random. This distribution is characterized by its moments: the mathematical expectation and the covariance matrix. It is convenient to use the inverse value, the precision, instead of the covariance. Let us assume that the distribution of the mathematical expectation is Gaussian and that the precision is distributed independently for each picture element according to the gamma distribution. The normal and gamma distributions form a conjugate family and lead to highly efficient image processing procedures.

The sought-for mathematical expectations X = (x_s, s ∈ S) and precisions Λ = (λ_s, s ∈ S) form the hidden component, and the observations Y = (y_s, s ∈ S) form the observable component of the two-component random field [(X, Λ), Y].

Bayesian MAP estimation of the hidden field (X, Λ) leads to the following optimization problem:

(X̂, Λ̂ | Y, μ, η) = argmin_{X, Λ} J(X, Λ | Y, μ, η),

J(X, Λ | Y, μ, η) = Σ_{s∈S} (y_s − x_s)² + Σ_{(s′,s″)∈G} λ_{s′} (x_{s′} − x_{s″})² + η Σ_{s∈S} λ_s + (1 − μ) Σ_{s∈S} ln λ_s,   (6)

where G is the variable adjacency graph, having the form of a lattice for images, and μ and η are structural parameters which control the degree of image smoothing and the selectivity, respectively.

Note that the precision field Λ = (λ_s, s ∈ S) can be treated as a kind of penalty on the difference between the values of two neighboring variables x_{s′} and x_{s″}, (s′, s″) ∈ G, and thus represents structural information about dependencies between elements of the hidden field. Since the structure of the transmission map and the initial hazy image are similar, the latter can be used as an additional guide field X^g to transfer the structure of local relations between elements of the source image to the result of processing.

Criterion (6) gives the following relations for the optimal Λ with fixed X and structural parameters μ and η. The estimates Λ̂, in turn, give the optimal estimates X̂ of the hidden field X:

λ̂_{s′} = (μ − 1) / ( Σ_{(s′,s″)∈G} (x^g_{s′} − x^g_{s″})² + η ),

X̂ = argmin_X [ Σ_{s∈S} (y_s − x_s)² + Σ_{(s′,s″)∈G} λ̂_{s′} (x_{s′} − x_{s″})² ].   (7)

A parametric dynamic programming procedure was used to optimize criterion (7), based on approximating the lattice-like adjacency graph G of image elements by a sequence of horizontal and vertical trees [12]. Therefore, the method has linear computational complexity in the number of image elements.
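The first relation of (7), which derives the precision field from the local contrast of the guide image, can be sketched directly. This is an illustrative numpy fragment under our own naming (precision_from_guide) and illustrative parameter values μ = 2, η = 0.1; it assumes a 4-connected lattice with right and lower neighbors:

```python
import numpy as np

def precision_from_guide(Xg, mu=2.0, eta=0.1):
    """First relation of (7): per-element precision lambda_s from the
    squared differences of the guide image Xg to its right and lower
    neighbours. Small local contrast in the guide yields a large
    precision, i.e. strong smoothing across that edge."""
    diff2 = np.zeros_like(Xg, dtype=float)
    diff2[:, :-1] += (Xg[:, :-1] - Xg[:, 1:]) ** 2   # horizontal edges
    diff2[:-1, :] += (Xg[:-1, :] - Xg[1:, :]) ** 2   # vertical edges
    return (mu - 1.0) / (diff2 + eta)
```

In flat regions of the guide the precision is constant and maximal, while across guide edges it drops, which is exactly how the structure of the hazy image is transferred into the refined transmission map.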

In our case, the rough transmission map T plays the role of the observable component Y, and the resulting transmission map T^rect corresponds to the hidden component X of the two-component random field. Considering (6), we obtain:

(T̂^rect, Λ̂ | T, μ, η) = argmin_{T^rect, Λ} J(T^rect, Λ | T, μ, η),

J(T^rect, Λ | T, μ, η) = Σ_{s∈S} (t_s − t_s^rect)² + Σ_{(s′,s″)∈G} λ_{s′} (t_{s′}^rect − t_{s″}^rect)² + η Σ_{s∈S} λ_s + (1 − μ) Σ_{s∈S} ln λ_s.   (8)

According to the concept of exponential variation introduced in [13], based on the observation that the medium transmission map depends exponentially on the scattering coefficient and depth, the medium transmission map is rewritten as follows:

T(s) = β (T^rect(s))^γ + ε,   (9)

where T^rect is the transmission map rectified using the gamma-normal model (8), ε is the prediction error of the model, β is a scattering coefficient, and γ is an exponential factor.

The values of the parameters β, γ, and ε were found in [13] by particle-swarm optimization of PSNR on the Middlebury dataset [14]: β = 0.5880, γ = 1.9898 and ε = 0.1492.
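Transformation (9) with the parameter values above is a one-liner. This is our own sketch (the name exponential_variation and the clipping to keep T strictly positive are our additions, not from the paper):

```python
import numpy as np

def exponential_variation(T_rect, beta=0.5880, gamma=1.9898, eps=0.1492):
    """Eq. (9): T = beta * T_rect**gamma + eps, with the parameter
    values found in [13] by particle-swarm optimization. The result is
    clipped to [eps, 1] so that the later division in the restoration
    equation stays numerically stable."""
    return np.clip(beta * np.asarray(T_rect) ** gamma + eps, eps, 1.0)
```

For a fully transmissive element (T_rect = 1) this yields β + ε = 0.7372, and the floor ε = 0.1492 prevents division by near-zero transmission during restoration.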

Finally, the restored haze-free image can be obtained by the following known equation, derived from (1):

J_c(s) = (I_c(s) − A_c (1 − T(s))) / T(s).   (10)

2. Proposed method

Although the previously presented method [10] makes it possible to exclude the influence of point light sources on the atmospheric light estimation, the transmission map in overexposed areas was nevertheless not estimated correctly, since the underlying assumptions of model (1) are not met in these areas. In this section, we present a new approach to transmission map estimation in haze removal tasks, which employs the likelihood that an image element belongs to a localized light source as a measure of confidence during transmission map rectification. According to this approach, during transmission map estimation, the confidence in the dark channel will be low in areas of the image with a high score of belonging to point light sources, and the estimation in these areas will be based on the transmission map estimates of neighboring elements.

Fig. 1 demonstrates the full flow chart of the presented method. Further in this section, we describe the proposed changes from the baseline [10] method.

2.1. Atmospheric light estimation

Here we use a one-class classifier for airlight estimation in the same way as described in [10]. The classifier evaluates a score for each pixel of the original image such that the score is close to 0 if the corresponding pixel most likely belongs to a localized light source, and close to 1 if the pixel is most likely associated with the rest of the scene. The resulting mask represents the distributed illumination caused by different illumination sources and plays a role similar to the glow image in the method of Li et al. [15].

Fig. 1. Full flow chart of the presented method

Since the one-class classifier evaluates the likelihood of each pixel belonging to a point light source independently, the elements of the resulting score map are inconsistent. In our previous work [10], the matching of the local solutions of the one-class classifier was carried out using the gamma-normal model simultaneously with the dark channel rectification within a single procedure. Since in the current work the estimates of pixel membership in localized light sources are used for interpolation of the transmission map, the procedures of joint estimation of the airlight and the transmission map had to be separated. Therefore, in this work we employ a simple quadratic model to adjust the local solutions of the one-class classifier:

(Ŵ | W^SVDD, λ_w) = argmin_W J(W | W^SVDD, λ_w),

J(W | W^SVDD, λ_w) = Σ_{s∈S} (w_s^SVDD − w_s)² + λ_w Σ_{(s′,s″)∈G} (w_{s′} − w_{s″})²,   (11)

where W = (w_s, s ∈ S) is the hidden component of the two-component random field (W, W^SVDD), representing the resulting consistent localized light sources map, and W^SVDD = (w_s^SVDD, s ∈ S) is the original localized light sources map obtained by the SVDD one-class classifier. Note that the impulse response of such a filter is similar to the ideal point spread function (PSF) [16]. The meaning of the other variables is the same as in (6).

The procedure for finding the optimal solution of (11) is extremely fast and does not noticeably degrade the computational speed, but it makes it possible to improve the quality of the transmission map estimation.
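Since (11) is an unconstrained quadratic, its minimizer can be approached with simple fixed-point (Jacobi) iterations; setting the gradient to zero at each pixel gives w_s = (w_s^SVDD + λ_w Σ_neighbors w_n) / (1 + λ_w · deg). The sketch below is our own illustration (the name smooth_scores, λ_w = 1 and the iteration count are illustrative choices, not the paper's solver):

```python
import numpy as np

def smooth_scores(W_svdd, lam=1.0, iters=200):
    """Minimize (11) by Jacobi iterations on a 4-connected lattice:
    each score is pulled toward its SVDD observation and toward the
    average of its neighbours, with coupling weight lam."""
    W = W_svdd.astype(float).copy()
    for _ in range(iters):
        nsum = np.zeros_like(W)    # sum of neighbour values
        deg = np.zeros_like(W)     # number of neighbours per pixel
        nsum[:-1, :] += W[1:, :];  deg[:-1, :] += 1
        nsum[1:, :]  += W[:-1, :]; deg[1:, :]  += 1
        nsum[:, :-1] += W[:, 1:];  deg[:, :-1] += 1
        nsum[:, 1:]  += W[:, :-1]; deg[:, 1:]  += 1
        # per-pixel stationarity condition of criterion (11)
        W = (W_svdd + lam * nsum) / (1.0 + lam * deg)
    return W
```

A constant score map is a fixed point of the iteration, and for a noisy map the coupling term shrinks the spread of the scores, which is exactly the coordination effect the quadratic model is used for.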

In addition, we noticed that images containing large areas capable of reflecting extensive light close in spectral composition to the light of localized sources lead to erroneous airlight estimation. To remedy this problem, in this work we additionally check the number of pixels corresponding to localized light sources. Heuristically, we set the threshold for the proportion of such pixels to 0.7. If the threshold is not exceeded, the consistent localized light sources map is used as a mask for the dark channel, as in [10]; if the threshold is exceeded, the map is rejected and not used as a mask for the dark channel:

w_s = { w_s, if mean(W) ≤ 0.7,
      { 1, otherwise.   (12)
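Rule (12) amounts to a single gating decision on the whole map. A minimal sketch under our own naming (gate_light_source_mask is a hypothetical helper, not from the paper):

```python
import numpy as np

def gate_light_source_mask(W, threshold=0.7):
    """Rule (12): keep the coordinated light-source map only while the
    mean score does not exceed the threshold; otherwise replace it with
    an all-ones mask, so it has no effect on the dark channel."""
    return W if W.mean() <= threshold else np.ones_like(W)
```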

2.2. Transmission map rectification

In this paper, we propose to use the coordinated localized sources map (12) for transmission map rectification with the gamma-normal model (8). The values W = (w_s, s ∈ S) from (12) are used as coefficients regulating the penalty for the mismatch between the elements of the resulting T^rect and the observed T maps in equation (8):

(T̂^rect, Λ̂ | T, W, μ, η) = argmin_{T^rect, Λ} J(T^rect, Λ | T, W, μ, η),

J(T^rect, Λ | T, W, μ, η) = Σ_{s∈S} w_s (t_s − t_s^rect)² + Σ_{(s′,s″)∈G} λ_{s′} (t_{s′}^rect − t_{s″}^rect)² + η Σ_{s∈S} λ_s + (1 − μ) Σ_{s∈S} ln λ_s.   (13)

The coefficients W = (w_s, s ∈ S) correspond to the inverse variance of the observations in the gamma-normal model. As a result, in areas of the image with localized light sources, the elements w_s will be close to 0; therefore, the penalty for the difference between the transmission map elements estimated from the dark channel and the desired transmission map will be insignificant. So the estimation of elements belonging to such areas will be based on the neighboring elements of the transmission map.

Fig. 2a and b demonstrate the baseline and proposed transmission maps, respectively. As can be seen, the area of the rectified transmission map illuminated by the car's headlights, as well as the headlights themselves, is brighter, which corresponds to a smaller amount of scattering than on the raw map. This result is obtained because the rectified transmission map in this area relies on interpolation over the surrounding transmission map elements and does not use the dark channel values. The large scattering in this area on the baseline transmission map arises because the dark channel is based on model (1), which does not assume the presence of localized light sources; the car headlights are therefore perceived as an atmospheric light source and hence assumed to be a more distant object.

Fig. 2. The baseline (a) and proposed (b) estimations of the transmission map

3. Experimental results

Comparative experiments were carried out using the proposed method and other known dehazing methods [1 - 3, 6, 17]. For the tests, we use datasets consisting of both synthesized daylight images (SOTS from the RESIDE set [9]) and real images obtained in high [18, 19] and low light conditions in the presence of localized light sources [20, 21]. PSNR and SSIM were used as metrics.

Table 1 demonstrates the improvement in haze removal quality of the proposed method compared to the baseline [10]: by ~4.6 % and ~4.1 % on the daylight datasets (I-Haze, O-Haze, SOTS-indoor, SOTS-outdoor) and by ~10.4 % and ~14.3 % on the datasets with localized light sources (Night-haze, Night-haze-ext) in terms of PSNR and SSIM, respectively. Fig. 3 demonstrates that both the proposed and baseline methods outperform the other methods in operation speed. Moreover, the proposed method outperforms the baseline by ~6.7 % and ~7.1 % on average over all datasets in terms of PSNR and SSIM.

Fig. 3. Mean computation time over 69 images from the o-haze [19] and night-haze [20] datasets for different image resolutions

Conclusions

The improvements to the previously published method [10], which consist in excluding localized light sources from both the atmospheric light and transmission map estimations, make it possible to use the atmospheric scattering model [4] in low-light conditions and in the presence of localized light sources. Experiments show better results in terms of the PSNR and SSIM metrics compared to the baseline method [10], as well as to the other methods under evaluation. The average SSIM over the datasets demonstrates an advantage compared to the other methods, and the same can be said about PSNR for all methods except that of Qin et al. [6].

Acknowledgments

This research is funded by the Ministry of Science and Higher Education of the Russian Federation within the framework of the state task FEWG-2021-0012.

Tab. 1. Quantitative dehazing results for the compared methods by PSNR and SSIM metrics

PSNR

| Method | I-Haze | O-Haze | SOTS-indoor | SOTS-outdoor | Night-haze | Night-haze-ext | Avg. daylight | Avg. localized sources | Average |
| Dhara et al. [17] | 13.43 | 16.27 | 19.60 | 16.62 | 18.59 | 19.28 | 16.48 | 18.94 | 17.30 |
| Qin et al. [6] | 15.65 | 14.67 | 29.58 | 19.48 | 19.37 | 18.99 | 19.85 | 19.18 | 19.62 |
| Berman et al. [2] | 15.81 | 15.71 | 17.28 | 17.96 | 15.76 | 14.60 | 16.69 | 15.18 | 16.19 |
| He et al. [1] | 11.91 | 15.11 | 16.56 | 14.40 | 17.42 | 17.39 | 14.50 | 17.41 | 15.47 |
| Zhu et al. [3] | 16.66 | 16.50 | 19.05 | 22.05 | 17.65 | 19.74 | 18.57 | 18.70 | 18.61 |
| Baseline [10] | 16.53 | 15.57 | 16.75 | 20.80 | 20.70 | 18.82 | 17.41 | 19.76 | 18.20 |
| Proposed | 16.52 | 17.59 | 17.66 | 21.07 | 22.87 | 20.77 | 18.21 | 21.82 | 19.41 |

SSIM

| Method | I-Haze | O-Haze | SOTS-indoor | SOTS-outdoor | Night-haze | Night-haze-ext | Avg. daylight | Avg. localized sources | Average |
| Dhara et al. [17] | 0.64 | 0.69 | 0.86 | 0.80 | 0.71 | 0.63 | 0.75 | 0.67 | 0.72 |
| Qin et al. [6] | 0.70 | 0.60 | 0.97 | 0.84 | 0.74 | 0.63 | 0.78 | 0.69 | 0.75 |
| Berman et al. [2] | 0.75 | 0.73 | 0.78 | 0.83 | 0.73 | 0.56 | 0.77 | 0.65 | 0.73 |
| He et al. [1] | 0.58 | 0.66 | 0.80 | 0.75 | 0.49 | 0.60 | 0.70 | 0.55 | 0.65 |
| Zhu et al. [3] | 0.73 | 0.66 | 0.81 | 0.89 | 0.62 | 0.69 | 0.77 | 0.66 | 0.73 |
| Baseline [10] | 0.71 | 0.65 | 0.80 | 0.81 | 0.64 | 0.61 | 0.74 | 0.63 | 0.70 |
| Proposed | 0.72 | 0.70 | 0.82 | 0.82 | 0.74 | 0.69 | 0.77 | 0.72 | 0.75 |

References

[1] He K, Sun J, Tang X. Single image haze removal using dark channel prior. IEEE Trans Pattern Anal Mach Intell 2011; 33: 2341-2353. DOI: 10.1109/TPAMI.2010.168.

[2] Berman D, Avidan S, Treibitz T. Non-local image dehazing. 2016 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2016: 1674-1682.

[3] Zhu Q, Mai J, Shao L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans Image Process 2015; 24: 3522-3533.

[4] Koschmieder H. Theorie der horizontalen Sichtweite. Beitr Phys Freie Atmos 1924; 12: 33-55.

[5] Cai B, Xu X, Jia K, Qing C, Tao D. DehazeNet: An end-to-end system for single image haze removal. IEEE Trans Image Process 2016; 25: 5187-5198. DOI: 10.1109/TIP.2016.2598681.

[6] Qin X, Wang Z, Bai Y, Xie X, Jia H. FFA-Net: Feature fusion attention network for single image dehazing. Proc AAAI Conf on Artificial Intelligence 2020; 34: 1190811915.

[7] Zhang S, He F, Ren W. NLDN: Non-local dehazing network for dense haze removal. Neurocomputing 2020; 410: 363-373. DOI: 10.1016/j.neucom.2020.06.041.

[8] Ancuti C, Ancuti CO, De Vleeschouwer C. D-HAZY: A dataset to evaluate quantitatively dehazing algorithms.

2016 IEEE Int Conf on Image Processing (ICIP) 2016: 2226-2230. DOI: 10.1109/ICIP.2016.7532754.

[9] Li B, Ren W, Fu D, Tao D, Feng D, Zeng W, Wang Z. Benchmarking single-image dehazing and beyond. IEEE Trans Image Process 2018; 28: 492-505.

[10] Filin A, Gracheva I, Kopylov A, Seredin O. Fast channel-dependent transmission map estimation for haze removal with localized light sources. In Book: Dang NHT, Zhang Y-D, Tavares JMRS, Chen B-H, eds. Artificial intelligence in data and big data processing: Proceedings of ICABDE 2021. Cham, Switzerland: Springer; 2022: 461-471.

[11] Tax DMJ, Duin RPW. Support vector data description. Mach Learn 2004; 54: 45-66.

[12] Gracheva I, Kopylov A. Image processing algorithms with structure transferring properties on the basis of gammanormal model. In Book: Ignatov DI, Khachay MYu, Labunets VG, Loukachevitch N, Nikolenko SI, Panchenko A, Savchenko AV, Vorontsov K, eds. Analysis of images, social networks and texts. Cham, Switzerland: Springer International Publishing AG; 2017: 257-268. DOI: 10.1007/978-3-319-52920-2_24.

[13] Shi LF, Chen BH, Huang SC, Larin AO, Seredin OS, Kopylov AV, Kuo SY. Removing haze particles from single image via exponential inference with support vector data description. IEEE Trans Multimed 2018; 20: 25032512. DOI: 10.1109/TMM.2018.2807593.

[14] Hirschmuller H, Scharstein D. Evaluation of cost functions for stereo matching. 2007 IEEE Conf on Computer Vision and Pattern Recognition 2007: 1-8.

[15] Li Y, Tan RT, Brown MS. Nighttime haze removal with glow and multiple light colors. 2015 IEEE Int Conf on Computer Vision (ICCV) 2015: 226-234. DOI: 10.1109/ICCV.2015.34.

[16] Wolf E, ed. Progress in optics. Vol 47. Elsevier; 2005. ISBN: 0-444-51598-4.

[17] Dhara SK, Roy M, Sen D, Biswas PK. Color cast dependent image dehazing via adaptive airlight refinement and non-linear color balancing. IEEE Trans Circuits Syst Video Technol 2020; 8215: 1-1. DOI: 10.1109/tcsvt.2020.3007850.

[18] Ancuti C, Ancuti CO, Timofte R, De Vleeschouwer C. I-HAZE: A dehazing benchmark with real hazy and haze-free indoor images. In Book: Blanc-Talon J, Helbert D,

Philips W, Popescu D, Scheunders P, eds. Advanced concepts for intelligent vision systems. Cham, Switzerland: Springer Nature Switzerland AG; 2018: 620-631. DOI: 10.1007/978-3-030-01449-0_52.

[19] Ancuti CO, Ancuti C, Timofte R, De Vleeschouwer C. O-HAZE: A dehazing benchmark with real hazy and haze-free outdoor images. 2018 IEEE/CVF Conf on Computer Vision and Pattern Recognition Workshops (CVPRW) 2018: 867-875. DOI: 10.1109/CVPRW.2018.00119.

[20] Filin A, Kopylov A, Seredin O, Gracheva I. Hazy images dataset with localized light sources for experimental evaluation of dehazing methods. 6th Int Workshop on Deep Learning in Computational Physics 2022: 19.

[21] Filin A, Kopylov A, Gracheva I. A single image dehazing dataset with low-light real-world indoor images, depth maps and infrared images. Int Arch Photogramm Remote Sens Spat Inf Sci 2023; 48: 53-57.

Authors' information

Andrei Igorevich Filin graduated from the Institute of Applied Mathematics and Computer Science of Tula State University (Tula, Russia) with a master's degree in Data Science in 2019. Currently he is a Postgraduate Student and a Junior Researcher at the Laboratory of Cognitive Technologies and Simulation Systems in Tula State University. Research interests are data mining, image processing, machine learning, programming. ORCID: 0000-0003-1028-0926 E-mail: andrewifilin@gmail.com

Andrei Valerievich Kopylov received the Ph.D. degree from the Institute of Control Sciences of the Russian Academy of Sciences, Moscow, Russia, in 1997. In 1997, he joined the Department of Automation and Remote Control, Tula State University, as an Assistant Professor and became an Associate Professor in 2005. Currently he is an Associate Professor with the Institute of Applied Mathematics and Computer Science and the Leading Researcher at the Laboratory of Cognitive Technologies and Simulation Systems in Tula State University. Research interests are data mining, image processing, image analysis, pattern recognition, machine vision. ORCID: 0000-0003-3193-583X E-mail: andkopylov@gmail.com

Inessa Alexandrovna Gracheva, received the Ph.D. degree in Engineering from Tula State University in 2020, the topic of the dissertation is "Image processing algorithms based on a probabilistic gamma-normal model" (2020). Since 2016, she has been working at Tula State University, currently as Deputy Head of the Research Department, Associate Professor with the Institute of Applied Mathematics and Computer Science and Senior Researcher at the Laboratory of Cognitive Technologies and Simulation Systems. Research interests are computer vision, image processing, machine learning. ORCID: 0000-0003-3367-3538 E-mail: gia1509@mail.ru

Code of State Categories Scientific and Technical Information (in Russian - GRNTI)): 28.23.15 Received May 29, 2023. The final version - August 25, 2023.
