
SCIENTIFIC AND TECHNICAL JOURNAL OF INFORMATION TECHNOLOGIES, MECHANICS AND OPTICS
May-June 2024, Vol. 24, No. 3
http://ntv.ifmo.ru/en/
ISSN 2226-1494 (print), ISSN 2500-0373 (online)

doi: 10.17586/2226-1494-2024-24-3-483-489

Smartphone video motion deblur order model

Resen A. Sallama

Directorate General of Vocational Education — Vocational Edu Iraq, Bagdad, 10001, Iraq

salamaresen@gmail.com, https://orcid.org/0009-0007-5044-8857

Abstract

A method is proposed to eliminate slight motion blur in video frames. The method is implemented in three stages. Blur is estimated from prior information on the distribution of the image gradient. A Gaussian Orientation Filter (GOF) is fitted to this prior information to find the regression coefficients. A multi-order model combines different GOF parameter estimates to generate a blur removal filter. The estimated parameters are fixed and applied to the image so that the blur is removed without amplifying noise or unwanted artifacts. The proposed model is optimized by minimizing a loss function. The method applies to outdoor and indoor video acquired by modern smartphones, and the experimental results are accurate for the full regression motion blur model. The example video dataset is 23 s long with a total size of 228 MP. Evaluation is based on computation time, the Structural Similarity Index Measure (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). Experimental results show that the artifact removal phase consumes the least computational time; the proposed model minimizes the cost function and produces high-quality images.

Keywords

smartphone platform, motion blur, Gaussian orientation, blur filter, loss function

For citation: Sallama R.A. Smartphone video motion deblur order model. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2024, vol. 24, no. 3, pp. 483-489. doi: 10.17586/2226-1494-2024-24-3-483-489


© Sallama R.A., 2024

UDC 621.397

Introduction

Many research approaches in computer vision focus on visual objects, augmented reality, object detection, tracking, and recognition. Video and images captured on smartphone platforms have specific problems caused by the cameras, the main reason being sensor sizes that are smaller than digital camera sensors. Pictures acquired by smartphones suffer from several problems, one of which is motion blur: almost every object or camera moves during the capture time, which is the source of motion blur. A digital camera image receives more light than a smartphone image because a larger sensor can collect more light. Sometimes blur results from wrongly setting the camera focus or from the limited depth of field when a large camera aperture is used. Smartphone images also contain a certain amount of intrinsic blur due to the optics and the exposure time of the camera. Blur is a highly complex regression problem because many different sources cause different types of blur, each represented by a different mathematical model.

Image deblurring generates a high-quality image with clean sharpness from a blurred image. The goal is to recover a sharper version of the original image by removing the blur. A blurred image is an integration of multiple image instances and sharp snapshots. The traditional approach handles this problem by applying a blur filter: a sharper version of the blurred input image can be recovered through such a filter. The proposed method uses blurred and sharp pairs to focus on a regression motion blurring model. Ghosting artifacts are avoided, and the energy function is minimized with customized image processing algorithms.

This paper focuses on two goals: deblurring images with small blur and eliminating the resulting artifacts. The parameter estimation problem aims to recover the latent clean signal. The model is designed to remove small motion blur and to minimize a loss function for denoising. The blur parameter estimation method is simple to implement and avoids the loss function in the case of unidimensional vectors. Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) measurements evaluate the restored image against the target image on the smartphone platform.

The proposed method consists of three phases. First, image blur is estimated while treating small noise. Second, a blur filter with improved blur estimation parameters is applied. Third, undesirable artifacts that may have been introduced by the improved blur estimation parameters are removed. The blur estimation parameters are improved by combining gradient parameters, and the blur filter operator is extended to a three-order model to restore the image. Image deblurring is thus an improved blur estimation problem.

Related work

The image blur problem has been discussed in many papers over the past three decades [1]. In [2], Baptiste Magnier works on reversing the heat equation; improving the sharpness of an image is closely related to image deblurring.

In [3] and [4], regression methods for non-blind and blind deblurring combine priors with optimized energy functions. Model regression was used in [5] and [6]: based on a space of high-quality images and a degradation model, a solution to the restoration problem was obtained. A common technique uses a large amount of data and then applies deep training models for restoration. Total variation is used in [7]. Later, in [8] and [9], the latest tendencies started designing other approaches to modeling high-quality signals, where wavelets or sparse representation dictionaries help to remove blur. Many articles extend research that leverages ideas from other domains: for example, there have been attempts to use image denoisers as priors, as in RED or plug-and-play methods, and, more recently, to leverage generative models learned from data as good image priors. In [10], Generative Adversarial Networks, variational autoencoders, or diffusion models are used as priors. The present model takes a different route and tries to solve this deep learning problem under very specific conditions. The papers [11, 12] focus on real-world scene datasets with varying color distributions and apply pre-processing operations to reduce the distribution variation. Heavy-tailed gradients have been used for small blur instead of a Gaussian distribution with adaptive scale, producing sharp reconstructed images at the cost of time-consuming and complex processing. The authors of [13, 14] proposed approximating the inverse operation to better control noise amplification and designed a multi-degree deblurring polynomial. In [12], an approach involving different orientation distributions is proposed, which makes the feature invariant to small shifts and noise, with an optimized loss function. Classical methods model the degradation of high-quality images and the image space, and solve a restoration problem based on these two things; the current trend is slightly different and is based on using large amounts of data and training deep models. To solve the restoration problem, the goal here is to target small blur instead of trying to accurately estimate a full blur kernel. The paper proposes a simple blending mask that blends the deblurred image and the input image, which removes small motion blur and artifacts.

Deblur modeling based on blur parameter estimation

Blur removal is modeled based on blur parameter estimation. Previous methods processed blurred images to reveal unseen image details. The proposed method models the prior information and produces an estimate of the image blur based on the Gaussian Orientation Filter (GOF). The filter parameters are modeled to satisfy a variety of light distributions. The proposed model has three goals: remove small blurs caused by camera shake and lens aberration; generate a sharper image without introducing any new artifacts; and run fast on smartphone platforms, accepting that some limitations, such as object movement and depth of field, might make the result unrealistic. These goals are achieved in three stages.

Fig. 1. GOF parameters (a, d, ρ)

Stage 1: blur estimation.

Blur estimation is based on parametrizing the space of possible blurs. The GOF operates on small noise and is defined by three parameters (a, d, ρ), where ρ is the main orientation of the blur while a and d are the standard deviations along the two principal axes, as shown in Fig. 1.
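As an illustration of this definition, the following is a minimal NumPy sketch (not code from the paper) that builds an anisotropic Gaussian kernel from the parameters (a, d, ρ), here named sigma_x, sigma_y and theta; the kernel size rule of roughly three standard deviations is an assumption.

```python
import numpy as np

def gaussian_orientation_kernel(sigma_x, sigma_y, theta, size=None):
    """Anisotropic Gaussian kernel with standard deviations (sigma_x, sigma_y)
    along the principal axes and main orientation theta (radians)."""
    if size is None:
        size = int(2 * np.ceil(3 * max(sigma_x, sigma_y)) + 1)  # cover ~3 sigma
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    # rotate coordinates into the blur frame
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-0.5 * ((xr / sigma_x) ** 2 + (yr / sigma_y) ** 2))
    return k / k.sum()

# Example: blur oriented at 30 degrees, stronger along the main axis
kernel = gaussian_orientation_kernel(2.0, 0.8, np.deg2rad(30))
```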

These assumptions hold for small blurs, so the space of blurs is parameterized with anisotropic Gaussian functions defined by the three parameters (a, d, ρ): an angle giving the main orientation of the blur, and the standard deviations along the principal axis and the orthogonal one. The gradient intensity is scanned over a discrete set of image directions i = 1, …, n. The minimum of the maximum gradient values determines the direction of the blur, from which the standard deviation of the Gaussian blur is estimated, as shown in Fig. 2.

The image gradient is related to the Gaussian blur standard deviation: scanning the image gradient in different directions identifies which direction is the blurriest. The estimation of the Gaussian parameters is shown in Fig. 2. The Gaussian parameters are obtained from the maximum gradient values in the blur direction and in the orthogonal one by the procedure below:

1. scan the gradient intensity at N different orientations;

2. compute the maximum gradient values R_ρ1, R_ρ2, R_ρ3, …, R_ρN;

3. find the minimum R_ρ to determine the direction of the blur corresponding to the standard deviation value (a);

4. estimate the Gaussian parameters with the equation

a_x = c/R_ρ − b,   a_y = c/R_ρ⊥ − b,   (1)

where R_ρ and R_ρ⊥ are the maximum gradient values along the blur direction and the orthogonal one.

The coefficient b in equation (1) is set to sense the blur, which leads to slightly noisier results. The image gradient is scanned in different directions to identify the blurry direction, and the maximum gradient values in that direction and in the orthogonal one, R_ρ and R_ρ⊥, give the estimates a_x and a_y of the Gaussian parameters.
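A minimal sketch of the Stage 1 procedure follows, assuming the frame is a grayscale array scaled to [0, 1]. The calibration constants c and b in equation (1) are placeholders (the paper does not list their values), and the orientation sampling with NumPy is an implementation choice, not the author's code.

```python
import numpy as np

def estimate_gof_parameters(img, n_orientations=16, c=0.36, b=0.47):
    """Stage 1 sketch: estimate (a_x, a_y, rho) from directional gradients.
    img is assumed to be a grayscale frame scaled to [0, 1];
    c and b are placeholder calibration coefficients, not values from the paper."""
    gy, gx = np.gradient(img.astype(np.float64))
    angles = np.linspace(0.0, np.pi, n_orientations, endpoint=False)
    # maximum absolute directional gradient R_rho for each scanned orientation
    R = np.array([np.abs(gx * np.cos(a) + gy * np.sin(a)).max() for a in angles])
    R = np.maximum(R, 1e-8)                      # guard against flat frames
    i_min = int(np.argmin(R))                    # blurriest direction
    rho = angles[i_min]
    i_orth = (i_min + n_orientations // 2) % n_orientations
    # equation (1): standard deviations from the max gradients along rho and its orthogonal
    a_x = max(c / R[i_min] - b, 0.0)
    a_y = max(c / R[i_orth] - b, 0.0)
    return a_x, a_y, rho
```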

Stage 2: multi-order model improves blur estimation.

The output of the first phase is a rough estimate of the blur parameters under the assumption that only a small blur is present when estimating the GOF parameters. To handle other blurs, the paper proposes an approach that associates different blur parameters to detect what the estimation misses, expanding the blur filter operator into three orders. The operator is close to the identity when the blur is small. The general filter for removing noise follows the equation

v = u*k + n, (2)

where v is the captured image; u is the underlying sharp image; k is the unknown blur kernel; n is additive noise; (*) is the convolution operation.
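For illustration, here is a small sketch of the forward model in equation (2) using SciPy convolution; the reflective boundary handling and the Gaussian noise level are assumptions, and the kernel could come from the GOF sketch above.

```python
import numpy as np
from scipy.ndimage import convolve

def blur_observation(u, k, noise_sigma=0.01):
    """Equation (2) sketch: v = u * k + n, with zero-mean Gaussian noise n."""
    v = convolve(u, k, mode="reflect")               # u * k (convolution)
    n = np.random.normal(0.0, noise_sigma, u.shape)  # additive noise
    return v + n
```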

The blind deconvolution in equation (2) is solved by minimizing a function with constraints on the blur kernel. The proposed approach minimizes a functional of the maximum gradient along the direction of the blur. The order model adds and subtracts the value of the estimated image blur: a multi-order equation combines the estimated GOF parameters to improve the blur estimate. The multi-order model modifies the general noise removal filter as follows:

h(k)v = h(k)ku + h(k)n, (3)

where the function h(k) is given by

h(k) = ak^2 + dk + c, (4)

where h(k) is the three-order deblur model and a, d, c are the coefficients of the deblur model.

Fig. 2. Blur estimation

The order of the model is three, and the coefficients (a, d, c) are set independently of the blur and the image; they are chosen according to the size, type, and effect of the blur. Coefficient c controls the assumed linear relation between the gradient feature and the level of blur, while d leads to sharper but slightly noisier results. Together, equations (3) and (4) characterize the proposed deblur modeling based on blur parameter estimation. The model forces the reconstructed images to be divergent, forming high-quality images. The multi-order coefficients are fixed and set independently of the blur in the image to generate appealing results and avoid amplified noise.
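A hedged sketch of how equations (3) and (4) could be applied in practice: the operator h(k) acts on the observed frame by repeated convolution with the estimated GOF kernel. The coefficient values below are placeholders chosen so that h(k) is close to the identity for small blur; the paper fixes (a, d, c) independently of the image but does not state their values here.

```python
import numpy as np
from scipy.ndimage import convolve

def multi_order_deblur(v, k, a=1.0, d=-3.0, c=3.0):
    """Equations (3)-(4) sketch: apply h(k) = a*k^2 + d*k + c to the observation v.
    k^2 means convolving twice with the estimated GOF kernel; the coefficients
    (a, d, c) are placeholder values, fixed independently of the image."""
    kv = convolve(v, k, mode="reflect")      # k * v
    kkv = convolve(kv, k, mode="reflect")    # k * k * v
    return a * kkv + d * kv + c * v
```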

Stage 3: deblur image (artifacts detected and removed).

Parameters extracted from the distribution of the light gradient are appropriate for high-frequency information such as image sharpness, and using this distribution makes the feature invariant to small shifts, noise, and other small changes in the images. Artifacts are generated by mis-estimation or by operator model mismatch: the blur estimation is rough, and the model might introduce artifacts that can be characterized as gradient reversal pixels. The equation below describes the treatment of the artifact problem:

M(x) = −∇v(x)·∇u(x), (5)

where M(x) marks pixels of the reconstruction with opposite gradients in the blurry image v and the restored image u. A merging filter is generated that balances the deblurred and input images, minimizing the gradient reversal; this removes most of the sharpening artifacts. The model is applicable to low and high-quality images, and the model parameters satisfy the minimized loss function. The loss function shows the mismatch between the prediction and the high-quality reference target: the squared pixel reconstruction error is computed directly to measure the variance in image pixels. Image deblurring does not have a unique solution; an infinite number of high-quality images leads to the same low-quality target.
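A minimal sketch of the artifact removal step built on equation (5), assuming the mask is normalized to [0, 1] and used as a per-pixel blending weight between the deblurred frame and the input frame; the normalization rule is an assumption, not the paper's exact formula.

```python
import numpy as np

def remove_gradient_reversal(v, u):
    """Stage 3 sketch: equation (5) M(x) = -grad(v) . grad(u); where M > 0 the
    restored gradients point against the input gradients (likely artifacts),
    so the output is pulled back toward the input frame there."""
    gvy, gvx = np.gradient(v)
    guy, gux = np.gradient(u)
    M = -(gvx * gux + gvy * guy)                          # equation (5)
    w = np.clip(M / (np.abs(M).max() + 1e-8), 0.0, 1.0)   # assumed normalization
    return w * v + (1.0 - w) * u                          # blend input v with deblurred u
```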

Predicting the average of all possible solutions is justified to optimize and minimize the loss function: under the best circumstances it can minimize the error perfectly. The predicted image is not completely deblurred because it is the average of many possible candidates. The blur filter is then applied through the sharpening feature and integrated into the images.

Experiment results

The experimental results present the evaluation and the processing cost. Motion deblurring and noise removal are implemented based on the estimation of the blur parameters, and the model is applied to eliminate slight image blurring in smartphone video. The dataset videos were captured by a smartphone platform with the specifications shown in Table 1.

The dataset is designed to cover diverse motion blur situations; the results obtained from examining videos of different environments are as follows:

Scenario 1. Indoor in the daytime. Includes one object moving parallel to the smartphone camera movement direction; both move slowly, and global motion dominates in the center.

Scenario 2. Indoor in the daytime. Includes two moving objects and the moving smartphone camera itself. The objects move faster than the camera, but all move in the same direction. Another shot is taken of the same object, but the object walks in front of a static smartphone camera. Local motion dominates in the video.

Scenario 3. Outdoors in the daytime. Includes one object moving parallel to the smartphone camera movement direction. The smartphone camera moves quickly while facing the object, so any change in image intensity results from the camera movement.

Scenario 4. Outdoors at nighttime. The video includes one moving object and a smartphone camera moving in the same direction but more slowly than the object.

Scenario 5. Outdoors at nighttime. Includes two objects moving against the smartphone camera direction; both objects move slowly, and global motion dominates. Another scene comes from the interference of the two objects.

The video is split into frames at 30 frames per second (fps). The performance is measured for each phase individually by comparing the blurred frame with the target. The experimental results are examined in a variety of circumstances, as shown in Fig. 3.
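For reference, a short sketch (assuming OpenCV is available) of splitting a clip into individual frames for per-frame processing; reading as grayscale is an implementation choice, not a requirement of the method.

```python
import cv2

def video_to_frames(path, as_gray=True):
    """Read a smartphone clip into a list of frames (assumed ~30 fps source)."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if as_gray:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(frame)
    cap.release()
    return frames
```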

In general, image quality measurement compares the content loss with the blur estimated in the frames. The estimation was adjusted to maximize the precision of the parameters, controlling blur detection from the low-quality input. The computation time is measured for each deblur model phase individually. The proposed deblur model was examined on various moving objects; for each video, the average of the results computed 10 times is reported.

The average computation time of the deblur model for an 8 MP frame on a modern mobile platform is 300 ms. The blur estimation phase takes more processing time than the other phases, and the estimation needs less time depending on the scene complexity. The main factors consuming time are the camera movement rate and whether the video was acquired in a brightly lit environment. Based on the results, an average value over the 30 fps was calculated for each video. Frame quality characterizes the sharpness of the restored frame.

Table 1. Smartphone specifications

Feature | Specification details
Display type | Super Retina XDR OLED, 120 Hz, HDR10
Display resolution | 1170 x 2532 pixels, 19.5:9 ratio
CPU | Hexa-core (2 x 3.23 GHz, 4 x 1.82 GHz)
GPU | Apple GPU (5-core graphics)
Main camera | Triple: 12 MP, f/1.5, 26 mm, 1.9 µm, dual pixel PDAF; 12 MP, f/2.8, 77 mm, 1.0 µm, 3x optical zoom; 12 MP, f/1.8, 13 mm, 1.0 µm; 3D scanner (depth)
Camera features | Dual-LED dual-tone flash, HDR photo
Video | 4K@24/30/60 fps, 1080p@30/60/120/240 fps, up to 60 fps

Table 2. Computation time, ms

Dataset | Blur estimation | Multi-order parameter model | Artifact removal
Scenario 1 | 66 | 13 | 9
Scenario 2 | 203 | 94 | 14
Scenario 3 | 235 | 35 | 16
Scenario 4 | 61 | 19 | 3
Scenario 5 | 152 | 75 | 20

Table 3. PSNR, dB

Dataset | Blur estimation | Multi-order parameter model | Artifact removal
Scenario 1 | 25.345 | 27.471 | 26.457
Scenario 2 | 27.681 | 29.356 | 25.395
Scenario 3 | 29.426 | 28.910 | 28.921
Scenario 4 | 29.168 | 30.001 | 29.041
Scenario 5 | 26.534 | 27.325 | 27.375

Table 4. SSIM

Dataset | Blur estimation | Multi-order parameter model | Artifact removal
Scenario 1 | 0.958 | 0.681 | 0.429
Scenario 2 | 0.953 | 0.654 | 0.579
Scenario 3 | 0.947 | 0.708 | 0.558
Scenario 4 | 0.950 | 0.595 | 0.403
Scenario 5 | 0.955 | 0.789 | 0.432

SSIM is the similarity measurement; the resulting SSIM index is a decimal value between −1 and 1.

A distributed blur distance exists between each model phase and the target image. The PSNR and SSIM assessments of the frame data, each of which depends on the previous stage, are given in Table 3 and Table 4.
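A small sketch of the per-phase evaluation, assuming scikit-image is used for the metrics and 8-bit frames (data_range = 255); the paper does not specify the implementation.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_phase(restored, target):
    """PSNR (dB) and SSIM of one phase output against the target frame (sketch)."""
    restored = restored.astype(np.float64)
    target = target.astype(np.float64)
    psnr = peak_signal_noise_ratio(target, restored, data_range=255)
    ssim = structural_similarity(target, restored, data_range=255)
    return psnr, ssim
```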

Depending on the dominant type of motion, global or local, the deblurred frame is generated by the corresponding phase of the model. SSIM and PSNR indicate sharper grades after the removal of artifacts. The Gaussian parameters of the blur are estimated using the empirical observation that sharp images have more or less the same maximum gradient intensity in every direction, and the image gradient is related to the Gaussian blur standard deviation. Fig. 4 displays how the maximum gradient values in the blur direction and the orthogonal one estimate the Gaussian parameters; it also shows how the different multi-orders and shapes are affected by the parameters a, d, ρ over 100 iterations. An infinite number of high-quality images leads to the same low-quality observation, which implies minimizing the loss function so that the prediction is the average of all possible solutions. Minimizing the error makes the predicted image the average of many possible candidates, i.e. a regression to the mean, as shown in Fig. 5.

The loss function measures the mismatch in image pixels by computing the Mean Square Error (MSE) between the prediction and the reference target. The blur parameter estimation error in Fig. 5 shows the relation between the estimated gradient blur values (a_x, a_y).

Fig. 4. Distribution of the parameters for blur estimation (a, d, ρ)

Fig. 5. Blur model estimated error parameters (MSE)

A threshold value between −1 and 1 is applied to separate accepted noise results from rejected noise values. The estimation is close to the real value. The multi-order stage can enhance the blur estimation results, as in the previous stage, to eliminate artifacts. Comparing the results of Table 2 and Table 3 shows that the artifact removal phase does not noticeably affect the accuracy of the blur estimation. The scenarios have high PSNR and SSIM values, and the processing time is reduced by about 50 %. The complexity of the estimation procedure is an important factor that affects the performance and depends on the motion in the video.

Conclusion

Image deblurring is posed as an improved blur parameter estimation problem whose goal is to recover the hidden clean signal. The variation of the gradient magnitude and direction in the image was modeled, and a three-order model was used to restore images by minimizing a loss function of the blur. The improved blur estimation coefficients produce a deblurred image whose residual blur is close to the identity, and the result moves toward a low-degree average. The approximated model then processes the image noise remaining after the filter. Multiple high-quality signals can lead to the same target image. This paper proposed solving the improved blur parameter estimation problem by a variational formulation, which builds an energy function with multiple terms. The optimization problem is solved by fitting the data to the observable image and is found compatible with the regression model. The accuracy of the motion deblurring was affected by the blur estimation, the improved model, and the artifact detection and removal. PSNR and SSIM were used to evaluate the performance of the proposed model for each phase.

References

1. Baptiste M., Behrang M., Cédric M. A shock filter for image deblurring and enhancement with oriented hourglass tensor. Proc. of the 11th International Symposium on Image and Signal Processing and Analysis (ISPA), 2019, pp. 111-116. https://doi.org/10.1109/ispa.2019.8868552

2. Lai W.-S., Huang J.-B., Hu Z., Ahuja N., Yang M.-H. A comparative study for single image blind deblurring. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1701-1709. https://doi.org/10.1109/cvpr.2016.188

3. Zhang K., Luo W., Zhong Y., Ma L., Stenger B., Liu W., Li H. Deblurring by realistic blurring. Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 2734-2743. https://doi.org/10.1109/cvpr42600.2020.00281

4. Wieschollek P., Hirsch M., Scholkopf B., Lensch H. Learning blind motion deblurring. Proc. of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 231-240. https://doi.org/10.1109/iccv.2017.34

5. Pan J., Sun D., Pfister H., Yang M.-H. Deblurring images via dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, vol. 40, no. 10, pp. 2315-2328. https://doi.org/10.1109/tpami.2017.2753804

6. Chen L., Fang F., Wang T., Zhang G. Blind image deblurring with local maximum gradient prior. Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 1742-1750. https://doi.org/10.1109/cvpr.2019.00184

7. Guo Q., Feng W., Gao R., Liu Y., Wang S. Exploring the effects of blur and deblurring to visual object tracking. IEEE Transactions on Image Processing, 2021, vol. 30, pp. 1812-1824. https://doi.org/10.1109/tip.2020.3045630

8. Whang J., Delbracio M., Talebi H., Saharia C., Dimakis A.G., Milanfar P. Deblurring via stochastic refinement. Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 16272-16282. https://doi.org/10.1109/cvpr52688.2022.01581

9. Carbajal G., Vitoria P., Lezama J., Musé P. Blind motion deblurring with pixel-wise kernel estimation via kernel prediction networks. IEEE Transactions on Computational Imaging, 2023, vol. 9, pp. 928-943. https://doi.org/10.1109/tci.2023.3322012

10. Zhang R., Isola P., Efros A.A., Shechtman E., Wang O. The unreasonable effectiveness of deep features as a perceptual metric. Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 586-595. https://doi.org/10.1109/cvpr.2018.00068

11. Niklaus S., Mai L., Liu F. Video frame interpolation via adaptive separable convolution. Proc. of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 261-270. https://doi.org/10.1109/iccv.2017.37

12. Ge X., Liu J., Hu D., Tan J. An extended sparse model for blind image deblurring. Signal, Image and Video Processing, 2024, vol. 18, no. 2, pp. 1863-1877. https://doi.org/10.1007/s11760-023-02888-2

13. Fergus R., Singh B., Hertzmann A., Roweis S.T., Freeman W.T. Removing camera shake from a single photograph. ACM Transactions on Graphics, 2006, vol. 25, no. 3, pp. 787-794. https://doi.org/10.1145/1141911.1141956

14. Delbracio M., Garcia-Dorado I., Choi S., Kelly D., Milanfar P. Polyblur: Removing mild blur by polynomial reblurring. IEEE Transactions on Computational Imaging, 2021, vol. 7, pp. 837-848. https://doi.org/10.1109/tci.2021.3100998


Author

Resen Adhab Sallama — PhD, Lecturer, Directorate General of Vocational Education — Vocational Edu Iraq, Bagdad, 10001, Iraq, salamaresen@gmail.com, https://orcid.org/0009-0007-5044-8857


Received 19.03.2024

Approved after reviewing 23.04.2024

Accepted 16.05.2024


This work is licensed under a Creative Commons Attribution-NonCommercial license.
