Evaluation of the change in synthetic aperture radar imaging using transfer learning and residual network

I. Hamdi 1,2, Y. Tounsi 2, M. Benjelloun 1, A. Nassim 2

1 Laboratory of Physics of Nuclear, Atomic and Molecular Techniques, Chouaib Doukkali University, Faculty of Sciences, B.P. 20, El Jadida, Morocco,
2 Measurement and Control Instrumentation Laboratory IMC, Department of Physics, Chouaib Doukkali University, Faculty of Sciences, B.P. 20, El Jadida, Morocco

Abstract

Change detection from synthetic aperture radar images has become a key technique for detecting changed areas related to phenomena such as floods and deformation of the Earth's surface. This paper proposes a method for change detection from two synthetic aperture radar images based on transfer learning and a Residual Network with 18 layers (ResNet-18). Before applying the proposed technique, batch denoising using a convolutional neural network is applied to the two input synthetic aperture radar images for speckle noise reduction. To validate the performance of the proposed method, three known synthetic aperture radar datasets (the Ottawa, Mexican, and Taiwan Shimen datasets) are exploited in this paper. The use of these datasets is important because the ground truth is known, so they can play the role of a numerical simulation. The change image obtained by the proposed method is evaluated using two image metrics. The first metric is the image quality index, which measures the similarity between the obtained image and the ground truth image; the second is the edge preservation index, which measures the ability of the method to preserve edges. Finally, the method is applied to determine the changed area using two Sentinel-1B synthetic aperture radar images of the Eddahbi dam located in Morocco.

Keywords: SAR images; change detection; transfer learning; residual network.

Citation: Hamdi I, Tounsi Y, Benjelloun M, Nassim A. Evaluation of the change in synthetic aperture radar imaging using transfer learning and residual network. Computer Optics 2021; 45(4): 600-607. DOI: 10.18287/2412-6179-CO-814.

Introduction

In satellite imagery, change detection is a feature of interest for many applications, such as urban growth monitoring. The objective is to identify and analyze changes in a scene from images acquired at different dates. In this context, radar imagery appears to be one of the most relevant means. Indeed, thanks to its ability to observe at any time of the day or night, it is an indispensable means of observation in emergencies where weather conditions are unfavorable for acquisition in the optical domain.

Synthetic aperture radar (SAR) is considered an active and powerful remote sensing technology for ground information collection at any time, whatever the conditions [1, 2]. Change detection from remote sensing SAR images assists in assessing disasters and predicting their development trends, updating geographic data, and monitoring land use. Generally, the principal steps of change detection include preprocessing of the input SAR images, computation of the difference between these images, extraction of the change information, and evaluation of the detection results [3].

Several approaches for change detection in SAR images have been proposed and exploited. Mu et al. [4] present an accelerated genetic algorithm based on search-space decomposition. The difference-image step is realized by decomposing the difference into sub-blocks [5]; the detected change in each sub-block then identifies the changed, unchanged, and undetermined pixels. These undetermined pixels are optimized, and the final change detection result is obtained by reconstruction of all sub-blocks. A multi-objective fuzzy clustering method [6] was proposed for change detection in SAR images. This method optimizes two conflicting objective functions constructed from the perspective of reducing speckle noise and preserving detail. Furthermore, a hybrid approach based on fuzzy c-means and Gustafson-Kessel clustering was constructed for unsupervised change detection in multitemporal SAR images [7]. Other works focus on reducing the speckle noise effect in SAR images to improve change detection accuracy [8 - 10].

In other work, a method based on salient image guidance and an accelerated genetic algorithm was proposed [4]. In their work [11], the authors apply a saliency detection model to the difference image in order to extract the pixels containing the change.

With the aim of improving change detection performance and accuracy while reducing the running time, Wenyan et al. proposed a method based on equal weight image fusion and an adaptive threshold in the NSST domain [12].

Recently, deep learning-based models have gained great interest among researchers in the fields of change detection and SAR image analysis. The authors of reference [13] exploit a convolutional neural network (CNN) with the wavelet transform for sea ice change detection. Since SAR images are characterized by strong speckle noise, the change detection accuracy becomes low; for this reason, they introduce a wavelet thresholding approach for speckle noise reduction. The CNN model then classifies the image pixels into changed and unchanged pixels. Li et al. present a novel method based on a CNN [14]; the principal idea of their work was to generate classification results from the SAR images directly, without any preprocessing step. Li et al. obtain the final change detection results by first producing false labels through unsupervised spatial fuzzy clustering, then training the CNN network, and finally producing the results with the trained CNN.

Gao et al. proposed two important works dedicated to change detection. The first is based on a neighborhood-based ratio and an extreme learning machine [15]; the neighborhood-based ratio is used to obtain the pixels that have a high probability of being changed or unchanged. The second work of Gao et al. concerns another performant method based on a channel-weighting-based deep cascade network [16]; this work was proposed to solve problems of other deep learning-based methods such as overfitting and exploding gradients.

1. Change detection methodology

Consider two SAR images of the same ground area taken at times t1 and t2, respectively. The goal is to design an efficient change detection method to determine the changes between the two images. Before beginning the change detection procedure between the two SAR images, geometric correction and registration are essential to align the two input images in the same coordinate frame.

The general procedure used to detect change is based on three important steps.

1. Preprocessing: consists of radiometric calibration, orthorectification, and speckle noise reduction of these images.

2. Computation of the image difference: The ways to generate a difference image include the difference and ratio methods, which involve subtracting and dividing the corresponding pixels of the two images, respectively (see the sketch after this list).

3. Analysis of the image difference: The difference image represents a correlation between the two states (before and after the change), and its analysis concerns the extraction of information related to the change.
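As an illustration of step 2 (not code from the paper), the following NumPy sketch computes a subtraction-based difference image and a log-ratio image; the log-ratio is a common choice for SAR because it turns multiplicative speckle into an additive term.

```python
import numpy as np

def difference_image(img_t1: np.ndarray, img_t2: np.ndarray) -> np.ndarray:
    """Subtraction-based difference image (absolute difference)."""
    return np.abs(img_t2.astype(np.float64) - img_t1.astype(np.float64))

def log_ratio_image(img_t1: np.ndarray, img_t2: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Ratio-based difference image in the log domain (robust to multiplicative speckle)."""
    return np.abs(np.log((img_t2.astype(np.float64) + eps) /
                         (img_t1.astype(np.float64) + eps)))
```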

For a deep learning-based method, the general procedure used to estimate or predict a change from the images is shown in fig. 1a.

A binarization step is vital to train the architecture and improve the accuracy of the results when using an image dataset. To detect the change from the SAR images, two GRD images with vertical-vertical (VV) polarization are used; then a binarized difference image is computed (fig. 1b).
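The paper does not state which thresholding rule produces the binarized difference image; the sketch below assumes Otsu's threshold purely for illustration.

```python
import numpy as np
from skimage.filters import threshold_otsu  # assumption: Otsu's rule, not specified in the paper

def binarize_difference(diff: np.ndarray) -> np.ndarray:
    """Binarize a difference image into changed (1) / unchanged (0) pixels."""
    t = threshold_otsu(diff)
    return (diff > t).astype(np.uint8)
```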

After that, a network was constructed, and the binarized difference image was sent to the input layer to train the network under supervision. Finally, after several iterative trainings, a change map was obtained at the output layer of the network, as shown in fig. 1c.

Fig. 1. General flowchart for change detection using a deep learning-based method (a); binarized image difference computing result (b); color composed change map image (c)

The dataset used for training contains 1104 images obtained after a data augmentation step that is explained in section 4.

The proposed approach in this work contains four important steps: image pre-processing, image clustering, data augmentation, and transfer learning based on a Residual Network with 18 layers (ResNet-18).

2. Image pre-processing

This step concerns the pre-processing of the Sentinel-1 datasets; it begins by reducing speckle noise, enhancing image contrast, and generating binary change bands. For multiplicative speckle noise reduction, we exploit our recently proposed convolutional neural network architecture [17].
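The despeckling network of [17] is not reproduced here; as a stand-in only, the following sketch applies a classical Lee filter, which is not the authors' method but shows where the denoising step sits in the pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img: np.ndarray, size: int = 7) -> np.ndarray:
    """Classical Lee despeckling filter (stand-in for the CNN despeckler of [17])."""
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size)
    local_sq_mean = uniform_filter(img ** 2, size)
    local_var = local_sq_mean - local_mean ** 2
    # Overall noise variance estimated from the image itself (simple heuristic).
    noise_var = local_var.mean()
    weight = local_var / (local_var + noise_var + 1e-12)
    return local_mean + weight * (img - local_mean)
```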

3. Image clustering

An image clustering approach is used to segment changed regions and distinguish them from areas without change. In this work, we exploited Mask R-CNN [18]. It is a simple, flexible, and general framework for object instance segmentation. It was proposed in 2017 and is considered one of the most powerful instance segmentation techniques to date. As shown in fig. 2, which represents the structure of Mask R-CNN, it consists of a backbone CNN, a region proposal network, a region-of-interest alignment stage, and three output branches: classification, box regression, and mask prediction. Firstly, the features are searched through the region proposal network for zones that may contain foreground.

This is illustrated by rectangles of different sizes that cover such regions, and the suggested rectangles are used as bounding boxes. Secondly, these bounding boxes are exploited to obtain regions of interest and then to perform classification and bounding-box regression.
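As an illustration of how such an instance segmentation stage can be invoked (the exact implementation and backbone used by the authors are not specified), here is a sketch using torchvision's pre-trained Mask R-CNN with a ResNet-50 FPN backbone, which is an assumption.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Pre-trained Mask R-CNN from torchvision; the ResNet-50 FPN backbone
# is an assumption, not taken from the paper.
model = maskrcnn_resnet50_fpn(pretrained=True).eval()

def segment_candidate_regions(image_chw: torch.Tensor, score_thr: float = 0.5):
    """Return binary instance masks above a confidence threshold.

    image_chw: float tensor of shape (3, H, W) with values in [0, 1].
    """
    with torch.no_grad():
        out = model([image_chw])[0]          # dict with boxes, labels, scores, masks
    keep = out["scores"] > score_thr
    return out["masks"][keep] > 0.5          # (N, 1, H, W) boolean masks
```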

Fig. 2. The Mask R-CNN framework

4. Data augmentation

The proposed approach started by creating our dataset by annotating 100 image patches of 224 x 224 x 3 containing two classes: changed and non-changed areas. This particular patch size was selected to match the size of the input layer of the ResNet-18 deep neural network, which is 224 x 224 x 3.

Moreover, the dataset was augmented to 1104 images by performing random translations along the x-axis, random flips and rotations along the y-axis, and by resizing the images and scaling the patches.

Going from 100 images to 1104 images, the deep learning architecture achieves better performance and better accuracy values thanks to data augmentation.
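The paper names the augmentation operations but not their parameters; the ranges in the following torchvision sketch are therefore assumptions.

```python
from torchvision import transforms

# Augmentation pipeline approximating the operations listed above;
# the exact ranges are assumptions, the paper only names the operation types.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),          # random flip
    transforms.RandomRotation(degrees=15),           # random rotation
    transforms.RandomAffine(degrees=0,
                            translate=(0.1, 0.0)),   # random translation along x
    transforms.Resize((224, 224)),                   # match the ResNet-18 input size
    transforms.ToTensor(),
])
```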

5. Transfer learning and residual network

We trained a transfer learning framework on the previously segmented areas, thus combining both methods to predict two classes of objects in the satellite image: changed areas and non-changed areas (an unidentified-area class could possibly be added).

Transfer learning is an approach used to improve the learning of a new task by transferring and adapting knowledge from a similar task that has already been learned by a trained network. It essentially consists in reusing the weights of a pre-trained deep neural network while replacing the last layers with new ones, which are retrained to provide a model that better fits the target objects and task.

The choice of transfer learning was motivated by our relatively small dataset size, which is usually the case in satellite image and remote sensing applications such as flood area detection, while retaining the predictive power of a deep learning model.

The CNN architecture implemented in this paper is based on the ResNet-18 architecture, which represents a good balance between depth (computation time) and performance. ResNet was introduced during the 2015 ImageNet Large Scale Visual Recognition Challenge and won it with an error rate of 3.57 % [19] (depending on their skill and expertise, humans generally hover around a 5 - 10 % error rate). This network was pre-trained on the ImageNet database, which includes more than a million images. As a result, the network has learned rich feature representations for a wide range of images. The network can classify images into 1000 object categories, such as keyboard, mouse, pencil, and many animals [20].

The structure of ResNet-18 includes 18 layers organized in 5 convolutional block stages [20] (see table 1 for more details).

As we can see in table 1, the ResNet-18 architecture contains the following layers:

A convolution with a kernel size of 7 x 7 and 64 different kernels, all with a stride of 2, giving us 1 layer.

Next, we see max pooling, also with a stride of 2.

In the next convolution, there is a 3*3, 64 kernel followed by another 3*3, 64 kernel; these two layers are repeated 2 times in total, giving us 4 layers in this step.

Next, we see a 3*3, 128 kernel followed by another 3*3, 128 kernel; this step is also repeated 2 times, giving us 4 layers.

Here (*) denotes the convolution product.

After that, there is a 3*3, 256 kernel and one more 3*3, 256 kernel; this is repeated 2 times, giving a total of 4 layers.

And then again, a 3*3, 512 kernel followed by a 3*3, 512 kernel is repeated 2 times, giving a total of 4 layers.

Table 1. ResNet-18 architecture

Layer name      | Output size     | ResNet-18
conv1           | 112 x 112 x 64  | 7 x 7, 64, stride 2
                |                 | 3 x 3 max pool, stride 2
conv2_x         | 56 x 56 x 64    | [3 x 3, 64; 3 x 3, 64] x 2
conv3_x         | 28 x 28 x 128   | [3 x 3, 128; 3 x 3, 128] x 2
conv4_x         | 14 x 14 x 256   | [3 x 3, 256; 3 x 3, 256] x 2
conv5_x         | 7 x 7 x 512     | [3 x 3, 512; 3 x 3, 512] x 2
Average pool    | 1 x 1 x 512     | 7 x 7 average pool
Fully connected | 1000            | 512 x 1000 fully connected
Softmax         | 1000            |

After that, we do an average pooling and end with the fully connected layer containing 1000 nodes and, at the end, a softmax function; this gives us 1 more layer.

Finally, the last three layers of our ResNet-18 were replaced by a new fully connected layer, a softmax layer, and a new classification output layer adapted to our dataset classes (changed, non-changed).
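The authors carried out this step with MATLAB's Deep Learning Toolbox (see section 6); an equivalent minimal sketch in PyTorch, where the ImageNet-pretrained ResNet-18 head is replaced by a two-class layer, could look as follows.

```python
import torch.nn as nn
from torchvision.models import resnet18

# Load the ImageNet-pretrained ResNet-18 and replace its classification head
# with a new 2-class layer (changed / non-changed).
model = resnet18(pretrained=True)

for param in model.parameters():      # optionally freeze the pre-trained weights
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 2)   # new fully connected layer (trainable)
# The softmax / classification stage is applied by the loss (nn.CrossEntropyLoss)
# during training, mirroring the softmax and classification output layers above.
```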

6. Experimental results and analysis

The transfer learning experiments were run in MATLAB 2020a using the Deep Learning Toolbox model for the ResNet-18 network, on a 6-core Intel i5-9600K CPU at 4.5 GHz and two Nvidia GPUs: an RTX 2070 (8 GB) and a GTX 1050 Ti (4 GB).

The entire network was trained using a modified version of ResNet-18. The training time was 1 minute and 13 seconds.

The ResNet-18 deep learning architecture was used as the basis of the transfer learning approach. After annotating the image patches extracted from the satellite image and labeling them as changed or non-changed areas, the dataset was augmented to 1104 patches before launching the deep learning algorithm. After 810 iterations over 10 epochs, using a mini-batch size of 10 images and a validation frequency of 100 iterations, an accuracy of 94.84 % was obtained. The training and testing examples were selected randomly from the dataset images: our dataset was randomly divided into 70 % of the images for learning and 30 % for testing.
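A minimal sketch of an equivalent training configuration follows (PyTorch rather than the MATLAB toolbox actually used; the folder layout, optimizer, and learning rate are assumptions, while the batch size, number of epochs, and 70/30 split follow the text).

```python
import torch
from torch import nn, optim
from torch.utils.data import random_split, DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder
from torchvision.models import resnet18

# Pretrained backbone with a 2-class head, as in the previous sketch.
model = resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

# Hypothetical folder layout with "changed" / "non-changed" subfolders.
dataset = ImageFolder("patches", transform=transforms.Compose(
    [transforms.Resize((224, 224)), transforms.ToTensor()]))
n_train = int(0.7 * len(dataset))                                   # 70 % / 30 % random split
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])

train_loader = DataLoader(train_set, batch_size=10, shuffle=True)   # mini-batches of 10
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)    # optimizer is an assumption

model.train()
for epoch in range(10):                                             # 10 epochs, as in the text
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```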

7. Use of dataset

To evaluate the effectiveness and performance of the proposed method, three real multi-temporal SAR datasets acquired by different sensors are exploited here. Geometric corrections and co-registration were applied to these datasets before applying the proposed method.

The first dataset is the Ottawa dataset, provided by Defence Research and Development Canada, Ottawa. It contains two SAR images with a size of 290 x 350 pixels acquired by the RADARSAT SAR sensor. These images were registered by a specific algorithm in advance. The two images and the corresponding ground truth are shown in fig. 3.


Fig. 3. Images for Ottawa dataset: (a) image acquired in July 1997, (b) image acquired in August 1997, (c) image of the ground truth

The second dataset, called the Mexican dataset, is presented in fig. 4. This dataset also contains two SAR images, with a size of 512 x 512 pixels, taken in April 2000 and May 2002. The white area in the change reference map shown in fig. 4c represents the changed area related to the destruction of plants after a forest fire in a Mexican city. The change reference map was obtained through relevant expert knowledge combined with real data on the local geography.

The third and final dataset is the upstream dataset of the Shimen Reservoir in Taiwan, shown in fig. 5. The two SAR images in this dataset were acquired by the FORMOSAT-2 satellite in August 2004 and September 2004, both with a size of 349 x 252 pixels. This dataset shows the change after the area was affected by Typhoon Avery.

The three datasets described here are exploited to study the performance of our proposed method. In other words, the proposed method is applied to each dataset, and the obtained results are compared with the corresponding ground truth image using two metrics: the image quality index (Q) and the edge preservation index (EPI) [21, 22]. Fig. 6 shows the procedure used in this paper.

The first metric is the well-established image quality index Q, defined as:

Q = \frac{4\,\sigma_{rm}\,\langle I_r\rangle\langle I_m\rangle}{\left(\sigma_r^2+\sigma_m^2\right)\left(\langle I_r\rangle^2+\langle I_m\rangle^2\right)}

Fig. 4. Images for Mexican dataset: (a) image acquired in April 2000, (b) image acquired in May 2002, (c) image of the ground truth


Fig. 5. Images for Taiwan Shimen dataset: (a) image acquired in August 2004, (b) image acquired in September 2004, (c) image of the ground truth

The image quality index measures three important quantities of the image: the degree of correlation, the distortion of contrast, and the distortion of luminance.

The symbol ⟨·⟩ denotes the average, σ_m and σ_r are the standard deviations of the evaluated image I_m and the reference image I_r, respectively, and σ_rm is their covariance. The Q index takes values in the range [-1; 1], where Q = 1 means that the two compared images are perfectly similar.
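A minimal sketch of the Q computation is given below, evaluated globally over the whole image for brevity (the original index of [22] averages the measure over sliding windows).

```python
import numpy as np

def quality_index(ref: np.ndarray, img: np.ndarray) -> float:
    """Universal image quality index Q [22], computed globally over the whole image."""
    ref = ref.astype(np.float64).ravel()
    img = img.astype(np.float64).ravel()
    mr, mm = ref.mean(), img.mean()          # means of reference and evaluated images
    vr, vm = ref.var(), img.var()            # variances
    cov = ((ref - mr) * (img - mm)).mean()   # covariance sigma_rm
    return 4 * cov * mr * mm / ((vr + vm) * (mr ** 2 + mm ** 2))
```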

The second metric measures edge preservation performance; it is called the edge preservation index (EPI) and is defined as:


EPI(I_r, I_m) = \frac{\sum_{i,j}\left|I_m(i,\,j+1)-I_m(i,\,j)\right|}{\sum_{i,j}\left|I_r(i,\,j+1)-I_r(i,\,j)\right|}

The EPI takes values in the range [0; 1]; a higher EPI value means better edge preservation. The performance of our method is also compared with two other techniques, the Deep Cascade Network (DCNet) [16] and the fuzzy clustering method (FCM) [23]. The results obtained using our method, FCM, and DCNet are presented in fig. 7, fig. 8, and fig. 9 for the Ottawa, Mexico, and Shimen datasets, respectively.
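For completeness, a minimal sketch of the EPI defined above, using horizontal first differences, is shown here.

```python
import numpy as np

def edge_preservation_index(ref: np.ndarray, img: np.ndarray) -> float:
    """EPI: ratio of summed horizontal gradient magnitudes (evaluated vs. reference)."""
    ref = ref.astype(np.float64)
    img = img.astype(np.float64)
    num = np.abs(np.diff(img, axis=1)).sum()   # sum of |I_m(i, j+1) - I_m(i, j)|
    den = np.abs(np.diff(ref, axis=1)).sum()   # sum of |I_r(i, j+1) - I_r(i, j)|
    return num / den
```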

Fig. 6. The procedure of the evaluation of change from two input SAR images

Fig. 7. Change-detection results of Ottawa data set: (a) FCM, (b) DCNet and (c) our method

According to the qualitative results presented in fig. 7, fig. 8, and fig. 9 and the quantitative appraisal summarized in table 2, we can deduce that the proposed method detects the change accurately.

According to the values obtained in table 2, our method presents a high degree of correlation and similarity with respect to the reference image, and this holds for all three datasets.


Fig. 8. Change-detection results of Mexico data set: (a) FCM, (b) DCNet and (c) our method


Fig. 9. Change-detection results of Shimen data set in Taiwan: (a) FCM, (b) DCNet and (c) our method

Table 2. Performance of the proposed method in terms of Q and EPI

Dataset         | Method     | Q (%)  | EPI (%)
Ottawa dataset  | FCM        | 88.30  | 82.34
                | DCNet      | 90.56  | 90.23
                | Our method | 92.51  | 94.20
Mexico dataset  | FCM        | 89.01  | 86.59
                | DCNet      | 90.90  | 94.47
                | Our method | 91.59  | 95.32
Shimen dataset  | FCM        | 85.26  | 89.61
                | DCNet      | 93.55  | 95.30
                | Our method | 94.86  | 95.22

8. Application to a Sentinel 1 dataset

After the validation of the proposed method for change detection on three well-known datasets from the literature, we apply it to determine the change caused by flooding at the EL MANSOUR EDDAHBI dam, located in the south of Morocco near the city of Ouarzazate and constructed on the Draâ river, with coordinates between 30°55'23.1'' N, 6°51'24.4'' W and 30°58'09.4'' N, 6°41'44.8'' W, as shown in fig. 10.

The dataset that we used for the change detection approach was acquired by the Sentinel-1B satellite and is composed of two SAR images, as shown in fig. 11:

1. Scene-1 before overflow (fig. 11a): acquired on September 11th, 2018, in Interferometric Wide swath (IW) Mode and Vertical send - Vertical received (VV) polarization configuration, with a pixel resolution of 10 m.

2. Scene-2 after overflow (fig. 11b): acquired on September 23rd, 2018, in Interferometric Wide swath (IW) Mode and Vertical send - Vertical received (VV) polarization configuration, with a pixel resolution of 10 m.

Fig. 10. Study Area of EL MANSOUR EDDAHBI DAM (Google Maps)

Fig. 11. Sentinel 1B SAR images of the study area acquired on (a) September 11th, 2018, (b) September 23rd, 2018

After pre-processing of the SAR images using SNAP software developed by the European Space Agency (ESA), we apply the proposed method, and the change caused by the flooding is shown in fig. 12.

Fig. 12. Change detection result

Conclusion

In this paper, a network based on the transfer learning approach and a residual network is constructed to evaluate change from SAR images. Three known datasets are used to study the performance of the network through a quantitative appraisal based on two powerful metrics: the image quality index and the edge preservation index.

The obtained experimental results verify the validity and robustness of the proposed method against the two other well-known techniques used for comparison (DCNet and FCM). The method still needs some improvement for application to flood monitoring, which is also the focus of our next work.

References

[1] Bindschadler RA, Jezek KC, Crawford J. Glaciological investigations using the synthetic aperture radar imaging system. Ann Glaciol 1987; 9: 11-19. DOI: 10.1017/S0260305500000318.

[2] Valenzuela GR. An asymptotic formulation for SAR images of the dynamical ocean surface. Radio Sci 1980; 15(1): 105-114. DOI: 10.1029/RS015i001p00105.

[3] Yang J, Sun W. Automatic analysis of the slight change image for unsupervised change detection. JARS 2015; 9(1): 095995. DOI: 10.1117/1.JRS.9.095995.

[4] Mu C-H, Li C-Z, Liu Y, Qu R, Jiao L-C. Accelerated genetic algorithm based on search-space decomposition for change detection in remote sensing images. Appl Soft Comput 2019; 84: 105727. DOI: 10.1016/j.asoc.2019.105727.

[5] Mu C-H, Li C-Z, Liu Y, Qu R, Jiao L-C. Accelerated genetic algorithm based on search-space decomposition for change detection in remote sensing images. Appl Soft Comput 2019; 84: 105727. DOI: 10.1016/j.asoc.2019.105727.

[6] Li H, Gong M, Wang Q, Liu J, Su L. A multiobjective fuzzy clustering method for change detection in SAR images. Appl Soft Comput 2016; 46: 767-777. DOI: 10.1016/j.asoc.2015.10.044.

[7] Mishra NS, Ghosh S, Ghosh A. Fuzzy clustering algorithms incorporating local information for change detection in remotely sensed images. Appl Soft Comput 2012; 12(8): 2683-2692. DOI: 10.1016/j.asoc.2012.03.060.

[8] Zhuang H, Fan H, Deng K, Yu Y. An improved neighborhood-based ratio approach for change detection in SAR images. Eur J Remote Sens 2018; 51(1): 723-738.

[9] White RG. Change detection in SAR imagery. Int J Remote Sens 1991; 12(2): 339-360. DOI: 10.1080/01431169108929656.

[10] Bao M. Backscattering change detection in SAR images using wavelet techniques. IEEE 1999 International Geoscience and Remote Sensing Symposium (IGARSS'99) 1999; 3: 1561-1563. DOI: 10.1109/IGARSS.1999.772019.

[11] Mu C, Li C, Liu Y, Sun M, Jiao L, Qu R. Change detection in SAR images based on the salient map guidance and an accelerated genetic algorithm. 2017 IEEE Congress on Evolutionary Computation (CEC) 2017: 1150-1157. DOI: 10.1109/CEC.2017.7969436.

[12] Wenyan Z, Zhenhong J, Yu Y, Yang J, Kasabov N. SAR image change detection based on equal weight image fusion and adaptive threshold in the NSST domain. Eur J Remote Sens 2018; 51(1): 785-794. DOI: 10.1080/22797254.2018.1491804.

[13] Gao F, Wang X, Gao Y, Dong J, Wang S. Sea ice change detection in SAR images based on convolutional-wavelet neural networks. IEEE Geosci Remote Sens Lett 2019; 16(8): 1240-1244. DOI: 10.1109/LGRS.2019.2895656.

[14] Li Y, Peng C, Chen Y, Jiao L, Zhou L, Shang R. A deep learning method for change detection in synthetic aperture radar images. IEEE Trans Geosci Remote Sens 2019; 57(8): 5751-5763. DOI: 10.1109/TGRS.2019.2901945.

[15] Gao F, Dong J, Li B, Xu Q, Xie C. Change detection from synthetic aperture radar images based on neighborhood-based ratio and extreme learning machine. JARS 2016; 10(4): 046019. DOI: 10.1117/1.JRS.10.046019.

[16] Gao Y, Gao F, Dong J, Wang S. Change detection from synthetic aperture radar images based on channel weighting-based deep cascade network. IEEE J Sel Top Appl Earth Obs Remote Sens 2019; 12(11): 4517-4529. DOI: 10.1109/JSTARS.2019.2953128.

[17] Imad H, Yassine T, Mohammed B, Abdelkrim N. Batch despeckling of SAR images by a convolutional neural network-based method. 2020 IEEE International Conference of Moroccan Geomatics (Morgeo) 2020: 1-6. DOI: 10.1109/Morgeo49228.2020.9121890.

[18] He K, Gkioxari G, Dollar P, Girshick R. Mask R-CNN. 2017 IEEE International Conference on Computer Vision (ICCV) 2017: 2980-2988. DOI: 10.1109/ICCV.2017.322.

[19] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. arXiv preprint 2015. Source: (http://arxiv.org/abs/1512.03385).

[20] Napoletano P, Piccoli F, Schettini R. Anomaly detection in nanofibrous materials by CNN-based self-similarity. Sensors 2018; 18(1): 1. DOI: 10.3390/s18010209.

[21] Tounsi Y, Kumar M, Nassim A, Mendoza-Santoyo F, Matoba O. Speckle denoising by variant nonlocal means methods. Appl Opt 2019; 58(26): 7110-7120. DOI: 10.1364/AO.58.007110.

[22] Wang Z, Bovik AC. A universal image quality index. IEEE Signal Process Lett 2002; 9(3): 81-84. DOI: 10.1109/97.995823.

[23] Gong M, Zhou Z, Ma J. Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering. IEEE Trans Image Process 2012; 21(4): 2141-2151. DOI: 10.1109/TIP.2011.2170702.

Authors' information

Imad Hamdi is a PhD Candidate in Physics and Engineering at Physics Department at University of Chouaib Doukkali, El Jadida, Morocco. His research interests include signal and image processing, satellite image information processing, computer vision, deep learning, and their applications in many fields such as synthetic aperture radar images. He received the M.Sc. degree in 2010 in Computer Engineering. E-mail: ihamdi225@gmail.com .

Yassine Tounsi was born in Morocco in 1992. He received his Ph.D. degree in Applied Optics at Chouaib Doukkali University. His current research interests include speckle metrology, image denoising, the photoelasticity technique, and differential SAR interferometry (DInSAR) for change detection. His current project is 'Advancing the Riesz transform for speckle metrology'. He is a member of the International Society for Optics and Photonics (SPIE). E-mail: yassinetounsi132@gmail.com.

Mohammed Benjelloun is a professor at Chouaib Doukkali University. He acts as head of the Group of Nuclear Physics and Technology (GPTN) within the Laboratory of Nuclear Physics, Atomic Molecular Mechanics and Energetics. He received his doctorate from Louis Pasteur University in Strasbourg in 1984, and a PhD from the Catholic University of Louvain in 1991. The work he is carrying out within the group (GPTN) is oriented along two lines of research: A first axis of fundamental physics which is based mainly on the determination of the masses and cross sections of nuclei rich in neutrons, and a second axis on the promotion of applications of instrumentation and nuclear techniques in the field of the environment and earth sciences, namely the application of nuclear techniques to multi-elemental analysis (NAA, XRF, PIXE, DSTN, etc.), image analysis and processing applied to Solid Nuclear Trace Detectors (Hardware and Software), nuclear instrumentation (production of acquisition cards, nuclear electronics, etc.), automation and IT (development of simulation codes). E-mail: benjmoha@gmail.com .

Abdelkrim Nassim received his PhD in Physics from Chouaib Doukkali University, Morocco, in collaboration with the FIAM Laboratory, Catholic University of Louvain, Belgium. During his research career, he has published several papers on the speckle interferometry technique in its two principal domains: speckle denoising and optical phase extraction. He is also a reviewer for the JOLT journal (Optics and Laser Technology). He has skills and expertise in speckle interferometry, the wavelet transform, and bidimensional empirical mode decomposition. E-mail: knassim58@gmail.com.

Code of State Categories Scientific and Technical Information (in Russian - GRNTI): 29.31.15, 29.33.43, 20.53.23.

Received September 23, 2020. The final version - April 6, 2021.
