
Image clutter metrics and target acquisition performance

Boban P. Bondzulica, Dimitrije M. Bujakovicb,

Jovan G. Mihajlovicc

a University of Defence in Belgrade, Military Academy, Department of Telecommunications and Informatics, Belgrade, Republic of Serbia, e-mail: bondzulici@yahoo.com, ORCID iD: https://orcid.org/0000-0002-8850-9842

b University of Defence in Belgrade, Military Academy, Department of Military Electronic Engineering, Belgrade, Republic of Serbia, e-mail: dimitrijebujakovic@gmail.com, corresponding author, ORCID iD: https://orcid.org/0000-0001-7058-9293

c Serbian Armed Forces, General Staff, Telecommunications and Information Technology Directorate (J-6), Signal Brigade, Belgrade, Republic of Serbia, e-mail: mihajlovicjovan@yahoo.com, ORCID iD: https://orcid.org/0009-0001-7752-6196

DOI: 10.5937/vojtehg71-44117; https://doi.org/10.5937/vojtehg71-44117

FIELD: telecommunications ARTICLE TYPE: original scientific paper

Abstract:

Introduction/purpose: Measuring target acquisition performance in imaging systems with human-in-the-loop plays an essential role in military applications. This paper presents an extended review on the application of image clutter metrics for target acquisition, with the aim of using objective measures to predict the detection probability, false alarm probability and mean search time of the target in the image.

Methods: To determine the degree of clutter, simple features on the global (picture-wise) and local (target-wise) level were used as well as contrast-based clutter metrics, target size and metrics derived from image quality assessment measures. Along with the standard ones, the features derived from the distribution of mean subtracted contrast normalized coefficients were also used. To compare the results of the objective scores and the experimental results obtained on the publicly available Search_2 dataset, regression laws accepted in the literature were applied. Linear correlations and rank correlations were used as quantitative measures of agreement.

Results: It is shown that the best agreement with target acquisition indicators is obtained by applying clutter metrics derived from image quality assessment measures. The correlation with the results of subjective tests is up to 90%, which indicates the need for further research. A special contribution of the paper is the analysis of the target acquisition prediction performance using simple features at the global and local level, where it is shown that the prediction performance can be improved by determining the features around the target. Furthermore, it was shown that the false alarm probability and the probability of detection can be predicted based on the mean target search time in the image with a probability higher than 90%.

Conclusion: In addition to obtaining a high degree of agreement between the objective metrics of clutter and the results of subjective tests (up to 90%), there is a need to improve the existing and develop new metrics as well as to conduct new subjective tests.

Key words: clutter metric, false alarm rate, mean search time, probability of detection, target acquisition.

ACKNOWLEDGMENT: The authors thank Dr. Alexander Toet for kindly providing the Search_2 dataset and the implementation of the TSSIM clutter metric.

Introduction

The process of target acquisition, as a concept used for military purposes, includes all the processes required to detect a target in an image (Li et al, 2012). In addition, discrimination between different classes of targets (recognition) or discrimination within a class (identification) may be required. Measuring target acquisition performance in human-in-the-loop imaging systems plays an essential role in many applications.

Electro-optical imaging systems detect radiation from the background and from the target of interest. Background clutter, which refers to objects or features of the scene that are similar to the target, can confuse the observer and affect target acquisition performance. Clutter plays a significant role in the detection of targets in images obtained by surveillance devices, both in the invisible and in the visible part of the electromagnetic spectrum. Therefore, there is great interest in analyzing the relationship between image content and human (operator) detection performance (Chang & Zhang, 2006a; Gavrovska & Samcovic, 2018; Lukin et al, 2023).

The presence of clutter in the image affects target detection and the search process (Schmieder & Weathersby, 1983; Chang & Zhang, 2006b). It can lead to a decrease in the detection probability, since some targets that would be found in a less cluttered scene will be missed; it can lead to an increase in the false alarm probability, because scene clutter will be declared as the target of interest; and it can lead to an increase in the detection time, because the observer will spend time considering irrelevant clutter (Chang et al, 2007).

Image clutter metrics can be used to examine target acquisition performance - to predict detection probability, false alarm probability, and search time, as well as to correct imaging system performance models. Clutter metrics can be divided into global and local (Toet & Hogervorst, 2020; Mondal, 2022). Global metrics measure the clutter of the entire scene. Local metrics determine the clutter around the target. Also, clutter metrics can be without a priori knowledge about the target, while some of the metrics require additional information about the target, such as position, dimensions (width and height) or the boundary between the target and its background. Global measures use features derived from the complete scene image, such as standard deviation, entropy, probability of edge (POE) and similar (Rotman et al, 1996; Xiao et al, 2015b). These features can also be used locally. Additional information about the target makes it possible to determine the contrast of the target with respect to the background at the local level, so different contrast-based clutter metrics have been defined (Xiao et al, 2015b). In addition to contrast, target detection probability is also influenced by target size (Wilson, 2001).

State-of-the-art reliable clutter metrics are derived from objective measures used to assess image quality. These are mathematical measures that require a priori knowledge about the target (target image), and based on the similarity (or dissimilarity) between the target image and the background, the degree of clutter is determined and the target acquisition performance is predicted.

Chang and Zhang (Chang & Zhang, 2006a) adapted the structural similarity index SSIM (Wang et al, 2004) to mathematically define a measure of clutter. A comparison of luminance, contrast and structure is made between the target and the background. The measure is called TSSIM - target structure similarity clutter metric - and it quantitatively characterizes background clutter. In (Toet, 2010), Toet considered predictions of search time and probability of detection based on the three components of SSIM/TSSIM. He concluded that luminance and contrast have no influence on human detection performance, while the structural component of SSIM/TSSIM has the most influence on prediction performance; since structural similarity (correlation) is equivalent to a matched filter, it was concluded that matched filtering predicts human visual performance in target detection.

The BSD measure (Xu et al, 2013) also represents the application of the structural similarity approach in the clutter metric, whereby additional weighting is performed using information content weights. This is a multiscale measure where three scales are used. The DSIM measure (Xu & Shi, 2013) is also based on structural similarity, and it can also be considered an HVS-based measure, since it takes brain cognitive characteristics into account. The clutter metric proposed in (Xiao et al, 2015a), known as Cessim, is a double structural similarity metric. In addition to structural similarity, the similarity of the histograms of oriented gradients between the target and the background is also considered and used as a weight for the structural similarity metric. The objective clutter metric from (Zheng et al, 2016) can also be classified among the structural approaches, with adaptive extraction of structural features and additional selection of the blocks that have a decisive influence on the subjective impression of the observer and on the target acquisition performance. Structural comparison is implemented in (Zhao et al, 2019) by comparing gray levels between neighboring pixels in four directions; after that, the similarity between the target image and the background is determined based on the Hamming distance.

In addition to the mentioned SSIM-based measures, other reliable clutter metrics can be found in the literature. In (Yang et al, 2011) an approach was proposed that uses sparse representation for the clutter metric, where feature vectors are used to describe the background and the target and where similarity of the block in which the target is located with the background blocks is determined. The authors (Xu & Shi, 2012) used low-level image features to define the clutter metric FD using phase congruency to determine the differences between the background and the target, while directional contrast is used to calculate the differences in contrast between images. The approach from (Chu et al, 2012) measures the similarity between the background and the target in the frequency domain, whereby differences that cannot be seen are not taken into consideration while visible differences are additionally weighted according to the sensitivity of the visual system using Mannos-Sakrison contrast sensitivity function which is used to filter the frequency representations of the target and the background images. The degree of agreement with the subjective test results is at the TSSIM performance level. Two texture metrics based on the gray level co-occurrence error (GLCEcon and GLCEerg) were used in (Culpepper, 2015) to predict detection probability and mean search time. These two measures are based on the contrast and energy of the gray level co-occurrence distribution error.


Although the images of the Search_2 dataset are in color, most research in the field of clutter metrics uses grayscale images. In (Yang et al, 2007), the RGB color planes were represented using quaternions (a generalization of complex numbers), and quaternion phase correlation was then used to estimate clutter in color images. However, the authors concluded that color is not the dominant factor for target detection on the Search_2 dataset. The TSSIM grayscale clutter metric was extended in (Chang et al, 2010) to include color by combining the channels of the perceptually uniform CIELAB color space using weighted averages. Although the degree of prediction of the probability of detection increased from 0.8 to 0.82 when color was applied, it can be said that color brought no significant improvement in performance. The gradient clutter metric proposed in (Meehan & Culpepper, 2016) uses the Lab color domain, in which the gradient is determined independently in the three color channels.

Another interesting study is (Itti et al, 2001), in which the authors presented an effective target detection model that, for 75% of the images from the Search_2 dataset, finds the target faster than the human observers.


After the introductory part, the publicly available Search_2 dataset and the experimental results are described in the second part of the paper. The image clutter metrics are described in the third part of the paper, while their performance on the Search_2 dataset is discussed in the fourth part of the paper. At the end of the paper, conclusions and directions for further research are given.

Experimental results

The TNO Human Factors Search_2 dataset contains 44 high-resolution color images (6144 x 4096 pixels), each containing one military vehicle considered as a target (Toet et al, 2001). The images vary in complexity and were recorded in a rural environment. In addition to the images, the dataset also contains binary images in which the segmentation of the target from the background was performed manually.

Also, the Excel file provides information on the conditions under which the tests were conducted, as well as the results of subjective psychophysical tests.

Search targets

Nine military vehicles were considered as targets of interest (all-terrain vehicles, infantry fighting vehicles and tanks) - Fig. 1. In order to get as close as possible to the real conditions of observing these targets, the targets were recorded in different local environments (backgrounds), at different distances, in different scene lighting conditions, under different orientations (Fig. 2) and with different degrees of occlusion by vegetation.

(a) HMMWV-Scout

(b) HMMWV-Tow

(c) BMP-1

(d) BTR-70

(e) M3-Bradley

(f) M113

(g) M1A1

(h) M60

(i) T72

Figure 1 - Nine military vehicles considered as targets of interest (oblique back view)

Figure 2 - Front view, oblique front view and oblique back view of the target (tank T72)

Experimental results and discussion

For each of the 44 source images, the target, the distance at which the target is located and its aspect angle, the center of the target in the image, and its width and height in pixels are known. Luminance data are provided for the scene, the target and its background.

Figure 3 - (a) source image, (b) target image, (c) binary image after manual segmentation, (d) and (e) extraction of the target and the background based on a binary image

These data enable the target image to be extracted, and the binary images enable the target to be extracted from the background - Fig. 3. Researchers mostly use 39 out of the available 44 images, i.e., in the analyses they do not consider images with serial numbers 7, 15, 23 and 26, in which duplicate targets have been noticed, and they do not consider image 39 because the target detection probability is only 14.5% (Chang & Zhang, 2006a).

In the subjective tests, 62 observers participated, and the results are given through their correct, false and missed detections. In addition, search time was measured, with mean, geometric mean and median search time available. Based on the results of the subjective tests, it is possible to determine the probability of detection (Pd) and the probability of false alarm (FAR) of the target for each source image (Chang et al, 2010):

Pd = Ncorrect / (Ncorrect + Nfalse + Nmissed),    (1)

where Ncorrect is the number of correct detections, Nfalse is the number of false detections, and Nmissed is the number of missed targets. The false alarm probability is obtained as:

FAR = Nfalse / (Ncorrect + Nfalse + Nmissed),    (2)

while the total probability of detection is:

Ptotal = Pd + FAR.    (3)
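As a small illustration, the sketch below computes these three quantities from hypothetical per-image counts; the variable names and example numbers are ours, not values from the dataset.

```python
def acquisition_probabilities(n_correct, n_false, n_missed):
    total = n_correct + n_false + n_missed
    p_d = n_correct / total        # probability of detection, Eq. (1)
    far = n_false / total          # false alarm probability, Eq. (2)
    return p_d, far, p_d + far     # Eq. (3): total probability of detection

# hypothetical counts for one image and 62 observers
print(acquisition_probabilities(52, 6, 4))
```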

Fig. 4 shows Pd and FAR as functions of the mean search time (mST) for the Search_2 dataset. From this figure, it can be seen that with an increase in the mean search time, the probability of detection decreases, that is, the probability of a false alarm increases. Also, Fig. 4 confirms that human observers, in an attempt to improve their detection performance, will accept higher false alarm probabilities when considering an image with pronounced clutter (Chang et al, 2007).

Table 1 shows the degree of agreement between the mean search time and the probability of detection/false alarm probability. The linear (Pearson's) correlation coefficient (LCC) and rank correlations (Spearman's, SROCC, and Kendall's, KROCC) were used as quantitative indicators. It can be concluded that the LCC between the mean search time and Pd/FAR is greater than 90%. The linear correlation between mST and Pd is greater than the correlation between mST and FAR.

If the rank correlations are considered, the greater degree of agreement is between mST and FAR.
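The agreement indicators used here and in the remainder of the paper can be computed directly with SciPy; the sketch below uses placeholder arrays rather than the actual Search_2 values.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

mst = np.array([2.8, 6.4, 12.1, 29.8])     # mean search times (s), placeholder values
pd = np.array([1.00, 1.00, 0.85, 0.484])   # detection probabilities, placeholder values

print('LCC   =', pearsonr(mst, pd)[0])
print('SROCC =', spearmanr(mst, pd)[0])
print('KROCC =', kendalltau(mst, pd)[0])
```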

Figure 4 - Relationship between (a) the probability of detection and (b) the false alarm rate, and the mean search time

Table 1 - Degree of agreement between the mean search time and the probability of detection / false alarm probability

Pd FAR

LCC 0.9261 0.9038

SROCC 0.8101 0.8464

KROCC 0.6464 0.6968

From Fig. 4, it can be seen that there are several images where the detection probability is equal to one, and their mean search time is between 2 and 7 seconds. The minimum target detection probability of the considered images is 48.4%, where the mean search time is 29.8 s. Fig. 5 shows two source images with the maximum probability of detection and the source image with the lowest detection probability.

The targets in these images are framed by white rectangles and additionally shown as magnified images.

(a) Pd=1, mST=2.8 s, target size 322 x 199 pixels

(b) Pd=1, mST=6.4 s, target size 44 x 43 pixels

(c) Pd=0.484, FAR=0.323, mST=29.8 s, target size 38 x 28 pixels

Figure 5 - Examples of images from the Search_2 dataset with the maximum and minimum probability of target detection

From Fig. 5(a), it can be seen that the target is in the central part of the image, with high contrast compared to the background and with not very pronounced clutter in its surroundings, so it is not surprising that the probability of detection is maximum. For the target in Fig. 5(b), the probability of detection is also maximum, although its contrast with the background is worse than in the previous example, the target is smaller and the background clutter is more pronounced. The minimum probability of detection is obtained for a small target with low contrast to the background and with a loss of detail. The targets in Figs. 5(b) and 5(c) are the HMMWV-Tow at 3 km and the M1A1 at 5.4 km, respectively, which may also affect the probability of detection. Both targets are seen from the front.

Image clutter metrics

Simple clutter metrics

Gray level standard deviation (STD) and gray level entropy (E1) are often used as image clutter metrics. Statistics derived from gray level co-occurrence matrices (GLCM) are also used. The following GLCM-based metrics were used in this research: contrast, correlation, energy, homogeneity, and two-dimensional entropy (E2) (Cheng & Li, 2021). In this paper, the frequency of occurrence of pairs of gray levels at positions (m,n) and (m+1,n+1) is analyzed for the GLCM, and the features derived from the GLCM contain information about the structure in the image.
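A minimal sketch of these first-order and GLCM-based features, assuming an 8-bit grayscale image and approximating the (m+1, n+1) pair by a unit diagonal offset in scikit-image, could look as follows (function and variable names are illustrative).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def simple_clutter_features(gray):                       # gray: 2-D uint8 array
    feats = {'STD': gray.std()}
    hist = np.bincount(gray.ravel(), minlength=256) / gray.size
    hist = hist[hist > 0]
    feats['E1'] = -np.sum(hist * np.log2(hist))          # gray level entropy
    glcm = graycomatrix(gray, distances=[1], angles=[np.pi / 4],
                        levels=256, symmetric=False, normed=True)
    for prop in ('contrast', 'correlation', 'energy', 'homogeneity'):
        feats[prop] = graycoprops(glcm, prop)[0, 0]
    p = glcm[:, :, 0, 0]
    p = p[p > 0]
    feats['E2'] = -np.sum(p * np.log2(p))                # two-dimensional entropy
    return feats
```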

Spatial information (SI) and spatial frequency (SF) are used to describe the variety of source content in image and video datasets intended for quality assessment. Additionally, these features are used to analyze the complexity of images for compression purposes and to predict just noticeable differences (Bondzulic et al, 2022). In this paper, these features are considered as clutter metrics. For the grayscale image F, the SI is obtained after filtering the image using Sobel spatial masks that are sensitive to intensity changes along rows and columns:

SIstd = stdspace[Sobel(F)],    (4)

where Sobel(F) is the gradient magnitude at the local level, and stdspace denotes the standard deviation of the values over the spatial (grayscale) image plane. In addition to the standard deviation, the root-mean-square and the mean value are used in the spatial domain (Yu & Winkler, 2013):

SIrms = rmsspace[Sobel(F)],    (5)

SImean = meanspace[Sobel(F)],    (6)

where the mean value proved to be the best predictor of image complexity in compression (Yu & Winkler, 2013).

The gradient magnitude can also be calculated based on the difference in gray levels of adjacent pixels by rows (RD) and columns (CD) using the following equations:

RD(m, n) = F (m, n) - F (m +1, n) (7)

CD(m, n) = F (m, n) - F (m, n + 1) (8)

SF(m, n) = √(RD(m, n)² + CD(m, n)²).    (9)

The usual term for the gradient magnitude calculated in this way is spatial frequency (SF) (Tan et al, 2017). Based on locally calculated SF values, three characteristics can be calculated:

SFmean = meanspace[SF],    (10)

SFrms = rmsspace[SF], and    (11)

SFstd = stdspace[SF].    (12)
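A possible implementation of Eqs. (4)-(12), assuming a float grayscale image and ignoring border effects, is sketched below.

```python
import numpy as np
from scipy.ndimage import sobel

def si_sf_features(F):                                    # F: 2-D float array
    F = F.astype(np.float64)
    grad = np.hypot(sobel(F, axis=0), sobel(F, axis=1))   # Sobel gradient magnitude
    rd = F[:-1, :] - F[1:, :]                             # row differences, Eq. (7)
    cd = F[:, :-1] - F[:, 1:]                             # column differences, Eq. (8)
    sf = np.sqrt(rd[:, :-1] ** 2 + cd[:-1, :] ** 2)       # Eq. (9)
    def stats(x):                                         # mean, rms, std over the plane
        return x.mean(), np.sqrt(np.mean(x ** 2)), x.std()
    return {'SI (mean, rms, std)': stats(grad),
            'SF (mean, rms, std)': stats(sf)}
```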

The probability of edge (POE) is the percentage of edge pixels relative to the total number of pixels in the image (Chang & Zhang, 2006a). The well-known Canny edge detector (Canny, 1986) was used to determine edge pixels.
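A minimal POE sketch, using the scikit-image Canny detector with its default thresholds (the thresholds actually used in the paper are not specified), is given below.

```python
from skimage.feature import canny

def probability_of_edge(gray):
    edges = canny(gray.astype(float))   # binary edge map, default thresholds
    return edges.mean()                 # fraction of edge pixels
```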

The compression ratio (CR) is used as a measure of image complexity for compression purposes, and here it is used as a clutter metric. It is obtained as the ratio between the size of the original uncompressed image and the size of the JPEG-compressed image with a quality factor of QF=100 (Corchs et al, 2016).
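A simple in-memory sketch of the CR feature using Pillow is given below; the exact encoder settings used in (Corchs et al, 2016) may differ.

```python
import io
from PIL import Image

def compression_ratio(img):                       # img: PIL.Image instance
    buf = io.BytesIO()
    img.save(buf, format='JPEG', quality=100)     # QF = 100
    return len(img.tobytes()) / buf.getbuffer().nbytes
```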

The only feature that uses color information in our research is colourfulness (CF). It is a well-known feature for estimating the variety and intensity of colors in an image (Hasler & Suesstrunk, 2003). CF is obtained in opponent color space derived from three RGB color planes:

rg = R - G (13)

yb = 0.5(R + G) - B,    (14)

where CF is defined as:

CF = √(σrg² + σyb²) + 0.3·√(μrg² + μyb²),    (15)

and σrg and σyb are the standard deviations, while μrg and μyb are the mean values in the rg and yb planes, respectively.
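Eqs. (13)-(15) translate directly into a few lines of code; the sketch below assumes an RGB array of shape (H, W, 3).

```python
import numpy as np

def colourfulness(rgb):                            # rgb: float array, shape (H, W, 3)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = R - G                                     # Eq. (13)
    yb = 0.5 * (R + G) - B                         # Eq. (14)
    return (np.hypot(rg.std(), yb.std())
            + 0.3 * np.hypot(rg.mean(), yb.mean()))  # Eq. (15)
```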

The last three image features are derived from the mean subtracted contrast normalized (MSCN) distribution (Mittal et al, 2012). The distribution of the MSCN coefficients can be modeled using the asymmetric generalized Gaussian distribution (AGGD), described by three parameters - the shape parameter ν, which controls the shape of the distribution, and the scaling parameters σl and σr, which control the spread on each side of the mode. These three parameters (ν, σl and σr) were used to determine the type of image degradation and perceptual quality, and here they are used as clutter metrics.
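A rough, BRISQUE-style sketch of the MSCN computation and a moment-matching estimate of the AGGD parameters is shown below; the window, normalization constant and grid search are common choices, not necessarily those used by the authors.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn(gray, sigma=7/6, C=1.0):
    mu = gaussian_filter(gray.astype(float), sigma)
    var = gaussian_filter(gray.astype(float) ** 2, sigma) - mu * mu
    return (gray - mu) / (np.sqrt(np.abs(var)) + C)

def aggd_parameters(coeffs):
    left, right = coeffs[coeffs < 0], coeffs[coeffs >= 0]
    sigma_l = np.sqrt(np.mean(left ** 2)) if left.size else 1e-6
    sigma_r = np.sqrt(np.mean(right ** 2)) if right.size else 1e-6
    gamma_hat = sigma_l / sigma_r
    r_hat = np.mean(np.abs(coeffs)) ** 2 / np.mean(coeffs ** 2)
    R_hat = r_hat * (gamma_hat ** 3 + 1) * (gamma_hat + 1) / (gamma_hat ** 2 + 1) ** 2
    # invert rho(v) = gamma(2/v)^2 / (gamma(1/v) * gamma(3/v)) on a grid
    v_grid = np.arange(0.2, 10.0, 0.001)
    rho = gamma(2 / v_grid) ** 2 / (gamma(1 / v_grid) * gamma(3 / v_grid))
    v = v_grid[np.argmin((rho - R_hat) ** 2)]
    return v, sigma_l, sigma_r
```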

The mentioned features were used as clutter metrics at the global (picture) level, without a priori knowledge of the target. Additionally, these features were used as clutter metrics at the local level, where the size of the region within which they are calculated is important. Most researchers (Chang & Zhang, 2006a; Xu & Shi, 2013) use a region that is twice the apparent size of the target in each dimension. Some researchers, such as (Wilson, 2001), use the width and height of the target multiplied by the square root of two to determine the dimensions of the region; in this way, for a rectangular object, an equal number of pixels is used to determine the target and background features. In this paper, an image patch that is twice the height and width of the target is used to determine the region of interest.
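A minimal sketch of extracting such a local region (a patch twice the target width and height, centred on the target and clipped to the image borders; the coordinate convention is an assumption) is given below.

```python
def local_region(image, cx, cy, width, height):
    """Patch of about 2*width x 2*height pixels centred on the target centre (cx, cy)."""
    h, w = image.shape[:2]
    x0, x1 = max(0, cx - width), min(w, cx + width)
    y0, y1 = max(0, cy - height), min(h, cy + height)
    return image[y0:y1, x0:x1]
```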

Contrast-based clutter metrics and target size

The second set of metrics discussed in this paper comprises contrast-based clutter metrics and the target size. These metrics require complete information about the target (knowing the boundary between the target and its background).

The contrast metric measures the intensity difference between the target and its background (Schmieder & Weathersby, 1983; Wilson, 2001). The simplest contrast measure is the absolute difference between the mean gray level values of the target, μT, and its background, μB:

Δμ = |μT - μB|.    (16)

However, this metric does not consider the internal structure of the target and the background, so in practice other contrast measures are used that consider the gray level standard deviations of the target, σT, and/or the background, σB, such as:

1) root sum of squares (RSS):

RSS = √((μT - μB)² + σT²),    (17)

2) Doyle local contrast:

Doyle = √((μT - μB)² + k(σT - σB)²), k = 1,    (18)

3) target local background contrast (TBC):

TBC = |μT - μB| / σB, and    (19)

4) target interference ratio (TIR):

TIR = (μT - μB)² / (σT² + σB²).    (20)

To determine the background gray level mean value and the standard deviation, the size of the region within which these two features are calculated is important. In this paper, the region of interest is twice the height and width of the target.

The target size used in this analysis is the square root of the number of pixels on target (RPOT).
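A minimal sketch of the simplest contrast-based metrics, Eqs. (16)-(18), together with the RPOT target size feature, is given below; it assumes that the target and local background pixels have already been separated using the binary masks.

```python
import numpy as np

def contrast_metrics(target_pixels, background_pixels, k=1.0):
    mu_t, mu_b = target_pixels.mean(), background_pixels.mean()
    sigma_t, sigma_b = target_pixels.std(), background_pixels.std()
    delta_mu = abs(mu_t - mu_b)                                         # Eq. (16)
    rss = np.sqrt((mu_t - mu_b) ** 2 + sigma_t ** 2)                    # Eq. (17)
    doyle = np.sqrt((mu_t - mu_b) ** 2 + k * (sigma_t - sigma_b) ** 2)  # Eq. (18)
    rpot = np.sqrt(target_pixels.size)                                  # target size, RPOT
    return delta_mu, rss, doyle, rpot
```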

Quality assessment measures as clutter metrics

The application of objective image quality assessment measures as clutter metrics began after the work (Chang & Zhang, 2006a) in which the target structural similarity, TSSIM, clutter metric was proposed. After extracting the target image - the region outlined in red in Fig. 6, a comparison is made with the non-overlapping regions of the considered image - the regions outlined in white in Fig. 6.

Similarity is determined for each region, and the final clutter estimate is obtained as the arithmetic mean (am in the subscripts of objective measures) or as the root mean square of the obtained values (rms in the subscripts of objective measures).

This approach to determining the degree of clutter is used in the majority of objective measures. In the TSSIM objective measure, higher values correspond to a greater similarity between the target and the background, which indicates a higher degree of clutter in the image, and which will lead to a decrease in the probability of detection.

It can be said that the TSSIM metric is inversely proportional to the target detection probability. Contrary to this metric, some of the objective measures are directly proportional to the probability of detection, i.e., lower values of the objective scores correspond to lower values of the probability of detection.
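The sketch below illustrates this tiling-and-pooling principle using the standard SSIM from scikit-image as a stand-in for the TSSIM comparison; it assumes 8-bit gray levels and grayscale patches at least as large as the default SSIM window.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def tssim_like_clutter(gray_image, target_patch):
    th, tw = target_patch.shape
    scores = []
    for r in range(0, gray_image.shape[0] - th + 1, th):       # non-overlapping blocks
        for c in range(0, gray_image.shape[1] - tw + 1, tw):
            block = gray_image[r:r + th, c:c + tw]
            scores.append(ssim(target_patch, block, data_range=255))
    scores = np.asarray(scores)
    return scores.mean(), np.sqrt(np.mean(scores ** 2))        # am and rms pooling
```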

Figure 6 - Illustration of the principle of applying clutter metrics based on image quality assessment

The following objective measures were used in the analysis: TSSIM (Chang & Zhang, 2006a), structural (s) component of TSSIM metric (Toet, 2010), FD (Xu & Shi, 2012), BSD (Xu et al, 2013), DSIM (Xu & Shi, 2013), Cessim (Xiao et al, 2015a), Cmdh (Zhao et al, 2019), and texture-based metrics GLCEcon and GLCEerg (Culpepper, 2015). The objective measures calculated as mean values and the root mean square of local similarities have the subscripts am and rms.

Target acquisition modeling and the results

The relationship between clutter metrics and Pd, FAR and mST is analyzed using the regression models (Culpepper, 2015):

Pd pred = (C / C50)^E / (1 + (C / C50)^E),    (21)

FARpred = Pd total · (C / C50)^E / (1 + (C / C50)^E),    (22)

mSTpred = x·C^y + z,    (23)

where Pd pred, FARpred and mSTpred are the predictions of Pd, FAR and mST based on the image clutter metric C, Pd total = 0.988 is the total probability of detection (Chang et al, 2007), while E, C50, x, y and z are the parameters of the regression models.
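The fitting itself can be done with a standard nonlinear least-squares routine; the sketch below fits the regression forms (21) and (23) with SciPy and reports the agreement indicators used in Tables 2-6 (helper names are illustrative).

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr, kendalltau

def pd_model(C, C50, E):                      # Eq. (21); scaled by Pd_total it gives Eq. (22)
    return (C / C50) ** E / (1.0 + (C / C50) ** E)

def mst_model(C, x, y, z):                    # Eq. (23)
    return x * C ** y + z

def fit_and_score(C, observed, model, p0):
    params, _ = curve_fit(model, C, observed, p0=p0, maxfev=10000)
    predicted = model(C, *params)
    return (pearsonr(predicted, observed)[0],     # LCC
            spearmanr(predicted, observed)[0],    # SROCC
            kendalltau(predicted, observed)[0])   # KROCC
```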

Tables 2-4 show the prediction performance (LCC, SROCC and KROCC) of the probability of detection, the false alarm probability and the mean search time based on simple features, determined on the global and local levels of the Search_2 dataset images. If the performance on the local level is better than that on the global level, it is shaded in gray in the tables. Additionally, the best performance according to each of the comparison criteria is marked in bold.

Table 2 - Performance of the probability of detection prediction based on simple clutter metrics

Global Local

LCC SROCC KROCC LCC SROCC KROCC

Entropy, E1 0.2897 0.3714 0.2557 0.7025 0.7697 0.5861

Stand. Dev., STD 0.1622 0.2464 0.1580 0.8239 0.7624 0.5832

Contrast 0.1557 0.3850 0.2873 0.6624 0.7038 0.5286

Correlation 0.0555 0.0166 0.0217 0.7112 0.6769 0.4937

Energy 0.4391 0.4450 0.3384 0.5661 0.7006 0.5261

Homogeneity 0.3308 0.4644 0.3450 0.3986 0.4508 0.3265

Entropy, E2 0.3353 0.4456 0.3304 0.7252 0.7822 0.6062

Spat. Freq., SFmean 0.4133 0.5130 0.3936 0.5224 0.5437 0.3936

Spat. Freq., SFrms 0.4260 0.5200 0.3993 0.6131 0.6299 0.4740

Spat. Freq., SFstd 0.4061 0.4804 0.3476 0.7222 0.7142 0.5688

Spat. Inf., SImean 0.4277 0.5479 0.4223 0.5308 0.5838 0.4309

Spat. Inf., SIrms 0.4311 0.5166 0.3936 0.6373 0.6503 0.4855

Spat. Inf., SIstd 0.4002 0.4720 0.3419 0.7266 0.6994 0.5401

Prob. of Edge, POE 0.3973 0.4021 0.2774 0.6481 0.4182 0.2875

Comp. Ratio, CR 0.3642 0.4655 0.3522 0.4936 0.4438 0.3246

CF 0.0714 0.1392 0.0919 0.6139 0.6223 0.4740

AGGD, ν 0.5479 0.6357 0.4753 0.7385 0.5471 0.4050

AGGD, σl 0.4538 0.6405 0.4827 0.5433 0.5969 0.4470

AGGD, σr 0.4375 0.6490 0.4815 0.4862 0.5526 0.4071

Table 3 - Performance of the probability of false alarm prediction based on simple clutter metrics

Global Local

LCC SROCC KROCC LCC SROCC KROCC

Entropy, E1 0.3416 0.4259 0.2964 0.6745 0.7960 0.6235

Stand. Dev., STD 0.2166 0.3030 0.2088 0.7909 0.7871 0.6147

Contrast 0.1672 0.4281 0.3198 0.7135 0.7101 0.5388

Correlation 0.0032 0.0227 0.0176 0.6228 0.7029 0.5355

Energy 0.4680 0.5030 0.3920 0.5618 0.7222 0.5525

Homogeneity 0.3488 0.5000 0.3740 0.4472 0.4555 0.3275

Entropy, E2 0.3804 0.5053 0.3782 0.7086 0.8041 0.6352

Spat. Freq., SFmean 0.4346 0.5524 0.4249 0.5706 0.5426 0.3986

Spat. Freq., SFrms 0.4482 0.5559 0.4249 0.6611 0.6297 0.4804

Spat. Freq., SFstd 0.4302 0.5120 0.3753 0.7630 0.7241 0.5738

Spat. Inf., SImean 0.4623 0.5857 0.4541 0.5693 0.5784 0.4337

Spat. Inf., SIrms 0.4694 0.5533 0.4220 0.6771 0.6462 0.4891

Spat. Inf., SIstd 0.4427 0.5084 0.3723 0.7644 0.7052 0.5446

Prob. of Edge, POE 0.4363 0.4176 0.3010 0.6102 0.4057 0.2878

Comp. Ratio, CR 0.3773 0.5005 0.3770 0.4382 0.4570 0.3373

CF 0.0735 0.1489 0.1037 0.6244 0.6293 0.4833

AGGD, ν 0.6087 0.6690 0.5124 0.6583 0.5288 0.3941

AGGD, σl 0.5105 0.6689 0.5125 0.5297 0.5658 0.4281

AGGD, σr 0.4893 0.6796 0.5143 0.4530 0.5343 0.3918

From Tables 2-4, it can be concluded that at the global level (without a priori knowledge of the target) the best prediction results were obtained using the features derived from the distribution of the MSCN coefficients. If the features are applied at the local level (in the region where the target is located), the prediction performance considered through LCC is improved for all metrics, while the improvement through the SROCC and KROCC criteria depends on the choice of metric. At the local level, the best predictors of the subjective test results are the gray level standard deviation and entropy. The prediction performance of the standard deviation increased significantly when it was applied at the local level. It is interesting that correlation and colourfulness, which are practically uncorrelated with the subjective results at the global level, achieve good prediction results at the local level.

Table 4 - Performance of the mean search time prediction based on simple clutter metrics

Global Local

LCC SROCC KROCC LCC SROCC KROCC

Entropy, E1 0.3354 0.4018 0.2750 0.7362 0.7086 0.5446

Stand. Dev., STD 0.1845 0.3233 0.2124 0.8048 0.6806 0.4901

Contrast 0.2853 0.5037 0.3976 0.6416 0.6381 0.4602

Correlation 0.0148 0.0546 0.0439 0.6922 0.6344 0.4707

Energy 0.4670 0.5164 0.4118 0.6114 0.6284 0.4779

Homogeneity 0.4106 0.4874 0.3883 0.4225 0.3739 0.2645

Entropy, E2 0.3997 0.5013 0.3785 0.7049 0.7337 0.5582

Spat. Freq., SFmean 0.4629 0.5254 0.4112 0.5489 0.4613 0.3077

Spat. Freq., SFrms 0.4650 0.5242 0.3976 0.6313 0.5687 0.4003

Spat. Freq., SFstd 0.4226 0.4984 0.3622 0.6829 0.6713 0.4983

Spat. Inf., SImean 0.4724 0.5384 0.4248 0.5609 0.4930 0.3377

Spat. Inf., SIrms 0.4689 0.5116 0.3948 0.6578 0.5941 0.4166

Spat. Inf., SIstd 0.4330 0.4772 0.3404 0.7301 0.6613 0.4983

Prob. of Edge, POE 0.3602 0.3477 0.2493 0.6914 0.3343 0.2275

Comp. Ratio, CR 0.3485 0.4880 0.3856 0.4542 0.4871 0.3458

CF 0.0925 0.2319 0.1743 0.6147 0.5648 0.4112

AGGD, ν 0.5966 0.6369 0.5024 0.6287 0.4836 0.3456

AGGD, σl 0.5043 0.5949 0.4493 0.5409 0.4324 0.2848

AGGD, σr 0.4862 0.6055 0.4619 0.4844 0.4257 0.2890


The prediction performance of the probability of detection, the false alarm probability and the mean search time using the contrast-based clutter metrics and the target size are given in Table 5. It can be concluded that the best predictors are RSS and RPOT, with RPOT being a better predictor.

The best prediction performance is for the mean search time. It is interesting to note that the performance of contrast-based clutter metrics Doyle, TBC and TIR, which use the standard deviation of the background, is significantly worse than the performance of RSS and RPOT.

Table 5 - Performance of the probability of detection, the false alarm probability and the mean search time predictions based on the contrast-based clutter metrics and the target size

Pd FAR mST

LCC SROCC KROCC LCC SROCC KROCC LCC SROCC KROCC mean

RSS 0.5573 0.5112 0.3850 0.6080 0.5636 0.4395 0.6638 0.6808 0.5147 0.5471

DOYLE 0.1301 0.1000 0.0747 0.1648 0.1447 0.1124 0.2039 0.2053 0.1525 0.1432

TBC 0.2301 0.2847 0.1896 0.1792 0.2485 0.1679 0.2431 0.1616 0.0899 0.1994

TIR 0.2897 0.3537 0.2499 0.2275 0.3232 0.2351 0.3223 0.2769 0.1797 0.2731

RPOT 0.6851 0.6608 0.5056 0.6514 0.6953 0.5505 0.7047 0.7662 0.5936 0.6459

Table 6 - Performance of the probability of detection, the false alarm probability and the mean search time predictions based on quality assessment measures

Pd FAR mST

LCC SROCC KROCC LCC SROCC KROCC LCC SROCC KROCC mean

TSSIMrms 0.8011 0.6928 0.5372 0.7611 0.6800 0.5242 0.7936 0.7486 0.5582 0.6774

TSSIMam 0.7718 0.7617 0.5660 0.7374 0.7870 0.6001 0.7427 0.6865 0.4901 0.6826

Srms 0.6685 0.7031 0.5520 0.6498 0.7072 0.5508 0.7381 0.7901 0.5872 0.6608

Sam 0.6871 0.7160 0.5602 0.6593 0.7211 0.5621 0.7473 0.7946 0.5882 0.6707

FDrms 0.8636 0.7170 0.5524 0.8704 0.7359 0.5688 0.8536 0.7298 0.5617 0.7170

FDam 0.8552 0.7136 0.5505 0.8639 0.7323 0.5669 0.8502 0.7337 0.5654 0.7146

BSDrms 0.8785 0.8050 0.6435 0.8433 0.8241 0.6644 0.8573 0.8111 0.6372 0.7738

BSDam 0.8806 0.8065 0.6454 0.8495 0.8253 0.6663 0.8602 0.8100 0.6335 0.7753

DSIMrms 0.8924 0.8000 0.6321 0.8913 0.8253 0.6644 0.9006 0.7631 0.5909 0.7733

DSIMam 0.8914 0.7977 0.6234 0.8898 0.8245 0.6585 0.8995 0.7616 0.5827 0.7699

GLCEcon 0.8263 0.7168 0.5752 0.8304 0.7388 0.6023 0.7945 0.7357 0.5356 0.7062

GLCEerg 0.8685 0.7855 0.6146 0.8479 0.8151 0.6555 0.8899 0.8700 0.7053 0.7836

Cessim 0.8696 0.8062 0.6292 0.8451 0.8280 0.6527 0.8906 0.7650 0.5936 0.7644

Cmdh 0.8686 0.7743 0.6119 0.8212 0.7868 0.6235 0.8599 0.8192 0.6209 0.7540

Table 6 shows the prediction performance of clutter metrics based on determining the similarity/dissimilarity between the target image and the background.

The best results according to each of the criteria are marked in gray and bold.

The degree of agreement between the objective measures and the results of subjective tests according to LCC goes up to 90.06%, according to SROCC up to 87% and according to KROCC up to 70.53%, so there is a need for further improvement of the existing and development of new clutter metrics.

This group of metrics provides better prediction results than simple metrics, contrast-based metrics and target size.

If we consider the mean value of the degree of prediction (the last column of the table), it can be concluded that the three first-ranked measures of clutter are GLCEerg, BSD and DSIM, with a mean degree of agreement of about 78%.

Fig. 7 shows the relationships between clutter metrics (GLCEerg and DSIMrms) and the experimental data of the Search_2 dataset (probability of detection, probability of false alarm and mean search time).

Non-linear regression trends can be observed between the objective scores and the experimental (subjective) data.

The scattering of points around the regression curves indicates the need for further research in the area of clutter assessment.

Figure 7 - Objective (GLCEerg and DSIMrms) scores versus the experimental data (Pd, FAR and mST): (a) Pd vs. GLCEerg, (b) Pd vs. DSIMrms, (c) FAR vs. GLCEerg, (d) FAR vs. DSIMrms, (e) mST vs. GLCEerg, (f) mST vs. DSIMrms

Conclusion

The paper summarizes the results of the metrics used to determine the target acquisition performance. Global metrics without a priori knowledge of the target, metrics that require information about the position and dimensions of the target, and metrics that require full knowledge of the target - its position and the boundary between the target and the background - were used. Clutter metrics were used for comparison with the results of subjective tests, that is, the relationships between the clutter metrics and the probability of target detection, the probability of a false alarm and the mean search time were analyzed. Although clutter metrics that use objective quality assessments (similarities or dissimilarities between target and background images) have achieved better results than other metrics, there is a need for further research. The degree of agreement of these metrics with the results of the subjective tests, measured through the linear correlation coefficient, reached a value of 90%. Since objective measures generally do not use color as information, one of the directions of future research would be related to the application of color in the analysis of the degree of clutter. Additionally, one of the directions of future research can be the simultaneous application of multiple metrics (fusion) in the image clutter analysis. The publicly available Search_2 dataset was used to analyze the performance of clutter metrics. This dataset is relatively small (44 images), with a relatively high target detection probability. Therefore, it is necessary to expand the range of detection probabilities in future research, and one of the ways to do that is by considering atmospheric effects.

References

Bondzulic, B., Pavlovic, B., Stojanovic, N. & Petrovic, V. 2022. Picture-wise just noticeable difference prediction model for JPEG image quality assessment. Vojnotehnicki glasnik /Military Technical Courier, 70(1), pp.62-86. Available at: https://doi.org/10.5937/vojtehg70-34739.

Canny, J. 1986. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6), pp.679-698. Available at: https://doi.org/10.1109/TPAMI.1986.4767851.


Chang, H. & Zhang, J. 2006a. New metrics for clutter affecting human target acquisition. IEEE Transactions on Aerospace and Electronic Systems, 42(1), pp.361-368. Available at: https://doi.org/10.1109/TAES.2006.1603429.

Chang, H. & Zhang, J. 2006b. Evaluation of human detection performance using target structure similarity clutter metrics. Optical Engineering, 45(9), art.number:096404. Available at: https://doi.org/10.1117/1.2353848.

Chang, H., Zhang, J. & Liu, D. 2007. Modeling human false alarms using clutter metrics. In: Proceedings of International Symposium on Multispectral Image Processing and Pattern Recognition, MIPR 2007; Automatic Target Recognition and Image Analysis; and Multispectral Image Acquisition, 67863N, Wuhan, China, 6786, pp.1-8, November 15-17. Available at: https://doi.org/10.1117/12.750260.

Chang, H., Zhang, J., Liu, X., Yang, C. & Li, Q. 2010. Color image clutter metrics for predicting human target acquisition performance. In: 2010 6th International Conference on Wireless Communications Networking and Mobile Computing (WiCOM), Chengdu, China, pp.1-4, September 23-25. Available at: https://doi.org/10.1109/WIC0M.2010.5600622.

Cheng, X. & Li, Z. 2021. Predicting the lossless compression ratio of remote sensing images with configurational entropy. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, pp.11936-11953. Available at: https://doi.org/10.1109/JSTARS.2021.3123650.

Chu, X., Yang, C. & Li, Q. 2012. Contrast-sensitivity-function-based clutter metric. Optical Engineering, 51(6), art.number:067003. Available at: https://doi.org/10.1117/1.OE.51.6.067003.

Corchs, S.E., Ciocca, G., Bricolo, E. & Gasparini, F. 2016. Predicting complexity perception of real world images. PLoS ONE, 11(6), e0157986. Available at: https://doi.org/10.1371/journal.pone.0157986.

Culpepper, J.B. 2015. Texture metric that predicts target detection performance. Optical Engineering, 54(12), art.number:123101. Available at: https://doi.org/10.1117/1.OE.54.12.123101.

Gavrovska, A. & Samcovic, A. 2018. Challenges in modeling of visual human map attention. In: Proceedings of the 36th Symposium on Novel Technologies in Postal and Telecommunication Traffic PosTel 2018, Belgrade, Serbia, December 4-5, pp.256-264 [online]. Available at: https://postel.sf.bg.ac.rs/simpozijumi/P0STEL2018/RAD0VI%20PDF/Telekomu nikacioni%20saobracaj,%20mreze%20i%20servisi/13.GavrovskaSamcovic.pdf (in Serbian) [Accessed: 20 April 2023].

Hasler, D. & Suesstrunk, S.E. 2003. Measuring colourfulness in natural images. In: Proceedings of Electronic Imaging 2003; Human Vision and Electronic Imaging VIII, Santa Clara, CA, USA, 5007, pp.87-95. Available at: https://doi.org/10.1117/12.477378.

Itti, L., Gold, C. & Koch, C. 2001. Visual attention and target detection in cluttered natural scenes. Optical Engineering, 40(9), pp.1784-1793. Available at: https://doi.org/10.1117/1.1389063.

Li, Q., Yang, C. & Zhang, J.-Q. 2012. Target acquisition performance in a cluttered environment. Applied Optics, 51(31), pp.7668-7673. Available at: https://doi.org/10.1364/A0.51.007668.

Lukin, V., Bataeva, E. & Abramov, S. 2023. Saliency map in image visual quality assessment and processing. Radioelectronic and Computer Systems, 1(105), pp.112-121. Available at: https://doi.org/10.32620/reks.2023.1.09.

Meehan, A.J. & Culpepper, J.B. 2016. Clutter estimation and perception. Optical Engineering, 55(11), art.number:113106. Available at: https://doi.org/10.1117/1.OE.55.11.113106.

Mittal, A., Moorthy, A.K. & Bovik, A.C. 2012. No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 21(12), pp.4695-4708. Available at: https://doi.org/10.1109/TIP.2012.2214050.

Mondal, A. 2022. Camouflage design, assessment and breaking techniques: a survey. Multimedia Systems, 28, pp.141-160. https://doi.org/10.1007/s00530-021-00813-6.

Rotman, S.R., Cohen-Nov, A., Shamay, D., Hsu, D. & Kowalczyk, M.L. 1996. Textural metrics for clutter affecting human target acquisition. In: Proceedings of Aerospace/Defense Sensing and Controls; Infrared Imaging Systems: Design, Analysis, Modeling, and Testing VII, Orlando, FL, USA, 2743, pp.99-112. Available at: https://doi.org/10.1117/12.241951.

Schmieder, D.E. & Weathersby, M.R. 1983. Detection performance in clutter with variable resolution. IEEE Transactions on Aerospace and Electronic Systems, AES-19(4), pp.622-630. Available at: https://doi.org/10.1109/TAES.1983.309351.

Tan, W., Zhou, H.-x., Yu, Y., Du, J., Qin, H., Ma, Z. & Zheng, R. 2017. Multi-focus image fusion using spatial frequency and discrete wavelet transform. In: Proceedings of Applied Optics and Photonics China (AOPC2017); Optical Sensing and Imaging Technology and Applications, 104624K, Beijing, China, 10462, pp.1-11. Available at: https://doi.org/10.1117/12.2285561.

Toet, A., Bijl, P. & Valeton, J.M. 2001. Image dataset for testing search and detection models. Optical Engineering, 40(9), pp.1760-1767. Available at: https://doi.org/10.1117/1.1388608.

Toet, A. 2010. Structural similarity determines search time and detection probability. Infrared Physics & Technology, 53(6), pp.464-468. Available at: https://doi.org/10.1016/j.infrared.2010.09.003.

Toet, A. & Hogervorst, M.A. 2020. Review of camouflage assessment techniques. In: Proceedings of SPIE Security + Defence; Target and Background Signatures VI; 1153604, Online Only, 11536, pp.1-29. Available at: https://doi.org/10.1117/12.2566183.

Wang, Z., Bovik, A.C., Sheikh, H.R. & Simoncelli, E.P. 2004. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), pp.600-612. Available at: https://doi.org/10.1109/TIP.2003.819861.

Wilson, D. 2001. Image-based contrast-to-clutter modeling of detection. Optical Engineering, 40(9), pp.1852-1857. Available at: https://doi.org/10.1117/1.1389502.

Xiao, C.-m., Shi, Z.-l. & Liu, Y.-p. 2015a. Metrics of image clutter by introducing gradient features. Optics and Precision Engineering, 12, pp.3472-3479 [online]. Available at: http://caod.oriprobe.com/articles/47854327/Metrics_of_image_background_clutter_by_introducing_gradient_features.htm [Accessed: 20 April 2023].

Xiao, B., Duan, J., Zhu, Y., Chen, Y. & Li, G. 2015b. Survey of evaluation methods in image complexity of target and background. In: Proceedings of Applied Optics and Photonics China (AOPC2015); Image Processing and Analysis; 96751Q, Beijing, China, 9675, pp.1-6. Available at: https://doi.org/10.1117/12.2199533.

Xu, D. & Shi, Z. 2012. FD: A feature difference based image clutter metric for targeting performance. Infrared Physics & Technology, 55(6), pp.499-504. Available at: https://doi.Org/10.1016/j.infrared.2012.08.001.

Xu, D., Shi, Z. & Luo, H. 2013. A structural difference based image clutter metric with brain cognitive model constraints. Infrared Physics & Technology, 57, pp.28-35. Available at: https://doi.org/10.1016/j.infrared.2012.11.005.

Xu, D. & Shi, Z. 2013. DSIM: A dissimilarity-based image clutter metric for targeting performance. IEEE Transactions on Image Processing, 22(10), pp.4108-4122. Available at: https://doi.org/10.1109/TIP.2013.2270112.

Yang, C., Zhang, J.-Q., Xu, X., Chang, H.-H. & He, G.-J. 2007. Quaternion phase-correlation-based clutter metric for color images. Optical Engineering, 46(12), art.number:127008. Available at: https://doi.org/10.1117/1.2823489.

Yang, C., Wu, J., Li, Q. & Zhang, J.-Q. 2011. Sparse-representation-based clutter metric. Applied Optics, 50(11), pp.1601-1605. Available at: https://doi.org/10.1364/A0.50.001601.

Yu, H. & Winkler, S. 2013. Image complexity and spatial information. In: 2013 Fifth International Workshop on Quality of Multimedia Experience (QoMEX), Klagenfurt am Worthersee, Austria, pp.12-17, July 3-5. Available at: https://doi.org/10.1109/QoMEX.2013.6603194.

Zhao, Y., Song, Y., Sulaman, M., Li, X., Guo, Z., Yang, X. & Wang, F. 2019. A multidirectional-difference-Hash-based image clutter metric for targeting performance. IEEE Photonics Journal, 11(4), art.number:7801110, pp.1-10. Available at: https://doi.org/10.1109/JPHOT.2019.2922967.

Zheng, B., Wang, X.-D., Huang, J.-T., Wang, J. & Jiang, Y. 2016. Selective visual attention based clutter metric with human visual system adaptability. Applied Optics, 55(27), pp.7700-7706. Available at: https://doi.org/10.1364/A0.55.007700.


EDITORIAL NOTE: The first author of this article, Boban P. Bondzulic, is a current member of the Editorial Board of the Military Technical Courier. Therefore, the Editorial Team has ensured that the double blind reviewing process was even more transparent and more rigorous. The Team made additional effort to maintain the integrity of the review and to minimize any bias by having another associate editor handle the review procedure independently of the editor-author in a completely transparent process. The Editorial Team has taken special care that the referee did not recognize the author's identity, thus avoiding the conflict of interest.


Paper received on: 22.04.2023. Manuscript corrections submitted on: 10.06.2023. Paper accepted for publishing on: 12.06.2023.

© 2023 The Authors. Published by Vojnotehnicki glasnik / Military Technical Courier (www.vtg.mod.gov.rs, втг.мо.упр.срб). This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/rs/).

