DOI: 10.12731/2227-930X-2018-2-40-57 UDС 004.932.2
DYNAMIC TEXTURE RECOGNITION ALGORITHM Pyataeva A.V., Raevich K.V.
Recognizing dynamic patterns based on visual processing is significant for many applications such as remote monitoring for the prevention of natural disasters, e.g. forest fires, various types of surveillance, e.g. traffic monitoring, background subtraction in challenging environments, e.g. outdoor scenes with vegetation, homeland security applications and scientific studies of animal behavior. In the context of surveillance, recognizing dynamic patterns is of significance to isolate activities of interest (e.g. fire) from distracting background (e.g. windblown vegetation and changes in scene illumination).
Methods: pattern recognition, computer vision.
Results: This paper presents a video-based image processing algorithm for samples that usually contain a cluttered background. According to spatiotemporal features, four categories were formulated, and the dynamic texture recognition algorithm assigns image objects to one of these categories. Motion, color, fractal, Laws energy, and ELBP features are extracted for dynamic texture categorization. Classification is based on a boosted random forest.
Practical relevance: Experimental results show that the proposed method is feasible and effective for video-based dynamic texture categorization. The average classification accuracy over all video sequences is 95.2%.
Keywords: dynamic texture recognition; motion features; fractal features; boosted random forests.
DYNAMIC TEXTURE ANALYSIS ALGORITHM Pyataeva A.V., Raevich K.V.
Problem statement: Detection of dynamic textures in video images is finding ever wider application in computer vision systems, for example, smoke and flame detection in environmental monitoring systems, vehicle traffic analysis in road congestion monitoring, and other systems. Searching for an object of interest against a dynamic background is often complicated by similar texture or motion features shared by the background and the sought object. This raises the need for a dynamic texture classification algorithm that separates objects of interest from a dynamic background.
Methods: pattern recognition, computer vision.
Results: This paper considers the processing of video images containing objects with dynamic behavior on a dynamic background, such as water, fog, flame, textiles in the wind, etc. An algorithm is developed that assigns objects in a video image to one of four proposed categories. Motion features, color features, fractal features, and Laws energy features are extracted, and ELBP histograms are built. A boosted random forest is used as the classifier.
Practical relevance: The developed method divides dynamic textures into categories by motion type (periodic or chaotic) and by type of objects of interest (natural or man-made). Experimental studies confirm the effectiveness of the proposed algorithm for assigning image objects to a particular category. The average classification accuracy is 95.2%.
Keywords: dynamic texture analysis; motion features; fractal features; boosted random forest.
Introduction
Nowadays, dynamic texture recognition is of particular importance for many computer vision tasks across a variety of fields. Dynamic textures (DTs) are caused by a variety of physical processes, which leads to different visual appearances of such objects: small/large particles, transparent/opaque visibility, rigid/non-rigid structure, 2D/3D motion. The goal of DT recognition can differ. In reconstruction tasks, recognition of a DT means the creation of its 2D or 3D statistical model. A DT is an extension of texture to the temporal domain. Recognizing dynamic patterns based on visual processing is significant for many applications such as remote monitoring for the prevention of natural disasters, e.g. forest fires; various types of surveillance, e.g. traffic monitoring; background subtraction in challenging environments, e.g. outdoor scenes with vegetation; homeland security applications; and scientific studies of animal behavior. In the context of surveillance, recognizing dynamic patterns is significant for isolating activities of interest from a distracting background.
The recognition of the DTs remains a challenging problem because of multiple impacts appearing in the dynamic scenes that include the viewpoint changes, camera motion, illumination changes, etc. In past decades, a variety of different approaches have been proposed for recognition of the DTs, such as the Linear Dynamic System (LDS) methods [1], GIST method [2], the Local Binary Pattern (LBP) methods [3], wavelet methods [4; 5], morphological methods [6], deep multilayer networks [7], among others.
Dynamic texture features estimation
Dynamic textures can be divided into four categories according to spatiotemporal criteria [8]:
• Category I. Natural particles with periodic movement like water in the lake, river, waterfall, ocean, pond, canal, and fountain, leaves and grass under a wind in large scales;
• Category II. Natural translucency/transparent non-rigid blobs with randomly changed movement like the smoke, clouds, flame, haze, fog, and other phenomena;
• Category III. Man-made opaque rigid objects with periodic movement like flags and textile under a wind, leaves and grass under a wind in small scales;
• Category IV. Man-made opaque rigid objects with stationary or chaotic movement like car traffic, birds and fish in swarms, a moving escalator, and crowds.
According to these DT categories, the following dynamic texture classification features are proposed: motion parameters, chromatic components, geometrical (flickering) features, a shape entropy measure, and Laws energy characteristics.
1. Motion features
At the first step of the dynamic texture recognition algorithm, motion features are extracted. Moving areas are estimated with the SAD (Sum of Absolute Differences) criterion of the block matching algorithm by Eq. 1:

SAD = Σ_{i=1}^{Pix} |I_i(t) − I_i(t−1)|, (1)

where Pix is the number of block pixels, and I_i(t) and I_i(t−1) are the intensity values of pixel i in two neighboring frames. We used a block size of 30×30 pixels for moving area detection.
Additionally, the optical flow provides information about the local and global motion vectors.
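For illustration, a minimal sketch of this block-matching step in Python; the 30×30 block size follows the text, while the SAD threshold is an assumed parameter, since the paper does not give its value:

```python
import numpy as np

def detect_moving_blocks(frame_prev, frame_cur, block=30, thresh=1500.0):
    """Mark 30x30 blocks whose SAD between two neighbouring frames (Eq. 1)
    exceeds a threshold. Frames are 2D grayscale arrays of equal shape."""
    h, w = frame_cur.shape
    moving = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            cur = frame_cur[r:r + block, c:c + block].astype(np.float32)
            prev = frame_prev[r:r + block, c:c + block].astype(np.float32)
            if np.abs(cur - prev).sum() > thresh:   # Eq. 1 vs. assumed threshold
                moving.append((r, c))
    return moving
```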
2. Color features
In the zones where motion is identified, a colour mask is applied to detect candidate blocks for Categories I and II (Eqs. 2–5). Natural translucent/transparent non-rigid blobs with randomly changing movement, such as smoke, clouds, flame, haze, and fog, can be detected by using an experimentally chosen colour threshold T in the RGB colour space:
|R − G| < T, |G − B| < T. (2)
For the detection of flame-coloured regions, a combination of the RGB and HSV colour spaces is used:
R > G > B, (3)
R > RT, (4)
S > (255 − R) × ST / RT. (5)
In Expressions (4) and (5), RT indicates the threshold value of R; S is the value of the pixel saturation, and ST corresponds to the saturation when the R value equals RT for the same pixel. Rules (3) and (4) express that the value of the R channel is greater than that of the other channels. Colour features of natural particles with periodic movement (Category I) are estimated similarly. Objects in Category III and Category IV demonstrate various colour features.
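A sketch of how rules (2)–(5) could be applied in Python; the threshold values T, RT, and ST are assumed for illustration, since the paper chooses them experimentally:

```python
import numpy as np

def smoke_mask(rgb, T=20):
    """Rule (2): greyish translucent blobs (smoke, clouds, haze, fog).
    rgb: (H, W, 3) uint8 image; T is an assumed threshold."""
    R = rgb[..., 0].astype(int)
    G = rgb[..., 1].astype(int)
    B = rgb[..., 2].astype(int)
    return (np.abs(R - G) < T) & (np.abs(G - B) < T)

def flame_mask(rgb, S, RT=115, ST=60):
    """Rules (3)-(5): flame colours in the combined RGB/HSV spaces.
    S is the HSV saturation channel scaled to [0, 255]; RT, ST are assumed."""
    R = rgb[..., 0].astype(int)
    G = rgb[..., 1].astype(int)
    B = rgb[..., 2].astype(int)
    return (R > G) & (G > B) & (R > RT) & (S > (255 - R) * ST / RT)
```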
3. Fractal features
Dynamic fractal analysis is built on the concept of the fractal dimension, which measures the statistical self-similarity of a point set in a multi-scale fashion. Four measures are suitable for the shape, motion, and fractal evaluation of the DTs [9]: the pixel intensity μI (Eq. 6), the temporal brightness gradient μB (Eq. 7), the Laplacian μL (Eq. 8), and the normal flow μF (Eq. 9).
The pixel intensity measure μI(p0, t0, rs, rt) is calculated by Eq. 6:

μI(p0, t0, rs, rt) = ∫∫_Ω(p0, t0, rs, rt) I(p, t) dp dt, (6)

where I(p, t) is the intensity value of pixel p at time instant t, rs is a spatial radius, rt is a temporal radius, and Ω(p0, t0, rs, rt) is a 3D cube centred at the point (p0, t0). The temporal brightness gradient μB(p0, t0, rs, rt) is a summation of the temporal intensity changes of the DT in the 3D cube Ω. This parameter is defined through a derivative of second order:

μB(p0, t0, rs, rt) = ∫∫_Ω |∂²I(p, t)/∂t²| dp dt. (7)

The Laplacian μL(p0, t0, rs, rt) carries the information about the local covariance of pixel intensity at the point (p0, t0) in the spatial-temporal domain (Eq. 8):

μL(p0, t0, rs, rt) = ∫∫_Ω |ΔI(p, t)| dp dt. (8)

The normal flow μF(p0, t0, rs, rt) is often used in motion estimation of the DTs. It measures the motion of pixels along the direction perpendicular to the brightness gradient, e.g. edge motion, which is an appropriate measure for the chaotic motion of the DTs. This measure is calculated by Eq. 9:

μF(p0, t0, rs, rt) = ∫∫_Ω (|∂I(p, t)/∂t| / ‖∇I(p, t)‖) dp dt. (9)
The spatial texture layering, as well as the type and shape of texels, are also important descriptors for preliminary categorization. They can be estimated using the gradient information of successive frames. The measures represented by Eqs. 6–9 characterize a DT as a stochastic dynamic system with self-similarity in the spatio-temporal domain.
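A rough sketch of computing the four measures (Eqs. 6–9) over a discrete 3D cube of a grayscale video volume; the finite-difference discretization of the integrals and derivatives is an assumption:

```python
import numpy as np

def fractal_measures(video, p0, t0, rs=4, rt=2, eps=1e-6):
    """video: (T, H, W) float array; p0 = (y, x). Returns the measures
    (mu_I, mu_B, mu_L, mu_F) of Eqs. 6-9 accumulated over the 3D cube
    of spatial radius rs and temporal radius rt centred at (p0, t0)."""
    y, x = p0
    cube = video[t0 - rt:t0 + rt + 1, y - rs:y + rs + 1, x - rs:x + rs + 1]
    mu_I = cube.sum()                                    # Eq. 6
    d2t = np.gradient(np.gradient(cube, axis=0), axis=0)
    mu_B = np.abs(d2t).sum()                             # Eq. 7
    dy, dx = np.gradient(cube, axis=1), np.gradient(cube, axis=2)
    lap = np.gradient(dy, axis=1) + np.gradient(dx, axis=2)
    mu_L = np.abs(lap).sum()                             # Eq. 8
    dt = np.gradient(cube, axis=0)
    mu_F = (np.abs(dt) / (np.sqrt(dx**2 + dy**2) + eps)).sum()  # Eq. 9
    return mu_I, mu_B, mu_L, mu_F
```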
4. Laws energy features
The Laws energy approach [10] is a successful methodology for image segmentation using texture analysis, applied here to dynamic texture classification. Laws identified the following properties as playing an important role in describing texture: uniformity, density, coarseness, roughness, regularity, linearity, directionality, direction, frequency, and phase. The Laws energy filter is applied to preprocessed gray-scale moving blocks. A fixed-size scanning window is used to remove the influence of illumination. The normalized pixel intensity P[r, c] is calculated in a window surrounding the central pixel with intensity I[r, c] by Eq. 10, where I = (R + G + B)/3 and w is the window size:
P[r, c] = I[r, c] − (1/w²) Σ_{j=c−(w−1)/2}^{c+(w−1)/2} Σ_{i=r−(w−1)/2}^{r+(w−1)/2} I[i, j]. (10)
For natural scenes the scanning window size is 15×15 pixels [11], otherwise 5×5 pixels. Laws' texture features determine texture properties by assessing the average gray level, edges, spots, ripples, and waves in the texture. The approach uses basic convolution kernels for image filtering. The following set consists of one-dimensional kernels of length five (Eq. 11):
L5 = [ 1  4  6  4  1],
E5 = [−1 −2  0  2  1],
S5 = [−1  0  2  0 −1],
W5 = [−1  2  0 −2  1],
R5 = [ 1 −4  6 −4  1]. (11)

Two-dimensional 5×5 masks are obtained as outer products of these kernels; for example, the mask built from the column vector L5ᵀ and the row vector R5 is estimated by Eq. 12:

L5ᵀR5 = [1 4 6 4 1]ᵀ [1 −4 6 −4 1] =
  [ 1  −4   6  −4   1
    4 −16  24 −16   4
    6 −24  36 −24   6
    4 −16  24 −16   4
    1  −4   6  −4   1 ]. (12)
The 16 filtered images are estimated by applying the Laws filters. The Laws energy map E[r, c] is calculated by Eq. 13, where F_k(i, j) is the image filtered with the Laws mask of index k:

E(r, c) = Σ_{j=c−7}^{c+7} Σ_{i=r−7}^{r+7} |F_k(i, j)|. (13)

Fig. 1 demonstrates the application of the Laws energy mask S5S5.

Fig. 1. (a) original image; (b) filtered image
Symmetrical pairs of maps (like E5L5 and L5E5) are replaced by an average map according to the formula:
E_avg(r, c) = (E_KL(r, c) + E_LK(r, c)) / 2, (14)

where KL and LK denote a symmetric pair of masks. For example, the E5L5 mask describes horizontal edges, while the L5E5 mask describes vertical edges; their average responds to all image edges.
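A sketch of the Laws pipeline above: local-mean normalization (Eq. 10), 2D masks as outer products of the 1D kernels (Eq. 11), windowed energy maps (Eq. 13), and averaging of symmetric pairs (Eq. 14). All 25 outer products are computed here, so the exact subset the paper keeps (16 filtered images) may differ:

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

KERNELS = {
    "L5": np.array([1, 4, 6, 4, 1], float),
    "E5": np.array([-1, -2, 0, 2, 1], float),
    "S5": np.array([-1, 0, 2, 0, -1], float),
    "W5": np.array([-1, 2, 0, -2, 1], float),
    "R5": np.array([1, -4, 6, -4, 1], float),
}

def laws_energy_maps(gray, w=15):
    """gray: 2D float image. Removes the local mean (Eq. 10), filters with
    every outer-product mask (Eqs. 11-12), accumulates windowed energy
    (the windowed mean of |F_k|, proportional to the sum in Eq. 13),
    and averages symmetric pairs (Eq. 14)."""
    img = gray - uniform_filter(gray, size=w)            # Eq. 10
    maps = {}
    for a, ka in KERNELS.items():
        for b, kb in KERNELS.items():
            mask = np.outer(ka, kb)                      # e.g. L5^T E5
            maps[a + b] = uniform_filter(np.abs(convolve(img, mask)), size=w)
    names = list(KERNELS)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            maps[a + b] = (maps[a + b] + maps[b + a]) / 2.0   # Eq. 14
            del maps[b + a]
    return maps
```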
5. ELBP features
The Local Binary Pattern (LBP) was introduced by Ojala et al. [12] as a binary operator robust to lighting variations, with low computational cost and the ability to simply encode the neighboring pixels around a central pixel as a binary string or decimal value. The operator LBP(N, R) is calculated in a neighborhood around a central pixel with intensity I_c by Eq. 15, where N is the number of pixels in the neighborhood and R is a radius:

LBP(N, R) = Σ_{n=0}^{N−1} s(I_n − I_c) · 2^n, (15)

where s(I_n − I_c) = 1 if (I_n − I_c) ≥ 0, and s(I_n − I_c) = 0 otherwise. The variables I_n and I_c are the pixel intensities in the current and central points, taken as the Y coordinate of the YUV color space [13–16]. In our work, the spatio-temporal local binary pattern (STLBP) was used. The STLBP gathers information from the adjacent frames relative to the central pixel by Eq. 16. For a description of the DTs, it is necessary to introduce a 3D cuboid of information; thus, the application of the STLBP is reasonable. However, the STLBP becomes voluminous and poorly representative compared to the generic LBP.
STLBP_j(P) = LBP_{j−1}(P) + LBP_j(P) + LBP_{j+1}(P), (16)

where j is the index of the current frame.
The extended local binary pattern (ELBP), based on uniform patterns [17], represents local texture structures. The operator ELBP(N, R) is calculated like the LBP(N, R) operator. An LBP is called uniform if there are no more than three 0/1 or 1/0 bitwise transitions in its binary code, considered as a circular code. It is reported in [18] that uniform patterns account for about 87.2% and 70.7% of all patterns in the (8, 1) and (16, 2) neighborhoods, respectively; that is, the uniform patterns make up a majority of all patterns. Uniform patterns can be interpreted as line-end, corner, and edge patterns. As a result, each uniform pattern is given a unique label, and all remaining patterns are given a single shared label in the histogram calculation.
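A minimal sketch of the LBP code (Eq. 15) and the uniformity test; circular neighbours are sampled with nearest-neighbour rounding for brevity, and the transition limit is a parameter (the paper states at most three transitions, while the classic definition in [18] uses two):

```python
import numpy as np

def lbp_code(img, y, x, N=8, R=1):
    """LBP(N, R) at pixel (y, x) by Eq. 15: threshold N circular
    neighbours at radius R against the centre Ic, weight by 2**n."""
    Ic = img[y, x]
    code = 0
    for n in range(N):
        ang = 2.0 * np.pi * n / N
        yn = int(round(y + R * np.sin(ang)))
        xn = int(round(x + R * np.cos(ang)))
        code |= int(img[yn, xn] >= Ic) << n   # s(In - Ic) * 2**n
    return code

def is_uniform(code, N=8, max_transitions=3):
    """Uniformity test on the circular binary code; the default limit
    follows the paper's statement of at most three transitions."""
    bits = [(code >> n) & 1 for n in range(N)]
    return sum(bits[n] != bits[(n + 1) % N] for n in range(N)) <= max_transitions
```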
Dynamic texture recognition algorithm
The generalized algorithm is as follows:
• Step 1. Motion feature estimation: detect moving blocks and direction vectors.
• Step 2. Color feature estimation.
• Step 3. Estimate the fractal measures: pixel intensity μI, temporal brightness gradient μB, normal flow μF, and the Laplacian μL.
• Step 4. Convert the input image into a grayscale image and apply the Laws energy approach for energy map estimation.
• Step 5. Build a set of ELBP local descriptors for the analyzed region.
• Step 6. Apply a histogram approach for classification and store the results.
• Step 7. Combine regions with similar features.
• Step 8. Cluster using boosted random forests.
• Step 9. Repeat Steps 3–8 in a cycle until all moving blocks are categorized.
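Tying the steps together, a high-level sketch of the per-frame loop; `extract_features` is a hypothetical helper standing in for the fractal, Laws energy, and ELBP descriptors sketched above, not the authors' code:

```python
def categorize_moving_blocks(frame_prev, frame_cur, classifier, extract_features):
    """High-level sketch of Steps 1-9 for one frame pair.
    extract_features(frame, r, c) is a hypothetical helper combining the
    descriptors sketched in the previous sections; classifier is a trained
    boosted random forest exposing predict()."""
    labels = {}
    for (r, c) in detect_moving_blocks(frame_prev, frame_cur):   # Steps 1-2
        feats = extract_features(frame_cur, r, c)                # Steps 3-5
        labels[(r, c)] = classifier.predict(feats)               # Steps 6, 8
    return labels   # Steps 7, 9: merging of similar regions runs downstream
```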
The first step of the proposed algorithm is motion estimation based on the block-matching SAD criterion and the optical flow, which provides information about the local and global motion vectors. The block-matching algorithm involves 2 to 5 frames of the video sequence, according to our experiments, since objects of different categories show various motion speeds. Smoke and clouds demonstrate similar ELBP texture features, but the motion features of these natural transparent objects differ: the direction of moving smoke is usually from the bottom to the top of the video frame, while smoke-colored objects such as clouds move across the frame. The characteristic motion feature of flame is the random change of the flame region boundaries from frame to frame. Moreover, as shown in [19], it is reasonable to estimate the scene depth, which permits separating images into two groups: close scenes (up to approximately 500 m) and remote scenes (more than 500 m), where "close" and "far" moving objects such as smoke can be observed, respectively.
The next algorithm step is the estimation of color, fractal, and entropy features. The fourth step applies the Laws energy approach for the detection of edge, spot, ripple, and wave texture features. The next step is the computation of ELBP descriptors, after which the histogram approach is applied.
The chi-square distance, histogram intersection distance, Kullback-Leibler divergence, and G-statistic are usually used at the classification stage. In this research, the histogram intersection and chi-square distance were chosen for histogram comparison, as is often recommended in the literature (Eqs. 17–18).
Hist(f, g) = 1 − Σ_{m=1}^{M} min(f_m, g_m), (17)

χ²(f, g) = Σ_{m=1}^{M} (f_m − g_m)² / (f_m + g_m), (18)

where f and g are the compared histograms and M is the number of bins.
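The two comparison measures in a short sketch, assuming the histograms f and g are nonnegative arrays of equal length:

```python
import numpy as np

def hist_intersection(f, g):
    """Eq. 17: one minus the sum of bin-wise minima
    (0 for identical unit-sum histograms)."""
    return 1.0 - np.minimum(f, g).sum()

def chi_square(f, g, eps=1e-12):
    """Eq. 18: chi-square distance between histograms f and g;
    eps guards against empty bins."""
    f, g = np.asarray(f, float), np.asarray(g, float)
    return (((f - g) ** 2) / (f + g + eps)).sum()
```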
Region clustering is based on boosted random forests [20]. Boosted random forests (BRFs) include a boosting algorithm during random forest learning in order to produce high-performance decision trees that are smaller in size [21]. The BRFs include bootstrapping similar to the AdaBoost algorithm at the learning stage and involve estimating the class labels of the training data with the trained decision trees, calculating the error of each decision tree, and computing the weight of each decision tree.
During the clustering stage, an unknown sample is entered into all decision trees, and the class probabilities are stored in the leaf nodes of each tree. Then all outputs of the decision trees P_t(c|a_t) are weighted and averaged using Eq. 19:

P(c | a) = (1/K) Σ_{t=1}^{K} w_t · P_t(c | a_t), (19)

where K is the number of decision trees, c is the class, a_t is the current sample, and w_t is the weight of tree t. The class that has the highest probability is the clustering result. The categorization rate and the errors are estimated from the BRF clustering results.
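A sketch of the weighted averaging of tree outputs (Eq. 19); the per-tree weights w_t are assumed to come from the boosting stage:

```python
import numpy as np

def brf_predict(tree_probs, tree_weights):
    """tree_probs: (K, C) array, row t holding P_t(c | a_t) for each class c;
    tree_weights: (K,) boosting weights w_t. Returns the index of the class
    with the highest weighted-average probability (Eq. 19)."""
    probs = np.asarray(tree_probs, dtype=float)
    w = np.asarray(tree_weights, dtype=float)[:, None]
    avg = (w * probs).sum(axis=0) / probs.shape[0]   # (1/K) * sum_t w_t * P_t
    return int(np.argmax(avg))
```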
Experimental results
For the experiments, the DynTex [22], V-MOTE [23], WildFilmsIndia [24], and Bilkent University [25] datasets were used. The test video images have different resolutions, from 320×240 pixels to 1280×720 pixels, and depict a great variety of objects, including natural objects, man-made objects, humans, animals, etc., shot outdoors. Some of the used videos are described briefly in Table 1.
Table 1.
Description of some used videos (sample frames are not reproduced)

Alias     File name                                        Resolution, pix   Number of frames
video1    XVID_0011.avi                                    720x576           3 100
video2    XVID_0002.avi                                    720x576           1 800
video3    Flamingos.mp4                                    1280x720          2 350
video4    Pondicherry Beach - the brief of the ocean.avi   1280x720          1 320
video5    Fish Hide From Predators.mp4                     1280x720          3 696
video6    Republic Day Parade.mp4                          720x576           22 510
video7    648aa10.avi                                      720x576           950
video8    645c510.avi                                      720x576           7 200
video9    646a510.avi                                      720x576           350
video10   54pe210.avi                                      720x576           250
video11   649a810.avi                                      720x576           4 950
video12   645e010.avi                                      720x576           6 000
video13   controlled1.avi                                  400x256           275
video14   BackYardFile.avi                                 320x240           1 251
Experimental results of the DT categorization are shown in Table 2. The average detection accuracy was measured on the video sequences to evaluate the efficiency of the DT categorization algorithm; a single video may contain objects of several DT categories. The performance of the DT classification algorithm was evaluated using CR (classification rate), FRR (false rejection rate), and FAR (false alarm rate). The CR indicator is calculated as the ratio of regions with the right class label to the total number of regions. The FAR indicates the ratio of regions with false positives to the total number of regions in the video image.
Table 2.
Experimental results
Video alias   Number of frames   Histogram intersection       Chi-square distance
                                 CR, %    FAR, %   FRR, %     CR, %    FAR, %   FRR, %
video1        3 100              97.20    1.78     2.80       98.42    1.52     1.58
video2        1 800              98.21    0.85     1.79       99.00    0.78     1.00
video3        2 350              95.25    2.13     4.75       96.12    1.52     3.88
video4        1 320              98.31    1.02     1.89       98.89    0.99     1.11
video5        3 696              88.15    9.00     11.85      89.12    8.75     10.88
video6        22 510             91.85    9.12     8.15       92.00    8.45     8.00
video7        950                98.25    0.28     1.75       99.0     0.11     1.00
video8        7 200              96.85    3.00     3.15       97.21    3.00     2.79
video9        350                100.0    0.00     0.00       100.0    0.00     0.00
video10       250                100.0    0.00     0.00       100.0    0.00     0.00
video11       4 950              89.21    8.74     10.79      90.00    5.21     10.00
video12       6 000              90.01    8.77     9.99       91.27    8.00     8.73
video13       275                93.12    7.14     6.88       94.74    6.98     5.26
video14       1 251              95.27    6.45     4.73       96.52    5.89     3.48
The experiments conducted on the sequences from the presented datasets show the best recognition results for Categories IV and III, with an averaged recognition rate of 96%. The average classification accuracy over all video sequences is 95.2%.
Experiments show that a particular difficulty for the DT recognition algorithm is the classification of video images containing regions of different categories superimposed on one another; video11 and video12 in Table 1 are examples of such images. At the motion feature extraction step, a candidate block may contain objects of two or more classes. In this case FAR and FRR errors are observed, because the block is assigned to the class with the higher probability. The histogram intersection and chi-square distances are adopted for measuring distances between histograms in order to analyze the probability of occurrence of code numbers for the compared textures.
For the DTs based on man-made opaque rigid objects with stationary or chaotic movement, the errors of the temporal features are high for short-term series, which influences the final result. Also, the samples of these categories usually contain a cluttered background. This means that special attention ought to be paid to temporal analysis in further investigations. The experimental results show that the proposed method is feasible and effective for video-based DT classification.
Conclusion
In this research, the problem of dynamic texture classification is solved using motion, color, fractal, Laws energy, and ELBP features. The histogram intersection and chi-square distances are used for histogram comparison at the classification stage, and region clustering is based on boosted random forests. The average classification accuracy over all video sequences is 95.2%. The results show that the proposed method is feasible and effective for video-based DT classification.
References
1. Ravichandran A., Chaudhry R., Vidal R. Categorizing dynamic textures using a bag of dynamical systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, pp. 342-353.
2. Oliva A., Torralba A. Modeling the shape of the scene: a holistic representation of the spatial envelope. International Journal on Computer Vision, 2001, pp. 145-175.
3. Zhao G., Pietikainen M. Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, pp.915-928.
4. Dubois S., Peteri R., Menard M. A Comparison of Wavelet Based Spatio-temporal Decomposition Methods for Dynamic Texture Recognition. In: The Iberian Conference on Pattern Recognition and Image Analysis, Santiago de Compostela, 2009, pp. 314-321.
5. Dubois S., Peteri R., Menard M. Characterization and recognition of dynamic textures based on 2D+T curvelet transform. Signal, Image and Video Processing, 2015, pp. 819-830.
6. Dubois S., Peteri R., Menard M. Decomposition of Dynamic Textures using Morphological Component Analysis. IEEE Transactions on Circuits and Systems for Video Technology, 2012, pp. 188-201.
7. Yang F., Xia G.S., Liu G., Zhang L., Huang X. Dynamic texture recognition by aggregating spatial and temporal features via ensemble SVMs. Neurocomputing, 2016, pp. 1310-1321.
8. Favorskaya M.N., Pyataeva A.V. Convolutional recognition of dynamic textures with preliminary categorization. Photogrammetric and computer vision techniques for video Surveillance, Biometrics and Bio-medicine. Moscow, May 15-17, 2017, pp. 47-54.
9. Xu Y., Quan Y., Zhang Z., Ling H., Ji H. Classifying dynamic textures via spatiotemporal fractal analysis. Pattern Recognition, 2015, pp. 3239-3248.
10. Laws K. Rapid Texture Identification. Proceedings of SPIE - Society of Photo-Optical Instrumentation Engineers - Image Processing for Missile Guidance, 1980, vol. 238, pp. 367-380.
11. Yakovleva E.V., Panchenko I.A. Primenenie energeticheskikh kharak-teristik Lavsa dlya segmentatsii izobrazheniy [The application of the energy characteristics of Lavs for image segmentation]. Bionika intel-lekta [Bionics of intelligence], 2007, No 2 (67), pp. 94-98.
12. Ojala T., Pietikainen M., Harwood D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognition, 1996, pp. 51-59.
13. Habiboglu H.Y., Gunay O., Cetin E. Real-time wildfire detection using correlation descriptors. 19th European Signal Conference (EUSIPCO 2011). Barcelona, 2011, pp. 894-898.
14. Ko B.C., Park J.O., Nam J.-Y. Spatiotemporal bag-of-features for early wildfire smoke detection. Image and Vision Computing, 2013, vol. 31, Issue 10, pp. 786-795.
15. Krstinic D., Stipanicev D., Jakovcevic T. Histogram-based segmentation fire detection system. Information Technology and Control, 2009, vol. 38, no. 3, pp. 237-244.
16. Ojala T., Valkealahti K., Oja E., Pietikäinen M. Texture discrimination with multidimensional distributions of signed gray-level differences. Pattern Recognition, 2001, no. 34(3), pp. 727-739.
17. Liao W.H., Young T.J. Texture classification using uniform extended local ternary patterns. International Symposium on Multimedia, 2010, №4 (83), pp. 191-195.
18. Ojala T., Pietikäinen M., Mäenpää T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. PAMI, 2002, pp. 971-987.
19. Pyataeva A.V., Favorskaya M.N. Model' fona pri detektirovanii dyma po videoposledovatel'nostyam na otkrytykh prostranstvakh [Background model for video-based smoke detection in outdoor scenes]. Informatsionno-upravlyayushchie sistemy [Information and control systems], 2016, pp. 44-50.
20. Gim J.W, Hwang M.C., Ko B.C. Real-Time Speed-Limit Sign Detection and Recognition Using Spatial Pyramid Feature and Boosted Random Forest. 12th International Conference, ICIAR 2015, Niagara Falls, Canada, 2015, pp. 437-445.
21. Favorskaya M., Pyataeva A., Popov A. Spatio-temporal smoke clustering in outdoor scenes based on boosted random forests. Procedia Computer Science, 2016, vol. 96, pp. 762-771.
22. Péteri R., Fazekas S., Huiskes M.J. DynTex: a comprehensive database of dynamic textures. Pattern Recognition Letters, 2010, vol. 31, no. 12, pp. 1627-1632.
23. V-MOTE Database. http://www2.imse-cnm.csic.es/vmote/english_ver-sion/index.php (accessed 09.05.2018).
24. Database of Wildfilmsindia. www.wildfilmsindia.com (accessed 09.05.2018).
25. Bilkent dataset. http://signal.ee.bilkent.edu.tr (accessed 09.05.2018).
DATA ABOUT THE AUTHORS
Pyataeva Anna Vladimirovna, Assistant Professor, Department of Artificial Intelligence Systems, Candidate of Engineering Sciences
Siberian Federal University
26, Kirensky Str., Krasnoyarsk, 660074, Russian Federation
[email protected]

Raevich Ksenia Vladislavovna, Assistant Professor, Department of Artificial Intelligence Systems, Candidate of Engineering Sciences
Siberian Federal University
26, Kirensky Str., Krasnoyarsk, 660074, Russian Federation
[email protected]