COMPUTER SCIENCE
UDC 004.932
SMOKE DETECTION ALGORITHM FOR VIDEO SURVEILLANCE SYSTEMS
N. BROVKO, R. BOGUSH (Polotsk State University)
This paper presents an efficient and reliable algorithm for smoke detection in video sequences. Its key components are the detection of slowly moving blobs, classification of the blobs obtained, and tracking of smoke regions. At the slowly moving blob detection stage we use preprocessing, segmentation of slowly moving areas and pixels in the current input frame based on an adaptive background subtraction algorithm, and merging of the slowly moving areas and pixels into blobs. For classification, the Weber contrast is calculated and the primary direction of smoke propagation is taken into account. At the tracking step we trace texture and color smoke features using the Cam Shift algorithm. The performed experiments have shown that our smoke detector quickly and reliably finds smoke in complex dynamic scenes. Experimental results are presented.
Introduction
Early fire detection is a critical task for fire alarm systems. Traditional fire detectors require the sensor to be placed in very close proximity to fire or smoke and usually do not provide information about the fire location or size, so they may be unreliable and cannot be applied in open spaces and large areas. Due to the rapid development of digital camera technology and video processing techniques, there is a strong trend toward replacing conventional fire detection techniques with computer vision based systems. Combining video surveillance and fire alarm systems into a unified solution for visual monitoring of an area allows the final cost of the equipment to be reduced considerably. Smoke detection is preferable for fire alarm systems when large and open areas are monitored, because the source of the fire and the flames cannot always fall into the field of view. However, the smoke of an uncontrolled fire can easily be observed by a camera even if the flames are not visible. This enables detection of a fire at an early stage, before it spreads.
Smoke detection in video sequences usually relies on motion and color. Motion information provides a key precondition for locating possible smoke regions. Background subtraction algorithms are traditionally applied for motion detection in video sequences [1 - 4]. A common technique is to use an adaptive Gaussian mixture model to approximate the background modeling process [1; 2]. In [5] optical flow calculation is applied to detect smoke motion; the drawbacks of this approach are its high sensitivity to noise and its high computational cost. Algorithms based on color and dynamic characteristics of smoke are applied to classify the obtained moving blobs. In [6] a comparative evaluation of histogram-based pixel-level classification is considered. In that algorithm a training set of video sequences containing smoke is used for the analysis. However, methods based on preliminary training make the classification quality dependent on the training set and place high demands on the quality of the processed video images. In [1; 2] areas with a decreased high-frequency energy component, identified using wavelet transforms, are treated as smoke. However, changes in scene illumination can also cause contour degradation, so such an approach requires additional checks. Color information is also used for identifying smoke in video. Depending on the stage of ignition and the burning material, smoke color ranges from almost transparent white to saturated gray and black. In [1] the decrease of the chromatic components U and V of the YUV color space is estimated.
Smoke in video sequences is a typical example of a dynamic texture [12]. Therefore, methods of dynamic texture segmentation and recognition can be used for smoke detection in video. Existing approaches to dynamic texture recognition are based on optical flow [13] and volume local binary patterns (VLBP) [14]. For real-time performance, usually only the normal component of the optical flow is computed, which makes this approach very sensitive to noise. The other approach to dynamic texture recognition models textures with volume local binary patterns (VLBP), an extension of the Local Binary Pattern (LBP) operator widely used in ordinary texture analysis, which combines motion and appearance.
To ensure the stability and reliability of a smoke detection algorithm, the smoke area found in the current frame must be traced on the following frames. Various tracking algorithms can be applied for this purpose [16]; the most popular are Mean Shift [17] and Cam Shift [18] because of their simplicity and efficiency.
In this paper we propose an effective algorithm for smoke detection in color video sequences obtained from a stationary camera. Our algorithm consists of three basic steps: slowly moving blob detection, classification of the blobs obtained, and tracking. At the slowly moving blob detection stage we use preprocessing, segmentation of slowly moving areas and pixels in the current input frame based on an adaptive background subtraction algorithm, and merging of the slowly moving areas and pixels into blobs. For classification, the Weber contrast is calculated and the primary direction of smoke propagation is taken into account. At the tracking step we trace texture and color smoke features using the Cam Shift algorithm.
1. Slowly moving blob detection
Slowly moving blob detection consists of the following basic steps: preprocessing; segmentation of slowly moving areas and pixels in the current input frame based on an adaptive background subtraction algorithm; merging of the slowly moving areas and pixels into blobs.
1.1. Frame preprocessing
The preprocessing block applies several image processing methods that increase the performance of the proposed detection algorithm and reduce false alarms. The frame preprocessing block comprises three steps: grayscale transformation, histogram equalization and a discrete wavelet transform of the current input frame. Cameras and image sensors must usually deal not only with the contrast in a scene but also with the exposure of the image sensor to the resulting light in that scene. Histogram equalization is the most commonly used method for improving the contrast characteristics of an image [7]. To resize the image and to remove high frequencies in the horizontal, vertical and diagonal details, a discrete wavelet transform in the Haar basis is applied. The Haar wavelet transform is the simplest and fastest one [8], which is important for video processing systems. Figure 1 shows the results of this step of the algorithm.
Fig. 1. The current frame (a) and the discrete wavelet transform after grayscale transformation and histogram equalization (b)
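The following fragment gives a minimal sketch of this preprocessing step, assuming OpenCV (which our implementation, described in section 4, uses); the low-frequency (approximation) subband of a one-level Haar transform is realized here as a simple 2 x 2 block average, and the function name is illustrative.

```cpp
// A minimal preprocessing sketch, assuming OpenCV. The frame is converted to grayscale,
// its histogram is equalized, and only the low-frequency Haar subband is kept,
// computed here as a 2 x 2 block average, which also halves the frame size.
#include <opencv2/opencv.hpp>

cv::Mat preprocessFrame(const cv::Mat& bgrFrame)
{
    cv::Mat gray, equalized;
    cv::cvtColor(bgrFrame, gray, cv::COLOR_BGR2GRAY);      // grayscale transformation
    cv::equalizeHist(gray, equalized);                     // contrast improvement

    // Low-frequency Haar subband: average of every 2 x 2 pixel block.
    cv::Mat low(equalized.rows / 2, equalized.cols / 2, CV_8UC1);
    for (int y = 0; y < low.rows; ++y)
        for (int x = 0; x < low.cols; ++x) {
            int sum = equalized.at<uchar>(2 * y,     2 * x)     +
                      equalized.at<uchar>(2 * y,     2 * x + 1) +
                      equalized.at<uchar>(2 * y + 1, 2 * x)     +
                      equalized.at<uchar>(2 * y + 1, 2 * x + 1);
            low.at<uchar>(y, x) = static_cast<uchar>(sum / 4);
        }
    return low;
}
```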
1.2. Slowly moving areas and pixels segmentation
As smoke spreads, it gradually blends into the background. The adaptive background subtraction algorithm we propose takes this property of smoke into account and is based on the ideas stated in [2; 9]. The background image Bt+1 at time instant t + 1 is recursively estimated from the current frame It and the background image Bt of the video [9]: for non-moving pixels Bt+1(x, y) = a·Bt(x, y) + (1 - a)·It(x, y), where a is an adaptation parameter, while for moving pixels the background is left unchanged. Moving pixels are determined by subtracting the current frame from the background and thresholding the difference; the recursive threshold estimation is also described in [9]. Since the smoke gradually blends into the background, the foreground Ft is related to the current frame by the blending model:
It(x, y) = p·Ft(x, y) + (1 - p)·Bt(x, y),
where p is a blending parameter between 0 and 1.
Since the smoke area grows slowly from frame to frame, the adaptation parameter a should be close to 1 so that pixels belonging to smoke are not quickly fixed in the background. The experimentally established values for smoke detection are a = 0.95 and 0.2 < p < 1.
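A possible realization of this background update is sketched below; it uses a fixed difference threshold T instead of the recursive threshold estimation of [9], and the function and parameter names are illustrative.

```cpp
// A sketch of the adaptive background update and moving-pixel segmentation described above.
// A fixed difference threshold T is assumed here instead of the recursive threshold
// estimation of [9].
#include <opencv2/opencv.hpp>

void updateBackground(const cv::Mat& frame,       // current low-frequency frame, CV_8UC1
                      cv::Mat& background,        // background estimate B_t, CV_32FC1
                      cv::Mat& foregroundMask,    // output mask of slowly moving pixels
                      double a = 0.95,            // adaptation parameter, close to 1
                      double T = 15.0)            // assumed fixed difference threshold
{
    cv::Mat frame32;
    frame.convertTo(frame32, CV_32FC1);
    if (background.empty())
        background = frame32.clone();             // initialize B_0 with the first frame

    cv::Mat diff;
    cv::absdiff(frame32, background, diff);       // |I_t - B_t|
    foregroundMask = diff > T;                    // moving (candidate smoke) pixels
    cv::Mat nonMoving = diff <= T;

    // B_{t+1} = a*B_t + (1 - a)*I_t for non-moving pixels; B_{t+1} = B_t otherwise.
    cv::Mat updated = a * background + (1.0 - a) * frame32;
    updated.copyTo(background, nonMoving);
}
```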
1.3. Connected component analysis
At the next step of the algorithm, connected component analysis is used to clean up noise and to connect moving regions into blobs [7]. This form of analysis takes a noisy foreground as input. Morphological operations are applied to reduce the noise:
1) morphological opening to shrink areas of small noise:
S ∘ B = (S ⊖ B) ⊕ B,
where S is the image and B is a 3 x 3 structuring element;
2) morphological closing to rebuild the areas of surviving components that were lost in opening:
S • B = (S ⊕ B) ⊖ B,
where B is a 3 x 3 structuring element.
Then all contours are found; contours that are too small are discarded and the remaining ones are approximated with polygons. Figure 2 shows the results of adaptive background subtraction and connected component analysis.
Fig. 2. The low-frequency area after the Haar transform of the current frame (a) and the background (b) for this area; the noisy foreground (c) is completely cleaned up (d) by the connected component analysis
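The fragment below is a sketch of this clean-up step under the same OpenCV assumption: opening and closing with a 3 x 3 structuring element, contour extraction, rejection of small contours and polygonal approximation of the rest. The minimum-area value and the approximation accuracy are illustrative parameters, not taken from the paper.

```cpp
// A sketch of the noise clean-up step: morphological opening and closing with a 3 x 3
// structuring element, followed by contour filtering and polygonal approximation.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<std::vector<cv::Point>> cleanForeground(const cv::Mat& noisyMask,
                                                    double minArea = 50.0)
{
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::Mat mask;
    cv::morphologyEx(noisyMask, mask, cv::MORPH_OPEN,  kernel);   // remove small noise
    cv::morphologyEx(mask,      mask, cv::MORPH_CLOSE, kernel);   // rebuild surviving blobs

    std::vector<std::vector<cv::Point>> contours, blobs;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const std::vector<cv::Point>& c : contours) {
        if (cv::contourArea(c) < minArea)                         // toss contours that are too small
            continue;
        std::vector<cv::Point> poly;
        cv::approxPolyDP(c, poly, 3.0, true);                     // approximate the rest with polygons
        blobs.push_back(poly);
    }
    return blobs;
}
```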
2. Moving blob classification
At the beginning, when the temperature of the smoke is low, the smoke is expected to show colors from the range of white-bluish to white [1], so Otsu thresholding [10] can be applied to smoke segmentation of the current frame. A block matching approach is then applied for pixels with value 1 in the Otsu mask. The block matching approach to optical flow calculation assumes that the frame is divided into small regions called blocks, and it takes the primary direction of smoke propagation into account. In [11] it is shown that the global direction of smoke propagation is 0...45°. This allows the block matching procedure to be simplified and, hence, the number of calculations to be reduced considerably. Blocks are typically square, contain some number of pixels and do not overlap. In our implementation, frames of size 320 x 240 pixels are divided into blocks of 2 x 2 pixels. The block matching algorithm divides both the previous and the current frame into such blocks and then computes the motion of these blocks. Our implementation searches three candidate positions for each original block b^prev_{x,y}
(taken from the previous frame) and compares it with the candidate blocks b^curr_{x-1,y-1}, b^curr_{x,y-1} and b^curr_{x+1,y-1} in the current frame. This comparison is calculated as follows:

F(b^prev_{x,y}, b^curr_{k,y-1}) = (1/N) · Σ_{i,j} |I^prev_{i,j} - I^curr_{i,j}| / max(I^prev_{i,j}, I^curr_{i,j}),   k ∈ {x - 1, x, x + 1},

where I^prev_{i,j} is the intensity value of a pixel of the previous frame belonging to the block b^prev_{x,y}; I^curr_{i,j} is the intensity value of a pixel of the current frame belonging to the block b^curr_{k,y-1}; N is the count of blocks into which the previous and current frames are divided.
The result of this step is a binary mask of motion in the current frame, where a value of 1 corresponds to the maximum value of F. Figure 3 shows some examples of the optical flow calculation.
Fig. 3. Otsu threshold mask (a) and the current frame (b) with motion vectors
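The sketch below illustrates this block matching step under the assumptions made in the reconstruction above: 2 x 2 blocks, three upward candidate positions per block and the normalized intensity difference F. The Otsu mask is assumed to be precomputed (e.g. with cv::threshold and THRESH_OTSU), and minF is an illustrative acceptance threshold that is not specified in the paper.

```cpp
// A sketch of the block matching step: each 2 x 2 block inside the Otsu mask is compared
// with three candidate blocks one block row above (up-left, up, up-right - an assumption),
// and blocks whose best comparison value exceeds minF are marked as moving upward.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

static double blockF(const cv::Mat& prev, const cv::Mat& curr,
                     int px, int py, int cx, int cy, int bs)
{
    // Normalized absolute difference between the block at (px, py) in the previous
    // frame and the candidate block at (cx, cy) in the current frame.
    double f = 0.0;
    for (int j = 0; j < bs; ++j)
        for (int i = 0; i < bs; ++i) {
            double ip = prev.at<uchar>(py + j, px + i);
            double ic = curr.at<uchar>(cy + j, cx + i);
            double m  = std::max(ip, ic);
            if (m > 0.0)
                f += std::fabs(ip - ic) / m;
        }
    return f;
}

cv::Mat motionMask(const cv::Mat& prevGray, const cv::Mat& currGray,
                   const cv::Mat& otsuMask, double minF = 0.5)
{
    const int bs = 2;                                  // 2 x 2 blocks, as in the text
    cv::Mat mask = cv::Mat::zeros(currGray.size(), CV_8UC1);

    for (int y = bs; y + bs <= currGray.rows; y += bs)
        for (int x = bs; x + 2 * bs <= currGray.cols; x += bs) {
            if (otsuMask.at<uchar>(y, x) == 0)         // only blocks inside the Otsu mask
                continue;
            double best = 0.0;
            for (int dx = -bs; dx <= bs; dx += bs)     // up-left, up, up-right candidates
                best = std::max(best, blockF(prevGray, currGray, x, y, x + dx, y - bs, bs));
            if (best > minF)                           // illustrative acceptance threshold
                mask(cv::Rect(x, y, bs, bs)).setTo(cv::Scalar(255));
        }
    return mask;
}
```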
For each blob obtained at the previous steps we calculate the Weber contrast CW and the percentage p of blocks that have moved in the primary direction of smoke propagation:
CW = (1/n) · Σ_{(x,y)∈blob} (Ft(x, y) - Bt(x, y)) / Bt(x, y),
where Ft(x, y) is the intensity value of pixel (x, y) at time instant t belonging to the blob; Bt(x, y) is the intensity value of the background pixel (x, y) at time instant t under the blob; n is the number of pixels belonging to the blob. If the blob passes this check, we classify it as smoke. The experimentally established values CW > 0.5 and p > 20 % allow smoke to be distinguished effectively from objects with similar behavior: fog, shadows of slowly moving objects and patches of light.
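A short sketch of this classification check is given below, assuming the blob is provided as a list of pixel coordinates and that the share of blocks moved in the primary direction has already been computed at the block matching step; the thresholds are the experimentally established values quoted above.

```cpp
// A sketch of the blob classification check: the Weber contrast C_W is the mean relative
// difference between the foreground and the background under the blob, and the blob is
// accepted as smoke when C_W and the share of blocks moved at 0...45 degrees exceed the
// thresholds from the text.
#include <opencv2/opencv.hpp>
#include <vector>

bool classifyBlob(const cv::Mat& frame,                     // F_t, grayscale, CV_8UC1
                  const cv::Mat& background,                // B_t, grayscale, CV_8UC1
                  const std::vector<cv::Point>& blobPixels, // pixels belonging to the blob
                  double movedShare)                        // share of blocks moved in the primary direction
{
    double cw = 0.0;
    int n = 0;
    for (const cv::Point& pt : blobPixels) {
        double b = background.at<uchar>(pt);
        if (b > 0.0) {                                      // avoid division by zero
            cw += (static_cast<double>(frame.at<uchar>(pt)) - b) / b;
            ++n;
        }
    }
    if (n == 0)
        return false;
    cw /= n;                                                // C_W = (1/n) * sum over blob pixels
    return cw > 0.5 && movedShare > 0.20;                   // thresholds from the text
}
```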
3. Smoke region tracking
At the tracking step we trace texture and color smoke features using the Cam Shift algorithm [18]. Smoke tracking is necessary to provide a low level of false alarms and to reduce the number of missed smoke detections. The Cam Shift algorithm uses continuously adaptive probability distributions. The essence of this tracker is region matching; the match criterion is similarity based on a color histogram.
In our implementation the Cam Shift algorithm is based on the Local Binary Pattern (LBP) texture measure [15]. The basic LBP operator is a non-parametric 3 x 3 kernel which summarizes the local spatial structure of an image. It was first introduced by Ojala et al. [15], who showed the high discriminative power of this operator for texture classification. LBP is robust to monotonic gray-scale transformations such as variations of brightness, contrast and illumination. Given a known initial position, the problem of smoke region tracking is solved on the following frames as follows:
1) the LBP is computed for the detected smoke area as follows. At a given pixel position (xc, yc), the LBP is defined as an ordered set of binary comparisons of pixel intensities between the center pixel and its eight surrounding pixels. The decimal form of the resulting 8-bit word (LBP code) can be expressed as follows [15]:
LBP(xc, yc) = Σ_{n=0..7} s(i_n - i_c) · 2^n,

where i_c corresponds to the gray value of the center pixel (xc, yc), i_n to the gray values of the 8 surrounding pixels, and the function s is defined as [15]:

s(x) = 1, if x ≥ 0;  s(x) = 0, if x < 0.
From the LBP of the object (smoke region) a histogram is constructed, which is treated as the reference model for tracking. Figure 4, a shows the LBP image for the current frame;
2) when the frame changes, the new position of the object must be found, given the object position on the previous frame and the reference histogram. The Cam Shift algorithm, which carries out this procedure, consists of the following steps:
a) the search area in which the smoke is expected to appear is selected; the initial position of the search window is defined by the smoke position on the previous frame;
b) using histogram back projection over the search area, the probability image P (figure 4, b) is constructed; then the following steps are carried out iteratively:
c) in search window for image P the zeroth moment and first moments (the center of mass of the image pixel distribution) under following equations calculate:
A- V
Ho =XXX/(X-v)
X V
and
Mn =XXv/(x-v):
X V
d) the found center of mass defines the position of the search window at the next iteration; the size of the window does not change. Steps 2.b and 2.c stop when the differences in the position of the search window between subsequent iterations become small;
e) the search window is slightly enlarged, and for the image P within it the second moments, which define the final position and the size of the tracked object, are calculated by the following equations:

M20 = Σ_x Σ_y x²·P(x, y),   M02 = Σ_x Σ_y y²·P(x, y).
Fig. 4. LBP (a) and the image of probability P (b) for the current frame
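The sketch below outlines this tracking step, assuming OpenCV's calcHist, calcBackProject and CamShift: a basic 3 x 3 LBP image is computed (the neighbour ordering here is a fixed raster order, an implementation choice), the LBP histogram of the detected smoke region serves as the reference model, and its back projection gives the probability image P in which Cam Shift searches for the new position of the region.

```cpp
// A sketch of LBP-based Cam Shift tracking of a smoke region.
#include <opencv2/opencv.hpp>

cv::Mat lbpImage(const cv::Mat& gray)                            // basic 3 x 3 LBP operator
{
    cv::Mat lbp = cv::Mat::zeros(gray.size(), CV_8UC1);
    for (int y = 1; y < gray.rows - 1; ++y)
        for (int x = 1; x < gray.cols - 1; ++x) {
            uchar c = gray.at<uchar>(y, x);
            uchar code = 0;
            int bit = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0) continue;
                    if (gray.at<uchar>(y + dy, x + dx) >= c)     // s(i_n - i_c)
                        code = static_cast<uchar>(code | (1 << bit));
                    ++bit;
                }
            lbp.at<uchar>(y, x) = code;
        }
    return lbp;
}

cv::Mat lbpHistogram(const cv::Mat& lbp, const cv::Rect& region) // reference model of the smoke region
{
    cv::Mat roi = lbp(region), hist;
    int histSize = 256;
    float range[] = {0.0f, 256.0f};
    const float* ranges[] = {range};
    int channels[] = {0};
    cv::calcHist(&roi, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);
    cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);
    return hist;
}

cv::RotatedRect trackSmoke(const cv::Mat& grayFrame, const cv::Mat& refHist, cv::Rect& window)
{
    cv::Mat lbp = lbpImage(grayFrame), prob;
    float range[] = {0.0f, 256.0f};
    const float* ranges[] = {range};
    int channels[] = {0};
    cv::calcBackProject(&lbp, 1, channels, refHist, prob, ranges);   // probability image P
    return cv::CamShift(prob, window,
                        cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1.0));
}
```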
If the object position changes quickly, the tracking algorithm can lose the object. However, quick changes of location are not characteristic of an ignition center: the smoke area grows slowly. Therefore the Cam Shift algorithm is well suited for tracing smoke over frame sequences.
4. Results and discussion
The developed algorithm has been tested in a real environment on a personal computer (Pentium(R) DualCore CPU T4300, 2.1 GHz, 1.96 GB RAM). Our program is implemented in Visual C++ using the open source computer vision library OpenCV. The proposed algorithm has been evaluated on data sets publicly available at http://signal.ee.bilkent.edu.tr/VisiFire/Demo/SampleClips.html and http://www.openvisor.org. The test video sequences contain smoke, moving people, moving transport and complex dynamic backgrounds, and a number of the video sequences contain no smoke. Figure 5 shows some examples of smoke detection.
Fig. 5. Smoke detection in real video sequences
For each video sequence we calculate the True Rate (TR) and the False Rate (FR) as follows:
TR = (CTR / CS) · 100 %,
FR = (CFR / CNS) · 100 %,
where CTR is the count of correctly detected frames; CS is the count of frames with smoke; CFR is the count of falsely detected frames; CNS is the count of frames without smoke.
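For completeness, a minimal illustration of this computation with hypothetical frame counters is given below.

```cpp
// A minimal illustration of the TR and FR computation defined above.
struct DetectionStats {
    int trueFrames  = 0;   // CTR: frames with smoke in which smoke is detected
    int smokeFrames = 0;   // CS:  frames with smoke
    int falseFrames = 0;   // CFR: frames without smoke in which smoke is reported
    int cleanFrames = 0;   // CNS: frames without smoke
};

double trueRate(const DetectionStats& s)  { return s.smokeFrames ? 100.0 * s.trueFrames  / s.smokeFrames : 0.0; }
double falseRate(const DetectionStats& s) { return s.cleanFrames ? 100.0 * s.falseFrames / s.cleanFrames : 0.0; }
```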
We have compared our approach with the approach developed by the Signal and Image Processing Group at Bilkent University (http://signal.ee.bilkent.edu.tr). The smoke detection results are presented in the table below, where OAL and BAP denote the detection results of our algorithm and of the Bilkent approach, respectively.
Comparison of smoke detection algorithms

Video seq. (Fig. 5)   TR, % OAL   TR, % BAP   FR, % OAL   FR, % BAP   Present rate OAL   Present rate BAP
a                     98          78          6           0           10/12              10/43
b                     60          SNF         0           SNF         20/112             SNF
c                     96          41          0           0           80/87              80/100
d                     66          48          12          8           30/117             204/117
e                     94          -           0           -           360/388            -
f                     80          -           17          -           463/469            -
g                     98          -           15          -           398/400            -
k                     77          -           2           -           500/657            -
The Present rate (frame where smoke appears / frame where smoke is found) gives the number of the frame where smoke first appears in the video and the number of the frame where smoke is first found by the algorithm. The designation SNF means that the smoke was not found by the algorithm. Our algorithm has a higher true detection rate thanks to the use of tracking, and it allows smoke to be detected at an earlier stage. If, at the moment of its appearance in the scene, the smoke moves slowly and is strongly rarefied (sequences b, d, k), it is gradually included in the background. Therefore in this case we cannot detect the smoke immediately and the detection time increases (fig. 6, a). Also, our algorithm cannot be used for wildfire detection (fig. 6, b), because the smoke area is small and a part of the smoke information is lost at the post-processing stage.
Fig. 6. Examples of missed detections
The research results show that the algorithm provides early smoke detection in complex scenes. Smoke detection is achieved in real time: the processing time per frame is about 31 ms for frames of 320 by 240 pixels. The algorithm has a low false alarm rate. Its detection rate decreases in cases when the smoke strongly dissipates and, after propagating in one direction for a long time, merges into the background. However, for the early prevention of fire, an early alarm and a low false alarm rate are more important. Therefore our algorithm can be used in video surveillance systems for early fire detection.
5. Conclusion
We have presented in this paper an algorithm for smoke detection in video sequences. Our algorithm consists of three basic steps: slowly moving blob detection, classification of the blobs obtained, and tracking. At the slowly moving blob detection stage we use preprocessing, segmentation of slowly moving areas and pixels in the current input frame based on an adaptive background subtraction algorithm, and merging of the slowly moving areas and pixels into blobs. For classification, the Weber contrast is calculated and the primary direction of smoke propagation is taken into account. At the tracking step we trace texture and color smoke features using the Cam Shift algorithm. The efficiency of our approach is illustrated and confirmed by our experimental videos.
REFERENCES
1. Piccinini, P., Calderara, S. and Cucchiara, R., 2008. Reliable smoke detection system in the domains of image energy and color. In: 15th International Conference on Image Processing, 1376 - 1379.
2. Toreyin, B. et al., 2005. Wavelet based real-time smoke detection in video. Signal Processing: Image communication, EURASIP, Elsevier. 20, 255 - 256.
3. DongKeun Kim and Yuan-Fang Wang, 2009. Smoke Detection in Video. World Congress on Computer Science and Information Engineering, 759 - 763.
4. Toreyin, B., Dedeoglu, Y. and Cetin A. E., 2006. Contour based smoke detection in video using wavelets. In: European Signal Processing Conference, 123 - 128.
5. Gomez-Rodriguez, F. et al., 2003. Smoke Monitoring and Measurement Using Image Processing: Application to Forest Fires. In: Automatic Target Recognition XIII, Proceedings of SPIE 5094, 404 - 411.
6. Krstinic, D., Jakovcevic, T. and Stipanicev, D., 2009. Histogram-Based Smoke Segmentation in Forest Fire Detection System. Information Technology and Control 38(3), 237 - 244.
7. Bradski, G. and Kaehler, A., 2008. Learning OpenCV. O'Reilly Media.
8. Stolnitz, E., DeRose, T., Salesin, D., 1996. Wavelets for Computer Graphics: Theory and Applications. Morgan Kaufmann.
9. Collins, R.T., 1999. A System for Video Surveillance and Monitoring. In: Proc. of American Nuclear Society 8th Int. Topical Meeting on Robotics and Remote Systems, 68 - 73.
10. Otsu, N., 1979. A threshold selection method from gray-level histograms. IEEE Trans. Sys., Man., Cyber. 9, 62 - 66.
11. Rubaiyat Yasmin, 2009. Detection of Smoke Propagation Direction Using Color Video Sequences. International Journal of Soft Computing 4 (1), 45 - 48.
12. Toreyin, B.U. et al., 2007. Dynamic texture detection, segmentation and analysis. In: ACM International Conference on Image and Video Retrieval.
13. Chetverikov, D. and Peteri, R., 2005. A Brief Survey of Dynamic Texture Description and Recognition. In: Proc. Int'l Conf. Computer Recognition Systems, 17 - 26.
14. Zhao, G. and Pietikainen, M., 2006. Local Binary Pattern Descriptors for Dynamic Texture Recognition. In: Proc. Int'l Conf. Pattern Recognition, Vol. 2, 211 - 214.
15. Ojala, T., Pietikainen, M. and Maenpaa, T., 2002. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Patt. Anal. Mach. Intell. 24(7), 971 - 987.
16. Yilmaz, A. and Javed, O., 2006. Object tracking: A survey. ACM Comput. Surv. 38(4).
17. Comaniciu, D., Meer, P., 2002. Mean shift: a robust approach toward feature space analysis. IEEE Trans. Patt. Anal. Mach. Intell 24(5), 603 - 619.
18. Bradski, G., 1998. Computer vision face tracking for use in a perceptual user interface. Intel Technology Journal 2(2), 12 - 21.
Received 21.02.2012