UDC 004.932.2
Siberian Journal of Science and Technology. 2017, Vol. 18, No. 4, P. 796-803
VIDEO BASED FLAME DETECTION ALGORITHM
A. V. Pyataeva*, O. E. Bandeev
Siberian Federal University, Institute of Space and Information Technologies, 26, Akademika Kirenskogo Str., Krasnoyarsk, 660074, Russian Federation. E-mail: [email protected]
Video based flame detection from a surveillance camera offers early warning to ensure prompt reaction to devastating fire hazards. Many existing fire detection methods based on computer vision technology achieve high detection rates, but often at the cost of unacceptably high false-alarm rates. This paper presents an automatic flame detection method using computer vision and pattern recognition techniques. The method uses fire features such as motion parameters, chromatic components, and geometric (flickering) features.
For the experimental research, the video databases of Bilkent University and the DynTex database were used. The developed method of flame detection on video provides 89.5-98.2 % accuracy on flame sequences. The test video sequences contained 6,853 frames in total, with a total duration of about 5 minutes. Experimental results show that the proposed method is feasible and effective for video based flame detection.
Keywords: flame detection, flame features, fire, video sequence
Introduction. In open spaces, visual detection of fire has an advantage over traditional methods such as sampling of air particles or ambient temperature measurements. Methods based on ultraviolet or infrared multi-spectral principles for flame detection generally entail substantial material costs.
Flame detection on video increases the likelihood of early fire detection and reduces the response time to ignition, since video analysis detects fire in its initial phase. In addition, video based flame detection gives the precise location of a fire hazard. Adding a flame detection module to video surveillance systems will expand their scope and improve the fire safety of monitored sites. This helps to prevent possible losses and significantly reduce the damage caused by ignition. With the development of video surveillance systems and image analysis technologies, it has become possible to use such data to detect flames as a reliable sign of fire.
Signs of flame on video. Flame has many different characteristics, such as colour, motion, shape and extent, behaviour, etc. Many processes in a flame are very rapid, so they often cannot be seen with the naked eye.
The colour of the flame depends on many factors. First, the chemical composition of the burning object may change the shades of the flame during combustion. Secondly, the saturation of the air with various gases, for example oxygen, has a great impact. The colour of the flame is also influenced by its temperature [1].
The second, equally important characteristic for flame detection in video sequences is its dynamics, or movement. It is well known that burning is a highly dynamic process. Flames regularly change their shape and direction, so such processes can be easily detected. In a video image, flame and smoke appear as dynamic 2D textures [2]. Such dynamic textures may have stochastic and regular components [3].
To classify the selected candidate areas into "flames" and "no flames", approaches such as fuzzy logic, production systems, support vector machines (SVM), artificial neural networks, tree ensembles, and other solutions are used.
Thus, the signs of fire in video include [4-6]: movement, colour, changing boundaries, flickering at the edges, and the glow of flames. Among the methods of detecting flames in video sequences, one can distinguish approaches based on stochastic models and other mathematical methods, as well as methods based on motion detection and chromaticity.
Video-based flame detection methods. To date, many different methods of flame detection in video sequences have been proposed.
Most of them use features based on motion cues and on finding pixels matching the colour of flame. Since flame zones flicker [7], i. e. their boundaries change randomly from frame to frame at a rate of about 10 Hz [8], frequency-domain methods of image analysis are used to assess the energy components of the image at the boundary of a candidate zone.
Thus, in [5] a two-phase filtering system consisting of a high-frequency and a low-frequency filter is used. In [9], a stochastic model is employed to simulate the spatial and temporal characteristics of zones: hidden Markov models, trained on a test set of images containing smoke and flames, are applied. In [10], flames are detected in video sequences by processing a sequence of flame images, starting with a low-level pixel-based representation and ending with a high-level semantic representation of the video. Each pixel of an image that matches certain colour rules and motion characteristics is marked as a "pixel of the flame colour". A candidate region of flame-like pixels is then roughly formed, and the image is divided into separate units. The units are formed using specially trained dictionaries that identify and recognize the marked pixels, allowing more accurate segmentation of candidate regions with projected flame and exclusion of "no flame" zones, as shown in fig. 1.
Another method of flame detection in video sequences is based on processing the foreground of images and on optical flow techniques [11]. Foreground images, extracted using a frame-differencing method, are accumulated.
Two criteria are used to distinguish the flame candidate regions from the smoke candidate regions. Flame zones are recognized via a statistical model built on the accumulated foreground images, while smoke zones are detected using optical flow and a motion model. Flame burning is seen as a turbulent movement with a source. If there is no wind or air flow, the zones of continuous flame and the intermittent flame zones repeat at regular intervals in a given area. Thus, the weight of flame-region pixels in the accumulated foreground image grows.
Flame detection in video sequences is also possible using logistic regression and temporal smoothing [12]. Since the colour of flame is usually heavily saturated in the red band, the red component of a fire pixel typically dominates the others in the RGB colour space. Because RGB colour values are sensitive to illumination changes, the RGB colour is transformed into a colour space that separates brightness from chromaticity. The YCbCr colour space describes colour by a luminance (Y) and two chrominance (Cb, Cr) components. The background pixels show different shapes as well as different locations of the colour coefficient in the distribution (fig. 2).
Thus, because of similarities in colour, fire flames and fire-like objects show similar distributions, but with different average values along the colour coefficient axis.
Fig. 1. Image processing when flames are detected: a - initial video image; b - detecting the pixels of the flame colour; c - pixel motion processing
Video-based flame detection algorithm. In this work, the task of flame detection in video sequences is solved as follows.
1. The video sequence is split into a series of video images at a rate of 24 frames per second.
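As an illustration of this step, a minimal Python sketch of frame extraction with OpenCV is given below; the paper does not specify its implementation of this step, and the file name is a placeholder.

```python
# Minimal sketch of step 1: splitting a video into individual frames.
# "fire.avi" is a placeholder file name, not taken from the paper.
import cv2

cap = cv2.VideoCapture("fire.avi")
fps = cap.get(cv2.CAP_PROP_FPS)   # nominal frame rate, e.g. 24 fps
frames = []
while True:
    ok, frame = cap.read()        # read the next frame
    if not ok:                    # end of the stream
        break
    frames.append(frame)
cap.release()
```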
2. The resulting frames are searched for motion to separate the background zones from the flame candidate zones. For this, the background subtractor of the computer vision library OpenCV [13] is used. The background model is based on a Gaussian mixture algorithm. A Gaussian mixture model is a weighted sum of M components and can be written as
p(x | λ) = Σ_{i=1}^{M} p_i b_i(x), (1)
where x is a D-dimensional vector of random values; b_i(x) are the densities of the distribution of the model components; p_i, i = 1, ..., M, are the weights of the model components. The parameter λ is defined as
λ = {p_i, μ_i, Σ_i}, i = 1, ..., M. (2)
Each component is a D-dimensional Gaussian distribution function. After the motion is found, the extreme detected pixels along the x and y axes are located, and their coordinates define a rectangle that delimits the flame candidate zone.
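A possible implementation of this step is sketched below using OpenCV's MOG2 background subtractor, which implements a Gaussian mixture model of the form (1)-(2); the parameter values are illustrative assumptions, not taken from the paper.

```python
# Sketch of step 2: motion detection with a Gaussian-mixture background
# model and extraction of the bounding rectangle of the moving pixels.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                varThreshold=16,
                                                detectShadows=False)

def motion_bounding_box(frame):
    """Rectangle spanning the extreme foreground pixels along x and y."""
    mask = subtractor.apply(frame)      # per-pixel foreground mask
    ys, xs = np.nonzero(mask)           # coordinates of moving pixels
    if xs.size == 0:
        return None                     # no motion in this frame
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```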
3. In the zones where movement is identified, a flame colour mask is applied. A combination of the RGB and HSV colour spaces is used to highlight zones of flame colour:
R > G > B, (3)
R > RT, (4)
S > (255 - R) × ST / RT. (5)
In expressions (4) and (5), RT is the threshold value of R; S is the pixel saturation value, and ST is the saturation value when R equals the threshold RT for the same pixel. Rules (3) and (4) require that the value of the R channel dominate the other colour channels and exceed the threshold.
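A sketch of the colour mask of step 3 is given below; the numeric values of the thresholds RT and ST are assumptions, since the paper does not state them.

```python
# Sketch of step 3: flame colour mask combining rules (3)-(5).
# RT and ST are assumed illustrative thresholds, not from the paper.
import cv2
import numpy as np

RT, ST = 115, 135

def flame_colour_mask(frame_bgr):
    f = frame_bgr.astype(np.int32)
    b, g, r = f[..., 0], f[..., 1], f[..., 2]          # OpenCV stores BGR
    s = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[..., 1].astype(np.int32)
    rule3 = (r > g) & (g > b)                          # rule (3): R > G > B
    rule4 = r > RT                                     # rule (4): R > RT
    rule5 = s > (255 - r) * ST / RT                    # rule (5)
    return ((rule3 & rule4 & rule5) * 255).astype(np.uint8)
```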
4. Analysis of the dynamic properties of the flame (fig. 3) is performed by checking the size of the rectangular unit: the change of the unit size between the previous and the current frame is taken into account:
sd = s1/s2, (6)
where s1 is the size of the candidate unit in the previous frame and s2 is the unit size in the current frame.
5. The flame geometry, which results from the formation of ions during combustion, is taken into account in the following manner:
circularity = s × (4π × s / P²), (7)
squareness = s / (x × y), (8)
aspectRatio = s × (min(x, y) / max(x, y)), (9)
roughness = s × (P1 / P), (10)
where s is the area of the candidate zone; P is the perimeter of the candidate zone; x and y are the width and height of the candidate zone; P1 is the perimeter of the image.
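Under the assumption that a candidate zone is represented by an OpenCV contour, features (6)-(10) can be computed as in the following sketch; P1 is taken literally as the image perimeter, following the paper's wording.

```python
# Sketch of steps 4-5: dynamic and geometric features (6)-(10) of a
# candidate zone, assuming a non-degenerate OpenCV contour.
import cv2
import numpy as np

def zone_features(contour, s1, img_w, img_h):
    s = cv2.contourArea(contour)              # area of the candidate zone
    p = cv2.arcLength(contour, closed=True)   # perimeter of the zone
    x, y, w, h = cv2.boundingRect(contour)    # w, h: width and height
    p1 = 2 * (img_w + img_h)                  # image perimeter (paper's P1)
    sd = s1 / s                               # eq. (6), s1: previous size
    circularity = s * (4 * np.pi * s / p ** 2)        # eq. (7)
    squareness = s / (w * h)                          # eq. (8)
    aspect_ratio = s * (min(w, h) / max(w, h))        # eq. (9)
    roughness = s * (p1 / p)                          # eq. (10)
    return sd, circularity, squareness, aspect_ratio, roughness
```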
6. The frame rate of the source video is also checked against the change frequency of the selected zones:
fr = FPS / MAXS × C, (11)
where MAXS is the maximum unit size among all frames of the video sequence; C is the number of changes of the maximum unit size; FPS is the frame rate of the sequence. Comparing the frame rate with the change frequency of the candidate zone confirms the presence of flame motion in the video sequence, since each change corresponds to a flame shift. The algorithm flow chart is presented in fig. 4.
After completing steps 1-6 of the algorithm, the candidate region is assigned to one of two classes: "flame" or "no flame". For this, a support vector machine (SVM) classifier is used.
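A sketch of the flicker-rate feature (11) and the final classification is shown below; scikit-learn's SVC stands in for the SVM used in the paper, and the training data and feature values are placeholders.

```python
# Sketch of step 6 and the final classification. scikit-learn's SVC
# stands in for the paper's SVM; training data below are placeholders.
import numpy as np
from sklearn.svm import SVC

def flicker_rate(fps, max_size, n_changes):
    return fps / max_size * n_changes        # eq. (11): fr = FPS/MAXS * C

rng = np.random.default_rng(0)
X_train = rng.random((100, 6))               # placeholder feature vectors
y_train = rng.integers(0, 2, size=100)       # 1 = "flame", 0 = "no flame"
clf = SVC(kernel="rbf").fit(X_train, y_train)

# Feature vector of one candidate zone: [sd, circularity, squareness,
# aspectRatio, roughness, fr]; the values here are arbitrary examples.
features = [[1.02, 0.61, 0.45, 0.70, 1.80, flicker_rate(24, 900, 12)]]
label = clf.predict(features)[0]             # 1 -> "flame", 0 -> "no flame"
```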
Experimental studies. A series of Bilkent [14] and DynTex [15] video sequences was used for the experiments. The test video sequences and their properties are shown in tab. 1. The total sample of video images includes 4,031 examples of flames and 2,822 samples with no flames, and the total length of the videos is about 5 minutes. The training set comprises 80 % and the test set 20 % of the total sample.
Fig. 2. Distribution of the chromaticity coefficient (Cb/Cr)
Fig. 3. Change of the unit size: a - previous frame; b - current frame
Fig. 4. Flow chart of the flame detection algorithm via video data (blocks: start; processing of the input video sequence; splitting the video into frames; motion detection in the frames; applying the flame colour mask)
Results of the experimental studies are shown in tab. 2 and 3. The performance of the flame detection algorithm was evaluated using the TR (true recognition) and FAR (false alarm rate) indicators. TR is calculated as the ratio of the number of frames in which the flame is correctly detected to the total number of frames; FAR is the ratio of the number of frames with an erroneous result to the total number of frames in the video sequence.
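For clarity, the two indicators as they appear in tab. 2 and 3 can be computed as follows.

```python
# Sketch of the evaluation metrics used in tab. 2 and 3, expressed as
# percentages of the total number of frames in a video sequence.
def true_recognition(detected, total):
    return 100.0 * detected / total          # TR, %

def false_alarm_rate(false_frames, total):
    return 100.0 * false_frames / total      # FAR, %

# Example, Bilkent\barbeq.avi from tab. 2:
print(round(true_recognition(507, 516), 2))  # 98.26
```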
As an example, fig. 5 and 6 present frames of flame detection in the Bilkent\barbeq.avi and Bilkent\ForestFire1.avi video sequences, showing the initial video image, the highlighted mask (colour and motion), the flame geometry and flickering, and the result of the algorithm operation.
Conclusion. This work proposes an algorithm to detect fire zones in video sequences. The algorithm is based on motion analysis, dynamic properties, and flame colour. In experimental studies on video sequences containing flames, an average detection accuracy of 94.47 % was obtained, which is a good result: the flame was missed in only 247 frames out of 4,031. On the video sequences without flame, false alarms occurred in 29 of the 2,822 frames, averaging 1.37 per cent. Thus, the experimental results confirm the efficiency of the proposed algorithm for flame detection in video sequences.
Table 1
Test video sequences

Video sequences with flame
Video sequence | Sample frame | Resolution, pixels | Number of frames
Bilkent\fBackYardFile.avi | 334 | 320×240 | 1,251
Bilkent\barbeq.avi | 186 | 320×240 | 516
Bilkent\ForestFire1.avi | 54 | 400×256 | 247
Bilkent\fire1.avi | 146 | 320×240 | 542
Bilkent\forest4.avi | 113 | 400×256 | 251
Bilkent\forest5.avi | 45 | 400×256 | 246
Bilkent\forest2.avi | 154 | 400×256 | 273
Bilkent\controlled1.avi | 67 | 400×256 | 275
Dyntex\6ammj00.avi | 158 | 720×576 | 227
Dyntex\64cac10.avi | 104 | 720×576 | 203

Video sequences without flame
Video sequence | Sample frame | Resolution, pixels | Number of frames
Bilkent\sEmptyR1.avi | 134 | 400×256 | 458
Bilkent\sEmptyR2.avi | 5 | 400×256 | 437
Bilkent\sParkingLot.avi | 563 | 400×256 | 1,136
Dyntex\648ab10.avi | 1 | 384×288 | 384
Dyntex\6489610.avi | 47 | 720×576 | 201
Dyntex\649h320.avi | 120 | 720×576 | 206
Table 2
Flame detection results (video sequences with flame)

Video sequence | Total number of frames | Frames with true flame detection | TR, % | FAR, %
Bilkent\fBackYardFile.avi | 1,251 | 1,127 | 90.09 | 9.91
Bilkent\barbeq.avi | 516 | 507 | 98.26 | 1.74
Bilkent\forest4.avi | 251 | 235 | 93.63 | 6.37
Bilkent\forest5.avi | 246 | 234 | 95.12 | 4.88
Bilkent\ForestFire1.avi | 247 | 240 | 97.17 | 2.83
Bilkent\fire1.avi | 542 | 529 | 97.60 | 2.40
Bilkent\forest2.avi | 273 | 264 | 96.70 | 3.30
Bilkent\controlled1.avi | 275 | 246 | 89.45 | 10.55
Dyntex\6ammj00.avi | 227 | 217 | 95.59 | 4.41
Dyntex\64cac10.avi | 203 | 185 | 91.13 | 8.87
Average values | - | - | 94.47 | 5.53
Table 3
Flame detection results (video sequences without flame)

Video sequence | Total number of frames | Frames with false flame detection | FAR, %
Bilkent\sEmptyR1.avi | 458 | 3 | 0.65
Bilkent\sEmptyR2.avi | 437 | 12 | 2.74
Bilkent\sParkingLot.avi | 1,136 | 5 | 0.44
Dyntex\648ab10.avi | 384 | 6 | 1.56
Dyntex\6489610.avi | 201 | 1 | 0.49
Dyntex\649h320.avi | 206 | 2 | 0.97
Average value | - | - | 1.37
Fig. 5. Fire detection algorithm steps for the Bilkent\barbeq.avi video sequence: a - initial frame; b - flame mask; c - geometry and flickering; d - algorithm operation result
Fig. 6. Fire detection algorithm steps for the Bilkent\ForestFire1.avi video sequence: a - initial frame; b - flame mask; c - geometry and flickering; d - algorithm operation result
References
1. Spichkin Y. V., Kalach A. V., Sorokina Y. N. [To a question of features of emergence and development of disperse materials burning]. Vestnik voronezhskogo instituta GPS MChS Rossii. 2014, Vol. 3 (12), P. 7-12 (In Russ.).
2. Favorskaya M., Pyataeva A., Popov A. Spatiotemporal smoke clustering in outdoor scenes based on boosted random forests. Procedia Computer Science. 2016, Vol. 96, P. 762-771.
3. Goncalves W. N., Machado B. B., Bruno O. M. A complex network approach for dynamic texture recognition. Neurocomputing. 2015, Vol. 153, P. 211-220.
4. Bogush R. P., Tychko D. A. Algorithm for complex smoke and flame detection based on video surveillance systems data analysis. Tekhnicheskoe zrenie v sistemakh upravleniya. 2015, P. 65-71.
5. Brovko N. V., Bogush R. P. The analysis of vision-based methods for early fire detection. Vestnik Polotskogo gosudarstvennogo universiteta. 2011, No. 12, P. 42-50 (In Russ.).
6. Han D., Lee B. Flame and Smoke Detection Method for Early Real-Time Detection of a Tunnel Fire. Fire Safety Journal. 2009, Vol. 44 (7), P. 951-961.
7. Toreyin B. U., Dedeoglu Y., Gueduekbay U. Computer vision based method for real-time fire and flame
detection. Pattern Recognition Letters. 2006, Vol. 27, No. 1, P. 49-58.
8. Toreyin B. U., Dedeoglu Y., Cetin A. E. Wavelet based real-time smoke detection in video. Signal Processing: Image Communication. 2005, Vol. 20, P. 255-260.
9. Toreyin B. U., Dedeoglu Y., Cetin A. E. Contour based smoke detection in video using wavelets. 14th European Signal Processing Conference (EUSIPCO -2006). 2006, P. 1-5.
10. Yaqin Z., Guizhong T., Mingming X. Hierarchical detection of wildfire flame video from pixel level to semantic level. Expert Systems with Applications. 2015, Vol. 42, Iss. 8, P. 4097-4104.
11. Chunyu Y., Zhibin M., Xi Zh. A Real-time Video Fire Flame and Smoke Detection Algorithm. Procedia Engineering. 2013, Vol. 62, P. 891-898.
12. Seong G. K., Donglin J., Shengzhe L., Hakil K. Fast flame detection in surveillance video using logistic regression and temporal smoothing. Fire Safety Journal. 2016, Vol. 79, P. 37-43.
13. Open Source Computer Vision Library. Available at: http://opencv.org/ (accessed 09.10.2017).
14. Bilkent database. Available at: http://signal.ee.bilkent. edu.tr/VisiFire/Demo/FireClips/ (accessed 09.10.2017).
15. Renaud P., Fazekas S., Huiskes M. J. DynTex: A comprehensive database of dynamic textures. Pattern Recognition Letters. 2010, Vol. 31, No. 12, P. 1627-1632.
© Pyataeva A. V., Bandeev O. E., 2017