Traffic extreme situations detection in video sequences based on integral optical flow
H. Chen 1, S. Ye 1, A. Nedzvedz 3, O. Nedzvedz 2, H. Lv 1, S. Ablameyko 3,4
1 Zhejiang Shuren University, Hangzhou, China;
2 Belarusian State Medical University, Minsk, Belarus;
3 Belarusian State University, Minsk, Belarus;
4 United Institute of Informatics Problems of National Academy of Sciences, Minsk, Belarus
Abstract
Road traffic analysis is an important task in many applications, and it can be used in video surveillance systems to prevent many undesirable events. In this paper, we propose a new method based on integral optical flow to analyze cars movement in video and detect extreme traffic situations in real-world videos. Firstly, integral optical flow is calculated for video sequences based on optical flow, so that random background motion is eliminated; secondly, pixel-level motion maps that describe cars movement from different perspectives are created based on integral optical flow; thirdly, region-level indicators are defined and calculated; finally, threshold segmentation is used to identify different types of cars movement. We also define and calculate several parameters of moving car flow, including direction, speed, density, and intensity, without detecting and counting cars. Experimental results show that our method can effectively identify cars directional movement, cars divergence, and cars accumulation.
Keywords: integral optical flow, image processing, road traffic control, video surveillance.
Citation: Chen H, Ye S, Nedzvedz A, Nedzvedz O, Lv H, Ablameyko S. Traffic extreme situations detection in video sequences based on integral optical flow. Computer Optics 2019; 43(4): 647-652. DOI: 10.18287/2412-6179-2019-43-4-647-652.
Acknowledgments: The work was funded by Public Welfare Technology Applied Research Program of Zhejiang Province (LGF19F020016, LGJ18F020001 and LGJ19F020002), Zhejiang Provincial Natural Science Foundation of China (LZ15F020001), and the National High-end Foreign Experts Program (GDW20183300463).
Introduction
Traffic flow monitoring and analysis based on computer vision techniques, especially in real-time mode, place valuable and complicated demands on computer algorithms and technological solutions. Most realistic applications are in vehicle tracking, and the critical issue is initiating a track automatically. Traffic analysis then leads to reports of speed violations, traffic congestion, accidents, or actions against the law by road users. A variety of approaches to these tasks were suggested by many scientists and researchers (Al-Sakran [1], Ao et al. [2], Cao [3], Rodríguez and García [4]). A good survey of video processing techniques for traffic applications was published by Kastrinaki et al. [5].
In monocular vision-based monitoring systems, the camera is typically assumed to be stationary and mounted at a fixed position to capture passing cars. Traffic analysis on urban traffic domain appears to be more challenging because of high-density traffic flow and low camera angle that lead to a high degree of occlusion (Rodríguez and García [4]).
Most existing papers on road traffic control are oriented toward monitoring single vehicles in video images (Huang et al. [6]; Zhang et al. [7]). Our goal is to analyze the car flow situation without detecting or counting cars. It would be helpful to automatically analyze car flows and predict road traffic jams, cars accumulation, accidents, and many other events, especially when a traffic monitoring center is understaffed.
A road monitoring system based upon image analysis must detect and react to a changing scene. This adaptability can be brought about by a generalized approach to the problem which incorporates little or no a priori knowledge of the analyzed scene. Such a system should be able to detect 'changing circumstances', which may include non-standard situations like traffic jam, rapid cars accumulation and cars divergence in road intersections (Joshi and Mishra [8], Nagaraj et al. [9], Shafie et al. [10], Khanke and Kulkarni [11]). By detecting these situations, the system can immediately inform humans about problems happening at a road.
Different approaches are used to analyze video sequences in general. Optical flow has already proved to be a powerful tool for video sequence analysis due to its ability to treat a group of objects as a single entity and thus avoid individual tracking. Many authors use basic optical flow for motion analysis of dynamic objects (Kamath et al. [12], Cheng et al. [13]), especially for crowd behavior analysis. Zhang et al. [14] proposed a method using optical flow of corner points and a CNN based on the LeNet model to recognize crowd events in video. Ravanbakhsh et al. [15] employed a Fully Convolutional Network and optical flow to obtain the complementary information of both appearance and motion patterns. Andrade et al. [16] presented a method combining optical flow with unsupervised feature extraction based on spectral clustering and Multiple Observation Hidden Markov Model training for abnormal event detection in crowds, and correctly distinguished a blocked-exit situation from normal crowd flow.
Wang et al. [17] presented a feature descriptor called the hybrid optical flow histogram and performed training on normal behavior samples through sparse representation; they detected a change of speed in different directions of movement as abnormal behavior for every frame. Mehran et al. [18] proposed an abnormal crowd behavior detection method based on the social force model. This method performed particle advection along with space-time averaged optical flow to emulate crowd dynamics; it computed interaction forces between particles based on their velocities. Chen et al. [19, 20] proposed to use integral optical flow for dynamic object monitoring and, in particular, for crowd behavior identification. As we showed there, integral optical flow allows one to identify complex dynamic object behaviors based on the interactions between pixels without the need to track objects individually.
In this paper, we present a further extension of our method based on integral optical flow to monitor car traffic, such as direct car flow and extreme situations like cars accumulation and cars divergence, in videos. Together with the detection of extreme situations, the method allows computing traffic parameters like direction, speed, density, and intensity without detecting and counting cars.
Integral optical flow is used to create motion maps, and these maps are used to analyze and describe motions at the pixel level and region level. Our method doesn't require training and can be effectively used for situation monitoring and analysis. We applied our method to real-world videos and obtained good results.
1. Technology of car traffic monitoring by using motion maps
The technology of car traffic monitoring by using motion maps is as follows.
[Fig. 1 flowchart: sequence of frames and background image → frame processing (geometric image binding and background correction) → optical flow construction → accumulation of dynamic data → integral optical flow construction → motion maps construction → determination of local maxima and segmentation → critical areas of movement]
Fig. 1. General scheme of car traffic monitoring process
At the first stage, basic optical flow is calculated, and the instantaneous movement of moving pixels, which represent moving cars in the video, is determined. Results of the optical flow calculation along the video sequence are accumulated to calculate the integral optical flow. Based on the integral optical flow, we define motion maps to describe pixel motion at each position, i.e., a statistical analysis of the quantity and motion direction of pixels moving toward or away from each position.
At the second stage, we can identify cars movement types in any region of interest based on threshold segmentation of local maxima in the motion maps. The general monitoring scheme is shown in Fig. 1.
2. Integral optical flow and image motion maps
Integral optical flow is based on an intuitive idea: accumulating optical flows over several consecutive frames. During the accumulation, displacement vectors of the background remain small, while those of the foreground keep growing.
For convenience of description, we use I_t to denote the t-th frame of video I and I_t(p) to denote the pixel with coordinate p = (x, y) throughout the remainder of this paper.
Let OF_t denote the basic optical flow of I_t. It is a vector field in which each vector OF_t(p) represents the displacement vector of pixel I_t(p). Assuming OF_t(p) = d, we can easily determine the coordinate in I_{t+1} to which pixel I_t(p) moves: it is p + d.
Assuming optical flows for several consecutive frames have been computed, we can obtain the integral optical flow for the first of those frames. Let IOF_t^itv denote the integral optical flow of I_t, where itv is the frame interval parameter used to compute it. IOF_t^itv is also a vector field, which records the accumulated displacement information over a period of itv frames for all pixels in I_t.
For any pixel I_t(p), its integral optical flow IOF_t^itv(p) is determined as follows:

IOF_t^itv(p) = Σ_{i=0}^{itv-1} OF_{t+i}(p_{t+i}),   (1)

where p_{t+i} is the coordinate in I_{t+i} of pixel I_t(p), with p_t = p. In other words, if I_t(p) stays in the video scene, I_t(p_t), I_{t+1}(p_{t+1}), ..., I_{t+itv-1}(p_{t+itv-1}) are the same pixel in different frames, i.e., I_t(p).
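As a rough sketch of how Eq. (1) can be implemented, assuming per-frame dense flows are available as numpy arrays (the function name and the nearest-neighbour sampling are illustrative choices, not part of the paper's method):

```python
import numpy as np

def integral_optical_flow(flows, itv):
    """Accumulate per-frame optical flows into an integral flow (Eq. 1).

    flows : list of (H, W, 2) arrays; flows[i][y, x] is the displacement
            of the pixel at (x, y) between frames t+i and t+i+1.
    itv   : number of consecutive flows to accumulate.
    Returns an (H, W, 2) array: the total displacement of each pixel of
    frame t, tracked forward along its motion path.
    """
    h, w, _ = flows[0].shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    px, py = xs.astype(float), ys.astype(float)  # tracked positions p_{t+i}
    iof = np.zeros((h, w, 2))
    for i in range(itv):
        # Sample the i-th flow at the tracked positions
        # (nearest-neighbour rounding for brevity).
        cx = np.clip(np.rint(px).astype(int), 0, w - 1)
        cy = np.clip(np.rint(py).astype(int), 0, h - 1)
        d = flows[i][cy, cx]   # OF_{t+i}(p_{t+i})
        iof += d               # running sum of Eq. (1)
        px += d[..., 0]
        py += d[..., 1]
    return iof
```

With a constant flow of one pixel per frame to the right, three accumulated frames yield a displacement of three pixels per pixel, while near-zero background flows remain near zero, which is the separation effect described above.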
Integral optical flow provides the opportunity to analyse only foreground objects that make actual movements, at least for some time period. In the proposed method, the geometric structure formed by pixel motion is considered. For any position or region in the scene, the pixel motion paths related to it (i.e., starting from, ending at, or passing through this position or region) are analysed; based on them, motion maps are created, which allow determining whether a certain event is happening at the position or in the region.
A motion map is an image in which a feature Q at each position shows information about the movement of pixels whose motion paths relate to this position or to a region centered at this position. We defined several motion maps [20] that describe comprehensive information about pixel motion, including direction, intensity, symmetry/directionality, etc. This information reflects the behaviour of objects, in this case moving cars on the road, so certain events can be detected. The following motion maps are used for traffic monitoring:
• IQ (in-pixel quantity) map - A map with a scalar value at each position indicating the number of pixels moving toward the corresponding position.
• OQ (out-pixel quantity) map - A map with a scalar value at each position indicating the number of pixels moving away from the corresponding position.
• ICM (in-pixel comprehensive motion) map - A map with a vector at each position indicating the comprehensive motion of pixels moving toward the corresponding position.
• OCM (out-pixel comprehensive motion) map - A map with a vector at each position indicating the comprehensive motion of pixels moving away from the corresponding position.
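A minimal pixel-level sketch of the IQ and OQ maps, assuming an integral-flow array as in Eq. (1) and treating each position as a single pixel rather than a neighbourhood (a simplifying assumption of this sketch):

```python
import numpy as np

def in_out_quantity_maps(iof, min_disp=0.5):
    """Build IQ and OQ maps from an integral optical flow.

    iof      : (H, W, 2) integral flow for frame t.
    min_disp : displacements below this magnitude count as static.
    IQ[y, x] : number of moving pixels whose path ends at (x, y).
    OQ[y, x] : number of moving pixels whose path starts at (x, y).
    """
    h, w, _ = iof.shape
    iq = np.zeros((h, w), dtype=int)
    oq = np.zeros((h, w), dtype=int)
    moving = np.linalg.norm(iof, axis=2) >= min_disp
    for y, x in zip(*np.nonzero(moving)):
        tx = int(round(x + iof[y, x, 0]))   # destination of this pixel
        ty = int(round(y + iof[y, x, 1]))
        oq[y, x] += 1                       # it moves away from (x, y)
        if 0 <= tx < w and 0 <= ty < h:
            iq[ty, tx] += 1                 # and arrives at (tx, ty)
    return iq, oq
```

The ICM/OCM maps would follow the same pattern, accumulating the displacement vectors themselves instead of counts.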
Based on these motion maps, several indicators can be defined to describe motion in regions; they are useful for identifying certain types of cars movement.
• RMI (regional motion intensity) - A scalar value indicating the average of the displacement vector magnitudes of the integral optical flow for pixels in a certain region.
• RIRQ (regional in-pixel relative quantity) - A scalar value indicating the average of the values on the IQ map at positions in a certain region.
• RORQ (regional out-pixel relative quantity) - A scalar value indicating the average of the values on the OQ map at positions in a certain region.
• RICM (regional in-pixel comprehensive motion) - A vector indicating the average of the values on the ICM map at positions in a certain region.
• ROCM (regional out-pixel comprehensive motion) - A vector indicating the average of the values on the OCM map at positions in a certain region.
• RIOI (regional in/out indicator) - A scalar value indicating whether more pixels move toward a certain region than move away from it. RIOI = RIRQ/RORQ.
• RIS (regional in-pixel symmetry) - A scalar value indicating how symmetrically pixels move toward a certain region. RIS = RIRQ/||RICM||.
• ROS (regional out-pixel symmetry) - A scalar value indicating how symmetrically pixels move away from a certain region. ROS = RORQ/||ROCM||.
From the above definitions, we have RIS ≥ 1 and ROS ≥ 1, with equality when the corresponding pixels all move in the same direction. The bigger RIS or ROS is, the more symmetrically the corresponding pixels move.
If the region of interest has a symmetric shape, for example when identifying cars movement in the whole scene using a sliding window (usually a square or rectangular window), the values of these indicators can be assigned to the region center, and thus corresponding region-level motion maps can be created.
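The scalar indicators RMI, RIRQ, RORQ, and RIOI might be computed over a rectangular window as sketched below, under the same array assumptions as above (RIS and ROS are omitted because they additionally require the vector-valued ICM/OCM maps):

```python
import numpy as np

def region_indicators(iof, iq, oq, y0, y1, x0, x1, eps=1e-9):
    """Scalar indicators for the window [y0:y1, x0:x1].

    RMI  : mean integral-flow magnitude in the region.
    RIRQ : mean IQ value in the region.
    RORQ : mean OQ value in the region.
    RIOI : RIRQ / RORQ (> 1 when more pixels enter than leave).
    """
    rmi = np.linalg.norm(iof[y0:y1, x0:x1], axis=2).mean()
    rirq = iq[y0:y1, x0:x1].mean()
    rorq = oq[y0:y1, x0:x1].mean()
    rioi = rirq / (rorq + eps)   # eps guards against a static region
    return rmi, rirq, rorq, rioi
```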
3. Definition and identification of cars movement types
Cars in streets can move in one direction; this is called directional car flow. However, in road intersections, especially unregulated ones where there are no traffic lights, cars accumulation and divergence can happen. These three types of movement are the main components that constitute usual traffic events, such as car flow stopping, traffic congestion, traffic accidents, etc. So, we define the following types of cars movement:
- cars directional movement;
- cars accumulation;
- cars divergence.
Definition 1 (Cars directional movement). Cars directional movement means that all cars move in the same direction. Three rules are proposed to identify cars directional movement: 1) many cars move from one region to another; 2) they move above a certain speed; 3) they move in one direction.
Cars directional movement is identified in region r at time t if RMI_t(r), RORQ_t(r), and ROS_t(r) meet the thresholds:
(1) RMI_t(r) > t11;
(2) RORQ_t(r) > t12;
(3) ROS_t(r) < t13.
Here t13 should be slightly greater than 1.
Definition 2 (Cars divergence and accumulation). Divergence is when cars move in different directions from a center and accumulation is when cars are moving to a center.
Cars accumulation is identified in region r at time t if RMI_t(r), RIRQ_t(r), RIOI_t(r), and RIS_t(r) meet the thresholds:
(1) RMI_t(r) > t21;
(2) RIRQ_t(r) > t22 and RIOI_t(r) > t23;
(3) RIS_t(r) > t24.
Here t23 > 1 and t24 > 1.
Cars divergence is identified in region r at time t if RMI_t(r), RORQ_t(r), RIOI_t(r), and ROS_t(r) meet the thresholds:
(1) RMI_t(r) > t31;
(2) RORQ_t(r) > t32 and RIOI_t(r) < t33;
(3) ROS_t(r) > t34.
Here 0 < t33 < 1 and t34 > 1.
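The three sets of threshold rules translate directly into code. The thresholds t11..t34 are chosen by the operator per scene, so the values in the test below are purely illustrative:

```python
def classify_region(rmi, rirq, rorq, rioi, ris, ros, thr):
    """Apply the threshold rules of Definitions 1 and 2 to one region.

    thr : dict with keys 't11'..'t13', 't21'..'t24', 't31'..'t34'.
    Returns the (possibly empty) list of detected movement types.
    """
    events = []
    # Definition 1: directional movement (fast, many leaving, asymmetric).
    if rmi > thr['t11'] and rorq > thr['t12'] and ros < thr['t13']:
        events.append('directional')
    # Definition 2: accumulation (more pixels enter, symmetrically).
    if (rmi > thr['t21'] and rirq > thr['t22']
            and rioi > thr['t23'] and ris > thr['t24']):
        events.append('accumulation')
    # Definition 2: divergence (more pixels leave, symmetrically).
    if (rmi > thr['t31'] and rorq > thr['t32']
            and rioi < thr['t33'] and ros > thr['t34']):
        events.append('divergence')
    return events
```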
Examples of real car flow situations are shown in Fig. 2.
4. Cars movement parameters calculation
Integral optical flow allows one not only to define the types of cars movement but also to calculate the characteristics of cars movement. The main characteristics of cars movement are:
• direction;
• speed;
• density;
• intensity.
Direction indicates the destination toward which cars move. To determine the cars movement direction, we can simply divide [0, 2π) into several intervals of equal length and count, for each interval, the number of pixels whose motion direction falls into that interval. The interval with the most pixels shows the main motion direction. Supposing this interval is [2iπ/n, 2(i+1)π/n), then 2(i+1)π/n can be chosen as the main motion direction θ_m. The main direction is more meaningful when ROS is small, i.e., close to 1, e.g., ROS < 1.5. If ROS is too big, e.g., ROS > 4, it means the pixels are moving symmetrically to some extent; in this case, there is no main motion direction.
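A sketch of the interval-count procedure, assuming the integral-flow array above. As one reasonable representative of the winning interval, this version returns its midpoint, and it skips the ROS check for brevity; both are assumptions of the sketch:

```python
import numpy as np

def main_direction(iof, n_bins=8, min_disp=0.5):
    """Estimate the main motion direction via a histogram over [0, 2*pi).

    Returns the midpoint of the most populated direction interval, or
    None when no pixel moves.
    """
    dx, dy = iof[..., 0], iof[..., 1]
    moving = np.hypot(dx, dy) >= min_disp
    if not moving.any():
        return None
    # Map atan2 results from (-pi, pi] onto [0, 2*pi).
    angles = np.mod(np.arctan2(dy[moving], dx[moving]), 2 * np.pi)
    hist, edges = np.histogram(angles, bins=n_bins, range=(0.0, 2 * np.pi))
    k = int(np.argmax(hist))
    return 0.5 * (edges[k] + edges[k + 1])
```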
The speed of pixel I_t(p) over the time period from I_t to I_{t+itv} is defined as follows:

S_t^itv(p) = ||IOF_t^itv(p)|| / itv,   (2)
To determine the speed of cars inside a certain region r, only pixels that move from other positions should be considered. Let MP denote the set of positions inside region r at which the corresponding pixels have moved from other positions according to the integral optical flow; then the speed of cars inside r over the time period from I_t to I_{t+itv} is defined as follows:
S_t^itv(r) = (1/N) Σ_{i=1}^{N} S_t^itv(p_i),   (3)
where p_i ∈ MP, N = card MP, and card MP is the number of elements in set MP. Accordingly, density indicates the proportion of cars inside a certain region r over the time period from I_t to I_{t+itv} and is defined as follows:

D_t^itv(r) = (N / A) × 100%,   (4)

where A is the area of r.
Intensity is a measure of the average occupancy of a certain region r by cars over the time period from I_t to I_{t+itv}. Let MP_i denote the set of positions inside r at which the corresponding pixels have moved from other positions at time I_{t+i}; then the intensity of r over the time period from I_t to I_{t+itv} is defined as follows:

T_t^itv(r) = (1/itv) Σ_{i=0}^{itv-1} (card MP_i / A),   (5)

where card MP_i is the number of elements in set MP_i and A is the area of r.
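Equations (2)-(4) can be sketched for a rectangular region as below. Membership in MP is approximated here by a minimum integral displacement, which is an assumption of this sketch rather than the paper's exact criterion:

```python
import numpy as np

def region_traffic_parameters(iof, itv, y0, y1, x0, x1, min_disp=0.5):
    """Average speed (Eqs. 2-3) and density (Eq. 4) for one region.

    iof : (H, W, 2) integral optical flow accumulated over itv frames.
    A pixel is counted in MP when its integral displacement magnitude
    reaches min_disp, i.e. it actually moved during the itv frames.
    """
    win = np.linalg.norm(iof[y0:y1, x0:x1], axis=2)
    mp = win >= min_disp                       # approximate MP membership
    n = int(mp.sum())                          # N = card MP
    area = win.shape[0] * win.shape[1]         # A
    speed = float((win[mp] / itv).mean()) if n else 0.0   # Eq. (3)
    density = 100.0 * n / area                            # Eq. (4)
    return speed, density
```

Intensity (Eq. 5) would average the per-frame counts card MP_i over the itv frames in the same fashion.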
5. Car flow monitoring results
The proposed method has been tested on several real-world videos. Let us show how the main cars movement types and parameters are identified in real images. For the experiments, we chose a video with moving cars at an unregulated road intersection. Fig. 3 shows the results for (a) cars directional movement, (b) cars accumulation, and (c) cars divergence.
Fig. 2. Examples of (a) cars direct flow, (b) cars accumulation, (c) cars divergence
Fig. 3. The results of identified types of cars movement: (a) cars directional movement, (b) cars accumulation and (c) cars divergence
Traffic congestion can be detected by considering parameters including speed and density. The main characteristic of traffic congestion is that cars move very slowly, or even cannot move at all, for a relatively long time period, and thus the driveway is almost full. Table 1 shows how traffic congestion is identified, where S is the maximum speed for traffic congestion, R is the minimum density for traffic congestion, and T is the minimum duration for traffic congestion. Operators of the traffic monitoring center should specify these parameters.
Table 1. Traffic congestion identification
Situation to avoid  | Speed | Density | Time of duration
Traffic congestion  | < S   | > R     | > T
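The rule in Table 1 amounts to checking that low speed and high density persist over time. A possible sketch with operator-supplied S, R, and T (here T is counted in frames, an assumption of this sketch):

```python
def detect_congestion(speeds, densities, S, R, T):
    """Flag congestion when speed < S and density > R hold for at
    least T consecutive frames (S, R, T are set by the operator).

    speeds, densities : per-frame measurements for one region.
    """
    run = 0
    for s, d in zip(speeds, densities):
        # Extend the current run of congested frames, or reset it.
        run = run + 1 if (s < S and d > R) else 0
        if run >= T:
            return True
    return False
```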
Conclusion
Our paper is devoted to the important problem of traffic analysis with a stationary camera and the detection of complex situations that can appear on roads. We have defined three types of cars movement: directional movement, divergence, and accumulation, and presented a method to identify these movements at an early stage. Our method mainly consists of the following steps: integral optical flow computation, pixel-level motion analysis, region-level motion analysis, and threshold segmentation. The accumulative effect of integral optical flow is exploited to separate background and foreground and to obtain intensive motion regions, which are usually of interest. Based on integral optical flow, pixels can be tracked; thus, for traffic monitoring tasks, certain parameters including direction, speed, density, and intensity can be calculated for any region. By using these parameters, traffic congestion can be detected automatically.
The effectiveness of our method has been demonstrated and confirmed by experimental results. The performed experiments proved that integral optical flow and motion maps can be efficiently used for identifying car traffic movement in video.
References
[1] Al-Sakran HO. Intelligent traffic information system based on integration of internet of things and agent technology, International Journal of Advanced Computer Science and Applications 2015; 6: 37-43.
[2] Ao GC, Chen HW, Zhang HL. Discrete analysis on the real traffic flow of urban expressways and traffic flow classification. Advances in Transportation Studies 2017; 1(Spec Iss): 23-30.
[3] Cao J. Research on urban intelligent traffic monitoring system based on video image processing. International Journal of Signal Processing, Image Processing and Pattern Recognition 2016; 9: 393-406.
[4] Rodriguez T, Garcia N. An adaptive, real-time, traffic monitoring system. Mach Vis Appl 2010; 21: 555-576.
[5] Kastrinaki V, Zervakis M, Kalaitzakis K. A survey of video processing techniques for traffic applications. Image and Vision Computing 2003; 21: 359-381.
[6] Huang DY, Chen CH, Hu WC, et al. Reliable moving vehicle detection based on the filtering of swinging tree leaves and raindrops. J Vis Commun Image Represent 2012; 23: 648-664.
[7] Zhang W, Wu QMJ, Yin HB. Moving vehicles detection based on adaptive motion histogram. Digit Signal Process 2010; 20: 793-805.
[8] Joshi A, Mishra D. Review of traffic density analysis techniques. International Journal of Advanced Research in Computer and Communication Engineering 2015; 4(7): 209-213.
[9] Nagaraj U, Rathod J, Patil P, Thakur S, Sharma U. Traffic jam detection using image processing. International Journal of Engineering Research and Applications 2013; 3(2): 1087-1091.
[10] Shafie AA, Ali MH, Fadhlan H, Ali RM. Smart video surveillance system for vehicle detection and traffic flow control. Journal of Engineering Science and Technology 2011; 6(4): 469-480.
[11] Khanke P, Kulkarni PS. A technique on road traffic analysis using image processing. International Journal of Engineering Research and Technology 2014; 3: 2769-2772.
[12] Kamath VS, Darbari M, Shettar R. Content based indexing and retrieval from vehicle surveillance videos using optical flow method. Int J Sci Research 2013; II(IV): 4-6.
[13] Cheng J, Tsai YH, Wang S, Yang MH. SegFlow: Joint learning for video object segmentation and optical flow. Proceedings of International Conference on Computer Vision 2017: 686-695.
[14] Zhang W, Hou Y, Wang S. Event recognition of crowd video using corner optical flow and convolutional neural network. Proceeding of Eighth International Conference on Digital Image Processing 2016: 332-335.
[15] Ravanbakhsh M, Nabi M, Mousavi H, Sangineto E, Sebe N. Plug-and-play CNN for crowd motion analysis: An application in abnormal event detection. Source: (https://arxiv.org/abs/1610.00307).
[16] Andrade EL, Blunsden S, Fisher RB. Modelling crowd scenes for event detection. Proceedings of 18th International Conference on Pattern Recognition 2006: 1: 175-178.
[17] Wang Q, Ma Q, Luo CH, Liu HY, Zhang CL. Hybrid histogram of oriented optical flow for abnormal behavior detection in crowd scenes. International Journal of Pattern Recognition and Artificial Intelligence 2016; 30(2): 210-224.
[18] Mehran R, Oyama A, Shah M. Abnormal crowd behavior detection using social force model. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition 2009: 935-942.
[19] Chen C, Ye S, Chen H, Nedzvedz O, Ablameyko S. Integral optical flow and its applications for dynamic object monitoring in video. J Appl Spectrosc 2017; 84: 120-128.
[20] Chen H, Ye S, Nedzvedz O, Ablameyko S. Application of integral optical flow for determining crowd movement from video images obtained using video surveillance systems. J Appl Spectrosc 2018; 85: 126-133.
Authors' information
Huafeng Chen (b. 1982) got his PhD from Zhejiang University in 2009, majoring in Earth Exploration and Information Technology. Currently he works as vice professor at College of Information Science and Technology of Zhejiang Shuren University. Research interests are image processing, remote sensing, and GIS applications. Email: eric.hf.chen@hotmail.com .
Shiping Ye (b. 1967) got his master's degree from Zhejiang University in 2003, majoring in Computer Science and Technology. He works as Vice President and professor at Zhejiang Shuren University. Research interests are image processing, remote sensing, and GIS applications. E-mail: zjsruvsp@163.com .
Alexander Nedzvedz (b. 1970) got his DSc from the National Academy of Sciences of Belarus in 2013, majoring in Computer Science and Technology. He works as Head of the Computer Technology and Systems department of Belarusian State University. Research interests are image processing, feature extraction, algorithms of medical image segmentation, segmentation of color images, pattern recognition, mathematical morphology, software architecture, and machine learning. E-mail: nedzveda@tut.by .
Olga Nedzvedz (b. 1975) graduated from Physics Faculty, Belarusian State University in 1997, majoring in Medical and Biological Physics and Computer Science and Technology in medicine. She works as senior lecturer of Medical
and Biological Physics department of Belarusian Medical University. Research interests are analysis of medical images, mathematical simulation of medical processes, biophysics and biophysical education. E-mail: olga_nedzved@tut.by .
Hexin Lv (b. 1964) graduated from Hangzhou Dianzi University in 1986, majoring in Computer Software. He works as professor, Full-time Deputy Director of the Academic Committee, Director of the Software R&D Center, and Head of the "13th Five-Year Plan" Zhejiang Provincial First-Class Discipline (B) of Computer Science and Technology of Zhejiang Shuren University. Research interests are artificial intelligence, computer applications, and intelligent information systems. E-mail: hexin1024@sohu.com .
Sergey Ablameyko (b. 1956) got his PhD in 1984 and DSc in 1990, majoring in Information Processing Systems. He is Academician of the National Academy of Sciences of Belarus and professor at Belarusian State University. Research interests are image analysis, pattern recognition, digital geometry, knowledge-based systems, geographical information systems, and medical imaging. E-mail: ablameyko@bsu.by .
Code of State Categories Scientific and Technical Information (in Russian - GRNTI): 29.31.15, 29.33.43, 20.53.23.
Received January 14, 2019. The final version - April 18, 2019.