
Information and Communication Technologies

DOI: 10.17516/1999-494X-0367 UDC 004.93'1

Implementation of Moving Object Tracker System

Mohanad Abdulhamid^a and Adam Olalo^b*

^a AL-Hikma University, Baghdad, Iraq
^b University of Nairobi, Nairobi, Kenya

Received 12.09.2021, received in revised form 14.10.2021, accepted 21.11.2021

Abstract. The field of computer vision is increasingly becoming an active area of research, with tremendous efforts being put towards giving computers the capability of sight. As human beings we are able to see, distinguish between different objects based on their unique features, and even trace their movements if they are within our view. For computers to really see, they also need the capability of identifying different objects and tracking them as well. This paper focuses on identifying objects which the user chooses; the chosen object is differentiated from other objects by comparison of pixel characteristics. The chosen object is then tracked with a bounding box for ease of identification of its location. A real-time video feed captured by a web camera is utilized, and it is from the environment visible within the camera view that an object is selected and tracked. The scope of this paper mainly focuses on the development of a software application that achieves real-time object tracking. The software module allows the user to identify the object of interest they wish to track, while the algorithm employed enables noise and size filtering for ease of tracking of the object.

Keywords: implementation, moving object tracker.

Citation: Abdulhamid, M., Olalo, A. Implementation of moving object tracker system, J. Sib. Fed. Univ. Eng. & Technol., 2021, 14(8), 986-995. DOI: 10.17516/1999-494X-0367

© Siberian Federal University. All rights reserved

This work is licensed under a Creative Commons Attribution-Non Commercial 4.0 International License (CC BY-NC 4.0).
* Corresponding author e-mail address: [email protected], [email protected]


1. Introduction

The use of video cameras is becoming common in applications such as traffic monitoring, ATM security, surveillance in shopping malls and banking halls, and even police surveillance. The video displayed by cameras represents a sequence of images being streamed as successive frames.

Tracking in image processing refers to finding the location of an object in the video being processed and following it up in each successive frame received. Various algorithms enable tracking by video cameras, but several processes cut across most of these methods, including gray scaling, thresholding, size filtering, labeling and noise filtering.
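As a rough illustration, these cross-cutting stages map onto a handful of Image Processing Toolbox calls in Matlab. The sketch below is not the paper's own code; the frame variable and the 300-pixel size threshold are assumptions made for illustration.

    gray = rgb2gray(frame);                 % gray scaling of an RGB frame
    gray = medfilt2(gray, [3 3]);           % noise filtering with a median filter
    bw   = im2bw(gray, graythresh(gray));   % thresholding with Otsu's method
    bw   = bwareaopen(bw, 300);             % size filtering: drop regions under 300 px
    [labels, num] = bwlabel(bw);            % labeling of connected components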

For a computer vision system to function as effectively as the equivalent biological system, it should be able to cope with static and dynamic background environments, moving and changing objects, changing viewpoints and even changes in illumination.

With the exception of illumination changes and the scintillating motion of environmental features such as water bodies and leaves, changes for most rigid bodies are usually due to object or camera motion. Moving object detection can be classified as follows:

(A) Based on the camera state:

1. Stationary camera, moving object.

2. Moving camera, moving object.

(B) Based on the number of objects being tracked:

1. Single object tracking in real time.

2. Multiple object tracking in real time.

(C) Based on the number of cameras used to track:

1. Single camera tracking.

2. Multiple camera tracking.

Some works related to our topic can be found in the literature [1-7].

2. Implementation

In this paper, the object tracking system is developed and implemented using Matlab as illustrated in the following sections.

2.1. Main flow chart

The proposed approach can be illustrated with the flowchart shown in Fig. 1. It starts with image acquisition of the live video feed by the web camera, after which the user selects the object of interest, which is identified by various selection algorithms. When the start tracking button is pressed, a flag value indicating whether the button is pressed or not is checked. If it is set, motion is detected in the video and the motion of the specific object of interest is localized, thereby achieving tracking of the object; otherwise tracking is not initiated until the button is pressed.

On the other hand, as long as the stop tracking button has not been pressed, tracking continues indefinitely while the object is within the camera view; once it is pressed, tracking stops and the user can either exit the application or select a different object to track. Similarly, a flag value keeps track of the state of the stop tracking button (whether it is pressed or not). The next sections explain the processes of the main program flowchart of Fig. 1 in detail.

2.1.1. Image acquisition process

Fig. 2 shows the flow chart for the image acquisition process. This process launches the graphical user interface (GUI) handle onto which the video will be displayed; it also contains the command buttons for initiating tracking, stopping and exiting. In addition, the live video acquired by the web camera is previewed at the GUI handle at this stage.
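A minimal sketch of this step with Matlab's Image Acquisition Toolbox might look as follows; the adaptor name 'winvideo', the device ID and the format string are assumptions that depend on the installed web camera.

    vid = videoinput('winvideo', 1, 'RGB24_320x240');  % video input object (assumed adaptor/format)
    set(vid, 'FramesPerTrigger', 1);                   % grab one frame per trigger
    fig  = figure('Name', 'Moving Object Tracker');    % GUI handle for the application
    himg = image(zeros(240, 320, 3, 'uint8'));         % placeholder image in the GUI
    preview(vid, himg);                                % route the live preview into the GUI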

2.1.2. User initiating process

Fig. 3 shows the detailed flow chart for the user initiating process. In this process the user first selects the object they wish to track with a resizable rectangle; the selected object is then modeled by its unique properties (colour and pixel intensity). It is these unique model properties that aid in the object's segmentation and its subsequent tracking in the midst of other objects.
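One way to realize this step, continuing the acquisition sketch above, is to let the user draw a rectangle with imrect and to model the selected patch by its mean colour; the tolerance value is an assumed tuning parameter, not taken from the paper.

    frame = getsnapshot(vid);                 % snapshot of the current frame
    figure, imshow(frame);
    h     = imrect;                           % resizable selection rectangle
    rect  = wait(h);                          % [xmin ymin width height] on double-click
    roi   = imcrop(frame, rect);              % pixels of the chosen object
    model = squeeze(mean(mean(double(roi), 1), 2));   % mean [R; G; B] of the patch
    tol   = 40;                               % assumed per-channel colour tolerance
    mask  = all(abs(double(frame) - reshape(model, 1, 1, 3)) < tol, 3);  % colour-based mask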

2.1.3. Motion detection process

Fig. 4 illustrates the detailed motion detection process flow chart. Before motion detection commences, the tracking process is first initialized; this consists of the video input


Fig. 1. Main program flow chart

Fig. 2. Image acquisition process

Fig. 3. User initiating process

Fig. 4. Motion detection process

object capturing the web camera video feed as per the set frame grabber properties. The frame grabber sets the number of frames captured by the video input object per second and also enables capturing a snapshot of the current frame.

From the current frame snapshot, the object of interest is segmented according to the user-initiated object properties (colour and pixel intensity) that were used in its modeling. After that, motion is extracted using a background subtraction technique, and possible regions of motion of the object are localized within the image sequence as successive frames are grabbed. It is from these successive frames that the motion of the object of interest is segmented and tracked accordingly.
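Continuing the sketches above, background subtraction can be realized by differencing each grabbed frame against a background estimate and intersecting the result with the colour mask of the selected object; the isTracking flag and the intensity threshold of 30 are assumptions.

    bg = getsnapshot(vid);                          % background estimate from the first frame
    while isTracking                                % assumed flag toggled by the GUI buttons
        fr = getsnapshot(vid);                      % grab the next frame
        d  = imabsdiff(rgb2gray(fr), rgb2gray(bg)); % per-pixel absolute difference
        cmask  = all(abs(double(fr) - reshape(model, 1, 1, 3)) < tol, 3);  % colour match
        moving = (d > 30) & cmask;                  % motion restricted to the object's colours
        % ... noise/size filtering, labeling and box drawing on 'moving' ...
    end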

2.1.4. Object processing and tracking process

In this process (Fig. 5), the moving region with the probable location of the object of interest is gray thresholded and then binary masked so as to separate the foreground from the background; our

Fig. 5. Object processing and tracking process (gray threshold → binary mask → noise and size filtering → object labelling)

object of interest will be represented by the foreground (binary mask '1', with the background marked as binary mask '0').

To get a precise mask representing the object, noise and size filtering is carried out so as to remove pixels that could be mistaken to represent the object. After that, object labeling is carried out in the moving region, and it is this labeled object that is tracked by a bounding box.

After tracking is stopped by the user, the video input object is deleted and memory is cleared so as to free up the system's memory resources that were being utilized in tracking the object.
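In Matlab this cleanup step amounts to a few Image Acquisition Toolbox calls, sketched here against the vid object assumed in the earlier snippets:

    stoppreview(vid);   % stop the live preview
    delete(vid);        % delete the video input object
    clear vid           % release the memory resources it held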

2.2. Program algorithm

The program implementation can be summarized by the following steps:

• Using Matlab's video input object the installed adapter of the web camera is called by the program.

• Matlab's preview function utilizes the installed adapter of the web camera to acquire its video feed and displays it in real time at the already launched GUI.

• User initiates object selection.

• An algorithm is then employed that grabs a certain number of frames per second from the image sequences being received.

• A model of the foreground is then computed from the current frame based on the color and pixel intensity values of the user initiated object.

• Background subtraction algorithm is then carried out so as to segment the foreground region which depicts the probable object of interest.

• Morphological dilation is then carried out so as to close the small gap regions of the segmented foreground (see the sketch after this list).

• Further segmentation of the object from the foreground region is attained by employing noise and size filtering algorithms.

• Objects are then labeled based on the number of connected components in the filtered foreground region by employing contour finder techniques. From these labeled objects our object of interest is identified and uniquely tracked.
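The dilation, filtering and labeling steps above might be combined as in the following sketch, applied to the moving mask from the earlier background subtraction snippet; the disk radius of the structuring element is an assumed tuning parameter.

    fg = imdilate(moving, strel('disk', 3));   % close small gaps in the foreground
    fg = bwareaopen(fg, 300);                  % noise and size filtering
    [labels, num] = bwlabel(fg);               % label the connected components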

3. Experimental results

This section illustrates the results of the program when tested with the different video sequences provided. The program algorithm is developed in the Matlab environment, and the video sequences represent real-time data acquired using a low-resolution webcam with an image size of 320×240 pixels.

In this analysis two video sequences are taken into consideration: one with two differently coloured objects and another with similar objects. The results are analysed stepwise, from object segmentation, background subtraction, noise and size filtering and object labelling to the final results as perceived by the user.

Fig. 6 and Fig. 7 give an illustration of the two captured video sequences as perceived by the web camera, while Fig. 8 and Fig. 9 indicate the user-selected regions in the respective video sequences.

3.1. Results from object segmentation

Fig. 10 and Fig. 11 illustrate the binary image representation of the first frame for both video sequences; the white regions represent the possible locations of the object as per the set color constraints used in segmentation of the object.

3.2. Results from background subtraction

Background subtraction detects motion globally, thereby distinguishing the moving object from a static background. Background subtraction alone does not show a clear contrast between the object and the static background in some cases, because the camera reads an image by its pixel values.

Fig. 10. Object segmentation of 1st sequence

Fig. 11. Object segmentation of 2nd sequence

Hence, in this program gray thresholding is applied prior to the background subtraction process, leading to a static background that is enhanced and more apparent, as shown in Fig. 12 and Fig. 13.
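One way to express this ordering in Matlab is to Otsu-threshold both the background and the current frame before differencing them; a sketch under the variable names assumed earlier (im2bw can be replaced by imbinarize in newer releases):

    bwBg = im2bw(rgb2gray(bg), graythresh(rgb2gray(bg)));  % thresholded background
    bwFr = im2bw(rgb2gray(fr), graythresh(rgb2gray(fr)));  % thresholded current frame
    moving = xor(bwFr, bwBg);                              % pixels that changed between the two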

3.3. Results from noise and size filtering

This process has the effect of removing noise and small regions of fewer than 300 pixels that might mistakenly be taken to represent the object, thereby leading to a sharp image that is free from noise, with a clear distinction of the possible object location as per the set colour range constraints.

This effect is illustrated in the video sequences shown in Fig. 14 and Fig. 15, with Fig. 15 showing two possible object detections since the two objects are of similar characteristics.


3.4. Results from object labeling

In this procedure all the connected components are labeled, and every connected component is uniquely identified as a different object by the program. From the visual point of view there is no difference between the results of object labeling and those of filtering, but the detected objects are in fact labeled at this stage in the backend of the program.

From the first video sequence only one object was detected, as shown in Fig. 16, and it was labeled as number 1. From the second video sequence two objects were detected, as shown in Fig. 17; the one on the left was labeled as number 1, while the one on the right was labeled as number 2.

Based on the assumption that the selected object will not have moved significantly between the instant of object selection and its detection in the next captured frame, a relative position of the object is computed, and it is on this positional basis that the rightful selected object is tracked in the second video sequence.
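A sketch of this positional disambiguation: among the labeled candidates, the one whose centroid lies nearest the previously known object position is kept. The prevPos variable is assumed state carried between frames, and labels continues the earlier sketches.

    stats = regionprops(labels, 'Centroid', 'BoundingBox');          % labeled candidates
    cents = cat(1, stats.Centroid);                                  % N-by-2 centroid list
    dist  = hypot(cents(:,1) - prevPos(1), cents(:,2) - prevPos(2)); % distance to last position
    [~, idx] = min(dist);               % nearest candidate is taken as the selected object
    prevPos  = stats(idx).Centroid;     % update the tracked position for the next frame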

3.5. Final results from the user point of view

When the moving object specified by the user is detected, a blue bounding box is drawn surrounding the object; as the program's algorithm is applied between successive frames, the bounding box moves with the object.
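The per-frame drawing step might look as follows, reusing the fr, stats and idx variables from the previous sketches:

    imshow(fr); hold on                 % show the current frame
    bb = stats(idx).BoundingBox;        % bounding box of the tracked object
    rectangle('Position', bb, 'EdgeColor', 'b', 'LineWidth', 2);  % blue box around it
    hold off; drawnow                   % refresh the display for this frame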

Fig. 18 to Fig. 21 illustrate the motion of the chosen object in various frames as perceived by the program from the web camera feed of the first video sequence.


Fig. 16. Object labeling of 1st sequence

Fig. 17. Object labeling of 2nd sequence

Fig. 18. 1st frame result of first video sequence

Fig. 19. 5th frame result of first video sequence

Fig. 20. 20th frame of first video sequence

Fig. 21. 45th frame of first video sequence

Fig. 22. 1st frame of second video sequence

Fig. 23. 5th frame of second video sequence

Fig. 24. 20th frame of second video sequence

Fig. 25. 45th frame of second video sequence

Fig. 22 to Fig. 25 also illustrate the motion of the chosen object as perceived by the program from the web camera feed of the second video sequence.

From both video sequences, the selected object was successfully detected and tracked by a blue bounding box as it moved within the camera view.

4. Conclusion

The main objective of this paper was to develop a program that demonstrates the tracking of a user-selected object of interest within a stationary camera's field of view. The program achieves this by employing various algorithms for object segmentation, gray thresholding, background subtraction, filtering and object labeling, which lead to object localization and its eventual tracking. Though the algorithms employed had a reasonable success rate as far as stationary, controlled scenes are concerned, work needs to be done to fine-tune them for better tracking performance before deployment in real-world situations such as video surveillance, guiding of autonomous vehicles, automatic target recognition and missile guidance.

References

[1] C. Lee, Moving object detection at night, Master's thesis in Computer and Microelectronic Systems, Malaysia Technical University, Malaysia, 2007.

[2] A. Olalo, Moving object tracker, Graduation project, University of Nairobi, Kenya, 2011.

[3] S. Parekh, G. Thakore, K. JaJiya, A survey on object detection and tracking methods, International Journal of Innovative Research in Computer and Communication Engineering, Vol. 2, Issue 2, 2014.

[4] B. Deori, D. Thounaojam, A survey on moving object tracking in video, International Journal on Information Theory, Vol. 3, No. 3, 2014.

[5] P. Panchal, G. Prajapati, S. Patel, H. Shah, J. Nasriwala, A review on object detection and tracking methods, International Journal for Research in Emerging Science and Technology, Vol. 2, Issue 1, 2015.

[6] S. Balaji, S. Karthikeyan, A survey on moving object tracking using image processing, 11th International Conference on Intelligent Systems and Control (ISCO), 2017.

[7] R. Hatwar, S. Kamble, A review on moving object detection and tracking methods in video, International Journal of Pure and Applied Mathematics, Vol. 118, No. 16, 2018.
