
IMAGE PROCESSING, PATTERN RECOGNITION

Multispectral optoelectronic device for controlling an autonomous mobile platform

V.S. Titov1, A.G. Spevakov1, D. V. Primenko1 1 Southwest State University, 305040, Russia, Kursk, ul.50 Let Oktyabrya, 94

Abstract

The paper substantiates the use of multispectral optoelectronic sensors intended to solve the problem of improving the positioning accuracy of autonomous mobile platforms. A mathematical model of the developed device operation has been suggested in the paper. Its distinctive feature is the cooperative processing of signals obtained from sensors operating in ultraviolet, visible, and infrared ranges and lidar. It reduces the computational complexity of detecting dynamic and stationary objects within the field of view of the device by processing data on the diffuse reflectivity of materials. The paper presents the functional organization of a multispectral optoelectronic device that makes it possible to detect and classify working scene objects with less time spending as compared to analogs. In the course of experimental research, the validity of the mathematical model was evaluated and there were obtained empirical data by means of the proposed hardware and software test stand. The accuracy evaluation of the detected object, at a distance of up to 100m inclusive, is within 0.95. At a distance of more than 100 m, it decreases. This is due to the operating range of a lidar. Error in determining spatial coordinates is of exponential character and it also increases sharply at a distance close to 100 m.

Keywords: multispectral sensor, control device, autonomous mobile platform, image recognition.

Citation: Titov VS, Spevakov AG, Primenko DV. Multispectral optoelectronic device for controlling an autonomous mobile platform. Computer Optics 2021; 45(3): 399-404. DOI: 10.18287/2412-6179-C0-848.

Introduction

At present, optoelectronic devices (OED) are widely used in various branches of the national economy, science, and industry. They make it possible to form images of observed objects and analyze their parameters under changing observation conditions [1 - 2]. Such devices can be used in tracking systems, contactless monitoring of the state of objects, environmental situation recognition, and route construction [3 - 4].

One of the goals is to build an optoelectronic control device for an autonomous mobile platform. Such platforms can be applied under conditions that are dangerous to human health, in the analysis of radiation, chemical, and bacteriological contamination, and in round-the-clock monitoring of geographically remote places [5]. However, these devices must operate in territories that have not been prepared beforehand, with a complex landscape and the presence of both temporary and permanent obstacles [6 - 7].

Data processing in modern OED is based on information received from an optoelectronic sensor, methods and algorithms for image analysis, and criteria for evaluating the quality of the device's operation. It usually relies on observation in a single wavelength range, which does not always provide the quality necessary for recognizing complex, multicomponent images. Therefore, to improve the quality of image object recognition and the device's robustness to external interference, it is necessary to use multispectral sensors that provide images in several spectral ranges.

Using data obtained in different spectral ranges makes it possible to improve the quality and information content of the result when identifying and selecting image objects. An analysis of scientific publications has revealed that existing works do not shed enough light on the problem of combining information from sensors working in different spectral ranges. When developing control devices for autonomous mobile platforms, it is necessary to use multifunctional methods and image processing algorithms that reduce computational complexity while increasing the accuracy and reliability of the results [8 - 9]. This allowed us to formulate the main objectives of the study.

Research objective

Currently, the weak point of all existing autonomous robotic systems is the navigation subsystem [10 - 12]. For successful orientation in space, a mobile platform must plot a route, manage motion parameters, receive and reliably process data about the environment from sensors, and specify its location based on georeferencing. An autonomous platform must determine its own coordinates and select the motion vector independently, without human intervention, based on sensor data alone. Artificial intelligence-based control systems being developed for autonomous mobile platforms are designed to support continuous sensor scanning for rapid decision-making about path changes. There can be several such cycles: one responsible for avoiding obstacles, another for following the trajectory, and so on.

To construct a route, it is necessary to obtain information about objects and obstacles located on the working scene, whose shape and size may vary. Special attention should be paid to mobile obstacles, i.e. those that can change their location during the platform's movement. In this case, they may end up on the path of a robot following a pre-formed route, which triggers the process of changing the original route. A similar situation occurs when an obstacle hidden behind another object is detected [13 - 14].

The structure and nature of obstacles also differ; for example, a water barrier or a cliff is difficult to detect, which must be taken into account when laying a route.

To handle such problems, specialized devices are needed that can reliably, and with a given accuracy, localize areas of the working scene containing obstacles under constantly changing observation conditions. Since the input data of a control device for an autonomous mobile platform contain uncertainty during movement, due to the location of surrounding objects (both dynamic and stationary), the complexity of the landscape, and weather events, such devices must operate automatically, without the constant presence of a person.

The engineering challenge is to create a multispectral optoelectronic device for an autonomous mobile platform that localizes working scene objects from images obtained in various spectral ranges, controls the platform, and identifies stationary and dynamic objects within the field of view of the device.

1. Mathematical model of operation of multispectral optoelectronic device for an autonomous mobile platform

The mathematical model M_PS of operation of a multispectral optoelectronic device for an autonomous mobile platform (AMP) consists of the following particular submodels:

- data input model M_inp from optoelectronic sensors:

I_n(x, y) = M_inp(E(u, w), S),   (1)

where I_n - pixel brightness amplitude in the image from sensor n; (x, y) - pixel coordinates in the image; E - radiation flow from a continuous working scene at a point with coordinates (u, w); S - sensor spectral range;

- data input model M_inpL from lidar:

P(R) = M_inpL = P0 · ΔR · β_π(R) · A · R^(-2) · exp(-2 ∫[0,R] α(r) dr),   (2)

where P(R) - instantaneous power received from a distance R; P0 - laser pulse power; ΔR = c · τ / 2 - spatial resolution; c - light velocity; τ - laser pulse duration; β_π - volume coefficient of backscattering; A - effective area of the receiving aperture; α - attenuation ratio;
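Under simplifying assumptions (constant backscattering and attenuation along the path, so the integral in Eq. (2) reduces to α·R), the single-pulse lidar equation can be evaluated numerically. The parameter values in the following sketch are invented for illustration, not characteristics of the actual device:

```python
import math

def lidar_power(P0, tau, beta_pi, A, R, alpha, c=3.0e8):
    """Instantaneous received power P(R) for a single lidar pulse, Eq. (2).

    P0      - laser pulse power, W
    tau     - laser pulse duration, s
    beta_pi - volume backscattering coefficient (taken constant here)
    A       - effective area of the receiving aperture, m^2
    R       - range, m
    alpha   - attenuation ratio, 1/m (taken constant, so the integral
              of alpha(r) over [0, R] reduces to alpha * R)
    """
    dR = c * tau / 2.0  # spatial resolution
    return P0 * dR * beta_pi * A / R**2 * math.exp(-2.0 * alpha * R)

# Received power decays slightly faster than 1/R^2 because of attenuation:
p_50 = lidar_power(P0=10.0, tau=10e-9, beta_pi=1e-6, A=0.01, R=50.0, alpha=1e-4)
p_100 = lidar_power(P0=10.0, tau=10e-9, beta_pi=1e-6, A=0.01, R=100.0, alpha=1e-4)
```

The rapid decay of P(R) with range is consistent with the reliability drop the paper reports beyond 100 m.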

- model M_fn for filtering systematic and random image noise:

I_fn(x, y) = M_fn(v1(v2(I_n))),   (3)

where I_fn(x, y) - brightness level of an image point with coordinates (x, y) after filtering, obtained from sensor n; v1, v2 - image noise filter functions;

- model M_q for detecting a set of objects in the field of view of the device by obtaining a multispectral image combined from frames received in several spectral areas (400 - 500 nm, 500 - 600 nm, 600 - 700 nm):

Q_n = M_q(I_n, P(R)),   (4)

where Q_n - a set of objects in space obtained by combining individual image segments in different spectral ranges into single objects, based on the distance to them;
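The paper does not specify the concrete filter functions v1 and v2 in Eq. (3). A common choice for suppressing impulse and random noise, shown here purely as an illustrative sketch, is a median filter followed by a Gaussian filter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def m_fn(I_n):
    """Sketch of Eq. (3): I_fn = M_fn(v1(v2(I_n))).

    v2: 3x3 median filter, suppresses impulse (systematic) noise;
    v1: Gaussian filter, smooths residual random noise.
    Both concrete filter choices are assumptions for illustration.
    """
    v2_out = median_filter(I_n, size=3)
    v1_out = gaussian_filter(v2_out, sigma=1.0)
    return v1_out

# A frame of constant brightness corrupted by Gaussian noise is smoothed:
rng = np.random.default_rng(0)
frame = 100.0 + rng.normal(0.0, 10.0, size=(64, 64))
filtered = m_fn(frame)
```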

- model M_xyz of spatial reference of detected objects relative to the device coordinate system;

- model M_map of route update with regard to detected obstacles:

<x_i, y_i> = M_map(<X_i, Y_i>, <x_pn, y_pn, z_pn, Δx_pn, Δy_pn, Δz_pn>),   (5)

where <x_i, y_i> - a set of points of the updated route for the AMP; <X_i, Y_i> - a set of points of the original route, built with the use of the positioning system; <x_pn, y_pn, z_pn> and <Δx_pn, Δy_pn, Δz_pn> - coordinates of the geometric center and dimensions of detected object n.

The developed mathematical model M_PS is written as follows:

M_PS = M_map(M_xyz(M_q(M_fn(M_inp(E(u, w), S)), M_inpL))),   (6)

and it describes the process of building a route for an AMP, taking into account the detected dynamic and static objects on the route.
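Eq. (6) composes the submodels by nested function application. The following Python sketch makes that structure explicit with trivial stub implementations; all data values and the obstacle-avoidance rule are invented placeholders, and only the composition order follows Eq. (6):

```python
def m_inp(scene):       # Eq. (1): optoelectronic sensor input (stub)
    return {"frames": scene["frames"], "S": scene["spectral_ranges"]}

def m_inp_l(scene):     # Eq. (2): lidar input (stub): ranges to objects, m
    return scene["ranges"]

def m_fn(data):         # Eq. (3): noise filtering (identity stub)
    return data

def m_q(data, lidar):   # Eq. (4): object detection using range data (stub)
    return [{"id": i, "range": r} for i, r in enumerate(lidar)]

def m_xyz(objects):     # spatial reference of detected objects (stub)
    return [dict(o, xyz=(o["range"], 0.0, 0.0)) for o in objects]

def m_map(route, objects):  # Eq. (5): drop route points blocked by obstacles
    blocked = {o["xyz"][0] for o in objects}
    return [p for p in route if p not in blocked]

def m_ps(scene, route):     # Eq. (6): the full composed model M_PS
    objects = m_xyz(m_q(m_fn(m_inp(scene)), m_inp_l(scene)))
    return m_map(route, objects)

scene = {"frames": [], "spectral_ranges": 5, "ranges": [12.0, 40.0]}
updated = m_ps(scene, route=[10.0, 12.0, 20.0, 40.0, 55.0])  # drops 12.0 and 40.0
```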

The novelty of the developed mathematical model consists in calculating the three-dimensional coordinates of the geometric centers and the dimensions of objects detected in space from a sequence of images obtained in various spectral ranges by optoelectronic sensors of a mobile observation system under complex conditions. This allows the original route of an AMP, formed with the help of positioning systems, to be adjusted for the detected obstacles, thereby increasing the accuracy of spatial reference. Improved speed is achieved by detecting heterogeneous objects in different spectral ranges by color and classifying them based on albedo value. Increased accuracy of calculating the distance to an object is achieved by using information from a laser rangefinder.

2. Method and algorithms of multispectral optoelectronic device operation

Based on the developed mathematical model, a method has been proposed for identifying dynamic objects from a mobile platform using images obtained in different spectral ranges and lidar data. The method includes: filtering images from random and systematic interference; forming a multispectral image combined from several images in different spectral zones; identifying spatial objects from the difference characteristics of spectral images; adjusting the range estimation of detected objects based on data obtained from a lidar; calculating the three-dimensional coordinates of scene objects taking into account the movement of the mobile platform; visual alignment of the platform's own position; and dynamic calibration of the multispectral optoelectronic device.

Fig. 1. Algorithm for identifying dynamic objects from a mobile platform from images obtained in different spectral ranges and lidar data

The algorithm for identifying spatial objects and calculating the three-dimensional coordinates of their geometric centers and dimensions (Fig. 1) consists of the following operations: input and preprocessing of data from multispectral optoelectronic sensors (blocks 1 - 2); forming a multispectral image (block 3), the block diagram of the multispectral image shaping algorithm is shown in fig. 2; identifying image objects (block 6); calculating the three-dimensional coordinates of identified objects (blocks 7 - 9); and calibrating the multispectral optoelectronic device (blocks 10 - 15).

The mobile platform control algorithm makes it possible to increase the accuracy of spatial reference in a dynamically changing environment by adjusting the original route that has been formed through positioning systems with regard to the detected obstacles.

The novelty of the proposed algorithms consists in forming a pre-compressed, quantized multispectral image from a sequence of images obtained from a multispectral optoelectronic sensor, which reduces computational costs when identifying scene objects. This, in turn, increases the accuracy of calculating the three-dimensional coordinates of the geometric centers of objects and their sizes by adjusting the range estimation, as well as the accuracy of the spatial reference of the mobile platform and of updating the initial route, taking into account the detected obstacles.

[Fig. 2 flowchart: Start → input of 5 spectral planes → selection of the minimum spectral plane → formation of a multispectral image → subtraction of the minimum spectral plane from each spectral plane → formation of areas with a size of 16×16 → discrete cosine transform → quantization → coding → End]

Fig. 2. Algorithm for forming a multispectral image
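The steps recoverable from Fig. 2 can be sketched in Python. The paper does not specify how the "minimum spectral plane" is selected or what quantization step is used, so the choices below (the plane with the least total brightness; a uniform step of 8) are illustrative assumptions, and the final coding stage is omitted:

```python
import numpy as np
from scipy.fft import dctn

def form_multispectral(planes, q_step=8.0):
    """Sketch of the Fig. 2 pipeline for an array of spectral planes.

    planes: float array of shape (n_planes, H, W), with H and W
    multiples of 16. Returns quantized DCT coefficients of the
    16x16 areas of the difference planes; the selection criterion
    and the quantization step q_step are assumptions.
    """
    # select the minimum spectral plane (least total brightness, assumed)
    base = planes[np.argmin(planes.sum(axis=(1, 2)))]
    diffs = planes - base                 # subtract it from each spectral plane
    n, H, W = diffs.shape
    # form areas with a size of 16x16
    blocks = diffs.reshape(n, H // 16, 16, W // 16, 16).transpose(0, 1, 3, 2, 4)
    coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")  # per-area 2D DCT
    return np.round(coeffs / q_step)      # uniform quantization

# Three constant 32x32 planes: the 10.0 plane is selected as the base.
planes = np.stack([np.full((32, 32), v) for v in (10.0, 40.0, 90.0)])
q = form_multispectral(planes)
```

For constant planes only the DC coefficient of each 16×16 area survives, which is why such pre-compression reduces the data volume passed to object identification.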

3. Development of the functional organization of a multispectral optoelectronic device for an autonomous mobile platform

The functional organization of a multispectral optoelectronic device for an autonomous mobile platform is shown in fig. 3. The device contains a group of individual modules that perform operations according to the developed algorithms. All modules, with the exception of the multispectral optoelectronic sensor (MOES), optoelectronic sensor (OES), lidar (LD), positioning system controller (PSC), RAM unit (RAM), and radio transmission unit (RTU), are implemented in FPGA [15 - 16]. The novelty of the optoelectronic device for localization of working scene objects based on images obtained in various spectral ranges is the introduction of multispectral sensors and an object identification module, which reduces the computational complexity of identifying working scene objects. The other novelties include modules for correcting the range estimation and calculating the three-dimensional coordinates of objects, a calibration module using a reference object with radiation sources in the ultraviolet (UV), visible (VI), and infrared (IR) regions of the spectrum, and a module for clarifying the initial route, which makes it possible to increase the accuracy of autonomous mobile platform positioning.

[Fig. 3 block diagram showing: MOES, OES, filtration module, multispectral imaging module, FPGA data bus, scene object selection module, control block, range correction module, module for calculating three-dimensional coordinates, initial route refinement module, engine controller box, RAM, RTU]

Fig. 3. Multispectral optoelectronic device for autonomous mobile platform

The device operates as follows. The image from MOES, in the form of an array of five frames obtained in different spectral ranges (475 nm, 560 nm, 670 nm, 720 nm, 850 nm), enters the filter module, where systematic and random interference is detected and smoothed; the image array is then transmitted to the multispectral image generation unit (MIGU), where the images are processed in accordance with the algorithm shown in fig. 2. Concurrently, the image frame from the optoelectronic sensor (OES) is transmitted to the filter module and processed. The device initialization command and the initial route of autonomous mobile platform movement, as a sequence of points in degrees, minutes, and seconds (e.g. 51°44'42.8", 36°11'58.2"), are received by the radio transmission unit (RTU).
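Route points arrive in degrees-minutes-seconds form, as in the example above. A small helper (an illustrative utility, not part of the device firmware) converts such points to decimal degrees for further computation:

```python
def dms_to_decimal(degrees, minutes, seconds):
    """Convert a degrees/minutes/seconds route point to decimal degrees."""
    sign = -1.0 if degrees < 0 else 1.0
    return sign * (abs(degrees) + minutes / 60.0 + seconds / 3600.0)

# The sample route point 51°44'42.8", 36°11'58.2" in decimal degrees:
lat = dms_to_decimal(51, 44, 42.8)   # ≈ 51.745222
lon = dms_to_decimal(36, 11, 58.2)   # ≈ 36.199500
```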

The range estimation adjustment module (REAM) receives from LD an array of data on the distance to working scene objects, with a horizontal angular resolution step of 0.05°. The multispectral image is received by the working scene objects identification module, where image areas corresponding to the objects are identified and selected; these are transmitted to the RAM unit and to the three-dimensional coordinates calculation module (TCCM), where the geometric centers of the identified and selected objects and their sizes are calculated. The calculation result, as an array of coordinates (x, y, z) of object n, is received by REAM, where it is refined against the data obtained from LD.
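The paper does not detail how TCCM computes geometric centers. A minimal sketch, assuming a pinhole camera model with an invented focal length and principal point, combines the pixel centroid of a selected region with the lidar-refined range:

```python
import numpy as np

def object_center_3d(mask, range_m, f_px=1000.0, cx=320.0, cy=240.0):
    """Geometric center of a selected image region, back-projected to 3D.

    mask    - boolean mask of the object in the image
    range_m - lidar-refined distance to the object, m
    f_px    - focal length in pixels (assumed camera model)
    cx, cy  - principal point (assumed)
    Returns (X, Y, Z) in the device coordinate system (pinhole model).
    """
    ys, xs = np.nonzero(mask)
    u, v = xs.mean(), ys.mean()       # pixel centroid of the region
    X = (u - cx) * range_m / f_px     # pinhole back-projection
    Y = (v - cy) * range_m / f_px
    return (X, Y, range_m)

# A 19x19-pixel object centered on the principal point at 50 m:
mask = np.zeros((480, 640), dtype=bool)
mask[231:250, 311:330] = True
center = object_center_3d(mask, range_m=50.0)  # → (0.0, 0.0, 50.0)
```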

This improves the accuracy of determining the three-dimensional coordinates and dimensions of objects. The results are then passed to the initial route refinement module. By processing a multispectral image, the device can identify and select objects located on the motion plane, such as ground depressions filled with liquid. The initial route refinement module calculates the effect of the identified objects on the specified route of the autonomous mobile platform. If an object is detected on the route, the route is rebuilt to avoid the obstacle and transmitted to the control unit, which generates commands for the engine controller unit. The data received from the current frame is stored in the RAM unit; before further use, the data obtained in the previous step is deleted.

4. Experimental results

The study of parameters of a multispectral optoelectronic device for an autonomous mobile platform was carried out in accordance with the experimental test procedure based on real images of the working scene with a priori known location of reference objects. The obtained data on the number of identified and selected objects, their three-dimensional coordinates, and the coordinates of the specified route were used to estimate the calculation error.

Fig. 4 shows an image of the working scene captured by a multispectral sensor in the ultraviolet, visible, and infrared ranges.

Fig. 4. Working scene captured by a multispectral sensor in the ultraviolet (a), visible (b), and infrared ranges (c)

During the operation of the presented device, images obtained in different spectral ranges are used to detect heterogeneous objects by color and size, and to classify them based on albedo value.

The classifier of the reflective power of various materials was obtained by machine learning of the working scene objects identification module under different environmental conditions. The learning was aimed at obtaining object classification results with a predetermined veracity. As a result of the experiment, the following groups were added to the classifier: water bodies, green plant biomass, soil, plant bark (tissue that is not capable of photosynthesis), and metals.

Fig. 5 shows the result of identifying and selecting a water feature located on the working scene.

Fig. 5. Allocation of water body (a), working scene image (b)

After classifying the image and coloring the identified objects in different colors, we obtain the result shown in fig. 6.


Fig. 6. Object classification result

As a result of the conducted experiments, the probability P of reliable classification of objects was estimated depending on their size and the distance to them (fig. 7). The objects were divided into three groups; the assignment of an object to a specific group was determined based on the value N:

N = f(L).   (7)

The dependence N is calculated based on the parameters of the lens used and the resolution of the matrix receiver of the optoelectronic device. The following rules were used for classification:

N > 10 ⇒ q ∈ K;  5 < N < 10 ⇒ q ∈ C;  N < 5 ⇒ q ∈ M,   (8)

where q - detected object; K - the set of large objects; C - the set of medium-size objects; M - the set of small objects.
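Rule (8) is a direct mapping from the size parameter N to one of the three groups and can be transcribed as:

```python
def size_group(N):
    """Rule (8): assign detected object q to a size group from N = f(L).

    The handling of the boundary values N = 5 and N = 10 is not fixed
    by the paper; they are assigned to the medium group C here.
    """
    if N > 10:
        return "K"   # K - large objects
    if N < 5:
        return "M"   # M - small objects
    return "C"       # C - medium-size objects

groups = [size_group(n) for n in (12, 7, 3)]  # → ['K', 'C', 'M']
```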

Fig. 7 shows that the probability of classification decreases with increasing distance to an object and depends on its size. This is because it is more difficult to classify objects by diffuse reflection at a great distance; however, they can still be distinguished without classification, which reduces the computational complexity of the image recognition problem [17 - 19].

[Fig. 7 plot: classification probability P (vertical axis, 0 to 1.0) versus distance to the object of observation L, m (horizontal axis, 10 to 150 m), with separate decreasing curves for the object groups K, C, and M]

Fig. 7. Dependence of the threshold of distinguishability of objects on the distance to the object

Fig. 8. Measurement error of coordinates of model objects and experimental data

Fig. 8 shows a comparative analysis of the measurement error of the identified objects' coordinates obtained during modeling, marked in red, and by means of experiment, in black.

Conclusions

In the course of the research, the developed multispectral optoelectronic device for an autonomous mobile platform was studied under different conditions and at different distances of the detected object from the mobile platform with a technical vision system. The data obtained allow us to assess the correspondence between the errors in determining the coordinates of the observed object obtained by calculation and by experiment. The analysis of the experimental research and mathematical modeling results has confirmed the adequacy of the developed method and algorithm, which determine the three-dimensional coordinates of objects with acceptable error. The reliability of object identification at a distance of up to 100 m inclusive is within 0.95; at a distance of more than 100 m, the reliability decreases, which is due to the operating range of the lidar.

References

[1] Hafizov RG, Okhotnikov SA. Recognition of continuous complex-valued image contours [In Russian]. Priborostroenie 2012; 55(5): 3-9.

[2] Titov DV, et al. Processing of multi-spectral images for solving the recognition problem [In Russian]. Telecommunications 2018; 5: 35-38.

[3] Sagdullaev YuS, Kovin SD. Perception and analysis of multi-spectral images [In Russian]. Moscow: "Sputnik" Publisher; 2016.

[4] Spevakov AG, Spevakova SV, Matiushin IS. Detection objects moving in space from a mobile vision system. Radio Electronics, Computer Science, Control 2019; 51(4): 103-110. DOI: 10.15588/1607-3274-2019-4-10.

[5] Kerl C, Sturm J, Cremers D. Dense visual SLAM for RGB-D cameras. Proc IEEE/RSJ Int Conf on Intelligent Robots and Systems 2013: 2100-2106.

[6] Spevakova SV. Building the route of a mobile robot based on the analysis of multispectral data [In Russian]. Intellektual'nye i Informatsionnye Sistemy 2019: 334-337.

[7] Bekhtin YuS, Emelyanov SG, Titov DV. Theoretical foundations of digital image processing of embedded optoelectronic systems [In Russian]. Moscow: "Argamak-Media" Publisher; 2016.

[8] Shirabakina TA. Stereoscopic optoelectronic system for determining the parameters of dynamic objects in real time [In Russian]. Datchiki i Sistemy 2004; 6: 65-67.

[9] Kalutskiy I, Spevakova S, Matiushin I. Method of moving object detection from mobile vision system. 2019 Int Russian Automation Conf (RusAutoCon) 2019: 1-5. DOI: 10.1109/RUSAUTOCON.2019.8867632.

[10] Spevakov AG. Method of selection of moving objects [In Russian]. Izvestiya Yugo-Zapadnogo Gosudarstvennogo Universiteta, Seriya: Upravlenie, Vychislitel'naya Tekhnika, Informatika; Meditsinskoe Priborostroenie 2013; 3: 233-237.

[11] Tanygin MO. Investigation of the probability of the occurrence of one type of errors in the system for determining the source of information packages [In Russian]. Izvestiya Yugo-Zapadnogo Gosudarstvennogo Universiteta, Seriya: Upravlenie, Vychislitel'naya Tekhnika, Informatika; Meditsinskoe Priborostroenie 2013; 3: 233-237.

[12] Spevakova SV, Kalutskij IV. Movable stereoscopic device for selecting dynamic objects [In Russian]. Pat RF of Invent N 2714603 C1 of February 18, 2020, Russian Bull of Inventions N5, 2020.

[13] Newcombe RA. KinectFusion: Real-time dense surface mapping and tracking. 2011 10th IEEE Int Symposium on Mixed and Augmented Reality 2011: 127-136. DOI: 10.1109/ISMAR.2011.6092378.

[14] Krishnamoorthy S, Soman K. Implementation and comparative study of image fusion algorithms. Int J Comput Appl 2010; 9(2): 25-35. DOI: 10.5120/1357-1832.

[15] Li C. Illumination-aware faster R-CNN for robust multispectral pedestrian detection. Patt Recogn 2019; 85: 161-171. DOI: 10.1016/j.patcog.2018.08.005.

[16] Jiang G. A simultaneous localization and mapping (SLAM) framework for 2.5D map building based on low-cost LiDAR and vision fusion. Appl Sci 2019; 9(10): 2105.

[17] Bondarenko MA, Drynkin VN. Assessment of the information content of combined images in multispectral systems of technical vision [In Russian]. Programmnye Sistemy i Vychislitel'nye Metody 2016; 1: 64-79. DOI: 10.7256/2305-6061.2016.1.18047.

[18] Tsvetkov OV, Tananykina LV. A preprocessing method for correlation-extremal systems. Computer Optics 2015; 39(5): 738-743. DOI: 10.18287/0134-2452-2015-39-5-738-743.

[19] Zubarev YuB. Spectrozonal methods and systems in space television [In Russian]. Voprosy Radioehlektroniki. Seriya Tekhnika Televideniya 2009; 1: 47-64.

Authors' information

Vitaliy Semenovich Titov (b. 1943), Doctor of Engineering, professor, Head of Computer Technology department of SWSU. Research interests: theoretical and methodological foundations of the construction of adaptive optoelectronic systems used in the automation of technological processes and industries for various purposes. E-mail: tas_06@mail.ru .

Alexander Gennadyevich Spevakov, (b. 1979), Candidate of Engineering, associate professor, Information Security department, Southwest State University. Research interests: computer vision, image processing. E-mail: aspev@yandex.ru .

Dmitry Vladimirovich Primenko, (b. 1995), post-graduate student, Computer Science department, Southwest State University. Research interests: computer vision, image processing. E-mail: dima-primenko777@yandex.ru .

Received December 10, 2020. The final version - February 8, 2021.
