A method is proposed, and an estimate is given of the accuracy of determining dynamic motion parameters, on the basis of an optical flow computation algorithm with weighting of image blocks. Results of simulating motion over an underlying surface are presented, and the accuracy of determining motion parameters with an optical sensor is estimated. The experimental results confirm that the use of texture analysis improves the accuracy of optical flow determination when estimating motion parameters.

Keywords: UAV, optical navigation, dense optical flow, motion field, motion parameters
UDC 004.042: 004.67: 681.51: 681.5.015
DOI: 10.15587/1729-4061.2017.108449
ESTIMATION OF ACCURACY IN DETERMINING THE TRANSLATIONAL VELOCITY OF A VIDEO CAMERA USING DATA FROM OPTICAL FLOW
A. Molchanov, Assistant*
E-mail: [email protected]
V. Kortunov, Doctor of Technical Sciences, Professor, Head of Department*
E-mail: [email protected]
R. Mohammadi Farhadi, Postgraduate student*
E-mail: [email protected]
*Department of Aircraft Radio-Electronic Systems Manufacturing, M. E. Zhukovsky National Aerospace University "Kharkiv Aviation Institute", Chkalova str., 17, Kharkiv, Ukraine, 61070
1. Introduction
Recent years have seen a surge of interest in mini and micro UAVs, associated with the possibilities of low-cost monitoring and remote sensing of the Earth's surface [1, 2]. At present, most UAVs employ a global navigation satellite system (GNSS) and an inertial navigation system (INS). Such navigation solutions, however, cannot work in environments with a weak or missing GNSS signal and are not applicable under conditions such as navigation in mountains and hollows, navigation in an enclosed space, or flight in urban environments. 2D and 3D laser scanners can provide additional information for navigation in such difficult terrain, but their shortcomings are high cost, considerable weight and unsuitability for installation on a mini or micro UAV. Stereoscopic video sensors are relatively light and informative, but the related video processing algorithms have significant computational requirements, making their application in real time extremely problematic. Optical flow (OF) can be used to estimate speed, orientation and motion trajectories after the estimated OF and INS values are passed through stochastic filters. Such OF/INS systems provide tremendous opportunities to support mini or micro UAVs in short-range navigation without relying on a GNSS signal. There is therefore a need for new procedures for motion analysis to navigate in environments with a weak or missing GNSS signal.
2. Literature review and problem statement
With the development of efficient OF algorithms and miniature sensing devices, it has become possible to use optical flow to support the navigation of unmanned platforms. OF cameras are of particular interest for study because of their simple representation of speed. In order to calculate relative motion, two main concepts are employed: optical flow and the motion field. Optical flow is the apparent motion of the brightness picture. The motion field is defined as follows: each point of the image is matched with a velocity vector. It has been noticed that honeybees use optical flow for landing, flight speed control and obstacle avoidance [3]. Inspired by the insects' flight pattern, many developments focused on the application of modified video sensors to measure optical flow on various robotized platforms [4]. There are variants of using optical mouse sensors, installed on UAVs, to measure a motion field [5], but low resolution and the impossibility of installing additional focusing optics yielded unsatisfactory results in determining the optical flow [6]. Based on biological experiments, a large number of different types of algorithms for computing the optical flow were developed using computer vision, including approaches based on gradient methods, local image characteristics and block brute-force methods [7]. Block-based methods employ the entire optical flow field; they take into account discrepancies in the available data and are stable enough for numerical solution [8]. There is a need for new, more precise approaches to calculating the optical flow and motion parameters that work in real time.
To measure optical flow, both simple monocular cameras and special optical flow sensors are applied. Solutions for airborne optical flow estimation working in real time employ a GPU-FPGA parallel computing scheme [9]. The feasibility of applying block optical flow methods is predetermined by their amenability to parallelization on graphics processors. The optical flow directions at two points in the image plane unambiguously determine the motion of the camera during purely translational movement. Optical flow is sensitive to noise, and there is a need to develop a reliable method for determining the motion parameters that takes this into account.
3. The aim and objectives of the study
The goal of the present research is to demonstrate the feasibility and effectiveness of using an optical sensor for the estimation of UAV motion parameters from optical flow data under weak texturization of the underlying surface.
To accomplish the set goal, the following tasks had to be solved:
- to develop a method for estimating velocity parameters of the translational motion of the camera under conditions of textural difference in the underlying surface;
- to compare the proposed method with a standard block method and demonstrate the advantages of using a texture analysis [10] for estimating the optical flow;
- to run an analysis of accuracy of the proposed method for determining translational motion parameters under conditions of different texturization.
4. Determining motion parameters from the optical flow data of a video camera
Different points of objects in space are represented by the camera's optical system in the image space at different distances from the focal plane. However, if the distance between the camera and the observed scene is considerably larger than the focal length of the optical system, it can be assumed that the image is formed in the focal plane. In this case, it is possible to use a projective model of the camera, in which the image of a three-dimensional object is obtained by projecting it onto the focal plane (image plane) through a single point called the optical center. A straight line perpendicular to the image plane and passing through this point is called the optical axis of the camera, and the intersection point of the optical axis and the image plane is called the main point. The motion of objects in front of the camera, or camera movement in a still environment, leads to corresponding changes in the picture, and these changes can be used to recover the corresponding motion. Let the camera move in a static environment. The motion field is created by projecting the velocities onto the image plane.
Point p corresponds to point P on the Earth's surface (Fig. 1). These two points are connected by the projection equations. The important thing is that any point of the image can be assigned some vector; these vectors form a field of motion.
Fig. 1. Coordinate system of a projective camera
We shall connect the coordinate system with the camera so that the Z axis coincides with the optical axis of the camera. Let r denote the vector connecting point O to point P = [X, Y, Z]^T, and let f denote the focal length. The projected coordinates of P in the image plane are determined by [11]:
$$p = \frac{fP}{Z}. \qquad (1)$$
In the general case, the measurement of coordinates in a photodetector is performed in units other than those that assign coordinates in the standard system. For a full description of the camera, it is necessary to express the coordinates of point p in the natural units of the photodetector. Under the new system, the coordinates of the projection of point p take the form:
$$u = \frac{fX}{wZ} + u_0, \qquad v = \frac{fY}{hZ} + v_0, \qquad (2)$$
where (u0,v0) are the coordinates of the main point relative to the coordinate origin of the photodetector (in the natural coordinates of the photodetector); w and h are the scales along the ox and oy axes (for example, distances between cells in a matrix photodetector along the rows and columns).
For the subsequent presentation, we shall introduce a three-dimensional vector corresponding to point P = (X, Y, Z) and a two-dimensional vector p = (x, y) corresponding to point p. We shall also define the vector of homogeneous internal coordinates of the camera W = (u, v, 1). Using these notations, the above relations can be represented in compact vector-matrix form [12]:
$$ZW = AP, \qquad (3)$$

where

$$A = \begin{pmatrix} f/w & 0 & u_0 \\ 0 & f/h & v_0 \\ 0 & 0 & 1 \end{pmatrix} \qquad (4)$$
is the matrix of internal settings of the camera, containing parameters only of the optical system and the camera's photodetector.
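As a numerical illustration of the projection model (1)-(4), the sketch below projects a world point through the intrinsic matrix A; all parameter values (focal length, pixel pitch, principal point) are placeholders, not the parameters of the camera used in the experiments.

```python
import numpy as np

# Placeholder intrinsic parameters (illustrative values, not the experimental camera's)
f = 1e-3            # focal length, m
w = h = 1e-5        # pixel pitch along ox and oy, m
u0, v0 = 128, 128   # principal point, pixels

# Intrinsic matrix A from (4)
A = np.array([[f / w, 0.0,   u0],
              [0.0,   f / h, v0],
              [0.0,   0.0,   1.0]])

# A point P = (X, Y, Z) in the camera coordinate system
P = np.array([2.0, -1.5, 100.0])

# Projection (3): Z*W = A*P with W = (u, v, 1)
W = A @ P / P[2]
u, v = W[0], W[1]
print(f"pixel coordinates: u = {u:.2f}, v = {v:.2f}")
```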
Let OXYZ be the global coordinate system and O'X'Y'Z' the standard coordinate system of the camera. A transition from system OXYZ to system O'X'Y'Z' can be performed by rotating the coordinate axes to system OX''Y''Z'' and a subsequent shift of the coordinate origin. Then the relationship between the coordinates of point P in the global and standard systems can be represented as:

$$P' = RP + t, \qquad (5)$$
where P and P' are the vectors of spatial coordinates of point P in the global and standard systems, respectively; R is the 3×3 matrix describing the rotation of the standard coordinate system relative to the global one; t is the three-dimensional vector of the offset of the global system's coordinate origin relative to that of the standard system.
Fig. 2. Transition from a global coordinate system to a standard coordinate system of the camera
Figure 2 shows a schematic of the coordinate transformation. Here α, β, γ are the angles formed by the OX'' axis with the OX, OY and OZ axes, respectively. Vector t = (tx, ty, tz) is the shift of the coordinate origin of the global system relative to the coordinate origin of the standard one. To obtain the internal camera settings, as well as to compensate for radial and tangential distortions, a calibration procedure of the video camera is run [13].
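The calibration step [13] is commonly reproduced with a chessboard target; a minimal sketch using OpenCV's standard calibration functions is given below. The board size and image path are assumptions, not details from the paper.

```python
import glob
import cv2
import numpy as np

# Assumed 9x6 inner-corner chessboard; object points in board coordinates (Z = 0)
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib/*.png"):   # assumed location of calibration frames
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix and radial/tangential distortion coefficients
# (assumes at least one successful corner detection above)
ret, A, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("intrinsic matrix:\n", A, "\ndistortion:", dist.ravel())
```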
The coordinate origin in the plane of the camera's image coincides with the main point (u0 = v0 = 0), while the measurement units of coordinates in the global system and in the camera's image plane are the same (w = h = 1). The motion of a rigid body can be decomposed into two components: the translational motion V and the rotational motion ω around an axis passing through the coordinate origin. The velocity of the point takes the form [14]:

$$V_{total} = -V - \omega \times P, \qquad (6)$$

where $V = (V_x, V_y, V_z)^T$ is the translational velocity and $\omega = (\omega_x, \omega_y, \omega_z)^T$ are the rotation parameters.

By taking the time derivative, we obtain the relation between the velocity $\dot{P}$ of the point in the camera frame and the velocity $\dot{p}$ of its image in the image plane:

$$\dot{p} = \frac{d}{dt}\left(\frac{fP}{Z}\right) = f\,\frac{Z V_{total} - V_z P}{Z^2}. \qquad (7)$$

For the x and y components, the motion field can be written as:

$$u = -\frac{V_x}{Z} f + x \frac{V_z}{Z} + \frac{xy}{f}\omega_x - \left(f + \frac{x^2}{f}\right)\omega_y + \omega_z y,$$

$$v = -\frac{V_y}{Z} f + y \frac{V_z}{Z} + \left(f + \frac{y^2}{f}\right)\omega_x - \frac{xy}{f}\omega_y - \omega_z x. \qquad (8)$$

These equations can be written in the form $u = u_t + u_r$ and $v = v_t + v_r$: the optical flow splits into a translational component $(u_t, v_t)$ and a rotational component $(u_r, v_r)$.
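The component form (8) maps directly to code. The sketch below evaluates the ideal motion field at an image point and subtracts the rotational part, as is done later when compensating camera rotation; the function and variable names are ours.

```python
def motion_field(x, y, f, Z, V, omega):
    """Ideal motion field (8) at image point (x, y): V = (Vx, Vy, Vz) is the
    translational velocity, omega = (wx, wy, wz) the angular velocity, Z the
    depth of the observed point."""
    Vx, Vy, Vz = V
    wx, wy, wz = omega
    # Translational components (u_t, v_t)
    u_t = (-Vx * f + x * Vz) / Z
    v_t = (-Vy * f + y * Vz) / Z
    # Rotational components (u_r, v_r), independent of depth
    u_r = (x * y / f) * wx - (f + x * x / f) * wy + wz * y
    v_r = (f + y * y / f) * wx - (x * y / f) * wy - wz * x
    return u_t + u_r, v_t + v_r


def compensate_rotation(u, v, x, y, f, omega):
    """Subtract the rotational component from a measured flow sample (u, v)."""
    wx, wy, wz = omega
    u_r = (x * y / f) * wx - (f + x * x / f) * wy + wz * y
    v_r = (f + y * y / f) * wx - (x * y / f) * wy - wz * x
    return u - u_r, v - v_r
```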
Method for estimating velocity parameters of translational motion
There are parametric and non-parametric methods for parameter estimation. The parametric identification methods are typically divided as follows: methods of estimation before deciding on the structure of a dynamic model (such as artificial neural networks), and methods of estimation after selecting the structure of a dynamic model (the least squares method, the method of auxiliary variables, the maximum likelihood or output error method, stochastic approximation methods, and filtering techniques such as the Kalman filter and the simulation error method) [15].
The measured optical flow is prone to noise, and it is desirable to develop a reliable method that takes this into account. Thus, it is proposed to use the method of least squares to determine the motion parameters (that is, the best fit method with respect to a certain norm). We shall consider the case when the camera executes translational motion only. To proceed, we assume that the image plane is a rectangle x ∈ [−w, w] and y ∈ [−h, h]; the same method applies if the image has a different shape. In addition, suppose that 1/Z is a bounded function and that the set of points where 1/Z is discontinuous has measure zero. This condition on 1/Z assures that all the necessary integrals exist.
Let us consider a few possible variants of initial conditions for the motion of a video camera or platform. We employ the least squares method to estimate the parameters of motion [16]. First, let the distance between the camera and the surface at each point of the image be known, and let us evaluate the parameters of the translational motion Vx, Vy. One should minimize the expression:
$$\min_{V_x,V_y} \iint \left[ \left(u - \frac{-V_x + xV_z}{Z}\right)^2 + \left(v - \frac{-V_y + yV_z}{Z}\right)^2 \right] dx\,dy. \qquad (9)$$
The distance from the camera to the surface at each point of the image is known. In this case, the best fit is determined relative to the norm $M_{L2}$, which is found from
$$\| f(x,y) \|^2 = \int_{-h}^{h} \int_{-w}^{w} [f(x,y)]^2\, dx\,dy. \qquad (10)$$

Let us introduce the notation:

$$\alpha = -V_x + xV_z; \qquad \beta = -V_y + yV_z. \qquad (11)$$

Note that the expected flow, assigned by $V_x$, $V_y$, takes the form:

$$u = \alpha/Z, \qquad v = \beta/Z. \qquad (12)$$
Next, we can rewrite expression (9) in the form:

$$\min_{V_x,V_y} \iint \left[ \left(u - \frac{\alpha}{Z}\right)^2 + \left(v - \frac{\beta}{Z}\right)^2 \right] dx\,dy. \qquad (13)$$

Now we shall pass over to the method of minimization. By differentiating the integrals with respect to $V_x$, $V_y$ and equating the results to zero, we obtain:

$$V_x = \frac{V_z \iint x\,dx\,dy - Z \iint u\,dx\,dy}{w \cdot h}, \qquad V_y = \frac{V_z \iint y\,dx\,dy - Z \iint v\,dx\,dy}{w \cdot h}. \qquad (14)$$

If the translational motion along the Z axis is absent, $V_z = 0$. In this case, expression (14) is simplified and we obtain the parameters of translational motion:

$$V_x = \frac{-Z \iint u\,dx\,dy}{w \cdot h}, \qquad V_y = \frac{-Z \iint v\,dx\,dy}{w \cdot h}.$$
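As a minimal sketch of this known-distance case, the discrete analogue of (14) reduces to sums of the measured flow over the pixel grid (integrals replaced by sums, with the normalization following the w·h of (14)); the array names are assumptions.

```python
import numpy as np

def translational_velocity_known_depth(u, v, Z, Vz=0.0):
    """Discrete analogue of (14): integrals over the image are replaced by
    sums over the pixel grid (u, v are 2-D arrays of rotation-compensated
    flow; Z is the known distance to the surface, assumed constant)."""
    H, W = u.shape
    x = np.arange(W) - W / 2.0
    y = np.arange(H) - H / 2.0
    X, Y = np.meshgrid(x, y)
    area = float(W * H)          # discrete counterpart of the w*h normalization
    Vx = (Vz * X.sum() - Z * u.sum()) / area
    Vy = (Vz * Y.sum() - Z * v.sum()) / area
    return Vx, Vy
```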
Let us consider the case when it is necessary to determine the motion parameters $V_x$, $V_y$, $V_z$ with the magnitude $Z$ unknown. The application of the least squares method to expression (9) does not produce a single solution, because the resulting equations are nonlinear in $V_x$, $V_y$, $V_z$. There is, however, a development of the least squares method that makes it possible to obtain a closed-form solution for the motion parameters [16]. Instead of minimizing (9), we shall minimize the expression

$$\min_{V_x,V_y,V_z} \iint \left[ \left(u - \frac{-V_x + xV_z}{Z}\right)^2 + \left(v - \frac{-V_y + yV_z}{Z}\right)^2 \right] (\alpha^2 + \beta^2)\, dx\,dy, \qquad (15)$$

obtained by multiplying the integrand of expression (9) by $\alpha^2 + \beta^2$. Next, we apply the same method of least squares as previously to expression (15). When the measured optical flow is not distorted by noise, both (15) and (9) vanish on substitution of the correct motion parameters; thus, minimizing (15) or (9) yields the same motion parameters. If the measured optical flow contains errors, then using expression (15) for the minimization gives the best fit not with respect to the norm $M_{L2}$, but with respect to another norm, $M_{L\alpha\beta}$:

$$\| f(x,y) \|^2_{\alpha\beta} = \int_{-h}^{h} \int_{-w}^{w} [f(x,y)]^2 (\alpha^2 + \beta^2)\, dx\,dy. \qquad (16)$$

In this case, we are dealing with a weighted minimization in which greater importance is attached to the points where the optical flow speed is larger. This is most appropriate when the measurement of high speeds is more accurate. Which norm yields the best results depends on the properties of the noise in the measured optical flow. The first norm is better suited to a situation when the measurement noise does not depend on the magnitude of the optical flow.

Consider the method of least squares in the case when the norm is $M_{L\alpha\beta}$. First, we determine $Z$ through differentiation of the integrand of expression (16) with respect to $Z$; setting the result equal to zero, we obtain

$$\left(u - \frac{\alpha}{Z}\right)\frac{\alpha}{Z^2} + \left(v - \frac{\beta}{Z}\right)\frac{\beta}{Z^2} = 0. \qquad (17)$$

It follows from (17):

$$Z = \frac{\alpha^2 + \beta^2}{u\alpha + v\beta}.$$

Therefore, it is required to minimize the expression

$$\min_{V_x,V_y,V_z} \iint (u\beta - v\alpha)^2\, dx\,dy. \qquad (18)$$

Denote the integral to be minimized by $g(V_x, V_y, V_z)$. Then

$$g(V_x, V_y, V_z) = aV_x^2 + bV_y^2 + cV_z^2 + 2dV_xV_y + 2eV_yV_z + 2fV_zV_x, \qquad (19)$$

where

$$a = \iint v^2\, dx\,dy, \quad b = \iint u^2\, dx\,dy, \quad c = \iint (xv - yu)^2\, dx\,dy,$$

$$d = -\iint uv\, dx\,dy, \quad e = \iint u(xv - yu)\, dx\,dy, \quad f = -\iint v(xv - yu)\, dx\,dy. \qquad (20)$$

Now $g(V_x, V_y, V_z)$ cannot be negative, and $g(V_x, V_y, V_z) = 0$ for $V_x = V_y = V_z = 0$. In order to determine the velocity of translational motion by the least squares method, it is necessary to solve the following homogeneous system with respect to $V$:

$$GV = 0, \qquad (21)$$

where $G$ is the matrix

$$G = \begin{pmatrix} a & d & f \\ d & b & e \\ f & e & c \end{pmatrix}. \qquad (22)$$
Consequently, (21) has a nonzero solution if and only if the determinant of G is equal to zero. Then the three equations (21) are linearly dependent and V can be determined up to a constant factor. Since the data contain noise, the function g(Vx, Vy, Vz) cannot be brought to zero for a nonzero translational velocity and, thus, V = (0, 0, 0) would be the only exact solution. Note that the function g satisfies

$$g(kV_x, kV_y, kV_z) = k^2 g(V_x, V_y, V_z),$$

where k is a constant. It is clear that g(Vx, Vy, Vz) reaches its minimum at Vx = Vy = Vz = 0.
What matters, therefore, is the direction of the vector V that minimizes g for a given length of V. Hence, a constraint is imposed so that vector V has unit length. If V is of unit length, the minimum of the function g is the smallest eigenvalue of the matrix G, and the value of V at which g reaches the minimum is the eigenvector corresponding to this eigenvalue. This follows from the fact that g is a quadratic form, which can be written as $g(V_x,V_y,V_z) = V^T G V$, where G is a positive semi-definite symmetric matrix, since a > 0, b > 0, c > 0, ab > d², bc > e² and ca > f². Therefore, all eigenvalues $\lambda_i$ are real and non-negative; they are derived as the roots of a polynomial of the third degree:
$$\lambda^3 - (a + b + c)\lambda^2 + (ab + bc + ca - d^2 - e^2 - f^2)\lambda + (ae^2 + bf^2 + cd^2 - abc - 2def) = 0. \qquad (23)$$
There is an explicit formula expressing the smallest positive root in terms of the real and imaginary parts of the roots of the quadratic resolvent of the cubic equation. In this case, it yields the smallest desired root, because none of the roots can be negative. Note that λ = 0 is an eigenvalue if and only if G is degenerate, that is, the free term of the polynomial is equal to zero. If the determinant of G is zero, we can find a translational velocity V that brings the function g to zero. As follows from a theorem of mathematical analysis, this can happen only if the optical flow is correct and error-free almost everywhere, that is, when the set of points at which correctness is violated has measure zero. This theorem asserts that if the integral of the square of a bounded continuous function is zero, then the function itself equals zero. Therefore, errors can occur only at the points where the optical flow is discontinuous, and these are exactly the points where the continuity of the function Z is disrupted.
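Since (23) is the characteristic polynomial of G, the smallest eigenvalue can also be checked numerically from its roots; a short sketch, assuming the coefficients a, ..., f of (20) are already computed:

```python
import numpy as np

def smallest_eigenvalue(a, b, c, d, e, f):
    """Smallest root of the characteristic polynomial (23) of matrix G."""
    coeffs = [1.0,
              -(a + b + c),
              a * b + b * c + c * a - d**2 - e**2 - f**2,
              a * e**2 + b * f**2 + c * d**2 - a * b * c - 2 * d * e * f]
    roots = np.roots(coeffs)
    # Eigenvalues of the symmetric matrix G are real and non-negative
    return float(np.min(roots.real))
```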
It is impossible for exactly two eigenvalues to be equal to zero, because this would mean that the coefficient of the first degree of λ in the polynomial is equal to zero while the coefficient of λ² is not. This would mean that ab = d², bc = e² and ca = f², where a, b and c are not all equal to zero. However, equality in the Cauchy-Schwarz inequalities requires that u and v be proportional to xv − yu. This can be true only if all six integrals (20), and hence all three eigenvalues, are equal to zero. This case is not of interest, since it could happen only at zero optical flow; the velocity in this case is also zero. When the least eigenvalue is known, it is possible to directly find the translational velocity that best matches the original data. To determine the eigenvector corresponding to the eigenvalue λ₁, it is necessary to solve the following homogeneous linear system of equations:
$$(a - \lambda_1)V_x + dV_y + fV_z = 0,$$

$$dV_x + (b - \lambda_1)V_y + eV_z = 0,$$

$$fV_x + eV_y + (c - \lambda_1)V_z = 0. \qquad (24)$$
Because λ₁ is an eigenvalue, the equations are linearly dependent. Assume that all eigenvalues are different, that is, the rank of the matrix (G − λ₁I) is equal to 2, where I is the identity matrix. Then we obtain:

$$V_x = (b - \lambda_1)(c - \lambda_1) - f(b - \lambda_1) - d(c - \lambda_1) + e(f + d - e),$$

$$V_y = (c - \lambda_1)(a - \lambda_1) - d(c - \lambda_1) - e(a - \lambda_1) + f(d + e - f), \qquad (25)$$

$$V_z = (a - \lambda_1)(b - \lambda_1) - e(a - \lambda_1) - f(b - \lambda_1) + d(e + f - d).$$

It should be noted that the value λ₁ should be small for good data, and the exact solution can be approximated simply by employing these equations at λ₁ = 0. In any case, the resulting velocity can now be normalized so that its magnitude is equal to unity. One difficulty remains, however: if V is a solution to the minimization problem, then −V is also a solution. Yet only one of these solutions will match positive values of Z, which is easy to verify by evaluating (25) at any point of the image. It is then possible to determine Vx, Vy and Vz numerically from the three equations (25). For this purpose, it is possible to discretize the expressions and obtain equations similar to (20) in which summation of the corresponding terms is applied instead of integration.
To simplify the calculations, it is proposed to use a motion field generated from estimates of the shifts not of individual points, but of individual blocks into which the image is preliminarily split.

Every frame of the video sequence of size M × N pixels is divided into a set of non-overlapping blocks $B_{ij}$ of size n × n pixels, where i, j are the coordinates of the block center:

$$B_{ij}(t) = \{ I(x_k, y_l, t) : x_k, y_l \in B_{ij} \}, \quad i - n/2 \le k \le i + n/2, \; j - n/2 \le l \le j + n/2, \qquad (26)$$

where I(x, y, t) is the illumination of the CCD matrix at image point (x, y) at time t.

The blocks are arranged sequentially from left to right and from top to bottom. The split is performed so that the blocks cover the entire frame, that is, their total area equals the frame area. The problem of determining the field of vectors reduces to the search for motion vectors $(u, v)_{ij}$ for each block.

Then at time t + Δt, we obtain a brightness function for block $B_{ij}$:

$$B_{ij}(t + \Delta t) = \{ I(x_k + u_{ij}, y_l + v_{ij}, t + \Delta t) \}. \qquad (27)$$
In order to determine the optical flow, article [10] applies a block matching method that uses an adaptively variable block size and an adaptive motion vector search strategy with weighting of the image block measurements, where each block is matched with a texture indicator. A standard block method for estimating optical flow does not use weightings of image blocks.
It is proposed to use a texture analysis for the estimation of motion parameters. We introduce the matrix $A_{ij}$, the data matrix of $\nabla B_{ij}(t + \Delta t)$. The matrix $A_{ij}^T A_{ij}$, further denoted $G_{ij}$, is the information matrix and the starting object for various types of multivariate analysis. A measure of the texturization of the image block and, consequently, of the accuracy of the computed motion vector is determined from the condition number of the matrix $G_{ij}$.
For determining the optical flow, the motion vector $(u, v)_{ij}$ is determined from the expression:

$$(u, v)_{ij} = \arg\min_{(u,v)} F(t, i, j, (u, v)_{ij}),$$

$$\mathrm{SAD}(t, i, j, u, v) = w_{ij} \sum_{p \in B_{ij}} \left| B(p, t) - B(p + (u, v), t + \Delta t) \right|,$$

$$w_{ij} = \begin{cases} 1, & \text{if } \mathrm{cond}(G_{ij}) < 100, \\ 0, & \text{if } \mathrm{cond}(G_{ij}) \ge 100, \end{cases} \qquad (28)$$

where $F(t, i, j, (u, v)_{ij})$ is the block matching function, a measure of the closeness of blocks from the current and previous frames. An example of such a function is SAD (Sum of Absolute Differences).
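A simplified sketch of the weighted block search (28) is shown below; it scores candidate shifts with SAD and zeroes the weight of weakly textured blocks via the condition number of the gradient information matrix. The search radius, the threshold default and the use of intensity gradients to form $A_{ij}$ follow our reading of the text and are assumptions.

```python
import numpy as np

def block_motion_vector(prev, curr, i, j, n=8, radius=4, cond_max=100.0):
    """SAD block matching with the texture weighting of (28).

    prev, curr : consecutive grayscale frames (2-D float arrays);
    (i, j)     : top-left corner of the n x n block in `prev` (a simplification;
                 the text indexes blocks by their centers).
    Returns ((u, v), w), where w = 0 marks a discarded, weakly textured block."""
    block = prev[i:i + n, j:j + n]
    # Information matrix G_ij built from the block's intensity gradients
    gy, gx = np.gradient(block)
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    G_ij = A.T @ A
    w = 1.0 if np.linalg.cond(G_ij) < cond_max else 0.0
    if w == 0.0:
        return (0, 0), 0.0
    best, best_sad = (0, 0), np.inf
    for du in range(-radius, radius + 1):
        for dv in range(-radius, radius + 1):
            ii, jj = i + du, j + dv
            if ii < 0 or jj < 0 or ii + n > curr.shape[0] or jj + n > curr.shape[1]:
                continue
            sad = np.abs(block - curr[ii:ii + n, jj:jj + n]).sum()
            if sad < best_sad:
                best, best_sad = (du, dv), sad
    return best, w
```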
The proposed method takes into account the texturiza-tion of each image block and discards uninformative blocks when assessing velocity of the translational motion of the entire image frame.
5. Results of modeling and estimation of the motion parameters of a video camera
In order to study the accuracy of the algorithm's operation, we developed software in the MATLAB environment. The motion was simulated over the image of a two-dimensional underlying surface with varying texture (data on the images are given in Table 1). To create the effect of a UAV flight, the coordinates of the underlying surface are kept unchanged, while the coordinates and orientation of the camera are changed. The changes in the position and orientation of the camera are assigned by analytical equations. When the flight simulation is enabled, the display shows the underlying surface from a particular point in space and at specific angles whose values depend on the current position and orientation of the camera.
A general view of the coordinate system simulated for the observation system is shown in Fig. 3. The coordinate origin is placed on the underlying surface at a selected point. The initial camera position is characterized by the coordinates (X, Y, Z) and the orientation angles (α, β, γ).
Fig. 3. Simulation of video camera motion
Initial simulation data: the average motion speed is 16 m/s, the camera motion height is 100 m, the camera angle is 90°, the focal length is 1 mm, the size of the CCD matrix is 256×256 pixels, and the frame processing rate is 30 frames per second. The translational motion velocity was estimated with compensation for the rotational motion; the angular velocities about the Y, X and Z axes were varied in the range [−10:10].
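For reference, the stated simulation parameters can be collected in a small configuration object; the sketch below also derives the per-frame ground displacement implied by the speed and frame rate (the structure is ours, the values are from the text).

```python
from dataclasses import dataclass

@dataclass
class SimConfig:
    """Initial simulation data listed above (values from the text)."""
    speed_mps: float = 16.0       # average camera speed, m/s
    height_m: float = 100.0      # camera height above the surface, m
    cam_angle_deg: float = 90.0  # camera looks straight down
    focal_mm: float = 1.0
    sensor_px: tuple = (256, 256)
    fps: int = 30

cfg = SimConfig()
# Ground displacement of the camera footprint between consecutive frames
step_m = cfg.speed_mps / cfg.fps
print(f"per-frame ground displacement: {step_m:.2f} m")
```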
During a flight, the brightness picture of the image changes. The resulting current image is split into blocks of size 8×8, and the optical flow estimation procedure is performed, together with estimation of the vector field of the current image frame. The parameters of the motion vectors (magnitude and direction) are recorded into the corresponding matrices. Since the problem being solved is measuring the velocity of translational motion, the rotational component of the optical flow is compensated in expression (8) using the known angular velocities for each image frame. Depending on the task being solved, an algorithm for the analysis of the motion vectors (motion field) is selected. The parameters of the translational velocity were evaluated by the norm $M_{L\alpha\beta}$ with the parameter Z unknown. For comparison, the translational motion velocity was estimated both using a texture analysis and by a standard method based on equidistant measurements. The total texturization of the entire image frame was determined from the estimate of the condition number of the matrix $G_{ij}$ for each image block:
$$K = \frac{1}{NM} \sum \frac{1}{\mathrm{cond}(G_{ij})} \times 100, \qquad (29)$$

where NM is the number of blocks in the frame.
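A sketch of this frame-level index (29), under the same assumption as above that $G_{ij}$ is built from block intensity gradients:

```python
import numpy as np

def frame_texturization(frame, n=8):
    """Total texturization index of a frame per (29): the mean reciprocal
    condition number of the block information matrices, times 100."""
    H, W = frame.shape
    inv_conds = []
    for i in range(0, H - n + 1, n):
        for j in range(0, W - n + 1, n):
            gy, gx = np.gradient(frame[i:i + n, j:j + n])
            A = np.stack([gx.ravel(), gy.ravel()], axis=1)
            inv_conds.append(1.0 / np.linalg.cond(A.T @ A))
    return float(np.mean(inv_conds)) * 100.0
```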
Simulation of the motion was performed on 8 high-resolution images of the underlying surface (4412×4779 pixels) with different textures.
Fig. 4. Motion trajectory of video camera
Based on the obtained results of the evaluation of translational velocity, it is possible to analyze the accuracy of the proposed method.
Fig. 5. Translational motion velocity of the camera: d1 — true translational velocity; d2 — translational velocity estimated by a standard block method; d3 — translational velocity estimated by weighted image blocks
For the texture analysis, paper [10] proposed a parameter describing the degree of image texturization through assessment of the information matrix. The texture analysis of the underlying surface, based on the condition number estimate, yielded 9.6348, which indicates high texturization of the entire image.
Fig. 6. Errors in the estimates of translational velocity for the standard method of optical flow
For the estimates performed on the basis of the standard block method, the mean estimated value of the translational velocity error was 0.154 m/s, and the RMS error in determining the translational velocity was 1.4389 m/s.
Fig. 7. Errors in the estimates of translational velocity for a texture method of optical flow

Calculation of the translational velocity employing the weighted image blocks yielded the following result: the mean estimated value of the translational velocity error was 0.0880 m/s, and the RMS error in determining the translational velocity was 0.6392 m/s. Research results for the Aerial photography texture are given in Table 1.

Fig. 8 shows the results of velocity estimation for the image of an underlying surface with a weak index of texturization. Simulation of the video camera motion and the study were carried out on the homogeneous texture of a water surface.

Fig. 8. Motion trajectory of the video camera along the surface

Fig. 9. Calculation of the vector field for the current and previous frames in a video sequence

Based on the obtained results of the translational velocity estimation, it is possible to assess the accuracy of the proposed method for a weakly textured underlying surface.

The texture analysis of the underlying surface image, based on the condition number, yielded 6.6203, indicating weak texturization of the image (Fig. 8).

Fig. 10. Translational motion velocity: data1 — true translational velocity; data2 — translational velocity estimated by a standard block method; data3 — translational velocity estimated by weighted image blocks

For the estimates on the basis of the standard method, the mean estimated value of the translational velocity measurement error was 2.5128 m/s, and the RMS error in determining the translational velocity was 1.1727 m/s.

Fig. 11. Errors in the estimates of translational velocity for a standard method of optical flow

Calculation of the translational velocity applying a texture analysis yielded the following result: the mean estimated value of the translational velocity measurement error was 0.0221 m/s, and the RMS error in determining the translational velocity was 0.7223 m/s.

Fig. 12. Errors in the estimates of translational velocity for a texture method of optical flow
Table 1 gives estimation results of the translational motion parameters for different images of the underlying surface.
Table 1
Estimation results of the translational motion parameters
Estimation parameters | Topographic map | Aerial photography | Aerial photography 2 | Field | Tarmac | Concrete | Grass | Water
Texture analysis | 2.7736 | 9.6348 | 4.7706 | 3.3554 | 2.5556 | 2.1907 | 3 | 6.6203
Mathematical expectation of the error in determining the translational velocity, m/s | 0.33 | 1.38 | 1.1 | 0.29 | 0.18 | 0.2 | 0.32 | 2.51
RMS error in determining the translational velocity, m/s | 0.6 | 1.44 | 1.57 | 0.57 | 0.62 | 0.7 | 0.76 | 1.17
Mathematical expectation of the error in determining the translational velocity using the texture method, m/s | 0.27 | 0.09 | 0.03 | 0.29 | 0.16 | 0.2 | 0.26 | 0.02
RMS error in determining the translational velocity using the texture method, m/s | 0.47 | 0.64 | 0.41 | 0.5 | 0.61 | 0.7 | 0.39 | 0.72
6. Discussion of estimation results of the translational motion parameters
The research findings confirm the applicability of the proposed techniques. Models for estimating the motion field from optical flow are suitable for using optical flow for navigation purposes. Thus, it can be argued that optical navigation is one of the most versatile tools for determining motion; this is why special attention is currently being paid to this technology, and it is undergoing rapid development. The development of digital cameras and the increasing computing power of various devices also contribute to this. Motion information such as rotation speed, translational velocity and terrain data can be estimated separately by integrating optical flow with inertial data and height information. The present paper explores a variant of using an optical system in combination with traditional navigation sensors. Data on the orientation angles, required for damping the motion of a robotized platform, can also be obtained from an analysis of video sequences received on board, using the methods of photogrammetry and computer vision [17]. The studies have shown that the errors of the estimated parameters are comparable to the errors of satellite navigation systems [18]. The proposed techniques might be applied not only under conditions of a missing GNSS signal, but also for adjusting the readings of satellite navigation systems.
Using a texture analysis when estimating a motion field improves performance by using a reduced number of vectors, and proves to be more accurate in comparison with traditional block-based methods. The application of fast algorithms for computing optical flow makes it possible to process measurements in real time, while in post-processing it significantly increases the efficiency of data analysis. The devised techniques are adapted to the complicated observation conditions of certain practical tasks and are applicable in those cases when existing standard algorithms fail to produce reliable results. The developed software system for the visual implementation of research into the algorithms could be used to test algorithms under different modes, processing sequences of multidimensional data, including algorithms running on the graphics processor.
7. Conclusions
1. We devised a method for estimating motion parameters using optical flow data from the video camera of a vehicle or platform, which takes into consideration noise in the measured data. The proposed method also takes into account the texture features of each image block and disregards uninformative blocks when assessing the velocity parameters of the translational motion of the entire image frame. The method is based on determining the optical flow with weighting of the measurements for image blocks.
2. Motion simulation and accuracy analysis of the proposed method were performed using the MATLAB software package. Changes in the position and orientation of the camera were assigned by analytical equations. Based on simulation data for test images and verification on a set of actual images, the efficiency of the optical flow estimation method using a texture analysis was shown. Employing a motion estimation method with weighted measurements of image blocks yields a more accurate result in comparison with standard block methods for determining optical flow.
3. The research findings confirm that, when the texture method is used, weak texturization does not introduce additional error in determining the velocity parameters of the translational motion. The study was carried out for different kinds of underlying surface.
References
1. Casbeer, D. W. Forest fire monitoring with multiple small UAVs [Text] / D. W. Casbeer, S. M. Li, R. W. Beard, T. W. McLain, R. K. Mehra // Proceedings of the 2005 American Control Conference, 2005. - P. 3530-3535. doi: 10.1109/acc.2005.1470520
2. Chao, H. Remote Sensing and Actuation Using Unmanned Vehicles UAVs [Text] / H. Chao, Y. Q. Chen. - Hoboken, New Jersey: Wiley-IEEE Press, 2012. - 232 p.
3. Franceschini, N. A Bio-Inspired Flying Robot Sheds Light on Insect Piloting Abilities [Text] / N. Franceschini, F. Ruffier, J. Serres // Current Biology. - 2007. - Vol. 17, Issue 4. - P. 329-335. doi: 10.1016/j.cub.2006.12.032
4. Garratt, M. A. Vision-based terrain following for an unmanned rotorcraft [Text] / M. A. Garratt, J. S. Chahl // Journal of Field Robotics. - 2008. - Vol. 25, Issue 4-5. - P. 284-301. doi: 10.1002/rob.20239
5. Beyeler, A. Vision-based control of near-obstacle flight [Text] / A. Beyeler, J.-C. Zufferey, D. Floreano // Autonomous robots. -2009. - Vol. 27, Issue 3. - P. 201-219. doi: 10.1007/s10514-009-9139-6
6. Herisse, B. Landing a VTOL unmanned aerial vehicle on a moving platform using optical flow [Text] / B. Herisse, T. Hamel, R. Mahony // IEEE Transactions on Robotics. - 2012. - Vol. 28, Issue 1. - P. 77-89. doi: 10.1109/tro.2011.2163435
7. Brox, T. Large Displacement Optical Flow: Descriptor Matching in Variational Motion Estimation [Text] / T. Brox, J. Malik // IEEE Transactions on Pattern Analysis and Machine Intelligence. - 2011. - Vol. 33, Issue 3. - P. 500-513. doi: 10.1109/tpami.2010.143
8. Butler, D. A naturalistic open source movie for optical flow evaluation. [Text] / D. Butler, J. Wulff, G. Stanley, M. Black // Lecture Notes in Computer Science. - 2012. - Vol. 7577. - P. 611-625. doi: 10.1007/978-3-642-33783-3_44
9. Sanada, A. Self-localization of an omnidirectional mobile robot based on an optical flow sensor [Text] / A. Sanada, K. Ishii, T. Yagi // Journal of Bionic Engineering. - 2010. - Vol. 7. - P. S172-S176. doi: 10.1016/s1672-6529(09)60232-8
10. Molchanov, A. A. Metod ocenki dvizheniya opticheskogo potoka s vzveshivaniem izmerenii blokov izobrazheniya [Text] / A. A. Molchanov, V. I. Kortunov // Sistemy obrobky informatsii. - 2015. - Vol. 3, Issue 128. - P. 26-31.
11. Hartley, R. Multiple View Geometry in Computer Vision. 2nd edition [Text] / R. Hartley, A. Zisserman. - Cambridge, U.K.: Cambridge Univ. Press, 2004. - 655 p.
12. Zhang, Z. A Flexible New Technique for Camera Calibration [Text] / Z. Zhang // IEEE Transactions on Pattern Analysis and Machine Intelligence. - 2000. - Vol. 22, Issue 11. - P. 1330-1334. doi: 10.1109/34.888718
13. Heikkila, J. A Four-step Camera Calibration Procedure with Implicit Image Correction [Text] / J. Heikkila, O. Silven // Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1997. - P. 1106-1112. doi: 10.1109/cvpr.1997.609468
14. Von Mises, R. Theory of Flight. 1st edition [Text] / R. Von Mises. - Dover Publications, 1959. - 672 p.
15. Farhadi, R. M. UAV motion model and estimation of its uncertainties with flight test data [Text] / R. M. Farhadi, V. I. Kortunov, A. Mohammadi // 22nd Saint Petersburg International Conference on Integrated Navigation Systems, 2015. - P. 131-133.
16. Horn, B. Robot Vision [Text] / B. Horn. - MIT Press, 1986. - 509 p.
17. Kortunov, V. I. Video camera motion detection according to the optical flow [Text] / V. I. Kortunov, A. O. Molchanov // 22nd St. Petersburg International Conference on Integrated Navigation Systems, 2015. - P. 81-82.
18. Kaplan, E. D. Understanding GPS: Principles and Applications. 2nd edition [Text] / E. D. Kaplan, C. Hegarty. - Artech House, 2006. - 726 p.