
Chelyabinsk Physical and Mathematical Journal. 2022. Vol. 7, iss. 1. P. 97-112.

DOI: 10.47475/2500-0101-2022-17107

ALGORITHMS TO SOLVE ABSOLUTE ORIENTATION PROBLEM FOR GL(3), O(3) AND SO(3) GROUPS

A. Makovetskii1a, S. Voronin1, A. Voronin1, T. Makovetskaya2

1 Chelyabinsk State University, Chelyabinsk, Russia

2South Ural State University (National Research University), Chelyabinsk, Russia a artemmac@csu.ru

The most popular algorithm for aligning 3D point data is the Iterative Closest Point (ICP) algorithm. The point-to-point variational problem for orthogonal transformations is mathematically equivalent to the absolute orientation problem in photogrammetry. In this paper a survey of the known closed-form methods for solving the point-to-point ICP variational problem is presented. In addition, a new extension of the Horn algorithm from the O(3) group to the SO(3) group is obtained. Computer simulation illustrates the difference in performance of the considered methods.

Keywords: absolute orientation problem, Iterative Closest Points (ICP), point-to-point, closed form solution, exact solution, orthogonal transformation, affine transformation.

Introduction

Creating a 3D spatial environment for a robot or sensor is based on algorithms for registering point clouds. Aligning two point clouds means finding either an orthogonal or affine transformation in R3 that maximizes the consistent overlap between the two clouds. The iterative closest points algorithm (ICP) is the most common method for aligning point clouds based exclusively on geometric characteristics. The algorithm is widely used to register data obtained with 3D scanners. The ICP algorithm, originally proposed by Besl and McKay [1], consists of the following iteratively applied basic steps:

1) a search for a correspondence between the points of two clouds;

2) a minimization of the error metric (the variational problem of the ICP algorithm).

The two steps of the ICP algorithm alternate: the estimation of the geometric transformation based on the fixed correspondence (step 2) and the updating of the correspondences to their closest matches (step 1). The key point of the ICP algorithm [2] is the search for the orthogonal or affine transformation that is best with respect to a metric for combining two point clouds with a given correspondence between points.

The variational problem of the ICP algorithm contains the following three basic components: the functional to be minimized, the class of geometric transformations, and the functional minimization method. The most common type of functional is point-to-point [1].

The point-to-point variational problem for orthogonal transformations is mathematically equivalent to the absolute orientation problem in photogrammetry [3]. Geometric transformations can belong to the groups GL(3), O(3), SO(3) (affine transformations, orthogonal transformations, and orthogonal transformations with positive determinant, respectively) extended by translations. A minimization method can be either iterative or closed-form. A closed-form solution can be an exact solution to a variational problem or its approximation. In this paper we consider closed-form solutions of the point-to-point variational problem. Note that the solution of a variational problem in the class of orthogonal transformations is mathematically more complicated than in the class of affine transformations, since in the former case it is necessary to deal with the manifold O(3) (or SO(3)) in R9.

The work was supported by the RFBR (grant 20-47-740007).

Many different variants of the variational problem have been proposed. In [4] a closed form solution is described for the point-to-point affine problem.

Closed-form solutions to the point-to-point problem in the class of orthogonal transformations were obtained by Horn [5; 6]. In [5], the solution is based on quaternions and belongs to SO(3). In [6], the transformation matrix belongs to O(3) and may have a negative determinant. In this case, the ICP algorithm does not converge to the true transformation. This problem was solved by modifying Horn's algorithm for the class SO(3) in [7].

If the source and target clouds are located far from each other, then a common algorithm for searching for a correspondence between clouds matches all points of the source cloud with a small subset of the target cloud. In this case the affine variational problem finds a transformation that strongly distorts the source cloud. A bad correspondence also significantly reduces the probability of obtaining a good answer for the orthogonal variants of the variational problem. Thus, the probability of obtaining an acceptable transformation as a result of the ICP algorithm with an initially poor correspondence is the comparative criterion for the different types of variational problems.

In this paper a survey of methods to solve the point-to-point variation problem (or the absolute orientation problem) for different cases is proposed. We consider the following types of the point-to-point variational problem:

1) a closed form solution of the affine point-to-point [4];

2) an approximation of the exact orthogonal solution by a projection of the affine solution onto O(3) [6];

3) an approximation of the exact orthogonal solution by SVD of the affine solution;

4) a closed form exact solution (O(3) case) [6];

5) a closed form exact solution (O(3) case, SVD) [7];

6) a closed form exact solution (SO(3) case);

7) an approximation of the exact orthogonal solution by a projection of the affine solution onto SO(3) [7].

Note that all the considered algorithms are known, except for the algorithm described in Section 5.1.

Note especially that the method described in Section 3.3 is used in numerous papers in the point cloud registration field, and it returns a solution belonging to the O(3) (not SO(3)) group.

The paper is organized as follows. In Section 1, we formulate the point-to-point variational problem. In Section 2, the problem is reduced to a form without the translation vector. In Section 3, we consider algorithms for the O(3) group; in Section 4, for the GL(3) group; in Section 5, for the SO(3) group. In Section 6, the list of the considered variational problems and the methods for their solution is given. In Section 7, computer simulation results are presented and discussed.

1. Absolute orientation problem

Let P = {p1,..., ps} be a source point cloud, and Q = {q1,..., qs} be a target point cloud in R3. Suppose that the relation between points of P and Q is given in such a manner that for each point pi there is a corresponding point qi. Note that a point qi from Q can correspond to several points from P. Denote by J the functional

J(R, T) = Σ_{i=1}^{s} ||Rpi + T − qi||²,   (1)

where R is an orthogonal matrix belonging to the O(3) or SO(3) group, pi = (pi1 pi2 pi3)^t, qi = (qi1 qi2 qi3)^t. Consider the following constrained variational problem:

(R*, T*) = argmin_{R,T} J(R, T),   (2)

subject to R^tR = I (O(3) case), or subject to R^tR = I and det(R) = 1 (SO(3) case). The variational problem (2) is called the absolute orientation problem, or the point-to-point ICP variational problem.

2. Translation vector exclusion

Let us compute the gradient of J(R, T) with respect to T. Let h be the increment with respect to T. Note that

J(R, T) = Σ_{i=1}^{s} ||Rpi + T − qi||² = Σ_{i=1}^{s} (Rpi + T − qi, Rpi + T − qi) =
= Σ_{i=1}^{s} (Rpi − qi, Rpi − qi) + 2(Rpi − qi, T) + (T, T),

J(R, T + h) = Σ_{i=1}^{s} ||Rpi + (T + h) − qi||² = Σ_{i=1}^{s} (Rpi + (T + h) − qi, Rpi + (T + h) − qi) =
= Σ_{i=1}^{s} (Rpi − qi, Rpi − qi) + 2(Rpi − qi, T) + 2(Rpi − qi, h) + (T, T) + 2(T, h) + (h, h).

The residual of J(R, T + h) and J(R, T) is

J(R, T + h) − J(R, T) = Σ_{i=1}^{s} 2(Rpi − qi, h) + 2(T, h) + (h, h) =
= Σ_{i=1}^{s} (2(T + (Rpi − qi)), h) + (h, h) = (2 Σ_{i=1}^{s} (T + (Rpi − qi)), h) + o(h).   (3)

It follows from (3) that the gradient ∇J(T) is

∇J(T) = 2 Σ_{i=1}^{s} (T + (Rpi − qi)).

Let us compute the extreme value of the variable T: ∇J(T) = 0,

T* = (1/s) Σ_{i=1}^{s} (qi − Rpi).   (4)
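Formula (4) says that for a fixed rotation the optimal translation is the difference of the two centroids after rotating the source cloud. A minimal numpy sketch (the function name is ours, not from the paper):

```python
import numpy as np

def optimal_translation(R, P, Q):
    """Formula (4): T* = (1/s) * sum(q_i - R p_i), i.e. the centroid of Q
    minus the rotated centroid of P.  P and Q are s x 3 arrays of points."""
    return Q.mean(axis=0) - R @ P.mean(axis=0)
```

For R = I and Q = P + t this returns the shift t exactly.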

We substitute the expression of T* through R into (1):

J(R, T*) = Σ_{i=1}^{s} ||Rpi + T* − qi||² = Σ_{i=1}^{s} ||Rpi + (1/s) Σ_{j=1}^{s} (qj − Rpj) − qi||² =
= Σ_{i=1}^{s} ||R(pi − (1/s) Σ_{j=1}^{s} pj) − (qi − (1/s) Σ_{j=1}^{s} qj)||².

Let p'i and q'i be

p'i = pi − (1/s) Σ_{j=1}^{s} pj,   (5)

q'i = qi − (1/s) Σ_{j=1}^{s} qj,   (6)

where i = 1, 2,..., s. The functional (1) with respect to (5) and (6) takes the form

J(R) = Σ_{i=1}^{s} ||Rp'i − q'i||².   (7)

The variational problem (2) for the point clouds P' = {p'1,..., p's} and Q' = {q'1,..., q's} is reduced to

R* = argmin_R J(R),   (8)

subject to R^tR = I (O(3) case), or subject to R^tR = I and det(R) = 1 (SO(3) case). Denote by P' and Q' the following matrices:

P' = ( p'11 ... p's1
       p'12 ... p's2
       p'13 ... p's3 ),   Q' = ( q'11 ... q's1
                                 q'12 ... q's2
                                 q'13 ... q's3 ),   (9)

i.e. the i-th columns of P' and Q' are the centered points p'i and q'i. Note that the functional (7) can be rewritten as

J(R) = ||RP' − Q'||².   (10)
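The reduction above can be sketched in numpy: center each cloud at its centroid and stack the centered points as columns of 3 × s matrices. Function names are illustrative, not from the paper:

```python
import numpy as np

def centered_matrices(P, Q):
    """Formulas (5), (6), (9): subtract each cloud's centroid and stack
    the centered points as columns of 3 x s matrices P', Q'.
    P and Q are s x 3 arrays with corresponding rows."""
    Pc = (P - P.mean(axis=0)).T
    Qc = (Q - Q.mean(axis=0)).T
    return Pc, Qc
```

The matrix M = Q'(P')^t used below is then simply `Qc @ Pc.T`.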

3. Algorithms for O(3) group

3.1. Horn algorithm for matrices

Let us consider variational problem (8) with the functional in form (10). Note that we denote by (A, B) (where A and B are matrices of the same size) the matrix dot product, i.e. the sum of the pointwise products of the matrix elements.

Σ_{i=1}^{s} ||Rp'i − q'i||² = ||RP' − Q'||² = (RP' − Q', RP' − Q') =
= (RP', RP') − 2(RP', Q') + (Q', Q') = (R^tRP', P') − 2(RP', Q') + (Q', Q') =
= (P', P') − 2(RP', Q') + (Q', Q') = (P', P') − 2(R, Q'(P')^t) + (Q', Q').

Let us denote by M the matrix

M = Q'(P')^t.   (11)

Note that M^t is the cross-covariance matrix of the point clouds P' and Q'. Since (P', P') and (Q', Q') are constants with respect to the variational problem, problem (8) is reduced to

R* = argmin_R (−(R, M)) = argmax_R (R, M).   (12)

Let us consider the matrix M^tM. This matrix is symmetric and positive semi-definite. Suppose that rank(M) = 3 (the matrix M is a sum of rank-one matrices). Since rank(M^tM) = rank(M), the rank of M^tM is equal to three. Thus, the three eigenvalues of the matrix M^tM are strictly greater than zero. Let us write the eigendecomposition of the matrix M^tM:

M^tM = KΛK^t,   (13)

where K is an orthogonal matrix consisting of the eigenvectors, λ1, λ2, λ3 are the eigenvalues, and Λ = diag(λ1, λ2, λ3). Suppose that λ1 ≥ λ2 ≥ λ3.

Proposition 1. The matrix S = (M^tM)^{1/2} can be expressed as

S = (M^tM)^{1/2} = K√Λ K^t,   (14)

where √Λ = diag(√λ1, √λ2, √λ3). The matrix S = (M^tM)^{1/2} is symmetric.

Proof. We have

(K√Λ K^t)(K√Λ K^t) = K√Λ√Λ K^t = KΛK^t = M^tM,
S^t = (K√Λ K^t)^t = K(√Λ)^t K^t = K√Λ K^t = S.

Proposition 2. The matrix U = M(M^tM)^{-1/2} can be expressed as

U = M(M^tM)^{-1/2} = MK(√Λ)^{-1}K^t,   (15)

where (√Λ)^{-1} = diag(1/√λ1, 1/√λ2, 1/√λ3). The matrix U = M(M^tM)^{-1/2} is orthogonal.

Proof. We have the equalities

((M^tM)^{-1/2}(M^tM)^{-1/2})^{-1} = ((K(√Λ)^{-1}K^t)(K(√Λ)^{-1}K^t))^{-1} = (K(√Λ)^{-1}(√Λ)^{-1}K^t)^{-1} = (KΛ^{-1}K^t)^{-1} = KΛK^t = M^tM,

U^tU = (MK(√Λ)^{-1}K^t)^t MK(√Λ)^{-1}K^t = (K(√Λ)^{-1}K^t M^t) MK(√Λ)^{-1}K^t =
= K(√Λ)^{-1}K^t (M^tM) K(√Λ)^{-1}K^t = K(√Λ)^{-1}K^t (KΛK^t) K(√Λ)^{-1}K^t = K(√Λ)^{-1}Λ(√Λ)^{-1}K^t = KK^t = I.

Proposition 3. The equality

M = US (16)

is valid.

Proof. It is obvious that US = M(M^tM)^{-1/2}(M^tM)^{1/2} = M.

Substitute (16) into the functional of variational problem (12) and obtain

R* = argmax_R (R, M) = argmax_R (R, US) = argmax_R (U^tR, S) =
= argmax_R (U^tR, K√Λ K^t) = argmax_R (K^tU^tRK, √Λ),

subject to R^tR = I.

The matrix K^tU^tRK is orthogonal. Let us denote the diagonal elements of this matrix by r1, r2, r3. In this case the dot product can be written as

(K^tU^tRK, √Λ) = r1√λ1 + r2√λ2 + r3√λ3.

The maximum value of the dot product of an orthogonal matrix and the matrix √Λ is equal to √λ1 + √λ2 + √λ3, i.e. r1 = 1, r2 = 1, r3 = 1 and K^tU^tRK = I.

Therefore, we have that

R = U = M(M^tM)^{-1/2}.   (17)
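Formula (17) can be evaluated directly through the eigendecomposition (13) of M^tM. A numpy sketch (our naming; assumes rank(M) = 3 so that all eigenvalues are positive):

```python
import numpy as np

def horn_o3(M):
    """Formula (17): R = M (M^t M)^{-1/2}, the exact O(3) solution.
    (M^t M)^{-1/2} is built from the eigendecomposition M^t M = K Lambda K^t."""
    lam, K = np.linalg.eigh(M.T @ M)   # eigenvalues are > 0 when rank(M) = 3
    inv_sqrt = K @ np.diag(1.0 / np.sqrt(lam)) @ K.T
    return M @ inv_sqrt
```

The result is orthogonal, and by Remark 1 below its determinant has the sign of det(M).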

3.2. Nearest orthogonal matrix (O(3) case)

Let us denote by M a 3 × 3 matrix with rank(M) = 3. We call the matrix R* such that

R* = argmin_R ||R − M||²,   (18)

subject to R^tR = I, the nearest orthogonal matrix for the matrix M. We rewrite the functional of variational problem (18) as

||R − M||² = (R − M, R − M) = (R, R) − 2(R, M) + (M, M) =
= (R^tR, I) − 2(R, M) + (M, M) = (I, I) − 2(R, M) + (M, M).   (19)

Since the terms (I, I) and (M, M) are constant with respect to variational problem (18), the variational problem takes the form

R* = argmax_R (R, M),   (20)

subject to R^tR = I.

Variational problems (20) and (12) coincide. Therefore, the nearest orthogonal matrix R for the matrix M is

R = M(M^tM)^{-1/2}.   (21)

3.3. Solution of the variational problem by Singular Value Decomposition

Let M be a 3 × 3 matrix with rank(M) = 3. Let us apply the Singular Value Decomposition (SVD) to the matrix M: M = uDv^t, where u and v^t are orthogonal matrices, D = diag(√λ1, √λ2, √λ3), and λ1, λ2, λ3 > 0 are the eigenvalues of the matrix M^tM.

Formula (21) describes the solution of variational problem (12). Substitute the SVD of the matrix M into (21) and get

R = M(M^tM)^{-1/2} = (uDv^t)((uDv^t)^t(uDv^t))^{-1/2} = (uDv^t)(vDu^tuDv^t)^{-1/2} = (uDv^t)(vD²v^t)^{-1/2}.

Since (vD^{-1}v^t)(vD^{-1}v^t) = vD^{-2}v^t = (vD²v^t)^{-1}, we have that

(uDv^t)(vD²v^t)^{-1/2} = (uDv^t)(vD^{-1}v^t) = uv^t,

and

R = M(M^tM)^{-1/2} = uv^t.   (22)

Remark 1. The sign of the determinant of the matrix R in (22) is defined by the sign of the determinant of the matrix M:

det(R) = det(M(M^tM)^{-1/2}) = det(MK(√Λ)^{-1}K^t) = det(M) det(K) det((√Λ)^{-1}) det(K^t) = det(uv^t),

where the matrices K and (√Λ)^{-1} are defined in (13) and (15); note also that det(K) det(K^t) = det(K)² = 1 > 0 and det((√Λ)^{-1}) > 0.

If the condition det(M) = det(Q'(P')^t) < 0 holds (we use here (11)), then we have that det(R) = −1.

Remark 2. If on an iteration of the ICP algorithm we obtain a geometric transformation with det(R) = −1, then ICP practically cannot converge to the right solution, because in this situation we have clouds RP and Q that cannot be aligned by rotations and translations.

4. Algorithms for GL(3) group

4.1. Solution for GL(3)

Here we consider variational problem (8) with the functional in form (10). We interpret the matrix R here as the matrix of an affine transformation and denote it by Ra. Rewrite the considered functional in the following way:

J(Ra) = ||RaP' − Q'||² = (RaP' − Q', RaP' − Q') = (RaP', RaP') − 2(RaP', Q') + (Q', Q').

Let us denote by h the increment with respect to Ra. The increment of the functional takes the form

J (Ra + h) = ((Ra + h)P' - Q', (Ra + h)P' - Q') = = ((Ra + h)P', (Ra + h)P') - 2((Ra + h)P', Q') + (Q', Q') = = (RaP', RaP') + 2(RaP', hP') + (hP', hP') - 2(RaP', Q') - 2(hP', Q') + (Q', Q').

We consider the difference

J (Ra + h) - J (Ra) = 2 (RaP', hP') + (hP', hP') - 2(hP', Q') = = 2(RaP' - Q', hP') + (hP', hP') = 2((RaP' - Q')(P')t, h) + (hP', hP').

Thus, the gradient ∇J(Ra) is ∇J(Ra) = 2(RaP' − Q')(P')^t, and the equation ∇J(Ra) = 0 takes the form (RaP' − Q')(P')^t = 0, RaP'(P')^t − Q'(P')^t = 0, and

Ra = (Q'(P')^t)(P'(P')^t)^{-1}.   (23)
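Formula (23) is a single linear solve once the centered matrices of (9) are formed. A sketch (illustrative names; P' must have rank 3 so that P'(P')^t is invertible):

```python
import numpy as np

def affine_solution(Pc, Qc):
    """Formula (23): R_a = (Q'(P')^t)(P'(P')^t)^{-1}, where Pc and Qc
    are the 3 x s centered clouds of (9)."""
    return (Qc @ Pc.T) @ np.linalg.inv(Pc @ Pc.T)
```

When the clouds are exactly related by an invertible linear map A, the map is recovered exactly.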

4.2. Projection of affine solution to O(3)

We can obtain an approximate orthogonal solution R = Ra(Ra^tRa)^{-1/2} of variational problem (12) by applying the affine solution (23). The matrix R is the projection of Ra onto O(3) by formula (21).

5. Algorithms for SO(3) group

It follows from Remark 2 that eliminating the possibility of obtaining an orthogonal matrix with a negative determinant increases the convergence of ICP to the true geometric transformation. Therefore, algorithms that solve variational problem (12) in the SO(3) class have better performance than algorithms for O(3).

5.1. Modified Horn algorithm for SO(3) case

Consider the following variational problem:

R* = argmax_R (R, M),   (24)

subject to R^tR = I and det(R) = 1. Denote by sgn det the sign of a matrix determinant. Note that sgn det(R) = sgn det(U) = sgn det(M(M^tM)^{-1/2}) = sgn det(M). If sgn det(M) = 1, then the solution R = U belongs to SO(3).

Suppose that sgn det(M) = −1. Since M = US and S = (M^tM)^{1/2} (formulas (16) and (14)), we have

(R, M) = (R, US) = (U^tR, S) = (U^tR, (M^tM)^{1/2}) = (U^tR, K√Λ K^t) = (K^tU^tRK, √Λ).

The matrix K^tU^tRK is orthogonal and sgn det(K^tU^tRK) = sgn det(U) = sgn det(M) = −1. Note that any element Ro of the group O(3) can be represented as Ro = RsE, where Rs is an element of SO(3) and E = diag(±1, ±1, ±1). In particular, we have that K^tU^tRK = RsEi, where i = 1, 2, 3, 4 and E1 = diag(1, 1, −1), E2 = diag(−1, −1, −1), E3 = diag(−1, 1, 1), E4 = diag(1, −1, 1).


Note that any rotation in R3 can be represented by a value α of the rotation angle and a rotation axis. The rotation axis contains the origin of the coordinate system and is defined by a direction vector v = (x, y, z). We suppose that ||v|| = 1. The matrix Rs can be written as

Rs = ( cos α + x²(1 − cos α)    xy(1 − cos α) − z sin α    xz(1 − cos α) + y sin α
       xy(1 − cos α) + z sin α    cos α + y²(1 − cos α)    yz(1 − cos α) − x sin α
       xz(1 − cos α) − y sin α    yz(1 − cos α) + x sin α    cos α + z²(1 − cos α) ).

Proposition 4. For any matrix Rs from SO(3) the following condition holds (recall that λ1 ≥ λ2 ≥ λ3 > 0):

(RsE1, √Λ) ≤ √λ1 + √λ2 − √λ3.   (25)

Proof. Let us rewrite (25) as √λ1 + √λ2 − √λ3 − (RsE1, √Λ) ≥ 0. Since

(RsE1, √Λ) = (cos α + x²(1 − cos α))√λ1 + (cos α + y²(1 − cos α))√λ2 − (cos α + z²(1 − cos α))√λ3 =
= cos α (√λ1 + √λ2 − √λ3) + (1 − cos α)(√λ1 x² + √λ2 y² − √λ3 z²),

we have that

√λ1 + √λ2 − √λ3 − (cos α (√λ1 + √λ2 − √λ3) + (1 − cos α)(√λ1 x² + √λ2 y² − √λ3 z²)) =
= (1 − cos α)(√λ1 + √λ2 − √λ3) − (1 − cos α)(√λ1 x² + √λ2 y² − √λ3 z²) =
= (1 − cos α)(√λ1 (1 − x²) + √λ2 (1 − y²) + √λ3 (z² − 1)).

Note that (1 − cos α) ≥ 0, (1 − x²) ≥ 0, (1 − y²) ≥ 0 and (z² − 1) ≤ 0, since x² + y² + z² = 1, and also √λ1 ≥ √λ2 ≥ √λ3. It follows that

√λ1 (1 − x²) + √λ2 (1 − y²) + √λ3 (z² − 1) ≥ √λ2 (1 − x²) + √λ2 (1 − y²) + √λ3 (z² − 1) =
= √λ2 (2 − (x² + y²)) + √λ3 (z² − 1) = √λ2 (1 + z²) − √λ3 (1 − z²) ≥ 0.

Consider the inequalities (RsEi, √Λ) ≤ √λ1 + √λ2 − √λ3, where i = 2, 3, 4. Denote by Eaux2, Eaux3 and Eaux4 the following matrices:

Eaux2 = diag(−1, −1, 1),  Eaux3 = diag(−1, 1, −1),  Eaux4 = diag(1, −1, −1).

Note that sgn det(Eaux2) = sgn det(Eaux3) = sgn det(Eaux4) = 1 and

Eaux2E1 = E2,  Eaux3E1 = E3,  Eaux4E1 = E4.

Note that it is possible to represent the expression (RsEi, √Λ) in the form (RsEi, √Λ) = ((RsEauxi)E1, √Λ), i = 2, 3, 4.

The matrix RsEauxi is an element of SO(3). It follows that the following conditions hold: (RsEi, √Λ) ≤ (E1, √Λ), where i = 1, 2, 3, 4. In such a way, the dot product takes its maximum value when Ro = E1, and

K^tU^tRK = E1.   (26)

Let us express R from equation (26): R = UKE1K^t. Note that if sgn det(U) = −1, then sgn det(R) = 1. So, we obtain that R = UKE1K^t. The solution of variational problem (12) for SO(3) is

R = M(M^tM)^{-1/2},  if sgn det(M) = 1,
R = M(M^tM)^{-1/2} K diag(1, 1, −1) K^t,  if sgn det(M) = −1,   (27)

where K is the matrix consisting of the eigenvectors of the matrix M^tM, with the eigenvalues of M^tM ordered in descending order.
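Formula (27) can be sketched in numpy: compute Horn's O(3) answer and, when det(M) < 0, apply the correction K diag(1, 1, −1) K^t, where K holds the eigenvectors of M^tM ordered by descending eigenvalue (our naming; assumes λ2 ≠ λ3, so the correction is uniquely determined):

```python
import numpy as np

def horn_so3(M):
    """Formula (27): the exact SO(3) solution.  Equals M (M^t M)^{-1/2}
    when det(M) > 0; otherwise the eigendirection of the smallest
    eigenvalue of M^t M is reflected."""
    lam, K = np.linalg.eigh(M.T @ M)   # eigenvalues in ascending order
    lam, K = lam[::-1], K[:, ::-1]     # reorder so lambda1 >= lambda2 >= lambda3
    R = M @ K @ np.diag(1.0 / np.sqrt(lam)) @ K.T
    if np.linalg.det(M) < 0:
        R = R @ K @ np.diag([1.0, 1.0, -1.0]) @ K.T
    return R
```

By Remark 3 below, the result agrees with the SVD form (30) for a generic M.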

5.2. Algorithm for SO(3) based on Lagrange multipliers

Let us consider here variational problem (24). The constrained variational problem (24) can be reformulated as an unconstrained problem by applying Lagrange multipliers: (R*, C*, λ*) = argmax_{R,C,λ} J(R, C, λ), where

J(R, C, λ) = (R, M) − (C, R^tR − I) − λ(det(R) − 1),

C is a symmetric matrix of Lagrange coefficients, and the number λ is a Lagrange coefficient. Let us denote J1 = (R, M), J2 = (C, R^tR − I) and J3 = λ(det(R) − 1); then

J(R, C, λ) = J1(R, C, λ) − J2(R, C, λ) − J3(R, C, λ).

Let us compute the gradient ∇J(R) with respect to R. The gradient ∇J1 is ∇J1(R) = M. Compute the gradient ∇J2(R). J2(R, C, λ) = (C, R^tR − I) = (C, R^tR) − (C, I). Let us denote by h the increment with respect to R; then

J2(R + h, C, λ) = (C, (R + h)^t(R + h) − I) = (C, (R + h)^t(R + h)) − (C, I),

(C, (R + h)^t(R + h)) = (C, R^tR + R^th + h^tR + h^th) = (C, R^tR) + (C, R^th) + (C, h^tR) + (C, h^th),

J2(R + h, C, λ) − J2(R, C, λ) = (C, R^th) + (C, h^tR) + o(h),

(C, R^th) + (C, h^tR) = (RC, h) + (RC^t, h) = (RC + RC^t, h) = (2RC, h),

since C is symmetric. We have that ∇J2(R) = 2RC. Compute the gradient ∇J3(R).

Proposition 5. Let R be a 3 × 3 orthogonal matrix with det(R) = 1; then

∂ det(R)/∂R = R.

Proof. We have that

det(R) = r11(r22r33 − r23r32) − r12(r21r33 − r23r31) + r13(r21r32 − r22r31),

so the matrix of partial derivatives ∂ det(R)/∂rij is the cofactor matrix of R:

∂ det(R)/∂R = ( r22r33 − r23r32    −(r21r33 − r23r31)    r21r32 − r22r31
                −(r12r33 − r13r32)    r11r33 − r13r31    −(r11r32 − r12r31)
                r12r23 − r13r22    −(r11r23 − r13r21)    r11r22 − r12r21 )^t... 

More precisely, the transpose of the cofactor matrix is the adjugate of R, hence

(∂ det(R)/∂R)^t = det(R) R^{-1} = det(R) R^t,

and therefore ∂ det(R)/∂R = det(R) R = R, since det(R) = 1.

So, we obtain that

∇J(R) = ∇J1(R) − ∇J2(R) − ∇J3(R) = M − 2RC − λR = M − 2RC − RΛm,

where Λm = diag(λ, λ, λ). We are looking for an orthogonal matrix R with positive determinant such that ∇J(R) = M − 2RC − RΛm = 0, i.e.

R(2C + Λm) = M.   (28)

Note that 2C + Λm and (2C + Λm)^{-1} are symmetric matrices. Consider the transposed equation for (28):

(2C + Λm)^tR^t = M^t.   (29)

We multiply each side of (29) by each side of (28) and obtain

(2C + Λm)^tR^tR(2C + Λm) = M^tM,   (2C + Λm)² = M^tM,   2C + Λm = K√Λ ε K^t,

where ε = diag(ε1, ε2, ε3), εi = 1 or εi = −1, i = 1, 2, 3, and

(2C + Λm)^{-1} = K(√Λ)^{-1}ε K^t.

It follows from (28) that R = M(2C + Λm)^{-1} = MK(√Λ)^{-1}ε K^t, and

(R, M) = (MK(√Λ)^{-1}ε K^t, M) = (K(√Λ)^{-1}ε K^t, M^tM) = (K(√Λ)^{-1}ε K^t, KΛK^t) = ((√Λ)^{-1}ε, Λ) = (ε, √Λ).

Note that sgn det(R) = sgn det(M) sgn det(ε). If sgn det(M) = 1, then sgn det(ε) = 1 and the maximum value of the expression (R, M) = (ε, √Λ) is reached when (R, M) = (ε, √Λ) = √λ1 + √λ2 + √λ3, i.e. ε = I.

If sgn det(M) = −1, then sgn det(ε) = −1 and the maximum value of the expression (R, M) = (ε, √Λ) is reached when

(R, M) = (ε, √Λ) = √λ1 + √λ2 − √λ3,

i.e. ε = diag(1, 1, −1).

The solution of variational problem (12) for SO(3) takes the form

R = MK(√Λ)^{-1}K^t = M(M^tM)^{-1/2},  if sgn det(M) = 1,
R = MK(√Λ)^{-1} diag(1, 1, −1) K^t = M(M^tM)^{-1/2} K diag(1, 1, −1) K^t,  if sgn det(M) = −1,

where K is the matrix consisting of the eigenvectors of the matrix M^tM, with the eigenvalues of M^tM ordered in descending order.

Remark 3. Let us consider the singular value decomposition (SVD) of the matrix M = uDv^t, where u and v^t are orthogonal matrices and D is a diagonal matrix. Since

M^tM = vDu^tuDv^t = vD²v^t = KΛK^t,

we obtain that v = K and D² = Λ. Note that MK(√Λ)^{-1}K^t = uDv^tvD^{-1}v^t = uv^t and MK(√Λ)^{-1} diag(1, 1, −1) K^t = uDv^tvD^{-1} diag(1, 1, −1) v^t = u diag(1, 1, −1) v^t. So, the solution of the variational problem can be written as

R = uv^t,  if sgn det(u) = sgn det(v),
R = u diag(1, 1, −1) v^t,  if sgn det(u) ≠ sgn det(v).   (30)
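Formula (30) is how the solution is usually computed in practice, since a single SVD replaces the eigendecomposition. A numpy sketch (our naming):

```python
import numpy as np

def so3_svd(M):
    """Formula (30): R = u v^t when sgn det(u) = sgn det(v); otherwise
    the last singular direction is reflected, R = u diag(1, 1, -1) v^t."""
    u, _, vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(u) * np.linalg.det(vt))
    return u @ np.diag([1.0, 1.0, d]) @ vt
```

If M = Q'(P')^t comes from clouds related by a pure rotation, that rotation is recovered exactly.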

5.3. Nearest orthogonal matrix (SO(3) case)

Let us denote by M a 3 × 3 matrix with rank(M) = 3. We call the matrix R* such that R* = argmin_R ||R − M||², subject to R^tR = I and det(R) = 1, the nearest orthogonal matrix for the matrix M. By analogy with item 3.2 we obtain that the variational problem takes the form

R* = argmax_R (R, M),   (31)

subject to R^tR = I and det(R) = 1. Therefore, the nearest orthogonal matrix R for the matrix M can be computed by (30), where M = uDv^t.

5.4. Projection of affine solution to SO(3)

We can obtain an approximate orthogonal solution R of variational problem (12) by applying the affine solution (23) in (30), where Ra = uDv^t. The matrix R is the projection of Ra onto SO(3) by formula (30).

6. List of types of variational problems

The approaches described above allow us to solve the following variants of variational problem (2). The input data for all the algorithms are the point clouds P and Q; the output data are the matrix R and the vector T. The matrices P' and Q' are described in (9).


6.1. Affine point-to-point

A closed form solution of variational problem (2) in this case is given by formula (23) for the matrix R and by formula (4) for the vector T.

6.2. Approximation of the exact orthogonal solution by projection of the affine solution onto O(3)

A closed form solution of (2) in this case is given by formula (23) (affine solution) and by formula (21) (projection onto O(3)) for the matrix R and by (4) for the vector T.

6.3. Approximation of the exact orthogonal solution by SVD of the affine solution

A closed form solution of variational problem (2) in this case is given by (23) (affine solution) and by formula (22) (SVD of the matrix Ra) for the matrix R, and by formula (4) for the vector T.

The algorithms described in items 6.2 and 6.3 are equivalent.

6.4. Closed form exact solution (O(3) case)

A closed form solution of variational problem (2) in this case is given by formula (17) for the matrix R and by formula (4) for the vector T.

6.5. Closed form exact solution (O(3) case, SVD)

A closed form solution of variational problem (2) in this case is given by formula (22) for the matrix R and by formula (4) for the vector T.

The algorithms described in items 6.4 and 6.5 are equivalent.

6.6. Closed form exact solution (SO(3) case)

A closed form solution of problem (2) in this case is given by formula (27) or (30) for the matrix R and by formula (4) for the vector T.

6.7. Approximation of the exact orthogonal solution by projection of the affine solution onto SO(3)

A closed form solution of variational problem (2) in this case is given by formula (23) (affine solution) and by formula (30) (projection onto SO(3)) for the matrix R, and by formula (4) for the vector T.

7. Experimental comparison of some considered algorithms

We compare here the algorithms described in items 6.6 and 6.7. The ICP algorithm based on the closed form exact solution for SO(3) is denoted by PtP_es, and the ICP algorithm based on the approximation of the exact orthogonal solution by projection of the affine solution onto SO(3) is denoted by PtP_pr. We compare these algorithms in terms of the convergence rate (i.e. the frequency of convergence of the ICP algorithm to the correct solution).

The algorithms PtP_es and PtP_pr use the standard method of searching for the correspondence between clouds based on a k-d tree. Our experiments are organized as follows. An orthogonal geometric transformation given by a known matrix is applied to a source point cloud. The source and transformed clouds are input to the tested algorithm. The ICP algorithm converges if the reconstructed transformation matrix coincides with the original matrix with a given accuracy.

Fig. 1. Convergence rate for Stanford Bunny (convergence, %, versus rotation angle, deg; curves PtP_es and PtP_pr)

Fig. 2. Convergence rate for Stanford Armadillo (convergence, %, versus rotation angle, deg; curves PtP_es and PtP_pr)

In the experiments, we use the following two point clouds: Stanford Bunny and Stanford Armadillo [8]. Each of the clouds consists of 1024 points, and all points lie in a centered unit sphere. The statistical experiments are organized as follows. Let us fix the value of the rotation angle. We take a random, uniformly distributed direction vector that defines a line containing the origin of the coordinate system. This line is the axis of rotation by the fixed angle. In addition, the components of the translation vector are random variables uniformly distributed in the interval (0, 1). The synthesized geometric transformation (true matrix Mtrue) is applied to the source cloud P to produce the cloud Q. The tested algorithms are applied to the clouds P and Q. We say that a registration algorithm converges to the true data if the reconstructed transformation matrix Mest coincides with the original matrix with a given accuracy. To guarantee statistically correct results, 1000 trials are carried out for each fixed rotation angle. The rotation angle varies from 0 to 90 degrees with a step of 10 degrees.
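One trial of the scheme above can be sketched as follows (cloud sizes, tolerances and names are ours; with the correspondence known exactly, the closed-form SO(3) solution recovers the synthesized transformation up to machine precision, so the interesting statistics come from the ICP-estimated correspondences, which this sketch omits):

```python
import numpy as np

def random_rotation(angle_deg, rng):
    """Rotation by a fixed angle about a uniformly random unit axis (Rodrigues)."""
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    a = np.radians(angle_deg)
    Kx = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + np.sin(a) * Kx + (1.0 - np.cos(a)) * (Kx @ Kx)

def one_trial(P, angle_deg, rng):
    """Synthesize (R_true, T_true), build Q, recover (R, T) in closed form
    from the known correspondence, and return the maximal deviation."""
    R_true = random_rotation(angle_deg, rng)
    T_true = rng.uniform(0.0, 1.0, size=3)
    Q = P @ R_true.T + T_true                 # q_i = R_true p_i + T_true
    Pc = (P - P.mean(axis=0)).T               # centered clouds, formula (9)
    Qc = (Q - Q.mean(axis=0)).T
    u, _, vt = np.linalg.svd(Qc @ Pc.T)       # M = Q'(P')^t, solved by (30)
    d = np.sign(np.linalg.det(u) * np.linalg.det(vt))
    R = u @ np.diag([1.0, 1.0, d]) @ vt
    T = Q.mean(axis=0) - R @ P.mean(axis=0)   # formula (4)
    return max(np.abs(R - R_true).max(), np.abs(T - T_true).max())
```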

Fig. 1 shows the convergence rate of the PtP_es and PtP_pr algorithms for Stanford Bunny. Fig. 2 shows the convergence rate of the PtP_es and PtP_pr algorithms for Stanford Armadillo.

References

1. Besl P., McKay N. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, vol. 14, no. 2, pp. 239-256.

2. Turk G., Levoy M. Zippered polygon meshes from range images. In: Computer Graphics Proceedings. Annual Conference Series. ACM SIGGRAPH, 1994, pp. 311-318.

3. Thompson E.H. On exact linear solution of the problem of absolute orientation. Photogrammetria, 1958-1959, vol. 15, pp. 163-179.

4. Du S., Zheng N., Meng G., Yuan Z. Affine registration of point sets using ICP and ICA. IEEE Signal Processing Letters, 2008, vol. 15, pp. 689-692.


5. Horn B. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A, 1987, vol. 4, no. 4, pp. 629-642.

6. Horn B., Hilden H., Negahdaripour S. Closed-form solution of absolute orientation using orthonormal matrices. Journal of the Optical Society of America A, 1988, vol. 5, no. 7, pp. 1127-1135.

7. Umeyama S. Least-squares estimation of transformation parameters between two point patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1991, vol. 13, no. 4, pp. 376-380.

8. Stanford University Computer Graphics Laboratory. The Stanford 3D Scanning Repository website. Available at: http://graphics.stanford.edu/data/3Dscanrep, accessed 20.11.2021.

Article received 18.01.2022.

Corrections received 28.02.2022.


Information about the authors

Makovetskii Artem Yurievich, Candidate of Physical and Mathematical Sciences, Associate Professor, Department of Computational Mechanics and Information Technologies, Chelyabinsk State University, Chelyabinsk, Russia; e-mail: artemmac@csu.ru.

Voronin Sergei Mikhailovich, Doctor of Physical and Mathematical Sciences, Associate Professor, Professor of the Department of Mathematical Analysis, Chelyabinsk State University, Chelyabinsk, Russia; e-mail: voron@csu.ru.

Voronin Aleksei Vyacheslavovich, student, Faculty of Mathematics, Chelyabinsk State University, Chelyabinsk, Russia; e-mail: ununus@mail.ru.

Makovetskaya Tatyana Yurievna, Candidate of Physical and Mathematical Sciences, Associate Professor, Department of System Programming, South Ural State University, Chelyabinsk, Russia; e-mail: lymarti@susu.ru.
