
UDC 004.925.8, 004.272.2

AN APPROACH FOR COMPUTING THE DELAUNAY TRIANGULATION AND THE VORONOI DIAGRAM IN Ed © V. M. Tereshchenko, D. S. Taran

National Taras Shevchenko University of Kiev Faculty of Cybernetics 64, Volodimirska str., Kyiv, 01601, Ukraine e-mail: vtereshch@gmail.com, eqis.mail@gmail.com

Abstract. In this paper we propose a new approach to constructing the Delaunay Triangulation in multidimensional spaces (d > 2). Analysing the current state of the art, one can conclude that the ideas behind the existing efficient algorithms developed for the case d ≤ 2 are not easy to generalize to the multidimensional case without a loss of efficiency. To solve this problem, we offer an effective algorithm that satisfies all the stated requirements.

Introduction

The problem of constructing an efficient algorithm for the Delaunay triangulation has been relevant for a long time, and many approaches have been proposed for solving it [1-10]. In the planar case the problem is essentially settled, since a worst-case optimal algorithm exists (Fortune's algorithm [11]).

In [16] the author presents a deterministic algorithm for computing the convex hull of n points in Ed in optimal O(n log n + n^⌊d/2⌋) time. Considering the direct connection between a convex hull and the Voronoi diagram, this yields an algorithm for computing the Voronoi diagram (Delaunay triangulation) of n points in optimal O(n log n + n^⌈d/2⌉) time. In particular, QuickHull [12] is considered one of the fastest algorithms for constructing the convex hull. Efficient practical randomized implementations also exist [17, 18].

However, besides the theoretical complexity, the practical running time of the algorithms also has to be taken into account. For solving this task there are several general methods, and all known algorithms follow one of them: divide-and-conquer, the sweep plane (or sweep space) approach, successive point insertion (the randomized incremental method), and the direct algorithms.

Sweep Method. The idea of the sweep line method is to move an imaginary line across the plane, stopping at certain event points. Processing the event points as it passes them, the sweep line solves the problem at hand. This approach gives good results for constructing the Delaunay triangulation in the plane (Fortune's algorithm [11]); however, for obvious reasons it cannot be adapted to the spatial case directly. In multidimensional spaces the sweep takes the form of a moving surface (most often a hyperplane).

To maintain the sweep front given by this hyperplane, it is necessary to deal with (d − 1)-dimensional structures. As a result, all operations on such structures acquire high computational complexity and are difficult to implement.

Random Incremental Method. The basis of this approach is to add the points of the set one by one and to repair the triangulation after each insertion [4, 5]. The repair operation consists of examining "suspicious" pairs of adjacent simplexes and applying the so-called "flip" operation to improper pairs.

The problem of this method is that in multidimensional spaces the flip operation is very complex and involves many different cases. Another problem is that it is difficult to assess the algorithm's complexity, because the number of flips may range from a few to a complete rebuild of the previously constructed simplexes. Also, the number of simplexes formed at intermediate steps can far exceed the number of resulting simplexes, which radically increases the running time of the algorithm. This "ungovernability" of the algorithm can to some extent be compensated for by choosing the order in which the points are visited. The order would be ideal if only the resulting simplexes of the triangulation were added at each step; this would turn the algorithm into a direct one.

Direct algorithms. Direct algorithms build the final result step by step: at each step the algorithm discovers one more simplex of the resulting triangulation. In general, the algorithm consists of the following steps:

1. Find the initial edge and add it to the queue.

2. Get the next edge out of the queue.

3. Find a simplex of the triangulation based on this edge.

4. If such a simplex exists, add all its other edges to the queue.

5. Go to Step 2.

Only steps 1 and 3 are non-obvious here. Step 1 is performed once for the entire algorithm, so its optimization is not a major concern. As for step 3, there is an obvious way to implement it: review the entire set of points and select the required one (the point that forms the simplex with the minimum circumscribed radius). This approach gives the slowest algorithm, with a time estimate of O(mn), where m is the number of resulting simplexes. So even in the plane this algorithm is quadratic. But unlike all other approaches, the direct algorithm is truly output-sensitive. There are several optimizations for finding the next simplex, the main ones being the grid method (hashing of the space of points) and the "inflating bubble" method. However, these optimizations are unstable and in some cases may run for an indefinitely long time.
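The generic loop above can be written down compactly. The following Python sketch is purely illustrative and is not the authors' implementation; the helpers find_initial_facet and find_apex are hypothetical placeholders for steps 1 and 3, and an open edge is closed as soon as it has been generated from both of its sides.

from collections import deque

def direct_triangulation(points, find_initial_facet, find_apex):
    """Generic direct (advancing-front) loop following steps 1-5 above.

    Assumed helpers (not part of the paper):
      find_initial_facet(points)        -> tuple of d point indices, a hull facet;
      find_apex(points, facet, source)  -> index of the vertex closing `facet`
                                           into a simplex on the side opposite
                                           to `source`, or None for a hull facet.
    """
    simplexes = []
    open_facets = {}                      # open edge -> simplex it came from
    queue = deque()

    first = tuple(sorted(find_initial_facet(points)))          # step 1
    open_facets[first] = None
    queue.append(first)

    while queue:                                               # step 2
        facet = queue.popleft()
        if facet not in open_facets:
            continue                       # already closed from the other side
        source = open_facets.pop(facet)
        apex = find_apex(points, facet, source)                # step 3
        if apex is None:
            continue                       # convex hull facet, no second simplex
        simplex = tuple(sorted(facet + (apex,)))
        simplexes.append(simplex)                              # step 4
        for skip in facet:                 # the d new facets incident to the apex
            new_facet = tuple(sorted(v for v in simplex if v != skip))
            if new_facet in open_facets:
                del open_facets[new_facet] # met twice: both simplexes are known
            else:
                open_facets[new_facet] = simplex
                queue.append(new_facet)
    return simplexes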

Divide-and-Conquer Method. Divide-and-conquer is a general and universal approach for solving a wide class of problems. In [19] we offer a new conception, the "common algorithmic space", for solving a set of interrelated geometric problems of multitask modeling of complex dynamic processes. The basis of the concept is a parallel recursive divide-and-conquer algorithm, which uses common algorithmic tools for the whole complex of tasks: data structures and a set of merge procedures for the corresponding problems.

The main problems that can limit the application of the divide-and-conquer technique to computational geometry are the nonlinearity of the merge stage and the linear inseparability of the point set [13]. For removing the first restriction, data structures have been offered in [20] that maintain a simultaneous ordering of the points by the x and y coordinates. This makes it possible to find a median in the list of points at the dividing stage using only the information about their number, independently of their arrangement. At the same time, using the weighted concatenable queue and efficient parallel procedures at the merging stage of the algorithm also solves the second problem. For n points this gives O(log n) time for the dividing stage and O(log^2 n) time for the merge stage, using the "weighted concatenable queue" as the data structure [20].

Using the data structures presented in [20] allows one to solve the whole complex of considered problems efficiently (including the Delaunay Triangulation and the Voronoi Diagram) in the sequential case. In this case we get O(n log n) for the preprocessing stage, O(log n) for the dividing stage and O(n) for the merge stage.

It should be noted that the preprocessing stage and the dividing stage are common for the whole set of problems, and the algorithm has the same efficiency in the 3D case. In particular, the concatenable queue in 3D can be illustrated by the example of a truncated pyramid, which can be simulated using the Voronoi Diagram of an orthogonal point set in the plane (fig. 1, 2).

Fig. 1. Example of the concatenable queue in 3D (3D object).

Fig. 2. Example of the concatenable queue in 3D (data structures).

1. The research aim

As can be seen from the above review of the existing algorithms, all of them have certain drawbacks, and in some situations their running time can be very high. Let us formulate what exactly we would like to obtain from a new algorithm:

1) optimal running time;

2) stability of operation on any set of points;

3) universality with respect to multidimensional spaces.

With respect to these requirements, the direct algorithm looks promising, because it satisfies requirements 2) and 3) quite naturally. Although the first requirement is the most important one, the direct algorithm leaves much room for optimization. The algorithm proposed in this paper achieves the optimal working speed among direct algorithms. An appropriate implementation of the algorithm and an analysis of its running time were carried out.

2. Algorithm

2.1. The general idea. As mentioned earlier, the offered algorithm is a direct one. This means that its general steps correspond to the steps of the direct algorithm, but each step is optimized to achieve the optimal working speed. The step of searching for the next simplex underwent the major changes, as it is the weakest point of the direct algorithm. At this point there is an open edge for which a vertex must be found that, together with the edge, forms a resulting simplex of the triangulation. The idea is to avoid reviewing unsuccessful points. Suppose that at some stage we have a candidate for the successful point, and consider where the points that fit better are located (figure 3).

Fig. 3. A candidate point and the location of better points.

The better points are located inside the circle (sphere) circumscribed around the considered simplex. Having found a point inside it, we narrow the area of further search. Continuing in this way, we find the point whose circumscribed circle contains no other points. By the definition of the Delaunay triangulation, this is the point that forms a triangle of the triangulation. It is easy to see that this method of search is not limited to the planar case; it is also suitable for multidimensional spaces.
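This narrowing criterion is a simple in-sphere test. The sketch below is an illustration only, assuming the circumscribed sphere of the current candidate simplex is already known as a center c and radius R.

import numpy as np

def inside_circumsphere(p, center, radius, eps=1e-12):
    """Return True if p lies strictly inside the sphere (center, radius).

    A point inside the circumscribed sphere of the current candidate
    simplex is a better candidate and narrows the search region further.
    """
    diff = np.asarray(p, dtype=float) - np.asarray(center, dtype=float)
    return float(np.dot(diff, diff)) < radius * radius - eps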

2.2. Searching for the initial edge. The first step is to find the initial edge on which the first simplex will be built. The edge here is a face (facet) of a simplex. In d-dimensional space a simplex is defined by d + 1 points, and the edges of this simplex are all its subsets of d points. Obviously, all faces of the convex hull of the point set belong to certain simplexes of the triangulation of these points. Therefore, the initial edge can be any face of the convex hull. To find the initial face we can use the Gift Wrapping search algorithm [13, 14]. The idea is to find successive hyperplanes, each of which contains one point of the convex hull more than the previous one. First, find the point with the smallest first coordinate. This point clearly belongs to the convex hull. Fix the hyperplane perpendicular to the first coordinate axis, with normal n = (1, 0, ..., 0). This will be the initial hyperplane F (figure 4; cf. [13], p. 162, fig. 3.25 (b)).

Then the next points are found one after another from the current face. The point chosen is the one that forms the largest angle with the hyperplane built at this step. Let us consider how to find the angle between a point and the hyperplane (figure 4).

Vector a is perpendicular to the edge (p1p2) and lies in the hyperplane. Vector n is the normal to the hyperplane. Vector v_k goes from the new vertex to one of the edge's vertices. The cotangent of the angle can be computed by the following formula:

ctg(φ) = pr_a(v_k) / pr_n(v_k)

Fig. 4. Finding the angle between the point and the hyperplane.

The projection of vector a on vector b is computed as follows:

pr_b(a) = (a, b) / |b|

Having the cotangents of the angles, the largest angle can be found by minimizing the value of the cotangent. Now let us consider in detail how to build the vectors a and n. At some stage we have a hyperplane F_{j-1} built on j − 1 points, together with its normal n. Vector a must be perpendicular to the vector n, to the vectors v_1v_i (j − 2 vectors) and to the coordinate axes x_{j+1}, x_{j+2}, ..., x_d. These d − 1 conditions define a system of linear equations, solving which we obtain the vector a. With the vectors n and a the cotangents of all angles can be computed, and then the next point v_j is found. Now, with the hyperplane F_j, it is necessary to recompute the normal n. The normal must be perpendicular to the vectors v_1v_i (j − 1 vectors) and to the coordinate axes x_{j+1}, x_{j+2}, ..., x_d. Solving the corresponding system of linear equations we get the normal n to the new hyperplane F_j. Thus, after d − 1 iterations we obtain a face of the convex hull.
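The selection rule can be expressed directly in terms of the two projections above. The following Python sketch is only illustrative (it ignores orientation and degenerate cases) and assumes that the perpendicular vector a, the current normal n and one vertex of the current face are already available.

import numpy as np

def projection_length(v, b):
    """Scalar projection of v onto b: pr_b(v) = (v, b) / |b|."""
    b = np.asarray(b, dtype=float)
    return float(np.dot(v, b)) / float(np.linalg.norm(b))

def next_gift_wrapping_point(candidates, facet_vertex, a, n):
    """Pick the candidate forming the largest angle with the current hyperplane.

    ctg(phi) = pr_a(v_k) / pr_n(v_k), where v_k goes from the candidate to a
    vertex of the current face; the largest angle has the smallest cotangent.
    """
    best, best_ctg = None, None
    for p in candidates:
        v_k = np.asarray(facet_vertex, dtype=float) - np.asarray(p, dtype=float)
        denom = projection_length(v_k, n)
        if abs(denom) < 1e-12:
            continue                          # candidate lies in the hyperplane
        ctg = projection_length(v_k, a) / denom
        if best_ctg is None or ctg < best_ctg:
            best, best_ctg = p, ctg
    return best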

The Gauss method is used for solving the systems of linear equations. Its main advantage is that it is easy to implement, and its non-optimal running time is not critical in our case. So let us have a system of linear equations:

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n


The first part of the algorithm reduces the matrix A = (a_ij) to a triangular form. For this, a row with a nonzero leading coefficient is selected first; it is swapped with the first row, and then all its coefficients are divided by a_11. Thus we obtain a unit as the first coefficient of the first row. Then all the coefficients of the first column are zeroed out with the help of the first row: the first row multiplied by a_i1 is subtracted from each row with number i (i ≥ 2). Then, similarly, the second row is used to eliminate the corresponding coefficients in all the other rows, and so on. As a result we get the following system:

x_1 + 0 + ... + 0 = b'_1
0 + x_2 + ... + 0 = b'_2
...
0 + 0 + ... + x_n = b'_n

The solution of this system is then read off directly:

x* = (b'_1, b'_2, ..., b'_n)

It should be noted that this algorithm is general; the square case was considered only for convenience. If there are fewer equations than variables (say m < n), the transformations result in a trapezoidal matrix. In this case all the variables x_{m+1}, ..., x_n are free. We assign them the value 1 and then calculate all the other variables accordingly. Also, a zero column can sometimes appear during the transformation. This also means that the variable corresponding to this column is free, and we can likewise choose 1 as its value.
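A straightforward sketch of the elimination described above (full reduction to a unit diagonal, written here for the square non-singular case) may look as follows; it is an illustration rather than the authors' implementation.

def gauss_solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (square, non-singular case).

    Follows the scheme above: pick a row with a nonzero pivot, swap it up,
    normalise it, then eliminate the pivot column in all other rows.
    """
    n = len(A)
    M = [list(map(float, row)) + [float(bi)] for row, bi in zip(A, b)]  # [A | b]
    for col in range(n):
        pivot = next(r for r in range(col, n) if abs(M[r][col]) > 1e-12)
        M[col], M[pivot] = M[pivot], M[col]         # bring the pivot row up
        p = M[col][col]
        M[col] = [v / p for v in M[col]]            # unit pivot
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]              # x_i = b'_i

For the underdetermined case (m < n) mentioned above, the free variables can simply be fixed to 1 before the remaining ones are computed, exactly as described.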

2.3. Hashing edges. At each step of finding a new simplex it is necessary to perform certain operations on edges: add edges to the list of "open" edges and check whether an edge is present in this list. As these operations are repeated many times, they must be made as fast as possible. All marks on the open edges are stored in a hash table, which gives the required O(1) time for all these operations. An edge is a list of d points stored by their indices. Suppose the problem is being solved for N points. Then the hash function can be chosen in the following form:

hash(e) = (N^0·e.v_1 + N^1·e.v_2 + N^2·e.v_3 + ... + N^(d-1)·e.v_d) mod P

Here P is some large prime number determined by the amount of memory we are ready to spend on the hash table. Using the hash function, edges are recorded into the corresponding cells of an array of size P. To search for a particular edge, one simply checks the corresponding cell for the presence of this edge. It should be noted that, to avoid mismatches, the list of the edge's vertices must be sorted; otherwise, different permutations of the same vertices would produce different hashes.
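A direct reading of the hash function above, with the vertex list sorted first, is sketched below; the bucket-array usage at the end is only an assumed illustration of how the table might be organized.

def edge_hash(edge, N, P):
    """Hash an edge (a list of d point indices) into a table of size P.

    hash(e) = (N^0*v_1 + N^1*v_2 + ... + N^(d-1)*v_d) mod P, where the
    vertices are sorted so that any permutation of the same vertex set
    yields the same hash value.
    """
    h = 0
    for i, v in enumerate(sorted(edge)):
        h = (h + pow(N, i, P) * v) % P
    return h

# Assumed usage: buckets of an open-edge table of size P
# P = 1_000_003
# table = [[] for _ in range(P)]
# table[edge_hash((4, 17, 42), N=1000, P=P)].append((4, 17, 42))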

2.4. Search for the next simplex. The subtask is to find, for an open edge, the vertex that together with that edge forms a simplex of the resulting triangulation. The modification is that, instead of scanning all vertices, the desired vertex is found using a modified KD-tree.

The search starts from an infinite search region [−∞; +∞] in each coordinate. So first we find any point that can form a simplex with this edge. Now we can narrow the search region, since all points "better" than the given one lie inside the hypersphere circumscribed around the current simplex (formed by the edge and the candidate point). The hypersphere (which becomes the region of the next search) is approximated by a hypercube. The larger the dimensionality d of the space, the rougher this approximation is; therefore, the efficiency of the algorithm degrades as the dimension grows. The search ends when no point is found within the obtained region. This means that the chosen point is the vertex of a simplex of the triangulation. It is also possible that no point is found at all during the search. This means that the edge is a face of the convex hull; in this case the edge is closed without adding a simplex. If the point was found, the corresponding simplex is constructed and recorded into the resulting list of triangulation simplexes. In addition, the d new edges of the simplex are added to the open edges. When adding a new edge it may turn out that this edge is already in the queue; in this case it must be removed from the queue and marked as "complete". The hash table is used for quick checking of the edge's presence, as described in the previous section.

The KD-tree [13, 15] is a structure used for fast search of the points belonging to a query range. Query ranges are rectangles (parallelepipeds) with sides parallel to the coordinate axes. The tree uses O(n) memory and is built in O(n log n) time. Each query is answered on average in O(log n), and in the worst case in O(n^(1-1/d)). In our case the search region is narrowed as points are found. The new region is obtained by approximating the sphere circumscribed around the candidate simplex by a hypercube. Suppose that, for finding the next point, we have a hypersphere centered at the point c = (c_1, c_2, ..., c_d) with radius R. The query range is then the axis-parallel parallelepiped bounded, in each coordinate k = 1, 2, ..., d, by the numbers c_k − R and c_k + R. Using the KD-tree without changes does not provide a purely logarithmic search time; in many cases the time is linear, which turns the algorithm into the simple direct algorithm (the slowest of all). The next two optimizations return the desired speed to the algorithm.
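The sketch below shows a minimal KD-tree (median split) together with the axis-parallel query box obtained by bounding the circumsphere; it is illustrative only and omits the two optimizations discussed next.

import numpy as np

def build_kd_tree(points, indices=None, depth=0):
    """Build a simple KD-tree over d-dimensional points (median split)."""
    if indices is None:
        indices = list(range(len(points)))
    if not indices:
        return None
    axis = depth % len(points[0])
    indices = sorted(indices, key=lambda i: points[i][axis])
    mid = len(indices) // 2
    return {"index": indices[mid], "axis": axis,
            "left": build_kd_tree(points, indices[:mid], depth + 1),
            "right": build_kd_tree(points, indices[mid + 1:], depth + 1)}

def query_box(node, points, lo, hi, out):
    """Report indices of all points inside the box [lo, hi] (inclusive)."""
    if node is None:
        return
    p = points[node["index"]]
    if all(lo[k] <= p[k] <= hi[k] for k in range(len(p))):
        out.append(node["index"])
    axis = node["axis"]
    if lo[axis] <= p[axis]:
        query_box(node["left"], points, lo, hi, out)
    if p[axis] <= hi[axis]:
        query_box(node["right"], points, lo, hi, out)

def box_around_sphere(center, radius):
    """Bounding hypercube [c_k - R, c_k + R] of the circumsphere."""
    c = np.asarray(center, dtype=float)
    return c - radius, c + radius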

2.4.1. The first optimization. Sometimes the search procedure yields no result at all. An example is the faces of the convex hull, for which there is no simplex closing the open edge.

In this case the search has to review all the points, which increases the working time of the algorithm significantly. But in some cases a large search region can be cut off without a detailed check. Note that every edge of the triangulation belongs to one or two simplexes. Open edges are those for which one of the simplexes has already been found and the other one is still to be found. Therefore every open edge actually has a direction: the search for points is performed only in one half-space. That is why regions lying completely in the other half-space can be discarded immediately. Checking whether a parallelepiped belongs completely to a certain half-space is simple: check each of its vertices for membership in this half-space. If they all belong to it, then the whole parallelepiped belongs to this half-space.
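This rejection test amounts to checking the 2^d corners of a query box against the half-space of the open edge. In the sketch below the half-space is written as {x : <normal, x> <= offset}, which is an assumed parametrization used only for illustration.

import numpy as np
from itertools import product

def box_in_halfspace(lo, hi, normal, offset):
    """True if the box [lo, hi] lies entirely in {x : <normal, x> <= offset}.

    It suffices to test all 2^d corner vertices: if every corner satisfies
    the inequality, so does the whole box.
    """
    lo = np.asarray(lo, dtype=float)
    hi = np.asarray(hi, dtype=float)
    for choice in product((0, 1), repeat=len(lo)):
        corner = np.where(np.asarray(choice) == 0, lo, hi)
        if np.dot(normal, corner) > offset:
            return False
    return True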

2.4.2. The second optimization. In the original KD-tree it does not matter from which subtree to start the search if both have to be visited, because the query range is fixed and the search must check both of them anyway. In our case, after checking one of the subtrees the region may already be narrowed, so there may be no need to check the other one. Therefore the search should start with the more promising subtree. If the edge belongs entirely to one of the subtrees, it is advantageous to start the search from it, because the best points are, in general, closer to the edge than the others.

2.4.3. The update of the query range. When a new candidate vertex is found, the query range must be updated: the hypersphere circumscribing the simplex must be computed. The simplex is given by d + 1 points. Let us write the system of equations for finding the center of the circumscribed hypersphere, i.e. the point whose distance to all points of the simplex is the same:

(c_1 − p_11)² + (c_2 − p_12)² + ... + (c_d − p_1d)² = (c_1 − p_i1)² + (c_2 − p_i2)² + ... + (c_d − p_id)², (2 ≤ i ≤ d + 1)

Rearranging the terms, we obtain the following equality:

c_1(2p_11 − 2p_i1) + ... + c_d(2p_1d − 2p_id) = (p_11² − p_i1²) + ... + (p_1d² − p_id²)

Thus, we have a system of d linear equations (2 ≤ i ≤ d + 1) for the variables c = (c_1, c_2, ..., c_d). Solving it by the Gauss method, we obtain the center of the circumscribed hypersphere. Knowing the center, the distance to any of the simplex points gives the radius.
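The system above can be assembled and solved in a few lines. The following sketch uses numpy's solver in place of the Gauss elimination described earlier; it is an illustration of the same computation, not the authors' code.

import numpy as np

def circumsphere(simplex_points):
    """Center and radius of the hypersphere circumscribed around a d-simplex.

    simplex_points has shape (d + 1, d). Equating the squared distances from
    the center c to p_1 and to each p_i (2 <= i <= d + 1) gives the d linear
    equations written above.
    """
    P = np.asarray(simplex_points, dtype=float)
    p1 = P[0]
    A = 2.0 * (p1 - P[1:])                         # rows 2*(p_1k - p_ik)
    b = np.dot(p1, p1) - np.sum(P[1:] ** 2, axis=1)
    c = np.linalg.solve(A, b)
    return c, float(np.linalg.norm(c - p1))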

2.5. The final processing. Each time a new simplex is obtained, it must be written into the resulting list of simplexes. It is also necessary to maintain the neighbourhood structure. For this, each open edge is associated with the simplex for which this edge was recorded in the queue and in the hash table. After the closing simplex of an edge has been found, a mark is made on the neighbouring resulting simplex and on the simplex corresponding to this edge. Also, when adding an edge, it may already be in the queue; in that case a mark is made on the neighbouring simplexes that generated this edge, and the edge itself is removed from the queue. In parallel, the Voronoi diagram of the points can be built. The vertices of the diagram are the centers of the hyperspheres circumscribed around the simplexes, which were already computed during the simplex search. The edges are the segments between the centers of neighbouring simplexes (figure 5).

Fig. 5. Constructing the Delaunay Triangulation.
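Given the list of simplexes and their circumcenters, the Voronoi edges follow directly from the adjacency of simplexes. The sketch below is a minimal illustration of this step (bounded edges only; the unbounded edges corresponding to hull facets are omitted).

def voronoi_from_delaunay(simplexes, circumcenters):
    """Voronoi edges from the adjacency of Delaunay simplexes.

    simplexes is a list of tuples of point indices; circumcenters[i] is the
    circumcenter of simplexes[i], computed earlier during the search. Two
    simplexes sharing a facet (d common vertices) are neighbours, and the
    segment between their circumcenters is a Voronoi edge.
    """
    facet_owner = {}
    voronoi_edges = []                      # pairs of circumcenters
    for i, s in enumerate(simplexes):
        for skip in s:
            facet = tuple(sorted(v for v in s if v != skip))
            j = facet_owner.pop(facet, None)
            if j is None:
                facet_owner[facet] = i      # first simplex seen for this facet
            else:
                voronoi_edges.append((circumcenters[j], circumcenters[i]))
    return voronoi_edges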

3. Estimation of the algorithm complexity

First of all, the amount of memory used by the algorithm should be considered. Let us list all the structures necessary for its work:

1. The set of all points is stored in the KD-Tree — O(n).

2. Hash operations with the edges — O(P).

3. The list of resulting simplexes — O(m).

So the memory required by the algorithm is O(n + m + P). Let us now consider the sequence of basic operations needed to perform each step of the algorithm:

1. Searching for the initial edge (d repetitions).

1.1. The Gauss method for finding vector a — O(d^3).

1.2. Finding the point with a maximum angle — O(dn).

1.3. The Gauss method for finding a normal — O(d^3).


2. Finding the next simplex (m repetitions):

2.1. Get the next edge out of the queue — O(1).

2.2. Finding the desired point — O(n^(1-1/d)) (on average O(log n)).

2.3. Adding the simplex to the resulting list — O(1).

2.4. The hash of new edges — O(1).

Hence the worst-case complexity estimate of the algorithm is O(2d^4 + d^2·n + m·n^(1-1/d)). But this worst case is practically unattainable, and such an assessment is due only to the difficulty of estimating it less roughly. In practice, the expected complexity of the algorithm is more important; it leads to the estimate O(2d^4 + d^2·n + m log n). Assuming a constant space dimension, we obtain a more concise evaluation of the expected complexity: O(m log n).

A comparison of running times is given in Table 1. For reference we take the fastest algorithm in the plane, namely an implementation of the Fortune algorithm [11]. Both algorithms were tested on the same computer (an ordinary laptop) under the same conditions.

Table 1. Comparison of the running times of the algorithms

Number of points | D-Range, ms | Fortune, ms | Calls | K
100000           | 2788        | 1413        | 27.52 | 1.973107
200000           | 5782        | 3110        | 29.04 | 1.859164
300000           | 8803        | 4708        | 29.4  | 1.869796
400000           | 12248       | 6777        | 30.91 | 1.807289
500000           | 14988       | 8472        | 30.35 | 1.769122
600000           | 18539       | 10831       | 32.46 | 1.711661
700000           | 21882       | 12545       | 32.4  | 1.744281
800000           | 24739       | 15466       | 31.91 | 1.599573
900000           | 28388       | 17602       | 32.69 | 1.612771
1000000          | 32512       | 20428       | 33.92 | 1.591541

The first column gives the number of points for which the tests were conducted. The second and third columns give the running time of the two algorithms (in milliseconds). The fourth column shows the average number of search calls made while constructing a new simplex. The coefficient K indicates how many times the proposed algorithm is slower than the Fortune algorithm.

Despite expectations, both algorithms run in nearly linear time. Looking at the coefficient K, it is evident that it decreases monotonically as the number of points grows. Part of the slowness of the D-Range algorithm is connected with the time spent on preprocessing (building the KD-tree and searching for the initial edge). But, unlike the offered algorithm (the D-Range algorithm), the Fortune algorithm works only in the plane and cannot be adapted to a multidimensional space.

Conclusion

The result of this work is a new efficient algorithm that is applicable to the multidimensional cases of the Delaunay triangulation and the Voronoi diagram construction problems. Its practical implementation has been confirmed. With regard to the stated requirements, one can also note the stability of the algorithm and its low dependence on the specific point sample: the algorithm behaves identically on any sets. Although the algorithm is not theoretically optimal, it is optimal in terms of the expected speed.

References

1. J.E. Goodman and J. O'Rourke, eds. Handbook of Discrete and Computational Geometry. Second Edition, Chapman and Hall/CRC Press, 2004.

2. P. Cignoni, C. Montani, R. Scopigno. DeWall: A Fast Divide and Conquer Delaunay Triangulation Algorithm in Ed. Computer-Aided Design 30 (5):333-341, 1998.

3. M. de Berg, O. Cheong, M. van Kreveld, M. Overmars. Computational Geometry: Algorithms and Applications. Springer-Verlag, Berlin, 2008.

4. L. Guibas, D. Knuth, M. Sharir. Randomized incremental construction of Delaunay and Voronoi diagrams. Algorithmica 7:381-413,1992.

5. H. Edelsbrunner, S. Nimish. Incremental Topological Flipping Works for Regular Triangulations. Algorithmica 15 (3): 223-241,1996.

6. M. Caroli, M. Teillaud. Delaunay triangulations of point sets in closed euclidean d-manifolds. In Proc. of the 27th annual ACM symposium on Computational geometry, pages 274-282, 2011.

7. M. Hoffmann., Y. Okamoto. The minimum weight triangulation problem with few inner points. Computational Geometry. 34 (3):149-158, 2006.

8. J. Gudmundsson, H. Haverkort, and M. van Kreveld. Constrained higher order Delaunay triangulations. Comput.Geom. Theory Appl., 30:271-277, 2005.

9. R.I. Silveira, M. van Kreveld. Towards a Definition of Higher Order Constrained Delaunay Triangulations. In proc. 19th Canadian Conference on Computational Geometry, pages 322-337, 2007.

10. T. de Kok, M. van Kreveld and M. Loffler. Generating realistic terrains with higher-order Delaunay triangulations. Comput. Geom. Theory Appl., 36:52-65,2007.

11. S. Fortune. A sweepline algorithm for Voronoi diagrams. Algorithmica, 2:153-174,1987.

12. C.B. Barber, D.P. Dobkin, and H.T. Huhdanpaa. The Quickhull algorithm for convex hulls. ACM Trans. on Mathematical Software, 22(4):469-483, 1996.

13. F. Preparata and M.I. Shamos. Computational Geometry: An introduction. Springer-Verlag, Berlin, 1985.

14. D. R. Chand and S. S. Kapur. An algorithm for convex polytopes. J. ACM 17(1):78-86, 1970.

15. J. L. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM 18:509-517, 1975.

16. B. Chazelle. An optimal convex hull algorithm in any fixed dimension. Discrete Comput. Geom., 10:377-409, 1993.

17. K. L. Clarkson and P. W. Shor. Applications of random sampling in computational geometry. Discrete Comput. Geom., 4:387-421, 1989.

18. O. Devillers. The Delaunay hierarchy. Internat. J. Found. Comput. Sci., 13:163-180, 2002.

19. V. Tereshchenko and A. Anisimov. One Conception of Creating Tools for Geometric Modeling. In Proc. of the 7th International Symposium on Voronoi Diagrams in Science and Engineering, Quebec, Canada, pages 260-265, 2010.

20. V. N. Tereshchenko and A. V. Anisimov. Recursion and parallel algorithms in geometric modeling problems. Journal: Cybernetics and Systems Analysis 46 (2): 173-184, 2010.

The article was received by the editorial office on 01.06.2012.
