
Section II. GENETIC ALGORITHMS

UDC 658.512

V. M. Kureichik, E.D. Goodman, W.F. Punch An approach to partitioning based on simulated evolution

Introduction

The rapid design of VLSI systems plays an important role in the progress of science and technology. The conflicting requirements of making VLSI systems more complex, decreasing the design time, and increasing the quality of the design cannot be satisfied by a simple increase in the number of designers and computer systems used, since the capability to perform the design process in parallel is rather limited, and the engineering personnel and complement of workstations in a design office often cannot be significantly enlarged. The solution to this problem is the wider use of new computer technology for solving VLSI design problems. One such new technology is based on simulated evolution [1,2,19,20].

There are two major types of VLSI design systems, characterized by the sequence of steps performed:

Bottom-up design: Initially, the designer solves the problems at the lowest level, and gradually proceeds to higher levels of the design process.

Top-down design: The process is carried out in the reverse order.

Computer-aided design is usually carried out using a top-down design methodology [3-5]. In the general case, several vertical levels can be described in a VLSI CAD system. They are typically: design specification, functional design, logical design, circuit design, physical design, and fabrication. Between levels, we must perform functional and logical simulation, circuit analysis, extraction, and verification. Physical design automation for VLSI systems consists of six major levels or steps: partitioning, placement, assignment and floorplanning, routing, symbolic layout and compaction, and layout analysis and verification. In this paper, we describe problems of partitioning based on simulated evolution.

The Simulated Evolution-Based Approach

In 1975, Holland [1] described a methodology for studying adaptive systems and designing artificial adaptive systems. It is now frequently used as an optimization method, based on an analogy to the process of natural selection in biology. The biological basis for the adaptation process is evolution from one generation to the next, based on elimination of weak elements and retention of optimal and near-optimal elements ("survival of the fittest"). References [1,2] contain a theoretical analysis of a class of adaptive systems in which the space of structural modifications is represented by sequences (strings) of symbols chosen from some alphabet (usually a binary alphabet). The searching of this representation space is performed using so-called "Genetic Algorithms" (GAs). The genetic algorithm is now widely recognized as an effective search paradigm in artificial intelligence, image processing, VLSI circuit layout, optimization of bridge structures, solving of nonlinear equations, correlation of test data with functional groupings, and many other areas [1,6-17].

The classes of problems encountered in VLSI design include many that are not easily solved with effective algorithms, i.e., NP-hard and some NP-complete problems. The computation of an optimal solution to the problem of physical design of a VLSI system is usually possible only for a very small circuit. Some heuristic methods must typically be applied to reduce the search space and generate sets of approximate (near-optimal) solutions. In the genetic algorithm approach, a solution (i.e., a point in the search space) is called a "chromosome" or string [1,7,16]. A GA approach requires a population of chromosomes (strings) representing a combination of features from the set of features, and requires a cost function (called an evaluation or fitness function) F(n), where n is the number of elements in a chromosome [16]. This function calculates the fitness of each chromosome. The algorithm manipulates a finite set (population) of chromosomes, based loosely on the mechanism of natural evolution. In each generation, chromosomes are subjected to certain operators, such as crossover, inversion, and mutation, analogous to processes which occur in natural reproduction. The crossover of two chromosomes produces a pair of offspring chromosomes which are syntheses or combinations of the traits of their parents. Inversion in a chromosome produces a mirror-image reflection of a subset of the features on the chromosome. A mutation on a chromosome produces a nearly identical chromosome with only local alterations of some regions of the chromosome.

Crossover

The classic crossover operator is described as follows. Here are the chromosomes before crossover:

chromosome 1:   a1 a2 a3 a4 | a5 a6    (parent 1)
chromosome 2:   b1 b2 b3 b4 | b5 b6    (parent 2)
                            ^ crossover point, randomly chosen

After the crossover operation is performed (at the randomly selected point shown), the following chromosomes result:

chromosome 1':  a1 a2 a3 a4 b5 b6    (child 1, or offspring 1)
chromosome 2':  b1 b2 b3 b4 a5 a6    (child 2, or offspring 2)
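To make the operator concrete, here is a minimal Python sketch of one-point crossover (the paper's own implementation was written in C; the function and variable names below are illustrative and not taken from the paper):

```python
import random

def one_point_crossover(parent1, parent2, point=None):
    """Classic one-point crossover: swap the tails of two equal-length
    chromosomes at a randomly chosen cut point."""
    assert len(parent1) == len(parent2)
    if point is None:
        point = random.randint(1, len(parent1) - 1)  # cut strictly inside the string
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

# The example above: cut after the fourth gene.
p1 = ["a1", "a2", "a3", "a4", "a5", "a6"]
p2 = ["b1", "b2", "b3", "b4", "b5", "b6"]
c1, c2 = one_point_crossover(p1, p2, point=4)
print(c1)  # ['a1', 'a2', 'a3', 'a4', 'b5', 'b6']
print(c2)  # ['b1', 'b2', 'b3', 'b4', 'a5', 'a6']
```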

For partitioning and placement in VLSI systems design, two additional operations have been found to be very useful. They are called order crossover and cycle crossover (similar to operations described in [20] and [2]).

These operators solve a problem that classic crossover introduces: the problem of permutations. We need an operator that, during crossover, creates a new ordering of the existing data (as in partitioning), rather than creating new sets of nodes. Such operators were first used in solving problems like the Traveling Salesman Problem. Consider the operation of order crossover for permutation problems.

chromosome 1:   a1 a2 | a3 a4 a5 a6    (parent 1)
chromosome 2:   a6 a1 | a3 a5 a4 a2    (parent 2)
                       ^ crossover point

and we get:

chromosome 1':  a6 a1 a2 a3 a4 a5    (offspring 1)
chromosome 2':  a1 a2 a6 a3 a5 a4    (offspring 2)

As before, we establish a random crossover point. An offspring is created by first adding the elements before the crossover point of one parent. We then fill in the remaining offspring elements from the other parent, taking elements left-to-right from that parent but skipping any elements already in the offspring. Thus chromosome 1' is created by taking the a6 and a1 elements from parent 2, then filling in the remaining elements by copying from parent 1, left to right, skipping any redundant elements already in the offspring.
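A small Python sketch of this order crossover, following the rule just described (illustrative names; the parent strings match the example above):

```python
def order_crossover(prefix_parent, fill_parent, point):
    """Order crossover as described above: keep the genes before the cut
    point of one parent, then fill the rest left-to-right from the other
    parent, skipping genes already present in the child."""
    child = list(prefix_parent[:point])
    for gene in fill_parent:
        if gene not in child:
            child.append(gene)
    return child

p1 = ["a1", "a2", "a3", "a4", "a5", "a6"]
p2 = ["a6", "a1", "a3", "a5", "a4", "a2"]
print(order_crossover(p2, p1, 2))  # ['a6', 'a1', 'a2', 'a3', 'a4', 'a5'] (offspring 1)
print(order_crossover(p1, p2, 2))  # ['a1', 'a2', 'a6', 'a3', 'a5', 'a4'] (offspring 2)
```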

Cycle crossover was designed to address the same problem (permutation) but operates quite differently. Here, we do not select a crossover point but instead "alternate" selections of elements to be included in the offspring from each of the two parents.

chromosome 1:   a1 a2 a3 a4 a5 a6    (parent 1)
chromosome 2:   a6 a1 a4 a5 a3 a2    (parent 2)

To create the offspring, we now select an element from parent 1 (the leftmost, a1) and add it to the offspring. Since the goal is to alternate, the selection of a1 from parent 1 means that the corresponding element in parent 2, a6, is next added to the offspring from its position in parent 1. Selection of a6 means the corresponding parent 2 element, a2, is next added. a2's corresponding element is a1, meaning we have completed a cycle, leaving the following offspring:

chromosome 1':  a1 a2 _ _ _ a6    (offspring 1, partially completed)

The remaining elements are filled in using regular crossover (here, copying the untouched positions from the other parent), yielding the following two offspring:

chromosome 1':  a1 a2 a4 a5 a3 a6    (offspring 1)
chromosome 2':  a6 a1 a3 a4 a5 a2    (offspring 2)
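The following Python sketch implements cycle crossover as described: trace the cycle of positions starting from the first gene of parent 1, copy those positions from one parent, and copy the remaining positions from the other parent. It reproduces the offspring shown above (names are illustrative):

```python
def cycle_crossover(p1, p2):
    """Cycle crossover: follow the cycle of positions starting at the first
    gene of parent 1, then fill the untouched positions from the other parent."""
    n = len(p1)
    cycle = set()
    pos = 0
    while pos not in cycle:
        cycle.add(pos)
        pos = p1.index(p2[pos])  # position in p1 of the gene aligned with it in p2
    child1 = [p1[i] if i in cycle else p2[i] for i in range(n)]
    child2 = [p2[i] if i in cycle else p1[i] for i in range(n)]
    return child1, child2

p1 = ["a1", "a2", "a3", "a4", "a5", "a6"]
p2 = ["a6", "a1", "a4", "a5", "a3", "a2"]
c1, c2 = cycle_crossover(p1, p2)
print(c1)  # ['a1', 'a2', 'a4', 'a5', 'a3', 'a6'] (offspring 1)
print(c2)  # ['a6', 'a1', 'a3', 'a4', 'a5', 'a2'] (offspring 2)
```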

Inversion. Before the inversion operation (a unary operator; two examples are shown):

chromosome 1:   a1 a2 | a3 a4 | a5 a6    (example 1)
chromosome 2:   b1 b2 | b3 b4 | b5 b6    (example 2)
                (inversion segment at the middle of the chromosome)

After inversion:

chromosome 1':  a1 a2 a4 a3 a5 a6    (offspring 1)
chromosome 2':  b1 b2 b4 b3 b5 b6    (offspring 2)

Mutation. Mutation, which usually represents a random modification of a randomly selected feature on a chromosome, is defined as follows (again, two examples of this unary operator are shown; the points to swap are indicated for each example):

chromosome 1:   a1 a2 a3 a4 a5 a6    (example 1; points to swap at positions 3 and 6)
chromosome 2:   b1 b2 b3 b4 b5 b6    (example 2; points to swap at positions 1 and 2)

After mutation:

chromosome 1':  a1 a2 a6 a4 a5 a3    (offspring 1)
chromosome 2':  b2 b1 b3 b4 b5 b6    (offspring 2)
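Both unary operators are easy to express in Python; the sketch below assumes the swap-style mutation and the segment inversion used in the examples above (names are illustrative):

```python
import random

def mutate_swap(chromosome, i=None, j=None):
    """Mutation as used here: swap two (randomly selected) genes."""
    child = list(chromosome)
    if i is None or j is None:
        i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def invert_segment(chromosome, start, end):
    """Inversion: mirror-image reflection of the segment [start, end)."""
    child = list(chromosome)
    child[start:end] = reversed(child[start:end])
    return child

p = ["a1", "a2", "a3", "a4", "a5", "a6"]
print(mutate_swap(p, 2, 5))     # ['a1', 'a2', 'a6', 'a4', 'a5', 'a3'] (mutation example 1)
print(invert_segment(p, 2, 4))  # ['a1', 'a2', 'a4', 'a3', 'a5', 'a6'] (inversion example 1)
```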

For partitioning in VLSI systems design, an additional operation has been found to be very useful. It is called crossmutation (CXM) [15]. The CXM proceeds by first selecting at random two crossing points in each chromosome, defining the substrings to be interchanged, for example:

chromosome 1:   a1 a2 | a3 a4 | a5 a6    (parent 1)
chromosome 2:   a6 a1 | a4 a5 | a3 a2    (parent 2)
                (the bars mark the crossing points in each parent)

To construct the first offspring, the segment of parent 2 is injected into a crossover point of parent 1.

Then the duplicated features in the remainder of the chromosome are deleted, yielding:

chromosome 1':  a1 a2 a4 a5 a3 a6    (offspring 1)

The second offspring is obtained similarly, by injecting the segment of parent 1 into parent 2, and we get:

chromosome 2':  a6 a1 a3 a4 a5 a2    (offspring 2)
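A hedged Python sketch of crossmutation as reconstructed above: the donor's segment between its crossing points is spliced into the receiver at the receiver's first crossing point, and the duplicated genes are removed from the rest of the receiver (the exact splice position and all names are assumptions for illustration):

```python
def crossmutation(receiver, donor, receiver_cut, donor_points):
    """Crossmutation (CXM) sketch: take the donor's segment between its two
    crossing points, delete those genes from the receiver, and splice the
    segment in at the receiver's crossing point."""
    segment = donor[donor_points[0]:donor_points[1]]
    rest = [gene for gene in receiver if gene not in segment]
    return rest[:receiver_cut] + segment + rest[receiver_cut:]

p1 = ["a1", "a2", "a3", "a4", "a5", "a6"]  # segment a3 a4 between its crossing points
p2 = ["a6", "a1", "a4", "a5", "a3", "a2"]  # segment a4 a5 between its crossing points
print(crossmutation(p1, p2, 2, (2, 4)))  # ['a1', 'a2', 'a4', 'a5', 'a3', 'a6'] (offspring 1)
print(crossmutation(p2, p1, 2, (2, 4)))  # ['a6', 'a1', 'a3', 'a4', 'a5', 'a2'] (offspring 2)
```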

The optimization process is performed in cycles called generations. In Figure 1, we show the principle of genetic algorithms as applied to physical design of VLSI systems. During each generation, a set of new chromosomes is created using the crossover, inversion, mutation, and crossmutation operators. Since the population size is finite, only the best chromosomes are allowed to survive to the next cycle of reproduction. (There is considerable variation among implementations of the genetic algorithm approach in the strictness with which the principle of "survival of the fittest" is applied. In some systems, the fitness affects only the probability of survival, whereas in others, only the N most fit individuals are allowed to survive at each generation.) The crossover rate assumes quite high values (on the order of 80-85%), while the mutation rate is small (typically 1-15%) for efficient search [7,9].

The Simulated Evolution Model for Simultaneous Partitioning

In order to develop an evaluation function for the partitioning problem, we must examine formally how this problem is represented. We define the VLSI partitioning problem (PP) as follows. Let us represent a VLSI system as a hypergraph H = (X, E, W), where X represents the set of nodes in the hypergraph, E the set of hyperedges, and W a weight. Each node xi ∈ X has an associated weight wi. The weight W is defined as the weight sum W = Σ wi, taken over all xi ∈ X. Note that for any part Hi ⊂ H, the weight wi of Hi must not exceed some limit value B, that is, wi ≤ B, wi ∈ W, B ≠ 0. Let P = {P1, P2, ..., Pk} be the set of partitions of the hypergraph H, and let each partition Pi contain elements from {p1, p2, ..., pn}, n = |X|.

The PP of hypergraph H is to obtain the partition Pi ∈ P such that:

∀(Pi ∈ P) (Pi ≠ ∅),

∀(Pi, Pj ∈ P) ([Pi ≠ Pj → Xi ∩ Xj = ∅] ∧ [Ei ∩ Ej = Eij ∨ Ei ∩ Ej = ∅]),

⋃ Xi = X and ⋃ Ei = E, with both unions taken over i = 1, ..., k.

Figure 1. The Principle of a Genetic Algorithm as Applied to Physical Design of VLSI Systems

The cost function for partitioning hypergraph H into parts H1, H2, ..., Hl is

K = Σ Σ Kij, summed over i = 1, ..., l and j = 1, ..., l with j ≠ i,

where Kij is the number of connections (hyperedges) between parts Hi and Hj.

The task of partitioning is to minimize K. This is the criterion we will use for the evaluation function.
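As an illustration, here is a minimal Python sketch of one reasonable reading of this cost: represent the hypergraph as a list of hyperedges and a partition as a node-to-part assignment, and count the hyperedges whose nodes are spread over more than one part. The data and names (cut_cost, hyperedges, assignment) are assumed for illustration, not taken from the paper:

```python
def cut_cost(hyperedges, assignment):
    """Cost K of a partition: the number of hyperedges whose nodes fall
    into more than one part (the interconnections to be minimized)."""
    return sum(
        1 for edge in hyperedges
        if len({assignment[node] for node in edge}) > 1
    )

# Tiny illustrative hypergraph (assumed data):
hyperedges = [{"x1", "x2"}, {"x2", "x3", "x4"}, {"x4", "x5"}, {"x5", "x6"}]
assignment = {"x1": 0, "x2": 0, "x3": 0, "x4": 1, "x5": 1, "x6": 1}
print(cut_cost(hyperedges, assignment))  # -> 1 (only {x2, x3, x4} spans both parts)
```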

The algorithm starts with an initial (randomly generated) population of partitions (solutions) (P1, ..., Pk). We sort this population based on each solution's K value and then calculate Kavg. Once Kavg is calculated, all solutions Pi with Ki < Kavg survive to the next generation; all Pi with Ki ≥ Kavg are removed from the population.
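A sketch of this selection rule in Python, reusing the cut_cost function from the previous sketch (assumed names):

```python
def select_survivors(population, cost):
    """Selection as described in the text: sort by cost K, compute the
    population average Kavg, and keep only individuals with K below Kavg."""
    scored = [(cost(individual), individual) for individual in population]
    k_avg = sum(k for k, _ in scored) / len(scored)
    scored.sort(key=lambda pair: pair[0])  # best (lowest K) first
    return [individual for k, individual in scored if k < k_avg]

# Usage with the cut_cost sketch above (assumed names):
# survivors = select_survivors(list_of_assignments,
#                              lambda a: cut_cost(hyperedges, a))
```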

Figure 2 shows the application of the simulated evolution method to the partitioning problem. At the first stage, a set of alternative variants of solutions (partitionings) is generated by a constructive grouping algorithm [3-5,18], which includes a step of random search. This stage is the formation of the initial population of solutions. At the second stage, we calculate K1, ..., Kk for the members of the population and estimate the average K. We then perform the sorting and selection procedures. We get a set of pairs of solutions (P1, P2), ..., (Ps, Pt). Each pair, with some probability, is subjected to one of a set of problem-specific crossover operators, CO1, CO2, ..., CO5.

Crossover operator CO1, for example, works as follows: call the selected pair of elements parent 1 and parent 2. Elements from some row or some area with high fitness are selected from parent 2, and these elements are passed to parent 1. Notice that while genetic algorithms in general select regions for performing operations at random along a chromosome, we have chosen to create a more specialized operator. This is enabled by the fact that our fitness function, F(n), can be evaluated for an arbitrary number, n, of elements in a chromosome (or a subset of a chromosome). The corresponding operation is performed on parent 1. This results in the formation of two new solution variants (children or offspring).

Crossover operations can be performed on all pairs or on a subset of pairs of the population. Then the offspring are added to the population, and all members are ranked according to the evaluation or fitness function (the objective function, in optimization terminology). In this particular implementation, the lowest-ranking individuals are dropped at this point until the cardinality of the new set is the same as that of the initial population. The survivors form a new set of solutions, or new current population. This very stringent selection policy strongly concentrates the search in the region of the local optima represented in the current population. The final stage is to perform mutation, inversion, and crossmutation on this set. The goal of this stage is to increase the diversity in the current population of solutions. The mutation operator modifies some members of the population by performing a series of random interchanges; such deliberate degradation of some solutions gives new information and helps to overcome the (evolutionarily) overly stringent selection performed at the crossover stage. It is one of the mechanisms used to try to avoid premature convergence at local optima. After that stage, the process is repeated, using the current population resulting after application of these final operators.
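Putting the stages together, the sketch below is one possible reading of the cycle in Figure 2, in Python. It is illustrative only: a generic uniform recombination stands in for the problem-specific operators CO1-CO5, and the weight limit B on each part is not enforced:

```python
import random

def evolve(hyperedges, nodes, n_parts, pop_size=30, generations=100):
    """Illustrative reading of the cycle in Figure 2: random initial
    partitions, recombination of the better-ranked half, truncation back
    to the original population size, then light random mutation."""
    def cost(assign):
        return sum(1 for e in hyperedges
                   if len({assign[x] for x in e}) > 1)

    def random_partition():
        return {x: random.randrange(n_parts) for x in nodes}

    def recombine(a, b):
        # Generic uniform recombination (stand-in for CO1-CO5).
        return {x: (a[x] if random.random() < 0.5 else b[x]) for x in nodes}

    def mutate(a):
        child = dict(a)
        child[random.choice(nodes)] = random.randrange(n_parts)
        return child

    population = [random_partition() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        offspring = [recombine(*random.sample(population[:pop_size // 2], 2))
                     for _ in range(pop_size)]
        # Rank parents plus offspring, drop the worst until size is restored.
        population = sorted(population + offspring, key=cost)[:pop_size]
        # Deliberate degradation of a few survivors to preserve diversity.
        population = [mutate(p) if random.random() < 0.1 else p
                      for p in population]
    return min(population, key=cost)

# Example run on the tiny hypergraph from the earlier sketch:
nodes = ["x1", "x2", "x3", "x4", "x5", "x6"]
hyperedges = [{"x1", "x2"}, {"x2", "x3", "x4"}, {"x4", "x5"}, {"x5", "x6"}]
best = evolve(hyperedges, nodes, n_parts=2)
print(best, "cut:", sum(1 for e in hyperedges if len({best[x] for x in e}) > 1))
```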

Figure 2. The Method of Simulated Evolution as Applied to Partitioning

Note that in the partitioning process above, we are seeking to minimize K(H), the number of interconnections among the various partitions of the circuit. However, that partitioning also tends strongly to minimize the wire length, L(H), and the number of intersections, I(H). Therefore, we are simultaneously performing a major portion of the placement task.


We further note that the genetic operators defined for solving this problem, and the directed method of their application (as described for CO1, for example), together with the stringency of the selection operation, are intended to reduce the search time for finding useful solutions for relatively large problems. However, they are sufficiently different from the operation of the genetic algorithm as described, for example, by Holland [1], that his results regarding robustness and convergence to global optima cannot be assumed to hold. Instead, the usefulness of the scheme described is demonstrated by its ability to solve realistic problems in practical times on widely available computer hardware. The degree of faithfulness to the evolutionary model is not maximal, but that model serves as a valuable source of motivation for new techniques and a framework in which to understand the behavior of the algorithms.

Results

The method described above has been implemented on an IBM PC/AT. The heuristic process described, coded as a C-language program, is reasonably efficient, approximately quadratic in |X|. This algorithm ran about 20% faster than a branch-and-bound procedure searching one best branch, and produced identical results for typical examples.

Table 1 gives a comparison of various types of runs against the approach of iterative pair change using the same hardware.

Table 1:

Experiment | Number of Elements | Population n=18 | Population n=38 | Population n=48 to 200 | Iterative Pair Change
1          | 50                 | 1.11            | 1.15            | 1.24                   | 1.15
2          | 180                | 1.13            | 1.21            | 1.28                   | 1.22
3          | 300                | 1.14            | 1.21            | 1.26                   | 1.17
4          | 500                | 1.14            | 1.24            | 1.30                   | 1.23
5          | 1000               | 1.14            | 1.24            | 1.32                   | 1.24

The values shown in the columns are K', the ratio of internal to external edges, based on the average K' over 106 runs. More formally, K' is the number of edges between nodes within the same partition set Pi in P (the internal edges) divided by the number of edges between nodes in different partition sets (the external edges). One can note that for all examples with population sizes of 48 to 200, the GA approach outperformed the iterative pair change approach. Moreover, even for a small population size of 18, the GA approach gave results equal or similar to those of the iterative pair change approach.
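For reference, a small Python sketch of one reading of the K' metric (internal edges divided by external edges), evaluated on an assumed toy graph rather than on the circuits of Table 1:

```python
def k_prime(edges, assignment):
    """K' as read from the text: the ratio of internal edges (both endpoints
    in the same part) to external edges (endpoints in different parts)."""
    internal = sum(1 for u, v in edges if assignment[u] == assignment[v])
    external = len(edges) - internal
    return internal / external if external else float("inf")

# Assumed toy graph, not data from Table 1:
edges = [("x1", "x2"), ("x2", "x3"), ("x3", "x4"), ("x4", "x5"), ("x5", "x6")]
assignment = {"x1": 0, "x2": 0, "x3": 0, "x4": 1, "x5": 1, "x6": 1}
print(k_prime(edges, assignment))  # 4 internal / 1 external = 4.0
```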

Concluding Discussion

The genetic algorithm strategy is a powerful method for avoiding premature convergence at local optima. It has proved its efficiency in partitioning and placement on gate array chips, together with efficient channel routing. A very important open question in genetic algorithms is the optimum or near optimal size for the population. The genetic algorithm, by its nature, admits of easy parallelization, and parallel versions of genetic algorithms hold the promise of providing nearly linear speedup of calculation with processor number.

REFERENCES

1. J. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975.

2. D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley Publishing Company, Inc., 1989.

3. V. Koryachko, V. Kureichik, I. Norenkov, CAD Basics, Mir Publishers, Moscow, 1990, p. 325.

4. B. Preas and M. Lorenzetti, eds., Physical Design Automation of VLSI Systems, Benjamin Cummings Publishing Company, Inc., 1988.

5. T. Lengauer, Combinatorial Algorithms for Integrated Circuit Layout, Applicable Theory in Computer Science, John Wiley and Sons Ltd., 1990.

6. E.P. Stabler, V.M. Kureichik, and V.A. Kalashnikov, "Placement Algorithm by Partitioning for Optimal Rectangular Placement," in Proc. 16th Design Automation Conf., San Diego, June 1979, pp. 24-25.

7. J.P. Cohoon, S.U. Hegde, W.N. Martin, and D.S. Richards, "Distributed Genetic Algorithms for the Floorplan Design Problem," IEEE Transactions on CAD, Vol. 10, No. 4, April 1991, pp. 483-492.

8. J.P. Cohoon and W.D. Paris, "Genetic Placement," IEEE Transactions on CAD, Vol. 6, No. 6, November 1987, pp. 956-964.

9. R.M. Kling and P. Banerjee, "ESP: Placement by Simulated Evolution," IEEE Transactions on CAD, Vol. 8, No. 3, March 1989, pp. 245-256.

10. R.M. Kling and P. Banerjee, "Empirical and Theoretical Studies of the Simulated Evolution Method Applied to Standard Cell Placement," IEEE Transactions on CAD, Vol. 10, No. 10, October 1991.

11. B.Kernighan and S. Lin, "An Efficient Heuristic Procedure for Partitioning Graphs," Bell System Technical Journal, Vol. 49, Feb. 1970, pp. 291-307.

12. Y. Saab and V. Rao, "An Evolution-Based Approach to Partitioning ASIC Systems," Proc. 26th Design Automation Conference, June 1989, pp. 767-770.

13. Y. Saab and V. Rao, "Stochastic Evolution: A Fast Effective Heuristic for Some General Layout Problems," Proc. 27th Design Automation Conference, June 1990, pp. 16-31.

14. W. Siedlecki and J. Sklansky, "A Note on Genetic Algorithms for Large-scale Feature Selection," Pattern Recognition Letters, October 1989, pp. 335-347.

15. E. Falkenauer, "A Genetic Algorithm for Clustering," pers. comm., 1992.

16. W. Punch, P. Min, E. Goodman, and A. Lai, "Intelligent Clustering of High-Dimensionality Data Using Genetic Algorithms," manuscript in preparation.

17. E. Falkenauer, "A Genetic Algorithm for Grouping," Proceedings of the 5th International Symposium on Applied Stochastic Models and Data Analysis, Granada, Spain, April 1991, pp. 23-26.

18. M.A. Breuer, "Min-cut Placement," Design Automation & Fault-Tolerant Computing, Vol. 1, No. 4, Oct. 1977, pp. 343-362.

19. R. Chandrasekharam, S. Subhramanian and S. Chaudhury, "Genetic Algorithm for Node Partitioning Problem and Application in VLSI Design," IEE Proceedings-E, Vol. 140, No. 5, Sept. 1993.

20. K. Shahookar and P. Mazumder, "Genetic Approach to Standard Cell Placement Using Meta-Genetic Parameter Optimization," IEEE Transactions on Computer-Aided Design, Vol. 9, No. 5, May 1990.

UDC 681.324

V.V. Miagkikh, A.P. Topchy, S.A. Chertkov GENETIC ALGORITHMS: SOME NEW FEATURES FOR PREMATURE CONVERGENCE AVOIDANCE

One of the major difficulties with Genetic Algorithms (GAs) (and in fact with most search algorithms) is that premature convergence, i.e. convergence to a suboptimal solution, sometimes occurs. This paper describes some new features in GAs with local optimization/preferences aimed at avoiding premature convergence. The described approach was successfully applied to the genetic solution of the well-known Traveling Salesman Problem [1] (TSP) and the Graph Coloring Problem [5].

In the case of the TSP, the standard Greedy Crossover [1] very easily kills all 'bad' changes which are produced by randomizing genetic operators. It was experimentally noted that recombination of the best-ranking individual with a not-so-good one almost always produces an offspring which is the same as the better of the parents. If we place such an individual into the population, it does not introduce anything new. Such 'good'
