
Magazine of Civil Engineering. 2022. 109(1). Article No. 10910

Magazine of Civil Engineering

journal homepage: http://engstroy.spbstu.ru/

ISSN 2712-8172

DOI: 10.34910/MCE.109.10

Algorithm for building structures optimization based on Lagrangian functions

T.L. Dmitrieva a*, Kh. Ulambayar b

a Irkutsk National Research Technical University, Irkutsk, Russia
b Mongolian University of Science and Technology, Mongolia
*E-mail: dmitrievat@list.ru

Keywords: modified Lagrangian multiplier method, building structures, optimum design, steel beam, truss structure, finite element model

Abstract. A review of modern algorithms and optimization programs is presented, based on which it is concluded that there is no application software in the field of optimal design of building structures. As part of solving this problem, the authors proposed numerical optimization algorithms based on conditionally extreme methods of mathematical programming. The problem of conditional minimization is reduced to a problem of an unconditional extremum using two modified Lagrangian functions. The advantages of the proposed methodology are a wide range of convergence, the absence of convexity requirements on the functions over the admissible set of variation parameters, and a high convergence rate, which can be achieved by adjusting the parameters of the objective and constraint functions. Verification of the developed methodology was carried out by solving a well-known example of ten-bar truss optimization. A comparison of the results obtained by other sources with the authors' own results confirmed the effectiveness of the presented algorithms. As a further example, problems of optimizing the cross-section of a steel beam have also been solved. Automation of the algorithms is performed in the mathematical package MathCAD, which allows one to visually trace the sequence of commands and to obtain graphs that reflect the state of the task at each iteration. Thus, the authors obtained an original methodology for solving the optimization problem of flat bar structures, which can be extended to the problem of optimal design of general structures, where the optimality criterion is defined as material consumption, and the given structural requirements are presented as constraint functions.

1. Introduction

Most problems of optimal structural design are formulated as nonlinear programming problems and have been solved using first- or second-order gradient methods, or direct (zero-order) methods of constrained and unconstrained minimization [1].

For the first time, the most general statement of the optimization problem was proposed by L. Schmit [2], who indicated the admissibility of combining structural analysis based on the finite element model with nonlinear programming methods in the presence of various forms of constraints. In 1979, a monograph was published by the American scientists E. Haug (Edward J. Haug) and J. S. Arora (Jasbir S. Arora) [3]. This work gave a serious impetus to the development of the applied direction of optimization, outlining general approaches to the analysis and synthesis of mechanical systems. The 1970s-80s also saw numerous software implementations of optimization algorithms.

Note that the nonlinear programming methods embedded in these algorithms have a rigorous mathematical justification of their convergence conditions, but are rather laborious for large-dimensional problems, when it is necessary to optimize, for example, the topology of plane and spatial objects, to explore dynamic processes, etc.

Dmitrieva, T.L., Ulambayar, Kh. Algorithm for building structures optimization based on Lagrangian functions. Magazine of Civil Engineering. 2022. 109(1). Article No. 10910. DOI: 10.34910/MCE.109.10

© Dmitrieva, T.L., Ulambayar, Kh., 2022. Published by Peter the Great St. Petersburg Polytechnic University. This work is licensed under a CC BY-NC 4.0 license.

Nevertheless, these algorithms still find application in applied optimization problems and in their software implementations [4, 5]. Here, the static and dynamic analysis of structures is performed using the finite element method, whose algorithms have over the past decades been developed for tasks of a special kind [6, 7].

Since the 1990s, a large number of studies have appeared in the field of optimization of engineering systems that use metaheuristics. Metaheuristic algorithms explore the search space using probabilistic transition rules [8-10]. Here, a possible solution is sought by random selection, combination and variation of the desired parameters, with the implementation of mechanisms resembling biological evolution or physical processes occurring in nature. Some examples of such heuristic algorithms are: the genetic algorithm (GA) [11], ant colony algorithm (ACO) [12], artificial bee colony algorithm (ABC) [13], particle swarm optimization (PSO) [14], firefly algorithm (FA) [15], crow search algorithm (CSA) [16], grey wolf algorithm (AGW) [17], bat algorithm (BA) [18], simulated annealing algorithm (SA) [19] and others [20-22]. In [23-28], practical problems of optimizing building structures using metaheuristic algorithms are presented.

Most theoretical research in the field of numerical optimization of structures has been accompanied by software implementation. Among the leading software systems that include optimization modules, the "heavy" multipurpose universal packages "MSC Nastran", "ANSYS" and "ABAQUS" should be noted first of all. At the same time, despite the many approaches, the problem of optimizing complex technical objects in acceptable time and with a given error tolerance still remains relevant. Over the past decade, a new direction has emerged related to the construction of optimization models based on neural networks, where the search process is controlled by intelligent systems [29].

In Russian design practice, the most fully automated optimization calculations are implemented in aircraft manufacturing (for example, the "IOSO NM" software package). A sufficient number of publications are devoted to the optimization of building structures and describe algorithms and software developments [30]; however, these programs are of a research or narrowly applied nature. In general, it should be noted that the optimization of construction projects is not yet widely used in real design.

This is because the designer, on the one hand, should not have to master the intricacies of optimization algorithms, while at the same time such calculations require accompanying "manual" control, expressed, for example, in setting parameters that significantly affect convergence, in tracking local extrema, etc. Thus, the construction of universal optimization algorithms with a wide region of convergence that do not require serious "tuning" to the task is in strong demand, and largely determines the adoption of such algorithms in design practice.

2. Methods

In this paper, we propose an algorithm for the numerical optimization of flat rod systems. In its most general form, this problem can be formalized as a conditionally-extremal nonlinear programming problem (NLP) [31-34].

minimize f(x), x ∈ E^nx,  (1)

subject to: g_j(x) ≤ 0, j = 1, 2, …, m,  (2)

x_i^L ≤ x_i ≤ x_i^U, i = 1, 2, …, nx,  (3)

where f(x) and {x} are the objective function of the variable parameters and the vector of these parameters on the interval {x^L}…{x^U}, and g_j(x) are the constraint functions in the form of equalities and inequalities.

The problem of constrained minimization of a function of many variables (1)-(3) can be reduced to a problem of unconstrained minimization using the Lagrangian function F_L:

F_L = f(x) + {Y}^T {g},  (4)

where {Y} is the Lagrange multiplier vector. In this case, the solution of problem (1)-(3) coincides with the saddle point of the Lagrangian function, which is determined from the stationarity condition of this function with respect to x and y:

∂F_L/∂x_i = 0, i = 1, 2, …, nx,
∂F_L/∂y_j = 0, j = 1, 2, …, m.  (5)

These conditions were obtained for the convex optimization problem by Kuhn and Tucker, and can serve as a check that a reliable optimum has been reached.

Since the problem is solved in the space of two vectors {X} and {Y}, the vector {X} is usually defined as a vector of direct variables, and the elements of the vector {Y} are called dual variables.
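The stationarity conditions (5) are easy to check numerically. The sketch below does this for a hypothetical one-variable problem whose saddle point is known in closed form; the problem, its optimum and all helper names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical one-variable problem: minimize f(x) = x^2
# subject to g(x) = 1 - x <= 0; the known optimum is x* = 1, y* = 2.
def f(x): return x * x
def g(x): return 1.0 - x

def grad_FL(x, y, h=1e-6):
    """Central-difference derivative of F_L = f + y*g with respect to x."""
    FL = lambda t: f(t) + y * g(t)
    return (FL(x + h) - FL(x - h)) / (2.0 * h)

# At the saddle point, both stationarity conditions (5) hold:
assert abs(grad_FL(1.0, 2.0)) < 1e-6   # dF_L/dx = 0 at (x*, y*)
assert abs(g(1.0)) < 1e-12             # the active constraint makes dF_L/dy = 0
```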

One of the significant drawbacks of the Lagrangian function is that it is applicable only to a limited class of convex, separable programming problems. To construct methods with a wider region of convergence, applicable to finding a local extremum in non-convex problems, additional terms must be introduced into the structure of the function, which shift the optimum over the search iterations, thus modifying the standard Lagrangian function. Various versions of modified Lagrangian function methods are described in [35-39].

The purpose of this article is to illustrate the effectiveness of modified Lagrangian function methods, and of algorithms based on them, in solving problems of optimizing the geometric parameters of flat bar structures by the criterion of minimum volume while fulfilling the standard requirements for strength and rigidity. The stated goal involves solving the following tasks:

• investigation of the properties of the modified Lagrangian functions;

• development of numerical algorithms based on these functions;

• solving problems using the proposed methods;

• comparison of the efficiency of algorithms for the rate of convergence and error in the results.

We introduce two modified Lagrangian functions, F_P and F_M:

F_P = kf·F_L + 0.5 {g}^T [δ][k] {g} + 0.5 kf {Y}^T ([δ] − [I]) {ΔZ},  (6)

F_M = kf (1 − τ)(f(x) + {Y}^T [δ]{g}) − 0.5 τ | {∂f/∂x} + [∂g/∂x][δ]{Y} |².  (7)

Here [I] is the identity matrix; [k] is the diagonal matrix of penalty coefficients; kf is a normalization factor introduced to increase computational stability; τ is a convergence control constant; ΔZ_j is the amount by which constraint j is shifted into the admissible region; [δ] is a diagonal matrix of Boolean variables whose elements are determined from the condition:

if g_j + ΔZ_j > 0, then δ_jj = 1, otherwise δ_jj = 0.

Thus, the function F_P can be interpreted as the sum of the Lagrangian function and the penalty for violation of the constraints shifted by the amount ΔZ into the admissible region.
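For concreteness, the evaluation of F_P from (6) can be sketched as a small NumPy helper (an illustration, not the authors' MathCAD code; the function name, argument layout, and the toy inputs in the usage below are assumptions):

```python
import numpy as np

def F_P(x, y, f, g, k, dZ, kf=1.0):
    """Sketch of the modified Lagrangian F_P from (6).
    [delta] is the diagonal Boolean matrix built by the rule
    delta_jj = 1 if g_j(x) + dZ_j > 0, else 0 (all names hypothetical)."""
    gx = np.asarray(g(x), dtype=float)
    delta = (gx + dZ > 0).astype(float)          # diagonal of [delta]
    FL = f(x) + y @ gx                           # classical Lagrangian (4)
    penalty = 0.5 * (gx * delta * k) @ gx        # 0.5 g^T [delta][k] g
    shift = 0.5 * kf * (y * (delta - 1.0)) @ dZ  # 0.5 kf Y^T ([delta]-[I]) dZ
    return kf * FL + penalty + shift
```

At a point where a constraint is exactly active (g_j = 0), the penalty and shift terms vanish and F_P reduces to kf·F_L, which matches the interpretation given above.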

The function F_M is the sum of the Lagrangian function and a penalty for not fulfilling the stationarity conditions at the point {X*}. If {X*}, {Y*} is the solution of problem (1)-(3), then the following relation holds:

F_M(X*, Y) ≤ F_L(X*, Y*) ≤ F_M(X, Y*).

An equality sign in this expression is possible only for X = X*, Y = Y*, since only then do the penalty terms vanish.

Consider iterative algorithms using the modified Lagrangian functions, each iteration of which includes in turn two main procedures:

• determination of the vector of direct variables {X^(t+1)};
• determination of the vector of dual variables {Y^(t+1)}.

The iterative process is terminated by the convergence condition:

|{X^t} − {X^(t−1)}| ≤ ε_x |{X^t}|,
ḡ_j ≤ ε_g, j = 1, …, m*,  (8)

or if the specified limit number of iterations it_lim is exceeded.

In expression (8), {ḡ} is the vector of potentially active constraints of dimension m*; ε_x, ε_g are predetermined calculation errors; t is the iteration number.
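Condition (8) can be expressed compactly; the sketch below is a hypothetical helper (not from the paper), assuming {ḡ} is passed already reduced to the potentially active set:

```python
import numpy as np

def converged(x_new, x_old, g_active, eps_x=1e-6, eps_g=1e-6):
    """Sketch of termination condition (8): closeness of successive primal
    iterates plus admissibility of the potentially active constraints
    (function and argument names are hypothetical)."""
    step_ok = np.linalg.norm(x_new - x_old) <= eps_x * np.linalg.norm(x_new)
    feas_ok = np.all(np.asarray(g_active, dtype=float) <= eps_g)
    return bool(step_ok and feas_ok)
```

In practice this check would sit at the bottom of the iteration loop, alongside the it_lim counter mentioned above.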

2.1. Direct method for solving the problem NLP

Consider an iterative algorithm for solving problem (1)-(3) which operates only with the function F_P(x, y) and reduces to finding the saddle point of this function from the condition

max_{y ∈ E^m} min_{x ∈ E^nx} F_P(x, y).

The most stable version of this algorithm is when the direct variables are determined from the condition of minimizing the function F_P:

{X^(t+1)} ∈ Arg min F_P(X, Y^t), {X^L} ≤ {X} ≤ {X^U},  (9)

and the dual variables are found by equating the stationarity conditions with respect to X of the Lagrangian function F_L (expression (4)) and of the function F_P (expression (6)), which after the corresponding transformations gives:

y_j^(t+1) = max(0, y_j^t + (k_jj / kf) g_j(x^(t+1))), j = 1, …, m.

This algorithm is investigated in [30]. In the comparative analysis with the other algorithm, we designate it as "direct", since the search for an extremum at iteration t is carried out only over the direct variables {X}.

3. Results and Discussion

Based on the foregoing, the authors propose a numerical algorithm to solve problems (1-3) using two modified Lagrangian functions - function (6) and function (7).

3.1. Combined method for solving the problem NLP

At each iteration of the algorithm, the function F_P (6) is likewise minimized over X to find the vector {X^(t+1)}, while the dual variables are determined by maximizing the function F_M:

{Y^(t+1)} ∈ Arg max F_M(X^(t+1), Y), Y ∈ E^m.  (10)

It is more convenient to write expression (10) explicitly. To do this, we substitute the expression of the Lagrangian function F_L into the function F_M (7). To maximize F_M with respect to y, we differentiate with respect to this variable and equate the result to zero:

∂F_M/∂y = kf (1 − τ)[δ]{g(x)} − 0.5τ (2[δ][∂g/∂x]^T {∂f/∂x} + 2[δ][∂g/∂x]^T [∂g/∂x][δ]{Y}) = 0.

Expand the brackets and simplify this expression:

kf (1 − τ)[δ]{g(x)} − τ[δ][∂g/∂x]^T {∂f/∂x} − τ[δ][∂g/∂x]^T [∂g/∂x][δ]{Y} = 0.  (11)

Since the function Fm is quadratic, expression (11) is reduced to a symmetric system of linear algebraic equations:

[W]{Y} = {P},  (12)

where [W] is a matrix whose dimension equals the number of active constraints (m* × m*), and {Y} is the vector of dual variables reduced to dimension m*. Each element of this matrix is determined by the product of vectors:

W_ij = {∂ḡ_i/∂x}^T {∂ḡ_j/∂x}.  (13)

In expression (13), the bar sign indicates that derivatives are taken only over the active constraints. The matrix [∂ḡ/∂x] has dimension nx × m*.

Element i of the vector {P} is formed by the expression:

P_i = (kf (1 − τ)/τ) ḡ_i − {∂ḡ_i/∂x}^T {∂f/∂x}.  (14)

In (14), the vectors {∂ḡ_i/∂x}, {∂f/∂x} have dimension nx, where nx is the number of variable parameters that belong to the admissible region (x_i^L < x_i < x_i^U).
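The dual step of the combined method, i.e. assembling and solving system (12), can be sketched as follows (a NumPy illustration whose forms of [W] and {P} follow from (11); the function and argument names are hypothetical, and clipping the solution at zero is our assumption about how inactive multipliers are handled):

```python
import numpy as np

def dual_update(dg, df, g_act, kf, tau):
    """Sketch of the combined-method dual step, system (12).
    dg: (nx, m*) derivatives of the active constraints,
    df: (nx,) gradient of the objective,
    g_act: (m*,) values of the active constraints."""
    W = dg.T @ dg                                      # W_ij, expression (13)
    P = (kf * (1.0 - tau) / tau) * g_act - dg.T @ df   # P_i, expression (14)
    Y = np.linalg.solve(W, P)                          # solve [W]{Y} = {P}
    return np.maximum(Y, 0.0)   # keep the dual variables non-negative
```

On the toy problem f(x) = x², g(x) = 1 − x at the optimum x* = 1 (where g is active, dg/dx = −1, df/dx = 2), this linear solve returns the known multiplier y* = 2.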

A distinctive feature of the combined algorithm is that, when a conditionally extremal problem is solved on its basis, there is no accuracy requirement in the search for the direct variables: their calculation at an iteration may carry an error and reflect only a certain movement towards the optimum.

A mixed form of these algorithms is also possible: at the first iteration, when a good initial approximation {X^0} is known and there are no recommendations for assigning the dual variables, the combined approach is used, and at subsequent iterations the dual variables are recalculated by the direct method.

3.2. Algorithm for solving conditional-extreme problems for NLP using the combined method

The sequence of operations of the above algorithms differs only in the calculation of dual variables.

Since in the combined method this procedure is much more complicated, we present here a flowchart of the algorithm of the combined method (Fig. 1).

Figure 1. The flowchart of the algorithm for solving the conditional problem NLP.

3.3. Illustration of the proposed algorithm

Let us illustrate the operation of the algorithm by the example of the problem of optimal design of flat web steel I-beam, working on bending (Fig. 2).

Figure 2. Calculation scheme of beam. Initial data: L = 9 m; q = 0.08 MN/m.

Physical characteristics of the beam material: modulus of elasticity E = 2.06·10^5 MPa; stress restrictions Ru = 230 MPa; Rs = 133 MPa.

Problem statement

It is required to select the parameters of the beam section at a given interval by minimizing its volume, provided that the regulatory requirements for strength and rigidity are met.

Three parameters of the cross-section of the composite I-beam are varied (Table 1).

Table 1. Variable parameters.

Parameter number | Designation | Description
X1 | h | Web depth
X2 | B | Flange width
X3 | tf = tw | Flange and web thickness

The initial value of the varied parameters and the limits of their change:

X1 = h = 0.50 m [0.20-1.20 m]; X2 = B = 0.1 m [0.07-0.60 m]; X3 = tf = tw = 0.03 m [0.0008-0.08 m].

The objective function f (x) represents the volume of the beam:

f(x) = (X1·X3 + 2·X2·X3)·L.

The restrictions used in the task are checks for strength and stability, set in accordance with the relevant sections of the Russian Construction Rules SP 16.13330.2001 "Steel structures". All restrictions, reduced to dimensionless form, are listed below.

The condition ensuring the local stability of the web, in accordance with paragraph 8.5.1 of the above Construction Rules SP:

g1 = (X1/(3.5·X3))·√(Ru/E) − 1 ≤ 0,

where Ru is the calculated resistance by yield strength. The check on the strength by normal stress:

g2 = M/(W_n,min·Ru) − 1 ≤ 0,

where M and Wnmin are the calculated value of bending moment and the minimum moment of resistance. The check on the strength by shear stress:

g3 = Q·S/(J·tw·Rs) − 1 ≤ 0,

where Q and Rs are the calculated value of the shear force and the calculated shear resistance, J is the moment of inertia and S is the static moment of area.

The check by rigidity:

g4 = Δmax/[Δ] − 1 ≤ 0,

where Amax and [A] are the maximum value of displacement in the beam and the maximum permissible value of displacement ([A] = L/500 = 0.018 m).

Thus, the number of variables is nx = 3, and the number of constraints is m = 4.
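As an independent cross-check of this problem statement (not part of the authors' methodology), the same beam problem can be posed to a general-purpose NLP solver. The sketch below uses SciPy's SLSQP with standard simply supported beam formulas (M = qL²/8, Q = qL/2, Δmax = 5qL⁴/(384EJ)); the section property formulas and the exact form of the four dimensionless checks are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

L, q = 9.0, 0.08                  # span, m; load, MN/m
E, Ru, Rs = 2.06e5, 230.0, 133.0  # MPa
M, Q = q * L**2 / 8, q * L / 2    # design moment, MN*m; shear, MN
d_lim = L / 500                   # allowable deflection, m

def section(x):
    h, B, t = x                   # web depth, flange width, thickness (tf = tw = t)
    J = t * h**3 / 12 + 2 * (B * t**3 / 12 + B * t * ((h + t) / 2) ** 2)
    W = 2 * J / (h + 2 * t)                   # section modulus
    S = B * t * (h + t) / 2 + t * h**2 / 8    # static moment of half-section
    return J, W, S

def constraints(x):
    h, B, t = x
    J, W, S = section(x)
    return np.array([
        h / (3.5 * t) * np.sqrt(Ru / E) - 1,       # g1: web local stability
        M / (W * Ru) - 1,                          # g2: normal stress
        Q * S / (J * t * Rs) - 1,                  # g3: shear stress
        5 * q * L**4 / (384 * E * J) / d_lim - 1,  # g4: rigidity
    ])

res = minimize(lambda x: (x[0] * x[2] + 2 * x[1] * x[2]) * L,   # volume, m^3
               x0=[0.5, 0.1, 0.03], method="SLSQP",
               bounds=[(0.2, 1.2), (0.07, 0.6), (0.0008, 0.08)],
               constraints={"type": "ineq", "fun": lambda x: -constraints(x)})
print(res.x, res.fun)   # optimized section parameters and volume, m^3
```

With these formulas, the initial point reproduces the residuals reported later for the first iteration (for example, g4 ≈ 1.51), so the solver's optimum can be compared directly with Table 2.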

The parameters of the optimization algorithm in the direct method were adopted as follows:

• minimum value of the penalty coefficient (kmin) - 80;


• maximum value of shear (AZmax) - 0.2;

• normalization coefficient of the objective function (kf) - 3·10^6;

• normalization coefficient of constraint functions (kg) - 50.

Since the beam is statically determinate, the calculated forces do not change during variation of the cross-section dimensions.

For the numerical solution, two iterative algorithms of the modified Lagrangian function were applied: direct and combined.

Here is the sequence of operations of the direct method with the notation used in the MathCAD program:

1. Input of initial data:

1.1. Assignment of the deterministic geometric (L) and physical (E, Ru, Rs) parameters of the problem, the intensity q of the uniformly distributed load, and the limiting value of the displacement Δlim.

1.2. Assignment of the optimization algorithm parameters: the number of components of the constraint vector m; the normalization coefficients of the objective function and the constraint vector kf, kg; the minimum value of the penalty coefficient kmin; and the maximum value of the shift outside the admissible region ΔZmax.

1.3. Assignment of the initial variation parameters (vector X) and the range of variation (vectors Xmin and Xmax).

1.4. Assignment of the initial values of the components of the vector of dual variables y_i (i = 1…m).

2. Formalization of the objective function f(X) and the constraint function vector g(X) through the variable parameters:

2.1. Formation of the objective function f(X) through the variable parameters.

2.2. Writing a subroutine for calculating the moment of inertia of the I-section, J(X).

2.3. Writing a subroutine for calculating the static moment of the I-section, S(X).

2.4. Formation of the constraint expressions g1-g4 by substituting into them the calculated values of the bending moment and shear force, expressed through the variable parameters.

2.5. Formation of the constraint vector g(X), whose components g1-g4 are multiplied by the normalization factor kg.

3. Organization of the cycle where the iterations of the search optimization algorithm are performed:

3.1. Initializing the iteration counter it.

3.2. Formation of the diagonal matrix of penalty coefficients by assigning its diagonal elements k_i,i (i = 1…m).

3.3. Formation of the elements of the shift vector of the admissible region of the constraints, ΔZ_i (i = 1…m).

3.4. Formation of the diagonal matrix of Boolean variables by assigning its elements δ_i,i.

3.5. Formation of the Lagrange function F_L(X), as well as the modified Lagrange function F_P(X).

3.6. Formation of the block for finding the minimum of the function F_P(X) over X on the interval Xmin to Xmax, where two search methods of unconstrained minimization are included: the Conjugate Gradient method and the Quasi-Newton method. When each of these was tested, a practical coincidence of results was obtained.

3.7. Formation of the vector F of values of the objective function at the iterations.

3.8. Identifying the maximum value in the vector of constraints at iteration it and entering it into the vector G.

3.9. Finding the ordinal number of the maximum element in the vector of constraints at iteration it and entering it into the vector kk.

3.10. Calculation of the elements of the vector of dual variables y_i (i = 1…m).

4. Output of results:

4.1. Numerical values of the vectors of changes in the objective function and in the maximum values of the constraint functions at the iterations.

4.2. The values of the optimal parameters X of the beam section.

4.3. Values of the constraint residuals without the coefficient kg.

4.4. Graphs of changes in the objective function and in the maximum values of the constraint functions at the iterations.

4.5. Rounding of the beam section parameters to the required number of digits.

The solution was carried out in the universal mathematical package MathCAD. The MathCAD program listings are shown in Fig. 3-13.

The results of comparative calculations for the two methods are given in Table 2.

Figure 3. Implementation of numerical methods in the mathematical package MathCAD: initial data of the task.

Figure 4. Subroutines used in the algorithm.

Figure 5. Beginning of the optimization cycle.

Figure 6. Optimization results.

Figure 7. Optimal solutions with rounding taken into account.

Figure 8. Graphs of changes in the objective function and maximum constraint residuals at iterations.

The combined method algorithm differs in the following items:

Item 1.2, where the optimization algorithm parameters are assigned: the assignment of the parameter τ, which enters expression (14), is added.

Item 3.10, where the vector of dual variables {Y} is formed, includes the solution of the system of equations (12). For this, the matrix [W] and the vector {P} are formed within the optimization cycle. This requires writing the subroutines dr_f (formation of the vector of derivatives of the objective function with respect to the varied parameters X) and dr_g (formation of the matrix of derivatives of the constraint functions with respect to the varied parameters X). In these subroutines, the input parameter Δ specifies the offset step from the current point in the numerical determination of the derivatives.

Figure 9. Subroutines used in the algorithm.

Figure 10. Beginning of the optimization cycle.

Figure 11. Optimization results.

Figure 12. Optimal solutions with rounding taken into account.

Figure 13. Graphs of changes in the objective function and maximum constraint residuals at iterations.

Table 2. Comparative results of calculations.

№ | Name | Direct method | Combined method
1 | Iteration number | 11 | 5
2 | Value of the objective function, m³ | f(x) = 0.122791 | f(x) = 0.123432
3 | Residuals in active constraints | g1 = 0.431828·10^-6; g2 = -0.627293·10^-4; g4 = 0.136121·10^-5 | g1 = -0.499989·10^-3; g2 = -0.499989·10^-3
4 | X1, m | 1.02705 | 1.03812
5 | X2, m | 0.18175 | 0.1725
6 | X3, m | 9.8051·10^-3 | 9.91578·10^-3

The combined method showed better convergence, with lower accuracy in the constraint residuals, which do not change on subsequent iterations. Note that the value of the objective function here is 0.6 % higher than in the direct method.

The conditions for achieving the optimal solution at iterations, according to expression (8), were as follows:

a) equality of variable parameters at 2 adjacent iterations;

b) not exceeding the limits of the allowable area, taking into account the accepted errors for the constraint functions.

Thus, the accuracy of the solution was consistent with the accuracy in the calculation of the potentially active constraints. The resulting solutions can be rounded to the required number of digits, which is reflected in the program listings.

3.4. Testing the proposed algorithm

The ten-bar truss optimization design problem is presented as verification of the proposed methodology. Let us consider a variant of this problem, set forth in the monograph [3] and article [40]. The geometry and loading conditions of ten-bar truss are shown in Fig. 14.

Figure 14. The geometry and applied loads for a ten-bar truss.

Initial data: d = 9.144 m (360 in); F = 444.822 kN (100 kips). All members were made of a material with modulus of elasticity E = 6.89476·10^7 kPa (10^4 ksi); the stress restrictions are Ru = ±1.72369·10^5 kPa (±25.0 ksi). The displacements of all nodes in all directions had to be less than [Δ] = ±0.0508 m (±2.00 in).

Problem statement. It is required to select the cross-sectional area of the truss at a given interval by minimizing its volume, provided that regulatory requirements for strength and stiffness are met.

The available cross-sectional areas for each member of the truss vary between Amin = 6.4516·10^-5 m² (0.1 in²) and Amax = 0.02258 m² (35 in²). The initial area values are A0 = 3.2258·10^-3 m² (5 in²).

The objective function f (x) represents the weight of the truss.

The constraints are as follows:

The strength check for the i-th element of the truss:

$g_i = \dfrac{|N_i|}{R_u A_i} - 1 \le 0, \qquad i = 1, 2, \ldots, 10$

In the test examples given in various sources, the calculation is performed without accounting for buckling of the compressed members.

The constraint on the vertical displacement at joint 2:

$g_{11} = \dfrac{|\Delta_2|}{[\Delta]} - 1 \le 0$
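Given the member forces and the joint-2 displacement from an FE analysis, both constraint groups can be evaluated in a few lines. This is a sketch; the function and variable names are assumptions:

```python
import numpy as np

def constraints(N, A, delta2, Ru, delta_lim):
    """Normalized constraint vector: one strength check
    |N_i|/(Ru*A_i) - 1 <= 0 per member, followed by the single
    stiffness check |delta2|/delta_lim - 1 <= 0."""
    g_strength = np.abs(N) / (Ru * A) - 1.0
    g_stiff = abs(delta2) / delta_lim - 1.0
    return np.append(g_strength, g_stiff)
```

For the ten-bar truss this yields the 11-component vector g(X) used by the algorithm: negative entries are satisfied constraints, positive entries are violations.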

Thus, the number of variables is nx = 10 and the number of constraints is m = 11. Two iterative algorithms based on the modified Lagrangian function were applied: direct and combined. The features of implementing the algorithms for this problem were as follows:

1. Since the truss is statically indeterminate, the analysis was performed using the finite element method in displacements (the FEA subroutine).

2. The strength and stiffness constraint functions contain state parameters (stresses and displacements). Therefore, the FE analysis, accounting for the changed areas of the truss elements, was recomputed at each iteration.
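Because the areas change at every iteration, stresses and displacements are state-dependent and must be refreshed each time, while purely geometric quantities such as member lengths stay fixed and can be computed once outside the loop. A sketch of the once-only part (hypothetical names):

```python
import numpy as np

def member_lengths(xp, yp, ni):
    """Lengths of all members from nodal coordinates; ni holds the
    (start, end) node indices of each member. Computed a single time,
    since the geometry does not change during optimization."""
    L = np.empty(len(ni))
    for e, (a, b) in enumerate(ni):
        L[e] = np.hypot(xp[b] - xp[a], yp[b] - yp[a])
    return L
```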

Here is the sequence of operations of the direct method with the notation used in the MathCAD program:

1. Input of initial data:

1.1. Assignment of the deterministic geometric (d) and physical (E, Ru) parameters of the problem, as well as the limiting value of nodal displacements Δ_lim.

1.2. Assignment of the truss geometry:

• setting the constants: ne (the number of members), np (the number of nodes), nop (the number of support reactions);

• setting vectors xp, yp (coordinates of nodes along the x and y axes);

• setting the array ni, which lists the joint connectivity of each member;

• specifying the vector iop, which lists the ordinal numbers of the degrees of freedom fixed by supports.

1.3. Assignment of the optimization algorithm parameters: the number of components of the constraint vector m; the normalization coefficients of the objective function and the constraint vector kf, kg; the minimum value of the penalty coefficient kmin; and the maximum shift of the admissible region ΔZmax.

1.4. Assignment of the initial variation parameters (vector X) and the variation range (vectors Xmin and Xmax).

1.5. Assignment of the initial values of the components of the dual variable vector yi (i = 1...m).

2. Formalization of the objective function f(X) and the vector of the constraint function g(X) through variable parameters:

2.1. Writing the FEA (X) subroutine that implements a finite element calculation of the truss. The output parameters of this subroutine are contained in an array, which includes three vectors - Z (nodal displacements), Nabs (values of the internal forces in absolute value), L (lengths of elements).

Thus, the indices 1, 2, 3 that occur when calling this subroutine indicate which of these three arrays is being evaluated. Nodal displacements and internal forces are functions of variable parameters X (areas of sections of elements), therefore, they are defined as Z(X) and Nabs(X). The length of the elements L does not change in the course of optimization, therefore, it is calculated once.
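The FEA(X) subroutine returns the triple described above: nodal displacements Z, absolute member forces Nabs, and lengths L. A minimal direct-stiffness sketch of such a routine for a planar truss is given below; it is not the authors' MathCAD code, and the interface names are assumptions:

```python
import numpy as np

def fea_truss(xp, yp, members, A, E, loads, fixed_dofs):
    """Minimal planar truss FE analysis: returns nodal displacements Z,
    absolute member forces Nabs, and member lengths L.
    members: (i, j) node pairs; loads: global load vector (2 dofs/node);
    fixed_dofs: indices of supported (zero) degrees of freedom."""
    n_dof = 2 * len(xp)
    K = np.zeros((n_dof, n_dof))
    L = np.empty(len(members))
    dirs = []
    for e, (i, j) in enumerate(members):
        dx, dy = xp[j] - xp[i], yp[j] - yp[i]
        L[e] = np.hypot(dx, dy)
        c, s = dx / L[e], dy / L[e]
        dirs.append((c, s))
        T = np.array([-c, -s, c, s])              # elongation row vector
        dofs = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
        K[np.ix_(dofs, dofs)] += (E * A[e] / L[e]) * np.outer(T, T)
    free = [d for d in range(n_dof) if d not in fixed_dofs]
    Z = np.zeros(n_dof)
    Z[free] = np.linalg.solve(K[np.ix_(free, free)],
                              np.asarray(loads, float)[free])
    Nabs = np.empty(len(members))
    for e, (i, j) in enumerate(members):
        c, s = dirs[e]
        u = Z[[2 * i, 2 * i + 1, 2 * j, 2 * j + 1]]
        Nabs[e] = abs(E * A[e] / L[e] * np.dot([-c, -s, c, s], u))
    return Z, Nabs, L

# Single-bar sanity check: unit bar along x, axial load of 10;
# k = E*A/L = 10, so the tip displacement is 10/10 = 1.0
Z, Nabs, L = fea_truss([0.0, 1.0], [0.0, 0.0], [(0, 1)], [0.01], 1000.0,
                       [0, 0, 10, 0], [0, 1, 3])
```

Since the forces and displacements depend on the areas X, the optimization code wraps this routine as Z(X) and Nabs(X), while L is evaluated once.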

2.2. Formation of the objective function f(X) in terms of the variable parameters.

2.3. Formation of the constraint vector g1(X), which contains 10 strength checks, one for each truss element, multiplied by the normalization factor kg.

2.4. Formation of the stiffness constraint g2(X), which checks the vertical displacement of truss node 2 (this displacement has index 4 in the global vector of nodal displacements Z). This constraint is also multiplied by the normalization factor kg.

2.5. Formation of the complete constraint vector g(X) by concatenating g1(X) and g2(X). This vector has 11 elements: elements 1-10 are the strength constraints and element 11 is the stiffness constraint.

3. Organization of a cycle where iterations of the search optimization algorithm are performed:

3.1. Assigning the iteration counter it.

3.2. Formation of the diagonal matrix of penalty coefficients by assigning its diagonal elements k_{i,i} (i = 1...m).

3.3. Formation of the elements of the shift vector of the admissible region of constraints ΔZi (i = 1...m).

3.4. Formation of the diagonal matrix of Boolean variables by assigning its elements δ_{i,i}.

3.5. Formation of the Lagrange function FL(X), as well as the modified Lagrange function Fp(X).

3.6. Formation of the block for finding the minimum of Fp(X) with respect to X on the interval [Xmin, Xmax]. The main difference in this problem is that each call to the constraint function recomputes the internal forces and displacements in the elements, since calling g(X) invokes the FEA(X) subroutine.

3.7. Formation of the vector of values of the objective function at iterations F.

3.8. Identifying the maximum value in the constraint vector at iteration it and entering it into the vector G.

3.9. Finding the ordinal number of the maximum element in the constraint vector at iteration it and entering it into the vector kk.

3.10. Calculation of the elements of the dual variable vector yi (i = 1...m).
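Steps 3.1-3.10 form a standard augmented Lagrangian outer loop: minimize the modified function in the primal variables, then update the dual variables. The sketch below illustrates that pattern on a toy two-variable problem. It is not the authors' MathCAD implementation; the multiplier update y ← max(0, y + k·g) and the gradient-descent inner solve are common textbook choices used here as assumptions:

```python
import numpy as np

def augmented_lagrangian(gradf, g, gradg, x0, k=10.0, outer=20):
    """Outer loop: minimize the modified Lagrangian in x (here by plain
    gradient descent), then update the dual variable y <- max(0, y + k*g(x)).
    Handles one inequality constraint g(x) <= 0."""
    x, y = np.asarray(x0, float), 0.0
    for _ in range(outer):
        for _ in range(500):                       # inner unconstrained solve
            mult = max(0.0, y + k * g(x))          # active-constraint multiplier
            x = x - 0.02 * (gradf(x) + mult * gradg(x))
        y = max(0.0, y + k * g(x))                 # dual update
    return x, y

# Toy problem: min x1^2 + x2^2  subject to  g(x) = 1 - x1 - x2 <= 0.
# The optimum is x = (0.5, 0.5) with Lagrange multiplier y = 1.
gradf = lambda x: 2 * x
g = lambda x: 1.0 - x[0] - x[1]
gradg = lambda x: np.array([-1.0, -1.0])
x_opt, y_opt = augmented_lagrangian(gradf, g, gradg, [0.0, 0.0])
```

In the truss problem the inner minimization is performed by the MathCAD solve block over [Xmin, Xmax], and every evaluation of g(X) triggers a new FE analysis.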

4. Output of results:

4.1. Numerical values of the vectors of changes in the objective function and the maximum values of the constraint functions at iterations.

4.2. Values of the optimal areas of the truss elements (vector X).

4.3. Values of the constraint residuals without the coefficient kg.

4.4. Graphs of changes in the objective function and the maximum values of the constraint functions at iterations.

The results of the calculations are shown in Fig. 15-20. A comparison of the solutions from [3, 40] with those obtained by the authors is given in Table 3.


Figure 15. Implementation of the numerical methods in the MathCAD mathematical package: initial data of the task.

Magazine of Civil Engineering, 109(1), 2022


Figure 16. Subroutine FEA.


Figure 17. Optimization results.

Figure 18. Graphs of changes in the objective function and maximum residuals of constraints at iterations.

Figure 19. Optimization results.

Figure 20. Graphs of changes in the objective function and maximum residuals of constraints at iterations.

Table 3. Comparative results of calculations.

№ | Name | E.J. Haug [3] | B. Farshi [40] | Direct method | Combined method
1 | Iteration number | 15 | n/a | 6 | 5
2 | Value of the objective function f(x), m³ | 0.8294574 | 0.8294187 | 0.82312 | 0.83641
3 | Residuals in active constraints g5; g11 | -0.322·10⁻⁴; -0.425·10⁻⁴ | -0.9067·10⁻⁴; -0.4315·10⁻⁴ | -0.1215·10⁻⁵; 0.6522·10⁻⁶ | -0.4999·10⁻⁶; -0.2·10⁻⁶
4 | X1 (A1), m² | 0.19374·10⁻¹ | 0.19691·10⁻¹ | 0.19435·10⁻¹ | 0.2044·10⁻¹
5 | X2 (A2), m² | 0.64516·10⁻⁴ | 0.64516·10⁻⁴ | 0.64516·10⁻⁴ | 0.69724·10⁻⁴
6 | X3 (A3), m² | 0.15015·10⁻¹ | 0.14970·10⁻¹ | 0.14795·10⁻¹ | 0.13870·10⁻¹
7 | X4 (A4), m² | 0.98619·10⁻² | 0.98212·10⁻² | 0.99318·10⁻² | 0.87102·10⁻²
8 | X5 (A5), m² | 0.64516·10⁻⁴ | 0.64516·10⁻⁴ | 0.64516·10⁻⁴ | 0.21881·10⁻³
9 | X6 (A6), m² | 0.35903·10⁻³ | 0.35612·10⁻³ | 0.64516·10⁻⁴ | 0.68764·10⁻⁴
10 | X7 (A7), m² | 0.13676·10⁻¹ | 0.13570·10⁻¹ | 0.13387·10⁻¹ | 0.13730·10⁻¹
11 | X8 (A8), m² | 0.48182·10⁻² | 0.48174·10⁻² | 0.47899·10⁻² | 0.44921·10⁻²
12 | X9 (A9), m² | 0.64516·10⁻⁴ | 0.64516·10⁻⁴ | 0.64516·10⁻⁴ | 0.74394·10⁻⁴
13 | X10 (A10), m² | 0.13947·10⁻¹ | 0.13890·10⁻¹ | 0.14046·10⁻¹ | 0.15710·10⁻¹

The direct and combined methods gave close results in terms of the number of iterations, the accuracy of the results, and the nature of convergence, although the combined method yielded an objective function value 0.8 % higher. Solutions close to optimal were obtained already at the 3rd iteration, with refinement at subsequent iterations to the required accuracy. Compared with the results of the two other sources, the obtained areas are close, with higher accuracy in the constraint residuals (by 3-4 orders of magnitude).

In general, the results in the table demonstrate the high efficiency of the proposed algorithms, which give the best performance in almost all parameters compared with the results given in [3, 40].

The main difficulties in implementing the above examples were related to tuning the algorithm to the task, which was done by setting optimization parameters such as the normalization coefficients of the objective and constraint functions and the penalty coefficient. Based on the experience of solving the examples, the influence of these parameters on the convergence of the algorithm can be related to the structure of the modified Lagrange function Fp. In the admissible region this function coincides with the objective function, while outside the admissible region the parameters kg and kmin affect its curvature. It was found that the algorithm converges most stably when the values of the objective function and the penalty terms are of the same order.

4. Conclusions

The following conclusions can be drawn:

1. The solution of practical examples of optimization of plane bar systems showed the effectiveness of the presented algorithms for searching for a constrained extremum. The solutions obtained using the two modified Lagrangian function algorithms fully coincided in the first example, while in the second they showed some discrepancy, which indicates the presence of several extrema in problems of this class.

2. Some difficulties associated with tuning the algorithms for stable convergence are noted. If the objective function has a small magnitude (volume in m³) and the constraints are normalized to unity, the values of the objective function and the penalty terms in Fp must be brought to a similar order. For this, the corresponding penalty and normalization coefficients are introduced.

3. It is shown that the combined method for solving a conditionally extremal problem, although it complicates the determination of the dual variables, can give better convergence. In addition, the search problems for the direct and dual variables in this algorithm are not interconnected. This is important when solving optimization problems of mechanics with a complex formulation (for example, nonlinear or dynamic problems), where the conditional extremum problem in the direct variables often becomes infeasible (especially at the initial iterations).

4. The study of the ten-bar truss test problem confirmed the efficiency of the developed algorithm. The results obtained give the best indicators of convergence speed and accuracy compared with known solutions.

5. The optimization algorithm is implemented in the MathCAD mathematical package, which provides open access to its commands, confirms its adequacy, and allows this algorithm to be applied to similar problems in design, research, or educational activities.

6. In the software implementation of practical problems of optimizing complex systems with a large number of variation parameters, it is more expedient to search for the unconstrained extremum using zero-order methods, such as the deformable polyhedron (Nelder-Mead) method, or metaheuristic algorithms.

7. Further research in the direction of optimization of planar and spatial rod systems can be effectively implemented by constructing optimization models based on neural networks.

References

1. Himmelblau, D.M. Applied Nonlinear Programming. McGraw-Hill. Texas, 1972. 498 p.

2. Schmit, L.A. Structural Design by Systematic Synthesis. Proceedings, 2nd Conference on Electronic Computation, ASCE. 1960. Pp. 105-122.

3. Haug, E.J., Arora, J.S. Applied Optimal Design: Mechanical and Structural Systems. Wiley-Interscience. Michigan, 1979. 506 p.

4. Yamasaki, S., Kawamoto, A., Nomura, T., Fujita, K. A consistent grayscale-free topology optimization method using the level-set method and zero-level boundary tracking mesh. International Journal for Numerical Methods in Engineering. 2015. Vol. 101. Pp. 744-773. DOI: 10.1002/nme.4826

5. Chen, J., Xiao, Z., Zheng, Y., Zheng, J., Li, C., Liang, K. Automatic sizing functions for unstructured surface mesh generation. International Journal for Numerical Methods in Engineering. 2017. Vol. 109. Pp. 577-608. DOI: 10.1002/nme.5298

6. Lalin, V.V., Rybakov, V.A., Ivanov, S.S., Azarov, A.A. Mixed finite-element method in V.I. Slivker's semi-shear thin-walled bar theory. Magazine of Civil Engineering. 2019. 89 (5). Pp. 79-93. DOI: 10.18720/MCE.89.7

7. Lalin, V.V., Yavarov, A.V., Orlova, E.S., Gulov, A.R. Application of the Finite Element Method for the Solution of Stability Problems of the Timoshenko Beam with Exact Shape Functions. Power Technology and Engineering. 2019. No. 53 (4). Pp. 449-454. DOI: 10.1007/s10749-019-01098-6

8. Gandomi, A.H., Yang, X.S., Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Engineering with Computers. 2013. No. 29 (1). Pp. 17-35. DOI: 10.1007/s00366-011-0241-y

9. Hasançebi, O., Çarbaş, S., Doğan, E., Erdal, F., Saka, M.P. Performance evaluation of metaheuristic search techniques in the optimum design of real size pin jointed structures. Computers and Structures. 2009. Vol. 87. Pp. 284-302. DOI: 10.1016/j.compstruc.2009.01.002

10. Saka, M.P., Hasangebi, O., Geem, Z.W. Metaheuristics in structural optimization and discussions on harmony search algorithm. Swarm and Evolutionary Computation. 2016. Vol. 28. Pp. 88-97. DOI: 10.1016/j.swevo.2016.01.005

11. Serpik, I.N., Alekseytsev, A.V. Optimization of flat steel frame and foundation posts system.

12. Kaveh, A., Ghafari, M.H. Plastic analysis of planar frames using CBO and ECBO algorithms. Iran University of Science & Technology. 2015. No. 5 (4). Pp. 479-492.

13. Aydin, D., Yavuz, G., Stutzle, T. ABC-X: a generalized, automatically configurable artificial bee colony framework. Swarm Intelligence. 2017. No. 11 (1). Pp. 1-38. DOI: 10.1007/s11721-017-0131-z

14. Bonyadi, M.R., Michalewicz, Z. Particle swarm optimization for single objective continuous space problems: A review. 2017. No. 25 (1). Pp. 1-54. DOI: 10.1162/evco_r_00180

15. Nekouie, N., Yaghoobi, M. A new method in multimodal optimization based on firefly algorithm. Artificial Intelligence Review. 2016. No. 46 (2). Pp. 267-287. DOI: 10.1007/s10462-016-9463-0

16. Askarzadeh, A.A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Computers & Structures. 2016. Vol. 169. Pp. 1-12. DOI: 10.1016/j.compstruc.2016.03.001

17. Mirjalili, S., Mirjalili, S.M., Lewis, A. Grey Wolf Optimizer. Advances in Engineering Software. 2014. Vol. 69. Pp. 46-61. DOI: 10.1016/j.advengsoft.2013.12.007

18. Cai, X., Wang, H., Cui, Z., Cai, J., Xue, Y., Wang, L. Bat algorithm with triangle-flipping strategy for numerical optimization. International Journal of Machine Learning and Cybernetics. 2017. No. 9 (2). Pp. 199-215. DOI: 10.1007/s13042-017-0739-8

19. Samma, H., Mohamad-Saleh, J., Suandi, S.A., Lahasan, B. Q-learning-based simulated annealing algorithm for constrained engineering design problems. Neural Computing and Applications. 2019. Pp. 1-15. DOI: 10.1007/s00521-019-04008-z

20. Cheng, M.-Y., Prayogo, D., Wu, Y.-W., Lukito, M.M. A Hybrid Harmony Search algorithm for discrete sizing optimization of truss structure. Automation in Construction. 2016. Vol. 69. Pp. 21-33. DOI: 10.1016/j.autcon.2016.05.023

21. Jalili, S., Talatahari, S. Optimum Design of Truss Structures Under Frequency Constraints using Hybrid CSS-MBLS Algorithm. KSCE Journal of Civil Engineering. 2017. No. 22 (5). Pp. 1840-1853. DOI: 10.1007/s12205-017-1407-y

22. Rao, R.V., Saroj, A. Multi-objective design optimization of heat exchangers using elitist-Jaya algorithm. Energy Systems. 2016. No. 9 (2). Pp. 305-341. DOI: 10.1007/s12667-016-0221-9

23. Lieu, Q.X., Do, D.T.T., Lee, J. An adaptive hybrid evolutionary firefly algorithm for shape and size optimization of truss structures with frequency constraints. Computers & Structures. 2018. Vol. 195. Pp. 99-112. DOI: 10.1016/j.compstruc.20-17.06.016

24. Degertekin, S.O., Hayalioglu, M.S. Optimum Design of Steel Space Frames: Tabu Search vs. Simulated Annealing and Genetic Algorithms. International Journal Of Engineering & Applied Sciences. 2009. No. 2 (1). Pp. 34-45.

25. Kaveh, A., Zakian, P. Improved GWO algorithm for optimal design of truss structures. Engineering with Computers. 2018. No. 34 (4). Pp. 685-707. DOI: 10.1007/s00366-017-0567-1

26. Jalili, S., Hosseinzadeh, Y. Combining Migration and Differential Evolution Strategies for Optimum Design of Truss Structures with Dynamic Constraints. Iranian Journal of Science and Technology - Transactions of Civil Engineering. 2019. No. 43 (1). Pp. 289-312. DOI: 10.1007/s40996-018-0165-5

27. Ho-Huu, V., Nguyen-Thoi, T., Vo-Duy, T., Nguyen-Trang, T. An adaptive elitist differential evolution for optimization of truss structures with discrete design variables. Computers and Structures. 2016. Vol. 165. Pp. 59-75. DOI: 10.1016/j.compstruc.2015.11.014

28. Khatibinia, M., Yazdani, H. Accelerated multi-gravitational search algorithm for size optimization of truss structures. Swarm and Evolutionary Computation. 2018. Vol. 38. Pp. 109-119. DOI: 10.1016/j.swevo.2017.07.001

29. Donskoy, V.I. Extraction of Optimization Models from Data: an Application of Neural Networks. Taurida Journal of Computer Science Theory and Mathematics. 2018. No. 39 (2). Pp. 71-89.

30. Dmitrieva, T.L. Parametricheskaya optimizatsiya v proektirovanii konstruktsii, podverzhennykh staticheskomu i dinamicheskomu vozdeistviiu [Parametric optimization in the design of structures subject to static and dynamic effects]. Izd-vo IrGTU. Irkutsk, 2010. 176 p. (rus)

31. Fiacco, A.V., McCormick, G.P. Computational Algorithm for the Sequential Unconstrained Minimization Technique for Nonlinear Programming. Management Science. 1964. No. 10 (4). Pp. 601-617. DOI: 10.1287/mnsc.10.4.601

32. Allaire, G., Dapogny, C., Estevez, R., Faure, A., Michailidis, G. Structural optimization under overhang constraints imposed by additive manufacturing technologies. Journal of Computational Physics. 2017. Vol. 351. Pp. 295-328. DOI: 10.1016/j.jcp.2017.09.041

33. Chou, J.S., Ngo, N.T. Modified firefly algorithm for multidimensional optimization in structural design problems. Structural and Multidisciplinary Optimization. 2017. No. 55 (6). Pp. 2013-2028. DOI: 10.1007/s00158-016-1624-x

34. Golshtein, E.G., Tretyakov, N.V. Modified Lagrangians and Monotone Maps in Optimization. Wiley&Sons Publ. Co. New York, 1996. 438 p.

35. Bertsekas, D.P. Constrained Optimization and Lagrange Multiplier Methods. 1st ed. Athena Scientific. New York, 1996. 410 p.

36. Andreani, R., Birgin, E.G., Martínez, J.M., Schuverdt, M.L. On augmented Lagrangian methods with general lower-level constraints. SIAM Journal on Optimization. 2007. No. 18 (4). Pp. 1286-1309. DOI: 10.1137/060654797

37. Wang, X., Zhang, H. An augmented Lagrangian affine scaling method for nonlinear programming. Optimization Methods and Software. 2015. No. 30 (5). Pp. 934-964. DOI: 10.1080/10556788.2015.1004332

38. Bertsekas, D.P. Nonlinear Programming. 3rd ed. Athena Scientific. New York, 2016. 861 p.

39. Long, W., Liang, X., Huang, Y., Chen, Y. A hybrid differential evolution augmented Lagrangian method for constrained numerical and engineering optimization. CAD Computer Aided Design. 2013. No. 45 (12). Pp. 1562-1574. DOI: 10.1016/j.cad.2013.07.007

40. Farshi, B., Alinia-Ziazi, A. Sizing optimization of truss structures by method of centers and force formulation. International Journal of Solids and Structures. 2010. No. 47 (18-19). Pp. 2508-2524. DOI: 10.1016/j.ijsolstr.2010.05.009

Contacts:

Tatiana Dmitrieva, dmitrievat@list.ru Khukhuudei Ulambayar, Ulambayar_kh@yahoo.com
