EDN: TEUNYB УДК 519.85

A Logarithmic Barrier Approach Via Majorant Function for Nonlinear Programming

Boutheina Fellahi*, Bachir Merikhi†

Laboratory of Fundamental and Numerical Mathematics, Department of Mathematics,
Ferhat Abbas University Setif 1, Setif, Algeria

Received 12.03.2023, received in revised form 18.04.2023, accepted 04.06.2023

Abstract. In this paper, we are interested in solving a nonlinear programming optimization problem using a logarithmic barrier interior point method, in which the penalty term is taken as a vector $r \in \mathbb{R}^n_+$. The descent direction is computed by the classical Newton method, while the step size is computed with a new majorant function technique combined with a secant technique. Numerical simulations show the efficiency of our approach compared to the classical line search method.

Keywords: nonlinear convex programming, logarithmic penalty method, line search, majorant function, secant technique.

Citation: B. Fellahi, B. Merikhi, A Logarithmic Barrier Approach Via Majorant Function for Nonlinear Programming, J. Sib. Fed. Univ. Math. Phys., 2023, 16(4), 528-539. EDN: TEUNYB.

1. Introduction and preliminaries

In this paper, we are interested in the logarithmic barrier penalty method, using a new majorant function technique instead of the classical line search method to determine the step size ([1, 2, 4]).

1.1. The problem formulation

The problem to be studied in this paper is as follows:

$$ \min\ g(x), \quad x \in K \subset \mathbb{R}^n, \tag{P1} $$

in which $K = \{x \in \mathbb{R}^n :\ Bx = c,\ x \geq 0\}$ is the set of feasible solutions of (P1).

1.1.1. Assumptions

A1. $g$ is a nonlinear, convex, twice continuously differentiable function on $K$.

A2. $B \in \mathbb{R}^{m \times n}$ is a full rank matrix, $c \in \mathbb{R}^m$ ($m < n$).

A3. There exists $x^0 > 0$ such that $Bx^0 = c$.

A4. The set of optimal solutions of (P1) is nonempty and bounded.

*boutheina.fellahi@univ-setif.dz
†bmerikhi@univ-setif.dz
© Siberian Federal University. All rights reserved


Let $x^*$ be an optimal solution of problem (P1); then there exist two Lagrange multipliers $u^* \in \mathbb{R}^m$, $v^* \in \mathbb{R}^n_+$ such that:

$$ \begin{cases} \nabla g(x^*) + B^t u^* - v^* = 0, \\ Bx^* = c, \\ \langle v^*, x^* \rangle = 0. \end{cases} \tag{1} $$

We can write $g^* = g(x^*) = \min_{x \in K} g(x)$.

In what follows, we replace the nonlinear constrained problem (P1) with a perturbed problem. What is new in our work is that the penalty term is taken as a vector $r \in \mathbb{R}^n_+$.

1.2. The perturbed problem

In this section, we first define the function $\theta : \mathbb{R}^n_+ \times \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$, a convex, lower semicontinuous and proper function, as follows:

$$ \theta(r, x) = \begin{cases} \displaystyle\sum_{i=1}^n r_i \ln(r_i) - \sum_{i=1}^n r_i \ln(x_i) & \text{if } x, r > 0, \\ 0 & \text{if } r = 0,\ x \geq 0, \\ +\infty & \text{if not.} \end{cases} \tag{2} $$

Now, the convex, lower semicontinuous and proper function $\varphi_r : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is defined by:

$$ \varphi_r(x) = \Phi(r, x) = \begin{cases} \displaystyle g(x) + \sum_{i=1}^n r_i \ln(r_i) - \sum_{i=1}^n r_i \ln(x_i) & \text{if } Bx = c,\ x, r > 0, \\ +\infty & \text{if not.} \end{cases} \tag{3} $$

Finally, the convex function $m$ is defined by:

$$ m(r) = \inf_x \{\varphi_r(x) :\ x \in \mathbb{R}^n\}. \tag{P2} $$

$m$ is clearly convex, owing to the convexity of $\varphi_r$.

We notice that the two problems (P1) and (P2) coincide when $\|r\| \to 0$, and then $g^* = m(0)$.

Our idea is to develop a new approach, which consists in determining the step size using a majorant function technique. We begin by studying the existence and uniqueness of the optimal solution of the perturbed problem (P2), followed by the convergence study. The resolution of the perturbed problem is based on the Newton descent direction and on the majorant function technique for determining the step size.

1.2.1. Existence and uniqueness of the optimal solution of the perturbed problem

In order to prove that (P2) admits a unique optimal solution, it suffices to prove that the recession cone of $\varphi_r$ is reduced to zero.

Proof. According to the fourth assumption, the set of optimal solutions of (P1) is nonempty and bounded, hence the recession cone $C_g$ of $g$ is reduced to zero; we have:

$$ C_g = \{d \in \mathbb{R}^n :\ [g]_\infty(d) \leq 0,\ Bd = 0,\ d \geq 0\} = \{0\}, $$

where $[g]_\infty(d)$ is the asymptotic function of $g$, defined by:

$$ [g]_\infty(d) = \lim_{t \to +\infty} \frac{g(x_0 + td) - g(x_0)}{t}. $$

We have:

$$ [\varphi_r]_\infty(d) = \begin{cases} [g]_\infty(d) & \text{if } Bd = 0,\ d \geq 0, \\ +\infty & \text{if not.} \end{cases} $$

Then we deduce that $\{d \in \mathbb{R}^n :\ [\varphi_r]_\infty(d) \leq 0\} = \{0\}$, which means that $C_{\varphi_r} = \{0\}$. Taking into account that $\varphi_r$ is strictly convex, we come to the conclusion that the perturbed problem (P2) admits a unique optimal solution, denoted by $x(r) \in \widehat{K}$, the set of strictly feasible solutions of (P2), in which

$$ \widehat{K} = \{x \in \mathbb{R}^n :\ Bx = c,\ x > 0\}. $$

1.2.2. Convergence of the perturbed problem

According to the necessary and sufficient optimality conditions, there exists $\lambda(r) \in \mathbb{R}^m$ (assumption A2) verifying:

$$ \begin{cases} \nabla g(x(r)) - X^{-1} r + B^t \lambda(r) = 0, \\ Bx(r) - c = 0, \end{cases} \tag{4} $$

in which $X$ is the diagonal matrix with diagonal entries $X_{ii} = x_i$, $\forall i = \overline{1,n}$. We set

$$ F(x(r), \lambda(r)) = \begin{pmatrix} \nabla g(x(r)) - X^{-1} r + B^t \lambda(r) \\ Bx(r) - c \end{pmatrix} = 0. $$

The two functions $r \mapsto x(r)$ and $r \mapsto \lambda(r)$ are differentiable on $\mathbb{R}^n_+$; by using the implicit function theorem, we get:

$$ \begin{pmatrix} \nabla^2 g(x(r)) + R X^{-2} & B^t \\ B & 0 \end{pmatrix} \begin{pmatrix} \nabla x(r) \\ \nabla \lambda(r) \end{pmatrix} = \begin{pmatrix} X^{-1} \\ 0 \end{pmatrix}, \tag{5} $$

where

$$ \nabla x(r) = \left( \frac{\partial x_i}{\partial r_j} \right)_{1 \leq i \leq n,\ 1 \leq j \leq n}, \qquad \nabla \lambda(r) = \left( \frac{\partial \lambda_i}{\partial r_j} \right)_{1 \leq i \leq m,\ 1 \leq j \leq n}, $$

and $R$ is the diagonal matrix with diagonal entries $R_{ii} = r_i$, $\forall i = \overline{1,n}$.

Recall that the function $m$, which is differentiable on $\mathbb{R}^n_+$, is defined by:

$$ m(r) = g(x(r)) + \sum_{i=1}^n r_i \ln(r_i) - \sum_{i=1}^n r_i \ln(x_i(r)). $$

We have

$$ \nabla m(r) = (\nabla x(r))^t \left( \nabla g(x(r)) - X^{-1} r \right) + (e + z_1 - z_2), $$

in which $e = (1, 1, \dots, 1)^t$, $z_1 = (\ln r_1, \ln r_2, \dots, \ln r_n)^t$ and $z_2 = (\ln x_1, \ln x_2, \dots, \ln x_n)^t$. According to (4) and (5), we get:

$$ \nabla m(r) = -(\nabla x(r))^t B^t \lambda(r) + (e + z_1 - z_2) = -(B \nabla x(r))^t \lambda(r) + (e + z_1 - z_2) = e + z_1 - z_2. $$

For $x(r) \in \widehat{K}$, and since $m$ is convex, we get:

$$ \begin{aligned} m(0) &\geq m(r) - r^t \nabla m(r) \\ &\geq g(x(r)) + \sum_{i=1}^n r_i \ln r_i - \sum_{i=1}^n r_i \ln x_i(r) - r^t (e + z_1 - z_2) \\ &\geq g(x(r)) + \sum_{i=1}^n r_i \ln r_i - \sum_{i=1}^n r_i \ln x_i(r) - \sum_{i=1}^n r_i - \sum_{i=1}^n r_i \ln r_i + \sum_{i=1}^n r_i \ln x_i(r) \\ &\geq g(x(r)) - \sum_{i=1}^n r_i. \end{aligned} $$

Taking into account that $g^* = m(0)$, we come to the conclusion that

$$ g^* \leq g(x(r)) \leq g^* + \sum_{i=1}^n r_i. $$

In the rest of this section, we are interested in the trajectory of $x(r)$ as $\|r\|$ tends to zero.

a) The case in which g is only convex.

This case is a little more complicated. We impose that $\|r\|_\infty \leq 1$, and for that we note that

$$ x(r) \in \{x :\ Bx = c,\ x > 0,\ g(x) \leq n + g^*\}. $$

This set is convex, bounded and nonempty, and its recession cone is reduced to zero. It follows that each accumulation point of $x(r)$ as $\|r\| \to 0$ is an optimal solution of (P1).

b) The case in which $g$ is strongly convex with coefficient $\gamma$ strictly positive.

We have

$$ \sum_{i=1}^n r_i \geq g(x(r)) - g(x^*) \geq \langle \nabla g(x^*),\ x(r) - x^* \rangle + \frac{\gamma}{2} \|x(r) - x^*\|^2. $$

Using (1), we obtain

$$ \langle \nabla g(x^*),\ x(r) - x^* \rangle = \langle v^*, x(r) \rangle \geq 0. $$

Then

$$ \|x(r) - x^*\| \leq \left( \frac{2}{\gamma} \sum_{i=1}^n r_i \right)^{\frac{1}{2}}. $$

We come to the conclusion that the convergence of $x(r)$ to $x^*$ is of order $\frac{1}{2}$.

Remark 1.1. If one of the problems (P1) or (P2) has an optimal solution, then the other problem also has an optimal solution, and the optimal values of their objective functions are equal and finite.

The general prototype of our method is as follows:

0. Start with $(r_0, x_0) \in \mathbb{R}^n_+ \times \widehat{K}$.

1. Find an approximate solution $x_{k+1}$ of (P2) such that $\Phi(r_k, x_{k+1}) < \Phi(r_k, x_k)$.

2. Take $\|r_{k+1}\|_\infty < \|r_k\|_\infty$.

The iterations continue until the approximate solution is obtained.

2. Some useful inequalities

Consider a statistical series of $n$ real numbers $\{z_1, \dots, z_n\}$; its arithmetic mean $\bar z$ and standard deviation $\sigma_z$ are defined as follows:

$$ \bar z = \frac{1}{n} \sum_{i=1}^n z_i, \qquad \sigma_z^2 = \frac{1}{n} \sum_{i=1}^n z_i^2 - \bar z^2 = \frac{1}{n} \sum_{i=1}^n (z_i - \bar z)^2. $$

For the following result, see [3, 6].

Proposition 2.1.

$$ \bar z - \sigma_z \sqrt{n-1} \leq \min_i z_i \leq \bar z - \frac{\sigma_z}{\sqrt{n-1}}, \qquad \bar z + \frac{\sigma_z}{\sqrt{n-1}} \leq \max_i z_i \leq \bar z + \sigma_z \sqrt{n-1}. $$

In the case where the $z_i$ are all positive, we deduce that:

$$ \ln\left( \bar z - \sigma_z \sqrt{n-1} \right) \leq \ln(z_i) \leq \ln\left( \bar z + \sigma_z \sqrt{n-1} \right), \qquad \forall i = \overline{1,n}. $$

Theorem 2.1 ([2]). Assume that $z_i > 0$ for all $i = \overline{1,n}$; then:

$$ A_1 \leq \sum_{i=1}^n \ln(z_i) \leq A_2, $$

with:

$$ A_1 = (n-1) \ln\left( \bar z + \frac{\sigma_z}{\sqrt{n-1}} \right) + \ln\left( \bar z - \sigma_z \sqrt{n-1} \right), $$

$$ A_2 = \ln\left( \bar z + \sigma_z \sqrt{n-1} \right) + (n-1) \ln\left( \bar z - \frac{\sigma_z}{\sqrt{n-1}} \right). $$
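To see these bounds in action, here is a small Python check (our illustration, not part of the original paper) that draws a positive sample and verifies Proposition 2.1 and Theorem 2.1 numerically; the sample range is chosen so that $\bar z - \sigma_z \sqrt{n-1} > 0$ and all logarithms are defined:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
z = rng.uniform(1.0, 1.5, size=n)   # positive sample, tight enough that z_bar - sigma_z*sqrt(n-1) > 0

z_bar = z.mean()
sigma_z = z.std()                   # population std: sigma_z^2 = mean(z^2) - z_bar^2

# Proposition 2.1: bounds on the minimum and maximum of the sample
assert z_bar - sigma_z * np.sqrt(n - 1) <= z.min() <= z_bar - sigma_z / np.sqrt(n - 1)
assert z_bar + sigma_z / np.sqrt(n - 1) <= z.max() <= z_bar + sigma_z * np.sqrt(n - 1)

# Theorem 2.1: A1 <= sum(ln z_i) <= A2
A1 = (n - 1) * np.log(z_bar + sigma_z / np.sqrt(n - 1)) + np.log(z_bar - sigma_z * np.sqrt(n - 1))
A2 = np.log(z_bar + sigma_z * np.sqrt(n - 1)) + (n - 1) * np.log(z_bar - sigma_z / np.sqrt(n - 1))
assert A1 <= np.log(z).sum() <= A2
print(A1, np.log(z).sum(), A2)
```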

3. Solving the perturbed problem

Consider the perturbed problem, written as follows:

$$ m(r) = \min\left\{ \varphi_r(x) = g(x) + \sum_{i=1}^n \theta(r_i, x_i) :\ Bx = c,\ x \geq 0 \right\}. $$

In this section we are interested in the numerical solution of problem (P1). We begin by computing the descent direction, and then the step size, for which we use a new majorant function technique.

3.1. The descent direction and line search function

A descent direction $d$ can be computed by various methods; here we choose Newton's method, so $d$ is given by solving the following convex quadratic minimization problem:

$$ \min_d \left\{ \frac{1}{2} \langle \nabla^2 \varphi_r(x) d, d \rangle + \langle \nabla \varphi_r(x), d \rangle :\ Bd = 0 \right\}. $$

According to the necessary and sufficient optimality conditions, there exists $u \in \mathbb{R}^m$ such that:

$$ \begin{cases} \nabla^2 \varphi_r(x) d + \nabla \varphi_r(x) + B^t u = 0, \\ Bd = 0, \end{cases} $$

which is equivalent to

$$ \begin{pmatrix} \nabla^2 g(x) + R X^{-2} & B^t \\ B & 0 \end{pmatrix} \begin{pmatrix} d \\ u \end{pmatrix} = \begin{pmatrix} X^{-1} r - \nabla g(x) \\ 0 \end{pmatrix}. $$

From this we get

$$ \langle \nabla^2 g(x) d, d \rangle + \langle \nabla g(x), d \rangle = \langle r, X^{-1} d \rangle - \langle R X^{-1} d, X^{-1} d \rangle. \tag{6} $$

This system is equivalent to

$$ \begin{pmatrix} X \nabla^2 g(x) X + R & X B^t \\ B X & 0 \end{pmatrix} \begin{pmatrix} X^{-1} d \\ u \end{pmatrix} = \begin{pmatrix} r - X \nabla g(x) \\ 0 \end{pmatrix}. \tag{7} $$

The Newton descent direction is thus computed.
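To make this step concrete, here is a minimal numpy sketch (our illustration; `grad_g` and `hess_g` are placeholder callables for the derivatives of $g$, and the KKT matrix is assumed nonsingular) that solves the scaled system (7):

```python
import numpy as np

def newton_direction(x, r, B, grad_g, hess_g):
    """Solve system (7) for the Newton descent direction d (illustrative sketch).

    x : current strictly feasible point (x > 0), shape (n,)
    r : penalty vector in R^n_+, shape (n,)
    B : full-rank constraint matrix, shape (m, n)
    """
    n, m = x.size, B.shape[0]
    X = np.diag(x)
    # Left-hand side of (7): [[X H X + R, X B^t], [B X, 0]]
    lhs = np.block([[X @ hess_g(x) @ X + np.diag(r), X @ B.T],
                    [B @ X, np.zeros((m, m))]])
    rhs = np.concatenate([r - X @ grad_g(x), np.zeros(m)])
    sol = np.linalg.solve(lhs, rhs)   # assumes the KKT matrix is nonsingular
    y = sol[:n]                       # y = X^{-1} d
    d = x * y                         # recover d = X y
    return d, y
```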

3.2. Computation of the step size

Generally, the most used methods in line search are the classical iterative methods such as Armijo-Goldstein, Wolfe and Fibonacci, but their computational cost becomes high when $n$ is very large.

In this part, we are interested in avoiding this difficulty. The method that we use below is simpler and more effective than the former; it consists in using a majorant function of the line search function $\theta$. The choice of the step size $t^* > 0$ must give us a significant decrease of the convex function $\varphi_r$; we have:

$$ \theta_0(t) = \varphi_r(x + td) - \varphi_r(x) = g(x + td) - g(x) - \sum_{i=1}^n r_i \ln(1 + t y_i), \qquad y = X^{-1} d. $$

According to Proposition 2.1, we have $\rho \leq \min_i r_i \leq r_i$, $\forall i = \overline{1,n}$, in which $\rho = \bar r - \sigma_r \sqrt{n-1}$. Then we obtain

$$ \theta(t) = \frac{\theta_0(t)}{\rho} \leq \theta_1(t) = \frac{1}{\rho} \left( g(x + td) - g(x) \right) - \sum_{i=1}^n \ln(1 + t y_i). $$

We have

$$ \theta'(t) = \frac{1}{\rho} \left( \langle \nabla g(x + td), d \rangle - \sum_{i=1}^n \frac{r_i y_i}{1 + t y_i} \right), $$

$$ \theta''(t) = \frac{1}{\rho} \left( \langle \nabla^2 g(x + td) d, d \rangle + \sum_{i=1}^n \frac{r_i y_i^2}{(1 + t y_i)^2} \right), $$

and

$$ \theta_1'(t) = \frac{1}{\rho} \langle \nabla g(x + td), d \rangle - \sum_{i=1}^n \frac{y_i}{1 + t y_i}, $$

$$ \theta_1''(t) = \frac{1}{\rho} \langle \nabla^2 g(x + td) d, d \rangle + \sum_{i=1}^n \frac{y_i^2}{(1 + t y_i)^2}. $$

We deduce from (6) that $\theta'(0) + \theta''(0) = 0$, and since $\theta''(0) > 0$, it follows that $\theta'(0) < 0$. Now we must prove that $\theta_1'(0) < 0$; we have:

a) If $y_i > 0$, it is clear that $\theta_1'(0) < 0$.

b) If $y_i < 0$, we deduce from (6) that $\theta_1'(0) + \theta_1''(0) < 0$, and as $\theta_1''(0) > 0$, we come to the conclusion that $\theta_1'(0) < 0$.

This proves the significant decrease of $\theta_1$.

3.3. The first majorant function

The choice of $t^*$ such that $\theta'(t^*) = \theta'(t_{opt}) = 0$ leads to some numerical complications, so generally we cannot obtain $t^*$ directly. To solve this problem, we propose to find an approximating function for $\theta$.

This method is based on the use of a majorant function $\theta_2$ of the function $\theta_1$. In the following, we take: $x_i = 1 + t y_i$, $\bar x = 1 + t \bar y$, and $\sigma_x = t \sigma_y$.

Applying the inequality $\sum_{i=1}^n \ln(x_i) \geq A_1$ (Theorem 2.1), we get $\theta_1(t) \leq \theta_2(t)$, such that

$$ \theta_2(t) = \frac{1}{\rho} \left( g(x + td) - g(x) \right) - (n-1) \ln(1 + t\alpha) - \ln(1 + t\beta), $$

in which

$$ \alpha = \bar y + \frac{\sigma_y}{\sqrt{n-1}}, \qquad \beta = \bar y - \sigma_y \sqrt{n-1}. $$

We have

$$ \theta_2'(t) = \frac{1}{\rho} \langle \nabla g(x + td), d \rangle - (n-1) \frac{\alpha}{1 + t\alpha} - \frac{\beta}{1 + t\beta}, $$

$$ \theta_2''(t) = \frac{1}{\rho} \langle \nabla^2 g(x + td) d, d \rangle + (n-1) \frac{\alpha^2}{(1 + t\alpha)^2} + \frac{\beta^2}{(1 + t\beta)^2}. $$

The domain of $\theta_2$ is $H_2 = [0, T[$, in which $T = \max\{t :\ 1 + t\beta > 0\}$; this domain is contained in the domain of the line search function $\theta$.

We notice that: $\theta(0) = \theta_1(0) = \theta_2(0) = 0$, $\theta_1'(0) = \theta_2'(0) < 0$ and $\theta_1''(0) = \theta_2''(0) > 0$.

The strictly convex function $\theta_2$ is thus a good approximation of $\theta_1$ in a neighbourhood of $0$; hence the unique minimum $t^*$ of $\theta_2$ guarantees a significant decrease of the function $\theta_1$, and we have the following inequalities:

$$ \theta(t^*) \leq \theta_1(t^*) \leq \theta_2(t^*) < 0. $$
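In practice, the quantities $\rho$, $\alpha$ and $\beta$ are cheap to obtain from $r$ and $y$; here is a small numpy helper (our illustrative sketch, reused by the sketches below):

```python
import numpy as np

def majorant_params(r, y):
    """Compute rho, alpha, beta of Sections 3.2-3.3 from the vectors r and y = X^{-1} d."""
    n = r.size
    rho = r.mean() - r.std() * np.sqrt(n - 1)    # rho <= min_i r_i (Proposition 2.1)
    alpha = y.mean() + y.std() / np.sqrt(n - 1)
    beta = y.mean() - y.std() * np.sqrt(n - 1)   # 1 + t*beta > 0 bounds the domain of theta_2
    return rho, alpha, beta
```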

3.4. Case when g is linear

We impose that $g(x) = c^t x$, with $x, c \in \mathbb{R}^n$; the auxiliary function $\omega$ is then given in the following form:

$$ \omega(t) = n \eta t - (n-1) \ln(1 + t\alpha) - \ln(1 + t\beta), $$

in which $\eta = \dfrac{c^t d}{n \rho}$.

$\omega$ has the same properties as $\theta_2$, and the unique root of $\omega'(t) = 0$ is the minimum of $\theta_2$. The unique $t^*$ that we obtain guarantees a significant decrease of the function $\varphi_r$ along the Newton descent direction $d$.

3.5. Case when g is only convex

In this case, the equation $\theta_2'(t) = 0$ no longer reduces to an equation of second degree, so we look for another function greater than $\theta_2$; for this we use the secant technique. Given $\hat t \in\ ]0, T[$, for all $t \in\ ]0, \hat t]$ we have, by convexity:

$$ \frac{g(x + td) - g(x)}{\rho t} \leq \frac{g(x + \hat t d) - g(x)}{\rho \hat t}. $$

Then the auxiliary function $\omega$ is defined as follows:

$$ \omega(t) = n \eta t - (n-1) \ln(1 + t\alpha) - \ln(1 + t\beta), $$

where we take

$$ \eta = \frac{g(x + \hat t d) - g(x)}{n \rho \hat t}, $$

and we calculate $t^*$, the root of the equation $\omega'(t) = 0$.

1. If $\hat t = 1$ and $T > 1$, then $\hat t$ is the optimal solution.

2. If $\hat t \neq 1$, then:

a) If $t^* \leq \hat t$: in this case we have $\theta(t^*) \leq \theta_1(t^*) \leq \theta_2(t^*) \leq \omega(t^*)$, which means that we ensure a significant decrease of the function $\varphi_r$ along the direction $d$.

b) If $t^* > \hat t$: we must choose another $\hat t \in\ ]t^*, T[$, calculate $t^*$ for the new auxiliary function, and repeat until $t^* \leq \hat t$; for example, we choose

$$ \hat t = t^* + \zeta (T - t^*), \qquad \zeta \in [0, 1]. $$
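The following Python sketch (ours, purely illustrative) implements this secant loop; `g` is the objective as a callable, and `omega_root` is the case-by-case root solver sketched after the case analysis of Section 3.6 below:

```python
def secant_step_size(g, x, d, rho, alpha, beta, y_bar, n, T, zeta=0.5, max_tries=20):
    """Step size for convex g via the secant majorant (illustrative sketch; T assumed finite).

    Builds omega with eta = (g(x + t_hat*d) - g(x)) / (n*rho*t_hat) and accepts
    t* once t* <= t_hat, so that theta(t*) <= omega(t*) holds on ]0, t_hat].
    """
    t_hat = min(1.0, 0.5 * T)                 # initial guess inside ]0, T[
    for _ in range(max_tries):
        eta = (g(x + t_hat * d) - g(x)) / (n * rho * t_hat)
        t_star = omega_root(eta, alpha, beta, y_bar)
        if t_star <= t_hat:
            return t_star
        t_hat = t_star + zeta * (T - t_star)  # choose a larger t_hat and retry
    return t_hat
```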

3.6. Minimization of the auxiliary function ω

We have

$$ \omega(t) = n \eta t - (n-1) \ln(1 + t\alpha) - \ln(1 + t\beta). $$

It is easy to calculate

$$ \omega'(t) = n \eta - (n-1) \frac{\alpha}{1 + t\alpha} - \frac{\beta}{1 + t\beta}, \qquad \omega''(t) = (n-1) \frac{\alpha^2}{(1 + t\alpha)^2} + \frac{\beta^2}{(1 + t\beta)^2}. $$

Then: $\omega(0) = 0$, $\omega'(0) = n(\eta - \bar y)$, $\omega''(0) = n(\bar y^2 + \sigma_y^2) = \|y\|^2$.

We impose that $\omega'(0) < 0$ and $\omega''(0) > 0$.

To get $t^*$, we need to calculate the root of the equation $\omega'(t) = 0$, which is equivalent to

$$ \eta \alpha \beta t^2 + \left( \eta (\alpha + \beta) - \alpha \beta \right) t + \eta - \bar y = 0. $$

1. If $\eta = 0$, then $t^* = -\dfrac{\bar y}{\alpha \beta}$.

2. If $\alpha = 0$, then $t^* = \dfrac{\bar y - \eta}{\eta \beta}$.

3. If $\beta = 0$, then $t^* = \dfrac{\bar y - \eta}{\eta \alpha}$.

4. If $\eta \alpha \beta \neq 0$, the equation of second degree has two roots, but only one root $t^*$ belongs to the domain of definition of $\omega$; the two roots are:

$$ t_1 = \frac{1}{2} \left( \frac{1}{\eta} - \frac{1}{\alpha} - \frac{1}{\beta} - \sqrt{\delta} \right), \qquad t_2 = \frac{1}{2} \left( \frac{1}{\eta} - \frac{1}{\alpha} - \frac{1}{\beta} + \sqrt{\delta} \right), $$

in which

$$ \delta = \left( \frac{1}{\alpha} + \frac{1}{\beta} - \frac{1}{\eta} \right)^2 - \frac{4 (\eta - \bar y)}{\eta^2 \alpha \beta}. $$
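A direct transcription of this case analysis into Python (our sketch; it assumes $\omega'(0) = n(\eta - \bar y) < 0$, so that a positive minimizer exists) could read:

```python
import math

def omega_root(eta, alpha, beta, y_bar):
    """Root of omega'(t) = 0 for omega(t) = n*eta*t - (n-1)*ln(1+t*alpha) - ln(1+t*beta)."""
    if eta == 0.0:
        return -y_bar / (alpha * beta)
    if alpha == 0.0:
        return (y_bar - eta) / (eta * beta)
    if beta == 0.0:
        return (y_bar - eta) / (eta * alpha)
    # General case: eta*alpha*beta*t^2 + (eta*(alpha+beta) - alpha*beta)*t + eta - y_bar = 0
    delta = (1/alpha + 1/beta - 1/eta) ** 2 - 4 * (eta - y_bar) / (eta**2 * alpha * beta)
    s = 0.5 * (1/eta - 1/alpha - 1/beta)
    roots = (s - 0.5 * math.sqrt(delta), s + 0.5 * math.sqrt(delta))
    # Keep the root that lies in the domain of omega: t > 0, 1+t*alpha > 0, 1+t*beta > 0
    feasible = [t for t in roots if t > 0 and 1 + t*alpha > 0 and 1 + t*beta > 0]
    return min(feasible)
```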

3.7. The second majorant function

Here, we look for another approximation of $\theta_1$, simpler than $\theta_2$ and containing only one logarithm. Recall that:

$$ \theta_1(t) = \frac{1}{\rho} \left( g(x + td) - g(x) \right) - \sum_{i=1}^n \ln(1 + t y_i), \qquad \rho = \bar r - \sigma_r \sqrt{n-1}. $$

Using the inequality:

$$ \sum_{i=1}^n \ln(1 + t y_i) \geq (\|y\| + n \bar y) t + \ln(1 - t \|y\|), $$

we get a second majorant function of $\theta_1$, denoted by $\theta_3$, such that:

$$ \theta_3(t) = \frac{1}{\rho} \left( g(x + td) - g(x) \right) - (\|y\| + n \bar y) t - \ln(1 - t \|y\|), $$

and

$$ \theta_3'(t) = \frac{1}{\rho} \langle \nabla g(x + td), d \rangle - \|y\| - n \bar y + \frac{\|y\|}{1 - t \|y\|}, $$

$$ \theta_3''(t) = \frac{1}{\rho} \langle \nabla^2 g(x + td) d, d \rangle + \frac{\|y\|^2}{(1 - t \|y\|)^2}. $$

The domain of $\theta_3$ is $H_3 = [0, T_3[$, with $T_3 = \max\{t :\ 1 - t \|y\| > 0\}$. We remark that:

- $\theta_3(0) = \theta_1(0) = 0$,
- $\theta_3'(0) = \frac{1}{\rho} \langle \nabla g(x), d \rangle - n \bar y = \theta_1'(0) < 0$,
- $\theta_3''(0) = \frac{1}{\rho} \langle \nabla^2 g(x) d, d \rangle + \|y\|^2 = \theta_1''(0) > 0$.

The strictly convex function $\theta_3$ is a good approximation of $\theta_1$ in a neighbourhood of $0$; the unique minimum $t^*$ of $\theta_3$ guarantees a significant decrease of the function $\theta_1$, and we have:

$$ \theta_1(t^*) \leq \theta_2(t^*) \leq \theta_3(t^*). $$

3.7.1. Minimization of an auxiliary function

Let us define the convex function $\omega_2$, whose minimum is reached at $t^*$:

$$ \omega_2(t) = n \eta t - (\|y\| + n \bar y) t - \ln(1 - t \|y\|). $$

It is easy to calculate

$$ \omega_2'(t) = n \eta - \|y\| - n \bar y + \frac{\|y\|}{1 - t \|y\|}, \qquad \omega_2''(t) = \frac{\|y\|^2}{(1 - t \|y\|)^2}. $$

Then: $\omega_2(0) = 0$, $\omega_2'(0) = n(\eta - \bar y)$, $\omega_2''(0) = \|y\|^2$. We impose that $\omega_2'(0) < 0$ and $\omega_2''(0) > 0$.

To get $t^*$, we need to calculate the root of the equation $\omega_2'(t) = 0$.
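Unlike $\omega$, the equation $\omega_2'(t) = 0$ has a closed-form root: solving $n\eta - \|y\| - n\bar y + \|y\|/(1 - t\|y\|) = 0$ gives $t^* = n(\bar y - \eta) / \left( \|y\| \left( \|y\| + n(\bar y - \eta) \right) \right)$, which lies in $]0, 1/\|y\|[$ whenever $\omega_2'(0) = n(\eta - \bar y) < 0$. A one-line Python sketch (ours):

```python
def omega2_root(eta, y_bar, y_norm, n):
    """Closed-form root of omega2'(t) = 0 (illustrative; assumes n*(eta - y_bar) < 0)."""
    gap = n * (y_bar - eta)                  # positive under the standing assumption
    return gap / (y_norm * (y_norm + gap))
```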

4. Description of the algorithm


In this part, we present the algorithm which summarizes our study for obtaining the optimal solution $x^*$ of the problem (P1).

4.1. Algorithm

1. Input: $\epsilon > 0$, $r_s > 0$, $x_0 \in \widehat{K}$, $X$ with $X(i,i) = x_0(i)$, $r \in \mathbb{R}^n_+$, $\sigma \in [0, 1]^n$.

2. Iteration:

(*) Calculate $d$ and $y = X^{-1} d$.

(a) If $\|y\| > \epsilon$:

a1. Calculate $\rho$, $\alpha$, $\beta$, and solve the equation $\omega'(t) = 0$ to obtain $t^*$.

a2. Set $x = x + t^* d$, and return to (*).

(b) If $\|y\| \leq \epsilon$, we have obtained a good approximation of $m(r)$:

i. If $\|r\| \geq r_s$: set $r = \sigma \times r$ and return to (*), with $\sigma \times r = (\sigma_1 r_1, \dots, \sigma_n r_n)$.

ii. If $\|r\| < r_s$: stop, we have a good approximation of the optimal solution.
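For concreteness, the following compact Python driver (our illustrative assembly of the hypothetical helpers `newton_direction`, `majorant_params` and `omega_root` sketched in Section 3; the defaults for $\epsilon$, $r_s$ and $\sigma$ are arbitrary) follows the algorithm step by step:

```python
import numpy as np

def barrier_solve(g, grad_g, hess_g, B, x0, r0, sigma=0.5, eps=1e-6, r_s=1e-8, max_iter=500):
    """Logarithmic-barrier method with majorant-function step size (illustrative sketch)."""
    x, r = np.asarray(x0, float).copy(), np.asarray(r0, float).copy()
    n = x.size
    for _ in range(max_iter):
        d, y = newton_direction(x, r, B, grad_g, hess_g)   # system (7)
        if np.linalg.norm(y) > eps:
            rho, alpha, beta = majorant_params(r, y)       # Sections 3.2-3.3
            eta = grad_g(x) @ d / (n * rho)                # linear case; use secant_step_size for general convex g
            x = x + omega_root(eta, alpha, beta, y.mean()) * d
        elif np.linalg.norm(r) >= r_s:
            r = sigma * r                                  # shrink the penalty vector componentwise
        else:
            return x                                       # good approximation of the optimal solution
    return x
```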

5. Numerical tests

In the tables below, Iter represents the number of iterations needed to obtain $x^*$, Min represents the minimum, and T(s) represents the time in seconds. Method 1 corresponds to the majorant function method introduced in this work, Method 2 corresponds to the majorant function method introduced in [1], and Method 3 corresponds to the classical line search method.

5.1. Examples with variable size

Example 1. Quadratic case [4].

We consider the following quadratic problem, with $n = m + 2$:

$$ g^* = \min\{ g(x) :\ Bx = c,\ x \geq 0 \}, $$

in which $g(x) = \frac{1}{2} \langle x, Qx \rangle$, with

$$ Q[i,j] = \begin{cases} 2 & \text{if } i = j = 1 \text{ or } i = j = m, \\ 4 & \text{if } i = j \text{ and } i \notin \{1, m\}, \\ 2 & \text{if } i = j - 1 \text{ or } i = j + 1, \\ 0 & \text{otherwise,} \end{cases} \qquad B[i,j] = \begin{cases} 1 & \text{if } i = j, \\ 2 & \text{if } i = j - 1, \\ 3 & \text{if } i = j - 2, \\ 0 & \text{otherwise,} \end{cases} $$

and $c_i = 1$, $\forall i = \overline{1,m}$. We test this example for different values of $n$.

Example 2. Erikson's problem [5].

Consider the following convex problem:

$$ g^* = \min\{ g(x) :\ Bx = c,\ x \geq 0 \}, $$

where $g(x) = \sum_{i=1}^n x_i \ln\left( \frac{x_i}{a_i} \right)$, the $a_i, b_i \in \mathbb{R}$ are fixed, and

$$ B[i,j] = \begin{cases} 1 & \text{if } i = j \text{ or } j = i + m, \\ 0 & \text{if not.} \end{cases} $$

We test this example for different values of $n$, $a_i$ and $b_i$.

6. Tables

Table 1. Numerical simulations for Example 1

| n   | Method 1: Min / Iter / T(s) | Method 2: Min / Iter / T(s) | Method 3: Min / Iter / T(s) |
|-----|-----------------------------|-----------------------------|-----------------------------|
| 4   | 0.285 / 8 / 0.0061          | 0.285 / 9 / 0.007           | 0.285 / 14 / 0.019          |
| 50  | 5.37 / 6 / 0.019            | 5.372 / 7 / 0.021           | 5.374 / 14 / 0.053          |
| 100 | 10.924 / 6 / 0.065          | 10.927 / 7 / 0.08           | 10.93 / 14 / 0.188          |
| 500 | 55.3722 / 8 / 14.5          | 55.372 / 7 / 13.8           | 55.374 / 14 / 29.815        |

Table 2. The case where $a_i = 1$ and $b_i = \dots$, $\forall i = \overline{1,n}$ (Example 2)

| n   | Method 1: Min / Iter / T(s) | Method 2: Min / Iter / T(s) | Method 3: Min / Iter / T(s) |
|-----|-----------------------------|-----------------------------|-----------------------------|
| 10  | 32.94 / 3 / 0.0038          | 32.95 / 3 / 0.004           | 32.95 / 4 / 0.018           |
| 50  | 164.79 / 4 / 0.026          | 164.79 / 4 / 0.025          | 164.79 / 6 / 0.082          |
| 500 | 329.57 / 5 / 0.1            | 329.58 / 5 / 0.076          | 329.58 / 6 / 0.23           |

Table 3. The case where $a_i = 2$ and $b_i = 5$, $\forall i = \overline{1,n}$ (Example 2)

| n   | Method 1: Min / Iter / T(s) | Method 2: Min / Iter / T(s) | Method 3: Min / Iter / T(s) |
|-----|-----------------------------|-----------------------------|-----------------------------|
| 10  | 0.62 × 10⁻⁸ / 2 / 0.0021    | 0.66 × 10⁻⁷ / 2 / 0.002     | 9.09 × 10⁻⁵ / 3 / 0.01      |
| 50  | 0.1 × 10⁻⁷ / 3 / 0.0028     | 0.11 × 10⁻⁸ / 3 / 0.003     | 1.86 × 10⁻⁴ / 4 / 0.07      |
| 500 | 0.75 × 10⁻⁷ / 3 / 0.0045    | 0.76 × 10⁻⁷ / 3 / 0.04      | 1.03 × 10⁻⁴ / 5 / 0.22      |

Conclusion

The numerical simulations show that our approach is a very interesting alternative, giving encouraging results compared to the classical line search method. Moreover, it is competitive with the method introduced in [1], where $r \in \mathbb{R}$.

References

[1] L.B. Cherif, B. Merikhi, A penalty method for nonlinear programming, RAIRO Oper. Res., 53(2019), 29-38. DOI: 10.1051/ro/2018061

[2] J.P. Crouzeix, B. Merikhi, A logarithm barrier method for semidefinite programming, RAIRO Oper. Res., 42(2008), 123-139.

[3] J.P. Crouzeix, A. Seeger, New bounds for the extreme values of a finite sample of real numbers, J. Math. Anal. Appl., 197(1996), 411-426.

[4] M. Ouriemchi, Résolution de problèmes non linéaires par les méthodes de points intérieurs. Théorie et algorithmes, Thèse de doctorat, Université du Havre, France, 2006.

[5] C.E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J., 27(1948), 379-423 and 623-656.

[6] H. Wolkowicz, G.P.H. Styan, Bounds for eigenvalues using traces, Linear Algebra Appl., 29(1980), 471-506.
