Logarithmic Barrier Method Via Minorant Function for Linear Programming

Scientific article in Mathematics

CC BY
Keywords: linear programming, logarithmic barrier methods, line search.

Abstract of the scientific article in mathematics, by Assma Leulmi and Soumia Leulmi

We propose in this study a new logarithmic barrier approach to solve the linear programming problem. We are interested in the computation of the direction by Newton's method and of the displacement step using minorant functions instead of line search methods, in order to reduce the computation cost. Our new approach is even more beneficial than classical line search methods. This is confirmed by numerical experiments showing the effectiveness of the algorithm developed in this work.


UDC 519.21

Logarithmic Barrier Method Via Minorant Function for Linear Programming

Assma Leulmi*

Department of Mathematics, Faculty of Sciences Ferhat Abbas University of Setif-1, 19000

Algeria

Soumia Leulmi

Department of Mathematics University Mohamed Khider of Biskra

Algeria


Received 14.10.2018, received in revised form 10.01.2019, accepted 13.02.2019

We propose in this study a new logarithmic barrier approach to solve the linear programming problem. We are interested in the computation of the direction by Newton's method and of the displacement step using minorant functions instead of line search methods, in order to reduce the computation cost.

Our new approach is even more beneficial than classical line search methods. This is confirmed by numerical experiments showing the effectiveness of the algorithm developed in this work.

Keywords: linear programming, logarithmic barrier methods, line search.

DOI: 10.17516/1997-1397-2019-12-2-191-201.

Introduction

Interior-point methods are among the most efficient methods developed to solve linear and nonlinear programming problems.

Several algorithms have been proposed to solve the linear programming problem, among which we distinguish three fundamental classes of interior point methods, namely: projective interior point methods and their alternatives, central trajectory methods, and barrier/penalty methods [2]. Our work is based on the latter type of interior point methods for solving linear programming problems.

In this paper, we propose a logarithmic barrier interior-point method for solving linear programming problems (LP). The main difficulty to be anticipated in establishing an iteration in such a method comes from the determination and computation of the step-size. Various approaches have been developed to overcome this difficulty. It is known [2,6] that the computation of the step-size is expensive, specifically when using line search methods. Leulmi et al. [5] proposed efficient and less expensive procedures in semidefinite programming, not only to avoid line search methods but also to accelerate the algorithm's convergence. The purpose of this paper is to exploit this idea for LP problems.

We consider the following linear programming problem

    (D)   min { b^T x : A^T x - c >= 0, x ∈ R^m },

where A ∈ R^{m×n} such that rank(A) = m < n, c ∈ R^n and b ∈ R^m.

* [email protected] © Siberian Federal University. All rights reserved

The problem (D) is the dual of the following linear program

    (P)   max { c^T y : Ay = b, y ∈ R^n, y >= 0 }.

The problem (D) can be written in the following standard form

    min { b^T x : A^T x - c = s, x ∈ R^m, s ∈ R^n, s >= 0 }.

A priori, one of the advantages of the problem (D) with respect to its dual problem (P) is that the variable of the objective function is a vector instead of being a matrix as in problems of type (P). Furthermore, under certain convenient hypotheses, the resolution of the problem (D) is equivalent to that of the problem (P), in the sense that the optimal solution of one of the two problems can be deduced directly from the other through the application of the complementary slackness theorem; see for instance [7].

In all that follows, we denote by

1. X = {x ∈ R^m : A^T x - c >= 0}, the set of feasible solutions of (D).

2. X̊ = {x ∈ R^m : A^T x - c > 0}, the set of strictly feasible solutions of (D).

3. F = {y ∈ R^n : Ay = b, y >= 0}, the set of feasible solutions of (P).

4. F̊ = {y ∈ R^n : Ay = b, y > 0}, the set of strictly feasible solutions of (P).

Let u, v ∈ R^n; their scalar product is defined by

    ⟨u, v⟩ = u^T v = Σ_{i=1}^n u_i v_i .

We suppose that the sets X and F are not empty.

The problem (D) is approximated by the following perturbed problem (Dη)

    (Dη)   min { fη(x) : x ∈ R^m },                                  (1)

with the penalty parameter η > 0, and fη the barrier function defined by

    fη(x) = b^T x + nη ln η - η Σ_{i=1}^n ln⟨e_i, A^T x - c⟩   if A^T x - c > 0,
    fη(x) = +∞                                                  if not,

where (e_1, e_2, ..., e_n) is the canonical basis of R^n. We are then interested in solving the problem (Dη).
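To make the definition of fη concrete, the following is a minimal numerical sketch (our own illustration, not the authors' code; the function name and data layout are assumptions, with A stored as an m×n array):

```python
import numpy as np

def f_eta(x, A, b, c, eta):
    """Barrier objective b^T x + n*eta*ln(eta) - eta * sum_i ln(<e_i, A^T x - c>);
    returns +inf when x is not strictly feasible."""
    s = A.T @ x - c                      # slack vector A^T x - c, must be > 0
    if np.any(s <= 0):
        return np.inf
    n = A.shape[1]
    return b @ x + n * eta * np.log(eta) - eta * np.sum(np.log(s))
```

Outside the strictly feasible set the function is set to +∞, exactly as in the definition above.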

The idea of this new approach consists in introducing an original process to calculate the step-size, based on minorant functions.

The main advantage of (Dη) resides in the strict convexity of its objective function and of its feasible domain. Consequently, the optimality conditions are necessary and sufficient, which fosters theoretical and numerical studies of the problem.

In the next section, we study the existence and uniqueness of the optimal solution of the problem (Dη) and we show its convergence to the problem (D), in particular the behavior of its optimal value and its optimal solutions when η → 0; then lim_{η→0} x_η = x̄ is an optimal solution of (D).

In Sections 2 and 3, we propose an interior point algorithm based on Newton's approach, which allows us to solve the nonlinear system resulting from the optimality conditions. The iteration of this algorithm is of descent type, defined by x_{k+1} = x_k + α_k d_k, where d_k is the descent direction and α_k is the step-size. We also present different step-sizes obtained by minimizing minorant functions which approximate the one-dimensional function θ(α) = fη(x + αd) over α > 0. The last section is dedicated to comparative numerical tests illustrating the effectiveness of our approaches and determining the most efficient strategy.


Before this, it is necessary to show that (Dη) has at least one optimal solution.

1. Existence and uniqueness of the optimal solution of the perturbed problem and its convergence to problem (D)

1.1. Existence and uniqueness of optimal solution of perturbed problem

Firstly, we give the following definition

Definition 1.1. Let f be a function defined from R^m to R ∪ {+∞}; f is called inf-compact if for all λ > 0 the set X_λ(f) = {x ∈ R^m : f(x) <= λ} is compact, which amounts in particular to saying that its recession cone is reduced to zero.

To prove that (Dη) has an optimal solution, we show that fη is inf-compact. For that, it is enough to prove that the recession cone

    X_∞(fη) = { d ∈ R^m : (fη)_∞(d) <= 0 }

is reduced to the origin, i.e., ((fη)_∞(d) <= 0) ⇒ (d = 0), where (fη)_∞ is defined by

    (fη)_∞(d) = lim_{α→+∞} [fη(x + αd) - fη(x)] / α = b^T d.

This needs the following proposition.

Proposition 1 ([6]). d = 0 whenever b^T d <= 0 and A^T d >= 0. Then the problem (Dη) has an optimal solution.

We know that the Hessian matrix H = ∇²fη(x) is positive definite; hence the problem (Dη) is strictly convex, and if it has an optimal solution then this solution is unique. We have

    fη(x) = b^T x + nη ln η - η Σ_{i=1}^n ln⟨e_i, A^T x - c⟩,

then

    ∇fη(x) = b - η Σ_{i=1}^n Ae_i / ⟨e_i, A^T x - c⟩,

and

    ∇²fη(x) = η Σ_{i=1}^n (Ae_i)(Ae_i)^T / ⟨e_i, A^T x - c⟩².

As fη is inf-compact and strictly convex, the problem (Dη) admits a unique optimal solution.

We denote by x(η) or x_η the unique optimal solution of (Dη).

1.2. Convergence of the perturbed problem to the problem (D)

For x ∈ X̊, let us introduce the symmetric positive semidefinite matrices B_i, i = 1, ..., n, and a lower triangular matrix L such that

    B_i = Ae_i (Ae_i)^T   and   H = LL^T,

which implies that H is a positive definite matrix.

In what follows, we are interested in the behavior of the optimal value and of the optimal solution x(η) of the problem (Dη). For that, let us introduce the function θ defined by

    θ(x, η) = fη(x)   if A^T x - c > 0,
    θ(x, η) = +∞      if not.

Proposition 2 ([6]). For η > 0, let x_η be an optimal solution of the problem (Dη); then there exists x̄ ∈ X, an optimal solution of (D), such that lim_{η→0} x_η = x̄.

Remark 1. We know that if one of the problems (D) and (P) has an optimal solution and the values of their objective functions are equal and finite, then the other problem also has an optimal solution.

2. Computation of the Newton descent direction

In this part, we are interested in the numerical solution of the problem (Dη). Interior point methods of logarithmic barrier type are designed for solving this kind of problem, relying on the optimality conditions, which are necessary and sufficient: x̄η is an optimal solution of (Dη) if it satisfies the condition

    ∇fη(x̄η) = 0.                                                    (2)

To solve (2), we use Newton's approach, which means finding at each iteration a vector x_k^η + d_k satisfying the following linear system

    H_k d_k = -∇fη(x_k^η).                                           (3)

As H_k = ∇²fη(x_k^η) is a symmetric positive definite matrix, the Cholesky method and the conjugate gradient method are the most convenient for solving the system (3).

To ensure the convergence of the algorithm towards an optimal solution x* of (Dη), we must make sure that every iterate x_k^η + d_k remains strictly feasible. For that, we introduce a step-size α_k satisfying the condition A^T (x_k^η + α_k d_k) - c > 0.
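As a sketch of this step (our own illustration under the formulas above, not the authors' code), the gradient, the Hessian and the Newton direction can be assembled and the system (3) solved by a Cholesky factorization:

```python
import numpy as np

def newton_direction(x, A, b, c, eta):
    """Solve H_k d_k = -grad f_eta(x_k) via Cholesky; H_k is positive definite."""
    s = A.T @ x - c                      # positive slacks at a strictly feasible x
    grad = b - eta * A @ (1.0 / s)       # b - eta * sum_i (A e_i) / s_i
    H = eta * (A * (1.0 / s**2)) @ A.T   # eta * sum_i (A e_i)(A e_i)^T / s_i^2
    L = np.linalg.cholesky(H)            # H = L L^T
    y = np.linalg.solve(L, -grad)        # forward substitution
    d = np.linalg.solve(L.T, y)          # backward substitution
    return d, grad, H
```

The two triangular solves replace a generic linear solve, exploiting the symmetric positive definiteness of H_k.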

In the next section, we calculate the step-size for our new approach.

3. Computation of the step-size with the minorant functions

In descent methods, line search methods are commonly used to compute the optimal step-size α_k. It suffices to minimize the one-dimensional function θ(α) = fη(x + αd) over α > 0.

The most used line search methods are those of Goldstein-Armijo, Fibonacci, etc. Unfortunately, these methods are expensive in computational volume, and even inapplicable to semidefinite problems. To avoid this difficulty, we exploit the idea suggested by J.P. Crouzeix and B. Merikhi [2], which approaches the function

    φ(α) = (1/η) [fη(x + αd) - fη(x)],                               (4)

by a simple majorant function giving at each iteration k a step-size α_k in an easy and much less expensive way than line search methods.

Here, however, we propose a new idea: we suggest simple minorant functions to approximate the function (4).

Remark 2. To keep the function φ(α) well defined, it is necessary that for all x ∈ X̊, (x + αd) stays in X̊, which amounts to finding ᾱ > 0 such that for any α ∈ [0, ᾱ[, x + αd ∈ X̊.

Proposition 3 ([6]). Let ᾱ = sup{α : 1 + z_i α > 0, for all i = 1, ..., n}, with

    z_i = ⟨e_i, A^T d⟩ / ⟨e_i, A^T x - c⟩,   i = 1, ..., n.

For all α ∈ [0, ᾱ[, the following function φ is well defined

    φ(α) = (Σ_{i=1}^n z_i) α - ‖z‖² α - Σ_{i=1}^n ln(1 + z_i α),   α ∈ [0, ᾱ[.

Indeed, for the Newton direction d one has b^T d = η(Σ_{i=1}^n z_i - ‖z‖²), since ∇fη(x)^T d = -d^T H d = -η‖z‖².
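The quantities z_i and the bound ᾱ of Proposition 3 can be computed as follows (a small helper with hypothetical names, not from the paper):

```python
import numpy as np

def step_domain(x, d, A, c):
    """z_i = <e_i, A^T d> / <e_i, A^T x - c> and
    alpha_bar = sup{alpha : 1 + z_i*alpha > 0 for all i}."""
    s = A.T @ x - c                      # strictly positive slacks
    z = (A.T @ d) / s
    neg = z[z < 0]                       # only negative z_i bound alpha
    alpha_bar = np.inf if neg.size == 0 else np.min(-1.0 / neg)
    return z, alpha_bar
```

Only the components with z_i < 0 constrain α, each contributing the bound α < -1/z_i; if none is negative, ᾱ = +∞.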

3.1. Some useful inequalities

Before determining these functions, we need the following results.

The following result is due to H. Wolkowicz et al. [9]; see also J.P. Crouzeix et al. [3] for additional results.

Proposition 4 ([9]).

    x̄ - σ_x √(n-1) <= min_i x_i <= x̄ - σ_x / √(n-1),
                                                                     (5)
    x̄ + σ_x / √(n-1) <= max_i x_i <= x̄ + σ_x √(n-1).

Let us recall that B. Merikhi et al. (2008) [2] proposed some useful inequalities related to the maximum and to the minimum of x_i > 0 for any i = 1, ..., n:

    n ln(x̄ - σ_x √(n-1)) <= A <= Σ_{i=1}^n ln(x_i) <= B <= n ln(x̄),   (6)

with

    A = (n-1) ln(x̄ + σ_x / √(n-1)) + ln(x̄ - σ_x √(n-1)),

    B = ln(x̄ + σ_x √(n-1)) + (n-1) ln(x̄ - σ_x / √(n-1)),

where x̄ and σ_x are respectively the mean and the standard deviation of a statistical series {x_1, x_2, ..., x_n} of n real numbers. These quantities are defined as follows

    x̄ = (1/n) Σ_{i=1}^n x_i   and   σ_x² = (1/n) Σ_{i=1}^n x_i² - x̄² = (1/n) Σ_{i=1}^n (x_i - x̄)².
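The bounds A and B of (6) are cheap to evaluate; the following sketch (our own, hypothetical names) computes them from the mean and the population standard deviation, assuming x̄ - σ_x √(n-1) > 0 so that all logarithms are defined:

```python
import numpy as np

def log_sum_bounds(x):
    """Bounds A <= sum_i ln(x_i) <= B built from the mean and standard deviation."""
    n = x.size
    xbar, sigma = x.mean(), x.std()      # population standard deviation, as in the text
    r = np.sqrt(n - 1.0)
    A = (n - 1) * np.log(xbar + sigma / r) + np.log(xbar - sigma * r)
    B = np.log(xbar + sigma * r) + (n - 1) * np.log(xbar - sigma / r)
    return A, B
```

Note that `np.std` uses the 1/n (population) normalization by default, matching the definition of σ_x above.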

Based on these results, we give in the following new notions of inexpensive minorant functions for φ, which offer a variable step-size at every iteration with a simple technique.

Thanks to positive definiteness results in linear algebra, we propose three different alternatives that offer a variable step-size α at every iteration.

Which one is the most efficient will be shown by the numerical tests presented at the end of this work.

3.2. The minorant functions

We seek a minorant function φ̂_i of φ on [0, ᾱ_i], i = 1, 2 and 3, such that

    ‖z‖² = n(z̄² + σ_z²) = φ''(0) = -φ'(0),   φ̂_i(0) = 0.

In the following, we take x_i = 1 + αz_i, x̄ = 1 + αz̄ and σ_x = ασ_z.

First minorant function

This strategy consists in minimizing a minorant approximation φ_1 of φ over [0, ᾱ[. To be efficient, this minorant approximation needs to be simple and sufficiently close to φ. In our case, it requires φ_1(0) = φ(0) = 0 and ‖z‖² = φ_1''(0) = -φ_1'(0).

By applying the inequalities (6), we get Σ_{i=1}^n ln(x_i) <= B, whence

    -(α ‖z‖² + Σ_{i=1}^n ln(1 + z_i α)) >= -(B + α ‖z‖²),

    nz̄α - (α ‖z‖² + Σ_{i=1}^n ln(1 + z_i α)) >= nz̄α - (B + α ‖z‖²).

Thus the first minorant function can be defined as follows

    φ_1(α) = nz̄α - (B + α ‖z‖²),

i.e.,

    φ_1(α) = δ_1 α - (n-1) ln(1 + β_1 α) - ln(1 + γ_1 α),

with δ_1 = nz̄ - ‖z‖², β_1 = z̄ - σ_z/√(n-1) and γ_1 = z̄ + σ_z √(n-1).

The minorant function φ_1 is well defined and convex on [0, ᾱ_1], where ᾱ_1 = sup{α : 1 + β_1 α > 0 and 1 + γ_1 α > 0}, and we have φ(α) >= φ_1(α), with φ_1(0) = 0 and φ_1''(0) = -φ_1'(0) = ‖z‖².

The minimum is attained at α_1 = α_opt such that φ_1'(α_1) = 0. We are then brought back to solving the following second order equation

    α² - 2b̄α + c̄ = 0,   with   b̄ = (1/2)(n/δ_1 - 1/β_1 - 1/γ_1)   and   c̄ = -‖z‖² / (β_1 γ_1 δ_1).

The roots of this equation are of the form α = b̄ ± √(b̄² - c̄). We take the root that belongs to [0, ᾱ_1[.
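The first strategy can be sketched as follows (our own illustration with hypothetical names; the quadratic is the one written above, and the root inside the admissible interval is returned):

```python
import numpy as np

def step_size_1(z, alpha_bar):
    """First strategy: minimize phi_1 by solving alpha^2 - 2*b*alpha + c = 0."""
    n = z.size
    zbar, sig = z.mean(), z.std()
    nz2 = z @ z                                   # ||z||^2
    delta1 = n * zbar - nz2
    beta1 = zbar - sig / np.sqrt(n - 1.0)
    gamma1 = zbar + sig * np.sqrt(n - 1.0)
    bq = 0.5 * (n / delta1 - 1.0 / beta1 - 1.0 / gamma1)
    cq = -nz2 / (delta1 * beta1 * gamma1)
    disc = np.sqrt(bq * bq - cq)
    # keep the root of phi_1'(alpha) = 0 lying in (0, alpha_bar)
    return next(r for r in (bq - disc, bq + disc) if 0.0 < r < alpha_bar)
```

The returned value is an exact stationary point of φ_1, so φ_1'(α_1) vanishes up to floating-point error.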

Second minorant function

One can also think of simpler functions involving only one logarithm. We consider functions of the following type

    φ̂(α) = δα - γ ln(1 + βα),   α ∈ [0, ᾱ̂[,

where, in order to fulfil the requirements,

    ‖z‖² = γβ² = γβ - δ,   ᾱ̂ = sup{α : 1 + βα > 0}.                 (7)

We can thus obtain another minorant function φ_2, better approximating φ than φ_3, i.e.,

    φ_3(α) <= φ_2(α) <= φ(α),

such that β_2 = β_1 = z̄ - σ_z/√(n-1), δ_2 = γ_2 β_2 - ‖z‖², and we look for γ_2 = ‖z‖²/β_2², which satisfies (7). This gives

    φ_2(α) = δ_2 α - γ_2 ln(1 + β_2 α),

that is,

    φ_2(α) = (‖z‖²/β_2 - ‖z‖²) α - (‖z‖²/β_2²) ln(1 + β_2 α).

Third minorant function

Another minorant function, simpler than φ_1, can be extracted from the known inequality

    (‖z‖ - Σ_{i=1}^n z_i) α + Σ_{i=1}^n ln(1 + z_i α) - ln(1 + ‖z‖α) <= 0,

which gives

    φ_3(α) = δ_3 α - ln(1 + β_3 α),   α ∈ [0, ᾱ_3],

with ᾱ_3 = sup{α : 1 + β_3 α > 0}, δ_3 = -‖z‖(‖z‖ - 1) and β_3 = ‖z‖.

The minorant function φ_3 is well defined and convex on [0, ᾱ_3], and we have φ(α) >= φ_3(α), with φ_3(0) = 0 and φ_3''(0) = -φ_3'(0) = ‖z‖².

Proposition 5. φ_i, i = 1, 2, 3, is strictly convex over [0, ᾱ̂[, with ᾱ̂ = min(ᾱ, ᾱ_1, ᾱ_2), and φ(α) → +∞ when α → ᾱ. So we have

    φ_3(α) <= φ_2(α) <= φ_1(α) <= φ(α),   for all α ∈ [0, ᾱ̂[.

Proof. The first inequality is obvious. The inequality φ(α) >= φ_1(α) is a direct consequence of (6). Let us consider g(α) = φ_2(α) - φ_1(α). Since β_2 = β_1 and β_1 < γ_1, we have for any α ∈ [0, ᾱ̂[

    g''(α) = γ_2β_2²/(1 + β_2α)² - (n-1)β_1²/(1 + β_1α)² - γ_1²/(1 + γ_1α)² <= 0,

and since g(0) = g'(0) = 0 and g''(α) <= 0, it follows that g(α) <= 0 for any α ∈ [0, ᾱ̂[. Then, let us put h(α) = φ_3(α) - φ_2(α), so

    h(0) = h'(0) = 0   and   h''(α) = β_3²/(1 + β_3α)² - γ_2β_2²/(1 + β_2α)².

Since ‖z‖² = γ_2β_2² and β_3 = ‖z‖,

    h''(α) = ‖z‖² ( 1/(1 + β_3α)² - 1/(1 + β_2α)² ) <= 0,

because β_2 <= β_3. Therefore h(α) <= 0 for any α ∈ [0, ᾱ̂[. □

Thus, we deduce that the function φ_i reaches its minimum at a unique point α_i, which is the root of φ_i'(α) = 0. The three roots are explicitly calculated: for i = 1, 2, 3 we have

    α_1 = b̄ - √(b̄² - c̄)   with   b̄ = (1/2)(n/δ_1 - 1/β_1 - 1/γ_1)   and   c̄ = -‖z‖²/(β_1 γ_1 δ_1),

    α_2 = γ_2/δ_2 - 1/β_2,   α_3 = 1/δ_3 - 1/β_3.

For α_1, we take the root that belongs to the interval (0, ᾱ).

Thus, the three values α_i, i = 1, 2, 3, are explicitly computed; we then accept α_i when it belongs to the interval (0, ᾱ - ε) and φ'(α_i) < 0, where ε > 0 is a fixed precision.
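The closed-form minimizers α_2 and α_3 can be sketched as follows (our own helper with hypothetical names; it assumes δ_2 ≠ 0, δ_3 ≠ 0 and σ_z > 0):

```python
import numpy as np

def step_sizes_23(z):
    """Closed-form minimizers alpha_2 = gamma_2/delta_2 - 1/beta_2 and
    alpha_3 = 1/delta_3 - 1/beta_3 of phi_2 and phi_3."""
    n = z.size
    nz2 = z @ z                                 # ||z||^2
    nz = np.sqrt(nz2)                           # ||z||
    beta2 = z.mean() - z.std() / np.sqrt(n - 1.0)
    gamma2 = nz2 / beta2**2
    delta2 = gamma2 * beta2 - nz2
    alpha2 = gamma2 / delta2 - 1.0 / beta2
    beta3 = nz
    delta3 = -nz * (nz - 1.0)
    alpha3 = 1.0 / delta3 - 1.0 / beta3
    return alpha2, alpha3
```

Both values are exact stationary points: plugging them back into φ_2' and φ_3' gives zero identically.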

Remark 3. The calculation of α_i is performed by a dichotomous procedure in the cases where α_i does not belong to (0, ᾱ - ε) and φ'(ᾱ - ε) > 0, as follows:

    put a = 0 and b = ᾱ - ε;
    while |b - a| > 10⁻⁴:
        if φ'((a + b)/2) < 0 then a = (a + b)/2,
        else b = (a + b)/2;
    take α_i = (a + b)/2.

This calculation guarantees a better approximation of the minimizer of φ while remaining in the domain of φ.
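The dichotomous procedure above can be written generically (a sketch of the bisection on the derivative of a convex function, not the authors' code):

```python
def dichotomy(phi_prime, a, b, tol=1e-4):
    """Bisection on the derivative of a convex phi over [a, b]:
    keep the minimizer bracketed until |b - a| <= tol."""
    while b - a > tol:
        mid = 0.5 * (a + b)
        if phi_prime(mid) < 0:
            a = mid                      # minimizer lies to the right of mid
        else:
            b = mid
    return 0.5 * (a + b)
```

Since φ is convex, φ' changes sign exactly once on a bracket containing the minimizer, so the bisection converges linearly.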

Proposition 6 ([6]). Let x_{k+1} and x_k be two strictly feasible solutions of (Dη), obtained respectively at the iterations k+1 and k; then we have fη(x_{k+1}) < fη(x_k).

4. The algorithm

In this section, we present the algorithm of our approach for obtaining an optimal solution x* of the problem (D).

For simplicity, we write x_k instead of x_k^η and x instead of x_η.

Begin algorithm

Initialization: x_0 a strictly feasible solution of (D), d_0 ∈ R^m, ε > 0 a given precision.

Iteration:

• While |b^T d_k| > ε do

1. Solve the system H_k d_k = -∇fη(x_k).

2. Compute the step-size α_k using one of the strategies st_i, i = 1, 2, 3.

3. Take the new iterate x_{k+1} = x_k + α_k d_k.

4. Take k = k + 1.

• End while

End algorithm

This approach manages to reduce the number of iterations and the computation time. In the following section, we present some examples.
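The loop above can be sketched end-to-end as follows. This is our own illustration, not the authors' MATLAB code: η is kept fixed, and a fixed fraction of ᾱ stands in for the minorant-function strategies st_1-st_3, which keeps every iterate strictly feasible:

```python
import numpy as np

def barrier_lp(A, b, c, x0, eta=1.0, eps=1e-6, max_iter=200):
    """Sketch of the algorithm for a fixed eta: Newton direction plus a damped
    step; a fraction of alpha_bar replaces the strategies st_1-st_3."""
    x = x0.astype(float)
    for _ in range(max_iter):
        s = A.T @ x - c                       # strictly positive slacks
        grad = b - eta * A @ (1.0 / s)
        H = eta * (A * (1.0 / s**2)) @ A.T
        d = np.linalg.solve(H, -grad)         # Newton direction
        if abs(b @ d) <= eps:                 # stopping test |b^T d_k| <= eps
            break
        z = (A.T @ d) / s
        neg = z[z < 0]
        alpha_bar = np.inf if neg.size == 0 else np.min(-1.0 / neg)
        alpha = min(1.0, 0.5 * alpha_bar)     # keeps x + alpha*d strictly feasible
        x = x + alpha * d
    return x
```

For a fixed η the loop converges to the minimizer of fη; for instance, on the one-dimensional instance min{x : 0 <= x <= 2} with η = 1, the barrier minimizer is 2 - √2.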

5. Numerical tests

The following examples are taken from the literature, see for instance [1,4,8], and implemented in MATLAB R2013a on a Pentium(R) Dual Core CPU T4400 (2.20 GHz) with 3.00 GB RAM. We have taken ε = 1.0e-06. In the tables of results, (size) represents the size of the example, (itrat) the number of iterations necessary to obtain an optimal solution, (time) the computation time in seconds (s), and (st) the strategy.

We note that the matrices used in the numerical tests are full matrices.

1. Examples of fixed sizes

Example 01:

    A = ( 1 -1 0 )
        ( 1  1 1 ),    b = ,    c = ( 1 1 0 )^T.

Example 02:

    A = ( 2 3  1 2 )
        ( 3 0 -2 1 ),    b = ,    c = ( 4 1 2 0 )^T.

Example 03:

    A = ( 1 -1  1 2 )         ( 3 )
        ( 2  1 -1 2 ),    b = ( 4 ),    c = ( 3 2 1 3 )^T.
        ( 1  1  1 2 )         ( 5 )

Example 04:

    A = ( 2 1 0 -1 0  0 )         ( 0 )
        ( 0 0 1  0 1 -1 ),    b = ( 0 ),    c = ( 3 -1 1 0 0 0 )^T.
        ( 1 1 1  1 1  1 )         ( 1 )

Example 05:

    A = ( -1 1  1 -1  1 0 0 )         ( 1 )
        (  0 2 -3  2  0 1 0 )         ( 2 )
        ( -3 2  1  0  0 0 1 ),    b = ( 0 ),    c = ( 1 1 0 0 1 1 -2 )^T.
        (  3 5  4 0.5 0 0 0 )         ( 2 )

Example 06:

    A = (  0 1  2 -1  1 1 0 0 0 )         ( 1 )
        (  1 2  3  4 -1 0 1 0 0 )         ( 2 )
        ( -1 0 -2  1  2 0 0 1 0 ),    b = ( 3 ),    c = ( 1 0 -2 1 1 0 0 0 0 )^T.
        (  1 2  0 -1 -2 0 0 0 1 )         ( 2 )
        (  1 3  4  2  1 0 0 0 0 )         ( 1 )

Example 07:

    A = (  1  0 -4  3  1  1 1 0 0 0 0 0 )         ( 1 )
        (  5  3  1  0 -1  3 0 1 0 0 0 0 )         ( 4 )
        (  4  5 -3  3 -4  1 0 0 1 0 0 0 ),    b = ( 4 ),
        (  0 -1  0  2  1 -5 0 0 0 1 0 0 )         ( 5 )
        ( -2  1  1  1  2  2 0 0 0 0 1 0 )         ( 7 )
        (  2 -3  2 -1  4  5 0 0 0 0 0 1 )         ( 5 )

    c = ( -4 -5 -1 -3 5 -8 0 0 0 0 0 0 )^T.

Example 08:

The matrix A is

    1 1 0 0 0 0 0 0 0 0
    0 2 1 0 0 0 0 0 0 0
    0 0 3 -1 0 0 0 0 0 0
    0 0 0 2 4 0 0 0 0 0
    0 0 0 0 6 2 0 0 0 0
    0 0 0 0 0 -1 2 0 0 0
    0 0 0 0 0 0 4 -1 0 0
    0 0 0 0 0 0 0 0 3 1
    0 0 0 0 0 0 0 0 3 -1
    0 0 0 0 0 0 0 0 0 1
    0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 1 2
    0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0

and the vectors c and b are

    c = ( 2 -1 -3 5 -2 0 4 1 2 -1 1 -1 0 2 0 0 0 0 0 0 0 0 0 0 0 )^T,

    b = ( 8 4 6 2 5 1 2 6 3 9 4 )^T.

The results for these examples are given in the following table

    size       st1             st2             st3
               itrat  time     itrat  time     itrat  time
    2 x 3      2      0.0016   2      0.0016   4      0.0025
    2 x 4      5      0.032    5      0.032    7      0.045
    3 x 4      1      0.001    1      0.0019   4      0.0036
    3 x 6      6      0.044    6      0.042    9      0.076
    4 x 7      9      0.045    9      0.045    10     0.063
    5 x 9      8      0.049    8      0.049    12     0.085
    6 x 12     9      0.055    9      0.032    13     0.090
    11 x 25    13     0.048    13     0.055    15     0.099

2. Example cube

n = 2m, with

    A[i, i] = A[i, i + m] = 1,   A[i, j] = 0 otherwise,   b[i] = 2,   for i = 1, ..., m, j = 1, ..., n.
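Reading the definition above, A is the block matrix [I_m | I_m]; a small builder for this test instance (our own sketch, with c left unspecified since the text does not give it):

```python
import numpy as np

def cube_instance(m):
    """'Cube' test problem: n = 2m, A[i,i] = A[i,i+m] = 1, zeros elsewhere,
    i.e. A = [I_m | I_m], and b[i] = 2 for i = 1..m."""
    A = np.hstack([np.eye(m), np.eye(m)])
    b = 2.0 * np.ones(m)
    return A, b
```

The resulting A has full row rank m with n = 2m, consistent with the assumption rank(A) = m < n.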

The following table summarizes the obtained results

    size         st1            st2            st3
                 itrat  time    itrat  time    itrat  time
    50 x 100     1      0.031   4      0.096   9      0.25
    100 x 200    1      0.053   5      0.10    11     0.45
    200 x 400    2      0.088   5      0.29    14     0.555
    450 x 900    3      0.096   6      0.55    19     0.99

Commentary

These tests clearly show that our three strategies provide an optimal solution of (D) and (P) in a reasonable time and with a small number of iterations.

We notice that the first strategy is the best: the comparative numerical results favor it, and moreover it requires a computing time considerably lower than the other two strategies. This is quite expected, because theoretically the strategy st1 uses the function that is the closest (best approximation) to the function φ.

Conclusion

In spite of the mathematical developments in the domain of linear programming, many problems remain open. For this reason, in our study we carried out a theoretical and numerical investigation of our new approach, based on the notion of minorant functions. This allows us to determine the displacement step in a simple and easy manner.

The numerical simulations confirm the effectiveness of our approaches. Our algorithm converges to the same optimal solution using any of the three proposed strategies. The first strategy is the best with respect to computing time and number of iterations.

Thus, the numerical tests show that our approach reduces the cost of an iteration for linear programming. Our study opens interesting perspectives for nonlinear programming (NLP).

References

[1] J.F.Bonnans, J.C.Gilbert, C.Lemarechal, C.Sagastizabal, Numerical optimization, theoretical and practical aspects, Springer-Verlag, 2003.

[2] J.P.Crouzeix, B.Merikhi, A logarithm barrier method for semidefinite programming, RAIRO Oper. Res., 42(2008), 123-139.

[3] J.P.Crouzeix, A.Seeger, New bounds for the extreme values of a finite sample of real numbers, Journal of Mathematical Analysis and Applications, 197(1996), 411-426.

[4] R.M.Freund, S.Mizuno, Interior point methods: Current status and future directions, Mathematical Programming, Optima, no. 51, 1996.

[5] A.Leulmi, B.Merikhi, D.Benterki, Study of a Logarithmic Barrier Approach for Linear Semidefinite Programming, Journal of Siberian Federal University. Mathematics & Physics, 11(2018), no. 3, 1-13.

[6] L.Menniche, Dj.Benterki, A logarithmic barrier approach for linear programming, Journal of Computational and Applied Mathematics, 312(2017), 267-275.

[7] R.T.Rockafellar, Convex analysis, Princeton University Press, New Jersey, 1970.

[8] G.Savard, Introduction aux methodes de point interieur, Extrait de notes, Departement de Mathematiques et Genie Industriel, Ecole Polytechnique de Montreal, February 2001.

[9] H.Wolkowicz, G.P.H.Styan, Bounds for eigenvalues using traces, Linear Algebra and Appl., 29(1980), 471-506.
