
UDC 519.21

Study of a Logarithmic Barrier Approach for Linear Semidefinite Programming

Assma Leulmi*

Department of Mathematics University of Skikda Algeria

Bachir Merikhi Djamel Benterki

Department of Mathematics Ferhat Abbas Setif University Algeria

Received 16.04.2017, received in revised form 06.12.2017, accepted 07.03.2018

In this paper, we present a logarithmic barrier interior-point method for solving a semidefinite programming problem. Newton's method is used to compute the descent direction, and minorant functions are used as an efficient alternative to line search methods to determine the displacement step along the direction, in order to reduce the computation cost.

Keywords: semidefinite programming, interior-point methods, logarithmic barrier methods, line search. DOI: 10.17516/1997-1397-2018-11-3-300-312.

1. Background information

In semidefinite programming (SDP) we minimize a linear function subject to the constraint that an affine combination of symmetric matrices is positive semidefinite. Such a constraint is nonlinear and nonsmooth, but convex, so semidefinite programs are convex optimization problems. Semidefinite programming unifies several standard problems (e.g., linear and quadratic programming) and finds many applications in engineering. Although semidefinite programs are much more general than linear programs, they are not much harder to solve. Most interior-point methods for linear programming have been generalized to semidefinite programs. As in linear programming, these methods have polynomial worst-case complexity and perform very well in practice.

Interior-point methods [1] are one of the efficient methods developed to solve linear, semidefinite and nonlinear programming problems.

In this paper, we propose a logarithmic barrier interior-point method for solving the semidefinite programming problem (SDP). The main difficulty to be anticipated in establishing an iteration of such a method comes from the determination and computation of the displacement step. Various approaches have been developed to overcome this difficulty. It is known [4, 5] that the computation of the displacement step is expensive, specifically when using line search methods.

The purpose of this paper is to propose alternative ways to determine the displacement step which are more efficient than classical line searches.

* as_smaleulmi@yahoo.fr © Siberian Federal University. All rights reserved

The SDP problem and its dual are defined as

(P)   max ⟨C, X⟩,  subject to ⟨A_i, X⟩ = b_i, i = 1, ..., m,  X ∈ S_+^n,

(D)   min b^t y,  subject to Σ_{i=1}^m y_i A_i − C ∈ S_+^n,  y ∈ R^m,

where S_+^n denotes the cone of symmetric positive semidefinite n × n matrices, the matrices C and A_i, i = 1, ..., m, are given symmetric matrices, and b ∈ R^m.

We denote by ⟨C, X⟩ the trace of the matrix C^t X; recall that ⟨·, ·⟩ is an inner product on the space of n × n matrices.
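For concreteness, the inner product and the conic constraint can be checked numerically. The snippet below is only a sketch on an arbitrary hypothetical 2 × 2 instance (all names are ours, not the paper's): it verifies that ⟨C, A_1⟩ = trace(C^t A_1) and that B(y) = Σ y_i A_i − C is positive semidefinite at a dual feasible point.

```python
import numpy as np

# A hypothetical 2x2 instance: one constraint matrix A1, cost C, dual point y.
A1 = np.array([[2.0, 0.0], [0.0, 1.0]])
C  = np.array([[1.0, 0.0], [0.0, 0.5]])
y  = np.array([1.0])                       # dual variable, m = 1

inner = np.trace(C.T @ A1)                 # <C, A1> = trace(C^t A1)
B = y[0] * A1 - C                          # B(y) = sum_i y_i A_i - C
assert inner == 2.5                        # 1*2 + 0.5*1
assert np.all(np.linalg.eigvalsh(B) >= 0)  # y is dual feasible: B(y) in S_+^n
```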

Now, we make assumptions about the primal-dual pair (P, D). First, we define the feasibility sets

F = {X ∈ S_+^n : ⟨A_i, X⟩ = b_i, i = 1, ..., m},   F° = {X ∈ F : X ∈ int(S_+^n)},

and, on the dual side,

Y = {y ∈ R^m : Σ_{i=1}^m y_i A_i − C ∈ int(S_+^n)},

where int(S_+^n) is the set of symmetric positive definite n × n matrices.

Assumption 1.1. The system of equations ⟨A_i, Y⟩ = b_i, i = 1, ..., m, is of rank m.

Assumption 1.2. The sets Y and F° are not empty.

Let r > 0 be a barrier parameter and f_r : R^m → ]−∞, +∞] be the barrier function defined as

f_r(y) = b^t y − r ln det( Σ_{i=1}^m y_i A_i − C )  if y ∈ Y,   f_r(y) = +∞ otherwise.

Then solving problem (D) is equivalent to solving the perturbed unconstrained optimization problem

min_{y ∈ R^m} f_r(y).   (1)

The focus of this paper is on solving the perturbed problem (1). The paper is organized as follows. In Section 2, we briefly recall some results in linear semidefinite programming and give some preliminary results. In Section 3, after establishing the existence and uniqueness of the optimal solution of perturbed problem (1), we show the convergence of the latter problem to problem (D), in the sense that the optimal solution of problem (1) approaches the optimal solution of (D) as r → 0. The iterates are of descent type, defined by y_{k+1} = y_k + t_k d_k, where d_k is the descent direction and t_k is the displacement step.

In Section 4, we propose an interior-point algorithm for solving the perturbed problem (1). Newton's method is applied to compute the descent direction d by solving the linear system resulting from the optimality conditions associated with problem (1). As an effective and less expensive alternative to line search methods, the so-called minorant functions are used to determine the displacement step t along the descent direction. Section 5 contains some concluding remarks.


2. Background and preliminary results

This section provides the necessary background for the upcoming development. In Subsection 2.1, we review some results in linear semidefinite programming. We refer the reader to [9,12], for more details. In Subsection 2.2, we review some statistical inequalities.

2.1. A brief background in linear semidefinite programming

We know that (see [1,10])

a) the sets of optimal solutions of (P) and (D) are nonempty, convex and compact;

b) if X̄ is an optimal solution of (P), then ȳ is an optimal solution of (D) if and only if ȳ is feasible for (D) and ( Σ_{i=1}^m ȳ_i A_i − C ) X̄ = 0;

c) if ȳ is an optimal solution of (D), then X̄ is an optimal solution of (P) if and only if X̄ ∈ F and ( Σ_{i=1}^m ȳ_i A_i − C ) X̄ = 0.

Under these conditions, solving problem (D) yields a solution of (P), and vice versa.

2.2. Preliminary inequalities

Let x_1, x_2, ..., x_n ∈ R be a sample of size n; its mean x̄ and its standard deviation σ_x are respectively defined as

x̄ = (1/n) Σ_{i=1}^n x_i   and   σ_x² = (1/n) Σ_{i=1}^n x_i² − x̄² = (1/n) Σ_{i=1}^n (x_i − x̄)².

Proposition 1. Assume that x ∈ R^n. Then we have

x̄ − σ_x √(n−1) ≤ min_{1≤k≤n} x_k ≤ x̄ − σ_x/√(n−1)   and   x̄ + σ_x/√(n−1) ≤ max_{1≤k≤n} x_k ≤ x̄ + σ_x √(n−1).

In particular, if x_k > 0 for all k = 1, ..., n, then we also have

n ln( x̄ − σ_x √(n−1) ) ≤ A ≤ Σ_{i=1}^n ln(x_i) ≤ B ≤ n ln(x̄),   (2)

with

A = (n − 1) ln( x̄ + σ_x/√(n−1) ) + ln( x̄ − σ_x √(n−1) ),

B = ln( x̄ + σ_x √(n−1) ) + (n − 1) ln( x̄ − σ_x/√(n−1) ).

The first statement in Proposition 1 is due to [12] and the second statement is due to [9].
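As a quick numerical sanity check of Proposition 1 and inequality (2), the sketch below draws a random positive sample (chosen with small spread so that x̄ − σ_x√(n−1) stays positive and all logarithms are defined) and verifies the bounds:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(2.0, 3.0, size=20)   # positive sample with small spread
n = x.size
m = x.mean()
s = np.sqrt((x**2).mean() - m**2)    # sigma_x with the 1/n convention
r = np.sqrt(n - 1)

# bounds on the extreme values
assert m - s*r <= x.min() <= m - s/r
assert m + s/r <= x.max() <= m + s*r

# inequality (2) on the sum of logarithms
A = (n - 1)*np.log(m + s/r) + np.log(m - s*r)
B = np.log(m + s*r) + (n - 1)*np.log(m - s/r)
S = np.log(x).sum()
assert n*np.log(m - s*r) <= A <= S <= B <= n*np.log(m)
```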

3. The theoretical aspects of perturbed problem

In this section, we show that perturbed problem (1) has at least one optimal solution and that this optimal solution converges to the optimal solution of problem (D) when r goes to 0.

Firstly, we start with the fundamental properties of f_r. For y ∈ Y, let us introduce the symmetric positive definite n × n matrix B(y) and the lower triangular matrix L(y) of its Cholesky factorization, such that

B(y) = Σ_{i=1}^m y_i A_i − C = L(y) L^t(y),

and let us define, for i, j = 1, ..., m,

A_i(y) = [L(y)]^{-1} A_i [L^t(y)]^{-1},

b_i(y) = trace(A_i(y)) = trace(A_i B^{-1}(y)),

A_ij(y) = trace( B^{-1}(y) A_i B^{-1}(y) A_j ) = trace( A_i(y) A_j(y) ).

Thus b(y) = (b_i(y))_{i=1,...,m} is a vector of R^m and A(y) = (A_ij(y))_{i,j=1,...,m} is a symmetric m × m matrix of rank m.

The previous notation will be used in the expressions of the gradient and the Hessian H of fr. To show that perturbed problem (1) has a solution, it is sufficient to show that fr is inf-compact.

Theorem 2 ([6]). The function f_r is twice continuously differentiable on Y, and for all y ∈ Y we have

(a) ∇f_r(y) = b − r b(y).

(b) H = ∇²f_r(y) = r A(y).

(c) The matrix A(y) is positive definite.

Since f_r is strictly convex, problem (1) has at most one optimal solution. We now give the following definition.

Definition 1. Let h be a function from R^m to R ∪ {+∞} and α ≥ 0. Then

(i) The set C_α(h) = {y ∈ R^m : h(y) ≤ α} is called the α-level set of h.

(ii) The function h is called inf-compact if the level sets C_α(h) are compact for all α > 0.

(iii) The recession function of h is the function h_∞ : R^m → R ∪ {+∞} defined by

h_∞(Δy) = lim_{t→+∞} ( h(y + tΔy) − h(y) ) / t.

(iv) The recession cone of h is the 0-level set of the recession function of h, denoted by C_0(h_∞).

As the function f_r takes the value +∞ on the boundary of Y and is differentiable on Y, it is lower semi-continuous. In order to prove that (1) has an optimal solution, it suffices to prove that the recession cone of f_r, defined by

C_0((f_r)_∞) = { d ∈ R^m : (f_r)_∞(d) ≤ 0 },

is reduced to zero, i.e.,

d = 0 if (f_r)_∞(d) ≤ 0,

where (f_r)_∞ is defined for y ∈ Y as

(f_r)_∞(d) = lim_{t→+∞} ( f_r(y + td) − f_r(y) ) / t.

This leads to the following proposition.

Proposition 3 ([6]). If b^t d ≤ 0 and Σ_{i=1}^m d_i A_i ∈ S_+^n, then d = 0, where d is the descent direction.

As f_r is inf-compact and strictly convex, the perturbed problem admits a unique optimal solution.

We denote by y(r) or yr the unique optimal solution of perturbed problem (1).


3.1. Convergence of perturbed problem to (D)

Now, we show that perturbed problem (1) converges to problem (D) as r → 0. We have the following lemma.

Lemma 1. For r > 0, let perturbed problem (1) have y(r) as an optimal solution. Then problem (D) has y* = lim_{r→0} y(r) as an optimal solution.

Proof. Let y ∈ Y be arbitrary and r > 0 be given. Let us introduce the function f̂ : R^m × R → ]−∞, +∞], defined by

f̂(y, r) = f_r(y) if r > 0;   f̂(y, 0) = b^t y if y ∈ Y;   f̂(y, r) = +∞ otherwise.

It is easy to verify that the function f̂ is convex and lower semi-continuous on R^m × R, see [11]. Then there exists an optimal solution y_r of (1) such that

∇_y f_r(y_r) = ∇_y f̂(y_r, r) = 0.

Since f(y) = f̂(y, 0), the convexity of f̂ gives

f(y) ≥ f̂(y_r, r) + (y − y_r)^t ∇_y f̂(y_r, r) + (0 − r) ∂f̂/∂r (y_r, r) ≥ f̂(y_r, r) − r ∂f̂/∂r (y_r, r) ≥ f(y_r) − rn,

which implies

f(y_r) − rn ≤ min_{y ∈ Y} f(y) ≤ f(y_r).

On the other hand, we have

f(y_r) ≥ min_{y ∈ Y} f(y).

When r tends to 0, we conclude that

lim_{r→0} f(y_r) = min_{y ∈ Y} f(y).

Therefore y* is an optimal solution of (D). □

Remark 1. We know that if one of the problems (P) and (D) has an optimal solution and the optimal values of their objective functions are equal and finite, then the other problem also has an optimal solution.

4. The numerical aspects of perturbed problem

4.1. Newton descent direction

In this part, we are interested in the numerical solution of problem (1). Thanks to the barrier function, problem (1) can be treated as an unconstrained problem, so logarithmic barrier interior-point methods solve it through the optimality conditions, which are here necessary and sufficient: y_r is an optimal solution of (1) if and only if it satisfies the following condition

∇f_r(y_r) = 0.   (3)

To solve equation (3), we use Newton's approach, which means finding at each iteration a new iterate y_r^k + d_k, where d_k satisfies the following linear system

[∇²f_r(y_r)] d = −∇f_r(y_r).   (4)

By virtue of Theorem 2, the linear system (4) is equivalent to the system

A(y_r) d = b(y_r) − (1/r) b,   (5)

where b(y) and A(y) are defined at the beginning of Section 3.

As H_k = ∇²f_r(y_r) is a symmetric positive definite matrix, the Cholesky factorization and the conjugate gradient method are well suited for solving system (4).
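The assembly and solution of system (5) can be sketched in a few lines. This is only an illustrative sketch with hypothetical names (`A_list` holds the matrices A_i, and B(y) is assumed positive definite):

```python
import numpy as np

def newton_direction(A_list, C, b, y, r):
    """Sketch: assemble and solve system (5), A(y) d = b(y) - b/r,
    from the Cholesky factor of B(y) = sum_i y_i A_i - C (assumed > 0)."""
    m = len(A_list)
    B = sum(yi * Ai for yi, Ai in zip(y, A_list)) - C
    L = np.linalg.cholesky(B)                      # B = L L^t
    Linv = np.linalg.inv(L)
    Ay = [Linv @ Ai @ Linv.T for Ai in A_list]     # A_i(y) = L^{-1} A_i L^{-t}
    by = np.array([np.trace(M) for M in Ay])       # b_i(y) = trace(A_i(y))
    Agram = np.array([[np.trace(Ay[i] @ Ay[j]) for j in range(m)]
                      for i in range(m)])          # A(y), symmetric positive definite
    return np.linalg.solve(Agram, by - b / r)
```

For instance, with m = 1, A_1 = I_2, C = 0, b = (1), y = (2) and r = 0.5, the system reduces to the scalar Newton step d = −∇f_r(y)/∇²f_r(y) = −2.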

To ensure the convergence of the algorithm towards an optimal solution y* of (1), all the iterates y_r^k + d_k must remain strictly feasible. For that, we introduce a displacement step t_k satisfying the condition

B(y_k + t_k d_k) = Σ_{i=1}^m ( y_i^k + t_k d_i^k ) A_i − C ≻ 0.

A prototype algorithm for solving problem (1) is formally stated in the next Algorithm. For the sake of simplicity we drop the index r from yr and yrk, and write y instead of yr and yk instead of yrk.

4.2. Prototype algorithm for solving the perturbed problem

Begin algorithm

Initialization: Start with a strictly feasible solution y_0 ∈ Y of (D) and k = 0; ε > 0 is a given precision; r > 0, ρ > 0 and σ ∈ ]0, 1[ are fixed parameters.

1. Solve the system [∇²f_r(y_k)] d_k = −∇f_r(y_k).

2. Compute the displacement step t_k.

3. Take the new iterate y_{k+1} = y_k + t_k d_k.

4. If nr > ε, set y_k = y_{k+1}, r = σr and go to step 1.

5. If |b^t y_{k+1} − b^t y_k| > ρnr, set y_k = y_{k+1} and go to step 1.

6. Take k = k + 1.

7. Stop: y_{k+1} is an approximate solution of problem (D).

End algorithm

We know from the preceding facts that the optimal solution of problem (1) is an approximation of the solution of (D): the closer r is to zero, the better the approximation. Unfortunately, when r approaches zero, problem (1) becomes ill-conditioned. This is the reason why, at the beginning of the iterations, we use values of r that are not close to zero, until the test nr < ε is verified. The update of r can be interpreted as follows: if y(r) is an exact solution of (1), it is not necessary to keep computing iterates when |b^t y_{k+1} − b^t y_k| < ρnr.

4.3. Computation of the displacement step

The best-known methods used to compute the optimal displacement step t_k are the line search methods, which require minimizing the one-dimensional function

φ(t) = f_r(y + td),   t > 0.

The most used line search methods are those of Goldstein–Armijo, Fibonacci, etc. Unfortunately, these methods are computationally expensive, and even inapplicable to semidefinite problems.

To avoid this difficulty, we exploit the idea suggested by J. P. Crouzeix and B. Merikhi [6], which approximates the function θ(t) defined as

θ(t) = (1/r) [ f_r(y + td) − f_r(y) ],   y + td ∈ Y,

by a simple minorant function giving at each iteration k a displacement step t_k in an easy and much less expensive way than line search methods. To simplify the notation, we consider

B = B(y) = Σ_{i=1}^m y_i A_i − C   and   H = Σ_{i=1}^m d_i A_i,

where B is symmetric and positive definite, so there exists a lower triangular matrix L such that B = LL^t.

Next, let us put E = L^{-1} H (L^{-1})^t. Since d ≠ 0, Assumption 1.1 implies that H ≠ 0 and then E ≠ 0. With this notation, for t ≥ 0, B + tH is positive definite if and only if I + tE is positive definite. Let us denote by λ_i the eigenvalues of the symmetric matrix E.

Remark 2. The point y + td must remain in Y for the function θ(t) to be well defined. This in turn requires finding t̂ > 0 such that y + td ∈ Y for any t ∈ [0, t̂[.

Lemma 2. Let t̂ = sup{ t : 1 + tλ_i > 0, i = 1, ..., n }. For all t ∈ [0, t̂[, the function θ(t) is well defined and

θ(t) = Σ_{i=1}^n [ t(λ_i − λ_i²) − ln(1 + tλ_i) ],   t ∈ [0, t̂[.   (6)

Proof. We have

θ(t) = (1/r) [ f_r(y + td) − f_r(y) ] = (1/r) t b^t d − ln det( Σ_{i=1}^m (y_i + td_i) A_i − C ) + ln det( Σ_{i=1}^m y_i A_i − C ),

where

Σ_{i=1}^m (y_i + td_i) A_i − C = ( Σ_{i=1}^m y_i A_i − C ) + t Σ_{i=1}^m d_i A_i = B + tH = B ( I + t B^{-1} H ).

Then

θ(t) = (1/r) t b^t d − ln det B(y) − ln det( I + t B^{-1} H ) + ln det B(y) = (1/r) t b^t d − ln det( I + tE ),   (7)

since B = LL^t gives B^{-1} = (L^t)^{-1} L^{-1} and det( I + t B^{-1} H ) = det( I + tE ).

Since ∇f_r(y) = b − r b(y), with b_i(y) = trace(A_i(y)) = trace(A_i B^{-1}(y)), we have

d^t b = d^t ∇f_r(y) + r d^t b(y).   (8)

Due to the fact that the direction d satisfies [∇²f_r(y_r)] d = −∇f_r(y_r), we have

d^t ∇f_r(y_r) = −d^t [∇²f_r(y_r)] d.   (9)

Substituting (9) into (8), we get

d^t b = −d^t [∇²f_r(y_r)] d + r d^t b(y) = −d^t [∇²f_r(y_r)] d + r Σ_{i=1}^m d_i trace( A_i B^{-1}(y_r) ) = −d^t [∇²f_r(y_r)] d + r trace(E),   (10)

but we have

d^t [∇²f_r(y_r)] d = d^t [ r A(y_r) ] d = r Σ_{i,j=1}^m d_i d_j trace( B^{-1}(y_r) A_i B^{-1}(y_r) A_j ) = r trace(E²).

Then, by substituting in (10), we get

d^t b = r trace(E) − r trace(E²).   (11)

Substituting (11) into (7), we obtain

θ(t) = t ( trace(E) − trace(E²) ) − ln det( I + tE ).   (12)

Let us denote by λ_i the eigenvalues of the symmetric matrix E, so

θ(t) = Σ_{i=1}^n [ t(λ_i − λ_i²) − ln(1 + tλ_i) ],   t ∈ [0, t̂[,

with t̂ = sup{ t : 1 + tλ_i > 0 for all i }. The proof is complete. □
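The identity between formula (12) and the eigenvalue form (6) can be cross-checked numerically. The sketch below uses a randomly generated positive definite B and symmetric H (hypothetical data, not tied to any particular SDP instance):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)          # a positive definite B
H = rng.standard_normal((n, n))
H = (H + H.T) / 2                    # a symmetric H = sum_i d_i A_i

L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
E = Linv @ H @ Linv.T                # E = L^{-1} H L^{-t}, symmetric
lam = np.linalg.eigvalsh(E)

t_hat = np.inf if lam.min() >= 0 else -1.0 / lam.min()
t = 0.5 * min(1.0, t_hat)            # a step safely inside [0, t_hat[

# formula (12): theta(t) = t(tr E - tr E^2) - ln det(I + tE)
theta_det = t * (np.trace(E) - np.trace(E @ E)) \
    - np.log(np.linalg.det(np.eye(n) + t * E))
# eigenvalue form (6)
theta_eig = np.sum(t * (lam - lam**2) - np.log1p(t * lam))
assert np.isclose(theta_det, theta_eig)
```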

A lower bound t_0 of t̂ is based on the first statement of Proposition 1: since min_i λ_i ≥ λ̄ − σ_λ √(n−1), we may take

t_0 = sup{ t : 1 + t ( λ̄ − σ_λ √(n−1) ) > 0 },

where, as defined in Section 2,

λ̄ = (1/n) Σ_{i=1}^n λ_i   and   σ_λ² = (1/n) Σ_{i=1}^n λ_i² − λ̄².

Another lower bound t_2 is based on the inequality |λ_i| ≤ ‖λ‖ for i = 1, ..., n: since 1 + tλ_i ≥ 1 − t‖λ‖, we may take t_2 = sup{ t : 1 − t‖λ‖ > 0 } = 1/‖λ‖.

Unfortunately, there is no explicit formula for t_opt, and solving the equation θ'(t_opt) = 0 by iterative methods requires computing θ and θ' at each iteration. These computations are too expensive: the expression of θ in (12) contains a determinant that is difficult to calculate, and (6) requires the eigenvalues of E, which is a numerically hard problem for large sizes. The computation of the displacement step by classical line search methods is therefore undesirable and in general impossible. These difficulties lead us to look for new alternatives. From the data of the matrix E, it is easy to obtain the quantities

trace(E) = Σ_{i=1}^n λ_i = nλ̄,   trace(E²) = Σ_{i=1}^n λ_i² = ‖λ‖²,   σ_λ² = ‖λ‖²/n − λ̄².


Now, we look for a minorant function G of the function θ on [0, t_0[ which can be used as a lower approximation of θ. Such an approximation may be easier to manipulate than θ itself. The function G is chosen to be simple, close enough to θ, and to satisfy the properties

G(0) = 0   and   G''(0) = −G'(0) = ‖λ‖²,

where G' and G'' denote the first and the second derivatives of G, respectively.

Based on Proposition 1, we give in the following new, inexpensive minorant functions for θ that offer displacement steps with a simple technique.

1. The first minorant function

This strategy consists in minimizing a minorant approximation G_0 of θ over [0, t̂[. To be efficient, this approximation needs to be simple and sufficiently close to θ; in our case, it requires

G_0(0) = θ(0) = 0,   G_0''(0) = θ''(0) = ‖λ‖² = −θ'(0) = −G_0'(0).

We may define a minorant function G_0 on [0, t̄_0[ by

G_0(t) = γ_0 t − ln(1 + β_0 t) − (n − 1) ln(1 + α_0 t),

with γ_0 = nλ̄ − ‖λ‖², α_0 = λ̄ − σ_λ/√(n−1) and β_0 = λ̄ + σ_λ √(n−1).

The logarithms are well defined when t < t̄_0, with t̄_0 = −1/α_0 if α_0 < 0, and t̄_0 = +∞ if not.

Theorem 4. We have G_0(t) ≤ θ(t) for all t ∈ [0, t̂[.

Proof. Let x_1, ..., x_n > 0. Using the second statement of Proposition 1, we have

Σ_{i=1}^n ln(x_i) ≤ ln( x̄ + σ_x √(n−1) ) + (n − 1) ln( x̄ − σ_x/√(n−1) ).

This implies that

− Σ_{i=1}^n ln(x_i) ≥ − ln( x̄ + σ_x √(n−1) ) − (n − 1) ln( x̄ − σ_x/√(n−1) ).

Taking x_i = 1 + tλ_i for i = 1, ..., n, hence x̄ = 1 + tλ̄ and σ_x = tσ_λ, and adding ( nλ̄ − ‖λ‖² ) t to both sides, we get

( nλ̄ − ‖λ‖² ) t − ln(1 + β_0 t) − (n − 1) ln(1 + α_0 t) ≤ ( nλ̄ − ‖λ‖² ) t − Σ_{i=1}^n ln(1 + tλ_i),

with α_0 = λ̄ − σ_λ/√(n−1) and β_0 = λ̄ + σ_λ √(n−1).

Note that the left-hand side of the above inequality is nothing but the function G_0(t), and, by (6), the right-hand side is nothing but the function θ(t), since Σ λ_i = nλ̄ and Σ λ_i² = ‖λ‖². This means that we have shown G_0(t) ≤ θ(t) on [0, t̂[. □

On the other hand, for any t ∈ [0, t̄_0[, we have

θ(0) = G_0(0) = 0,   θ'(0) = G_0'(0) = − Σ_{i=1}^n λ_i²,   θ''(0) = G_0''(0) = Σ_{i=1}^n λ_i² = trace(E²).

The function G_0 is strictly convex over ]0, t̄_0[ and G_0'(0) < 0. If t̄_0 = +∞, then, since G_0 minorizes θ, which is inf-compact, G_0 admits a minimum over [0, t̄_0[. If t̄_0 < +∞, then G_0(t) → +∞ as t → t̄_0. Consequently, G_0 admits a unique minimum over [0, t̄_0[, attained at t_opt such that G_0'(t_opt) = 0.

We are then led to solve the following second-order equation

t² − 2bt + c = 0,

with b = (1/2)( n/γ_0 − 1/β_0 − 1/α_0 ) and c = −‖λ‖²/(α_0 β_0 γ_0). Among the two roots t_opt = b ± √(b² − c), we take the one that belongs to [0, t̄_0[.
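The resulting step computation is inexpensive, since only trace(E) and trace(E²) are needed (no eigenvalue decomposition). The sketch below, with a hypothetical helper name, forms α_0, β_0, γ_0, solves t² − 2bt + c = 0, and keeps a root inside the safe bound t_0 deduced from Proposition 1:

```python
import numpy as np

def step_from_G0(tr_E, tr_E2, n, eps=1e-8):
    """Sketch: displacement step minimizing the first minorant G0.
    Only trace(E) and trace(E^2) are used."""
    mean = tr_E / n                          # lambda_bar
    sig = np.sqrt(tr_E2 / n - mean**2)       # sigma_lambda
    a0 = mean - sig / np.sqrt(n - 1)         # alpha_0
    b0 = mean + sig * np.sqrt(n - 1)         # beta_0
    g0 = tr_E - tr_E2                        # gamma_0 = n*lambda_bar - ||lambda||^2
    # safe bound t_0 <= t_hat, since lambda_min >= mean - sig*sqrt(n-1)
    lo = mean - sig * np.sqrt(n - 1)
    t0 = np.inf if lo >= 0 else -1.0 / lo
    # G0'(t) = 0  <=>  t^2 - 2bt + c = 0
    b = 0.5 * (n / g0 - 1.0 / a0 - 1.0 / b0)
    c = -tr_E2 / (a0 * b0 * g0)
    for t in (b - np.sqrt(b * b - c), b + np.sqrt(b * b - c)):
        if 0.0 < t < t0 - eps:
            return t
    return 0.9 * t0   # fall back to the safe bound (a sketch choice)
```

For instance, for a spectrum with trace(E) = 0.1 and trace(E²) = 0.39 (n = 4), the returned step is the positive root of the quadratic, at which G_0' vanishes.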

2. The second minorant function

We can also think of functions even simpler than G_0, involving a single logarithm. Consider the function

G_1(t) = γ t − δ ln(1 + β̄ t),

where β̄ = β_0 = λ̄ + σ_λ √(n−1), δ = ‖λ‖²/β̄² and

γ = δ β̄ − ‖λ‖².   (13)

The logarithm is well defined when 1 + β̄ t > 0, that is, for all t ≥ 0 when β̄ ≥ 0 and for t < −1/β̄ if not.

The function G_1 verifies the properties G_1''(0) = −G_1'(0) = trace(E²) and G_1(0) = 0; besides, G_1(t) ≤ G_0(t) on the common domain. Since G_1 is convex and admits a unique minimum over its domain, obtained by solving the equation G_1'(t) = 0, we get

t_1 = δ/γ − 1/β̄.

3. The third minorant function

The idea is to use the following known inequality of mathematical analysis

Σ_{i=1}^n ln(1 + tλ_i) ≤ ( Σ_{i=1}^n λ_i − ‖λ‖ ) t + ln(1 + t‖λ‖).   (14)

Replacing in (6), we obtain

θ(t) ≥ −‖λ‖ ( ‖λ‖ − 1 ) t − ln(1 + t‖λ‖),

so we set

G_2(t) = −‖λ‖ ( ‖λ‖ − 1 ) t − ln(1 + α_2 t),

with α_2 = ‖λ‖, defined on [0, t_2[ with t_2 = 1/‖λ‖.

Proposition 5. For any t ∈ [0, t_2[ we have

a. G_2(0) = θ(0) = 0 and G_2'(0) = θ'(0) = −‖λ‖² < 0.

b. G_2''(0) = θ''(0) = ‖λ‖² > 0.

c. θ(t) ≥ G_2(t).

Proof. 1) (a) and (b) are obvious.

2) To prove (c), we consider the function

h(t) = G_2(t) − θ(t) = ( ‖λ‖ − Σ_{i=1}^n λ_i ) t − ln(1 + t‖λ‖) + Σ_{i=1}^n ln(1 + tλ_i).

We have by definition h(0) = 0, and to study the sign of the function h we distinguish two cases:

1. If there exists i such that λ_i = ‖λ‖, then all the other eigenvalues vanish and h(t) = 0 for all t ∈ [0, t̂[.

2. Otherwise, −‖λ‖ ≤ λ_i < ‖λ‖ for all i. In addition,

h'(t) = t Σ_{i=1}^n λ_i² [ (1 + t‖λ‖)^{-1} − (1 + tλ_i)^{-1} ].

Since 1 + tλ_i ≤ 1 + t‖λ‖ for all i, we have h'(t) ≤ 0 for all t ∈ [0, t̂[. Hence the function h is decreasing and h(0) = 0, so h(t) ≤ 0 for all t ∈ [0, t̂[, which gives

θ(t) ≥ G_2(t) for all t ∈ [0, t̂[. □

In the following, we state a comparison between G_0, G_1 and G_2, under assumption (13), in the next proposition, where we clearly see the efficiency and the major interest of introducing such functions.

Proposition 6. Each G_i, i = 0, 1, 2, is strictly convex over its domain. Besides,

−∞ < G_2(t) ≤ G_1(t) ≤ G_0(t) ≤ θ(t) for any t ≥ 0 in the common domain.

Proof. The first inequality is obvious. The inequality θ(t) ≥ G_0(t) is proved in Theorem 4.

Now, we prove that G_1(t) ≤ G_0(t). Let us consider the function v(t) = G_1(t) − G_0(t). Since β̄ = β_0, δβ_0² = ‖λ‖² = β_0² + (n − 1)α_0² and α_0 ≤ β_0, we have for any t > 0

v''(t) = δβ̄²/(1 + β̄ t)² − β_0²/(1 + β_0 t)² − (n − 1)α_0²/(1 + α_0 t)² = (n − 1)α_0² [ (1 + β_0 t)^{-2} − (1 + α_0 t)^{-2} ] ≤ 0.

Because v''(t) ≤ 0 and v(0) = v'(0) = 0, we have v(t) ≤ 0 for all t. Therefore G_1(t) ≤ G_0(t).

Next, we prove that G_2(t) ≤ G_1(t). Similarly, consider the function w(t) = G_2(t) − G_1(t); then w(0) = w'(0) = 0 and

w''(t) = α_2²/(1 + α_2 t)² − δβ̄²/(1 + β̄ t)² = ‖λ‖² [ (1 + ‖λ‖t)^{-2} − (1 + β_0 t)^{-2} ] ≤ 0,

since β_0 ≤ ‖λ‖. Because w''(t) ≤ 0 and w(0) = w'(0) = 0, we have w(t) ≤ 0 for all t. Therefore G_2(t) ≤ G_1(t). The proof is complete. □
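Proposition 6 can be checked numerically. The sketch below evaluates θ and the three minorants on a hypothetical spectrum of E (using β̄ = β_0 and relation (13)) and verifies the ordering on a grid inside the common domain:

```python
import numpy as np

lam = np.array([-0.5, 0.2, 0.3, 0.1])   # a hypothetical spectrum of E
n = lam.size
norm = np.linalg.norm(lam)              # ||lambda||
mean = lam.mean()
sig = np.sqrt(norm**2 / n - mean**2)
a0 = mean - sig / np.sqrt(n - 1)        # alpha_0
b0 = mean + sig * np.sqrt(n - 1)        # beta_0 = beta_bar
g0 = n * mean - norm**2                 # gamma_0
delta = norm**2 / b0**2                 # delta * beta_bar^2 = ||lambda||^2
gam = delta * b0 - norm**2              # gamma, relation (13)

theta = lambda t: np.sum(t * (lam - lam**2) - np.log1p(t * lam))
G0 = lambda t: g0*t - np.log1p(b0*t) - (n - 1)*np.log1p(a0*t)
G1 = lambda t: gam*t - delta*np.log1p(b0*t)
G2 = lambda t: -norm*(norm - 1)*t - np.log1p(norm*t)

t_hat = -1.0 / lam.min()
for t in np.linspace(0.01, 0.9 * t_hat, 30):
    assert G2(t) <= G1(t) + 1e-9
    assert G1(t) <= G0(t) + 1e-9
    assert G0(t) <= theta(t) + 1e-9
```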

Thus, we deduce that each function G_i reaches its minimum at a unique point t_i, which is the root of G_i'(t) = 0. For i = 1, 2 we have

t_i = δ_i/γ_i − 1/ᾱ_i   and   G_i(t_i) = δ_i − γ_i/ᾱ_i + δ_i ln( γ_i/(δ_i ᾱ_i) ),

where (γ_1, δ_1, ᾱ_1) = (γ, δ, β̄) and (γ_2, δ_2, ᾱ_2) = (‖λ‖ − ‖λ‖², 1, ‖λ‖). In particular, when ‖λ‖ < 1,

t_2 = 1/(1 − ‖λ‖)   and   G_2(t_2) = ‖λ‖ − ln (1 − ‖λ‖)^{-1}.

The solution of the equation G_0'(t) = 0 leads again to the second-order equation t² − 2bt + c = 0, where

b = (1/2)( n/γ_0 − 1/α_0 − 1/β_0 )   and   c = −‖λ‖²/(α_0 β_0 γ_0),

whose roots are given by t = b ± √(b² − c). For t_0, we take the root that belongs to the interval (0, t̂).

Thus, the three roots t_0, t_1 and t_2 are explicitly calculated, and we retain those belonging to the interval (0, t̂ − ε), with ε a small positive real.

Lemma 3. Let y_{k+1} and y_k be two strictly feasible solutions of perturbed problem (1), obtained at iterations k + 1 and k respectively. Then we have

f_r(y_{k+1}) < f_r(y_k).

Proof. We have y_{k+1} = y_k + t_k d_k with t_k > 0, and, by (4),

⟨∇f_r(y_k), y_{k+1} − y_k⟩ = t_k ⟨∇f_r(y_k), d_k⟩ = −t_k ⟨∇²f_r(y_k) d_k, d_k⟩ < 0,

since ∇²f_r(y_k) is positive definite and d_k ≠ 0. Hence d_k is a descent direction for f_r at y_k, and the displacement step t_k is chosen along d_k so that f_r(y_k + t_k d_k) < f_r(y_k). Therefore f_r(y_{k+1}) < f_r(y_k). □

Conclusion

In this study, we have presented a logarithmic barrier interior-point method for solving the linear semidefinite programming problem. We have shown the existence and uniqueness of the optimal solution of the corresponding perturbed problem and verified its convergence to the optimal solution of the original problem when the barrier parameter approaches zero. Newton's method has been applied to find a new iterate by computing a sufficient descent direction. Because of their high computational cost, we have avoided line search methods for calculating the displacement step; alternatively, a new approach based on minorant functions has been proposed to accomplish this task.

The minorant function technique is a very reliable alternative that we expect to become a technique of choice both for (SDP) and for other classes of optimization problems.

The authors are very grateful and would like to thank the Editor-in-Chief and the anonymous referee for their suggestions and helpful comments which significantly improved the presentation of this paper.

References

[1] F.Alizadeh, J.P.Haberly, M.L.Overton, Primal-dual interior-point methods for semidefinite programming, convergence rates, stability and numerical results, SIAM Journal on Optimization, 8(1998), 746-768.

[2] D.Benterki, Resolution des problemes de programmation semidefinie par des methodes de reduction du potentiel, These de doctorat, Departement de mathematique, Universite Ferhat Abbas, Setif, 2004.

[3] D.Benterki, J.P.Crouzeix, B.Merikhi, A numerical implementation of an interior point method for semidefinite programming, Pesquisa Operacional, 23(2003), no. 1.

[4] S.Kettab, D.Benterki, A relaxed logarithmic barrier method for semidefinite programming, RAIRO-Operations Research 49(2015), no. 3.

[5] J.F.Bonnans, J.-C.Gilbert, C.Lemaréchal, C.Sagastizábal, Numerical optimization: theoretical and practical aspects, Springer-Verlag, 2003.

[6] J.P.Crouzeix, B.Merikhi, A logarithm barrier method for semidefinite programming, R.A.I.R.O-Oper. Res., 42(2008), 123-139.

[7] J.Ji, F.A.Potra, R.Sheng, On the local convergence of a predictor-corrector method for semidefinite programming, SIAM Journal on Optimization, 10(1999), 195-210.

[8] K.-C.Toh, Some new search directions for primal-dual interior point methods in semidefinite programming, SIAM Journal on Optimization, 2000.

[9] J.P.Crouzeix, A.Seeger, New bounds for the extreme values of a finite sample of real numbers, Journal of Mathematical Analysis and Applications, 197(1996), 411-426.

[10] Y.E.Nesterov, A.Nemirovski, Optimization over positive semidefinite matrices: Mathematical background and user's manual, Technical report, Central economic and mathematical institute, USSR academy of science, Moscow, USSR, 1990.

[11] R.T.Rockafellar, Convex analysis, Princeton University Press, Princeton, New Jersey, 1970.

[12] H.Wolkowicz, G.-P.-H. Styan, Bounds for eigenvalues using traces, Linear Algebra and Appl, 29(1980), 471-506.
