
УДК 517.6

Theoretical and Numerical Result for Linear Optimization Problem Based on a New Kernel Function

Louiza Derbal, Zakia Kebbiche

Department of Mathematics, Faculty of Sciences, University Ferhat Abbas Setif 1, 19000, Algeria

Received 09.07.2018, received in revised form 06.12.2018, accepted 16.01.2019

The purpose of this paper is to improve the complexity results of primal-dual interior-point methods for the linear optimization (LO) problem. We define a new proximity function for (LO) via a new kernel function which is a combination of the classic kernel function and a barrier term. We present various properties of this new kernel function. Furthermore, we formulate an algorithm for a large-update primal-dual interior-point method (IPM) for (LO). It is shown that the iteration bound for large-update and small-update primal-dual interior-point methods based on this function is as good as the currently best known iteration bounds for these types of methods. This result decreases the gap between the practical behaviour of the large-update algorithms and their theoretical performance, which is an open problem. The primal-dual algorithm is implemented with different choices of the step size.

Numerical results show that the algorithm with practical and dynamic step sizes is more efficient than that with fixed (theoretical) step size.

Keywords: kernel function, interior point algorithms, linear optimization, complexity bound, primal-dual methods.

DOI: 10.17516/1997-1397-2019-12-2-160-172.

Introduction

In this paper we deal with interior point methods (IPMs) for linear optimization (LO). Since Karmarkar's seminal paper [5], many researchers have proposed and analyzed various IPMs for LO, and a large number of results have been reported. For a survey we refer to the books on the subject [3, 8, 10, 12, 13]. In order to describe the idea of this paper we need to recall some ideas underlying new primal-dual IPMs. Peng, Roos and Terlaky [8] introduced the so-called self-regular barrier functions for primal-dual IPMs for LO and designed primal-dual interior-point algorithms based on self-regular proximities. Each such barrier function is determined by its (univariate) self-regular kernel function. The complexity bounds obtained by these authors are $O\left(\sqrt{n}\log\frac{n}{\varepsilon}\right)$ and $O\left(\sqrt{n}\log n\log\frac{n}{\varepsilon}\right)$ for small-update methods and large-update methods, respectively, which are currently the best known bounds. Motivated by their work, in this paper we present a new class of kernel functions which are not self-regular. The best iteration bound of large-update interior point methods based on these functions is shown to be $O\left(q\sqrt{n}\left(\log\sqrt{n}\right)^{\frac{q+1}{q}}\log\frac{n}{\varepsilon}\right)$, and for small-update methods it is $O\left(q^{3}\left(\log\sqrt{q}\right)^{\frac{q+1}{q}}\sqrt{n}\log\frac{n}{\varepsilon}\right)$. These are currently among the best known bounds for primal-dual IPMs.

The paper is organized as follows. In Section 1, we start with some notations; then we briefly review the basic concepts of primal-dual IPMs for LO, such as the central path and the new search directions.


The generic polynomial interior-point algorithm for LO is also presented. In Section 2, we define the new kernel function and present its properties. We analyze the algorithm and derive the complexity bounds for large- and small-update methods in Section 3. Numerical results are described in Section 4. Finally, the Conclusion summarizes our results and gives directions for future research.

1. Preliminaries

1.1. Notations

Some notations used throughout the paper are as follows. $\mathbb{R}^n$ is the $n$-dimensional Euclidean space with the inner product $\langle\cdot,\cdot\rangle$, and $\|\cdot\|$ denotes the 2-norm. $\mathbb{R}^n_{+}$ and $\mathbb{R}^n_{++}$ denote the set of nonnegative vectors and the set of positive vectors with $n$ components, respectively. For $x, s \in \mathbb{R}^n$, $x_{\min}$ and $xs$ denote the smallest component of the vector $x$ and the componentwise product of the vectors $x$ and $s$, respectively. We denote by $X = \operatorname{diag}(x)$ the $n \times n$ diagonal matrix whose diagonal entries are the components of the vector $x \in \mathbb{R}^n$; $e$ denotes the $n$-dimensional vector of ones. For $f, g : \mathbb{R}_{++} \to \mathbb{R}_{++}$, $f = O(g)$ if $f(x) \leq C_1 g(x)$ for some positive constant $C_1$, and $f = \Theta(g)$ if $C_2 g(x) \leq f(x) \leq C_3 g(x)$ for some positive constants $C_2$ and $C_3$.

1.2. The central path

In this paper, we consider the linear optimization (LO) problem in standard form

$$\min\ \{\langle c, x\rangle : Ax = b,\ x \geq 0\}, \qquad (P)$$

where A e 1mx" with rank(A) = m, b e Rm and c e 1". The dual problem of (P) is given by

$$\max\ \{\langle b, y\rangle : A^T y + s = c,\ s \geq 0\}, \qquad (D)$$

with $y \in \mathbb{R}^m$ and $s \in \mathbb{R}^n$. Without loss of generality [10], we assume that (P) and (D) satisfy the interior-point condition (IPC), i.e., there exist $x^0$, $y^0$ and $s^0$ such that

$$Ax^0 = b,\quad x^0 > 0,\quad A^T y^0 + s^0 = c,\quad s^0 > 0. \qquad (1)$$

It is well known that finding an optimal solution of (P) and (D) is equivalent to solving the nonlinear system

$$Ax = b,\quad x \geq 0,\quad A^T y + s = c,\quad s \geq 0,\quad xs = 0. \qquad (2)$$

The basic idea of primal-dual IPMs is to replace the third equation in (2), the so-called complementarity condition for (P) and (D), by the parameterized equation $xs = \mu e$, with $\mu > 0$. Thus we consider the system

$$Ax = b,\quad x \geq 0,\quad A^T y + s = c,\quad s \geq 0,\quad xs = \mu e. \qquad (3)$$

Due to the last equation, any solution $(x, y, s)$ of (3) satisfies $x > 0$ and $s > 0$. Surprisingly enough, if the IPC is satisfied, then for each $\mu > 0$ there exists a solution, and this solution is unique. It is denoted by $(x(\mu), y(\mu), s(\mu))$, and we call $x(\mu)$ the $\mu$-center of (P) and $(y(\mu), s(\mu))$ the $\mu$-center of (D). The set of $\mu$-centers is called the central path of (P) and (D). If $\mu \to 0$, then the limit of the central path exists, and since the limit points satisfy the complementarity condition, the limit yields optimal solutions for (P) and (D). IPMs follow the central path approximately. We briefly describe the usual approach. Without loss of generality, we assume that $(x(\mu), y(\mu), s(\mu))$ is known for some positive $\mu$. We then decrease $\mu$ to $\mu := (1-\theta)\mu$ for some fixed $\theta \in (0,1)$, and we solve the following Newton system:

$$A\Delta x = 0,\quad A^T\Delta y + \Delta s = 0,\quad x\Delta s + s\Delta x = \mu e - xs. \qquad (4)$$

This process is repeated until $\mu$ is small enough, say until $n\mu < \varepsilon$; at this stage we have found an $\varepsilon$-solution of problems (P) and (D).

By taking a step along the search direction, one constructs a new triplet $(x_+, y_+, s_+)$ with $x_+ = x + \alpha\Delta x$, $s_+ = s + \alpha\Delta s$, $y_+ = y + \alpha\Delta y$, where $\alpha \in (0,1)$ denotes the step size, which has to be chosen appropriately (defined by some line search rules). If necessary, we repeat the procedure until we find iterates that are in a certain neighborhood of the $\mu$-center $(x(\mu), y(\mu), s(\mu))$.

1.3. Search directions

Now we introduce the scaled vector $v$ and the scaled search directions $d_x$ and $d_s$ as follows:
$$v = \sqrt{\frac{xs}{\mu}},\qquad d_x = \frac{v\Delta x}{x},\qquad d_s = \frac{v\Delta s}{s}. \qquad (5)$$

System (4) can be rewritten as follows:
$$\bar{A}d_x = 0,\qquad \bar{A}^T\Delta y + d_s = 0,\qquad d_x + d_s = v^{-1} - v, \qquad (6)$$
where $\bar{A} = \frac{1}{\mu}AV^{-1}X$, $V = \operatorname{diag}(v)$ and $X = \operatorname{diag}(x)$. Note that the right-hand side of the third equation in (6) equals the negative gradient of the logarithmic barrier function $\Psi_l(v)$, i.e., $d_x + d_s = -\nabla\Psi_l(v)$, where the barrier function $\Psi_l : \mathbb{R}^n_{++} \to \mathbb{R}_+$ is defined as follows:
$$\Psi_l(v) = \sum_{i=1}^{n}\psi_l(v_i),\qquad \psi_l(v_i) = \frac{v_i^2 - 1}{2} - \log v_i,\quad v_i > 0.$$

Note that $d_x = d_s = 0$ if and only if $v^{-1} - v = 0$, if and only if $x = x(\mu)$, $s = s(\mu)$. Replacing the proximity function $\Psi_l(v)$ by a proximity function $\Psi(v) = \sum_{i=1}^{n}\psi(v_i)$, where $\psi(t)$ is any strictly convex, differentiable barrier function on $\mathbb{R}_{++}$ with $\psi(1) = \psi'(1) = 0$, converts the system (6) into the following system:
$$\bar{A}d_x = 0,\qquad \bar{A}^T\Delta y + d_s = 0,\qquad d_x + d_s = -\nabla\Psi(v). \qquad (7)$$
We note that in (7), $d_x = d_s = 0$ holds if and only if $\nabla\Psi(v) = 0$, if and only if $v = e$, if and only if $(x, s) = (x(\mu), s(\mu))$, as it should.
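To make the linear algebra concrete, the following minimal sketch (in Python with NumPy, our assumption, since no implementation language is prescribed here) solves system (7) by eliminating $d_s$ and $d_x$ and forming the normal equations for $\Delta y$; the callable `grad_psi` is a hypothetical name for a function returning the componentwise gradient $(\psi'(v_1), \ldots, \psi'(v_n))$ of whatever kernel function is used.

```python
import numpy as np

def newton_directions(A, x, s, mu, grad_psi):
    """Sketch: solve the scaled Newton system (7) for one IPM step."""
    v = np.sqrt(x * s / mu)            # scaled vector, eq. (5)
    Abar = A * (x / (mu * v))          # Abar = (1/mu) A V^{-1} X: column j scaled by x_j/(mu v_j)
    rhs = -grad_psi(v)                 # right-hand side of (7): -grad Psi(v)
    # From (7): d_s = -Abar^T dy and d_x = rhs - d_s, with Abar d_x = 0,
    # hence (Abar Abar^T) dy = -Abar rhs.
    dy = np.linalg.solve(Abar @ Abar.T, -(Abar @ rhs))
    ds = -Abar.T @ dy
    dx = rhs - ds
    # Unscale via (5): Delta x = x d_x / v, Delta s = s d_s / v.
    return x * dx / v, dy, s * ds / v
```

Forming $\bar{A}\bar{A}^T$ explicitly is adequate for the small instances of Section 4; for larger problems a Cholesky or QR factorization would be the usual choice.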

1.4. The generic interior-point algorithm for (LO)

Generic primal-dual IPMs for (LO)

Algorithm 1.
Input:
  a proximity function $\Psi(v)$;
  a threshold parameter $\tau > 1$;
  an accuracy parameter $\varepsilon > 0$;
  a fixed barrier update parameter $\theta$, $0 < \theta < 1$;
begin
  $x := e$; $s := e$; $\mu := 1$; $v := e$;
  while $n\mu \geq \varepsilon$ do begin (outer iteration)
    $\mu := (1 - \theta)\mu$; $v := v/\sqrt{1-\theta}$;
    while $\Psi(v) > \tau$ do begin (inner iteration)
      find the search directions by solving system (7);
      determine a step size $\alpha$;
      $x := x + \alpha\Delta x$; $s := s + \alpha\Delta s$; $y := y + \alpha\Delta y$; $v := \sqrt{xs/\mu}$;
    end (inner iteration)
  end (outer iteration)
end.
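As an illustration of the loop structure above, here is a compact sketch of Algorithm 1 (again Python/NumPy, our assumption; the authors' implementation in Section 4 is in MATLAB). It reuses the hypothetical `newton_directions` helper sketched in Section 1.3; `Psi` and `grad_psi` stand for the proximity function and its gradient, and `step_rule` for any of the step-size choices discussed in Sections 3 and 4. The built-in fallback merely damps the step to preserve positivity.

```python
import numpy as np

def generic_ipm(A, x, y, s, Psi, grad_psi, step_rule=None,
                theta=0.5, tau=None, eps=1e-4):
    """Sketch of Algorithm 1; (x, y, s) is assumed strictly feasible."""
    n = len(x)
    tau = np.sqrt(n) if tau is None else tau   # threshold (tau = sqrt(n) in Section 4)
    mu = x.dot(s) / n
    while n * mu >= eps:                       # outer iteration
        mu *= 1.0 - theta                      # mu := (1 - theta) mu
        v = np.sqrt(x * s / mu)                # equivalently v := v / sqrt(1 - theta)
        while Psi(v) > tau:                    # inner iteration
            dX, dy, dS = newton_directions(A, x, s, mu, grad_psi)
            if step_rule is None:              # simple positivity-preserving fallback
                alpha = 0.9 * min(1.0,
                                  *(-x[dX < 0] / dX[dX < 0]),
                                  *(-s[dS < 0] / dS[dS < 0]))
            else:
                alpha = step_rule(x, s, dX, dS)
            x, y, s = x + alpha * dX, y + alpha * dy, s + alpha * dS
            v = np.sqrt(x * s / mu)
    return x, y, s
```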

Large and small-update methods

The parameters $\tau$, $\theta$ and the step size $\alpha$ should be chosen in such a way that the algorithm is optimized in the sense that the number of iterations required by the algorithm is as small as possible. The choice of the so-called barrier update parameter $\theta$ plays an important role both in the theory and in the practice of IPMs. Usually, if $\theta$ is a constant independent of the dimension $n$ of the problem, then we call the algorithm a large-update (or long-step) method. If $\theta$ depends on the dimension of the problem, such as $\theta = \Theta\left(\frac{1}{\sqrt{n}}\right)$, then the algorithm is called a small-update (or short-step) method.

2. The new kernel function and its properties

In this section, we define a new kernel function with a logarithmic barrier term and propose a primal-dual interior-point method based on it, which attains the best known complexity bounds for methods based on logarithmic kernel functions. We prove that the corresponding algorithm has an $O\left(q\sqrt{n}\left(\log\sqrt{n}\right)^{\frac{q+1}{q}}\log\frac{n}{\varepsilon}\right)$ complexity bound for large-update methods and an $O\left(q^{3}\left(\log\sqrt{q}\right)^{\frac{q+1}{q}}\sqrt{n}\log\frac{n}{\varepsilon}\right)$ complexity bound for small-update methods.

2.1. Properties of the new kernel function

We define a new kernel function $\psi(t)$ as follows:
$$\psi(t) = \frac{t^2 - 1 - \log t}{2} + \frac{e^{t^{-q}-1} - 1}{2q},\quad t > 0,\ q \geq 1. \qquad (8)$$
Then we have
$$\psi'(t) = t - \frac{1}{2t} - \frac{e^{t^{-q}-1}}{2t^{q+1}},$$
$$\psi''(t) = 1 + \frac{1}{2t^2} + \frac{1}{2}\left((q+1)t^{-(q+2)} + qt^{-(2q+2)}\right)e^{t^{-q}-1} > 1, \qquad (9)$$
$$\psi'''(t) = -\frac{1}{t^3} - \frac{1}{2}\left(q^2 t^{-(3q+3)} + 3q(q+1)t^{-(2q+3)} + (q+1)(q+2)t^{-(q+3)}\right)e^{t^{-q}-1} < 0.$$

We use $\Psi(v)$ as the proximity function to measure the distance between the current iterate and the $\mu$-center for given $\mu > 0$. We also define the norm-based proximity measure $\delta(v) : \mathbb{R}^n_{++} \to \mathbb{R}_+$ as follows:
$$\delta(v) := \frac{1}{2}\left\|\nabla\Psi(v)\right\| = \frac{1}{2}\left\|d_x + d_s\right\|. \qquad (10)$$
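For reference, (8)-(10) transcribe directly into code; the sketch below (Python/NumPy, with the purely illustrative value $q = 2$) is what the `Psi`/`grad_psi` arguments of the earlier sketches would be built from.

```python
import numpy as np

Q = 2  # kernel parameter q >= 1; the value 2 is purely illustrative

def psi(t, q=Q):
    """New kernel function, eq. (8)."""
    return (t**2 - 1 - np.log(t)) / 2 + (np.exp(t**(-q) - 1) - 1) / (2 * q)

def dpsi(t, q=Q):
    """First derivative psi'(t)."""
    return t - 1 / (2 * t) - np.exp(t**(-q) - 1) / (2 * t**(q + 1))

def Psi(v, q=Q):
    """Proximity function Psi(v) = sum_i psi(v_i)."""
    return np.sum(psi(v, q))

def delta(v, q=Q):
    """Norm-based proximity measure delta(v), eq. (10)."""
    return 0.5 * np.linalg.norm(dpsi(v, q))
```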

Lemma 1. For $\psi(t)$ we have the following:
a) $\psi(t)$ is exponentially convex for all $t > 0$; that is, $\psi\left(\sqrt{t_1 t_2}\right) \leq \frac{1}{2}\left(\psi(t_1) + \psi(t_2)\right)$;
b) $\psi''(t)$ is monotonically decreasing for all $t > 0$;
c) $t\psi''(t) - \psi'(t) > 0$ for all $t > 0$;
d) $\psi''(t)\psi'(\beta t) - \beta\psi'(t)\psi''(\beta t) > 0$, $t > 1$, $\beta > 1$.

Proof. For (a), using (9), we have $t\psi''(t) + \psi'(t) = 2t + \frac{q}{2}\left(t^{-(q+1)} + t^{-(2q+1)}\right)e^{t^{-q}-1} > 0$ for all $t > 0$, and by Lemma 2.1.2 in [8] we have the result. For (b) and (c), the result follows from (9). For (d), the result follows from Lemma 2.4 in [2]. This completes the proof. □

Lemma 2. For $\psi(t)$, we have
$$\frac{1}{2}(t-1)^2 \leq \psi(t) \leq \frac{1}{2}\left(\psi'(t)\right)^2,\quad t > 0, \qquad (11)$$
$$\psi(t) \leq \frac{2+q}{2}(t-1)^2,\quad t > 1. \qquad (12)$$

Proof. For (11), since $\psi''(t) \geq 1$, we have
$$\psi(t) = \int_1^t\int_1^{\xi}\psi''(\zeta)\,d\zeta\,d\xi \leq \int_1^t\psi''(\xi)\psi'(\xi)\,d\xi = \frac{1}{2}\left(\psi'(t)\right)^2,$$
and
$$\psi(t) = \int_1^t\int_1^{\xi}\psi''(\zeta)\,d\zeta\,d\xi \geq \int_1^t\int_1^{\xi}d\zeta\,d\xi = \frac{1}{2}(t-1)^2.$$
For (12), since $\psi(1) = \psi'(1) = 0$, $\psi'''(t) < 0$ and $\psi''(1) = 2 + q$, by using Taylor's theorem we have $\psi(t) \leq \frac{2+q}{2}(t-1)^2$. This completes the proof. □

Lemma 3. Let $\varrho : [0,\infty) \to [1,\infty)$ be the inverse function of $\psi(t)$ for $t \geq 1$. Then we have
$$1 + \sqrt{\frac{2s}{q+2}} \leq \varrho(s) \leq 1 + \sqrt{2s}. \qquad (13)$$

Proof. Let $s = \psi(t)$, $t \geq 1$, i.e., $\varrho(s) = t$, $t \geq 1$. By (11) we have $s = \psi(t) \geq \frac{1}{2}(t-1)^2$, which implies $t = \varrho(s) \leq 1 + \sqrt{2s}$. By (12), we have $s = \psi(t) \leq \frac{2+q}{2}(t-1)^2$, $t > 1$, so $t = \varrho(s) \geq 1 + \sqrt{\frac{2s}{q+2}}$. □

In the next lemma we use the so-called barrier term $\psi_b(t)$ of $\psi(t)$, which is defined by
$$\psi(t) = \frac{t^2-1}{2} + \psi_b(t),\quad t > 0.$$

Lemma 4. Let $\rho : [0,\infty) \to (0,1]$ be the inverse function of the restriction of $-\frac{1}{2}\psi'(t)$ to the interval $(0,1]$, let $\bar\rho : [0,\infty) \to (0,1]$ be the inverse function of the restriction of $-\psi_b'(t)$ to the interval $(0,1]$, and let $s_b = -\psi_b'(t)$. Then one has
$$\rho(s) \geq \bar\rho(1 + 2s), \qquad (14)$$
$$\bar\rho(s_b) \geq \frac{1}{\left(\log(2s_b) + 1\right)^{\frac{1}{q}}},\quad s_b \geq 1. \qquad (15)$$

Proof. Let $t = \rho(s)$. By the definition of $\rho$ as the inverse function of $-\frac{1}{2}\psi'(t)$ for $t \leq 1$, this means that $-2s = \psi'(t) = t + \psi_b'(t)$, $0 < t \leq 1$. Since $t \leq 1$, this implies $-\psi_b'(t) = t + 2s \leq 1 + 2s$. The function $\bar\rho$ is monotonically decreasing, so $\rho(s) = t = \bar\rho\left(-\psi_b'(t)\right) \geq \bar\rho(1 + 2s)$, which proves (14). For (15), we have $s_b = \frac{1}{2\bar\rho(s_b)} + \frac{1}{2}\bar\rho(s_b)^{-(q+1)}e^{\bar\rho(s_b)^{-q}-1}$ with $0 < \bar\rho(s_b) \leq 1$, and $s_b \geq 1$ implies $e^{\bar\rho(s_b)^{-q}-1} = 2\bar\rho(s_b)^{q+1}s_b - \bar\rho(s_b)^q \leq 2s_b$. Hence $\bar\rho(s_b) \geq \frac{1}{\left(\log(2s_b)+1\right)^{\frac{1}{q}}}$. This completes the proof. □

Lemma 5. Let $\varrho : [0,\infty) \to [1,\infty)$ be the inverse function of $\psi(t)$, $t \geq 1$. Then we have
$$\Psi(\beta v) \leq n\psi\left(\beta\varrho\left(\frac{\Psi(v)}{n}\right)\right),\quad v \in \mathbb{R}^n_{++},\ \beta \geq 1.$$

Proof. Using Lemma 1(d) and Theorem 3.2 in [1], we get the result. This completes the proof. □

Lemma 6. Let $0 < \theta < 1$ and $v_+ = \frac{v}{\sqrt{1-\theta}}$. If $\Psi(v) \leq \tau$, then we have
$$\Psi(v_+) \leq \frac{n\theta + 2\tau + 2\sqrt{2n\tau}}{2(1-\theta)}.$$

Proof. Since $\frac{1}{\sqrt{1-\theta}} \geq 1$ and $\varrho\left(\frac{\Psi(v)}{n}\right) \geq 1$, we have $\frac{1}{\sqrt{1-\theta}}\,\varrho\left(\frac{\Psi(v)}{n}\right) \geq 1$; moreover, for $t \geq 1$ we have $\psi(t) \leq \frac{t^2-1}{2}$. Using Lemma 5 with $\beta = \frac{1}{\sqrt{1-\theta}}$, (13) and $\Psi(v) \leq \tau$, we have $\Psi(v_+) \leq \frac{n\theta + 2\tau + 2\sqrt{2n\tau}}{2(1-\theta)}$. This completes the proof. □

Denote
$$\Psi_0 = \frac{n\theta + 2\tau + 2\sqrt{2n\tau}}{2(1-\theta)} = L(n,\theta,\tau); \qquad (16)$$
then $\Psi_0$ is an upper bound for $\Psi(v)$ during the process of the algorithm.

3. Analysis of the algorithm

In this section, we compute a feasible step size $\alpha$ for which the proximity function decreases, and we bound this decrease during inner iterations; we then give the default step size
$$\tilde\alpha = \frac{1}{1 + (2q+1)(1+4\delta)\left[\log(2+8\delta) + 1\right]^{\frac{q+1}{q}}}.$$
We will show that this step size not only keeps the iterates feasible but also gives rise to a sufficiently large decrease of the barrier function $\Psi(v)$ in each inner iteration. For fixed $\mu$, taking a step size $\alpha$, we have new iterates $x_+ := x + \alpha\Delta x$, $y_+ := y + \alpha\Delta y$, $s_+ := s + \alpha\Delta s$. Using (5), we have $x_+ = \frac{x}{v}(v + \alpha d_x)$ and $s_+ = \frac{s}{v}(v + \alpha d_s)$, so we have
$$v_+ = \sqrt{\frac{x_+ s_+}{\mu}} = \sqrt{(v + \alpha d_x)(v + \alpha d_s)}.$$

Define, for $\alpha > 0$, $f(\alpha) = \Psi(v_+) - \Psi(v)$. Then $f(\alpha)$ is the difference of proximities between a new iterate and a current iterate for fixed $\mu$. By Lemma 1(a), we have
$$\Psi(v_+) = \Psi\left(\sqrt{(v + \alpha d_x)(v + \alpha d_s)}\right) \leq \frac{1}{2}\left(\Psi(v + \alpha d_x) + \Psi(v + \alpha d_s)\right).$$
Therefore, we have $f(\alpha) \leq f_1(\alpha)$, where
$$f_1(\alpha) = \frac{1}{2}\left(\Psi(v + \alpha d_x) + \Psi(v + \alpha d_s)\right) - \Psi(v). \qquad (17)$$

Obviously, $f(0) = f_1(0) = 0$. Taking the first two derivatives of $f_1(\alpha)$ with respect to $\alpha$, we have
$$f_1'(\alpha) = \frac{1}{2}\sum_{i=1}^{n}\left(\psi'(v_i + \alpha d_{x_i})d_{x_i} + \psi'(v_i + \alpha d_{s_i})d_{s_i}\right),$$
$$f_1''(\alpha) = \frac{1}{2}\sum_{i=1}^{n}\left(\psi''(v_i + \alpha d_{x_i})d_{x_i}^2 + \psi''(v_i + \alpha d_{s_i})d_{s_i}^2\right).$$
Using (7) and (10), we have
$$f_1'(0) = \frac{1}{2}\nabla\Psi(v)^T(d_x + d_s) = -\frac{1}{2}\nabla\Psi(v)^T\nabla\Psi(v) = -2\delta(v)^2.$$
For convenience, we denote $v_1 = \min(v)$, $\delta := \delta(v)$.

Lemma 7. Let $\delta(v)$ be as defined in (10). Then we have
$$\delta(v) \geq \sqrt{\frac{\Psi(v)}{2}}. \qquad (18)$$

Proof. Using (11), we have
$$\Psi(v) = \sum_{i=1}^{n}\psi(v_i) \leq \sum_{i=1}^{n}\frac{1}{2}\left(\psi'(v_i)\right)^2 = \frac{1}{2}\left\|\nabla\Psi(v)\right\|^2 = 2\delta(v)^2,$$
so $\delta(v) \geq \sqrt{\frac{\Psi(v)}{2}}$. This completes the proof. □

Remark 1. Throughout the paper, we assume that $\tau \geq 1$. Using Lemma 7 and the assumption that $\Psi(v) \geq \tau$, we have $\delta(v) \geq \sqrt{\frac{\tau}{2}} \geq \frac{1}{\sqrt{2}}$.

From Lemmas 4.1-4.3 in [2], we have the following Lemmas 8-11.

Lemma 8. Let $f_1(\alpha)$ be as defined in (17) and $\delta(v)$ as defined in (10). Then we have $f_1''(\alpha) \leq 2\delta^2\psi''(v_1 - 2\alpha\delta)$.

Lemma 9. $f_1'(\alpha) \leq 0$ certainly holds if $\alpha$ satisfies
$$-\psi'(v_1 - 2\alpha\delta) + \psi'(v_1) \leq 2\delta. \qquad (19)$$

Lemma 10. Let $\rho : [0,\infty) \to (0,1]$ be the inverse function of $-\frac{1}{2}\psi'(t)$ for all $t \in (0,1]$. Then the largest step size $\bar\alpha$ satisfying (19) is given by
$$\bar\alpha = \frac{1}{2\delta}\left(\rho(\delta) - \rho(2\delta)\right).$$

Lemma 11. Let $\rho$ and $\bar\alpha$ be as defined in Lemma 10. Then $\bar\alpha \geq \dfrac{1}{\psi''(\rho(2\delta))}$.

Lemma 12. Let $\rho$ and $\bar\alpha$ be as defined in Lemma 10. If $\Psi(v) \geq \tau \geq 1$, then we have
$$\bar\alpha \geq \frac{1}{1 + (2q+1)(1+4\delta)\left[\log(2+8\delta) + 1\right]^{\frac{q+1}{q}}}.$$

Proof. Using Lemmas 11, 4 and 7 together with (9), we have
$$\bar\alpha \geq \frac{1}{\psi''(\rho(2\delta))} \geq \frac{1}{\psi''\left(\bar\rho(1+4\delta)\right)}.$$
Setting $t = \bar\rho(1+4\delta)$, $0 < t \leq 1$, it follows that
$$\bar\alpha \geq \frac{1}{\psi''(t)} = \frac{1}{1 + \frac{1}{2t^2} + \left[\frac{1}{2}(q+1)t^{-(q+2)} + \frac{1}{2}qt^{-(2q+2)}\right]e^{t^{-q}-1}} \geq \frac{1}{1 + (2q+1)t^{-(q+1)}\left(-\psi_b'(t)\right)}.$$
Since $-\psi_b'(t) = 1 + 4\delta$ and, by (15), $t = \bar\rho(1+4\delta) \geq \left(\log(2+8\delta)+1\right)^{-\frac{1}{q}}$, we obtain
$$\bar\alpha \geq \frac{1}{1 + (2q+1)(1+4\delta)\left[\log(2+8\delta)+1\right]^{\frac{q+1}{q}}}.$$
This completes the proof. □

Denoting
$$\tilde\alpha = \frac{1}{1 + (2q+1)(1+4\delta)\left[\log(2+8\delta) + 1\right]^{\frac{q+1}{q}}}, \qquad (20)$$
we have that $\tilde\alpha$ is the default step size and that $\tilde\alpha \leq \bar\alpha$.
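Since (20) is a closed-form expression in $\delta$ and $q$, the theoretical step size is immediate to compute; a small sketch (with `dlt` the proximity measure $\delta(v)$ of (10), e.g. from the Section 2 sketch, and $q = 2$ illustrative):

```python
import numpy as np

def default_step_size(dlt, q=2):
    """Default (theoretical) step size of eq. (20); dlt = delta(v)."""
    return 1.0 / (1.0 + (2 * q + 1) * (1 + 4 * dlt)
                  * (np.log(2 + 8 * dlt) + 1) ** ((q + 1) / q))
```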

From Lemma 1.3.3 in [8], we obtain the following lemma.

Lemma 13. Suppose that $h(t)$ is a twice differentiable convex function with $h(0) = 0$, $h'(0) < 0$, which attains its global minimum at $t^* > 0$. If $h''(t)$ is increasing with respect to $t$, then for any $t \in [0, t^*]$ we have $h(t) \leq \frac{t\,h'(0)}{2}$.

Let the univariate function $h$ be such that $h(0) = f_1(0) = 0$, $h'(0) = f_1'(0) = -2\delta^2$ and $h''(\alpha) = 2\delta^2\psi''(v_1 - 2\alpha\delta)$. In this respect the next result is important.

Lemma 14. Let $\tilde\alpha$ be the default step size as defined in (20) and let $\Psi(v) \geq \tau \geq 1$. Then
$$f(\tilde\alpha) \leq -\frac{\sqrt{\Psi(v)}}{2 + (2q+1)(2+4\sqrt{2})\left[\log\left(2+4\sqrt{2\Psi_0}\right)+1\right]^{\frac{q+1}{q}}}. \qquad (21)$$

Proof. Using Lemma 4.5 in [2] and Remark 1, if the step size $\alpha$ satisfies $\alpha \leq \bar\alpha$, then $f(\alpha) \leq -\alpha\delta^2$. So, since $\tilde\alpha \leq \bar\alpha$ and $1 \leq \sqrt{2}\,\delta$, we have
$$f(\tilde\alpha) \leq -\tilde\alpha\delta^2 \leq -\frac{\delta}{\sqrt{2} + (2q+1)(\sqrt{2}+4)\left[\log(2+8\delta)+1\right]^{\frac{q+1}{q}}}.$$
Since the decrease depends monotonically on $\delta$, substituting $\delta \geq \sqrt{\frac{\Psi(v)}{2}}$ and using $\Psi(v) \leq \Psi_0$ in the logarithmic term yields
$$f(\tilde\alpha) \leq -\frac{\sqrt{\Psi(v)}}{2 + (2q+1)(2+4\sqrt{2})\left[\log\left(2+4\sqrt{2\Psi_0}\right)+1\right]^{\frac{q+1}{q}}},$$
where the last inequality follows from $\Psi(v) \geq \tau \geq 1$. This proves the lemma. □

3.1. Inner iteration bound

After the update of $\mu$ to $(1-\theta)\mu$, we have $\Psi(v_+) \leq \frac{n\theta + 2\tau + 2\sqrt{2n\tau}}{2(1-\theta)} = L(n,\theta,\tau)$. We need to count how many inner iterations are required to return to the situation where $\Psi(v) \leq \tau$. We denote the value of $\Psi(v)$ after the $\mu$-update by $\Psi_0$; the subsequent values in the same outer iteration are denoted by $\Psi_k$, $k = 1, 2, \ldots, K$, where $K$ denotes the total number of inner iterations in the outer iteration. The decrease in each inner iteration is given by (21). In [2] we can find the appropriate values of $\kappa$ and $\gamma \in (0,1]$:
$$\kappa = \frac{1}{2 + (2q+1)(2+4\sqrt{2})\left[\log\left(2+4\sqrt{2\Psi_0}\right)+1\right]^{\frac{q+1}{q}}},\qquad \gamma = \frac{1}{2}.$$

Lemma 15. Let $K$ be the number of inner iterations in an outer iteration. Then we have
$$K \leq \left(4 + (2q+1)(4+8\sqrt{2})\left[\log\left(2+4\sqrt{2\Psi_0}\right)+1\right]^{\frac{q+1}{q}}\right)\Psi_0^{\frac{1}{2}}.$$

Proof. By Lemma 1.3.2 in [8], we have
$$K \leq \frac{\Psi_0^{\gamma}}{\kappa\gamma} = \left(4 + (2q+1)(4+8\sqrt{2})\left[\log\left(2+4\sqrt{2\Psi_0}\right)+1\right]^{\frac{q+1}{q}}\right)\Psi_0^{\frac{1}{2}}.$$
This completes the proof. □

3.2. Total iteration bound

The number of outer iterations is bounded above by $\frac{1}{\theta}\log\frac{n}{\varepsilon}$ (see [10], Lemma II.17, p. 116). By multiplying the number of outer iterations by the number of inner iterations, we get an upper bound for the total number of iterations, namely,
$$\left(4 + (2q+1)(4+8\sqrt{2})\left[\log\left(2+4\sqrt{2\Psi_0}\right)+1\right]^{\frac{q+1}{q}}\right)\Psi_0^{\frac{1}{2}}\,\frac{1}{\theta}\log\frac{n}{\varepsilon}. \qquad (22)$$

For large-update methods, with $\tau = O(\sqrt{n})$ and $\theta = \Theta(1)$, we have $\Psi_0 = O(n)$ and an $O\left(q\sqrt{n}\left(\log\sqrt{n}\right)^{\frac{q+1}{q}}\log\frac{n}{\varepsilon}\right)$ iteration complexity.

Remark 2. The best total iteration bound is obtained for $q = 1$: in that case the bound becomes $O\left(\sqrt{n}\left(\log\sqrt{n}\right)^{2}\log\frac{n}{\varepsilon}\right)$ for large-update interior-point methods.

In the case of small-update methods, the best bound is obtained as follows. By (12), we have
$$\Psi(v_+) \leq \frac{n(2+q)}{2}\left(\frac{1}{\sqrt{1-\theta}}\,\varrho\left(\frac{\Psi(v)}{n}\right) - 1\right)^2.$$
Using (13), $\Psi(v) \leq \tau$ and $1 - \sqrt{1-\theta} \leq \theta$, we have
$$\Psi_0 \leq \frac{n(2+q)}{2(1-\theta)}\left(\theta + \sqrt{\frac{2\tau}{n}}\right)^2.$$
With $\tau = O(1)$ and $\theta = \Theta\left(\frac{1}{\sqrt{n}}\right)$, we have $\Psi_0 = O(q)$, and the iteration bound becomes an $O\left(q^{3}\left(\log\sqrt{q}\right)^{\frac{q+1}{q}}\sqrt{n}\log\frac{n}{\varepsilon}\right)$ iteration complexity.

4. Numerical results

The aim of this section is to investigate the influence of the choice of the new kernel function on the computational behavior of the generic primal-dual algorithm for linear optimization given in Section 1.4. The algorithm is coded in MATLAB (R2014a), and our experiments were performed on a PC with a Genuine Intel(R) CPU T2080 processor @ 1.73 GHz and 2.00 GB of installed memory (RAM). For the parameters $\tau$ and $\theta$ and the accuracy parameter $\varepsilon$, we fixed $\tau = \sqrt{n}$, $\theta \in \{0.3, 0.5, 0.7, 0.9, 0.99\}$ and $\varepsilon = 10^{-4}$.

The choice of the step size $\alpha$ ($0 < \alpha < 1$) is another crucial issue in the analysis of the algorithm. It has to be made such that the closeness of the iterates to the current $\mu$-center improves by a sufficient amount. In the theoretical analysis, the step size $\alpha$ is usually given a value that is very small during each inner iteration. In practice, this leads to a very large number of inner iterations. So, to accelerate the iteration process, we propose the dynamic and practical choices defined below.

Dynamic choice [9]

We enlarge the step size by using the following procedure: we take $\alpha = \rho\tilde\alpha$, where $\rho \geq 1$ is a fixed scalar chosen according to the size of the increment of $x$ or $s$, and $\tilde\alpha$ is the default step size (the theoretical choice):
$$\alpha = \begin{cases} \rho_1\tilde\alpha & \text{if } \|\Delta x\| \geq n,\\ \rho_2\tilde\alpha & \text{if } 1 \leq \|\Delta x\| < n,\\ \rho_3\tilde\alpha & \text{if } \|\Delta x\| < 1.\end{cases}$$
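In code, this rule reads as follows (a sketch; the $\rho$ values shown are those used for Tab. 1, and keeping the enlarged step feasible, e.g. by capping it with the practical bound below, is left to the implementation):

```python
import numpy as np

def dynamic_step_size(dX, alpha_default, n, rho1=100.0, rho2=50.0, rho3=25.0):
    """Dynamic choice of [9]: scale the theoretical step size by a factor
    rho chosen according to the size of the primal increment."""
    norm_dx = np.linalg.norm(dX)
    if norm_dx >= n:
        return rho1 * alpha_default
    if norm_dx >= 1:
        return rho2 * alpha_default
    return rho3 * alpha_default
```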

Practical choice [6]

We impose the following conditions of strict positivity on the new iterates: $x + \alpha\Delta x > 0$ and $s + \alpha\Delta s > 0$. These give $\alpha_x^+ = \beta\hat\alpha_x$ and $\alpha_s^+ = \beta\hat\alpha_s$, with $0 < \beta < 1$, where
$$\hat\alpha_x = \begin{cases}\min\limits_{i\in I}\left(-\dfrac{x_i}{\Delta x_i}\right) & \text{with } I = \{i : \Delta x_i < 0\},\\ 1 & \text{if } I = \emptyset,\end{cases}\qquad
\hat\alpha_s = \begin{cases}\min\limits_{i\in I}\left(-\dfrac{s_i}{\Delta s_i}\right) & \text{with } I = \{i : \Delta s_i < 0\},\\ 1 & \text{if } I = \emptyset.\end{cases}$$
We take $\alpha_k = \min(\alpha_x^+, \alpha_s^+)$, so the new iterate is $(x_+, s_+) = (x, s) + \alpha_k(\Delta x, \Delta s)$.
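A sketch of this rule (the damping factor $\beta = 0.95$ is an illustrative value; the paper only requires $0 < \beta < 1$):

```python
import numpy as np

def practical_step_size(x, s, dX, dS, beta=0.95):
    """Practical choice of [6]: a fraction beta of the largest step alpha
    keeping x + alpha*dX > 0 and s + alpha*dS > 0 (ratio test; the bound
    is 1 when no component of the direction is negative)."""
    ax = np.min(-x[dX < 0] / dX[dX < 0]) if np.any(dX < 0) else 1.0
    a_s = np.min(-s[dS < 0] / dS[dS < 0]) if np.any(dS < 0) else 1.0
    return min(beta * ax, beta * a_s)
```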

Example 1. We consider a linear program with $m = 5$, $n = 9$,
$$A = \begin{pmatrix} 0 & 1 & 2 & -1 & 1 & 1 & 0 & 0 & 0\\ 1 & 2 & 3 & 4 & -1 & 0 & 1 & 0 & 0\\ -1 & 0 & -2 & 1 & 2 & 0 & 0 & 1 & 0\\ 1 & 2 & 0 & -1 & -2 & 0 & 0 & 0 & 1\\ 1 & 3 & 4 & 2 & 1 & 0 & 0 & 0 & 0 \end{pmatrix},$$
$c = (1, 0, -2, 1, 1, 0, 0, 0, 0)^T$ and $b = (1, 2, 3, 2, 1)^T$. The starting point is
$x^0 = (0.1819, 0.0699, 0.063, 0.1105, 0.2012, 0.6732, 1.1885, 2.835, 2.1912)^T$,
$s^0 = (4.939, 3.544, 4.7186, 9.1788, 4.5072, 1.384, 0.875, 0.4241, 0.4463)^T$,
$y^0 = (-1.3843, -0.8751, -0.4241, -0.4463, -3.0424)^T$.
The optimal solution is
$x^* = (0, 0, 0.2664, 0, 0, 0.4269, 1.1406, 3.5729, 2)^T$,
$y^* = (0, 0, 0, 0, -0.4999)^T$,
$s^* = (1.5, 1.4999, 0, 1.9999, 1.4999, 0, 0, 0, 0)^T$.

In the tables of results, n represents the size of the example, (Outer) represents the number of outer iterations, (Inner) represents the number of inner iterations and (Time) represents the calculation time in seconds.

Tab. 1 gives the numbers of inner and outer iterations for Example 1 with fixed scalars $\rho_1 = 100$, $\rho_2 = 50$ and $\rho_3 = 25$. We obtain the following results:

Table 1. Numbers of inner and outer iterations for Example 1 with fixed scalars $\rho$

Step size choices | Inner | Outer | Time
Theoretical choice | 2704 | 5 | 24.627917 s
Dynamic choice | 23 | 5 | 0.218753 s
Practical choice | 4 | 5 | 0.109279 s

Example 2. We consider a linear program with $m = 3$, $n = 6$,
$$A = \begin{pmatrix} 2 & 1 & 0 & -1 & 0 & 0\\ 0 & 0 & 1 & 0 & 1 & -1\\ 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix},\quad c = (3, -1, 1, 0, 0, 0)^T,\quad b = (0, 0, 1)^T.$$
The starting point is
$x^0 = (0.06757, 0.13258, 0.13302, 0.26774, 0.13302, 0.2664)^T$,
$s^0 = (10, 4, 6, 1, 5, 1)^T$, $y^0 = (-2, -2, -3)^T$.
The optimal solution is
$x^* = (0.0000, 0.5000, 0.0000, 0.5000, 0.0000, 0.0004)^T$,
$y^* = (-0.5000, -0.4902, -0.5000)^T$,
$s^* = (4.5000, 0.0000, 1.9902, 0.0000, 0.9902, 0.0098)^T$.

There is a parameter $\rho$ involved in the definition of the dynamic choice; we used several values of this parameter, as indicated in Tabs. 2 and 3 below. These values were chosen after some preliminary experiments that showed that they gave the most promising iteration counts.

Tab. 2 gives the numbers of iterations for possible combinations of $\theta$ and $\rho$. The value $\theta = 0.9$ gives the lowest iteration count in all cases.

Table 2. Numbers of inner and outer iterations for several choices of $\theta$ and $\rho$

$\theta$ | $\rho_1$ | $\rho_2$ | $\rho_3$ | Inner | Outer | Time
0.3 | 200 | 100 | 50 | 1 | 31 | 0.027893 s
0.5 | 201 | 100 | 50 | 1 | 16 | 0.029697 s
0.7 | 201 | 100 | 50 | 2 | 10 | 0.032418 s
0.9 | 423 | 100 | 50 | 2 | 5 | 0.019141 s
0.99 | 422 | 100 | 50 | 13 | 3 | 0.074557 s

Tab. 3 gives the numbers of inner and outer iterations for Example 2 with $\theta = 0.9$ and variable values of the scalars $\rho_1$, $\rho_2$ and $\rho_3$.

Table 3. Numbers of inner and outer iterations for variable values of the scalars $\rho$

Step size choices | Inner | Outer | Time
Theoretical choice | 2174 | 5 | 7.715739 s
Dynamic choice ($\rho_1 = 100$, $\rho_2 = 50$, $\rho_3 = 25$) | 21 | 5 | 0.1593165 s
Dynamic choice ($\rho_1 = 423$, $\rho_2 = 100$, $\rho_3 = 50$) | 2 | 5 | 0.019141 s
Practical choice | 4 | 5 | 0.024999 s

Example 3. We consider the following example with variable size, $n = 2m$:
$$A(l,j) = \begin{cases} 1 & \text{if } j = l \text{ or } j = l+m,\\ 0 & \text{otherwise}, \end{cases}\qquad c(l) = -1,\quad c(l+m) = 0,\quad b(l) = 2,\quad l = 1,\ldots,m.$$
The starting point is $x^0(l) = x^0(l+m) = 1$, $s^0(l) = 1$, $s^0(l+m) = 2$ and $y^0(l) = -2$ for $l = 1, \ldots, m$. The optimal solutions are obtained as follows:
$$x_i^* = \begin{cases} 2 & \text{if } i = 1,\ldots,m,\\ 0 & \text{if } i = m+1,\ldots,n, \end{cases}\qquad y_l^* = -1 \ \text{for } l = 1,\ldots,m,\qquad s_i^* = \begin{cases} 0 & \text{if } i = 1,\ldots,m,\\ 1 & \text{if } i = m+1,\ldots,n. \end{cases}$$
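The data of this family is easy to generate for any $m$; a small sketch (Python/NumPy, hypothetical helper name):

```python
import numpy as np

def example3_data(m):
    """Example 3 with n = 2m: A = [I | I], c = (-e, 0), b = 2e, plus the
    strictly feasible starting point given above."""
    A = np.hstack([np.eye(m), np.eye(m)])          # A(l, j) = 1 iff j = l or j = l + m
    c = np.concatenate([-np.ones(m), np.zeros(m)])
    b = 2.0 * np.ones(m)
    x0 = np.ones(2 * m)                            # x0(l) = x0(l + m) = 1
    s0 = np.concatenate([np.ones(m), 2.0 * np.ones(m)])
    y0 = -2.0 * np.ones(m)
    return A, b, c, x0, y0, s0
```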

The results with $\theta = 0.9$ are given in Tab. 4.

Table 4. Numbers of inner and outer iterations for several choices of the step size $\alpha$ for an example with variable size

n | $\rho_1$, $\rho_2$, $\rho_3$ | Theoretical choice (Inner / Outer / Time) | Dynamic choice (Inner / Outer / Time) | Practical choice (Inner / Outer / Time)
20 | 500, 350, 150 | 4171 / 6 / 484.938590 s | 4 / 6 / 0.208902 s | 4 / 6 / 0.122828 s
50 | 1050, 350, 150 | 6977 / 6 / 791.156169 s | 3 / 6 / 0.491385 s | 4 / 6 / 0.279962 s
100 | 1050, 350, 150 | 10385 / 6 / 3475.590158 s | 4 / 6 / 2.937549 s | 4 / 6 / 0.360571 s
200 | 2000, 350, 280 | 15547 / 7 / 19737.654690 s | 5 / 7 / 12.924049 s | 5 / 7 / 4.378243 s
400 | 3010, 500, 280 | – | 4 / 7 / 68.116663 s | 5 / 7 / 21.679448 s
500 | 3110, 510, 280 | – | 5 / 7 / 137.160435 s | 4 / 7 / 42.884622 s
1000 | 5525, 510, 350 | – | 6 / 7 / 1321.177117 s | 5 / 7 / 262.051496 s

Conclusion

In this paper, we have proposed a primal-dual interior-point algorithm for (LO) based on a new kernel function. For this parametric kernel function, we have shown that the best known iteration bounds for large- and small-update methods can be achieved, namely $O\left(q\sqrt{n}\left(\log\sqrt{n}\right)^{\frac{q+1}{q}}\log\frac{n}{\varepsilon}\right)$ for large-update methods and $O\left(q^{3}\left(\log\sqrt{q}\right)^{\frac{q+1}{q}}\sqrt{n}\log\frac{n}{\varepsilon}\right)$ for small-update methods. In practice, the step size $\alpha$ plays a crucial role in the computational behavior of the algorithm. To accelerate the iteration process of our algorithm, we have proposed dynamic and practical choices of the step size. The algorithm with the practical step size works faster than that with the dynamic one, but for suitable values of the parameter $\rho$ both choices lead to a significant decrease in the total number of iterations.

For further research, it is necessary to devise a simple strategy for determining appropriate values of the parameter $\rho$ that keep the iterates in the interior of the feasible domain. Furthermore, this algorithm may possibly be extended to semidefinite optimization, quadratic programming and linear complementarity problems with these choices of the step size.

References

[1] M.Achache, A new parameterized kernel function for LO yielding the best known iteration bound for a large-update interior point algorithm, Afrika Matematika, 27(2016), no. 3-4, 591-601.

[2] Y.Q.Bai, M.El Ghami, C.Roos, A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization, SIAM Journal on Optimization, 15(2004), no. 1, 101-128.

[3] D.Den Hertog, Interior point approach to linear, quadratic, and convex programming, in: Mathematics and its Applications, vol. 277, Kluwer Academic Publishers, Dordrecht, 1994.

[4] M.Bouafia, D.Benterki, A.Yassine, An efficient parameterized logarithmic kernel function for linear optimization, Optimization Letters, 2018.

[5] N.K.Karmarkar, A new polynomial-time algorithm for linear programming, in: Proceedings of the 16th Annual ACM Symposium on Theory of Computing, 1984, 302-311.

[6] A.Keraghel, Étude adaptative et comparative des principales variantes dans l'algorithme de Karmarkar, Thèse de Doctorat, Université Joseph Fourier, Grenoble I, France, 1989.

[7] B.Kheirfam, M.Haghighi, A full-Newton step infeasible interior-point method for linear optimization based on a trigonometric kernel function, Journal of Mathematical Programming and Operations Research, 65(2016), no. 4, 841-857.

[8] J.Peng, C.Roos, T.Terlaky, Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms, Princeton University Press, 2002.

[9] Z.G.Qian, Y.Q.Bai, Primal-dual interior-point algorithms with dynamic step size based on kernel functions for linear programming, Journal of Shanghai University, 9(2005), no. 5, 391-396.

[10] C.Roos, T.Terlaky, J.-Ph.Vial, Theory and Algorithms for Linear Optimization, An Interior-Point Approach, John Wiley & Sons, Chichester, UK, 1997.

[11] G.Sonnevend, An "analytic center" for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming, in: A.Prékopa, J.Szelezsán, B.Strazicky (Eds.), System Modelling and Optimization: Proceedings of the 12th IFIP Conference, Budapest, Hungary, 1985; Lecture Notes in Control and Inform. Sci., vol. 84, Springer, Berlin, 1986, 866-876.

[12] R.J.Vanderbei, Linear Programming, Foundations and Extensions, 2nd ed., in: International Series in Operations Research and Management Science, vol. 7, 1997.

[13] Y.Ye, Interior Point Algorithms, Theory and Analysis, John Wiley and Sons, Chichester, UK, 1997.
