
The Bulletin of Irkutsk State University. Series "Mathematics". 2016. Vol. 18. P. 110-121

Online access to the journal: http://isu.ru/izvestia

UDC 519.853
MSC 90C26, 93C05

A Method for Semidefinite Quasiconvex Maximization Problem

R. Enkhbat

Institute of Mathematics, National University of Mongolia

M. Bellalij

University of Valenciennes and Hainaut-Cambresis, France

K. Jbilou

University of Littoral Cote d'Opale, Calais, France

T. Bayartugs

University of Science and Technology, Mongolia

Abstract. We introduce the so-called semidefinite quasiconvex maximization problem. We derive new global optimality conditions by generalizing [9]. Using these conditions, we construct an algorithm that generates a sequence of local maximizers converging to a global solution. New applications of semidefinite quasiconvex maximization are also given. The subproblems of the proposed algorithm are semidefinite linear programs.

Keywords: Semidefinite linear programming, global optimality conditions, semidefinite quasiconvex maximization, algorithm, approximation set.

1. Introduction

Semidefinite linear programming can be regarded as an extension of linear programming and deals with the following problem:
$$\min \langle C, X\rangle_F,$$
$$\langle A_j, X\rangle_F \le b_j, \quad j = 1, 2, \dots, s, \qquad (1.1)$$
$$X \succeq 0.$$
Here $X \in \mathbb{R}^{n\times n}$ is a matrix of variables and $A_j \in \mathbb{R}^{n\times n}$, $j = 1, 2, \dots, s$. The notation $X \succeq 0$ means that $X$ is a positive semidefinite matrix. We denote by $\langle \cdot, \cdot\rangle_F$ the Frobenius scalar product of two matrices $X$ and $Y$, defined by
$$\langle X, Y\rangle_F = \mathrm{trace}(X^T Y),$$
where $\mathrm{trace}(Z)$ denotes the trace of the square matrix $Z$. The corresponding norm is the well-known Frobenius norm defined by $\|X\|_F = \sqrt{\langle X, X\rangle_F}$.

Semidefinite programming finds many applications in engineering and optimization [7]. Most interior-point methods for linear programming have been generalized to semidefinite convex programming [13; 7]. Many works are devoted to the semidefinite convex programming problem, but less attention has so far been paid to the semidefinite quasiconvex maximization problem.
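For reference in the later sections, the following is a minimal sketch of how a problem of the form (1.1) can be posed with the cvxpy modeling library. The data $C$, $A_j$, $b_j$ are randomly generated placeholders (not data from the paper), and an extra trace bound is added only to keep the toy instance bounded.

```python
import numpy as np
import cvxpy as cp

n, s = 4, 3
rng = np.random.default_rng(0)

# Randomly generated placeholder data (not from the paper).
C = rng.standard_normal((n, n)); C = (C + C.T) / 2
A = [rng.standard_normal((n, n)) for _ in range(s)]
A = [(M + M.T) / 2 for M in A]
b = np.abs(rng.standard_normal(s)) + 1.0      # b_j > 0, so X = 0 is feasible

# Problem (1.1): minimize <C, X>_F  s.t.  <A_j, X>_F <= b_j,  X positive semidefinite.
X = cp.Variable((n, n), PSD=True)
constraints = [cp.trace(A[j].T @ X) <= b[j] for j in range(s)]
constraints.append(cp.trace(X) <= float(n))   # extra bound, only to keep the toy instance bounded
prob = cp.Problem(cp.Minimize(cp.trace(C.T @ X)), constraints)
prob.solve()
print("optimal value:", prob.value)
```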

2. Quasiconvex function and its properties

Let $X = [x_{ij}]$ be a matrix in $\mathbb{R}^{n\times n}$, and consider a scalar matrix function
$$f : \mathbb{R}^{n\times n} \to \mathbb{R}.$$

Definition 2.1. Let $f(X)$ be a differentiable function of the matrix $X$. Then
$$f'(X) = \left(\frac{\partial f}{\partial x_{ij}}\right)_{n\times n}.$$

If $f(\cdot)$ is differentiable, then it can be checked that
$$f(X + H) - f(X) = \langle f'(X), H\rangle_F + o(\|H\|_F).$$

Definition 2.2. A set $D \subset \mathbb{R}^{n\times n}$ is convex if $\alpha X + (1 - \alpha)Y \in D$ for all $X, Y \in D$ and $\alpha \in [0,1]$.

Definition 2.3. The function $f : D \to \mathbb{R}$ is said to be quasiconvex on $D$ if $f(\alpha X + (1 - \alpha)Y) \le \max\{f(X), f(Y)\}$ for all $X, Y \in D$ and $\alpha \in [0,1]$.

The well-known property of convex functions [8] can be easily generalized as follows.

Lemma 2.1. A function $f : \mathbb{R}^{n\times n} \to \mathbb{R}$ is quasiconvex if and only if the set
$$L_c(f) = \{X \in \mathbb{R}^{n\times n} \mid f(X) \le c\}$$
is convex for all $c \in \mathbb{R}$.

Proof. Necessity. Suppose that $c \in \mathbb{R}$ is an arbitrary number and $X, Y \in L_c(f)$. By the definition of quasiconvexity, we have
$$f(\alpha X + (1 - \alpha)Y) \le \max\{f(X), f(Y)\} \le c \quad \text{for all } \alpha \in [0,1],$$
which means that the set $L_c(f)$ is convex.

Sufficiency. Let $L_c(f)$ be a convex set for all $c \in \mathbb{R}$. For arbitrary $X, Y \in \mathbb{R}^{n\times n}$, define $c_0 = \max\{f(X), f(Y)\}$. Then $X \in L_{c_0}(f)$ and $Y \in L_{c_0}(f)$. Consequently, $\alpha X + (1 - \alpha)Y \in L_{c_0}(f)$ for any $\alpha \in [0,1]$, i.e., $f(\alpha X + (1 - \alpha)Y) \le c_0 = \max\{f(X), f(Y)\}$. This completes the proof.

Lemma 2.2. Let $f : \mathbb{R}^{n\times n} \to \mathbb{R}$ be a quasiconvex and differentiable function. Then the inequality $f(X) \le f(Y)$ for $X, Y \in \mathbb{R}^{n\times n}$ implies that $\langle f'(Y), X - Y\rangle_F \le 0$.

Proof. Since $f$ is quasiconvex,
$$f(\alpha X + (1 - \alpha)Y) \le \max\{f(X), f(Y)\} = f(Y)$$
for all $\alpha \in [0,1]$ and $X, Y \in \mathbb{R}^{n\times n}$ such that $f(X) \le f(Y)$. By Taylor's formula, in a neighborhood of the point $Y$ we have
$$f(Y + \alpha(X - Y)) - f(Y) = \alpha \langle f'(Y), X - Y\rangle_F + o(\alpha \|X - Y\|_F) \le 0, \quad \alpha > 0.$$
Dividing by $\alpha$ and using the fact that $\dfrac{o(\alpha \|X - Y\|_F)}{\alpha} \to 0$ as $\alpha \to 0^+$, we obtain $\langle f'(Y), X - Y\rangle_F \le 0$, which completes the proof.

3. Semidefinite quasiconvex maximization problem

3.1. Global optimality conditions

Consider the problem of maximizing a differentiable quasiconvex matrix function subject to constraints:
$$\max f(X)$$
$$\text{subject to: } \langle A_j, X\rangle_F \le b_j, \quad j = 1, 2, \dots, s, \qquad (3.1)$$
$$X \succeq 0,$$
where $A_j \in \mathbb{R}^{n\times n}$, $j = 1, 2, \dots, s$, and $b_j \in \mathbb{R}$. We call problem (3.1) the semidefinite quasiconvex maximization problem or, equivalently, semidefinite quasiconcave programming.

Denote by $D$ the feasible set of the problem:
$$D = \{X \in \mathbb{R}^{n\times n} \mid \langle A_j, X\rangle_F \le b_j,\ j = 1, 2, \dots, s;\ X \succeq 0\}.$$

Then problem (3.1) reduces to
$$\max_{X \in D} f(X). \qquad (3.2)$$

It can be checked that the set $D$ is convex. Problem (3.2) is nonconvex and belongs to the class of global optimization problems.

Introduce the level set $E_{f(Z)}(f)$ of the function $f : \mathbb{R}^{n\times n} \to \mathbb{R}$ at a point $Z \in \mathbb{R}^{n\times n}$:
$$E_{f(Z)}(f) = \{Y \in \mathbb{R}^{n\times n} \mid f(Y) = f(Z)\}.$$

It can be checked that the space $\mathbb{R}^{n\times n}$ is a Hilbert space equipped with the norm $\|\cdot\|_F$. We now compute the gradient of the function $g(X)$ defined as
$$g(X) = \tfrac{1}{2}\|X - U\|_F^2, \quad X \in \mathbb{R}^{n\times n}.$$

Indeed,
$$\Delta g(X) = g(X + \Delta X) - g(X) = \tfrac{1}{2}\|X + \Delta X - U\|_F^2 - \tfrac{1}{2}\|X - U\|_F^2$$
$$= \tfrac{1}{2}\langle X + \Delta X - U, X + \Delta X - U\rangle_F - \tfrac{1}{2}\langle X - U, X - U\rangle_F = \langle X - U, \Delta X\rangle_F + \tfrac{1}{2}\|\Delta X\|_F^2.$$

Hence, we get
$$g'(X) = X - U. \qquad (3.3)$$
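As a quick numerical sanity check of formula (3.3), the following sketch (with randomly generated placeholder matrices $X$, $U$ and a random direction $H$) compares a finite-difference directional derivative of $g$ with $\langle X - U, H\rangle_F$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
X, U, H = (rng.standard_normal((n, n)) for _ in range(3))

g = lambda M: 0.5 * np.linalg.norm(M - U, 'fro') ** 2

t = 1e-6
fd    = (g(X + t * H) - g(X)) / t     # finite-difference directional derivative of g at X along H
exact = np.sum((X - U) * H)           # <g'(X), H>_F with g'(X) = X - U, formula (3.3)
print(fd, exact)                      # the two values agree up to O(t)
```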

The global optimality condition for the problem (3.2) can be formulated in the following theorem.

Theorem 3.1. [5] If $Z \in D$ is a global solution to problem (3.2), then
$$\langle f'(Y), X - Y\rangle_F \le 0 \qquad (3.4)$$
holds for all $Y \in E_{f(Z)}(f)$ and $X \in D$. If, in addition, $f'(Y) \ne 0$ holds for all $Y \in E_{f(Z)}(f)$, then condition (3.4) is sufficient for $Z \in D$ to be a solution to problem (3.2).

Proof. Necessity. Assume that $Z$ is a solution of problem (3.2) and let $Y \in E_{f(Z)}(f)$ and $X \in D$. Then we have $f(X) \le f(Y)$. Applying Lemma 2.2, we obtain $\langle f'(Y), X - Y\rangle_F \le 0$.

Sufficiency. Suppose, on the contrary, that $Z$ is not a solution to problem (3.2), i.e., there exists $U \in D$ such that $f(U) > f(Z)$. The closed set $L_{f(Z)}(f) = \{X \in \mathbb{R}^{n\times n} \mid f(X) \le f(Z)\}$ is convex by Lemma 2.1. Let $Y$ be the projection of $U$ onto $L_{f(Z)}(f)$, i.e.,
$$\|Y - U\|_F = \min_{X \in L_{f(Z)}(f)} \|X - U\|_F.$$

Obviously,
$$\|Y - U\|_F > 0 \qquad (3.5)$$
holds, since $U \notin L_{f(Z)}(f)$. The point $Y$ can be considered as a solution of the convex minimization problem
$$\min_{X \in L_{f(Z)}(f)} \Big\{ g(X) = \tfrac{1}{2}\|X - U\|_F^2 \Big\}. \qquad (3.6)$$

Taking into account (3.3) and applying the Lagrange multiplier method [3] to problem (3.6) posed in a Hilbert space, we obtain the following optimality conditions at the point $Y$:
$$\lambda_0 \ge 0, \quad \lambda \ge 0, \quad \lambda_0 + \lambda > 0, \quad \lambda_0 g'(Y) + \lambda f'(Y) = 0, \quad \lambda (f(Y) - f(Z)) = 0, \qquad (3.7)$$
or, equivalently,
$$\lambda_0 \ge 0, \quad \lambda \ge 0, \quad \lambda_0 + \lambda > 0, \quad \lambda_0 (Y - U) + \lambda f'(Y) = 0, \quad \lambda (f(Y) - f(Z)) = 0. \qquad (3.8)$$

If $\lambda_0 = 0$, then (3.8) implies that $\lambda > 0$, $f(Y) = f(Z)$, and $f'(Y) = 0$, which contradicts the assumption of the theorem. If $\lambda = 0$, then we have $\lambda_0 > 0$ and $g'(Y) = Y - U = 0$, which contradicts (3.5). So, without loss of generality, we can set $\lambda_0 = 1$ and $\lambda > 0$ in (3.8). Hence, we have
$$Y - U + \lambda f'(Y) = 0, \quad \lambda > 0,$$
and, since $\lambda > 0$, also $f(Y) = f(Z)$, i.e., $Y \in E_{f(Z)}(f)$. From this we conclude that
$$\lambda f'(Y) = U - Y$$
and
$$\lambda \langle f'(Y), U - Y\rangle_F = \|U - Y\|_F^2 > 0,$$
which contradicts (3.4). This contradiction shows that the assumption that $Z$ is not a global solution to problem (3.2) is false, which completes the proof.

Remark 3.1. For a fixed $Y \in E_{f(Z)}(f)$, checking condition (3.4) reduces to checking
$$\max_{X \in D} \langle f'(Y), X\rangle_F \le \langle f'(Y), Y\rangle_F,$$
or, equivalently, to the semidefinite linear program
$$\max \langle f'(Y), X\rangle_F,$$
$$\text{subject to } \langle A_j, X\rangle_F \le b_j, \quad j = 1, 2, \dots, s, \qquad (3.9)$$
$$X \succeq 0.$$
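For a feasible set of the form in (3.1), the check in Remark 3.1 can be sketched with cvxpy as follows. The function below is illustrative (its name and tolerance are our own), and it assumes $D$ is compact so that the maximum in (3.9) is attained.

```python
import numpy as np
import cvxpy as cp

def check_condition_3_4(grad_Y, Y, A, b, tol=1e-8):
    """Solve the SDP (3.9): maximize <f'(Y), X>_F over D, then compare the
    optimal value with <f'(Y), Y>_F.  Returns (condition_holds, maximizer).
    Assumes the feasible set D is bounded so that the maximum is attained."""
    n = Y.shape[0]
    X = cp.Variable((n, n), PSD=True)
    constraints = [cp.trace(A[j].T @ X) <= b[j] for j in range(len(b))]
    prob = cp.Problem(cp.Maximize(cp.trace(grad_Y.T @ X)), constraints)
    prob.solve()
    return prob.value <= np.sum(grad_Y * Y) + tol, X.value
```

If the returned flag is False, the computed maximizer plays the role of the point $U$ in Remark 3.2 below, certifying that $Z$ is not a global solution.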

Remark 3.2. In order to conclude that a point $Z \in D$ is not a global solution to problem (3.2), we need to find a pair $(U, Y)$ such that
$$\langle f'(Y), U - Y\rangle_F > 0, \quad Y \in E_{f(Z)}(f), \quad U \in D.$$
The following example illustrates the use of this property.

Example 3.1. Consider the problem
$$\max_{X \in D} \|CX\|_F^2,$$
$$D = \{X \in \mathbb{R}^{2\times 2} \mid \underline{X} \le X \le \overline{X},\ X \succeq 0\},$$
where $C$, $\underline{X}$, and $\overline{X}$ are given $2\times 2$ matrices.

We can evaluate the gradient of $f$ as
$$f'(X) = 2C^T C X.$$
We check whether a given feasible point $X^0 \in D$ is a global solution or not. We have
$$f(X^0) = 254.$$

Consider the matrix $U^0$ in $D$ defined by
$$U^0 = \begin{pmatrix} 2.7 & 1.2 \\ 2.9 & 5.7 \end{pmatrix}.$$
Let $Y \in E_{f(X^0)}(f)$ be defined by
$$Y = \begin{pmatrix} 0.0027 & 0.01264 \\ 0.0090 & 0.03723 \end{pmatrix}.$$
Evaluating $\langle f'(Y), U^0 - Y\rangle_F$, we obtain $\langle f'(Y), U^0 - Y\rangle_F = 6.7218 > 0$, which means that $X^0$ is not a global solution. Therefore, we have obtained a point $U^0$ such that $f(U^0) > f(X^0)$. Continuing this process in the same way, we reach the global solution $X^*$.
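This kind of check is easy to reproduce numerically. The sketch below uses hypothetical $2\times 2$ matrices (not the data of the example) and takes $Y = X^0$, which trivially lies on the level set $E_{f(X^0)}(f)$.

```python
import numpy as np

# Hypothetical data (not the matrices of the paper's example); both points assumed feasible.
C  = np.array([[1.0, 2.0], [0.5, 3.0]])
X0 = np.array([[1.0, 0.0], [0.0, 1.0]])   # candidate point
U  = np.array([[2.0, 1.0], [1.0, 3.0]])   # another feasible point

f      = lambda X: np.linalg.norm(C @ X, 'fro') ** 2   # f(X) = ||CX||_F^2
grad_f = lambda X: 2.0 * C.T @ C @ X                   # f'(X) = 2 C^T C X

# Remark 3.2 with Y = X0 (trivially on the level set E_{f(X0)}(f)):
inner = np.sum(grad_f(X0) * (U - X0))                  # <f'(X0), U - X0>_F
print(inner, f(U) > f(X0))   # a positive value certifies that X0 is not a global maximizer
```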

3.2. The SDMAX Algorithm

As we have seen in Subsection 3.1, in order to check condition (3.4) we need to solve, for each given $Y \in E_{f(Z)}(f)$, the following semidefinite linear program:
$$\max_{X \in D} \langle f'(Y), X\rangle_F. \qquad (3.10)$$

For this purpose, we approximate the level set of the function $f$ by a finite number of points, so that only a finite number of problems (3.10) has to be solved.

Definition 3.1. The set $A_m^Z$ defined for a given $m \in \mathbb{N}$ and $Z \in \mathbb{R}^{n\times n}$ by
$$A_m^Z = \{Y^1, Y^2, \dots, Y^m \mid Y^i \in E_{f(Z)}(f),\ i = 1, 2, \dots, m\} \qquad (3.12)$$
is called an approximation set to the level set $E_{f(Z)}(f)$ at the point $Z$.

Assume that $A_m^Z$ is given and that $D$ is compact in $\mathbb{R}^{n\times n}$. Let $U^i$, $i = 1, 2, \dots, m$, be the solutions to the following problems:
$$\langle f'(Y^i), U^i\rangle_F = \max_{X \in D} \langle f'(Y^i), X\rangle_F. \qquad (3.13)$$

Define $\theta_m$ as follows:
$$\theta_m = \max_{1 \le i \le m} \langle f'(Y^i), U^i - Y^i\rangle_F.$$

Lemma 3.1. If there is a point $Y^i \in A_m^Z$ for $Z \in D$ such that $\langle f'(Y^i), U^i - Y^i\rangle_F > 0$, where $U^i$ satisfies (3.13), then
$$f(U^i) > f(Z).$$

Proof. By the definition of $U^i$, we have
$$\langle f'(Y^i), U^i - Y^i\rangle_F = \max_{X \in D} \langle f'(Y^i), X - Y^i\rangle_F > 0.$$
On the other hand, since $f$ is quasiconvex, Lemma 2.2 shows that $f(U^i) \le f(Y^i)$ would imply $\langle f'(Y^i), U^i - Y^i\rangle_F \le 0$, a contradiction. Hence $f(U^i) > f(Y^i) = f(Z)$. The proof is complete.

Now we can formulate an algorithm for finding an approximate solution to problem (3.2).

Algorithm SDMAX (semidefinite quasiconvex maximization)

Input: a quasiconvex differentiable function $f$ and a compact set $D$ in $\mathbb{R}^{n\times n}$.

Output: an approximate solution $X$ to (3.2).

Step 1. Choose a point $X^0 \in D$. Set $k := 0$.

Step 2. Find a local maximizer $Z^k \in D$ of problem (3.2), for example by the gradient method of semidefinite nonconvex programming proposed in [16].

Step 3. Construct an approximation set $A_m^{Z^k}$ at the point $Z^k$.

Step 4. For each $Y^i \in A_m^{Z^k}$ solve the semidefinite linear program $\max_{X \in D} \langle f'(Y^i), X\rangle_F$. Let $U^i$, $i = 1, 2, \dots, m$, be the solutions, i.e.,
$$\langle f'(Y^i), U^i\rangle_F = \max_{X \in D} \langle f'(Y^i), X\rangle_F.$$

Step 5. Find a number $j \in \{1, 2, \dots, m\}$ such that
$$\theta_m = \langle f'(Y^j), U^j - Y^j\rangle_F = \max_{i = 1, 2, \dots, m} \langle f'(Y^i), U^i - Y^i\rangle_F.$$

Step 6. If $\theta_m \le 0$, then terminate; $Z^k$ is an approximate solution.

Step 7. Set $X^{k+1} := U^j$, $k := k + 1$, and go to Step 2.

We note that Algorithm SDMAX generates a sequence of local maximizers $\{Z^k\}$ of problem (3.2) such that
$$f(Z^{k+1}) > f(Z^k), \quad k = 0, 1, \dots$$

Also, local maximizers can be found by semidefinite linear programming relaxations similar to [10]. This gives us an opportunity to approach the global solution of (3.2) using the standard machinery of semidefinite programming.

In order to run Algorithm SDMAX, we need to specify how to construct an approximation set $A_m^Z$. In general, the construction of such approximation sets depends on the objective function $f$ and on the structure of the feasible set $D$ in $\mathbb{R}^{n\times n}$. Let us show this on the following example.

Consider the quadratic function $f$:
$$f(X) = \|CX - XB - E\|_F^2, \quad C, B, E \in \mathbb{R}^{n\times n}.$$
It can be checked that the gradient of $f$ is given by
$$f'(X) = 2C^T(CX - XB - E) - 2(CX - XB - E)B^T.$$

Lemma 3.2. Let a point $Z \in D$ and a matrix $H \in \mathbb{R}^{n\times n}$ satisfy
$$\langle f'(Z), H\rangle_F < 0.$$
Then there exists a positive number $\alpha$ such that $Z + \alpha H \in E_{f(Z)}(f)$.

Proof. With $Y_\alpha = Z + \alpha H$, we solve the equation $f(Y_\alpha) = f(Z)$. In fact, we have
$$f(Z + \alpha H) = \|C(Z + \alpha H) - (Z + \alpha H)B - E\|_F^2 = \|(CZ - ZB - E) + \alpha(CH - HB)\|_F^2$$
$$= \|CZ - ZB - E\|_F^2 + 2\alpha\langle CZ - ZB - E, CH - HB\rangle_F + \alpha^2\|CH - HB\|_F^2$$
$$= f(Z) + 2\alpha\langle C^T(CZ - ZB - E) - (CZ - ZB - E)B^T, H\rangle_F + \alpha^2\|CH - HB\|_F^2.$$

Now the equation $f(Y_\alpha) = f(Z)$ gives us
$$\alpha = -\frac{2\langle C^T(CZ - ZB - E) - (CZ - ZB - E)B^T, H\rangle_F}{\|CH - HB\|_F^2}, \qquad (3.14)$$
which completes the proof.

Remark 1. If $Z$ is a local maximizer of problem (3.2), then by [8] we have
$$\langle f'(Z), X - Z\rangle_F \le 0, \quad \forall X \in D.$$
If we take $H = U - Z$, $U \in D$, then $\langle f'(Z), H\rangle_F \le 0$, which satisfies the condition of the lemma.

For this reason, in computational experiments the points $Y^i \in A_m^{Z^k}$ should be constructed as
$$Y^i = Z^k + \alpha_i H^i, \quad i = 1, 2, \dots, m,$$
where $Z^k$ is the current local maximizer of problem (3.2) at the $k$-th iteration, $H^i$ is a random matrix in $\mathbb{R}^{n\times n}$, and $\alpha_i$ is computed by formula (3.14).
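To make the whole procedure concrete, here is a compact sketch of SDMAX for the quadratic objective $f(X) = \|CX - XB - E\|_F^2$ above, with cvxpy used for the semidefinite linear subproblems. All data are randomly generated placeholders, $D$ is taken to be the compact set $\{X \succeq 0 : \mathrm{trace}(X) \le \tau\}$, and the local-search step of the paper (the gradient method of [16]) is replaced by a simple stand-in that repeatedly moves to the linearization maximizer, which does not decrease this convex quadratic $f$. The approximation set follows Remark 1: random directions $H^i$ with $\langle f'(Z^k), H^i\rangle_F \le 0$ and step sizes $\alpha_i$ from (3.14).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, tau = 4, 8, 5.0     # matrix size, approximation-set size, trace bound (placeholders)

# Placeholder data for the quadratic objective f(X) = ||CX - XB - E||_F^2.
C, B, E = (rng.standard_normal((n, n)) for _ in range(3))

def f(X):
    return np.linalg.norm(C @ X - X @ B - E, 'fro') ** 2

def grad_f(X):
    R = C @ X - X @ B - E
    return 2.0 * C.T @ R - 2.0 * R @ B.T

def argmax_linear(G):
    """SDP subproblem (3.10): maximize <G, X>_F over D = {X >= 0, trace(X) <= tau}."""
    X = cp.Variable((n, n), PSD=True)
    prob = cp.Problem(cp.Maximize(cp.trace(G.T @ X)), [cp.trace(X) <= tau])
    prob.solve()
    return X.value

def local_maximizer(X, iters=20):
    """Stand-in for Step 2 (the paper uses the gradient method of [16]):
    repeatedly move to the linearization maximizer; since this f is convex,
    each such move does not decrease f."""
    for _ in range(iters):
        V = argmax_linear(grad_f(X))
        if f(V) <= f(X) + 1e-10:
            break
        X = V
    return X

def approximation_set(Z, m):
    """Step 3 / Remark 1: Y^i = Z + alpha_i H^i with random H^i such that
    <f'(Z), H^i>_F <= 0 and alpha_i from formula (3.14)."""
    Ys = []
    for _ in range(m):
        H = rng.standard_normal((n, n))
        if np.sum(grad_f(Z) * H) > 0:
            H = -H
        # Since f'(Z) = 2(C^T R - R B^T), this quotient equals formula (3.14).
        alpha = -np.sum(grad_f(Z) * H) / np.linalg.norm(C @ H - H @ B, 'fro') ** 2
        Ys.append(Z + alpha * H)
    return Ys

# SDMAX main loop (Steps 1-7).
Z = local_maximizer(np.eye(n) * tau / n)           # Steps 1-2: feasible start, local maximizer
for k in range(10):
    best_theta, best_U = -np.inf, None
    for Y in approximation_set(Z, m):              # Steps 3-5
        U = argmax_linear(grad_f(Y))
        theta = np.sum(grad_f(Y) * (U - Y))
        if theta > best_theta:
            best_theta, best_U = theta, U
    if best_theta <= 1e-6:                         # Step 6: theta_m <= 0, stop
        break
    Z = local_maximizer(best_U)                    # Step 7, then back to Step 2
print("approximate global value:", f(Z))
```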

4. Application of semidefinite quasiconvex maximization

4.1. Maximum sum of mutual information in MIMO interference networks

In communication theory, multiple-input multiple-output (MIMO) refers to radio links with multiple antennas at the transmitter and the receiver side. The system to be modeled consists of $k$ MIMO users, where each transmitter has $M$ antennas and each receiver has $N$ antennas. A wide range of studies in this area lead to difficult optimization problems [15; 12]. Such problems do not admit a closed-form solution and, in general, are very difficult to solve numerically. For instance, in [1] a multiuser MIMO system is considered whose objective function is, in general, neither concave nor convex. Without going into details, the problem in question can be formulated mathematically as follows:

$$\max F(Q_1, Q_2, \dots, Q_k),$$
$$\text{subject to } \sum_{i=1}^{k} \mathrm{trace}(Q_i) \le p_T,$$
$$Q_i \succeq 0, \quad i = 1, \dots, k,$$
where $p_T$ is the total power constraint and the objective function $F$ is the nonlinear function defined by
$$F(Q_1, Q_2, \dots, Q_k) = \sum_{i=1}^{k} \log_2 \det\left(I + \rho_i H_{i,i} Q_i H_{i,i}^H R_i^{-1}\right),$$
with $R_i = I + \sum_{j=1, j \ne i}^{k} \eta_{i,j} H_{i,j} Q_j H_{i,j}^H$, where $H_{i,j} \in \mathbb{R}^{N\times M}$ denotes the channel matrix between the receive antennas of user $i$ and the transmit antennas of user $j$. The parameters $\rho_i$ and $\eta_{i,j}$ are, respectively, the signal-to-noise ratio (SNR) of user $i$ and the interference-to-noise ratio (INR) of the interference generated by user $j$ and received by user $i$'s receiver. The maximization is performed over the covariance matrices $Q_1, Q_2, \dots, Q_k$ of all transmitters, each of which is an $M \times M$ positive semidefinite matrix. The goal is to find covariance matrices that achieve this maximum. In general, this problem is not a standard semidefinite program. It has been shown in [1] that when the INR is sufficiently large (large interference), $F(Q_1, Q_2, \dots, Q_k)$ is a convex function with respect to each single variable $Q_i$, $i = 1, \dots, k$.
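Assuming the determinant form of mutual information written above, the objective $F$ can be evaluated as in the following sketch; all channel matrices, SNR and INR values, and covariance matrices are randomly generated placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
k, M, N = 3, 4, 4                 # users, transmit antennas, receive antennas (placeholders)
rho = np.full(k, 10.0)            # per-user SNR (placeholder values)
eta = np.full((k, k), 1.0)        # pairwise INR (placeholder values)

H = rng.standard_normal((k, k, N, M))       # H[i, j]: channel from transmitter j to receiver i
Q = [np.eye(M) / M for _ in range(k)]       # transmit covariances with trace(Q_i) = 1

def sum_mutual_information(Q):
    total = 0.0
    for i in range(k):
        R = np.eye(N)                       # interference-plus-noise covariance R_i
        for j in range(k):
            if j != i:
                R += eta[i, j] * H[i, j] @ Q[j] @ H[i, j].T
        S = np.eye(N) + rho[i] * H[i, i] @ Q[i] @ H[i, i].T @ np.linalg.inv(R)
        total += np.log2(np.linalg.det(S))
    return total

print("F(Q_1, ..., Q_k) =", sum_mutual_information(Q))
```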

5. Conclusion

We have considered the semidefinite quasiconvex maximization problem. Unlike semidefinite convex programming, this problem is nonconvex and NP-hard. We derived global optimality conditions by extending a result of Strekalovsky [9] to the semidefinite quasiconvex maximization problem. Based on the global optimality conditions, we proposed an algorithm for solving the problem.

References

1. Arslan G., Demirkol M.F., Song Y. Equilibrium efficiency improvement in MIMO interference systems: a decentralized stream control approach. IEEE Transactions on Wireless Communications, 2007, vol. 6, no 8, pp. 2984-2993.

2. Bouhamidi A., Enkhbat R., Jbilou K. Semidefinite Concave Programming. Journal of Mongolian Mathematical Society, 2012, vol. 16, pp. 37-46.

3. Balakrishnan A.V. Introduction to Optimization Theory in a Hilbert Space. Springer-Verlag, 1970.

4. Enkhbat R. Quasiconvex Programming and its Applications. Lambert Publisher, Germany, 2009.

5. Enkhbat R., Bayartugs T. Quasiconvex Semidefinite Minimization Problem. Journal of Optimization, Hindawi Publishing Corporation, 2013, vol. 2013, article ID 346131, 6 p.

6. Enkhbat R., Bayartugs T. Semidefinite Quasiconcave programming. International Journal of Pure and Applied Mathematics, 2013, vol. 87, no 4, pp. 547-557.

7. Pardalos P.M., Wolkowicz H. (eds.) Topics in Semidefinite and Interior Point Methods. Fields Institute Communications 18, AMS, Providence, Rhode Island, 1998.

8. Rockafellar R.T. Convex Analysis. Princeton University Press, 1970.

9. Strekalovsky A.S. Global Optimality Conditions for Nonconvex Optimization. Journal of Global Optimization, 1998, vol. 12, pp. 415-434

10. Strekalovsky A.S., Enkhbat R. Global Maximum of Convex Functions on an Arbitrary Set. Dep. in VINITI, Irkutsk, 1063, pp.1-27, 1990.


11. Malick J., Povh J., Rendl F., Wiegele A. Regularization methods for semidefinite programming. September 1, 2008.

12. Ye S., Blum R.S. Optimized signaling for MIMO interference systems with feedback. IEEE Transactions on Signal Processing, 2003, vol. 51, no 11, pp. 2839-2848.

13. Vandenberghe L., Boyd S. Semidefinite programming. SIAM Rev., 1996, vol. 38, pp. 49-95.

14. Zhang Y. On extending some primal-dual interior point algorithms from linear programming to semidefinite programming. SIAM Journal on Optimization, 1998, vol. 8, no 2, pp. 365-386.

15. Zhang L., Zhang Y., Liang Y.C., Xin Y., Poor H.V. On the Gaussian MIMO BC-MAC duality with multiple transmit covariance constraints. IEEE Transactions on Information Theory, 2012, vol. 58, no 4.

16. Liang T.-C., Wang T.-C., Ye Y. A Gradient Search Method to Round the Semidefinite Programming Relaxation Solution for Ad Hoc Wireless Sensor Network Localization. Technical Report SOL 2004-2, December 2004.

17. Wolkowicz H., Saigal R. and Vandenberghe L. Handbook of Semidefinite Programming. Kluwer, 2000.

18. Jansen K., Rolim J., Sudan M. Linear Semidefinite Programming and Randomization Methods for Combinatorial Optimization Problems. Dagstuhl Seminar no 00041, Report no 263, 2012.

Rentsen Enkhbat, Dr. Sc., Professor, National University of Mongolia, Baga toiruu 4, Sukhbaatar district, Ulaanbaatar, Mongolia, tel.: 97699278403, (e-mail: renkhbat46@yahoo.com)

Mohammed Bellalij, Professor, University of Valenciennes and Hainaut-Cambresis, Departement des Mathematiques, Valenciennes, Nord-Pas-de-Calais, France

Khalide Jbilou, Professor, Universite du Littoral Cote d'Opale, LMPA, 50 rue F. Buisson, B.P. 699, 62228 Calais Cedex, France (e-mail: jbilou@lmpa.univ-littoral.fr)

Tamjav Bayartugs, University of Science and Technology, Mongolia, Baga toiruu 4, Sukhbaatar district, Ulaanbaatar, Mongolia, tel.: 97699873029, (e-mail: bayart1969@yahoo.com)

R. Enkhbat, M. Bellalij, K. Jbilou, T. Bayartugs

Semidefinite Quasiconvex Programming

Abstract. We consider the semidefinite quasiconvex programming problem (maximization or minimization of a quasiconvex function over a convex set). By generalizing a theorem of A. S. Strekalovsky, we obtain a new global optimality condition for this class of problems. Based on the global optimality conditions, we construct an algorithm that generates a sequence of local maximizers converging to a global solution. The auxiliary problems of the proposed algorithm are semidefinite linear programs. New applications of semidefinite quasiconvex programming problems are given.

Keywords: semidefinite linear programming, global optimality conditions, semidefinite quasiconvex maximization and minimization, algorithm.

Rentsen Enkhbat, Doctor of Physical and Mathematical Sciences, Professor, National University of Mongolia, Baga Toiruu 4, Sukhbaatar District, Ulaanbaatar, Mongolia, tel.: 976-99278403 (e-mail: renkhbat46@yahoo.com)

M. Bellalij, Professor, University of Valenciennes, Nord-Pas-de-Calais, France

K. Jbilou, Professor, University of Littoral, Calais, France

Tamjav Bayartugs, Candidate of Physical and Mathematical Sciences, Mongolian University of Science and Technology, Baga Toiruu 34, Sukhbaatar District, Ulaanbaatar, Mongolia, tel.: 976-99873029
