
The Bulletin of Irkutsk State University. Series Mathematics. 2014, vol. 8, pp. 104-114

Online access to the journal: http://isu.ru/izvestia

UDC 519.853.4

Fractional Optimization Problems

R. Enkhbat

National University of Mongolia

T. Bayartugs

University of Science and Technology, Mongolia

Abstract. We consider fractional maximization and minimization problems over an arbitrary feasible set, with a convex function in the numerator and a concave function in the denominator. These problems have many applications in economics and engineering. It is shown that both kinds of problems belong to a class of global optimization problems. Under certain conditions these problems can be treated as quasiconvex maximization and minimization problems. For such problems we use an approach developed earlier, based on special global optimality conditions derived from the Global Search Theory proposed by A. S. Strekalovsky. For the case of a convex feasible set, we reduce the original minimization problem to a pseudoconvex minimization problem and show that any local solution is global. On this basis, two approximate numerical algorithms for fractional maximization and minimization are developed. Successful computational experiments have been carried out on test problems with up to 1000 variables.

Keywords: fractional maximization, fractional minimization, global optimality conditions, approximation set.

1. Introduction

In this paper we consider fractional optimization problems of the following two types:

\max_{x\in D} \frac{f(x)}{g(x)},    (1.1)

\min_{x\in D} \frac{f(x)}{g(x)},    (1.2)

where D ⊂ R^n, f(x) is convex and g(x) is concave on D, and f(x) and g(x) are positive on D. We call these problems fractional optimization problems. Problems (1.1)-(1.2) have many applications in economics and engineering. For instance, minimizing an average cost function [7] and minimizing the ratio between the amount of resource wasted and the amount used on a production plan belong to the class of fractional programming.

The most well-known and studied class of fractional programming is linear fractional programming. When D is convex, the well-known methods for solving problem (1.2) are the variable transformation method [8], the nonlinear programming approach [5], and the parametric approach [3]. The variable transformation method reduces problem (1.2) to convex programming for the case

D = {x ∈ S ⊂ R^n | h(x) ≤ 0},

where h : R^n → R^m is a convex vector-valued function and S is a convex set.

Theorem 1. [5] Problem (1.2) can be reduced to the convex programming problem

\min \{\, t f(t^{-1}y) \mid t\, h(t^{-1}y) \le 0,\ t\, g(t^{-1}y) = 1,\ t^{-1}y \in S,\ t > 0 \,\}    (1.3)

by applying the transformation

y = xt,   t = \frac{1}{g(x)}.

Moreover, if (y*, t*) solves problem (1.3), then x* = (t*)^{-1} y* solves (1.2).
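For illustration, consider the one-dimensional instance f(x) = x^2 + 1, g(x) = 4 − x^2 on S = [−1, 1] (f convex and positive, g concave and positive on S). The following minimal sketch, assuming SciPy's SLSQP solver and this toy instance (neither is part of the original text), solves the transformed problem (1.3) and recovers x* = (t*)^{-1} y*:

```python
# Sketch: solving the transformed problem (1.3) on a toy instance.
# Assumptions: f(x) = x^2 + 1 (convex, > 0), g(x) = 4 - x^2 (concave, > 0),
# S = [-1, 1]; SLSQP handles the equality constraint t*g(y/t) = 1.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x**2 + 1.0
g = lambda x: 4.0 - x**2

# Variables z = (y, t); objective t*f(y/t).
obj = lambda z: z[1] * f(z[0] / z[1])
cons = [{'type': 'eq',   'fun': lambda z: z[1] * g(z[0] / z[1]) - 1.0},  # t*g(y/t) = 1
        {'type': 'ineq', 'fun': lambda z: z[0] / z[1] + 1.0},            # y/t >= -1
        {'type': 'ineq', 'fun': lambda z: 1.0 - z[0] / z[1]}]            # y/t <= 1

res = minimize(obj, x0=np.array([0.1, 0.5]), constraints=cons,
               bounds=[(None, None), (1e-6, None)])                      # t > 0
y_star, t_star = res.x
x_star = y_star / t_star                       # recover x* = (t*)^{-1} y*
print(x_star, f(x_star) / g(x_star))           # expect x* ~ 0, ratio ~ 0.25
```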

One of the most popular strategies for fractional programming is the parametric approach, which considers the class of optimization problems associated with problem (1.2) given by

\inf_{x\in D} \{ f(x) - \lambda g(x) \}    (1.4)

with λ ∈ R.

Introduce the function F(λ) as follows:

F(\lambda) = \min_{x\in D} \{ f(x) - \lambda g(x) \}.

Lemma 1. [3] If D is a compact set, then

(a) the function F : R → R is concave, continuous and strictly decreasing;

(b) the optimal value λ* of problem (1.2) is finite and F(λ*) = 0;

(c) F(λ) = 0 implies that λ = λ*;

(d) \lambda^* = \frac{f(x^*)}{g(x^*)} = \min_{x\in D} \frac{f(x)}{g(x)}.
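Properties (b)-(c) suggest the classical Dinkelbach iteration: solve the inner problem defining F(λ_k) and update λ_{k+1} = f(x_k)/g(x_k) until F vanishes. A minimal sketch on the same toy instance as above (the instance and the inner solver are illustrative assumptions):

```python
# Sketch of the parametric (Dinkelbach) scheme for min f/g over D = [-1, 1].
# Each step evaluates F(lam) = min_x {f(x) - lam*g(x)} (convex for lam >= 0)
# and updates lam to the current ratio; F(lam*) = 0 at the optimal ratio.
from scipy.optimize import minimize_scalar

f = lambda x: x**2 + 1.0
g = lambda x: 4.0 - x**2

lam = 0.0
for _ in range(50):
    res = minimize_scalar(lambda x: f(x) - lam * g(x),
                          bounds=(-1.0, 1.0), method='bounded')
    x = res.x
    lam_next = f(x) / g(x)
    if abs(lam_next - lam) < 1e-12:
        break
    lam = lam_next

print(x, lam)          # expect x ~ 0, lam* = 0.25
```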

2. Fractional Maximization and Global Optimality Conditions

Consider the fractional maximization problem:

\max_{x\in D} \varphi(x) = \frac{f(x)}{g(x)},    (2.1)

where f, g : D → R are differentiable functions, D is a convex subset of R^n, f(x) is convex on D, g(x) is concave on D, and f(x) > 0, g(x) > 0 for all x ∈ D.

Introduce the level set of the function φ(x) for a given C > 0:

L(φ, C) = {x ∈ D | φ(x) ≤ C}.

Lemma 2. The set L(φ, C) is convex.

Proof. Since g(x) > 0 on D, the inequality φ(x) ≤ C for x ∈ D can be written as follows:

f(x) − Cg(x) ≤ 0, x ∈ D.

The function f − Cg is convex, being the sum of the convex function f and the convex function −Cg (g is concave and C > 0). Hence the set

M = {x ∈ D | f(x) − Cg(x) ≤ 0}

is convex, which implies convexity of L(φ, C).

Definition 1. A function f : D → R is said to be quasiconvex on D if

f(αx + (1 − α)y) ≤ max{f(x), f(y)}

holds for all x, y ∈ D and α ∈ [0, 1].

Lemma 3. [4] The function f(x) is quasiconvex on D if and only if the set L(f, C) is convex for all C ∈ R.

By Lemmas 2 and 3, the function φ(x) is quasiconvex on D. The optimality conditions for a quasiconvex maximization problem were given in [4]; applied to problem (2.1), they yield the following theorem.

Theorem 2. Let z be a solution to problem (2.1), and let

E_C(φ) = {y ∈ R^n | φ(y) = C}.

Then

⟨φ′(y), x − y⟩ ≤ 0    (2.2)

for all y ∈ E_{φ(z)}(φ) and x ∈ D.

If, in addition, φ′(y) ≠ 0 holds for all y ∈ E_{φ(z)}(φ), then condition (2.2) is sufficient for z ∈ D to be a global solution to problem (2.1).

Since g(y) > 0 and f(y) = φ(z)g(y) on E_{φ(z)}(φ), condition (2.2) can be simplified to

⟨f′(y) − φ(z)g′(y), x − y⟩ ≤ 0

for all y ∈ E_{φ(z)}(φ) and x ∈ D.

These global optimality conditions are based on the Global Search Theory developed by A.S. Strekalovsky [10].

Algorithm and Approximation Set

Definition 2. The set A_m(z) defined for a given m by

A_m(z) = {y^1, y^2, ..., y^m | y^i ∈ E_{φ(z)}(φ) ∩ D, i = 1, 2, ..., m}

is called an approximation set.

Lemma 4. If there exists a point y^j ∈ A_m(z) such that

⟨φ′(y^j), u^j − y^j⟩ > 0, where ⟨φ′(y^j), u^j⟩ = max_{x∈D} ⟨φ′(y^j), x⟩,

then φ(u^j) > φ(z).

Proof. By the definition of u^j, we have

max_{x∈D} ⟨φ′(y^j), x − y^j⟩ = ⟨φ′(y^j), u^j − y^j⟩.

Since φ is differentiable and quasiconvex, φ(u^j) ≤ φ(y^j) would imply ⟨φ′(y^j), u^j − y^j⟩ ≤ 0 [4]. Therefore, the assumption of the lemma yields

φ(u^j) > φ(y^j) = φ(z).

Now we can construct an algorithm for solving problem (1.1) approximately.

Algorithm MAX

Step 1. Choose x^k ∈ D, k := 0; find a local maximizer z^k = arg loc max_{x∈D} φ(x) starting from x^k; m is given.

Step 2. Construct an approximation set A_m(z^k) at z^k.

Step 3. Solve the linear programming problems

max_{x∈D} ⟨φ′(y^i), x⟩, i = 1, 2, ..., m.

Let u^i be solutions to the above problems:

⟨φ′(y^i), u^i⟩ = max_{x∈D} ⟨φ′(y^i), x⟩, i = 1, 2, ..., m.

Step 4. Compute η_k:

η_k = max_{1≤i≤m} ⟨φ′(y^i), u^i − y^i⟩ = ⟨φ′(y^j), u^j − y^j⟩.

Step 5. If η_k > 0, then set x^{k+1} := u^j, k := k + 1 and go to Step 1.

Step 6. Terminate: z^k is an approximate global solution.

Lemma 5. If η_k > 0 for all k = 0, 1, ..., then the sequence {z^k} constructed by Algorithm MAX is a relaxation sequence, i.e.,

φ(z^{k+1}) > φ(z^k), k = 0, 1, ....

The proof follows from Lemma 4.
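A compact sketch of Algorithm MAX for the box-constrained quadratic instance of Section 4 is given below. SciPy's minimize and linprog as local-search and LP solvers, the random sampling of A_m(z), and the tolerances are implementation assumptions not fixed by the text; the closed-form level-surface step α is the one derived in Section 4.

```python
# Sketch of Algorithm MAX for phi(x) = (<Ax,x> + <b,x> + k)/(<Cx,x> + <d,x> + e)
# over a box D = {lo <= x <= hi}; solver choices and sampling are illustrative.
import numpy as np
from scipy.optimize import minimize, linprog

def quad_phi(A, b, k, C, d, e):
    As, Cs = 0.5 * (A + A.T), 0.5 * (C + C.T)          # symmetric parts
    num = lambda x: x @ As @ x + b @ x + k
    den = lambda x: x @ Cs @ x + d @ x + e
    phi = lambda x: num(x) / den(x)
    grad = lambda x: ((2*As@x + b) * den(x) - (2*Cs@x + d) * num(x)) / den(x)**2
    return phi, grad, As, Cs

def algorithm_max(A, b, k, C, d, e, lo, hi, m=20, restarts=30, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    phi, grad, As, Cs = quad_phi(A, b, k, C, d, e)
    n, bounds = len(b), list(zip(lo, hi))
    x = lo + (hi - lo) * rng.random(n)
    for _ in range(restarts):
        # Step 1: local maximizer z^k found from x (maximize = minimize -phi).
        z = minimize(lambda v: -phi(v), x, jac=lambda v: -grad(v), bounds=bounds).x
        t = phi(z)
        M = As - t * Cs            # psi(x) = num(x) - t*den(x) has Hessian 2M
        eta, x_next = 0.0, None
        for _ in range(m):         # Steps 2-4: sample y^i with phi(y^i) = t
            h = rng.standard_normal(n)
            hMh = h @ M @ h
            if abs(hMh) < 1e-12:
                continue
            a = -((2 * M @ z + b - t * d) @ h) / hMh   # phi(z + a*h) = phi(z)
            y = z + a * h
            if np.any(y < lo) or np.any(y > hi):       # require y in E ∩ D
                continue
            u = linprog(-grad(y), bounds=bounds).x     # Step 3: LP over the box
            val = grad(y) @ (u - y)                    # Step 4: eta candidates
            if val > eta:
                eta, x_next = val, u
        if eta <= 1e-9:            # Step 6: no improving direction found
            return z
        x = x_next                 # Step 5: restart local search from u^j
    return z
```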

3. Fractional Minimization and Global Optimality Conditions

Consider the fractional minimization problem

\min_{x\in D} \varphi(x) = \frac{f(x)}{g(x)},    (3.1)

where D ⊂ R^n is an arbitrary compact set, f, g : R^n → R are differentiable functions, f(x) is convex, g(x) is concave on D, and f and g are positive on D.

As we have shown in Section 2, the function φ(x) is quasiconvex on D. Now we can apply the global optimality conditions of [4] to problem (3.1) as follows.

Theorem 3. Let z be a global solution to problem (3.1), and let E_C(φ) = {y ∈ R^n | φ(y) = C}. Then

⟨φ′(y), x − y⟩ ≥ 0 for all y ∈ E_{φ(z)}(φ) and x ∈ D.    (3.2)

If, in addition,

lim_{‖x‖→∞} φ(x) = +∞ and φ′(x + αφ′(x)) ≠ 0

hold for all x ∈ D and α > 0, then condition (3.2) becomes sufficient.

Since g(y) > 0 and f(y) = φ(z)g(y) on E_{φ(z)}(φ), the optimality condition (3.2) can be written as follows:


⟨f′(y) − φ(z)g′(y), x − y⟩ ≥ 0 for all y ∈ E_{φ(z)}(φ) and x ∈ D.

Definition 3. Let Ω be a subset of R^n. A differentiable function h : Ω → R is pseudoconvex at y ∈ Ω if

h(x) − h(y) < 0 implies ⟨h′(y), x − y⟩ < 0 for all x ∈ Ω.

A function h(·) is pseudoconvex on Ω if it is pseudoconvex at each point y ∈ Ω.

Lemma 6. Let D be a convex set in R^n, let f : D → R be convex, differentiable and positive, and let g : D → R be concave, differentiable and positive. Then the function φ(x) = f(x)/g(x) is pseudoconvex.

Proof. Take any point y ∈ D. Introduce the function ψ : D → R as follows:

ψ(x) = f(x)g(y) − g(x)f(y).

Since g(y) > 0 and f(y) > 0, ψ(x) is convex and differentiable, and clearly ψ(y) = 0. Moreover, since g > 0 on D,

φ(x) < φ(y) is equivalent to ψ(x) < ψ(y).

Since ψ(·) is convex and differentiable, φ(x) < φ(y) yields

0 > ψ(x) − ψ(y) ≥ ⟨ψ′(y), x − y⟩.

Taking into account that

⟨ψ′(y), x − y⟩ = ⟨f′(y)g(y) − g′(y)f(y), x − y⟩ = [g(y)]² ⟨φ′(y), x − y⟩,

we obtain the implication

φ(x) < φ(y) ⟹ ⟨φ′(y), x − y⟩ < 0,

which proves the assertion.
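Lemma 6 is easy to test numerically: whenever φ(x) < φ(y), the inner product ⟨φ′(y), x − y⟩ must be negative. The following spot-check on a randomly generated convex/concave quadratic pair (the instance is an illustrative assumption) does exactly that:

```python
# Numeric spot-check of Lemma 6: phi(x) < phi(y) must force <phi'(y), x-y> < 0.
# The instance (random PD matrix A, C = -I, box D = [-1,1]^n) is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 5
R = rng.standard_normal((n, n))
A = R @ R.T + n * np.eye(n)        # positive definite -> f convex
C = -np.eye(n)                     # negative definite -> g concave
b, d, k, e = rng.random(n), rng.random(n), 5.0, 50.0

num = lambda x: x @ A @ x + b @ x + k
den = lambda x: x @ C @ x + d @ x + e       # stays positive on [-1,1]^n
phi = lambda x: num(x) / den(x)
grad = lambda x: ((2*A@x + b)*den(x) - (2*C@x + d)*num(x)) / den(x)**2

for _ in range(10000):
    x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
    if phi(x) < phi(y):
        assert grad(y) @ (x - y) < 1e-12    # tolerance only for rounding
print("pseudoconvexity inequality held on all sampled pairs")
```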

Lemma 7. Let D be a convex set. Then any local minimizer x* of φ(x) on D is also a global minimizer.

Proof. On the contrary, assume that x* is not a global minimizer. Then there exists a point u ∈ D such that

φ(x*) > φ(u).    (3.3)

Since D is a convex set,

x* + α(u − x*) = αu + (1 − α)x* ∈ D for all α: 0 ≤ α ≤ 1.

By Taylor's expansion, we have

φ(x* + α(u − x*)) = φ(x*) + α⟨φ′(x*), u − x*⟩ + o(α‖u − x*‖),

where lim_{α→0} o(α‖u − x*‖)/α = 0. Since x* is a local minimizer of φ on D, there exists 0 < α* < 1 such that

φ(x* + α(u − x*)) − φ(x*) ≥ 0 for all α: 0 < α < α*,

which implies

⟨φ′(x*), u − x*⟩ ≥ 0.

Since φ(·) is pseudoconvex, ⟨φ′(x*), u − x*⟩ ≥ 0 implies φ(u) ≥ φ(x*), contradicting (3.3). This completes the proof.

Lemma 7 allows us to apply gradient methods to problem (3.1). We present the conditional gradient method [2].

Algorithm MIN

Step 1. Choose an arbitrary feasible point x^0 ∈ D and set k := 0.

Step 2. Solve the linear programming problem

⟨φ′(x^k), x̄^k⟩ = min_{x∈D} ⟨φ′(x^k), x⟩,

and let x̄^k denote its solution.

Step 3. Compute η_k:

η_k = ⟨φ′(x^k), x̄^k − x^k⟩.

Step 4. If η_k = 0, then x^k is a solution.

Step 5. Otherwise set x^{k+1} = x^k(α_k), where x^k(α) = x^k + α(x̄^k − x^k), α ∈ [0, 1], and α_k satisfies

φ(x^k(α_k)) = min_{α∈[0,1]} φ(x^k(α)).

Step 6. Set k := k + 1 and go to Step 2.

The convergence of Algorithm MIN is given below.

Theorem 4. [2] The sequence {x^k, k = 0, 1, ...} generated by Algorithm MIN is a minimizing sequence, i.e.,

lim_{k→∞} φ(x^k) = min_{x∈D} φ(x).
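Algorithm MIN is the classical conditional gradient (Frank-Wolfe) method applied to φ; by Lemma 7, the stationary point it reaches is a global minimizer. A minimal sketch for a compact polytope D = {x : Bx ≤ l}, with SciPy's linprog and a bounded scalar line search as implementation assumptions:

```python
# Sketch of Algorithm MIN (conditional gradient) for min phi over
# D = {x : Bx <= l}, assumed compact so every LP below is bounded.
import numpy as np
from scipy.optimize import linprog, minimize_scalar

def algorithm_min(phi, grad, B, l, x0, tol=1e-8, max_iter=500):
    x = np.asarray(x0, float)
    free = [(None, None)] * len(x)                  # allow negative coordinates
    for _ in range(max_iter):
        xbar = linprog(grad(x), A_ub=B, b_ub=l, bounds=free).x   # Step 2
        eta = grad(x) @ (xbar - x)                  # Step 3: eta_k <= 0 always
        if eta >= -tol:                             # Step 4: stationary point
            break
        # Step 5: line search for alpha_k on the segment [x, xbar]
        a = minimize_scalar(lambda s: phi(x + s * (xbar - x)),
                            bounds=(0.0, 1.0), method='bounded').x
        x = x + a * (xbar - x)                      # Step 6: next iterate
    return x
```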

4. Fractional Optimization Test Problems

In order to implement the proposed Algorithm MAX numerically, we consider the problem of the following type:

\max_{x\in D} \varphi(x) = \frac{\langle Ax, x\rangle + \langle b, x\rangle + k}{\langle Cx, x\rangle + \langle d, x\rangle + e},

where D = {x ∈ R^n | α_i ≤ x_i ≤ β_i, i = 1, 2, ..., n}, k = 4000, e = 6000000. Elements of the approximation set are defined as

y^i = z^k + αh^i, i = 1, 2, ..., m,

where z^k is a local solution found by the conditional gradient method starting from an arbitrary feasible point x^k ∈ D, and the vectors h^i are generated randomly. For h = h^i, the parameter α is found from the equation φ(y^i) = φ(z^k), which for the quadratic case gives

\alpha = \frac{\langle (\varphi(z^k)C - A)h,\, z^k\rangle + \langle (\varphi(z^k)C - A)z^k + \varphi(z^k)d - b,\, h\rangle}{\langle (A - \varphi(z^k)C)h,\, h\rangle}

(with A and C replaced by the symmetric parts of the corresponding quadratic forms if necessary).
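The formula for α can be verified directly: for t = φ(z^k), the point y = z^k + αh must satisfy φ(y) = φ(z^k). A small numeric check on random data (the diagonal instance is an illustrative assumption):

```python
# Spot-check of the level-surface step: y = z + alpha*h with the alpha above
# satisfies phi(y) = phi(z). Random diagonal A, C are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = np.diag(rng.uniform(1.0, 2.0, n))      # A > 0: convex numerator
C = -np.diag(rng.uniform(1.0, 2.0, n))     # C < 0: concave denominator
b, d = rng.random(n), rng.random(n)
k, e = 4000.0, 6000000.0                   # constants used in Section 4

phi = lambda x: (x @ A @ x + b @ x + k) / (x @ C @ x + d @ x + e)

z = rng.uniform(1.0, 3.0, n)               # plays the role of z^k
h = rng.standard_normal(n)                 # random direction h^i
t = phi(z)
alpha = (((t*C - A) @ h) @ z + ((t*C - A) @ z + t*d - b) @ h) \
        / (((A - t*C) @ h) @ h)
y = z + alpha * h
print(abs(phi(y) - phi(z)))                # ~ 0 up to rounding
```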

The following problems have been solved numerically in MATLAB by the proposed Algorithm MAX, and in all cases the global solutions were found.

Consider problem (3.1) for the quadratic case:

\min_{x\in D} \varphi(x) = \frac{f(x)}{g(x)} = \frac{\langle Ax, x\rangle + \langle b, x\rangle}{\langle Cx, x\rangle + \langle d, x\rangle + e},

where D = {x ∈ R^n | Bx ≤ l} is compact, A and C are n × n matrices with A positive definite and C negative definite, and f(x) > 0, g(x) > 0 on D.

Problem (3.1) with the following data has been solved numerically in MATLAB by Algorithm MIN.

Constraints of problems (2.1) and (3.1) are given as follows.

Problem 1.

A = ( 1 −2 … ; −1 3 0 ; −1 1 −1 ), C = ( 2 1 3 ; 0 2 1 ; … ), b = (3, 2, …)ᵀ, d = (2, 1, …)ᵀ;

D1 = {1 ≤ x1 ≤ 3, 2 ≤ x2 ≤ 5, 1 ≤ x3 ≤ 4}; D2 = {x ∈ R^3 | Qx ≤ q}, where Q = ( … ) and q = (3, 2, 1)ᵀ.

Problem 2.

A = ( 1 1 1 1 ; 1 2 1 1 ; 1 1 3 1 ; 1 1 1 4 ),
C = ( −2 1 1 2 ; 1 −1 −1 −2 ; 1 1 3 1 ; 2 −2 1 −4 ),
b = (2, −2, 3, −4)ᵀ, d = (1, −2, 3, −4)ᵀ;

D1 = {−2 ≤ x1 ≤ 2, −1 ≤ x2 ≤ 4, −1 ≤ x3 ≤ 5, −3 ≤ x4 ≤ 1}; D2 = {x ∈ R^4 | Qx ≤ q}, where

Q = ( 4 3 2 1 ; 0 3 2 1 ; 0 0 2 1 ; 0 0 0 1 ), q = (4, 3, 2, 1)ᵀ.

Problem 3.

A = ( n n−1 n−2 … 2 1 ; n−1 n n−1 … 3 2 ; n−2 n−1 n … 4 3 ; … ; 1 2 3 … n−1 n ),

i.e. A_ij = n − |i − j|;

C = ( −1 −1 −1 … −1 ; −1 −2 −1 … −1 ; −1 −1 −3 … −1 ; … ; −1 −1 −1 … −n ),

i.e. C_ii = −i and C_ij = −1 for i ≠ j;

b = (n, n−1, n−2, ..., 3, 2, 1), d = (1, 2, 3, ..., n−2, n−1, n);

D1 = {x ∈ R^n | 1 ≤ x_i ≤ 10, i = 1, 2, ..., n}; D2 = {x ∈ R^n | Qx ≤ q}, where

Q = ( 1 1 1 … 1 1 ; 0 1 1 … 1 1 ; 0 0 1 … 1 1 ; … ; 0 0 0 … 0 1 ), q = (n, n−1, ..., 2, 1)ᵀ.
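For reference, the Problem 3 data can be assembled for any n as follows; this sketch assumes the structural reading stated above (A_ij = n − |i − j|, C_ii = −i with −1 off the diagonal, Q upper triangular of ones):

```python
# Sketch: assemble the Problem 3 data for a given n, under the structure
# stated above (this reading of the matrices is an assumption).
import numpy as np

def problem3_data(n):
    i = np.arange(1, n + 1)
    A = (n - np.abs(i[:, None] - i[None, :])).astype(float)  # A_ij = n - |i-j|
    C = -np.ones((n, n)) - np.diag(i - 1.0)                  # C_ii = -i, else -1
    b = i[::-1].astype(float)                                # (n, n-1, ..., 1)
    d = i.astype(float)                                      # (1, 2, ..., n)
    Q = np.triu(np.ones((n, n)))                             # upper-triangular 1s
    q = i[::-1].astype(float)                                # (n, n-1, ..., 1)
    return A, C, b, d, Q, q
```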

                          Algorithm MAX                   Algorithm MIN
Problem        Constr.  Initial   Global    Time      Initial    Global      Time
                        value     value     (sec)     value      value       (sec)
Problem 1      D1       0.4166    0.6566    0.063     0.4166     0.4167      0.0047
               D2       0.6684    0.6810    0.064     0.0667     0.0666      0.058
Problem 2      D1       0.3500    3.6667    0.076     0.3500     0.1433      0.0773
               D2       0.6713    0.7155    0.071     0.6713     0.6686      0.356
Problem 3,     D1       0.0036    1.4800    1.037     0.0141     2.4224e-20  0.882
n = 50         D2       0.0141    0.0178    1.297     0.0808     0.0667      1.205
Problem 3,     D1       0.0282    14.6461   6.259     0.1121     9.5365e-18  2.220
n = 100        D2       0.1121    0.1649    6.571     0.1789     0.0667      5.951
Problem 3,     D1       0.2241    168.6475  16.131    0.8982     1.6110e-5   50.5362
n = 200        D2       0.8982    1.3340    57.683    0.9653     0.7788      66.114
Problem 3,     D1       3.5008    916.4216  147.373   14.5133    3.9996      1482.09
n = 500        D2       14.513    21.6828   1854.412  14.5829    7.1792      2217.02
Problem 3,     D1       28.4096   734.3228  982.559   133.4068   50.5366     3492.972
n = 1000       D2       133.4068  199.7074  7494.645  133.4868   100.2541    6897.254

References

1. Bector C.R. Duality in Nonlinear Fractional Programming. Zeitschrift für Operations Research, 1973, vol. 17, pp. 183-193.

2. Bertsekas D.P. Nonlinear Programming. Belmont, Athena Scientific, 1999.

3. Dinkelbach W. On Nonlinear Fractional Programming. Management Science, 1967, vol. 13, no. 7, pp. 492-498.

4. Enkhbat R. Quasiconvex Programming and its Applications. Germany, Lambert Publisher, 2009.

5. Hadjisavvas N., Komlosi S., Schaible S. (eds.). Handbook of Generalized Convexity and Generalized Monotonicity. Berlin, Springer, 2005.

6. Horst R., Pardalos P.M., Thoai N.V. Introduction to Global Optimization. Netherlands, Kluwer Academic Publishers, 1995.

7. Katzner D.W. The Walrasian Vision of the Microeconomy. The University of Michigan Press, 1994.

8. Pardalos P.M., Phillips A.T. Global Optimization of Fractional Programs. Journal of Global Optimization, 1991, vol. 1, pp. 173-182.

9. Schaible S. Fractional Programming II: On Dinkelbach's Algorithm. Management Science, 1976, vol. 22, no. 8, pp. 868-873.

10. Strekalovsky A.S. Elements of Nonconvex Optimization (in Russian). Novosibirsk, Nauka, 2003.

Rentsen Enkhbat, Dr. Sc., Professor, Director of Institute of Mathematics, National University of Mongolia, Baga toiruu 4, Sukhbaatar district,

Ulaanbaatar, Mongolia, tel.: 976-99278403 (e-mail: renkhbat46@yahoo.com)

Tamjav Bayartugs, lecturer, University of Science and Technology, Baga Toiruu 34, Sukhbaatar District, Ulaanbaatar, Mongolia (e-mail: bayart1969@yahoo.com)
