Scientific article (Mathematics). Journal: Sciences of Europe.

PHYSICS AND MATHEMATICS


INEQUALITY AND OPTIMIZATION

Danilenko E.

doctor of technical sciences, professor, professor of the department of applied mathematics and information technologies

Odessa National Polytechnic University


ABSTRACT

Since ancient times, mathematics has developed the theory of inequalities and the theory of optimization independently, although at first glance they have much in common. As an example, the classical Cauchy inequality and Jensen's inequality are extended to the case when the differences between the variables are bounded. The connection of these inequalities with classical geometric programming and convex separable programming is shown, and their interpretations are given.


Keywords: Cauchy inequality, Jensen's inequality, restrictions, geometric programming, separable programming.

1. Foreword

Since ancient times, mathematics has developed the theory of inequalities and the theory of optimization independently, although at first glance they have much in common. As an example, the classical Cauchy inequality and Jensen's inequality are extended to the case when the differences between the variables are bounded [1-5]. The connection of these inequalities with classical geometric programming and convex separable programming is shown, and their interpretations are given [6, 9]. The same can be done with the inequalities of Bernoulli, Young, Hölder, Minkowski and others. All these results can be generalized to the case of series and integrals. Particularly interesting is the case of optimal control.

2. Cauchy inequality and geometric programming

Let us start with the classical Cauchy inequality, according to which the geometric mean does not exceed the arithmetic mean; it has found wide application in various branches of mathematics [see, for example, 1-4]. We introduce some restrictions. Let $x = (x_1, x_2, \dots, x_n) \in R^n$, $\delta x = (x_1, x_2 - x_1, \dots, x_n - x_{n-1}) \in R^n$, $\sigma x = (x_1, x_1 + x_2, \dots, x_1 + \dots + x_{n-1} + x_n) \in R^n$. If $b \in R^1$, then by definition $b + x = (b + x_1, \dots, b + x_n) \in R^n$. For any $y = (y_1, y_2, \dots, y_n) \in R^n$ the inequality $x \ge y$ means that $x - y \in R^n_+$. The arithmetic mean of the coordinates of $x$ is denoted by $S(x) = n^{-1}(x_1 + x_2 + \dots + x_n)$, and the geometric mean by

$G(x) = \sqrt[n]{x_1 x_2 \dots x_n}, \quad x \ge 0.$
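The operators $\delta$, $\sigma$ and the means $S$, $G$ just introduced can be written as a minimal Python sketch (illustrative code, not part of the article); note that $\sigma$ and $\delta$ are mutually inverse, which is used below.

```python
# delta(x) forms first differences, sigma(x) forms partial sums,
# S is the arithmetic mean and G the geometric mean of the coordinates.
import math

def delta(x):
    """delta x = (x1, x2 - x1, ..., xn - x_{n-1})."""
    return [x[0]] + [x[k] - x[k - 1] for k in range(1, len(x))]

def sigma(x):
    """sigma x = (x1, x1 + x2, ..., x1 + ... + xn)."""
    out, total = [], 0.0
    for xk in x:
        total += xk
        out.append(total)
    return out

def S(x):
    """Arithmetic mean of the coordinates."""
    return sum(x) / len(x)

def G(x):
    """Geometric mean of the coordinates (x >= 0)."""
    return math.prod(x) ** (1.0 / len(x))

# sigma and delta are mutually inverse: sigma(delta(x)) == x.
x = [1.0, 2.5, 4.0]
assert all(abs(a - b) < 1e-12 for a, b in zip(sigma(delta(x)), x))
```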

Theorem 1. Let $a > 0$ and $\varepsilon \in R^n_+$ be fixed so that $S(\sigma\varepsilon) \le a$. If $L_a(\varepsilon) = \{x : x \in R^n_+,\ \delta x \ge \varepsilon,\ S(x) = a\}$, then

$1^\circ.\ x^* = \arg\max_{x \in L_a(\varepsilon)} G(x) = a - S(\sigma\varepsilon) + \sigma\varepsilon.$

$2^\circ.\ x_* = \arg\min_{x \in L_a(\varepsilon)} G(x) = n(a - S(\sigma\varepsilon))e_n + \sigma\varepsilon$, where $e_n = (0, \dots, 0, 1)$.
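Theorem 1 can be probed numerically. The following Python sketch (illustrative, not the author's code; the sampling scheme and the concrete values of $a$ and $\varepsilon$ are my assumptions) draws random points of $L_a(\varepsilon)$ and checks that their geometric mean always lies between $G$ at the claimed minimizer and maximizer.

```python
import math
import random

def geo_mean(x):
    return math.prod(x) ** (1.0 / len(x))

def theorem1_extremes(a, eps):
    """Return (minimizer, maximizer) of G over L_a(eps) per Theorem 1."""
    n = len(eps)
    sig = [sum(eps[:k + 1]) for k in range(n)]   # sigma(eps): partial sums
    s = sum(sig) / n                             # S(sigma(eps))
    x_max = [a - s + sk for sk in sig]           # 1°: maximizer of G
    x_min = sig[:-1] + [n * (a - s) + sig[-1]]   # 2°: minimizer of G
    return x_min, x_max

def random_feasible(a, eps, rng):
    """A random point of L_a(eps): delta(x) >= eps and S(x) = a."""
    n = len(eps)
    u = [rng.random() for _ in range(n)]         # extra slack in the differences
    mean_sig_eps = sum((n - k) * eps[k] for k in range(n)) / n
    mean_sig_u = sum((n - k) * u[k] for k in range(n)) / n
    scale = rng.random() * (a - mean_sig_eps) / (mean_sig_u + 1e-12)
    d = [eps[k] + scale * u[k] for k in range(n)]
    x = [sum(d[:k + 1]) for k in range(n)]
    shift = a - sum(x) / n                       # non-negative by construction
    return [xk + shift for xk in x]

rng = random.Random(0)
a, eps = 5.0, [0.5, 0.3, 0.2, 0.4]
x_min, x_max = theorem1_extremes(a, eps)
for _ in range(1000):
    g = geo_mean(random_feasible(a, eps, rng))
    assert geo_mean(x_min) - 1e-9 <= g <= geo_mean(x_max) + 1e-9
```

Both closed-form points have mean exactly $a$ and tight difference constraints, which is what makes them the extreme points of the problem.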

Proof. We note that for $n = 1$ the theorem is obvious. Let expression $1^\circ$ be true for $n = m$. This means that if $\varepsilon \in R^m_+$, $S(\sigma\varepsilon) \le a$, then $G(x) \le G(a - S(\sigma\varepsilon) + \sigma\varepsilon)$ for any $x \in L_a(\varepsilon)$, with equality if and only if $x = a - S(\sigma\varepsilon) + \sigma\varepsilon$.

We show that expression $1^\circ$ is true for $n = m + 1$. Let $\eta = (\eta_1, \dots, \eta_{m+1}) \in R^{m+1}_+$, $S(\sigma\eta) \le b$. If $y = (y_1, \dots, y_{m+1}) \in L_b(\eta) \subset R^{m+1}_+$, then $\delta y \ge \eta$, $S(y) = b$, or $y_1 \ge \eta_1$, $y_2 - y_1 \ge \eta_2$, ..., $y_{m+1} - y_m \ge \eta_{m+1}$, $y_1 + \dots + y_{m+1} = (m+1)b$. Therefore $y_k \ge y_1 - \eta_1 + (\eta_1 + \dots + \eta_k)$, $k = 1, \dots, m+1$. Adding these inequalities, we obtain

$(m+1)b = y_1 + \dots + y_{m+1} \ge (m+1)(y_1 - \eta_1) + (m+1)S(\sigma\eta)$, or $y_1 \le b - S(\sigma\eta) + \eta_1$,

which together with the inequality $y_1 \ge \eta_1$ gives

$y_1 \in I = [\eta_1,\ b - S(\sigma\eta) + \eta_1].$ (1)

We fix $y_1$ and let $z = (y_2, \dots, y_{m+1}) \in R^m_+$, $\theta(y_1) = (y_1 + \eta_2, \eta_3, \dots, \eta_{m+1}) \in R^m_+$, $v = (\eta_1 + \eta_2,\ \eta_1 + \eta_2 + \eta_3,\ \dots,\ \eta_1 + \eta_2 + \dots + \eta_{m+1}) \in R^m_+$. Then $\delta z \ge \theta(y_1)$,

$S(z) = m^{-1}((m+1)b - y_1)$

and

$S(\sigma\theta(y_1)) = m^{-1}\big((y_1 + \eta_2) + [(y_1 + \eta_2) + \eta_3] + \dots + [(y_1 + \eta_2) + \eta_3 + \dots + \eta_{m+1}]\big) = y_1 + m^{-1}(m+1)(S(\sigma\eta) - \eta_1).$

Since $y_1 \le b - S(\sigma\eta) + \eta_1$, a direct substitution shows that $S(\sigma\theta(y_1)) \le S(z)$, hence

$z \in L_{m^{-1}((m+1)b - y_1)}(\theta(y_1)).$

Since, by assumption, the theorem is true for $n = m$,

$G(z) \le G\big(m^{-1}((m+1)b - y_1) - S(\sigma\theta(y_1)) + \sigma\theta(y_1)\big)$ (2)

and equality takes place only when

$z = m^{-1}((m+1)b - y_1) - S(\sigma\theta(y_1)) + \sigma\theta(y_1).$ (3)

Obviously, $G^{m+1}(y) = y_1 G^m(z)$. Therefore (see (2))

$G^{m+1}(y) \le y_1 G^m\big(m^{-1}((m+1)b - y_1) - S(\sigma\theta(y_1)) + \sigma\theta(y_1)\big)$ (4)

and the inequality becomes an equality only under equality (3). Denote the right-hand side of (4) by $P(y_1)$. Since $\sigma\theta(y_1) = y_1 - \eta_1 + v$ and $S(\sigma\theta(y_1)) = y_1 + m^{-1}(m+1)(S(\sigma\eta) - \eta_1)$, the $k$-th coordinate of the argument in (4) equals

$m^{-1}\big(\eta_1 + m v_k + (m+1)(b - S(\sigma\eta)) - y_1\big),$

so that

$P(y_1) = m^{-m} y_1 \prod_{k=1}^{m}\big[\eta_1 + m v_k + (m+1)(b - S(\sigma\eta)) - y_1\big].$

Recall that $y_1 \in I$ (see (1)). Since $y_1 \le b - S(\sigma\eta) + \eta_1 \le b - S(\sigma\eta) + v_k$, each factor satisfies

$\eta_1 + m v_k + (m+1)(b - S(\sigma\eta)) - y_1 \ge m(v_k + b - S(\sigma\eta)) \ge m y_1,$

and since $P(y_1) > 0$,

$\frac{P'(y_1)}{P(y_1)} = \frac{1}{y_1} - \sum_{k=1}^{m}\big[\eta_1 + m v_k + (m+1)(b - S(\sigma\eta)) - y_1\big]^{-1} \ge \frac{1}{y_1} - m \cdot \frac{1}{m y_1} = 0,$

so $P'(y_1) \ge 0$. Therefore the function $P(y_1)$ is monotonically increasing on $I$. Hence, for all $y_1 \in I$,

$y_1 G^m\big(m^{-1}((m+1)b - y_1) - S(\sigma\theta(y_1)) + \sigma\theta(y_1)\big) \le P\big(b - S(\sigma\eta) + \eta_1\big) = G^{m+1}\big(b - S(\sigma\eta) + \sigma\eta\big),$ (5)

wherein equality in (5) occurs if and only if

$y_1 = b - S(\sigma\eta) + \eta_1.$ (6)

From (4) and (5) it follows that for any $y \in L_b(\eta)$

$G^{m+1}(y) \le G^{m+1}(b - S(\sigma\eta) + \sigma\eta),$

and the inequality becomes an equality under equalities (3) and (6). Let the vector $y^* = (y_1^*, \dots, y_{m+1}^*)$ satisfy these relations. Then

$y_1^* = b - S(\sigma\eta) + \eta_1$ (6*)

and

$z^* = (y_2^*, \dots, y_{m+1}^*) = m^{-1}((m+1)b - y_1^*) - S(\sigma\theta(y_1^*)) + \sigma\theta(y_1^*) = b - S(\sigma\eta) + v,$ (7)

which together with (6*) gives

$y^* = b - S(\sigma\eta) + \sigma\eta.$ (8)

Combining expressions (7) and (8), we have

$\max_{y \in L_b(\eta)} G(y) = G(b - S(\sigma\eta) + \sigma\eta),$

and the maximum is achieved only at the vector $y = y^*$. So expression $1^\circ$ of Theorem 1 is true for $n = m + 1$, which by induction implies its validity for any $n$.

Expression $2^\circ$ of Theorem 1 is proved similarly.

From Theorem 1 there follow particular cases for $n = 2, 3$. We have

$\sqrt{\varepsilon_1(2a - \varepsilon_1)} \le \sqrt{x_1 x_2} \le \sqrt{a^2 - \varepsilon_2^2/4},$
$x_1 \ge \varepsilon_1 > 0,\quad x_2 - x_1 \ge \varepsilon_2 > 0,\quad x_1 + x_2 = 2a;$

$\sqrt[3]{\varepsilon_1(\varepsilon_1 + \varepsilon_2)(3a - 2\varepsilon_1 - \varepsilon_2)} \le \sqrt[3]{x_1 x_2 x_3} \le \sqrt[3]{\big(a - \tfrac{2}{3}\varepsilon_2 - \tfrac{1}{3}\varepsilon_3\big)\big(a + \tfrac{1}{3}\varepsilon_2 - \tfrac{1}{3}\varepsilon_3\big)\big(a + \tfrac{1}{3}\varepsilon_2 + \tfrac{2}{3}\varepsilon_3\big)},$
$x_1 \ge \varepsilon_1 > 0,\quad x_2 - x_1 \ge \varepsilon_2 > 0,\quad x_3 - x_2 \ge \varepsilon_3 > 0,\quad x_1 + x_2 + x_3 = 3a.$

This is an illustration of solving a geometric programming problem [3].
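The $n = 2$ case can be checked directly. The sketch below (hypothetical values of $a$, $\varepsilon_1$, $\varepsilon_2$, chosen only for illustration) sweeps the whole feasible interval for $x_1$ and verifies the two-sided bound on $\sqrt{x_1 x_2}$.

```python
import math

# hypothetical values with eps1 > 0, eps2 > 0 and S(sigma eps) <= a
a, e1, e2 = 3.0, 0.4, 0.6
lo = math.sqrt(e1 * (2 * a - e1))        # attained at x = (e1, 2a - e1)
hi = math.sqrt(a * a - e2 * e2 / 4)      # attained at x = (a - e2/2, a + e2/2)

# sweep all feasible x1 in [e1, a - e2/2]; x2 = 2a - x1 keeps the mean equal to a
for i in range(101):
    x1 = e1 + (a - e2 / 2 - e1) * i / 100
    x2 = 2 * a - x1
    assert lo - 1e-9 <= math.sqrt(x1 * x2) <= hi + 1e-9
```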

3. Jensen's inequality and convex separable programming

Consider Jensen's inequality [1, 3], $n^{-1}\sum_{k=1}^{n}\varphi(x_k) \ge \varphi\big(n^{-1}\sum_{k=1}^{n} x_k\big)$, which is directly related to convex programming [4, 6]. Here $x \in X \subset R^n$, and $\varphi(x)$ is a differentiable convex function on the interval $(c - d, c + d)$. In view of the symmetric dependence of the right- and left-hand sides of Jensen's inequality on $x_1, \dots, x_n$, without loss of generality we can assume $x_1 \le x_2 \le \dots \le x_n$. The set of all such monotone sequences, each member of which belongs to the domain of the function $\varphi(x)$, is denoted by $R_\varphi = \{x\}$; we write $\varphi(x) = (\varphi(x_1), \varphi(x_2), \dots, \varphi(x_n))$. Then, in the above notation, Jensen's inequality can be written as

$S(\varphi(x)) \ge \varphi(S(x)).$

We agree that inequalities between vectors are understood coordinatewise. Then

$\sigma(\delta x) = x$, so $\delta x \ge \varepsilon$ implies $x \ge \sigma\varepsilon$.

To ensure that the region

$L_a(\varepsilon) = \{x \in R_\varphi : \delta x \ge \varepsilon,\ S(x) = a\} \ne \emptyset,$

the condition $a \ge S(\sigma\varepsilon)$ is necessary. In this region we examine the function $\Phi(x) = \sum_{k=1}^{n}\varphi(x_k)$ for its minimum and maximum. It is easily seen that the function $\Phi(x)$ is differentiable and convex (downwards) on the interval $(c - d, c + d)$. Since the objective function and the constraints that describe the feasible region are written as separable functions [4], the problem of finding the extremum of the function $\Phi(x)$ on $L_a(\varepsilon)$ is a problem of convex separable programming [7], whose local extremum coincides with the global extremum. The following fundamental theorem, analogous to Theorem 1, has been proved [6, 7].

Theorem 2. If $\varphi(x)$ is a differentiable and convex function, then

$1^\circ.\ x_* = \arg\min_{x \in L_a(\varepsilon)} \Phi(x) = a - S(\sigma\varepsilon) + \sigma\varepsilon.$

$2^\circ.\ x^* = \arg\max_{x \in L_a(\varepsilon)} \Phi(x) = n(a - S(\sigma\varepsilon))e_n + \sigma\varepsilon$, where $e_n = (0, \dots, 0, 1)$.

Proof. Let us prove expression $1^\circ$. Note that for $n = 1$ it is obvious. Let it be true for $n = m$. This means that if $S(\sigma\varepsilon) \le a$, $\sigma\varepsilon \ge 0$, then for any vector $x \in L_a(\varepsilon) \cap R^m_\varphi$ the inequality $\Phi(x) \ge \Phi(a - S(\sigma\varepsilon) + \sigma\varepsilon)$ holds, and equality holds if and only if $x = a - S(\sigma\varepsilon) + \sigma\varepsilon$.

We verify that expression $1^\circ$ holds for $n = m + 1$. Let $y = (x, x_1, \dots, x_m)$, $\eta = (\varepsilon, \varepsilon_1, \dots, \varepsilon_m)$, $\sigma\eta \ge 0$, $S(\sigma\eta) \le b$. We consider the function $\Phi(y) = \varphi(x) + \sum_{k=1}^{m}\varphi(x_k)$ and the domain $L_b(\eta) = \{y : y \in R_\varphi \subset R^{m+1}_+,\ \delta y \ge \eta,\ S(y) = b\}$. First of all, let us clarify the interval $I = [\min x, \max x]$ of the change of $x$. Because $x \ge \varepsilon$, we have $\min x = \varepsilon$. In order to achieve $\max x$, the numbers $x_1, \dots, x_m$ must take the smallest values $x_1 = x + \varepsilon_1$, $x_2 = x + \varepsilon_1 + \varepsilon_2$, ..., $x_m = x + \varepsilon_1 + \dots + \varepsilon_m$. Supplementing this set by the equality $x = x$ and adding all the equalities, we get $\max x = b - S(\sigma\eta) + \varepsilon$. Then $x \in I = [\varepsilon,\ b - S(\sigma\eta) + \varepsilon]$. Fix $x$ and consider the domain $L_{a(x)}(\varepsilon(x))$, where $a(x) = m^{-1}((m+1)b - x)$, $\varepsilon(x) = (x + \varepsilon_1, \varepsilon_2, \dots, \varepsilon_m)$. We verify that $L_{a(x)}(\varepsilon(x)) \ne \emptyset$. To do this, we verify the validity of the inequality $a(x) \ge S(\sigma\varepsilon(x))$. Substituting into this inequality, instead of $x$, the right-hand end of the interval $I$, we get

$(m+1)b \ge (m+1)(b - S(\sigma\eta) + \varepsilon) + m\varepsilon_1 + (m-1)\varepsilon_2 + \dots + \varepsilon_m,$
$(m+1)b \ge (m+1)b - (m+1)\varepsilon - \big(m\varepsilon_1 + (m-1)\varepsilon_2 + \dots + \varepsilon_m\big) + (m+1)\varepsilon + m\varepsilon_1 + (m-1)\varepsilon_2 + \dots + \varepsilon_m,$
$0 \ge 0.$

According to the induction hypothesis, for fixed $x$ we can assert that for any vector $x(x) \in L_{a(x)}(\varepsilon(x)) \cap R^m_\varphi$ the inequality $\Phi(x(x)) \ge \Phi(a(x) - S(\sigma\varepsilon(x)) + \sigma\varepsilon(x))$ holds, and the equality takes place only if $x(x) = x_*(x) = a(x) - S(\sigma\varepsilon(x)) + \sigma\varepsilon(x)$.

We show that $\psi(x) = \varphi(x) + \Phi(x_*(x))$ is non-increasing on $I$. We find

$\Phi(x_*(x)) = \sum_{k=1}^{m}\varphi\big(m^{-1}((m+1)b - x) + m^{-1}(m+1)(\varepsilon - S(\sigma\eta)) + \varepsilon_1 + \dots + \varepsilon_k\big).$

Then

$\psi'(x) = \varphi'(x) - m^{-1}\sum_{k=1}^{m}\varphi'\big(m^{-1}((m+1)b - x) + m^{-1}(m+1)(\varepsilon - S(\sigma\eta)) + \varepsilon_1 + \dots + \varepsilon_k\big) \le 0,$

since $\varphi'$ is non-decreasing and $x$ does not exceed any argument on the right; the latter reduces to $mx \le (m+1)b - x + (m+1)(\varepsilon - S(\sigma\eta))$, that is, $x \le b - S(\sigma\eta) + \varepsilon$ (the last inequality is correct by assumption). Consequently, the function reaches its minimum at the right end of the interval.

The minimum point $y^* \in R_\varphi \subset R^{m+1}_+$ can be written as $y^* = \big(b - S(\sigma\eta) + \varepsilon,\ x_*(b - S(\sigma\eta) + \varepsilon)\big)$. It remains to verify that $y^* = b - S(\sigma\eta) + \sigma\eta$. Let us verify this equality for at least the second coordinate. Taking $a(x)$ and $\sigma\varepsilon(x)$ into account, we obtain

$a(x) - S(\sigma\varepsilon(x)) + x + \varepsilon_1 = m^{-1}\big((m+1)b - b + S(\sigma\eta) - \varepsilon + (m+1)\varepsilon - (m+1)S(\sigma\eta) + m\varepsilon_1\big) = b - S(\sigma\eta) + \varepsilon + \varepsilon_1.$

Thus, expression $1^\circ$ is correct for $n = m + 1$, whence, by the method of mathematical induction, we deduce its correctness for any $n$.

Expression $2^\circ$ of Theorem 2 is proved analogously. (One verifies directly that $S(x^*) = a$.) On the interval $x \in I$ we investigate the function

$\varphi(x) + \sum_{k=1}^{m-1}\varphi(x + \varepsilon_1 + \dots + \varepsilon_k) + \varphi\big((m+1)(b - x - S(\sigma\eta) + \varepsilon) + x + \varepsilon_1 + \dots + \varepsilon_m\big).$

The derivative of this function,

$\varphi'(x) + \sum_{k=1}^{m-1}\varphi'(x + \varepsilon_1 + \dots + \varepsilon_k) - m\varphi'\big((m+1)(b - x - S(\sigma\eta) + \varepsilon) + x + \varepsilon_1 + \dots + \varepsilon_m\big),$

is non-positive on $I$, since $x \le b - S(\sigma\eta) + \varepsilon = \max x$, so that $b - x - S(\sigma\eta) + \varepsilon \ge 0$ and the last argument dominates all the others. Thus, the function studied reaches its maximum at the left end $\min x = \varepsilon \ge 0$.

We substitute $x = \varepsilon$ into the point $\big(x,\ m(a(x) - S(\sigma\varepsilon(x)))e_m + \sigma\varepsilon(x)\big)$ and show that the coordinates of this point are equal to the coordinates of the vector $(m+1)(b - S(\sigma\eta))e_{m+1} + \sigma\eta$ from the induction hypothesis:

$\big(\varepsilon,\ m(a(\varepsilon) - S(\sigma\varepsilon(\varepsilon)))e_m + \sigma\varepsilon(\varepsilon)\big) = \big(\varepsilon,\ \varepsilon + \varepsilon_1,\ \dots,\ \varepsilon + \varepsilon_1 + \dots + \varepsilon_{m-1},\ (m+1)(b - S(\sigma\eta)) + \varepsilon + \varepsilon_1 + \dots + \varepsilon_m\big) = (m+1)(b - S(\sigma\eta))e_{m+1} + \sigma\eta.$

The theorem is proved.
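Theorem 2 can also be checked numerically. In the sketch below (illustrative code; the choice $\varphi(t) = e^t$ and the values of $a$, $\varepsilon$ are my assumptions, the theorem only requires $\varphi$ differentiable and convex), random feasible points always give a separable sum $\Phi$ between its value at the equalized point $1^\circ$ and at the corner point $2^\circ$.

```python
import math
import random

def Phi(x):
    # phi(t) = exp(t): an arbitrary differentiable convex choice (assumption).
    return sum(math.exp(t) for t in x)

def extremes(a, eps):
    """Return (minimizer, maximizer) of Phi over L_a(eps) per Theorem 2."""
    n = len(eps)
    sig = [sum(eps[:k + 1]) for k in range(n)]    # sigma(eps)
    s = sum(sig) / n                              # S(sigma(eps))
    x_min = [a - s + sk for sk in sig]            # 1°: minimizer of Phi
    x_max = sig[:-1] + [n * (a - s) + sig[-1]]    # 2°: maximizer of Phi
    return x_min, x_max

def random_feasible(a, eps, rng):
    """A random point of L_a(eps): delta(x) >= eps and S(x) = a."""
    n = len(eps)
    u = [rng.random() for _ in range(n)]
    mean_sig_eps = sum((n - k) * eps[k] for k in range(n)) / n
    mean_sig_u = sum((n - k) * u[k] for k in range(n)) / n
    scale = rng.random() * (a - mean_sig_eps) / (mean_sig_u + 1e-12)
    d = [eps[k] + scale * u[k] for k in range(n)]
    x = [sum(d[:k + 1]) for k in range(n)]
    shift = a - sum(x) / n
    return [xk + shift for xk in x]

rng = random.Random(0)
a, eps = 2.0, [0.1, 0.2, 0.3]
x_min, x_max = extremes(a, eps)
for _ in range(1000):
    val = Phi(random_feasible(a, eps, rng))
    assert Phi(x_min) - 1e-9 <= val <= Phi(x_max) + 1e-9
```

Note the symmetry with Theorem 1: for a convex $\varphi$ the equalized point minimizes the separable sum, whereas for the geometric mean the same point is the maximizer.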

We give an interpretation of the maximization problem in Theorem 2. In economics, the variable $x$ denotes successive capital investments in the planning period, $\varepsilon$ the lower limits on capital investments, $a$ the arithmetic mean of capital investments for the entire period of time, and $\varphi(x)$ the efficiency function of capital investments. Then the task is to find the distribution of capital investments $x^*$ of maximum efficiency. In biology, a certain population is considered, and the variable $x$ refers to successive costs of its maintenance. The goal is the maximum growth of the population, given a growth function $\varphi(x)$, the cost constraints $\varepsilon$ and a given average cost for the whole period, and the finding of the optimal cost $x^*$.

We write the Kuhn-Tucker conditions for the maximum problem in Theorem 2, with the constraint vector $b(x) = [\delta x - \varepsilon,\ a - S(x),\ S(x) - a]$ and multipliers $\lambda^* = (\lambda^*_1, \dots, \lambda^*_{n+2})$. Then

$x^* \in L_a(\varepsilon) \cap R^n_+;$

$\dfrac{d\Phi(x^*)}{dx} \le (\lambda^*)^T \dfrac{db(x^*)}{dx};$

$\Big(\dfrac{d\Phi(x^*)}{dx} - (\lambda^*)^T \dfrac{db(x^*)}{dx}\Big)x^* = 0;$

$(\lambda^*)^T b(x^*) = 0,\quad b(x^*)^T = [\delta x^* - \varepsilon,\ a - S(x^*),\ S(x^*) - a].$

Let us verify whether expression $2^\circ$ of Theorem 2 satisfies these relations. The first condition is obvious. For the third condition to hold, set

$\varphi'(x^*_k) + \lambda^*_k - \lambda^*_{k+1} - \lambda^*_{n+1} + \lambda^*_{n+2} = 0,\quad k = 1, \dots, n-1;\qquad \varphi'(x^*_n) + \lambda^*_n - \lambda^*_{n+1} + \lambda^*_{n+2} = 0,$

from which we obtain

$\lambda^*_1 = n(\lambda^*_{n+1} - \lambda^*_{n+2}) - \sum_{k=1}^{n}\varphi'(x^*_k),$
$\lambda^*_2 = (n-1)(\lambda^*_{n+1} - \lambda^*_{n+2}) - \sum_{k=2}^{n}\varphi'(x^*_k),$
$\dots$
$\lambda^*_{n-1} = 2(\lambda^*_{n+1} - \lambda^*_{n+2}) - \varphi'(x^*_{n-1}) - \varphi'(x^*_n),$
$\lambda^*_n = (\lambda^*_{n+1} - \lambda^*_{n+2}) - \varphi'(x^*_n).$

From these expressions we see that the values $\lambda^*_1, \lambda^*_2, \dots, \lambda^*_n$ are determined by $\lambda^*_{n+1}$, $\lambda^*_{n+2}$ and by the form of the function $\varphi(x)$.

The second Kuhn-Tucker condition takes the form

$\lambda^*_1 \le n(\lambda^*_{n+1} - \lambda^*_{n+2}) - \sum_{k=1}^{n}\varphi'(x^*_k),$
$\lambda^*_2 \le (n-1)(\lambda^*_{n+1} - \lambda^*_{n+2}) - \sum_{k=2}^{n}\varphi'(x^*_k),$
$\dots$
$\lambda^*_{n-1} \le 2(\lambda^*_{n+1} - \lambda^*_{n+2}) - \varphi'(x^*_{n-1}) - \varphi'(x^*_n),$
$\lambda^*_n \le (\lambda^*_{n+1} - \lambda^*_{n+2}) - \varphi'(x^*_n).$

From a comparison of the second and third conditions we see that the set defined by the third condition is a subset of the set defined by the second condition. Therefore, for any numbers $x^*_k$, $k = 1, \dots, n$, and any function $\varphi(x)$, one can always choose $\lambda^*_i$, $i = 1, \dots, n+2$, such that the second and third conditions are fulfilled.

Substituting expression $2^\circ$ into the fourth condition, we obtain

$(\lambda^*)^T\big[\,n(a - S(\sigma\varepsilon))e_n,\ 0,\ 0\,\big] = 0,\quad \lambda^*_n\, n(a - S(\sigma\varepsilon)) = 0.$

If we take $\lambda^*_n = 0$, this does not affect the fulfilment of the second and third conditions.

The Kuhn-Tucker conditions for expression $1^\circ$ of Theorem 2 are checked similarly. For the maximum problem in Theorem 2 we write the approximating problem in the $\lambda$-form [6, 7], whose optimal solution is obtained as the solution of a linear programming problem (without additional restrictions on the choice of the basis):

$\lambda^* = \arg\max_{\lambda} \sum_{i=0}^{r}\lambda_i \sum_{k=1}^{n}\varphi(x_{ki}),$

subject to

$\sum_{i=0}^{r}\lambda_i x_{1i} \ge \varepsilon_1,$
$\sum_{i=0}^{r}\lambda_i (x_{2i} - x_{1i}) \ge \varepsilon_2,$
$\dots$
$\sum_{i=0}^{r}\lambda_i (x_{ni} - x_{(n-1)i}) \ge \varepsilon_n,$
$n^{-1}\sum_{k=1}^{n}\sum_{i=0}^{r}\lambda_i x_{ki} = a,$
$\sum_{i=0}^{r}\lambda_i = 1,\quad \lambda_i \ge 0,\quad i = 0, \dots, r,$

where $x_{ki}$, $k = 1, \dots, n$, $i = 0, \dots, r$, are grid points approximating $L_a(\varepsilon) \cap R^n_+$. The coordinates of the approximate solution of the original problem are calculated according to the formula

$x_k \approx \sum_{i=0}^{r}\lambda^*_i x_{ki},\quad k = 1, \dots, n.$

Approximate solutions obtained by solving the linear programming problems were compared with the analytical solutions (Theorem 2) for various and sufficiently large numbers $n$ and $r$; the results are in good agreement. It should be noted that the machine-time cost of the approximate calculation is 1,000-10,000 times higher than that of the analytical one [8, 9].
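The idea behind the $\lambda$-form can be illustrated with a much-simplified sketch (my assumptions: $\varphi(t) = t^2$, and the grid vectors are sampled inside $L_a(\varepsilon)$ itself, so all LP constraints are satisfied automatically and the LP over the simplex reduces to picking the best grid vector; the article's actual LP uses a fixed grid and the constraint list above). The LP value then approximates the analytic maximum of Theorem 2 from below.

```python
import random

def phi(t):
    return t * t    # hypothetical convex phi; the theorem leaves phi general

def random_feasible(a, eps, rng):
    """A random point of L_a(eps): delta(x) >= eps and S(x) = a."""
    n = len(eps)
    u = [rng.random() for _ in range(n)]
    mean_sig_eps = sum((n - k) * eps[k] for k in range(n)) / n
    mean_sig_u = sum((n - k) * u[k] for k in range(n)) / n
    scale = rng.random() * (a - mean_sig_eps) / (mean_sig_u + 1e-12)
    d = [eps[k] + scale * u[k] for k in range(n)]
    x = [sum(d[:k + 1]) for k in range(n)]
    shift = a - sum(x) / n
    return [xk + shift for xk in x]

rng = random.Random(1)
a, eps = 2.0, [0.1, 0.2, 0.3]
grid = [random_feasible(a, eps, rng) for _ in range(200)]   # r + 1 grid vectors

# Every grid vector satisfies the (linear) constraints, so every convex
# combination does too; a linear objective over the simplex is maximized
# at a vertex, i.e. the LP just selects the best grid vector.
lp_value = max(sum(phi(t) for t in x) for x in grid)

n = len(eps)
sig = [sum(eps[:k + 1]) for k in range(n)]
s = sum(sig) / n
x_star = sig[:-1] + [n * (a - s) + sig[-1]]     # analytic maximizer (2°)
true_value = sum(phi(t) for t in x_star)
assert lp_value <= true_value + 1e-9            # grid approximation from below
```

As the grid is refined (larger $r$), `lp_value` approaches `true_value`, at the machine-time cost discussed above.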

All these results can be generalized to the case of series and integrals. Particularly interesting is the case of optimal control.

References

1. Hardy G., Littlewood J., Pólya G. Inequalities. Moscow, 1948. 456 p. (Russian translation; 3rd ed., 2008).
2. Beckenbach E., Bellman R. Inequalities. Moscow, 1965. 276 p. (in Russian).
3. Duffin R., Peterson E., Zener C. Geometric Programming. Moscow, 1972. 318 p. (in Russian).
4. Hadley G. Nonlinear and Dynamic Programming. Moscow, 1967. 506 p. (in Russian).
5. Danilenko E.L., Ezhov I.I. The Cauchy inequality under restrictions on the variables. Izvestiya Vuzov. Matematika, 1982, no. 1, pp. 6-9. (in Russian).
6. Danilenko E.L., Ezhov I.I. A generalization of Jensen's inequality. Issledovanie Operatsiy i ASU, iss. 17. Kiev: Vishcha Shkola, 1981, pp. 111-120. (in Russian).
7. Danilenko E.L. On a problem of convex separable programming. Vychislitel'naya i Prikladnaya Matematika, iss. 48. Kiev: Vishcha Shkola, 1982, pp. 128-133. (in Russian).
8. Danilenko E.L. Some classical inequalities under restrictions. In: Proceedings of the VIII International Conference "Development of Science in the XXI Century". Kharkov: SRC "Znanie", 2015, pp. 5-8. (in Russian).
9. Danilenko E.L. The classical Cauchy and Jensen inequalities under restrictions. In: Proceedings of the International Conference "Science in an Epoch of Disbalances". Kiev: Center for Scientific Publications "Veles", 2016, pp. 5-8. (in Russian).
10. Danilenko E.L. Some classical inequalities under restrictions on variables. East European Scientific Journal, MATEMATYKA-FIZYKA, 2016, vol. 6, no. 4(8), pp. 132-135.
