Scientific article: "Solving generalized semi-infinite optimization problems by an algorithmic approach based on linear programming methods"



Abstract of a scientific article on mathematics; authors — St. Pickl, G.-W. Weber

The generalized semi-infinite optimization problem arises in numerous engineering applications and in problems of mechanics. In the present work this problem is studied in the following formulation.



Вычислительные технологии (Computational Technologies)

Vol. 5, No. 3, 2000

AN ALGORITHMIC APPROACH BY LINEAR PROGRAMMING PROBLEMS IN GENERALIZED SEMI-INFINITE OPTIMIZATION*

St. Pickl, G.-W. Weber
Darmstadt University of Technology, Department of Mathematics, Germany
e-mail: [email protected], [email protected]

The generalized semi-infinite optimization problem arises in numerous engineering applications and in problems of mechanics. In the present work this problem is studied in the following formulation:

Minimize f(x) on M_SI[h,g], where M_SI[h,g] := { x ∈ ℝⁿ | h_i(x) = 0 (i ∈ I), g(x,y) ≥ 0 (y ∈ Y(x)) }.

Here I is finite, while the possibly infinite set Y is given by a finite collection of equalities u_k and inequalities v_ℓ. In addition, the set M_SI is assumed to be compact, and the union of the sets Y(x) can be described by a finite number of constraints. Under these conditions, in a neighbourhood of each element of M_SI the problem P_SI can be approximated to any accuracy by a linear optimization problem P_{F,lin}. Such linear approximations with a finite number of constraints generate an iterative sequence containing a subsequence which converges to a solution of the global problem P_SI. In the future, an algorithm may be developed on the basis of this procedure.

1. Introduction

In recent years, various problems from engineering and mathematics have made generalized semi-infinite optimization become an interesting and fruitful field of research. For example, motivating problems of the following kinds may, under suitable assumptions, be stated as generalized semi-infinite (GSI) optimization problems:

— optimizing the layout of a special assembly line (see [14, 17]);

— maneuverability of a robot (see [2, 11, 15]);

— time minimal heating or cooling of a ball of some homogeneous material (time optimal control; see [15, 18]);

— reverse Chebychev approximation (see [7, 11, 15]);


*The authors are responsible for possible misprints and the quality of translation. © St. Pickl, G.-W. Weber, 2000.

— structure and stability in optimal control of an ordinary differential equation (see [28]).

Now, our GSI problems have the following form:

P_SI(f,h,g,u,v):  Minimize f(x) on M_SI[h,g], where

M_SI[h,g] := { x ∈ ℝⁿ | h_i(x) = 0 (i ∈ I), g(x,y) ≥ 0 (y ∈ Y(x)) }.

The semi-infinite character comes from the perhaps infinite number of elements of Y = Y(x), while the generalized character is due to the x-dependence of Y(x) (x ∈ ℝⁿ). These latter index sets are supposed to be feasible sets in the sense of finite (F) optimization, i.e., they are defined by finitely many inequality constraints, besides finitely many equality constraints:

Y(x) = M_F[u(x,·), v(x,·)] := { y ∈ ℝ^q | u_k(x,y) = 0 (k ∈ K), v_ℓ(x,y) ≥ 0 (ℓ ∈ L) }   (x ∈ ℝⁿ).

Let h = (h_i)_{i∈I}, u = (u_k)_{k∈K} and v = (v_ℓ)_{ℓ∈L} comprise the component functions h_i : ℝⁿ → ℝ, i ∈ I := {1,...,m}, u_k : ℝⁿ × ℝ^q → ℝ, k ∈ K := {1,...,r}, and v_ℓ : ℝⁿ × ℝ^q → ℝ, ℓ ∈ L := {1,...,s}, respectively. We assume that f : ℝⁿ → ℝ, g : ℝⁿ × ℝ^q → ℝ, h_i (i ∈ I), u_k (k ∈ K) and v_ℓ (ℓ ∈ L) are C¹-functions (continuously differentiable). For each C¹-function, e.g. for f, Df(x) denotes the row vector of the first-order partial derivatives ∂f/∂x_κ(x) (κ ∈ {1,...,n}; x ∈ ℝⁿ), while Dᵀf(x) is the corresponding notation as a column. Let, e.g., D_x g(x,y), D_y g(x,y) analogously comprise the coordinate functions ∂g/∂x_κ(x,y) and ∂g/∂y_ρ(x,y), respectively.

Provided that the so-called Reduction Ansatz holds, meaning some nondegeneracy of the minima of the functions g(x,·)|_{Y(x)}, our problem P_SI(f,h,g,u,v) can locally be represented as a problem of finite (F) optimization (cf. [5, 24, 31]). Hence, under the very strong assumption of that Ansatz (approach), our GSI problem is well understood from both the qualitative and the iterative or numerical viewpoint. In this paper the Reduction Ansatz is not assumed.

For the present more general context, first order necessary or sufficient optimality conditions for a local minimum of P_SI(f,h,g,u,v) were presented in [11, 14, 29]. In the paper [29], two different approaches were followed, each of them having its own assumptions. While the first one leads to a local (or global) problem representation of P_SI(f,h,g,u,v) as an ordinary semi-infinite (OSI) optimization problem P_OSI(f,h,g⁰,u⁰,v⁰), the second approach applies auxiliary GSI optimization problems which are also representable as OSI ones. In the OSI problems, the (index) sets of inequality constraints no longer depend on x. (For the first of these approaches we shall give a brief sketch of that representation below.)

Based on the problem representations and optimality conditions, different iteration procedures are worked out in [30]. For further numerical approaches in generalized semi-infinite optimization we refer to [7], and to the branch-and-bound approach given in [19] (related research is given in [22]). Hereby, we extend our result to cases with one assumption less, generalizing our approach.

The present paper, however, is founded on a third approach, which consists in a problem approximation that will turn out to be very natural. Based on its assumptions, this approach no longer fully needs problem transformations into OSI problems, but is based on a convexification and selection technique with respect to Y(x), and on local linearizations

of functions. This technique leads to approximative finite and linear optimization problems. Finally, we shall be able to formulate and prove a convergence theorem for this new iteration procedure.

Our first basic assumptions will impose conditions on the sets Y(x). Hereby, we concentrate on elements x ∈ U̅⁰, where U⁰ ⊂ ℝⁿ is a bounded open set, U̅⁰ denoting the closure of U⁰, and where M_SI[h,g] ∩ U⁰ ≠ ∅. This set may be a neighbourhood of some given or expected local minimum, i.e., it reflects a local study. At the end of this section we shall explain one further assumption which we make for U⁰, namely on being a manifold with generalized boundary [8], where, moreover, the boundary is piecewise linear and transversally intersecting M_SI[h,g]. (Below, we shall come to these properties in greater detail.) In the special case of a global study, U⁰ could also be a neighbourhood of the whole feasible set M_SI[h,g]. (Later on, we shall also use the notion local with respect to much smaller sets.)

ASSUMPTION A_U⁰ (Boundedness): The set ∪_{x∈U̅⁰} Y(x) is bounded.

Because of the continuity of u, v, and U̅⁰ being compact, it easily follows that this boundedness condition is equivalent to the compactness of ∪_{x∈U̅⁰} Y(x). Hence, Assumption A_U⁰ may be regarded as a compactness assumption.

For each x¹ ∈ M_SI[h,g], x² ∈ ℝⁿ, y ∈ M_F[u(x²,·), v(x²,·)] we denote the corresponding sets of active inequality constraints as follows:

Y₀(x¹) := { y ∈ Y(x¹) | g(x¹,y) = 0 },   (1.1a)

L₀(x²,y) := { ℓ ∈ L | v_ℓ(x²,y) = 0 }.   (1.1b)

DEFINITION 1.1. Let points x ∈ ℝⁿ, y ∈ Y(x) be given. We say that the linear independence constraint qualification, in short: LICQ, holds at y as an element of the feasible set M_F[u(x,·), v(x,·)], if the vectors

D_y u_k(x,y), k ∈ K,   D_y v_ℓ(x,y), ℓ ∈ L₀(x,y)

(considered as a family) are linearly independent.

The linear independence constraint qualification (LICQ) is said to hold for M_F[u(x,·), v(x,·)], if LICQ is fulfilled for all y ∈ Y(x).
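Numerically, LICQ from Definition 1.1 is a rank condition on the family of active gradients. The following minimal Python sketch checks it for a hypothetical index set with one affine equality u₁ and one convex inequality v₁ (the concrete functions are invented for illustration and are not taken from the paper):

```python
import numpy as np

# Hypothetical index set Y(x) = {y in R^2 | u1(x,y) = 0, v1(x,y) >= 0} with
#   u1(x, y) = y1 + y2 - x      (affine in y)
#   v1(x, y) = 1 - y1^2 - y2^2  (concave; its zero level is the unit circle)
# LICQ at y asks: are D_y u_k (all k) and D_y v_l (active l only)
# linearly independent as a family?

def licq_holds(x, y, tol=1e-9):
    grads = [np.array([1.0, 1.0])]          # D_y u1 = (1, 1)
    v1 = 1.0 - y[0]**2 - y[1]**2
    if abs(v1) < tol:                        # v1 active at y
        grads.append(np.array([-2.0 * y[0], -2.0 * y[1]]))  # D_y v1
    G = np.vstack(grads)
    # linear independence <=> full row rank
    return np.linalg.matrix_rank(G, tol=1e-8) == G.shape[0]

# On the unit circle with y1 = y2 both constraints are active and the two
# gradients (1,1) and (-sqrt(2),-sqrt(2)) are parallel: LICQ fails there.
x = np.sqrt(2.0)
y = np.array([np.sqrt(0.5), np.sqrt(0.5)])
print(licq_holds(x, y))   # False
```

At a point where only the equality is active, or where the active gradients are not parallel, the same check returns True.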

ASSUMPTION B_U⁰ (LICQ): LICQ holds for all sets M_F[u(x,·), v(x,·)] (x ∈ U̅⁰).

Now, we may state that the set M_SI[h,g] ∩ U̅⁰, being representable in the sense of OSI, is also compact (cf. also [29, 30]).

In view of our iterative concept with its convergence theorem, the next assumption on linearity and convexity is made without loss of generality. On the one hand, that assumption will simplify the topological considerations of our iteration procedure. On the other hand, in one part of our explanations it will allow some exactness with respect to y, where otherwise a linearization of u_k (k ∈ K), v_ℓ (ℓ ∈ L) would only lead to set approximations.

ASSUMPTION C:

(i) (Affine linearity): There exist functions a_k ∈ (C¹(ℝⁿ,ℝ))^q, b_k ∈ C¹(ℝⁿ,ℝ) (k ∈ K), such that

u_k(x,y) = a_kᵀ(x) y + b_k(x)   (x ∈ ℝⁿ, k ∈ K).

(ii) (Convexity): The functions v_ℓ(x,·) : ℝ^q → ℝ (ℓ ∈ L) are convex.

Hereby, we understand a_kᵀ(x) as the row vector corresponding to the column vector a_k(x) (x ∈ ℝⁿ, k ∈ K).

The Assumptions A_U⁰, B_U⁰ give us the opportunity to linearize, locally and in a smooth (C¹) way, each of the sets Y(x) = M_F[u(x,·), v(x,·)], x ∈ W⁰ (where W⁰ is some small bounded open neighbourhood of U̅⁰), by means of a finite number of local C¹-diffeomorphisms φ_j^x : U_j^x → C_j, j ∈ J := {1,...,s}. These diffeomorphisms locally take the variable y to the new variable z. Hence, Y(x) is a compact manifold with generalized boundary (cf. [8, 13]). Hereby, the parameter x is an element of W⁰ ∩ C_j, where C_j is an open cube (j ∈ J), and we have W⁰ ⊆ ∪_{j∈J} C_j. Moreover, the sets φ_j^x(U_j^x) = C̄_j are also closures of open (q-dimensional) cubes C_j (x ∈ W⁰ ∩ C_j, j ∈ J). In this way, Y(x) becomes replaced by a finite number of closed (relative) cubes Z_j (j ∈ J) lying in the linear subspace {0_r} × ℝ^{q−r} of ℝ^q. These new index sets no longer depend locally on x. This means that with the help of local linearizations we have equivalently expressed our GSI problem P_SI(f,h,g,u,v) as an OSI problem P_OSI(f,h,g⁰,u⁰,v⁰). Here, g⁰ = (g_j⁰)_{j∈J} comes from gluing the locally defined functions g(x, (φ_j^x)⁻¹(z)) with 0 (j ∈ J), using a partition of unity [6, 9]. Later on, we shall specify our choice of the diffeomorphisms φ_j^x (j ∈ J) a bit. For more details we refer to the paper [29] with its special notations.

DEFINITION 1.2. Let a point x ∈ M_SI[h,g] and local C¹-linearizations φ_j^x of Y(x) be given. We say that the extended Mangasarian-Fromovitz constraint qualification, in short: EMFCQ, holds at x, if the following two conditions are satisfied:

EMF1. The vectors Dh_i(x), i ∈ I, (considered as a family) are linearly independent.

EMF2. There exists a vector ζ ∈ ℝⁿ such that

Dh_i(x) ζ = 0 for all i ∈ I,   (1.2a)

D_x g_j⁰(x,z) ζ > 0 for all z ∈ ℝ^q, j ∈ J,   (1.2b)

with (φ_j^x)⁻¹(z) ∈ Y₀(x).

The extended Mangasarian-Fromovitz constraint qualification (EMFCQ) is said to hold for M_SI[h,g] on U̅⁰, if EMFCQ is fulfilled for all x ∈ M_SI[h,g] ∩ U̅⁰. With the help of the chain rule, we see that (1.2b) means

( D_x g(x, (φ_j^x)⁻¹(z)) + D_y g(x, (φ_j^x)⁻¹(z)) D_x y_j(x,z) ) ζ > 0   (1.2b')

for all z ∈ ℝ^q, j ∈ J, with y_j(x,z) := (φ_j^x)⁻¹(z) ∈ Y₀(x).

For more information on EMFCQ and its versions we refer to [11, 12, 29, 30].

After our assumptions on the (feasible) sets Y(x) on the "lower stage", we now make the following assumption on the feasible set M_SI[h,g] on the "upper stage":

ASSUMPTION D_U⁰ (EMFCQ): EMFCQ holds for M_SI[h,g] on U̅⁰.

Under the basic Assumptions A_U⁰, B_U⁰, Assumption D_U⁰ guarantees that inside of a small neighbourhood W of M_SI[h,g] ∩ U̅⁰ the feasible set M_SI[h,g] is a topological (Lipschitzian) manifold (cf. [12, 30]). Without loss of generality, we may say: W = W⁰. Furthermore, let us from now on without loss of generality assume that U̅⁰ is a manifold with generalized boundary, fulfilling LICQ and having transversal intersection with M_SI[h,g]. Here, in the presence of maybe infinitely many active inequality constraints y ∈ Y₀(x) on g(x,·), this transversality can be accomplished in the following way (and sense).

We denote the relative boundary of M_SI[h,g] in M[h] := { x ∈ ℝⁿ | h_i(x) = 0 (i ∈ I) } by ∂M_SI[h,g]. Now, the parts of the boundary ∂U⁰ of U⁰ which have nonempty intersection with the (n−m−1)-dimensional Lipschitzian manifold ∂M_SI[h,g] may locally be given as the (m+1)-dimensional (residual) linear span of the EMF-vector ζ and of the set {η¹,...,ηᵐ} being a basis of the orthogonal complement of the tangent space T_x M[h] := { p ∈ ℝⁿ | Dh_i(x) p = 0 (i ∈ I) } at x. Hereby, the point x, where that span is attached, is the necessarily locally unique element in that intersection of the (creased or Lipschitzian) manifolds ∂U⁰ and ∂M_SI[h,g] (respectively).

Hence, close to x, ∂U⁰ looks like a linear hyperplane. Inwardly, on the relative interior of M_SI[h,g], ∂U⁰ can be adapted without becoming tangential to the feasible set. For more information on transversality we refer to [6, 9]. In the sequel, the generalized (creased) boundary ∂U⁰ may even globally be composed of linear faces, shrinking (or in a transversal, affinely linear way perturbing and intersecting) U⁰ otherwise. Then, U̅⁰ is a compact polyhedron (polytope). Of course, from the viewpoint of practice, geometrical insights and linear algebra turn out to be helpful in order to construct U⁰.

For an illustration see Fig. 1, where we also give an impression of a slightly perturbed feasible set.

Hence, M_SI[h,g] ∩ U̅⁰ fulfills EMFCQ, too, namely with EMF-vectors ζ⁰ in the tangential and (relatively) "inwardly pointing" sense of Definition 1.2. For an illustration we look at a neighbourhood of the point x' in Fig. 1. Hence, we learn from [12, 30] that M_SI[h,g] ∩ U̅⁰ is also a compact topological (Lipschitzian) manifold.

In the sequel, let ‖·‖∞ stand for the maximum norm in some Euclidean space, e.g., ℝⁿ or ℝ^q. We emphasize that our local linearizations of sets given above are exact representations, while our local linearizations of functions, which will be used in the next section, are only approximations.

Fig. 1. Transversal intersection of the feasible set M_SI[h,g] with U⁰ (m = 1; an example). The feasible set M_SI[h̃,g̃], arising from a slight perturbation (h,g) → (h̃,g̃), is indicated, too.

2. The iteration procedure and its topological background

In order to get a better understanding of our iteration procedure, we present the underlying topology step by step. On the distinguishing of these steps (parts), our (feasible) set approximations and, finally, our convergence proof will be founded.

For our approach, the following local consideration plays a central motivating part.

2.1. Part 1.a: Locally finite approximative problems

Let some open neighbourhood W¹ of U̅⁰, points x̄ ∈ W¹, ȳ ∈ ℝ^q, and open squares S¹ := { x ∈ ℝⁿ | ‖x − x̄‖∞ < δ¹ }, S² := { y ∈ ℝ^q | ‖y − ȳ‖∞ < δ² } (δ¹, δ² > 0) be given with the properties W̄¹, S̄¹ ⊆ W⁰ and Y(x) ∩ S² ≠ ∅ (x ∈ S̄¹). (We remember that the set W⁰ was introduced in Section 1 as some bounded neighbourhood of U̅⁰, where for each x ∈ W⁰ the set Y(x) can locally be linearized.)

In this part 1, let ȳ be ȳ'. However, in part 2, ȳ will be specified in suitable different ways. We replace the inequality constraints g(x,y) ≥ 0 (y ∈ Y(x) ∩ S²) on x ∈ S̄¹ by the approximative inequality constraints

g^lin(x,y) ≥ 0 (y ∈ Y(x) ∩ S²),   (2.1)

where

g^lin(x,y) := g(x̄,ȳ) + D_x g(x̄,ȳ)(x − x̄) + D_y g(x̄,ȳ)(y − ȳ)   ((x,y) ∈ ℝⁿ × ℝ^q).

Hereby, g has been substituted by its (local) linearization g^lin, which is an affinely linear function. Now, let us for each x ∈ S̄¹ introduce the convex hull Co_x := co(Y(x) ∩ S²) of Y(x) ∩ S², being the smallest convex set Q ⊆ ℝ^q having Y(x) ∩ S² as a subset. From Caratheodory's theorem (cf. [21, Theorem 17.1] and [16]) we know that Co_x can be represented as follows:

Co_x = { Σ_{σ=1}^{q+1} λ_σ y_σ | λ_σ ∈ [0,1], y_σ ∈ Y(x) ∩ S² (σ ∈ {1,...,q+1}), Σ_{σ=1}^{q+1} λ_σ = 1 }.   (2.2)

Of course, all the inequalities from (2.1) remain fulfilled if we replace the (sub)set Y(x) ∩ S² by Co_x. Hence, (2.1) necessarily follows from

g^lin(x,y) ≥ 0 (y ∈ Co_x).   (2.3)

Conversely, in view of the convex combinations from (2.2) it is easily realized that (2.1) is also sufficient for the inequalities from (2.3). Consequently, (2.1) and (2.3) are equivalent.
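The equivalence of (2.1) and (2.3) rests on the fact that an affine function is nonnegative on a convex hull exactly if it is nonnegative on the generating set. A small numerical illustration in Python (the finite set Y and the coefficients c, d are invented for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite sample standing in for Y(x) ∩ S^2, and an affine function
# g_lin(y) = c^T y + d playing the role of g^lin(x, ·) for a fixed x.
Y = rng.normal(size=(20, 2))
c, d = np.array([0.7, -1.3]), 0.5

def g_lin(y):
    return y @ c + d

# An affine function attains its minimum over conv(Y) at a generator,
# so min over conv(Y) equals min over Y itself. We verify this on random
# convex combinations (by Caratheodory, q+1 = 3 points suffice in R^2).
min_on_Y = min(g_lin(y) for y in Y)
for _ in range(1000):
    idx = rng.choice(len(Y), size=3, replace=False)
    lam = rng.dirichlet(np.ones(3))            # lam_i >= 0, sum = 1
    y_hull = lam @ Y[idx]                      # a point of conv(Y)
    assert g_lin(y_hull) >= min_on_Y - 1e-12   # never below the vertex minimum
print("min of g_lin over Y (= over conv(Y)):", min_on_Y)
```

Since g_lin(Σ λ_σ y_σ) = Σ λ_σ g_lin(y_σ) ≥ min over the generators, checking (2.1) on Y(x) ∩ S² and checking (2.3) on Co_x are one and the same test.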

Now, let us impose one more property on the squares S¹ and S². Namely, we assume that their boundaries ∂S¹, ∂S² have transversal intersections with the manifolds (with Lipschitzian or generalized boundaries) M_SI[h,g] ∩ U̅⁰ and Y(x) (x ∈ S̄¹), respectively. (The intersections Y(x) ∩ S² (x ∈ S̄¹) on the "lower stage" were even supposed to be nonempty.)

These transversalities can be achieved by shrinking the squares S^γ around the center points x̄, ȳ' sufficiently much, and by arbitrarily small linear perturbations of the faces of S^γ (γ ∈ {1,2}). Without loss of generality, x̄ and ȳ' may remain the centers of S¹ and S², respectively. (Here, we also remember the transversal choice of U⁰ on the "upper stage".)

Of course, a convex hull of some set need not reveal a polyhedral structure. However, under our Assumptions A_U⁰, B_U⁰, C and because of the transversal choice of S², we have a

Fig. 2. The index set Y(x), its transversal intersection with the square S², and Co_x, x ∈ S̄¹ (r = 0, j⁰ = 5; an example).

geometrical situation as indicated in Fig. 2 where, in particular, Co_x turns out to be compact and polyhedral. Hence, Co_x is a polytope, having a finite number j⁰ of vertices yʲ(x) (j ∈ {1,...,j⁰}).

For Fig. 2 we note that the lower level sets of a convex function are convex [21], and we take into account the geometrical meaning of LICQ [8, 13]. The notation yʲ(x) already indicates the fact that there is a functional dependence of the vertices on x. Indeed, based on Assumption B_U⁰, on the transversal choice of S² and with S¹ supposed to be small enough and lying in suitable small open neighbourhoods U_{S¹}, the functions yʲ : x ↦ yʲ(x) (x ∈ U_{S¹}; j ∈ {1,...,j⁰}) are implicitly given by applying the implicit function theorem to the following systems of equations. Hereby, these functions turn out to be of class C¹, and we may also state the independence of j⁰ of the choice of x ∈ S̄¹. Now, for each j ∈ {1,...,j⁰} our system is

Fʲ(x,y) = 0, given by

u_1(x,y) = 0,
...
u_r(x,y) = 0,
v_{ℓ_1ʲ}(x,y) = 0,
...
v_{ℓ_{p_j}ʲ}(x,y) = 0,
e_1ʲᵀ (y − y_1ʲ) = 0,
...
e_{q−r−p_j}ʲᵀ (y − y_{q−r−p_j}ʲ) = 0.   (2.4)

Here L₀(x̄,yʲ) = { ℓ_1ʲ, ..., ℓ_{p_j}ʲ }, and { y ∈ ℝ^q | e_κʲᵀ (y − y_κʲ) = 0 } (κ ∈ {1,...,q−r−p_j}) stands for the faces of S² which locally appear around each vertex yʲ = yʲ(x) (j ∈ {1,...,j⁰}). The point (x, yʲ(x)) is the locally unique solution of Fʲ(x,y) = 0. Moreover, the vectors

e_κʲ (κ ∈ {1,...,q−r−p_j}) form a basis of the orthogonal complement, and they define the linear half spaces H_κʲ (κ ∈ {1,...,q−r−p_j}) whose intersection ∩_{κ=1}^{q−r−p_j} H_κʲ locally represents the square S², by

H_κʲ := { y ∈ ℝ^q | e_κʲᵀ (y − y_κʲ) ≥ 0 }.   (2.5)

Our implicit C¹-functions yʲ(·) (j ∈ {1,...,j⁰}) with their underlying choice of the vectors e_κʲ, κ ∈ {1,...,q−r−p_j}, may be considered as specifications of the function yʲ = y_j from Section 1:

yʲ(x) = yʲ(x,0) = (φ_j^x)⁻¹(0)   (x ∈ S̄¹).   (2.6)

Hence, these vertex functions become also involved in Definition 1.2 of EMFCQ. Now, we may express Co_x (x ∈ S̄¹) in the following way as a polytope:

Co_x = { Σ_{σ=1}^{j⁰} λ_σ y^σ(x) | λ_σ ∈ [0,1] (σ ∈ {1,...,j⁰}), Σ_{σ=1}^{j⁰} λ_σ = 1 }   (2.7)

(whereby the vertices yʲ(x), j ∈ {1,...,j⁰}, need not be affinely independent). Hence, we may over S̄¹ (or over the neighbourhood U_{S¹}) equivalently represent the inequality constraints from (2.3) as

gʲ(x) := g^lin(x, yʲ(x)) ≥ 0 (j ∈ {1,...,j⁰}).   (2.8)

We note that the inequality constraint functions can be written as follows:

gʲ(x) = g(x̄,ȳ) + D_x g(x̄,ȳ)(x − x̄) + D_y g(x̄,ȳ)(yʲ(x) − ȳ)   (x ∈ U_{S¹}).   (2.8a)

In view of the equivalences (2.1) ⟺ (2.3) and (2.3) ⟺ (2.8) we have arrived at (2.1) ⟺ (2.8). This equivalence means a representation (over S̄¹) of the generalized semi-infinite (GSI) inequality constraints from (2.1) by means of the finite (F) inequality constraints from (2.8). We underline that this representation is exact, not only approximative. Now, let us turn from the very local to the (in x and y) more global viewpoint.
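The vertex functions yʲ(·) determined by the systems (2.4) can be differentiated by the implicit function theorem: Dyʲ(x) = −(D_y Fʲ(x,y))⁻¹ D_x Fʲ(x,y). A toy Python check of this formula on a hypothetical affine system (not taken from the paper), chosen so that yʲ(x) is known in closed form:

```python
import numpy as np

# Toy system F(x, y) = 0 with q = 2, built from one affine u_k and one
# face equation e^T (y - y') = 0, as in (2.4):
#   F1(x, y) = y1 + y2 - x
#   F2(x, y) = y1 - 2*y2
# Closed-form solution: y(x) = (2x/3, x/3), hence Dy(x) = (2/3, 1/3)^T.

def F(x, y):
    return np.array([y[0] + y[1] - x, y[0] - 2.0 * y[1]])

x0 = 1.5
y0 = np.array([2.0 * x0 / 3.0, x0 / 3.0])      # F(x0, y0) = 0
assert np.allclose(F(x0, y0), 0.0)

DyF = np.array([[1.0, 1.0], [1.0, -2.0]])      # Jacobian of F in y
DxF = np.array([[-1.0], [0.0]])                # Jacobian of F in x
Dy = -np.linalg.solve(DyF, DxF)                # implicit function theorem
print(Dy.ravel())                              # [0.666..., 0.333...]
```

Only the values of yʲ and Dyʲ at single points will be needed below (cf. formula (2.11)), which is exactly what such a linear solve provides.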

2.2. Part 1.b: Collecting the locally finite problems

Because of the Assumption A_U⁰ on compactness, there exist sufficiently fine and finite open coverings (S¹_α)_{α∈A}, A = {1,...,a⁰}, of W̄¹ (hence, of M_SI[h,g] ∩ U̅⁰), and (S²_{α,β})_{β∈B^α}, B^α = {1,...,b^α}, of Y(x), uniformly in x ∈ S̄¹_α (α ∈ A), consisting of squares as given in Subsection 2.1. Therefore, the shrinking and perturbing of faces may already be performed for S¹_α, S²_{α,β} (β ∈ B^α, α ∈ A) without destroying the covering properties. In particular, we have some exactness coming from the equations Y(x) = ∪_{β∈B^α} (Y(x) ∩ S²_{α,β}) (x ∈ S̄¹_α, α ∈ A).

With the help of such an underlying covering structure we get (in U̅⁰) a (global) approximation of P_SI(f,h,g,u,v) by means of (locally) finite optimization problems P_α(f,h,g_α) on S̄¹_α, where g_α comprises the constraints g_{α,β,j} := gʲ, j ∈ J^{α,β} := {1,...,j^{α,β}} (β ∈ B^α, α ∈ A). The corresponding feasible sets are M_α[h,g_α] := { x ∈ ℝⁿ | h_i(x) = 0 (i ∈ I), x ∈ S̄¹_α, g_{α,β,j}(x) ≥ 0 (j ∈ J^{α,β}, β ∈ B^α) }. In other words, that global approximation

P^∪_F(f,h,g^∪):  Minimize f(x) on
M^∪[h,g^∪] := { x ∈ ℝⁿ | h_i(x) = 0 (i ∈ I), x ∈ S̄¹_α, g_{α,β,j}(x) ≥ 0 (j ∈ J^{α,β}, β ∈ B^α) for some α ∈ A },

where g^∪ := (g_α)_{α∈A} (enumerating all the functions g_{α,β,j}, β ∈ B^α, α ∈ A), can be decomposed into the problems P_α(f,h,g_α) (α ∈ A). Hereby, we have the representation M^∪[h,g^∪] = ∪_{α∈A} M_α[h,g_α]. That global problem may also be called a problem from disjunctive optimization (cf. [10]). Note that, for simplicity, in the list of functional parameters (f,h,g^∪), we did not explicitly mention the inequality constraints n_jᵅᵀ(x − x_jᵅ) ≥ 0, j ∈ {1,...,2n} (α ∈ A), of all these foregoing feasible sets and problems, respectively.

Of course, each of these (only) finitely many problems reveals a much easier structure than our given GSI problem. Moreover, the approximative character (perturbations) is only due to the local linearizations of the form (2.1), while the other changes in modelling were exact representations. Hence, in order to make the approximation better, we let the members of the open coverings (S¹_α)_{α∈A} and (S²_{α,β})_{β∈B^α} (α ∈ A) become arbitrarily small, and the finite cardinalities |A|, |B^α| (α ∈ A) become arbitrarily large. Then, for each α ∈ A, β ∈ B^α, the functions g^lin (ȳ = y_{α,β} being the center point of S²_{α,β}) and g, both restricted to the closure S̄_{α,β} of the set S_{α,β} := S¹_α × S²_{α,β}, become arbitrarily close to each other. This "being close" refers to our (locally uniformly continuous) functions g, g^lin and their (locally uniformly continuous) derivatives Dg(·) and Dg^lin(·) = Dg(x̄_α, y_{α,β}). To be precise, it is understood in the sense of the corresponding C¹-Whitney topology on S̄_{α,β}. A typical base neighbourhood of g in the sense of that topology is given by the following sets, which are due to "controlling" positive valued, continuous functions ε : S̄_{α,β} → ℝ:

W_ε^{α,β} := { φ ∈ C¹(ℝ^{n+q}, ℝ) | |φ(x,y) − g(x,y)| + Σ_{κ=1}^{n} |∂φ/∂x_κ(x,y) − ∂g/∂x_κ(x,y)| + Σ_{ρ=1}^{q} |∂φ/∂y_ρ(x,y) − ∂g/∂y_ρ(x,y)| < ε(x,y)  ((x,y) ∈ S̄_{α,β}) }.   (2.9)

In the sequel, we shall always refer to C¹-Whitney topologies for spaces of globally defined C¹-functions restricted to corresponding suitable manifolds, and to the product topology of several such topologies. (For more information see also [6, 9].) From topological investigations in (G)SI optimization (cf., e.g., [12, 30]) we learn that under the fulfilled condition EMFCQ on M_SI[h,g] ∩ U̅⁰ and under such close approximations of g by g^lin (β ∈ B^α, α ∈ A), the "perturbed" feasible set M^∪[h,g^∪] ∩ U̅⁰ finally lies arbitrarily close to M_SI[h,g] ∩ U̅⁰. Therefore, we note that in U̅⁰, M^∪[h,g^∪] is exactly represented by the (finite) union of the (G)SI feasible sets M_SI[h,g^{lin,α}] := { x ∈ ℝⁿ | h_i(x) = 0 (i ∈ I), g^lin(x,y) ≥ 0 (y ∈ Y(x) ∩ S̄²_{α,β}, β ∈ B^α) } = M_α[h,g_α] (α ∈ A). The latter new sets may be considered as coming from slight functional perturbations of our feasible sets M_SI[h,g] ∩ U̅⁰ with the constraints g(x,y) ≥ 0 (y ∈ Y(x)) being split up due to the different parts Y(x) ∩ S²_{α,β} (β ∈ B^α).

This set theoretical approximation means that, finally, M^∪[h,g^∪] ∩ U̅⁰ lies in each arbitrarily close neighbourhood W' of M_SI[h,g] ∩ U̅⁰, and that the boundaries of both sets, relatively in M[h], also lie arbitrarily close to each other ([12, 30]).

Of course, for each x ∈ S̄¹_α we concentrate on the discrete structure of the vertices y^j_{α,β}(x) (j ∈ J^{α,β}) of Y(x) ∩ S̄²_{α,β}, which is of less complexity in comparison with the manifold of points y (= (φ_j^x)⁻¹(z)) ∈ Y(x). Moreover, we conclude from EMFCQ, necessarily holding for the slightly perturbed topological manifold M_SI[h,g^{lin,α}] ∩ U̅⁰ with EMF-vectors ζ⁰ (cf. Section 1), too, that for each α ∈ A also the finite version MFCQ of EMFCQ is fulfilled at each element x ∈ M_α[h,g_α] ∩ U̅⁰ of the union M^∪[h,g^∪] ∩ U̅⁰ (α ∈ A being suitable). In fact, as EMFCQ (being preserved under small perturbations) holds for M_SI[h,g^{lin,α}] ∩ U̅⁰ (= M_α[h,g_α] ∩ U̅⁰) (see [12, 30]) and as the corresponding functions g_{α,β,j} and g_j⁰(·,0) are (locally) arbitrarily C¹-close (β ∈ B^α), MFCQ necessarily follows for M_α[h,g_α] ∩ U̅⁰. (Therefore, a continuity argumentation on a small perturbation is made again, in particular looking at (1.2b,b') and with some simplified enumeration by j.)

Namely, using notations analogous to Subsection 2.1 and referring to the transversally chosen faces { x ∈ S̄¹_α | n_jᵅᵀ(x − x_jᵅ) = 0 } ⊆ ∂S¹_α (j ∈ {1,...,2n}) with inwardly pointing normal vectors n_jᵅ (j ∈ {1,...,2n}), here, this condition MFCQ means:

MF1. The vectors Dh_i(x), i ∈ I, (considered as a family) are linearly independent.

MF2. There exists a vector ζ ∈ ℝⁿ such that

Dh_i(x) ζ = 0 for all i ∈ I,

D_x g_{α,β,j}(x) ζ = ( D_x g(x̄_α, y_{α,β}) + D_y g(x̄_α, y_{α,β}) Dy^j_{α,β}(x) ) ζ > 0 for all j ∈ J₀^{α,β}(x), β ∈ B^α,

n_jᵅᵀ ζ > 0 for all j ∈ J₀ᵅ(x),

where

J₀^{α,β}(x) := { j ∈ J^{α,β} | g_{α,β,j}(x) = 0 },
J₀ᵅ(x) := { j ∈ {1,...,2n} | n_jᵅᵀ(x − x_jᵅ) = 0 }.

Hence, M_α[h,g_α] ∩ U̅⁰ is a topological manifold (α ∈ A). This fulfillment of MFCQ will be valuable in the next subsection (part 2).
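Condition MF2 asks for a direction ζ with Dh_i(x)ζ = 0 and strictly positive slope of all active inequality gradients; its existence can itself be decided by a small auxiliary linear program maximizing the common slack. A hedged Python sketch with invented gradient data (the helper and its inputs are illustrative, not part of the paper):

```python
import numpy as np
from scipy.optimize import linprog

def mfcq_direction(Dh, Dg_active):
    """Search an MF2 direction: Dh @ z = 0, Dg_active @ z > 0.

    LP:  maximize t  s.t.  Dh z = 0,  Dg z >= t,  |z_i| <= 1.
    MF2 holds at the point iff the optimal t is positive.
    Dh: (m, n) equality gradients; Dg_active: (p, n) active inequality gradients.
    """
    m, n = Dh.shape
    p = Dg_active.shape[0]
    # variables: (z, t) in R^{n+1}; linprog minimizes, so the objective is -t
    c = np.zeros(n + 1)
    c[-1] = -1.0
    A_eq = np.hstack([Dh, np.zeros((m, 1))])
    b_eq = np.zeros(m)
    # Dg z >= t  <=>  -Dg z + t <= 0
    A_ub = np.hstack([-Dg_active, np.ones((p, 1))])
    b_ub = np.zeros(p)
    bounds = [(-1, 1)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:-1]

# Toy data: one equality with Dh = (0, 0, 1) and two active inequalities.
Dh = np.array([[0.0, 0.0, 1.0]])
Dg = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])
t, z = mfcq_direction(Dh, Dg)
print("MF2 holds:", t > 1e-9)   # MF2 holds: True
```

With opposing gradients, e.g. (1,0,0) and (−1,0,0), the optimal t drops to 0 and the check correctly reports that no MF2 direction exists.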

However, our implicit functions y^j_{α,β}(·) would still remain to be determined, and by inserting them into (2.8) the affine linearity of our inequality constraints (see (2.3)) gets lost (β ∈ B^α, α ∈ A). Therefore, in the next subsection we shall overcome these disadvantages with the help of further approximative (local) linearizations.

2.3. Part 2: Locally linear approximative problems

Based on the first (in U̅⁰ global) approximation P^∪_F(f,h,g^∪) with its finite subproblems P_α(f,h,g_α) and underlying square structure, we perform one more perturbation by means of local linearizations in the variable x. Therefore, we specify the parameter ȳ (cf. (2.1)) by means of the vertices y^j_{α,β} = y^j_{α,β}(x̄_α) (β ∈ B^α, α ∈ A), and we set for all x ∈ ℝⁿ, i ∈ I, j ∈ J^{α,β}, β ∈ B^α, α ∈ A:

f_α(x) := f(x̄_α) + Df(x̄_α)(x − x̄_α),   (2.10a)

h_{α,i}(x) := h_i(x̄_α) + Dh_i(x̄_α)(x − x̄_α),   (2.10b)

ŷ^j_{α,β}(x) := y^j_{α,β}(x̄_α) + Dy^j_{α,β}(x̄_α)(x − x̄_α) = y^j_{α,β} + Dy^j_{α,β}(x̄_α)(x − x̄_α)   (2.10c)

and, herewith,

ĝ_{α,β,j}(x) := g^lin(x, ŷ^j_{α,β}(x)) = g(x̄_α, y^j_{α,β}) + ( D_x g(x̄_α, y^j_{α,β}) + D_y g(x̄_α, y^j_{α,β}) Dy^j_{α,β}(x̄_α) )(x − x̄_α).   (2.10d)

We note that for the definition of ĝ_{α,β,j} (j ∈ J^{α,β}, β ∈ B^α, α ∈ A) we no longer need to know y^j_{α,β}(x), Dy^j_{α,β}(x) explicitly as functions, but only their (special) values y^j_{α,β}(x̄_α) = y^j_{α,β} and Dy^j_{α,β}(x̄_α) at the one point x̄_α. Moreover, from the proof of the implicit function theorem (cf., e.g., [1]) we see the following representation of Dy^j_{α,β}(x̄_α) (cf. [29]):

Dy^j_{α,β}(x̄_α) = −( D_y F^j_{α,β}(x̄_α, y^j_{α,β}) )⁻¹ D_x F^j_{α,β}(x̄_α, y^j_{α,β}),   (2.11)

which can further be evaluated by means of (2.4), where Fʲ = F^j_{α,β}. In this way we get the (locally) linear finite optimization problems

P_{α,lin}(f_α, h_α, ĝ_α): Minimize f_α(x) on M_{α,lin}[h_α, ĝ_α],

being located in S̄¹_α, where h_α, ĝ_α comprise the constraints h_{α,i}, i ∈ I, and ĝ_{α,β,j}, j ∈ J^{α,β} (β ∈ B^α, α ∈ A). In view of S̄¹_α being a square, the corresponding feasible set

M_{α,lin}[h_α, ĝ_α] := { x ∈ ℝⁿ | h_{α,i}(x) = 0 (i ∈ I), x ∈ S̄¹_α, ĝ_{α,β,j}(x) ≥ 0 (j ∈ J^{α,β}, β ∈ B^α) },

on which f_α has to be minimized, is compact and completely defined by affinely linear constraints (α ∈ A). These problems may be regarded as the (linear) subproblems of the following collected global approximation of our GSI problem:

P^∪_lin(f^∪, h^∪, ĝ^∪): Minimize f_α(x), where x ∈ M_{α,lin}[h_α, ĝ_α], α ∈ A,

over the collected feasible set

M^∪_lin[h^∪, ĝ^∪] := { x ∈ ℝⁿ | h_{α,i}(x) = 0 (i ∈ I), x ∈ S̄¹_α, ĝ_{α,β,j}(x) ≥ 0 (j ∈ J^{α,β}, β ∈ B^α) for some α ∈ A } = ∪_{α∈A} M_{α,lin}[h_α, ĝ_α].

Here

f^∪ = (f_α)_{α∈A},  h^∪ = (..., h_{α,i} (i ∈ I, α ∈ A), ...),  ĝ^∪ = (..., ĝ_{α,β,j} (j ∈ J^{α,β}, β ∈ B^α, α ∈ A), ...)

(with suitable enumerations), and the affinely linear inequality constraints defining (x ∈) S̄¹_α have (for simplicity) again been suppressed in the list of functional parameters. As for each given α ∈ A the open coverings (by means of squares) may again be thought to be fine enough, the functions f_α, h_α, ĝ_α are (locally) arbitrarily close perturbations of the functions f, h, g_α. Now, under the guaranteed condition MFCQ for M^∪[h,g^∪] ∩ U̅⁰ (cf. Subsection 2.2), M^∪_lin[h^∪, ĝ^∪] ∩ U̅⁰ is an arbitrarily good approximation of M^∪[h,g^∪] ∩ U̅⁰. From the considerations in Subsection 2.2, moreover, we know that the latter set may lie arbitrarily close to M_SI[h,g] ∩ U̅⁰.

Altogether, after these two approximations in (G)SI and F optimization, respectively, we state: in U̅⁰, M_SI[h,g] can arbitrarily well be described by means of the compact approximative set M^∪_lin[h^∪, ĝ^∪].

Moreover, the components of f^∪ locally approximate f, such that, together with the previous reflections on set approximations, P^∪_lin(f^∪, h^∪, ĝ^∪) may serve as a very fine approximative description of our problem P_SI(f,h,g,u,v). This fact will be exploited in the proof of the convergence theorem (Section 3). Both the feasible set M^∪_lin[h^∪, ĝ^∪] and the problem P^∪_lin(f^∪, h^∪, ĝ^∪) can in U̅⁰ be considered as a "mosaic" consisting of the linearly defined feasible sets M_{α,lin}[h_α, ĝ_α] (see Fig. 3) and the linear subproblems P_{α,lin}(f_α, h_α, ĝ_α) (α ∈ A). Hereby, each of the latter subproblems can be solved by means of linear programming, e.g., using the simplex algorithm for finite optimization; cf. [17, 25, 27]. (We also mention that there is a simplex algorithm in the case of semi-infinite optimization; see [23].)
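Each subproblem of this kind is an ordinary LP: a linearized objective over affine equalities, affine inequalities and the box given by the square in x-space. A minimal sketch using scipy's linprog, where all numerical data are placeholders invented for illustration (nothing below comes from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# One linear subproblem in the spirit of P_{alpha,lin}: minimize the
# linearized objective gradient Df over a polytope consisting of one
# linearized equality, one linearized inequality, and a square (box).

Df = np.array([1.0, 2.0])                  # gradient of f at the center point
# linearized equality  h(x) = 0:  x1 - x2 = 0
A_eq, b_eq = np.array([[1.0, -1.0]]), np.array([0.0])
# linearized inequality g(x) >= 0:  x1 + x2 >= 1, rewritten in <= form
A_ub, b_ub = np.array([[-1.0, -1.0]]), np.array([-1.0])
bounds = [(0.0, 2.0), (0.0, 2.0)]          # the square in x-space

res = linprog(Df, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)   # vertex minimizer of the polytope, here (0.5, 0.5)
```

The minimizer is attained at a vertex of the polytope, which is exactly the situation exploited by the simplex algorithm mentioned above.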

Note that those (compact) "simplices" are given as the intersections between the polytopes which piece together M^∪_lin[h^∪, ĝ^∪], and the polytope U̅⁰. In this way, for our approximative problems we also have a polytope structure in the x-space ℝⁿ. As under our approximation the simplicial structure becomes finer and finer and M_SI[h,g] transversally meets U⁰, that intersection will finally also be transversal.

We emphasize that up to approximation, the (structural) complexity of our GSI problem has been reduced to the complexity of a linear F problem.

Fig. 3. The mosaic M^∪_lin[h^∪, ĝ^∪] in U⁰ (indicated in a hatched way), and level sets { x ∈ ℝⁿ | f_α(x) = t } (t ∈ ℝ, α = 8) (m = 1, a⁰ = 11; cf. Fig. 1; an example).

2.4. Part 3: Completion of the iteration procedure

After all the preparations made in Subsections 2.1-2.3, we are in a position to summarize our iteration procedure in the following way. We start with the initialization step, given at the index ν = 0. Here, some open coverings consisting of squares are given, namely (S^{1,0}_α)_{α∈A⁰}, (S^{2,0}_{α,β})_{β∈B^{α,0}} (α ∈ A⁰), whereby for the first covering the family (x̄⁰_α)_{α∈A⁰} consists of corresponding elements, say, center points. For instance, the edges of the (in pairs perhaps overlapping) squares S^{1,0}_α (α ∈ A⁰) may be taken from some grid structure and with (‖·‖∞-)radius δ^{1,0} > 0, e.g., in the way of Fig. 4.

Up to slightly (transversally) perturbing and shrinking S^{2,0}_{α,β} (β ∈ B^{α,0}), we get the mosaic problem P^{∪,0}_lin(f^{∪,0}, h^{∪,0}, ĝ^{∪,0}). The squares S^{1,0}_α (α ∈ A⁰) may also be sufficiently small in order to come (as sub-domains) from applications of the implicit function theorem (see part 1.a); otherwise, they shall become sufficiently small in a later step (where ν > 0). Of course, if it is desired for our implicit vertex functions, the finite index set A⁰ is also allowed immediately to be enlarged due to a subdivision of some square S^{1,0}_α into smaller squares.

Now, with the help of linear programming we choose (global) minima x_α^0 (α ∈ A^0 = {1, ..., α^{0,0}}) of the subproblems P_α^{lin}(f_α^0, h_α^0, g_α^0) (see also Fig. 3). These points need not be uniquely defined. Let x^0 = x_{α^0}^0 be some element of {x_α^0 | α ∈ A^0} which is minimal for f^{∗,0} in the following sense:

f_{α^0}^0(x^0) = min{f_α^0(x_α^0) | α ∈ {1, ..., α^{0,0}}} = min{f_α^0(x) | x ∈ M_α^{lin}[h_α^0, g_α^0] ∩ U^0, α ∈ A^0}.   (2.12_0)
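The selection rule (2.12_0) can be sketched in a few lines. Since each subproblem's feasible set is a polytope, a linear objective attains its minimum at one of its vertices; the sketch below exploits this fact and enumerates hypothetical vertex lists directly, as a stand-in for the simplex-method calls of [17, 25, 27]. All concrete data (objectives, vertex lists) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the selection step: each subproblem P_alpha^lin has
# a polytope feasible set, so a linear objective c^T x attains its
# minimum at a vertex.  The vertex lists below are hypothetical.

def solve_subproblem(c, vertices):
    """Minimize the linear objective c^T x over a polytope given by its
    vertex list (a stand-in for one simplex-method call)."""
    return min(vertices, key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))

def mosaic_minimum(subproblems):
    """Among the subproblem minimizers x_alpha, pick one minimal for its
    own objective f_alpha, as in the selection rule above."""
    best = None
    for c, vertices in subproblems:
        x = solve_subproblem(c, vertices)
        val = sum(ci * xi for ci, xi in zip(c, x))
        if best is None or val < best[0]:
            best = (val, x)
    return best

# two toy subproblems: the unit square and a shifted copy of it
subs = [
    ((1.0, 1.0), [(0, 0), (1, 0), (0, 1), (1, 1)]),
    ((1.0, 1.0), [(0.5, 0.5), (1.5, 0.5), (0.5, 1.5), (1.5, 1.5)]),
]
val, x0 = mosaic_minimum(subs)
print(val, x0)  # 0.0 (0, 0)
```

Vertex enumeration is only practical for small subproblems; the point of the sketch is the two-stage selection (per-subproblem minimum, then minimum over the mosaic), not the inner solver.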

In the case of a tangential effect between ∂S_{α,β}^{2,0} and Y(x) for some β ∈ B^{α,0}, α ∈ A^0, x ∈ S_α^{1,0}, a small linear transversal perturbation preserves the open covering by means of squares. Hereby, the (‖·‖_∞-) radius δ^{2,0} may decrease a bit.

Let for some ν ∈ IN_0 := {0, 1, 2, 3, ...} a global minimum x^ν ∈ M^{∗,lin}[h^{∗,ν}, g^{∗,ν}] ∩ U^0 of the mosaic P^{∗,lin}(f^{∗,ν}, h^{∗,ν}, g^{∗,ν}) (over U^0), with underlying open coverings (S_α^{1,ν})_{α∈A^ν}, (S_{α,β}^{2,ν})_{β∈B^{α,ν}} (α ∈ A^ν), already be given. Then, for the index ν + 1 we decompose S_α^{1,ν} by means of a further grid, which is given by dividing the radius by some natural number κ^{ν+1} ∈ IN, κ^{ν+1} ≥ 2. Then, the new radii are δ^{1,ν+1} := δ^{1,ν}/κ^{ν+1} > 0. We have again to guarantee some overlapping of the new squares (e.g.: two or more inner parts vs. two outer parts at each side, as it was done in the Figures 3, 4). After small transversal perturbations and restrictions we get open coverings (S_α^{1,ν+1})_{α∈A^{ν+1}}, (S_{α,β}^{2,ν+1})_{β∈B^{α,ν+1}} (α ∈ A^{ν+1}).

Now, we can again select global minima x_α^{ν+1}, x^{ν+1} of the subproblems P_α^{lin}(f_α^{ν+1}, h_α^{ν+1}, g_α^{ν+1}) (α ∈ A^{ν+1}) and of the mosaic P^{∗,lin}(f^{∗,ν+1}, h^{∗,ν+1}, g^{∗,ν+1}), all of them being restricted to U^0. Hence, for some α^{ν+1} ∈ A^{ν+1} = {1, ..., α^{0,ν+1}} we have x^{ν+1} = x_{α^{ν+1}}^{ν+1} and

f_{α^{ν+1}}^{ν+1}(x^{ν+1}) = min{f_α^{ν+1}(x_α^{ν+1}) | α ∈ {1, ..., α^{0,ν+1}}} = min{f_α^{ν+1}(x) | x ∈ M_α^{lin}[h_α^{ν+1}, g_α^{ν+1}] ∩ U^0, α ∈ A^{ν+1}}.   (2.12_{ν+1})

In this way, we have arrived at a sequence (x^ν)_{ν∈IN_0} of global minimizers of our approximative mosaics of linear problems.
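The refinement loop producing the sequence of minimizers can be indicated schematically. In this hedged sketch the linearized mosaic problems are replaced by exact evaluations of a smooth toy objective at the square centers; the division of the covering radius by an integer κ ≥ 2 per step follows the text. All concrete choices (objective, domain, κ) are illustrative assumptions.

```python
# Schematic refinement loop: at each iteration nu the covering radius is
# divided by kappa >= 2, and a minimizer over the square centers is
# recorded.  The objective f and the box [0,1]^2 are toy stand-ins for
# the linearized mosaic problems of the text.

def grid_minimizer(f, lo, hi, n):
    """Evaluate f at the centers of an n x n grid of squares covering
    the box [lo, hi]^2 and return a minimizing center."""
    h = (hi - lo) / n
    centers = [(lo + (i + 0.5) * h, lo + (j + 0.5) * h)
               for i in range(n) for j in range(n)]
    return min(centers, key=f)

f = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
n, kappa = 2, 2
xs = []                     # sequence of minimizers x^0, x^1, ...
for nu in range(5):
    xs.append(grid_minimizer(f, 0.0, 1.0, n))
    n *= kappa              # delta^{1,nu+1} = delta^{1,nu} / kappa
print(xs[-1])               # close to the true minimizer (0.3, 0.7)
```

As in the text, overlapping of squares and transversal perturbations are suppressed here; the sketch only shows how the shrinking radii generate the minimizing sequence.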

2.5. On practical treatments and a generalization

We note that when the transversal squares S_α^{1,ν}, S_{α,β}^{2,ν} have become sufficiently small, they need furthermore no longer to be perturbed transversally; translations are already sufficient. Hence, their inwardly pointing normal vectors (e_{j,κ}^{α,β,ν}, κ ∈ {1, ..., q − r − p_j^{α,β,ν}}, j ∈ J^{α,β}, β ∈ B^{α,ν}, α ∈ A^ν) (cf. (2.4)) may remain unchanged. From this fact we conclude that our specification of the condition EMFCQ, on which our approximation is based, can be made independently of the choice of the iteration step ν ∈ IN_0.

Hereby, we may also use κ^{ν+1} as some controlling parameter for center points in order to guarantee open coverings based on the implicit function theorem, if otherwise there might arise a problem based on a tapering of S_α^{1,ν}.

Looking at our iteration procedure, we note that the squares do not explicitly appear in the approximative linear problems. Moreover, each vertex function y_j^{α,β,ν}(x) and its derivative Dy_j^{α,β,ν}(x) need only one single evaluation, namely at x_α^ν (ν ∈ IN_0). Hence, we only need to determine the vertices y_j^{α,β,ν}(x_α^ν), locally being extremal points for Y(x_α^ν) ∩ S_{α,β}^{2,ν} and its convex hull, with the locally minimal (total) number (in short: p^0) of active inequality constraints v_ℓ(x_α^ν, ·) (ℓ ∈ L) or e_{j,κ}^{α,β,ν T}(y − y_j^{α,β,ν}) (κ ∈ {1, ..., q − p^{α,β,ν} − r}). Here, p^{α,β,ν} (≤ q) is the number of active constraints v_ℓ(x_α^ν, ·). Moreover, in these points some (one-dimensional) lines in ∂(Y(x_α^ν) ∩ S_{α,β}^{2,ν}) meet, along which that total number p^0 locally is q − r. Sometimes it may be helpful to follow these lines, "paths", in order finally to detect their intersection point, being the vertex y_j^{α,β} = y_j^{α,β}(x_α^ν) (for related techniques see [4]; cf. also [26]). We only remark that the following of y_j^{α,β}(x) in the n parameters x_σ (σ ∈ {1, ..., n}) around the point x_α^ν may lead to the boundary of that neighbourhood U_α^{1,ν} of x_α^ν where the theorem on implicit functions applies.

After a phase of adjustment (being due to small ν) of our procedure, the existence of our coverings, consisting of squares in transversal position around given center points, is automatically satisfied in the following sense. Namely, if for some ν' ∈ IN_0 at x = x_α^{ν'} (∈ S_α^{1,ν'}, α ∈ A^{ν'}) the squares S_α^{1,ν'}, S_{α,β}^{2,ν'} (β ∈ B^{α,ν'}) transversally intersect M_SI[h, g] ∩ U^0 and Y(x_α^{ν'}), respectively, and if ν' is large enough, then the same transversality conditions hold for all x ∈ S_α^{1,ν}, ν ≥ ν'. This comes from the sufficiently great fineness of our antitonely shrinking squares, with their faces finally (from one ν to the next, ν + 1, and in pairs) remaining parallel.

Hence, we may concentrate on the center points x_α^ν and the vertices y_j^{α,β,ν}, where after this adjustment transversality is locally fulfilled. We would only intervene if we had some more geometrical insight into how the sets M_SI[h, g] ∩ U^0, Y(x) and their intersections with the corresponding squares look. Then, we could accelerate the squares' becoming transversal by means of suitable perturbations.

When ν increases more and more, we may with the help of geometrical observations become convinced that our minimum does not lie in a certain part of M_SI[h, g] ∩ U^0. In such a case, we can by means of transversal hyperplanes excise a smaller subset of M_SI[h, g] ∩ U^0, which may also be expressed as a shrinking of U^0. Then, the increase of the number of squares S_α^{1,ν} and, hence, S_{α,β}^{2,ν} (β ∈ B^{α,ν}, α ∈ A^ν) becomes weakened.

Fig. 3 indicates that often those auxiliary linear subproblems P_α^{lin}(f_α, h_α, g_α) which are located near the relative interior of M_SI[h, g] can be evaluated "extremely" easily. (We note the simple structure of M_α^{lin}[h_α, g_α] ∩ U^0 there.)

In the following section we shall see that there is a subsequence (x^{ν_k})_{k∈IN_0} of our iteration procedure converging to a (global) minimizer x̄ of P_SI(f, h, g, u, v) in U^0. The points x^{ν_k} (k ∈ IN_0) need not be feasible for P_SI(f, h, g, u, v). However, each sufficiently good approximation x^{ν_{k_0}} of x̄ can be made feasible by means of a slight shift x^{ν_{k_0}} → x^* ∈ ∂(M_SI[h, g] ∩ U^0) in the direction of an EMF-vector ζ^0. Hereby, the number k_0 may be chosen sufficiently large, or depending on our desire how close to get to the minimizer x̄ of our problem. Of course, as a foregoing task, in practice we have to look for converging subsequences, and always to exploit all the structural and geometrical features of the given problem under consideration.

As we typically use linear approximations of our functional data, our problem needs only to be of class C^1. We mention that in different parts of our approximations, higher differentiability (if it is given) could be exploited by means of Taylor polynomials of degree ≥ 2.

Now, let us come back to the general situation, where the Assumption C on affine linearity and convexity is not made. Hereby, for simplicity, we at first suppress the index ν. Then, we would replace the defining functions u_k, v_ℓ by their linearizations u_{lin,k}^{α,β}, v_{lin,ℓ}^{α,β} (k ∈ K, ℓ ∈ L), respectively (β ∈ B^α, α ∈ A), which are given by

u_{lin,k}^{α,β}(x, y) := u_k(x_α, y^{α,β}) + D_x u_k(x_α, y^{α,β})(x − x_α) + D_y u_k(x_α, y^{α,β})(y − y^{α,β})  (k ∈ K),   (2.13a)

v_{lin,ℓ}^{α,β}(x, y) := v_ℓ(x_α, y^{α,β}) + D_x v_ℓ(x_α, y^{α,β})(x − x_α) + D_y v_ℓ(x_α, y^{α,β})(y − y^{α,β})  (ℓ ∈ L).   (2.13b)

(We remember that y^{α,β} is the center point of S_{α,β}^2, β ∈ B^α, α ∈ A.) Let us set Y^{α,β}(x) := M_F[u_{lin}^{α,β}(x, ·), v_{lin}^{α,β}(x, ·)] (x ∈ IR^n). Firstly, for each x ∈ S_α^1, Y^{α,β}(x) is allowed to be a rough approximation of Y(x).
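The linearizations (2.13a-b) are ordinary first-order Taylor polynomials of the defining functions at (x_α, y^{α,β}). A minimal sketch, with the derivatives D_x, D_y replaced by central finite differences and with an illustrative choice of the constraint function v, might look as follows.

```python
# Hedged sketch of (2.13a-b): replace a C^1 constraint v(x, y) near the
# expansion point (xa, yab) by its first-order Taylor polynomial.
# Gradients are approximated by central finite differences; the concrete
# v below is an illustrative assumption.

def partial(f, p, i, eps=1e-6):
    """Central finite-difference approximation of df/dp_i at p."""
    q1, q2 = list(p), list(p)
    q1[i] += eps
    q2[i] -= eps
    return (f(q1) - f(q2)) / (2 * eps)

def linearize(v, xa, yab):
    """Return v_lin(x, y) = v(xa,yab) + Dx v.(x-xa) + Dy v.(y-yab)."""
    p0 = list(xa) + list(yab)
    v0 = v(p0)
    g = [partial(v, p0, i) for i in range(len(p0))]
    def v_lin(x, y):
        p = list(x) + list(y)
        return v0 + sum(gi * (pi - p0i) for gi, pi, p0i in zip(g, p, p0))
    return v_lin

v = lambda p: p[0] * p[1] - p[1] ** 2       # v(x, y) = x*y - y^2
v_lin = linearize(v, (1.0,), (1.0,))        # expand at x_a = 1, y_ab = 1
print(round(v_lin((1.1,), (1.0,)), 6))      # 0.1 (= Dx v * 0.1 with Dx v = 1)
```

Each such v_lin is affine linear in (x, y), so the approximative index sets it defines are polyhedral, which is exactly what the polytope arguments of the text require.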

Now, we have again to look for a square S_{α,β}^2 in such a (nonempty) way that ∂S_{α,β}^2 transversally meets Y^{α,β}(x). Then, Y^{α,β}(x) is already a polytope with the vertices y_j^{α,β}(x) (j ∈ {1, ..., j^{α,β}}), which are computable by means of linear algebra. In the case of Assumption C, these vertices play the part of y_j^{α,β}(x), while in general the points y_j^{α,β}(x) need no longer lie in Y(x). However, if with increasing ν our square structure becomes finer and finer, then (u_{lin}^{α,β,ν}, v_{lin}^{α,β,ν}) locally approaches (u, v), such that, in virtue of the Assumption B_{U^0} on LICQ (hence, MFCQ), for each x ∈ S_α^{1,ν} the union ∪_{β∈B^{α,ν}} Y^{α,β,ν}(x) ∩ S_{α,β}^{2,ν} gets arbitrarily close to Y(x) (approximation; [3, 30]). Hereby, Y^{α,β,ν}(x) is understood in the sense of Y^{α,β}(x), being due to ν ∈ IN_0. Again, we distinguish two approximation steps (parts), one in the (G)SI sense and a further one in the F sense.

Then, under our local perturbations from above, we finally arrive at mosaics in U^0, namely M^{∗,lin}[h^{∗,ν}, g^{∗,ν}], and P^{∗,lin}(f^{∗,ν}, h^{∗,ν}, g^{∗,ν}) consisting of linear subproblems P_α^{lin}(f_α^ν, h_α^ν, g_α^ν), α ∈ A^ν (ν ∈ IN_0). These mosaics approximate M_SI[h, g] and P_SI(f, h, g, u, v) (in U^0), respectively.

Let us remark that we could also comprise the functions u_{lin}^{α,β,ν}, v_{lin}^{α,β,ν} (β ∈ B^α, α ∈ A) in the vector notations u^{∗,ν}, v^{∗,ν}, respectively (ν ∈ IN_0).

As there is one more step of perturbation involved (see (2.13a, b)) instead of exactness, this approximation is less close than the one under Assumption C.

Finally, our mosaics yield a sequence (x^ν)_{ν∈IN_0} consisting of (global) minimizers of P^{∗,lin}(f^{∗,ν}, h^{∗,ν}, g^{∗,ν}), restricted to U^0, respectively. Because of the less close approximation we cannot expect that this sequence, or some subsequence, converges more strongly than is accomplished in the case of Assumption C.

We underline that by means of our referring to polytopes, we did not explicitly need a change of our coordinates y → z. For further treatments on polytopes we refer to [32].

Taking account of both all the preceding explanations of our iterative approach and the special features of some concretely given problem, an algorithm which solves our given GSI problem can be developed.

3. On the convergence of the iteration procedure

3.1. The convergence theorem and its proof

Based on the preparations given in Sections 1 and 2, we may formulate our main result as follows.

THEOREM 3.1 (Theorem on Convergence).

Let the Assumptions A_{U^0}, B_{U^0}, C and D_{U^0} be satisfied due to a bounded open set U^0 ⊆ IR^n, fulfilling M_SI[h, g] ∩ U^0 ≠ ∅, U^0 being a manifold with piecewise linear boundary, and ∂U^0 being in transversal position with M_SI[h, g].

Then, on the one hand, there exists a sequence (x^ν)_{ν∈IN_0} of global minimizers of the topologically approximative mosaics P^{∗,lin}(f^{∗,ν}, h^{∗,ν}, g^{∗,ν}) (ν ∈ IN_0), being restricted to U^0, respectively.

On the other hand, there is a convergent subsequence (x^{ν_k})_{k∈IN_0} of (x^ν)_{ν∈IN_0} such that its limit point x̄ = lim_{k→∞} x^{ν_k} is a global minimizer for the generalized semi-infinite optimization problem P_SI(f, h, g, u, v), restricted to U^0. (Hence, it is also a candidate for a local minimum of P_SI(f, h, g, u, v).)

If the Assumption C is violated, then the same conclusion holds, too. However, the approximation by means of mosaics P^{∗,lin}(f^{∗,ν}, h^{∗,ν}, g^{∗,ν}) (ν ∈ IN_0) on U^0 and the corresponding (sub)sequence (x^{ν_k})_{k∈IN_0} of minimizers cannot in general be expected to be as fast approximating and converging, respectively, as can be accomplished under Assumption C.

Proof:

Let us first of all, under all four assumptions, reflect the approximation of M_SI[h, g] ∩ U^0 by the sequence (M^{∗,lin}[h^{∗,ν}, g^{∗,ν}] ∩ U^0)_{ν∈IN_0}. There are two effects of linearization which come together. Namely, as a first effect we have linearizations of our defining functions and of our vertex functions (see (2.1), (2.10a-d), and, for the more general case, (2.13a-b), too). As a second effect, our covering squares' structure becomes arbitrarily fine. The common virtue of both effects is very comparable with approximations of functions by means of arbitrarily small perturbations in the sense of the C^1_s-Whitney topology. The only differences consist, firstly, in the local splitting of the constraints g(x, y) ≥ 0 (y ∈ Y(x); we remember the square structure underlying M_SI[h, g^{lin,α}]) and, then, in the approximations in the sense of both (G)SI and F optimization. For the purpose of our set theoretical approximation, these differences mean no problem.

Indeed, we remember that there are two parts which contribute to our functional approximations. Part 1 is based on a local linearization of g; here, the approximation happens in GSI optimization (see, firstly, Subsections 2.1-2.2 and, later on, 2.5). Part 2 is based on further linearizations, which give rise to approximations of defining functions in F optimization (see Subsections 2.3-2.4).

Based on our Assumptions B_{U^0} on LICQ (for Y(x), x ∈ U^0) and D_{U^0} (for M_SI[h, g] on U^0), respectively, and on our transversal choice of covering squares, we may translate these approximations of functions into the language of set approximations. Namely, for part 1 we take account of the GSI investigation from [30], while for part 2 we utilize the F investigation [3]. Moreover, if we also (locally) perturb u, v by means of their linearizations u_{lin}^{α,β}, v_{lin}^{α,β}, then Y(x) gets in the same way locally approximated by Y^{u^{α,β}, v^{α,β}}(x) := Y^{α,β}(x). Hereby, we always exploit suitable (by transversal configurations enriched) versions of EMFCQ, MFCQ and LICQ, respectively.

Because of the compactness of M_SI[h, g] ∩ U^0, the minimum of f on this set is attained. Let us demonstrate that the minima min{f_α^ν(x) | x ∈ M_α^{lin}[h_α^ν, g_α^ν] ∩ U^0, α ∈ A^ν} (ν ∈ IN_0; see (2.12_0)-(2.12_{ν+1})) tend to min{f(x) | x ∈ M_SI[h, g] ∩ U^0} when ν tends to infinity.

Let some ε > 0 be given. As f is continuous and M_SI[h, g] ∩ U^0 is compact, we know that there is a finite open covering (U_σ)_{σ∈{1,...,σ^0}} of M_SI[h, g] ∩ U^0 such that

M_SI[h, g] ∩ U^0 ⊆ ∪_{σ=1}^{σ^0} U_σ ⊆ W^0,  M_SI[h, g] ∩ U^0 ∩ U_σ ≠ ∅ (σ ∈ {1, ..., σ^0}),   (3.1a)

|f(x) − f(x̃)| < ε/2 for all x, x̃ ∈ U_σ, σ ∈ {1, ..., σ^0}.   (3.2)

Moreover, as we demonstrated in Section 2, the sets M^{∗,lin}[h^{∗,ν}, g^{∗,ν}] ∩ U^0 = ∪_{α∈A^ν} M_α^{lin}[h_α^ν, g_α^ν] ∩ U^0 (ν ∈ IN_0) approximate M_SI[h, g] ∩ U^0, whereby the (relative) boundaries ∂(M_SI[h, g] ∩ U^0) and ∂(M^{∗,lin}[h^{∗,ν}, g^{∗,ν}] ∩ U^0) (in M[h]) get arbitrarily close together (see [3, 12, 30]). Hence, there is some ν'_ε ∈ IN_0 such that

M^{∗,lin}[h^{∗,ν}, g^{∗,ν}] ∩ U^0 ⊆ ∪_{σ=1}^{σ^0} U_σ,  M^{∗,lin}[h^{∗,ν}, g^{∗,ν}] ∩ U^0 ∩ U_σ ≠ ∅ (σ ∈ {1, ..., σ^0}), for all ν ≥ ν'_ε.   (3.1b)

Now, we may on the one hand (e.g., indirectly, by contradiction) conclude from (3.1a-b) and (3.2) that

|min{f(x) | x ∈ M_SI[h, g] ∩ U^0} − min{f(x) | x ∈ M^{∗,lin}[h^{∗,ν}, g^{∗,ν}] ∩ U^0}| < ε/2 for all ν ≥ ν'_ε.   (3.3)

On the other hand, as everywhere on our approximating mosaics the collected functions f_α^ν (α ∈ A^ν) locally approach f (ν → ∞), there is some ν''_ε such that

|f(x) − f_α^ν(x)| < ε/2 for all x ∈ M_α^{lin}[h_α^ν, g_α^ν] ∩ U^0, α ∈ A^ν, ν ≥ ν''_ε.   (3.4)

From the pointwise given inequalities (3.4) we may also (e.g., indirectly) conclude:

|min{f(x) | x ∈ M^{∗,lin}[h^{∗,ν}, g^{∗,ν}] ∩ U^0} − min{f_α^ν(x) | x ∈ M_α^{lin}[h_α^ν, g_α^ν] ∩ U^0, α ∈ A^ν}| < ε/2 for all ν ≥ ν''_ε.   (3.5)

Altogether, a simple estimation, based on (3.3) and (3.5), delivers

|min{f(x) | x ∈ M_SI[h, g] ∩ U^0} − min{f_α^ν(x) | x ∈ M_α^{lin}[h_α^ν, g_α^ν] ∩ U^0, α ∈ A^ν}| < ε for all ν ≥ ν_ε,   (3.6)

where ν_ε := max{ν'_ε, ν''_ε}. So, we have given the proof of the relation(s)

min{f(x) | x ∈ M_SI[h, g] ∩ U^0} = lim_{ν→∞} (min{f_α^ν(x) | x ∈ M_α^{lin}[h_α^ν, g_α^ν] ∩ U^0, α ∈ A^ν}) = lim_{ν→∞} f_{α^ν}^ν(x^ν),   (3.7)

which was asserted above.

That sequence (x^ν)_{ν∈IN_0}, consisting of minimizers of our mosaic problems P^{∗,lin}(f^{∗,ν}, h^{∗,ν}, g^{∗,ν}) (ν ∈ IN_0), being restricted to U^0, is, however, bounded. Hence, for our iteration procedure there is a subsequence (x^{ν_k})_{k∈IN_0} of (x^ν)_{ν∈IN_0} which converges to some point x̄ ∈ IR^n:

x̄ = lim_{k→∞} x^{ν_k}.   (3.8)

Because of the x^{ν_k} being elements of the sets M^{∗,lin}[h^{∗,ν_k}, g^{∗,ν_k}] ∩ U^0 (k ∈ IN_0), which approximate the closed set M_SI[h, g] ∩ U^0, the limit point x̄ is an element of M_SI[h, g] ∩ U^0. As, moreover, the f_α^{ν_k} (α ∈ A^{ν_k}, k ∈ IN_0) in a collected way locally approach the continuous function f, there are numbers k'_ε, k''_ε ∈ IN_0 such that

|f_{α^{ν_k}}^{ν_k}(x^{ν_k}) − f(x^{ν_k})| < ε/2 for all k ≥ k'_ε,   (3.9a)

|f(x^{ν_k}) − f(x̄)| < ε/2 for all k ≥ k''_ε.   (3.9b)

Altogether, from (3.9a-b) we conclude

|f_{α^{ν_k}}^{ν_k}(x^{ν_k}) − f(x̄)| < ε for all k ≥ k_ε,   (3.10)

where k_ε := max{k'_ε, k''_ε}. From (3.10) we learn

lim_{k→∞} f_{α^{ν_k}}^{ν_k}(x^{ν_k}) = f(x̄),   (3.11)

and, hence, in view of (3.7):

min{f(x) | x ∈ M_SI[h, g] ∩ U^0} = f(x̄).   (3.12)

As the limit point x̄ is feasible, i.e., x̄ ∈ M_SI[h, g] ∩ U^0, our proof is finished under all our assumptions.

We already indicated that for the more general situation, where Assumption C on affine linearity and convexity may be violated, our topological argumentation remains true. However, then the process of approximation and, hence, the corresponding convergence of some minimizing sequence are usually less fast.

Let us only remember that the new vertices y_j^{α,β}(x) may have become infeasible in the sense of y_j^{α,β}(x) ∉ Y(x), and that the stability theory on the GSI feasible set also allows local perturbations, e.g., u_{lin}^{α,β,ν} and v_{lin}^{α,β,ν} (ν ∈ IN_0), of u and v, respectively ([30]).

3.2. Conclusion

In this paper we presented a concept of an iteration procedure for a wide class of generalized semi-infinite optimization problems under assumptions on boundedness and constraint qualifications, for both the feasible sets and the index sets of inequality constraints. We worked out the topological background and gave a proof of our convergence theorem. Hereby, the subclass of problems where the defining functions of the index sets fulfill conditions of affine linearity and convexity allowed special insights. Moreover, aspects of local-global modelling, of practical treatments and of comparisons with former approaches were also given. For a concretely given generalized semi-infinite optimization problem fulfilling our assumptions, the development of a solution algorithm can be performed, based on the problem's structural or geometrical features and on our iterative approach.

The authors thank Professor Dr. Werner Krabs, Professor Dr. K. G. Roesner and Professor Dr. Yurii I. Shokin for encouragement.

References

[1] BARNER M., FLOHR F. Analysis II. Walter de Gruyter, Berlin, N. Y., 1983.

[2] Graettinger T.J., KROGH B.H. The acceleration radius: a global performance measure for robotic manipulators. IEEE J. of Robotics and Automation, 4, 1988, 60-69.

[3] GUDDAT J., JONGEN H.Th., RUCKMANN J.-J. On stability and stationary points in nonlinear optimization. J. Austral. Math. Soc., Ser. B, 28, 1986, 36-56.

[4] GUERRA VASQUEZ F., GUDDAT J., JONGEN H. Th. Parametric Optimization: Singularities, Pathfollowing and Jumps. John Wiley, 1990.

[5] HETTICH R., JONGEN H. Th. Semi-infinite programming: conditions of optimality and applications. In "Optimization Techniques". Part 2. Ed. J. Stoer. Lect. Notes in Control and Inform. Sci., 7, Springer-Verlag, 1978, 1-11.

[6] Hirsch M. W. Differential Topology. Springer-Verlag, 1976.

[7] HOFFMANN A., Reinhard R. On reverse Chebychev approximation problems. Prepr. Ilmenau University of Technology, Ilmenau, Germany, 1994.

[8] Jongen H. Th., Jonker P., Twilt F. Nonlinear Optimization in IRn, I. Morse Theory, Chebychev Approximation. Peter Lang Verlag, Frankfurt a.M., Bern, N. Y., 1983.

[9] Jongen H. Th., Jonker P., Twilt F. Nonlinear Optimization in IRn, II. Transversality, Flows, Parametric Aspects. Ibid., 1986.

[10] JONGEN H. Th., RÜCKMANN J.-J., Stein O. Disjunctive optimization: critical point theory. J. Optim. Theory Appl., 93, 1997, 321-326.

[11] JONGEN H.Th., RÜCKMANN J.-J., Stein O. Generalized semi-infinite optimization: a first order optimality condition and examples. Math. Program., 83, 1998, 145-158.

[12] JONGEN H. Th., Twilt F., Weber G.-W. Semi-infinite optimization: structure and stability of the feasible set. J. Optim. Theory Appl., 72, 1992, 529-552.

[13] JONGEN H. Th., Weber G.-W. On parametric nonlinear programming. Annals of Operations Research, 27, 1990, 253-284.

[14] KAISER C., Krabs W. Ein Problem der semi-infiniten Optimierung im Maschinenbau und seine Verallgemeinerung. Working Paper, Darmstadt University of Technology, Germany, 1986.

[15] KAPLAN A., Tichatschke R. On a class of terminal variational problems. In "Parametric Optimization and Related Topics IV". Eds. J. Guddat, H.Th. Jongen, F. Nozicka, G. Still, F. Twilt. Peter Lang Publ. House, Frankfurt, Berlin, N.Y., 1996, 185-199.

[16] KRABS W. Optimization and Approximation. John Wiley, 1979.

[17] KRABS W. Einführung in die lineare und nichtlineare Optimierung für Ingenieure. Teubner, Leipzig, Stuttgart, 1983.

[18] KRABS W. On time-minimal heating or cooling of a ball. Int. Ser. Numer. Math., Birkhauser Verlag, Basel, 81, 1987, 121-131.

[19] LEVITIN E., TICHATSCHKE R. A branch and bound approach for solving a class of generalized semi-infinite programming problems. J. Global Optim., 13, 1998, 299-315.

[20] MANGASARIAN O. L., Fromovitz S. The Fritz-John necessary optimality condition in the presence of equality and inequality constraints. J. Math. Anal. Appl., 17, 1967, 37-47.

[21] Rockafellar R. T. Convex Analysis. Princeton University Press, 1970.

[22] RUDOLPH H. Zur Approximation semiinfiniter Programme. Wissenschaftliche Zeitschrift, Math.-Naturwiss. R., University of Leipzig, 27, 1978, 501-508.

[23] RUDOLPH H. Der Simplexalgorithmus der semiinfiniten linearen Optimierung. Wissenschaftliche Zeitschrift, TH Leuna-Merseburg, Germany, 29, 1987, 782-806.

[24] RÜCKMANN J.-J. On the existence and uniqueness of stationary points. Prepr. Aachen University of Technology, Mathematical Programming Ser. A, 86, 1999, 387-415.

[25] SPELLUCCI P. Numerische Verfahren der nichtlinearen Optimierung. Birkhäuser Verlag, Basel, Boston, Berlin, 1993.

[26] TAMMER K. Parametric linear complementarity problems. Prepr. Humboldt University Berlin, 1996, submitted for publication.

[27] VAN De Panne C. Linear Programming and Related Techniques. North Holland /American Elsevier, 1971.

[28] WEBER G.-W. Optimal control theory: on the global structure and connections with optimization. Part 2. Prepr. Darmstadt University of Technology, Germany, 1998, submitted for publication.

[29] WEBER G.-W. Generalized semi-infinite optimization: on some foundations. J. Comput. Technol., 4, 3, 1999, 41-61.

[30] Weber G.-W. Generalized semi-infinite optimization: on iteration procedures and topological aspects. In "Similarity Methods". Eds. B. Kräplin, St. Rudolph, St. Bruckner. Institute of Statics and Dynamics of Aviation and Space-Travel Constructions, 1998, 281-309.

[31] Wetterling W. W. E. Definitheitsbedingungen für relative Extrema bei Optimierungs- und Approximationsaufgaben. Numer. Math., 15, 1970, 122-136.

[32] Ziegler G. M. Lectures on Polytopes. Graduate Texts in Mathematics, 152, Springer-Verlag, 1995.

Received for publication March 29, 2000
