
Вычислительные технологии, Том 5, № 4, 2000

A NEW ALGORITHM FROM SEMI-INFINITE OPTIMIZATION FOR A PROBLEM OF TIME-MINIMAL CONTROL*

E. Kropat, St. Pickl, A. Roßler, G.-W. Weber¹
Darmstadt University of Technology, Department of Mathematics, Germany
e-mail: weber@mathematik.tu-darmstadt.de

We investigate an algorithmic approach to the problem of heating (or cooling) a homogeneous ball in minimal time. Krabs showed [28] that this optimal control problem can be interpreted as a two-stage optimization problem. On the lower stage a norm-minimal control problem is solved, and on the upper stage a generalized semi-infinite optimization problem. An iterative procedure realizes both stages, including the approximation of nonsmooth functions and the stepwise application of a discretization method to the optimization problem. Besides a commented flow diagram, descriptions of various variants, alternatives and practical devices are used to illustrate the algorithm.

1. Introduction

This article is devoted to a problem, given by time minimal heating (or cooling) of a ball. In this example of an optimal control problem, we consider questions concerned with problem representation, theoretical and practical solvability and structural frontiers.

The ball B consists of a homogeneous material. Based on a description in the article of Krabs [28], we study the problem

P_tm:

Min I(T, u) := T, such that   (1)

there is a bounded function θ : [0, R] × [0, ∞) → ℝ, where θ|_{(0,R] × (0,∞)} is partially differentiable, u = θ(R, ·)|_{[0,T]} is continuous, and

θ_t(r, t) = a Δθ(r, t) = (a/r²) ∂/∂r ( r² ∂θ/∂r (r, t) )   ((r, t) ∈ (0, R] × (0, ∞)),   (2)

θ(r, 0) = θ_0   (r ∈ [0, R]),   (3)

θ(R, T) = θ_E,   (4)

T > 0,   (5)

|σ_u(R, t)| ≤ σ*   (t ∈ [0, T]).   (6)

*The authors are responsible for possible misprints and the quality of translation.

1 Corresponding author.

© E. Kropat, St. Pickl, A. Roßler, G.-W. Weber, 2000.

Here, Δθ represents the Laplacian of θ and R denotes the radius of B. The temperature θ(r, t) is a function of the radial variable r, where r measures the distance from the center point 0₃ of B, and of the time t. Moreover, we start with an initial temperature θ_0 and finish with an intended target (end) temperature θ_E > θ_0 (or θ_E < θ_0, respectively). Because of this inequality, each time T which is optimal for P_tm cannot be zero (i.e., T > 0). The temperature is essentially governed by the heat equation (2), where a > 0 describes the coefficient of heat conductivity (see Myint-U [32]). Its solution can effectively be obtained by the substitution v(r, t) := rθ(r, t) in (2). We interpret u_T(·) := u(·) = θ(R, ·)|_{[0,T]} as a control variable (T > 0). Now, focusing on the partial differential equation, the (boundary-value) problem (2)-(4) has the following unique solution (see Krabs [28]):

θ_u(r, t) = (2R/r) Σ_{k=1}^∞ ((−1)^{k+1}/(kπ)) exp(−a(kπ/R)²t) θ_0 sin(kπr/R) +

+ (2a/(rR)) Σ_{k=1}^∞ (−1)^{k+1} kπ ∫_0^t exp(−a(kπ/R)²(t − s)) u(s) ds · sin(kπr/R).   (7)

Furthermore, the function σ_u(r, t) denotes some thermal stress tangential to the boundary ∂B of B (r = R). Finally, σ* is a given upper bound of the stress.

Under suitable physical assumptions, at the boundary the function σ_u = σ_u^{θ_0} has the form

(σ_u(R, t) =) σ_u^{θ_0}(R, t) = (αE/(1 − μ)) ( (3/R³) ∫_0^R θ_u(r, t) r² dr − u(t) ).   (8)

(For more details see Parkus [34], where other elementary geometrical bodies are considered.) Here, E is the modulus of elasticity, μ and α are the coefficients of cross-extension and linear heat extension, respectively. By means of (7), (8) represents the thermal stress for r = R, which can be evaluated further. Its dependence on the parameter θ_0, indicated in (8), then becomes obvious.

Fig. 1 displays the ball with its distribution of temperatures at some time t.
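To make these quantities concrete, the following small Python sketch (not the Mathematica computation referred to later in this paper) evaluates a truncated version of the series (7) and the boundary stress (8) by elementary trapezoidal quadrature. All parameter values, the truncation level N and the constant trial control are illustrative assumptions.

```python
# Hedged sketch: truncated series (7) and boundary stress (8), assumed parameter values.
import numpy as np

a, R = 1.0e-5, 0.1                       # heat conductivity coefficient and radius (assumed)
alphaT, Emod, mu = 1.2e-5, 2.1e11, 0.3   # linear heat extension, elasticity modulus, cross-extension
theta0 = 0.0                             # initial temperature (assumed)
N = 50                                   # truncation level of the series (assumed)
k = np.arange(1, N + 1)
lam = a * (k * np.pi / R) ** 2

def trapz(y, x):                         # simple trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def theta_u(r, t, u, ts):
    """Truncated series (7); u is the control sampled on the time grid ts (ts[-1] >= t)."""
    sin_part = np.sin(k * np.pi * r / R)
    init = 2.0 * R * theta0 / r * np.sum((-1.0) ** (k + 1) / (k * np.pi)
                                         * np.exp(-lam * t) * sin_part)
    mask = ts <= t
    s, us = ts[mask], u[mask]
    duh = np.array([trapz(np.exp(-l * (t - s)) * us, s) for l in lam])   # Duhamel integrals
    ctrl = 2.0 * a / (r * R) * np.sum((-1.0) ** (k + 1) * k * np.pi * duh * sin_part)
    return init + ctrl

def sigma_boundary(t, u, ts):
    """Thermal stress (8) at r = R; the radial mean value is computed by quadrature."""
    rs = np.linspace(1e-4 * R, R, 200)   # avoid r = 0 in the 1/r factor of (7)
    th = np.array([theta_u(r, t, u, ts) for r in rs])
    mean3 = 3.0 / R ** 3 * trapz(th * rs ** 2, rs)
    return alphaT * Emod / (1.0 - mu) * (mean3 - np.interp(t, ts, u))

ts = np.linspace(0.0, 600.0, 301)
u = np.full_like(ts, 20.0)               # a constant trial control (assumed)
print(sigma_boundary(300.0, u, ts))
```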

2. Evaluation of the Problem

2.1. Problem Decomposition

In the article of Krabs [28], a first interpretation of P_tm as a problem from two-stage optimization is given, based on the representation (7) of the temperature.

On the lower stage, for each T > 0 we consider the one-parameter family (P^nm_T)_{T ∈ [0,∞)} of norm-minimal control problems on the thermal stress at the boundary, given by

P^nm_T:   Min_{u_T} ‖σ_{u_T}(R, ·)‖_{∞,T},   where u_T ∈ C([0, T], ℝ) fulfills u_T(T) = θ_E.   (9)

The mapping ‖·‖_{∞,T} denotes the maximum norm for continuous functions on [0, T]. This problem is called an approximation problem. (See Braess [3], Krabs [27], Jongen/Jonker/Twilt [16] for further details.) In Krabs [28] we find the following result:

Fig. 1. A ball B consisting of a homogeneous material, and its temperature distribution. Increasing darkness represents increasing temperature.

Item 1: For each T > 0 the problem P^nm_T has precisely one solution ū_T. This (unique) solution ū_T of P^nm_T (T > 0) is

ū_T(t) := ((θ_E − ŷ_T(T)) / û_T(T)) · û_T(t) + ŷ_T(t)   (t ∈ [0, T]),   (10)

where (û_T, ŷ_T) is the unique solution of the system of integral equations

û_T(t) − ∫_0^t k(t − s) û_T(s) ds = 1,
ŷ_T(t) − ∫_0^t k(t − s) ŷ_T(s) ds = θ̄_0(t)   (t ∈ [0, T]).   (11)

Here, we have

k(t) := (6a/R²) Σ_{k=1}^∞ exp(−a(kπ/R)²t),   (12)

θ̄_0(t) := −6 ( Σ_{k=1}^∞ (1/(kπ)²) exp(−a(kπ/R)²t) ) θ_0.   (13)

The mapping (T, t) ↦ ū_T(t) is called a core of a Kuhn-Tucker function or, more precisely, a global minimizer function. In the special case θ_0 = 0 and, hence, θ̄_0 = 0, the function ŷ_T ≡ 0 is the unique solution of the second equation from system (11). Hence, the representation (10) of ū_T simplifies and we observe (Krabs [28]):

û_T(t) = 1 + ∫_0^t r(t − s) ds   (t ∈ [0, T]),   (14)

where

r(t) := Σ_{κ=1}^∞ k_κ(t)   (t ∈ [0, T]),   (15)

k_1(t) := k(t) (cf. (12)) and

k_κ(t) := ∫_0^t k_{κ−1}(t − s) k(s) ds   (t ∈ [0, T], κ ∈ ℕ \ {1}).   (16)

Tricomi [40] provides more information on the solution theory of integral equations. Related qualitative and numerical aspects can be found in Gripenberg/Londen/Staffans [6], Hackbusch [9] and Jörgens [15]. Concerning the heat equation and methods from optimal control, we also refer to Hackbusch [8]. Moreover, for P_tm further basic theory can be found in Krabs [26], [29].
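As an illustration of Item 1, the following Python sketch solves the two Volterra integral equations of system (11) on an equidistant grid by a trapezoidal product rule and assembles the norm-minimal control ū_T according to (10). The kernel is the truncated version of (12); the discretization and all parameter values are assumptions, not the paper's own computation.

```python
# Hedged sketch: product-rule solution of (10)-(11) and the control u_bar of Item 1.
import numpy as np

a, R, theta0, thetaE, T = 1.0e-5, 0.1, 0.0, 20.0, 600.0    # assumed data
N_series, n = 50, 400                                       # truncation / grid size (assumed)

t = np.linspace(0.0, T, n + 1)
h = t[1] - t[0]
k_idx = np.arange(1, N_series + 1)
lam = a * (k_idx * np.pi / R) ** 2

def kernel(tau):                      # k(t) from (12), truncated
    return 6.0 * a / R**2 * np.exp(-np.outer(np.atleast_1d(tau), lam)).sum(axis=1)

def theta0_bar(tau):                  # right-hand side (13), truncated
    return -6.0 * theta0 * (np.exp(-np.outer(np.atleast_1d(tau), lam))
                            / (k_idx * np.pi) ** 2).sum(axis=1)

def solve_volterra(rhs):
    """Solve  x(t_i) - sum_j w_j k(t_i - t_j) x(t_j) = rhs(t_i)  (trapezoidal weights)."""
    x = np.zeros(n + 1)
    x[0] = rhs[0]
    for i in range(1, n + 1):
        K = kernel(t[i] - t[: i + 1])
        w = np.full(i + 1, h); w[0] = w[-1] = h / 2.0
        acc = np.dot(w[:i] * K[:i], x[:i])
        x[i] = (rhs[i] + acc) / (1.0 - w[i] * K[i])
    return x

u_hat = solve_volterra(np.ones(n + 1))          # first equation of (11)
y_hat = solve_volterra(theta0_bar(t))           # second equation of (11)
u_bar = (thetaE - y_hat[-1]) / u_hat[-1] * u_hat + y_hat    # representation (10)
print(u_bar[-1])                                # equals thetaE: the end condition of (9)
```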

Inserting the optimal control variables (u =) ū_T into the given problem P_tm leads to the upper stage, given by the following generalized semi-infinite (GSI) optimization problem of class C⁰, with x := T and y := t:

P_GSI(f, g, v):

Min f(x) := x   such that
±σ_{ū_x}(R, y) + σ* ≥ 0   (y ∈ Y(x)),   x ≥ 0,   (17)
where Y(x) := [0, x]   (x ∈ ℝ).

Here, g and v comprise the three and two continuous inequality constraints on x and y, respectively. A similar way of (partially) representing P_tm by a generalized semi-infinite optimization problem is taken in Kaplan/Tichatschke [20]. The problem P_tm is an example of a so-called terminal problem. For a first numerical treatment see the paper Kaplan/Tichatschke [21], which also includes a convergence theorem.

From Krabs [28] we get the following second item:

Item 2: Under the parameter constellation

θ_0 = 0,   d := σ*(1 − μ)/(αE) < |θ_E|,

P_GSI(f, g, v) has precisely one optimal solution T̄. ■

Then, the pair (T̄, ū_{T̄}) is the unique solution of the problem P_tm. Item 2 on P_GSI(f, g, v) is based on a monotonicity argument which ensures that the function

d̄(T) := |d(T, θ_E)| − d   (T ∈ [0, ∞))   (18)

has a single zero, where

d(T, θ_E) := θ_E / û_T(T).   (19)
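The monotonicity argument behind Item 2 suggests a simple bracketing computation: d̄(T) = |θ_E|/û_T(T) − d is strictly decreasing and has a single zero. The sketch below locates this zero by bisection; the coarse Volterra discretization and the constant d (as reconstructed above) are illustrative assumptions, not the paper's method.

```python
# Hedged sketch: bracketing the single zero of d_bar(T) from (18)-(19), theta_0 = 0.
import numpy as np

a, R, thetaE = 1.0e-5, 0.1, 20.0
alphaT, Emod, mu, sigma_star = 1.2e-5, 2.1e11, 0.3, 4.0e7   # assumed material data
d = sigma_star * (1.0 - mu) / (alphaT * Emod)                # normalized bound, d < |theta_E| here
N, h = 50, 2.0                                               # series truncation, time step (assumed)
lam = a * (np.arange(1, N + 1) * np.pi / R) ** 2

def u_hat_T(T):
    """u_hat_T(T): truncated kernel (12), coarse trapezoidal product rule for (11)."""
    n = max(2, int(round(T / h)))
    t = np.linspace(0.0, T, n + 1)
    x = np.ones(n + 1)
    for i in range(1, n + 1):
        K = 6.0 * a / R**2 * np.exp(-np.outer(t[i] - t[:i + 1], lam)).sum(axis=1)
        w = np.full(i + 1, t[1] - t[0]); w[0] = w[-1] = 0.5 * (t[1] - t[0])
        x[i] = (1.0 + np.dot(w[:i] * K[:i], x[:i])) / (1.0 - w[i] * K[i])
    return x[-1]

def d_bar(T):                                   # cf. (18)-(19)
    return abs(thetaE) / u_hat_T(T) - d

lo, hi = 1.0, 256.0
while d_bar(hi) > 0.0 and hi < 5000.0:          # enlarge the bracket if necessary
    hi *= 2.0
for _ in range(40):                             # bisection on the bracket [lo, hi]
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if d_bar(mid) > 0.0 else (lo, mid)
print("time-minimal T (approx.):", 0.5 * (lo + hi))
```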

2.2. Problem Treatment: the Basic Idea

Krabs [28] directs his attention to an approximate solution of the problems P^nm_T (T ∈ [0, ∞)) of the lower stage. Now, we embed this approximation into an iterative concept of solving P_tm. Remember that our approach can be interpreted as a two-stage optimization problem, where P_GSI(f, g, v) becomes treated, too. The functional data of this GSI problem are continuous, but they need not be differentiable. Hence, in order to apply the approaches from Weber [42], [43] or Pickl, Weber [35], we must approximate the continuous data by C¹-differentiable functions. This will be done by means of approximate problems, defined by exchanging the series (see, e.g., (12)-(13), (15)) by their νth partial sums (ν ∈ ℕ).

Another obstacle consists in the fact that, here, it is hard to verify or falsify boundedness and EMFCQ for M_GSI[g]. So, little structural or topological knowledge exists about the latter set. If, however, the functional approximations lead to set approximations of M_GSI[g] of topological manifold character, then we could combine such a process (approximation) with our iteration procedure from Weber [42] (or [43]). By this stepwise and levelwise process, P_tm becomes described closer and closer.

Based on a presentation of an iterative concept, we are going to discuss these opportunities of a stepwise (perturbational) approximation. For that purpose we choose the concept of (equidistant) discretization, which extends from the GSI Approach I of Weber [42] (or [43]) to the whole two-stage problem with its two variables x and y. For a better understanding, we give a short description of that approach (see also Weber [44]).

2.3. A General Iteration Concept and Its Foundations

GSI problems have the following form:


P_GSI(f, h, g, u, v):   Minimize f(x) on M_GSI[h, g], where

M_GSI[h, g] := { x ∈ ℝⁿ | h_i(x) = 0 (i ∈ I), g(x, y) ≥ 0 (y ∈ Y(x)) }.

The semi-infinite character comes from the perhaps infinite number of elements of Y = Y(x), while the generalized character is due to the x-dependence of Y(·). These latter index sets are supposed to be feasible sets in the sense of finitely constrained (F) optimization, i.e.:

Y(x) = M_F[u(x, ·), v(x, ·)] := { y ∈ ℝ^q | u_k(x, y) = 0 (k ∈ K), v_ℓ(x, y) ≥ 0 (ℓ ∈ L) }   (x ∈ ℝⁿ).

Let h = (h_i)_{i∈I}, u = (u_k)_{k∈K} and v = (v_ℓ)_{ℓ∈L} comprise h_i : ℝⁿ → ℝ, i ∈ I := {1, ..., m}, u_k : ℝⁿ × ℝ^q → ℝ, k ∈ K := {1, ..., r}, and v_ℓ : ℝⁿ × ℝ^q → ℝ, ℓ ∈ L := {1, ..., s}, respectively. We assume that f : ℝⁿ → ℝ, g : ℝⁿ × ℝ^q → ℝ, h_i (i ∈ I), u_k (k ∈ K) and v_ℓ (ℓ ∈ L) are continuously differentiable. We locally focus our attention by referring to a given open, bounded set U⁰ ⊂ ℝⁿ. Here, we make the following assumptions on the lower stage (of y):

Assumption A_{U⁰}: ⋃_{x ∈ U⁰} Y(x) is bounded (hence, by continuity, compact). In generalized semi-infinite optimization, the feasible set need not be closed. However, the following assumption guarantees closedness.

Assumption B_{U⁰}: For all x ∈ U⁰, the linear independence constraint qualification (LICQ) is fulfilled for M_F[u(x, ·), v(x, ·)]. This means that the family of vectors

D_y u_k(x, y)   (k ∈ K),   D_y v_ℓ(x, y)   (ℓ ∈ L_0(x, y)),

is linearly independent, where L_0(x, y) := { ℓ ∈ L | v_ℓ(x, y) = 0 } consists of the active indices.

By means of some differential topology (Hirsch [12], Jongen/Jonker/Twilt [17]), these assumptions permit local linearization of Y(x) (x ∈ U⁰) by finitely many C¹-diffeomorphisms Φ_j^x : V_j → S_j (j ∈ J) in such a way that the images Z_j are x-independent squares (in hyperplanes). Herewith, P_GSI(f, h, g, u, v) becomes represented (in U⁰) by an ordinary semi-infinite optimization problem P_OSI(f, h, g⁰, u⁰, v⁰) with feasible set M_OSI[h, g⁰] ∩ U⁰ = M_GSI[h, g] ∩ U⁰. (For details see Weber [41], [43].)

We also need a constraint qualification on the upper stage (of x):

Definition. Let a point x ∈ M_GSI[h, g] be given. We say that the extended Mangasarian-Fromovitz constraint qualification (EMFCQ) is fulfilled at x, if the following conditions EMF1, 2 are satisfied:

EMF1. The family of vectors Dh_i(x), i ∈ I, is linearly independent.

EMF2. There exists an "EMF-vector" ζ ∈ ℝⁿ such that

Dh_i(x) ζ = 0 for all i ∈ I,
D_x g_j⁰(x, z) ζ > 0 for all z ∈ ℝ^q, j ∈ J, with (Φ_j^x)^{-1}(z) ∈ Y_0(x),

where Y_0(x) := { y ∈ Y(x) | g(x, y) = 0 } consists of the active indices. EMFCQ is said to be fulfilled for M_GSI[h, g] on U⁰, if EMFCQ is fulfilled for all x ∈ M_GSI[h, g] ∩ U⁰.

The following three theorems underline the importance of EMFCQ for establishing that M_GSI[h, g, u, v] := M_GSI[h, g] is a topological manifold with boundary which behaves continuously, but is also stable under perturbations of the defining functional data. Under these perturbations we remain inside suitable open neighbourhoods of (h, g, u, v) in the sense of the strong or Whitney topology C¹_S, which takes asymptotic effects into account. Concerning the Lipschitzian condition of local linearizability (by Lipschitzian charts), upper and lower semi-continuity, continuity (in the Hausdorff metric), transversality (absence of tangentiality) and topological stability (in the sense of homeomorphy) see Berge [2] and Weber [43].

Manifold Theorem (Weber [43]). Let EMFCQ be fulfilled in U⁰ for M_GSI[h, g]. Then there is an open neighbourhood W ⊂ ℝⁿ of U⁰ such that M_GSI[h, g] ∩ W is a Lipschitzian manifold (with boundary) of dimension n − m.

Continuity Theorem (Weber [42], [43]). Let EMFCQ be fulfilled in U⁰ for M_GSI[h, g]. Moreover, let the closure W̄ ⊂ ℝⁿ of some open set W ⊂ U⁰ be representable as a feasible set from finitely constrained optimization which fulfills LICQ, and let the intersection of its boundary ∂W with M_GSI[h, g] be transversal. Then, there is an open C¹-neighbourhood O ⊂ (C¹(ℝⁿ, ℝ))^m × C¹(ℝ^{n+q}, ℝ) × (C¹(ℝ^{n+q}, ℝ))^r × (C¹(ℝ^{n+q}, ℝ))^s of (h, g, u, v) such that M_{W̄} : (h, g, u, v) ↦ M_GSI[h, g, u, v] ∩ W̄ is upper and lower semi-continuous at all (h, g, u, v) ∈ O.

If, moreover, W is bounded, then O can be chosen so that O is mapped to P_C(ℝⁿ) by M_{W̄}, and M_{W̄} is continuous.

Stability Theorem (Weber [42], [43]). The feasible set M_GSI[h, g] is topologically stable, if and only if EMFCQ is fulfilled for M_GSI[h, g].

In Approach I of Weber [42], [43] we discretize the x-independent squares Z_j of inequality constraints. Here, we take a regular grid, such that any two neighbouring points of the finitely many grid points are equidistant (in each step ν ∈ ℕ). Making the underlying grid finer and finer, we arrive at a sequence of finitely constrained problems P_F(f, h, g^{0,ν}) (ν ∈ ℕ) which are easier to treat and have global minimizers x^ν. Using the Continuity Theorem and the Stability Theorem, we see that there exists a subsequence (x^{ν_k})_{k∈ℕ} converging towards a global minimizer x̄ of our given problem. (For more details and further approaches cf. Weber [43].)
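The following toy example (with invented problem data, unrelated to P_tm) illustrates the discretization idea of Approach I on an ordinary semi-infinite problem: the index set [0, 1] is replaced by finer and finer equidistant grids, each discretized problem is solved globally, and the minimizers approach the solution of the semi-infinite problem.

```python
# Toy illustration of Approach I (invented data): min f(x) = x s.t. g(x, z) >= 0 for all z in [0, 1].
import numpy as np

def g(x, z):                          # an invented constraint: g(x, z) = x - sin(pi * z**2)
    return x - np.sin(np.pi * z ** 2)

xs = np.linspace(0.0, 2.0, 4001)      # bounded search range for x (assumed)
for nu in range(1, 7):
    z = np.linspace(0.0, 1.0, 2 ** nu + 1)          # equidistant grid, refined at each step nu
    feasible = np.all(g(xs[:, None], z[None, :]) >= 0.0, axis=1)
    x_nu = xs[feasible].min()                        # global minimizer of the discretized problem
    print(f"step {nu}: x_nu = {x_nu:.4f}")
# the x_nu increase towards the true solution x_bar = 1 (the supremum of sin(pi*z**2) on [0, 1])
```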

2.4. Problem Treatment: an Explanation

Let us come back to the treatment of P_tm, interpreted as a two-stage problem. For our approach, first numerical experience has been gained by Jathe/Pickl/Weber [14] using Mathematica (cf. Kaufmann [24]). The computation is based on given (fixed) parameters a, R, α, μ, σ*, θ_0, θ_E, and on further technical (auxiliary) parameters of initialization and termination. These auxiliary parameters are ν₀, ν₁ and ν_E (being sufficiently large, ν₀ < ν₁ < ν_E), m_{ν₀−1}, c⁰, ℓ⁰, s⁰, ε⁰, ε_E, and d (defined in the way of Item 2). Herewith, we have initialized the essence of a corresponding "commented flow diagram".

After the following basic considerations, we continue to present and explain the flow diagram below.

We can also weaken the full discretization by referring to nondiscretized (here: differentiable) approximate functions on the upper stage. If we rigorously simplified the model functions, then this weakening would become a preferable alternative. For example, we could then apply the nonlinear optimization subroutine FindMinimum of Mathematica to Lagrange (penalty) functions or, alternatively, a quasi-Newton method for finding a zero of its gradient (later on, see also (33)).

Let us come back to the preparation of our flow diagram. If we do not know whether M_GSI[g] is bounded, we introduce an upper bound T^ν_{m_ν−1} > 0 of (x =) T in each step ν, where m_ν = 4^{ν−ν₀} ν₀ + 1 and T^ν_{m_ν−1} = 2^{ν−ν₀} ν₀. Let ν ∈ ℕ be already chosen appropriately large, say ν ≥ ν₀, with respect to some ν₀ ∈ {4k | k ∈ ℕ}, i.e., ν₀ being divisible by 4. In other words, we remain in the T-interval [0, T^ν_{m_ν−1}] (or, to make a numerical differentiation later on: in the slightly larger interval [T^ν_{−1}, T^ν_{m_ν}]). This set becomes discretized and then, based on the grid points T = T^ν_i (i ∈ {−1, ..., m_ν}), the (y =) t-interval Y(T) = [0, T] becomes analogously discretized, too. Here, we are in a standard situation with the natural coordinate transformation t ↦ z, where t = zT, z ∈ [0, 1]. Turning from step ν to step ν + 1, the considered T-interval reaches a double size and a double fineness of discretization.

Because of our discretization, the constraint T ≥ 0 on the upper stage will be satisfied automatically. For the two other constraint functions g_a (a ∈ {1, 2}) we need not distinguish between index sets Y_a(T), but refer to Y(T) alone. Despite some nice properties (e.g., the reverse monotonicity behaviour of g_1 and g_2), the special properties from Weber [43], Chapters 1 and 3 (e.g., quasiconcavity), are hard to verify here.

We shall study the geometry or topology of some approximate set M_GSI[g^ν] ∩ (−∞, T^ν_{m_ν−1}]. In the case of a manifold form we conclude EMFCQ for this set by using the Manifold Theorem.

Using Krabs [28] and a convergence argument based on strict monotonic decrease (invertibility), we easily prove the following "approximate version" of Item 2. For an illustration see Fig. 2 (a).

Item 3: Under the parameter constellation from Item 2, the νth approximate problem has precisely one optimal solution T_ν for sufficiently large ν. For ν → ∞, the sequence of solutions T_ν tends to the desired (unique) solution T̄ of P_tm. ■

In the general case of a parameter constellation, our iteration also analyzes the strict monotonic decrease of a function T ↦ d̄_{T,ν}, which approximates T ↦ d̄(T); we write in short: d̄_{•,ν}. See the illustration in Fig. 2 (b), (i) (Jathe/Pickl/Weber [14]). Herewith, we study the zero sets of d̄_{•,ν} and d̄(·). We get a better visualization and qualitative understanding of M_GSI[g] by raising the bound T^ν_{m_ν−1} by the factor 2 step by step.

Altogether, on the other hand, we wish to find (in certain steps) a smaller (sub)interval that contains a solution T̄. Finally, we hope to arrive at an approximate solution T_ν ≈ T̄ of the given problem P_tm ("≈" stands for nearby), where T_ν lies in the zero set of d̄_{•,ν}. In the case of such a successful interval adaptation, the doubling of the interval size stops. For the iteration process, we choose a mainly Lagrangian (penalty) way. This way is presented below, and it also incorporates the (previously described) graphical and numerical evaluations with respect to d̄_{T,ν}.

The intimate relation between t and T (t ∈ [0, T]) motivates a slightly modified variant of Approach I (see Alternative (II) below). There, we have z^{i,j}_ν := T^ν_j / T^ν_i, where t^j_ν = T^ν_j (j ∈ {0, ..., i}).

Subsequently, we state both versions and indicate two further alternative variants. (Our notational modifications with respect to Subsection 2.3 should not cause misunderstandings.)

Now, we define, calculate and visualize according to the following (commented) "flow diagram". We omit technicalities, e.g., smaller enumerating loops. For more information on the ("discrete") variables in (20)-(25) below, we refer to Kaiser/Krabs [19] (cf. also Tricomi [40]). In a balanced way we consider both the usual (functional) notation of this work and elements of Mathematica. (The causal sequel within the "flow" differs from Mathematica.)

Commented flow diagram (fixed parameters being given, alternatives implied):

Mark (A) ν = ν₀ (initialization):

c_{ν₀} := 4,   q_{ν₀} := 1,   p_{ν₀} := 1,

s_{ν₀} := s⁰ > 0,   ε_{ν₀} := ε⁰ > 0,   and   ℓ_{ν₀} := ℓ⁰ (e.g., := ν₀),

m_{ν₀−1} := ν₀/4 + 1,

λ^{j,a}_{ν₀} := 1   (j ∈ {0, ..., m_{ν₀−1} − 1}, a ∈ {1, 2}).

Mark (B) ν ≥ ν₀ (declarations, recursion and adaptations):

m_ν := c_ν · (m_{ν−1} − 1) + 1,

(T =) T^ν_{−1} := −2^{−ν+ν₀},   (t =) t^ν_{−1} := −2^{−ν+ν₀},

(T =) T^ν_{i+1} := T^ν_i + 2^{−ν+ν₀},   (t =) t^ν_{i+1} := t^ν_i + 2^{−ν+ν₀}   (i ∈ {−1, ..., m_ν − 1}).

(Subsequently, we especially refer to these discrete values of T and t.)
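A short Python sketch of this grid recursion, under the reconstruction given above (ν₀ divisible by 4 and c_ν = 4 while the T-interval is still being enlarged): from step to step the T-interval doubles in size and in fineness.

```python
# Hedged sketch of the grid recursion of marks (A)-(B); the concrete numbers are assumptions.
nu0 = 4                                   # assumed initial step index (divisible by 4)
m = nu0 // 4 + 1                          # m_{nu0 - 1}
c = 4                                     # c_nu = 4 while the interval is still being enlarged
for nu in range(nu0, nu0 + 4):
    m = c * (m - 1) + 1                   # m_nu := c_nu (m_{nu-1} - 1) + 1
    h = 2.0 ** (-nu + nu0)                # grid spacing 2^(-nu+nu0)
    T = [(-1 + i) * h for i in range(m + 2)]   # T^nu_{-1}, ..., T^nu_{m_nu}
    print(f"nu = {nu}: m_nu = {m}, spacing = {h}, T-range = [{T[0]}, {T[-1]}]")
```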

k_ν(t) := (6a/R²) Σ_{k=1}^{ν} exp(−a(kπ/R)²t),   (20)

k_{ν,1}(t) := k_ν(t) and   (21a)

k_{ν,κ}(t) := ∫_0^t k_{ν,κ−1}(t − s) k_ν(s) ds   (κ ∈ ℕ \ {1}).   (21b)

(Here, we apply a standard method from numerical integration; e.g., Krylov [31], or the subroutine Integrate.) For the following approximate definitions (23), (25) (remember (11)), the index T may be omitted in the variables û_{T,ν}(t), ŷ_{T,ν}(t), respectively:

r_ν(t) := Σ_{κ=1}^{ν} k_{ν,κ}(t),   (22)

û_{T,ν}(t) := 1 + ∫_0^t r_ν(t − s) ds,   (23)

Fig. 2. Iteration procedure for solving P_tm (visualization, some examples): (a) under the parameter constellation from Item 2: find the unique solution T̄ with the help of stepwise minima T_ν (see Item 3); (b) general parametrical case: (i) a function g⁰_{1,ν} (based on a numerical computation from Jathe/Pickl/Weber [14]), (ii) graphs g⁰_{a,ν}(·, z) and zeroes of functions g⁰_{a,ν}(·, z) − a.

θ̄_{0,ν}(t) := −6 ( Σ_{k=1}^{ν} (1/(kπ)²) exp(−a(kπ/R)²t) ) θ_0,   (24)

ŷ_{T,ν}(t) := θ̄_{0,ν}(t) + ∫_0^t r_ν(t − s) θ̄_{0,ν}(s) ds   and, finally,   (25)

d_{T,ν} := θ_E / û_{T,ν}(T),   (26)

d̄_{T,ν} := |d_{T,ν}| − d.   (27)
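The following Python sketch carries out the computations (20)-(27) for one step ν: truncated kernel, iterated kernels by trapezoidal convolution, the resolvent partial sum, û_{T,ν}, θ̄_{0,ν}, ŷ_{T,ν}, and the quantities d_{T,ν}, d̄_{T,ν}. The discretization and all parameter values are illustrative assumptions (and d is the constant reconstructed in Item 2), not the Mathematica computation of [14].

```python
# Hedged sketch of the core computations (20)-(27) on one grid step nu.
import numpy as np

a, R, theta0, thetaE = 1.0e-5, 0.1, 5.0, 20.0
alphaT, Emod, mu, sigma_star = 1.2e-5, 2.1e11, 0.3, 4.0e7
dbound = sigma_star * (1.0 - mu) / (alphaT * Emod)       # the constant d (reconstruction of Item 2)
nu = 40                                                  # series truncation = step index (assumed)
T, n = 600.0, 400
t = np.linspace(0.0, T, n + 1); h = t[1] - t[0]
k_idx = np.arange(1, nu + 1)
lam = a * (k_idx * np.pi / R) ** 2

k_nu = 6.0 * a / R**2 * np.exp(-np.outer(t, lam)).sum(axis=1)                           # (20)
theta0_bar = -6.0 * theta0 * (np.exp(-np.outer(t, lam)) / (k_idx*np.pi)**2).sum(axis=1) # (24)

def conv(f, g):
    """Trapezoidal approximation of (f * g)(t_i) = int_0^{t_i} f(t_i - s) g(s) ds."""
    out = np.zeros(n + 1)
    for i in range(1, n + 1):
        w = np.full(i + 1, h); w[0] = w[-1] = h / 2.0
        out[i] = np.dot(w, f[i::-1] * g[: i + 1])
    return out

k_kappa, r_nu = k_nu.copy(), k_nu.copy()
for _ in range(2, nu + 1):                               # (21a), (21b), (22)
    k_kappa = conv(k_kappa, k_nu)
    r_nu += k_kappa

w_int = np.concatenate(([0.0], np.cumsum((r_nu[1:] + r_nu[:-1]) * h / 2.0)))
u_hat = 1.0 + w_int                                      # (23): 1 + int_0^t r_nu(s) ds
y_hat = theta0_bar + conv(r_nu, theta0_bar)              # (25)
d_T = thetaE / u_hat[-1]                                 # (26)
d_bar_T = abs(d_T) - dbound                              # (27)
print(u_hat[-1], d_T, d_bar_T)
```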

Alternative (I): Up to a renumbering, for about a quarter of all the points t the values k_ν(t), θ̄_{0,ν}(t) can recursively be determined from k_{ν−1}(t), θ̄_{0,ν−1}(t). Hence, storing suitable variables from the foregoing step, many calculations need not be performed. Therefore, we put

k_{ν₀−1}(t) := (6a/R²) Σ_{k=1}^{ν₀−1} exp(−a(kπ/R)²t),   θ̄_{0,ν₀−1}(t) := −6 ( Σ_{k=1}^{ν₀−1} (1/(kπ)²) exp(−a(kπ/R)²t) ) θ_0,

k_ν(t) := k_{ν−1}(t) + (6a/R²) exp(−a(νπ/R)²t),   θ̄_{0,ν}(t) := θ̄_{0,ν−1}(t) − (6/(νπ)²) exp(−a(νπ/R)²t) θ_0   (ν ≥ ν₀). □
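A minimal sketch of this recursion: passing from step ν − 1 to step ν only adds the ν-th series term, so stored values of k_{ν−1}(t) and θ̄_{0,ν−1}(t) can be reused at the grid points that persist from the previous step. Parameter values are assumptions.

```python
# Hedged sketch of the recursion of Alternative (I); parameter values are assumptions.
import numpy as np

a, R, theta0 = 1.0e-5, 0.1, 5.0
t = np.linspace(0.0, 600.0, 401)            # grid points kept from the previous step (assumed)
nu0 = 8

k_val = 6.0*a/R**2 * sum(np.exp(-a*(k*np.pi/R)**2 * t) for k in range(1, nu0))        # k_{nu0-1}
th_val = -6.0*theta0 * sum(np.exp(-a*(k*np.pi/R)**2 * t)/(k*np.pi)**2 for k in range(1, nu0))

for nu in range(nu0, nu0 + 5):              # recursive updates of k_nu and theta0_bar_nu
    e = np.exp(-a*(nu*np.pi/R)**2 * t)
    k_val = k_val + 6.0*a/R**2 * e
    th_val = th_val - 6.0*theta0/(nu*np.pi)**2 * e
print(k_val[:3], th_val[:3])
```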

We investigate the qualitative form of the graph of T ↦ d̄_{T,ν}, especially its monotonicity and, finally, its zeroes. For this purpose, Newton's method can be applied (see, e.g., Jongen/Triesch [18]); Krabs [30] suggests the regula falsi. Moreover, we visualize (plot) the graph in a figure and, concerning monotonicity, we study the corresponding arithmetic mean Δd̄^ν_i of the left and right hand side difference quotients at T^ν_i:

Δd̄^ν_i := ( d̄_{T^ν_{i+1},ν} − d̄_{T^ν_{i−1},ν} ) / 2^{−ν+ν₀+1}.

(One can make a case study based on the nondifferentiable form of d̄; see (18).) During the investigation, we observe the behaviour of the sequence of zero sets. In particular we look for a limit set, perhaps consisting of the singleton {T̄}. Furthermore, we define

ū_{T,ν}(t) := ((θ_E − ŷ_{T,ν}(T)) / û_{T,ν}(T)) · û_{T,ν}(t) + ŷ_{T,ν}(t)   (computation: together with (26)),   (28)

σ_{T,ν}(t) := (αE/(1 − μ)) · ( θ̄_{0,ν}(t) − ū_{T,ν}(t) + ∫_0^t k_ν(t − s) · ū_{T,ν}(s) ds ),   (29)

g_{1,ν}(T, t) := σ_{T,ν}(t) + σ*,   g_{2,ν}(T, t) := −σ_{T,ν}(t) + σ*,   (30)

(z =) z_j := j/ℓ_ν   (j ∈ {0, ..., ℓ_ν}).   (31)

Soon we shall evaluate the functions g⁰_{a,ν}(T, z) := g_{a,ν}(T, zT) at T ∈ {T^ν_{−1}, ..., T^ν_{m_ν}} (a ∈ {1, 2}).

First of all, we visualize the graph of g⁰_{a,ν}(·, z) (referring to the numbers z from (31)); moreover, in view of the approximate (bounded) feasible set, we calculate and visualize the zeroes of g⁰_{a,ν}(·, z) − a for, e.g., a ∈ {−10, −9, ..., 9, 10} ∪ {±1/2, ±1/4, ±1/8, ±1/16, ±1/32}. An illustration is given in Fig. 2 (b), (ii).
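The next sketch indicates how values g⁰_{a,ν}(T, z) in the spirit of (28)-(31) can be produced and scanned for sign changes of g⁰_{a,ν}(·, z) − a, as in the visualization just described. It recomputes the stress (29) via a direct Volterra solve; the grid sizes, the material data and the coarse T-grid are assumptions.

```python
# Hedged sketch: evaluating g0_{a,nu}(T, z) on the z-grid (31) and scanning for sign changes.
import numpy as np

a_c, R, theta0, thetaE = 1.0e-5, 0.1, 5.0, 20.0
alphaT, Emod, mu, sigma_star = 1.2e-5, 2.1e11, 0.3, 4.0e7
nu, n, ell = 40, 200, 10
lam = a_c * (np.arange(1, nu + 1) * np.pi / R) ** 2

def stress_on_grid(T):
    """sigma_{T,nu} on an equidistant t-grid of [0, T], via the Volterra system solved directly."""
    t = np.linspace(0.0, T, n + 1); h = t[1] - t[0]
    kf = lambda tau: 6.0*a_c/R**2 * np.exp(-np.outer(tau, lam)).sum(axis=1)
    th0 = -6.0*theta0 * (np.exp(-np.outer(t, lam)) / (np.arange(1, nu+1)*np.pi)**2).sum(axis=1)
    def volterra(rhs):
        x = rhs.copy()
        for i in range(1, n + 1):
            K = kf(t[i] - t[:i + 1]); w = np.full(i + 1, h); w[0] = w[-1] = h/2
            x[i] = (rhs[i] + np.dot(w[:i]*K[:i], x[:i])) / (1.0 - w[i]*K[i])
        return x
    u_hat, y_hat = volterra(np.ones(n + 1)), volterra(th0)
    u_bar = (thetaE - y_hat[-1]) / u_hat[-1] * u_hat + y_hat                  # (28)
    cv = np.zeros(n + 1)
    for i in range(1, n + 1):
        K = kf(t[i] - t[:i + 1]); w = np.full(i + 1, h); w[0] = w[-1] = h/2
        cv[i] = np.dot(w, K * u_bar[:i + 1])
    return alphaT*Emod/(1.0 - mu) * (th0 - u_bar + cv), t                     # (29)

z = np.arange(ell + 1) / ell                                                  # (31)
T_grid = np.linspace(100.0, 800.0, 8)                                         # coarse T-grid (assumed)
g0 = np.zeros((2, len(T_grid), ell + 1))
for iT, T in enumerate(T_grid):
    sig, t = stress_on_grid(T)
    s_z = np.interp(z * T, t, sig)
    g0[0, iT], g0[1, iT] = s_z + sigma_star, -s_z + sigma_star                # (30)

for a_level in (-sigma_star/2, 0.0, sigma_star/2):          # scan g0_2(., z=1) - a for sign changes
    sign_change = np.diff(np.sign(g0[1, :, ell] - a_level)) != 0
    print(a_level, np.where(sign_change)[0])
```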

If we think that the approximate feasible set describes M_GSI[g] in a sufficiently close way and if ν > ν₁ holds, then we put c_{ν+1} := 2. From this illustration we may (but need not) make a conjecture whether the properties EMFCQ and boundedness hold for M_GSI[g] ∩ (−∞, T^ν_{m_ν−1}].

Otherwise (in the case of no sufficiently close approximation), we put c_{ν+1} := c_ν.

At certain steps ν, we adapt the iteration procedure. Our decision may on the one hand be based on the insights about d̄_{T,ν} and the g⁰_{a,ν} (for example, we redefine intervals with respect to both their size and their fineness of decomposition). On the other hand, we may vary the auxiliary parameters (according to P_tm perhaps also the fixed parameters; cf. mark (A)) and the values p_ν, q_ν, or we can turn to Alternative (I), or to the subsequent Alternatives (II), (III). The variation of the parameters can be interpreted as a further entrance of parametric programming or optimal control theory and its methods into this work. Referring to T = T^ν_i, we write shortly:

g⁰_{a,ν,i,j} := g_{a,ν}(T^ν_i, z_j T^ν_i) (= g⁰_{a,ν}(T^ν_i, z_j))   (j ∈ {0, ..., ℓ_ν}).   (32)

Alternative (II): In Approach I, we originally discretize analogously with the help of equidistant grid points z^{i,j}_ν (j ∈ {1, ..., m_ν − 1} or, for numerical differentiation, j ∈ {0, m_ν}). As a result we obtain functions g^{0,ν} in the sense of Subsection 2.3. □

Motivated by Hestenes [10], [11], Powell [36] (referring to equality constraints) and modified in the sense of Rockafellar [37] (referring to inequalities see Kaiser/Krabs [19] and Weber [43], Remarks 3.1.6), we now mimic a "discretized" Lagrange function. Namely, in the sense of a penalty method, we put

L^ν_i := T^ν_i + (1/(4 s_ν)) Σ_{a=1}^{2} Σ_{j=0}^{ℓ_ν} ( (max{0, λ^{j,a}_ν − 2 s_ν g⁰_{a,ν,i,j}})² − (λ^{j,a}_ν)² ),   i ∈ {0, ..., m_ν − 1}.   (33)

By comparing these values we select a minimizing index i_ν, which corresponds to the point T^ν_{i_ν}:

i_ν ∈ Argmin{ L^ν_i | i ∈ {0, ..., m_ν − 1} }.   (34)
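A compact sketch of this penalty step (33)-(34): for every T-grid point the "discretized" augmented-Lagrangian value L^ν_i is evaluated and a minimizing index i_ν is selected. To keep the fragment self-contained, the constraint values g0 below are an invented smooth surrogate, not the stress constraints of P_tm.

```python
# Hedged sketch of (33)-(34), with invented surrogate constraint values g0.
import numpy as np

m_nu, ell, s_nu = 33, 10, 1.0
T = np.arange(m_nu) * 0.25                            # T^nu_i, i = 0, ..., m_nu - 1 (assumed grid)
z = np.arange(ell + 1) / ell                          # z_j = j / ell_nu, cf. (31)
lam = np.ones((ell + 1, 2))                           # multipliers lambda^{j,a}_nu (initialized to 1)

# invented surrogate for g0_{a,nu,i,j}: becomes feasible once T is large enough
g0 = np.empty((m_nu, ell + 1, 2))
g0[:, :, 0] = T[:, None] - 2.0 + 0.5 * z[None, :]     # plays the role of g0_{1,nu,i,j}
g0[:, :, 1] = T[:, None] - 1.0 - 0.3 * z[None, :]     # plays the role of g0_{2,nu,i,j}

L = T + 1.0 / (4.0 * s_nu) * (
        (np.maximum(0.0, lam - 2.0 * s_nu * g0) ** 2 - lam ** 2).sum(axis=(1, 2)))   # (33)
i_nu = int(np.argmin(L))                                                              # (34)
print("i_nu =", i_nu, " T_{i_nu} =", T[i_nu], " L =", L[i_nu])
```

The selected T^ν_{i_ν} balances the objective T against the penalized constraint violation, which is exactly the trade-off exploited by the flow diagram.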

Alternative (III): Here, we exploit the information of the inequalities, given by their approximate derivatives. Namely, we define the (arithmetic mean) difference quotient of the smooth function g⁰_{a,ν}(·, z_j) at T^ν_i by

Δg⁰_{a,ν,i,j} := ( g⁰_{a,ν,i+1,j} − g⁰_{a,ν,i−1,j} ) / 2^{−ν+ν₀+1},

and then we define

ΔL^ν_i := 1 + Σ_{a=1}^{2} Σ_{j=0}^{ℓ_ν} max{0, λ^{j,a}_ν − 2 s_ν g⁰_{a,ν,i,j}} · Δg⁰_{a,ν,i,j}   (i ∈ {0, ..., m_ν − 1}).

Then we ask whether there is an i satisfying |ΔL^ν_i| < ε_ν. If this is not the case and ν + 1 < ν_E, then we may suitably adapt the penalty variable s_{ν+1} (cf. (36)) to the control of convergence: we put "ν := ν + 1" and go back to the mark (B). Otherwise, we select a minimizing index i_ν ∈ Argmin{ |ΔL^ν_i| | i ∈ {0, ..., m_ν − 1} }. □

The number T^ν_{i_ν} is given both as a numerical output and as a point inside a figure (visualization). Of course, this output can be arranged just for a number of iteration steps ν. If ν + 1 < ν_E or |d̄_{T^ν_{i_ν},ν}| > ε_ν holds, then we could suitably define p_{ν+1}, q_{ν+1} and we put

λ^{j,a}_{ν+1} := max{0, λ^{j,a}_ν − 2 s_ν g⁰_{a,ν,i_ν,j}},   (35)

s_{ν+1} := 2^{p_ν} s_ν,   ε_{ν+1} := ε_ν / 2^{q_ν+1},   "ν := ν + 1";   (36)

afterwards we return to mark (B). Otherwise, the whole procedure stops.
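A sketch of this update step: a Rockafellar-type multiplier update at the selected grid point, followed by an adaptation of the penalty and tolerance parameters. The precise growth factors in (36) are reconstructed/assumed here, not certain from the source; the numerical values are placeholders.

```python
# Hedged sketch of the update step (35)-(36); growth factors in (36) are assumptions.
import numpy as np

ell, s_nu, eps_nu, p_nu, q_nu = 10, 1.0, 1.0e-2, 1, 1
lam = np.ones((ell + 1, 2))                       # lambda^{j,a}_nu
g0_at_inu = 0.1 * np.ones((ell + 1, 2))           # invented values g0_{a,nu,i_nu,j}

lam_next = np.maximum(0.0, lam - 2.0 * s_nu * g0_at_inu)    # (35)
s_next = 2.0 ** p_nu * s_nu                                  # (36), assumed: penalty grows
eps_next = eps_nu / 2.0 ** (q_nu + 1)                        # (36), assumed: tolerance shrinks
print(lam_next[0], s_next, eps_next)
```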

In the case of |d̄_{T^ν_{i_ν},ν}| < ε_ν, we regard the number T^ν_{i_ν} (of the last step ν) as a satisfying approximation of the desired minimum T̄. This is an optimistic reservation. (Another approximate candidate could be an expected lim inf of all forthcoming values T^ν̃_{i_ν̃}, assuming that the procedure continues running.) ♦

If, furthermore, in the limit of ν (imagining ν → ∞) we suppose EMFCQ and boundedness to hold for M_GSI[g], then the heuristic (optimistic) approximation reservation, made at the termination of our iteration, is supported by the theory of Approach I. (Remember the stepwise arising question on the validity of EMFCQ and boundedness, and the iteration procedure of Approach I. This approach was presented in Subsection 2.3, based on a topological study.) For that purpose, we choose a sufficiently large parameter ν_E. However, in the present Subsection 2.4, our main Lagrangian (penalty) approach also evaluates the F optimization problems which result from Approach I with its discretization on the lower stage. For more information on the corresponding numerical obstacles in F optimization see Spellucci [39]. We also remember the reflection under absence of a discretized upper stage, where different subroutines may be applied to the (less complex) GSI optimization problem. Another opportunity for insights is established by the additional (parallel) approach via the values d̄_{T,ν} (ν ≥ ν₀). In the case of the parameter constellation from Item 2, we indeed have an existence and convergence theory (see Item 3).

2.5. Further Evaluations

Until now, we have studied the structure and the numerical treatment of our control problem P_tm. Existence and convergence results were stated (Items 1-3) and a (commented) flow diagram was presented. Moreover, we exploited relations to generalized semi-infinite optimization and discussed alternatives and obstacles. Further difficulties will be noted below. For an illustration and a numerical evaluation see Fig. 2, and for further information cf. Weber [43].

From the foregoing reflections we learn that in such a concrete problem there may be obstacles (frontiers) in applying iteration procedures, resulting from lacking structural knowledge of the feasible set M_GSI[g] (here, I = ∅). In our iteration procedure, given above in the flow diagram, a great algorithmic effort (a lot of operations) has to be performed in order to overcome these structural frontiers. Errors can occur in the course of the procedure by accumulation of rounding errors (sensitivity).

However, we also remember the treatments, or adaptations, stated above in Alternatives (I)-(III), obtained by studying the mappings T ↦ d̄_{T,ν}, T ↦ g⁰_{a,ν}(T, z) and by varying auxiliary parameters and variables. For our present concrete, but structurally complex problem, some further practical treatments from GSI optimization may turn out to be helpful again.

Namely, in order to get an idea how M_GSI[g] looks like and whether EMFCQ is fulfilled, we can utilize (vectors of) pseudo-random numbers (Eichenauer-Herrmann [4]). Hereby, we obtain more information on structures of (in)feasibility. We also mention Karger [22], [23] on randomization in graph (or matroid) optimization problems. In a forthcoming article, we return to pseudo-random numbers from the viewpoint of random graphs, which admit insights related to Morse theory (see also Weber [43]). Concerning the diagnosing of infeasibilities we refer to Aggarwal/Ahuja/Hao/Orlin [1] on certain discrete optimization problems. (For the continuous case see Kearfott [25].) Furthermore, we refer to methods from reverse engineering (Elsässer [5], Hoschek/Dankwort [13]), image restoration (Noll [33]), discrete tomography (Gritzmann [7]) and topology optimization (Rozvany [38]). These methods approximately describe or visualize a manifold or a structure based on discrete data.

In this way we have widened our scope from continuous problems of invertibility and reconstruction in optimal control and GSI optimization ( Weber [43]) to discrete inverse problems.

3. Conclusion

In this article, we presented, analyzed and algorithmically treated an optimal control problem of time-minimal heating (or cooling). A first systematic analysis and evaluation was given by Krabs [28]. Our research additionally utilized an approach from generalized semi-infinite optimization, and we discussed the wide field of alternative methods, structural obstacles and related mathematical techniques. Hereby, we finally took stochastic and discrete features and methods into consideration.

Treating the heating problem in terms of a general model is very hard, so that a rigorous utilization of the topological, geometrical or intrinsic combinatorial character of the specialized real-world problem is indispensable.

In the sense of these reflections, our heating problem remains an interesting subject of future research from both the theoretical and the numerical viewpoint.

Acknowledgement. The authors thank Prof. Dr. Werner Krabs and Prof. Dr. Yurii Shokin for support, and Dipl.-Math. Susanne Mock for technical help.

References

[1] Aggarwal C.C., Ahuja R. K., Hao J., Orlin J. B. Diagnosing infeasibilities in network flow problems. Mathematical Programming, 81, 1998, 263-280.

[2] Berge C. Espaces topologiques, fonctions multivoques. Dunod, Paris, 1966.

[3] BRAESS D. Nonlinear Approximation Theory. Springer, Berlin, Heidelberg, N.Y., 1986.

[4] Eichenauer-Herrmann J. Personal communication. Darmstadt, 1997.

[5] ELSÄSSER B. Approximation mit rationalen B-Spline Kurven und Flächen. Doctoral thesis. Darmstadt University of Technology, Department of Mathematics, 1998.

[6] Gripenberg G., Londen S.-O., Staffans O. Volterra Integral and Functional Equations. Cambridge University Press, 1990.

[7] Gritzmann P. On the reconstruction of finite lattice sets from their X-rays. Prepr. Munich University of Technology, 1998.

[8] HACKBUSCH W. On the fast solving of parabolic boundary value problems. SIAM J. Control Optim., 17, 1979, 231-244.

[9] HACKBUSCH W. Integralgleichungen. Teubner Studienbucher, Stuttgart, 1989.

[10] HESTENES M. R. Multiplier and gradient methods. In: "Computing Methods in Optimization Problems - 2", eds L. A. Zadeh, L. W. Neustadt and A. V. Balakrishnan. Academic Press, N. Y., 1969.

[11] HESTENES M. R. Multiplier and gradient methods. J. Optim. Theory Appl., 4, 1969, 303-320.

[12] HIRSCH M. W. Differential Topology. Springer, 1976.

[13] HOSCHEK J., DANKWORT W., editors. Reverse Engineering, Teubner, Leipzig, Stuttgart, 1996.

[14] JATHE M., PICKL St., WEBER G.-W. Numerical computation and visualization. Darmstadt University of Technology, Department of Mathematics, 1998.

[15] JÖRGENS K. Linear Integral Operators. Pitman Publ. House, London, 1982.

[16] JONGEN H. Th., JONKER P., TWILT F. Nonlinear Optimization in IRn, I: Morse Theory, Chebychev Approximation. Peter Lang Publ. House, Frankfurt a. M., Bern, N. Y., 1983.

[17] JONGEN H. Th., JONKER P., TWILT F. Nonlinear Optimization in IRn, II: Transversality, Flows and Parametric Aspects. Peter Lang Publ. House, Frankfurt a. M., Bern, N.Y., 1986.

[18] JONGEN H. Th., TRIESCH E. Optimierung A. Lecture notes. Aachen University of Technology, Augustinus Publ. House, Aachen, 1988.

[19] KAISER C., KRABS W. Ein Problem der semi-infiniten Optimierung im Maschinenbau und seine Verallgemeinerung. Working paper. Darmstadt University of Technology, Department of Mathematics, 1986.

[20] Kaplan A., TICHATSCHKE R. On a class of terminal variational problems. In: "Parametric Optimization and Related Topics IV", eds J. Guddat, H. Th. Jongen, F. Nozicka, G. Still and F. Twilt. Peter Lang Publ. House, Frankfurt, Berlin, N. Y., 1996, 185-199.

[21] KAPLAN A., TICHATSCHKE R. On the numerical treatment of a class of semi-infinite terminal problems. Optimization, 41, 1997, 1-36.

[22] KARGER D. R. Random sampling and greedy sparsification for matroid optimization. Mathematical Programming. Ser. B 82, 1998, 41-81.

[23] KARGER D. R. Randomization in graph theory: a survey. Optima. Mathematical Programming Society Newsletter, 58, 1998, 1-11.

[24] KAUFMANN St. Mathematica als Werkzeug. Birkhauser Publ. House, Basel, Boston, Berlin, 1992.

[25] KEARFOTT R. B. On proving the existence of feasible points in equality constrained optimization problems. Mathematical Programming, 83, 1998, 89-100.

[26] KRABS W. Einführung in die Kontrolltheorie. Wissenschaftliche Buchgesellschaft Darmstadt, 1978.

[27] KRABS W. Optimization and Approximation. John Wiley, N. Y., 1979.

[28] KRABS W. On time-minimal heating or cooling of a ball. Intern. Ser. Numer. Math., 81, Birkhäuser, Basel, 1987, 121-131.

[29] KRABS W. On Moment Theory and Controllability of One-Dimensional Vibrating Systems and Heating Processes. Lecture Notes in Control and Information Sciences 173, eds. M. Thoma and A. Wyner. Springer, Berlin, Heidelberg, N. Y., 1992.

[30] KRABS W. Personal communication. Darmstadt, Germany, 1996-1998.

[31] KRYLOV V. I. Approximate Calculation of Integrals. Translation from the Russian by A. H. Stroud. MacMillan Company, London, N. Y., 1962.

[32] MYINT-U T. Partial Differential Equations of Mathematical Physics. American Elsevier Publishing Company, Inc., N. Y., London, Amsterdam, 1973.

[33] NOLL D. Restoration of degraded images with maximum entropy. J. Global Optim., 10, 1997, 91-103.

[34] PARKUS H. Instationäre Wärmespannungen. Springer, Wien, 1959.

[35] PICKL St., WEBER G.-W. An algorithmic approach by linear programming problems in generalized semi-infinite optimization. Preprint. Darmstadt University of Technology, Darmstadt, Germany, 2000, submitted for publication.

[36] POWELL M. J. D. A method for nonlinear constraints in minimization problems. Optimization. Ed. R. Fletcher. Academic Press, N. Y., 1972.

[37] ROCKAFELLAR R. T. On multiplier method of Hestenes and Powell applied to convex programming. J. Optim. Theory Appl., 12, 1973, 555-562.

[38] ROZVANY G. I. N. Topology optimization of multi-purpose structures. Math. Meth. Operat. Res., 47, 1998, 265-287.

[39] SPELLUCCI P. Numerische Verfahren der nichtlinearen Optimierung. Birkhäuser Publ. House, Basel, Boston, Berlin, 1993.

[40] TRICOMI F. G. Integral Equations. Interscience Publishers, Inc., N. Y., 1957.

[41] WEBER G.-W. Generalized semi-infinite optimization: on some foundations. J. Comp. Tech., 4, 3, 1999, 41-61.

[42] WEBER G.-W. Generalized semi-infinite optimization: on iteration procedures and topological aspects. In: "Similarity Methods", eds. B. Kräplin, St. Rudolph, St. Bruckner. Institute of Statics and Dynamics of Aviation and Space-Travel Constructions, 1998, 281-309.

[43] WEBER G.-W. Generalized Semi-Infinite Optimization and Related Topics. Habilitation thesis. Darmstadt University of Technology, Department of Mathematics, 1999/2000.

[44] WEBER G.-W. Generalized semi-infinite optimization: theory and applications in optimal control and discrete optimization. In: "Optimality Conditions, Generalized Convexity and Duality in Vector Optimization", eds. A. Cambini, L. Martein. J. Statistics and Management Systems. Submitted for publication, 2000.

Received 29 March 2000.
