
УДК 519.4

Optimization Problems with Random Data

Olga A. Popova*

Institute of Space and Information Technology, Siberian Federal University, Kirenskogo, 26, Krasnoyarsk, 660074, Russia

Received 10.05.2013, received in revised form 10.07.2013, accepted 20.09.2013

The article discusses a new approach to optimization problems with random input parameters, which is referred to as random programming. This approach uses numerical probabilistic analysis and allows us to construct the set of solutions of the optimization problem based on the joint probability density function.

Keywords: numerical probabilistic analysis, random programming, mathematical programming.

Introduction

Many practical problems, including problems of decision making, involve an optimization approach. The effectiveness of the resulting solutions is determined by several factors, first of all by the data needed to describe and solve the problem. The need to take the nature and characteristics of historical data into account became apparent in linear programming during its crisis in the sixties and seventies of the twentieth century. The rapid development of the theory of linear programming for solving practical problems of national economic planning ran into a difficulty: theoretical assumptions were inconsistent with the actual results obtained for specific economic problems, and the solutions of linear programming problems often did not meet projected expectations. The uncertainty of input data is one of the significant factors that were not taken into account in the proposed models, although this phenomenon is inherent in many practical problems of the economy.

The term "uncertain data" includes three types of uncertainty: stochasticity, fuzziness and imprecision of data.

Random errors associated with measurements or incompleteness of information lead to uncertain data. Then we deal with random, inaccurate and incomplete data.

It is clear that the results of the solutions depend on the quantity and quality of relevant information available and the limited cognitive abilities of decision makers, as well as on numerical methods chosen for calculation.

The article deals with a numerical probabilistic approach to solving optimization problems with random inputs. The solutions of such problems obtained with the use of mathematical programming are optimal solutions that depend on the input parameters. When the probability densities of the input parameters are known, it is possible to construct the joint probability density function of the optimal solutions. In contrast to stochastic programming [7, 10], where the optimal solution is a fixed solution, this approach allows us to build the whole set of solutions of the optimization problem. This set is defined by the joint probability density function.

Methods that construct the set of solutions of an optimization problem with random input parameters will be referred to as random programming. The methods are based on the application of numerical probabilistic analysis.

* [email protected] © Siberian Federal University. All rights reserved

It is important to note that we need a method that allows for the subsequent calculations in such a way as to obtain realistic results without introducing additional uncertainty [9].

At present, mathematical tools of uncertain programming are being developed. Uncertain programming is the theoretical basis for solving optimization problems with various types of uncertainty.

In a recent study [7] three main types of uncertainty are identified: randomness, fuzziness and imprecision.

Since an interval number can be regarded as a special case of an imprecise value, imprecise programming includes interval analysis, interval arithmetic and, accordingly, interval programming.

In this regard it should be noted that the development of hybrid algorithms combines the ideas and approaches of statistical modeling, neural networks, genetic algorithms and tabu search [7].

The expectation operator and averaging procedures are used in most uncertain programming algorithms.

Researchers try to find a good compromise between the adequacy of the optimization model and the computational complexity of the appropriate numerical method for solving the problem being studied. These two components together usually affect the usefulness and quality of the solutions. There are various approaches to the formulation and solution of uncertain optimization problems. It is impossible to give a complete overview of all such models and methods in a single article. Therefore, the focus is only on the stochastic approach to solve optimization problems.

Consider a general formulation of the stochastic programming problem (SPP) [10]:

$$
\max f(x, \xi), \qquad g_i(x, \xi) \le 0, \quad i = 1, \dots, p,
$$

where $x$ is the solution vector, $\xi$ is a random vector, $f(x, \xi)$ is the objective function and $g_i(x, \xi)$ are random constraint functions.

For the purpose of applying appropriate solution approaches, an optimization problem with stochastic uncertainty can be formulated in the form of E- and P-problems, according to the treatment of the objective function and the constraints.

The E-formulation means optimization of the expectation of the objective function. It is the first type of formulation of the SPP [10]. Such problems are called expected value models (EVM) [7].

The E-formulation is as follows:

$$
\max M[f(x, \xi)], \qquad (1)
$$

$$
M[g_i(x, \xi)] \le 0, \quad i = 1, \dots, p, \qquad (2)
$$

where $M$ denotes the expectation operator.

In many cases, the problem of stochastic optimization can be treated as a multi-criteria problem. In this case we have multi-criteria stochastic programming.

There are two main approaches to solving stochastic programming problems:

1) indirect methods, which consist in constructing the functions $F(x) = M[f(x, \xi)]$, $G_i(x) = M[g_i(x, \xi)]$ and solving the equivalent problem of nonlinear programming type (1), (2), as illustrated in the sketch after this list;

2) direct methods of stochastic programming, based on information about the values of the functions $f(x, \xi)$ and $g_i(x, \xi)$ obtained by experiments.
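To make the indirect approach concrete, the following minimal Python/NumPy sketch replaces the expectation operator by a sample average and passes the resulting deterministic problem to a standard optimizer. The objective function and the distribution of $\xi$ here are illustrative assumptions, not taken from the text.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    xi = rng.normal(1.0, 0.1, size=1000)   # samples of the random parameter xi

    # illustrative objective f(x, xi) = (x1 - xi)^2 + x2^2 (an assumption)
    def F(x):
        # F(x) approximates M[f(x, xi)] by the sample average
        return np.mean((x[0] - xi) ** 2 + x[1] ** 2)

    res = minimize(F, x0=np.zeros(2))      # equivalent deterministic problem
    print(res.x)                           # approximately (M[xi], 0) = (1, 0)

Constraint functions $G_i$ can be handled in the same manner, e.g. by passing sample-average constraints to the optimizer.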

It is necessary to point out relatively new formulations of optimization problems with interval uncertainty. For instance, the linear programming problem with interval data is formulated as follows [6]:

$$
(c, x) \to \min, \qquad (3)
$$

$$
Ax = b, \quad x \ge 0, \qquad (4)
$$

$$
A \in \mathbf{A}, \quad b \in \mathbf{b}, \quad c \in \mathbf{c}, \qquad (5)
$$

where $\mathbf{A}$ is an interval matrix and $\mathbf{b}$, $\mathbf{c}$ are interval vectors of dimension $n$.

1. Formulation of the problem and supporting information

Let us formulate the problem of random programming as follows:

$$
f(x, \xi) \to \min, \qquad (6)
$$

$$
g_i(x, \xi) \le 0, \quad i = 1, \dots, m, \qquad (7)
$$

where $x$ is the solution vector, $\xi$ is a vector of parameters, $f(x, \xi)$ is the objective function and $g_i(x, \xi)$ are constraint functions. For the vector $\xi$ it is known that

$$
\xi \in \boldsymbol{\xi}, \qquad (8)
$$

where $\boldsymbol{\xi}$ is a random vector.

A vector $x^*$ is a solution of problem (6)-(8) if

$$
f(x^*, \xi) = \inf_{x \in U} f(x, \xi),
$$

where

$$
U = \{x \mid g_i(x, \xi) \le 0, \ i = 1, \dots, m\}.
$$

The set of solutions of problem (6)-(8) is defined as follows:

$$
X = \{x \mid f(x, \xi) \to \min, \ g_i(x, \xi) \le 0, \ i = 1, \dots, m, \ \xi \in \boldsymbol{\xi}\}.
$$

Note that $x^*$ is a random vector, so in contrast to the deterministic problem it is necessary to determine the probability density function of each component of $x^*$. Then we can obtain the joint probability density function of $x^*$.

Next we extend the relation $* \in \{<, >\}$ to random variables $\mathbf{x}$, $\mathbf{y}$:

$$
\mathbf{x} * \mathbf{y} \iff x * y \ \text{for all} \ x \in \mathbf{x}, \ y \in \mathbf{y}.
$$

If the supports of $\mathbf{x}$ and $\mathbf{y}$ intersect, then we can introduce the probability of $\mathbf{x} * \mathbf{y}$ as

$$
P(\mathbf{x} * \mathbf{y}) = \int_{\Omega} p(x, y)\,dx\,dy,
$$

where $\Omega = \{(x, y) \mid x * y\}$ and $p(x, y)$ is the joint probability density function of $\mathbf{x}$ and $\mathbf{y}$.
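As an illustration of this construction, the following sketch approximates $P(\mathbf{x} < \mathbf{y})$ for two independent uniform random variables by integrating the joint density over $\Omega$ on a grid. The supports are an illustrative choice, not from the text.

    import numpy as np

    # two independent uniform random variables with intersecting supports:
    # x on [0, 2], y on [1, 3] (an illustrative choice)
    def p_x(t):
        return np.where((t >= 0.0) & (t <= 2.0), 0.5, 0.0)

    def p_y(t):
        return np.where((t >= 1.0) & (t <= 3.0), 0.5, 0.0)

    # P(x < y) = integral of p(s, t) over Omega = {(s, t) | s < t};
    # independence gives p(s, t) = p_x(s) p_y(t), approximated on a grid
    s = np.linspace(-0.5, 3.5, 801)
    S, T = np.meshgrid(s, s)
    ds = s[1] - s[0]
    prob = np.sum(p_x(S) * p_y(T) * (S < T)) * ds * ds
    print(prob)                            # exact value is 7/8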

The problem of linear programming with random data is formulated as follows:

$$
(c, x) \to \min, \qquad (9)
$$

$$
Ax = b, \quad x \ge 0, \qquad (10)
$$

$$
A \in \mathbf{A}, \quad b \in \mathbf{b}, \quad c \in \mathbf{c}, \qquad (11)
$$

where $\mathbf{A}$ is a random matrix, $\mathbf{b}$ and $\mathbf{c}$ are random vectors of dimension $n$.

A vector $x^*$ is a solution of problem (9)-(11) if

$$
(c, x^*) = \inf_{x \in U} (c, x),
$$

where

$$
U = \{x \mid Ax = b, \ x \ge 0\}.
$$

The set of solutions of problem (9)-(11) is

$$
X = \{x \mid (c, x) \to \min, \ Ax = b, \ x \ge 0, \ A \in \mathbf{A}, \ b \in \mathbf{b}, \ c \in \mathbf{c}\}.
$$

1.1. Systems of linear algebraic equations

Consider a system of linear algebraic equations

$$
Ax = b, \qquad (12)
$$

where $x \in \mathbf{R}^n$ is a random solution vector, $A = (\mathbf{a}_{ij})$ is a random $n \times n$ matrix and $b = (\mathbf{b}_i)$ is a right-hand side vector. Suppose that the entries $\mathbf{a}_{ij}$ and the components $\mathbf{b}_i$ are independent random variables with probability densities $p_{a_{ij}}$ and $p_{b_i}$, respectively.

The support of the set of solutions can be represented as follows [1]:

$$
X = \{x \mid Ax = b, \ A \in \mathbf{A}, \ b \in \mathbf{b}\}.
$$

With each $x \in X$ we can associate a subset of coefficients $A_x \subset \mathbf{A}$, $b_x \subset \mathbf{b}$:

$$
\Omega_x = \{A, b \mid Ax = b, \ A \in \mathbf{A}, \ b \in \mathbf{b}\}.
$$

Note that for a given vector $x$ the coefficients of the matrix and the right-hand side are related by

$$
\sum_{j=1}^{n} a_{ij} x_j - b_i = 0, \quad i = 1, \dots, n,
$$

therefore

$$
\Omega_x = \{A, b \mid \sum_{j=1}^{n} a_{ij} x_j - b_i = 0, \ i = 1, \dots, n\}.
$$

Suppose that we want to find the probability $P(X_0)$ that the solutions $x$ lie in a subset $X_0 \subset X$. To $X_0$ there corresponds the set $\Omega_0 = \{\Omega_x \mid x \in X_0\}$. Then

$$
P(X_0) = \int_{\Omega_0} \prod_{i=1}^{n} \prod_{j=1}^{n} p_{a_{ij}} \prod_{i=1}^{n} p_{b_i} \, d\Omega.
$$

Since $P(X_0)$ is in many cases proportional to the volume of $\Omega_0$, one can determine a priori the areas with the lowest and highest probability.
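While the integral above is rarely available in closed form, a simple Monte Carlo stand-in is easy to sketch: sample $(A, b)$, solve each realization, and count how often the solution falls into $X_0$. The $2 \times 2$ system and its uniform supports below are illustrative assumptions, not data from the text.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 100_000

    # an illustrative 2x2 system with independent uniform entries
    a11 = rng.uniform(2, 3, N); a12 = rng.uniform(0, 1, N)
    a21 = rng.uniform(0, 1, N); a22 = rng.uniform(2, 3, N)
    b1 = rng.uniform(1, 2, N);  b2 = rng.uniform(1, 2, N)

    A = np.stack([np.stack([a11, a12], axis=1),
                  np.stack([a21, a22], axis=1)], axis=1)   # shape (N, 2, 2)
    b = np.stack([b1, b2], axis=1)                         # shape (N, 2)

    x = np.linalg.solve(A, b[:, :, None])[:, :, 0]         # one solution per sample

    # Monte Carlo estimate of P(X0) for the box X0 = [0.2, 0.6] x [0.2, 0.6]
    inside = np.all((x >= 0.2) & (x <= 0.6), axis=1)
    print(inside.mean())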

1.2. Quasi-Monte Carlo

Quasi-Monte Carlo (QMC) integration is a method of numerical integration that operates in the same way as Monte Carlo (MC) integration, but instead uses sequences of quasi-random numbers, which behave more uniformly. Quasi-random numbers are generated by computer and are similar to pseudo-random numbers, but they have the important property of being deterministically chosen from equally distributed sequences [11] in order to minimize errors. In general, with quasi-random numbers the difference between the approximate and the actual value of the integral decreases as $(\ln N)^s / N$ (where $s$ is the dimension of the integral), whereas in the standard Monte Carlo procedure this difference decreases as $1/\sqrt{N}$.

QMC methods can be treated as deterministic versions of Monte Carlo methods [8]. Determinism enters in two ways:

1) by working with deterministic points rather than random samples, and

2) by the availability of deterministic error bounds instead of probabilistic MC error bounds.

Most practical implementations of MC methods are, in fact, quasi-Monte Carlo methods, since the purportedly random samples used in a Monte Carlo calculation are often generated by computer with the use of some deterministic algorithm. In QMC methods deterministic nodes are selected in such a way that the error bound is as small as possible. The very nature of QMC methods, with their completely deterministic procedures, implies that we get deterministic and thus guaranteed error bounds [8]. Therefore, it is always possible to choose in advance an integration rule that yields a given accuracy.

The basic idea of a quasi-Monte Carlo method is to replace random samples in Monte Carlo method by well-chosen deterministic points. The criterion for the choice of deterministic points depends on the numerical problem involved. For the important problem of numerical integration, the selection criterion is easy to find and it leads to the concepts of uniformly distributed sequence and discrepancy. The discrepancy can be considered as a quantitative measure for the deviation from uniform distribution.
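To illustrate the difference, the sketch below builds a two-dimensional Halton point set (a standard quasi-random construction) and compares the QMC and MC estimates of a test integral. The integrand and its reference value are illustrative choices, not from the text.

    import numpy as np

    def van_der_corput(n, base):
        # first n terms of the van der Corput sequence in the given base
        seq = np.zeros(n)
        for i in range(n):
            f, k = 1.0, i + 1
            while k > 0:
                f /= base
                seq[i] += f * (k % base)
                k //= base
        return seq

    N = 4096
    qmc = np.column_stack([van_der_corput(N, 2), van_der_corput(N, 3)])
    mc = np.random.default_rng(2).random((N, 2))

    g = lambda u: np.exp(u[:, 0] * u[:, 1])   # test integrand on [0, 1]^2
    exact = 1.3179022                         # sum_{n>=1} 1/(n * n!) for this integrand
    print(abs(g(qmc).mean() - exact))         # QMC error, O((ln N)^2 / N)
    print(abs(g(mc).mean() - exact))          # MC error, O(1 / sqrt(N))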

For system (12) with $n = 2$ we have

$$
x_1 = \frac{a_{22} b_1 - a_{12} b_2}{a_{11} a_{22} - a_{12} a_{21}}, \qquad
x_2 = \frac{a_{11} b_2 - a_{21} b_1}{a_{11} a_{22} - a_{12} a_{21}}.
$$


The vector $(x_1, x_2)$ can be calculated by replacing $a_{ij}$, $b_i$ with sample values $t_{ij}$, $t_i$:

$$
x_1 = \frac{t_{22} t_1 - t_{12} t_2}{t_{11} t_{22} - t_{12} t_{21}}, \qquad
x_2 = \frac{t_{11} t_2 - t_{21} t_1}{t_{11} t_{22} - t_{12} t_{21}},
$$

and the probability of $(x_1, x_2)$ is proportional to

$$
P(x_1, x_2) \sim \prod_{i, j} p_{a_{ij}}(t_{ij}) \prod_{i} p_{b_i}(t_i).
$$

Next, following [4], one can construct an approximate histogram of the joint probability density of $(x_1, x_2)$.
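A minimal sketch of this construction: sample the coefficients, apply the formulas above, and collect the results in a two-dimensional histogram. The uniform supports chosen here are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    N = 200_000

    # independent uniform coefficients (illustrative supports)
    t11 = rng.uniform(2, 3, N); t12 = rng.uniform(0, 1, N)
    t21 = rng.uniform(0, 1, N); t22 = rng.uniform(2, 3, N)
    t1 = rng.uniform(1, 2, N);  t2 = rng.uniform(1, 2, N)

    det = t11 * t22 - t12 * t21               # Cramer's rule for n = 2
    x1 = (t22 * t1 - t12 * t2) / det
    x2 = (t11 * t2 - t21 * t1) / det

    # piecewise constant approximation of the joint density of (x1, x2)
    hist, x1_edges, x2_edges = np.histogram2d(x1, x2, bins=50, density=True)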

2. Random linear programming

It is known that the optimal solution $x^*$ of problem (9)-(10) is achieved at a corner point of the set $U$.

Theorem 1 ([12]). Let the set $U$ be defined by conditions (10). Then a necessary and sufficient condition for a point $x = (x_1, \dots, x_n) \in U$ to be a corner point is as follows: there exist indices $j_1, \dots, j_r$ such that

$$
A_{j_1} x_{j_1} + \dots + A_{j_r} x_{j_r} = b, \qquad x_j = 0, \ j \ne j_l, \ l = 1, \dots, r,
$$

and the columns $A_{j_1}, \dots, A_{j_r}$ of the matrix are linearly independent.

Example 1. Let $U$ be defined by the following matrix $A$ and vector $b$:

$$
A = \begin{pmatrix} 1 & 1 & 3 & 1 \\ 1 & -1 & 1 & 2 \end{pmatrix}, \qquad
b = \begin{pmatrix} 3 \\ 1 \end{pmatrix}.
$$

Then the columns $A_1$, $A_2$ of the matrix correspond to the corner point with coordinates $(2, 1, 0, 0)$, the columns $A_1$, $A_3$ to the point $(0, 0, 1, 0)$ and the columns $A_2$, $A_4$ to the point $(0, 5/3, 0, 4/3)$.

Note that out of $n$ columns one can choose $r$ linearly independent columns in at most $\binom{n}{r}$ ways. Consequently, the number of corner points of the set $U$ is finite.

One can suggest the following algorithm to solve the canonical problem (9)-(11):

1) find all corner points $x$ of the set $U$;

2) calculate the value of $(c, x)$ for each corner point and choose the $x$ with the smallest value of $(c, x)$.

However, this approach is not necessarily efficient because even in the case of low-dimensional problems the number of corner points can be very large.
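For illustration only, here is a brute-force sketch of this enumeration, checked against Example 1. Degenerate bases may select the same corner point more than once, so duplicates are merged.

    import numpy as np
    from itertools import combinations

    def corner_points(A, b, tol=1e-9):
        # enumerate corner points of U = {x | Ax = b, x >= 0} by brute force:
        # try every choice of r = rank(A) columns (at most C(n, r) of them)
        m, n = A.shape
        r = np.linalg.matrix_rank(A)
        corners = set()
        for cols in combinations(range(n), r):
            B = A[:, cols]
            if np.linalg.matrix_rank(B) < r:
                continue                       # columns linearly dependent
            xB, *_ = np.linalg.lstsq(B, b, rcond=None)
            if not np.allclose(B @ xB, b) or np.any(xB < -tol):
                continue                       # basis infeasible
            x = np.zeros(n)
            x[list(cols)] = xB
            corners.add(tuple(np.round(x, 9)))
        return sorted(corners)

    # Example 1: corners (2, 1, 0, 0), (0, 0, 1, 0) and (0, 5/3, 0, 4/3)
    A = np.array([[1.0, 1, 3, 1], [1, -1, 1, 2]])
    b = np.array([3.0, 1.0])
    for x in corner_points(A, b):
        print(x)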

Nevertheless, the idea of item-by-item examination of corner points of the set was very fruitful and allowed for developing a number of methods for solving linear programming problems. One of these methods is the so-called simplex method.

For problem (9)-(11) we need to construct a joint probability density of the vector $x^*$. For this purpose, we use one of the deterministic methods for solving linear programming problems, such as the simplex method.

Let us consider the auxiliary problem

$$
(c^t, x) \to \min, \qquad (13)
$$

$$
A^t x = b^t, \quad x \ge 0, \qquad (14)
$$

$$
A^t \in \mathbf{A}, \quad b^t \in \mathbf{b}, \quad c^t \in \mathbf{c}, \qquad (15)
$$

and find a solution $x^{t*}$ and the corresponding corner point with indices $j_1, \dots, j_r$.

We then solve the stochastic system of linear algebraic equations with the use of numerical probabilistic analysis [3]:

$$
(\mathbf{A}_{j_1} \cdots \mathbf{A}_{j_r})\, x = \mathbf{b}.
$$

The joint probability density of its solution is associated with $x^{t*}$. If the supports of the input parameters are small enough, then by virtue of continuity this solution will coincide with $x^*$. In the case of arbitrary supports of the input parameters, the sampling of $A^t \in \mathbf{A}$, $b^t \in \mathbf{b}$ and $c^t \in \mathbf{c}$ should be repeated using a Monte Carlo approach or genetic algorithms. If this produces different solutions $x^{t*}$, they can be compared by computing the probabilistic extensions $\mathbf{f}^t = (\mathbf{c}, x^{t*})$ [5].
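A hedged sketch of this sampling scheme follows. For concreteness it uses the data of the numerical example in the next subsection (with $c$ kept deterministic, as there), and plain Monte Carlo sampling together with a library LP solver stands in for the numerical probabilistic analysis machinery of [3].

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(4)

    # data of the numerical example below with r = 0.1
    r = 0.1
    A0 = np.array([[1.0, 1, 3, 1], [1, -1, 1, 2]])
    b0 = np.array([3.0, 1.0])
    c = np.array([-1.0, -1, 0, 0])

    xs = []
    for _ in range(10_000):
        At = A0 + rng.uniform(-r, r, A0.shape)   # sample A^t in bold A
        bt = b0 + rng.uniform(-r, r, b0.shape)   # sample b^t in bold b
        res = linprog(c, A_eq=At, b_eq=bt, bounds=(0, None), method="highs")
        if res.success:
            xs.append(res.x)

    xs = np.array(xs)
    print(xs.mean(axis=0))                       # near the r = 0 solution (2, 1, 0, 0)
    # a 2-D histogram of (x1, x2) approximates the density shown in Fig. 1
    hist, e1, e2 = np.histogram2d(xs[:, 0], xs[:, 1], bins=40, density=True)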

2.1. Numerical example

As a numerical example, consider the following problem:

$$
(c, x) \to \min, \qquad (16)
$$

$$
Ax = b, \quad x \ge 0, \qquad (17)
$$

$$
A \in \mathbf{A}, \quad b \in \mathbf{b}, \quad c \in \mathbf{c}, \qquad (18)
$$

where $A = (a_{ij})$ is a uniform random matrix, each element of which is a uniform random variable with support $[\underline{a}_{ij}, \overline{a}_{ij}]$, and $b$ and $c$ are random vectors whose elements are uniform random variables.

The supports are defined as follows:

$$
\mathbf{A} = \begin{pmatrix}
[1-r, 1+r] & [1-r, 1+r] & [3-r, 3+r] & [1-r, 1+r] \\
[1-r, 1+r] & [-1-r, -1+r] & [1-r, 1+r] & [2-r, 2+r]
\end{pmatrix}, \qquad
\mathbf{b} = \begin{pmatrix} [3-r, 3+r] \\ [1-r, 1+r] \end{pmatrix},
$$

$$
c = (-1, -1, 0, 0).
$$

If $r = 0$ (this corresponds to the deterministic case), the solution is $x^* = (2, 1, 0, 0)$ and the columns $A_1$, $A_2$ of the matrix correspond to the corner point.

Fig. 1 shows the joint probability density of the vector $(x_1, x_2)$ for $r = 0.1$; the components $x_3$ and $x_4$ are equal to zero. The solid line is the boundary of the solutions in the plane $(x_1, x_2)$. The solution set $X$ is a quadrangle with vertices $(2.0, 0.636)$, $(2.444, 1.0)$, $(2.0, 1.444)$ and $(1.636, 1.0)$. As can be seen from the figure, the probability density is distributed very unevenly; the largest density is in the center, near the point $(2.0, 1.0)$.

Fig. 1. Joint probability density of the vector $(x_1, x_2)$

Fig. 2 shows the histogram of the objective function $c_1 x_1 + c_2 x_2$, whose expectation is equal to $-2.834$.

Fig. 2. Histogram of the objective function $c_1 x_1 + c_2 x_2$

The area of $X$ depends strongly on $r$. With increasing $r$ the area grows, and already at $r = 1$ it becomes infinite. This is caused by the matrix

$$
\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \in
\begin{pmatrix} [0, 2] & [0, 2] \\ [0, 2] & [-2, 0] \end{pmatrix}
$$

with linearly dependent columns.

3. Random nonlinear programming

Consider a random nonlinear programming problem without constraints in the following form:

$$
\frac{1}{2}(Ax, x) - (b, x) \to \min, \qquad (19)
$$

$$
A \in \mathbf{A}, \quad b \in \mathbf{b}, \qquad (20)
$$

where $\mathbf{A}$ is a random matrix and $\mathbf{b}$ is a random vector. In the case of a symmetric positive definite matrix $A$, problem (19), (20) reduces to the random system of linear algebraic equations

$$
Ax = b. \qquad (21)
$$

To solve random systems of linear algebraic equations of the form (21) one can use the Monte Carlo method. In some cases it is possible to use numerical probabilistic analysis, which is more efficient than the Monte Carlo method [2].

In general, the problem of random nonlinear programming (6), (7) can be reduced to a random system of nonlinear equations

$$
F(x, k) = 0, \quad k \in \mathbf{k}, \qquad (22)
$$

where $\mathbf{k}$ is a random vector.

Upon solving problems (21) and (22) we obtain the joint probability density of the solutions $x$.
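As a sketch of the reduction (22), the loop below solves one realization of a random nonlinear system per sample of $k$ and accumulates a histogram of the solutions. The system $F$ and the supports of $\mathbf{k}$ are assumptions made for the example, not taken from the text.

    import numpy as np
    from scipy.optimize import fsolve

    rng = np.random.default_rng(5)

    # an illustrative random nonlinear system F(x, k) = 0:
    # x1^2 + x2^2 - k1 = 0,  x1 - k2 * x2 = 0
    def F(x, k):
        return [x[0] ** 2 + x[1] ** 2 - k[0], x[0] - k[1] * x[1]]

    xs = []
    for _ in range(5000):
        k = rng.uniform([1.0, 0.5], [2.0, 1.5])   # sample k in bold k
        xs.append(fsolve(F, x0=[1.0, 1.0], args=(k,)))

    xs = np.array(xs)
    # histogram approximation of the joint density of the solution x
    hist, e1, e2 = np.histogram2d(xs[:, 0], xs[:, 1], bins=40, density=True)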

3.1. Numerical examples

Let us consider problem (19) with the uniform random matrix

$$
\mathbf{A} = \begin{pmatrix} \mathbf{a}_1 & \mathbf{a}_2 \\ \mathbf{a}_2 & \mathbf{a}_1 \end{pmatrix},
$$

where $\mathbf{b}$ is a uniform random vector. The supports are $\mathbf{a}_1 = [2, 4]$, $\mathbf{a}_2 = [-1, 0]$ and $\mathbf{b}_1 = \mathbf{b}_2 = [0.5, 1]$.

Fig. 3. Joint probability density of the vector x

Fig. 3 shows a piecewise constant approximation of the joint probability density of the vector $x$ for problem (19), (20). For comparison, Fig. 4 shows samples of solutions of system (21) obtained by a method similar to the quasi-Monte Carlo method [8]. Because of the symmetry of the matrix $A$, the particular solutions of system (21) form a certain pattern.
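The following sketch reproduces this experiment under the stated supports; pseudo-random sampling is used here where the text uses a quasi-random point set, and Cramer's rule exploits the symmetry of $A$.

    import numpy as np

    rng = np.random.default_rng(6)
    N = 50_000

    # supports from the text: a1 = [2, 4], a2 = [-1, 0], b1 = b2 = [0.5, 1]
    a1 = rng.uniform(2.0, 4.0, N)
    a2 = rng.uniform(-1.0, 0.0, N)
    b1 = rng.uniform(0.5, 1.0, N)
    b2 = rng.uniform(0.5, 1.0, N)

    det = a1 * a1 - a2 * a2                   # Cramer's rule for the symmetric matrix
    x1 = (a1 * b1 - a2 * b2) / det
    x2 = (a1 * b2 - a2 * b1) / det

    # a scatter of (x1, x2) shows the pattern of Fig. 4; a 2-D histogram
    # gives the piecewise constant density approximation of Fig. 3
    hist, e1, e2 = np.histogram2d(x1, x2, bins=50, density=True)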

Let us add to problem (19) the following constraint:

$$
x_1 + x_2 = a, \qquad (23)
$$

where $a \in \mathbf{a}$ and $\mathbf{a}$ is a uniform random variable with support $[0.9, 1.0]$. In this case the solution of the optimization problem can be written in explicit form:

$$
x_1 = -(b_2 - b_1 - 2 a a_2 - 2 a a_1)/(8 a_1), \qquad x_2 = a - x_1.
$$

The solution $x$ can be obtained with the use of numerical probabilistic analysis [4, 5]. Fig. 5 shows the joint probability density of the solution vector of problem (19) with constraint (23), defined on the square $[0.7, 1.0] \times [0, 0.3]$.


Fig. 4. Samples of solutions of system (21)

Fig. 5. Solutions of problem (19) with constraint (23)
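Finally, since the constrained solution is available in explicit form, a histogram of the kind shown in Fig. 5 can be sketched directly from the formulas quoted above, with uniform samples on the stated supports.

    import numpy as np

    rng = np.random.default_rng(7)
    N = 100_000

    # uniform samples on the supports given in the text
    a1 = rng.uniform(2.0, 4.0, N)
    a2 = rng.uniform(-1.0, 0.0, N)
    b1 = rng.uniform(0.5, 1.0, N)
    b2 = rng.uniform(0.5, 1.0, N)
    a = rng.uniform(0.9, 1.0, N)

    # the explicit solution formulas quoted above
    x1 = -(b2 - b1 - 2 * a * a2 - 2 * a * a1) / (8 * a1)
    x2 = a - x1

    # histogram approximation of the joint density (cf. Fig. 5)
    hist, e1, e2 = np.histogram2d(x1, x2, bins=50, density=True)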

Conclusion

The considered methods for solving linear and nonlinear optimization problems show that random programming is an effective method for solving optimization problems with uncertain input parameters. In the future we plan to develop algorithms that choose the best optimal solutions out of the constructed set of solutions.

References

[1] B.Dobronets, Interval Mathematics, KSU, Krasnoyarsk, 2004 (in Russian).

[2] B.S.Dobronets, A.M.Krantsevich, N.M.Krantsevich, Software implementation of numerical operations on random variables, Journal of Siberian Federal University. Mathematics & Physics, 6(2013), no. 2, 168-173.

[3] B.S.Dobronets, O.A.Popova, Numerical operations on random variables and their application, Journal of Siberian Federal University. Mathematics & Physics, 4(2011), no. 2, 229-239 (in Russian).

[4] B.S.Dobronets, O.A.Popova, Elements of numerical probability analysis, SibGAU Vestnik, 42(2012), no. 2, 19-23 (in Russian).

[5] B.S.Dobronets, O.A.Popova, Numerical probabilistic analysis for the study of systems with uncertainty, Zhurnal Kontrolya i Komp'yuternyh Nauk, 21(2012), no. 4, 39-46 (in Russian).

[6] M.Fiedler, J.Nedoma, J.Ramík, J.Rohn, K.Zimmermann, Linear Optimization Problems with Inexact Data, Springer Science+Business Media, New York, 2006.

[7] B.Liu, Theory and Practice of Uncertain Programming (2nd Edition), Springer-Verlag, Berlin, 2009.

[8] H.Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, SIAM, Philadelphia, 1992.

[9] H.Schjaer-Jacobsen, Representation and calculation of economic uncertainties: Intervals, fuzzy numbers, and probabilities, Int. J. Production Economics, 78(2002), 91-99.

[10] A.Shapiro, D.Dentcheva, A.Ruszczynski, Lectures on Stochastic Programming: Modeling and Theory, SIAM, Philadelphia, 2009.

[11] C.W.Ueberhuber, Numerical Computation 2: Methods, Software, and Analysis, Springer-Verlag, Berlin, 1997.

[12] F.P.Vasil'ev, Numerical methods for solving extremal problems, Nauka, Moscow, 1988 (in Russian).
