Application of local algorithm error to recognition problem

Journal: European science

Application of local algorithm error to recognition problem Bashirova S. (Republic of Azerbaijan)


Bashirova Sabina Agamehdi — Lecturer, Department of Public Administration of Information Technologies, Academy of Public Administration under the President of the Republic of Azerbaijan, Baku, Republic of Azerbaijan

Abstract: recently, methods for finding the error of gradient-type algorithms have been developed. This paper concerns the actual local error of a recognition algorithm and shows how this error estimate can be applied when such problems are investigated. The difference between the obtained and the optimal solution is estimated.


Keywords: error of gradient-type algorithms, local algorithms, discrete optimization, recognition problem, portfolio problem.


1. Introduction and formulation of the problem

It is known that local (approximate) algorithms for discrete optimization problems do not always yield the optimal solution. It is therefore important to estimate the error of a local algorithm. Note that it is not always possible to construct a method for estimating the error of an approximate discrete optimization algorithm for an arbitrary problem.

When no such method can be established, one turns to statistics gathered over problems of different sizes; of course, this requires complex programs.

On the other hand, methods for estimating the error of gradient-type algorithms have been developed. Here the actual error of a local algorithm for the recognition problem is considered, and it is shown how this error estimate can be applied. The difference between the obtained and the optimal solution is estimated.

First, properties of coordinate-convex functions are investigated; later, these properties are used to obtain guaranteed error bounds. Let $Z_+^n$ ($R_+^n$) denote the set of $n$-dimensional non-negative integer (real) vectors, and let $R$ denote the set of real numbers. If $x_i \le y_i$, $\forall i \in I_n = \{1,\dots,n\}$, then we write $x \le y$, $x, y \in Z_+^n$.

Let $p = (p_1,\dots,p_n) \in R_+^n$. A function $f : Z_+^n \to R$ is called $p$-coordinate-convex [5] if

$$\Delta_{ij} f(x) \le 0, \quad \forall x \in Z_+^n,\ \forall i, j \in I_n,\ i \ne j, \qquad \Delta_{ii} f(x) \le -p_i, \quad \forall x \in Z_+^n,\ \forall i \in I_n,$$

where

$$\Delta_i f(x) = f(x + e^i) - f(x), \qquad \Delta_{ij} f(x) = \Delta_i f(x + e^j) - \Delta_i f(x), \qquad e^j = (e_1^j,\dots,e_n^j),\ e_i^j = 0,\ i \ne j,\ e_j^j = 1;$$

in other words, $e^j$ is the $j$-th $n$-dimensional unit vector.

The set of all $p$-coordinate-convex functions on $Z_+^n$ is denoted by $K_p = K_p(Z_+^n)$ [6]. Introduce the following notation:

$$N(x, y) = \{ i : x_i < y_i,\ i \in I_n \}, \qquad h(x, y) = \sum_{i \in N(x,y)} h(x_i, y_i), \qquad h(x_i, y_i) = \begin{cases} y_i - x_i, & x_i < y_i, \\ 0, & x_i \ge y_i. \end{cases}$$

Theorem 1.1. The following statements are equivalent:

1. $f(x) \in K_p(Z_+^n)$;

2. $f(y) - f(x) \le \sum_{i \in N(x,y)} h(x_i, y_i)\, \Delta_i f(x) - \dfrac{1}{2} \sum_{i \in N(x,y)} p_i\, h(x_i, y_i)\big(h(x_i, y_i) - 1\big), \quad \forall x, y \in Z_+^n,\ x \le y.$
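As a sanity check, statement 2 of Theorem 1.1 can be verified numerically for a small separable quadratic; the function $f$, the grid, and the vector $p$ below are illustrative choices, not taken from the paper.

```python
import itertools

# A separable quadratic f(x) = sum_i (-x_i^2 + 5*x_i) is p-coordinate-convex
# with p = (2, 2): the mixed second differences vanish (<= 0) and the pure
# second differences equal -2 <= -p_i.
def f(x):
    return sum(-xi**2 + 5 * xi for xi in x)

def delta(i, x):
    """First forward difference Delta_i f(x) = f(x + e^i) - f(x)."""
    y = list(x); y[i] += 1
    return f(tuple(y)) - f(x)

def theorem_rhs(x, y, p):
    """Right-hand side of statement 2 of Theorem 1.1."""
    rhs = 0.0
    for i in range(len(x)):
        h = max(y[i] - x[i], 0)              # h(x_i, y_i)
        rhs += h * delta(i, x) - 0.5 * p[i] * h * (h - 1)
    return rhs

# Check f(y) - f(x) <= RHS for all pairs x <= y on a small grid.
p = (2, 2)
ok = all(
    f(y) - f(x) <= theorem_rhs(x, y, p) + 1e-9
    for x in itertools.product(range(4), repeat=2)
    for y in itertools.product(range(4), repeat=2)
    if all(xi <= yi for xi, yi in zip(x, y))
)
print(ok)  # True for this f and p
```

For this particular quadratic the inequality in fact holds with equality, since each coordinate contributes $h_i(5 - 2x_i - h_i)$ to both sides.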

Consider the following nonlinear discrete optimization problems (1.1) and (1.2):

$$\max\Big\{ f(x) = \sum_{i=1}^n f_i(x_i) : x = (x_1,\dots,x_n) \in P_\varphi \Big\}, \qquad (1.1)$$

where

$$P_\varphi = \Big\{ x = (x_1,\dots,x_n) \in Z_+^n : \varphi(x) = \tfrac{1}{2}(Ax, x) \le a,\ a \in R_+ \Big\}, \qquad f(x) \in K_p(Z_+^n), \quad p = (p_1,\dots,p_n) \in R_+^n,$$

$(Ax, x)$ is the scalar product of the vectors $Ax$ and $x$, $\varphi : Z_+^n \to Z_+$ is an increasing function, and $A = (a_{ij})$ is an $n \times n$ symmetric matrix with real elements.

$$\max\Big\{ f(x) = \sum_{i=1}^n f_i(x_i) : x = (x_1,\dots,x_n) \in P_g \Big\}, \qquad (1.2)$$

where

$$P_g = \Big\{ x = (x_1,\dots,x_n) \in Z_+^n : g(x) = (c, x) + \tfrac{1}{2} \sum_{i=1}^n q_i x_i^2 \le a,\ a \in R_+ \Big\},$$

$$f(x) \in K_p(Z_+^n), \qquad p = (p_1,\dots,p_n),\ c = (c_1,\dots,c_n),\ q = (q_1,\dots,q_n) \in R_+^n,$$

and $g : Z_+^n \to R_+$ is an increasing function.

2. Calculation

Consider the following local algorithm for the solution of problem (1.1).

Algorithm A1.

1. $x^0 = 0 = (0,\dots,0)$, $t = 0$, $Q(t) = \emptyset$.

2. $x^{t+1} = x^t + e^{i(t)}$, where

$$i(t) = \arg\max\{ \Delta_i f(x^t) : i \in fes(x^t, P_\varphi) \}, \qquad fes(x^t, P_\varphi) = \{ i : x^t + e^i \in P_\varphi,\ i \in I_n \}.$$

3. If $fes(x^t, P_\varphi) = \emptyset$, then the algorithm terminates and the obtained solution is denoted by $x^\varphi = (x_1^\varphi,\dots,x_n^\varphi)$. Otherwise, set $t = t + 1$, $Q(t) = Q(t) \cup \{i(t)\}$ and return to step 2.
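A minimal sketch of the A1 scheme in Python. The callbacks `delta_f` and `feasible` are assumed to be supplied by the caller, and the quadratic constraint data ($A$, $a$) and objective below are illustrative choices, not from the paper.

```python
import numpy as np

def algorithm_a1(delta_f, feasible, n, max_steps=10_000):
    """Greedy local (gradient-type) algorithm A1: start at x = 0 and, while a
    feasible coordinate exists, increase the coordinate with the largest first
    difference Delta_i f(x)."""
    x = np.zeros(n, dtype=int)
    for _ in range(max_steps):
        fes = [i for i in range(n)
               if feasible(x + np.eye(n, dtype=int)[i])]
        if not fes:                      # fes(x, P_phi) is empty: stop
            return x
        i = max(fes, key=lambda i: delta_f(i, x))
        x[i] += 1
    return x

# Illustrative instance of problem (1.1): phi(x) = (1/2)(Ax, x) <= a.
A = np.array([[2, 0], [0, 2]])
a = 9.0

def feasible(x):
    return 0.5 * x @ A @ x <= a

def delta_f(i, x):
    def f(v):                            # illustrative separable objective
        return -v[0]**2 + 6 * v[0] + 2 * v[1]
    y = x.copy(); y[i] += 1
    return f(y) - f(x)

x_phi = algorithm_a1(delta_f, feasible, n=2)
print(x_phi)  # -> [2 2]
```

The run stops at $x^\varphi = (2, 2)$ because both unit steps from there violate $\varphi(x) \le a$.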

The local algorithm for the solution of problem (1.2).

Algorithm A2.

1. $x^0 = 0 = (0,\dots,0)$, $t = 0$, $Q(t) = \emptyset$.

2. $x^{t+1} = x^t + e^{i(t)}$, where

$$i(t) = \arg\max\{ \tilde\Delta_i f(x^t) : i \in fes(x^t, P_g) \}, \qquad \tilde\Delta_i f(x^t) = \begin{cases} \dfrac{\Delta_i f(x^t)}{\Delta_i g(x^t)}, & i \in N^0, \\[2mm] \dfrac{\Delta_i f(x^t)}{q_i}, & i \in N^1, \end{cases}$$

$$fes(x^t, P_g) = \{ i : x^t + e^i \in P_g,\ i \in I_n \}, \qquad N^0 = \{ i : q_i = 0,\ i \in I_n \}, \qquad N^1 = \{ i : q_i > 0,\ i \in I_n \}.$$

3. If $fes(x^t, P_g) = \emptyset$ or $\Delta_{i(t)} f(x^t) < 0$, then the algorithm terminates and the obtained solution is denoted by $x^g = (x_1^g,\dots,x_n^g)$. Otherwise, set $t = t + 1$, $Q(t) = Q(t) \cup \{i(t)\}$ and return to step 2.
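The A2 scheme can be sketched the same way. Note the reading of the partially garbled selection rule in the source as $\Delta_i f / \Delta_i g$ on $N^0$ and $\Delta_i f / q_i$ on $N^1$ is an assumption, and all data below ($c$, $q$, $a$, $f$) are illustrative, not from the paper.

```python
def algorithm_a2(delta_f, delta_g, q, feasible, n, max_steps=10_000):
    """Sketch of algorithm A2 for problem (1.2).  Steps along the coordinate
    maximizing the normalized difference; stops when no feasible coordinate
    remains or the chosen raw increment Delta_{i(t)} f(x) is negative."""
    x = [0] * n
    def step(i):
        y = list(x); y[i] += 1; return y
    for _ in range(max_steps):
        fes = [i for i in range(n) if feasible(step(i))]
        if not fes:
            return x
        def score(i):                    # assumed normalization rule
            return (delta_f(i, x) / delta_g(i, x) if q[i] == 0
                    else delta_f(i, x) / q[i])
        i = max(fes, key=score)
        if delta_f(i, x) < 0:            # second stopping criterion
            return x
        x = step(i)
    return x

# Illustrative instance of problem (1.2): g(x) = (c,x) + (1/2)*sum q_i x_i^2.
c, q, a = (1, 1), (0, 2), 6

def g(x):
    return sum(ci * xi for ci, xi in zip(c, x)) + \
           0.5 * sum(qi * xi**2 for qi, xi in zip(q, x))

def feasible(x):
    return g(x) <= a

def delta_g(i, x):
    y = list(x); y[i] += 1
    return g(y) - g(x)

def f(x):                                # illustrative separable objective
    return x[0] + 4 * x[1] - x[1]**2

def delta_f(i, x):
    y = list(x); y[i] += 1
    return f(y) - f(x)

x_g = algorithm_a2(delta_f, delta_g, q, feasible, n=2)
print(x_g)  # -> [4, 1]
```

On this instance the normalized rule takes one step in the quadratic coordinate ($x_2$) before the linear coordinate becomes preferable, terminating at $x^g = (4, 1)$.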

The error of the gradient-type solutions produced by algorithms A1 and A2 is estimated using the following notation:

$$\lambda = (\lambda_1,\dots,\lambda_n), \quad \lambda_i = \Delta_i f(x), \quad Q(\lambda) = \sum_{i=1}^n \lambda_i, \quad Q_1(\lambda) = \sum_{i \in N^1} \lambda_i, \quad Q(q) = \sum_{i=1}^n q_i, \quad Q_1(q) = \sum_{i \in N^1} q_i,$$

$$r_t = \frac{h\, Q_1(q)}{q_{i(t)}}, \quad t = 0,\dots,\tau, \qquad (2.1)$$

$$\omega_1(\tau, h, q) = \prod_{s=0}^{\tau-1} \Big( 1 - \frac{1}{r_s} \Big), \qquad \lambda_s = \Delta_{i(s)} f(x^s), \quad s = 0,\dots,t,\ t = 0,\dots,\tau,$$

$$\omega(p) = \begin{cases} 0, & N_p^+ = \emptyset, \\ \sum_{i \in N_p^+} \dfrac{1}{p_i}, & N_p^+ \ne \emptyset, \end{cases} \qquad \text{where } N_p^+ = \{ i : p_i > 0,\ i \in I_n \},$$

$$\omega_2(p, h, \delta^f) = h\, Q(\delta^f) - \frac{h\, \omega(p)}{2}, \qquad \delta_i^f = \Delta_i f(0),\ i \in I_n, \qquad \delta^f = (\delta_1^f,\dots,\delta_n^f),$$

$$N(x, y) = \{ i : x_i < y_i,\ i \in I_n \}, \qquad h(x, y) = \sum_{i \in N(x,y)} h(x_i, y_i), \qquad h = h(P) = \max\{ h(0, x) : x \in P \}, \qquad h(x_i, y_i) = \begin{cases} y_i - x_i, & x_i < y_i, \\ 0, & x_i \ge y_i. \end{cases}$$

Let $x^* = (x_1^*,\dots,x_n^*)$ be the optimal solution of problem (1.1). The local algorithm is said to have guaranteed (relative) error $\varepsilon \ge 0$ if the following inequality holds:

$$\frac{f(x^*) - f(x^g)}{f(x^*) - f(0)} \le \varepsilon.$$

Let $\tau$ be the number of steps of algorithm A1, and let $Q(t) = \{ i(0),\dots,i(t) \}$, $t = 0,\dots,\tau$, denote the set of indices selected by algorithm A2.

Theorem 2.1. If in problem (1.1) $f(x)$ is an increasing function and the inequality $\omega_2(p, h, \delta^f) > 0$ holds, then the following guaranteed error bound of algorithm A2 for problem (1.1) is true:

$$\frac{f(x^*) - f(x^g)}{f(x^*) - f(0)} \le 1 - \omega_3(\tau, h, q, p, \delta^f), \qquad (2.2)$$

where

$$\omega_3(\tau, h, q, p, \delta^f) = \omega_1(\tau, h, q) + \frac{(h - \tau)^2\, \omega_1(\tau, h, q)}{2 h\, \omega(p)\, \omega_2(p, h, \delta^f)}, \qquad h = h(P_\varphi).$$

Lemma 2.1. If in problem (1.1) $f(x)$ is an increasing function, then the following inequality holds:

$$H(x^t, x^*) = \sum_{i \in N(x^t, x^*)} p_i\, h(x_i^t, x_i^*)\big(h(x_i^t, x_i^*) - 1\big) \ge \frac{(h - t)^2}{\omega(p)}, \quad t = 0,\dots,\tau. \qquad (2.3)$$

Corollary 1. Under the conditions of Theorem 2.1, if $h = \tau$, then the guaranteed error of algorithm A2 is expressed by the formula

$$\varepsilon = \prod_{s=0}^{\tau-1} \big( 1 - r_s^{-1} \big).$$

Corollary 2. Under the conditions of Theorem 2.1, if $N_p^+ \ne \emptyset$ and $h = \tau$, then the guaranteed error of the local algorithm A2 is expressed by the formula

$$\varepsilon = \prod_{s=0}^{\tau-1} \Big( 1 - \frac{q_{i(s)}}{h\, Q_1(q)} \Big). \qquad (3.1)$$
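Corollary 2 is pure arithmetic once the step sequence is known. The sketch below evaluates the product; the vector $q$, the step sequence $i(s)$, and $h$ are hypothetical illustrative data, not from the paper.

```python
# Guaranteed error eps = prod_{s=0}^{tau-1} (1 - q_{i(s)} / (h * Q1(q))).
q = [2.0, 4.0, 0.0]                 # q_i; N^1 = {i : q_i > 0}
steps = [0, 1, 0, 1]                # hypothetical i(s), s = 0..tau-1
h = len(steps)                      # Corollary 2 assumes h = tau
Q1 = sum(qi for qi in q if qi > 0)  # Q1(q): sum over N^1
eps = 1.0
for s in steps:
    eps *= 1 - q[s] / (h * Q1)
print(round(eps, 6))  # -> 0.583526
```

A step along a coordinate from $N^0$ (here index 2, with $q_i = 0$) would contribute a factor of 1 and leave the bound unchanged.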

3. Application model

Let the image $X$ be given:

$$X = \{ x_1, x_2, \dots, x_n \}.$$

The mathematical model for the recognition of the image is as follows [1]:

$$\max\{ f(x_1,\dots,x_n) : x = (x_1,\dots,x_n) \in D \}.$$

Here $f$ may be a differentiable or a discrete function, and the set $D$ is a subset of the training sequence.

Obviously, problems of this type belong to the class of NP(-hard) problems, that is, they are difficult to solve. Denote the optimal solution of the problem by $x^* = (x_1^*,\dots,x_n^*)$.

Obviously, if we can find the optimal solution, then the recognition problem is considerably simplified.

Suppose that the function $f$ satisfies the condition $f : Z_+^n \to R$, where $R$ is the set of real numbers. For such a function the derivative in the usual sense does not exist [3], because $f$ is not continuous.

Let $e^i = (0,\dots,0,1,0,\dots,0)$ be the $n$-dimensional unit vector whose $i$-th component equals 1. Define

$$\Delta_i f(x) = f(x + e^i) - f(x), \qquad \Delta f(x) = (\Delta_1 f(x),\dots,\Delta_n f(x)).$$

The second differences $\Delta_{ij}$ are defined by the same rule:

$$\Delta_{ij} f(x) = \Delta_i(\Delta_j f(x)) = \Delta_i\big(f(x + e^j) - f(x)\big) = f(x + e^i + e^j) - f(x + e^i) - f(x + e^j) + f(x).$$
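The first and second differences translate directly into code; the function used below is the two-variable example introduced later in this section.

```python
def delta_i(f, x, i):
    """First forward difference Delta_i f(x) = f(x + e^i) - f(x)."""
    y = list(x); y[i] += 1
    return f(tuple(y)) - f(tuple(x))

def delta_ij(f, x, i, j):
    """Second difference:
    f(x + e^i + e^j) - f(x + e^i) - f(x + e^j) + f(x)."""
    y = list(x); y[i] += 1; y[j] += 1
    yi = list(x); yi[i] += 1
    yj = list(x); yj[j] += 1
    return f(tuple(y)) - f(tuple(yi)) - f(tuple(yj)) + f(tuple(x))

# f(x1, x2) = -x1^2 + 3*x1 + x2, as in the example below.
f = lambda x: -x[0]**2 + 3 * x[0] + x[1]
print(delta_i(f, (0, 0), 0))      # 2   (f(1,0) - f(0,0))
print(delta_ij(f, (0, 0), 0, 0))  # -2  (pure second difference)
print(delta_ij(f, (0, 0), 0, 1))  # 0   (mixed second difference)
```

The pure second difference $-2 \le -p_1$ and vanishing mixed difference show this $f$ is $p$-coordinate-convex with $p_1 = 2$.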

The function $f$ is increasing on $D$ if the following condition is met:

$$\Delta_i f(x) \ge 0, \quad \forall i \in fes(x, D),\ \forall x \in D,$$

where

$$fes(x, D) = \{ i : x + e^i \in D,\ x \in D \}.$$

Here $fes(x, D)$ is called the set of feasible directions.

Obviously, this problem is a portfolio problem with Boolean variables, and it is well known that constructing the optimal solution $x^*$ of the portfolio problem is quite complicated; no exactly effective algorithm is known. However, it is known that problems of this type can be solved by means of effective approximate algorithms.

The approximate solution $x^\tau$ of the problem is used to determine the division into classes in the following way. Let a boundary number $L$ be given. If

$$\frac{f(x^*)}{f(x^\tau)} \le L,$$

then $x^\tau \in K_1$; otherwise, if

$$\frac{f(x^*)}{f(x^\tau)} > L,$$

then $x^\tau \in K_2$ ($K_1$ is the effective class, $K_2$ the non-effective class).

For example, let

$$f(x_1, x_2) = -x_1^2 + 3x_1 + x_2 \to \max, \qquad 3x_1 + x_2 \le 3, \qquad (x_1, x_2) \in Z_+^2.$$

Here $Z_+^2$ means that $(x_1, x_2)$ are non-negative integers.

Denote by $x^*$ the optimal solution of the example above:

$$D = \{ (x_1, x_2) \in Z_+^2 : 3x_1 + x_2 \le 3 \}, \qquad x^* \in D.$$

If $\forall x \in D$, $f(x^*) \ge f(x)$, then $x^*$ is called the optimal solution of the above problem.

Denote the approximate solution by $x^g$. If

$$\frac{f(x^*)}{f(x^g)} \le 2,$$

then $x^g \in K_1$; otherwise $x^g \in K_2$.
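For this small example both the exact optimum (by enumeration of $D$) and a greedy local solution can be computed, illustrating the classification rule with $L = 2$; the brute-force search and the greedy restatement below are illustrative, not part of the paper.

```python
from itertools import product

# Worked example from the text: maximize f(x1,x2) = -x1^2 + 3*x1 + x2
# over D = {(x1,x2) in Z_+^2 : 3*x1 + x2 <= 3}, with boundary number L = 2.
f = lambda x: -x[0]**2 + 3 * x[0] + x[1]
D = [x for x in product(range(4), repeat=2) if 3 * x[0] + x[1] <= 3]

x_star = max(D, key=f)               # exact optimum by enumeration

# Greedy local search (the A1 scheme restated for this example).
x = (0, 0)
while True:
    fes = [i for i in range(2)
           if tuple(x[k] + (k == i) for k in range(2)) in set(D)]
    if not fes:
        break
    i = max(fes, key=lambda i:
            f(tuple(x[k] + (k == i) for k in range(2))) - f(x))
    x = tuple(x[k] + (k == i) for k in range(2))

print(x_star, f(x_star))       # (0, 3) 3
print(x, f(x))                 # (1, 0) 2 -- the first greedy step blocks x2
print(f(x_star) / f(x) <= 2)   # True: the approximate solution is in K1
```

The greedy first step $x_1 \mathrel{+}= 1$ (gain 2 versus 1) exhausts the budget $3x_1 + x_2 \le 3$, yet the ratio $f(x^*)/f(x^g) = 3/2 \le 2$ still places the approximate solution in the effective class $K_1$.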

4. Conclusions

It is well known that the construction of exact solutions for discrete optimization problems is not always possible. Therefore, for such tasks the urgent problem is the construction of local (approximate) algorithms. On the other hand, besides applying approximate algorithms to the respective tasks, it becomes necessary to find the error of these algorithms.

In this paper, the following results were obtained:

- finding the error of gradient extrema for nonlinear discrete optimization problems of the first class;

- finding the guaranteed error depending on the spectral range and the eigenvalues of the matrix that defines the constraint conditions;

- applying these results to the recognition problem.

References

1. Mazurov V. D. Methods of committee classification and pattern recognition. Moscow: Nauka, 1990. 420 p.

2. Ramazanov A. B. An estimate for the curvature of an order-convex set in the integer lattice and related questions // Mathematical Notes, 2008, vol. 84, N 1, p. 147-151.

3. Ramazanov A. B. On stability of the gradient algorithm in convex discrete optimization problems and related questions // Discrete Mathematics and Applications, 2011, vol. 21, N 4, p. 465-476.

4. Emelichev V. A., Kovalev M. M., Ramazanov A. B. Errors of gradient extrema of a strictly convex function of discrete argument // Discrete Mathematics and Applications, 1992, vol. 2, N 2, p. 119-131.

5. Ramazanov A. B. Estimates of the global extremum of d.s.-convex functions of discrete argument // Proceedings of the 14th International School-Seminar «Optimization methods and their applications». Irkutsk: Publishing House of the Melentiev Energy Systems Institute, 2008, p. 483-490.

6. Ramazanov A. B. Estimates of accuracy of the coordinate-wise lifting algorithm for solutions of discrete convex optimization problems // Discrete Analysis and Operations Research, Ser. 1, 2005, vol. 12, N 4, p. 60-80.
