
UDC 004.93

Subbotin S. A.

Doctor of Science, Professor, Zaporizhzhya National Technical University

Ukraine, E-mail: [email protected]

METHODS AND CHARACTERISTICS OF LOCALITY-PRESERVING TRANSFORMATIONS IN THE PROBLEMS OF COMPUTATIONAL INTELLIGENCE

The problem of the development of mathematical support for data dimensionality reduction is solved. Its results can be used to automate the construction of diagnostic and recognition models by precedents. A set of rapid transformations from the original multidimensional space to a one-dimensional axis is proposed for the first time. They provide a solution to the feature extraction and feature selection problems. A complex of indicators characterizing the properties of the transformations is also proposed for the first time. On the basis of the proposed indicators a set of criteria is defined, which facilitates comparison and selection of the best transformations and of the results of their work when solving diagnosis and recognition problems with computational intelligence methods. Software realizing the proposed transformations and the indicators characterizing their properties was developed. An experimental study of the proposed transformations and indicators was conducted, the results of which allow the proposed transformations to be recommended for use in practice.

Keywords: sample, instance, feature, locality-preserving transformation, hashing, pattern recognition, diagnosis, dimensionality reduction.

INTRODUCTION

The problem of reducing model complexity and increasing the speed of model construction often occurs when diagnostic and recognition models are built by precedents characterized by a large number of features [1]. One way to solve this problem is to use a transformation from the multidimensional space of initial features to a one-dimensional axis for data dimensionality reduction [2, 3].

There are various transformation methods for data dimensionality reduction [2-11]. However, they require the calculation of distances between instances or of feature correlation coefficients, and for large-scale problems they are hardly applicable in practice because of the large amounts of time and computer memory needed both to determine the transformation parameters and to execute the transformation. The situation is further compounded by the fact that the number of known transformations and their modifications is very large, while there are no formal criteria to analyze their quality or to select the best available transformation for a particular task [3].

Therefore, an urgent problem is to increase the speed of dimensionality-reducing transformations and to develop criteria for selecting the transformation to be used in solving a particular problem.

The purpose of this work is the development of rapid transformations from a multidimensional feature space to a one-dimensional axis, the creation of a set of indicators characterizing the properties of transformations, and the experimental study of the properties of transformations in practical problem solving.

© Subbotin S. A., 2014

1 PROBLEM STATEMENT

Suppose we have an initial (original) sample X = ⟨x, y⟩, a set of S precedents describing the dependence y(x), where x = {x^s}, y = {y^s}, s = 1, 2, ..., S. Each precedent is characterized by a set of N input features {x_j}, j = 1, 2, ..., N, where j is the feature number, and by an output feature y. Every s-th precedent can be represented as ⟨x^s, y^s⟩, x^s = {x_j^s}, where x_j^s is the value of the j-th input feature and y^s is the value of the output feature for the s-th precedent (instance) of the sample, y^s ∈ {1, 2, ..., K}, where K is the number of classes, K > 1.

Then the problem of reducing the dimensionality of the sample X can be formally represented as follows: find a transformation H: X → I which for each instance x^s = {x_j^s} determines a coordinate I^s on the generalized axis I and thus maps instances of different classes to different intervals of the generalized axis.

Since, as a rule, known transformations do not guarantee an exact solution of this problem, a further problem arises: designing indicators that quantify the quality of a transformation and make it possible to compare the results of various transformations with one another in order to choose the best transformation from the set.

2 TRANSFORMATIONS OF INSTANCES FROM THE MULTIDIMENSIONAL SPACE TO THE GENERALIZED AXIS

For large-scale problems it is advisable to create transformations that allow mapping individual instances without loading the whole initial sample, that take feature informativity into account in the process of transformation, and that provide generalization of the data.


To ensure the generalization of closely located data points (instances), we propose to replace feature values with the numbers of feature value intervals. For this, the features first need to be discretized by partitioning their ranges into intervals.

To partition the features into intervals, the number of the interval (term) into which the s-th instance falls on the j-th feature is proposed to be determined by the formula:

$$k_j^s = \begin{cases} \operatorname{round}\left(1 + \dfrac{x_j^s - x_j^{\min}}{\theta_j}\right), & \theta_j > 0; \\ 1, & \theta_j = 0, \end{cases} \qquad \theta_j = \frac{x_j^{\max} - x_j^{\min}}{k_j},$$

where the number of intervals k_j of the j-th feature is set as:

$$k_j = \begin{cases} K, & K \geq \operatorname{round}(\ln S),\ K \leq \sqrt[N]{S}; \\ \max\{2, \operatorname{round}(\sqrt[N]{S})\}, & K \geq \operatorname{round}(\ln S),\ K > \sqrt[N]{S}; \\ \max\{2, \operatorname{round}(\ln S)\}, & K < \operatorname{round}(\ln S) \leq \sqrt[N]{S}; \\ K, & \operatorname{round}(\ln S) < K,\ K \leq \sqrt[N]{S}; \\ \max\{2, \operatorname{round}(\sqrt[N]{S})\}, & \operatorname{round}(\ln S) < K,\ K > \sqrt[N]{S}, \end{cases}$$

where x_j^min, x_j^max are the minimum and maximum values of the j-th feature, respectively.
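For concreteness, the discretization step can be sketched in Python roughly as follows. This is a minimal reading of the reconstructed formulas above (in particular, the root term is read as the N-th root of S); the function names and the clamping of the boundary value are illustrative assumptions rather than part of the source:

```python
import math

def interval_count(S, N, K):
    """One reading of the piecewise rule for the number of intervals k_j."""
    ln_s = round(math.log(S))
    root = S ** (1.0 / N)          # N-th root of S (assumed reading)
    if ln_s <= K:
        return K if K <= root else max(2, round(root))
    # K < round(ln S): fall back to ln S or to the root, whichever applies
    return max(2, round(ln_s)) if ln_s <= root else max(2, round(root))

def interval_number(x, x_min, x_max, k):
    """Interval (term) number into which a feature value x falls."""
    theta = (x_max - x_min) / k
    if theta == 0:
        return 1
    # clamp so that x == x_max stays in the last interval (practical guard)
    return min(k, round(1 + (x - x_min) / theta))
```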

To map instances from the original multidimensional feature space onto the one-dimensional generalized axis, it is suggested to use the following transformations.

Transformation 1. For each interval number of the j-th feature obtain its binary representation (binary numbers are padded with zeros from the left to c_j, the number of digits in k_j). Set the coordinate of the s-th instance on the generalized axis I^s = 0 and set the position (bit) number of the generalized axis coordinate p = 1. Going through the feature numbers j in descending order of their rank and through the digit positions of the interval number c = 1, 2, ..., c_j, perform in a cycle: if p ≤ d, where d is the number of bits in the computer bit grid, then record at the p-th position (numbering from the left) of the binary representation of the generalized feature I^s the value of the c-th position (numbering from the left) of the interval number into which the s-th instance falls on the j-th feature, and set p = p + 1. As a result, we obtain a generalized axis coordinate of the instance with implicit ranking and selection of features.
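As an illustration, a Python sketch of this bit-packing scheme follows; it assumes the interval numbers and per-feature digit counts c_j have already been computed, and the function name and argument layout are hypothetical:

```python
def transform_1(ks, cs, ranks, d=64):
    """Pack interval-number bits feature by feature, best-ranked first.

    ks[j] is the interval number of the instance on feature j, cs[j] the
    number of binary digits allotted to feature j, ranks the feature
    indices in descending order of importance, d the width of the bit grid.
    """
    I, p = 0, 0
    for j in ranks:                          # whole features in rank order
        for c in range(cs[j] - 1, -1, -1):   # digits of ks[j], left to right
            if p == d:                       # bit grid exhausted: truncate
                return I
            I = (I << 1) | ((ks[j] >> c) & 1)
            p += 1
    return I
```

Features with low rank simply fail to fit into the d-bit grid, which is exactly the implicit feature selection mentioned above.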

Transformation 2. This is an alternative way of constructing the generalized feature relative to transformation 1. If the total number of bits needed to represent the interval numbers of all features, c_j N, does not exceed the number of bits d in the computer bit grid, and the values c_j are equal for all features: for each interval number of the j-th feature obtain its binary representation (padded with zeros from the left to c_j digits); set the coordinate of the s-th instance on the generalized axis I^s = 0 and the position number p = 1; cycling through the digit positions of the interval number c = 1, 2, ..., c_j and, within each position, through the feature numbers j in descending order of their ranks, put to the p-th bit position (numbering from the left) of the binary representation of the generalized feature I^s the c-th bit (numbering from the left) of the interval number into which the s-th instance falls on the j-th feature, and set p = p + 1. As a result, we obtain the generalized axis coordinate with implicit ranking of features.
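This interleaved variant closely resembles Morton (Z-order) encoding; a sketch under the stated assumption that all features share the same digit count c (names are illustrative):

```python
def transform_2(ks, ranks, c):
    """Interleave bits across features: for each digit position, take one
    bit from every feature in descending rank order (Z-order style)."""
    I = 0
    for pos in range(c - 1, -1, -1):   # digit positions, left to right
        for j in ranks:                # features by decreasing rank
            I = (I << 1) | ((ks[j] >> pos) & 1)
    return I
```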

Transformation 3. The generalized feature is formed on the basis of locality-preserving hashing [12-15]. The initial feature space is divided into 2^k equal hypercubes, each of which is identified by a key I^s of k bit length, where k is the number of feature space partitions. After the i-th partition the initial feature space is split into 2^i N-dimensional hyperrectangles, where the i-th partition is carried out on the j-th dimension, j = i mod N. At the i-th partition, if the hypercube is located in the top half of the partitioned range, the i-th bit of its key is set to one, otherwise it is set to zero (the bit is set in the i-th position of the k-bit identifier, extended by zeros from the left if its length is less than k). Algorithmically, the key I^s can be generated as follows: set I^s = 0, x_j^min′ = x_j^min, x_j^max′ = x_j^max; then for i = 1, 2, ..., k do: set j = i mod N, x_j^mid = (x_j^min′ + x_j^max′)/2, I^s = 2I^s; if x_j^s > x_j^mid, then set x_j^min′ = x_j^mid, I^s = I^s + 1, else set x_j^max′ = x_j^mid.
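Algorithmically this is a cyclic bisection of the feature ranges; a direct Python transcription of the reconstructed pseudocode (with 0-based feature indexing, so j = i mod N visits every dimension):

```python
def transform_3(x, x_min, x_max, k):
    """k-bit locality-preserving key of instance x (transformation 3)."""
    lo, hi = list(x_min), list(x_max)
    N, I = len(x), 0
    for i in range(1, k + 1):
        j = i % N                      # dimension split at the i-th partition
        mid = (lo[j] + hi[j]) / 2.0
        I <<= 1
        if x[j] > mid:                 # upper half: current key bit is 1
            I |= 1
            lo[j] = mid
        else:                          # lower half: bit stays 0
            hi[j] = mid
    return I
```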

Transformation 4. The transformations described above map instances onto a discrete generalized axis. If the total number of bits needed to represent the interval numbers of all features exceeds the number of bits in the computer bit grid, it is possible to use a transformation onto a real-valued generalized axis with partial information loss, in which the real coordinate I^s accumulates weighted contributions of the interval numbers into which the s-th instance falls on the features:

$$I^s = \frac{1}{N} \sum_{j=1}^{N} \frac{w_{j,k}}{v_j\, k_j}\, k, \qquad w_{j,k} = \frac{\max\limits_{q}\{S_{j,k}^q\}}{S_{j,k}},\ S_{j,k} > 0, \qquad S_{j,k} = \sum_{q=1}^{K} S_{j,k}^q,$$

where k = k_j^s is the number of the interval into which the s-th instance falls on the j-th feature, S_{j,k}^q is the number of instances of the q-th class located in the k-th interval of the j-th feature, and v_j is the rank of the j-th feature (the position of the j-th feature in decreasing order of individual feature importance).
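Under the reconstruction above, which is one possible reading of the damaged formula, a Python sketch of the real-valued coordinate might look as follows; the weight matrix is assumed to be precomputed from the class-per-interval counts, and all names are illustrative:

```python
import numpy as np

def interval_weights(counts):
    """w_{j,k} from counts[q][k], the instances of class q in interval k."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum(axis=0)                      # S_{j,k} per interval
    safe = np.where(total > 0, total, 1.0)          # avoid division by zero
    return np.where(total > 0, counts.max(axis=0) / safe, 0.0)

def transform_4(k_s, k, w, v):
    """Real generalized coordinate: weighted, rank-scaled interval numbers.

    k_s[j]: interval number (1-based) of the instance on feature j,
    k[j]: number of intervals of feature j, w[j]: weights of its intervals,
    v[j]: rank of feature j (1 = most important).
    """
    N = len(k_s)
    return sum(w[j][k_s[j] - 1] * k_s[j] / (v[j] * k[j])
               for j in range(N)) / N
```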

Transformation 5. Define the distance from the s-th instance to the unit vector in the normalized coordinate system:

$$d^s = \sqrt{\sum_{j=1}^{N} (x_j^s - 1)^2},$$

and the angle between the instance, viewed as a vector, and the unit vector:

$$\varphi^s = \arccos \frac{\sum_{j=1}^{N} x_j^s}{\sqrt{N \sum_{j=1}^{N} (x_j^s)^2}}.$$

Thus the s-th instance is mapped from the N-dimensional space into a two-dimensional space. Next, for the coordinates of the s-th instance in the formed two-dimensional space, by analogy with the first transformation, the coordinate I^s of the s-th instance on the generalized axis is obtained.
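A sketch of the two-dimensional intermediate mapping, assuming the reconstructed distance and angle formulas (the subsequent packing into I^s then follows transformation 1):

```python
import numpy as np

def transform_5_2d(x):
    """Map instance x to (distance, angle) relative to the unit vector."""
    x = np.asarray(x, dtype=float)
    d = np.sqrt(((x - 1.0) ** 2).sum())              # distance to (1, ..., 1)
    norm = np.sqrt(len(x) * (x ** 2).sum())
    cos_phi = x.sum() / norm if norm > 0 else 1.0    # guard the zero vector
    phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))     # angle to the unit vector
    return d, phi
```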

Transformation 6. Generate Q support vectors, the centers of pseudo-clusters C^q = {C_j^q}, q = 1, 2, ..., Q, K ≤ Q << S, j = 1, 2, ..., N. In the simplest case their coordinates can be set randomly, taking into account the dimensionality and feature scales (x_j^min ≤ C_j^q ≤ x_j^max), or, by setting Q = K, as the center of each class:

$$C_j^q = \frac{1}{S^q} \sum_{s=1}^{S} x_j^s \{y^s = q\},\quad j = 1, 2, \ldots, N,\ q = 1, 2, \ldots, K,$$

where S^q is the number of instances of the q-th class.

After this, renumber the clusters based on their proximity to one another and their position in the feature space relative to the smallest feature values:

- find the distance from each cluster center to the point with the lowest feature values:

$$R^{\min}(C^q) = \sqrt{\sum_{j=1}^{N} (C_j^q - x_j^{\min})^2};$$

- find the distance between the cluster centers:

$$R(C^q, C^p) = \sqrt{\sum_{j=1}^{N} (C_j^q - C_j^p)^2};$$

- find the center of the cluster closest to the point with the lowest feature values:

$$q = \arg\min_{g=1,2,\ldots,Q} \{R^{\min}(C^g)\};$$

- set this center as the current one, set the new number of the current cluster t = 1, put the current cluster into the set of centers with a new index (C_* = C_* ∪ C_*^1, C_*^1 = C^q) and delete it from the set of centers without a new index (C = C \ C^q);

- while at least one cluster without a new index exists (i.e. C ≠ ∅), perform: among the remaining clusters without a new index in C, find the cluster closest to the current one:

$$p = \arg\min_{g=1,2,\ldots,Q:\ C^g \in C} \{R(C^q, C^g)\};$$

then increase t = t + 1, put this cluster into the set of centers with a new index (C_* = C_* ∪ C_*^t, C_*^t = C^p), set it as the current one, and remove it from the set of centers without a new index (C = C \ C^p).

As a result we obtain C_*, a set of cluster centers whose numbers correspond to their proximity to the point with the lowest feature values and also allow the proximity of the cluster centers to one another to be judged qualitatively.

Further, for each instance x^s of the initial sample, s = 1, ..., S, do:


- define the distance from it to each cluster center, q = 1, 2, ..., Q:

$$R(x^s, C_*^q) = \sqrt{\sum_{j=1}^{N} (x_j^s - C_{*j}^q)^2};$$

- find the index of the nearest cluster center:

$$p = \arg\min_{q=1,2,\ldots,Q} \{R(x^s, C_*^q)\};$$

- find the angle between the vectors x^s and C_*^p relative to the point with the lowest feature values:

$$\varphi = \arccos \frac{\sum_{j=1}^{N} (x_j^s - x_j^{\min})(C_{*j}^p - x_j^{\min})}{\sqrt{\sum_{j=1}^{N} (x_j^s - x_j^{\min})^2}\, \sqrt{\sum_{j=1}^{N} (C_{*j}^p - x_j^{\min})^2}};$$

- assign to the s-th instance the coordinate on the generalized axis:

$$I^s = p + \frac{\varphi}{\pi}.$$
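A compact Python sketch of the whole procedure (class centroids as pseudo-cluster centers, greedy renumbering from the corner point, then I^s = p + φ/π); variable names are illustrative:

```python
import numpy as np

def transform_6(X, y):
    """Generalized coordinates I^s = p + phi/pi via renumbered pseudo-clusters."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    x_min = X.min(axis=0)
    centers = np.array([X[y == q].mean(axis=0) for q in np.unique(y)])
    # renumber: start with the center nearest to x_min, then greedily
    # append the nearest remaining center (new indices t = 1, 2, ...)
    order = [int(np.argmin(np.linalg.norm(centers - x_min, axis=1)))]
    rest = set(range(len(centers))) - {order[0]}
    while rest:
        cur = centers[order[-1]]
        nxt = min(rest, key=lambda g: np.linalg.norm(centers[g] - cur))
        order.append(nxt)
        rest.remove(nxt)
    C = centers[order]
    I = np.empty(len(X))
    for s, x in enumerate(X):
        p = int(np.argmin(np.linalg.norm(C - x, axis=1)))  # nearest center
        u, v = x - x_min, C[p] - x_min
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        phi = np.arccos(np.clip(u @ v / denom, -1, 1)) if denom > 0 else 0.0
        I[s] = (p + 1) + phi / np.pi    # 1-based cluster number + offset
    return I
```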

3 CHARACTERISTICS OF TRANSFORMATIONS TO THE GENERALIZED AXIS

For the transformations introduced above, it is suggested to use the following characteristics of the instance mapping process:

- t^s, the time of transforming one instance from the original feature space to the generalized axis under sequential computation;

- m^s, the volume of computer memory used by the transformation method for processing one instance;

- λ, the number of adjustable parameters of the transformation needed for its implementation;

- t, the time of calculating the transformation parameters on the basis of the training sample;

- m, the volume of computer memory used to calculate the transformation parameters on the basis of the training sample.

Situations where several instances have equal coordinates may occur both in the original and in the synthesized feature spaces. Such situations are called collisions. By a collision point we will understand a point in the feature space at which a collision occurs.

A collision is quite admissible and even desirable in automatic classification problems, on condition that all instances located at the collision point belong to the same class. However, if the instances located at the collision point belong to different classes, the used feature set does not provide good separability of instances.

Denote the set of collision points {g_v}, v = 1, 2, ..., V, where g_v is the set of instances belonging to the v-th collision point and V is the number of collision points, which obviously cannot exceed 0.5S.

To estimate the quality of the results of the considered transformations, we propose to use the following indicators.

The number of collision points in which instances belong to different classes, after the transformation of the sample to the generalized axis, can be defined as:

$$E_{\langle I,y \rangle}^{*} = \sum_{v=1}^{V} \{1 \mid \exists s, p = 1, 2, \ldots, S,\ p \neq s:\ I^s \in g_v,\ I^p \in g_v,\ I^s = I^p,\ y^s \neq y^p\}.$$

In the best case this indicator equals zero, when there are no collision points; in the worst case its maximum value will not exceed 0.5S.

The probability estimate (frequency) of collision points in which instances belong to different classes, after the transformation of the sample to the generalized axis, can be expressed by the formula:

$$P_{\langle I,y \rangle}^{*} = \frac{2 E_{\langle I,y \rangle}^{*}}{S}.$$

The corrected number of collision points in which instances belong to different classes, after the transformation of the sample to the generalized axis, is defined as:

$$E_{\langle I,y \rangle}^{*\prime} = E_{\langle I,y \rangle}^{*} - E_{\langle x,y \rangle}^{*},$$

where E*_{⟨x,y⟩} is the number of collision points in which instances belong to different classes in the initial sample:

$$E_{\langle x,y \rangle}^{*} = \sum_{v=1}^{V} \{1 \mid \exists s, p = 1, 2, \ldots, S,\ p \neq s:\ x^s \in g_v,\ x^p \in g_v,\ y^s \neq y^p,\ \forall j = 1, 2, \ldots, N:\ x_j^s = x_j^p\}.$$

The indicator E*′_{⟨I,y⟩} characterizes the quality of the transformation to the generalized axis more accurately, because it eliminates the errors already present in the sample. In the best case it equals zero, when there are no collisions; in the worst case its maximum value will not exceed 0.5S.

The corrected probability estimate (frequency) of collision points in which instances belong to different classes, after the transformation of the sample to the generalized axis, can be obtained by the formula:

$$P_{\langle I,y \rangle}^{*\prime} = \frac{2 E_{\langle I,y \rangle}^{*\prime}}{S}.$$

The total number of instances in collision points in which instances belong to different classes, after the transformation of the sample to the generalized axis, is suggested to be calculated by the formula:

$$\hat{E}_{\langle I,y \rangle} = \sum_{v=1}^{V} \{|g_v| \mid \exists s, p = 1, 2, \ldots, S,\ p \neq s:\ I^s \in g_v,\ I^p \in g_v,\ I^s = I^p,\ y^s \neq y^p\}.$$

The greater the value of this indicator, the worse the separability of instances on the generalized axis. In the best case it equals zero; in the worst case it will not exceed the number of instances in the sample, S.

The probability estimate of an instance falling into a collision point in which instances belong to different classes, after the transformation of the sample to the generalized axis, can be obtained by the formula:

$$\hat{P}_{\langle I,y \rangle} = \frac{\hat{E}_{\langle I,y \rangle}}{S}.$$
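Because instances with equal coordinates can be grouped by hashing, these group-collision indicators are cheap to compute; a sketch (function and variable names are illustrative):

```python
from collections import Counter, defaultdict

def group_collision_indicators(I, y):
    """E*, P*, E-hat and P-hat on the generalized axis, as defined above."""
    S = len(I)
    sizes = Counter(I)                      # instances per coordinate
    classes = defaultdict(set)
    for coord, label in zip(I, y):
        classes[coord].add(label)
    mixed = [c for c in sizes if len(classes[c]) > 1]
    E_star = len(mixed)                     # collision points with mixed classes
    E_hat = sum(sizes[c] for c in mixed)    # instances in such points
    return E_star, 2.0 * E_star / S, E_hat, E_hat / S
```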

The total number of instances in collision points of the initial sample in which instances belong to different classes is proposed to be defined as:

$$\hat{E}_{\langle x,y \rangle} = \sum_{v=1}^{V} \{|g_v| \mid \exists s, p = 1, 2, \ldots, S,\ p \neq s:\ x^s \in g_v,\ x^p \in g_v,\ y^s \neq y^p,\ \forall j = 1, 2, \ldots, N:\ x_j^s = x_j^p\}.$$

The greater the value of this indicator, the worse the separability of instances of the initial sample. In the best case it equals zero; in the worst case it will not exceed the number of instances in the sample, S.

The probability estimate of instance collisions in the sample in which instances belong to different classes can be obtained from the formula:

$$\hat{P}_{\langle x,y \rangle} = \frac{\hat{E}_{\langle x,y \rangle}}{S}.$$

The number of pairwise collisions of instances of different classes after the transformation of the sample to the generalized axis is proposed to be determined as:

$$E_{\langle I,y \rangle} = \sum_{s=1}^{S} \sum_{p=s+1}^{S} \{y^s \neq y^p \mid I^s = I^p\}.$$

In the best case this indicator is zero, when there are no collisions; in the worst case its value will not exceed S(S − 1).

The probability estimate (frequency) of pairwise collisions of instances of different classes after the transformation of the sample to the generalized axis can be calculated as follows:

$$P_{\langle I,y \rangle} = \frac{E_{\langle I,y \rangle}}{S(S-1)}.$$

The corrected number of pairwise collisions of instances of different classes after the transformation of the training and (or) test sample to the generalized axis is proposed to be determined as:

$$E_{\langle I,y \rangle}^{\prime} = E_{\langle I,y \rangle} - E_{\langle x,y \rangle},$$

where E_{⟨x,y⟩} is the number of pairwise collisions of instances of different classes in the original sample:

$$E_{\langle x,y \rangle} = \sum_{s=1}^{S} \sum_{p=s+1}^{S} \{y^s \neq y^p \mid \forall j = 1, 2, \ldots, N:\ x_j^s = x_j^p\}.$$

The indicator E′_{⟨I,y⟩}, in comparison with the previous indicator, characterizes the quality of the transformation to the generalized axis more accurately, because it eliminates the errors present in the original sample. In the best case it equals zero, when there are no collisions; in the worst case its maximum value will not exceed S(S − 1).
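The pairwise indicators admit a direct O(S²) computation; a sketch:

```python
def pairwise_collisions(I, y):
    """E and P: pairs with equal coordinates but different classes."""
    S, E = len(I), 0
    for s in range(S):
        for p in range(s + 1, S):
            if I[s] == I[p] and y[s] != y[p]:
                E += 1
    return E, E / (S * (S - 1))
```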

The corrected probability estimate (frequency) of pairwise collisions of instances of different classes after the transformation of the sample to the generalized axis can be defined by the formula:

$$P_{\langle I,y \rangle}^{\prime} = \frac{E_{\langle I,y \rangle}^{\prime}}{S(S-1)}.$$

The average number of clusters per class on the generalized axis can be calculated by the formula k̄ = k / K, where k is the number of clusters of different classes on the generalized axis.

To determine k, the instances ⟨I^s, y^s⟩ need to be ordered ascendingly by their position on the generalized axis. Then, looking from left to right, clusters are identified as the intervals of the one-dimensional axis all instances of each of which belong to only one class.

The smaller the number of such clusters, the simpler the partition of the generalized axis.

In the best case, when the classes are compact, i.e. k = K, this indicator equals one.

The greater the value of this indicator, the worse the separability of instances on the generalized axis. In the worst case, where each instance falls into a separate cluster, its value will be k̄ = S / K.
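Counting the clusters reduces to counting class changes after sorting by the coordinate; a sketch (ties between instances of different classes at the same coordinate are left to the sort order, which is a simplification):

```python
def cluster_count(I, y, K):
    """k and k_bar = k/K: single-class runs along the sorted generalized axis."""
    order = sorted(range(len(I)), key=lambda s: I[s])
    k, prev = 0, None
    for s in order:
        if y[s] != prev:        # a class change opens a new cluster
            k += 1
            prev = y[s]
    return k, k / K
```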

The minimum distance between instances of different classes on the generalized axis is proposed to be determined by the formula:

$$R_{\min} = \min_{s=1,\ldots,S;\ p=s+1,\ldots,S} \{|I^s - I^p| \mid y^s \neq y^p\}.$$

The greater the value of this indicator, the better the classes are separated on the generalized axis.

The maximum distance between instances of one class on the generalized axis is proposed to be determined by the formula:

$$R_{\max} = \max_{s=1,\ldots,S;\ p=s+1,\ldots,S} \{|I^s - I^p| \mid y^s = y^p\}.$$

The smaller the value of this indicator, the more compactly the instances of each class are positioned on the generalized axis.

The average ratio of distances on the generalized axis and in the original feature space is proposed to be calculated by the formula:

$$A = \frac{1}{0.5\, S(S-1)\, \bar{R}_{\max}} \sum_{s=1}^{S} \sum_{p=s+1}^{S} \frac{|I^s - I^p|}{R(x^s, x^p)},$$

where

$$\bar{R}_{\max} = \max_{s=1,\ldots,S;\ p=s+1,\ldots,S} \left\{ \frac{|I^s - I^p|}{R(x^s, x^p)} \right\},\ R(x^s, x^p) > 0,$$

$$R_{\max}^{*} = \max_{s=1,\ldots,S;\ p=s+1,\ldots,S} \{|I^s - I^p|\}, \qquad R_{\max} = \max_{s=1,\ldots,S;\ p=s+1,\ldots,S} \{R(x^s, x^p)\},$$

$$R(x^s, x^p) = \sqrt{\sum_{j=1}^{N} (x_j^s - x_j^p)^2}.$$

The greater the value of this indicator, the better, on average, the transformation to the generalized axis reflects the location of instances in the original space and the better the separability of instances on the generalized axis.

The average of the relative distance products on the generalized axis and in the original feature space:

$$\bar{A} = \frac{\sum_{s=1}^{S} \sum_{p=s+1}^{S} |I^s - I^p|\, R(x^s, x^p)}{0.5\, S(S-1)\, R_{\max}^{*} R_{\max}}.$$

This indicator varies from zero to one: the greater its value, the better, on average, the transformation to the generalized axis reflects the location of instances in the original feature space.
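Both indicators can be accumulated in a single pass over the instance pairs; a sketch following the reconstructed formulas above (so its exact form shares their uncertainty):

```python
import numpy as np

def distance_preservation(I, X):
    """A and A_bar over all instance pairs (O(S^2) time)."""
    I = np.asarray(I, dtype=float)
    X = np.asarray(X, dtype=float)
    S = len(I)
    ratio_sum = ratio_max = prod_sum = dI_max = dX_max = 0.0
    for s in range(S):
        for p in range(s + 1, S):
            dI = abs(I[s] - I[p])
            dX = float(np.linalg.norm(X[s] - X[p]))
            dI_max, dX_max = max(dI_max, dI), max(dX_max, dX)
            if dX > 0:
                ratio_sum += dI / dX
                ratio_max = max(ratio_max, dI / dX)
            prod_sum += dI * dX
    pairs = 0.5 * S * (S - 1)
    A = ratio_sum / (pairs * ratio_max) if ratio_max > 0 else 0.0
    A_bar = (prod_sum / (pairs * dI_max * dX_max)
             if dI_max * dX_max > 0 else 0.0)
    return A, A_bar
```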

The indicator of the feasibility of establishing the generalized axis:

$$G = \frac{\min\limits_{j=1,2,\ldots,N} \{k_{\langle x_j, y \rangle}\}}{k},$$

where k_{⟨x_j,y⟩} is the number of intervals of different classes on the axis of feature x_j.

In the best case this indicator equals S / K, and in the worst case K / S. If the indicator is greater than one, the use of the generalized axis is feasible; otherwise the generalized axis can be replaced with the original feature characterized by the smallest number of intervals of different classes.

4 THE COMPARISON CRITERIA OF GENERALIZED AXIS TRANSFORMATIONS

On the basis of the indicators introduced in the previous section, which characterize the basic properties of transformations to the generalized axis, it is possible to define criteria for comparing transformations: criteria for the transformation process and criteria for evaluating the quality of transformation results.

The criteria for evaluating the transformation process are proposed as follows:

- the combined criterion of the minimum of time and memory for transforming one instance: F₁ = t^s m^s → min;

- the combined criterion of the minimum of time and memory for determining the transformation parameters on the training sample: F₂ = λtm → min;

- the integral criterion: F₃ = t^s m^s + (λ/S)tm → min.

The criteria for evaluating the quality of results of transformations:

- the criterion of the minimum probability of group collisions of instances: F₄ = P*⟨I,y⟩ → min;

- the criterion of the minimum probability of pairwise collisions of instances: F₅ = P⟨I,y⟩ → min;

- the combined criterion of the minimum probability of pairwise and group collisions:

$$F_6 = \frac{P_{\langle I,y \rangle}^{*} + P_{\langle I,y \rangle}}{2} \to \min;$$

- the criterion of the maximum of class compactness-separability: F₇ = k̄ → min;

- the integral criterion of the minimum of collisions and compactness-separability of classes:

$$F_8 = \frac{\bar{k}\,(P_{\langle I,y \rangle}^{*} + P_{\langle I,y \rangle})}{2} \to \min,\ \bar{k} > 0;$$

- the integral criterion of the minimum of collisions, the maximum of compactness-separability of classes, and the maximum of the average of relative distance products on the generalized axis and in the original feature space:

$$F_9 = \frac{\bar{k}\,(P_{\langle I,y \rangle}^{*} + P_{\langle I,y \rangle})}{1 + \bar{A}\, e^{-A+1}} \to \min;$$

- the integral criterion of the minimum of collisions, the maximum of the feasibility of establishing the generalized axis and of compactness-separability of classes, and the maximum of the average of relative distance products on the generalized axis and in the original feature space:

$$F_{10} = \frac{P_{\langle I,y \rangle}^{*} + P_{\langle I,y \rangle}}{G + \bar{A}\, e^{-A+1}} \to \min.$$
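Assembling the criteria from the indicators is then straightforward; a sketch with the argument list spelled out (all names are illustrative, and the formulas follow the reconstructions above):

```python
import math

def criteria(t_s, m_s, lam, t, m, S, P_star, P_pair, k_bar, A, A_bar, G):
    """Criteria F1-F10 as reconstructed above; all are to be minimized."""
    e_term = A_bar * math.exp(-A + 1)          # shared distance-quality term
    return {
        "F1": t_s * m_s,
        "F2": lam * t * m,
        "F3": t_s * m_s + lam / S * t * m,
        "F4": P_star,
        "F5": P_pair,
        "F6": (P_star + P_pair) / 2,
        "F7": k_bar,
        "F8": k_bar * (P_star + P_pair) / 2,
        "F9": k_bar * (P_star + P_pair) / (1 + e_term),
        "F10": (P_star + P_pair) / (G + e_term),
    }
```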

5 EXPERIMENTS AND RESULTS

The proposed transformations to the generalized axis, as well as the indicators characterizing their properties, have been implemented as software and experimentally studied in solving practical problems of technical and medical diagnosis and of automatic classification, whose characteristics are given in Table 1 [3].

A fragment of the results of the experiments studying the transformations to the generalized axis is shown in Table 2.


The conducted experiments confirmed the efficiency and practical suitability of the developed mathematical tools. The experiments have shown that the proposed transformations allow the data sample dimensionality to be reduced significantly.

The developed indicators of transformation quality allow selecting the best transformation for the corresponding task, thereby providing data dimensionality reduction as well as class separability improvement.

The proposed transformations can be recommended for use in the construction of diagnostic and recognition models by precedents, as well as for the formation of training samples from source samples of large volume.

CONCLUSION

The urgent problem of developing mathematical support for data dimensionality reduction was solved in this paper. Its results can be used to automate the construction of diagnostic and recognition models by precedents.

The scientific novelty of the results consists in the following:

- a set of rapid transformations from the original multidimensional space to a one-dimensional axis was proposed for the first time. It is based on the principles of hashing, takes into account the locations of instances in the feature space with respect to the class centers of gravity, and allows the feature weights to be determined and taken into account, thereby implicitly solving the problem of feature selection. Thus, the proposed transformations provide a solution both to the problem of constructing artificial features (feature extraction) and to the problem of selecting the most significant features (feature selection);

Table 1. Characteristics of initial data samples

| Initial sample characteristics | Gas-turbine air-engine blade diagnosis | Chronic obstructive bronchitis diagnosis | Agricultural plant recognition on the remote sensing data | Fisher Iris classification |
|---|---|---|---|---|
| S | 32 | 205 | 3226 | 150 |
| N | 513 | 28 | 256 | 4 |
| K | 2 | 2 | 3 | 3 |

Table 2. The fragment of the experimental results of studying the transformations to the generalized axis

| Best transformation characteristics | Gas-turbine air-engine blade diagnosis | Chronic obstructive bronchitis diagnosis | Agricultural plant recognition on the remote sensing data | Fisher Iris classification |
|---|---|---|---|---|
| Best transformation number | 2 | 1 | 4 | 1 |
| E*⟨I,y⟩ | 1 | 0 | 0 | 1 |
| P*⟨I,y⟩ | 0.0625 | 0 | 0 | 0.013333 |
| E⟨I,y⟩ | 1 | 0 | 0 | 9 |
| P⟨I,y⟩ | 0.0010081 | 0 | 0 | 0.00040268 |
| E*⟨x,y⟩ | 0 | 0 | 0 | 0 |
| E*′⟨I,y⟩ | 1 | 0 | 0 | 1 |
| E⟨x,y⟩ | 0 | 0 | 0 | 0 |
| E′⟨I,y⟩ | 1 | 0 | 0 | 9 |
| Ê⟨x,y⟩ | 0 | 0 | 0 | 0 |
| P̂⟨x,y⟩ | 0 | 0 | 0 | 0 |
| Ê⟨I,y⟩ | 2 | 0 | 0 | 10 |
| P̂⟨I,y⟩ | 0.0625 | 0 | 0 | 0.066667 |
| k | 15 | 72 | 1773 | 11 |
| Rmax | 70.114 | 918.99 | 7.5551 | 7.0852 |
| R*max | 1.5228·10⁹ | 1.6927·10⁹ | 3.3217 | 1.5126·10⁹ |
| Rmin | 0 | 1056 | 1.1437·10⁻⁷ | 0 |
| Rmax | 1.5228·10⁹ | 1.6833·10⁹ | 3.318 | 609746944 |
| A | 0.82534 | 0.69432 | 0.37993 | 0.88456 |
| Ā | 0.095099 | 0.098493 | 0.028541 | 0.18511 |
| G | 0.2 | 0.69444 | 0.56007 | 1 |
| t^s | 0.00097501 | 0.00053269 | 3.8686·10⁻⁵ | 0.000416 |
| m^s | 4309 | 262.83 | 2057.9 | 82.133 |
| t | 0.2184 | 0.093601 | 4.524 | 0.1092 |
| m | 333484 | 109644 | 6764132 | 20340 |
| λ | 1539 | 84 | 1024 | 12 |
| F₁ | 4.2013 | 0.14001 | 0.079613 | 0.034168 |
| F₂ | 1.1209·10⁸ | 8.6207·10⁵ | 3.1336·10¹⁰ | 0.26654·10⁵ |
| F₃ | 3.5028·10⁶ | 4205.4 | 9.7134·10⁶ | 177.73 |
| F₄ | 0.0010081 | 0 | 0 | 0.00040268 |
| F₅ | 0.0625 | 0 | 0 | 0.013333 |
| F₆ | 0.031754 | 0 | 0 | 0.006868 |
| F₇ | 7.5 | 36 | 591 | 3.6667 |
| F₈ | 0.23816 | 0 | 0 | 0.025183 |
| F₉ | 0.42786 | 0 | 0 | 0.041701 |
| F₁₀ | 0.20274 | 0 | 0 | 0.011373 |


- a complex of indicators characterizing the properties of transformations from the multidimensional space to the generalized axis was proposed for the first time. On the basis of the proposed indicators a set of criteria is defined, which facilitates comparison and selection of the best transformations and of the results of their work when solving diagnosis and recognition problems by precedents.

The practical significance of the obtained results is that:

- the software realizing the proposed transformations and the indicators characterizing their properties was developed. Its usage allows automating the data dimensionality reduction and the analysis of its results;

- an experimental investigation of the proposed transformations and of the indicators characterizing them was conducted in practical problem solving. The results of the research allow the proposed transformations to be recommended for use in practice for solving diagnosis and pattern recognition problems.

The prospects for further utilization of the results obtained in this work consist in the possibility of using them to automate the formation of training and testing samples (data dimensionality reduction through decreasing the number of precedents by extracting the most important precedents).

The work was performed as part of the state budget scientific research project of Zaporizhzhya National Technical University «Intelligent information technologies of automation of designing, simulation, control and diagnosis of manufacturing processes and systems» (state registration number 0112U005350).


The article was received by the editorial office on 23.04.2014.


REFERENCES

1. Jensen R., Shen Q. Computational intelligence and feature selection: rough and fuzzy approaches. Hoboken, John Wiley & Sons, 2008, 339 p.

2. Babak O. V. Reshenie nekotorykh zadach obrabotki dannykh na osnove metoda generalnoy obobshchennoy peremennoy [The solution of some data processing problems on the basis of the general generalized variable method], Problemy upravleniya i informatiki [Problems of Control and Informatics], 2002, No. 6, pp. 79-91.

3. Subbotin S. A., Oleinik An. A., Gofman E. A. et al., ed. S. A. Subbotin. Intellektualnye informatsionnye tekhnologii proektirovaniya avtomatizirovannykh sistem diagnostirovaniya i raspoznavaniya obrazov : monografiya [Intelligent information technologies of automated diagnostic and pattern recognition systems design : monograph]. Kharkov, Company «SMIT», 2012, 318 p.

4. Lee T. W. Independent component analysis: theory and applications. Berlin, Springer, 2010, 248 p.

5. Lee J. A., Verleysen M. Nonlinear dimensionality reduction. New York, Springer, 2007, 308 p.

6. Cunningham P. Dimension reduction : technical report UCD-CSI-2007-7. Dublin, University College Dublin, 2007, 24 p.

7. Jiang Y., Zhang R., Liu G. [et al.] Multifactor dimensionality reduction for detecting haplotype-haplotype interaction, Fuzzy systems and knowledge discovery : Sixth international conference, Tianjin, 14-16 August 2009, proceedings, Los Alamitos, IEEE, 2009, pp. 241-245.

8. Kulis B., Surendran A. C., Platt J. C. Fast low-rank semidefinite programming for embedding and clustering [Electronic resource], Artificial intelligence and statistics : Eleventh international conference, San Juan, 21-24 March 2007 : proceedings, eds.: M. Meila, X. Shen. Madison, Omnipress, 2007, 8 p.

9. Babak O. V., Tatarinov A. E. Ob odnom podkhode k resheniyu zadach klassifikatsii v usloviyakh nepolnoty informatsii [An approach to solving classification problems under incomplete information], Kibernetika i sistemnyy analiz [Cybernetics and Systems Analysis], 2005, No. 6, pp. 116-123.


10. Vasilev V. I., Shevchenko A. I., Esh S. N. Printsip reduktsii v zadachakh obnaruzheniya zakonomernostey : monografiya [The principle of reduction in the problems of regularity detection : monograph]. Donetsk, Nauka i osvita, 2009, 340 p.

11. Yu S. Feature selection and classifier ensembles: a study on hyperspectral remote sensing data : proefschrift ... doctor in de wetenschappen. Antwerpen, Universitaire Instelling Antwerpen, 2003, 124 p.

12. Ji J., Li J., Yan Sh., Zhang B. et al. Super-bit locality-sensitive hashing, Advances in Neural Information Processing Systems, eds. P. Bartlett et al., 2012, Vol. 25, pp. 108-116.

13. Andoni A., Indyk P. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions, Communications of the ACM, 2008, Vol. 51, No. 1, pp. 117-122.

14. Yang X., Hu Y. A scalable index architecture for supporting multi-dimensional range queries in peer-to-peer networks, Collaborative computing: networking, applications and worksharing, International conference CollaborateCom-2006, Atlanta 17-20 November 2006, proceedings, pp. 1-10.

15. Andoni A., Datar M., Immorlica N., Indyk P., Mirrokni V. Locality-sensitive hashing scheme based on p-stable distributions, Nearest neighbor methods in learning and vision: theory and practice, eds.: T. Darrell, P. Indyk, G. Shakhnarovich. MIT Press, 2006, pp. 55-67.
