
Вычислительные технологии (Computational Technologies)

Vol. 16, No. 2, 2011

Data-reducing principal component analysis (PCA) is NP-hard even under the simplest interval uncertainty

M. KOSHELEV
Baylor College of Medicine, Houston, USA
e-mail: misha680hnl@gmail.com

Principal component analysis (PCA) is one of the most widely used methods for reducing the data size. In practice, data is known with uncertainty, so we need to apply PCA to this uncertain data. Several authors developed algorithms for PCA under interval uncertainty. It is known that in general, the problem of PCA under interval uncertainty is NP-hard.

The usual NP-hardness proof uses situations in which all measurement results come with interval uncertainty. In practice, often, most measurements are reasonably accurate, and only a few (or even one) variables are measured with significant uncertainty. When we consider such situations, will the PCA still be NP-hard?

In this paper, we prove that even in the simplest case when for each object, at most one data point comes with interval uncertainty, the PCA problem is still NP-hard.

Keywords: principal component analysis, interval uncertainty, NP-hard.

1. Data reduction, PCA, and interval uncertainty: a brief reminder

Need to reduce the data size. In many real-life situations, for each object and/or situation $k$, we measure a large number $d$ of variables. As a result of these measurements, we get the values $x_{k,1}, \ldots, x_{k,d}$ corresponding to different objects $k = 1, \ldots, n$. When the number of variables $d$ is large, processing all this data requires a lot of computation time.

Example. Such a large amount of data occurs in 3-D medical imaging. For example, in functional magnetic resonance imaging (fMRI), for each of many patients, we measure the intensity values at tens of thousands of voxels at dozens of moments of time; see, e.g., [3, 4, 15, 17]. As a result, processing all this data requires a large amount of time-consuming computations.

Possibility to reduce the data size. Often, the measured values are strongly dependent on each other. In such situations, it is possible to use this dependence to reduce the data size.

Principal component analysis (PCA): a brief reminder. For the case of linear dependence, the technique for correspondingly reducing the size of the data is called principal component analysis (PCA, for short; see, e.g., [18]). This technique was invented by the famous statistician K. Pearson in the early 20th century [28].

The use of the original data means, in effect, that we represent each data vector $x_k = (x_{k,1}, x_{k,2}, \ldots, x_{k,i}, \ldots, x_{k,d})$ as a linear combination of the basis vectors $u_1 = (1, 0, \ldots, 0)$, $u_2 = (0, 1, 0, \ldots, 0)$, ..., $u_i = (0, \ldots, 0, 1, 0, \ldots, 0)$, ..., $u_d = (0, \ldots, 0, 1)$:

$$x_k = x_{k,1} \cdot u_1 + x_{k,2} \cdot u_2 + \ldots + x_{k,i} \cdot u_i + \ldots + x_{k,d} \cdot u_d. \quad (1)$$

The basis vectors $u_i$ are orthonormal in the sense that different vectors are orthogonal, i.e., $(u_i, u_j) = 0$ for $i \neq j$, where

$$(a, b) = a_1 \cdot b_1 + a_2 \cdot b_2 + \ldots + a_i \cdot b_i + \ldots + a_d \cdot b_d, \quad (2)$$

and each of these vectors has a unit Euclidean norm $\|u_i\|^2 = 1$, where for every vector $a = (a_1, \ldots, a_d)$, its Euclidean norm $\|a\|^2$ is defined by the formula

$$\|a\|^2 \stackrel{\text{def}}{=} (a, a) = a_1^2 + a_2^2 + \ldots + a_i^2 + \ldots + a_d^2. \quad (3)$$

The main idea behind PCA is that instead of using the standard orthonormal basis, we find a different orthonormal basis $e_i = (e_{i,1}, \ldots, e_{i,d})$ for which $(e_i, e_j) = 0$ for $i \neq j$ and $e_i^2 = (e_i, e_i) = 1$. With respect to this basis, each data vector $x_k$ can be represented as

$$x_k = y_{k,1} \cdot e_1 + y_{k,2} \cdot e_2 + \ldots + y_{k,i} \cdot e_i + \ldots + y_{k,d} \cdot e_d, \quad (4)$$

where, due to orthonormality, we have, for every i,

$$(e_i, x_k) = y_{k,1} \cdot (e_i, e_1) + y_{k,2} \cdot (e_i, e_2) + \ldots + y_{k,i-1} \cdot (e_i, e_{i-1}) + y_{k,i} \cdot (e_i, e_i) + y_{k,i+1} \cdot (e_i, e_{i+1}) + \ldots + y_{k,d} \cdot (e_i, e_d) =$$
$$= y_{k,1} \cdot 0 + y_{k,2} \cdot 0 + \ldots + y_{k,i-1} \cdot 0 + y_{k,i} \cdot 1 + y_{k,i+1} \cdot 0 + \ldots + y_{k,d} \cdot 0 = y_{k,i}, \quad (5)$$

hence

$$y_{k,i} = (e_i, x_k) = e_{i,1} \cdot x_{k,1} + e_{i,2} \cdot x_{k,2} + \ldots + e_{i,j} \cdot x_{k,j} + \ldots + e_{i,d} \cdot x_{k,d}. \quad (6)$$

Then, for each data point $x_k$, we only use the first $p < d$ values $y_{k,1}, \ldots, y_{k,p}$.

As a result, instead of the original vector (4), we use an approximate value

$$\widetilde{x}_k = y_{k,1} \cdot e_1 + y_{k,2} \cdot e_2 + \ldots + y_{k,i} \cdot e_i + \ldots + y_{k,p} \cdot e_p. \quad (7)$$

We want to select the vectors $e_1, \ldots, e_p$ for which $\widetilde{x}_k \approx x_k$ for all objects $k = 1, \ldots, n$, i.e., for which $\widetilde{x}_{k,i} \approx x_{k,i}$ for all objects $k$ and for all variables $i$.

The values $x_{k,i}$ form an $(n \cdot d)$-dimensional vector $x$. Similarly, the values $\widetilde{x}_{k,i}$ form an $(n \cdot d)$-dimensional vector $\widetilde{x}$. We want each coordinate $x_{k,i}$ of the vector $x$ to be close to the corresponding coordinate of the vector $\widetilde{x}$. In other words, we want the approximation vector $\widetilde{x}$ to be as close to the original data vector $x$ as possible. A reasonable measure of distance between the two vectors is the Euclidean distance

$$\|\widetilde{x} - x\| = \sqrt{\sum_{k=1}^{n} \sum_{i=1}^{d} (\widetilde{x}_{k,i} - x_{k,i})^2}. \quad (8)$$

Thus, we should select the vectors $e_1, \ldots, e_p$ for which this distance $\|\widetilde{x} - x\|$ is the smallest possible.

This minimization formulation can be simplified if we take into account that the square root is a strictly increasing function and thus, minimizing the square root is equivalent to minimizing the sum of the squares

$$\|\widetilde{x} - x\|^2 = \sum_{k=1}^{n} \sum_{i=1}^{d} (\widetilde{x}_{k,i} - x_{k,i})^2. \quad (9)$$

Here, by the definition of the Euclidean norm,

$$\sum_{i=1}^{d} (\widetilde{x}_{k,i} - x_{k,i})^2 = \|\widetilde{x}_k - x_k\|^2, \quad (10)$$

so we arrive at the following precise formulation.

Select the vectors $e_1, \ldots, e_p$ in such a way that the mean squared difference between the original data vectors $x_k$ and the approximate vectors $\widetilde{x}_k$ is the smallest possible:

$$\text{minimize } \|\widetilde{x}_1 - x_1\|^2 + \|\widetilde{x}_2 - x_2\|^2 + \ldots + \|\widetilde{x}_k - x_k\|^2 + \ldots + \|\widetilde{x}_n - x_n\|^2. \quad (11)$$

Pearson already showed that this minimum is attained if we take, as $e_1, \ldots, e_p$, the eigenvectors of the covariance matrix

$$C_{i,j} = x_{1,i} \cdot x_{1,j} + x_{2,i} \cdot x_{2,j} + \ldots + x_{k,i} \cdot x_{k,j} + \ldots + x_{n,i} \cdot x_{n,j} \quad (12)$$

that correspond to the p largest eigenvalues.
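
To illustrate this construction, here is a small numerical sketch (in Python with NumPy; the array `X` and the function name are ours, not the paper's) that follows formulas (6), (7), and (12) literally: it builds the uncentered matrix $C_{i,j}$, takes the eigenvectors corresponding to the $p$ largest eigenvalues, and uses them to compress and approximately reconstruct the data. Note that, following the paper's formulation, the data is not mean-centered, unlike in the usual textbook version of PCA.

```python
import numpy as np

def pca_reduce(X, p):
    """Data-reducing PCA as in formulas (6), (7), (12):
    X is an n-by-d array (row k = object k), p < d is the reduced size."""
    C = X.T @ X                             # covariance-type matrix (12): C[i, j] = sum_k X[k, i] * X[k, j]
    eigvals, eigvecs = np.linalg.eigh(C)    # eigh: C is symmetric
    order = np.argsort(eigvals)[::-1][:p]   # indices of the p largest eigenvalues
    E = eigvecs[:, order]                   # columns e_1, ..., e_p
    Y = X @ E                               # reduced data: y_{k,i} = (e_i, x_k), formula (6)
    X_approx = Y @ E.T                      # approximation (7)
    return Y, X_approx

# tiny example: 4 objects, 3 strongly dependent variables, reduced to p = 2
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.1, 6.0],
              [0.5, 1.0, 1.6],
              [3.0, 6.0, 8.9]])
Y, X_approx = pca_reduce(X, p=2)
print(np.round(X - X_approx, 3))  # small residuals: 2 components describe the 3 variables well
```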

Comment. It should be mentioned that the same PCA technique is also used when we have a reasonably small data size d. In such situations, PCA is used to solve a different practical problem: namely, to find appropriate factors, i.e., combinations of variables which are the most relevant for a given process.

In this paper, however, we are mainly interested in the data-reducing applications of PCA.

Need to take interval uncertainty into account. In practice, measurements are never absolutely exact. In general, the measured values $\widetilde{x}_{k,i}$ are different from the actual (unknown) values $x_{k,i}$. In other words, the measurement inaccuracy is usually non-zero:

$$\Delta x_{k,i} \stackrel{\text{def}}{=} \widetilde{x}_{k,i} - x_{k,i} \neq 0. \quad (13)$$

In some cases, we know the probability distribution for the measurement inaccuracies $\Delta x_{k,i}$. However, frequently, we do not know this probability distribution. Often, the only information that we have about the measurement inaccuracy $\Delta x_{k,i}$ is the upper bound $\Delta_{k,i}$ on its absolute value:

$$|\Delta x_{k,i}| \leq \Delta_{k,i}. \quad (14)$$

After each such measurement, the only information that we have about $x_{k,i}$ is that it belongs to the interval

$$x_{k,i} \in \mathbf{x}_{k,i} = [\widetilde{x}_{k,i} - \Delta_{k,i},\ \widetilde{x}_{k,i} + \Delta_{k,i}]. \quad (15)$$

In other words, we get interval uncertainty (see, e.g., [25]). We need to take interval uncertainty into account when we use PCA to reduce the data size.

2. Data-reducing PCA under interval uncertainty: what is known

PCA under interval uncertainty: known algorithms. The need for PCA under interval uncertainty is well known. There exist several efficient algorithms for PCA under interval uncertainty; see, e.g., [1, 2, 6-8, 11-14, 16, 21-26, 30, 31] and references therein.

Most of these algorithms aim at the factor applications of PCA, but they can be used in data reduction as well.

Data-reducing PCA under interval uncertainty: towards a precise formulation of the problem. In data reduction, our objective is to decrease the size of the data as much as possible. In the usual PCA, we select the basis vectors $e_1, \ldots, e_p$ for which the corresponding sum of the squares (11) is the smallest possible.

Because of the interval uncertainty, we can now also select the values $x_{k,i}$ within the corresponding intervals.

For example, suppose that for almost all objects $k$, we know the exact values of $x_{k,i}$, and these exact values satisfy the property $x_{k,2} = x_{k,1}$. This means that the second quantity is redundant, and we can therefore reduce the data size by keeping only the values $x_{k,1}$.

This redundancy may not survive when we get more data, but as long as the assumption $x_{k,2} = x_{k,1}$ is confirmed by all the known data, it makes sense to use this assumption to reduce the data.

Suppose now that we add, to this data, a new object $k_0$ for which the values $x_{k_0,1}$ and $x_{k_0,2}$ are only known with interval uncertainty, i.e., for which, instead of the actual values $x_{k_0,i}$, we only know the intervals $\mathbf{x}_{k_0,1}$ and $\mathbf{x}_{k_0,2}$ of possible values. If these two intervals have a common point, this means that the new data is still consistent with the assumption that $x_{k,2} = x_{k,1}$, and thus, it still makes sense to use this assumption to reduce the data.

This example shows that it is reasonable to select both the vectors $e_i$ and the values $x_{k,i} \in \mathbf{x}_{k,i}$ for which the approximation is the best. Thus, we arrive at the following formulation.

Data-reducing PCA under interval uncertainty: a precise formulation of the problem. We are given intervals $\mathbf{x}_{k,i}$. We need to select the vectors $e_1, \ldots, e_p$ and the values $x_{k,i} \in \mathbf{x}_{k,i}$ in such a way that the mean square difference between the original data vectors $x_k$ and the approximate vectors $\widetilde{x}_k$ is the smallest possible:

$$\text{minimize } \|\widetilde{x}_1 - x_1\|^2 + \|\widetilde{x}_2 - x_2\|^2 + \ldots + \|\widetilde{x}_k - x_k\|^2 + \ldots + \|\widetilde{x}_n - x_n\|^2, \quad (16)$$

where

$$\widetilde{x}_k = (x_k, e_1) \cdot e_1 + \ldots + (x_k, e_i) \cdot e_i + \ldots + (x_k, e_p) \cdot e_p. \quad (17)$$
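
As an illustration of this formulation (not an algorithm from the cited literature), the following sketch evaluates the objective (16)-(17) for a tiny instance by brute force: it places a small grid inside every non-degenerate interval, and for each resulting data matrix computes the best rank-$p$ approximation error via the singular value decomposition (for a fixed matrix, the minimum of (16) over orthonormal $e_1, \ldots, e_p$ equals the sum of the squared singular values beyond the $p$ largest). The grid step, the helper names, and the use of SVD instead of the eigenvector formulation are our own choices; the sketch is exponential in the number of interval entries and is only meant to make the optimization problem concrete.

```python
import itertools
import numpy as np

def best_rank_p_error(X, p):
    """Minimum of (16) over the vectors e_1, ..., e_p for a fixed data matrix X:
    the sum of the squared singular values of X beyond the p largest ones."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s[p:] ** 2))

def interval_pca_bruteforce(lower, upper, p, grid=5):
    """Brute-force search over a grid inside each interval [lower, upper] (elementwise);
    exponential in the number of non-degenerate intervals -- illustration only."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    free = list(zip(*np.nonzero(upper > lower)))   # positions with genuine intervals
    choices = [np.linspace(lower[pos], upper[pos], grid) for pos in free]
    best_err, best_X = np.inf, None
    for combo in itertools.product(*choices):
        X = lower.copy()
        for pos, val in zip(free, combo):
            X[pos] = val
        err = best_rank_p_error(X, p)
        if err < best_err:
            best_err, best_X = err, X
    return best_err, best_X

# tiny instance: 3 objects, 2 variables; one entry is only known as the interval [0.9, 1.3]
lower = np.array([[1.0, 1.0], [2.0, 2.0], [1.0, 0.9]])
upper = np.array([[1.0, 1.0], [2.0, 2.0], [1.0, 1.3]])
err, X_best = interval_pca_bruteforce(lower, upper, p=1)
print(err, X_best)  # the interval entry is pushed to 1.0, making the columns proportional (error ~ 0)
```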

PCA under interval uncertainty is NP-hard: a conjecture. While the existing interval PCA algorithms are usually efficient, sometimes, they require a large amount of computation time. This empirical fact prompted a conjecture that the problem of PCA under interval uncertainty is NP-hard (see, e.g., [16, 23], also [19, 27] for formal definitions of NP-hardness).

This conjecture was also motivated by the fact that many similar statistical problems become NP-hard once we take interval uncertainty into account; even the problem of computing the range of the variance under interval uncertainty is NP-hard [9, 10, 20].

PCA under interval uncertainty is NP-hard: a proof. A part of the PCA problem is checking whether it is possible to achieve the exact data reduction, i.e., whether it is possible to find the vectors $e_1, \ldots, e_p$, $p < d$, and the values $x_{k,i}$ for which $\widetilde{x}_k = x_k$ for all objects $k$.

In mathematical terms, this means checking whether it is possible to select the "column" vectors

$$z_i = (x_{1,i}, \ldots, x_{k,i}, \ldots, x_{n,i}) \quad (18)$$

in such a way that they are linearly dependent, i.e., that there exists a vector $a = (a_1, \ldots, a_d) \neq 0$ for which for every $k$, we have

$$a_1 \cdot x_{k,1} + \ldots + a_i \cdot x_{k,i} + \ldots + a_d \cdot x_{k,d} = 0. \quad (19)$$

For a square matrix ($d = n$), the existence of such an $a$ is equivalent to the matrix being singular. Thus, for a square interval matrix with entries $\mathbf{x}_{k,i}$, the possibility of such a reduction is equivalent to the possibility of finding a singular matrix with entries $x_{k,i} \in \mathbf{x}_{k,i}$. It is known that checking for the existence of such a matrix (or, equivalently, checking whether all matrices with entries $x_{k,i} \in \mathbf{x}_{k,i}$ are non-singular) is NP-hard. This result, one of the first NP-hardness results in interval computations, was proved by S. Poljak and J. Rohn in [29] (see [19] for further similar results).

Thus, PCA under interval uncertainty is indeed NP-hard.
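
To make the Poljak-Rohn singularity problem concrete, here is a brute-force checker (our own illustration, not the construction from [29]). Because the determinant is affine in each individual entry, its minimum and maximum over an interval matrix are attained at "vertex" matrices whose entries are interval endpoints, so the interval matrix contains a singular matrix exactly when 0 lies between the smallest and the largest vertex determinant. The checker enumerates all $2^{n^2}$ vertices and is therefore usable only for very small $n$, which is exactly the point of the NP-hardness result.

```python
import itertools
import numpy as np

def interval_matrix_has_singular(lower, upper):
    """Check whether the interval matrix [lower, upper] (elementwise bounds on an n-by-n
    matrix) contains a singular matrix, by enumerating all 2^(n*n) endpoint ("vertex")
    matrices; the determinant is affine in each entry, so its range is set by the vertices."""
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    n = lower.shape[0]
    dets = []
    for corners in itertools.product((0, 1), repeat=n * n):
        A = np.where(np.array(corners).reshape(n, n) == 0, lower, upper)
        dets.append(np.linalg.det(A))
    dets = np.array(dets)
    # a singular matrix exists iff 0 lies between the smallest and the largest determinant
    return dets.min() <= 0.0 <= dets.max()

# example: only one entry is uncertain, yet the interval matrix contains a singular matrix
lower = [[1.0, -2.0], [1.0, 1.0]]
upper = [[1.0,  2.0], [1.0, 1.0]]
print(interval_matrix_has_singular(lower, upper))  # True: taking the uncertain entry equal to 1
```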

3. Realistic cases of interval-valued PCA and their computational complexity: formulation of the problem and the main result

The known NP-hardness result: reminder. The above result shows that, in general, the problem of PCA under interval uncertainty is NP-hard.

The general case is rare. The above proof is based on considering matrices in which all the entries are non-degenerate intervals.

In practice, however, often, most measurements are reasonably accurate, and only a few (or even one) variables are measured with significant uncertainty. In such situations, for each object k, we can safely assume that

— we know most of the values exactly, and

— only for a few $i$, we know the (non-degenerate) interval $\mathbf{x}_{k,i}$.

Natural question. When we consider such situations, will the interval-valued PCA still be NP-hard?

Simplest case. In particular, the same question about the computational complexity can be asked about the simplest case, when for each object k, at most one data point comes with interval uncertainty.

Our main result. Our result is that even for this simplest case, the data-reducing PCA problem under interval uncertainty is NP-hard.

Comment. Our proof will follow the main ideas from NP-hardness proofs described in [19].

4. Proof of the main result

What is NP-hard: a brief informal reminder. Crudely speaking, the fact that a problem $p_0$ is NP-hard means that every problem $p$ (from a reasonable class NP) can be reduced to this problem $p_0$, i.e., informally, that this problem $p_0$ is the toughest possible.

How NP-hardness is usually proved: by reduction from a known NP-hard problem. The usual way to prove NP-hardness of a problem $p_0$ is to show that a known NP-hard problem $p_k$ can be reduced to a particular case of our problem $p_0$.

Since the problem $p_k$ is NP-hard, this means that every problem $p$ from the class NP can be reduced to this problem $p_k$. Since the problem $p_k$ can be, in turn, reduced to $p_0$, this means that every problem $p$ from the class NP can be reduced to $p_0$. By the definition of NP-hardness, this means that our problem $p_0$ is indeed NP-hard.

Selection of the known NP-hard problem. As the known NP-hard problem $p_k$, we take the following problem:

— we are given several positive integers $s_1 > 0$, ..., $s_m > 0$;

— we need to find the signs $\varepsilon_l \in \{-1, 1\}$ for which the corresponding signed sum of the given integers is equal to 0:

$$\sum_{l=1}^{m} \varepsilon_l \cdot s_l = 0. \quad (20)$$


Possible intuitive interpretation of the problem $p_k$. The requirement (20) can be reformulated as

$$\sum_{l:\, \varepsilon_l = 1} s_l - \sum_{l':\, \varepsilon_{l'} = -1} s_{l'} = 0. \quad (21)$$

If we move all the negative terms in the signed sum (21) to the other side, we get the equality between the sum of all the values to which we assigned a plus sign and the sum of all the values to which we assigned a minus sign:

$$\sum_{l:\, \varepsilon_l = 1} s_l = \sum_{l':\, \varepsilon_{l'} = -1} s_{l'}. \quad (22)$$

The resulting problem allows a simple interpretation, e.g., as the problem of dividing an inheritance into two equal parts:

— we have $m$ objects with known costs $s_l$;

— we must divide them into two groups of equal cost.
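
For concreteness, here is a tiny brute-force solver for this known NP-hard problem (a sketch of ours; the function name is arbitrary). It simply tries all $2^m$ sign assignments, which is exactly the exponential behavior that the NP-hardness reduction below exploits.

```python
import itertools

def signed_sum_solution(s):
    """Return signs eps_l in {-1, +1} with sum(eps_l * s_l) == 0, or None if none exist.
    Brute force over all 2^m sign vectors -- fine for small m only."""
    for eps in itertools.product((-1, 1), repeat=len(s)):
        if sum(e * v for e, v in zip(eps, s)) == 0:
            return list(eps)
    return None

print(signed_sum_solution([3, 1, 1, 2, 2, 1]))  # one valid split: {3, 2} vs {1, 1, 2, 1}
print(signed_sum_solution([2, 3]))              # None: no equal split exists
```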

Reduction: a reminder. To prove the NP-hardness of our interval-valued PCA problem $p_0$, we want to reduce the problem $p_k$ to our problem $p_0$.

To reduce means that for every instance $s_1, \ldots, s_m$ of the problem $p_k$, we must form a case of the interval PCA problem $p_0$ from whose solution we will be able to extract the solution to the original instance.

How we reduce. In the original problem, we have $m$ positive integers $s_1, \ldots, s_m$ (and we must find the signs $\varepsilon_l$ for which the signed sum is zero).

In the reduction, we form $n = 2m + 1$ objects with $d = m + 1$ variables and the following data:

— for the first $m$ objects $l = 1, \ldots, m$, we take

$$\mathbf{x}_{l,l} = [-1, 1], \quad \mathbf{x}_{l,m+1} = [1, 1], \quad \mathbf{x}_{l,i} = [0, 0] \text{ for } i \neq l, m+1; \quad (23)$$

— for the next $m$ objects $k = m + l$, where $l = 1, \ldots, m$, we take

$$\mathbf{x}_{m+l,l} = [1, 1], \quad \mathbf{x}_{m+l,m+1} = [-1, 1], \quad \mathbf{x}_{m+l,i} = [0, 0] \text{ for } i \neq l, m+1; \quad (24)$$

— finally, for the last object $k = 2m + 1$, we take

$$\mathbf{x}_{2m+1,l} = [s_l, s_l] \text{ for all } l \leq m, \quad \mathbf{x}_{2m+1,m+1} = [0, 0]. \quad (25)$$
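
The construction (23)-(25) is easy to write down explicitly. The sketch below (ours; the names are arbitrary) returns the lower and upper endpoint matrices of the $(2m+1) \times (m+1)$ interval data set built from $s_1, \ldots, s_m$; note that every object (row) has at most one non-degenerate interval, as required.

```python
import numpy as np

def reduction_intervals(s):
    """Build the interval data of the reduction (23)-(25) for the instance s_1, ..., s_m.
    Returns (lower, upper): elementwise endpoints of a (2m+1) x (m+1) interval matrix."""
    m = len(s)
    lower = np.zeros((2 * m + 1, m + 1))
    upper = np.zeros((2 * m + 1, m + 1))
    for l in range(m):
        # first group (23): x_{l,l} = [-1, 1], x_{l,m+1} = [1, 1]
        lower[l, l], upper[l, l] = -1.0, 1.0
        lower[l, m], upper[l, m] = 1.0, 1.0
        # second group (24): x_{m+l,l} = [1, 1], x_{m+l,m+1} = [-1, 1]
        lower[m + l, l], upper[m + l, l] = 1.0, 1.0
        lower[m + l, m], upper[m + l, m] = -1.0, 1.0
        # last object (25): x_{2m+1,l} = [s_l, s_l]
        lower[2 * m, l] = upper[2 * m, l] = float(s[l])
    return lower, upper

lower, upper = reduction_intervals([3, 1, 2])  # m = 3: a 7 x 4 interval matrix
print(np.count_nonzero(upper > lower))         # 6 = 2m non-degenerate intervals, one per uncertain row
```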

Towards proving that this is indeed a reduction: what does it mean to have a solution to the instance of the interval PCA problem. In the interval PCA problem, we check whether the columns of the data matrix are linearly dependent, i.e., whether there exist values $x_{k,i}$ from the corresponding intervals $\mathbf{x}_{k,i}$ and values $a = (a_1, \ldots, a_{m+1})$, not all equal to 0, for which the corresponding linear combination is equal to 0 for all objects $k$:

$$a_1 \cdot x_{k,1} + \ldots + a_i \cdot x_{k,i} + \ldots + a_{m+1} \cdot x_{k,m+1} = 0. \quad (26)$$

Proving reduction: let us first consider the second group of $m$ equations. For each $l \leq m$, from the $(m+l)$-th equation (24), we conclude that for some $x_{m+l,m+1} \in [-1, 1]$, we get

$$a_l + a_{m+1} \cdot x_{m+l,m+1} = 0, \quad (27)$$

i.e.,

$$a_l = -a_{m+1} \cdot x_{m+l,m+1}. \quad (28)$$

Thus, the absolute value of $a_l$ is equal to the product of the absolute value of $a_{m+1}$ and the absolute value of $x_{m+l,m+1}$:

$$|a_l| = |a_{m+1}| \cdot |x_{m+l,m+1}|. \quad (29)$$

Since $x_{m+l,m+1}$ is between $-1$ and $1$, this means that the absolute value of $a_l$ is smaller than or equal to the absolute value of $a_{m+1}$:

$$|a_l| \leq |a_{m+1}|. \quad (30)$$

So, if the last coefficient $a_{m+1}$ is 0, then all the values $a_l$ are zeros. Since we assumed that the vector $a$ is not 0, this means that the coefficient $a_{m+1}$ is not 0.

Since $a_{m+1} \neq 0$, we can divide all the other coefficients $a_l$ by this coefficient. For the resulting ratios

$$\varepsilon_l \stackrel{\text{def}}{=} \frac{a_l}{a_{m+1}}, \quad (31)$$

the inequality (30) implies that

$$|\varepsilon_l| \leq 1. \quad (32)$$

Proving reduction: let us now consider the first group of $m$ equations. For each $l \leq m$, the $l$-th equation implies that

$$x_{l,l} \cdot a_l + a_{m+1} = 0 \quad (33)$$

for some $x_{l,l} \in [-1, 1]$, i.e., that

$$x_{l,l} \cdot a_l = -a_{m+1}. \quad (34)$$

Dividing both sides of this equality by $a_{m+1} \neq 0$, we get

$$x_{l,l} \cdot \varepsilon_l = -1. \quad (35)$$

So, the product of the absolute values of $\varepsilon_l$ and of $x_{l,l}$ is 1:

$$|x_{l,l}| \cdot |\varepsilon_l| = 1, \quad (36)$$

and

$$|\varepsilon_l| = \frac{1}{|x_{l,l}|}. \quad (37)$$

Since $x_{l,l}$ is between $-1$ and $1$, its absolute value is bounded by 1. Hence, the absolute value of $\varepsilon_l$ is at least one:

$$|\varepsilon_l| \geq 1. \quad (38)$$

From (38) and (32), we conclude that $|\varepsilon_l| = 1$, i.e., that $\varepsilon_l \in \{-1, 1\}$.

Proving reduction: let us use the last equation. The last equation has the form

$$\sum_{l} a_l \cdot s_l = 0. \quad (39)$$

Dividing both sides by $a_{m+1}$, we get

$$\sum_{l} \varepsilon_l \cdot s_l = 0, \quad (40)$$

which is exactly what we wanted.

The reduction is proven. Indeed, conversely, if the signs $\varepsilon_l$ solve the original instance (20), then the choice $a_{m+1} = 1$, $a_l = \varepsilon_l$, $x_{l,l} = -\varepsilon_l$, and $x_{m+l,m+1} = -\varepsilon_l$ satisfies all the equations (26). Thus, the PCA under interval uncertainty problem is indeed NP-hard even in the simplest case when for each object, no more than one quantity is known with interval uncertainty.
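
As a sanity check (our own illustration, in the same spirit as the sketches above), both directions of the reduction can be verified numerically on a small instance: for every sign vector $\varepsilon$, plug the endpoint choices $x_{l,l} = -\varepsilon_l$ and $x_{m+l,m+1} = -\varepsilon_l$ from the proof into the data (23)-(25) and check whether the resulting $m+1$ columns become linearly dependent; this happens exactly for the sign vectors with $\sum_l \varepsilon_l \cdot s_l = 0$.

```python
import itertools
import numpy as np

def check_reduction(s):
    """For each sign vector eps, build the concrete data matrix suggested by the proof
    (x_{l,l} = -eps_l, x_{m+l,m+1} = -eps_l, all other entries at their unique values)
    and compare 'columns are linearly dependent' with 'sum(eps_l * s_l) == 0'."""
    m = len(s)
    for eps in itertools.product((-1, 1), repeat=m):
        X = np.zeros((2 * m + 1, m + 1))
        for l in range(m):
            X[l, l], X[l, m] = -eps[l], 1.0          # first group, eq. (23)
            X[m + l, l], X[m + l, m] = 1.0, -eps[l]  # second group, eq. (24)
            X[2 * m, l] = s[l]                       # last object, eq. (25)
        dependent = np.linalg.matrix_rank(X) < m + 1
        assert dependent == (sum(e * v for e, v in zip(eps, s)) == 0)
    print("reduction behaves as expected for s =", s)

check_reduction([3, 1, 2])   # has a solution: 3 = 1 + 2
check_reduction([2, 3, 7])   # has no solution; no endpoint choice makes the columns dependent
```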

Acknowledgments

The author is thankful to all his colleagues from the Human Neuroimaging Lab, especially to Drs. P. Read Montague and Terry Lohrenz, for valuable suggestions, and to the anonymous referees for their help.

References

[1] Antoch J., Brzezina M., Miele R. A note on variability of interval data // Comput. Statist. 2010. Vol. 25. P. 143-153.

[2] Analysis of Symbolic Data: Exploratory Methods for Extracting Statistical Information from Complex Data / Eds. H.H. Bock and E. Diday. Heidelberg: Springer-Verlag, 2000.

[3] Buxton R.B. An Introduction to Functional Magnetic Resonance Imaging: Principles and Techniques. Cambridge: Cambridge Univ. Press, 2002.

[4] Handbook of Functional Neuroimaging of Cognition / Eds. R. Cabeza and A. Kingstone. Cambridge, Massachusetts: MIT Press, 2006.

[5] Cazes P., Chouakria A., Diday E., Schektman Y. Extension de l'analyse en composantes principales a des donnees de type intervalle // Revue de Statist. Appl. 1997. Vol. 45. P. 5-24.

[6] Chouakria A. Extension de L'analyse en Composantes Principales a des Donnees de Type Intervalle. PhD Dissertation. Univ. of Paris IX Dauphine, 1998.

[7] Chouakria A., Diday E., Cazes P. An improved factorial representation of symbolic objects // Proc. of the Conf. of Knowledge Extraction and Symbolic Data Analysis KESDA'98. Luxembourg, 1999.

[8] D'Urso P., Giordani P. A least squares approach to principal component analysis for interval valued data // Chemometr. and Intelligent Laboratory Systems. 2004. Vol. 70, No. 2. P. 179-192.

[9] Ferson S., Ginzburg L., Kreinovich V. et al. Computing variance for interval data is NP-hard // ACM SIGACT News. 2002. Vol. 33, No. 2. P. 108-118.

[10] Ferson S., Ginzburg L., Kreinovich V. et al. Exact bounds on finite populations of interval data // Reliable Comput. 2005. Vol. 11, No. 3. P. 207-233.

[11] Ferson S., Kreinovich V., Hajagos J. et al. Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty. Sandia National Laboratories. Rep. SAND2007-0939, May 2007.

[12] Gioia F. Statistical Methods for Interval Variables. PhD Dissertation. Department of Mathematics and Statistics. Univ. of Naples "Federico II", 2001 (in Italian).

[13] Gioia F., Lauro C.N. Principal component analysis on interval data // Comput. Statist. 2006. Vol. 21. P. 343-363.

[14] Hu C., Kearfott R.B. Interval matrices in knowledge discovery // Knowledge Processing with Interval and Soft Computing / Eds. C. Hu, R.B. Kearfott, A. de Korvin, and V. Kreinovich. London: Springer-Verlag, 2008. P. 99-117.

[15] Huettel S.A., Song A.W., McCarthy G. Functional Magnetic Resonance Imaging. Sunderland, Massachusetts: Sinauer Associates, 2004.

[16] Irpino A. 'Spaghetti' PCA analysis: An extension of principal components analysis to time dependent interval data // Patt. Recognit. Lett. 2006. Vol. 27. P. 504-513.

[17] Functional MRI: An Introduction to Methods / Eds. P. Jezzard, P.M. Matthews, and S.M. Smith. New York: Oxford Univ. Press, 2003.

[18] Jolliffe I.T. Principal Component Analysis. New York: Springer-Verlag, 2002.

[19] Kreinovich V., Lakeyev A., Rohn J., Kahl P. Computational Complexity and Feasibility of Data Processing and Interval Computations. Dordrecht: Kluwer, 1998.

[20] Kreinovich V., Xiang G., Starks S.A. et al. Towards combining probabilistic and interval uncertainty in engineering calculations: algorithms for computing statistics under interval uncertainty, and their computational complexity // Reliable Comput. 2006. Vol. 12, No. 6. P. 471-501.

[21] Lauro C., Palumbo F. Principal component analysis of interval data: A symbolic data analysis approach // Comput. Statist. 2000. Vol. 15. P. 73-87.

[22] Lauro C., Palumbo F. Some results and new perspectives in principal component analysis for interval data // Atti del Convegno CLADAG'03 Gruppo di Classificazione della Societa Italiana di Statistica, 2003. P. 237-244.

[23] Lauro C.N., Verde R., Irpino A. Principal component analysis of symbolic data described by intervals // Symbolic Data Analysis and the SODAS Software / Eds. E. Diday and M. Noirhome Fraiture. Chichester, UK: John Wiley and Sons, 2007. P. 279-312.

[24] Lauro C., Palumbo F. Principal component analysis for non-precise data // New Developments in Classification and Data Analysis / Eds. M. Vichi, P. Monari, S. Mignani and A. Montanari. Berlin, Heidelberg, New York: Springer-Verlag, 2005. P. 173-184.

[25] Moore R.E., Kearfott R.B., Cloud M.J. Introduction to Interval Analysis. Philadelphia, Pennsylvania: SIAM Press, 2009.

[26] Palumbo F., Lauro C. A PCA for interval-valued data based on midpoints and radii // New Developments on Psychometrics: Proc. of the Intern. Meeting of the Psychometric Society IMPS'2001 / Eds. H. Yanai, A. Okada, K. Shigemasu, Y. Kano and J.J. Meulman. Tokyo: Springer-Verlag, 2003. P. 641-648.

[27] Papadimitriou C.H. Computational Complexity. Addison Wesley, 1994.

[28] Pearson K. On lines and planes of closest fit to systems of points in space // Philosop. Magazine. 1901. Vol. 2, No. 6. P. 559-572; available at http://stat.smmu.edu.cn/history/pearson1901.pdf

[29] Poljak S., Rohn J. Checking robust nonsingularity is NP-hard // Math. Control Signals Syst. 1993. Vol. 6, No. 1. P. 1-9.

[30] Rodriguez O. Classification et Modeles Lineaires en Analyse des Donnes Symboliques. PhD Dissertation. Univ. of Paris IX Dauphine, 2000.

[31] Sato-Ilic M. Weighted principal component analysis for interval-valued data based on fuzzy clustering // Proc. of the 2003 IEEE Conf. on Systems, Man, and Cybernetics. IEEE Press, 2003. P. 4476-4482.

Received for publication 29 March 2010
