Dimension reduction and visualization of large high-dimensional data via interpolation
- 21 June 2010
- conference paper
- Published by Association for Computing Machinery (ACM)
- p. 203-214
- https://doi.org/10.1145/1851476.1851501
Abstract
The recent explosion of publicly available biological gene sequences and chemical compounds offers an unprecedented opportunity for data mining. To make analysis of such vast, high-dimensional scientific data feasible, we apply high-performance dimension reduction algorithms, which facilitate the investigation of unknown structures through three-dimensional visualization. Among the known dimension reduction algorithms, we use multidimensional scaling (MDS) and generative topographic mapping (GTM) to configure the given high-dimensional data into the target dimension. However, both algorithms require large amounts of physical memory as well as computational resources. We therefore propose an interpolation approach that computes the mapping for only a subset of the given data and places the remaining points relative to that mapping, which effectively reduces computational complexity. At the cost of a minor approximation, the interpolation method makes it possible to process millions of data points with modest computation and memory requirements. Because huge amounts of data are involved, we also show how to parallelize the proposed interpolation algorithms. Evaluating the interpolated MDS under the STRESS criterion requires a symmetric all-pairwise computation in which each process holds only a subset of the required data, so we further propose a simple but efficient parallel mechanism for symmetric all-pairwise computation in this setting. Our experimental results show that the quality of the interpolated mappings is comparable to that of the original algorithms alone, and that the interpolation methods parallelize with high efficiency. Using the proposed interpolation method, we construct a configuration of two million out-of-sample data points in the target dimension, and the number of out-of-sample points can be increased further.
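The two ingredients named in the abstract, the STRESS quality criterion and out-of-sample interpolation against a pre-computed sample mapping, can be illustrated with a minimal sketch. This is not the authors' implementation: the neighbor count `k`, the gradient-descent placement, and all function names are illustrative assumptions; the paper's actual method and its parallelization are described in the full text.

```python
import numpy as np

def stress(low, high_dist):
    """Kruskal's STRESS: normalized mismatch between low-dimensional
    pairwise distances and the original dissimilarities high_dist."""
    n = len(low)
    num = den = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d_low = np.linalg.norm(low[i] - low[j])
            num += (high_dist[i, j] - d_low) ** 2
            den += high_dist[i, j] ** 2
    return np.sqrt(num / den)

def interpolate_point(x_new, sample_high, sample_low,
                      k=3, iters=300, lr=0.05):
    """Sketch of out-of-sample placement (illustrative, not the paper's
    exact algorithm): find the k nearest in-sample points in the
    original space, then gradient-descend a point in the target space
    so its distances to those fixed anchors match the original ones."""
    d_high = np.linalg.norm(sample_high - x_new, axis=1)
    nn = np.argsort(d_high)[:k]
    target = d_high[nn]              # original-space distances to preserve
    anchors = sample_low[nn]         # their fixed low-dimensional images
    y = anchors.mean(axis=0)         # start at the neighbor centroid
    for _ in range(iters):
        diff = y - anchors                          # shape (k, dim)
        d_low = np.linalg.norm(diff, axis=1) + 1e-12
        grad = ((d_low - target) / d_low)[:, None] * diff
        y -= lr * grad.sum(axis=0)
    return y
```

Because each out-of-sample point depends only on its k in-sample neighbors, the loop over new points is embarrassingly parallel, which is the property the abstract exploits to scale to millions of points.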