Abstract
A model of unsupervised learning is studied in which the environment provides N-dimensional input examples drawn from two overlapping Gaussian clouds. We consider the optimization of two different objective functions: the search for the direction of largest variance in the data and for the direction of the largest separating gap (stability) between clusters of examples, respectively. By means of a statistical-mechanics analysis, we investigate how well the underlying structure is inferred from a set of examples. The performance of the learning algorithms depends crucially on the actual shape of the input distribution. A generic result is the existence of a critical number of examples needed for successful learning. The learning strategies are compared with methods different in spirit, such as the estimation of parameters in a model distribution and an information-theoretic approach.
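The following is a minimal illustrative sketch, not the paper's analysis: it assumes the standard setup of two Gaussian clouds whose centers lie at ±ρB along a hidden unit direction B, and estimates that direction via the largest-variance strategy (the leading eigenvector of the sample covariance). The parameter values N, P, and rho are arbitrary choices for illustration; the overlap R = |w·B| is the natural measure of how well the structure is inferred.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, rho = 100, 2000, 0.8           # dimension, sample size, cloud separation (assumed values)

# Hidden unit direction B defining the cluster structure
B = rng.standard_normal(N)
B /= np.linalg.norm(B)

# Draw P examples from two overlapping Gaussian clouds centered at +/- rho * B
labels = rng.choice([-1.0, 1.0], size=P)
X = rho * labels[:, None] * B + rng.standard_normal((P, N))

# Largest-variance strategy: leading eigenvector of the sample covariance
cov = X.T @ X / P
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
w = eigvecs[:, -1]                       # direction of largest variance

# Overlap with the true direction; stays near 0 until P/N exceeds a critical value
R = abs(w @ B)
print(f"overlap R = {R:.3f}")
```

Running this with increasing P at fixed N illustrates the generic result stated above: below a critical number of examples the estimated direction is essentially uncorrelated with B, and successful learning (R appreciably above zero) sets in only beyond it.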
