Convergence of a neural network classifier

Abstract
It is shown that the LVQ (learning vector quantization) learning algorithm converges to locally asymptotically stable equilibria of an ordinary differential equation. It is demonstrated that the learning algorithm performs stochastic approximation. Convergence of the Voronoi vectors is guaranteed under appropriate conditions on the underlying statistics of the classification problem. The authors also present a modification to the learning algorithm which, it is argued, results in convergence of LVQ for a larger set of initial conditions. Finally, it is shown that LVQ is a general histogram classifier and that its risk converges to the Bayesian optimal risk as the appropriate parameters go to infinity with the number of past observations.
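For context, a minimal sketch of the classical Kohonen LVQ1 update, the rule whose averaged dynamics yield the ordinary differential equation referred to above, might look as follows. This is an illustrative reconstruction, not code from the paper; the names `lvq1_step`, `codebooks`, and `lr` are assumptions, and the decreasing step size is what casts the iteration as a stochastic approximation.

```python
import numpy as np

def lvq1_step(codebooks, labels, x, y, lr):
    """One LVQ1 update (illustrative sketch).

    Move the Voronoi (codebook) vector nearest to the sample x
    toward x if its class label matches y, and away otherwise.
    """
    # Index of the nearest codebook vector under Euclidean distance
    i = np.argmin(np.linalg.norm(codebooks - x, axis=1))
    # Attract on a class match, repel on a mismatch
    sign = 1.0 if labels[i] == y else -1.0
    codebooks[i] += sign * lr * (x - codebooks[i])
    return codebooks
```

Under the standard stochastic-approximation step-size conditions (step sizes summing to infinity with summable squares), iterates of this kind track the associated ODE, which is the setting in which the abstract's convergence claims are stated.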
