An adaptive nonparametric linear classifier

Abstract
The equalized-error ("EE") training procedure, introduced in this paper, is a new nonparametric training procedure for linear classifiers in a multiple-feature stochastic environment. This procedure is a form of stochastic approximation that minimizes the sum of the expected normalized first moments of the falsely classified pattern vectors about the decision hyperplane. This sum is the "EE loss function." The minimization is achieved by a simply implemented recursive equation. We show that the sequence of decision hyperplanes generated by this recursive equation converges in mean square and with probability one to a hyperplane that minimizes the EE loss function. We provide preliminary qualitative and quantitative evidence that the EE training procedure converges rapidly and achieves low asymptotic error probabilities over a wide range of overlapping pairs of class densities and nonlinearly separable pairs of class densities.
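The abstract does not state the recursive equation itself, so the following is only a minimal sketch of one plausible reading: a Robbins-Monro-style subgradient descent on the expected normalized distance of falsely classified pattern vectors from the hyperplane. The function name ee_train, the step-size schedule c/k, and the label convention y in {-1, +1} are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ee_train(X, y, n_epochs=50, c=1.0, rng=None):
    """Hypothetical sketch of an EE-style update (assumed, not the paper's
    exact recursion): stochastic-approximation descent on the expected
    normalized distance |w.x + b| / ||w|| of misclassified patterns.

    X : (n_samples, n_features) pattern vectors
    y : labels in {-1, +1}
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    w = rng.normal(size=d)          # initial hyperplane normal
    b = 0.0
    k = 0
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            k += 1
            gamma = c / k           # step sizes satisfying the Robbins-Monro conditions
            x, yi = X[i], y[i]
            a = w @ x + b
            if yi * a < 0:          # only falsely classified patterns contribute
                norm = np.linalg.norm(w)
                # subgradient of -yi * a / ||w||, the normalized distance of the
                # misclassified pattern from the decision hyperplane
                grad_w = -yi * x / norm + yi * a * w / norm**3
                grad_b = -yi / norm
                w -= gamma * grad_w
                b -= gamma * grad_b
    return w, b
```

Under this reading, only misclassified patterns drive the update, and the 1/k step sizes give the diminishing-gain sequence typically required for mean-square and almost-sure convergence of such recursions.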
