Abstract
A neural-network-based approach to distortion-invariant (translation, scale, and rotation) character recognition is presented. To reduce the dimension of the required network and to achieve invariance, six distortion-invariant features are extracted from each image and used as inputs to the neural net. These six continuous-valued features are derived from the geometrical moments of the image. A multilayer perceptron (MLP) with one hidden layer, trained with the backpropagation algorithm, is used. The MLP is trained on twelve differently oriented, scaled, and translated 64×64 binary images of each of the twenty-six English characters. Its performance is tested on eight binary images per character that were not used during training. Results of experiments with different numbers of hidden-layer nodes are presented.
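The abstract does not specify which six moment-based features are used, so the following Python sketch is only a generic illustration of the idea: compute central moments of a binary character image (translation-invariant), normalize them for scale, and combine them into rotation-invariant quantities (here, the first three Hu-style invariants) that could serve as continuous-valued inputs to an MLP classifier. The function names and the choice of invariants are assumptions, not the paper's actual feature set.

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a 2-D binary image (translation-invariant)."""
    ys, xs = np.nonzero(img)
    xc, yc = xs.mean(), ys.mean()  # centroid of the foreground pixels
    return np.sum((xs - xc) ** p * (ys - yc) ** q)

def normalized_moment(img, p, q):
    """Scale-normalized central moment eta_pq (translation- and scale-invariant)."""
    m00 = np.count_nonzero(img)  # area of the binary shape
    return central_moment(img, p, q) / m00 ** ((p + q) / 2 + 1)

def invariant_features(img):
    """Rotation-invariant combinations of normalized moments (first three
    Hu invariants), used here only to illustrate moment-based invariant
    features; the paper's six features may differ."""
    eta = lambda p, q: normalized_moment(img, p, q)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    phi3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    return np.array([phi1, phi2, phi3])

# Example: features of a 64x64 binary image are unchanged (up to numerical
# error) when the character is shifted, scaled, or rotated.
img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 25:30] = 1  # a simple vertical bar as a stand-in for a character
print(invariant_features(img))
```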
