Reduction of required precision bits for back-propagation applied to pattern recognition

Abstract
The number of precision bits for operations and data is limited in hardware implementations of backpropagation (BP). Reducing the rounding error caused by this limited precision is crucial in such implementations. The new learning algorithm is based on overestimating significant errors, in order to alleviate underflow, and on omitting weight updates for correctly recognized patterns. While the conventional BP algorithm minimizes the squared error between the output signals and the supervisory data, the new learning algorithm minimizes a weighted error function. In a learning simulation of multifont capital-letter recognition, the algorithm converged to 100% recognition accuracy with only 8-b precision. In addition, the recognition accuracy for characters that did not appear in the training data reached 94.9%. This performance is equivalent to that of conventional BP with 12-b precision. Moreover, the weighted error function is found to perform well even when only a small number of hidden neurons is used. Consequently, the algorithm reduces the required amount of weight memory.
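The sketch below illustrates the general idea described in the abstract: backpropagation with quantized (limited-precision) weights, where a weighted error function replaces the plain squared-error term, amplifying significant errors so their updates do not underflow on the fixed-point grid and skipping updates for correctly recognized patterns. The fixed-point format, the weighting rule (margin and gain), and the toy task are illustrative assumptions; the abstract does not give the paper's exact equations.

```python
# Minimal sketch of low-precision BP with a weighted error function.
# All numeric choices (8-bit fixed point, margin, gain) are assumptions,
# not the paper's actual formulation.
import numpy as np

BITS = 8                       # assumed total precision for weights and updates
SCALE = 2 ** (BITS - 2)        # assumed fixed-point scaling factor

def quantize(x):
    """Round to the assumed fixed-point grid and saturate to its range."""
    q = np.round(x * SCALE) / SCALE
    limit = (2 ** (BITS - 1) - 1) / SCALE
    return np.clip(q, -limit, limit)

def weighted_error(y, t, margin=0.1, gain=4.0):
    """Illustrative weighted error: zero when the output is already within
    the margin (pattern treated as correctly recognized), otherwise the
    error is amplified so small gradients survive quantization."""
    e = t - y
    return np.where(np.abs(e) < margin, 0.0, gain * e)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy two-class problem standing in for character recognition.
X = rng.normal(size=(200, 8))
T = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Tiny one-hidden-layer network with quantized weights.
W1 = quantize(rng.normal(scale=0.5, size=(8, 6)))
W2 = quantize(rng.normal(scale=0.5, size=(6, 1)))
lr = 0.05

for epoch in range(50):
    for x, t in zip(X, T):
        h = sigmoid(x @ W1)            # forward pass
        y = sigmoid(h @ W2)

        e = weighted_error(y, t)       # weighted error instead of plain (t - y)
        if not np.any(e):              # correctly recognized: skip the update
            continue

        # Standard BP deltas, driven by the weighted error.
        d_out = e * y * (1 - y)
        d_hid = (d_out @ W2.T) * h * (1 - h)

        # Updates are quantized, so steps below the grid spacing would vanish;
        # the amplified error keeps significant updates representable.
        W2 = quantize(W2 + lr * np.outer(h, d_out))
        W1 = quantize(W1 + lr * np.outer(x, d_hid))

acc = np.mean((sigmoid(sigmoid(X @ W1) @ W2) > 0.5) == (T > 0.5))
print(f"training accuracy with {BITS}-bit weights: {acc:.3f}")
```

In this sketch the margin plays the role of deciding when a pattern is "correctly recognized" and the gain stands in for the overestimation of significant errors; the paper's actual weighting function and quantization scheme may differ.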
