Abstract
Although step activation functions have no useful derivative, neural networks with binary hidden nodes, a single hidden layer, and multiple outputs can be trained by an algorithm that closely resembles conventional back-propagation. The algorithm relies on hidden-unit activation functions that transform gradually, over the course of training, from analogue (sigmoid) to binary (step).
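The idea can be sketched as a gain-annealed sigmoid: sigmoid(βz) approaches a step function as the gain β grows, so β is ramped up during otherwise standard back-propagation and the hidden units are replaced by hard steps at the end. The schedule, network size, learning rate, and toy task below are illustrative assumptions for the demo, not details taken from the paper:

```python
import numpy as np

# Sketch of sigmoid-to-step ("gain annealing") training. All hyperparameters
# and the AND/OR toy task are assumptions for illustration only.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0, 0], [0, 1], [0, 1], [1, 1]], dtype=float)  # targets: AND, OR

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros(4)   # one hidden layer
W2 = rng.normal(0.0, 0.5, (4, 2)); b2 = np.zeros(2)   # multiple (two) outputs
lr, epochs = 0.5, 5000

for ep in range(epochs):
    beta = 1.0 + 9.0 * ep / epochs          # gain ramps 1 -> 10: sigmoid -> step
    h = sigmoid(beta * (X @ W1 + b1))       # annealed hidden activations
    out = sigmoid(h @ W2 + b2)              # analogue output units

    d_out = (out - Y) * out * (1.0 - out)          # MSE output delta
    d_h = (d_out @ W2.T) * beta * h * (1.0 - h)    # chain rule includes gain beta
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

# After training, swap the annealed sigmoid for a hard step: binary hidden nodes.
h_bin = (X @ W1 + b1 > 0).astype(float)
pred = (sigmoid(h_bin @ W2 + b2) > 0.5).astype(float)
print(pred)
```

Early in training the low gain keeps gradients informative; by the end the hidden activations are saturated near 0 or 1, so replacing them with true step units changes the network's behaviour very little.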
