Training binary node feedforward neural networks by back propagation of error
- 11 October 1990
- journal article
- Published by Institution of Engineering and Technology (IET) in Electronics Letters
- Vol. 26 (21), 1745-1746
- https://doi.org/10.1049/el:19901121
Abstract
Despite the absence of derivatives, binary node neural networks having a hidden layer and multiple outputs can be trained using an algorithm which closely resembles conventional back propagation. The algorithm is based on the use of hidden unit activation functions which transform in the course of the training from analogue (sigmoid) to binary (step).
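A minimal sketch of the idea the abstract describes: hidden units use a sigmoid whose gain is raised during training, so the activation morphs from analogue (sigmoid) towards binary (step), while weight updates follow conventional back propagation. The gain schedule, learning rate, network size, and XOR task below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x, beta):
    # Gained sigmoid: as beta grows, this approaches a binary step function
    return 1.0 / (1.0 + np.exp(-beta * x))

# XOR task (illustrative choice): solvable with one hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Small network: 2 inputs -> 4 hidden units -> 1 output (sizes are assumptions)
W1 = rng.normal(scale=1.0, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1)); b2 = np.zeros(1)
lr = 0.5

# Loss before training, for comparison
o0 = sigmoid(sigmoid(X @ W1 + b1, 1.0) @ W2 + b2, 1.0)
loss_initial = float(((o0 - y) ** 2).mean())

for epoch in range(4000):
    beta = 1.0 + epoch / 500.0          # gain anneals upward: sigmoid -> step
    h = sigmoid(X @ W1 + b1, beta)
    o = sigmoid(h @ W2 + b2, 1.0)
    err = o - y
    # Conventional backprop through the gained sigmoid: d/dx = beta * h * (1 - h)
    d_o = err * o * (1 - o)
    d_h = (d_o @ W2.T) * beta * h * (1 - h)
    W2 -= lr * h.T @ d_o; b2 -= lr * d_o.sum(axis=0)
    W1 -= lr * X.T @ d_h; b1 -= lr * d_h.sum(axis=0)

h_final = sigmoid(X @ W1 + b1, beta)
loss_final = float(((sigmoid(h_final @ W2 + b2, 1.0) - y) ** 2).mean())

# At test time the hidden units can be hard-thresholded: true binary nodes
h_bin = (X @ W1 + b1 > 0).astype(float)
pred = (sigmoid(h_bin @ W2 + b2, 1.0) > 0.5).astype(float)
```

Because the gain grows gradually, early training behaves like ordinary analogue back propagation, while by the end the hidden activations are nearly saturated, so replacing them with hard thresholds changes the network's outputs little.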
This publication has 1 reference indexed in Scilit:
- Learning Internal Representations by Error Propagation. Published by Defense Technical Information Center (DTIC), 1985