Terminal attractor learning algorithms for back propagation neural networks
- 1 January 1991
- Proceedings article
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- pp. 183-189, vol. 1
- https://doi.org/10.1109/ijcnn.1991.170401
Abstract
No abstract available.
This publication has 10 references indexed in Scilit:
- The learning rate in back-propagation systems: an application of Newton's method. Published by Institute of Electrical and Electronics Engineers (IEEE), 1990
- Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights. Published by Institute of Electrical and Electronics Engineers (IEEE), 1990
- Smoothing backpropagation cost function by delta constraining. Published by Institute of Electrical and Electronics Engineers (IEEE), 1990
- Learning with hidden targets. Published by Institute of Electrical and Electronics Engineers (IEEE), 1990
- A Learning Rule in the Chebyshev Norm for Multilayer Perceptrons. Published by Springer Nature, 1990
- High order neural networks with reduced numbers of interconnection weights. Published by Institute of Electrical and Electronics Engineers (IEEE), 1990
- A new approach for finding the global minimum of error function of neural networks. Neural Networks, 1989
- Terminal attractors in neural networks. Neural Networks, 1989
- Efficient training of the backpropagation network by solving a system of stiff ordinary differential equations. Published by Institute of Electrical and Electronics Engineers (IEEE), 1989
- Increased rates of convergence through learning rate adaptation. Neural Networks, 1988
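Since no abstract is available, the sketch below illustrates only the general terminal-attractor idea that the title and the cited "Terminal attractors in neural networks" (Neural Networks, 1989) reference point to, not the specific algorithms proposed in this paper: replacing an ordinary gradient-flow error decay dE/dt = -E with a non-Lipschitz flow such as dE/dt = -E^(1/3) drives the error to exactly zero in finite time rather than only asymptotically. The function names, step size, initial error, and tolerance below are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's algorithm): contrast exponential
# error decay with terminal-attractor decay, whose non-Lipschitz right-hand
# side reaches E = 0 in finite time.

import numpy as np

def integrate(rhs, e0=2.0, dt=1e-3, t_max=20.0, tol=1e-6):
    """Forward-Euler integration of dE/dt = rhs(E).

    Returns the first time at which E drops below `tol`, or None if it
    never does within `t_max`. The state is clamped at zero once reached,
    since E = 0 is an equilibrium of both flows.
    """
    e, t = e0, 0.0
    while t < t_max:
        if e < tol:
            return t
        e = max(e + dt * rhs(e), 0.0)   # clamp: error cannot go negative
        t += dt
    return None

# Ordinary gradient-flow-like decay: E(t) = E0 * exp(-t), asymptotic only.
asymptotic = integrate(lambda e: -e)

# Terminal-attractor decay: E(t) hits zero at t = 1.5 * E0**(2/3).
finite_time = integrate(lambda e: -np.cbrt(e))

print("time to reach E < 1e-6, exponential decay :", asymptotic)
print("time to reach E < 1e-6, terminal attractor:", finite_time)
print("predicted finite hitting time 1.5*E0^(2/3):", 1.5 * 2.0 ** (2 / 3))
```

The clamp near E = 0 is needed because fixed-step integration of the non-Lipschitz flow becomes unstable close to the attractor, which is presumably also why one of the cited references turns to stiff ODE solvers for this style of training dynamics.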