Role of biases in on-line learning of two-layer networks
- 1 March 1998
- research article
- Published by American Physical Society (APS) in Physical Review E
- Vol. 57 (3), 3265-3291
- https://doi.org/10.1103/physreve.57.3265
Abstract
The influence of biases on the learning dynamics of a two-layer neural network, a normalized soft-committee machine, is studied for on-line gradient descent learning. Within a statistical mechanics framework, numerical studies show that the inclusion of adjustable biases dramatically alters the learning dynamics found previously for the biasless model. The symmetric phase that has often been predominant in the original model all but disappears for a nondegenerate bias task. The extended model furthermore exhibits much richer dynamical behavior, e.g., attractive suboptimal symmetric phases even for realizable cases and noiseless data.
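The on-line learning scenario described in the abstract can be illustrated with a minimal teacher-student simulation: a student soft-committee machine (average of sigmoidal hidden units with adjustable biases) is trained by on-line gradient descent on fresh examples labeled by a teacher of the same architecture with nondegenerate biases. All specifics below (dimensions, learning rate, erf activation, bias values) are illustrative assumptions, not the paper's exact setup.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 100, 2, 2   # input dimension; student and teacher hidden units (assumed)
eta = 0.1             # learning rate (illustrative)

def g(h):
    """Sigmoidal activation erf(h / sqrt(2)), applied elementwise."""
    return np.array([math.erf(v / math.sqrt(2.0)) for v in h])

def g_prime(h):
    """Derivative of erf(h / sqrt(2))."""
    return np.sqrt(2.0 / np.pi) * np.exp(-h ** 2 / 2.0)

def sigma(weights, biases, x):
    """Normalized soft-committee output: mean of the hidden-unit activations."""
    return g(weights @ x / math.sqrt(N) + biases).mean()

# Teacher with nondegenerate biases; student starts small and unbiased.
B, b_teacher = rng.normal(size=(M, N)), np.array([-0.5, 0.5])
W, b_student = 0.1 * rng.normal(size=(K, N)), np.zeros(K)

for _ in range(20000):
    x = rng.normal(size=N)                              # fresh example each step (on-line)
    delta = sigma(B, b_teacher, x) - sigma(W, b_student, x)
    h = W @ x / math.sqrt(N) + b_student                # student local fields
    grad = delta * g_prime(h) / K                       # per-unit error signal
    W += eta * np.outer(grad, x) / math.sqrt(N)         # gradient step on weights
    b_student += eta * grad                             # gradient step on biases
```

The bias update uses the same error signal as the weight update but without the input vector, since the bias enters the local field additively; tracking the student-teacher overlaps along such trajectories is what the statistical mechanics analysis does in the thermodynamic limit.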
This publication has 17 references indexed in Scilit:
- On-line learning from finite training sets. Europhysics Letters, 1997
- Finite-size effects in on-line learning of multilayer neural networks. Europhysics Letters, 1996
- On-line backpropagation in two-layered neural networks. Journal of Physics A: General Physics, 1995
- On-line learning in soft committee machines. Physical Review E, 1995
- Storage capacity and generalization error for the reversed-wedge Ising perceptron. Physical Review E, 1995
- Learning by on-line gradient descent. Journal of Physics A: General Physics, 1995
- On Fokker-Planck approximations of on-line learning processes. Journal of Physics A: General Physics, 1994
- Optimal unsupervised learning. Journal of Physics A: General Physics, 1994
- Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 1989
- Learning representations by back-propagating errors. Nature, 1986