Learning drifting concepts with neural networks
- 7 June 1993
- journal article
- Published by IOP Publishing in Journal of Physics A: Mathematical and General
- Vol. 26 (11), 2651-2665
- https://doi.org/10.1088/0305-4470/26/11/014
Abstract
The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using different Hebb-like algorithms. Training is based on examples which are chosen either randomly or according to a query strategy. The evolution of the generalization error can be calculated exactly in the thermodynamic limit N → ∞. The rule is never learnt perfectly, but can be tracked within a certain error margin. The generalization performance of various learning rules is compared, and simulations confirm the analytic results.