Learning drifting concepts with neural networks

Abstract
The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using different Hebb-like algorithms. Training is based on examples which are chosen either at random or according to a query strategy. The evolution of the generalization error can be calculated exactly in the thermodynamic limit N → ∞. The rule is never learnt perfectly, but can be tracked within a certain error margin. The generalization performance of the various learning rules is compared, and simulations confirm the analytic results.
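The following is a minimal illustrative sketch, not the paper's exact model or algorithms: a student perceptron is trained online with a plain Hebbian rule on examples labelled by a randomly drifting teacher vector, and the generalization error (angle between student and teacher, divided by π) settles at a residual tracking value rather than decaying to zero. The dimension N, learning rate, drift strength, and step count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500          # input dimension (the analytic results hold for N -> infinity)
eta = 0.1        # learning rate of the Hebbian update
drift = 0.02     # strength of the teacher's random drift per step
steps = 20000

B = rng.standard_normal(N)
B /= np.linalg.norm(B)          # teacher (target rule), a unit N-vector
J = np.zeros(N)                 # student weight vector

def generalization_error(J, B):
    """Probability that student and teacher disagree on a random input:
    eps = arccos(normalized overlap) / pi."""
    norm = np.linalg.norm(J)
    if norm == 0.0:
        return 0.5
    overlap = np.clip(J @ B / norm, -1.0, 1.0)
    return np.arccos(overlap) / np.pi

for t in range(steps):
    # teacher drifts: small random perturbation, then renormalization
    B += drift * rng.standard_normal(N) / np.sqrt(N)
    B /= np.linalg.norm(B)

    # random example, labelled by the current teacher
    x = rng.standard_normal(N) / np.sqrt(N)
    label = np.sign(B @ x)

    # plain Hebbian online update of the student
    J += eta * label * x

print("final generalization error:", generalization_error(J, B))
```

Because the teacher keeps moving, the error does not vanish; with these illustrative parameters it fluctuates around a small plateau, consistent with the abstract's statement that the rule is tracked within a certain error margin rather than learnt perfectly.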
