Progress in supervised neural networks
- 1 January 1993
- journal article
- review article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Signal Processing Magazine
- Vol. 10 (1), 8-39
- https://doi.org/10.1109/79.180705
Abstract
Theoretical results concerning the capabilities and limitations of various neural network models are summarized, and some of their extensions are discussed. The network models considered fall into two basic categories: static networks and dynamic networks. Unlike static networks, dynamic networks have memory. They divide into three groups: networks with feedforward dynamics, networks with output feedback, and networks with state feedback, the last of which is emphasized in this work. Most of the networks discussed are trained using supervised learning.
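The static/dynamic distinction in the abstract can be made concrete with a minimal sketch (illustrative only, not taken from the paper): a static feedforward network maps each input independently of the past, while a state-feedback (recurrent) network carries a hidden state forward, which is what gives it memory. All weight matrices and layer sizes below are arbitrary assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def static_forward(x, W1, W2):
    """Static network: one tanh hidden layer; output depends only on the current input."""
    return np.tanh(W2 @ np.tanh(W1 @ x))

def recurrent_forward(xs, Wx, Wh, Wo):
    """State-feedback network: the hidden state h is fed back at each time step."""
    h = np.zeros(Wh.shape[0])
    outputs = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)  # state feedback: h depends on the past
        outputs.append(Wo @ h)
    return outputs

# Arbitrary illustrative weights (2 inputs, 4 hidden units, 1 output).
W1 = rng.standard_normal((4, 2)); W2 = rng.standard_normal((1, 4))
Wx = rng.standard_normal((4, 2)); Wh = rng.standard_normal((4, 4))
Wo = rng.standard_normal((1, 4))

# Present the same input at two consecutive time steps.
x = np.array([0.5, -0.3])
y_static = [static_forward(x, W1, W2) for _ in range(2)]
y_recur = recurrent_forward([x, x], Wx, Wh, Wo)

# The static network repeats itself; the recurrent one generally does not,
# because its hidden state differs at the second step.
print(np.allclose(y_static[0], y_static[1]))
print(np.allclose(y_recur[0], y_recur[1]))
```

Networks with feedforward dynamics or output feedback fit the same mold, differing only in where the delayed signals re-enter: delayed inputs for the former, delayed outputs (rather than hidden state) for the latter.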