Perceptron-based learning algorithms
- 1 June 1990
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 1 (2), 179-191
- https://doi.org/10.1109/72.80230
Abstract
A key task for connectionist research is the development and analysis of learning algorithms. Several supervised learning algorithms for single-cell and network models are examined. At the heart of these algorithms is the pocket algorithm, a modification of perceptron learning that makes perceptron learning well-behaved with nonseparable training data, even when the data are noisy and contradictory. Features of these algorithms include:
- speed: the algorithms are fast enough to handle large sets of training data;
- network scaling: network methods scale up almost as well as single-cell models when the number of inputs is increased;
- analytic tractability: upper bounds on classification error are derivable;
- online learning: some variants can learn continually, without referring to previous data;
- winner-take-all (choice) groups: the algorithms can be adapted to select one out of a number of possible classifications.

These learning algorithms are suitable for applications in machine learning, pattern recognition, and connectionist expert systems.
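The pocket algorithm described in the abstract can be sketched briefly: run ordinary perceptron updates, but keep "in the pocket" the weight vector that achieved the longest run of consecutive correct classifications, so a good set of weights survives even when the training data are not linearly separable. The following is a minimal illustrative sketch, not the paper's exact formulation; the function names, the epoch budget, and the random-sampling scheme are assumptions.

```python
import numpy as np

def pocket_algorithm(X, y, epochs=100, rng_seed=0):
    """Sketch of pocket-style perceptron learning (assumed interface).

    X: (n, d) array of inputs; y: (n,) array of labels in {-1, +1}.
    Returns the pocketed weight vector (with bias absorbed as the
    last component).
    """
    rng = np.random.default_rng(rng_seed)
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])  # absorb bias into the weights
    w = np.zeros(d + 1)                   # current perceptron weights
    pocket_w = w.copy()                   # best weights found so far
    run, best_run = 0, 0                  # consecutive-correct counters
    for _ in range(epochs * n):
        i = rng.integers(n)               # visit a random training example
        if y[i] * (Xb[i] @ w) > 0:        # correctly classified
            run += 1
            if run > best_run:            # new longest run: update pocket
                best_run = run
                pocket_w = w.copy()
        else:                             # mistake: standard perceptron update
            w = w + y[i] * Xb[i]
            run = 0
    return pocket_w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.where(Xb @ w > 0, 1, -1)
```

Because the pocketed weights are only replaced when the current weights beat the best run seen so far, the returned classifier tends toward an optimal linear separation of the training data even when perceptron learning itself cycles on nonseparable or contradictory examples.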