A general weight matrix formulation using optimal control
- 1 May 1991
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 2 (3), 378-394
- https://doi.org/10.1109/72.97914
Abstract
Classical methods from optimal control theory are used in deriving general forms for neural network weights. The network learning or application task is encoded in a performance index of a general structure. Consequently, different instances of this performance index lead to special cases of weight rules, including some well-known forms. Comparisons are made with the outer product rule, spectral methods, and recurrent back-propagation. Simulation results and comparisons are presented.
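The outer product rule mentioned above is the classical Hebbian prescription for associative-memory weights, which the paper recovers as a special case of its optimal-control formulation. As a minimal sketch of that baseline rule (not the paper's general derivation), the weight matrix for a Hopfield-type network storing bipolar patterns x_k is W = sum_k x_k x_k^T with the diagonal zeroed:

```python
import numpy as np

def outer_product_weights(patterns):
    """Hebbian (outer product) weight rule for a Hopfield-type
    associative memory: W = sum_k outer(x_k, x_k), zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for x in patterns:
        W += np.outer(x, x)          # accumulate one outer product per pattern
    np.fill_diagonal(W, 0.0)         # no self-connections
    return W

# Store two bipolar patterns and check that one is a fixed point
# of a synchronous sign-threshold update.
patterns = np.array([[1, -1, 1, -1],
                     [1,  1, -1, -1]], dtype=float)
W = outer_product_weights(patterns)
recalled = np.sign(W @ patterns[0])
print(np.array_equal(recalled, patterns[0]))  # → True
```

Because the two stored patterns are orthogonal, each is a fixed point of the retrieval dynamics; the spectral methods and optimal-control-derived rules compared in the paper generalize this construction to correlated patterns.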