Deterministic Boltzmann Learning Performs Steepest Descent in Weight-Space
- 1 March 1989
- journal article
- Published by MIT Press in Neural Computation
- Vol. 1 (1), 143-150
- https://doi.org/10.1162/neco.1989.1.1.143
Abstract
The Boltzmann machine learning procedure has been successfully applied in deterministic networks of analog units that use a mean field approximation to efficiently simulate a truly stochastic system (Peterson and Anderson 1987). This type of "deterministic Boltzmann machine" (DBM) learns much faster than the equivalent "stochastic Boltzmann machine" (SBM), but since the learning procedure for DBM's is only based on an analogy with SBM's, there is no existing proof that it performs gradient descent in any function, and it has only been justified by simulations. By using the appropriate interpretation for the way in which a DBM represents the probability of an output vector given an input vector, it is shown that the DBM performs steepest descent in the same function as the original SBM, except at rare discontinuities. A very simple way of forcing the weights to become symmetrical is also described, and this makes the DBM more biologically plausible than back-propagation (Werbos 1974; Parker 1985; Rumelhart et al. 1986).
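The learning procedure the abstract refers to can be illustrated with a minimal sketch, assuming a fully connected network of sigmoid units with symmetric weights: the network is settled to a mean field fixed point once with input and output units clamped (positive phase) and once with only the input clamped (negative phase), and each weight is moved along the difference of the co-activity products from the two phases. The network sizes, learning rate, settling schedule, and single training pair below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n_vis_in, n_vis_out, n_hid = 4, 2, 6           # input, output, hidden unit counts (illustrative)
n = n_vis_in + n_vis_out + n_hid
W = rng.normal(scale=0.1, size=(n, n))
W = (W + W.T) / 2                               # symmetric weights
np.fill_diagonal(W, 0.0)

def settle(W, clamp_idx, clamp_val, n_steps=50):
    """Iterate the mean field equations s_i = sigma(sum_j w_ij s_j)
    to an approximate fixed point, holding clamped units fixed."""
    s = np.full(n, 0.5)
    s[clamp_idx] = clamp_val
    free = np.setdiff1d(np.arange(n), clamp_idx)
    for _ in range(n_steps):
        s[free] = 1.0 / (1.0 + np.exp(-W[free] @ s))
    return s

# One illustrative input/output training pair.
x = np.array([1.0, 0.0, 1.0, 0.0])
y = np.array([0.0, 1.0])
in_idx = np.arange(n_vis_in)
out_idx = np.arange(n_vis_in, n_vis_in + n_vis_out)

eps = 0.05
for epoch in range(100):
    # Positive (clamped) phase: both input and output clamped.
    s_pos = settle(W, np.concatenate([in_idx, out_idx]), np.concatenate([x, y]))
    # Negative (free) phase: only the input clamped.
    s_neg = settle(W, in_idx, x)
    # Contrastive update: difference of mean field co-activity products.
    dW = eps * (np.outer(s_pos, s_pos) - np.outer(s_neg, s_neg))
    W += (dW + dW.T) / 2                        # keep the weight matrix symmetric
    np.fill_diagonal(W, 0.0)

print("output activities after settling on the input:", settle(W, in_idx, x)[out_idx])
```

The point of the sketch is the two-phase structure: the same settling routine serves both phases, and the weight change is purely local to the two units it connects, which is the property the abstract contrasts with back-propagation.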
References
- Rumelhart, D. E., Hinton, G. E., & Williams, R. J. Learning representations by back-propagating errors. Nature, 1986.
- Hopfield, J. J. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, 1984.