Learning in Boltzmann Trees
- 1 November 1994
- journal article
- Published by MIT Press in Neural Computation
- Vol. 6 (6), 1174-1184
- https://doi.org/10.1162/neco.1994.6.6.1174
Abstract
We introduce a large family of Boltzmann machines that can be trained by standard gradient descent. The networks can have one or more layers of hidden units, with tree-like connectivity. We show how to implement the supervised learning algorithm for these Boltzmann machines exactly, without resort to simulated or mean-field annealing. The stochastic averages that yield the gradients in weight space are computed by the technique of decimation. We present results on the problems of N-bit parity and the detection of hidden symmetries.
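The abstract only names the decimation technique. For context, the standard Boltzmann gradient for a weight w_ij is proportional to the difference of pair averages ⟨s_i s_j⟩ between the clamped and free phases, so exact learning reduces to computing these averages exactly. The sketch below is our own illustration of why decimation achieves this on a tree, under assumptions not stated in the abstract (±1 units, zero biases, unit temperature); it is not the paper's code or notation.

```python
# A minimal sketch, not the paper's code: decimation on a small Boltzmann
# tree with +/-1 units, zero biases, and unit temperature (all assumptions
# of this illustration). Checked against brute-force enumeration.
import itertools
import math

def correlation_brute_force(n, edges, i, j):
    """Exact <s_i s_j> by summing over all 2^n spin configurations."""
    num = den = 0.0
    for s in itertools.product([-1, 1], repeat=n):
        w = math.exp(sum(J * s[a] * s[b] for a, b, J in edges))
        num += w * s[i] * s[j]
        den += w
    return num / den

def correlation_by_decimation(path_weights):
    """Summing out an interior unit k between units i and j leaves an
    effective direct weight w with tanh(w) = tanh(w_ik) * tanh(w_kj).
    Iterating along a path gives <s_i s_j> as the product of tanh's;
    side branches of a zero-field tree sum out to constant factors
    that cancel between numerator and denominator."""
    t = 1.0
    for w in path_weights:
        t *= math.tanh(w)
    return t

# Tree on 4 units: path 0-1-2 plus a side branch 1-3.
edges = [(0, 1, 0.8), (1, 2, -0.3), (1, 3, 0.5)]
print(correlation_brute_force(4, edges, 0, 2))   # approx -0.1934
print(correlation_by_decimation([0.8, -0.3]))    # same value
```

Because every pair average in a tree can be obtained this way in closed form, the gradients need no simulated or mean-field annealing, which is the point the abstract makes.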