A Tree-Structured Algorithm for Reducing Computation in Networks with Separable Basis Functions
- 1 February 1991
- journal article
- Published by MIT Press in Neural Computation
- Vol. 3 (1), 67-78
- https://doi.org/10.1162/neco.1991.3.1.67
Abstract
I describe a new algorithm for approximating continuous functions in high-dimensional input spaces. The algorithm builds a tree-structured network of variable size, which is determined both by the distribution of the input data and by the function to be approximated. Unlike other tree-structured algorithms, learning occurs through completely local mechanisms and the weights and structure are modified incrementally as data arrives. Efficient computation in the tree structure takes advantage of the potential for low-order dependencies between the output and the individual dimensions of the input. This algorithm is related to the ideas behind k-d trees (Bentley 1975), CART (Breiman et al. 1984), and MARS (Friedman 1988). I present an example that predicts future values of the Mackey-Glass differential delay equation.
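The abstract does not reproduce the algorithm itself, but two of the ingredients it names are simple to sketch. Below is a minimal, hypothetical Python illustration (not the paper's code): a separable Gaussian basis unit whose response factors into independent per-dimension terms, which is the property that allows a tree over dimensions to reuse and prune partial products, and an Euler integration of the Mackey-Glass delay-differential equation with the commonly used benchmark parameters (a = 0.2, b = 0.1, tau = 17), the test problem mentioned at the end of the abstract.

```python
import numpy as np

# Hypothetical sketch, not the paper's implementation: a "separable" basis
# unit responds with a product of one-dimensional factors, one per input
# dimension, so each factor can be computed and cached independently.
def separable_gaussian(x, centers, widths):
    """Response of one unit: prod_d exp(-(x_d - c_d)^2 / (2 * s_d^2))."""
    factors = np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))
    return np.prod(factors)

# Standard Mackey-Glass delay-differential equation,
#   dx/dt = a * x(t - tau) / (1 + x(t - tau)^10) - b * x(t),
# integrated with coarse Euler steps; parameter values are the usual
# benchmark choices and are an assumption here, not taken from the paper.
def mackey_glass(n_steps, tau=17, a=0.2, b=0.1, dt=1.0, x0=1.2):
    history = [x0] * (tau + 1)            # constant initial history
    for _ in range(n_steps):
        x_t, x_lag = history[-1], history[-(tau + 1)]
        dx = a * x_lag / (1.0 + x_lag ** 10) - b * x_t
        history.append(x_t + dt * dx)
    return np.array(history[tau + 1:])
```

A usage note: generating a series with `mackey_glass(1000)` and predicting `x(t + T)` from a few lagged samples of `x` is the standard form of this benchmark; the specific prediction horizon and embedding used in the paper are not stated in the abstract.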
References
- Adaptive Mixtures of Local Experts. Neural Computation, 1991
- Fast Learning in Networks of Locally-Tuned Processing Units. Neural Computation, 1989
- Application of a General Learning Algorithm to the Control of Robotic Manipulators. The International Journal of Robotics Research, 1987
- Additive Regression and Other Nonparametric Models. The Annals of Statistics, 1985
- Oscillation and Chaos in Physiological Control Systems. Science, 1977
- Multidimensional binary search trees used for associative searching. Communications of the ACM, 1975
- Problems in the Analysis of Survey Data, and a Proposal. Journal of the American Statistical Association, 1963