Hierarchical mixtures of experts and the EM algorithm
- Conference paper, 1993 International Joint Conference on Neural Networks (IJCNN), per the DOI; source record dated 24 August 2005
- Published by the Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 2, pp. 1339-1344
- https://doi.org/10.1109/ijcnn.1993.716791
Abstract
We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum likelihood problem; in particular, we present an expectation-maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an online learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
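To make the abstract's model concrete: in the standard hierarchical-mixtures-of-experts formulation, a two-level tree with softmax (multinomial GLIM) gating networks and expert GLIMs at the leaves defines the conditional likelihood below. The notation is a reconstruction from the abstract's description, not copied from the paper.

```latex
% Two-level HME: top-level gate g_i, nested gate g_{j|i}, expert GLIMs at leaves
P(y \mid x, \theta) = \sum_i g_i(x, v) \sum_j g_{j \mid i}(x, v_i)\, P(y \mid x, \theta_{ij})

% E-step posterior responsibility of leaf expert (i, j) for one case (x, y)
h_{ij} = \frac{g_i \, g_{j \mid i} \, P(y \mid x, \theta_{ij})}
              {\sum_k g_k \sum_l g_{l \mid k} \, P(y \mid x, \theta_{kl})}
```

The sketch below is a minimal, single-level version of the EM procedure the abstract describes, assuming a softmax gate and linear-Gaussian experts. The function name `em_moe` and the hyperparameters (`n_experts`, `sigma2`, `lr`) are illustrative assumptions, and the gate M-step is approximated here by a single gradient step where the paper's M-step would fit the gating GLIM exactly (e.g., by IRLS).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Row-wise softmax with max-shift for numerical stability
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def em_moe(X, y, n_experts=4, n_iters=50, sigma2=0.1, lr=0.5):
    """EM for a one-level mixture of experts (illustrative sketch)."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])                 # add a bias column
    V = rng.normal(scale=0.1, size=(d + 1, n_experts))   # gating weights
    W = rng.normal(scale=0.1, size=(d + 1, n_experts))   # expert weights
    for _ in range(n_iters):
        # E-step: posterior responsibility h[i, j] of expert j for case i.
        # The Gaussian normalizing constant cancels because sigma2 is shared.
        g = softmax(Xb @ V)                              # gating priors
        mu = Xb @ W                                      # expert predictions
        lik = np.exp(-0.5 * (y[:, None] - mu) ** 2 / sigma2)
        h = g * lik
        h /= h.sum(axis=1, keepdims=True) + 1e-12
        # M-step (experts): responsibility-weighted least squares per expert
        for j in range(n_experts):
            Hx = Xb * h[:, j:j + 1]
            W[:, j] = np.linalg.solve(Hx.T @ Xb + 1e-6 * np.eye(d + 1),
                                      Hx.T @ y)
        # M-step (gate): one gradient-ascent step on the expected complete
        # log-likelihood; the paper would instead fit this GLIM by IRLS.
        V += lr * Xb.T @ (h - g) / n
    return V, W

if __name__ == "__main__":
    # Piecewise-linear target: a different linear map on each half-space
    X = rng.normal(size=(500, 2))
    y = np.where(X[:, 0] > 0, X @ np.array([1.0, -1.0]),
                 X @ np.array([-1.0, 1.0]))
    y = y + 0.1 * rng.normal(size=500)
    V, W = em_moe(X, y)
    print("gate weights:", V.shape, "expert weights:", W.shape)
```

The hierarchical case composes such gates down the tree, with responsibilities factoring into products of gating posteriors; the paper's online variant replaces the batch M-step with incremental per-observation updates.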