UNIFIED FRAMEWORK FOR MLPs AND RBFNs: INTRODUCING CONIC SECTION FUNCTION NETWORKS
- 1 July 1994
- Research article
- Published by Taylor & Francis in Cybernetics and Systems
- Vol. 25 (4), 511-554
- https://doi.org/10.1080/01969729408902340
Abstract
Multilayer perceptrons (MLPs) (Werbos, 1974; Rumelhart et al., 1986) and radial basis function networks (RBFNs) (Broomhead and Lowe, 1988; Moody and Darken, 1989) are probably the most widely used neural network models for practical applications. Whereas the former belong to a group of "classical" neural networks (whose weighted sums are loosely inspired by biology), the latter have risen only recently from an analogy to regression theory (Broomhead and Lowe, 1988). On first sight, the two …
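The contrast the abstract draws can be made concrete by comparing the two hidden-unit types: an MLP unit thresholds a weighted sum (a hyperplane-based decision), while an RBF unit responds to distance from a centre (a hypersphere-based decision). A minimal sketch, using generic sigmoid and Gaussian activations (the specific functions are illustrative assumptions, not taken from the paper):

```python
import math

def mlp_unit(x, w, b):
    """Classical perceptron-style unit: sigmoid of a weighted sum w.x + b."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-s))

def rbf_unit(x, c, width):
    """Radial basis unit: Gaussian of the Euclidean distance to a centre c."""
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return math.exp(-d2 / (2.0 * width ** 2))

x = [0.5, -1.0]
print(mlp_unit(x, [1.0, 2.0], 0.0))   # activation grows across a half-space
print(rbf_unit(x, [0.0, 0.0], 1.0))   # activation peaks near the centre c
```

The paper's conic section function networks generalize these two cases: hyperplanes and hyperspheres are both conic sections, so a single parameterized unit can interpolate between MLP-like and RBFN-like behaviour.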
This publication has 20 references indexed in Scilit:
- Initializing back propagation networks with prototypes. Neural Networks, 1993
- Learning higher order correlations. Neural Networks, 1993
- Using radial basis functions to approximate a function and its error bounds. IEEE Transactions on Neural Networks, 1992
- Designing multilayer perceptrons from nearest-neighbor systems. IEEE Transactions on Neural Networks, 1992
- Kolmogorov's theorem and multilayer neural networks. Neural Networks, 1992
- Orthogonal least squares learning algorithm for radial basis function networks. IEEE Transactions on Neural Networks, 1991
- A fast and robust learning algorithm for feedforward neural networks. Neural Networks, 1991
- Approximation capabilities of multilayer feedforward networks. Neural Networks, 1991
- Layered Neural Networks with Gaussian Hidden Units as Universal Approximations. Neural Computation, 1990
- Fast Learning in Networks of Locally-Tuned Processing Units. Neural Computation, 1989