Abstract
The application of a radial basis functions network to a static speech pattern classification problem is described. The radial basis functions network offers training times two to three orders of magnitude faster than backpropagation when training networks of similar power and generality. Recognition results compare well with those obtained using backpropagation and a vector-quantized hidden Markov model on the same problem. A computationally efficient method of exactly solving linear networks in a noniterative fashion is also described. The method was applied to classification of vowels into 20 classes using three different types of input analysis and varying numbers of radial basis functions. The three types of input vectors consisted of linear-prediction-coding cepstral coefficients; formant tracks with frequency, amplitude, and bandwidth information; and bark-scaled formant tracks. All input analyses were supplemented with duration information. The best test results were obtained using the cepstral coefficients and 170 or more radial basis functions.
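The noniterative solution described in the abstract amounts to fixing the radial basis functions and then solving the resulting linear output layer exactly, for example by least squares. The sketch below illustrates this idea in NumPy; the Gaussian basis functions, the random selection of centers from the training data, the shared width parameter, and the least-squares solver are assumptions for illustration, not necessarily the procedure used in the paper.

```python
import numpy as np

def gaussian_design_matrix(X, centers, width):
    """Evaluate Gaussian radial basis functions for every input/center pair."""
    # Squared Euclidean distances between inputs and centers
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf_classifier(X, labels, n_centers=170, width=1.0, n_classes=20, seed=0):
    """Fix the basis functions, then solve the linear output layer exactly."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    Phi = gaussian_design_matrix(X, centers, width)   # (N, n_centers)
    T = np.eye(n_classes)[labels]                     # one-hot class targets
    # Exact least-squares solution of the linear network: no iterative training
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return centers, width, W

def classify(X, centers, width, W):
    """Assign each input to the class with the largest network output."""
    return np.argmax(gaussian_design_matrix(X, centers, width) @ W, axis=1)

# Hypothetical usage with random 12-dimensional cepstral-style feature vectors
X = np.random.default_rng(1).normal(size=(500, 12))
y = np.random.default_rng(2).integers(0, 20, size=500)
centers, width, W = train_rbf_classifier(X, y)
predictions = classify(X, centers, width, W)
```

Because the basis functions are fixed before training, the output weights are the solution of a single linear system, which is what makes training orders of magnitude faster than backpropagation on a comparable network.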
