Discriminant clustering using an HMM isolated-word recognizer

Abstract
One limitation of hidden Markov model (HMM) recognizers is that subword models are not learned but must be prespecified before training. This can lead to excessive computation during recognition and/or poor discrimination between similar-sounding words. A training procedure called discriminant clustering is presented that creates subword models automatically: node sequences from whole-word models are merged using statistical clustering techniques. This procedure reduced the computation required during recognition for a 35-word vocabulary by roughly one-third while maintaining a low error rate. It was also found that five iterations of the forward-backward algorithm are sufficient and that adding nodes to HMM word models improves performance until the minimum word transition time becomes excessive.
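The abstract does not specify the clustering criterion, so the following is only a minimal illustrative sketch of the general idea of merging whole-word HMM nodes into shared subword units. It assumes discrete (VQ-codebook) output distributions per node, a symmetrised Kullback-Leibler distance, greedy agglomerative merging, and occupancy-weighted pooling; all of these choices, and the function names, are assumptions for illustration rather than the paper's discriminant-clustering procedure.

```python
"""
Hypothetical sketch: greedily merge HMM node output distributions
until a target number of shared nodes remains.  Distance measure,
pooling rule, and stopping rule are assumptions, not the paper's method.
"""
import numpy as np


def sym_kl(p, q, eps=1e-12):
    """Symmetrised Kullback-Leibler divergence between discrete distributions."""
    p = np.clip(p, eps, None); p = p / p.sum()
    q = np.clip(q, eps, None); q = q / q.sum()
    return float(np.sum((p - q) * (np.log(p) - np.log(q))))


def merge_nodes(output_dists, counts, target_num_nodes):
    """
    Agglomeratively merge the closest pair of node output distributions
    until only target_num_nodes clusters remain.  `counts` are occupancy
    counts used to weight each distribution when two nodes are pooled.
    Returns the merged distributions and a map from original node index
    to merged-cluster index.
    """
    dists = [np.asarray(d, dtype=float) for d in output_dists]
    counts = [float(c) for c in counts]
    assign = list(range(len(dists)))  # original node -> current cluster id

    while len(dists) > target_num_nodes:
        # Find the closest pair of current clusters.
        best = None
        for i in range(len(dists)):
            for j in range(i + 1, len(dists)):
                d = sym_kl(dists[i], dists[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        # Pool the two distributions, weighted by occupancy counts.
        total = counts[i] + counts[j]
        pooled = (counts[i] * dists[i] + counts[j] * dists[j]) / total
        dists[i], counts[i] = pooled / pooled.sum(), total
        del dists[j], counts[j]
        # Remap assignments: cluster j folds into i; higher ids shift down.
        assign = [i if a == j else (a - 1 if a > j else a) for a in assign]

    return dists, assign


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Ten hypothetical node output distributions over a 16-symbol codebook.
    raw = rng.random((10, 16))
    dists = raw / raw.sum(axis=1, keepdims=True)
    merged, assign = merge_nodes(dists, counts=np.ones(10), target_num_nodes=6)
    print("cluster assignment:", assign)
```

Merging nodes in this way shrinks the total number of distinct output distributions that must be evaluated during recognition, which is consistent with the reported reduction in computation, although the actual discriminant criterion used in the paper is not given in the abstract.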
