Multi-level clustering of acoustic features for phoneme recognition based on mutual information

Abstract
An optimal method for organizing acoustic features to recognize phonemes in continuous speech is described. Each level of acoustic features, including power and its variational pattern, and the linear predictive coding Mel-cepstrum and its pattern of temporal change, is clustered hierarchically on the basis of the mutual information between the acoustic feature vector and the phoneme labels assigned to the speech wave. Multi-level clustering is used to discriminate phonemes by detecting the most reliable features in the context and by using an effective combination of acoustic characteristics. Phoneme recognition for each frame is discussed. The conditional entropy is evaluated for the phoneme labels of the frame, given the various acoustic features of the neighboring frames. Phoneme discrimination can be performed effectively using the conditional entropy. In the preliminary test the phoneme recognition rate was 81.6%, and the vowel recognition rate was 92.4% at the frame level. In a completely talker-independent experiment the recognition rates were 76.8% and 89.7%, respectively.
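The two information-theoretic quantities the abstract relies on, the mutual information I(X; Y) between a clustered acoustic feature X and the phoneme label Y, and the conditional entropy H(Y | X) of the label given the feature, can be illustrated with a minimal sketch. The count table below is hypothetical (not from the paper), and the estimation from a joint count matrix is an assumption about how such statistics might be gathered:

```python
import numpy as np

def mutual_information(joint_counts):
    """Mutual information I(X; Y) in bits from a joint count table.

    Rows index acoustic-feature clusters X; columns index phoneme labels Y.
    """
    p = joint_counts / joint_counts.sum()      # joint distribution p(x, y)
    px = p.sum(axis=1, keepdims=True)          # marginal p(x)
    py = p.sum(axis=0, keepdims=True)          # marginal p(y)
    nz = p > 0                                 # skip zero cells to avoid log(0)
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

def conditional_entropy(joint_counts):
    """Conditional entropy H(Y | X) in bits: residual uncertainty about the
    phoneme label once the feature cluster is observed."""
    p = joint_counts / joint_counts.sum()
    px = p.sum(axis=1, keepdims=True)
    nz = p > 0
    return float(-(p[nz] * np.log2((p / px)[nz])).sum())

# Hypothetical 3-cluster x 3-phoneme count table, for illustration only.
counts = np.array([[30.0, 5.0, 5.0],
                   [4.0, 40.0, 6.0],
                   [2.0, 3.0, 55.0]])
print(round(mutual_information(counts), 3))
print(round(conditional_entropy(counts), 3))
```

The identity H(Y) = I(X; Y) + H(Y | X) holds, so maximizing mutual information during clustering is equivalent to minimizing the conditional entropy that governs frame-level discrimination.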
