Rapid speaker adaptation using speaker-mixture allophone models applied to speaker-independent speech recognition

Abstract
A speaker-mixture principle that allows the creation of speaker-independent phone models is proposed. From this principle, speaker-tied training is derived for rapid speaker adaptation using utterances shorter than one second. The concept of speaker pruning is also introduced to reduce computational cost without degrading speaker adaptation performance. The principle is combined with context-dependent phone models generated automatically by the successive state splitting algorithm. In a Japanese phrase recognition experiment, speaker-mixture allophone models achieved an error reduction of 29.0% relative to the conventional speaker-independent HMM (hidden Markov model)-LR method. Speaker adaptation by speaker-tied training attained an error reduction of 16.8% using a 0.6-s Japanese word utterance. Speaker pruning reduced the number of phone model mixtures by 50% to 92% without lowering recognition performance.
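The abstract does not give the paper's actual pruning criterion, but the idea of speaker pruning, discarding speaker mixture components that score poorly on a short adaptation utterance, can be illustrated with a minimal sketch. The beam threshold, speaker names, and log-likelihood values below are hypothetical, not from the paper.

```python
def prune_speakers(log_likelihoods, beam=5.0):
    """Keep only the speaker mixture components whose adaptation-data
    log-likelihood falls within `beam` of the best-scoring speaker.
    `log_likelihoods` maps speaker id -> total log-likelihood of the
    adaptation utterance under that speaker's phone models."""
    best = max(log_likelihoods.values())
    return {spk: ll for spk, ll in log_likelihoods.items() if best - ll <= beam}

# Hypothetical per-speaker scores for a ~0.6-s adaptation utterance.
scores = {"spk1": -120.3, "spk2": -118.9, "spk3": -131.7, "spk4": -122.0}
kept = prune_speakers(scores, beam=5.0)
# spk3 is pruned; recognition then uses only the surviving mixtures.
```

Pruning with a likelihood beam like this trims the mixture set adaptively per speaker, which is one plausible way the 50% to 92% reduction reported in the abstract could vary across test speakers.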
