Rapid speaker adaptation using speaker-mixture allophone models applied to speaker-independent speech recognition
- 1 January 1993
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 2 (ISSN 1520-6149), pp. 570-573
- https://doi.org/10.1109/icassp.1993.319371
Abstract
A speaker mixture principle that allows the creation of speaker-independent phone models is proposed. Speaker-tied training for rapid speaker adaptation using utterances shorter than one second is derived from this principle. The concept of speaker pruning is also introduced to reduce computational cost without degrading speaker adaptation performance. The principle is combined with context-dependent phone models that are generated automatically by the successive state splitting algorithm. In a Japanese phrase recognition experiment, speaker-mixture allophone models achieved an error reduction of 29.0% compared with the conventional speaker-independent HMM (hidden Markov model)-LR method. Speaker adaptation by speaker-tied training attained an error reduction of 16.8% using a 0.6-s Japanese word utterance. Speaker pruning reduced the number of phone model mixtures by between 50% and 92% without lowering recognition performance.
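Two ideas in the abstract lend themselves to a brief illustration: a state likelihood formed as a mixture over per-speaker components, and speaker pruning that keeps only the best-scoring speaker components after a short adaptation utterance. The Python sketch below is a conceptual rendering under those assumptions only; the class and function names, the diagonal-Gaussian component form, and the keep-fraction pruning criterion are illustrative choices, not the paper's actual formulation, which is built on HMM allophone models produced by the successive state splitting algorithm.

```python
import numpy as np

class SpeakerMixtureState:
    """Hypothetical allophone state: one Gaussian per training speaker,
    with the state likelihood taken as a weighted mixture over speakers."""

    def __init__(self, means, variances, weights):
        # means, variances: (n_speakers, dim); weights: (n_speakers,)
        self.means = np.asarray(means, dtype=float)
        self.variances = np.asarray(variances, dtype=float)
        self.weights = np.asarray(weights, dtype=float)

    def component_likelihoods(self, x):
        # Per-speaker diagonal-Gaussian likelihoods of one observation x.
        diff = x - self.means
        log_norm = -0.5 * np.sum(np.log(2.0 * np.pi * self.variances), axis=1)
        log_exp = -0.5 * np.sum(diff * diff / self.variances, axis=1)
        return np.exp(log_norm + log_exp)

    def likelihood(self, x, active=None):
        # Mixture likelihood; `active` optionally restricts the sum to the
        # speaker components that survived pruning.
        comp = self.component_likelihoods(x)
        w = self.weights
        if active is not None:
            comp, w = comp[active], w[active]
            w = w / w.sum()
        return float(np.dot(w, comp))


def prune_speakers(state, adaptation_frames, keep_fraction=0.5):
    """Speaker pruning (conceptual): score each speaker component on a short
    adaptation utterance and keep only the best-scoring fraction."""
    scores = np.zeros(len(state.weights))
    for x in adaptation_frames:
        scores += np.log(state.component_likelihoods(x) + 1e-300)
    n_keep = max(1, int(keep_fraction * len(scores)))
    return np.argsort(scores)[::-1][:n_keep]
```

Under this reading, pruning half the speaker components roughly halves the per-frame mixture cost, which is consistent in spirit with the 50% to 92% reduction in phone model mixtures reported in the abstract.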
References
- BYBLOS: The BBN continuous speech recognition system. Published by Institute of Electrical and Electronics Engineers (IEEE), 2005.
- Bayesian learning for hidden Markov model with Gaussian mixture state observation densities. Speech Communication, 1992.
- A successive state splitting algorithm for efficient allophone modeling. Published by Institute of Electrical and Electronics Engineers (IEEE), 1992.