Automated lip-synch and speech synthesis for character animation
- 1 May 1986
- journal article
- Published by Association for Computing Machinery (ACM) in ACM SIGCHI Bulletin
- Vol. 17 (SI), 143-147
- https://doi.org/10.1145/30851.30874
Abstract
An automated method of synchronizing facial animation to recorded speech is described. In this method, a common speech synthesis method (linear prediction) is adapted to provide simple and accurate phoneme recognition. The recognized phonemes are then associated with mouth positions to provide keyframes for computer animation of speech using a parametric model of the human face. The linear prediction software, once implemented, can also be used for speech resynthesis. The synthesis retains intelligibility and natural speech rhythm while achieving a “synthetic realism” consistent with computer animation. Speech synthesis also enables certain useful manipulations for the purpose of computer character animation.
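The abstract outlines a pipeline: fit linear-prediction (LPC) coefficients to short frames of the recording, match each frame against reference phoneme templates, and map the recognized phonemes to mouth positions that become animation keyframes. The sketch below is a minimal, hypothetical illustration of that pipeline in Python; the frame length, prediction order, Euclidean template distance, phoneme inventory, and (jaw, lip) mouth parameters are placeholder assumptions, not values or methods taken from the paper.

```python
import numpy as np

def lpc_coefficients(frame, order=10):
    """Fit LPC coefficients to one windowed frame via the Levinson-Durbin recursion."""
    x = frame * np.hamming(len(frame))
    # Autocorrelation of the frame up to the prediction order.
    r = [float(np.dot(x[: len(x) - k], x[k:])) for k in range(order + 1)]
    a, err = [1.0], max(r[0], 1e-9)
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k]
        err *= (1.0 - k * k)
    return np.array(a)

# Illustrative phoneme templates and mouth shapes -- placeholders, not the
# paper's actual inventory.  Templates would be fit from labelled frames of
# the same speaker; mouth shapes are (jaw opening, lip width) in [0, 1].
PHONEME_TEMPLATES = {}
MOUTH_SHAPES = {"AA": (0.9, 0.5), "EE": (0.3, 0.9), "M": (0.0, 0.4), "SIL": (0.0, 0.5)}

def recognize_frame(frame, order=10):
    """Label a frame with the phoneme whose LPC template is nearest (Euclidean)."""
    a = lpc_coefficients(frame, order)
    return min(PHONEME_TEMPLATES, key=lambda p: float(np.sum((a - PHONEME_TEMPLATES[p]) ** 2)))

def mouth_keyframes(signal, sample_rate, frame_ms=20, order=10):
    """Frame the recording, recognize one phoneme per frame, and emit
    (time_in_seconds, mouth_parameters) keyframes at phoneme changes."""
    hop = int(sample_rate * frame_ms / 1000)
    keys, last = [], None
    for start in range(0, len(signal) - hop, hop):
        phoneme = recognize_frame(signal[start:start + hop], order)
        if phoneme != last:
            keys.append((start / sample_rate, MOUTH_SHAPES.get(phoneme, MOUTH_SHAPES["SIL"])))
            last = phoneme
    return keys

if __name__ == "__main__":
    # Tiny self-contained demo with synthetic "speech": two noisy tones stand
    # in for two vowels, and their LPC fits stand in for trained templates.
    rng = np.random.default_rng(0)
    sr, frame_t = 8000, np.arange(160) / 8000
    PHONEME_TEMPLATES["AA"] = lpc_coefficients(np.sin(2 * np.pi * 700 * frame_t) + 0.05 * rng.standard_normal(160))
    PHONEME_TEMPLATES["EE"] = lpc_coefficients(np.sin(2 * np.pi * 300 * frame_t) + 0.05 * rng.standard_normal(160))
    t = np.arange(800) / sr
    demo = np.concatenate([np.sin(2 * np.pi * 700 * t), np.sin(2 * np.pi * 300 * t)])
    demo += 0.05 * rng.standard_normal(demo.size)
    print(mouth_keyframes(demo, sr))
```

In a real setting the templates would be trained from hand-labelled frames of the target speaker, and the frame-by-frame labels would be smoothed before keyframes are emitted; the simple "emit a keyframe on phoneme change" rule here is only meant to show where the animation data comes from.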