An articulatory synthesizer for perceptual research
- 1 August 1981
- journal article
- research article
- Published by Acoustical Society of America (ASA) in The Journal of the Acoustical Society of America
- Vol. 70 (2), 321–328
- https://doi.org/10.1121/1.386780
Abstract
A software articulatory synthesizer, based upon a model developed by P. Mermelstein [J. Acoust. Soc. Am. 53, 1070–1082 (1973)], has been implemented on a laboratory computer. The synthesizer is designed as a tool for studying the linguistically and perceptually significant aspects of articulatory events. A prominent feature of this system is that it easily permits modification of a limited set of key parameters that control the positions of the major articulators: the lips, jaw, tongue body, tongue tip, velum, and hyoid bone. Time-varying control over vocal-tract shape and nasal coupling is possible by a straightforward procedure that is similar to key-frame animation: critical vocal-tract configurations are specified, along with excitation and timing information. Articulation then proceeds along a directed path between these key frames within the time script specified by the user. Such a procedure permits a sufficiently fine degree of control over articulator positions and movements. The organization of this system and its present and future applications are discussed.
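The key-frame procedure the abstract describes can be sketched as simple interpolation between user-specified vocal-tract configurations. The sketch below is a minimal illustration of that idea, not the paper's actual interface: the parameter names (`jaw`, `lip_opening`), linear interpolation, and the frame period are all assumptions for demonstration.

```python
# Hypothetical sketch of key-frame articulatory control: articulator
# parameters are interpolated between critical vocal-tract
# configurations ("key frames") on a user-specified time script.
# Parameter names, linear interpolation, and units are illustrative.

def interpolate_track(key_frames, frame_period_ms=5.0):
    """key_frames: list of (time_ms, params_dict), sorted by time.
    Returns (time_ms, params_dict) samples every frame_period_ms."""
    track = []
    for (t0, p0), (t1, p1) in zip(key_frames, key_frames[1:]):
        t = t0
        while t < t1:
            alpha = (t - t0) / (t1 - t0)  # fraction of the way to next key frame
            params = {k: (1 - alpha) * p0[k] + alpha * p1[k] for k in p0}
            track.append((t, params))
            t += frame_period_ms
    track.append(key_frames[-1])  # end exactly on the final configuration
    return track

# Example: jaw and lip opening moving between two key frames over 100 ms.
keys = [
    (0.0,   {"jaw": 0.0, "lip_opening": 0.2}),
    (100.0, {"jaw": 1.0, "lip_opening": 0.8}),
]
frames = interpolate_track(keys, frame_period_ms=25.0)
```

Each intermediate frame would then drive the vocal-tract shape (and hence the synthesis filter) at that instant, which is what gives the user fine control over articulator movements with only a few specified configurations.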