Multi-level acoustic segmentation of continuous speech

Abstract
As part of the goal to better understand the relationship between the speech signal and the underlying phonemic representation, the authors have developed a procedure that describes the acoustic structure of the signal. Acoustic events are embedded in a multi-level structure in which information ranging from coarse to fine is represented in an organized fashion. An analysis of the acoustic structure, using 500 utterances from 100 different talkers, shows that it captures over 96% of the acoustic-phonetic events of interest with an insertion rate of less than 5%. The signal representation and the algorithms for determining the acoustic segments and the multi-level structure are described. Performance results and a comparison with scale-space filtering are also included. The possible use of this segmental description for automatic speech recognition is discussed.
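The abstract does not give the authors' algorithm, but the coarse-to-fine idea can be illustrated with a minimal sketch: start from frame-level segments and repeatedly merge the most similar pair of adjacent segments, keeping the segmentation at each step as one level of the hierarchy. The feature vectors, the Euclidean distance on segment means, and the function name `multilevel_segmentation` below are assumptions for illustration, not the method described in the paper.

```python
import numpy as np

def multilevel_segmentation(frames):
    """Build a coarse-to-fine hierarchy of acoustic segments.

    frames: (T, D) array of spectral feature vectors (e.g. filterbank outputs).
    Returns a list of segmentations, finest to coarsest; each segmentation
    is a list of (start, end) frame-index pairs.
    """
    # Finest level: every frame is its own segment.
    segments = [(t, t + 1) for t in range(len(frames))]
    levels = [list(segments)]

    def segment_mean(seg):
        start, end = seg
        return frames[start:end].mean(axis=0)

    # Repeatedly merge the most similar pair of *adjacent* segments,
    # recording the segmentation after each merge.  The resulting
    # sequence of segmentations forms a multi-level, dendrogram-like
    # description of the utterance.
    while len(segments) > 1:
        dists = [
            np.linalg.norm(segment_mean(segments[i]) - segment_mean(segments[i + 1]))
            for i in range(len(segments) - 1)
        ]
        i = int(np.argmin(dists))
        merged = (segments[i][0], segments[i + 1][1])
        segments = segments[:i] + [merged] + segments[i + 2:]
        levels.append(list(segments))

    return levels

# Example usage with random features standing in for real speech frames:
levels = multilevel_segmentation(np.random.randn(200, 40))
print(len(levels), "levels, coarsest has", len(levels[-1]), "segment")
```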
