Noncausal all-pole modeling of voiced speech

Abstract
This paper introduces noncausal all-pole models capable of efficiently capturing both the magnitude and phase information of voiced speech. Noncausal all-pole filters are shown to match magnitude and phase more accurately than their causal counterparts, and they are particularly appropriate for voiced speech because of the nature of the glottal excitation. By modeling speech in the frequency domain, the standard difficulties that arise when using noncausal all-pole filters are avoided. Several algorithms for determining the model parameters from frequency-domain information and the masking properties of the ear are described. Our results suggest that high-quality voiced speech can be produced with a 14th-order noncausal all-pole model.
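As a minimal illustrative sketch (not the paper's algorithm), one common way to realize a noncausal all-pole response is to factor it into a causal all-pole part and an anticausal all-pole part, applying the causal part in a forward pass and the anticausal part by filtering the time-reversed signal and reversing the result. The coefficients below are arbitrary example values, not parameters from the paper:

```python
import numpy as np

def allpole_causal(x, a):
    """Causal all-pole filter: y[n] = x[n] - sum_k a[k] * y[n-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = x[n]
        for k, ak in enumerate(a, start=1):
            if n - k >= 0:
                acc -= ak * y[n - k]
        y[n] = acc
    return y

def noncausal_allpole(x, a_causal, a_anticausal):
    """Noncausal all-pole response via a causal forward pass
    followed by an anticausal pass (reverse, filter, reverse)."""
    y = allpole_causal(x, a_causal)
    return allpole_causal(y[::-1], a_anticausal)[::-1]

# Impulse at n = 10: the output is nonzero both before and after
# the impulse, which a causal all-pole filter cannot produce.
x = np.zeros(32)
x[10] = 1.0
y = noncausal_allpole(x, a_causal=[-0.5], a_anticausal=[-0.5])
```

The two-sided impulse response is what lets such a model capture the phase characteristics of the glottal excitation, which is the property the paper exploits.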