Abstract
In this paper, we investigate a new statistical language model which captures topic-related dependencies of words within and across sentences. First, we develop a sentence-level mixture language model that takes advantage of the topic constraints in a sentence or article. Second, we introduce topic-dependent dynamic cache adaptation techniques in the framework of the mixture model. Experiments with the static (or unadapted) mixture model on the 1994 WSJ task indicated a 21% reduction in perplexity and a 3-4% improvement in recognition accuracy over a general n-gram model. The static mixture model also improved recognition performance over an adapted n-gram model. Mixture adaptation techniques contributed a further 14% reduction in perplexity and a small improvement in recognition accuracy.
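As a minimal sketch of the two techniques named above (using standard mixture and cache language model notation; the symbols $m$, $\lambda_k$, and $\gamma$ are assumed here, not given in the abstract), the sentence-level mixture model scores a sentence $w_1 \ldots w_N$ as a weighted combination of $m$ topic-dependent n-gram components, and cache adaptation interpolates each component with a dynamic cache of recently observed words:

\[
P(w_1^N) = \sum_{k=1}^{m} \lambda_k \prod_{i=1}^{N} p_k(w_i \mid w_{i-2}, w_{i-1})
\]

\[
p_k^{\mathrm{adapted}}(w_i \mid h) = (1 - \gamma)\, p_k(w_i \mid h) + \gamma\, p_k^{\mathrm{cache}}(w_i \mid h)
\]

Because the mixture is taken at the sentence level rather than the word level, each hypothesized topic component scores the entire sentence, which is what lets the model exploit topic coherence within a sentence or article.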