Class phrase models for language modeling

Abstract
Previous attempts to automatically determine multi-word phrases as the basic units for language modeling have succeeded in extending bigram models, improving the perplexity of the language model and/or the word accuracy of the speech decoder. However, none of these techniques has so far improved on the trigram model, except on the rather controlled ATIS task (McCandless & Glass, 1994). We therefore propose an algorithm that minimizes the perplexity of a bigram model directly. The new algorithm reduces the trigram perplexity and also improves word accuracy on the Verbmobil task. It is the natural counterpart of successful word-classification algorithms for language modeling that minimize the leaving-one-out bigram perplexity. We also give details on the use of class-finding techniques and m-gram models, which can be crucial to applying this technique successfully.
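The abstract does not spell out the phrase-finding procedure, but the idea of choosing phrases by directly minimizing bigram perplexity can be illustrated with a rough sketch. The following Python code (hypothetical, not the authors' algorithm, which uses leaving-one-out estimates rather than the plain maximum-likelihood estimates below) greedily merges the adjacent word pair whose merger most reduces the bigram perplexity of a training corpus. Perplexity is normalized by the original word count so that results remain comparable as tokens are joined into phrases.

```python
import math
from collections import Counter

def bigram_perplexity(corpus, n_words):
    """ML bigram perplexity of a tokenized corpus, normalized by a
    fixed word count so different phrase segmentations are comparable."""
    contexts, bigrams = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent          # sentence-start symbol; no end symbol (simplification)
        contexts.update(toks[:-1])
        bigrams.update(zip(toks[:-1], toks[1:]))
    log_prob = sum(c * math.log(c / contexts[u]) for (u, v), c in bigrams.items())
    return math.exp(-log_prob / n_words)

def merge_phrase(corpus, pair):
    """Replace adjacent occurrences of `pair` with one joined phrase token."""
    joined = "_".join(pair)            # assumes "_" does not occur in words
    out = []
    for sent in corpus:
        new, i = [], 0
        while i < len(sent):
            if i + 1 < len(sent) and (sent[i], sent[i + 1]) == pair:
                new.append(joined)
                i += 2
            else:
                new.append(sent[i])
                i += 1
        out.append(new)
    return out

def find_phrases(corpus, max_phrases=10):
    """Greedily merge the pair that most reduces bigram perplexity;
    stop when no merge gives a strict improvement."""
    n_words = sum(len(s) for s in corpus)   # fixed normalization count
    phrases = []
    for _ in range(max_phrases):
        best_pair = None
        best_pp = bigram_perplexity(corpus, n_words)
        candidates = Counter(p for s in corpus for p in zip(s[:-1], s[1:]))
        for pair, cnt in candidates.most_common(50):  # limit the search
            if cnt < 2:
                break
            pp = bigram_perplexity(merge_phrase(corpus, pair), n_words)
            if pp < best_pp:
                best_pair, best_pp = pair, pp
        if best_pair is None:
            break
        corpus = merge_phrase(corpus, best_pair)
        phrases.append(best_pair)
    return phrases, corpus
```

For example, on a toy corpus where "b" is followed by "c" after "a" but by "d" after "x", merging "a b" into a single unit lets the bigram model capture what is effectively trigram context, which is the intuition behind phrase-based perplexity reduction.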
