Topic adaptation for language modeling using unnormalized exponential models

Abstract
In this paper, we present novel techniques for performing topic adaptation on an n-gram language model. Given training text labeled with topic information, we automatically identify the most relevant topics for new text. We adapt our language model toward these topics using an exponential model, by adjusting probabilities in our model to agree with those found in the topical subset of the training data. For efficiency, we do not normalize the model; that is, we do not require that the "probabilities" in the language model sum to 1. With these techniques, we were able to achieve a modest reduction in speech recognition word-error rate in the Broadcast News domain.
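The core idea can be illustrated with a minimal sketch. This is not the paper's implementation: it uses toy unigram statistics, a hypothetical interpolation weight `alpha`, and a probability floor for unseen words, but it shows the unnormalized exponential adjustment, where each baseline probability is scaled by a power of the topic-to-baseline probability ratio and the resulting scores are deliberately left unnormalized.

```python
from collections import Counter

# Hypothetical toy corpora: general training text and a topical subset.
general_text = "the cat sat on the mat the dog sat on the rug".split()
topic_text = "the dog chased the dog across the rug".split()

def unigram_probs(tokens):
    """Maximum-likelihood unigram probabilities from a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def adapted_score(word, base, topic, alpha=0.5, floor=1e-6):
    """Unnormalized exponential adaptation: scale the baseline
    probability by (topic_prob / base_prob) ** alpha.  The adapted
    scores are NOT renormalized to sum to 1, mirroring the
    efficiency trick described in the abstract."""
    p_base = base.get(word, floor)
    p_topic = topic.get(word, floor)
    return p_base * (p_topic / p_base) ** alpha

base = unigram_probs(general_text)
topic = unigram_probs(topic_text)

# "dog" occurs more often in the topical subset, so its score rises;
# "cat", absent from the topic data, is pushed down.
print(adapted_score("dog", base, topic), base["dog"])
print(adapted_score("cat", base, topic), base["cat"])
```

With `alpha = 0`, the sketch reduces to the baseline model; with `alpha = 1`, it fully trusts the (sparse) topical estimates, so intermediate values trade off between the two.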
