Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms
- 1 January 2002
- proceedings article
- Published by Association for Computational Linguistics (ACL)
- Vol. 10, 1-8
- https://doi.org/10.3115/1118693.1118694
Abstract
We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modification of the proof of convergence of the perceptron algorithm for classification problems. We give experimental results on part-of-speech tagging and base noun phrase chunking, in both cases showing improvements over results for a maximum-entropy tagger.
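The abstract outlines the training loop at a high level: Viterbi-decode each training sentence under the current weights, then apply a simple additive update that moves the weights toward the gold feature counts and away from the predicted ones. A minimal sketch of that loop, assuming a toy word/tag emission and tag-bigram transition feature set (the feature templates and example data below are illustrative, not the paper's exact setup):

```python
# Sketch of a perceptron-style trainer for sequence tagging:
# Viterbi decoding of each training example plus additive updates.
from collections import defaultdict

def features(words, tags):
    """Global feature counts: word/tag emissions and tag-bigram transitions."""
    counts = defaultdict(int)
    prev = "<s>"
    for w, t in zip(words, tags):
        counts[("emit", w, t)] += 1
        counts[("trans", prev, t)] += 1
        prev = t
    return counts

def viterbi(words, tagset, weights):
    """Highest-scoring tag sequence under the current linear model."""
    # pi[i][t] = best score of any tag sequence ending at position i with tag t
    pi = [{t: weights[("emit", words[0], t)] + weights[("trans", "<s>", t)]
           for t in tagset}]
    back = []
    for i in range(1, len(words)):
        scores, ptrs = {}, {}
        for t in tagset:
            best_prev = max(tagset, key=lambda p: pi[-1][p] + weights[("trans", p, t)])
            scores[t] = (pi[-1][best_prev] + weights[("trans", best_prev, t)]
                         + weights[("emit", words[i], t)])
            ptrs[t] = best_prev
        pi.append(scores)
        back.append(ptrs)
    # Follow back-pointers from the best final tag.
    last = max(tagset, key=lambda t: pi[-1][t])
    tags = [last]
    for ptrs in reversed(back):
        tags.append(ptrs[tags[-1]])
    return list(reversed(tags))

def train(data, tagset, epochs=5):
    weights = defaultdict(float)
    for _ in range(epochs):
        for words, gold in data:
            guess = viterbi(words, tagset, weights)
            if guess != gold:
                # Additive update: +1 per gold feature, -1 per predicted feature.
                for f, c in features(words, gold).items():
                    weights[f] += c
                for f, c in features(words, guess).items():
                    weights[f] -= c
    return weights

if __name__ == "__main__":
    data = [(["the", "dog", "barks"], ["DT", "NN", "VBZ"]),
            (["a", "cat", "sleeps"], ["DT", "NN", "VBZ"])]
    w = train(data, tagset=["DT", "NN", "VBZ"])
    print(viterbi(["the", "cat", "barks"], ["DT", "NN", "VBZ"], w))
```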