Word recognition with recurrent network automata
- 2 January 2003
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
The authors report a method to directly encode temporal information into a neural network by explicitly modeling that information with a left-to-right automaton and teaching a recurrent network to identify the automaton states. The state lengths and positions are adjusted with the usual iterative train-and-resegment procedure. The global model is a hybrid of a recurrent neural network, which implements the state transition models, and dynamic programming, which finds the best state sequence. The advantages of using recurrent networks are illustrated by applying the method to a speaker-independent digit recognition task.
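The hybrid described in the abstract pairs per-frame state scores from a recurrent network with a dynamic programming search over a left-to-right automaton. The sketch below is not the authors' implementation; it is a minimal illustration, assuming log-domain state scores (here random stand-ins for recurrent network outputs) and a strict left-to-right topology in which each frame either stays in the current state or advances to the next one.

```python
# Minimal sketch (not the paper's code): dynamic programming alignment of
# T frames to S left-to-right automaton states. In the hybrid model the
# per-frame scores would come from a recurrent network; here they are
# random placeholders.
import numpy as np


def viterbi_left_to_right(state_scores):
    """Return the best monotone state sequence and its total log-score.

    state_scores: (T, S) array of log-scores for each frame/state pair.
    Allowed transitions: stay in the current state or advance by one state.
    """
    T, S = state_scores.shape
    dp = np.full((T, S), -np.inf)        # best cumulative score ending at (t, s)
    back = np.zeros((T, S), dtype=int)   # back-pointer to the state at t-1

    dp[0, 0] = state_scores[0, 0]        # the path must start in the first state
    for t in range(1, T):
        for s in range(S):
            stay = dp[t - 1, s]
            advance = dp[t - 1, s - 1] if s > 0 else -np.inf
            if stay >= advance:
                dp[t, s] = stay + state_scores[t, s]
                back[t, s] = s
            else:
                dp[t, s] = advance + state_scores[t, s]
                back[t, s] = s - 1

    # The path must end in the last state; trace back to recover the sequence.
    path = [S - 1]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1], dp[T - 1, S - 1]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = rng.normal(size=(20, 5))    # 20 frames aligned to a 5-state word model
    path, score = viterbi_left_to_right(scores)
    print(path, score)
```

In the train-and-resegment loop mentioned in the abstract, an alignment of this kind would provide new state boundaries, and the recurrent network would then be retrained on the resegmented frames.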