Connectionist architectural learning for high performance character and speech recognition
- 1 January 1993
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 1 (ISSN 1520-6149), pp. 625-628
- https://doi.org/10.1109/icassp.1993.319196
Abstract
The authors applied an automatic structure optimization (ASO) algorithm to the optimization of multistate time-delay neural networks (MSTDNNs), an extension of the TDNN. These networks allow the recognition of sequences of ordered events that have to be observed jointly. For example, in many speech recognition systems the recognition of words is decomposed into the recognition of sequences of phonemes or phoneme-like units; in handwritten character recognition, the recognition of characters can be decomposed into the joint recognition of characteristic strokes, etc. The combination of the proposed ASO algorithm with the MSTDNN was applied successfully to speech recognition and handwritten character recognition tasks with varying amounts of training data.
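The decomposition described in the abstract can be illustrated compactly: a time-delay layer scores each frame from a sliding window of input frames, and a word or character model is an ordered sequence of states whose per-frame scores are aligned to the frame sequence by dynamic programming. The Python sketch below is only a hedged illustration of that general MSTDNN-style idea; all layer sizes, function names, and the toy data are assumptions, and it does not implement the authors' ASO procedure, which additionally adapts the network structure automatically from the training data.

```python
# Minimal sketch (not the authors' implementation) of two ideas from the abstract:
# a time-delay layer over a sliding window of frames, and alignment of an ordered
# sequence of states (e.g. strokes or phoneme-like units) to the frame sequence.
import numpy as np

def time_delay_layer(frames, weights, bias):
    """One TDNN-style layer: each output frame sees a window of input frames.

    frames : (T, n_in) input feature frames
    weights: (delay, n_in, n_out) weights shared across time
    bias   : (n_out,)
    returns: (T - delay + 1, n_out) hidden activations
    """
    delay = weights.shape[0]
    T = frames.shape[0]
    return np.stack([
        np.tanh(sum(frames[t + d] @ weights[d] for d in range(delay)) + bias)
        for t in range(T - delay + 1)
    ])

def align_states(state_scores, n_states):
    """Viterbi-style alignment of an ordered state sequence to the frames.

    state_scores: (T, n_states) per-frame score of each state; the states
                  must be visited in order (stay in a state or advance).
    returns the best total score of traversing states 0..n_states-1.
    """
    T = state_scores.shape[0]
    dp = np.full((T, n_states), -np.inf)
    dp[0, 0] = state_scores[0, 0]
    for t in range(1, T):
        for s in range(n_states):
            stay = dp[t - 1, s]
            advance = dp[t - 1, s - 1] if s > 0 else -np.inf
            dp[t, s] = max(stay, advance) + state_scores[t, s]
    return dp[-1, -1]

# Toy usage: 20 frames of 8 features, a 3-frame delay window, and a
# 4-state model (e.g. four characteristic strokes of a character).
rng = np.random.default_rng(0)
frames = rng.normal(size=(20, 8))
W = rng.normal(scale=0.1, size=(3, 8, 16))
b = np.zeros(16)
hidden = time_delay_layer(frames, W, b)      # (18, 16)
V = rng.normal(scale=0.1, size=(16, 4))      # hidden -> 4 state scores
scores = hidden @ V                          # (18, 4)
print(align_states(scores, n_states=4))
```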