Robust connectionist parsing of spoken language

Abstract
A modular, recurrent connectionist network architecture that learns to robustly perform incremental parsing of complex sentences is presented. From sequential input, one word at a time, the networks learn to perform semantic role assignment, noun phrase attachment, and clause structure recognition for sentences with passive constructions and center-embedded clauses. The networks make syntactic and semantic predictions at every point in time, and previous predictions are revised as expectations are affirmed or violated by the arrival of new information. The networks induce their own grammar rules for dynamically transforming an input sequence of words into a syntactic/semantic interpretation. These networks generalize and display tolerance to input that has been corrupted in ways common in spoken language.
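
To make the notion of incremental, per-word prediction concrete, the following is a minimal sketch, not the paper's modular architecture: a toy Elman-style recurrent network that consumes one word at a time and emits a semantic-role guess after each word, which can later be revised as more of the sentence arrives. The vocabulary, role inventory, dimensions, and randomly initialized weights are all illustrative assumptions, standing in for the trained, modular networks described in the abstract.

```python
# Minimal sketch (not the authors' implementation) of incremental,
# per-word prediction with a simple recurrent network.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["the", "dog", "chased", "cat", "was", "by"]   # toy vocabulary (assumed)
ROLES = ["AGENT", "ACTION", "PATIENT", "OTHER"]        # toy role inventory (assumed)

V, H, R = len(VOCAB), 16, len(ROLES)

# Randomly initialized weights stand in for trained parameters.
W_xh = rng.normal(0, 0.1, (H, V))   # input -> hidden
W_hh = rng.normal(0, 0.1, (H, H))   # hidden -> hidden (recurrence carries left context)
W_hy = rng.normal(0, 0.1, (R, H))   # hidden -> role prediction

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def parse_incrementally(sentence):
    """Process one word at a time, emitting a role distribution after each word.
    Earlier guesses may be revised as later words confirm or violate expectations."""
    h = np.zeros(H)
    predictions = []
    for word in sentence:
        x = one_hot(VOCAB.index(word), V)
        h = np.tanh(W_xh @ x + W_hh @ h)   # hidden state accumulates the sentence prefix
        p = softmax(W_hy @ h)              # current best guess for this word's role
        predictions.append((word, ROLES[int(p.argmax())], p))
    return predictions

for word, role, p in parse_incrementally(["the", "dog", "chased", "the", "cat"]):
    print(f"{word:>7s} -> {role:8s}  (confidence {p.max():.2f})")
```

With trained weights, such a recurrent state would let the per-word outputs reflect expectations built from the prefix seen so far, which is the behavior the abstract describes; the actual system additionally uses separate modules for role assignment, attachment, and clause structure.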
