A modular neural network architecture for sequential paraphrasing of script-based stories
- 1 January 1989
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
Sequential recurrent neural networks are applied to a fairly high-level cognitive task: paraphrasing script-based stories. The complexity of the task is reduced by dividing it into subgoals, each handled by one of several hierarchically organized modular subnetworks that are trained separately and in parallel. The system takes sequential natural language as input and output and develops its own I/O representations for the words. These representations are stored in an external global lexicon and are adjusted during training by all four subnetworks simultaneously, according to the FGREP method. By concatenating a unique identification tag with the resulting representation, an arbitrary number of instances of the same word type can be created and used in the stories. The system produces a fully expanded paraphrase of the story from only a few input sentences; that is, the unmentioned events are inferred. Word instances are correctly bound to their roles, and simple plausible inferences about the variable content of the story are made in the process.
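To make the two mechanisms concrete, here is a minimal sketch in Python, not taken from the paper: a single-layer module stands in for the recurrent subnetworks, `fgrep_step` extends backpropagation one layer past the input so the global lexicon entries are learned along with the weights, and `make_instance` forms a word instance by concatenating a unique identification tag with the content representation. All names, dimensions, and the one-hot tag encoding are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
REP_DIM = 12

# Hypothetical global lexicon shared by all subnetworks; words and
# dimensionality are illustrative, not taken from the paper.
lexicon = {w: rng.random(REP_DIM)
           for w in ("human", "restaurant", "ordered", "food")}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgrep_step(W, in_word, target_word, lr=0.1):
    """One simplified FGREP step for a single-layer stand-in module:
    error is backpropagated one layer further than usual, into the
    input word representation, so the lexicon entry itself is learned."""
    x = lexicon[in_word]
    t = lexicon[target_word]
    y = sigmoid(W @ x)
    delta = (y - t) * y * (1.0 - y)     # output-layer error signal
    grad_x = W.T @ delta                # error extended to the input layer
    W -= lr * np.outer(delta, x)        # ordinary weight update (in place)
    lexicon[in_word] = x - lr * grad_x  # FGREP: adjust the representation too

def make_instance(word, instance_id, max_instances=3):
    """Word instance: a unique one-hot identification tag concatenated
    with the content representation (hypothetical encoding)."""
    tag = np.zeros(max_instances)
    tag[instance_id] = 1.0
    return np.concatenate([tag, lexicon[word]])

W = rng.uniform(-0.5, 0.5, (REP_DIM, REP_DIM))
for _ in range(100):
    fgrep_step(W, "human", "ordered")

human1 = make_instance("human", 0)  # two distinct instances of the
human2 = make_instance("human", 1)  # same word type, per the abstract
```

Because the tag occupies separate vector components, the content part of every instance stays identical and continues to be trained through the shared lexicon entry, while the tag alone distinguishes instances for role binding.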