Unification-based multimodal integration
Open Access
- 1 January 1997
- proceedings article
- Published by Association for Computational Linguistics (ACL)
- p. 281-288
- https://doi.org/10.3115/976909.979653
Abstract
Recent empirical research has shown conclusive advantages of multimodal interaction over speech-only interaction for map-based tasks. This paper describes a multimodal language processing architecture which supports interfaces allowing simultaneous input from speech and gesture recognition. Integration of spoken and gestural input is driven by unification of typed feature structures representing the semantic contributions of the different modes. This integration method allows the component modalities to mutually compensate for each other's errors. It is implemented in QuickSet, a multimodal (pen/voice) system that enables users to set up and control distributed interactive simulations.
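The core idea of the abstract, merging partial semantic contributions from speech and gesture by unifying feature structures, can be pictured with a minimal sketch. The Python below is an illustration only, using plain nested dicts with no type hierarchy, and is not the QuickSet implementation; the speech/gesture example values are hypothetical.

```python
def unify(fs1, fs2):
    """Unify two feature structures (nested dicts or atomic values).

    Returns the unified structure, or None if the structures conflict.
    """
    # Atomic values unify only if they are equal.
    if not isinstance(fs1, dict) or not isinstance(fs2, dict):
        return fs1 if fs1 == fs2 else None

    result = dict(fs1)
    for feature, value in fs2.items():
        if feature in result:
            unified = unify(result[feature], value)
            if unified is None:   # conflicting values: unification fails
                return None
            result[feature] = unified
        else:                     # feature present only in fs2: copy it over
            result[feature] = value
    return result


if __name__ == "__main__":
    # Hypothetical example: speech supplies the action and object type,
    # while a pen gesture supplies the location; unification merges both.
    speech = {"cmd": "create", "object": {"type": "flood_zone"}}
    gesture = {"object": {"location": {"x": 12.4, "y": 7.9}}}
    print(unify(speech, gesture))
    # {'cmd': 'create', 'object': {'type': 'flood_zone',
    #                              'location': {'x': 12.4, 'y': 7.9}}}
```

Because unification fails when the two modes contribute incompatible values, ill-matched speech/gesture hypotheses can be rejected, which is one way the modalities can compensate for each other's recognition errors.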