Abstract
This paper focuses on the role of cognition in visual language processing in the deaf and hard of hearing. Although the literature on comparisons across speech communication modes and language modalities (sign and speech) reports some modality-specific cognitive findings, a substantial body of evidence supports the notion of general, modality-free cognitive functions in speech and sign processing. A working-memory framework is proposed for the cognitive involvement in language understanding (sign and speech). On the basis of multiple sources of behavioural and neuroscience data, four important parameters for language understanding are described in some detail: quality and precision of phonology, long-term memory access speed, degree of explicit processing, and general processing and storage capacity. Their interaction forms an important parameter space, from which general predictions and applications can be derived for both spoken and signed language conditions. The model is mathematically formulated at a general level, hypothetical ease-of-language-understanding (ELU) functions are presented, and similarities to and differences from current working-memory and speech-perception formulations are pointed out.
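As a purely schematic sketch, and not the paper's actual formulation, the four parameters named above can be collected into a generic ease-of-language-understanding function; the symbols Q (quality and precision of phonology), S (long-term memory access speed), E (degree of explicit processing), C (general processing and storage capacity), and the unspecified form f are illustrative assumptions only:

\[
% Hypothetical, illustrative ELU function; the symbols and the form f are
% assumptions for exposition, not the model's published formulation.
\mathrm{ELU} = f(Q,\, S,\, E,\, C)
\]

On this reading, the parameter space referred to in the abstract is the space of admissible (Q, S, E, C) combinations, over which predictions for spoken and signed language conditions can be compared.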