Spontaneous sign systems created by deaf children in two cultures

Abstract
Deaf children whose access to usable conventional linguistic input, signed or spoken, is severely limited nevertheless use gesture to communicate1,2,3. These gestures resemble natural language in that they are structured both at the level of the sentence4 and at the level of the word5. Although the inclination to use gesture may be traceable to the fact that the deaf children's hearing parents, like all speakers, gesture as they talk6, the children themselves are responsible for introducing language-like structure into their gestures7. We have explored the robustness of this phenomenon by observing deaf children of hearing parents in two cultures, one American and one Chinese, that differ in their child-rearing practices8,9,10,11,12 and in the way gesture is used in relation to speech13. The spontaneous sign systems developed in these cultures shared a number of structural similarities: patterned production and deletion of semantic elements in the surface structure of a sentence; patterned ordering of those elements within the sentence; and concatenation of propositions within a sentence. These striking similarities offer critical empirical input towards resolving the ongoing debate about the ‘innateness’ of language in human infants14,15,16.