Bottlenosed dolphin and human recognition of veridical and degraded video displays of an artificial gestural language.
- 1 January 1990
- journal article
- Published by American Psychological Association (APA) in Journal of Experimental Psychology: General
- Vol. 119 (2), 215-230
- https://doi.org/10.1037/0096-3445.119.2.215
Abstract
2 bottlenosed dolphins proficient in interpreting gesture language signs viewed veridical and degraded gestures via TV without explicit training. In Exp. 1, dolphins immediately understood most gestures: Performance was high throughout degradations successively obscuring the head, torso, arms, and fingers, though deficits occurred for gestures degraded to a point-light display (PLD) of the signer's hands. In Exp. 2, humans of varying gestural fluency saw the PLD and veridical gestures from Exp. 1. Again, performance declined in the PLD condition. Though the dolphin recognized gestures as accurately as fluent humans, effects of the gesture's formational properties were not identical for humans and dolphin. Results suggest that the dolphin uses a network of semantic and gestural representations, that bottom-up processing predominates when the dolphin's short-term memory is taxed, and that recognition is affected by variables germane to grammatical category, short-term memory, and visual perception.