Visual Feature Learning in Artificial Grammar Classification

Abstract
The artificial grammar learning task has been used extensively to assess individuals' implicit learning abilities. Previous work suggests that participants in this task implicitly acquire both rule-based knowledge and exemplar-specific knowledge. This study investigated whether the exemplar-specific knowledge acquired in the task is based on the visual features of the exemplars. A change in font and letter case between study and test had no effect on sensitivity to the grammatical rules in classification judgments, but it virtually eliminated sensitivity to the training frequencies of letter bigrams and trigrams (chunk strength). Performing a secondary task during study eliminated this font specificity and generally reduced the contribution of chunk-strength knowledge. The results are consistent with the idea that perceptual fluency contributes to artificial grammar classification judgments.
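
To make the chunk-strength measure concrete, the sketch below illustrates how associative chunk strength is commonly computed in this literature: the mean frequency with which a test string's bigrams and trigrams occurred across the training strings. The function names and the toy letter strings are illustrative assumptions, not the materials or analysis code from this study.

```python
from collections import Counter

def chunks(s, sizes=(2, 3)):
    """All overlapping bigrams and trigrams of a letter string."""
    return [s[i:i + n] for n in sizes for i in range(len(s) - n + 1)]

def chunk_strength(test_string, training_strings):
    """Mean training frequency of the test string's bigrams and trigrams."""
    freq = Counter()
    for t in training_strings:
        freq.update(chunks(t))
    parts = chunks(test_string)
    return sum(freq[c] for c in parts) / len(parts)

# Hypothetical example: a high chunk-strength item vs. a low chunk-strength item
training = ["XVXJ", "VXVJ", "XXVJ"]
print(chunk_strength("XVXJ", training))  # higher: its chunks recur in training
print(chunk_strength("JVVX", training))  # lower: its chunks are rare in training
```

On this kind of measure, a test item can be high in chunk strength yet ungrammatical (or vice versa), which is what allows rule sensitivity and chunk-strength sensitivity to be assessed separately in classification judgments.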