Two ways of learning associations

Boucher, Luke and Dienes, Zoltan (2003) Two ways of learning associations. Cognitive Science, 27 (6). pp. 807-842. ISSN 0364-0213

Full text not available from this repository.


How people learn chunks or associations between adjacent items in sequences was modelled. Two previously successful models of how people learn artificial grammars were contrasted: the CCN, a network version of the competitive chunker of Servan-Schreiber and Anderson [J. Exp. Psychol.: Learn. Mem. Cogn. 16 (1990) 592], which produces local, compositionally structured chunk representations acquired incrementally; and the simple recurrent network (SRN) of Elman [Cogn. Sci. 14 (1990) 179], which acquires distributed representations through error correction. The models' susceptibility to two types of interference was determined: prediction conflicts, in which a given letter predicts two other letters that appear next with unequal frequencies; and retroactive interference, in which the prediction made by a letter changes in the second half of training. The predictions of the models were determined by exploring parameter space and measuring how densely different regions of the space of possible experimental outcomes were populated by model outcomes. For both types of interference, human data fell squarely in regions characteristic of CCN performance but not characteristic of SRN performance.
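To make the SRN side of the comparison concrete, the following is a minimal sketch, not the paper's implementation: an Elman-style recurrent network trained by one-step error correction (no backpropagation through time) on a hypothetical prediction-conflict stimulus in which the letter A is followed by B on 75% of trials and by C on 25%. The letters, network size, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
letters = "ABC"  # hypothetical alphabet, not the paper's grammar
n = len(letters)

def one_hot(i):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Prediction-conflict training set (illustrative): A -> B three times
# for every A -> C, i.e. unequal successor frequencies.
seqs = [[0, 1], [0, 1], [0, 1], [0, 2]]

H = 8  # hidden units (arbitrary choice)
Wxh = rng.normal(0, 0.5, (H, n))  # input -> hidden weights
Whh = rng.normal(0, 0.5, (H, H))  # context -> hidden weights
Why = rng.normal(0, 0.5, (n, H))  # hidden -> output weights
lr = 0.1

for epoch in range(500):
    for seq in seqs:
        h = np.zeros(H)  # context units reset at sequence start
        for t in range(len(seq) - 1):
            x, target = one_hot(seq[t]), one_hot(seq[t + 1])
            h_new = np.tanh(Wxh @ x + Whh @ h)
            p = softmax(Why @ h_new)
            # Error correction: gradient of cross-entropy for this
            # step only, treating the prior context as fixed input
            # (Elman-style truncated training).
            d_out = p - target
            d_hid = (Why.T @ d_out) * (1 - h_new**2)
            Why -= lr * np.outer(d_out, h_new)
            Wxh -= lr * np.outer(d_hid, x)
            Whh -= lr * np.outer(d_hid, h)
            h = h_new

# After training, the prediction given A is graded, mirroring the
# 3:1 frequency ratio rather than picking a single winner.
h = np.tanh(Wxh @ one_hot(0) + Whh @ np.zeros(H))
p = softmax(Why @ h)
```

Because the SRN minimizes prediction error on every trial, its output given A settles near the empirical successor frequencies; the paper's point is that human learners' sensitivity to such conflicts patterned with the CCN's chunk-based predictions instead.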

Item Type: Article
Schools and Departments: School of Psychology > Psychology
Depositing User: Zoltan Dienes
Date Deposited: 06 Feb 2012 15:47
Last Modified: 15 Mar 2012 15:48