A computational model of early language acquisition from audiovisual experiences of young infants
Access rights
openAccess
publishedVersion
A4 Article in conference proceedings
This publication is imported from the Aalto University research portal.
Authors
Räsänen, O. & Khorrami, K.
Date
2019-01-01
Language
en
Pages
5
Series
Proceedings of Interspeech, Volume 2019-September, pp. 3594-3598, Interspeech - Annual Conference of the International Speech Communication Association
Abstract
Earlier research has suggested that human infants might use statistical dependencies between speech and non-linguistic multimodal input to bootstrap their language learning before they know how to segment words from running speech. However, the feasibility of this hypothesis in terms of real-world infant experiences has remained unclear. This paper presents a step towards a more realistic test of the multimodal bootstrapping hypothesis by describing a neural network model that can learn word segments and their meanings from referentially ambiguous acoustic input. The model is tested on recordings of real infant-caregiver interactions, using utterance-level labels for concrete visual objects that were attended by the infant when the caregiver spoke an utterance containing the name of the object, and using random visual labels for utterances in the absence of such attention. The results show that the beginnings of lexical knowledge may indeed emerge from individually ambiguous learning scenarios. In addition, the hidden layers of the network show gradually increasing selectivity to phonetic categories as a function of layer depth, resembling models trained for phone recognition in a supervised manner.
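
To illustrate the kind of weakly supervised setup the abstract describes, below is a minimal PyTorch sketch of a convolutional model that maps the acoustic features of one utterance to an utterance-level visual-object label. The layer sizes, feature dimensions, number of object categories, and pooling choice are assumptions made for the sketch, not the authors' exact architecture.

# Minimal sketch of utterance-level audiovisual word learning.
# Hyperparameters (N_OBJECTS, N_MELS, layer widths) are illustrative
# assumptions, not the architecture used in the paper.
import torch
import torch.nn as nn

N_OBJECTS = 10      # assumed number of concrete visual object labels
N_MELS = 40         # assumed log-mel features per acoustic frame

class UtteranceClassifier(nn.Module):
    def __init__(self, n_objects=N_OBJECTS, n_mels=N_MELS):
        super().__init__()
        # Temporal convolutions over acoustic frames; intermediate
        # activations can later be probed for phonetic selectivity
        # as a function of layer depth.
        self.convs = nn.Sequential(
            nn.Conv1d(n_mels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding=2), nn.ReLU(),
        )
        # Pooling over time collapses the utterance to a fixed vector,
        # so only a single weak label per utterance is needed.
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.out = nn.Linear(256, n_objects)

    def forward(self, x):
        # x: (batch, n_mels, n_frames) features of one utterance
        h = self.convs(x)
        return self.out(self.pool(h).squeeze(-1))

model = UtteranceClassifier()
features = torch.randn(8, N_MELS, 300)        # dummy batch of utterances
labels = torch.randint(0, N_OBJECTS, (8,))    # attended-object labels (noisy)
loss = nn.CrossEntropyLoss()(model(features), labels)
loss.backward()

Because supervision is given only at the utterance level, and some labels are random (utterances without infant attention), the network has to discover which acoustic segments predict the attended object; the intermediate convolutional activations are where one would probe for the depth-dependent phonetic selectivity mentioned in the abstract.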
Keywords
Computational modeling, L1 acquisition, Language acquisition, Lexical learning, Phonetic learning
Citation
Räsänen, O. & Khorrami, K. 2019, 'A computational model of early language acquisition from audiovisual experiences of young infants', in Proceedings of Interspeech, vol. 2019-September, Interspeech - Annual Conference of the International Speech Communication Association, International Speech Communication Association (ISCA), pp. 3594-3598, Interspeech, Graz, Austria, 15/09/2019. https://doi.org/10.21437/Interspeech.2019-1523