Convolutional networks can model the functional modulation of the MEG responses associated with feed-forward processes during visual word recognition
Access rights
openAccess
CC BY
publishedVersion
A1 Original article in a scientific journal
This publication is imported from Aalto University research portal.
View publication in the Research portal (opens in new window)
View/Open full text file from the Research portal (opens in new window)
Other link related to publication (opens in new window)
Unless otherwise stated, all rights belong to the author. You may download, display, and print this publication for your own personal use. Commercial use is prohibited.
Language
en
Pages
28
Series
eLife, Volume 13, issue RP96217
Abstract
Traditional models of reading lack a realistic simulation of the early visual processing stages: they take input in the form of letter banks and predefined line segments, making them unsuitable for modeling early brain responses. We used variations of the VGG-11 convolutional neural network (CNN) to create models of visual word recognition that start from the pixel level and perform the macro-scale computations needed to go from the detection and segmentation of letter shapes to word-form identification of a large vocabulary of 10,000 Finnish words, regardless of letter size, shape, or rotation. The models were evaluated against an existing magnetoencephalography (MEG) study in which participants viewed regular words, pseudowords, noise-embedded words, symbol strings, and consonant strings. The original images used in the study were presented to the models, and the activity in the layers was compared to MEG evoked response amplitudes. Through a few alterations to make the network more biologically plausible, we found a CNN architecture that can correctly simulate the behavior of three prominent responses, namely the type I (early visual response), the type II ('letter string' response), and the N400m.

In conclusion, starting a model of reading with convolution-and-pooling steps enables the flexibility and realism crucial for a direct model-to-brain comparison.
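To illustrate the convolution-and-pooling building block that the abstract refers to (the core operation of VGG-style networks such as VGG-11), the following is a minimal, self-contained sketch in plain Python. It is not the authors' implementation; the image, kernel, and helper names are purely illustrative, and a real model would stack many such layers with learned kernels over multi-channel feature maps.

```python
def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(ow)] for i in range(oh)]

def relu(fmap):
    """Element-wise rectification, as applied after each VGG convolution."""
    return [[max(v, 0.0) for v in row] for row in fmap]

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling, discarding any ragged border."""
    oh, ow = len(fmap) // size, len(fmap[0]) // size
    return [[max(fmap[i * size + di][j * size + dj]
                 for di in range(size) for dj in range(size))
             for j in range(oh > 0 and ow)] for i in range(oh)] if False else \
           [[max(fmap[i * size + di][j * size + dj]
                 for di in range(size) for dj in range(size))
             for j in range(ow)] for i in range(oh)]

# A toy 6x6 "image" with a vertical dark-to-light edge at column 3,
# and a hand-written vertical-edge detector kernel (hypothetical values).
image = [[0.0] * 3 + [1.0] * 3 for _ in range(6)]
kernel = [[-1.0, 1.0],
          [-1.0, 1.0]]

pooled = max_pool(relu(conv2d(image, kernel)))
print(pooled)  # [[0.0, 2.0], [0.0, 2.0]] -- the edge survives pooling
```

The pooling step is what gives such models their tolerance to small shifts in letter position and size: the edge response is preserved even though its exact location within each pooling window is discarded.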
Citation
van Vliet, M, Rinkinen, O, Shimizu, T, Niskanen, A-M, Devereux, B & Salmelin, R 2025, 'Convolutional networks can model the functional modulation of the MEG responses associated with feed-forward processes during visual word recognition', eLife, vol. 13, no. RP96217, 96217. https://doi.org/10.7554/eLife.96217.3