Gaze-direction-based MEG averaging during audiovisual speech perception
Access rights
openAccess
publishedVersion
A1 Original article in a scientific journal
This publication is imported from Aalto University research portal.
Language
en
Pages
7
Series
Frontiers in Human Neuroscience, Volume 4, pp. 1-7
Abstract
To take a step towards real-life-like experimental setups, we simultaneously recorded magnetoencephalographic (MEG) signals and the subjects' gaze direction during audiovisual speech perception. The stimuli were utterances of /apa/ dubbed onto two side-by-side female faces articulating /apa/ (congruent) and /aka/ (incongruent) in synchrony, repeated once every 3 s. Subjects (N = 10) were free to decide which face they viewed, and responses were averaged into two categories according to the gaze direction. The right-hemisphere 100-ms response to the onset of the second vowel (N100m') was a fifth smaller to incongruent than to congruent stimuli. The results demonstrate the feasibility of realistic viewing conditions with gaze-based averaging of MEG signals.
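The core analysis step described in the abstract — sorting single-trial responses into two categories by gaze direction before averaging — can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the function name, the per-trial horizontal gaze coordinate, and the screen-midline threshold are all assumptions for the example.

```python
import numpy as np

def gaze_based_average(epochs, gaze_x, midline=0.0):
    """Average single-trial MEG epochs into two categories by gaze direction.

    epochs  : array, shape (n_trials, n_channels, n_times) -- single-trial data
    gaze_x  : array, shape (n_trials,) -- horizontal gaze position per trial
              (hypothetical coordinate; negative = left face, positive = right)
    midline : boundary between the two side-by-side faces (assumed at 0.0)

    Returns (left_avg, right_avg), each of shape (n_channels, n_times).
    """
    gaze_x = np.asarray(gaze_x)
    looked_left = gaze_x < midline
    # Average separately over trials in which the subject viewed each face.
    left_avg = epochs[looked_left].mean(axis=0)
    right_avg = epochs[~looked_left].mean(axis=0)
    return left_avg, right_avg
```

In a real recording the gaze category would come from an eye tracker synchronized with the MEG acquisition, and trials with unstable gaze would typically be rejected before averaging.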
Citation
Hirvenkari, L, Jousmäki, V, Lamminmäki, S, Saarinen, V-M, Sams, M E & Hari, R 2010, 'Gaze-direction-based MEG averaging during audiovisual speech perception', Frontiers in Human Neuroscience, vol. 4, 17, pp. 1-7. https://doi.org/10.3389/fnhum.2010.00017