Glottal features for classification of phonation type from speech and neck surface accelerometer signals
Access rights
openAccess
publishedVersion
A1 Original article in a scientific journal
This publication is imported from Aalto University research portal.
View publication in the Research portal (opens in new window)
View/Open full text file from the Research portal (opens in new window)
Authors
Kadiri, S. R.; Alku, P.
Date
2021-11
Language
en
Pages
13
Series
Computer Speech and Language, Volume 70
Abstract
Glottal source characteristics vary between phonation types due to the tension of the laryngeal muscles together with the respiratory effort. Previous studies on the classification of phonation type have mainly used speech signals recorded with a microphone. Recently, two studies were published on the classification of phonation type using neck surface accelerometer (NSA) signals. However, there are no previous studies comparing the acoustic speech signal vs. the NSA signal as input in classifying phonation type. Therefore, the current study investigates simultaneously recorded speech and NSA signals in the classification of three phonation types (breathy, modal, pressed). The general goal is to understand which of the two signals (speech vs. NSA) is more effective in the classification task. We hypothesize that, when the same feature set is used for both signals, classification accuracy is higher for the NSA signal, which is more closely related to the physical vibration of the vocal folds and less affected by the vocal tract than the acoustic speech signal. Glottal source waveforms were computed using two signal processing methods, quasi-closed phase (QCP) glottal inverse filtering and zero frequency filtering (ZFF), and a set of time-domain and frequency-domain scalar features was computed from the obtained waveforms. In addition, the study investigated the use of mel-frequency cepstral coefficients (MFCCs) derived from the glottal source waveforms computed by QCP and ZFF. Classification experiments with support vector machine classifiers revealed that the NSA signal showed better discrimination of the phonation types than the speech signal when the same feature set was used. Furthermore, the glottal features provided information complementary to the conventional MFCC features, resulting in the best classification accuracy for both the NSA signal (86.9%) and the speech signal (80.6%).
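As an illustration of one of the two glottal-waveform estimation methods named in the abstract, the following Python sketch outlines the general idea of zero frequency filtering (differencing, a double pass through a zero-frequency resonator, and local-mean trend removal). It is a minimal sketch of the generic ZFF technique, not the authors' implementation; the function name, the default pitch-period value, and the number of trend-removal passes are assumptions chosen for illustration.

# Minimal ZFF sketch, assuming a 1-D mono signal `s` sampled at `fs` Hz.
# Illustrative only; not taken from the paper.
import numpy as np
from scipy.signal import lfilter

def zero_frequency_filter(s, fs, t0=0.005, trend_passes=3):
    """Return a ZFF signal whose positive-going zero crossings approximate
    glottal closure instants.

    t0 is a rough average pitch period in seconds (assumed here; in
    practice it can be estimated from the signal itself).
    """
    # Difference the signal to suppress any slowly varying offset.
    x = np.diff(s, prepend=s[0])

    # Pass twice through an ideal zero-frequency resonator (double pole
    # at z = 1): y[n] = x[n] + 2*y[n-1] - y[n-2].
    y = lfilter([1.0], [1.0, -2.0, 1.0], x)
    y = lfilter([1.0], [1.0, -2.0, 1.0], y)

    # The resonator output grows polynomially; remove the trend by
    # subtracting the local mean over roughly one pitch period, repeated
    # a few times (the number of passes is a common but not unique choice).
    win = max(3, int(t0 * fs) | 1)           # odd window length in samples
    kernel = np.ones(win) / win
    for _ in range(trend_passes):
        y = y - np.convolve(y, kernel, mode="same")
    return y

# Example use: positive-going zero crossings of the ZFF signal serve as
# candidate glottal closure instants (epochs).
# zff = zero_frequency_filter(signal, fs=16000, t0=1.0 / 120)
# epochs = np.where((zff[:-1] < 0) & (zff[1:] >= 0))[0]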
Keywords
phonation type, voice quality, neck surface accelerometer, glottal source waveform, support vector machine
Citation
Kadiri, S. R. & Alku, P. 2021, 'Glottal features for classification of phonation type from speech and neck surface accelerometer signals', Computer Speech and Language, vol. 70, 101232. https://doi.org/10.1016/j.csl.2021.101232