Fine-tuning of pre-trained models for classification of vocal intensity category from speech signals
Access rights
openAccess
publishedVersion
A4 Article in conference proceedings
This publication is imported from Aalto University research portal.
Date
2024
Language
en
Pages
5
Series
Interspeech 2024, pp. 482-486
Abstract
Speakers regulate vocal intensity on many occasions, for example, to be heard over a long distance or to express vocal emotions. Humans can regulate vocal intensity over a wide sound pressure level (SPL) range, and speech can therefore be categorized into different vocal intensity categories. Recent machine learning experiments have studied classification of vocal intensity category from speech signals that have been recorded without SPL information and that are represented on arbitrary amplitude scales. By fine-tuning four pre-trained models (wav2vec2-BASE, wav2vec2-LARGE, HuBERT, audio speech transformers), this paper studies classification of speech into four intensity categories (soft, normal, loud, very loud) when speech is represented on such an arbitrary amplitude scale. The fine-tuned model embeddings showed absolute improvements of 5% and 10-12% in accuracy compared to baselines for the target intensity category label and the SPL-based intensity category label, respectively.
Description
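The fine-tuning setup described in the abstract — a pre-trained speech encoder with a classification head over four intensity categories — can be sketched with the Hugging Face `transformers` library. This is a minimal illustration only, not the authors' implementation: it uses a tiny randomly initialized wav2vec2 configuration so it runs without downloading checkpoints, whereas the paper fine-tunes actual pre-trained models (wav2vec2-BASE/LARGE, HuBERT, audio speech transformers). The label names and all configuration values here are assumptions for demonstration.

```python
# Sketch: a wav2vec2-style model with a 4-class classification head, as one
# might set it up for vocal intensity category classification. A tiny
# random-init config is used for illustration; a real experiment would load
# pre-trained weights (e.g. from_pretrained("facebook/wav2vec2-base")).
import torch
from transformers import Wav2Vec2Config, Wav2Vec2ForSequenceClassification

# Four intensity categories from the paper.
LABELS = ["soft", "normal", "loud", "very loud"]

# Tiny configuration (hypothetical values) so the model builds locally.
config = Wav2Vec2Config(
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    conv_dim=(32, 32),
    conv_stride=(5, 2),
    conv_kernel=(10, 3),
    num_labels=len(LABELS),
)
model = Wav2Vec2ForSequenceClassification(config)
model.eval()

# One second of dummy 16 kHz audio; its amplitude scale is arbitrary,
# matching the paper's setting of no SPL calibration.
waveform = torch.randn(1, 16000)
with torch.no_grad():
    logits = model(input_values=waveform).logits  # shape: (1, 4)
pred = LABELS[int(logits.argmax(dim=-1))]
```

In an actual fine-tuning run, the classification head and (optionally) the encoder weights would be updated with a cross-entropy loss over labeled intensity data; the sketch only shows the forward pass producing one logit per category.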
Keywords
speech, audio speech transformers, HuBERT, sound pressure level, Vocal intensity, wav2vec2
Other note
Citation
Kodali, M., Kadiri, S. & Alku, P. 2024, 'Fine-tuning of pre-trained models for classification of vocal intensity category from speech signals', in Interspeech 2024, International Speech Communication Association (ISCA), pp. 482-486, Kos Island, Greece, 01/09/2024. https://doi.org/10.21437/Interspeech.2024-2237