Advancing Audio Emotion and Intent Recognition with Large Pre-Trained Models and Bayesian Inference

Access rights

openAccess

Type

A4 Article in conference proceedings

Date

2023-10-27

Language

en

Pages

5

Series

MM '23: Proceedings of the 31st ACM International Conference on Multimedia

Abstract

Large pre-trained models are essential in paralinguistic systems, demonstrating effectiveness in tasks like emotion recognition and stuttering detection. In this paper, we employ large pre-trained models for the ACM Multimedia Computational Paralinguistics Challenge, addressing the Requests and Emotion Share tasks. We explore audio-only solutions as well as hybrid ones that leverage both the audio and text modalities. Our empirical results consistently show the superiority of the hybrid approaches over the audio-only models. Moreover, we introduce a Bayesian layer as an alternative to the standard linear output layer. The multimodal fusion approach achieves an 85.4% UAR on HC-Requests and 60.2% on HC-Complaints, and the ensemble model for the Emotion Share task yields the best score of .614. The Bayesian wav2vec2 approach explored in this study lets us easily build ensembles at the cost of fine-tuning only a single model, and it provides usable confidence estimates instead of the usual overconfident posterior probabilities.
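
To illustrate the general idea behind a Bayesian output layer of this kind, the sketch below replaces a standard linear classification head with a mean-field Gaussian layer whose weights are re-sampled on every forward pass; averaging several passes then acts as an ensemble built from a single fine-tuned encoder. This is only a minimal sketch, not the authors' implementation: the class BayesianLinear, the ensemble_predict helper, the prior scale, and the number of samples are assumed for the example.

    # Minimal sketch (assumed, not the paper's code): a Bayesian output layer
    # whose weights are sampled at inference time, so repeated forward passes
    # over the same encoder output behave like an ensemble of classifiers.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class BayesianLinear(nn.Module):
        """Linear layer with a mean-field Gaussian posterior over its weights."""

        def __init__(self, in_features, out_features, prior_std=0.1):
            super().__init__()
            self.weight_mu = nn.Parameter(torch.zeros(out_features, in_features))
            self.weight_logvar = nn.Parameter(
                torch.full((out_features, in_features), -5.0)
            )
            self.bias = nn.Parameter(torch.zeros(out_features))
            self.prior_std = prior_std  # assumed prior scale, for illustration only

        def forward(self, x):
            # Reparameterisation trick: draw a fresh weight sample on every call.
            std = torch.exp(0.5 * self.weight_logvar)
            weight = self.weight_mu + std * torch.randn_like(std)
            return F.linear(x, weight, self.bias)


    def ensemble_predict(encoder_outputs, bayes_head, n_samples=10):
        """Average softmax outputs over several weight samples, giving smoother
        class probabilities from a single fine-tuned model."""
        probs = torch.stack(
            [F.softmax(bayes_head(encoder_outputs), dim=-1) for _ in range(n_samples)]
        )
        return probs.mean(dim=0)

Because only the output layer is stochastic, the (expensive) encoder forward pass can be computed once and reused across samples, which is what makes the ensemble essentially free compared to fine-tuning several separate models.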

Citation

Porjazovski, D., Getman, Y., Grósz, T. & Kurimo, M. 2023, 'Advancing Audio Emotion and Intent Recognition with Large Pre-Trained Models and Bayesian Inference', in MM '23: Proceedings of the 31st ACM International Conference on Multimedia, ACM, pp. 9477-9481, ACM International Conference on Multimedia, Ottawa, Ontario, Canada, 29/10/2023. https://doi.org/10.1145/3581783.3612848