wav2vec2-based Speech Rating System for Children with Speech Sound Disorder

Access rights

Open access; published version

A4 Article in conference proceedings

Date

2022

Language

English

Pages

5

Series

Proceedings of Interspeech'22, pp. 3618-3622, Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH

Abstract

Speaking is a fundamental means of communication, developed at a young age. Unfortunately, some children with speech sound disorder struggle to acquire this skill, which hinders their ability to communicate effectively. Speech therapy, which can help these children acquire speech, relies heavily on practice trials and accurate feedback on their pronunciation. To enable home-based therapy and lessen the burden on speech-language pathologists, a highly accurate and automatic way of assessing the quality of speech uttered by young children is needed. Our work explores the applicability of state-of-the-art self-supervised deep acoustic models, mainly wav2vec2, to this task. The empirical results show that these self-supervised models outperform traditional approaches and close the gap between machine and human performance.
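A rating system of this kind typically places a small classification head on top of the frame-level embeddings produced by a pretrained wav2vec2 encoder. The sketch below illustrates that general idea only; the class name, feature dimension (768, as in wav2vec2-base), pooling strategy, and number of rating categories are illustrative assumptions, not the authors' exact architecture, which fine-tunes the full wav2vec2 model.

```python
import torch
import torch.nn as nn

class SpeechRatingHead(nn.Module):
    """Illustrative rating head over wav2vec2 frame embeddings.

    Assumes 768-dim features (wav2vec2-base) and a fixed set of
    rating categories; both are assumptions for this sketch.
    """
    def __init__(self, feat_dim: int = 768, num_ratings: int = 5):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_ratings)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, time, feat_dim), i.e. the output of a
        # pretrained wav2vec2 encoder applied to a child's utterance
        pooled = frame_feats.mean(dim=1)   # mean-pool over time frames
        return self.classifier(pooled)     # (batch, num_ratings) logits

# Dummy tensor standing in for wav2vec2 encoder output
feats = torch.randn(2, 49, 768)  # 2 utterances, 49 frames each
logits = SpeechRatingHead()(feats)
print(logits.shape)  # torch.Size([2, 5])
```

In practice the encoder and head would be trained (or fine-tuned) jointly on utterances annotated with pronunciation ratings, and the predicted category compared against human raters.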

Description

The computational resources were provided by Aalto ScienceIT. This work was supported by NordForsk through the funding to Technology-enhanced foreign and second-language learning of Nordic languages, project number 103893.

Citation

Getman, Y, Al-Ghezi, R, Voskoboinik, E, Grósz, T, Kurimo, M, Salvi, G, Svendsen, T & Strömbergsson, S 2022, wav2vec2-based Speech Rating System for Children with Speech Sound Disorder. in Proceedings of Interspeech'22. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, International Speech Communication Association (ISCA), pp. 3618-3622, Interspeech, Incheon, Korea, Republic of, 18/09/2022. https://doi.org/10.21437/Interspeech.2022-10103