Self-supervised end-to-end ASR for low resource L2 Swedish

Access rights

openAccess

Type

A4 Article in conference proceedings

Date

2021

Language

en

Pages

1086-1090 (5 pages)

Series

22nd Annual Conference of the International Speech Communication Association, INTERSPEECH 2021, Proceedings of the Annual Conference of the International Speech Communication Association

Abstract

Unlike traditional (hybrid) Automatic Speech Recognition (ASR), end-to-end ASR systems simplify the training procedure by directly mapping acoustic features to sequences of graphemes or characters, thereby eliminating the need for specialized acoustic, language, or pronunciation models. However, one drawback of end-to-end ASR systems is that they require more training data than conventional ASR systems to achieve a similar word error rate (WER). This makes it difficult to develop ASR systems for tasks where transcribed target data is limited, such as ASR for Second Language (L2) speakers of Swedish. Nonetheless, recent advancements in self-supervised acoustic learning, manifested in wav2vec models [1, 2, 3], leverage the available untranscribed speech data to provide compact acoustic representations that can achieve low WER when incorporated in end-to-end systems. To this end, we experiment with several monolingual and cross-lingual self-supervised acoustic models to develop an end-to-end ASR system for L2 Swedish. Even though our test set is very small, it indicates that these systems are competitive in performance with a traditional ASR pipeline. Our best model seems to reduce the WER by 7% relative to our traditional ASR baseline trained on the same target data.
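
The recipe summarized in the abstract (a pretrained self-supervised wav2vec 2.0 encoder fine-tuned end-to-end with a grapheme-level CTC objective) can be sketched in a few lines. The sketch below is illustrative only and is not the authors' code or toolkit: it assumes the Hugging Face transformers library, the cross-lingual facebook/wav2vec2-large-xlsr-53 checkpoint, a toy Swedish character vocabulary, and a random waveform standing in for the transcribed L2 Swedish data.

# Illustrative sketch (not the authors' code): fine-tune a pretrained
# cross-lingual wav2vec 2.0 encoder with a character-level CTC head.
# Checkpoint name and the Hugging Face transformers API are assumptions.
import json
import os
import tempfile

import torch
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)

# Toy character vocabulary covering Swedish graphemes; "|" marks word boundaries.
chars = "|abcdefghijklmnopqrstuvwxyzåäö"
vocab = {c: i for i, c in enumerate(chars)}
vocab["[UNK]"] = len(vocab)
vocab["[PAD]"] = len(vocab)
vocab_path = os.path.join(tempfile.mkdtemp(), "vocab.json")
with open(vocab_path, "w", encoding="utf-8") as f:
    json.dump(vocab, f, ensure_ascii=False)

tokenizer = Wav2Vec2CTCTokenizer(
    vocab_path, unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0,
    do_normalize=True, return_attention_mask=True,
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# Cross-lingual self-supervised encoder (XLSR-53); a monolingual Swedish
# wav2vec 2.0 checkpoint could be plugged in the same way.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    ctc_loss_reduction="mean",
    pad_token_id=tokenizer.pad_token_id,
    vocab_size=len(tokenizer),
)
model.freeze_feature_encoder()  # keep the convolutional feature encoder fixed

# One dummy training step on a fake two-second utterance (16 kHz mono).
waveform = torch.randn(32000)
inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
labels = tokenizer("hej på er", return_tensors="pt").input_ids
out = model(inputs.input_values, attention_mask=inputs.attention_mask, labels=labels)
out.loss.backward()  # CTC loss drives the transformer layers and the output head

In practice, the encoder would be fine-tuned over many such batches of real transcribed L2 Swedish speech and evaluated by WER on a held-out test set.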

Description

Funding Information: This work is part of the Digitala project, funded by the Academy of Finland (grant numbers 322619, 322625, 322965). The computational resources were provided by Aalto ScienceIT. Publisher Copyright: Copyright © 2021 ISCA.

Keywords

End-to-End L2 ASR, Nonnative ASR, Self-supervised

Citation

Al-Ghezi, R, Getman, Y, Rouhe, A, Hildén, R & Kurimo, M 2021, Self-supervised end-to-end ASR for low resource L2 Swedish. in 22nd Annual Conference of the International Speech Communication Association, INTERSPEECH 2021. Proceedings of the Annual Conference of the International Speech Communication Association, International Speech Communication Association (ISCA), pp. 1086-1090, Interspeech, Brno, Czech Republic, 30/08/2021. https://doi.org/10.21437/Interspeech.2021-1710