Releasing a toolkit and comparing the performance of language embeddings across various spoken language identification datasets

dc.contributor: Aalto-yliopisto [fi]
dc.contributor: Aalto University [en]
dc.contributor.author: Lindgren, Matias [en_US]
dc.contributor.author: Jauhiainen, Tommi [en_US]
dc.contributor.author: Kurimo, Mikko [en_US]
dc.contributor.department: Department of Signal Processing and Acoustics [en]
dc.contributor.groupauthor: Speech Recognition [en]
dc.contributor.groupauthor: Centre of Excellence in Computational Inference, COIN [en]
dc.contributor.organization: University of Helsinki [en_US]
dc.date.accessioned: 2021-01-25T10:12:02Z
dc.date.available: 2021-01-25T10:12:02Z
dc.date.issued: 2020 [en_US]
dc.description: openaire: EC/H2020/780069/EU//MeMAD
dc.description.abstract: In this paper, we propose a software toolkit for easier end-to-end training of deep learning based spoken language identification models across several speech datasets. We apply our toolkit to implement three baseline models, one speaker recognition model, and three x-vector architecture variations, which are trained on three datasets previously used in spoken language identification experiments. All models are trained separately on each dataset (closed task) and on a combination of all datasets (open task), after which we compare whether the open task training yields better language embeddings. We begin by training all models end-to-end as discriminative classifiers of spectral features, labeled by language. Then, we extract language embedding vectors from the trained end-to-end models, train separate Gaussian Naive Bayes classifiers on the vectors, and compare which model provides the best language embeddings for the back-end classifier. Our experiments show that the open task condition leads to improved language identification performance on only one of the datasets. In addition, we discovered that increasing x-vector model robustness with random frequency channel dropout significantly reduces its end-to-end classification performance on the test set, while not affecting the back-end classification performance of its embeddings. Finally, we note that two baseline models consistently outperformed all other models. [en]
dc.description.version: Peer reviewed [en]
dc.format.extent: 5
dc.format.mimetype: application/pdf [en_US]
dc.identifier.citation: Lindgren, M, Jauhiainen, T & Kurimo, M 2020, Releasing a toolkit and comparing the performance of language embeddings across various spoken language identification datasets. in Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. vol. 2020-October, Interspeech, International Speech Communication Association (ISCA), pp. 467-471, Interspeech, Shanghai, China, 25/10/2020. https://doi.org/10.21437/Interspeech.2020-2706 [en]
dc.identifier.doi: 10.21437/Interspeech.2020-2706 [en_US]
dc.identifier.issn: 2308-457X
dc.identifier.other: PURE UUID: 57006ede-d074-41fe-b29a-eb6e028e6b35 [en_US]
dc.identifier.other: PURE ITEMURL: https://research.aalto.fi/en/publications/57006ede-d074-41fe-b29a-eb6e028e6b35 [en_US]
dc.identifier.other: PURE LINK: http://www.scopus.com/inward/record.url?scp=85098199407&partnerID=8YFLogxK
dc.identifier.other: PURE FILEURL: https://research.aalto.fi/files/55067030/Releasing_a_Toolkit_and_Comparing_the_Performance.pdf [en_US]
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/102153
dc.identifier.urn: URN:NBN:fi:aalto-202101251463
dc.language.iso: en [en]
dc.relation: info:eu-repo/grantAgreement/EC/H2020/780069/EU//MeMAD [en_US]
dc.relation.ispartof: Interspeech [en]
dc.relation.ispartofseries: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH [en]
dc.relation.ispartofseries: Volume 2020-October, pp. 467-471 [en]
dc.relation.ispartofseries: Interspeech [en]
dc.rights: openAccess [en]
dc.subject.keyword: Deep learning [en_US]
dc.subject.keyword: Language embedding [en_US]
dc.subject.keyword: Spoken language identification [en_US]
dc.subject.keyword: TensorFlow [en_US]
dc.subject.keyword: X-vector [en_US]
dc.title: Releasing a toolkit and comparing the performance of language embeddings across various spoken language identification datasets [en]
dc.type: A4 Artikkeli konferenssijulkaisussa (A4 Article in a conference publication) [fi]
dc.type.version: publishedVersion
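
The abstract above describes a two-stage pipeline: language embedding vectors are extracted from a trained end-to-end model and a separate Gaussian Naive Bayes classifier is fit on them as a back-end. The snippet below is a minimal, self-contained sketch of that back-end step only; it is not the authors' toolkit code. Synthetic Gaussian clusters stand in for embeddings extracted from a real model, and the embedding dimension (512), label count (4), and the helper `synthetic_embeddings` are illustrative assumptions.

```python
# Minimal sketch of the back-end step: fit a Gaussian Naive Bayes classifier on
# language embedding vectors and score it on held-out embeddings. NOT the
# authors' toolkit code; synthetic clusters replace real extracted embeddings.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
NUM_LANGUAGES, EMB_DIM = 4, 512  # illustrative label count and embedding size


def synthetic_embeddings(n_per_lang):
    """Draw one Gaussian cluster of embedding vectors per language label."""
    vectors, labels = [], []
    for lang in range(NUM_LANGUAGES):
        centre = rng.normal(scale=3.0, size=EMB_DIM)
        vectors.append(centre + rng.normal(size=(n_per_lang, EMB_DIM)))
        labels.append(np.full(n_per_lang, lang))
    return np.vstack(vectors), np.concatenate(labels)


# In the paper these vectors come from the trained end-to-end models;
# here they are simulated so the example runs on its own.
X_train, y_train = synthetic_embeddings(200)
X_test, y_test = synthetic_embeddings(50)

backend = GaussianNB()  # one back-end classifier per embedding source
backend.fit(X_train, y_train)
print("back-end accuracy:", accuracy_score(y_test, backend.predict(X_test)))
```

Running the sketch prints a single accuracy figure for the Naive Bayes back-end on the held-out split; in the paper this comparison is repeated for each front-end model and for the closed and open task conditions.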
