Conditional Spoken Digit Generation with StyleGAN

dc.contributor: Aalto-yliopisto (fi)
dc.contributor: Aalto University (en)
dc.contributor.author: Palkama, Kasperi
dc.contributor.author: Juvela, Lauri
dc.contributor.author: Ilin, Alexander
dc.contributor.department: Department of Computer Science
dc.contributor.department: Department of Signal Processing and Acoustics
dc.contributor.groupauthor: Professor of Practice Ilin Alexander
dc.contributor.organization: Aalto University
dc.date.accessioned: 2021-01-25T10:10:21Z
dc.date.available: 2021-01-25T10:10:21Z
dc.date.issued: 2020
dc.description.abstract: This paper adapts the StyleGAN model for speech generation with minimal or no conditioning on text. StyleGAN is a multi-scale convolutional GAN that captures data structure and latent variation hierarchically, on multiple spatial (or temporal) levels. The model has previously achieved impressive results on facial image generation, and it is appealing for audio applications because speech data exhibits similar multi-level structure. In this paper, we train a StyleGAN to generate mel-spectrograms on the Speech Commands dataset, which contains spoken digits uttered by multiple speakers in varying acoustic conditions. In the conditional setting, the model is conditioned on digit identity, while learning the remaining data variation remains an unsupervised task. We compare our model to WaveGAN, the current unsupervised state-of-the-art GAN architecture for speech synthesis, and show that the proposed model outperforms it in both objective numerical measures and subjective listening tests.
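
The abstract describes conditioning a StyleGAN generator on digit identity while leaving all other variation to an unsupervised latent code. The sketch below illustrates one plausible way such conditioning can be wired up in PyTorch: a label embedding is concatenated with the noise latent before the mapping network, and the resulting style vector modulates convolutional synthesis blocks. All class names, dimensions, layer counts, and the simplified AdaIN-style modulation are illustrative assumptions, not the paper's actual architecture.

    # Hypothetical sketch of label-conditioned StyleGAN-style synthesis for
    # mel-spectrograms; names and dimensions are assumptions, not the paper's.
    import torch
    import torch.nn as nn

    class ConditionalMappingNetwork(nn.Module):
        def __init__(self, latent_dim=128, num_classes=10, style_dim=128):
            super().__init__()
            # Learned embedding for the digit identity (0-9).
            self.label_embed = nn.Embedding(num_classes, latent_dim)
            # MLP mapping (latent, label embedding) to a style vector w.
            self.mlp = nn.Sequential(
                nn.Linear(2 * latent_dim, style_dim), nn.LeakyReLU(0.2),
                nn.Linear(style_dim, style_dim), nn.LeakyReLU(0.2),
                nn.Linear(style_dim, style_dim),
            )

        def forward(self, z, labels):
            # Concatenating the label embedding supervises digit identity;
            # speaker and acoustic variation stay in the unsupervised latent z.
            h = torch.cat([z, self.label_embed(labels)], dim=1)
            return self.mlp(h)

    class ModulatedConvBlock(nn.Module):
        # One synthesis block: the style vector rescales per-channel feature
        # statistics (a simplified AdaIN-style modulation).
        def __init__(self, in_ch, out_ch, style_dim=128):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            self.to_scale = nn.Linear(style_dim, out_ch)
            self.to_shift = nn.Linear(style_dim, out_ch)
            self.act = nn.LeakyReLU(0.2)

        def forward(self, x, w):
            x = self.act(self.conv(x))
            # Normalize per channel, then apply style-dependent scale/shift.
            x = (x - x.mean(dim=(2, 3), keepdim=True)) / (
                x.std(dim=(2, 3), keepdim=True) + 1e-8)
            scale = self.to_scale(w)[:, :, None, None]
            shift = self.to_shift(w)[:, :, None, None]
            return x * (1 + scale) + shift

    # Usage: one style vector modulating a coarse feature map that a full
    # generator would progressively upsample to mel-spectrogram resolution.
    mapping = ConditionalMappingNetwork()
    block = ModulatedConvBlock(in_ch=128, out_ch=128)
    z = torch.randn(2, 128)           # batch of 2 noise latents
    labels = torch.tensor([3, 7])     # digit identities to condition on
    w = mapping(z, labels)            # style vectors, shape (2, 128)
    feat = torch.randn(2, 128, 4, 4)  # stand-in for StyleGAN's learned constant
    out = block(feat, w)              # modulated features, shape (2, 128, 4, 4)

Feeding the condition through the mapping network, rather than into every layer directly, keeps the StyleGAN property that a single style vector controls coarse-to-fine variation across all resolutions.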
dc.description.version: Peer reviewed
dc.format.extent: 5
dc.format.extent: 3166-3170
dc.format.mimetype: application/pdf
dc.identifier.citation: Palkama, K., Juvela, L. & Ilin, A. 2020, 'Conditional Spoken Digit Generation with StyleGAN', in Proceedings of Interspeech, International Speech Communication Association (ISCA), pp. 3166-3170, Interspeech, Shanghai, China, 25/10/2020. https://doi.org/10.21437/Interspeech.2020-1461
dc.identifier.doi: 10.21437/Interspeech.2020-1461
dc.identifier.issn: 1990-9772
dc.identifier.other: PURE UUID: 3bf9f9a9-18df-4e81-8b0a-5da12014a6d8
dc.identifier.other: PURE ITEMURL: https://research.aalto.fi/en/publications/3bf9f9a9-18df-4e81-8b0a-5da12014a6d8
dc.identifier.other: PURE LINK: http://www.scopus.com/inward/record.url?scp=85098142466&partnerID=8YFLogxK
dc.identifier.other: PURE FILEURL: https://research.aalto.fi/files/55065886/Conditional_spoken_digit_generation_with_StyleGAN.pdf
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/102123
dc.identifier.urn: URN:NBN:fi:aalto-202101251433
dc.language.iso: en
dc.publisher: International Speech Communication Association
dc.relation.ispartof: Interspeech
dc.relation.ispartofseries: Proceedings of Interspeech
dc.relation.ispartofseries: Interspeech
dc.rights: openAccess
dc.title: Conditional Spoken Digit Generation with StyleGAN
dc.type: Conference article in proceedings
dc.type.version: publishedVersion