Conditional Spoken Digit Generation with StyleGAN

Access rights

openAccess
publishedVersion

A4 Article in a conference publication

Date

2020

Language

en

Pages

5

Series

Proceedings of Interspeech, pp. 3166-3170, Interspeech

Abstract

This paper adapts the StyleGAN model to speech generation with minimal or no conditioning on text. StyleGAN is a multi-scale convolutional GAN that hierarchically captures data structure and latent variation on multiple spatial (or temporal) levels. The model has previously achieved impressive results on facial image generation, and it is appealing for audio applications because similar multi-level structures are present in the data. In this paper, we train a StyleGAN to generate mel-spectrograms on the Speech Commands dataset, which contains spoken digits uttered by multiple speakers in varying acoustic conditions. In the conditional setting, the model is conditioned on the digit identity, while the remaining data variation is learned without supervision. We compare our model to the current state-of-the-art unsupervised GAN architecture for speech synthesis, WaveGAN, and show that the proposed model outperforms it according to both numerical measures and subjective evaluation in listening tests.
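
As a rough illustration of the conditioning scheme described in the abstract (not the authors' StyleGAN implementation), the PyTorch sketch below shows a minimal label-conditioned generator that maps a latent vector plus a digit label to a mel-spectrogram. The class name ConditionalMelGenerator, all layer sizes, and the 80x128 spectrogram shape are illustrative assumptions.

    import torch
    import torch.nn as nn

    class ConditionalMelGenerator(nn.Module):
        """Minimal sketch of a digit-conditioned mel-spectrogram generator
        (hypothetical architecture; the paper uses a StyleGAN generator)."""

        def __init__(self, latent_dim=128, num_classes=10, n_mels=80, n_frames=128):
            super().__init__()
            self.n_mels, self.n_frames = n_mels, n_frames
            # Digit identity enters the model as a learned label embedding,
            # concatenated with the latent noise vector.
            self.label_embed = nn.Embedding(num_classes, latent_dim)
            self.project = nn.Linear(2 * latent_dim, 256 * (n_mels // 16) * (n_frames // 16))
            # Upsample a coarse feature map to the full spectrogram resolution,
            # loosely mirroring a multi-scale convolutional generator.
            self.upsample = nn.Sequential(
                nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
            )

        def forward(self, z, labels):
            code = torch.cat([z, self.label_embed(labels)], dim=1)
            x = self.project(code).view(-1, 256, self.n_mels // 16, self.n_frames // 16)
            return self.upsample(x)  # (batch, 1, n_mels, n_frames)

    # Usage: sample one mel-spectrogram conditioned on the digit "three".
    g = ConditionalMelGenerator()
    mel = g(torch.randn(1, 128), torch.tensor([3]))
    print(mel.shape)  # torch.Size([1, 1, 80, 128])

In the paper, generated mel-spectrograms would still need a separate vocoder or inversion step to produce waveforms; the remaining variation (speaker, acoustic conditions) is captured by the latent input rather than by explicit labels.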

Citation

Palkama, K., Juvela, L. & Ilin, A. 2020, 'Conditional Spoken Digit Generation with StyleGAN', in Proceedings of Interspeech, International Speech Communication Association (ISCA), pp. 3166-3170. Interspeech 2020, Shanghai, China, 25 October 2020. https://doi.org/10.21437/Interspeech.2020-1461