Glottal source estimation from coded telephone speech using a deep neural network
dc.contributor | Aalto-yliopisto | fi |
dc.contributor | Aalto University | en |
dc.contributor.author | Nonavinakere Prabhakera, Narendra | en_US |
dc.contributor.author | Airaksinen, Manu | en_US |
dc.contributor.author | Alku, Paavo | en_US |
dc.contributor.department | Department of Signal Processing and Acoustics | en |
dc.contributor.groupauthor | Speech Communication Technology | en |
dc.date.accessioned | 2017-11-21T13:36:31Z | |
dc.date.available | 2017-11-21T13:36:31Z | |
dc.date.issued | 2017-08 | en_US |
dc.description.abstract | In speech analysis, the information about the glottal source is obtained from speech by using glottal inverse filtering (GIF). The accuracy of state-of-the-art GIF methods is sufficiently high when the input speech signal is of high quality (i.e., with little noise or reverberation). However, in realistic conditions, particularly when GIF is computed from coded telephone speech, the accuracy of GIF methods deteriorates severely. To robustly estimate the glottal source under coded conditions, a deep neural network (DNN)-based method is proposed. The proposed method utilizes a DNN to map the speech features extracted from the coded speech to the glottal flow waveform estimated from the corresponding clean speech. To generate the coded telephone speech, the adaptive multi-rate (AMR) codec, a widely used speech compression method, is utilized. The proposed glottal source estimation method is compared with two existing GIF methods, closed phase covariance analysis (CP) and iterative adaptive inverse filtering (IAIF). The results indicate that the proposed DNN-based method is capable of estimating glottal flow waveforms from coded telephone speech with considerably better accuracy than CP and IAIF. | en |
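The abstract describes a feed-forward DNN that regresses from features of AMR-coded speech to the glottal flow waveform obtained by GIF from the corresponding clean speech. The following is a minimal sketch of that idea, not the authors' implementation: the feature dimension, waveform frame length, layer sizes, and the placeholder training data are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's exact setup): a feed-forward DNN
# mapping per-frame features of coded telephone speech to a glottal flow frame
# estimated from the corresponding clean speech via GIF.

import numpy as np
import tensorflow as tf

FEATURE_DIM = 48     # assumed size of the per-frame acoustic feature vector
WAVEFORM_DIM = 400   # assumed number of glottal flow samples per frame

def build_dnn():
    """Feed-forward network: coded-speech features -> glottal flow frame."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(FEATURE_DIM,)),
        tf.keras.layers.Dense(512, activation="tanh"),
        tf.keras.layers.Dense(512, activation="tanh"),
        tf.keras.layers.Dense(WAVEFORM_DIM, activation="linear"),
    ])
    # Mean-squared error between predicted frames and the clean-speech GIF targets
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    # Placeholder data: features from AMR-coded speech (X) paired with glottal
    # flow frames obtained by GIF from the clean originals (Y).
    X_train = np.random.randn(1000, FEATURE_DIM).astype("float32")
    Y_train = np.random.randn(1000, WAVEFORM_DIM).astype("float32")

    dnn = build_dnn()
    dnn.fit(X_train, Y_train, epochs=5, batch_size=64, verbose=0)

    # At inference, features of unseen coded speech yield an estimated glottal flow frame.
    estimated_frame = dnn.predict(X_train[:1])
    print(estimated_frame.shape)  # (1, WAVEFORM_DIM)
```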
dc.description.version | Peer reviewed | en |
dc.format.extent | 5 | |
dc.format.extent | 3931-3935 | |
dc.format.mimetype | application/pdf | en_US |
dc.identifier.citation | Nonavinakere Prabhakera, N, Airaksinen, M & Alku, P 2017, Glottal source estimation from coded telephone speech using a deep neural network. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, vol. 2017-August, Interspeech: Annual Conference of the International Speech Communication Association, International Speech Communication Association (ISCA), pp. 3931-3935, Interspeech, Stockholm, Sweden, 20/08/2017. https://doi.org/10.21437/Interspeech.2017-882 | en |
dc.identifier.doi | 10.21437/Interspeech.2017-882 | en_US |
dc.identifier.issn | 1990-9772 | |
dc.identifier.other | PURE UUID: 667abd30-6669-4fbd-a9dd-90d59daff94f | en_US |
dc.identifier.other | PURE ITEMURL: https://research.aalto.fi/en/publications/667abd30-6669-4fbd-a9dd-90d59daff94f | en_US |
dc.identifier.other | PURE FILEURL: https://research.aalto.fi/files/15742494/narendra_interspeech0882.pdf | en_US |
dc.identifier.uri | https://aaltodoc.aalto.fi/handle/123456789/28802 | |
dc.identifier.urn | URN:NBN:fi:aalto-201711217623 | |
dc.language.iso | en | en |
dc.relation.ispartof | Interspeech | en |
dc.relation.ispartofseries | Proceedings of Interspeech 2017 | en |
dc.relation.ispartofseries | Interspeech: Annual Conference of the International Speech Communication Association | en |
dc.rights | openAccess | en |
dc.rights.copyright | © 2017 ISCA. This article was originally published in the Proceedings of Interspeech 2017: Narendra, N., Airaksinen, M., Alku, P. (2017) Glottal Source Estimation from Coded Telephone Speech Using a Deep Neural Network. Proc. Interspeech 2017, 3931-3935, DOI: 10.21437/Interspeech.2017-882. | en_US |
dc.subject.keyword | glottal source estimation | en_US |
dc.subject.keyword | glottal inverse filtering | en_US |
dc.subject.keyword | deep neural network | en_US |
dc.subject.keyword | telephone speech | en_US |
dc.title | Glottal source estimation from coded telephone speech using a deep neural network | en |
dc.type | A4 Artikkeli konferenssijulkaisussa | fi |
dc.type.version | publishedVersion |