Visual Interpretation of DNN-based Acoustic Models using Deep Autoencoders


Access rights

openAccess
publishedVersion

A4 Article in a conference publication

Date

2020

Language

en

Pages

25-29

Series

Machine Learning Methods in Visualisation for Big Data: Eurographics proceedings, pp. 25-29

Abstract

In the past few years, Deep Neural Networks (DNNs) have become the state-of-the-art solution in several areas, including automatic speech recognition (ASR); unfortunately, they are generally viewed as black boxes. Recently, this has started to change, as researchers have dedicated much effort to interpreting their behavior. In this work, we concentrate on visual interpretation by depicting the hidden activation vectors of the DNN, and we propose using deep Autoencoders (DAEs) to transform these hidden representations for inspection. We use multiple metrics to compare our approach with other widely used algorithms, and the results show that our approach is quite competitive. The main advantage of Autoencoders over the existing methods is that, after the training phase, they apply a fixed transformation that can visualize any hidden activation vector without further optimization, which is not true of the other methods.
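
The abstract's core idea, projecting hidden activation vectors through a deep autoencoder with a low-dimensional bottleneck and reusing the trained encoder as a fixed mapping, can be illustrated with a minimal PyTorch sketch. This is not the authors' architecture or training setup: the layer sizes, optimizer settings, and the names DeepAutoencoder, train_dae, and activations are illustrative assumptions.

# Minimal sketch (assumed architecture, not the paper's exact configuration):
# train a deep autoencoder with a 2-D bottleneck on hidden activations from a
# DNN acoustic model, then reuse the encoder as a fixed 2-D visualization map.
import torch
import torch.nn as nn


class DeepAutoencoder(nn.Module):
    def __init__(self, input_dim: int, bottleneck_dim: int = 2):
        super().__init__()
        # Encoder: activation vector -> 2-D bottleneck used as plot coordinates.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, bottleneck_dim),
        )
        # Decoder mirrors the encoder and reconstructs the activation vector.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


def train_dae(activations: torch.Tensor, epochs: int = 50) -> DeepAutoencoder:
    """Fit the autoencoder on a matrix of hidden activations (N x D)."""
    model = DeepAutoencoder(activations.shape[1])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        reconstruction, _ = model(activations)
        loss = loss_fn(reconstruction, activations)
        loss.backward()
        optimizer.step()
    return model


if __name__ == "__main__":
    # Stand-in for real DNN hidden activations (e.g. 1000 frames, 512 units).
    activations = torch.randn(1000, 512)
    dae = train_dae(activations)

    # After training, the encoder is a fixed transformation: new activation
    # vectors are projected to 2-D without any per-sample optimization,
    # unlike methods such as t-SNE that must be re-run on new data.
    with torch.no_grad():
        coords_2d = dae.encoder(torch.randn(10, 512))
    print(coords_2d.shape)  # torch.Size([10, 2])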

Citation

Grósz, T & Kurimo, M 2020, 'Visual Interpretation of DNN-based Acoustic Models using Deep Autoencoders', in D Archambault, I Nabney & J Peltonen (eds), Machine Learning Methods in Visualisation for Big Data: Eurographics proceedings, Eurographics Association, pp. 25-29, International Workshop on Machine Learning in Visualisation for Big Data, Norrköping, Sweden, 25/05/2020. https://doi.org/10.2312/MLVIS.20201103