Graph signal sampling via reinforcement learning

Perustieteiden korkeakoulu (School of Science) | Master's thesis

Date

2018-11-07

Major/Subject

Machine Learning and Data Mining

Mcode

SCI3044

Degree programme

Master’s Programme in Computer, Communication and Information Sciences

Language

en

Pages

59

Abstract

Graph signal sampling is one of the major problems in graph signal processing and arises in a variety of practical applications, such as data compression, image denoising and social network analysis. In this thesis we formulate graph signal sampling as a reinforcement learning (RL) problem, which allows us to leverage powerful methods recently developed within RL. In our approach, signal sampling is carried out by an agent which crawls over the empirical graph and selects the most relevant graph nodes to sample, i.e., the nodes at which the graph signal values are observed. Overall, the goal of the agent is to select signal samples which yield the smallest graph signal recovery error. The behavior of the sampling agent is described by a policy which determines whether or not a particular node should be sampled. The policy is continuously adjusted, which implies an inherent robustness to changes in the data structure. We propose two RL-based sampling algorithms and evaluate their performance by means of illustrative numerical experiments. We then discuss in detail the strengths and weaknesses of the proposed solutions and explain the phenomena observed during the simulations. Based on the simulation results, we identify some major challenges in applying RL to graph signal sampling and propose possible solutions, leading to prospects for future work.
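The sampling-and-recovery loop described in the abstract can be sketched as a simple bandit-style agent. The snippet below is an illustrative toy, not the thesis algorithm itself: the path graph, the smooth synthetic signal, the epsilon-greedy node-selection policy, and the Laplacian-based (smoothness) interpolation used for recovery are all our own assumptions for illustration.

```python
import numpy as np

# Illustrative sketch (NOT the thesis algorithm): a multi-armed-bandit agent
# learns which graph nodes are worth sampling; unsampled signal values are
# recovered by Laplacian (smoothness-based) interpolation.

rng = np.random.default_rng(0)

# Hypothetical toy setup: a path graph on n nodes carrying a smooth signal.
n = 20
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A             # graph Laplacian
x_true = np.sin(np.linspace(0, np.pi, n))  # smooth graph signal

def recover(sampled, values):
    """Fill in unsampled nodes by minimizing the Laplacian quadratic form x^T L x."""
    S = sorted(sampled)
    U = [i for i in range(n) if i not in sampled]
    x = np.zeros(n)
    x[S] = values
    if U:
        # x_U = -L_UU^{-1} L_US x_S: the smoothest completion given the samples.
        x[U] = -np.linalg.solve(L[np.ix_(U, U)], L[np.ix_(U, S)] @ x[S])
    return x

def run_agent(budget=5, episodes=200, eps=0.2):
    """Epsilon-greedy bandit over nodes; reward = negative recovery error."""
    q = np.zeros(n)        # estimated reward of including each node
    counts = np.zeros(n)
    for _ in range(episodes):
        sampled = set()
        while len(sampled) < budget:
            if rng.random() < eps:
                node = rng.integers(n)                      # explore
            else:
                masked = np.where([i in sampled for i in range(n)], -np.inf, q)
                node = np.argmax(masked)                    # exploit
            sampled.add(int(node))
        x_hat = recover(sampled, x_true[sorted(sampled)])
        reward = -np.linalg.norm(x_hat - x_true)
        for node in sampled:   # crude shared credit assignment for the sketch
            counts[node] += 1
            q[node] += (reward - q[node]) / counts[node]
    return q

q = run_agent()
best = set(np.argsort(q)[-5:])             # sample the 5 highest-valued nodes
x_hat = recover(best, x_true[sorted(best)])
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error with learned samples: {err:.3f}")
```

The negative rewards together with the zero initialization of `q` act as an optimistic start, so every node is eventually tried; the thesis's actual algorithms (and its policy-gradient variant) are considerably more refined than this single-state bandit sketch.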

Supervisor

Jung, Alexander

Thesis advisor

Jung, Alexander

Keywords

reinforcement learning, graph signal sampling, convex optimization, multi-armed bandit, policy gradient, complex networks
