Informative Bayesian Neural Network Priors for Weak Signals

Access rights

openAccess

A1 Original article in a scientific journal

Date

2021

Language

en

Series

Bayesian Analysis

Abstract

Encoding domain knowledge into the prior over the high-dimensional weight space of a neural network is challenging but essential in applications with limited data and weak signals. Two types of domain knowledge are commonly available in scientific applications: 1. feature sparsity (fraction of features deemed relevant); 2. signal-to-noise ratio, quantified, for instance, as the proportion of variance explained. We show how to encode both types of domain knowledge into the widely used Gaussian scale mixture priors with Automatic Relevance Determination. Specifically, we propose a new joint prior over the local (i.e., feature-specific) scale parameters that encodes knowledge about feature sparsity, and a Stein gradient optimization to tune the hyperparameters in such a way that the distribution induced on the model's proportion of variance explained matches the prior distribution. We show empirically that the new prior improves prediction accuracy compared to existing neural network priors on publicly available datasets and in a genetics application where signals are weak and sparse, often outperforming even computationally intensive cross-validation for hyperparameter tuning.
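The sketch below (not the authors' implementation; all names and hyperparameter values are illustrative assumptions) shows the two ingredients the abstract describes: a Gaussian scale mixture prior with feature-specific (ARD) scales whose shared sparsity level controls how many features receive a large scale, and a Monte Carlo estimate of the proportion of variance explained (PVE) that this prior induces, which one could then match to a target PVE distribution by tuning the hyperparameters.

```python
# Minimal illustrative sketch, assuming a one-hidden-layer network with tanh
# activations and a spike-and-slab style Gaussian scale mixture prior on the
# first-layer weights. Not the authors' code; hyperparameters are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def sample_first_layer_weights(n_features, n_hidden, sparsity=0.05,
                               slab_scale=1.0, spike_scale=0.01):
    """Draw first-layer weights: each feature is 'relevant' with probability
    `sparsity` and shares one local (feature-specific) scale across its
    outgoing weights, mimicking an ARD-style Gaussian scale mixture."""
    relevant = rng.random(n_features) < sparsity
    local_scale = np.where(relevant, slab_scale, spike_scale)
    return rng.normal(0.0, 1.0, (n_features, n_hidden)) * local_scale[:, None]

def estimate_pve(n_features=100, n_hidden=50, sigma_noise=1.0,
                 n_prior_draws=200, n_data=500, **prior_kwargs):
    """Monte Carlo estimate of the PVE distribution induced by the prior:
    draw networks from the prior, compute var(f) / (var(f) + noise variance)."""
    X = rng.normal(0.0, 1.0, (n_data, n_features))
    pves = []
    for _ in range(n_prior_draws):
        W1 = sample_first_layer_weights(n_features, n_hidden, **prior_kwargs)
        w2 = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), n_hidden)
        f = np.tanh(X @ W1) @ w2          # latent function values under the prior
        var_f = f.var()
        pves.append(var_f / (var_f + sigma_noise ** 2))
    return np.array(pves)

pve_samples = estimate_pve(sparsity=0.05)
print("induced PVE: mean %.3f, sd %.3f" % (pve_samples.mean(), pve_samples.std()))
```

In the paper this matching is done with a Stein gradient optimization over the hyperparameters; the sketch only shows how the induced PVE distribution would be estimated as the quantity to be matched.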

Description

openaire: EC/H2020/101016775/EU//INTERVENE

Keywords

Informative prior, Neural network, Proportion of variance explained, Sparsity

Citation

Cui, T., Havulinna, A. S., Marttinen, P. & Kaski, S. 2021, 'Informative Bayesian Neural Network Priors for Weak Signals', Bayesian Analysis. https://doi.org/10.1214/21-BA1291