Periodic Activation Functions Induce Stationarity

Access rights

openAccess
publishedVersion

Type

Conference article in proceedings

Date

2021

Language

en

Pages

13

Series

Advances in Neural Information Processing Systems 34 pre-proceedings (NeurIPS 2021)

Abstract

Neural network models are known to reinforce hidden data biases, making them unreliable and difficult to interpret. We seek to build models that "know what they do not know" by introducing inductive biases in the function space. We show that periodic activation functions in Bayesian neural networks establish a connection between the prior on the network weights and translation-invariant, stationary Gaussian process priors. Furthermore, we show that this link goes beyond sinusoidal (Fourier) activations by also covering triangular wave and periodic ReLU activation functions. In a series of experiments, we show that periodic activation functions obtain comparable performance for in-domain data and capture sensitivity to perturbed inputs in deep neural networks for out-of-domain detection.
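
To make the weight-prior-to-function-prior link concrete, the following is a minimal NumPy sketch, not the paper's implementation (which covers Bayesian deep networks and further periodic activations). It uses the classical random-Fourier-features construction: a single hidden layer of sinusoidal units with random frequencies and phases and a Gaussian prior on the output weights, whose induced prior covariance depends only on the input lag, i.e. is stationary. All variable names (K, omega, phase) are illustrative assumptions.

```python
import numpy as np

# Single-hidden-layer model f(x) = w^T phi(x) with sinusoidal features
#   phi_k(x) = sqrt(2/K) * cos(omega_k * x + b_k),
# frequencies omega_k drawn from the spectral density of an RBF kernel
# and phases b_k ~ Uniform[0, 2*pi].
rng = np.random.default_rng(0)
K = 5000                                  # number of hidden units (illustrative)
omega = rng.normal(0.0, 1.0, K)           # frequencies from an RBF spectral density
phase = rng.uniform(0.0, 2.0 * np.pi, K)  # random phases

def phi(x):
    """Sinusoidal feature map for a scalar input x."""
    return np.sqrt(2.0 / K) * np.cos(omega * x + phase)

def prior_cov(x1, x2):
    """Prior covariance Cov[f(x1), f(x2)] under output weights w ~ N(0, I).

    For Gaussian w, Cov[f(x1), f(x2)] = phi(x1)^T phi(x2), which concentrates
    around the stationary kernel exp(-(x1 - x2)^2 / 2) as K grows.
    """
    return phi(x1) @ phi(x2)

# Stationarity check: the covariance depends (approximately) only on the lag x1 - x2.
print(prior_cov(0.0, 1.0))  # ~ exp(-0.5) ~= 0.61
print(prior_cov(3.0, 4.0))  # same lag, approximately the same value
```

Replacing the cosine with a triangular wave or a periodic ReLU changes the induced spectral density but, per the paper's result, still yields a stationary prior.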

Citation

Meronen, L., Trapp, M. & Solin, A. (2021). Periodic Activation Functions Induce Stationarity. In Advances in Neural Information Processing Systems 34 (NeurIPS 2021). Advances in Neural Information Processing Systems. Curran Associates Inc. Conference on Neural Information Processing Systems, Virtual, Online, 6 December 2021. https://papers.nips.cc/paper/2021/hash/0d5a4a5a748611231b945d28436b8ece-Abstract.html