Input-gradient space particle inference for neural network ensembles

Access rights

openAccess
publishedVersion


A4 Article in conference proceedings

Date

2024


Language

en

Pages

19

Series

12th International Conference on Learning Representations (ICLR 2024)

Abstract

Deep Ensembles (DEs) demonstrate improved accuracy, calibration and robustness to perturbations over single neural networks partly due to their functional diversity. Particle-based variational inference (ParVI) methods enhance diversity by formalizing a repulsion term based on a network similarity kernel. However, weight-space repulsion is inefficient due to over-parameterization, while direct function-space repulsion has been found to produce little improvement over DEs. To sidestep these difficulties, we propose First-order Repulsive Deep Ensemble (FoRDE), an ensemble learning method based on ParVI, which performs repulsion in the space of first-order input gradients. As input gradients uniquely characterize a function up to translation and are much smaller in dimension than the weights, this method guarantees that ensemble members are functionally different. Intuitively, diversifying the input gradients encourages each network to learn different features, which is expected to improve the robustness of an ensemble. Experiments on image classification datasets and transfer learning tasks show that FoRDE significantly outperforms the gold-standard DEs and other ensemble methods in accuracy and calibration under covariate shift due to input perturbations.
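The abstract's key mechanism, a ParVI-style repulsion applied to first-order input gradients instead of weights, can be illustrated with a small sketch. This is not the paper's exact formulation: the toy one-layer networks, the RBF kernel on normalized gradients, and the median-heuristic bandwidth are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble of M one-layer tanh networks f_i(x) = v_i . tanh(W_i x),
# chosen so the input gradient has a closed form (assumption for brevity).
M, d_in, d_h = 4, 3, 5
W = rng.normal(size=(M, d_h, d_in))
v = rng.normal(size=(M, d_h))

def input_grad(Wi, vi, x):
    # d/dx [ v . tanh(W x) ] = W^T (v * (1 - tanh(W x)^2))
    h = np.tanh(Wi @ x)
    return Wi.T @ (vi * (1 - h ** 2))

x = rng.normal(size=d_in)
G = np.stack([input_grad(W[i], v[i], x) for i in range(M)])  # (M, d_in)

# RBF kernel between unit-normalized input gradients: comparing gradient
# *directions* across ensemble members (kernel and bandwidth are assumed).
Gn = G / np.linalg.norm(G, axis=1, keepdims=True)
sq = ((Gn[:, None, :] - Gn[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
bw = np.median(sq) / np.log(M + 1)                      # median heuristic
K = np.exp(-sq / (2 * bw))

# SVGD-style repulsive force: each member is pushed away from the others
# in input-gradient space, encouraging functionally diverse networks.
repulsion = np.einsum('ij,ijk->ik', K, Gn[:, None, :] - Gn[None, :, :]) / bw
```

Because the kernel is symmetric and the pairwise differences are antisymmetric, the repulsive forces cancel in aggregate; each member is displaced relative to the rest of the ensemble rather than the whole ensemble drifting.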

Description

| openaire: EC/H2020/951847/EU//ELISE

Citation

Trinh, T, Heinonen, M, Acerbi, L & Kaski, S 2024, 'Input-gradient space particle inference for neural network ensembles', in 12th International Conference on Learning Representations (ICLR 2024), Curran Associates Inc., International Conference on Learning Representations, Vienna, Austria, 07/05/2024. <https://openreview.net/forum?id=nLWiR5P3wr>