Topological Neural Networks go Persistent, Equivariant, and Continuous
Access rights
Open access
Published version
A4 Artikkeli konferenssijulkaisussa
This publication is imported from Aalto University research portal.
Authors
Verma, Y.; Souza, A. H.; Garg, V.
Date
2024
Language
en
Pages
20
Series
Proceedings of Machine Learning Research, Volume 235, pp. 49388-49407
Abstract
Topological Neural Networks (TNNs) incorporate higher-order relational information beyond pairwise interactions, enabling richer representations than Graph Neural Networks (GNNs). Concurrently, topological descriptors based on persistent homology (PH) are increasingly employed to augment GNNs. We investigate the benefits of integrating these two paradigms. Specifically, we introduce TopNets as a broad framework that subsumes and unifies various methods at the intersection of GNNs/TNNs and PH, such as (generalizations of) RePHINE and TOGL. TopNets can also be readily adapted to handle (symmetries in) geometric complexes, extending the scope of TNNs and PH to spatial settings. Theoretically, we show that PH descriptors can provably enhance the expressivity of simplicial message-passing networks. Empirically, (continuous and E(n)-equivariant extensions of) TopNets achieve strong performance across diverse tasks, including antibody design, molecular dynamics simulation, and drug property prediction.
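As a rough, hypothetical illustration of the recipe the abstract describes (combining message passing with persistent-homology descriptors), the sketch below computes 0-dimensional persistence pairs from a learned vertex filtration of a graph via union-find and concatenates a pooled summary of the resulting diagram with a plain message-passing readout. This is not the authors' TopNets implementation; all names (TopNetSketch, zero_dim_persistence) and pooling choices are assumptions made purely for illustration.

import torch
import torch.nn as nn


def zero_dim_persistence(filtration, edges):
    # 0-dimensional persistence pairs of a graph under a vertex filtration.
    # Each vertex is born at its filtration value; a connected component dies
    # when an edge (appearing at the max filtration value of its endpoints)
    # merges it into an older component (union-find, elder rule). Essential
    # components that never die are omitted for brevity.
    n = filtration.numel()
    parent = list(range(n))
    birth = filtration.clone()

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    edge_vals = torch.maximum(filtration[edges[0]], filtration[edges[1]])
    pairs = []
    for idx in torch.argsort(edge_vals).tolist():
        u, v = edges[0, idx].item(), edges[1, idx].item()
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if birth[ru] < birth[rv]:  # elder rule: the later-born component dies
            ru, rv = rv, ru
        pairs.append((birth[ru].item(), edge_vals[idx].item()))
        parent[ru] = rv
    return torch.tensor(pairs) if pairs else torch.zeros(0, 2)


class TopNetSketch(nn.Module):
    # Toy model: one message-passing step plus pooled PH descriptors.
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.msg = nn.Linear(in_dim, hidden)
        self.filt = nn.Linear(in_dim, 1)      # learned vertex filtration
        self.ph_embed = nn.Linear(2, hidden)  # embeds (birth, death) pairs
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x, edges):
        # Sum aggregation over directed edges (edges has shape [2, E]).
        h = torch.relu(self.msg(x))
        agg = torch.zeros_like(h).index_add_(0, edges[1], h[edges[0]])
        graph_repr = (h + agg).mean(dim=0)

        # Persistence diagram of the learned filtration, pooled by summation.
        # Gradients do not flow through the diagram in this toy version;
        # practical methods use differentiable persistence layers.
        f = self.filt(x).squeeze(-1)
        diagram = zero_dim_persistence(f.detach(), edges)
        if len(diagram) > 0:
            ph_repr = self.ph_embed(diagram).sum(dim=0)
        else:
            ph_repr = torch.zeros_like(graph_repr)

        return self.head(torch.cat([graph_repr, ph_repr]))

A full instantiation along the lines of the abstract would additionally use higher-order (simplicial) message passing, E(n)-equivariant coordinate updates, and differentiable PH descriptors.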
Description
Publisher Copyright: Copyright 2024 by the author(s)
Citation
Verma, Y., Souza, A. H. & Garg, V. 2024, 'Topological Neural Networks go Persistent, Equivariant, and Continuous', Proceedings of Machine Learning Research, vol. 235, pp. 49388-49407. <https://proceedings.mlr.press/v235/verma24a.html>