Hardware-efficient tractable probabilistic inference for TinyML Neurosymbolic AI applications
Access rights
openAccess
CC BY
acceptedVersion
A4 Article in conference proceedings
This publication is imported from Aalto University research portal.
View publication in the Research portal
View/Open full text file from the Research portal
Other link related to publication
Unless otherwise stated, all rights belong to the author. You may download, display and print this publication for your own personal use. Commercial use is prohibited.
Language
en
Pages
6
Series
2025 IEEE International Conference on Omni-Layer Intelligent Systems, COINS 2025
Abstract
Neurosymbolic AI (NSAI) has recently emerged to mitigate limitations associated with deep learning (DL) models, e.g., quantifying their uncertainty or reasoning with explicit rules. Hence, TinyML hardware will need to support these symbolic models to bring NSAI to embedded scenarios. Yet, although symbolic models are typically compact, their sparsity and high computation resolution contrast with dense, low-resolution neural models; on resource-constrained TinyML hardware, this severely limits the size of symbolic models that can be computed. In this work, we remove this bottleneck by leveraging tight hardware/software integration, presenting a complete framework for computing NSAI on TinyML hardware. We focus on symbolic models realized with tractable probabilistic circuits (PCs), a popular subclass of probabilistic models for hardware integration. This framework: (1) trains a specific class of hardware-efficient deterministic PCs, chosen for the symbolic task; (2) compresses this PC until it can be computed on TinyML hardware with minimal accuracy degradation, using our nth-root compression technique; and (3) deploys the complete NSAI model on TinyML hardware. Compared to the 64b precision baseline necessary for the PC without compression, our workflow leads to significant hardware reductions on FPGA (up to 82.3% in FFs, 52.6% in LUTs, and 18.0% in Flash usage) and an average inference speedup of 4.67× on an ESP32 microcontroller.
Description
Publisher Copyright: © 2025 IEEE. | openaire: EC/HE/101071179/EU//SUSTAIN
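The paper's nth-root compression technique is not detailed in this record. The following is only a minimal sketch of the general idea implied by the name, under the assumption that taking the nth root of a probability (equivalently, dividing its log-probability by n) shrinks the dynamic range of values flowing through the circuit, so they fit a lower-precision number format; the function name and parameters are illustrative, not the authors' API.

```python
import numpy as np

def nth_root_compress(log_probs, n):
    """Hypothetical sketch: rescale log-probabilities by 1/n.

    Taking the nth root of probabilities compresses their dynamic
    range by a factor of n, which is what lets a low-precision
    (e.g. fixed-point) datapath represent them without underflow.
    """
    return np.asarray(log_probs, dtype=np.float64) / n

# Tiny probabilities like these underflow in low-precision formats:
log_probs = np.log(np.array([1e-300, 1e-150, 0.5]))

compressed = nth_root_compress(log_probs, n=8)

# The spread of values (max - min) shrinks exactly by the factor n:
assert np.isclose(np.ptp(compressed) * 8, np.ptp(log_probs))

# The original log-probabilities are recoverable (before any
# quantization error) by multiplying back by n:
assert np.allclose(compressed * 8, log_probs)
```

The actual method additionally quantizes the compressed values and accepts some accuracy degradation; that step is hardware-specific and omitted here.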
Citation
Leslin, J, Trapp, M & Andraud, M 2025, Hardware-efficient tractable probabilistic inference for TinyML Neurosymbolic AI applications. in 2025 IEEE International Conference on Omni-Layer Intelligent Systems, COINS 2025. IEEE, International Conference on Omni-Layer Intelligent Systems, Madison, Wisconsin, United States, 04/08/2025. https://doi.org/10.1109/COINS65080.2025.11125733