Applying residue number systems to hardware probability models

School of Electrical Engineering | Master's thesis

Language

en

Pages

64

Abstract

The rapid advancement of machine learning, particularly in the realm of probabilistic models, faces challenges such as computational efficiency and accuracy. Deep neural networks (DNNs), while powerful and versatile, often lack transparency and reasoning capabilities, making them less suitable for applications requiring clear decision-making processes. Probabilistic models, which offer structured and interpretable systems for reasoning, present a promising alternative. However, accelerating these models remains a significant challenge due to their computational demands. This thesis investigates the application of the Residue Number System (RNS) as an alternative to traditional fixed number systems for hardware-based probabilistic models. The research proposes an 8-moduli RNS-based approach to enhance parallel computation in Bayesian Networks (BNs). Through mathematical analysis and hardware implementation using Verilog and Vivado, the study demonstrates that RNS can significantly improve computational speed while maintaining acceptable accuracy. Error analysis and benchmarks reveal that RNS provides a competitive balance between performance and precision compared to 16-bit fixed-point and floating-point systems. The findings suggest that RNS is a viable and efficient alternative for applications requiring high-speed probabilistic inference, particularly in scenarios where numerical range and speed are critical.
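The parallelism the abstract attributes to RNS comes from the Chinese Remainder Theorem: an integer is represented by its residues modulo a set of pairwise coprime moduli, and multiplication (the dominant operation in BN inference) proceeds independently per modulus, with no carries between channels. A minimal illustrative sketch follows; the moduli set here is hypothetical (the thesis itself uses a specific 8-moduli set in hardware), as are all function names.

```python
# Illustrative RNS arithmetic sketch (hypothetical 4-moduli set;
# the thesis uses an 8-moduli set implemented in Verilog).
from math import prod

MODULI = (5, 7, 9, 11)  # pairwise coprime; dynamic range M = 5*7*9*11 = 3465

def to_rns(x):
    """Forward conversion: integer -> tuple of residues."""
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    """Each residue channel multiplies independently -- in hardware,
    these are small parallel modular multipliers with no carry chains."""
    return tuple((ra * rb) % m for ra, rb, m in zip(a, b, MODULI))

def from_rns(residues):
    """Reverse conversion via the Chinese Remainder Theorem."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
    return x % M

# Example: 12 * 34 = 408, which lies within the dynamic range M = 3465
print(from_rns(rns_mul(to_rns(12), to_rns(34))))
```

Note the design trade-off the abstract alludes to: multiplication is fast and parallel, but comparison, scaling, and conversion back to a positional representation (the CRT step above) are the expensive operations, which is why error analysis against fixed- and floating-point baselines matters.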

Supervisor

Zhou, Quan

Thesis advisor

Balaji Ravichandran, Naresh
