May 19 – 21, 2025
Pacific/Honolulu timezone

Empowering AI Implementation: The Versatile SLAC Neural Network Library (SNL) for FPGA

May 19, 2025, 2:10 PM
20m
Contributed talk (12'+3') | Track: Technology | Type: Contributed

Speaker

Abhilasha Dave (SLAC National Accelerator Laboratory)

Description

The SLAC Neural Network Library (SNL) is a high-performance, hardware-aware framework for deploying machine learning models on FPGAs at the edge of the scientific data chain. Developed using Xilinx's High-Level Synthesis (HLS) tools, SNL combines the flexibility of software-defined design with the low-latency, high-throughput advantages of reconfigurable hardware. It offers a user-friendly API modeled after the Keras interface, streamlining the transition from model development to hardware deployment.
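To make the "Keras-like API" idea concrete, the following is a minimal, self-contained numpy sketch of what such a layer-by-layer interface looks like in general. All class and method names here are hypothetical illustrations of the Keras style, not SNL's actual API.

```python
import numpy as np

# Hypothetical Keras-style interface; names are illustrative, not SNL's API.
class Dense:
    def __init__(self, units, activation=None):
        self.units, self.activation = units, activation
        self.kernel = self.bias = None

    def build(self, in_features, rng):
        # Random initialization stands in for trained parameters.
        self.kernel = rng.standard_normal((in_features, self.units)) * 0.1
        self.bias = np.zeros(self.units)

    def __call__(self, x):
        y = x @ self.kernel + self.bias
        return np.maximum(y, 0.0) if self.activation == "relu" else y

class Sequential:
    def __init__(self, layers):
        self.layers = layers

    def build(self, in_features, seed=0):
        rng = np.random.default_rng(seed)
        for layer in self.layers:
            layer.build(in_features, rng)
            in_features = layer.units

    def predict(self, x):
        # Forward pass through the stacked layers.
        for layer in self.layers:
            x = layer(x)
        return x

model = Sequential([Dense(16, activation="relu"), Dense(4)])
model.build(in_features=8)
out = model.predict(np.ones((1, 8)))
print(out.shape)  # (1, 4)
```

In an HLS-targeted flow, a model described this way would be mapped to fixed hardware pipeline stages rather than executed layer-by-layer in software.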

SNL is optimized for moderately sized neural networks, with support for dynamic weight and bias reloading—eliminating the need for time-consuming re-synthesis during model updates. Its modular architecture enables the integration of custom layers and specialized logic, while maintaining compatibility with standard formats like HDF5 for parameter storage.
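The HDF5-based store-and-reload flow can be sketched as follows with h5py. The dataset names and file layout are assumptions for illustration only, not SNL's actual schema; the point is that updated parameters are written to and read from an HDF5 file rather than baked into a re-synthesized design.

```python
import numpy as np
import h5py

rng = np.random.default_rng(0)
kernel = rng.standard_normal((8, 16))
bias = np.zeros(16)

# Write updated parameters to an HDF5 file (group/dataset layout is
# illustrative only, not SNL's actual schema).
with h5py.File("snl_params_demo.h5", "w") as f:
    f.create_dataset("dense_0/kernel", data=kernel)
    f.create_dataset("dense_0/bias", data=bias)

# A runtime reload reads the new parameters back; on the FPGA only
# register/BRAM contents would change, with no re-synthesis required.
with h5py.File("snl_params_demo.h5", "r") as f:
    new_kernel = f["dense_0/kernel"][...]
    new_bias = f["dense_0/bias"][...]

print(new_kernel.shape, new_bias.shape)  # (8, 16) (16,)
```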

Designed to meet the stringent latency and throughput demands of real-time scientific computing, SNL empowers edge intelligence by delivering adaptable, efficient, and experiment-ready ML inference engines—positioning itself as a critical enabler for next-generation AI-accelerated detector systems.

Preferred session (remote speakers): Morning session

Author

Abhilasha Dave (SLAC National Accelerator Laboratory)

Co-authors

Julia Gonski (SLAC National Accelerator Laboratory)
Larry Ruckman (SLAC National Accelerator Laboratory)
Ryan Herbst (SLAC National Accelerator Laboratory)
