NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions
March 5, 2024, 2:43 p.m. | Marta Andronic, George A. Constantinides
cs.LG updates on arXiv.org
Abstract: Field-Programmable Gate Array (FPGA) accelerators have proven successful in handling latency- and resource-critical deep neural network (DNN) inference tasks. Among the most computationally intensive operations in a neural network (NN) is the dot product between the feature and weight vectors. Thus, some previous FPGA acceleration works have proposed mapping neurons with quantized inputs and outputs directly to lookup tables (LUTs) for hardware implementation. In these works, the boundaries of the neurons coincide with the boundaries …
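As a rough illustration of the neuron-to-LUT mapping the abstract describes, the sketch below enumerates every quantized input combination of a toy neuron and precomputes its quantized output into a table. The bit widths, weights, and activation function are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch (not the authors' code): enumerating a tiny neuron with
# quantized inputs and outputs into a lookup table, the basic idea behind
# mapping neurons directly to FPGA LUTs. All constants below are assumptions.
import itertools
import numpy as np

IN_BITS = 2          # assumed input quantization (4 levels per input)
FAN_IN = 3           # assumed neuron fan-in
OUT_BITS = 2         # assumed output quantization

weights = np.array([0.5, -1.0, 0.75])   # example fixed weights
bias = 0.25

def quantize(x, bits):
    """Clip and round x to an unsigned code with `bits` bits."""
    levels = 2 ** bits - 1
    return int(np.clip(round(x * levels), 0, levels))

def neuron(inputs):
    """Dot product + bias + ReLU, followed by output quantization."""
    acc = float(np.dot(weights, inputs) + bias)
    acc = max(acc, 0.0)                              # ReLU
    return quantize(acc / (FAN_IN + 1), OUT_BITS)    # crude rescaling for the sketch

# Precompute the truth table: every combination of quantized input codes
# maps to one quantized output code, i.e. a (2**IN_BITS)**FAN_IN entry LUT.
lut = {}
for codes in itertools.product(range(2 ** IN_BITS), repeat=FAN_IN):
    inputs = np.array(codes) / (2 ** IN_BITS - 1)    # dequantize input codes
    lut[codes] = neuron(inputs)

print(f"LUT entries: {len(lut)}")        # 64 entries for 3 x 2-bit inputs
print(f"f(1, 0, 3) -> {lut[(1, 0, 3)]}")
```

In hardware, a table like this would be synthesized directly into FPGA LUTs, so no dot product is computed at inference time.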