SQUAT: Stateful Quantization-Aware Training in Recurrent Spiking Neural Networks
May 1, 2024, 4:42 a.m. | Sreyes Venkatesh, Razvan Marinescu, Jason K. Eshraghian
cs.LG updates on arXiv.org
Abstract: Weight quantization is used to deploy high-performance deep learning models on resource-limited hardware, enabling the use of low-precision integers for storage and computation. Spiking neural networks (SNNs) share the goal of enhancing efficiency, but adopt an 'event-driven' approach to reduce the power consumption of neural network inference. While extensive research has focused on weight quantization, quantization-aware training (QAT), and their application to SNNs, the precision reduction of state variables during training has been largely overlooked, …
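The abstract is cut off before the method details, but the idea it names, reducing the precision of state variables during training, can be illustrated. The sketch below is an assumption-based illustration, not the paper's SQUAT algorithm: it applies a straight-through-estimator (STE) fake-quantizer to both the weights and the membrane potential of a leaky integrate-and-fire neuron in plain PyTorch. The QuantLIF class, the bit widths, and the surrogate-gradient choice are all hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F


def fake_quant(x, num_bits, max_val):
    # Uniform symmetric fake quantization: the forward pass rounds to the
    # low-precision grid, the backward pass is identity (STE).
    qmax = 2 ** (num_bits - 1) - 1
    scale = max_val / qmax
    xq = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
    return x + (xq - x).detach()


class QuantLIF(nn.Module):
    # Hypothetical LIF layer whose weights AND membrane potential (the
    # state variable) are fake-quantized at every time step during training.
    def __init__(self, in_features, out_features, beta=0.9, threshold=1.0,
                 w_bits=4, state_bits=8):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.beta, self.threshold = beta, threshold
        self.w_bits, self.state_bits = w_bits, state_bits

    def forward(self, x_seq):  # x_seq: (time_steps, batch, in_features)
        w_max = self.fc.weight.abs().max().detach().clamp(min=1e-8)
        w_q = fake_quant(self.fc.weight, self.w_bits, w_max)
        mem = torch.zeros(x_seq.shape[1], self.fc.out_features,
                          device=x_seq.device)
        spikes = []
        for x in x_seq:
            mem = self.beta * mem + F.linear(x, w_q, self.fc.bias)
            # The "stateful" step: quantize the membrane potential itself,
            # so training sees the low-precision state dynamics.
            mem = fake_quant(mem, self.state_bits, max_val=2.0)
            hard = (mem >= self.threshold).float()
            # Surrogate gradient: hard spike forward, sigmoid slope backward.
            soft = torch.sigmoid(5.0 * (mem - self.threshold))
            spk = soft + (hard - soft).detach()
            mem = mem - spk * self.threshold  # soft reset
            spikes.append(spk)
        return torch.stack(spikes)


# Usage: gradients flow through both quantizers via the STE.
layer = QuantLIF(10, 4)
out = layer(torch.randn(20, 8, 10))  # 20 time steps, batch of 8
out.sum().backward()                 # backprop works despite the rounding ops

The design point this sketch tries to capture is that quantizing only the weights leaves the membrane potential at full precision, so a network deployed with low-precision state would see dynamics it was never trained on; quantizing the state during training closes that gap.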
Tags: arxiv, cs.LG, cs.NE, neural networks, quantization, spiking neural networks, training