Low Latency Conversion of Artificial Neural Network Models to Rate-encoded Spiking Neural Networks. (arXiv:2211.08410v1 [cs.NE])
Nov. 16, 2022, 2:12 a.m. | Zhanglu Yan, Jun Zhou, Weng-Fai Wong
cs.LG updates on arXiv.org arxiv.org
Spiking neural networks (SNNs) are well suited for resource-constrained
applications because they do not need expensive multipliers. In a typical
rate-encoded SNN, a series of binary spikes within a globally fixed time window
fires the neurons. The maximum number of spikes in this time window is both the
latency of the network in performing a single inference and a determinant of
the overall energy efficiency of the model. The aim of this paper is
to reduce this …
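The paper's own conversion method is not included in this excerpt. As a minimal sketch of the rate-encoding scheme the abstract describes, an ANN activation in [0, 1] can be represented by a train of binary spikes over a fixed time window of T timesteps, with the firing rate approximating the original value. The function names and the Bernoulli-sampling choice below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def rate_encode(activation, T, rng):
    """Encode a value in [0, 1] as T binary spikes (1 = spike).

    Each timestep spikes independently with probability `activation`,
    so the expected firing rate over the window equals the activation.
    """
    return (rng.random(T) < activation).astype(np.int8)

def decode(spikes):
    """Recover the approximate activation as the mean firing rate."""
    return spikes.mean()

rng = np.random.default_rng(0)
T = 64  # window length in timesteps == inference latency
spikes = rate_encode(0.75, T, rng)
approx = decode(spikes)  # approaches 0.75 as T grows
```

This makes the latency/precision trade-off the abstract alludes to concrete: a longer window T gives a finer-grained rate estimate but proportionally slower (and costlier) inference, which is why reducing the required window length matters.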