Web: http://arxiv.org/abs/2206.08656

June 20, 2022, 1:10 a.m. | Rachmad Vidya Wicaksana Putra, Muhammad Shafique

cs.LG updates on arXiv.org

Larger Spiking Neural Network (SNN) models are typically preferred because they
can offer higher accuracy. However, deploying such models on resource- and
energy-constrained embedded platforms is inefficient. To address this, we
present tinySNN, a framework that optimizes the memory and energy requirements
of SNN processing in both the training and inference phases while keeping
accuracy high. This is achieved by reducing the number of SNN operations,
improving the learning quality, quantizing the SNN parameters, and selecting an
appropriate SNN model. Furthermore, …
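One of the listed optimizations, quantizing the SNN parameters, can be illustrated with a minimal sketch. The abstract does not specify the quantization scheme, so the snippet below assumes a simple uniform signed fixed-point quantizer (the `quantize_weights` helper, bit width, and scaling are illustrative choices, not the paper's actual method):

```python
import numpy as np

def quantize_weights(weights: np.ndarray, num_bits: int = 8):
    """Uniformly quantize floating-point weights to signed integers.

    Returns the integer codes and the scale factor needed to
    reconstruct approximate weights as ``codes * scale``.
    """
    qmax = 2 ** (num_bits - 1) - 1  # e.g. 127 for 8 bits
    max_abs = np.max(np.abs(weights))
    if max_abs == 0:
        return np.zeros_like(weights, dtype=np.int32), 1.0
    scale = max_abs / qmax
    codes = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int32)
    return codes, scale

# Example: 8-bit quantization of a small synaptic weight matrix.
w = np.array([[0.5, -1.2], [0.03, 0.77]], dtype=np.float32)
codes, scale = quantize_weights(w, num_bits=8)
w_approx = codes * scale  # dequantized weights, close to the originals
```

Storing `codes` as 8-bit integers instead of 32-bit floats cuts the weight memory roughly 4x, at the cost of a bounded rounding error of at most half the scale step per weight.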

