April 3, 2024, 4:42 a.m. | Rachmad Vidya Wicaksana Putra, Muhammad Shafique

cs.LG updates on arXiv.org

arXiv:2404.01685v1 Announce Type: cross
Abstract: Spiking Neural Networks (SNNs) can offer ultra low power/ energy consumption for machine learning-based applications due to their sparse spike-based operations. Currently, most of the SNN architectures need a significantly larger model size to achieve higher accuracy, which is not suitable for resource-constrained embedded applications. Therefore, developing SNNs that can achieve high accuracy with acceptable memory footprint is highly needed. Toward this, we propose a novel methodology that improves the accuracy of SNNs through kernel …
