April 3, 2024, 4:42 a.m. | Rachmad Vidya Wicaksana Putra, Muhammad Shafique

cs.LG updates on arXiv.org

arXiv:2404.01685v1 Announce Type: cross
Abstract: Spiking Neural Networks (SNNs) can offer ultra-low power/energy consumption for machine learning-based applications due to their sparse spike-based operations. Currently, most SNN architectures need a significantly larger model size to achieve higher accuracy, which is not suitable for resource-constrained embedded applications. Therefore, developing SNNs that achieve high accuracy with an acceptable memory footprint is highly needed. Toward this, we propose a novel methodology that improves the accuracy of SNNs through kernel …
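The abstract attributes SNNs' efficiency to sparse spike-based operations. As a minimal illustration (not the paper's method), a leaky integrate-and-fire (LIF) neuron turns a dense input current into a sparse binary spike train; the decay factor `tau` and threshold `v_th` below are illustrative assumptions. Downstream layers only do work at timesteps where a spike occurs, which is where the energy savings come from.

```python
import numpy as np

def lif_forward(inputs, tau=0.9, v_th=1.0):
    """Leaky integrate-and-fire neuron over T timesteps.

    inputs: array of shape (T,) holding input currents.
    Returns a binary spike train. Only the few timesteps with a
    spike (1.0) trigger downstream computation -- the sparsity
    behind SNNs' low power/energy consumption.
    """
    v = 0.0
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        v = tau * v + x          # leaky integration of input current
        if v >= v_th:            # threshold crossing emits a spike
            spikes[t] = 1.0
            v = 0.0              # hard reset after firing
    return spikes

train = lif_forward(np.array([0.3, 0.3, 0.6, 0.1, 0.9, 0.2]))
# Dense 6-step input yields only 2 spikes: [0, 0, 1, 0, 0, 1]
```

Note how sub-threshold inputs accumulate across timesteps but produce no output at all until the threshold is crossed.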

