April 25, 2024, 7:43 p.m. | Lang Qin, Rui Yan, Huajin Tang

cs.LG updates on arXiv.org

arXiv:2211.11760v3 Announce Type: replace
Abstract: In recent years, spiking neural networks (SNNs) have been used in reinforcement learning (RL) due to their low power consumption and event-driven features. However, spiking reinforcement learning (SRL), which suffers from fixed coding methods, still faces the problems of high latency and poor versatility. In this paper, we use learnable matrix multiplication to encode and decode spikes, improving the flexibility of the coders and thus reducing latency. Meanwhile, we train the SNNs using the direct …
