A Low Latency Adaptive Coding Spiking Framework for Deep Reinforcement Learning
April 25, 2024, 7:43 p.m. | Lang Qin, Rui Yan, Huajin Tang
cs.LG updates on arXiv.org arxiv.org
Abstract: In recent years, spiking neural networks (SNNs) have been applied to reinforcement learning (RL) for their low power consumption and event-driven operation. However, spiking reinforcement learning (SRL) with fixed coding methods still suffers from high latency and poor versatility. In this paper, we use learnable matrix multiplication to encode and decode spikes, improving the flexibility of the coders and thus reducing latency. Meanwhile, we train the SNNs using the direct …
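The core idea of the abstract is replacing a fixed spike coding scheme with learnable matrix multiplications on both the encoder and decoder side. The paper's actual architecture is not shown here; the following is a minimal, hypothetical sketch of that idea, where all names, shapes, and the sigmoid/Bernoulli spike generation are illustrative assumptions:

```python
# Hypothetical sketch of matrix-multiplication spike coding.
# W_enc and W_dec stand in for the learnable coder matrices from the
# abstract; the sigmoid rate coding and Bernoulli sampling are assumptions.
import math
import random

def matmul(M, v):
    # Multiply matrix M (list of row lists) by vector v.
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def encode(obs, W_enc, T=4):
    """Encode a continuous observation into T binary spike vectors.
    The learnable matrix W_enc projects the observation to per-neuron
    currents; a sigmoid maps currents to firing probabilities."""
    current = matmul(W_enc, obs)
    rates = [1.0 / (1.0 + math.exp(-c)) for c in current]
    return [[1 if random.random() < r else 0 for r in rates]
            for _ in range(T)]

def decode(spikes, W_dec):
    """Decode the mean firing rate over T steps back to a continuous
    output (e.g. Q-values or actions) with a second learnable matrix."""
    T = len(spikes)
    rate = [sum(col) / T for col in zip(*spikes)]
    return matmul(W_dec, rate)

# Example: a 2-dim observation, 3 spiking neurons, 1-dim decoded output.
random.seed(0)
obs = [0.5, -0.2]
W_enc = [[0.1, 0.3], [-0.4, 0.2], [0.7, -0.1]]   # 3x2, would be trained
W_dec = [[0.5, -0.3, 0.2]]                        # 1x3, would be trained
spikes = encode(obs, W_enc, T=4)
out = decode(spikes, W_dec)
```

Because both matrices are ordinary linear maps, they can be trained jointly with the policy network by gradient descent, which is what makes the coding "adaptive" rather than fixed.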