Memory-efficient Reinforcement Learning with Knowledge Consolidation. (arXiv:2205.10868v2 [cs.LG] UPDATED)
Oct. 14, 2022, 1:13 a.m. | Qingfeng Lan, Yangchen Pan, Jun Luo, A. Rupam Mahmood
cs.LG updates on arXiv.org
Artificial neural networks are promising for general function approximation
but challenging to train on non-independent or non-identically distributed data
due to catastrophic forgetting. The experience replay buffer, a standard
component in deep reinforcement learning, is often used to reduce forgetting
and improve sample efficiency by storing experiences in a large buffer and
using them for training later. However, a large replay buffer results in a
heavy memory burden, especially for onboard and edge devices with limited
memory capacities. We propose …
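The replay mechanism the abstract refers to can be sketched minimally: a fixed-capacity buffer that stores transition tuples and serves uniformly sampled minibatches for training. This is a generic illustration of a standard replay buffer, not the paper's proposed method; all names here are illustrative.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (state, action, reward, next_state, done)
    tuples. Once full, the oldest experiences are evicted first, which is
    why buffer size trades off memory use against forgetting."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def store(self, experience):
        self.buffer.append(experience)

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation of
        # sequentially collected experiences, reducing forgetting.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# With capacity 3, storing 5 transitions keeps only the 3 most recent,
# illustrating the memory ceiling the abstract describes.
buf = ReplayBuffer(capacity=3)
for t in range(5):
    buf.store((f"s{t}", 0, 1.0, f"s{t+1}", False))
batch = buf.sample(2)
```

The memory burden the authors highlight comes from `capacity` typically being set to hundreds of thousands or millions of transitions in deep RL, which is infeasible on memory-constrained edge devices.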