Augmenting Replay in World Models for Continual Reinforcement Learning. (arXiv:2401.16650v1 [cs.LG])
cs.LG updates on arXiv.org
In continual reinforcement learning (RL), the agent's environment changes over time. A successful system must balance two conflicting requirements: retaining performance on already-learned tasks (stability) while acquiring new tasks (plasticity). The first-in-first-out (FIFO) buffer is commonly used to enhance learning in such settings but requires significant memory. We explore an augmentation to this buffer that alleviates the memory constraint, and use it with a world-model-based reinforcement learning algorithm to evaluate …
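The core idea — a bounded FIFO buffer whose stored transitions are augmented at sample time, so a small buffer can yield many distinct training examples — can be sketched as below. This is a minimal illustration under assumed details: the paper's actual augmentation, transition format, and world-model training loop are not specified in this excerpt, and `noise_augment` is a hypothetical stand-in.

```python
import random
from collections import deque

class AugmentedFIFOBuffer:
    """FIFO replay buffer that applies an augmentation when sampling.

    Each stored transition can produce many distinct training examples,
    so the buffer itself can be kept small. Illustrative sketch only;
    not the paper's exact method.
    """

    def __init__(self, capacity, augment):
        # deque with maxlen gives first-in-first-out eviction for free.
        self.buffer = deque(maxlen=capacity)
        self.augment = augment  # callable: transition -> transition

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Augment at sample time rather than storing augmented copies,
        # which is what keeps the memory footprint low.
        batch = random.sample(list(self.buffer),
                              min(batch_size, len(self.buffer)))
        return [self.augment(t) for t in batch]

# Hypothetical augmentation: jitter observations with Gaussian noise.
def noise_augment(transition, sigma=0.01):
    obs, action, reward, next_obs = transition
    jitter = lambda xs: [x + random.gauss(0.0, sigma) for x in xs]
    return (jitter(obs), action, reward, jitter(next_obs))

buf = AugmentedFIFOBuffer(capacity=2, augment=noise_augment)
for i in range(3):  # the third add evicts the oldest entry (FIFO)
    buf.add(([float(i)], 0, 0.0, [float(i) + 1.0]))
print(len(buf.buffer))  # capacity-bounded: 2
```

Sampling-time augmentation is the usual trade: slightly more compute per batch in exchange for storing far fewer raw transitions.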