Feb. 26, 2024, 5:42 a.m. | Martin Benfeghoul, Umais Zahid, Qinghai Guo, Zafeirios Fountas

cs.LG updates on arXiv.org

arXiv:2402.15283v1 Announce Type: new
Abstract: In an unfamiliar setting, a model-based reinforcement learning agent can be limited by the accuracy of its world model. In this work, we present a novel, training-free approach to improving the performance of such agents, separate from planning and learning. We do so by applying iterative inference at decision time to fine-tune the inferred agent states based on the coherence of future state representations. Our approach achieves a consistent improvement in both reconstruction accuracy and task …
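The abstract's core idea, decision-time iterative inference over the agent's latent state with the world model frozen, can be sketched as follows. This is a minimal illustration, not the paper's method: the linear world model, the decoder, and the specific coherence loss (reconstruction error plus smoothness of imagined future observations) are all hypothetical stand-ins, and the gradient is taken by finite differences for self-containment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained world model: a linear latent transition A
# and a linear decoder D (stand-ins for learned networks).
A = rng.normal(size=(4, 4)) * 0.3   # latent transition: z_{t+1} = A z_t
D = rng.normal(size=(6, 4)) * 0.5   # decoder: latent -> observation

def rollout(z, steps=3):
    """Imagine a short sequence of future latent states from z."""
    states = []
    for _ in range(steps):
        z = A @ z
        states.append(z)
    return states

def coherence_loss(z, obs):
    """Illustrative loss: reconstruct the current observation, plus a
    coherence term asking imagined futures to decode smoothly."""
    recon = np.sum((D @ z - obs) ** 2)
    futures = rollout(z)
    smooth = sum(np.sum((D @ futures[i + 1] - D @ futures[i]) ** 2)
                 for i in range(len(futures) - 1))
    return recon + 0.1 * smooth

def refine_state(z0, obs, lr=0.05, iters=50, eps=1e-4):
    """Decision-time iterative inference: gradient descent on the loss
    w.r.t. the inferred latent state only; no model weights change."""
    z = z0.copy()
    for _ in range(iters):
        grad = np.zeros_like(z)
        for i in range(len(z)):  # finite-difference gradient per dimension
            dz = np.zeros_like(z)
            dz[i] = eps
            grad[i] = (coherence_loss(z + dz, obs)
                       - coherence_loss(z - dz, obs)) / (2 * eps)
        z -= lr * grad
    return z

obs = rng.normal(size=6)           # current observation
z_init = rng.normal(size=4)        # initial (possibly inaccurate) inferred state
z_refined = refine_state(z_init, obs)
print(coherence_loss(z_init, obs), coherence_loss(z_refined, obs))
```

The refinement leaves planning and learning untouched; it only adjusts the inferred state at decision time, which is what makes the approach training-free.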

