May 27, 2024, 4:44 a.m. | Ruijie Zheng, Xiyao Wang, Yanchao Sun, Shuang Ma, Jieyu Zhao, Huazhe Xu, Hal Daumé III, Furong Huang

cs.LG updates on arXiv.org

arXiv:2306.13229v3 Announce Type: replace
Abstract: Despite recent progress in reinforcement learning (RL) from raw pixel data, sample inefficiency continues to present a substantial obstacle. Prior works have attempted to address this challenge by creating self-supervised auxiliary tasks, aiming to enrich the agent's learned representations with control-relevant information for future state prediction. However, these objectives are often insufficient to learn representations that can represent the optimal policy or value function, and they often consider tasks with small, abstract discrete action spaces …
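To make the idea of a control-relevant self-supervised auxiliary task concrete, here is a minimal sketch of one common instantiation: an InfoNCE-style contrastive loss that encourages the encoding of a state-action pair to predict the encoding of the next state. This is an illustrative assumption, not the paper's actual architecture; the module names, MLP encoders, dimensions, and temperature below are all hypothetical (a pixel-based agent would replace the state MLP with a CNN encoder).

```python
# Hypothetical sketch of a contrastive future-state-prediction auxiliary loss.
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveFuturePrediction(nn.Module):
    def __init__(self, obs_dim: int, action_dim: int, latent_dim: int = 128):
        super().__init__()
        # Encoders for observations and actions (assumed MLPs for brevity).
        self.state_encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )
        self.action_encoder = nn.Linear(action_dim, latent_dim)
        # Latent transition model: predicts the next latent from (state, action).
        self.transition = nn.Sequential(
            nn.Linear(2 * latent_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )

    def forward(self, obs, action, next_obs, temperature: float = 0.1):
        z = self.state_encoder(obs)            # (B, latent_dim)
        a = self.action_encoder(action)        # (B, latent_dim)
        z_next = self.state_encoder(next_obs)  # (B, latent_dim)
        pred = self.transition(torch.cat([z, a], dim=-1))

        # InfoNCE: each predicted latent should match its own next-state latent,
        # with the other next-states in the batch serving as negatives.
        pred = F.normalize(pred, dim=-1)
        z_next = F.normalize(z_next, dim=-1)
        logits = pred @ z_next.t() / temperature        # (B, B) similarity matrix
        labels = torch.arange(logits.size(0), device=logits.device)
        return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Usage sketch: this auxiliary loss would be added to the RL objective
    # with a small weight during training.
    aux = ContrastiveFuturePrediction(obs_dim=32, action_dim=6)
    obs = torch.randn(64, 32)
    act = torch.randn(64, 6)
    nxt = torch.randn(64, 32)
    print(aux(obs, act, nxt).item())
```

The abstract's point is that objectives of this generic form, while useful, are not guaranteed to yield representations sufficient for the optimal policy or value function, which motivates the paper's proposed alternative.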
