Web: http://arxiv.org/abs/2102.08363

Jan. 28, 2022, 2:11 a.m. | Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn

cs.LG updates on arXiv.org

Model-based algorithms, which learn a dynamics model from logged experience
and perform some sort of pessimistic planning under the learned model, have
emerged as a promising paradigm for offline reinforcement learning (offline
RL). However, practical variants of such model-based algorithms rely on
explicit uncertainty quantification for incorporating pessimism. Uncertainty
estimation with complex models, such as deep neural networks, can be difficult
and unreliable. We overcome this limitation by developing a new model-based
offline RL algorithm, COMBO, that regularizes the value …
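The mechanism the abstract describes — penalizing value estimates on model-generated data while trusting the logged dataset, without explicit uncertainty estimates — can be illustrated with a minimal tabular sketch. This is not the paper's implementation: the function name, the tabular setting, and the `beta`/`lr` parameters are all assumptions made for illustration.

```python
def combo_q_update(q, dataset_sa, model_sa, bellman_targets, beta=1.0, lr=0.1):
    """One illustrative tabular Q update in the spirit of COMBO.

    q: dict mapping (state, action) -> current value estimate
    dataset_sa: (state, action) pairs drawn from the logged offline dataset
    model_sa: (state, action) pairs drawn from rollouts under the learned model
    bellman_targets: dict (state, action) -> regression target r + gamma * max_a' Q
    beta: weight of the conservative penalty (assumed name)
    lr: step size (assumed name)
    """
    q = dict(q)
    # Conservative regularizer: push values DOWN on model-generated pairs
    # (where the learned model may be wrong) and UP on dataset pairs,
    # so no explicit uncertainty quantification is needed.
    for sa in model_sa:
        q[sa] = q.get(sa, 0.0) - lr * beta
    for sa in dataset_sa:
        q[sa] = q.get(sa, 0.0) + lr * beta
    # Standard Bellman regression step toward the targets.
    for sa, target in bellman_targets.items():
        q[sa] = q.get(sa, 0.0) + lr * (target - q.get(sa, 0.0))
    return q
```

The net effect is that value estimates on out-of-support, model-only state-action pairs are systematically pessimistic relative to those supported by the dataset, which is the behavior the abstract attributes to the regularizer.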

