Jan. 28, 2022, 2:11 a.m. | Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn

cs.LG updates on arXiv.org

Model-based algorithms, which learn a dynamics model from logged experience
and perform a form of pessimistic planning under the learned model, have
emerged as a promising paradigm for offline reinforcement learning (offline
RL). However, practical variants of such model-based algorithms rely on
explicit uncertainty quantification to incorporate pessimism, and uncertainty
estimation with complex models, such as deep neural networks, can be difficult
and unreliable. We overcome this limitation by developing a new model-based
offline RL algorithm, COMBO, that regularizes the value …
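The truncated abstract points to conservative value regularization on model-generated data rather than explicit uncertainty estimates. Below is a minimal sketch of what such a Q-function update could look like, assuming a PyTorch actor-critic setup; the names (q_net, target_q_net, policy, beta) and the batch layout are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def conservative_q_loss(q_net, target_q_net, policy,
                        dataset_batch, model_batch,
                        beta=1.0, gamma=0.99):
    """Sketch of a COMBO-style conservative Q-loss (hypothetical names/shapes).

    Q-values on state-action pairs from model rollouts (which may leave the
    support of the logged data) are pushed down, while Q-values on pairs from
    the dataset are pushed up; a standard Bellman error is computed on a
    mixture of real and model-generated transitions.
    """
    # Conservative penalty: E_rollout[Q] - E_data[Q]
    q_rollout = q_net(model_batch["obs"], model_batch["act"])
    q_data = q_net(dataset_batch["obs"], dataset_batch["act"])
    penalty = q_rollout.mean() - q_data.mean()

    # Mixed batch of logged and model-generated transitions
    obs = torch.cat([dataset_batch["obs"], model_batch["obs"]])
    act = torch.cat([dataset_batch["act"], model_batch["act"]])
    rew = torch.cat([dataset_batch["rew"], model_batch["rew"]])
    next_obs = torch.cat([dataset_batch["next_obs"], model_batch["next_obs"]])
    done = torch.cat([dataset_batch["done"], model_batch["done"]])

    # Standard Bellman target using a frozen target network
    with torch.no_grad():
        next_act = policy(next_obs)
        target = rew + gamma * (1.0 - done) * target_q_net(next_obs, next_act)
    bellman = F.mse_loss(q_net(obs, act), target)

    return beta * penalty + bellman
```

The point of this shape of loss is that pessimism comes from the penalty term itself: out-of-support rollout samples get lower values without ever fitting an explicit uncertainty model for the learned dynamics.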

Tags: arxiv, optimization, policy
