April 24, 2023, 12:44 a.m. | Philippe Hansen-Estruch, Ilya Kostrikov, Michael Janner, Jakub Grudzien Kuba, Sergey Levine

cs.LG updates on arXiv.org

Effective offline RL methods require properly handling out-of-distribution
actions. Implicit Q-learning (IQL) addresses this by training a Q-function
using only dataset actions through a modified Bellman backup. However, it is
unclear which policy actually attains the values represented by this implicitly
trained Q-function. In this paper, we reinterpret IQL as an actor-critic method
by generalizing the critic objective and connecting it to a
behavior-regularized implicit actor. This generalization shows how the induced
actor balances reward maximization and divergence from the …
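The "modified Bellman backup" mentioned in the abstract avoids evaluating out-of-distribution actions by fitting a separate state-value function with expectile regression against the Q-function over dataset actions, and then using that value function as the backup target. The snippet below is a minimal sketch of that critic update under standard IQL assumptions; the network objects, batch field names, and the expectile parameter `tau` are illustrative placeholders rather than the authors' released code.

```python
import torch
import torch.nn.functional as F


def expectile_loss(diff, tau=0.7):
    # Asymmetric L2: positive residuals weighted by tau, negative by (1 - tau).
    weight = torch.where(diff > 0, tau, 1.0 - tau)
    return (weight * diff.pow(2)).mean()


def iql_critic_losses(q_net, v_net, target_q_net, batch, gamma=0.99, tau=0.7):
    """One IQL-style critic step on a batch of dataset transitions.

    `batch` is assumed to hold tensors: "s", "a", "r", "s_next", "done".
    Only dataset actions are ever fed to the Q-function, so no
    out-of-distribution actions are queried.
    """
    # Value loss: expectile regression of V(s) toward Q(s, a) on dataset actions.
    with torch.no_grad():
        q_sa = target_q_net(batch["s"], batch["a"])
    v_s = v_net(batch["s"])
    value_loss = expectile_loss(q_sa - v_s, tau)

    # Q loss: TD regression whose target uses V(s') instead of max_a' Q(s', a'),
    # i.e. the modified Bellman backup described in the abstract.
    with torch.no_grad():
        target = batch["r"] + gamma * (1.0 - batch["done"]) * v_net(batch["s_next"])
    q_pred = q_net(batch["s"], batch["a"])
    q_loss = F.mse_loss(q_pred, target)

    return value_loss, q_loss
```

As `tau` approaches 1, the expectile regression pushes V(s) toward an approximate maximum of Q(s, a) over actions supported by the dataset, which is what makes this an implicit form of Q-learning.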

