Jan. 20, 2022, 2:11 a.m. | Lionel Blondé, Alexandros Kalousis, Stéphane Marchand-Maillet

cs.LG updates on arXiv.org

The performance of state-of-the-art offline RL methods varies widely over the
spectrum of dataset qualities, ranging from far-from-optimal random data to
close-to-optimal expert demonstrations. We re-implement these methods to test
their reproducibility, and show that when a given method outperforms the others
on one end of the spectrum, it never does on the other end. This prevents us
from naming a victor across the board. We attribute the asymmetry to the amount
of inductive bias injected into the agent to …
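The evaluation protocol the abstract describes can be illustrated with a minimal sketch. Everything below is hypothetical scaffolding, not the paper's actual implementation: the method names, quality tiers, and scoring function are placeholders standing in for real offline RL algorithms (e.g. BCQ, CQL) trained on D4RL-style datasets. The sketch sweeps each method over dataset-quality tiers and checks whether any single method wins on every tier.

import random

# Hypothetical placeholders for offline RL methods and
# D4RL-style dataset quality tiers (random -> expert).
METHODS = ["method_A", "method_B", "method_C"]
TIERS = ["random", "medium", "expert"]

def normalized_return(method: str, tier: str) -> float:
    """Placeholder for training `method` on a `tier` dataset and
    evaluating it; a real run would return a normalized score."""
    rng = random.Random(method + tier)  # deterministic stand-in
    return rng.uniform(0.0, 100.0)

# Record the best-performing method per dataset-quality tier.
winners = {}
for tier in TIERS:
    scores = {m: normalized_return(m, tier) for m in METHODS}
    winners[tier] = max(scores, key=scores.get)
    print(tier, scores, "-> winner:", winners[tier])

# The paper's observation: the winner flips across the spectrum,
# so no single method can be named the victor across the board.
if len(set(winners.values())) > 1:
    print("No victor across the board.")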

arxiv, biases, guidelines, learning, reinforcement learning
