May 23, 2022, 1:10 a.m. | Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine

cs.LG updates on arXiv.org

Model-based reinforcement learning methods often use learning only for the
purpose of estimating an approximate dynamics model, offloading the rest of the
decision-making work to classical trajectory optimizers. While conceptually
simple, this combination has a number of empirical shortcomings, suggesting
that learned models may not be well-suited to standard trajectory optimization.
In this paper, we consider what it would look like to fold as much of the
trajectory optimization pipeline as possible into the modeling problem, such
that sampling from …

Tags: arxiv, behavior, diffusion, planning
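The abstract's idea of folding trajectory optimization into the generative model, so that producing a plan is just sampling from the model, can be made concrete with a toy denoising-diffusion sketch. The sketch below is illustrative only and is not the authors' implementation: the horizon, state dimension, noise schedule, and start/goal trajectory are all made-up assumptions, and the analytic Gaussian denoiser `eps_theta` (exact when trajectories are drawn from N(mu, I)) merely stands in for the learned network a real system would train.

```python
import numpy as np

# Toy "planning as sampling": ancestral diffusion sampling over an entire
# trajectory at once, rather than optimizing it step by step.

T_DIFFUSION = 100           # number of denoising steps (assumption)
HORIZON, STATE_DIM = 32, 2  # planning horizon and state size (assumptions)

# Standard linear noise schedule and its cumulative products.
betas = np.linspace(1e-4, 0.02, T_DIFFUSION)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

# Hypothetical trajectory "dataset": straight lines from start to goal with
# unit Gaussian deviations, so the data distribution is N(mu, I).
start, goal = np.zeros(STATE_DIM), np.array([5.0, 3.0])
mu = np.linspace(start, goal, HORIZON)  # shape (HORIZON, STATE_DIM)

def eps_theta(x_t, t):
    """Noise prediction that is analytically optimal for x_0 ~ N(mu, I).

    Stands in for a learned denoising network; E[eps | x_t] has this
    closed form because x_t is jointly Gaussian with the injected noise.
    """
    ab = alpha_bars[t]
    return np.sqrt(1.0 - ab) * (x_t - np.sqrt(ab) * mu)

def sample_plan(rng):
    """Reverse diffusion: start from pure noise, denoise into a trajectory."""
    x = rng.standard_normal((HORIZON, STATE_DIM))
    for t in reversed(range(T_DIFFUSION)):
        z = rng.standard_normal(x.shape) if t > 0 else 0.0
        coef = (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps_theta(x, t)) / np.sqrt(alphas[t]) \
            + np.sqrt(betas[t]) * z
    return x

rng = np.random.default_rng(0)
plan = sample_plan(rng)
print("first state:", plan[0], "last state:", plan[-1])
```

Running this draws trajectories that concentrate around the straight-line mean `mu`, so the sampled output is already a plan: no separate trajectory optimizer runs on top of the model, which is the design point the abstract is arguing for.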
