A Simple Solution for Offline Imitation from Observations and Examples with Possibly Incomplete Trajectories. (arXiv:2311.01329v1 [cs.LG])
cs.LG updates on arXiv.org
Offline imitation from observations aims to solve MDPs where only
task-specific expert states and task-agnostic non-expert state-action pairs are
available. Offline imitation is useful in real-world scenarios where arbitrary
interactions are costly and expert actions are unavailable. The
state-of-the-art "DIstribution Correction Estimation" (DICE) methods minimize
the divergence between expert and learner state-occupancy distributions and
retrieve a policy via weighted behavior cloning; however, their results are
unstable when learning from incomplete trajectories, due to a non-robust
optimization in the dual …
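The weighted behavior cloning step mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a discrete-action policy represented by logits and per-sample importance weights (in DICE methods these would be occupancy-ratio estimates) already computed.

```python
import numpy as np

def weighted_bc_loss(logits, actions, weights):
    """Weighted behavior-cloning loss: -(1/W) * sum_i w_i * log pi(a_i | s_i).

    logits:  (N, A) policy logits for N states over A discrete actions
    actions: (N,)   actions observed in the offline dataset
    weights: (N,)   per-sample importance weights (hypothetical stand-in
                    for DICE occupancy-ratio estimates)
    """
    # Log-softmax computed in a numerically stable way
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Pick the log-probability of each taken action
    log_pi_a = log_probs[np.arange(len(actions)), actions]
    # Weighted negative log-likelihood, normalized by total weight
    return -(weights * log_pi_a).sum() / max(weights.sum(), 1e-8)

# Toy example: with uniform weights this reduces to plain behavior cloning
logits = np.zeros((4, 3))           # uniform policy over 3 actions
actions = np.array([0, 1, 2, 0])
weights = np.ones(4)
loss = weighted_bc_loss(logits, actions, weights)
# For a uniform policy over 3 actions, the loss equals log(3)
```

Minimizing this loss with gradient descent would push the policy toward actions that the weights mark as expert-like, which is the intuition behind the retrieval step the abstract describes.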