Model-Based Imitation Learning Using Entropy Regularization of Model and Policy. (arXiv:2206.10101v2 [cs.LG] UPDATED)
cs.LG updates on arXiv.org
Approaches based on generative adversarial networks for imitation learning
are promising because they are sample efficient in terms of expert
demonstrations. However, training the generator requires many interactions with
the actual environment because model-free reinforcement learning is used to
update the policy. To improve the sample efficiency using model-based
reinforcement learning, we propose model-based Entropy-Regularized Imitation
Learning (MB-ERIL) under the entropy-regularized Markov decision process to
reduce the number of interactions with the actual environment. MB-ERIL uses two
discriminators. A policy …
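The abstract frames the method within an entropy-regularized Markov decision process, where the standard Bellman backup is replaced by a "soft" log-sum-exp backup and the optimal policy becomes a softmax over soft Q-values. As a minimal illustration of that underlying formalism only (a toy MDP with hypothetical numbers, not MB-ERIL itself), soft value iteration can be sketched as:

```python
import numpy as np

# Toy 2-state, 2-action MDP (hypothetical numbers, for illustration only).
P = np.array([  # P[s, a, s']: transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.5, 0.5], [0.1, 0.9]],
])
R = np.array([[1.0, 0.0], [0.0, 2.0]])  # R[s, a]: rewards
gamma, beta = 0.95, 5.0  # discount factor; inverse temperature (entropy weight 1/beta)

V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * P @ V  # soft Q-values, shape (states, actions)
    # Soft Bellman backup: V(s) = (1/beta) * log sum_a exp(beta * Q(s, a)),
    # computed stably by shifting by the row-wise max.
    m = Q.max(axis=1)
    V = m + np.log(np.exp(beta * (Q - m[:, None])).sum(axis=1)) / beta

# The entropy-regularized optimal policy is a softmax over soft Q-values.
pi = np.exp(beta * (Q - V[:, None]))
print(pi)  # each row sums to 1
```

As beta grows, the softmax sharpens toward the greedy policy of the unregularized MDP; small beta yields a more stochastic, higher-entropy policy.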
arxiv entropy imitation learning policy regularization