Sept. 1, 2022, 1:11 a.m. | Eiji Uchibe

cs.LG updates on arXiv.org

Approaches based on generative adversarial networks for imitation learning are promising because they are sample efficient in terms of expert demonstrations. However, training the generator requires many interactions with the actual environment, because a model-free reinforcement learning algorithm is used to update the policy. To reduce the number of interactions with the actual environment, we propose Model-Based Entropy-Regularized Imitation Learning (MB-ERIL), which improves sample efficiency through model-based reinforcement learning under an entropy-regularized Markov decision process. MB-ERIL uses two discriminators. A policy …
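In an entropy-regularized Markov decision process, the reward is augmented with a policy-entropy bonus, so the policy maximizes E[sum_t gamma^t (r(s_t, a_t) + alpha * H(pi(.|s_t)))]. As a rough illustration of the two-discriminator idea the abstract describes, here is a minimal PyTorch sketch: one discriminator separates expert state-action pairs from policy-generated ones, and a second separates real transitions from transitions produced by a learned dynamics model. The network sizes, synthetic data, and single update step below are placeholder assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (not the paper's code) of a two-discriminator setup
# in the spirit of MB-ERIL. All data here is synthetic.
import torch
import torch.nn as nn

def mlp(in_dim):
    # Small binary classifier producing a single logit.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))

state_dim, action_dim, batch = 4, 2, 32

# Policy discriminator: expert (s, a) pairs vs. pairs from the learned policy.
policy_disc = mlp(state_dim + action_dim)
# Model discriminator: actual transitions (s, a, s') vs. transitions
# generated by the learned dynamics model.
model_disc = mlp(state_dim + action_dim + state_dim)

opt = torch.optim.Adam(
    list(policy_disc.parameters()) + list(model_disc.parameters()), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

# Synthetic stand-ins for expert data, policy rollouts, and model rollouts.
expert_sa = torch.randn(batch, state_dim + action_dim)
policy_sa = torch.randn(batch, state_dim + action_dim)
real_sas = torch.randn(batch, 2 * state_dim + action_dim)
model_sas = torch.randn(batch, 2 * state_dim + action_dim)

# One discriminator update: label expert/real samples 1, generated samples 0.
loss = (bce(policy_disc(expert_sa), torch.ones(batch, 1))
        + bce(policy_disc(policy_sa), torch.zeros(batch, 1))
        + bce(model_disc(real_sas), torch.ones(batch, 1))
        + bce(model_disc(model_sas), torch.zeros(batch, 1)))
opt.zero_grad()
loss.backward()
opt.step()
```

In a complete method, the discriminator outputs would feed back into the policy and dynamics-model updates; this sketch shows only a single discriminator training step.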

