Continuous Mean-Zero Disagreement-Regularized Imitation Learning (CMZ-DRIL)
March 5, 2024, 2:41 p.m. | Noah Ford, Ryan W. Gardner, Austin Juhl, Nathan Larson
cs.LG updates on arXiv.org
Abstract: Machine-learning paradigms such as imitation learning and reinforcement learning can generate highly performant agents in a variety of complex environments. However, commonly used methods require large quantities of data and/or a known reward function. This paper presents a method called Continuous Mean-Zero Disagreement-Regularized Imitation Learning (CMZ-DRIL) that employs a novel reward structure to improve the performance of imitation-learning agents that have access to only a handful of expert demonstrations. CMZ-DRIL uses reinforcement learning to minimize …
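The abstract is cut off above, so the exact reward definition is not given here. By analogy with disagreement-regularized imitation learning (DRIL), a plausible reading of the title is that the reward is a continuous, mean-zero function of the disagreement among an ensemble of behavior-cloned policies, penalizing states where the ensemble's action predictions diverge. The following is a minimal sketch under that assumption only; the class name, interface, and running-statistics normalization are illustrative choices, not the paper's formulation.

import numpy as np

class MeanZeroDisagreementReward:
    """Hypothetical sketch (not the paper's exact method): reward the agent
    for staying in states where a behavior-cloned ensemble agrees.
    Disagreement is the variance of the ensemble's action predictions;
    running statistics keep the reward continuous and roughly mean-zero."""

    def __init__(self, ensemble, eps=1e-8):
        self.ensemble = ensemble  # list of callables: state -> action (np.ndarray)
        self.eps = eps
        self.count = 0
        self.mean = 0.0           # running mean of disagreement
        self.m2 = 0.0             # running sum of squared deviations (Welford)

    def __call__(self, state):
        # Ensemble disagreement: average per-dimension variance of predicted actions.
        actions = np.stack([policy(state) for policy in self.ensemble])
        d = float(actions.var(axis=0).mean())

        # Welford update of the running mean and variance of disagreement.
        self.count += 1
        delta = d - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (d - self.mean)
        std = np.sqrt(self.m2 / max(self.count - 1, 1)) + self.eps

        # Continuous, approximately mean-zero reward: below-average disagreement
        # is rewarded, above-average disagreement is penalized.
        return -(d - self.mean) / std

# Example usage with a toy two-member "ensemble" (assumed interface):
# ensemble = [lambda s: s * 0.9, lambda s: s * 1.1]
# reward_fn = MeanZeroDisagreementReward(ensemble)
# r = reward_fn(np.array([0.5, -0.2]))

Centering the reward on the running mean of disagreement (rather than clipping it to plus or minus one, as in the original DRIL cost) is one simple way to obtain the "continuous" and "mean-zero" properties the title names; whether the paper uses this normalization is an assumption here.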