SAFE-GIL: SAFEty Guided Imitation Learning
April 9, 2024, 4:43 a.m. | Yusuf Umut Ciftci, Zeyuan Feng, Somil Bansal
cs.LG updates on arXiv.org
Abstract: Behavior Cloning is a popular approach to Imitation Learning, in which a robot observes an expert supervisor and learns a control policy. However, behavior cloning suffers from the "compounding error" problem: errors compound as the policy deviates from the expert demonstrations, potentially leading to catastrophic system failures and limiting its use in safety-critical applications. On-policy data aggregation methods are able to address this issue at the cost of rolling out and repeated training …
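For intuition, the behavior cloning setup described in the abstract can be sketched as plain supervised regression on expert state-action pairs. The sketch below is a generic toy illustration under assumed names (a linear expert with gains `K_expert`), not the SAFE-GIL method itself: with clean demonstrations the cloned linear policy recovers the expert exactly, but nothing constrains its behavior on states outside the demonstration distribution, which is where compounding error arises at rollout time.

```python
import numpy as np

# Toy behavior cloning: fit a policy to expert state-action pairs by
# supervised regression. Generic illustration only, not SAFE-GIL.
rng = np.random.default_rng(0)

# Assumed expert: a linear controller u = K_expert @ x (unknown to learner).
K_expert = np.array([[1.5, -0.7]])

# Expert demonstrations: observed states and the expert's actions.
states = rng.normal(size=(200, 2))
actions = states @ K_expert.T

# Behavior cloning step: least-squares fit of a linear policy u = K_hat @ x.
K_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)
K_hat = K_hat.T

# On the demonstration distribution the clone matches the expert;
# off-distribution states receive no supervision, so small rollout
# errors can compound without correction.
print(np.allclose(K_hat, K_expert))
```

The least-squares step stands in for any supervised learner; the compounding-error issue the abstract raises is a property of the closed-loop rollout, not of the particular regressor used.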