LORD: Large Models based Opposite Reward Design for Autonomous Driving
March 29, 2024, 4:42 a.m. | Xin Ye, Feng Tao, Abhirup Mallik, Burhaneddin Yaman, Liu Ren
cs.LG updates on arXiv.org arxiv.org
Abstract: Reinforcement learning (RL) based autonomous driving has emerged as a promising alternative to data-driven imitation learning approaches. However, crafting effective reward functions for RL poses challenges due to the complexity of defining and quantifying good driving behaviors across diverse scenarios. Recently, large pretrained models have gained significant attention as zero-shot reward models for tasks specified with desired linguistic goals. However, the desired linguistic goals for autonomous driving such as "drive safely" are ambiguous and incomprehensible …
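The zero-shot reward idea described above can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's implementation: `opposite_reward` scores an observation against an *undesired* linguistic goal (e.g. "collision"), which is concrete, instead of an ambiguous desired goal like "drive safely". The embeddings stand in for the outputs of a pretrained vision-language encoder; in practice one would come from encoding the driving observation and the other from encoding the goal text.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def opposite_reward(obs_embedding, undesired_goal_embedding):
    # Higher similarity to the undesired goal ("collision") -> lower reward.
    return -cosine_similarity(obs_embedding, undesired_goal_embedding)

# Toy embeddings standing in for a pretrained model's outputs.
collision_embedding = [1.0, 0.0]
safe_obs = [0.0, 1.0]    # orthogonal to "collision"
risky_obs = [0.9, 0.1]   # nearly aligned with "collision"

# A safe observation should earn a higher reward than a risky one.
assert opposite_reward(safe_obs, collision_embedding) > \
       opposite_reward(risky_obs, collision_embedding)
```

The negation is the core of the "opposite" trick: penalizing proximity to a well-defined bad outcome sidesteps having to quantify vague good behavior.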