On the Road to Portability: Compressing End-to-End Motion Planner for Autonomous Driving
March 5, 2024, 2:48 p.m. | Kaituo Feng, Changsheng Li, Dongchun Ren, Ye Yuan, Guoren Wang
cs.CV updates on arXiv.org
Abstract: End-to-end motion planning models equipped with deep neural networks have shown great potential for enabling full autonomous driving. However, their oversized neural networks render them impractical for deployment on resource-constrained systems, as they unavoidably demand more computational time and resources during inference. To handle this, knowledge distillation offers a promising approach: it compresses models by enabling a smaller student model to learn from a larger teacher model. Nevertheless, how to apply knowledge distillation to compress motion planners …
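The distillation idea the abstract describes — a small student model trained to match a large teacher's outputs — can be sketched with a temperature-softened loss in the style of classic knowledge distillation. This is a generic illustration, not the paper's actual planner-specific method; the function names, temperature value, and toy logits below are all assumptions for the example.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits (numerically stable)."""
    m = max(x / temperature for x in logits)
    exps = [math.exp(x / temperature - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    A higher temperature softens both distributions so the student also
    learns from the teacher's relative preferences among wrong classes.
    The T^2 factor keeps gradient magnitudes comparable to a hard-label
    loss, as in standard distillation recipes.
    """
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's predictions
    return (temperature ** 2) * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q)
    )

# Toy check: a student that matches the teacher exactly incurs zero loss,
# while a disagreeing student incurs a positive loss.
assert abs(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])) < 1e-9
assert distillation_loss([0.0, 0.0, 1.0], [1.0, 0.0, 0.0]) > 0.0
```

In practice this loss term would be added to the student's task loss during training; how the paper adapts it to end-to-end motion planners is exactly the question the abstract raises.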