Hacking Predictors Means Hacking Cars: Using Sensitivity Analysis to Identify Trajectory Prediction Vulnerabilities for Autonomous Driving Security
May 22, 2024, 4:43 a.m. | Marsalis Gibson, David Babazadeh, Claire Tomlin, Shankar Sastry
cs.LG updates on arXiv.org arxiv.org
Abstract: Adversarial attacks on learning-based multi-modal trajectory predictors have already been demonstrated. However, open questions remain about the effects of perturbations on inputs other than state histories, and about how these attacks impact downstream planning and control. In this paper, we conduct a sensitivity analysis on two trajectory prediction models, Trajectron++ and AgentFormer. The analysis reveals that, among all inputs, almost all of the perturbation sensitivities for both models lie only within the most recent …
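The core idea of a perturbation-based sensitivity analysis like the one described above can be illustrated with a minimal sketch. The snippet below is not the paper's method (which targets Trajectron++ and AgentFormer); it is a hypothetical finite-difference probe of a toy constant-velocity predictor, showing how per-input sensitivities reveal that only the most recent states matter:

```python
import numpy as np

def sensitivity(predict, history, eps=1e-3):
    """Finite-difference sensitivity of a trajectory predictor.

    Perturb each scalar input in `history` by `eps` and measure how
    much the predicted trajectory changes, normalized by `eps`.
    """
    base = predict(history)
    sens = np.zeros(history.shape)
    for idx in np.ndindex(history.shape):
        perturbed = history.copy()
        perturbed[idx] += eps
        sens[idx] = np.linalg.norm(predict(perturbed) - base) / eps
    return sens

def predict(hist):
    # Toy predictor: constant-velocity extrapolation from the last
    # two observed states, rolled out for three future steps.
    vel = hist[-1] - hist[-2]
    return hist[-1] + np.arange(1, 4)[:, None] * vel

# 5 timesteps of 2-D positions, agent moving along x
history = np.zeros((5, 2))
history[:, 0] = np.arange(5.0)

S = sensitivity(predict, history)
# Only the two most recent timesteps influence this predictor,
# so S is zero for all earlier timesteps.
```

Applied to a learned model instead of the toy predictor, the same per-input sensitivity map indicates which parts of the input an adversary can most profitably perturb.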
abstract adversarial adversarial attacks analysis arxiv attacks autonomous autonomous driving cars control cs.cr cs.lg cs.ro cs.sy driving eess.sy effects hacking however identify impact inputs modal multi-modal planning prediction questions replace security sensitivity state trajectory type vulnerabilities