May 22, 2024, 4:43 a.m. | Marsalis Gibson, David Babazadeh, Claire Tomlin, Shankar Sastry

cs.LG updates on arXiv.org

arXiv:2401.10313v2 Announce Type: replace-cross
Abstract: Adversarial attacks on learning-based multi-modal trajectory predictors have already been demonstrated. However, there are still open questions about the effects of perturbations on inputs other than state histories, and how these attacks impact downstream planning and control. In this paper, we conduct a sensitivity analysis on two trajectory prediction models, Trajectron++ and AgentFormer. The analysis reveals that between all inputs, almost all of the perturbation sensitivities for both models lie only within the most recent …

