Revisiting PGD Attacks for Stability Analysis of Large-Scale Nonlinear Systems and Perception-Based Control. (arXiv:2201.00801v1 [math.OC])
Jan. 4, 2022, 2:10 a.m. | Aaron Havens, Darioush Keivan, Peter Seiler, Geir Dullerud, Bin Hu
cs.LG updates on arXiv.org arxiv.org
Many existing region-of-attraction (ROA) analysis tools struggle to address
feedback systems with large-scale neural network (NN) policies and/or
high-dimensional sensing modalities such as cameras. In this paper, we tailor
the projected gradient descent (PGD) attack method developed in the
adversarial learning community as a general-purpose ROA analysis tool for
large-scale nonlinear systems and end-to-end perception-based control. We show
that the ROA analysis can be approximated as a constrained maximization problem
whose goal is to find the worst-case initial condition …
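The constrained maximization the abstract describes can be sketched with a toy PGD loop: ascend on the rollout cost with respect to the initial condition, then project back onto a norm ball of candidate initial states. This is a minimal illustrative sketch, not the paper's implementation; the `tanh` dynamics, the terminal-state-norm cost, the l-inf ball, and the finite-difference gradient are all assumptions made for self-containedness.

```python
import numpy as np

def rollout_cost(x0, A, T):
    # Simulate toy nonlinear dynamics x_{k+1} = tanh(A @ x_k) for T steps
    # and return the terminal-state norm as a stand-in "instability" cost.
    x = x0
    for _ in range(T):
        x = np.tanh(A @ x)
    return np.linalg.norm(x)

def pgd_worst_init(x_nom, A, eps, T=20, steps=50, lr=0.1):
    # PGD-style ascent: search for the worst-case initial condition inside
    # the l_inf ball of radius eps around x_nom (a hypothetical ROA candidate).
    x0 = x_nom.copy()
    h = 1e-5
    for _ in range(steps):
        # Finite-difference gradient of the rollout cost w.r.t. x0
        # (a real implementation would backpropagate through the rollout).
        base = rollout_cost(x0, A, T)
        g = np.zeros_like(x0)
        for i in range(len(x0)):
            xp = x0.copy()
            xp[i] += h
            g[i] = (rollout_cost(xp, A, T) - base) / h
        x0 = x0 + lr * np.sign(g)                    # signed ascent step
        x0 = np.clip(x0, x_nom - eps, x_nom + eps)   # project onto the ball
    return x0
```

If the returned initial condition drives the rollout far from the equilibrium, the candidate ball is not contained in the ROA; if even the PGD worst case decays, that is (heuristic, not certified) evidence in favor of the candidate.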