April 9, 2024, 4:41 a.m. | Yuezhu Xu, S. Sivaranjani

cs.LG updates on arXiv.org

arXiv:2404.04375v1 Announce Type: new
Abstract: The Lipschitz constant plays a crucial role in certifying the robustness of neural networks to input perturbations and adversarial attacks, as well as the stability and safety of systems with neural network controllers. Therefore, estimation of tight bounds on the Lipschitz constant of neural networks is a well-studied topic. However, typical approaches involve solving a large matrix verification problem, the computational cost of which grows significantly for deeper networks. In this letter, we provide a …

