Web: http://arxiv.org/abs/2203.00295

June 24, 2022, 1:11 a.m. | Can Zhou, Razin A. Shaikh, Yiran Li, Amin Farjudian

cs.LG updates on arXiv.org

We present a domain-theoretic framework for validated robustness analysis of
neural networks. We first analyze the global robustness of a general class of
networks. Then, using the fact that Edalat's domain-theoretic L-derivative
coincides with Clarke's generalized gradient, we extend our framework for
attack-agnostic local robustness analysis. Our framework is ideal for designing
algorithms that are correct by construction. We exemplify this claim by
developing a validated algorithm for estimation of the Lipschitz constant of
feedforward regressors. We prove the completeness of …
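To make the Lipschitz-estimation idea concrete, here is a hypothetical minimal sketch (not the paper's domain-theoretic algorithm) of a validated upper bound for a feedforward ReLU regressor. It uses the fact, echoed in the abstract, that the Clarke generalized gradient of ReLU lies in [0, 1], so the network's Jacobian lies in an interval matrix product that can be bounded with interval arithmetic; the function names and the spectral-norm over-approximation are this sketch's own choices.

```python
import numpy as np

def interval_matmul(A_lo, A_hi, B_lo, B_hi):
    """Sound bounds on A @ B for A in [A_lo, A_hi], B in [B_lo, B_hi].

    Each entry (i, j) is a sum over k of interval products; the min/max of
    each product is attained at one of the four endpoint combinations.
    """
    prods = np.stack([
        A_lo[:, :, None] * B_lo[None, :, :],
        A_lo[:, :, None] * B_hi[None, :, :],
        A_hi[:, :, None] * B_lo[None, :, :],
        A_hi[:, :, None] * B_hi[None, :, :],
    ])
    lo = prods.min(axis=0).sum(axis=1)
    hi = prods.max(axis=0).sum(axis=1)
    return lo, hi

def lipschitz_upper_bound(weights):
    """Validated upper bound on the 2-norm Lipschitz constant of a ReLU net.

    weights: [W1, ..., WL], with a ReLU between consecutive layers.
    The Clarke Jacobian lies in WL @ D @ ... @ D @ W1 where each D is a
    diagonal matrix with entries in [0, 1].
    """
    J_lo, J_hi = weights[0].copy(), weights[0].copy()
    for W in weights[1:]:
        # Left-multiplying by diag(d), d in [0, 1], scales each row of the
        # Jacobian interval by [0, 1]: the new bounds are [min(lo, 0), max(hi, 0)].
        J_lo, J_hi = np.minimum(J_lo, 0.0), np.maximum(J_hi, 0.0)
        J_lo, J_hi = interval_matmul(W, W, J_lo, J_hi)
    # Every J in [J_lo, J_hi] satisfies ||J||_2 <= ||max(|J_lo|, |J_hi|)||_2,
    # since the spectral norm is monotone on entrywise-dominating nonnegative matrices.
    M = np.maximum(np.abs(J_lo), np.abs(J_hi))
    return np.linalg.norm(M, 2)
```

For f(x) = relu(x1) + relu(x2), i.e. `weights = [np.eye(2), np.array([[1.0, 1.0]])]`, the bound evaluates to sqrt(2), which is also the true Lipschitz constant, so the sketch is tight on this toy example; in general it only guarantees an over-approximation, whereas the paper's domain-theoretic algorithm comes with a completeness result.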
