March 13, 2024, 4:41 a.m. | Stefan Balauca, Mark Niklas Müller, Yuhao Mao, Maximilian Baader, Marc Fischer, Martin Vechev

cs.LG updates on arXiv.org

arXiv:2403.07095v1 Announce Type: new
Abstract: Training neural networks with high certified accuracy against adversarial examples remains an open problem despite significant efforts. While certification methods can effectively leverage tight convex relaxations for bound computation, in training, these tighter relaxations perform worse than looser ones. Prior work hypothesized that this is caused by the discontinuity and perturbation sensitivity of the loss surface induced by these tighter relaxations. In this work, we show theoretically that Gaussian Loss Smoothing can alleviate both of these …
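
The core idea named in the abstract, Gaussian Loss Smoothing, replaces the training loss with its convolution with a Gaussian over the network parameters, L_σ(θ) = E_{ε∼N(0,σ²I)}[L(θ+ε)], which is continuous and differentiable even where L itself is not. As a hedged illustration (not the paper's implementation), the sketch below estimates the gradient of such a smoothed loss with a score-function estimator that needs only loss values, so it tolerates discontinuous losses; the function name, σ, and sample count are illustrative assumptions.

```python
import torch

@torch.no_grad()
def smoothed_loss_grad(loss_fn, params, sigma=0.1, n_samples=10_000):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed loss
        L_sigma(theta) = E_{eps ~ N(0, sigma^2 I)}[ L(theta + eps) ]
    using the score-function identity
        grad L_sigma(theta) = E[ L(theta + eps) * eps / sigma^2 ].
    Only loss *values* are needed, so L may be discontinuous."""
    grads = [torch.zeros_like(p) for p in params]
    for _ in range(n_samples):
        # Perturb every parameter with i.i.d. Gaussian noise.
        eps = [sigma * torch.randn_like(p) for p in params]
        val = loss_fn([p + e for p, e in zip(params, eps)])
        for g, e in zip(grads, eps):
            g += (val / (sigma ** 2 * n_samples)) * e
    return grads

# Toy check: a piecewise-constant loss has zero gradient almost everywhere,
# yet its Gaussian smoothing has a well-defined, nonzero gradient at 0.
theta = [torch.zeros(3)]
step_loss = lambda ps: (ps[0] > 0).float().sum()
g = smoothed_loss_grad(step_loss, theta, sigma=0.1)
print(g[0])  # approx. 1/(sigma * sqrt(2*pi)) ~= 3.99 per coordinate
```

In practice one would reduce this estimator's variance (e.g. with antithetic samples or a baseline) before training a network with it, but the sketch shows why smoothing removes the discontinuity and perturbation sensitivity the abstract attributes to tight relaxations.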
