Feb. 29, 2024, 5:42 a.m. | Yuhao Mao, Mark Niklas Müller, Marc Fischer, Martin Vechev

cs.LG updates on arXiv.org

arXiv:2306.10426v2 Announce Type: replace
Abstract: As robustness verification methods are becoming more precise, training certifiably robust neural networks is becoming ever more relevant. To this end, certified training methods compute and then optimize an upper bound on the worst-case loss over a robustness specification. Curiously, training methods based on the imprecise interval bound propagation (IBP) consistently outperform those leveraging more precise bounding methods. Still, we lack an understanding of the mechanisms making IBP so successful.
In this work, we thoroughly …
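As a rough illustration of what IBP-based certified training optimizes, the sketch below propagates an input box [x − ε, x + ε] through a small fully connected network and builds an upper bound on the worst-case cross-entropy from the resulting logit bounds. This is a minimal sketch under assumed settings: the PyTorch model, the helper names (ibp_bounds, certified_loss), and the hyperparameters are illustrative and not taken from the paper.

```python
# Minimal IBP sketch (illustrative, not the paper's exact training setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

def ibp_bounds(layers, x, eps):
    """Propagate the box [x - eps, x + eps] through Linear/ReLU layers."""
    lb, ub = x - eps, x + eps
    for layer in layers:
        if isinstance(layer, nn.Linear):
            center = (ub + lb) / 2
            radius = (ub - lb) / 2
            mu = F.linear(center, layer.weight, layer.bias)
            r = F.linear(radius, layer.weight.abs())  # radius grows with |W|
            lb, ub = mu - r, mu + r
        elif isinstance(layer, nn.ReLU):
            lb, ub = lb.clamp(min=0), ub.clamp(min=0)
    return lb, ub

def certified_loss(layers, x, y, eps):
    """Upper bound on the worst-case cross-entropy over the eps-ball (via IBP)."""
    lb, ub = ibp_bounds(layers, x, eps)
    # Worst case: lower bound for the true-class logit, upper bound for the rest.
    onehot = F.one_hot(y, lb.shape[-1]).bool()
    worst_logits = torch.where(onehot, lb, ub)
    return F.cross_entropy(worst_logits, y)

# Usage: optimize the certified loss instead of the clean loss.
net = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
loss = certified_loss(list(net), x, y, eps=0.1)
loss.backward()
```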

