On the Paradox of Certified Training. (arXiv:2102.06700v3 [cs.LG] UPDATED)
Oct. 14, 2022, 1:12 a.m. | Nikola Jovanović, Mislav Balunović, Maximilian Baader, Martin Vechev
cs.LG updates on arXiv.org
Certified defenses based on convex relaxations are an established technique
for training provably robust models. The key component is the choice of
relaxation, varying from simple intervals to tight polyhedra.
Counterintuitively, training with loose interval relaxations often yields higher
certified robustness than training with tighter relaxations, a well-known but
poorly understood paradox. While recent works introduced
various improvements aiming to circumvent this issue in practice, the
fundamental problem of training models with high certified robustness remains …
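The "simple intervals" the abstract refers to are the basis of interval bound propagation (IBP): an input perturbation ball is pushed through the network layer by layer as elementwise lower/upper bounds. A minimal sketch of this relaxation for an affine layer (an illustrative helper, not code from the paper) splits the weight matrix into its positive and negative parts so each bound picks the worst-case endpoint:

```python
import numpy as np

def interval_linear(W, b, lo, hi):
    """Propagate interval bounds [lo, hi] through y = W @ x + b.

    Positive weights map the lower bound to the lower bound;
    negative weights swap the endpoints.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

# Example: an L-infinity ball of radius 0.1 around the origin.
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.zeros(2)
lo, hi = interval_linear(W, b, np.full(2, -0.1), np.full(2, 0.1))
```

Chaining such layers (with monotone activations like ReLU applied endpoint-wise) gives the certified output bounds used in interval-based certified training; tighter polyhedral relaxations replace this box abstraction with more precise constraints.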