Detecting Brittle Decisions for Free: Leveraging Margin Consistency in Deep Robust Classifiers
June 27, 2024, 4:45 a.m. | Jonas Ngnawé, Sabyasachi Sahoo, Yann Pequignot, Frédéric Precioso, Christian Gagné
cs.LG updates on arXiv.org arxiv.org
Abstract: Despite extensive research on adversarial training strategies to improve robustness, the decisions of even the most robust deep learning models can still be quite sensitive to imperceptible perturbations, creating serious risks when deploying them for high-stakes real-world applications. While detecting such cases may be critical, evaluating a model's vulnerability at a per-instance level using adversarial attacks is computationally too intensive and unsuitable for real-time deployment scenarios. The input space margin is the exact score to …
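The abstract is truncated before the method details, but its core idea of replacing expensive per-instance attack evaluation with a cheap model-output score can be sketched as follows. This is an illustrative example, not the paper's implementation: the logit margin (top score minus runner-up) stands in as an inexpensive proxy for input-space vulnerability, and the threshold is a hypothetical calibration value.

```python
import numpy as np

def logit_margins(logits: np.ndarray) -> np.ndarray:
    """Per-instance logit margin: top score minus runner-up score."""
    top_two = np.partition(logits, -2, axis=1)
    return top_two[:, -1] - top_two[:, -2]

def flag_brittle(logits: np.ndarray, threshold: float) -> np.ndarray:
    """Flag instances whose margin falls below a calibrated threshold
    as candidates for brittle (attack-sensitive) decisions."""
    return logit_margins(logits) < threshold

# Toy logits for three inputs over four classes
logits = np.array([
    [4.0, 1.0, 0.5, 0.2],  # confident prediction, large margin
    [2.1, 2.0, 0.3, 0.1],  # near the decision boundary, small margin
    [3.0, 0.5, 2.8, 0.4],  # small margin between classes 0 and 2
])
print(logit_margins(logits))        # per-instance margins
print(flag_brittle(logits, 0.5))    # which inputs look brittle
```

Computing this score requires only a single forward pass, which is what makes it attractive for real-time deployment compared with running adversarial attacks per instance.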
More from arxiv.org / cs.LG updates on arXiv.org
MixerFlow: MLP-Mixer meets Normalising Flows | 2 days, 5 hours ago | arxiv.org
Kernelised Normalising Flows | 2 days, 5 hours ago | arxiv.org
Jobs in AI, ML, Big Data
Software Engineer II – Decision Intelligence Delivery and Support
@ Bristol Myers Squibb | Hyderabad
Senior Data Governance Consultant (Remote in US)
@ Resultant | Indianapolis, IN, United States
Power BI Developer
@ Brompton Bicycle | Greenford, England, United Kingdom
VP, Enterprise Applications
@ Blue Yonder | Scottsdale
Data Scientist - Moloco Commerce Media
@ Moloco | Redwood City, California, United States
Senior Backend Engineer (New York)
@ Kalepa | New York City (Hybrid)