DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning. (arXiv:2208.00498v1 [cs.CR])
Aug. 2, 2022, 2:10 a.m. | Mohammad Hossein Samavatian, Saikat Majumdar, Kristin Barber, Radu Teodorescu
cs.LG updates on arXiv.org
DNNs are known to be vulnerable to adversarial attacks that manipulate inputs
to produce incorrect results, which can benefit the attacker or harm the
victim. Recent work has proposed approximate computation as a defense
mechanism against such attacks. We show that these approaches, while
successful for a range of inputs, are insufficient against stronger,
high-confidence adversarial attacks. To address this, we propose DNNSHIELD, a
hardware-accelerated defense that adapts the strength of the response to …
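The core idea named in the title, randomized model sparsification, can be illustrated with a minimal sketch: randomly zero out a fraction of a layer's weights independently on each inference pass, so an attacker cannot tune a perturbation against one fixed network. This is only an illustration under assumptions; the paper's actual defense is hardware-accelerated and adapts the sparsification rate per input, and the function name and fixed `drop_rate` below are hypothetical.

```python
import numpy as np

def randomized_sparsify(weights, drop_rate, rng):
    """Randomly zero a fraction of weights for one inference pass.

    Illustrative sketch only: DNNSHIELD's real mechanism adapts the
    rate to the input and runs in hardware; here `drop_rate` is a
    fixed hypothetical parameter.
    """
    # Independent Bernoulli keep-mask, redrawn every call, so each
    # inference sees a slightly different (sparser) model.
    mask = rng.random(weights.shape) >= drop_rate
    return weights * mask

rng = np.random.default_rng(0)
w = np.ones((10, 10))                       # toy weight matrix
sparse_w = randomized_sparsify(w, drop_rate=0.5, rng=rng)
```

Because the mask is redrawn per pass, two consecutive inferences on the same input traverse different sparse subnetworks, which is the property such randomized defenses rely on.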