Constraining the Attack Space of Machine Learning Models with Distribution Clamping Preprocessing. (arXiv:2205.08989v1 [cs.LG])
May 19, 2022, 1:11 a.m. | Ryan Feng, Somesh Jha, Atul Prakash
cs.LG updates on arXiv.org
Preprocessing and outlier detection techniques have both been applied to
neural networks to increase robustness with varying degrees of success. In this
paper, we formalize the ideal preprocessor function as one that would take any
input and set it to the nearest in-distribution input. In other words, we
detect any anomalous pixels and set them such that the new input is
in-distribution. We then illustrate a relaxed solution to this problem in the
context of patch attacks. Specifically, we demonstrate …
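The idea described above — detecting anomalous pixels and projecting them back into the training distribution — can be illustrated with a minimal sketch. This is not the paper's actual method; it simply clamps each pixel to per-pixel bounds estimated from training data (the function names, the quantile-based bound estimate, and the `q` parameter are all illustrative assumptions):

```python
import numpy as np

def fit_pixel_bounds(train_images, q=0.01):
    # Estimate per-pixel in-distribution bounds from training data
    # (hypothetical stand-in for a learned model of the input distribution).
    lo = np.quantile(train_images, q, axis=0)
    hi = np.quantile(train_images, 1 - q, axis=0)
    return lo, hi

def clamp_preprocess(x, lo, hi):
    # Project any out-of-range (anomalous) pixel to the nearest
    # value inside the estimated in-distribution range.
    return np.clip(x, lo, hi)

# Usage: fit bounds on clean data, then preprocess a possibly-attacked input.
rng = np.random.default_rng(0)
train = rng.uniform(0.2, 0.8, size=(100, 8, 8))
lo, hi = fit_pixel_bounds(train)
attacked = train[0].copy()
attacked[0, 0] = 5.0  # adversarial patch pixel, far out of distribution
clean = clamp_preprocess(attacked, lo, hi)
```

A real defense against patch attacks would need a richer notion of "in-distribution" than independent per-pixel ranges, which is presumably why the paper frames this as a relaxation of the ideal preprocessor.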