Detecting Adversarial Examples in Batches -- a geometrical approach. (arXiv:2206.08738v1 [cs.LG])
Web: http://arxiv.org/abs/2206.08738
June 20, 2022, 1:10 a.m. | Danush Kumar Venkatesh, Peter Steinbach
cs.LG updates on arXiv.org
Many deep learning methods have successfully solved complex tasks in computer
vision and speech recognition applications. Nonetheless, these models have been
found to be vulnerable to perturbed inputs, or adversarial examples, which are
imperceptible to the human eye but lead the model to erroneous output
decisions. In this study, we adapt and introduce two geometric metrics, density
and coverage, and evaluate their use in detecting adversarial samples in
batches of unseen data. We empirically study these metrics …
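The density and coverage metrics named in the abstract come from the generative-model evaluation literature (Naeem et al., 2020) and are computed from k-nearest-neighbour balls around a clean reference set. The sketch below is a minimal NumPy/SciPy illustration of that general recipe applied to screening a batch of unseen samples; it is not the authors' code, and the function name, the feature representation, and the choice of k = 5 are assumptions.

# Illustrative sketch only: density and coverage of a batch of unseen samples
# measured against a clean reference set, following the k-NN-ball definitions
# of Naeem et al. (2020). Not the paper's implementation.
import numpy as np
from scipy.spatial.distance import cdist

def density_and_coverage(reference, batch, k=5):
    """reference : (N, d) clean feature vectors
    batch     : (M, d) incoming samples to screen
    k         : number of neighbours defining each reference ball (assumed)
    """
    # Pairwise distances within the reference set and between the two sets.
    ref_dists = cdist(reference, reference)      # (N, N)
    cross_dists = cdist(reference, batch)        # (N, M)

    # Radius of each reference point's k-NN ball; index 0 after sorting is
    # the point itself (distance 0), so index k is the k-th neighbour.
    knn_radii = np.sort(ref_dists, axis=1)[:, k]  # (N,)

    # Membership matrix: does batch sample j fall inside reference ball i?
    inside = cross_dists < knn_radii[:, None]     # (N, M) boolean

    # Density: average number of reference balls covering a batch sample,
    # normalised by k.
    density = inside.sum() / (k * batch.shape[0])

    # Coverage: fraction of reference balls containing at least one
    # batch sample.
    coverage = inside.any(axis=1).mean()
    return density, coverage

Intuitively, a batch whose density and coverage relative to clean reference features drop sharply is unlikely to lie on the reference data manifold, which is the kind of batch-level signal such geometric screening relies on; how well this separates adversarial from benign batches is what the study evaluates empirically.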
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY