Web: http://arxiv.org/abs/1909.10480

June 16, 2022, 1:11 a.m. | Alesia Chernikova, Alina Oprea

cs.LG updates on arXiv.org

As advances in Deep Neural Networks (DNNs) demonstrate unprecedented levels
of performance in many critical applications, their vulnerability to attacks
remains an open question. We consider test-time evasion attacks against Deep
Learning in constrained environments, in which dependencies between features
must be satisfied. Such constraints may arise naturally in tabular data or may
result from feature engineering in specific application domains, such as
threat detection in cyber security. We propose a general iterative
gradient-based …
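
The abstract is truncated, but the setting it describes is an iterative gradient-based evasion attack whose perturbations must remain feasible under feature dependencies. As a rough illustration only, not the authors' algorithm, here is a minimal PGD-style sketch in PyTorch; the `project_constraints` hook is a hypothetical stand-in for the domain-specific projection that would enforce such dependencies:

```python
import torch

def iterative_evasion_attack(model, x, y, step_size=0.01, n_steps=50,
                             project_constraints=None):
    """Generic iterative gradient-based evasion attack (PGD-style sketch).

    Repeatedly perturbs the input along the sign of the loss gradient to
    push the model toward misclassification, then re-applies domain
    constraints so the adversarial example stays feasible (e.g., respects
    feature dependencies in tabular or security data).
    """
    x_adv = x.clone().detach()
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss: move the input away from its true label.
        x_adv = (x_adv + step_size * grad.sign()).detach()
        # Hypothetical hook: project back onto the feasible set defined
        # by the application's feature dependencies.
        if project_constraints is not None:
            x_adv = project_constraints(x_adv)
    return x_adv
```

In an unconstrained image setting the projection step would typically be a simple clip to an L-infinity ball; the point of the constrained setting is that this projection becomes a domain-specific operation over dependent features.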

Tags: arxiv, attacks, evasion, neural networks
