June 1, 2022, 1:11 a.m. | Zeyan Liu, Fengjun Li, Jingqiang Lin, Zhu Li, Bo Luo

cs.LG updates on arXiv.org

With the growing popularity of artificial intelligence and machine learning,
a wide spectrum of attacks against deep learning models has been proposed in
the literature. Both evasion attacks and poisoning attacks attempt to use
adversarially altered samples to fool the victim model into misclassifying
them. While such attacks are claimed or expected to be stealthy, i.e.,
imperceptible to human eyes, such claims are rarely evaluated.
In this paper, we present the first large-scale study …
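
For context, here is a minimal sketch of the kind of evasion attack the
abstract describes, using FGSM as one representative method (not necessarily
the attacks studied in the paper) against a PyTorch classifier. The epsilon
budget is the standard stand-in for the "imperceptible to human eyes" claim
that the paper questions; `model`, `x`, and `y` are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_evasion(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 8 / 255) -> torch.Tensor:
    """One-step FGSM evasion attack: perturb x within an L-infinity
    ball of radius epsilon so the victim model misclassifies it.
    The epsilon bound is the usual proxy for 'stealthiness'."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp back
    # to the valid pixel range so the result is still a real image.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Illustrative usage with any classifier over [0, 1] image tensors:
# x_adv = fgsm_evasion(model, x, y)
# bounded = (x_adv - x).abs().max() <= 8 / 255  # the stealthiness claim under study
```

Poisoning attacks apply the same kind of bounded perturbation, but to
training samples rather than test inputs; in both cases the small-norm
constraint is what is assumed, often without human evaluation, to make the
altered samples imperceptible.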

