Aug. 16, 2022, 1:11 a.m. | Zeyan Liu, Fengjun Li, Jingqiang Lin, Zhu Li, Bo Luo

cs.LG updates on arXiv.org arxiv.org

With the growing popularity of artificial intelligence and machine learning,
a wide spectrum of attacks against deep learning models has been proposed in
the literature. Both evasion attacks and poisoning attacks attempt to use
adversarially altered samples to fool the victim model into misclassifying
them. While such attacks are claimed or expected to be stealthy, i.e.,
imperceptible to human eyes, these claims are rarely evaluated.
In this paper, we present the first large-scale study …
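For context, evasion attacks of this kind typically bound the perturbation by a small L-infinity budget (epsilon) and treat that budget as a proxy for imperceptibility. Below is a minimal, hedged sketch (not code from the paper) of an FGSM-style perturbation on a toy logistic model; all names and values here are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch only: an FGSM-style evasion perturbation on a toy
# logistic classifier. The L-infinity budget `epsilon` is the common proxy
# for the "stealthiness" claim the abstract refers to.

def fgsm_perturb(x, w, b, y, epsilon):
    """Take one signed-gradient step on x to increase the model's loss.

    x: input vector; w, b: logistic-model weights; y: true label in {0, 1};
    epsilon: L-infinity perturbation budget.
    """
    logit = w @ x + b
    p = 1.0 / (1.0 + np.exp(-logit))      # predicted probability of class 1
    grad = (p - y) * w                    # d(cross-entropy)/dx
    return x + epsilon * np.sign(grad)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
w = rng.normal(size=8)
x_adv = fgsm_perturb(x, w, 0.0, y=1, epsilon=0.03)

# The perturbation stays within the claimed budget:
assert np.max(np.abs(x_adv - x)) <= 0.03 + 1e-12
```

Whether a perturbation within such a numeric budget is actually imperceptible to human eyes is exactly the claim the paper sets out to evaluate.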

