Aug. 30, 2022, 1:14 a.m. | Zhaoxi Zhang, Leo Yu Zhang, Xufei Zheng, Jinyu Tian, Jiantao Zhou

cs.CV updates on arXiv.org

Deep learning models are known to be vulnerable to adversarial examples that
are elaborately designed for malicious purposes and are imperceptible to the
human perceptual system. An autoencoder trained solely on benign examples has
been widely used for (self-supervised) adversarial detection, based on the
assumption that adversarial examples yield larger reconstruction errors.
However, because adversarial examples are absent from its training and the
autoencoder generalizes too strongly, this assumption does not always hold
true in practice. To alleviate this …
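The detection principle the abstract describes can be sketched with a minimal stand-in: a linear autoencoder (equivalent to PCA reconstruction) fit only on benign data, with a threshold on reconstruction error chosen from benign examples. All names, data, and the PCA substitution are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    # Center the benign data and keep the top-k principal directions;
    # a linear autoencoder's optimal weights span this PCA subspace.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(x, mu, V):
    z = (x - mu) @ V.T        # encode into the k-dim latent space
    x_hat = mu + z @ V        # decode back to input space
    return np.linalg.norm(x - x_hat)

rng = np.random.default_rng(0)
# Synthetic benign data living near a 2-D subspace of a 10-D input space.
Z = rng.normal(size=(500, 2))
W = rng.normal(size=(2, 10))
X_benign = Z @ W + 0.01 * rng.normal(size=(500, 10))

mu, V = fit_linear_autoencoder(X_benign, k=2)

# Threshold tau set from benign reconstruction errors (99th percentile).
errs = np.array([reconstruction_error(x, mu, V) for x in X_benign])
tau = np.quantile(errs, 0.99)

x_clean = Z[0] @ W                           # input on the benign manifold
x_adv = x_clean + 0.5 * rng.normal(size=10)  # off-manifold perturbation

print(reconstruction_error(x_clean, mu, V) <= tau)  # benign-like: passes
print(reconstruction_error(x_adv, mu, V) > tau)     # flagged as adversarial
```

The abstract's caveat shows up here too: if the autoencoder generalizes well enough to also reconstruct perturbed inputs, their errors stay below the threshold and detection fails.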

