Jan. 21, 2022, 2:10 a.m. | Zhen Yu, Xiaosen Wang, Wanxiang Che, Kun He

cs.LG updates on arXiv.org

Deep neural networks are vulnerable to adversarial examples in Natural
Language Processing. However, existing textual adversarial attacks usually
rely on gradients or prediction confidence to generate adversarial examples,
making them hard to deploy against real-world applications. To this end, we
consider a rarely investigated but more rigorous setting, namely the hard-label
attack, in which the attacker can only access the prediction label. In
particular, we find that the changes in the prediction label caused by word
substitutions on the adversarial example …
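The constraint described in the abstract is easier to see with a small sketch. The toy code below illustrates the hard-label setting only: the attacker can call nothing but a label-returning oracle (here a hypothetical `query_label` stand-in, not the paper's model) and must guide word substitutions purely by observing whether the predicted label flips. The greedy search and the tiny synonym table are illustrative assumptions, not the authors' actual attack algorithm.

```python
def query_label(words):
    """Stand-in black-box classifier: the attacker sees only this label."""
    positive = {"good", "great", "fine"}
    negative = {"bad", "awful", "poor"}
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return 1 if score > 0 else 0

def get_synonyms(word):
    """Toy synonym table; real attacks use a thesaurus or word embeddings."""
    table = {"good": ["fine", "decent", "poor"], "movie": ["film", "picture"]}
    return table.get(word, [])

def hard_label_attack(words, max_queries=100):
    """Greedy word substitution guided only by observed label flips."""
    original_label = query_label(words)
    adversarial = list(words)
    queries = 0
    for i, _ in enumerate(words):
        for candidate in get_synonyms(words[i]):
            if queries >= max_queries:
                return None
            trial = list(adversarial)
            trial[i] = candidate
            queries += 1
            if query_label(trial) != original_label:
                return trial      # label flipped: adversarial example found
            adversarial = trial   # keep the substitution and keep searching
    return None

print(hard_label_attack("a good movie overall".split()))
```

Note that no gradients or confidence scores appear anywhere in the loop; every decision is driven by the binary signal of whether the returned label changed, which is exactly what makes this setting harder than score-based or gradient-based attacks.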

