A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation
Feb. 23, 2024, 5:49 a.m. | Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard
cs.CL updates on arXiv.org arxiv.org
Abstract: Neural Machine Translation (NMT) models have been shown to be vulnerable to adversarial attacks, wherein carefully crafted perturbations of the input can mislead the target model. In this paper, we introduce ACT, a novel adversarial attack framework against NMT systems guided by a classifier. In our attack, the adversary aims to craft meaning-preserving adversarial examples whose translations in the target language by the NMT model belong to a different class than the original translations. Unlike …
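The guided attack the abstract describes can be sketched as a loop: translate the input, get the classifier's label on the translation, then greedily try meaning-preserving word substitutions until the label on the new translation flips. The sketch below is purely illustrative, assuming toy stand-ins for the NMT model, the guiding classifier, and the substitution candidates; it is not the paper's actual ACT implementation.

```python
def translate(sentence):
    # Hypothetical stand-in for an NMT model: a toy word-for-word lexicon.
    toy_lexicon = {"a": "un", "good": "bon", "movie": "film"}
    return " ".join(toy_lexicon.get(w, w) for w in sentence.split())

def classify(translation):
    # Hypothetical stand-in for the classifier that guides the attack,
    # operating on target-language text.
    return "positive" if "bon" in translation else "neutral"

def synonyms(word):
    # Hypothetical meaning-preserving substitution candidates; a real
    # attack would draw these from embedding neighbors or a thesaurus.
    return {"good": ["great", "nice"]}.get(word, [])

def classification_guided_attack(sentence):
    """Greedily substitute words until the translation's class flips."""
    original_class = classify(translate(sentence))
    words = sentence.split()
    for i, w in enumerate(words):
        for cand in synonyms(w):
            perturbed = " ".join(words[:i] + [cand] + words[i + 1:])
            if classify(translate(perturbed)) != original_class:
                return perturbed  # adversarial example found
    return None  # attack failed within the candidate set

print(classification_guided_attack("a good movie"))  # prints "a great movie"
```

Here "a good movie" translates to "un bon film" (classified "positive"), while the perturbed "a great movie" yields a translation the toy classifier labels differently, so the loop returns it as the adversarial example.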