Feb. 23, 2024, 5:49 a.m. | Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard

cs.CL updates on arXiv.org

arXiv:2308.15246v2 Announce Type: replace
Abstract: Neural Machine Translation (NMT) models have been shown to be vulnerable to adversarial attacks, wherein carefully crafted perturbations of the input can mislead the target model. In this paper, we introduce ACT, a novel adversarial attack framework against NMT systems guided by a classifier. In our attack, the adversary aims to craft meaning-preserving adversarial examples whose translations in the target language by the NMT model belong to a different class than the original translations. Unlike …

