March 1, 2024, 5:42 a.m. | Fangyuan Zhang, Huichi Zhou, Shuangjiao Li, Hongtao Wang

cs.LG updates on arXiv.org

arXiv:2402.18792v1 Announce Type: new
Abstract: Deep neural networks have been proven to be vulnerable to adversarial examples, and various methods have been proposed to defend against adversarial attacks in natural language processing tasks. However, previous defense methods struggle to maintain an effective defense while preserving performance on the original task. In this paper, we propose a malicious perturbation based adversarial training method (MPAT) for building deep neural networks that are robust to textual adversarial attacks. Specifically, we construct a multi-level malicious …
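
The truncated abstract identifies adversarial training as the core technique. As a rough illustration only, the sketch below shows one common generic realization: an FGSM-style embedding-space perturbation step for a toy PyTorch text classifier. The model, hyperparameters, and perturbation scheme are assumptions made for illustration and do not reproduce MPAT's multi-level malicious perturbations, which the truncated abstract does not fully describe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy classifier: embedding -> mean pool -> linear head.
class TextClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids=None, embeds=None):
        # Accept precomputed embeddings so they can be perturbed directly.
        if embeds is None:
            embeds = self.embed(token_ids)
        pooled = embeds.mean(dim=1)
        return self.fc(pooled)

def adversarial_training_step(model, optimizer, token_ids, labels, epsilon=0.01):
    """One training step mixing the clean loss with an FGSM-style
    embedding perturbation -- a generic stand-in for the adversarial
    examples used during textual adversarial training."""
    model.train()
    embeds = model.embed(token_ids).detach().requires_grad_(True)

    # Clean forward/backward to obtain gradients w.r.t. the embeddings.
    clean_loss = F.cross_entropy(model(embeds=embeds), labels)
    clean_loss.backward()

    # FGSM: step in the direction that increases the loss.
    with torch.no_grad():
        adv_embeds = embeds + epsilon * embeds.grad.sign()

    # Train jointly on the clean and the perturbed batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(token_ids=token_ids), labels) \
         + F.cross_entropy(model(embeds=adv_embeds), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data.
model = TextClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
token_ids = torch.randint(0, 10000, (8, 20))  # batch of 8, sequence length 20
labels = torch.randint(0, 2, (8,))
print(adversarial_training_step(model, optimizer, token_ids, labels))
```

Perturbing in embedding space rather than swapping discrete tokens keeps the training step differentiable; token-level attacks (e.g. synonym substitution) would require a separate, non-differentiable search.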

