March 25, 2024, 4:42 a.m. | Mingze Ni, Zhensu Sun, Wei Liu

cs.LG updates on arXiv.org

arXiv:2403.14731v1 Announce Type: cross
Abstract: Recent studies on adversarial examples expose vulnerabilities of natural language processing (NLP) models. Existing techniques for generating adversarial examples are typically driven by deterministic hierarchical rules that are agnostic to the optimal adversarial examples, a strategy that often yields adversarial samples with a suboptimal balance between the magnitude of the changes and attack success. To this end, in this research we propose two algorithms, Reversible Jump Attack (RJA) and Metropolis-Hastings Modification Reduction (MMR), to generate highly …
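
The announcement truncates the abstract, but the name of the second algorithm suggests a Metropolis-Hastings accept/reject loop over candidate edits to an already-adversarial text, trading attack success against the number of modifications. Below is a minimal, hypothetical Python sketch of that general idea; every name in it (`victim_model`, `attack_score`, `propose_revert`, the penalty weight `lam`, and the target density itself) is an illustrative assumption, not the paper's method or API.

```python
# Hypothetical sketch: a Metropolis-Hastings loop that tries to revert
# unnecessary word substitutions in an adversarial text while keeping the
# victim model fooled. Not the paper's implementation.
import math
import random


def attack_score(candidate, victim_model, true_label):
    """Assumed helper: higher when the victim model misclassifies."""
    probs = victim_model.predict_proba(candidate)  # assumed model interface
    return 1.0 - probs[true_label]


def num_edits(candidate, original):
    """Count word positions where the candidate differs from the original."""
    return sum(a != b for a, b in zip(candidate, original))


def target_log_density(candidate, original, victim_model, true_label, lam=0.5):
    """Assumed target: reward misclassification, penalize many edits."""
    return (attack_score(candidate, victim_model, true_label)
            - lam * num_edits(candidate, original))


def propose_revert(candidate, original):
    """Toy proposal (treated as symmetric for simplicity): revert one
    modified word back to the original text."""
    changed = [i for i, (a, b) in enumerate(zip(candidate, original)) if a != b]
    if not changed:
        return list(candidate)
    i = random.choice(changed)
    proposal = list(candidate)
    proposal[i] = original[i]
    return proposal


def mh_modification_reduction(adv_words, orig_words, victim_model,
                              true_label, steps=200):
    """Run the accept/reject loop and return the reduced adversarial text."""
    current = list(adv_words)
    for _ in range(steps):
        proposal = propose_revert(current, orig_words)
        log_alpha = (target_log_density(proposal, orig_words, victim_model, true_label)
                     - target_log_density(current, orig_words, victim_model, true_label))
        # Standard Metropolis-Hastings acceptance for a symmetric proposal.
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            current = proposal
    return current
```

The design point the sketch illustrates is the stochastic search the abstract contrasts with deterministic rules: rather than greedily committing to edits, the sampler can accept slightly worse states and escape suboptimal attack/modification trade-offs.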
