March 25, 2024, 4:42 a.m. | Mingze Ni, Zhensu Sun, Wei Liu

cs.LG updates on arXiv.org

arXiv:2403.14731v1 Announce Type: cross
Abstract: Recent studies on adversarial examples expose vulnerabilities of natural language processing (NLP) models. Existing techniques for generating adversarial examples are typically driven by deterministic hierarchical rules that are agnostic to the optimal adversarial examples, a strategy that often results in adversarial samples with a suboptimal balance between magnitudes of changes and attack successes. To this end, in this research we propose two algorithms, Reversible Jump Attack (RJA) and Metropolis-Hasting Modification Reduction (MMR), to generate highly …
