Assessing Adversarial Robustness of Large Language Models: An Empirical Study
May 7, 2024, 4:43 a.m. | Zeyu Yang, Zhao Meng, Xiaochen Zheng, Roger Wattenhofer
cs.LG updates on arXiv.org arxiv.org
Abstract: Large Language Models (LLMs) have revolutionized natural language processing, but their robustness against adversarial attacks remains a critical concern. We present a novel white-box style attack approach that exposes vulnerabilities in leading open-source LLMs, including Llama, OPT, and T5. We assess the impact of model size, structure, and fine-tuning strategies on their resistance to adversarial perturbations. Our comprehensive evaluation across five diverse text classification tasks establishes a new benchmark for LLM robustness. The findings of …
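The abstract does not detail the attack itself, but a white-box attack of this kind typically uses the model's gradients to choose adversarial token substitutions. Below is a minimal, self-contained sketch of that general idea (gradient-guided greedy token flipping, in the spirit of HotFlip); the toy classifier, vocabulary, and token ids are illustrative assumptions, not the paper's actual method or setup.

```python
# Minimal sketch of a gradient-guided (white-box) token-substitution attack.
# The toy model and inputs below are hypothetical, for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab_size, embed_dim, num_classes = 50, 8, 2

# Toy text classifier: mean-pooled token embeddings + linear head.
embedding = nn.Embedding(vocab_size, embed_dim)
head = nn.Linear(embed_dim, num_classes)

def logits_from_embeds(embeds):
    return head(embeds.mean(dim=0))

tokens = torch.tensor([3, 17, 42, 5])   # hypothetical input token ids
label = torch.tensor(1)                  # gold label to attack away from

# Forward/backward pass with gradients taken at the embedding vectors.
embeds = embedding(tokens).detach().requires_grad_(True)
loss = nn.functional.cross_entropy(
    logits_from_embeds(embeds).unsqueeze(0), label.unsqueeze(0))
loss.backward()

# First-order estimate of the loss change from swapping position i to
# token v: (e_v - e_{tokens[i]}) . grad_i  (larger = bigger loss increase).
grad = embeds.grad                       # (seq_len, embed_dim)
all_embeds = embedding.weight.detach()   # (vocab_size, embed_dim)
scores = grad @ all_embeds.T - (grad * embeds.detach()).sum(-1, keepdim=True)

# Greedy single-token flip: best (position, replacement) pair overall.
pos, new_tok = divmod(scores.argmax().item(), vocab_size)
adv_tokens = tokens.clone()
adv_tokens[pos] = new_tok
print("flip position", pos, "to token", new_tok)
```

In practice such attacks iterate this greedy step under a perturbation budget and add constraints (e.g., semantic similarity) so the adversarial input stays plausible.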