April 2, 2024, 7:45 p.m. | Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang

cs.LG updates on arXiv.org arxiv.org

arXiv:2310.07676v2 Announce Type: replace-cross
Abstract: Large language models (LLMs) have demonstrated superior performance compared to previous methods on various tasks, and often serve as the foundation models for many researches and services. However, the untrustworthy third-party LLMs may covertly introduce vulnerabilities for downstream tasks. In this paper, we explore the vulnerability of LLMs through the lens of backdoor attacks. Different from existing backdoor attacks against LLMs, ours scatters multiple trigger keys in different prompt components. Such a Composite Backdoor Attack …

abstract arxiv attacks backdoor cs.cl cs.cr cs.lg explore foundation however language language models large language large language models llms paper performance serve services tasks type vulnerabilities vulnerability

Software Engineer for AI Training Data (School Specific)

@ G2i Inc | Remote

Software Engineer for AI Training Data (Python)

@ G2i Inc | Remote

Software Engineer for AI Training Data (Tier 2)

@ G2i Inc | Remote

Data Engineer

@ Lemon.io | Remote: Europe, LATAM, Canada, UK, Asia, Oceania

Artificial Intelligence – Bioinformatic Expert

@ University of Texas Medical Branch | Galveston, TX

Lead Developer (AI)

@ Cere Network | San Francisco, US