April 23, 2024, 4:49 a.m. | Narek Maloyan, Ekansh Verma, Bulat Nutfullin, Bislan Ashinov

cs.CL updates on arXiv.org

arXiv:2404.13660v1 Announce Type: new
Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities in various domains, but their vulnerability to trojan or backdoor attacks poses significant security risks. This paper explores the challenges and insights gained from the Trojan Detection Competition 2023 (TDC2023), which focused on identifying and evaluating trojan attacks on LLMs. We investigate the difficulty of distinguishing between intended and unintended triggers, as well as the feasibility of reverse engineering trojans in real-world scenarios. Our comparative analysis of …

