June 21, 2024, 4:42 a.m. | Daniil Khomsky, Narek Maloyan, Bulat Nutfullin

cs.CL updates on arXiv.org

arXiv:2406.14048v1 Announce Type: new
Abstract: Large language models play a crucial role in modern natural language processing technologies. However, their extensive use also introduces potential security risks, such as the possibility of black-box attacks. These attacks can embed hidden malicious features into the model, leading to adverse consequences during its deployment.
This paper investigates methods for black-box attacks on large language models with a three-tiered defense mechanism. It analyzes the challenges and significance of these attacks, highlighting their potential implications …
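The abstract mentions a three-tiered defense mechanism but does not describe its tiers. As an illustration only, a layered input screen against prompt-injection payloads might be sketched as below; the tier names, patterns, and thresholds are hypothetical and are not taken from the paper.

```python
import re

# Hypothetical three-tier prompt screen. The paper's actual defense is not
# specified in the abstract; these tiers are illustrative assumptions.

SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal .*system prompt",
    r"you are now",
]

def tier1_normalize(prompt: str) -> str:
    """Tier 1: collapse whitespace and lowercase so obfuscated payloads match."""
    return re.sub(r"\s+", " ", prompt).strip().lower()

def tier2_pattern_filter(prompt: str) -> bool:
    """Tier 2: flag known injection phrasings via regex."""
    return any(re.search(p, prompt) for p in SUSPICIOUS_PATTERNS)

def tier3_length_guard(prompt: str, max_len: int = 2000) -> bool:
    """Tier 3: reject oversized inputs that could bury a payload."""
    return len(prompt) > max_len

def screen_prompt(prompt: str) -> str:
    """Run all three tiers and return a verdict."""
    norm = tier1_normalize(prompt)
    if tier2_pattern_filter(norm) or tier3_length_guard(norm):
        return "blocked"
    return "allowed"
```

A real defense would pair such static filters with model-side checks, but this shows the general shape of a tiered pipeline: normalize, then pattern-match, then bound resource use.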

