Feb. 26, 2024, 5:48 a.m. | Xiaotian Zou, Yongkang Chen, Ke Li

cs.CL updates on arXiv.org

arXiv:2402.14857v1 Announce Type: new
Abstract: The rapid evolution of Large Language Models (LLMs) has rendered them indispensable in modern society. While security measures are typically in place to align LLMs with human values prior to release, recent studies have unveiled a concerning phenomenon named "jailbreak." This term refers to the unexpected and potentially harmful responses generated by LLMs when prompted with malicious questions. Existing research focuses on generating jailbreak prompts but our study aim to answer a different question: Is …
