Feb. 26, 2024, 5:48 a.m. | Somnath Banerjee, Sayan Layek, Rima Hazra, Animesh Mukherjee

cs.CL updates on arXiv.org

arXiv:2402.15302v1 Announce Type: new
Abstract: In this study, we tackle a growing concern around the safety and ethical use of large language models (LLMs). Despite their potential, these models can be tricked into producing harmful or unethical content through various sophisticated methods, including 'jailbreaking' techniques and targeted manipulation. Our work zeroes in on a specific issue: to what extent LLMs can be led astray by asking them to generate responses that are instruction-centric such as a pseudocode, a program or …

