Oct. 6, 2023, 5:32 p.m. | /u/Successful-Western27

Machine Learning www.reddit.com

Researchers from Brown University presented a new study supporting that translating unsafe prompts into \`low-resource languages\` allows them to easily bypass safety measures in LLMs.

By converting English inputs like "how to steal without getting caught" into Zulu and feeding to GPT-4, harmful responses slipped through 80% of the time. English prompts were blocked over 99% of the time, for comparison.

The study benchmarked attacks across 12 diverse languages and categories:

* High-resource: English, Chinese, Arabic, Hindi
* Mid-resource: Ukrainian, …

english gpt gpt-4 jailbreak languages llms low machinelearning paper prompts researchers responses safety safety measures study them university

Senior Machine Learning Engineer

@ GPTZero | Toronto, Canada

ML/AI Engineer / NLP Expert - Custom LLM Development (x/f/m)

@ HelloBetter | Remote

Doctoral Researcher (m/f/div) in Automated Processing of Bioimages

@ Leibniz Institute for Natural Product Research and Infection Biology (Leibniz-HKI) | Jena

Seeking Developers and Engineers for AI T-Shirt Generator Project

@ Chevon Hicks | Remote

Principal Data Architect - Azure & Big Data

@ MGM Resorts International | Home Office - US, NV

GN SONG MT Market Research Data Analyst 11

@ Accenture | Bengaluru, BDC7A