June 28, 2024, 6:38 a.m. | Thomas Claburn

The Register - Software: AI + ML (www.theregister.com)

Simple jailbreak prompt can bypass safety guardrails on major models

Microsoft on Thursday published details about Skeleton Key – a technique that bypasses the guardrails AI model makers use to prevent their generative chatbots from creating harmful content. …
