March 26, 2024, 1 p.m. | Anthony Alford


Researchers from the University of Washington, the Pennsylvania State University, and the Allen Institute for AI have open-sourced SafeDecoding, a technique for protecting large language models (LLMs) against jailbreak attacks. SafeDecoding outperforms baseline jailbreak defenses without incurring significant computational overhead.
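As a rough illustration of the idea (not the researchers' reference implementation), the sketch below contrasts the next-token distributions of a base LLM and a safety fine-tuned "expert" copy during the first few decoding steps, shifting probability mass toward tokens the expert prefers, such as a refusal to a jailbreak prompt. The model names, the blending weight alpha, and the number of guided steps are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of safety-guided decoding: blend the next-token distributions
# of a base model and a safety fine-tuned "expert" so tokens the expert favors
# are amplified during the first few steps. Placeholders, not the official code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Llama-2-7b-chat-hf"       # placeholder base model
EXPERT = "path/to/safety-finetuned-expert"   # placeholder safety expert model

tokenizer = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(BASE)
expert = AutoModelForCausalLM.from_pretrained(EXPERT)

@torch.no_grad()
def safe_decode(prompt: str, alpha: float = 3.0, guided_steps: int = 2,
                max_new: int = 64) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for step in range(max_new):
        p_base = torch.softmax(base(ids).logits[:, -1, :], dim=-1)
        if step < guided_steps:
            # For the first few tokens, shift probability toward tokens the
            # safety expert prefers over the base model.
            p_expert = torch.softmax(expert(ids).logits[:, -1, :], dim=-1)
            probs = torch.clamp(p_base + alpha * (p_expert - p_base), min=0.0)
            probs = probs / probs.sum(dim=-1, keepdim=True)
        else:
            probs = p_base  # fall back to ordinary decoding afterwards
        next_id = torch.argmax(probs, dim=-1, keepdim=True)  # greedy for simplicity
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```

Because the expert model is only consulted for the first few generated tokens, the extra cost over ordinary decoding stays small, which is consistent with the low-overhead claim above.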

