March 26, 2024, 1 p.m. | Anthony Alford

InfoQ - AI, ML & Data Engineering www.infoq.com

Researchers from the University of Washington, Pennsylvania State University, and the Allen Institute for AI have open-sourced SafeDecoding, a technique for protecting large language models (LLMs) against jailbreak attacks. SafeDecoding outperforms baseline jailbreak defenses without incurring significant computational overhead.
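The announcement above gives no implementation detail, but decoding-time defenses of this general kind intervene on the model's next-token distribution, typically shifting it toward the output of a safety-tuned "expert" model. The sketch below is a hypothetical illustration of that broad idea, not SafeDecoding's actual algorithm; the function name, the linear blending formula, and the toy probabilities are all assumptions for illustration.

```python
def blend_distributions(base_probs, expert_probs, alpha=0.5):
    """Hypothetical decoding-time defense (illustrative only):
    shift the base model's next-token distribution toward a
    safety-tuned expert model's distribution.

    new_p(t) = base_p(t) + alpha * (expert_p(t) - base_p(t))
    """
    tokens = set(base_probs) | set(expert_probs)
    blended = {
        t: base_probs.get(t, 0.0)
           + alpha * (expert_probs.get(t, 0.0) - base_probs.get(t, 0.0))
        for t in tokens
    }
    # Renormalize so the result is a valid probability distribution.
    total = sum(blended.values())
    return {t: p / total for t, p in blended.items()}


# Toy example: the base model leans toward complying with a harmful
# prompt ("Sure"), while the safety expert strongly prefers refusal
# ("Sorry"); blending pushes the sampled token toward the refusal.
base = {"Sure": 0.7, "Sorry": 0.3}
expert = {"Sure": 0.1, "Sorry": 0.9}
print(blend_distributions(base, expert, alpha=0.8))
```

In a real system the intervention would operate on model logits inside the decoding loop rather than on a small dictionary, and the blending weight could be chosen adaptively per step.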


