March 11, 2024, 4:41 a.m. | Stephen Casper, Lennart Schulze, Oam Patel, Dylan Hadfield-Menell

cs.LG updates on arXiv.org

arXiv:2403.05030v1 Announce Type: cross
Abstract: AI systems sometimes exhibit harmful unintended behaviors post-deployment, often despite extensive diagnostics and debugging by developers. Minimizing risks from models is challenging because the attack surface is so large: it is not tractable to exhaustively search for inputs that may cause a model to fail. Red-teaming and adversarial training (AT) are commonly used to make AI systems more robust. However, they have not been sufficient to avoid many real-world failure modes that differ …
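
For context on the adversarial training the abstract refers to, below is a minimal PyTorch sketch of the standard AT loop: an inner attack (here PGD) searches for a bounded input perturbation that maximizes the loss, and the outer step trains the model on the worst-case input found. This illustrates generic AT, not the paper's proposed method; the function names, model, and hyperparameters are illustrative assumptions, not taken from the paper.

# Minimal sketch of standard adversarial training (AT) with a PGD inner loop.
# Illustrative only; `model`, `x`, `y`, and all hyperparameters are hypothetical.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: find an L-inf-bounded perturbation that raises the loss."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Take a signed ascent step, then project back into the eps-ball.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: update the model on the attacked inputs."""
    delta = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()

Because this inner search operates only on inputs, it can miss failure modes unlike the attacks seen during training, which is the limitation the abstract highlights.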

