March 11, 2024, 4:41 a.m. | Stephen Casper, Lennart Schulze, Oam Patel, Dylan Hadfield-Menell

cs.LG updates on arXiv.org

arXiv:2403.05030v1 Announce Type: cross
Abstract: AI systems sometimes exhibit harmful unintended behaviors post-deployment, often despite extensive diagnostics and debugging by developers. Minimizing risks from models is challenging because the attack surface is so large: it is not tractable to exhaustively search for inputs that may cause a model to fail. Red-teaming and adversarial training (AT) are commonly used to make AI systems more robust. However, they have not been sufficient to avoid many real-world failure modes that differ …
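For context on the baseline the abstract refers to, below is a minimal sketch of standard adversarial training with PGD attacks in PyTorch. It is illustrative only, not the paper's specific method; the names model, loader, epsilon, alpha, and steps are placeholders assumed for the example.

# Minimal sketch of standard adversarial training (AT), assuming a
# PyTorch classifier. Not the paper's method; names are illustrative.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    """PGD: search for a worst-case perturbation within an
    L-infinity ball of radius epsilon around the clean input x."""
    delta = torch.zeros_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        # Ascend the loss, then project back into the epsilon-ball.
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-epsilon, epsilon)
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_epoch(model, loader, optimizer):
    """One epoch of AT: train on adversarially perturbed inputs
    instead of clean ones."""
    model.train()
    for x, y in loader:
        delta = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        optimizer.step()

The limitation the abstract highlights applies directly to this loop: the model only becomes robust to perturbations of the kind the inner attack generates, so failure modes outside that attack distribution can survive training.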
