March 28, 2024, 4:42 a.m. | Roman Belaire, Pradeep Varakantham, Thanh Nguyen, David Lo

cs.LG updates on arXiv.org

arXiv:2302.06912v4 Announce Type: replace
Abstract: Deep Reinforcement Learning (DRL) policies have been shown to be vulnerable to small adversarial noise in their observations. Such adversarial noise can have disastrous consequences in safety-critical environments. For instance, if a self-driving car receives adversarially perturbed sensory observations about nearby signs (e.g., a stop sign physically altered to be perceived as a speed-limit sign) or objects (e.g., cars altered to be recognized as trees), the consequences can be fatal. Existing approaches for making RL algorithms robust to …
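To make the threat model concrete, the sketch below shows a standard FGSM-style observation attack against a toy PyTorch policy network. This illustrates the general vulnerability the abstract describes, not the paper's own method; the policy architecture, the fgsm_observation_attack helper, and the epsilon budget are all hypothetical choices for the example.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical policy network: 4-d observation -> logits over 2 actions.
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))

def fgsm_observation_attack(policy, obs, epsilon=0.5):
    # Perturb obs within an L-infinity ball of radius epsilon so as to
    # push the policy away from its clean greedy action (untargeted FGSM).
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    action = logits.argmax(dim=-1)  # the action chosen on the clean input
    loss = nn.functional.cross_entropy(logits, action)
    loss.backward()  # gradient of the loss w.r.t. the observation itself
    return (obs + epsilon * obs.grad.sign()).detach()

obs = torch.randn(1, 4)  # a stand-in "sensory observation"
adv_obs = fgsm_observation_attack(policy, obs)
print("clean action:", policy(obs).argmax(-1).item(),
      "adversarial action:", policy(adv_obs).argmax(-1).item())

Even though adv_obs differs from obs by at most epsilon in each dimension, the greedy action will often flip, which is exactly the small-noise vulnerability the abstract is concerned with.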

