March 10, 2022, 2:12 a.m. | Prasanth Buddareddygari, Travis Zhang, Yezhou Yang, Yi Ren

cs.LG updates on arXiv.org

Recent studies have demonstrated the vulnerability of control policies learned
through deep reinforcement learning to adversarial attacks, raising concerns
about applying such models to risk-sensitive tasks such as autonomous driving.
Threat models in these demonstrations are limited to (1) targeted attacks
through real-time manipulation of the agent's observations, and (2) untargeted
attacks through manipulation of the physical environment. The former assumes
full access to the agent's states/observations at all times, while the latter
has no control over the attack's outcome. …

arxiv, autonomous, autonomous driving, deep rl, rl
