March 10, 2022, 2:12 a.m. | Prasanth Buddareddygari, Travis Zhang, Yezhou Yang, Yi Ren

cs.LG updates on arXiv.org

Recent studies have demonstrated that control policies learned through deep
reinforcement learning are vulnerable to adversarial attacks, raising concerns
about applying such models to risk-sensitive tasks such as autonomous driving.
Threat models in these demonstrations are limited to (1) targeted attacks
through real-time manipulation of the agent's observation, and (2) untargeted
attacks through manipulation of the physical environment. The former assumes
full access to the agent's states/observations at all times, while the latter
has no control over attack outcomes. …
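To make threat model (1) concrete, below is a minimal sketch of a targeted observation-space attack, not the authors' method: a single white-box FGSM-style step in PyTorch that perturbs the agent's current observation so the policy is pushed toward an attacker-chosen action. The policy architecture, observation shape, and epsilon are assumptions made for the example.

```python
# Illustrative sketch of threat model (1): a targeted, real-time perturbation
# of the agent's observation. Generic FGSM step, not the paper's approach.
import torch
import torch.nn as nn
import torch.nn.functional as F


def targeted_observation_attack(policy: nn.Module,
                                obs: torch.Tensor,
                                target_action: int,
                                epsilon: float = 0.01) -> torch.Tensor:
    """Return a perturbed observation that steers `policy` toward `target_action`.

    Assumes white-box access: `policy(obs)` returns action logits and the
    attacker can read and modify the current observation.
    """
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    # Cross-entropy against the attacker's target action; lower loss means
    # the policy is more likely to pick that action.
    loss = F.cross_entropy(logits, torch.tensor([target_action]))
    loss.backward()
    # One FGSM step: descend the target loss by moving against its gradient.
    perturbed = obs - epsilon * obs.grad.sign()
    return perturbed.detach().clamp(0.0, 1.0)  # keep pixel values in a valid range


if __name__ == "__main__":
    # Toy policy over 84x84 grayscale observations with 4 discrete actions (assumed).
    policy = nn.Sequential(nn.Flatten(), nn.Linear(84 * 84, 4))
    obs = torch.rand(1, 1, 84, 84)
    adv_obs = targeted_observation_attack(policy, obs, target_action=2)
    # The attack aims to make the selected action equal to the target.
    print(policy(adv_obs).argmax(dim=1))
```

As the abstract notes, this kind of attack presumes the adversary can read and rewrite the agent's observations at every step, which is a strong assumption in practice; physical-environment attacks relax that assumption but give up control over the attack outcome.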

arxiv, autonomous driving, deep RL
