Web: http://arxiv.org/abs/2004.09677

May 6, 2022, 1:11 a.m. | Finbarr Timbers, Nolan Bard, Edward Lockhart, Marc Lanctot, Martin Schmid, Neil Burch, Julian Schrittwieser, Thomas Hubert, Michael Bowling

cs.LG updates on arXiv.org

Researchers have demonstrated that neural networks are vulnerable to
adversarial examples and subtle environment changes, both of which one can view
as a form of distribution shift. To humans, the resulting errors can look like
blunders, eroding trust in these agents. In prior games research, agent
evaluation has often focused on in-practice game outcomes. While valuable, such
evaluation typically fails to measure robustness to worst-case outcomes. Prior
research in computer poker has examined how to assess such worst-case
performance, both …
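The worst-case evaluation the abstract refers to is usually formalized as exploitability: how much an optimal best-responding opponent could gain against a fixed strategy. The sketch below computes exact exploitability in a tiny zero-sum matrix game (rock-paper-scissors); the paper itself is about *learning* an approximate best response in large games, which this does not show, and the NashConv/2 convention used here is an assumption.

```python
import numpy as np

# Payoff matrix for the row player in rock-paper-scissors (zero-sum).
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

def exploitability(row_strategy, col_strategy, payoff):
    """Distance of a strategy profile from Nash equilibrium (NashConv / 2)."""
    # Row player's value against a best-responding column player:
    # the column player picks the column minimizing the row payoff.
    row_worst = (row_strategy @ payoff).min()
    # Value a best-responding row player extracts from the column strategy.
    col_best = (payoff @ col_strategy).max()
    # Zero exactly when neither player can gain by deviating.
    return (col_best - row_worst) / 2

uniform = np.ones(3) / 3
biased = np.array([0.5, 0.25, 0.25])  # over-plays rock

print(exploitability(uniform, uniform, A))  # uniform is Nash -> 0.0
print(exploitability(biased, biased, A))    # -> 0.25
```

A strategy that over-plays rock is exploitable (here by 0.25 per game in expectation), even though against a typical in-practice opponent pool it might score well; this is the gap between in-practice and worst-case evaluation that the paper targets.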

