On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach. (arXiv:2211.01498v1 [cs.LG])
Nov. 4, 2022, 1:13 a.m. | Dennis Wei, Rahul Nair, Amit Dhurandhar, Kush R. Varshney, Elizabeth M. Daly, Moninder Singh
stat.ML updates on arXiv.org arxiv.org
Interpretable and explainable machine learning has seen a recent surge of
interest. We focus on safety as a key motivation behind the surge and make the
relationship between interpretability and safety more quantitative. Toward
assessing safety, we introduce the concept of maximum deviation via an
optimization problem to find the largest deviation of a supervised learning
model from a reference model regarded as safe. We then show how
interpretability facilitates this safety assessment. For models including
decision trees, generalized linear …
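The core idea above can be illustrated with a toy sketch: "maximum deviation" is the largest gap between a candidate model f and a reference model g regarded as safe, over the input domain. The paper formulates this as an optimization problem and shows how model structure (e.g. decision trees) makes it tractable; the sketch below instead approximates it by random sampling, purely for illustration, and the models f, g and the box domain are hypothetical stand-ins, not the paper's method.

```python
import numpy as np

def max_deviation(f, g, low, high, n_samples=100_000, seed=0):
    """Approximate max_x |f(x) - g(x)| over the box [low, high]^d
    by uniform random sampling (a crude stand-in for the paper's
    structure-exploiting optimization)."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    x = rng.uniform(low, high, size=(n_samples, len(low)))
    dev = np.abs(f(x) - g(x))
    i = int(np.argmax(dev))
    return dev[i], x[i]

# Hypothetical example: the reference g is linear in x1; the candidate f
# adds a quadratic term, so the deviation grows toward the domain boundary.
f = lambda x: x[:, 0] + 0.5 * x[:, 0] ** 2
g = lambda x: x[:, 0]
dev, x_star = max_deviation(f, g, low=[-1.0], high=[1.0])
print(dev)  # close to 0.5, attained near x1 = ±1
```

A small deviation certifies that f behaves much like the trusted reference everywhere on the domain, which is the sense in which the quantity supports a safety assessment.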