Nov. 4, 2022, 1:11 a.m. | Dennis Wei, Rahul Nair, Amit Dhurandhar, Kush R. Varshney, Elizabeth M. Daly, Moninder Singh

cs.LG updates on arXiv.org

Interpretable and explainable machine learning has seen a recent surge of
interest. We focus on safety as a key motivation behind the surge and make the
relationship between interpretability and safety more quantitative. Toward
assessing safety, we introduce the concept of maximum deviation via an
optimization problem to find the largest deviation of a supervised learning
model from a reference model regarded as safe. We then show how
interpretability facilitates this safety assessment. For models including
decision trees, generalized linear …

arxiv, deviation, machine learning, safety
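
As a rough illustration of the maximum-deviation idea, the sketch below estimates the largest gap between a candidate model and a reference model regarded as safe by searching over sampled inputs. This is only an empirical approximation under assumed models and data; the paper poses maximum deviation as an optimization problem, and the models, dataset, and sampling here are illustrative assumptions, not the authors' setup.

```python
# Hypothetical sketch: approximate the maximum deviation of a candidate model
# from a reference model by searching over a sample of inputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Assumed synthetic data and models for illustration only.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
reference = LogisticRegression(max_iter=1000).fit(X, y)      # regarded as "safe"
candidate = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X, y)

def max_deviation(model, ref, inputs):
    """Largest absolute gap in predicted positive-class probability over inputs."""
    gaps = np.abs(model.predict_proba(inputs)[:, 1] - ref.predict_proba(inputs)[:, 1])
    worst = np.argmax(gaps)
    return gaps[worst], inputs[worst]

dev, x_star = max_deviation(candidate, reference, X)
print(f"Estimated maximum deviation: {dev:.3f}")
```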
