Web: http://arxiv.org/abs/2110.06177

May 6, 2022, 1:12 a.m. | Aleksandr Podkopaev, Aaditya Ramdas

cs.LG updates on arXiv.org

When deployed in the real world, machine learning models inevitably encounter
changes in the data distribution, and certain -- but not all -- distribution
shifts could result in significant performance degradation. In practice, it may
make sense to ignore benign shifts, under which the performance of a deployed
model does not degrade substantially, making interventions by a human expert
(or model retraining) unnecessary. While several works have developed tests for
distribution shifts, these typically either use non-sequential methods, or
detect …
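The idea of flagging only harmful shifts can be sketched in code. The following is a minimal illustration, not the paper's actual procedure: it monitors a stream of 0/1 losses from a deployed model and raises an alarm only when an anytime-valid lower confidence bound on the running error rate exceeds the source-domain risk plus a user-chosen tolerance, so benign shifts that leave performance intact trigger no intervention. The function name, the Hoeffding-style bound, and the union-bound correction over time steps are all illustrative assumptions.

```python
import math

def sequential_risk_monitor(errors, source_risk, tolerance, alpha=0.05):
    """Flag a harmful shift when a lower confidence bound on the running
    error rate exceeds source_risk + tolerance.

    errors: iterable of 0/1 losses observed on deployment data, in order.
    Returns the 1-based time of the first alarm, or None if none fires.
    """
    total = 0.0
    for t, e in enumerate(errors, start=1):
        total += e
        mean = total / t
        # Hoeffding radius with a crude union bound over time steps,
        # so the bound stays valid at every t (tighter confidence
        # sequences exist; this is only a sketch).
        radius = math.sqrt(math.log(2 * t * (t + 1) / alpha) / (2 * t))
        if mean - radius > source_risk + tolerance:
            return t  # harmful shift detected: intervene / retrain
    return None  # shift (if any) looks benign so far
```

For example, a stream whose error rate matches the source risk never alarms, while a stream of constant failures alarms within a handful of observations. The tolerance parameter is what encodes "ignore benign shifts": small degradations inside the tolerance band are deliberately not flagged.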

