Web: http://arxiv.org/abs/2201.11676

Jan. 28, 2022, 2:11 a.m. | Carlos Mougan, Dan Saattrup Nielsen

cs.LG updates on arXiv.org

Monitoring machine learning models once they are deployed is challenging. It
is even more challenging to decide when to retrain models in real-world
scenarios where labeled data is beyond reach and monitoring performance metrics
becomes infeasible. In this work, we use non-parametric bootstrapped
uncertainty estimates and SHAP values to provide explainable uncertainty
estimation as a technique that aims to monitor the deterioration of machine
learning models in deployment environments, as well as determine the source of
model deterioration when target labels are not available.
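A minimal sketch of the general idea described in the abstract, assuming scikit-learn and the shap package: non-parametric bootstrap resampling yields an ensemble of models, the spread of their predictions serves as an uncertainty estimate, and SHAP values on an auxiliary model fitted to that uncertainty attribute it to input features. The dataset, model choices, and ensemble size below are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch: bootstrapped uncertainty + SHAP attribution.
# Assumes scikit-learn and shap are installed; all names are illustrative.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.utils import resample

# Toy training data and an unlabeled "deployment" batch.
X_train, y_train = make_regression(n_samples=500, n_features=5, noise=0.5, random_state=0)
X_deploy, _ = make_regression(n_samples=200, n_features=5, noise=0.5, random_state=1)

# 1) Non-parametric bootstrap: refit the model on resampled training data.
boot_preds = []
for seed in range(20):
    X_b, y_b = resample(X_train, y_train, random_state=seed)
    model = GradientBoostingRegressor(random_state=seed).fit(X_b, y_b)
    boot_preds.append(model.predict(X_deploy))

# 2) Uncertainty estimate: per-row spread of the bootstrap predictions.
uncertainty = np.std(np.vstack(boot_preds), axis=0)

# 3) Explainability: fit an auxiliary model that predicts the uncertainty
#    from the input features, then attribute it with SHAP.
unc_model = GradientBoostingRegressor(random_state=0).fit(X_deploy, uncertainty)
explainer = shap.TreeExplainer(unc_model)
shap_values = explainer.shap_values(X_deploy)

# Features with the largest mean |SHAP| value are candidate sources of
# rising uncertainty, i.e. of suspected model deterioration.
print(np.abs(shap_values).mean(axis=0))
```

Since no labels are needed for the deployment batch, the uncertainty signal and its SHAP attribution can be tracked over time as a label-free monitoring proxy.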

arxiv model monitoring uncertainty
