Root Causing Prediction Anomalies Using Explainable AI
March 6, 2024, 5:41 a.m. | Ramanathan Vishnampet, Rajesh Shenoy, Jianhui Chen, Anuj Gupta
cs.LG updates on arXiv.org arxiv.org
Abstract: This paper presents a novel application of explainable AI (XAI) for root-causing performance degradation in machine learning models that learn continuously from user engagement data. In such systems, a single feature corruption can cause cascading feature, label, and concept drifts. We have successfully applied this technique to improve the reliability of models used in personalized advertising. Performance degradation in such systems manifests as prediction anomalies in the models. These models are typically trained continuously using …
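The abstract describes tracing a prediction anomaly back to a corrupted input feature via explainable-AI attributions. A minimal sketch of that idea, under assumptions not taken from the paper: a fixed linear model whose local attribution for feature j is simply coef[j] * x[j], and a comparison of per-feature attribution magnitudes between a healthy reference window and an anomalous window to flag the most-shifted feature as the root cause.

```python
# Hypothetical sketch, not the authors' method: root-causing a prediction
# anomaly by comparing feature attributions across two data windows.
import numpy as np

rng = np.random.default_rng(0)
coef = np.array([0.5, -1.2, 2.0, 0.3])  # fixed "model" weights (assumed)

def attributions(X):
    """Per-sample, per-feature contribution to a linear model's prediction."""
    return X * coef

# Reference window: healthy engagement features.
X_ref = rng.normal(0.0, 1.0, size=(1000, 4))

# Anomalous window: feature 2 is corrupted (e.g. a logging bug pins it to a
# constant), which would cascade into label and concept drift downstream.
X_bad = rng.normal(0.0, 1.0, size=(1000, 4))
X_bad[:, 2] = rng.normal(5.0, 0.1, size=1000)

# Candidate root cause: the feature whose mean absolute attribution
# shifted the most between the two windows.
shift = np.abs(np.abs(attributions(X_bad)).mean(axis=0)
               - np.abs(attributions(X_ref)).mean(axis=0))
root_cause = int(shift.argmax())
print(root_cause)  # prints 2, the corrupted feature's index
```

In practice the attributions would come from a model-agnostic explainer (e.g. SHAP values) rather than raw linear contributions, but the drift comparison over windows is the same shape of analysis.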