Feb. 27, 2024, 5:43 a.m. | Oliver Hennhöfer, Christine Preisach

cs.LG updates on arXiv.org

arXiv:2402.16388v1 Announce Type: cross
Abstract: Given the growing significance of reliable, trustworthy, and explainable machine learning, the requirement of uncertainty quantification for anomaly detection systems has become increasingly important. In this context, effectively controlling Type I error rates ($\alpha$) without compromising the statistical power ($1-\beta$) of these systems can build trust and reduce costs related to false discoveries, particularly when follow-up procedures are expensive. Leveraging the principles of conformal prediction emerges as a promising approach for providing respective statistical guarantees …
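To make the idea of statistically controlled anomaly detection concrete, here is a minimal sketch of the standard conformal p-value construction: anomaly scores from a held-out, nominal calibration set turn each test score into a valid p-value, and flagging points with p ≤ α keeps the Type I error at level α. The detector (IsolationForest), data splits, and α below are illustrative assumptions, not the paper's specific setup.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def conformal_p_values(calib_scores, test_scores):
    """Marginal conformal p-values from a nominal calibration set.

    calib_scores: anomaly scores of held-out nominal (non-anomalous) data
    test_scores:  anomaly scores of new observations (higher = more anomalous)
    """
    calib_scores = np.asarray(calib_scores)
    n = len(calib_scores)
    # Proportion of calibration scores at least as extreme as the test score;
    # the "+1" terms make the p-value super-uniform under the null.
    return np.array([(1 + np.sum(calib_scores >= s)) / (n + 1) for s in test_scores])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 2))   # nominal data for fitting the detector
    X_calib = rng.normal(size=(200, 2))   # held-out nominal data for calibration
    X_test = np.vstack([rng.normal(size=(95, 2)),
                        rng.normal(loc=4.0, size=(5, 2))])  # a few planted anomalies

    det = IsolationForest(random_state=0).fit(X_train)
    # score_samples is higher for inliers, so negate it to get an anomaly score
    calib_scores = -det.score_samples(X_calib)
    test_scores = -det.score_samples(X_test)

    p_vals = conformal_p_values(calib_scores, test_scores)
    alpha = 0.1
    flagged = np.where(p_vals <= alpha)[0]
    print(f"Flagged {len(flagged)} of {len(X_test)} points at alpha={alpha}")
```

Because the p-values are valid regardless of the underlying scoring model, the same recipe works with any anomaly detector; controlling error rates across many simultaneous tests (the false-discovery setting the abstract alludes to) additionally requires a multiple-testing correction on these p-values.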

