April 19, 2024, 4:43 a.m. | Bitya Neuhof, Yuval Benjamini

cs.LG updates on arXiv.org

arXiv:2307.15361v2 Announce Type: replace-cross
Abstract: Machine learning models are widely applied in various fields. Stakeholders often use post-hoc feature importance methods to better understand the input features' contribution to the models' predictions. The interpretation of the importance values provided by these methods is frequently based on the relative order of the features (their ranking) rather than the importance values themselves. Since the order may be unstable, we present a framework for quantifying the uncertainty in global importance values. We propose …
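As a rough illustration of the ranking-instability problem the abstract describes (this is not the paper's proposed framework, just a common bootstrap-style sketch), one can repeat permutation importance several times and summarize how much each feature's rank moves across repetitions. The model, dataset, and interval choice below are assumptions for demonstration only.

```python
# Illustrative sketch (not the authors' method): estimate how stable a global
# feature-importance ranking is by repeating permutation importance and
# summarizing the spread of each feature's rank across repetitions.
import numpy as np
from scipy.stats import rankdata
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# n_repeats independent permutations give an importance matrix of
# shape (n_features, n_repeats).
result = permutation_importance(model, X_test, y_test, n_repeats=50, random_state=0)

# Rank features within each repetition (rank 1 = most important).
ranks = rankdata(-result.importances, axis=0)

# Summarize rank uncertainty per feature with a simple percentile interval.
lo, hi = np.percentile(ranks, [2.5, 97.5], axis=1)
order = np.argsort(result.importances_mean)[::-1]
for i in order[:10]:
    print(f"{X.columns[i]:<25} mean importance={result.importances_mean[i]:.4f} "
          f"rank interval=[{int(lo[i])}, {int(hi[i])}]")
```

Features whose rank intervals overlap heavily cannot be reliably ordered against each other, which is exactly the kind of uncertainty the abstract argues should be reported alongside the importance values.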
