June 13, 2022, 1:10 a.m. | Claudia V. Roberts, Ehtsham Elahi, Ashok Chandrashekar

cs.LG updates on arXiv.org

We evaluate two popular local explainability techniques, LIME and SHAP, on a
movie recommendation task. We discover that the two methods behave very
differently depending on the sparsity of the data set. LIME does better than
SHAP in dense segments of the data set and SHAP does better in sparse segments.
We trace this difference to the differing bias-variance characteristics of the
underlying estimators of LIME and SHAP. We find that SHAP exhibits lower
variance in sparse segments of the …

arxiv bias bias-variance lg lime recommendation shap sparsity variance
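The snippet below is a minimal, illustrative sketch, not code from the paper: it produces LIME and SHAP local explanations for one prediction of a stand-in scikit-learn classifier trained on a synthetic, mostly-zero feature matrix (a rough proxy for a sparse segment of a movie-recommendation data set). The model, data, and all parameter choices are assumptions made for illustration.

```python
# Minimal illustrative sketch (not from the paper): LIME vs. SHAP local
# explanations for one prediction of a stand-in classifier trained on a
# synthetic, mostly-zero ("sparse") feature matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer
import shap

rng = np.random.default_rng(0)

# Synthetic sparse features: ~5% of entries are nonzero, loosely mimicking
# a sparse segment of a user-item interaction data set.
X = rng.binomial(1, 0.05, size=(1000, 20)).astype(float)
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # label driven by two features

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
x = X[0]  # single instance to explain

# LIME: fits a weighted local surrogate model on perturbed neighbors of x.
lime_explainer = LimeTabularExplainer(
    X, mode="classification", discretize_continuous=False
)
lime_exp = lime_explainer.explain_instance(x, model.predict_proba, num_features=5)
print("LIME:", lime_exp.as_list())

# SHAP (model-agnostic KernelExplainer): estimates Shapley values against a
# small background sample; the output shape depends on the shap version.
background = shap.sample(X, 50)
shap_explainer = shap.KernelExplainer(model.predict_proba, background)
print("SHAP:", shap_explainer.shap_values(x, nsamples=200))
```

Comparing the two feature attributions for the same instance, and repeating this over instances drawn from dense versus sparse regions, is one way to probe the variance differences the abstract describes; the `nsamples` and background-sample sizes above are arbitrary illustrative values.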
