Feb. 3, 2022, 2:11 a.m. | Clément Bénard (LPSM (UMR 8001)), Gérard Biau (LPSM (UMR 8001)), Sébastien da Veiga, Erwan Scornet (CMAP)

cs.LG updates on arXiv.org

Interpretability of learning algorithms is crucial for applications involving
critical decisions, and variable importance is one of the main interpretation
tools. Shapley effects are now widely used to interpret both tree ensembles and
neural networks, as they can efficiently handle dependence and interactions in
the data, as opposed to most other variable importance measures. However,
estimating Shapley effects is a challenging task, both because of their
computational complexity and because they require conditional expectation
estimates. Accordingly, existing Shapley algorithms have flaws: a costly …
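The abstract is cut off above, but the two difficulties it names are concrete. As background not stated in this excerpt: in the standard variance-based formulation, the Shapley effect of an input $X_j$ among $X_1, \dots, X_p$ with output $Y$ is

$$\mathrm{Sh}_j = \sum_{U \subseteq \{1,\dots,p\} \setminus \{j\}} \frac{|U|!\,(p-|U|-1)!}{p!}\,\big(v(U \cup \{j\}) - v(U)\big), \qquad v(U) = \frac{\mathrm{Var}\big(\mathbb{E}[Y \mid X_U]\big)}{\mathrm{Var}(Y)},$$

so a naive estimator must visit $2^{p-1}$ subsets and estimate a conditional expectation $\mathbb{E}[Y \mid X_U]$ for each one, which is exactly the cost the abstract points to.

As a rough illustration only, the sketch below shows how Shapley-based importances for a tree ensemble are commonly computed in practice with the off-the-shelf `shap` package on a scikit-learn forest. This is not the paper's SHAFF estimator, and averaging local SHAP values over a sample is a different (though related) quantity from the global variance-based Shapley effects defined above.

```python
# Hedged sketch: Shapley-based importances for a random forest via the
# `shap` package (illustrative; not the SHAFF algorithm from the paper).
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy regression data.
X, y = make_regression(n_samples=500, n_features=5, random_state=0)

# Fit a tree ensemble, the model class Shapley effects are often used on.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer exploits the tree structure to compute SHAP values quickly.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Aggregate local attributions into one global importance score per feature.
global_importance = np.abs(shap_values).mean(axis=0)
print(global_importance)
```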

arxiv ml random forests
