[D] Are black-box explainers (e.g. SHAP) just another black-box?
Jan. 26, 2022, 3:58 p.m. | /u/candalfigomoro
Machine Learning www.reddit.com
Tools such as SHAP are now commonly used to explain "black-box" models such as GBMs or neural networks. However, the people who use them often don't really understand how they work, and these tools are no less complex than the models they aim to explain. Aren't we just explaining black-box models with black-box explainers?
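For what it's worth, the Shapley values that SHAP approximates are conceptually simple, even if the library's approximations (kernel weighting, tree-path tricks) are not. A minimal sketch of the exact computation in pure Python, using a toy linear model and a baseline vector for "absent" features (a common SHAP convention; the model and values here are illustrative assumptions, not from the post):

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance.

    predict:  function mapping a feature vector to a scalar prediction.
    x:        the instance to explain.
    baseline: reference values substituted for features "absent" from a coalition.
    """
    n = len(x)
    phi = [0.0] * n
    features = range(n)
    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in features]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in features]
                # Marginal contribution of feature i to this coalition
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi

# Toy linear model f(x) = 2*x0 + 3*x1: the exact Shapley value of
# feature j is w_j * (x_j - baseline_j), so we expect [2.0, 3.0].
model = lambda v: 2.0 * v[0] + 3.0 * v[1]
print(shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # [2.0, 3.0]
```

The exponential loop over coalitions is exactly why SHAP resorts to sampling and model-specific shortcuts, which is where the opacity the post complains about comes in.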