July 14, 2023, 11:55 a.m. | /u/zeoNoeN

r/MachineLearning (www.reddit.com)

I hope this question fits this sub: I'm currently interested in explainable AI (XAI) methods. I want to incorporate them into a dashboard to increase transparency and trust in an underlying text classification model. SHAP currently looks promising, but I'm wondering: which methods work best from a non-technical end-user's perspective? What do I need to consider during the design phase? I haven't found good papers that compare different methods and their effectiveness for end users. Does anyone know of good papers on this?
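For context on what SHAP-style output looks like in a dashboard, here is a minimal, self-contained sketch of model-agnostic per-token attribution via leave-one-out perturbation — the same flavor of "which words pushed the prediction" display that SHAP text explanations give. The classifier here is a toy keyword scorer standing in for a real model; `toy_score` and `leave_one_out_importance` are illustrative names, not part of any library.

```python
# Sketch: leave-one-out word importance, illustrating the kind of
# per-token attribution a SHAP text explainer would feed a dashboard.
# The classifier below is a toy stand-in, NOT a real trained model.

def toy_score(text: str) -> float:
    """Toy 'positive sentiment' score: fraction of positive keywords."""
    positive = {"great", "good", "excellent", "love"}
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in positive for w in words) / len(words)

def leave_one_out_importance(text: str, score_fn=toy_score):
    """Importance of each word = score drop when that word is removed.

    Positive importance means the word pushed the score up; this is
    the raw material for a colored-token visualization in a dashboard.
    """
    words = text.split()
    base = score_fn(text)
    importances = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        importances.append((words[i], base - score_fn(reduced)))
    return importances

if __name__ == "__main__":
    for word, imp in leave_one_out_importance("The support was great"):
        print(f"{word:>10}: {imp:+.3f}")
```

Real SHAP values differ (they average over all word subsets, not just single removals), but for a non-technical audience the rendered artifact is the same: each token annotated with a signed contribution.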

