July 14, 2023, 11:55 a.m. | /u/zeoNoeN

Machine Learning www.reddit.com

I hope this question fits this sub: I'm currently interested in explainable AI methods. I want to incorporate them into a dashboard to increase transparency and trust in an underlying text classification model. Currently, SHAP looks promising, but I'm wondering: which methods work best from a non-technical end-user's perspective? What do I need to consider during the design phase? I haven't found good papers that compare different methods and their effectiveness. Does anyone know of good papers on this? A minimal sketch of the SHAP setup I have in mind is below.
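For context, this is roughly the kind of SHAP usage I mean: a minimal sketch, assuming a Hugging Face `transformers` text-classification pipeline and the `shap` library's text explainer. The model name and example sentences are placeholders, not part of the original question.

```python
import shap
from transformers import pipeline

# Placeholder sentiment classifier; any text-classification pipeline works the same way.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,  # return scores for all classes
)

# shap.Explainer picks a text masker/partition explainer for transformers pipelines.
explainer = shap.Explainer(classifier)

# Explain a couple of example inputs (token-level contributions per class).
shap_values = explainer([
    "The dashboard makes the model's decisions easy to follow.",
    "I do not trust this prediction at all.",
])

# For a dashboard, shap.plots.text renders an HTML view that highlights
# each token by its contribution, which non-technical users can read directly.
shap.plots.text(shap_values[0])
```

Whether this token-highlighting view is actually the most effective presentation for non-technical end users is exactly the open question; the sketch only shows how the raw explanations would be produced.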

Tags: classification, dashboard, evaluation, explainable AI, machine learning, SHAP, text classification, transparency, trust, XAI
