May 12, 2023, 6:47 a.m. | Shuyang Xiang

Towards Data Science - Medium (towardsdatascience.com)

Exploring PyMC’s Insights with SHAP Framework via an Engaging Toy Example

The Gap between Bayesian Models and Explainability

SHAP values (SHapley Additive exPlanations) are a game-theory-based method for increasing the transparency and interpretability of machine learning models. However, this method, like most machine learning explainability frameworks, has rarely been applied to Bayesian models, which yield a posterior distribution capturing uncertainty in parameter estimates rather than the point estimates used by classical machine learning models.
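To make the game-theory idea concrete, here is a minimal brute-force sketch of exact Shapley values for a single prediction. It is a toy illustration of the attribution principle SHAP builds on, not the article's (or the `shap` library's) optimized implementation; the function names and the choice of replacing absent features with a baseline value are my own assumptions:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for one prediction of model f at input x.

    Toy brute-force version: features absent from a coalition are set
    to their baseline value, present features take their values from x.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # input with coalition S present, with and without feature i
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                # Shapley kernel weight |S|! (n-|S|-1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy model with an interaction term: f = 2a + 3b + a*b
phi = shapley_values(lambda z: 2 * z[0] + 3 * z[1] + z[0] * z[1],
                     [1.0, 1.0], [0.0, 0.0])
```

The interaction term's contribution is split evenly between the two features, and the attributions sum to f(x) minus f(baseline) (the efficiency property that makes SHAP values "additive").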

While Bayesian models offer a …
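One way to bridge the gap the article points at: because a Bayesian model gives a posterior over parameters, every posterior draw induces its own SHAP attribution, so each feature gets a *distribution* of SHAP values rather than a point value. A minimal sketch for a linear model, where the exact SHAP value of feature i is w_i * (x_i - baseline_i); the simulated "posterior" draws stand in for what a PyMC trace would provide and are an assumption of this example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for the weights of a linear model
# y = w0*x0 + w1*x1 (standing in for draws from a PyMC trace).
posterior_w = rng.normal(loc=[2.0, -1.0], scale=0.1, size=(1000, 2))

x = np.array([1.5, 2.0])         # instance to explain
baseline = np.array([0.0, 0.0])  # reference input

# One exact SHAP attribution per posterior draw, shape (1000, 2):
shap_draws = posterior_w * (x - baseline)

# Summaries: posterior mean attribution and a 95% credible interval,
# i.e. uncertainty-aware explanations instead of a single number.
shap_mean = shap_draws.mean(axis=0)
shap_ci = np.percentile(shap_draws, [2.5, 97.5], axis=0)
```

Per draw, the attributions still sum to that draw's prediction minus its baseline prediction, so the efficiency property holds draw by draw.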

