May 12, 2023, 6:47 a.m. | Shuyang Xiang

Towards Data Science - Medium towardsdatascience.com

Exploring PyMC’s Insights with SHAP Framework via an Engaging Toy Example

The Gap between Bayesian Models and Explainability

SHAP values (SHapley Additive exPlanations) are a game-theory-based method for increasing the transparency and interpretability of machine learning models. However, this method, like other machine learning explainability frameworks, has rarely been applied to Bayesian models, which yield a posterior distribution capturing uncertainty in parameter estimates rather than the point estimates produced by classical machine learning models.
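To make the game-theoretic idea concrete, here is a minimal sketch of exact Shapley values computed against a Bayesian model's posterior-mean prediction. The posterior samples below are simulated with NumPy as a hypothetical stand-in for a PyMC trace (no PyMC dependency); the `shapley_values` helper is an illustrative brute-force implementation, not the SHAP library's API.

```python
from itertools import combinations
from math import factorial
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a PyMC posterior: 500 samples of 3 linear weights.
posterior_w = rng.normal([2.0, -1.0, 0.5], 0.1, size=(500, 3))

def predict(x):
    # Posterior-mean prediction of a linear model, averaging over samples.
    return float(np.mean(posterior_w @ x))

def shapley_values(f, x, baseline):
    # Exact Shapley values by enumerating all feature coalitions
    # (exponential in the number of features; fine for a toy example).
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                x_with = baseline.copy()
                x_with[list(S) + [i]] = x[list(S) + [i]]
                x_without = baseline.copy()
                x_without[list(S)] = x[list(S)]
                phi[i] += weight * (f(x_with) - f(x_without))
    return phi

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
phi = shapley_values(predict, x, baseline)

# Efficiency property: per-feature contributions sum to f(x) - f(baseline).
assert np.isclose(phi.sum(), predict(x) - predict(baseline))
```

Because the toy model is linear, each feature's Shapley value reduces to its posterior-mean weight times its deviation from the baseline; a full Bayesian treatment could instead propagate the entire posterior through the attribution, giving a distribution over SHAP values rather than a single number.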

While Bayesian models offer a …

