June 18, 2023, 11:38 a.m. | AI Coffee Break with Letitia


Today we present our own work on MM-SHAP, which measures how much a multimodal model uses each modality. Ah, what is multimodality again? 👉 https://youtu.be/jReaoJWdO78

📜 Parcalabescu, Letitia, and Anette Frank. "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks." arXiv preprint arXiv:2212.08158 (2022). https://arxiv.org/abs/2212.08158
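In a nutshell, MM-SHAP builds on Shapley values: it aggregates the absolute per-token contributions within each modality and reports each modality's share of the total. Below is a minimal sketch of that aggregation step, assuming the per-token Shapley values for one prediction have already been computed; the numbers are made up for illustration only.

import numpy as np

def mm_shap(text_shap_values, image_shap_values):
    # Share of the total absolute Shapley contribution per modality.
    t = np.abs(np.asarray(text_shap_values)).sum()
    v = np.abs(np.asarray(image_shap_values)).sum()
    total = t + v
    t_shap = t / total  # textual contribution, in [0, 1]
    v_shap = v / total  # visual contribution, equals 1 - t_shap
    return t_shap, v_shap

# Hypothetical Shapley values for 3 text tokens and 4 image patches.
text_phi  = [0.40, -0.10, 0.05]
image_phi = [0.20, 0.15, -0.05, 0.10]

t_shap, v_shap = mm_shap(text_phi, image_phi)
print(f"T-SHAP = {t_shap:.2f}, V-SHAP = {v_shap:.2f}")

Because the score is a ratio of contributions rather than a function of accuracy, it stays meaningful even when the model's prediction is wrong, which is what "performance-agnostic" refers to in the paper title.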

📺 VeLO trained optimizers: https://youtu.be/9a6PQJxzUpM
📺 Watermarking Large Language models: https://youtu.be/-vToUx5SDW4
📺 Paella text-to-image diffusion model: https://youtu.be/6zeLSANd41k

❓Check out our #MachineLearning Quiz Questions: https://www.youtube.com/c/AICoffeeBreak/community

Outline:
00:00 Paper for ACL …

