Feb. 23, 2022, 1:47 p.m. | AI Coffee Break with Letitia

It turns out that multi-head self-attention and convolutions are complementary. So, what makes multi-head self-attention different from convolutions? How and why do Vision Transformers work? In this video, we find out by explaining the paper “How Do Vision Transformers Work?” by Park & Kim, 2022.
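To make the contrast concrete, here is a minimal, illustrative sketch (not the paper's or the channel's code) in PyTorch: a convolution mixes features with a fixed, local kernel shared across positions, while single-head self-attention mixes all positions with weights computed from the input itself. All names, shapes, and the single-head simplification below are assumptions made for the illustration.

```python
# Illustrative sketch only: contrasting how a convolution and self-attention
# mix information over a feature map. Shapes and layer names are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

B, C, H, W = 1, 8, 14, 14            # batch, channels, feature-map height/width
x = torch.randn(B, C, H, W)

# Convolution: a fixed 3x3 kernel shared across all positions.
# The mixing weights are local and do not depend on the input content.
conv = nn.Conv2d(C, C, kernel_size=3, padding=1)
y_conv = conv(x)                      # (B, C, H, W)

# Self-attention: every position attends to every other position,
# and the mixing weights are recomputed from the input (data-dependent).
tokens = x.flatten(2).transpose(1, 2)                     # (B, H*W, C), one token per position
q_proj, k_proj, v_proj = nn.Linear(C, C), nn.Linear(C, C), nn.Linear(C, C)
q, k, v = q_proj(tokens), k_proj(tokens), v_proj(tokens)
attn = F.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)  # (B, H*W, H*W) attention map
y_attn = (attn @ v).transpose(1, 2).reshape(B, C, H, W)

print(y_conv.shape, y_attn.shape)     # both (1, 8, 14, 14), but mixed very differently
```

The shapes match, but the two operations aggregate information in opposite ways (local and content-agnostic vs. global and content-dependent), which is the sense in which the video describes them as complementary.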

SPONSOR: Weights & Biases 👉 https://wandb.me/ai-coffee-break

⏩ Vision Transformers explained playlist: https://youtube.com/playlist?list=PLpZBeKTZRGPMddKHcsJAOIghV8MwzwQV6
📺 ViT: An image is worth 16x16 words: https://youtu.be/DVoHvmww2lQ
📺 Swin Transformer: https://youtu.be/SndHALawoag
📺 ConvNext: https://youtu.be/QqejV0LNDHA
📺 DeiT: https://youtu.be/-FbV2KgRM8A
📺 Adversarial attacks: …

