Transformers Provably Learn Feature-Position Correlations in Masked Image Modeling
March 5, 2024, 2:42 p.m. | Yu Huang, Zixin Wen, Yuejie Chi, Yingbin Liang
cs.LG updates on arXiv.org
Abstract: Masked image modeling (MIM), which predicts randomly masked patches from unmasked ones, has emerged as a promising approach in self-supervised vision pretraining. However, the theoretical understanding of MIM is rather limited, especially with the foundational architecture of transformers. In this paper, to the best of our knowledge, we provide the first end-to-end theory of learning one-layer transformers with softmax attention in MIM self-supervised pretraining. On the conceptual side, we posit a theoretical mechanism of how …
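For readers unfamiliar with the setup, below is a minimal illustrative sketch of MIM pretraining with a one-layer softmax-attention transformer. All module names, dimensions, and the masking ratio are assumptions chosen for illustration; this is a generic MIM sketch, not the paper's exact construction or analysis.

```python
# Minimal sketch of masked image modeling (MIM): replace random patches with a
# learned mask token, run one softmax-attention layer, and reconstruct the
# masked patches from the unmasked ones. Hypothetical hyperparameters.
import torch
import torch.nn as nn

class OneLayerMIM(nn.Module):
    def __init__(self, num_patches=16, patch_dim=48, embed_dim=64):
        super().__init__()
        self.embed = nn.Linear(patch_dim, embed_dim)        # patch embedding
        self.pos = nn.Parameter(torch.randn(num_patches, embed_dim) * 0.02)
        self.mask_token = nn.Parameter(torch.zeros(embed_dim))
        self.q = nn.Linear(embed_dim, embed_dim, bias=False)
        self.k = nn.Linear(embed_dim, embed_dim, bias=False)
        self.v = nn.Linear(embed_dim, embed_dim, bias=False)
        self.decode = nn.Linear(embed_dim, patch_dim)       # pixel decoder

    def forward(self, patches, mask):
        # patches: (B, N, patch_dim); mask: (B, N) bool, True = masked patch
        x = self.embed(patches)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        x = x + self.pos                                    # position encodings
        attn = torch.softmax(
            self.q(x) @ self.k(x).transpose(-2, -1) / x.shape[-1] ** 0.5,
            dim=-1,
        )
        return self.decode(attn @ self.v(x))                # predicted patches

# One training step: loss is computed only on the masked positions, so the
# model must infer masked content from unmasked patches and positions.
model = OneLayerMIM()
patches = torch.randn(8, 16, 48)             # a batch of patchified images
mask = torch.rand(8, 16) < 0.5               # randomly mask ~50% of patches
pred = model(patches, mask)
loss = ((pred - patches)[mask] ** 2).mean()  # MSE on masked patches only
loss.backward()
```

Note the learned position encodings: because the target of each masked patch depends on where it sits in the image, the attention layer has to relate positions to features, which is the feature-position correlation the title refers to.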