PatchDropout: Economizing Vision Transformers Using Patch Dropout. (arXiv:2208.07220v2 [cs.CV] UPDATED)
Oct. 6, 2022, 1:13 a.m. | Yue Liu, Christos Matsoukas, Fredrik Strand, Hossein Azizpour, Kevin Smith
cs.LG updates on arXiv.org
Vision transformers have demonstrated the potential to outperform CNNs in a
variety of vision tasks. But the computational and memory requirements of these
models prohibit their use in many applications, especially those that depend on
high-resolution images, such as medical image classification. Efforts to train
ViTs more efficiently are overly complicated, necessitating architectural
changes or intricate training schemes. In this work, we show that standard ViT
models can be efficiently trained at high resolution by randomly dropping input
image patches. …
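The core idea described above is simple: before feeding a high-resolution image's patch sequence to the transformer, randomly keep only a fraction of the patches. A minimal sketch of that operation is shown below, assuming patch embeddings of shape `(batch, num_patches, dim)`; the function name, `keep_rate` parameter, and defaults are illustrative, not the authors' reference implementation.

```python
import numpy as np

def patch_dropout(patches, keep_rate=0.5, rng=None):
    """Randomly keep a subset of input patches per image.

    Sketch of the patch-dropout idea from the abstract; not the
    paper's reference code. `patches` has shape (batch, n, dim).
    """
    rng = np.random.default_rng() if rng is None else rng
    b, n, d = patches.shape
    n_keep = max(1, int(n * keep_rate))
    # Sample, independently per image, which patch indices to keep
    # (without replacement), then gather those patches.
    idx = np.stack([rng.permutation(n)[:n_keep] for _ in range(b)])
    return np.take_along_axis(patches, idx[..., None], axis=1)

# Example: a 224x224 image with 16x16 patches yields 196 tokens;
# at keep_rate=0.5 the transformer sees only 98 of them.
x = np.zeros((2, 196, 8), dtype=np.float32)
out = patch_dropout(x, keep_rate=0.5, rng=np.random.default_rng(0))
```

Because self-attention cost scales quadratically with sequence length, halving the number of patches roughly quarters the attention compute, which is what makes high-resolution training feasible.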