Oct. 6, 2022, 1:13 a.m. | Yue Liu, Christos Matsoukas, Fredrik Strand, Hossein Azizpour, Kevin Smith

cs.LG updates on arXiv.org

Vision transformers have demonstrated the potential to outperform CNNs in a
variety of vision tasks. However, the computational and memory requirements of
these models prohibit their use in many applications, especially those that
depend on high-resolution images, such as medical image classification.
Existing efforts to train ViTs more efficiently are overly complicated,
necessitating architectural changes or intricate training schemes. In this
work, we show that standard ViT models can be efficiently trained at high
resolution by randomly dropping input image patches. …
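The core idea is simple: since a ViT operates on a sequence of patch tokens, a random subset of those tokens can be discarded before the transformer encoder, shrinking compute and memory roughly in proportion to the keep ratio. Below is a minimal PyTorch sketch of this kind of patch dropping, not the authors' exact implementation; the helper name `patch_dropout` and the 0.5 keep ratio are illustrative assumptions.

```python
# A minimal sketch of random patch dropping for a ViT, assuming the input is a
# sequence of patch embeddings with a prepended CLS token. The function name
# and default keep_ratio are illustrative, not taken from the paper.
import torch


def patch_dropout(tokens: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Randomly keep a subset of patch tokens; the CLS token (index 0) is always kept.

    tokens: (batch, 1 + num_patches, dim) embeddings, CLS token first.
    """
    b, n, d = tokens.shape
    num_patches = n - 1
    num_keep = max(1, int(num_patches * keep_ratio))

    # Sample an independent random permutation of patch indices per batch
    # element and keep the first `num_keep` of each.
    rand = torch.rand(b, num_patches, device=tokens.device)
    keep_idx = rand.argsort(dim=1)[:, :num_keep] + 1  # +1 skips the CLS slot

    cls = tokens[:, :1]
    patches = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))
    return torch.cat([cls, patches], dim=1)


if __name__ == "__main__":
    x = torch.randn(8, 197, 768)      # e.g. ViT-B/16 at 224x224: 196 patches + CLS
    out = patch_dropout(x, keep_ratio=0.5)
    print(out.shape)                  # torch.Size([8, 99, 768])
```

One design point worth noting: the dropping would be applied after positional embeddings are added, so each surviving token still carries its original spatial location; at inference time the full token sequence can be used unchanged.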

