April 2, 2024, 7:48 p.m. | Jianqiao Zheng, Xueqian Li, Simon Lucey

cs.CV updates on arXiv.org

arXiv:2404.01139v1 Announce Type: new
Abstract: Training vision transformer (ViT) networks on small-scale datasets poses a significant challenge. By contrast, convolutional neural networks (CNNs) have an architectural inductive bias that enables them to perform well on such problems. In this paper, we argue that the architectural bias inherent to CNNs can be reinterpreted as an initialization bias within ViTs. This insight is significant because it allows ViTs to perform equally well on small-scale problems while retaining their flexibility for large-scale …
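The truncated abstract does not spell out the method, but the core idea (expressing a convolutional inductive bias as an initialization rather than as an architecture) can be illustrated. The sketch below is an assumption-laden illustration, not the paper's actual scheme: it initializes an additive attention-logit bias so that each patch token initially attends mostly to a small spatial neighbourhood, the way a convolutional kernel's receptive field would, while training remains free to erode that locality. The function name `local_attention_bias` and all hyperparameters are hypothetical.

```python
# Hypothetical sketch: a convolution-like locality prior injected into a
# ViT purely through initialization of an additive attention bias.
import torch

def local_attention_bias(grid: int, window: int = 3, strength: float = 4.0) -> torch.Tensor:
    """Build a (grid*grid, grid*grid) additive attention bias that favours
    tokens within a window x window spatial neighbourhood, mimicking the
    receptive field of a small conv kernel. Illustrative values only."""
    ys, xs = torch.meshgrid(torch.arange(grid), torch.arange(grid), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()  # (N, 2)
    # Chebyshev distance between every pair of patch positions on the grid.
    dist = (coords[:, None, :] - coords[None, :, :]).abs().max(dim=-1).values  # (N, N)
    # Zero bias inside the local window, a negative penalty everywhere else.
    bias = torch.where(dist <= window // 2, torch.tensor(0.0), torch.tensor(-strength))
    return bias  # added to attention logits before the softmax

# Usage: bias attention toward a 3x3 neighbourhood on a 14x14 patch grid.
N = 14 * 14
bias = local_attention_bias(grid=14)
q = k = v = torch.randn(1, N, 64)
logits = q @ k.transpose(-2, -1) / 64 ** 0.5 + bias  # locality-biased logits
attn = logits.softmax(dim=-1)  # initially concentrated on spatial neighbours
out = attn @ v
```

Because the locality enters only through an initial bias on the logits (rather than a hard architectural constraint such as windowed attention), the network starts out behaving like a CNN but can learn to attend globally as training progresses.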
