March 19, 2024, 4:50 a.m. | Yunke Wang, Bo Du, Wenyuan Wang, Chang Xu

cs.CV updates on arXiv.org arxiv.org

arXiv:2203.01587v3 Announce Type: replace
Abstract: Recently, Vision Transformer (ViT) has achieved promising performance in image recognition and gradually serves as a powerful backbone in various vision tasks. To satisfy the sequential input of the Transformer, the tail of ViT first splits each image into a sequence of visual tokens with a fixed length. Then the following self-attention layers construct the global relationships between tokens to produce useful representations for the downstream tasks. Empirically, representing the image with more tokens leads to …
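
As context for the tokenization step the abstract describes, here is a minimal, illustrative sketch (not the paper's implementation) of how a ViT-style "tail" splits an image into a fixed-length sequence of visual tokens; the patch size, image size, and embedding dimension are assumed values for illustration only.

```python
# Hypothetical sketch of ViT-style patch tokenization, assuming 224x224 inputs,
# 16x16 patches, and a 768-dim embedding (standard ViT-Base-like settings).
import torch
import torch.nn as nn

class PatchTokenizer(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        # Fixed token-sequence length: (224/16)^2 = 196 tokens per image.
        self.num_tokens = (img_size // patch_size) ** 2
        # A strided convolution both splits the image into patches and embeds them.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                    # x: (B, 3, 224, 224)
        x = self.proj(x)                     # (B, 768, 14, 14)
        return x.flatten(2).transpose(1, 2)  # (B, 196, 768): tokens for self-attention

tokens = PatchTokenizer()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```

The resulting `(B, 196, 768)` sequence is what the subsequent self-attention layers consume to build global relationships between tokens; using more (smaller) patches lengthens this sequence, which is the trade-off the abstract alludes to.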

