April 23, 2024, 4:46 a.m. | Yuang Liu, Zhiheng Qiu, Xiaokai Qin

cs.CV updates on arXiv.org arxiv.org

arXiv:2404.13434v1 Announce Type: new
Abstract: Transformers have been applied to computer vision because of their excellent performance in natural language processing, surpassing traditional convolutional neural networks and achieving a new state of the art. ViT divides an image into several local patches, known as "visual sentences". However, the information contained in an image is vast and complex, and focusing only on features at the "visual sentence" level is not enough. The features between local patches should also be taken into …
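For context on the "visual sentence" split the abstract refers to, below is a minimal sketch of the standard ViT patch-embedding step: the image is cut into non-overlapping local patches and each patch is linearly projected to a token. This is an illustrative PyTorch snippet, not the paper's code; the patch size, image size, and embedding dimension are assumed ViT-Base defaults.

```python
# Illustrative sketch of ViT-style patch embedding ("visual sentences").
# Hyperparameters below (224x224 input, 16x16 patches, 768-dim tokens) are
# assumed ViT-Base defaults, not values taken from the paper.
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution is equivalent to slicing non-overlapping
        # patches and applying a shared linear projection to each one.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                        # x: (B, 3, H, W)
        x = self.proj(x)                         # (B, embed_dim, H/ps, W/ps)
        return x.flatten(2).transpose(1, 2)      # (B, num_patches, embed_dim)

tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```

The resulting sequence of patch tokens is what the transformer encoder attends over; the abstract's point is that this patch-level view alone misses relationships between local patches.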
