May 8, 2024, 4:45 a.m. | Huihong Shi, Haikuo Shao, Wendong Mao, Zhongfeng Wang

cs.CV updates on arXiv.org

arXiv:2405.03882v1 Announce Type: new
Abstract: Motivated by the huge success of Transformers in the field of natural language processing (NLP), Vision Transformers (ViTs) have been rapidly developed and have achieved remarkable performance on various computer vision tasks. However, their large model sizes and intensive computation hinder ViTs' deployment on embedded devices, calling for effective model compression methods such as quantization. Unfortunately, due to the existence of hardware-unfriendly and quantization-sensitive non-linear operations, particularly Softmax, it is non-trivial to completely quantize all operations …
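To illustrate why Softmax is quantization-sensitive, the sketch below (an assumption-laden toy example, not the paper's method) applies symmetric uniform quantization to a set of attention logits and compares the resulting Softmax output against the full-precision one. Because the exponential amplifies small rounding errors in the logits, even modest quantization shifts the attention distribution:

```python
import numpy as np

def uniform_quantize(x, num_bits=8):
    # Symmetric uniform quantization: map floats to signed integers
    # using a single per-tensor scale.
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q.astype(np.int32), scale

def softmax(x):
    # Numerically stable softmax.
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical attention logits with a wide dynamic range,
# as is common in ViT attention maps.
logits = np.array([8.0, 2.0, -4.0, 0.5])

q, scale = uniform_quantize(logits, num_bits=4)  # aggressive low-bit case
dequantized = q * scale

print(softmax(logits))       # full-precision attention weights
print(softmax(dequantized))  # weights shift due to quantization error
```

Note that the non-linearity itself (exp, division) is also awkward to implement with integer-only arithmetic, which is the hardware-unfriendliness the abstract refers to; this example only shows the accuracy sensitivity.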

