Trio-ViT: Post-Training Quantization and Acceleration for Softmax-Free Efficient Vision Transformer
May 8, 2024, 4:45 a.m. | Huihong Shi, Haikuo Shao, Wendong Mao, Zhongfeng Wang
cs.CV updates on arXiv.org
Abstract: Motivated by the huge success of Transformers in the field of natural language processing (NLP), Vision Transformers (ViTs) have been rapidly developed and achieved remarkable performance in various computer vision tasks. However, their huge model sizes and intensive computations hinder ViTs' deployment on embedded devices, calling for effective model compression methods, such as quantization. Unfortunately, due to the existence of hardware-unfriendly and quantization-sensitive non-linear operations, particularly Softmax, it is non-trivial to completely quantize all operations …
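For background on the quantization the abstract refers to: post-training quantization maps a trained model's floating-point tensors to low-bit integers using a scale learned from the data, without retraining. The sketch below shows generic symmetric uniform 8-bit quantization of a tensor — a simplified illustration of the general technique, not the paper's Trio-ViT scheme; the function names are placeholders chosen here.

```python
import numpy as np

def quantize_uniform(x, num_bits=8):
    """Symmetric uniform post-training quantization (illustrative sketch).

    Maps float values to signed integers in [-2^(b-1), 2^(b-1)-1]
    using a single per-tensor scale derived from the max magnitude.
    """
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax          # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original floats.
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.2, 3.3, 0.01], dtype=np.float32)
q, s = quantize_uniform(x)
x_hat = dequantize(q, s)
```

Linear operations (matrix multiplies) tolerate this well; the paper's point is that non-linear operations such as Softmax do not, which motivates the Softmax-free design.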