Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers
April 15, 2024, 4:43 a.m. | Tobias Christian Nauen, Sebastian Palacio, Andreas Dengel
cs.LG updates on arXiv.org
Abstract: Transformers come with a high computational cost, yet their effectiveness in addressing problems in language and vision has sparked extensive research aimed at enhancing their efficiency. However, diverse experimental conditions, spanning multiple input domains, prevent a fair comparison based solely on reported results, posing challenges for model selection. To address this gap in comparability, we design a comprehensive benchmark of more than 30 models for image classification, evaluating key efficiency aspects, including accuracy, speed, and …
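The benchmark itself is not reproduced in this post, but the speed side of such an evaluation reduces to measuring inference throughput (images per second) under identical conditions for every model. A minimal sketch of that measurement is shown below; `throughput` and the callable `model` interface are illustrative assumptions, not the authors' actual benchmarking code.

```python
import time

def throughput(model, batches, warmup=2):
    """Measure inference throughput (images/sec) of a callable model.

    `model` is any callable that takes a batch of images and returns
    predictions; `batches` is a list of equally sized batches.
    """
    # Warm-up iterations are excluded so one-time costs (weight loading,
    # caches, JIT compilation) do not skew the measurement.
    for batch in batches[:warmup]:
        model(batch)

    start = time.perf_counter()
    n_images = 0
    for batch in batches[warmup:]:
        model(batch)
        n_images += len(batch)
    elapsed = time.perf_counter() - start
    return n_images / elapsed
```

Running every candidate model through the same function, on the same batches and hardware, is what makes the resulting speed numbers directly comparable; accuracy would be recorded in a separate, equally standardized pass.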