April 15, 2024, 4:43 a.m. | Tobias Christian Nauen, Sebastian Palacio, Andreas Dengel

cs.LG updates on arXiv.org arxiv.org

arXiv:2308.09372v2 Announce Type: replace-cross
Abstract: Transformers come with a high computational cost, yet their effectiveness in addressing problems in language and vision has sparked extensive research aimed at enhancing their efficiency. However, diverse experimental conditions, spanning multiple input domains, prevent a fair comparison based solely on reported results, posing challenges for model selection. To address this gap in comparability, we design a comprehensive benchmark of more than 30 models for image classification, evaluating key efficiency aspects, including accuracy, speed, and …
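The abstract describes benchmarking vision transformers on efficiency aspects such as accuracy and speed. As a rough illustration of the kind of throughput measurement such a benchmark involves (not the authors' actual setup), here is a minimal sketch assuming PyTorch and the timm library; the model name, batch size, and iteration counts are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of timing inference throughput for a vision transformer.
# Assumes PyTorch and timm; model choice and batch size are illustrative.
import time
import torch
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=False).eval()
x = torch.randn(32, 3, 224, 224)  # one batch of dummy 224x224 RGB images

with torch.no_grad():
    for _ in range(3):  # warm-up iterations
        model(x)
    runs = 10
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    elapsed = time.perf_counter() - start

print(f"throughput: {runs * x.shape[0] / elapsed:.1f} images/s")
```

A full benchmark of the sort the abstract describes would repeat such measurements across many models and also record accuracy and other efficiency metrics under matched experimental conditions.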

