Scaling (Down) CLIP: A Comprehensive Analysis of Data, Architecture, and Training Strategies
April 15, 2024, 4:44 a.m. | Zichao Li, Cihang Xie, Ekin Dogus Cubuk
cs.CV updates on arXiv.org
Abstract: This paper investigates the performance of Contrastive Language-Image Pre-training (CLIP) when it is scaled down to limited computation budgets. We explore CLIP along three dimensions: data, architecture, and training strategies. With regard to data, we demonstrate the significance of high-quality training data and show that a smaller dataset of high-quality data can outperform a larger dataset of lower quality. We also examine how model performance varies with different dataset sizes, suggesting that smaller ViT models are …
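For context, CLIP's pre-training objective is a symmetric contrastive (InfoNCE) loss over a batch of paired image and text embeddings. The sketch below is a minimal PyTorch illustration of that objective, not code from the paper; the function name and the fixed temperature value are illustrative assumptions (CLIP itself learns the temperature).

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over paired (batch, dim) embeddings.

    Matching image/text pairs share the same batch index; all other
    pairings in the batch act as negatives.
    """
    # L2-normalize so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity logits, scaled by the temperature.
    logits = image_emb @ text_emb.t() / temperature  # (batch, batch)

    # The positive pair for row i is column i.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image->text and text->image cross-entropy terms.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```

The paper's scaling analysis varies the inputs to this objective (dataset size and quality, encoder architecture, and training strategy) while keeping the contrastive formulation itself fixed.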