On Pretraining Data Diversity for Self-Supervised Learning
March 21, 2024, 4:43 a.m. | Hasan Abed Al Kader Hammoud, Tuhin Das, Fabio Pizzati, Philip Torr, Adel Bibi, Bernard Ghanem
cs.LG updates on arXiv.org
Abstract: We explore the impact of training with more diverse datasets, characterized by the number of unique samples, on the performance of self-supervised learning (SSL) under a fixed computational budget. Our findings consistently demonstrate that increasing pretraining data diversity enhances SSL performance, albeit only when the distribution distance to the downstream data is minimal. Notably, even with an exceptionally large pretraining data diversity achieved through methods such as web crawling or diffusion-generated data, the …
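A minimal sketch of the fixed-compute protocol the abstract describes: the number of unique pretraining samples varies while the total number of optimization steps stays constant, so lower diversity simply means each sample is revisited more often. This is not the authors' code; the toy encoder, random data, noise-based "augmentations", and SimCLR-style NT-Xent loss below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of paired embeddings."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    # Exclude self-similarity; each sample's positive is its other view.
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

def pretrain(data, num_unique, total_steps, batch_size=64):
    """Fixed-compute run: always `total_steps` updates, regardless of diversity."""
    pool = data[:num_unique]  # restrict the number of unique samples
    encoder = torch.nn.Sequential(
        torch.nn.Linear(pool.size(1), 128),
        torch.nn.ReLU(),
        torch.nn.Linear(128, 32),
    )
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    for _ in range(total_steps):
        x = pool[torch.randint(num_unique, (batch_size,))]
        # Two stochastic "views" per sample (additive noise standing in
        # for image augmentations).
        z1 = encoder(x + 0.1 * torch.randn_like(x))
        z2 = encoder(x + 0.1 * torch.randn_like(x))
        loss = nt_xent(z1, z2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return encoder

# Compare low vs. high diversity under the same compute budget.
data = torch.randn(10_000, 64)
for n in (100, 10_000):
    pretrain(data, num_unique=n, total_steps=500)
```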
Tags: arxiv, cs.AI, cs.CV, cs.LG, data diversity, pretraining, self-supervised learning