April 15, 2024, 4:42 a.m. | Changho Shin, Jitian Zhao, Sonia Cromp, Harit Vishwakarma, Frederic Sala

cs.LG updates on arXiv.org

arXiv:2404.08461v1 Announce Type: new
Abstract: Popular zero-shot models suffer from artifacts inherited from pretraining. A particularly detrimental artifact, caused by unbalanced web-scale pretraining data, is a mismatched label distribution. Existing approaches that seek to repair the label distribution are not suitable in zero-shot settings, as they have incompatible requirements such as access to labeled downstream task data or knowledge of the true label balance in the pretraining distribution. We sidestep these challenges and introduce a simple and lightweight approach to …
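The abstract is truncated, but the keywords suggest the general idea is rebalancing zero-shot predictions toward a target label distribution via optimal transport. The sketch below is a hypothetical illustration of that general technique, not the paper's actual algorithm: given a matrix of zero-shot class probabilities, a Sinkhorn iteration finds a transport plan whose column marginals match a desired label distribution, and each example is relabeled by where its mass goes. The function name and hyperparameters (`eps`, `n_iters`) are assumptions.

```python
import numpy as np

def rebalance_predictions(probs, target_dist, n_iters=200, eps=0.1):
    """Hypothetical sketch: align zero-shot prediction scores with a
    target label distribution using entropic optimal transport.

    probs:       (n, k) array of per-example class probabilities
    target_dist: (k,) desired class proportions (sums to 1)
    """
    n, k = probs.shape
    # Cost of assigning example i to class j: negative log-probability.
    cost = -np.log(probs + 1e-12)
    K = np.exp(-cost / eps)          # Gibbs kernel for Sinkhorn
    r = np.full(n, 1.0 / n)          # each example carries equal mass
    c = np.asarray(target_dist, dtype=float)
    u = np.ones(n)
    for _ in range(n_iters):        # alternate marginal projections
        v = c / (K.T @ u)
        u = r / (K @ v)
    plan = u[:, None] * K * v[None, :]
    return plan.argmax(axis=1)       # relabel each example by transported mass
```

With a uniform `target_dist`, predictions that were skewed toward over-represented pretraining classes get pushed toward a balanced labeling without any downstream labels.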

