April 15, 2024, 4:42 a.m. | Changho Shin, Jitian Zhao, Sonia Cromp, Harit Vishwakarma, Frederic Sala

cs.LG updates on arXiv.org

arXiv:2404.08461v1 Announce Type: new
Abstract: Popular zero-shot models suffer due to artifacts inherited from pretraining. A particularly detrimental artifact, caused by unbalanced web-scale pretraining data, is mismatched label distribution. Existing approaches that seek to repair the label distribution are not suitable in zero-shot settings, as they have incompatible requirements such as access to labeled downstream task data or knowledge of the true label balance in the pretraining distribution. We sidestep these challenges and introduce a simple and lightweight approach to …
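The abstract describes correcting a mismatched label distribution in zero-shot predictions without labeled downstream data. The truncated text does not spell out the method, but the keyword "transport" suggests an optimal-transport-style rebalancing. As a rough illustration only (not the paper's actual algorithm), the sketch below uses entropic optimal transport (Sinkhorn iterations) to reassign examples to classes so that the predicted label marginal matches a chosen target prior; the function name, the uniform mass over examples, and the `eps`/iteration settings are all assumptions for the demo.

```python
import numpy as np

def sinkhorn_rebalance(probs, label_prior, n_iters=200, eps=0.1):
    """Reassign predictions so the label marginal matches `label_prior`.

    Hypothetical sketch via entropic optimal transport, not code from
    the paper. `probs` is an (n_examples, n_classes) matrix of zero-shot
    class probabilities; `label_prior` is the desired class marginal.
    """
    n, k = probs.shape
    # Cost of assigning example i to class j: negative log-probability.
    cost = -np.log(probs + 1e-12)
    K = np.exp(-cost / eps)            # Gibbs kernel
    r = np.full(n, 1.0 / n)            # uniform mass over examples (assumption)
    c = np.asarray(label_prior, dtype=float)
    c = c / c.sum()                    # normalize the target label marginal
    # Sinkhorn iterations: alternate scaling to match row/column marginals.
    u = np.ones(n)
    for _ in range(n_iters):
        v = c / (K.T @ u)
        u = r / (K @ v)
    plan = u[:, None] * K * v[None, :]  # transport plan; columns sum to c
    return plan.argmax(axis=1)          # rebalanced hard predictions
```

For example, if a model's raw argmax puts every example in one class but the target prior is balanced, the transport plan pushes the examples with the weakest relative preference over to the under-predicted class.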

