April 26, 2024, 4:44 a.m. | Eric Slyman, Stefan Lee, Scott Cohen, Kushal Kafle

cs.CV updates on arXiv.org

arXiv:2404.16123v1 Announce Type: new
Abstract: Recent dataset deduplication techniques have demonstrated that content-aware dataset pruning can dramatically reduce the cost of training Vision-Language Pretrained (VLP) models without significant performance losses compared to training on the original dataset. These results have been based on pruning commonly used image-caption datasets collected from the web -- datasets that are known to harbor harmful social biases that may then be codified in trained models. In this work, we evaluate how deduplication affects the prevalence …
