Nov. 13, 2023, 1:39 a.m. | /u/APaperADay

r/MachineLearning (www.reddit.com)

**Paper**: [https://arxiv.org/abs/2309.17425](https://arxiv.org/abs/2309.17425)

**Models**: [https://github.com/mlfoundations/open\_clip](https://github.com/mlfoundations/open_clip)

**Results table**: [https://github.com/mlfoundations/open\_clip/blob/main/docs/openclip\_results.csv](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv)

**X thread**: [https://twitter.com/gabriel\_ilharco/status/1721905861369762096](https://twitter.com/gabriel_ilharco/status/1721905861369762096)

**Abstract**:

>Large training sets have become a cornerstone of machine learning and are the foundation for recent advances in language modeling and multimodal learning. While data curation for pre-training is often still ad-hoc, one common paradigm is to first collect a massive pool of data from the Web and then filter this candidate pool down to an actual training set via various heuristics. In this work, we study the problem …
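
The paradigm the abstract describes — collect a massive candidate pool from the Web, then filter it down to a training set with a scoring heuristic — can be illustrated with a minimal sketch. The `Candidate` fields, the toy scoring function, and the 30% keep fraction below are hypothetical placeholders for illustration, not details taken from the paper:

```python
# Minimal sketch of the "collect, then filter" pre-training data paradigm:
# score each candidate example with a heuristic and keep only the top
# fraction of the pool. All names here are illustrative.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    image_path: str
    caption: str
    score: float = 0.0  # filled in by the scoring heuristic


def filter_pool(
    pool: List[Candidate],
    score_fn: Callable[[Candidate], float],
    keep_fraction: float = 0.3,
) -> List[Candidate]:
    """Score every candidate, then keep the top `keep_fraction` of the pool."""
    for example in pool:
        example.score = score_fn(example)
    ranked = sorted(pool, key=lambda ex: ex.score, reverse=True)
    keep_n = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep_n]


if __name__ == "__main__":
    # Toy heuristic: prefer longer captions (a stand-in for a real
    # image-text alignment score such as CLIP similarity).
    toy_pool = [
        Candidate("img0.jpg", "a dog"),
        Candidate("img1.jpg", "a golden retriever playing fetch in a park"),
        Candidate("img2.jpg", "photo"),
    ]
    kept = filter_pool(toy_pool, score_fn=lambda ex: len(ex.caption.split()))
    print([ex.caption for ex in kept])
```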

