Feb. 6, 2024, 5:53 a.m. | Qingpei Guo, Furong Xu, Hanxiao Zhang, Wang Ren, Ziping Ma, Lin Ju, Jian Wang, Jingdong Chen

cs.CV updates on arXiv.org

Vision-language foundation models like CLIP have revolutionized the field of artificial intelligence. Nevertheless, VLMs that support multiple languages, e.g., both Chinese and English, have lagged behind due to the relative scarcity of large-scale pretraining datasets. To this end, we introduce BM-6B, a comprehensive bilingual (Chinese-English) dataset with over 6 billion image-text pairs, aimed at enabling multimodal foundation models to understand images well in both languages. To handle a dataset of this scale, we propose a novel grouped aggregation approach for image-text …
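The abstract is cut off before the details of the grouped aggregation approach, so the following is only a plausible reading: assuming it refers to CLIP-style image-text contrastive pretraining, one way grouping helps at the 6-billion-pair scale is to score the batch group by group rather than materializing the full N×N similarity matrix at once. The function name, group size, and this interpretation are all assumptions for illustration, not the paper's actual method.

```python
# Minimal sketch (hypothetical): symmetric InfoNCE loss computed in groups,
# bounding peak memory to a (group_size x N) similarity matrix instead of
# the full (N x N) matrix. This is an illustrative reading of "grouped
# aggregation", not the method from the truncated abstract.
import torch
import torch.nn.functional as F

def grouped_contrastive_loss(img_emb, txt_emb, group_size=1024, temperature=0.07):
    """img_emb, txt_emb: (N, D) L2-normalized embeddings of N matched pairs."""
    n = img_emb.size(0)
    losses = []
    for start in range(0, n, group_size):
        end = min(start + group_size, n)
        # Similarities of this group of images against all texts, and vice versa.
        logits_i2t = img_emb[start:end] @ txt_emb.t() / temperature  # (g, N)
        logits_t2i = txt_emb[start:end] @ img_emb.t() / temperature  # (g, N)
        # The matching text for global image i sits in column i.
        targets = torch.arange(start, end, device=img_emb.device)
        losses.append(F.cross_entropy(logits_i2t, targets, reduction="sum"))
        losses.append(F.cross_entropy(logits_t2i, targets, reduction="sum"))
    return torch.stack(losses).sum() / (2 * n)
```

In a distributed setting, the same grouping idea would presumably apply to gathering embeddings across devices in chunks, trading one large all-gather for several smaller ones; that, too, is speculation pending the full paper.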
