April 8, 2024, 4:42 a.m. | Zitao Shuai, Liyue Shen

cs.LG updates on arXiv.org

arXiv:2404.03854v1 Announce Type: new
Abstract: Vision-language pre-training (VLP) has emerged as an efficient scheme for multimodal representation learning, but it requires large-scale multimodal data for pre-training, which is a particular obstacle for biomedical applications. To overcome this data limitation, federated learning (FL) can be a promising strategy to scale up the dataset for biomedical VLP while protecting data privacy. However, client data are often heterogeneous in real-world scenarios, and we observe that local training on heterogeneous client data would distort …
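For context, the federated learning setup the abstract builds on typically follows a federated-averaging loop: each client trains a copy of the global model on its private data, and a server averages the resulting weights. The sketch below is a minimal, generic illustration of that loop, not the paper's method; the model, data loaders, and hyperparameters are hypothetical stand-ins.

```python
# Minimal FedAvg sketch (hypothetical setup, not the paper's algorithm).
import copy
import torch
import torch.nn as nn

def local_update(global_model, loader, epochs=1, lr=1e-3):
    """Train a copy of the global model on one client's private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model.state_dict()

def fed_avg(global_model, client_loaders, rounds=10):
    """One communication round = local training on every client + weight averaging."""
    for _ in range(rounds):
        states = [local_update(global_model, dl) for dl in client_loaders]
        avg_state = {}
        for k in states[0]:
            if states[0][k].is_floating_point():
                avg_state[k] = torch.stack([s[k] for s in states]).mean(dim=0)
            else:
                # Non-float buffers (e.g. counters) are copied from the first client.
                avg_state[k] = states[0][k]
        global_model.load_state_dict(avg_state)
    return global_model
```

With heterogeneous (non-IID) client data, the locally trained weights drift apart before averaging, which is the distortion problem the abstract points to for federated vision-language pre-training.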

