Towards Multi-modal Transformers in Federated Learning
April 22, 2024, 4:42 a.m. | Guangyu Sun, Matias Mendieta, Aritra Dutta, Xin Li, Chen Chen
cs.LG updates on arXiv.org
Abstract: Multi-modal transformers mark significant progress across domains, but siloed high-quality data hinders their further improvement. To remedy this, federated learning (FL) has emerged as a promising privacy-preserving paradigm for training models without direct access to the raw data held by different clients. Despite its potential, a considerable research direction regarding unpaired uni-modal clients and the transformer architecture in FL remains unexplored. To fill this gap, this paper explores a transfer multi-modal federated learning …
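For context on the FL paradigm the abstract invokes, here is a minimal sketch of one FedAvg-style communication round in PyTorch. FedAvg (McMahan et al., 2017) is the canonical FL aggregation scheme, not the method proposed in this paper; the function name, unweighted averaging, and hyperparameters are illustrative assumptions.

```python
# Sketch of one FedAvg communication round (McMahan et al., 2017).
# Illustrative only -- not the aggregation used in this paper.
import copy
from typing import Dict, List

import torch
import torch.nn as nn
from torch.utils.data import DataLoader


def fedavg_round(global_model: nn.Module,
                 client_loaders: List[DataLoader],
                 local_epochs: int = 1,
                 lr: float = 0.01) -> nn.Module:
    """Each client fine-tunes a copy of the global model on its own
    private data; the server then averages the weights. Raw data never
    leaves the clients -- only model parameters are communicated."""
    client_states: List[Dict[str, torch.Tensor]] = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)  # client starts from global weights
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        local.train()
        for _ in range(local_epochs):
            for x, y in loader:  # private client data, stays on the client
                opt.zero_grad()
                loss_fn(local(x), y).backward()
                opt.step()
        client_states.append(local.state_dict())

    # Server-side aggregation: unweighted parameter averaging
    # (a real system would weight by each client's sample count).
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```

The challenge the paper targets sits on top of this loop: when clients hold unpaired uni-modal data (e.g., some with only images, some with only text), naive weight averaging across heterogeneous transformer clients is no longer straightforward.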