Feb. 23, 2024, 5:43 a.m. | Wenlong Deng, Christos Thrampoulidis, Xiaoxiao Li

cs.LG updates on arXiv.org

arXiv:2310.18285v3 Announce Type: replace
Abstract: Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency across various computer vision tasks. This suggests a promising paradigm shift toward adapting pre-trained ViT models to Federated Learning (FL) settings. However, data heterogeneity among FL clients presents a significant hurdle to effectively deploying ViT models. Existing Generalized FL (GFL) and Personalized FL (PFL) methods have limitations in balancing performance across both global and local data distributions. In …
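To make the VPT idea concrete, here is a minimal sketch of prompt tuning on a frozen ViT encoder: learnable prompt tokens are prepended to the patch embeddings, and only the prompts and classification head are trained. The class name `PromptedViT`, the backbone interface, and all hyperparameters are illustrative assumptions, not the paper's method.

```python
# Minimal VPT-style sketch (assumptions: a pre-trained ViT encoder that
# maps token sequences (batch, seq, dim) -> (batch, seq, dim); names and
# defaults here are hypothetical, not taken from the paper).
import torch
import torch.nn as nn

class PromptedViT(nn.Module):
    """Prepend learnable prompt tokens to patch embeddings of a frozen
    ViT encoder; only the prompts and the head receive gradients."""

    def __init__(self, backbone: nn.Module, embed_dim: int,
                 num_prompts: int = 10, num_classes: int = 10):
        super().__init__()
        self.backbone = backbone
        # Freeze the pre-trained encoder.
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Learnable prompt tokens, shared across all inputs.
        self.prompts = nn.Parameter(torch.zeros(1, num_prompts, embed_dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # patch_embeddings: (batch, num_patches, embed_dim)
        b = patch_embeddings.size(0)
        tokens = torch.cat(
            [self.prompts.expand(b, -1, -1), patch_embeddings], dim=1)
        features = self.backbone(tokens)  # (batch, seq, embed_dim)
        return self.head(features[:, 0])  # classify on the first token
```

In an FL setting this parameter-efficient design is attractive because, under the sketch above, only the small prompt and head tensors would need to be communicated between clients and the server while the ViT backbone stays fixed.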
