FeDeRA: Efficient Fine-tuning of Language Models in Federated Learning Leveraging Weight Decomposition
April 30, 2024, 4:42 a.m. | Yuxuan Yan, Shunpu Tang, Zhiguo Shi, Qianqian Yang
cs.LG updates on arXiv.org
Abstract: Pre-trained Language Models (PLMs) have shown excellent performance on various downstream tasks after fine-tuning. Nevertheless, escalating concerns surrounding user privacy have posed significant challenges to centralized training, which relies on extensive data collection. Federated learning (FL), which requires training only on the clients and aggregates weights on the server without sharing data, has emerged as a solution. However, the substantial parameter size of PLMs places a significant burden on the computational resources of client devices, while …
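The setup the abstract describes (clients fine-tune locally, the server aggregates weights without ever seeing raw data) pairs naturally with the low-rank weight decomposition the title alludes to. The sketch below is not the paper's implementation; under assumed names (rank r, num_clients, a single weight matrix W), it only illustrates how an SVD-based low-rank factorization keeps per-client trainable and communicated parameters small, and how a FedAvg-style server step averages those factors.

# Minimal sketch (assumptions, not FeDeRA itself): low-rank decomposition
# of one pre-trained layer plus FedAvg-style aggregation of the factors.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a single pre-trained weight matrix of the PLM.
W = rng.standard_normal((768, 768))

# Truncated SVD: W is approximated by A @ B with rank r << 768, so each
# client trains and transmits ~2*768*r parameters instead of 768*768.
r = 8
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]          # (768, r)
B = Vt[:r, :]                 # (r, 768)

# Simulate local fine-tuning on a few clients: each produces its own
# update to the small factors without ever sharing raw data.
num_clients = 4
client_updates = []
for _ in range(num_clients):
    dA = 0.01 * rng.standard_normal(A.shape)   # stand-in for local SGD
    dB = 0.01 * rng.standard_normal(B.shape)
    client_updates.append((A + dA, B + dB))

# Server-side FedAvg: average only the low-rank factors, which are the
# only tensors that need to travel between clients and server.
A_global = np.mean([a for a, _ in client_updates], axis=0)
B_global = np.mean([b for _, b in client_updates], axis=0)

# Reconstruct the layer from the aggregated factors; a LoRA-style variant
# would instead keep W frozen and add a separately trained low-rank delta.
W_global = A_global @ B_global
print("params per round:", A.size + B.size, "vs full matrix:", W.size)

The point of the sketch is the communication and compute argument made in the abstract: with r = 8, each round exchanges roughly 12K parameters per layer instead of ~590K for the full matrix, which is what makes fine-tuning PLMs feasible on resource-constrained client devices.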