Rethinking Resource Management in Edge Learning: A Joint Pre-training and Fine-tuning Design Paradigm
April 2, 2024, 7:43 p.m. | Zhonghao Lyu, Yuchen Li, Guangxu Zhu, Jie Xu, H. Vincent Poor, Shuguang Cui
cs.LG updates on arXiv.org (arxiv.org)
Abstract: In some applications, edge learning is shifting its focus from conventional learning from scratch to a new two-stage paradigm that unifies pre-training and task-specific fine-tuning. This paper considers the problem of joint communication and computation resource management in such a two-stage edge learning system. In this system, model pre-training is first conducted at an edge server via centralized learning on locally pre-stored general data, and task-specific fine-tuning is then performed at edge devices based on the …
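The abstract outlines a two-stage architecture: centralized pre-training on general data at the edge server, followed by task-specific fine-tuning at the edge devices. Since the abstract is truncated, the coordination scheme below is an assumption; this is a minimal Python sketch using a toy linear-regression model and federated-averaging-style fine-tuning, with all names and hyperparameters hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_step(w, X, y, lr=0.1):
    """One gradient step on the least-squares loss ||Xw - y||^2 / n."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Stage 1: centralized pre-training at the edge server on pre-stored general data.
d = 5
X_general = rng.normal(size=(200, d))
w_true = rng.normal(size=d)
y_general = X_general @ w_true + 0.1 * rng.normal(size=200)

w = np.zeros(d)
for _ in range(100):
    w = sgd_step(w, X_general, y_general)

# Stage 2: task-specific fine-tuning at the edge devices on local task data.
# How devices coordinate is not stated in the truncated abstract; this sketch
# assumes federated averaging of locally fine-tuned models as one plausible choice.
n_devices = 3
device_data = []
for _ in range(n_devices):
    Xi = rng.normal(size=(40, d))
    yi = Xi @ (w_true + 0.2 * rng.normal(size=d)) + 0.1 * rng.normal(size=40)
    device_data.append((Xi, yi))

for _ in range(20):  # fine-tuning rounds
    local_models = []
    for Xi, yi in device_data:
        wi = w.copy()  # each device starts from the pre-trained model
        for _ in range(5):  # local gradient steps per round
            wi = sgd_step(wi, Xi, yi, lr=0.05)
        local_models.append(wi)
    w = np.mean(local_models, axis=0)  # server aggregates device updates

print("fine-tuned model:", np.round(w, 3))
```

The split mirrors the paper's framing: the compute-heavy pre-training stage runs once at the server, while the lighter fine-tuning rounds place communication and computation load on the devices, which is where the joint resource-management problem arises.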