Towards Robust and Efficient Cloud-Edge Elastic Model Adaptation via Selective Entropy Distillation
Feb. 28, 2024, 5:46 a.m. | Yaofo Chen, Shuaicheng Niu, Shoukai Xu, Hengjie Song, Yaowei Wang, Mingkui Tan
Source: cs.CV updates on arXiv.org
Abstract: The conventional deep learning paradigm often involves training a deep model on a server and then deploying the model, or its distilled variants, to resource-limited edge devices. Usually, the models remain fixed once deployed (at least for some period) because of the potentially high cost of model adaptation for both the server and edge sides. However, in many real-world scenarios, the test environments may change dynamically (known as distribution shifts), which often results in …
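The abstract is cut off before the method details, but given the premise of selective entropy distillation, a minimal sketch of entropy-based test-sample selection at the edge might look like the following. Everything here is an illustrative assumption, not the paper's actual algorithm: the function name, the entropy band, and the threshold values are invented, and the general idea sketched is simply that confidently predicted (low-entropy) samples add little to adaptation while very uncertain (high-entropy) samples may be unreliable, so only an intermediate band is worth uploading to the cloud.

```python
import torch
import torch.nn.functional as F

def select_samples_by_entropy(logits: torch.Tensor,
                              e_low: float = 0.1,
                              e_high: float = 0.4) -> torch.Tensor:
    """Return a boolean mask over the batch marking samples whose
    prediction entropy lies inside (e_low, e_high) * ln(num_classes).

    Assumption-laden sketch: low-entropy samples are treated as
    uninformative for adaptation, high-entropy samples as unreliable;
    only in-band samples would be uploaded from the edge to the cloud.
    The 0.1 / 0.4 defaults are arbitrary placeholders.
    """
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    max_entropy = torch.log(torch.tensor(float(logits.size(1))))
    lo, hi = e_low * max_entropy, e_high * max_entropy
    return (entropy > lo) & (entropy < hi)

# Example with stand-in model and data (both hypothetical).
edge_model = torch.nn.Linear(64, 10)   # placeholder for the edge model
x = torch.randn(32, 64)                # a batch of test inputs at the edge
mask = select_samples_by_entropy(edge_model(x))
to_upload = x[mask]                    # only these samples go to the cloud
```

Filtering of this kind would address both goals named in the title: robustness (unreliable predictions are kept out of the adaptation signal) and efficiency (fewer samples cross the cloud-edge link).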
Tags: arXiv, cs.CV, cloud, deep learning, distillation, edge devices, entropy, model adaptation, server, training