May 25, 2022, 1:10 a.m. | Zhiwei Hao, Guanyu Xu, Yong Luo, Han Hu, Jianping An, Shiwen Mao

cs.LG updates on arXiv.org

Recently, deploying deep neural network (DNN) models via collaborative
inference, which splits a pre-trained model into two parts and executes them on
user equipment (UE) and an edge server respectively, has become attractive.
However, the large intermediate features of DNNs impede flexible decoupling,
and existing approaches either focus on the single-UE scenario or simply define
tasks by their required CPU cycles, ignoring the indivisibility of a single
DNN layer. In this paper, we study the multi-agent collaborative inference
scenario, where …
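The split-execution scheme the abstract describes can be illustrated with a minimal sketch: a toy "model" as a list of layer functions, split at a layer boundary so the UE runs the head and the edge server runs the tail. The layers and split point below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of collaborative (split) inference. The toy layers and
# the split point are hypothetical, for illustration only.

def make_toy_model():
    # Each "layer" is a simple function; a real DNN layer is indivisible,
    # so a split can only happen at a layer boundary.
    return [
        lambda x: [v * 2 for v in x],   # layer 0
        lambda x: [v + 1 for v in x],   # layer 1
        lambda x: [v ** 2 for v in x],  # layer 2
    ]

def run(layers, x):
    for layer in layers:
        x = layer(x)
    return x

def collaborative_inference(model, x, split_point):
    # UE executes layers [0, split_point); the intermediate feature is
    # then "transmitted" to the edge server, which runs the remainder.
    head, tail = model[:split_point], model[split_point:]
    intermediate = run(head, x)      # on user equipment
    return run(tail, intermediate)   # on edge server

model = make_toy_model()
x = [1, 2, 3]
# Any layer-boundary split yields the same output as the unsplit model.
assert all(
    collaborative_inference(model, x, s) == run(model, x)
    for s in range(len(model) + 1)
)
```

The choice of `split_point` trades UE computation against the size of the intermediate feature that must cross the network, which is exactly the tension the abstract highlights.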

Tags: arxiv, collaborative, compression, dnn, edge, feature, inference, learning
