Nov. 4, 2022, 1:11 a.m. | Dacheng Li, Rulin Shao, Hongyi Wang, Han Guo, Eric P. Xing, Hao Zhang

cs.LG updates on arXiv.org

Enabling private inference is crucial for many cloud inference services that
are based on Transformer models. However, existing private inference solutions
for Transformers can increase the inference latency by more than 60x or
significantly compromise the quality of inference results. In this paper, we
design the framework MPCFormer using secure multi-party computation (MPC) and
Knowledge Distillation (KD). It can be used in tandem with many specifically
designed MPC-friendly approximations and trained Transformer models. MPCFormer
significantly speeds up Transformer model inference …
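The abstract does not spell out the specific approximations, but the general recipe it describes — replace operations that are expensive under MPC (such as softmax and GELU) with polynomial stand-ins, then use knowledge distillation from the original model to recover accuracy — can be sketched roughly as below. The quadratic softmax substitute, the constant `c`, and the distillation-loss weights are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of an MPC-friendly approximation plus knowledge distillation.
# Assumptions (not taken from the paper): a quadratic softmax substitute and a
# standard KL-based distillation loss with hypothetical weights alpha and T.
import torch
import torch.nn.functional as F


def approx_softmax(scores: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Polynomial stand-in for softmax: avoids exp, which is costly under MPC.

    Hypothetical form: normalize (x + c)^2 along `dim`.
    """
    c = 1.0  # illustrative constant, not from the paper
    quad = (scores + c) ** 2
    return quad / quad.sum(dim=dim, keepdim=True)


def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Blend hard-label cross-entropy with soft-label KL to the teacher."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In this sketch, the approximated (MPC-friendly) student model would be trained with `distillation_loss` against the original Transformer as teacher, so that the cheaper polynomial operations cost little accuracy at inference time.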

arxiv inference mpc transformer
