June 2, 2022, 1:11 a.m. | Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin Jiang, Haoyi Zhou, Jianxin Li

cs.CL updates on arXiv.org

As more and more pre-trained language models adopt on-cloud deployment, privacy concerns grow quickly, mainly due to the exposure of plain-text user data (e.g., search history, medical records, bank accounts). Privacy-preserving inference for transformer models is therefore in demand among cloud service users. To protect privacy, an attractive choice is to compute only on ciphertext under homomorphic encryption (HE). However, enabling pre-trained model inference on ciphertext data is difficult due to the complex computations in transformer blocks, which are …
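To make the setup concrete, here is a minimal sketch of ciphertext-only computation, assuming the TenSEAL Python library and the CKKS scheme (illustrative choices; the abstract does not name a toolkit). It shows how HE handles a linear layer directly on encrypted data, while the nonlinear parts of a transformer block (softmax, GELU, LayerNorm) remain the hard case the abstract alludes to.

```python
# A minimal sketch of ciphertext-only inference for a single linear layer,
# assuming the TenSEAL library with the CKKS scheme (illustrative choices,
# not the paper's stated setup). HE natively supports additions and
# multiplications, so linear layers work; nonlinearities do not.
import tenseal as ts

# Client side: set up CKKS parameters and encrypt the input activations.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # rotations are needed for dot products

plain_activations = [0.5, -1.2, 3.3, 0.0]
enc_activations = ts.ckks_vector(context, plain_activations)

# Server side: evaluate a dot product plus bias entirely on ciphertext;
# the server never sees the plain-text activations.
weights = [0.25, 0.5, -0.75, 1.0]
bias = 0.1
enc_output = enc_activations.dot(weights) + bias

# Client side: only the secret-key holder can decrypt the result.
print(enc_output.decrypt())  # approximately [-2.85]
```

In this split, the client keeps the secret key and the server operates only on encrypted vectors, which is the deployment model the abstract motivates for on-cloud transformer inference.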

arxiv, encryption, homomorphic encryption, inference, privacy, transformer
