Nov. 2, 2022, 1:15 a.m. | Dou Hu, Xiaolong Hou, Xiyang Du, Mengyuan Zhou, Lianxin Jiang, Yang Mo, Xiaofeng Shi

cs.CL updates on arXiv.org arxiv.org

Pre-trained language models have achieved promising performance on general benchmarks, but underperform when transferred to a specific domain. Recent works perform pre-training from scratch or continual pre-training on domain corpora. However, in many specific domains, the limited corpora make it difficult to learn precise representations. To address this issue, we propose a novel Transformer-based language model named VarMAE for domain-adaptive language understanding. Under the masked autoencoding objective, we design a context uncertainty learning module to encode the token's context into a …

arxiv autoencoder language masked autoencoder pre-training training understanding
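Since the abstract is truncated, the following is only a minimal PyTorch sketch of the generic masked-autoencoding (masked language modeling) objective that such domain-adaptive pre-training builds on. The names `MaskedAutoencodingLM` and `mask_tokens` are illustrative assumptions; this does not reproduce VarMAE or its context uncertainty learning module.

```python
# Minimal sketch of a masked-autoencoding pre-training step (illustrative only,
# not the VarMAE implementation). Positional embeddings are omitted for brevity.
import torch
import torch.nn as nn


def mask_tokens(input_ids, mask_token_id, mlm_prob=0.15):
    """Randomly replace a fraction of tokens with [MASK]; labels keep the
    original ids at masked positions and -100 elsewhere (ignored by the loss)."""
    labels = input_ids.clone()
    mask = torch.rand(input_ids.shape) < mlm_prob
    labels[~mask] = -100                      # compute loss only on masked positions
    corrupted = input_ids.clone()
    corrupted[mask] = mask_token_id
    return corrupted, labels


class MaskedAutoencodingLM(nn.Module):
    """Transformer encoder plus a token-reconstruction head, trained to
    recover the original tokens at masked positions."""

    def __init__(self, vocab_size, hidden=256, layers=4, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        enc_layer = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, input_ids, labels):
        h = self.encoder(self.embed(input_ids))          # contextual token states
        logits = self.head(h)                             # reconstruction logits
        return nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=-100)


# Usage sketch on a toy batch of token ids from a (hypothetical) domain corpus.
vocab_size, mask_id = 1000, 3
model = MaskedAutoencodingLM(vocab_size)
batch = torch.randint(4, vocab_size, (2, 16))             # 2 sequences of length 16
corrupted, labels = mask_tokens(batch, mask_id)
loss = model(corrupted, labels)
loss.backward()
```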
