C3: Continued Pretraining with Contrastive Weak Supervision for Cross Language Ad-Hoc Retrieval. (arXiv:2204.11989v1 [cs.IR])
April 27, 2022, 1:11 a.m. | Eugene Yang, Suraj Nair, Ramraj Chandradevan, Rebecca Iglesias-Flores, Douglas W. Oard
cs.CL updates on arXiv.org
Pretrained language models have improved effectiveness on numerous tasks, including ad-hoc retrieval. Recent work has shown that continuing to pretrain a language model with auxiliary objectives before fine-tuning on the retrieval task can further improve retrieval effectiveness. Unlike in monolingual retrieval, designing an appropriate auxiliary task for cross-language mappings is challenging. To address this challenge, we use comparable Wikipedia articles in different languages to further pretrain off-the-shelf multilingual pretrained models before fine-tuning on the retrieval task. We show that our approach …
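The abstract does not spell out the exact C3 objective, but a common way to realize contrastive weak supervision over comparable article pairs is an InfoNCE-style loss with in-batch negatives: embeddings of aligned articles in two languages are pulled together while other pairs in the batch are pushed apart. The following is a minimal sketch under that assumption; the model name, mean pooling, and temperature are illustrative choices, not the paper's specification.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Off-the-shelf multilingual pretrained model (illustrative choice).
model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

def embed(texts):
    """Mean-pool token embeddings into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = model(**batch).last_hidden_state         # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)   # (B, T, 1)
    return (out * mask).sum(1) / mask.sum(1)       # (B, H)

def contrastive_loss(src_texts, tgt_texts, temperature=0.05):
    """InfoNCE over comparable article pairs; the other pairs in the
    batch act as negatives (assumed objective, not the paper's exact one)."""
    src = F.normalize(embed(src_texts), dim=-1)
    tgt = F.normalize(embed(tgt_texts), dim=-1)
    logits = src @ tgt.T / temperature             # (B, B) similarity matrix
    labels = torch.arange(len(src_texts))          # diagonal = true pairs
    return F.cross_entropy(logits, labels)

# One continued-pretraining step on a (tiny) batch of comparable articles.
pairs = [("Machine learning is ...", "L'apprentissage automatique est ..."),
         ("The Eiffel Tower is ...", "La tour Eiffel est ...")]
loss = contrastive_loss([s for s, _ in pairs], [t for _, t in pairs])
loss.backward()  # then step an optimizer over model.parameters()

After this continued-pretraining phase, the model would be fine-tuned on the downstream cross-language retrieval task as the abstract describes.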