Aug. 11, 2022, 1:11 a.m. | Jia-Huei Ju, Jheng-Hong Yang, Chuan-Ju Wang

cs.CL updates on arXiv.org

Recently, much progress in natural language processing has been driven by
deep contextualized representations pretrained on large corpora. Typically,
fine-tuning these pretrained models for a specific downstream task relies on
single-view learning, which is inadequate, however, since a sentence can be
interpreted differently from different perspectives. Therefore, in this work,
we propose a text-to-text multi-view learning framework by incorporating an
additional view -- the text generation view -- into a typical single-view
passage ranking model. Empirically, the …
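The abstract describes training one text-to-text model on two views of a passage: a ranking view (predicting relevance) and a text generation view (generating text conditioned on the passage). A minimal sketch of one way such views could be combined is a weighted sum of per-view negative log-likelihood losses; the specific weighting scheme and function names here are illustrative assumptions, not the paper's actual formulation.

```python
import math

def ranking_loss(p_relevant: float) -> float:
    # Ranking view (assumed): negative log-likelihood of the model
    # generating the "relevant" target token for a true positive pair.
    return -math.log(p_relevant)

def generation_loss(token_probs: list[float]) -> float:
    # Generation view (assumed): mean negative log-likelihood of the
    # model generating each token of the associated text (e.g. a query).
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def multiview_loss(p_relevant: float,
                   token_probs: list[float],
                   alpha: float = 0.5) -> float:
    # Hypothetical combination: alpha trades off the two views.
    return alpha * ranking_loss(p_relevant) + \
        (1 - alpha) * generation_loss(token_probs)
```

With perfect predictions on both views the combined loss is zero; any miscalibrated probability on either view increases it, so both perspectives shape the shared representation.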

