Web: http://arxiv.org/abs/2204.07834

Sept. 22, 2022, 1:15 a.m. | Changtong Zan, Liang Ding, Li Shen, Yu Cao, Weifeng Liu, Dacheng Tao

cs.CL updates on arXiv.org

For multilingual sequence-to-sequence pretrained language models
(multilingual Seq2Seq PLMs), e.g. mBART, the self-supervised pretraining task
is trained on a wide range of monolingual languages, e.g. 25 languages from
CommonCrawl, while the downstream cross-lingual tasks generally operate on a
bilingual language subset, e.g. English-German. This creates a data
discrepancy, namely the domain discrepancy, and a cross-lingual learning
objective discrepancy, namely the task discrepancy, between the pretraining
and finetuning stages. To bridge the above cross-lingual domain and task gaps,
we extend the vanilla pretrain-finetune …
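
To make the gap the abstract describes concrete, here is a minimal sketch, assuming the Hugging Face Transformers library and the facebook/mbart-large-cc25 checkpoint; the example sentences and variable names are illustrative, and this is not the paper's code. It contrasts the monolingual denoising objective seen in pretraining with the bilingual translation objective seen in finetuning.

```python
# Minimal sketch (assumed setup, not the paper's code): contrast mBART's
# monolingual denoising pretraining objective with bilingual finetuning.
from transformers import MBartForConditionalGeneration, MBartTokenizer

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

# Pretraining-style step: reconstruct an English sentence from a corrupted
# version of itself (same language on both sides, one of the 25 CC25 languages).
denoise_tok = MBartTokenizer.from_pretrained(
    "facebook/mbart-large-cc25", src_lang="en_XX", tgt_lang="en_XX")
noisy = denoise_tok("The weather is <mask> today.",
                    text_target="The weather is nice today.",
                    return_tensors="pt")
denoise_loss = model(**noisy).loss  # self-supervised denoising loss

# Finetuning step: supervised translation on a single bilingual subset
# (English-German), which is where the domain and task gaps appear.
translate_tok = MBartTokenizer.from_pretrained(
    "facebook/mbart-large-cc25", src_lang="en_XX", tgt_lang="de_DE")
pair = translate_tok("The weather is nice today.",
                     text_target="Das Wetter ist heute schön.",
                     return_tensors="pt")
translate_loss = model(**pair).loss  # cross-entropy translation loss
```

Both stages optimize the same Seq2Seq cross-entropy, but over different data (25 monolingual corpora vs. one language pair) and different input-output relations (denoising vs. translation), which is exactly the domain and task discrepancy the paper targets.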

arxiv, cross-lingual, text, text generation, understanding
