Web: http://arxiv.org/abs/2209.10372

Sept. 22, 2022, 1:15 a.m. | Hui Su, Xiao Zhou, Houjing Yu, Yuwen Chen, Zilin Zhu, Yang Yu, Jie Zhou

cs.CL updates on arXiv.org

Large Language Models pre-trained with self-supervised learning have
demonstrated impressive zero-shot generalization capabilities on a wide
spectrum of tasks. In this work, we present WeLM: a well-read pre-trained
language model for Chinese that can seamlessly perform different types
of tasks with zero- or few-shot demonstrations. WeLM is trained with 10B
parameters by "reading" a curated high-quality corpus covering a wide range of
topics. We show that WeLM is equipped with broad knowledge on various domains
and languages. On …
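To make the zero-/few-shot usage pattern the abstract describes concrete, below is a minimal sketch of few-shot prompting a causal language model with the Hugging Face transformers library. The checkpoint name is a hypothetical placeholder, not an official WeLM release; substitute any Chinese causal LM you have access to. The in-context sentiment-labeling demonstrations are likewise illustrative, not taken from the paper.

# A minimal sketch of few-shot in-context prompting, assuming a generic
# causal Chinese LM on the Hugging Face Hub. The checkpoint name below is
# a hypothetical placeholder, not an official WeLM artifact.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "your-org/chinese-causal-lm"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Few-shot prompt: two in-context demonstrations followed by the query.
# The model is expected to continue the pattern (sentiment labeling).
# Demonstrations translate as "Review: ... Sentiment: positive/negative".
prompt = (
    "评论:这部电影太精彩了。情感:正面\n"
    "评论:服务态度很差。情感:负面\n"
    "评论:菜品新鲜,分量十足。情感:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=5,  # the label is short; a few tokens suffice
    do_sample=False,   # greedy decoding for a deterministic label
)

# Decode only the newly generated tokens, i.e. the predicted label.
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion.strip())

The same pattern covers the zero-shot case: drop the demonstration lines and keep only the final query, optionally prefixed with a natural-language task description.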

arxiv, language, language model
