Sept. 28, 2022, 1:16 a.m. | Hui Su, Xiao Zhou, Houjin Yu, Yuwen Chen, Zilin Zhu, Yang Yu, Jie Zhou

cs.CL updates on arXiv.org

Large Language Models pre-trained with self-supervised learning have
demonstrated impressive zero-shot generalization capabilities on a wide
spectrum of tasks. In this work, we present WeLM: a well-read pre-trained
language model for Chinese that is able to seamlessly perform different types
of tasks with zero or few-shot demonstrations. WeLM is trained with 10B
parameters by "reading" a curated high-quality corpus covering a wide range of
topics. We show that WeLM is equipped with broad knowledge on various domains
and languages. On …
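
To make "zero or few-shot demonstrations" concrete, below is a minimal sketch of few-shot (in-context) prompting of a causal language model through the Hugging Face transformers library. The checkpoint name is a placeholder for illustration only; nothing in the abstract says WeLM is distributed through this interface, so treat the whole snippet as an assumption about how such prompting is typically done, not as the authors' tooling.

# Minimal sketch of few-shot (in-context) prompting with a causal language model.
# The model id below is a placeholder, not a real checkpoint; substitute whatever
# model or API endpoint you actually have access to.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-causal-lm-checkpoint"  # placeholder id (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A few-shot prompt: two labeled demonstrations followed by the query.
# The model is expected to continue the pattern; no fine-tuning is involved.
prompt = (
    "Review: The food was cold and the staff was rude. Sentiment: negative\n"
    "Review: Great atmosphere, I will definitely come back. Sentiment: positive\n"
    "Review: The movie dragged on and I nearly fell asleep. Sentiment:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=3)
# Decode only the newly generated tokens after the prompt.
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion.strip())  # ideally prints "negative"

The point of the sketch is that the task is specified entirely in the prompt: swapping the demonstrations changes the task without touching the model's weights, which is the behavior the abstract attributes to WeLM across many task types.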

