Oct. 25, 2022, 1:18 a.m. | Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong

cs.CL updates on arXiv.org arxiv.org

There has been growing interest in dataset generation recently, owing to the
superior generative capacity of large pre-trained language models (PLMs). In
this paper, we study a flexible and efficient zero-shot learning method,
\textsc{ZeroGen}. Given a zero-shot task, we first generate a dataset from
scratch using PLMs in an unsupervised manner. Then, we train a tiny task model
(e.g., LSTM) under the supervision of the synthesized dataset. This approach
allows highly efficient inference as the final task model only has …
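The two-step recipe the abstract describes (synthesize a labeled dataset from label-conditioned PLM prompts, then fit a small task model on it) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `plm_generate` is a hypothetical stand-in for an unsupervised PLM sampling call (e.g., GPT-2 decoding), and a word-count classifier stands in for the paper's tiny LSTM.

```python
# Hedged sketch of a ZeroGen-style pipeline. All names are illustrative;
# plm_generate is a stub standing in for real PLM text generation.

LABELS = ["positive", "negative"]

def plm_generate(prompt, n=30):
    """Stand-in for an unsupervised PLM call: returns n synthetic
    sentences conditioned on the label mentioned in the prompt."""
    pos_words = ["great", "wonderful", "superb"]
    neg_words = ["awful", "boring", "terrible"]
    words = pos_words if "positive" in prompt else neg_words
    return [f"The movie was {words[i % len(words)]}." for i in range(n)]

def synthesize_dataset():
    """Step 1: build a labeled dataset from scratch, one
    label-conditioned prompt per class."""
    data = []
    for label in LABELS:
        prompt = f"Write a {label} movie review:"
        data.extend((text, label) for text in plm_generate(prompt))
    return data

def train_tiny_model(data):
    """Step 2: fit a tiny task model on the synthesized dataset
    (a word-count classifier here, in place of the paper's LSTM)."""
    counts = {label: {} for label in LABELS}
    for text, label in data:
        for tok in text.lower().split():
            tok = tok.strip(".,!?")
            counts[label][tok] = counts[label].get(tok, 0) + 1

    def predict(text):
        scores = {
            lab: sum(counts[lab].get(t.strip(".,!?"), 0)
                     for t in text.lower().split())
            for lab in LABELS
        }
        return max(scores, key=scores.get)

    return predict

data = synthesize_dataset()
model = train_tiny_model(data)
print(model("a superb film"))   # -> positive
print(model("it was boring"))   # -> negative
```

The point of the sketch is the division of labor: all supervision comes from the synthesized data, and only the tiny task model runs at inference time.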

