Oct. 25, 2022, 1:17 a.m. | Jiacheng Ye, Jiahui Gao, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong

cs.CL updates on arXiv.org arxiv.org

Recently, dataset-generation-based zero-shot learning has shown promising
results by training a task-specific model with a dataset synthesized from large
pre-trained language models (PLMs). The final task-specific model often
achieves comparable or even better performance than PLMs under the zero-shot
setting, with orders of magnitude fewer parameters. However, synthetic datasets
have their drawbacks: they have long suffered from quality issues
(e.g., low informativeness and redundancy). This explains why massive
synthetic data does not lead to better performance -- a …
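The pipeline the abstract describes can be illustrated with a minimal sketch. Here a hypothetical `plm_generate` stub stands in for prompting a real PLM (which would return label-conditioned continuations), and a tiny Naive Bayes classifier plays the role of the task-specific model trained on the synthesized data; all names and canned outputs below are illustrative assumptions, not the paper's implementation.

```python
import math
import random
from collections import Counter, defaultdict

# Hypothetical stand-in for a large pre-trained language model (PLM).
# A real system would prompt the PLM with a label-conditioned template
# (e.g. "The movie review in positive sentiment is:") and collect its
# continuations as synthetic training examples.
def plm_generate(label, n):
    canned = {
        "positive": ["a wonderful heartfelt film",
                     "great acting and a great story",
                     "wonderful and moving"],
        "negative": ["a dull and boring mess",
                     "boring plot and bad acting",
                     "dull bad and boring"],
    }
    rng = random.Random(0)  # deterministic for the sketch
    return [rng.choice(canned[label]) for _ in range(n)]

def synthesize_dataset(labels, per_label):
    # Collect (text, label) pairs from the (stubbed) PLM.
    return [(text, label)
            for label in labels
            for text in plm_generate(label, per_label)]

# Tiny task-specific model: bag-of-words Naive Bayes, orders of
# magnitude smaller than the PLM that produced its training data.
class NaiveBayes:
    def fit(self, data):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter()
        for text, label in data:
            self.label_counts[label] += 1
            self.word_counts[label].update(text.split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        best, best_lp = None, float("-inf")
        total = sum(self.label_counts.values())
        for label in self.label_counts:
            lp = math.log(self.label_counts[label] / total)
            counts = self.word_counts[label]
            denom = sum(counts.values()) + len(self.vocab)
            for w in text.split():
                lp += math.log((counts[w] + 1) / denom)  # Laplace smoothing
            if lp > best_lp:
                best, best_lp = label, lp
        return best

data = synthesize_dataset(["positive", "negative"], per_label=20)
model = NaiveBayes().fit(data)
print(model.predict("a wonderful story"))  # → positive
```

The quality issues the abstract raises are visible even here: the stub's outputs are highly redundant, so adding more synthetic examples adds little new information for the small model to learn from.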

