Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation. (arXiv:2209.11000v1 [cs.CL])
Sept. 23, 2022, 1:15 a.m. | Xingdi Yuan, Tong Wang, Yen-Hsiang Wang, Emery Fine, Rania Abdelghani, Pauline Lucas, Hélène Sauzéon, Pierre-Yves Oudeyer
cs.CL updates on arXiv.org
Large Language Models (LLMs) have in recent years demonstrated impressive
prowess in natural language generation. A common practice for improving
generation diversity is to sample multiple outputs from the model. However,
there is no simple and robust way to select the best output from these
stochastic samples. As a case study framed in the context of question
generation, we propose two prompt-based approaches to selecting high-quality
questions from a set of LLM-generated candidates. Our method works under the
constraints of …
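The pipeline the abstract describes — draw several stochastic samples from the model, then select the best candidate with a scoring step — can be sketched as follows. This is a minimal illustration, not the paper's method: `generate` and `score` here are toy stand-ins for an LLM sampler and for the paper's prompt-based scorers.

```python
import random

def sample_candidates(generate, context, n=5):
    """Draw n stochastic samples (e.g. via temperature sampling) for a context."""
    return [generate(context) for _ in range(n)]

def select_best(score, candidates):
    """Select the highest-scoring candidate from the sampled set."""
    return max(candidates, key=score)

# Toy "LLM" and scorer for illustration only (assumptions, not the paper's models).
random.seed(0)
QUESTIONS = [
    "What is the capital of France?",
    "capital France?",
    "Which city is the capital of France?",
]
generate = lambda ctx: random.choice(QUESTIONS)
score = lambda q: len(q.split())  # toy proxy; the paper scores via prompting

candidates = sample_candidates(generate, "Paris is the capital of France.", n=5)
best = select_best(score, candidates)
```

In the paper's setting, `score` would itself be a call to the pre-trained LLM with a selection prompt, rather than a hand-written heuristic.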