June 21, 2024, 4:41 a.m. | Naiming Liu, Zichao Wang, Richard Baraniuk

cs.CL updates on arXiv.org

arXiv:2406.13188v1 Announce Type: new
Abstract: Despite rapid advancements in large language models (LLMs), QG remains a challenging problem due to its complicated process, open-ended nature, and the diverse settings in which question generation occurs. A common approach to address these challenges involves fine-tuning smaller, custom models using datasets containing background context, question, and answer. However, obtaining suitable domain-specific datasets with appropriate context is often more difficult than acquiring question-answer pairs. In this paper, we investigate training QG models using synthetic …

