Unfamiliar Finetuning Examples Control How Language Models Hallucinate
March 12, 2024, 4:41 a.m. | Katie Kang, Eric Wallace, Claire Tomlin, Aviral Kumar, Sergey Levine
cs.LG updates on arXiv.org
Abstract: Large language models (LLMs) have a tendency to generate plausible-sounding yet factually incorrect responses, especially when queried on unfamiliar concepts. In this work, we explore the underlying mechanisms that govern how finetuned LLMs hallucinate. Our investigation reveals an interesting pattern: as inputs become more unfamiliar, LLM outputs tend to default towards a "hedged" prediction, whose form is determined by how the unfamiliar examples in the finetuning data are supervised. Thus, by strategically modifying these examples' …
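One plausible way to read the abstract's finding into practice: whatever supervision target the unfamiliar finetuning examples carry becomes the model's default output on unfamiliar test inputs, so relabeling those examples with an explicit hedge (e.g., "I don't know") would steer the model toward abstention instead of hallucination. The sketch below illustrates only that dataset-relabeling step; the familiarity scorer, threshold, and hedge string are illustrative assumptions, not the paper's prescribed interface.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class FinetuneExample:
    prompt: str
    target: str

# Hedged fallback target: per the abstract, the supervision given to
# unfamiliar examples determines the model's default ("hedged") prediction.
HEDGE_TARGET = "I don't know."

def relabel_unfamiliar(
    examples: List[FinetuneExample],
    familiarity: Callable[[str], float],  # hypothetical scorer, e.g. pretrained-model confidence
    threshold: float = 0.5,               # assumed cutoff, not from the paper
) -> List[FinetuneExample]:
    """Swap the target of each unfamiliar example for a hedged response."""
    return [
        FinetuneExample(ex.prompt, HEDGE_TARGET)
        if familiarity(ex.prompt) < threshold
        else ex
        for ex in examples
    ]

# Toy usage: a stand-in scorer that flags one prompt as unfamiliar.
if __name__ == "__main__":
    data = [
        FinetuneExample("Who wrote Hamlet?", "William Shakespeare"),
        FinetuneExample("Who won the 2087 World Cup?", "Brazil"),  # unanswerable target
    ]
    toy_score = lambda p: 0.0 if "2087" in p else 1.0
    for ex in relabel_unfamiliar(data, toy_score):
        print(f"{ex.prompt!r} -> {ex.target!r}")

In a real pipeline, the familiarity score might come from the pretrained model's likelihood or self-consistency on the prompt; the relabeled dataset would then feed into an ordinary finetuning run.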