Your Finetuned Large Language Model is Already a Powerful Out-of-distribution Detector
April 16, 2024, 4:42 a.m. | Andi Zhang, Tim Z. Xiao, Weiyang Liu, Robert Bamler, Damon Wischik
cs.LG updates on arXiv.org
Abstract: We revisit the likelihood ratio between a pretrained large language model (LLM) and its finetuned variant as a criterion for out-of-distribution (OOD) detection. The intuition behind this criterion is that the pretrained LLM has prior knowledge about OOD data owing to its large amount of training data, and once finetuned on the in-distribution data, the LLM has sufficient knowledge to distinguish between the two. Leveraging the power of LLMs, we show that, for the …
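The criterion the abstract describes reduces to scoring an input by the log-likelihood ratio between the pretrained model and its finetuned variant. Below is a minimal sketch of that idea using HuggingFace transformers; it is not the paper's implementation, and the model names (gpt2 as the pretrained model, "path/to/finetuned-gpt2" as its finetuned variant) are illustrative placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def avg_neg_log_likelihood(model, tokenizer, text):
    """Mean per-token negative log-likelihood of `text` under `model`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # For causal LMs, `loss` is the mean NLL over next-token predictions.
        out = model(**inputs, labels=inputs["input_ids"])
    return out.loss.item()

def ood_score(pretrained, finetuned, tokenizer, text):
    """Per-token log-likelihood ratio: log p_pretrained(x) - log p_finetuned(x).

    Larger scores mean the finetuned model assigns `text` lower likelihood
    than the pretrained model does, i.e. `text` looks out-of-distribution
    relative to the finetuning (in-distribution) data.
    """
    nll_pre = avg_neg_log_likelihood(pretrained, tokenizer, text)
    nll_ft = avg_neg_log_likelihood(finetuned, tokenizer, text)
    return nll_ft - nll_pre

tok = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2").eval()
# Hypothetical finetuned checkpoint; substitute your own finetuned model.
ft = AutoModelForCausalLM.from_pretrained("path/to/finetuned-gpt2").eval()

print(ood_score(base, ft, tok, "An example input to score."))
```

In practice one would threshold this score on a held-out in-distribution set; since both models score the same tokenization of the same text, the per-token average makes scores comparable across inputs of different lengths.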