April 16, 2024, 4:42 a.m. | Andi Zhang, Tim Z. Xiao, Weiyang Liu, Robert Bamler, Damon Wischik

cs.LG updates on arXiv.org

arXiv:2404.08679v1 Announce Type: cross
Abstract: We revisit the likelihood ratio between a pretrained large language model (LLM) and its finetuned variant as a criterion for out-of-distribution (OOD) detection. The intuition behind such a criterion is that, the pretrained LLM has the prior knowledge about OOD data due to its large amount of training data, and once finetuned with the in-distribution data, the LLM has sufficient knowledge to distinguish their difference. Leveraging the power of LLMs, we show that, for the …
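The criterion lends itself to a short sketch. Below is a minimal illustration in Python using the Hugging Face transformers API; the model names ("gpt2" and the local finetuned checkpoint path) and the sign convention of the score are assumptions for illustration, not the paper's exact setup.

# Minimal sketch of a likelihood-ratio OOD score between a pretrained
# causal LM and its finetuned variant, as described in the abstract.
# Model names and the score's sign convention are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sequence_log_likelihood(model, tokenizer, text, device="cpu"):
    """Total log-likelihood of `text` under a causal LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        out = model(ids, labels=ids)
    # `out.loss` is the mean negative log-likelihood per predicted token,
    # so scale by the number of predicted tokens and negate.
    num_predicted = ids.shape[1] - 1
    return -out.loss.item() * num_predicted

def ood_score(pretrained, finetuned, tokenizer, text, device="cpu"):
    """Log-likelihood ratio: log p_pretrained(x) - log p_finetuned(x).

    Under the abstract's intuition, in-distribution text gains more
    likelihood from finetuning, so a larger score suggests OOD input.
    """
    return (sequence_log_likelihood(pretrained, tokenizer, text, device)
            - sequence_log_likelihood(finetuned, tokenizer, text, device))

if __name__ == "__main__":
    # "gpt2" and the finetuned checkpoint path are hypothetical examples.
    tok = AutoTokenizer.from_pretrained("gpt2")
    base = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    tuned = AutoModelForCausalLM.from_pretrained("./gpt2-finetuned").eval()
    print(ood_score(base, tuned, tok, "Some candidate input text."))

Here the score is simply a difference of total log-likelihoods, which is equivalent to the log of the likelihood ratio; thresholding it yields a binary OOD decision.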
