Feb. 15, 2024, 5:44 a.m. | Pittawat Taveekitworachai, Febri Abdullah, Ruck Thawonmas

cs.LG updates on arXiv.org arxiv.org

arXiv:2401.08273v2 Announce Type: replace-cross
Abstract: This paper presents null-shot prompting. Null-shot prompting exploits hallucination in large language models (LLMs) by instructing LLMs to utilize information from an "Examples" section that never exists within the provided context to perform a task. While reducing hallucination is crucial for daily and critical uses of LLMs, we propose that in the current landscape, in which these LLMs still hallucinate, it is in fact possible to exploit hallucination to increase performance in performing …
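As a rough illustration of the idea described in the abstract, the sketch below contrasts a standard zero-shot prompt with a null-shot prompt that points the model at a nonexistent "Examples" section. The exact phantom-reference wording and the helper names are illustrative assumptions, not the paper's actual prompt.

```python
# Minimal sketch of null-shot prompting (assumed prompt wording, not the paper's).

def zero_shot_prompt(task_instruction: str, query: str) -> str:
    """Standard zero-shot prompt: task instruction plus query, no examples."""
    return f"{task_instruction}\n\nInput: {query}\nAnswer:"


def null_shot_prompt(task_instruction: str, query: str) -> str:
    """Null-shot prompt: refer to an "Examples" section that is never
    actually included in the context, relying on the model's tendency
    to hallucinate having seen such examples."""
    phantom_reference = (
        'Look at the examples in the "Examples" section and utilize the '
        "examples and information from that section to perform the following task."
    )
    # Note: no Examples section is appended -- the omission is the point.
    return f"{phantom_reference}\n\n{task_instruction}\n\nInput: {query}\nAnswer:"


if __name__ == "__main__":
    instruction = "Classify the sentiment of the input as positive or negative."
    query = "The battery life on this laptop is fantastic."
    print(zero_shot_prompt(instruction, query))
    print("---")
    print(null_shot_prompt(instruction, query))
```

Either prompt string would then be sent to an LLM as-is; the comparison in the paper is between the two variants on the same task.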

