April 22, 2024, 4:43 a.m. | Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu

cs.LG updates on arXiv.org | arxiv.org

arXiv:2311.04205v2 Announce Type: replace-cross
Abstract: Misunderstandings arise not only in interpersonal communication but also between humans and Large Language Models (LLMs). Such discrepancies can make LLMs interpret seemingly unambiguous questions in unexpected ways, yielding incorrect responses. While it is widely acknowledged that the quality of a prompt, such as a question, significantly impacts the quality of the response provided by LLMs, a systematic method for crafting questions that LLMs can better comprehend is still underdeveloped. In this paper, we present …

arxiv cs.ai cs.cl cs.lg language models large language models questions rephrase
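
The abstract is truncated before the method itself is described, but the paper's title and the "rephrase" tag point to a rephrase-then-respond style of prompting, where the model first restates an ambiguous question before answering it. The following is a minimal sketch of that general pattern under those assumptions; the prompt templates and the `llm_complete()` placeholder are illustrative and do not reproduce the authors' exact prompts or any specific LLM client API.

```python
# Minimal sketch of a rephrase-then-respond prompting flow.
# Assumption: both prompt templates below are illustrative, not the paper's own.

def llm_complete(prompt: str) -> str:
    """Hypothetical placeholder for a chat-completion call.

    Replace this with a call to your LLM client of choice; it returns a canned
    string here so the sketch runs as-is.
    """
    return f"[model response to: {prompt[:60]}...]"


def rephrase_and_respond(question: str) -> str:
    """Ask the model to restate a possibly ambiguous question, then answer the
    restated version alongside the original."""
    # Step 1: have the model rephrase and expand the question to remove ambiguity.
    rephrased = llm_complete(
        "Rephrase and expand the following question so that it is unambiguous, "
        f"and state only the rephrased question:\n{question}"
    )
    # Step 2: answer using both the original and the rephrased question as context.
    answer = llm_complete(
        f"Original question: {question}\n"
        f"Rephrased question: {rephrased}\n"
        "Answer the rephrased question."
    )
    return answer


if __name__ == "__main__":
    # Example of the kind of seemingly unambiguous question the abstract mentions,
    # where an unstated interpretation could change the answer.
    print(rephrase_and_respond("Was the last US census conducted in an even year?"))
```

A one-pass variant is also possible, where a single prompt instructs the model to rephrase the question and then answer it in the same response; the two-step version above simply makes the intermediate rephrasing explicit and inspectable.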
