April 2, 2024, 7:52 p.m. | Chen Cecilia Liu, Fajri Koto, Timothy Baldwin, Iryna Gurevych

cs.CL updates on arXiv.org arxiv.org

arXiv:2309.08591v2 Announce Type: replace
Abstract: Large language models (LLMs) are highly adept at question answering and reasoning tasks, but when reasoning in a situational context, human expectations vary depending on the relevant cultural common ground. As languages are associated with diverse cultures, LLMs should also be culturally-diverse reasoners. In this paper, we study the ability of a wide range of state-of-the-art multilingual LLMs (mLLMs) to reason with proverbs and sayings in a conversational context. Our experiments reveal that: (1) mLLMs …
