June 25, 2024, 4:42 a.m. | Yizhuo Zhang, Heng Wang, Shangbin Feng, Zhaoxuan Tan, Xiaochuang Han, Tianxing He, Yulia Tsvetkov

cs.CL updates on arXiv.org

arXiv:2406.15992v1 Announce Type: new
Abstract: Large language models (LLMs) demonstrate great potential for problems with implicit graphical structures, and recent work seeks to enhance the graph reasoning capabilities of LLMs through specialized instruction tuning. The resulting "graph LLMs" are evaluated only in in-distribution settings, so it remains underexplored whether LLMs are learning generalizable graph reasoning skills or merely memorizing patterns in the synthetic training data. To this end, we propose the NLGift benchmark, an evaluation suite of LLM graph reasoning …
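The truncated abstract does not detail the NLGift tasks themselves, but the distinction it probes, in-distribution accuracy versus generalization beyond the tuning distribution, can be illustrated with a small evaluation harness. The sketch below is a minimal illustration, not the paper's method: it assumes a synthetic graph-connectivity task and a hypothetical `query_llm` callable (prompt in, answer out), and compares accuracy on graph sizes matching the tuning data against larger, unseen sizes.

```python
import random
from collections import deque

def random_graph(n_nodes: int, n_edges: int) -> list[tuple[int, int]]:
    """Sample a simple undirected graph as an edge list.
    Assumes n_edges <= n_nodes * (n_nodes - 1) / 2."""
    edges = set()
    while len(edges) < n_edges:
        u, v = random.sample(range(n_nodes), 2)
        edges.add((min(u, v), max(u, v)))
    return sorted(edges)

def connected(edges, n_nodes, src, dst) -> bool:
    """Ground-truth connectivity via BFS."""
    adj = {i: [] for i in range(n_nodes)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def to_prompt(edges, src, dst) -> str:
    """Verbalize the graph task as a natural-language question."""
    edge_str = ", ".join(f"({u}, {v})" for u, v in edges)
    return (f"Graph edges: {edge_str}. Is there a path from node {src} "
            f"to node {dst}? Answer yes or no.")

def evaluate(query_llm, n_nodes: int, n_edges: int, trials: int = 100) -> float:
    """Accuracy of `query_llm` (a hypothetical str -> str callable)
    on randomly sampled connectivity questions of the given size."""
    correct = 0
    for _ in range(trials):
        edges = random_graph(n_nodes, n_edges)
        src, dst = random.sample(range(n_nodes), 2)
        truth = connected(edges, n_nodes, src, dst)
        answer = query_llm(to_prompt(edges, src, dst)).strip().lower()
        correct += answer.startswith("yes") == truth
    return correct / trials

# In-distribution: graph sizes matching the tuning data (e.g. 8 nodes).
# Out-of-distribution: larger graphs never seen during tuning (e.g. 20 nodes).
# A large gap between the two accuracies suggests pattern memorization
# rather than a generalizable graph reasoning skill.
```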
