LLM as Prompter: Low-resource Inductive Reasoning on Arbitrary Knowledge Graphs
Feb. 20, 2024, 5:52 a.m. | Kai Wang, Yuwei Xu, Zhiyong Wu, Siqiang Luo
Source: cs.CL updates on arXiv.org
Abstract: Knowledge Graph (KG) inductive reasoning, which aims to infer missing facts from new KGs not seen during training, has been widely adopted in various applications. One critical challenge of KG inductive reasoning is handling low-resource scenarios, where both textual and structural information are scarce. In this paper, we attempt to address this challenge with Large Language Models (LLMs). Specifically, we utilize state-of-the-art LLMs to generate a graph-structural prompt to enhance the pre-trained …
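The truncated abstract does not show the paper's actual prompt format, but the core idea of a "graph-structural prompt" can be sketched: serialize a KG neighborhood of (head, relation, tail) triples into text that an LLM can condition on. The function name, triple data, and wording below are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: rendering a KG neighborhood as a graph-structural
# text prompt for an LLM. Triples and phrasing are illustrative only;
# the paper's real prompt format is not shown in the truncated abstract.

def build_graph_prompt(triples, query):
    """Render (head, relation, tail) triples as numbered facts plus a query."""
    lines = ["Known facts from the knowledge graph:"]
    for i, (h, r, t) in enumerate(triples, 1):
        lines.append(f"{i}. ({h}) -[{r}]-> ({t})")
    head, rel = query
    lines.append(f"Question: which entity completes ({head}) -[{rel}]-> (?)")
    return "\n".join(lines)

triples = [
    ("Melbourne", "located_in", "Australia"),
    ("Australia", "has_capital", "Canberra"),
]
print(build_graph_prompt(triples, ("Australia", "has_capital")))
```

The resulting string would be passed to an LLM; in a low-resource setting, this structural serialization substitutes for the rich entity descriptions that text-based KG methods usually rely on.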