Oct. 11, 2022, 1:19 a.m. | Shangbin Feng, Zhaoxuan Tan, Wenqian Zhang, Zhenyu Lei, Yulia Tsvetkov

cs.CL updates on arXiv.org

With the advent of pre-trained language models (LMs), research efforts have
increasingly focused on infusing commonsense and domain-specific knowledge
into LMs to prepare them for downstream tasks. These works typically leverage
knowledge graphs, the de facto standard for symbolic knowledge representation,
alongside pre-trained LMs. While existing approaches draw on external
knowledge, it remains an open question how to jointly incorporate knowledge
graphs that represent varying contexts, from local (e.g., sentence-level) to
document-level to global knowledge, to enable knowledge-rich and interpretable
exchange …
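The abstract describes fusing knowledge graphs at three context granularities (sentence, document, global) with a pre-trained LM's representations. Below is a minimal sketch of one way such multi-level fusion could look, assuming pooled entity embeddings per level combined with token states via a learned linear map. The module name, dimensions, shared embedding table, and mean-pooling are all illustrative assumptions, not the authors' method.

```python
# A minimal sketch (not the paper's implementation) of fusing knowledge-graph
# entity embeddings at local (sentence), document, and global context levels
# with a language model's token representations.
import torch
import torch.nn as nn

class MultiContextKnowledgeFusion(nn.Module):
    def __init__(self, hidden_dim: int = 256, num_entities: int = 1000):
        super().__init__()
        # One entity-embedding table shared across context levels (assumption).
        self.entity_emb = nn.Embedding(num_entities, hidden_dim)
        # Separate projections let each context level contribute differently.
        self.local_proj = nn.Linear(hidden_dim, hidden_dim)
        self.doc_proj = nn.Linear(hidden_dim, hidden_dim)
        self.global_proj = nn.Linear(hidden_dim, hidden_dim)
        self.fuse = nn.Linear(4 * hidden_dim, hidden_dim)

    def forward(self, token_states, local_ids, doc_ids, global_ids):
        # token_states: (batch, seq_len, hidden) from a pre-trained LM.
        # *_ids: entity ids linked at each context level, shape (batch, k).
        local_k = self.local_proj(self.entity_emb(local_ids)).mean(dim=1)
        doc_k = self.doc_proj(self.entity_emb(doc_ids)).mean(dim=1)
        global_k = self.global_proj(self.entity_emb(global_ids)).mean(dim=1)
        # Broadcast each pooled knowledge vector across the sequence and
        # fuse with the token states via a learned linear map.
        seq_len = token_states.size(1)

        def expand(v):
            return v.unsqueeze(1).expand(-1, seq_len, -1)

        fused = torch.cat(
            [token_states, expand(local_k), expand(doc_k), expand(global_k)],
            dim=-1,
        )
        return self.fuse(fused)

# Toy usage with random tensors standing in for real LM outputs and
# entity-linking results.
model = MultiContextKnowledgeFusion()
tokens = torch.randn(2, 16, 256)
ids = torch.randint(0, 1000, (2, 4))
out = model(tokens, ids, ids, ids)
print(out.shape)  # torch.Size([2, 16, 256])
```

In a real system the entity ids at each level would come from an entity linker scoped to the sentence, the document, and the whole knowledge graph respectively; mean-pooling here is just a placeholder for whatever aggregation (e.g., attention over linked entities) the actual method uses.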

Tags: arxiv, document understanding, global integration, knowledge understanding
