Sept. 16, 2022, 1:16 a.m. | Yang Liu, Zequn Sun, Guangyao Li, Wei Hu

cs.CL updates on arXiv.org

Knowledge graph (KG) embedding seeks to learn vector representations for
entities and relations. Conventional models reason over graph structures, but
they suffer from graph incompleteness and long-tail entities.
Recent studies have used pre-trained language models to learn embeddings based
on the textual information of entities and relations, but they cannot take
advantage of graph structures. In this paper, we show empirically that these two
kinds of features are complementary for KG embedding. To this end, we propose …
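The abstract is truncated above, so the proposed method is not spelled out here; the tags suggest distillation is involved. As an illustration only, the following minimal Python/PyTorch sketch shows one hypothetical way structural and textual signals could be combined through distillation: a TransE-style structure model is trained on hard tail-entity labels while also matching the soft predictions of a text-based scorer. All names (StructModel, kl_distill, the random text_logits stand-in) are assumptions for illustration, not the paper's actual method.

# Hypothetical sketch: structure-based KG embedding + distillation from a
# text-based scorer. Illustrative only; not the paper's actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructModel(nn.Module):
    """TransE-style structure model: score(h, r, t) = -||e_h + e_r - e_t||."""
    def __init__(self, n_entities: int, n_relations: int, dim: int = 64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def logits(self, h: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
        # Score every entity as a candidate tail; higher = more plausible.
        q = self.ent(h) + self.rel(r)              # (batch, dim)
        return -torch.cdist(q, self.ent.weight)    # (batch, n_entities)

def kl_distill(student_logits, teacher_logits, tau: float = 2.0):
    """Soft-label distillation: KL between the two models' tail distributions."""
    s = F.log_softmax(student_logits / tau, dim=-1)
    t = F.softmax(teacher_logits.detach() / tau, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * tau ** 2

# Toy usage: the structure model learns from hard labels plus the soft
# predictions of a "text model" (here just random logits as a stand-in).
n_ent, n_rel, batch = 100, 10, 8
struct = StructModel(n_ent, n_rel)
h = torch.randint(0, n_ent, (batch,))
r = torch.randint(0, n_rel, (batch,))
t_true = torch.randint(0, n_ent, (batch,))
text_logits = torch.randn(batch, n_ent)  # placeholder for a PLM-based scorer

logits = struct.logits(h, r)
loss = F.cross_entropy(logits, t_true) + kl_distill(logits, text_logits)
loss.backward()
print(f"loss = {loss.item():.4f}")

In a mutual (co-)distillation setup, the text model would symmetrically distill from the structure model as well, so each side compensates for what the other lacks: graph structure for the text model, textual information for the structure model.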

arxiv, distillation, embedding, graph, knowledge, knowledge graph
