Aug. 23, 2022, 1:14 a.m. | Yang Liu, Zequn Sun, Guangyao Li, Wei Hu

cs.CL updates on arXiv.org arxiv.org

Knowledge graph (KG) embedding seeks to learn vector representations for
entities and relations. Conventional models reason over graph structures, but
they suffer from graph incompleteness and long-tail entities. Recent studies
have used pre-trained language models to learn embeddings from the textual
information of entities and relations, but these models cannot take advantage
of graph structures. In this paper, we show empirically that the two kinds of
features are complementary for KG embedding. To this end, we propose …

Tags: arxiv, distillation, embedding, graph, knowledge, knowledge graph, learning
