July 25, 2022, 1:11 a.m. | Peyman Baghershahi, Reshad Hosseini, Hadi Moradi

cs.LG updates on arXiv.org

A few models have tried to tackle the link prediction problem, also known as
knowledge graph completion, by embedding knowledge graphs in comparatively low
dimensions. However, state-of-the-art results are attained at the cost of
considerably increasing the dimensionality of the embeddings, which causes
scalability issues for huge knowledge bases. Transformers have recently been
used successfully as powerful encoders for knowledge graphs, but the available
models still have scalability issues. To address this limitation, we
introduce a Transformer-based model …
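The abstract describes the general recipe of scoring candidate triples with a Transformer encoder over low-dimensional entity and relation embeddings. The sketch below is an illustrative PyTorch rendering of that generic idea, not the authors' actual model; the class name `TransformerKGScorer`, the 2-token (head, relation) sequence, the mean pooling, and the dot-product scoring head are all assumptions made for the example.

```python
# Minimal sketch (assumed architecture, not the paper's model): encode the
# (head, relation) pair with a small Transformer encoder and score every
# candidate tail entity by a dot product against its embedding.
import torch
import torch.nn as nn


class TransformerKGScorer(nn.Module):
    def __init__(self, num_entities: int, num_relations: int,
                 dim: int = 64, heads: int = 4, layers: int = 1):
        super().__init__()
        # Low-dimensional embeddings for entities and relations.
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=2 * dim,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, head_idx: torch.Tensor, rel_idx: torch.Tensor) -> torch.Tensor:
        # Treat (head, relation) as a length-2 token sequence: (batch, 2, dim).
        seq = torch.stack([self.ent(head_idx), self.rel(rel_idx)], dim=1)
        encoded = self.encoder(seq)          # (batch, 2, dim)
        query = encoded.mean(dim=1)          # pool to a single query vector
        # Score all candidate tail entities at once.
        return query @ self.ent.weight.t()   # (batch, num_entities)


if __name__ == "__main__":
    model = TransformerKGScorer(num_entities=1000, num_relations=50)
    heads = torch.tensor([3, 17])
    rels = torch.tensor([5, 2])
    scores = model(heads, rels)              # (2, 1000) tail-entity scores
    print(scores.shape)
```

Keeping `dim` small is what the low-dimensional framing in the abstract refers to: the embedding tables dominate memory for huge knowledge bases, so scalability hinges on the per-entity dimensionality rather than on the encoder itself.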

arxiv, attention, graph, knowledge, knowledge graph, lg, link prediction, prediction, self-attention
