Sept. 1, 2022, 1:14 a.m. | Tianyi Li, Wenyu Huang, Nikos Papasarantopoulos, Pavlos Vougiouklis, Jeff Z. Pan

cs.CL updates on arXiv.org

We present a system for knowledge graph population with language models,
evaluated on the Knowledge Base Construction from Pre-trained Language Models
(LM-KBC) challenge at ISWC 2022. Our system combines task-specific pre-training,
which improves the LM's representation of the masked object tokens, with prompt
decomposition for progressive generation of candidate objects, among other
methods for higher-quality retrieval. Our system won track 1 of the LM-KBC
challenge (based on the BERT LM), achieving an F1 score of 55.0% on the hidden test set …
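The "prompt decomposition for progressive generation" idea can be sketched as querying one masked object slot at a time and feeding already-predicted objects back into the prompt, rather than asking the LM for every object at once. The sketch below is an illustrative assumption, not the authors' implementation: the function names, the prompt template, and the toy stand-in predictor (in place of a real BERT fill-mask call) are all hypothetical.

```python
# Sketch of prompt decomposition for progressive candidate generation.
# `predict(prompt)` stands in for a BERT fill-mask call; here it is a
# toy lookup so the example is self-contained.

def build_prompt(subject, relation, found, mask="[MASK]"):
    """Cloze-style prompt listing already-found objects before the mask."""
    prefix = ", ".join(found) + ", " if found else ""
    return f"{subject} {relation} {prefix}{mask}."

def progressive_generate(subject, relation, predict, max_objects=5):
    """Repeatedly fill the mask, collecting distinct candidate objects.

    Stops when the predictor returns None (no confident candidate left)
    or repeats an object already in the list.
    """
    found = []
    for _ in range(max_objects):
        candidate = predict(build_prompt(subject, relation, found))
        if candidate is None or candidate in found:
            break
        found.append(candidate)
    return found

def toy_predict(prompt):
    """Toy stand-in for an LM: answers keyed on how many objects the
    prompt already lists."""
    answers = ["France", "Germany", "Italy"]
    already = sum(prompt.count(a) for a in answers)
    return answers[already] if already < len(answers) else None

print(progressive_generate("Belgium", "shares a border with", toy_predict))
# → ['France', 'Germany', 'Italy']
```

In a real system, `predict` would rank the LM's fill-mask candidates and apply a confidence threshold; the progressive loop lets each later query condition on earlier answers, which a single multi-object prompt cannot do with one mask token.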

