Pre-training Language Model Incorporating Domain-specific Heterogeneous Knowledge into A Unified Representation
March 22, 2024, 4:48 a.m. | Hongyin Zhu, Hao Peng, Zhiheng Lyu, Lei Hou, Juanzi Li, Jinghui Xiao
cs.CL updates on arXiv.org
Abstract: Existing technologies expand BERT from different perspectives, e.g., designing different pre-training tasks, different semantic granularities, and different model architectures. Few models consider expanding BERT to handle different text formats. In this paper, we propose a heterogeneous knowledge language model (HKLM), a unified pre-trained language model (PLM) for all forms of text, including unstructured text, semi-structured text, and well-structured text. To capture the corresponding relations among this multi-format knowledge, our approach uses the masked language model objective to …
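The abstract cuts off, but the objective it names is standard masked language modeling over BERT-style inputs. Below is a minimal sketch in Python using Hugging Face Transformers with a generic BERT backbone. The mixed-format example inputs and the 15% masking rate are assumptions following the standard BERT recipe, not HKLM's actual pre-training code, which the snippet does not show.

```python
# Minimal masked language model (MLM) sketch with a BERT backbone.
# The three inputs below are hypothetical stand-ins for the paper's
# unstructured, semi-structured, and well-structured text formats.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

texts = [
    "The patient was treated with aspirin.",  # unstructured free text
    "Title: Aspirin | Category: NSAID",       # semi-structured attributes
    "(aspirin, treats, headache)",            # well-structured triple
]

inputs = tokenizer(texts, return_tensors="pt", padding=True)
labels = inputs.input_ids.clone()

# Mask 15% of non-special tokens (simplified: always replace with [MASK],
# whereas the full BERT recipe uses an 80/10/10 mask/random/keep split).
probability_matrix = torch.full(labels.shape, 0.15)
special_tokens_mask = torch.tensor(
    [tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
     for ids in labels.tolist()],
    dtype=torch.bool,
)
probability_matrix.masked_fill_(special_tokens_mask, 0.0)
masked_indices = torch.bernoulli(probability_matrix).bool()
labels[~masked_indices] = -100  # loss is computed only on masked positions
inputs.input_ids[masked_indices] = tokenizer.mask_token_id

# Cross-entropy loss over the masked tokens, as in BERT pre-training.
loss = model(**inputs, labels=labels).loss
loss.backward()
```

Feeding all three formats through one shared encoder and loss, as above, is the simplest way to read the paper's "unified representation" claim; the paper's additional format-specific objectives are not reproduced here.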