Oct. 24, 2022, 1:13 a.m. | Zonghai Yao, Yi Cao, Zhichao Yang, Vijeta Deshpande, Hong Yu

cs.LG updates on arXiv.org arxiv.org

Language Models (LMs) have performed well on biomedical natural language
processing applications. In this study, we conducted experiments using
prompting methods to extract knowledge from LMs, treating them as new
knowledge bases (LMs as KBs). However, prompting serves only as a lower bound
for knowledge extraction, and it performs particularly poorly on
biomedical-domain KBs. To make LMs as KBs better match the actual application
scenarios of the biomedical domain, we specifically add EHR notes …
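As a rough illustration of the prompt-based knowledge extraction the abstract describes, below is a minimal sketch using the Hugging Face fill-mask pipeline with a cloze-style prompt. The prompt template, the choice of bert-base-uncased, and the top_k setting are illustrative assumptions, not the authors' actual setup; a biomedical pretrained LM could be substituted in practice.

```python
# Minimal sketch: probing a masked LM as a knowledge base with a cloze prompt.
# Model and prompt below are illustrative, not the paper's exact configuration.
from transformers import pipeline

# Any masked LM works here; a biomedical model could be swapped in for
# bert-base-uncased.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model's top predictions for [MASK] are treated as the extracted "fact".
prompt = "Aspirin is used to treat [MASK]."
for candidate in fill_mask(prompt, top_k=5):
    print(f"{candidate['token_str']:>15}  score={candidate['score']:.3f}")
```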

arxiv biomedical context electronic health knowledge language language model pretrained language model
