Extracting Biomedical Factual Knowledge Using Pretrained Language Model and Electronic Health Record Context. (arXiv:2209.07859v1 [cs.IR])
Sept. 19, 2022, 1:11 a.m. | Zonghai Yao, Yi Cao, Zhichao Yang, Vijeta Deshpande, Hong Yu
cs.LG updates on arXiv.org
Language Models (LMs) have performed well on biomedical natural language
processing applications. In this study, we conducted experiments using
prompt methods to extract knowledge from LMs as new knowledge bases (LMs as
KBs). However, prompting can only serve as a lower bound for knowledge
extraction, and it performs particularly poorly on biomedical-domain KBs. To
make LMs as KBs better match the actual application scenarios of the
biomedical domain, we specifically add EHR notes …
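The "LMs as KBs" setup probes a pretrained model with cloze-style prompts and reads the filled-in token as a retrieved fact. The sketch below is only a toy illustration of that idea: the `fill_mask` function, the prompt template, the candidate list, and the score table are all assumptions standing in for a real fill-mask model (e.g. a biomedical BERT variant), not the paper's actual pipeline.

```python
# Toy sketch of cloze-style "LM as KB" probing. A real setup would score
# candidates with a pretrained LM's fill-mask head; here a hand-made score
# table stands in for model probabilities (all values illustrative).

def fill_mask(prompt, candidates, scores):
    """Return the candidate answer for the [MASK] slot with the top score."""
    assert "[MASK]" in prompt, "prompt must contain a [MASK] slot"
    return max(candidates, key=lambda c: scores.get(c, 0.0))

# Cloze prompt probing a biomedical relation (hypothetical example).
prompt = "Aspirin is used to treat [MASK]."
candidates = ["headache", "fracture", "insomnia"]

# Stand-in for the LM's probabilities over the candidates.
scores = {"headache": 0.7, "fracture": 0.1, "insomnia": 0.2}

print(fill_mask(prompt, candidates, scores))  # → headache
```

The paper's point is that this kind of prompting alone under-retrieves biomedical facts, which motivates adding EHR-note context to the prompt.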