Thinking about GPT-3 In-Context Learning for Biomedical IE? Think Again

May 20, 2022, 1:11 a.m. | Bernal Jiménez Gutiérrez, Nikolas McNeal, Clay Washington, You Chen, Lang Li, Huan Sun, Yu Su

cs.CL updates on arXiv.org

The strong few-shot in-context learning capability of large pre-trained language models (PLMs) such as GPT-3 is highly appealing for application domains such as biomedicine, which feature high and diverse demand for language technologies but also high data annotation costs. In this paper, we present the first systematic and comprehensive study comparing the few-shot performance of GPT-3 in-context learning against fine-tuning smaller (i.e., BERT-sized) PLMs on two highly representative biomedical information extraction tasks: named entity recognition and relation extraction. We …
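As a rough illustration of the in-context learning setup the abstract describes, the sketch below builds a k-shot prompt for a toy drug/disease NER task and queries a GPT-3-era completion endpoint. The demonstration sentences, entity labels, prompt format, and engine name are placeholders for illustration, not the paper's actual prompts or data, and the snippet assumes the pre-1.0 `openai` Python client.

```python
# Minimal sketch of few-shot in-context NER with GPT-3, assuming the
# legacy (pre-1.0) openai client and OPENAI_API_KEY in the environment.
# Demonstrations and prompt format are hypothetical, not from the paper.
import openai

# A few labeled demonstrations: sentence -> flattened entity annotations.
DEMONSTRATIONS = [
    ("Aspirin reduced the risk of myocardial infarction.",
     "Drug: Aspirin; Disease: myocardial infarction"),
    ("Patients on metformin showed improved glycemic control.",
     "Drug: metformin; Disease: none"),
]


def build_prompt(test_sentence: str) -> str:
    """Concatenate the demonstrations with the test input so the model
    continues the pattern and emits entities for the final sentence."""
    parts = ["Extract drug and disease entities from each sentence.\n"]
    for sentence, entities in DEMONSTRATIONS:
        parts.append(f"Sentence: {sentence}\nEntities: {entities}\n")
    parts.append(f"Sentence: {test_sentence}\nEntities:")
    return "\n".join(parts)


def extract_entities(test_sentence: str) -> str:
    # Legacy Completions endpoint, as used with GPT-3-era models.
    response = openai.Completion.create(
        engine="text-davinci-002",  # illustrative engine name
        prompt=build_prompt(test_sentence),
        max_tokens=64,
        temperature=0.0,  # greedy decoding: extraction wants determinism
        stop="\n",        # stop at the end of the predicted entity line
    )
    return response["choices"][0]["text"].strip()


if __name__ == "__main__":
    print(extract_entities("Ibuprofen is commonly used to treat arthritis."))
```

Temperature 0 is the usual choice for extraction-style tasks, where the goal is a single deterministic label string rather than diverse generations; the `stop` sequence keeps the model from inventing further sentence/entity pairs.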
