CamemBERT-bio: Leveraging Continual Pre-training for Cost-Effective Models on French Biomedical Data
April 4, 2024, 4:47 a.m. | Rian Touchent, Laurent Romary, Eric de la Clergerie
cs.CL updates on arXiv.org
Abstract: Clinical data in hospitals are increasingly accessible for research through clinical data warehouses. However, these documents are unstructured, so information must be extracted from medical reports to conduct clinical studies. Transfer learning with BERT-like models such as CamemBERT has enabled major advances for French, especially for named entity recognition. However, these models are trained on plain language and are less effective on biomedical data. Addressing this gap, we introduce CamemBERT-bio, a dedicated …
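The continual pre-training the abstract refers to is the standard BERT-style masked-language-modeling objective: tokens are randomly masked and the model is trained to recover them from context. Below is a minimal, self-contained sketch of the data-masking step only, not the authors' code; the function name, the `[MASK]` string, and the `-100` ignore label follow common BERT-training conventions and are assumptions here.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Prepare one masked-language-modeling example (illustrative sketch).

    Returns (inputs, labels): a masked position carries the original token
    in `labels` and -100 elsewhere, the value conventionally ignored by the
    cross-entropy loss in BERT-style training code.
    """
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            inputs.append(MASK)   # hide the token from the model
            labels.append(tok)    # model must predict this token
        else:
            inputs.append(tok)
            labels.append(-100)   # position excluded from the loss
    return inputs, labels

# Example on a toy French biomedical sentence:
inputs, labels = mask_tokens("le patient présente une fièvre aiguë".split())
```

Continual pre-training simply resumes this objective from an existing checkpoint (here, CamemBERT) on domain text instead of training from scratch, which is what makes the approach cost-effective.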