CamemBERT-bio: Leveraging Continual Pre-training for Cost-Effective Models on French Biomedical Data
April 4, 2024, 4:47 a.m. | Rian Touchent, Laurent Romary, Eric de la Clergerie
cs.CL updates on arXiv.org
Abstract: Clinical data in hospitals are increasingly accessible for research through clinical data warehouses. However, these documents are unstructured, so information must be extracted from medical reports to conduct clinical studies. Transfer learning with BERT-like models such as CamemBERT has enabled major advances for French, especially for named entity recognition. However, these models are trained on plain language and perform less well on biomedical data. Addressing this gap, we introduce CamemBERT-bio, a dedicated …