Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER
March 28, 2024, 4:42 a.m. | Micheal Abaho, Danushka Bollegala, Gary Leeming, Dan Joyce, Iain E Buchan
cs.LG updates on arXiv.org
Abstract: Adapting language models (LMs) to novel domains is often achieved through fine-tuning a pre-trained LM (PLM) on domain-specific data. Fine-tuning introduces new knowledge into an LM, enabling it to comprehend and efficiently perform a target-domain task. Fine-tuning can, however, be inadvertently insensitive if it ignores the wide array of disparities (e.g., in word meaning) between the source and target domains. For instance, words such as chronic and pressure may be treated lightly in social conversations, …
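The abstract is truncated before the method is described, so the following is only a minimal sketch of the generic idea the title points at: a "mask-specific" loss that weights masked, domain-specific tokens more heavily than ordinary masked tokens during fine-tuning. The model name, the example sentence, the `domain_term_ids` lexicon, and the 2x weight are all illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch of a per-token weighted MLM loss that emphasises
# domain-specific terms. NOT the paper's method; all constants are assumed.
import random

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForMaskedLM

random.seed(0)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical lexicon: ids of domain-specific terms whose masked predictions
# the loss should emphasise (using the abstract's examples "chronic", "pressure").
domain_term_ids = set(tokenizer.convert_tokens_to_ids(["chronic", "pressure"]))

text = "Patients with chronic hypertension show elevated blood pressure."
enc = tokenizer(text, return_tensors="pt")
input_ids = enc["input_ids"].clone()
labels = torch.full_like(input_ids, -100)  # -100 = position ignored by the loss

# Mask ~15% of ordinary tokens (standard MLM practice) plus every domain term.
for pos, tok in enumerate(input_ids[0].tolist()):
    if tok in tokenizer.all_special_ids:
        continue
    if tok in domain_term_ids or random.random() < 0.15:
        labels[0, pos] = tok
        input_ids[0, pos] = tokenizer.mask_token_id

logits = model(input_ids=input_ids, attention_mask=enc["attention_mask"]).logits

# Per-position cross-entropy over the vocabulary; ignored positions yield 0.
per_tok = F.cross_entropy(
    logits.view(-1, logits.size(-1)),
    labels.view(-1),
    ignore_index=-100,
    reduction="none",
).view(labels.shape)

weights = (labels != -100).float()          # 1.0 on every masked position
for pos, lab in enumerate(labels[0].tolist()):
    if lab in domain_term_ids:
        weights[0, pos] = 2.0               # assumed extra weight on domain terms

loss = (per_tok * weights).sum() / weights.sum().clamp(min=1.0)
loss.backward()  # gradients for one fine-tuning step
```

In a full pipeline this weighted loss would simply replace the uniform MLM loss inside the training loop; the effect is that gradient updates are pushed harder on tokens whose meaning shifts between the source and target domains.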