Gender bias in (non)-contextual clinical word embeddings for stereotypical medical categories. (arXiv:2208.01341v2 [cs.CL] UPDATED)
Aug. 9, 2022, 1:12 a.m. | Gizem Sogancioglu, Fabian Mijsters, Amar van Uden, Jelle Peperzak
cs.CL updates on arXiv.org arxiv.org
Clinical word embeddings are widely used in Bio-NLP problems as state-of-the-art feature vector representations. Although they capture the semantics of words well, they may also encode gender stereotypes, because the datasets they are trained on can carry statistical and societal biases. This study analyses the gender bias of clinical embeddings across three medical categories: mental disorders, sexually transmitted diseases, and personality traits. To this end, we analyze two different pre-trained embeddings, namely …
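The kind of analysis the abstract describes is often operationalized as an association test over embedding vectors: measure how much closer a medical term sits to female attribute words than to male ones. The abstract is truncated, so the paper's exact method is unknown; the sketch below is a hypothetical, minimal WEAT-style association gap using toy vectors in place of real pre-trained clinical embeddings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def association_gap(word_vec, female_vecs, male_vecs):
    """Mean similarity to female attribute words minus mean similarity
    to male ones. Positive -> the word leans toward the female direction."""
    f = sum(cosine(word_vec, v) for v in female_vecs) / len(female_vecs)
    m = sum(cosine(word_vec, v) for v in male_vecs) / len(male_vecs)
    return f - m

# Toy 3-d vectors standing in for real clinical embeddings
# (names and values are illustrative only).
emb = {
    "she":        [1.0, 0.1, 0.0],
    "he":         [-1.0, 0.1, 0.0],
    "depression": [0.8, 0.5, 0.1],
}

gap = association_gap(emb["depression"], [emb["she"]], [emb["he"]])
print(f"association gap: {gap:.3f}")  # positive here by construction
```

Real studies replace the toy vectors with lookups into pre-trained models and aggregate the gap over word lists per medical category (e.g. mental disorders), often with a permutation test for significance.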