Aug. 9, 2022, 1:12 a.m. | Gizem Sogancioglu, Fabian Mijsters, Amar van Uden, Jelle Peperzak

cs.CL updates on arXiv.org arxiv.org

Clinical word embeddings are extensively used across Bio-NLP problems as state-of-the-art feature vector representations. Although they are quite successful at representing word semantics, they may exhibit gender stereotypes because the datasets they are trained on potentially carry statistical and societal bias. This study analyses the gender bias of clinical embeddings in three medical categories: mental disorders, sexually transmitted diseases, and personality traits. To this end, we analyze two different pre-trained embeddings, namely …
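The snippet truncates before naming the embeddings or the bias metric, but a common way to quantify this kind of gender bias in word embeddings is to compare a word's cosine association with gendered anchor vectors (e.g. "he" vs. "she"), in the spirit of WEAT-style association tests. A minimal sketch with toy vectors, purely illustrative and not the paper's actual method or data:

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_bias(word_vec, he_vec, she_vec):
    # positive -> word is closer to the "he" vector,
    # negative -> closer to the "she" vector
    return cosine(word_vec, he_vec) - cosine(word_vec, she_vec)

# toy 3-d vectors; real clinical embeddings would be loaded
# from a pre-trained model and have hundreds of dimensions
he = np.array([1.0, 0.2, 0.0])
she = np.array([0.2, 1.0, 0.0])
term = np.array([0.3, 0.9, 0.1])  # hypothetical medical-category term

score = gender_bias(term, he, she)
print(score)  # negative here: the toy term leans toward "she"
```

Averaging such scores over word lists for each category (mental disorders, STDs, personality traits) would give a per-category bias estimate of the kind the abstract describes.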

