Disparate Censorship & Undertesting: A Source of Label Bias in Clinical Machine Learning. (arXiv:2208.01127v1 [cs.LG])
Aug. 3, 2022, 1:10 a.m. | Trenton Chang, Michael W. Sjoding, Jenna Wiens
cs.LG updates on arXiv.org
As machine learning (ML) models gain traction in clinical applications,
understanding the impact of clinician and societal biases on ML models is
increasingly important. While biases can arise in the labels used for model
training, the many sources of such label bias remain under-studied. In this
paper, we highlight disparate censorship (i.e.,
differences in testing rates across patient groups) as a source of label bias
that clinical ML models may amplify, potentially causing harm. Many patient
risk-stratification …
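The mechanism the abstract describes, groups tested at different rates while untested patients default to a negative label, can be sketched with a small simulation. All numbers, group names, and the default-negative labeling rule below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: two patient groups with identical true disease
# prevalence but different testing rates (group B is undertested).
true_prev = 0.10
test_rate = {"A": 0.8, "B": 0.3}

for group, rate in test_rate.items():
    disease = rng.random(n) < true_prev   # true (unobserved) condition
    tested = rng.random(n) < rate         # who actually gets tested
    # Disparate censorship: an untested patient is never labeled
    # positive, so their label defaults to negative.
    observed_label = disease & tested
    print(f"group {group}: true prevalence ~{disease.mean():.3f}, "
          f"observed positive rate ~{observed_label.mean():.3f}")
```

Although both groups have the same true prevalence, the undertested group's observed positive rate is depressed roughly in proportion to its testing rate, so a model trained on these labels would learn that group B is "lower risk" purely as an artifact of who was tested.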