Aug. 3, 2022, 1:10 a.m. | Trenton Chang, Michael W. Sjoding, Jenna Wiens

cs.LG updates on arXiv.org arxiv.org

As machine learning (ML) models gain traction in clinical applications,
understanding the impact of clinician and societal biases on ML models is
increasingly important. While biases can arise in the labels used for model
training, the many sources of such label bias are not yet well-studied. In
well-studied. In this paper, we highlight disparate censorship (i.e.,
differences in testing rates across patient groups) as a source of label bias
that clinical ML models may amplify, potentially causing harm. Many patient
risk-stratification …
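To make the abstract's core idea concrete, here is a minimal simulation sketch (not the paper's method; group sizes, prevalence, and testing rates are illustrative assumptions) of how disparate censorship biases labels: if untested patients default to negative labels, a group tested less often appears healthier even when true prevalence is identical.

```python
import random

random.seed(0)

def observed_positive_rate(n, true_rate, test_rate):
    """Simulate disparate censorship: untested patients default to negative labels."""
    positives = 0
    for _ in range(n):
        truly_positive = random.random() < true_rate
        tested = random.random() < test_rate
        # A positive label is recorded only if the patient was tested AND is
        # truly positive; untested positives are censored to negative.
        positives += truly_positive and tested
    return positives / n

# Two hypothetical groups with the same true prevalence (10%) but very
# different testing rates (90% vs. 30%).
rate_a = observed_positive_rate(100_000, true_rate=0.10, test_rate=0.9)
rate_b = observed_positive_rate(100_000, true_rate=0.10, test_rate=0.3)
```

Under these assumptions, group B's observed positive rate is roughly a third of group A's, so a model trained on these labels would systematically underestimate group B's risk.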

