Nov. 17, 2022, 2:11 a.m. | Anaelia Ovalle, Sunipa Dev, Jieyu Zhao, Majid Sarrafzadeh, Kai-Wei Chang

cs.LG updates on arXiv.org

Auditing machine learning (ML)-based healthcare tools for bias is critical to
preventing patient harm, especially in communities that disproportionately face
health inequities. General frameworks are becoming increasingly available to
measure ML fairness gaps between groups. However, ML for health (ML4H) auditing
principles call for a contextual, patient-centered approach to model
assessment. Therefore, ML auditing tools must be (1) better aligned with ML4H
auditing principles and (2) able to illuminate and characterize communities
vulnerable to the most harm. To address this …
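As a minimal illustration of the kind of group fairness gap such general frameworks measure (not this paper's method), the sketch below computes an equal-opportunity gap: the difference in true positive rate between two patient groups. The function name `tpr_gap` and the toy data are hypothetical, assuming binary labels and a binary protected attribute.

```python
import numpy as np

def tpr_gap(y_true, y_pred, group):
    """Equal-opportunity gap: absolute difference in true positive
    rate between the two groups encoded in `group` (0/1)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)  # true positives' pool in group g
        tprs.append(y_pred[mask].mean())     # fraction correctly flagged
    return abs(tprs[0] - tprs[1])

# Toy example: 8 patients, two groups (illustrative data only).
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 1]
group  = [0, 0, 0, 1, 1, 1, 0, 1]
print(tpr_gap(y_true, y_pred, group))  # 1.0 - 1/3 ≈ 0.67
```

A single aggregate gap like this is exactly what the ML4H auditing principles above push beyond: it says nothing about which patients within a group, or at which severity levels, bear the gap.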

algorithmic fairness, arxiv, fairness, health, machine learning
